Ansible Notes

Ansible is an open-source IT automation tool that simplifies configuration management, application deployment, and orchestration through an agentless architecture using SSH and WinRM. It features a user-friendly YAML syntax for defining tasks and offers advantages such as a push-based model, extensive built-in modules, and cross-platform support. The document covers Ansible's architecture, installation, inventory management, ad-hoc commands, and playbook writing, providing a comprehensive overview for users.

1. Introduction to Ansible
What is Ansible?

Ansible is an open-source IT automation tool used for configuration management, application
deployment, orchestration, and provisioning. It allows you to automate tasks across multiple
servers without requiring an agent to be installed on the target systems.

Ansible uses a simple YAML-based language (Ansible Playbooks) to define automation tasks. It
relies on SSH (for Linux) and WinRM (for Windows) to communicate with remote systems.

It follows an agentless architecture, meaning there is no need to install software on remote
machines, making it lightweight and easy to maintain.

Advantages of Ansible over Other Automation Tools

Ansible stands out from other automation tools like Puppet, Chef, and SaltStack due to the
following advantages:

1. Agentless Architecture
o Unlike Puppet, Chef, and SaltStack, Ansible does not require agents on managed
nodes. It communicates over SSH/WinRM, reducing system overhead.
2. Simple and Human-Readable YAML Syntax
o Ansible uses YAML ("YAML Ain't Markup Language") for writing Playbooks,
making it easy to learn and understand.
3. Push-Based Model
o Ansible uses a push-based automation model (compared to pull-based models
in Puppet and Chef), which allows instant changes and easier debugging.
4. Idempotency
o Ensures that running the same playbook multiple times results in the same
system state, preventing unintended side effects.
5. No Dedicated Master Node Required
o Ansible does not require a centralized master-server-client model like Puppet.
Any machine with Ansible installed can act as a control node.
6. Extensive Built-in Modules
o Comes with hundreds of modules that support provisioning, configuration
management, cloud integrations (AWS, Azure, Google Cloud), networking
automation, and more.
7. Supports Both Declarative and Procedural Approaches
o You can define what state the system should be in (declarative) or how it should
get there (procedural).
8. Cross-Platform Support
o Works on Linux, Windows, macOS, and network devices.
9. Security and Compliance
o Uses OpenSSH, Kerberos, and other authentication methods, ensuring secure
communication.
10. Dynamic Inventory Management
o Supports static and dynamic inventories that pull host data from cloud services,
databases, or external APIs.

Ansible Architecture and Components

Ansible follows a control node/managed node model: the control node manages remote nodes
over SSH (or WinRM for Windows), with no agent installed on the managed side.

Key Components of Ansible Architecture:

1. Control Node
o The system where Ansible is installed and from which automation tasks are
executed.
2. Managed Nodes (Hosts/Clients)
o The remote systems (Linux, Windows, or networking devices) being automated.
3. Inventory
o A file that lists managed nodes (hosts) and their groupings.
4. Modules
o Predefined scripts that perform automation tasks (e.g., install packages, create
users, configure files).
5. Playbooks
o YAML files defining tasks, roles, and configurations.
6. Tasks
o The individual automation steps inside a playbook.
7. Handlers
o Special tasks that trigger actions when changes occur.
8. Roles
o A structured way to organize playbooks and tasks for reusability.
9. Facts
o System information collected automatically about hosts.
10. Plugins
o Extend Ansible’s functionality (e.g., connection plugins, inventory plugins).


Installation and Setup

Ansible can be installed on Linux/macOS (recommended) or Windows (using WSL).

Installing Ansible on Linux (Ubuntu/Debian)

sudo apt update
sudo apt install ansible -y

Installing Ansible on RHEL/CentOS

sudo yum install epel-release -y
sudo yum install ansible -y

Installing Ansible on macOS (via Homebrew)

brew install ansible

Verifying Installation

ansible --version

Configuring SSH Access for Ansible

 Ensure passwordless SSH access for Ansible to communicate with remote systems.

ssh-keygen -t rsa
ssh-copy-id user@remote-host

Installing Ansible on Windows (via WSL)

1. Enable Windows Subsystem for Linux (WSL).
2. Install Ubuntu from Microsoft Store.
3. Install Ansible using apt.

Inventory Files and Hosts

An inventory file defines the list of managed nodes (hosts) Ansible controls. It can be a simple
text file (/etc/ansible/hosts) or dynamically generated.

Basic Inventory File Example (/etc/ansible/hosts)

[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com
db2.example.com

Using an IP-based Inventory

[webservers]
192.168.1.10
192.168.1.11

[dbservers]
192.168.1.20
192.168.1.21

Inventory File with Variables

[webservers]
web1 ansible_host=192.168.1.10 ansible_user=ubuntu ansible_port=22
web2 ansible_host=192.168.1.11 ansible_user=ubuntu ansible_port=22

[dbservers]
db1 ansible_host=192.168.1.20 ansible_user=root ansible_port=22

Checking Connectivity with Inventory

ansible all -m ping -i /etc/ansible/hosts

Output:

web1 | SUCCESS => {"ping": "pong"}
web2 | SUCCESS => {"ping": "pong"}
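An INI inventory like the one above is plain text, so it is easy to inspect with ordinary tooling. A minimal Python sketch (an illustration only, not how Ansible itself parses inventories) that maps group names to host lists:

```python
import configparser

# A minimal INI-style inventory, as shown above.
INVENTORY = """
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com
"""

def parse_inventory(text):
    # allow_no_value=True lets bare hostnames stand alone as keys.
    parser = configparser.ConfigParser(allow_no_value=True, delimiters=("=",))
    parser.read_string(text)
    # Each section name is a group; its keys are the hosts.
    return {group: list(parser[group]) for group in parser.sections()}

groups = parse_inventory(INVENTORY)
print(groups["webservers"])  # ['web1.example.com', 'web2.example.com']
```

This only handles the simplest case (bare hostnames); real inventories with per-host variables need the richer parsing Ansible provides.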

Dynamic Inventory

For cloud-based environments (AWS, Azure, GCP), Ansible supports dynamic inventories using
plugins.

 Example: AWS dynamic inventory

ansible-inventory -i aws_ec2.yml --list

Summary

 Ansible is a powerful, agentless automation tool using SSH for configuration
management, deployment, and orchestration.
 It offers simplicity, efficiency, and flexibility compared to other tools.
 The architecture consists of Control Nodes, Managed Nodes, Playbooks, Tasks,
Modules, and Inventory.
 Installation is straightforward across Linux, macOS, and Windows (via WSL).
 Inventory files define hosts, and Ansible communicates over SSH/WinRM.


1. Ad-hoc Commands

What are Ad-hoc Commands?

Ansible ad-hoc commands are one-time command-line operations that you use without writing
a full playbook. They allow you to execute quick tasks across multiple managed nodes. These
commands are useful for troubleshooting, gathering information, and making simple changes.

Syntax of an Ad-hoc Command


The general syntax follows this structure:

ansible <host-pattern> -m <module> -a "<arguments>" -i <inventory>

 <host-pattern> → Defines the target hosts (e.g., all, webservers, 192.168.1.10).
 -m <module> → Specifies the Ansible module to run (e.g., ping, shell, copy).
 -a "<arguments>" → Arguments for the module (if required).
 -i <inventory> → Specifies the inventory file (optional if using the default /etc/ansible/hosts).

Common Ad-hoc Commands


1. Check Connectivity (Ping)
ansible all -m ping

 Uses the ping module to check if nodes are reachable.
 Returns "ping": "pong" if successful.

2. Gather System Information (Facts)


ansible all -m setup

 Collects system information such as OS version, network interfaces, CPU details, etc.

3. Execute a Shell Command


ansible all -m shell -a "uptime"

 Runs the uptime command on all hosts.

4. Copy Files
ansible all -m copy -a "src=/etc/hosts dest=/tmp/hosts"

 Copies /etc/hosts to /tmp/hosts on all nodes.

5. Restart a Service
ansible all -m service -a "name=nginx state=restarted"

 Restarts the nginx service on all nodes.

2. Ansible Command-line Tools

1. ansible Command

Used for ad-hoc execution. Example:

ansible all -m ping

2. ansible-playbook Command

Used to run playbooks (YAML scripts defining automation tasks). Example:

ansible-playbook site.yml

3. ansible-doc Command
Used to view documentation on Ansible modules. Example:

ansible-doc -s copy

 Shows usage, parameters, and examples of the copy module.

4. ansible-galaxy Command

Manages roles from Ansible Galaxy (community-contributed roles). Example:

ansible-galaxy install geerlingguy.nginx

 Installs the nginx role by Jeff Geerling.

5. ansible-inventory Command

Used to view and manage inventory. Example:

ansible-inventory --list -i inventory.ini

 Shows the structured inventory.

3. YAML Syntax for Ansible

Ansible uses YAML ("YAML Ain't Markup Language") for playbooks.

Basic YAML Rules:


1. Indentation Matters: Use spaces, NOT tabs.
2. Key-Value Pairs: Defined with key: value format.
3. Lists: Represented using - (dash).
4. Dictionaries: Represented as key-value pairs.

Example of Valid YAML:


---
name: John Doe
age: 30
hobbies:
  - Reading
  - Running
  - Coding
address:
  city: New York
  country: USA
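The YAML above is just a nested data structure. For comparison, here is the same data as a native Python structure printed as JSON (the mapping between the two notations is one-to-one):

```python
import json

# The same data the YAML example describes, as a native structure:
# key: value pairs become dict entries, "-" items become list elements,
# and indented mappings become nested dicts.
person = {
    "name": "John Doe",
    "age": 30,
    "hobbies": ["Reading", "Running", "Coding"],
    "address": {"city": "New York", "country": "USA"},
}

print(json.dumps(person, indent=2))
```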

Example of a Basic Ansible Playbook in YAML


---
- name: Install and Start Apache
  hosts: webservers
  become: yes

  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: present

    - name: Start Apache
      service:
        name: apache2
        state: started

4. Writing Basic Playbooks

A playbook is a YAML file that contains:

1. Hosts: Defines the target machines.
2. Tasks: List of automation steps.
3. Modules: Built-in functions for execution.
4. Variables (optional): Dynamic values.
5. Handlers (optional): Respond to changes.

Example Playbook
---
- name: Deploy Web Server
  hosts: webservers
  become: yes

  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: present

    - name: Start Nginx
      service:
        name: nginx
        state: started

How to Run This Playbook


ansible-playbook webserver.yml

5. Ansible Configuration File (ansible.cfg)

The ansible.cfg file allows you to customize how Ansible behaves.

Default Locations
 /etc/ansible/ansible.cfg (System-wide)
 ~/.ansible.cfg (User-specific)
 ansible.cfg (Per-project directory)

Common Configurations in ansible.cfg


[defaults]
inventory = ./inventory
remote_user = ansible
host_key_checking = False
timeout = 30
forks = 10

Important Configurations

1. inventory → Defines the inventory file location.
2. remote_user → Specifies the default SSH user.
3. host_key_checking = False → Disables SSH host key verification (convenient in labs, but less secure in production).
4. timeout = 30 → Sets the SSH connection timeout in seconds.
5. forks = 10 → Defines the number of hosts Ansible manages in parallel.
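Since ansible.cfg is a plain INI file, it can be inspected with standard tooling. A sketch reading the [defaults] section shown above with Python's configparser:

```python
import configparser

# The example ansible.cfg contents from above.
CFG = """
[defaults]
inventory = ./inventory
remote_user = ansible
host_key_checking = False
timeout = 30
forks = 10
"""

parser = configparser.ConfigParser()
parser.read_string(CFG)
defaults = parser["defaults"]

print(defaults["remote_user"])                   # ansible
print(defaults.getint("forks"))                  # 10
print(defaults.getboolean("host_key_checking"))  # False
```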

Interview-Ready Takeaways

1. Ad-hoc commands allow quick one-time task execution.
2. Key Ansible CLI tools:
o ansible → Ad-hoc tasks.
o ansible-playbook → Runs playbooks.
o ansible-doc → Module documentation.
o ansible-galaxy → Manages roles.
o ansible-inventory → Views inventory.
3. YAML syntax is crucial → Correct indentation & formatting.
4. Playbooks automate tasks by defining hosts, tasks, and handlers.
5. ansible.cfg optimizes Ansible settings.


Inventory Management in Ansible

Inventory management is a fundamental concept in Ansible, allowing users to define and
organize hosts that Ansible will manage. The inventory contains information about target
systems and can be static or dynamic. This deep dive will cover inventory types, grouping,
variables, host patterns, filters, and plugins in great detail.

1. Static vs. Dynamic Inventories


Ansible inventories can be categorized into two main types:

1.1 Static Inventory

A static inventory is a simple text file (typically INI or YAML format) where hosts and groups
are predefined. These inventories do not change unless manually edited.

Example (INI format):

[web]
web1 ansible_host=192.168.1.10
web2 ansible_host=192.168.1.11

[database]
db1 ansible_host=192.168.1.20
db2 ansible_host=192.168.1.21

[all:vars]
ansible_user=admin
ansible_ssh_private_key_file=/home/admin/.ssh/id_rsa

 Groups: [web] and [database] define host groups.
 Host variables: ansible_host specifies the IP of each host.
 Global variables: [all:vars] apply settings to all hosts.

Example (YAML format):


all:
  hosts:
    web1:
      ansible_host: 192.168.1.10
    web2:
      ansible_host: 192.168.1.11
    db1:
      ansible_host: 192.168.1.20
  children:
    web:
      hosts:
        web1:
        web2:
    database:
      hosts:
        db1:

 YAML format provides a hierarchical, structured representation of inventory.

1.2 Dynamic Inventory

A dynamic inventory is generated in real-time from external sources like cloud providers (AWS,
Azure, GCP), CMDBs, or databases.

 Dynamic inventories use inventory plugins or custom scripts.
 These inventories adapt automatically as hosts are added or removed.

Example: AWS EC2 Dynamic Inventory

ansible-inventory -i aws_ec2.yml --list

Dynamic inventory configuration file (aws_ec2.yml):

plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
keyed_groups:
  - key: tags.Name
    prefix: ec2_
filters:
  instance-state-name: running

 This pulls running instances from AWS us-east-1.
 Hosts are grouped by their Name tag.

2. Grouping Hosts in Inventory


Hosts can be grouped in an inventory to organize them logically. This is useful when applying
different configurations to different sets of hosts.

2.1 Creating Groups


[web]
web1
web2

[database]
db1
db2

Hosts web1 and web2 are part of the [web] group, while db1 and db2 are in [database].

2.2 Nested Groups

You can define groups within groups (parent-child relationships).

[frontend]
web1
web2

[backend]
db1
db2

[all_servers:children]
frontend
backend

 [all_servers:children] groups frontend and backend together.
 Running a playbook on [all_servers] will affect both frontend and backend.

2.3 Host Aliases

You can define aliases for better readability.

[web]
serverA ansible_host=192.168.1.10
serverB ansible_host=192.168.1.11

Here, serverA and serverB are aliases for their respective IP addresses.

3. Using Variables in Inventory


Ansible variables allow customization for different hosts or groups.

3.1 Host Variables (Per Host)


[web]
web1 ansible_host=192.168.1.10 ansible_user=webadmin
web2 ansible_host=192.168.1.11 ansible_user=webadmin

Each host has its own ansible_user.

3.2 Group Variables (Per Group)


[web:vars]
ansible_user=webadmin
http_port=80

All hosts in [web] use webadmin as ansible_user and have http_port=80.

3.3 Variables in YAML Inventory


all:
  vars:
    ansible_user: admin
    ansible_ssh_private_key_file: /home/admin/.ssh/id_rsa
  children:
    web:
      hosts:
        web1:
        web2:
      vars:
        http_port: 80
    database:
      hosts:
        db1:
        db2:
      vars:
        db_port: 3306

Here, http_port applies only to [web], while db_port applies to [database].

4. Host Patterns and Filters


4.1 Basic Host Patterns

Ansible supports patterns to select hosts dynamically.


Single Host

ansible web1 -m ping

Pings web1.

All Hosts

ansible all -m ping

Runs the command on all hosts.

Group of Hosts

ansible web -m ping

Runs only on [web].

Excluding Hosts

ansible 'all:!database' -m ping

Runs on all hosts except [database]. The quotes keep the shell from interpreting the ! character.

Multiple Groups

ansible "web:database" -m ping

Runs on [web] and [database].

4.2 Wildcard Patterns

 ansible "web*" -m ping → Matches web1, web2, webserver, etc.
 ansible "*.example.com" -m ping → Targets all hosts whose names end in .example.com.

(True regular expressions are also supported by prefixing the pattern with ~.)
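These wildcard patterns behave like shell-style globs. A small sketch using Python's fnmatch module to show how `web*` and `*.example.com` select hosts (an illustration of the matching idea, not Ansible's actual matcher):

```python
import fnmatch

hosts = ["web1", "web2", "webserver", "db1",
         "app.example.com", "db.example.com"]

# "web*" matches any host whose name starts with "web".
print(fnmatch.filter(hosts, "web*"))           # ['web1', 'web2', 'webserver']

# "*.example.com" matches any host under that domain.
print(fnmatch.filter(hosts, "*.example.com"))  # ['app.example.com', 'db.example.com']
```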

5. Managing Inventory Plugins


Ansible inventory plugins allow dynamic inventory integration.

5.1 Built-in Inventory Plugins

Ansible has built-in plugins for AWS, Azure, GCP, Kubernetes, NetBox, etc.
Example: AWS EC2 Inventory Plugin

plugin: amazon.aws.aws_ec2
regions:
  - us-west-1
filters:
  instance-state-name: running

 This plugin dynamically fetches AWS EC2 instances.

Example: NetBox Inventory Plugin

plugin: netbox.netbox.nb_inventory
api_endpoint: https://netbox.example.com
token: your_api_token

 Pulls inventory from NetBox, a network automation tool.

5.2 Custom Inventory Scripts

You can also write a custom dynamic inventory script in Python. Ansible invokes the script
with --list (and with --host <hostname> for per-host variables) and expects JSON on stdout.

Example: Simple Python Inventory Script

#!/usr/bin/env python3
import json
import sys

def get_inventory():
    return {
        "web": {
            "hosts": ["web1", "web2"],
            "vars": {"http_port": 80}
        },
        "_meta": {
            "hostvars": {
                "web1": {"ansible_host": "192.168.1.10"},
                "web2": {"ansible_host": "192.168.1.11"}
            }
        }
    }

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--host":
        print(json.dumps({}))  # per-host vars are already served via _meta
    else:
        print(json.dumps(get_inventory()))

 Save as custom_inventory.py, make it executable (chmod +x custom_inventory.py), and run:

 ansible-inventory -i custom_inventory.py --list
Conclusion
Mastering Ansible inventory management is crucial for scalability and automation. You should
now be able to confidently:

✅ Work with static and dynamic inventories
✅ Organize hosts with grouping
✅ Use variables effectively
✅ Apply host patterns and filters
✅ Manage inventory plugins

With this knowledge, you can handle any Ansible inventory question in an interview.

In-Depth Guide to Variables and Facts in Ansible

Understanding variables and facts in Ansible is crucial because they control how automation
tasks execute. Variables store dynamic values, while facts provide system-specific details. Let’s
break down each aspect in detail so you can confidently answer any interview question.

1. Defining Variables in Playbooks


What are Variables in Ansible?

Variables in Ansible are placeholders that store values and allow you to make playbooks more
dynamic. Instead of hardcoding values, you use variables, which can be reused, changed, or
assigned based on conditions.

How to Define Variables in a Playbook

Variables in a playbook can be defined in multiple ways:

1. Inline within a playbook (vars)
2. External variable files (vars_files)
3. Using inventory (inventory variables)
4. As extra command-line arguments (-e flag)
5. Through environment variables
6. Dynamically retrieved via facts (ansible_facts)
7. By registering task outputs (register)

Example of Defining Variables in a Playbook


- name: Example Playbook
  hosts: all
  vars:
    package_name: "nginx"
    package_state: "latest"
  tasks:
    - name: Install a package
      ansible.builtin.yum:
        name: "{{ package_name }}"
        state: "{{ package_state }}"

Here, package_name and package_state are variables, allowing flexibility to install different
packages or change the desired state without editing the task itself.

Best Practices for Defining Variables

 Use meaningful names (database_user instead of user).
 Keep variables in separate files for better organization.
 Use vars_files when dealing with sensitive or complex configurations.

2. Host and Group Variables (host_vars, group_vars)


Ansible allows defining variables specific to hosts and groups in inventory.

Host Variables (host_vars/)

 Variables defined for individual hosts.
 Stored in the host_vars/ directory.
 Files should be named after the hostname.

Example:

inventory/
├── hosts
├── host_vars/
│ ├── webserver1.yaml
│ ├── dbserver1.yaml

Content of host_vars/webserver1.yaml:

ansible_host: 192.168.1.10
server_role: web

Group Variables (group_vars/)


 Variables applied to a group of hosts.
 Stored in the group_vars/ directory.
 File names should match the group names.

Example:

inventory/
├── group_vars/
│ ├── webservers.yaml
│ ├── databases.yaml

Content of group_vars/webservers.yaml:

web_port: 80
firewall_enabled: true

Usage in a Playbook:

- name: Configure Webservers
  hosts: webservers
  tasks:
    - name: Print Web Port
      debug:
        msg: "The web server listens on port {{ web_port }}"

How Variable Precedence Works

If a variable is defined in multiple places, Ansible resolves it using an order of precedence.
Simplified from highest to lowest:

1. Command-line extra variables (-e option)
2. set_fact and registered variables
3. Task and block vars
4. Role vars (vars/main.yml)
5. Play vars and vars_files
6. Host facts (gathered)
7. Inventory host variables (host_vars), then group variables (group_vars)
8. Role defaults (defaults/main.yml), the lowest precedence
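The lookup behaves like a layered dictionary: the first layer that defines a name wins. A sketch of that idea using Python's collections.ChainMap (an analogy for the precedence rules, not Ansible's implementation), with layers ordered highest precedence first:

```python
from collections import ChainMap

# Layers ordered highest precedence first, mirroring the list above.
extra_vars    = {"http_port": 8080}                    # -e on the command line
play_vars     = {"http_port": 80, "user": "deploy"}
group_vars    = {"user": "admin", "region": "us-east-1"}
role_defaults = {"http_port": 80, "user": "root", "region": "eu-west-1"}

resolved = ChainMap(extra_vars, play_vars, group_vars, role_defaults)

print(resolved["http_port"])  # 8080 (extra vars win)
print(resolved["user"])       # deploy (play vars beat group vars)
print(resolved["region"])     # us-east-1 (group vars beat role defaults)
```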

3. Registered Variables
Registered variables store output from a task and can be used later in a playbook.

Example of Registering a Variable


- name: Get Hostname
  command: hostname
  register: result

- name: Display Hostname
  debug:
    msg: "The hostname is {{ result.stdout }}"

Here, result stores the output of the hostname command.

Registered Variable Structure

Registered variables return a dictionary-like object containing:

 stdout: Standard output
 stderr: Standard error
 rc: Return code
 changed: Boolean indicating if a change occurred

Example Output:

{
    "stdout": "server1",
    "stderr": "",
    "rc": 0,
    "changed": false
}

Using Registered Variables in Conditions


- name: Check if File Exists
  stat:
    path: /etc/passwd
  register: file_stat

- name: Display File Presence
  debug:
    msg: "File exists!"
  when: file_stat.stat.exists

4. Facts and ansible_facts


What are Facts in Ansible?

Facts are system properties (e.g., OS type, network interfaces, CPU details) collected by Ansible
from remote hosts before executing tasks.
How to View Facts

Run:

ansible all -m setup

This retrieves system information in JSON format.

Using Facts in Playbooks


- name: Display OS Type
  debug:
    msg: "This server is running {{ ansible_facts['os_family'] }}"

Common ansible_facts keys:

 ansible_os_family → OS family (RedHat, Debian)
 ansible_memtotal_mb → Total memory (MB)
 ansible_distribution → OS name (Ubuntu, CentOS)
 ansible_default_ipv4['address'] → Default IP address

Custom Facts

You can define custom facts in /etc/ansible/facts.d/ as JSON or INI files.

Example (/etc/ansible/facts.d/custom.fact):

[custom]
env=production

Usage in playbooks (facts from facts.d appear under ansible_local):

- name: Print Custom Fact
  debug:
    msg: "Environment: {{ ansible_local['custom']['env'] }}"
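An INI-format .fact file is just sections of key=value pairs. A sketch parsing the custom.fact example above with Python's configparser (for illustration; Ansible does this parsing for you and exposes the result under ansible_local):

```python
import configparser

# Contents of /etc/ansible/facts.d/custom.fact from the example above.
FACT_FILE = """
[custom]
env=production
"""

parser = configparser.ConfigParser()
parser.read_string(FACT_FILE)

# Ansible would surface this as ansible_local['custom']['env'].
custom = dict(parser["custom"])
print(custom["env"])  # production
```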

5. Fact Caching
What is Fact Caching?

By default, Ansible gathers facts at the start of a playbook run. Fact caching allows Ansible to
store facts across multiple runs to reduce execution time.

Fact Caching Methods


1. Memory (default, temporary)
2. JSON files (fact_caching=jsonfile)
3. Redis (fact_caching=redis)
4. Database-backed caching (e.g., etcd, memcached)

How to Enable Fact Caching

Modify ansible.cfg:

[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/facts
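With the jsonfile backend, cached facts live under fact_caching_connection as one JSON document per host. A minimal sketch of that layout (illustrative only; the real file names and fact contents are richer than shown here):

```python
import json
import tempfile
from pathlib import Path

# Simulate a jsonfile fact cache: one JSON document per host.
cache_dir = Path(tempfile.mkdtemp())

facts = {"ansible_os_family": "Debian", "ansible_memtotal_mb": 2048}
(cache_dir / "web1").write_text(json.dumps(facts))

# A later run can read the cached facts instead of re-gathering them over SSH.
cached = json.loads((cache_dir / "web1").read_text())
print(cached["ansible_os_family"])  # Debian
```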

Benefits of Fact Caching

 Performance Improvement: Avoids re-gathering facts on each run.
 Consistency: Uses the same facts across multiple playbooks.
 Useful for Large Inventories: Reduces redundant SSH connections.

Refreshing Cached Facts


ansible all -m setup --flush-cache

Final Thoughts
Mastering variables and facts in Ansible allows you to write efficient, flexible, and scalable
automation scripts. If you want to ace an interview:

1. Understand different ways to define variables (vars, vars_files, inventory variables).
2. Know where host and group variables are stored (host_vars/, group_vars/).
3. Be able to register and use variables in conditions (register).
4. Understand ansible_facts and how to leverage them (ansible_facts['os_family']).
5. Be aware of fact caching for performance optimization.


Ansible Playbooks In-Depth: Mastering Every Concept


Ansible playbooks are the heart of Ansible automation, allowing users to define configurations,
deployments, and orchestrations in a YAML-based format. Below is a detailed breakdown of
every crucial aspect of playbooks to ensure you're fully prepared for any interview question.
1. Structure of an Ansible Playbook
An Ansible playbook is a YAML file that contains one or more plays. Each play defines tasks that
are executed on a set of hosts.

Basic Structure of a Playbook


---
- name: Example Playbook
  hosts: webservers
  become: yes  # Runs tasks with elevated privileges (sudo)
  vars:
    package_name: nginx  # Variables can be defined at different levels

  tasks:
    - name: Install a package
      apt:
        name: "{{ package_name }}"
        state: present

Key Components of a Playbook

1. YAML format: Playbooks use YAML, requiring proper indentation.
2. Plays: A play is a set of tasks applied to specified hosts.
3. Hosts: Defines the target machines to execute tasks.
3. Hosts: Defines the target machines to execute tasks.
4. Tasks: Actions that will be performed (e.g., installing packages).
5. Variables: Allow reusability and customization.
6. Handlers: Special tasks triggered by notify.
7. Loops & Conditionals: Control execution based on logic.

2. Handlers and Notifications


Handlers are special tasks triggered by the notify directive when a task reports a change. They
are executed at the end of a play unless explicitly forced to run immediately.

Example: Using Handlers


---
- name: Restart service when a configuration file changes
  hosts: webservers
  tasks:
    - name: Update configuration file
      copy:
        src: myconfig.conf
        dest: /etc/nginx/nginx.conf
      notify: Restart Nginx  # Triggers the handler

  handlers:
    - name: Restart Nginx
      service:
        name: nginx
        state: restarted

Key Concepts of Handlers

 Handlers only run if notified.
 Handlers execute only once even if notified multiple times.
 If no changes occur in tasks, handlers won't execute.
 You can use listen to group multiple notifications.

Immediate Execution with flush_handlers

By default, handlers run at the end of a play. You can force immediate execution:

- meta: flush_handlers

This ensures handlers execute immediately when notified.
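The "notified many times, run once" behavior can be pictured as a queue of pending handler names that is deduplicated as tasks notify, then flushed at the end of the play. A sketch of that bookkeeping (an analogy, not Ansible's source code):

```python
# Handlers notified during a play are deduplicated, then run once at flush time.
pending = []

def notify(handler):
    if handler not in pending:  # a handler is queued at most once
        pending.append(handler)

# Three changed tasks all notify the same handler...
notify("Restart Nginx")
notify("Restart Nginx")
notify("Reload Firewall")
notify("Restart Nginx")

# ...but each handler runs exactly once, in first-notification order.
print(pending)  # ['Restart Nginx', 'Reload Firewall']
```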

3. Conditionals (when Statements)


Conditionals allow tasks to execute only if certain conditions are met. The when keyword is used
for this purpose.

Example: Install a Package Only on Ubuntu


- name: Install Apache only on Ubuntu
  hosts: webservers
  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: present
      when: ansible_facts['os_family'] == "Debian"

Key Concepts of Conditionals


 Conditions use Jinja2 expressions.
 You can reference facts, variables, or registered outputs.
 Multiple conditions can be combined using and / or.

Example: Using Multiple Conditions


- name: Install a package if Ubuntu and >= 18.04
  hosts: webservers
  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: present
      when: ansible_facts['distribution'] == "Ubuntu" and
            ansible_facts['distribution_version'] is version("18.04", ">=")

(The version test compares release numbers correctly; a plain string comparison like >= "18.04" would compare lexicographically and misorder versions such as 9.04 and 18.04.)

4. Loops (with_items, loop, until, etc.)


Loops allow you to repeat a task multiple times for different inputs.

Using with_items (Old Syntax)


- name: Create multiple users
  user:
    name: "{{ item }}"
    state: present
  with_items:
    - user1
    - user2
    - user3

Using loop (Modern Syntax)


- name: Install multiple packages
  apt:
    name: "{{ item }}"
    state: present
  loop:
    - nginx
    - curl
    - git

Using until for Retry Mechanisms

until retries a task until a condition is met or retries are exhausted.

- name: Wait for service to be ready
  shell: systemctl is-active nginx
  register: result
  until: result.stdout == "active"
  retries: 5
  delay: 10  # Wait 10 seconds between retries
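The until/retries/delay triple is an ordinary bounded retry loop. A sketch of the equivalent logic in Python, with a simulated service that becomes active on the third check (the sleep for delay is omitted so the example runs instantly):

```python
# Emulate: until result == "active", retries: 5, with a service that
# reports "active" on the third check.
states = iter(["inactive", "activating", "active"])

def check_service():
    return next(states, "active")

retries = 5
result = None
for attempt in range(1, retries + 1):
    result = check_service()
    if result == "active":  # the "until" condition
        break
    # a real loop would sleep here (delay: 10)

print(attempt, result)  # 3 active
```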

Loop with Dictionaries (with_dict)


- name: Create users with specific details
  user:
    name: "{{ item.key }}"
    uid: "{{ item.value.uid }}"
    shell: "{{ item.value.shell }}"
  with_dict:
    user1:
      uid: 1001
      shell: /bin/bash
    user2:
      uid: 1002
      shell: /bin/zsh

5. Blocks and Error Handling (rescue, always)


Blocks allow grouping of tasks, and handling errors with rescue and always.

Example: Using Blocks for Error Handling


- name: Demonstrate error handling
  hosts: all
  tasks:
    - block:
        - name: This task will fail
          command: /bin/false
        - name: This task will not run
          debug:
            msg: "This will be skipped"
      rescue:
        - name: Handle failure
          debug:
            msg: "The previous task failed, but we handled it!"
      always:
        - name: Always execute
          debug:
            msg: "This runs regardless of success or failure"

Key Concepts of Blocks

 block: Groups tasks together.
 rescue: Catches errors and defines alternative actions.
 always: Runs no matter what, even if tasks in the block fail.
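block/rescue/always maps directly onto try/except/finally in most languages. The error-handling playbook above behaves like this Python sketch:

```python
log = []
try:
    log.append("task ran")
    raise RuntimeError("/bin/false failed")  # the failing task
    log.append("skipped")                    # never reached, like the skipped task
except RuntimeError:
    log.append("rescue ran")                 # rescue: handle the failure
finally:
    log.append("always ran")                 # always: runs either way

print(log)  # ['task ran', 'rescue ran', 'always ran']
```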
6. Delegation, Local Actions, and Remote Execution
By default, tasks execute on the target hosts, but you can delegate tasks to other machines.

Delegation (delegate_to)

Run a task on a different host:

- name: Fetch a file from the control node
  copy:
    src: /tmp/localfile
    dest: /home/user/remotefile
  delegate_to: localhost  # Executes only on the Ansible control node

Running Tasks on Localhost (local_action)


- name: Execute a command on the local machine
  local_action: command echo "This runs locally"

Executing Tasks on a Different Remote Host


- name: Run a task on a different remote server
  command: echo "This runs on a different remote machine"
  delegate_to: database_server

Interview Preparation: Key Takeaways


1. Understand Playbook Structure: Know how YAML, plays, hosts, tasks, variables, and
handlers fit together.
2. Master Handlers: They run only when notified and execute once per play.
3. Use Conditionals Wisely: Know when and how to use when for logic-based execution.
4. Loops are Powerful: Use loop, until, and with_items effectively.
5. Error Handling is Crucial: Blocks with rescue and always make playbooks resilient.
6. Delegation & Remote Execution: Know how to execute tasks on different hosts using
delegate_to and local_action.

With this knowledge, you should be able to answer any interview question on Ansible playbooks with confidence.
Deep Dive into Ansible Roles and Modularization

Ansible roles and modularization are essential concepts for managing complex infrastructure as
code efficiently. Mastering these topics will help you confidently answer any interview
questions related to Ansible roles.

1. Role Structure and Best Practices


Roles are a way to structure Ansible playbooks into reusable components. They help in
organizing tasks, handlers, variables, templates, and files logically.

1.1 Standard Role Directory Structure

An Ansible role follows a predefined structure:

my_role/
│── defaults/ # Default variables (lowest precedence)
│ ├── main.yml

│── files/ # Static files to be copied
│ ├── example.conf

│── handlers/ # Tasks triggered by "notify"
│ ├── main.yml

│── meta/ # Role metadata (dependencies, author, license)
│ ├── main.yml

│── tasks/ # Main list of tasks to execute
│ ├── main.yml

│── templates/ # Jinja2 templates (e.g., config files)
│ ├── example.conf.j2

│── vars/ # Role-specific variables (higher precedence than defaults)
│ ├── main.yml

│── README.md # Documentation (best practice)

Each directory serves a specific purpose:

 defaults/ – Contains default values for variables.
 files/ – Stores static files that can be copied to remote machines.
 handlers/ – Defines handlers that get executed when notified.
 meta/ – Defines metadata such as dependencies.
 tasks/ – Contains the main task execution logic.
 templates/ – Stores Jinja2 templates.
 vars/ – Contains variables with higher precedence than defaults.

1.2 Best Practices for Roles

1. Follow the standard directory structure – This ensures roles are reusable and easy to maintain.
2. Keep tasks modular – Break down tasks into multiple files in the tasks/ directory for clarity.
3. Use handlers efficiently – Use handlers for actions that need to be triggered only when
something changes.
4. Parameterize variables – Store variables in defaults/ and vars/ for flexibility.
5. Avoid hardcoding values – Use variables and templates for configuration files.
6. Document roles – Include a README.md explaining how to use the role.
7. Keep tasks idempotent – Ensure tasks do not make unnecessary changes if they already exist.
8. Follow security best practices – Do not store sensitive data in plain text; use Ansible Vault if
needed.
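Idempotency (point 7) is easiest to see outside Ansible. Below is a minimal Python sketch — the helper name ensure_directory is hypothetical, not an Ansible API — mirroring how a module like ansible.builtin.file with state=directory only reports a change when it actually creates something:

```python
import os
import tempfile

def ensure_directory(path, mode=0o755):
    # Idempotent: creating a directory that already exists is a no-op,
    # and the return value reports whether anything actually changed.
    if os.path.isdir(path):
        return {"changed": False}
    os.makedirs(path, mode=mode)
    return {"changed": True}

target = os.path.join(tempfile.mkdtemp(), "testdir")
first = ensure_directory(target)   # creates the directory -> changed
second = ensure_directory(target)  # already present -> nothing to do
print(first, second)
```

Running the same operation twice changes nothing the second time — exactly the property a well-written role task should have.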

2. Using ansible-galaxy to Create and Manage Roles


Ansible Galaxy is a tool that helps create, share, and manage Ansible roles.

2.1 Creating a Role Using Ansible Galaxy

To create a new role using Ansible Galaxy, run:

ansible-galaxy init my_role

This generates the standard role directory structure:

my_role/
│── defaults/
│── files/
│── handlers/
│── meta/
│── tasks/
│── templates/
│── vars/
│── README.md

2.2 Installing a Role from Ansible Galaxy

You can install a role from Ansible Galaxy (a repository of community-contributed roles):

ansible-galaxy install geerlingguy.nginx


This installs the role in /etc/ansible/roles/ by default.

To specify an installation path:

ansible-galaxy install -p ./roles geerlingguy.nginx

2.3 Managing Roles with a requirements.yml File

Instead of installing roles manually, you can define dependencies in a requirements.yml file:

- name: geerlingguy.nginx
  version: 3.0.0
- name: my_custom_role
  src: git+https://github.com/myorg/my_custom_role.git

Then install all roles with:

ansible-galaxy install -r requirements.yml

3. Dependencies Between Roles


Roles can depend on other roles, which helps in structuring playbooks effectively.

3.1 Defining Role Dependencies in meta/main.yml

Inside a role's meta/main.yml, you can specify dependencies:

dependencies:
  - role: common
  - role: database
    vars:
      db_name: "my_app_db"

This ensures that the common and database roles are executed before the current role.

3.2 Using Dependencies in Playbooks

In a playbook, you can specify roles in the correct order:

- hosts: web_servers
  roles:
    - common
    - database
    - web_server

Roles will be executed in the order they are defined.

3.3 Role Dependency Considerations

 Avoid circular dependencies (e.g., Role A depends on Role B, and Role B depends on Role A).
 Use variables within dependent roles to keep them flexible.
 If a dependency is required but not installed, Ansible will throw an error.

4. Sharing Roles via Ansible Galaxy


Ansible Galaxy allows users to share roles publicly.

4.1 Publishing a Role to Ansible Galaxy

To publish a role:

1. Ensure your role has a meta/main.yml file with proper metadata:

galaxy_info:
  author: Your Name
  description: A role to install and configure Nginx
  license: MIT
  min_ansible_version: "2.9"
  platforms:
    - name: Ubuntu
      versions:
        - all
  galaxy_tags:
    - web
    - nginx
dependencies: []

2. Authenticate to Ansible Galaxy. Older Ansible versions used a GitHub-based login:

   ansible-galaxy login

   On recent versions this command has been removed — authenticate with an API token from your Galaxy profile instead.

3. Publish the role from a GitHub repository: if your role is stored in a public GitHub
   repository, navigate to Ansible Galaxy and import it, or import it with the CLI:

   ansible-galaxy role import your-github-username your-repo-name

4.2 Using a Shared Role in a Playbook

Once a role is shared on Ansible Galaxy, others can install and use it:

ansible-galaxy install your-github-username.my_role


Then, reference it in a playbook:

- hosts: all
  roles:
    - your-github-username.my_role

Summary
Concept Key Points

Role Structure Organizes tasks, handlers, vars, templates, etc.

Best Practices Use handlers, parameterize vars, document roles, follow idempotency.

Creating Roles Use ansible-galaxy init <role_name> to generate a role.

Installing Roles Use ansible-galaxy install <role_name> or requirements.yml.

Dependencies Define in meta/main.yml, avoid circular dependencies.

Sharing Roles Upload roles to Ansible Galaxy, use metadata properly.

Common Interview Questions

1. What is the structure of an Ansible role?


2. How do you install and use a role from Ansible Galaxy?
3. How do you define role dependencies in Ansible?
4. What are best practices when creating Ansible roles?
5. How do you share a role via Ansible Galaxy?

Mastering these topics will ensure you're well-prepared for any interview question related to
Ansible roles and modularization.

Templates and Jinja2 in Flask: A Deep Dive

Jinja2 is a powerful templating engine used in Flask to generate dynamic HTML content. It
allows you to inject variables, apply control structures (such as loops and conditionals), use
filters to modify data, and even create custom macros for reusable components. Understanding
Jinja2 thoroughly is crucial for Flask development, as it's the core mechanism for rendering
dynamic pages.
1. Using Jinja2 for Dynamic Content
What is Jinja2?

Jinja2 is a template engine for Python, inspired by Django’s template language. It enables
developers to embed Python-like expressions into HTML files to dynamically generate content.

Basic Syntax

Jinja2 uses the following delimiters for different purposes:

 {{ ... }} → Used for variable substitution


 {% ... %} → Used for control structures (loops, conditionals)
 {# ... #} → Used for comments (ignored in rendering)
 {{ ... | filter }} → Used to apply filters to modify data before rendering

Example: Dynamic Content Injection

Flask passes data from the backend to the frontend using the render_template() function.

from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
    user = "Alice"
    return render_template("index.html", username=user)

if __name__ == '__main__':
    app.run(debug=True)

Jinja2 Template (index.html):

<!DOCTYPE html>
<html>
<head>
<title>Jinja2 Example</title>
</head>
<body>
<h1>Welcome, {{ username }}!</h1>
</body>
</html>

Output:
If user = "Alice", the rendered HTML page will be:
<h1>Welcome, Alice!</h1>

2. Template Rendering with render_template()


Flask provides the render_template() function to integrate Python variables into HTML templates.

How it Works

1. Flask looks for templates inside the templates/ folder.


2. The render_template() function takes an HTML file name and optional keyword arguments
to pass variables.

from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
    return render_template("index.html", name="John Doe", age=25)

if __name__ == '__main__':
    app.run(debug=True)

Jinja2 Template (index.html):

<p>Name: {{ name }}</p>


<p>Age: {{ age }}</p>

Rendered Output:

<p>Name: John Doe</p>


<p>Age: 25</p>

Passing Complex Data

Jinja2 can handle:

 Lists: students = ["Alice", "Bob", "Charlie"]


 Dictionaries: user = {"name": "John", "age": 30}
 Objects: user = User("John", 30)

Example:

students = ["Alice", "Bob", "Charlie"]
return render_template("index.html", students=students)

And in index.html:

<ul>
{% for student in students %}
    <li>{{ student }}</li>
{% endfor %}
</ul>

Rendered Output:

<ul>
<li>Alice</li>
<li>Bob</li>
<li>Charlie</li>
</ul>

3. Filters and Control Structures in Jinja2


Jinja2 provides:

 Filters to modify output


 Control structures like loops and conditionals

Common Jinja2 Filters

Filter       Example                              Output
lower        {{ "HELLO" | lower }}                hello
upper        {{ "hello" | upper }}                HELLO
capitalize   {{ "hello world" | capitalize }}     Hello world
length       {{ students | length }}              3 (for a 3-item list)
default      {{ username | default("Guest") }}    Guest (when username is undefined)

Example:

<p>Name: {{ name|upper }}</p>


<p>Number of students: {{ students|length }}</p>

Control Structures

1. Conditionals (if-elif-else)

{% if age < 18 %}
<p>You are a minor.</p>
{% elif age >= 18 and age < 65 %}
<p>You are an adult.</p>
{% else %}
<p>You are a senior citizen.</p>
{% endif %}

2. Loops (for)

<ul>
{% for student in students %}
<li>{{ student }}</li>
{% endfor %}
</ul>

3. Loop Control (break, continue)

Note: {% break %} and {% continue %} are not available by default — they require the
jinja2.ext.loopcontrols extension (in Flask: app.jinja_env.add_extension('jinja2.ext.loopcontrols')).

{% for student in students %}
    {% if student == "Bob" %}
        <p>Skipping Bob...</p>
        {% continue %}
    {% endif %}
    <li>{{ student }}</li>
{% endfor %}

4. Customizing Output with Jinja2


1. Macros (Reusable Code)

Macros allow reusable code blocks.

Defining a Macro:

{% macro greet(name) %}
<p>Hello, {{ name }}!</p>
{% endmacro %}

Using a Macro:

{{ greet("Alice") }}
{{ greet("Bob") }}

Rendered Output:

<p>Hello, Alice!</p>
<p>Hello, Bob!</p>

2. Extending Templates (inheritance)

Inheritance allows templates to share a common layout.


Base Template (base.html):

<!DOCTYPE html>
<html>
<head>
<title>{% block title %}Default Title{% endblock %}</title>
</head>
<body>
<header>Site Header</header>
<main>
{% block content %}{% endblock %}
</main>
<footer>Site Footer</footer>
</body>
</html>

Child Template (index.html):

{% extends "base.html" %}

{% block title %}Home Page{% endblock %}

{% block content %}
<h1>Welcome to My Website</h1>
{% endblock %}

Rendered Output:

<!DOCTYPE html>
<html>
<head>
<title>Home Page</title>
</head>
<body>
<header>Site Header</header>
<main>
<h1>Welcome to My Website</h1>
</main>
<footer>Site Footer</footer>
</body>
</html>

3. Static Files (CSS, JS, Images)

Use the url_for() function to link static files.

Example:

<link rel="stylesheet" href="{{ url_for('static', filename='styles.css') }}">


Folder Structure:

/project
/static
styles.css
/templates
base.html
index.html

Summary
Feature                 Syntax                       Example
Variable Substitution   {{ var }}                    {{ name }}
Control Structures      {% if %}, {% for %}          {% for item in list %} ... {% endfor %}
Filters                 {{ var | filter }}           {{ name | upper }}
Macros                  {% macro func() %}           {% macro greet(name) %} ... {% endmacro %}
Template Inheritance    {% extends "base.html" %}    {% block content %} ... {% endblock %}

By mastering Jinja2, you can dynamically generate HTML, reuse components, and create
efficient web applications. If you can implement these techniques fluently, you’ll be well-
prepared for any interview question on Flask templates and Jinja2.

Deep Dive into Ansible Modules and Plugins


Ansible is a powerful automation tool, and at its core are modules and plugins, which extend its
capabilities. If you fully understand these, you’ll be able to answer any interview question with
confidence.

1. Ansible Modules

Modules are small programs that perform specific tasks in Ansible. They can be categorized into
core modules, community modules, and custom modules.

1.1 Core Modules


Core modules are maintained by the Ansible team and are included with every Ansible
installation. These modules allow interaction with files, packages, users, and systems.
Here are some key categories of core modules:

File Modules

Used for file and directory manipulation.

 file: Manage file permissions, ownership, and type.


 copy: Copy files to a remote machine.
 fetch: Fetch files from a remote machine to the control node.
 stat: Retrieve file properties.

Example:

- name: Ensure a directory exists
  ansible.builtin.file:
    path: /tmp/testdir
    state: directory
    mode: '0755'

Package Modules

Used to install, update, and remove software packages.

 yum: Manages RPM-based distributions (CentOS, RHEL).


 apt: Manages Debian-based distributions (Ubuntu).
 dnf: Modern replacement for yum.

Example:

- name: Install Apache using apt
  ansible.builtin.apt:
    name: apache2
    state: present

User & Group Management Modules

Used to create, modify, and remove users and groups.

 user: Manage user accounts.


 group: Manage groups.

Example:

- name: Create a user
  ansible.builtin.user:
    name: johndoe
    state: present
    groups: sudo

System Modules

Used to manage services, reboot systems, and configure networking.

 service: Start, stop, or restart a service.


 systemd: Manage services on systemd-based Linux distributions.
 cron: Manage cron jobs.

Example:

- name: Ensure Apache is running
  ansible.builtin.service:
    name: apache2
    state: started
    enabled: yes

1.2 Community Modules


Community modules are contributed by Ansible users and are maintained in the Ansible Galaxy
repository. Examples include:

 community.docker.docker_container : Manage Docker containers.


 community.general.ufw: Manage Uncomplicated Firewall (UFW).
 community.aws.s3_bucket: Manage AWS S3 buckets.

To install a community collection:

ansible-galaxy collection install community.docker

To use a community module:

- name: Run a Docker container
  community.docker.docker_container:
    name: my_container
    image: nginx
    state: started

1.3 Writing Custom Modules in Python


If built-in modules don't meet your needs, you can create custom modules using Python.

Basic Structure of a Python Module


A custom module should:

 Import AnsibleModule from ansible.module_utils.basic.


 Accept input parameters.
 Perform a task.
 Return output in JSON format.

Example: Custom Module to Add Two Numbers

1. Create a file add_numbers.py

#!/usr/bin/python

from ansible.module_utils.basic import AnsibleModule

def main():
    module_args = dict(
        num1=dict(type='int', required=True),
        num2=dict(type='int', required=True)
    )

    module = AnsibleModule(argument_spec=module_args)

    result = {'sum': module.params['num1'] + module.params['num2']}

    module.exit_json(changed=False, result=result)

if __name__ == '__main__':
    main()

2. Create a playbook to use the module

- name: Test Custom Module
  hosts: localhost
  tasks:
    - name: Add two numbers
      add_numbers:
        num1: 5
        num2: 10
      register: output

    - debug:
        msg: "Sum is {{ output.result.sum }}"

3. Run the playbook

ansible-playbook -M ./library custom_playbook.yml

The -M ./library tells Ansible where to find the custom module.
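The module's core logic can be sanity-checked without an Ansible control node at all. Below is a plain-Python sketch — run_add_numbers is a hypothetical stand-in for the module's main(), and the JSON envelope mimics what exit_json() emits:

```python
import json

def run_add_numbers(params):
    # Mirrors the module body: add the two numbers and wrap the result
    # in the JSON structure that module.exit_json() would produce.
    return {"changed": False, "result": {"sum": params["num1"] + params["num2"]}}

output = run_add_numbers({"num1": 5, "num2": 10})
print(json.dumps(output))
```

Keeping the computation separable from the AnsibleModule plumbing like this makes custom modules much easier to unit-test.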


2. Ansible Plugins

Plugins extend Ansible’s functionality. There are different types, including lookup, filter, action,
and callback plugins.

2.1 Lookup Plugins


Lookup plugins fetch data from external sources (e.g., files, databases, APIs).

Common Lookup Plugins

 file: Read data from a file.


 env: Get environment variables.
 password: Generate secure passwords.

Example: Using file Lookup Plugin

- name: Read file content
  debug:
    msg: "{{ lookup('file', '/etc/hostname') }}"

2.2 Writing Custom Lookup Plugins


A lookup plugin should:

 Be stored in lookup_plugins/ directory.


 Define a LookupModule class with a run method.

Example: Custom Lookup Plugin to Generate a Random String 1. Create


lookup_plugins/random_string.py

from ansible.plugins.lookup import LookupBase

import random
import string

class LookupModule(LookupBase):
    def run(self, terms, variables=None, **kwargs):
        length = int(terms[0]) if terms else 8
        return [''.join(random.choices(string.ascii_letters, k=length))]

2. Use the plugin in a playbook

- name: Test custom lookup plugin
  hosts: localhost
  tasks:
    - name: Generate random string
      debug:
        msg: "{{ lookup('random_string', 10) }}"
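Because the run() logic is plain Python, it can be verified standalone before wiring it into Ansible — a sketch with the same body as the plugin above (no Ansible imports required):

```python
import random
import string

def random_string(terms=None):
    # Same logic as LookupModule.run(): default length 8, letters only
    length = int(terms[0]) if terms else 8
    return ''.join(random.choices(string.ascii_letters, k=length))

s = random_string([10])
print(s, len(s))
```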

2.3 Writing Custom Filter Plugins


Filter plugins modify data inside Jinja2 templates.

Example: Custom Filter to Reverse a String 1. Create filter_plugins/reverse_filter.py

class FilterModule(object):
    def filters(self):
        return {
            'reverse': self.reverse_string
        }

    def reverse_string(self, value):
        return value[::-1]

Note: unlike lookup or action plugins, filter plugins do not import a base class — Ansible simply looks for a class named FilterModule in the file.

2. Use the filter in a playbook

- name: Test custom filter
  hosts: localhost
  tasks:
    - name: Reverse a string
      debug:
        msg: "{{ 'Ansible' | reverse }}"
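Filter functions are ordinary Python callables, so they can be unit-tested without Ansible at all — a quick sketch using the same logic as the plugin:

```python
def reverse_string(value):
    # Identical logic to the plugin's 'reverse' filter
    return value[::-1]

print(reverse_string("Ansible"))
```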

2.4 Writing Custom Action Plugins


Action plugins run on the control node before executing tasks on remote hosts.

Example: Custom Action Plugin 1. Create action_plugins/custom_action.py

from ansible.plugins.action import ActionBase

class ActionModule(ActionBase):
    def run(self, tmp=None, task_vars=None):
        result = super(ActionModule, self).run(tmp, task_vars)
        result['changed'] = False
        result['msg'] = "This is a custom action plugin"
        return result

2. Use the plugin in a playbook

- name: Test custom action plugin
  hosts: localhost
  tasks:
    - name: Run custom action plugin
      custom_action:
      register: result

    - debug:
        msg: "{{ result.msg }}"

2.5 Writing Custom Callback Plugins


Callback plugins process task results and generate logs or notifications.

Example: Custom Callback Plugin to Log Output 1. Create callback_plugins/custom_logger.py

from ansible.plugins.callback import CallbackBase

class CallbackModule(CallbackBase):
    def v2_runner_on_ok(self, result):
        print(f"TASK SUCCESS: {result._task.get_name()} - {result._result}")

2. Enable the plugin in ansible.cfg:

[defaults]
callback_whitelist = custom_logger

(On newer ansible-core releases this setting is named callbacks_enabled.)
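The callback pattern itself — an object whose hook methods are invoked as task events occur — can be sketched in plain Python. The class and method names below mimic, but do not import, Ansible's real classes (a hypothetical simulation, not Ansible's internals):

```python
class CallbackBase:
    # Stand-in for ansible.plugins.callback.CallbackBase: default no-op hooks
    def v2_runner_on_ok(self, result):
        pass

class CustomLogger(CallbackBase):
    def __init__(self):
        self.lines = []

    def v2_runner_on_ok(self, result):
        # Record a log line for every successful task event
        self.lines.append(f"TASK SUCCESS: {result['task']}")

logger = CustomLogger()
for task in ("Install Apache", "Start service"):
    logger.v2_runner_on_ok({"task": task})  # simulate the event dispatch
print(logger.lines)
```

Ansible does essentially this: for each task result it calls the matching v2_* hook on every enabled callback plugin.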

Conclusion

Mastering Ansible modules and plugins allows you to:

 Efficiently manage infrastructure.


 Extend Ansible’s functionality.
 Answer deep technical interview questions with confidence.


Managing secrets and sensitive data in Ansible is critical for security, especially when dealing
with infrastructure as code. Ansible provides Ansible Vault as a built-in tool to encrypt and
securely manage sensitive information. Below is an in-depth breakdown of how to manage
secrets effectively.
1. Understanding Ansible Vault
Ansible Vault is a security feature within Ansible that allows you to encrypt and protect
sensitive data such as passwords, API keys, SSH keys, and private information used in
playbooks.

Why Use Ansible Vault?

 Encrypt sensitive files (e.g., inventories, variables, playbooks).


 Avoid storing plaintext secrets in version control (like Git).
 Decrypt data only when necessary, minimizing exposure.

2. Using Ansible Vault


Ansible Vault can be used in multiple ways:

 Encrypting entire files.


 Encrypting only specific variables within a YAML file.
 Encrypting strings directly inside playbooks.

Encrypting Entire Files

To encrypt a file (such as secrets.yml), run:

ansible-vault encrypt secrets.yml

It will prompt you for a password. This password will be needed to decrypt the file later.

Viewing Encrypted File


The file will now look like this:

$ANSIBLE_VAULT;1.1;AES256
6162636465666768696a6b6c6d6e6f7071727374757678797a303132333435

Decrypting a File

If you need to view or modify an encrypted file:

ansible-vault decrypt secrets.yml


This will return the file to its unencrypted form.

Editing an Encrypted File (Without Decrypting Completely)

Instead of decrypting manually, you can edit securely:

ansible-vault edit secrets.yml

This command temporarily decrypts the file in a secure session.

Re-encrypting a Decrypted File

If you previously decrypted a file and want to encrypt it again:

ansible-vault encrypt secrets.yml

Changing the Vault Password

To change the password for an encrypted file:

ansible-vault rekey secrets.yml

This will prompt you for the old password and then ask for a new one.

3. Encrypting Specific Variables


Instead of encrypting an entire file, you can encrypt specific variables using ansible-vault
encrypt_string.

For example, to encrypt a password:

ansible-vault encrypt_string 'SuperSecretPassword' --name 'db_password'

(To encrypt with a specific vault ID, add --vault-id label@source, e.g. --vault-id my_vault@prompt.)

This will generate an encrypted variable like:

db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  323434356165663434643265346633...

You can directly paste this into your vars.yml file.

To view it later, reference the variable in a debug task with the vault password supplied, or pipe the ciphertext into ansible-vault decrypt (there is no decrypt_string subcommand):

echo '$ANSIBLE_VAULT;1.1;AES256...' | ansible-vault decrypt

4. Using Vault in Playbooks


To use an encrypted vault file in an Ansible playbook:

1. Store encrypted variables in a separate file (e.g., vault.yml)

db_user: admin
db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  323434356165663434643265346633...

2. Reference this file in your playbook

- name: Deploy database
  hosts: db_servers
  vars_files:
    - vault.yml
  tasks:
    - name: Print Database Password
      debug:
        msg: "DB Password is {{ db_password }}"

3. Run the playbook with a password prompt

ansible-playbook deploy.yml --ask-vault-pass

This will prompt you for the vault password to decrypt and use the secrets.

5. Automating Vault with Password Files


Instead of manually entering the password, you can store it in a secure file (ensure it’s properly
protected with permissions):

echo "SuperSecurePassword" > vault_password.txt


chmod 600 vault_password.txt

Then use:

ansible-playbook deploy.yml --vault-password-file vault_password.txt

This is useful for CI/CD pipelines where human intervention isn't ideal.
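The echo + chmod steps above can equally be done from Python when tooling generates the password file. A sketch — write_vault_password is a hypothetical helper, not an Ansible API:

```python
import os
import stat
import tempfile

def write_vault_password(path, password):
    # Write the password, then restrict the file to the owner (chmod 600)
    with open(path, "w") as f:
        f.write(password)
    os.chmod(path, 0o600)
    return stat.S_IMODE(os.stat(path).st_mode)

pw_file = os.path.join(tempfile.mkdtemp(), "vault_password.txt")
mode = write_vault_password(pw_file, "SuperSecurePassword")
print(oct(mode))
```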
6. Best Practices for Storing and Using Secrets Securely
1. Never store vault passwords in version control.
   o Use environment variables or dedicated secret management tools.
2. Use separate vault files for different environments.
   o Example:
     group_vars/
     ├── production/
     │   └── vault.yml
     └── staging/
         └── vault.yml
   o This ensures that staging and production credentials don’t mix.
3. Limit access to vault files.
   o Use Linux permissions: chmod 600 vault.yml
   o Restrict access using ansible.cfg:
     [defaults]
     vault_password_file = /etc/ansible/.vault_pass
4. Rotate secrets periodically.
   o Change vault passwords and rekey encrypted files regularly.
5. Use external secret management tools.
   o Ansible Vault is great, but for more security, consider integrating:
      HashiCorp Vault
      AWS Secrets Manager
      Azure Key Vault
      CyberArk Conjur

7. Common Ansible Vault Interview Questions


Here are some advanced questions interviewers might ask:

1. What is Ansible Vault, and why do we use it?

Ansible Vault is a feature that allows users to encrypt sensitive data such as passwords, API
keys, and SSH keys in Ansible projects. It ensures that secrets are not exposed in plaintext.

2. How do you use Ansible Vault in a playbook?

 Store encrypted variables in a YAML file.


 Use vars_files to include them in a playbook.
 Provide the vault password when running the playbook.
3. How can you encrypt only a single variable instead of a whole file?

By using:

ansible-vault encrypt_string 'my_secret_value' --name 'secret_key'

4. What happens if you lose the Ansible Vault password?

If the vault password is lost, there is no way to recover encrypted data. It’s important to store it
securely.

5. Can you use multiple Ansible Vault passwords for different environments?

Yes, you can specify multiple vault passwords using:

ansible-playbook playbook.yml --vault-id prod@prod_vault_password.txt --vault-id dev@dev_vault_password.txt

Final Thoughts
Ansible Vault is an excellent tool for securing secrets, but it must be used correctly. Following
best practices—such as restricting access, rotating passwords, and integrating external vaults—
ensures a strong security posture.


Ansible Collections - In-Depth Guide


Ansible Collections are a way to package and distribute modules, roles, plugins, and other
Ansible content. They make it easier to manage, reuse, and distribute automation across
different environments.

1. Understanding Collections and Namespaces

What is an Ansible Collection?

A collection is essentially a structured directory of Ansible content, bundled together for easy
distribution and use. It contains:

 Modules (Custom functionality like built-in Ansible modules)


 Roles (Reusable sets of tasks, handlers, variables, templates, etc.)
 Plugins (Custom filters, lookup plugins, connection plugins, etc.)
 Documentation (README, example playbooks, etc.)
 Tests (Unit and integration tests for verification)

Collection Directory Structure

Each collection follows a defined structure:

ansible_collections/
└── my_namespace/
└── my_collection/
├── docs/
├── plugins/
│ ├── modules/
│ ├── inventory/
│ ├── lookup/
│ ├── filter/
│ └── connection/
├── roles/
│ ├── role1/
│ ├── role2/
├── playbooks/
├── tests/
├── meta/
│ └── runtime.yml
├── README.md
├── galaxy.yml

 ansible_collections/ → Root directory where all collections are stored


 my_namespace/ → Namespace of the collection
 my_collection/ → Collection name
 plugins/ → Stores custom plugins like modules, inventory, etc.
 roles/ → Stores roles within the collection
 meta/runtime.yml → Defines metadata like Ansible version compatibility
 galaxy.yml → Defines collection metadata (used for publishing on Ansible Galaxy)

Namespaces in Ansible Collections

 Namespace is the top-level directory under ansible_collections/.


 It prevents naming conflicts between different collections.
 Format: <namespace>.<collection> (e.g., community.general).

For example, to use a module from a collection:

- name: Allow SSH using a collection module
  hosts: localhost
  tasks:
    - name: Use a module from a collection
      community.general.ufw:
        rule: allow
        port: '22'

(Note: the file module lives in ansible.builtin, not community.general, so a real community.general module such as ufw is used here.)

Here, community is the namespace, and general is the collection.
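The FQCN is simply dot-separated: namespace, then collection, then module. A small illustrative parser (hypothetical helper, for clarity only — Ansible does this resolution internally):

```python
def parse_fqcn(fqcn):
    # "community.general.ufw" -> ("community", "general", "ufw")
    # maxsplit=2 keeps any remaining dots inside the module part
    namespace, collection, module = fqcn.split(".", 2)
    return namespace, collection, module

print(parse_fqcn("community.general.ufw"))
```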

2. Using and Creating Custom Collections

Using Ansible Collections

1. Installing a Collection
   Collections can be installed from Ansible Galaxy or from a tar.gz archive.

   From Ansible Galaxy:
   ansible-galaxy collection install community.general

   From a tar.gz file:
   ansible-galaxy collection install my_collection.tar.gz

   Install to a specific directory:
   ansible-galaxy collection install -p ./collections my_namespace.my_collection

2. Using Installed Collections in Playbooks
   Once installed, you can reference the collection in playbooks:

   - name: Example using a collection module
     hosts: localhost
     tasks:
       - name: Use a module from a collection
         my_namespace.my_collection.my_module:
           param1: value

   Or use a fully qualified collection name (FQCN) in roles:

   roles:
     - my_namespace.my_collection.my_role

Creating a Custom Collection

1. Initialize a Collection
   Use the Ansible Galaxy CLI:

   ansible-galaxy collection init my_namespace.my_collection

   This creates the necessary directory structure.

2. Adding Modules
   Custom modules are placed in plugins/modules/. Example custom module:

   # plugins/modules/custom_module.py
   from ansible.module_utils.basic import AnsibleModule

   def main():
       module = AnsibleModule(argument_spec={"message": {"type": "str", "required": True}})
       response = {"message": module.params["message"]}
       module.exit_json(changed=False, response=response)

   if __name__ == '__main__':
       main()

   Example playbook to use the module:

   - name: Test custom module
     hosts: localhost
     tasks:
       - name: Run custom module
         my_namespace.my_collection.custom_module:
           message: "Hello from custom module"

3. Adding Roles
   Inside the roles/ directory, create a role just like a standalone Ansible role.

4. Defining Metadata (galaxy.yml)

   namespace: my_namespace
   name: my_collection
   version: 1.0.0
   description: My custom Ansible collection
   license: MIT
   dependencies:
     ansible.builtin: ">=2.9.10"

5. Building the Collection
   To package the collection into a .tar.gz file:

   ansible-galaxy collection build

   This generates my_namespace-my_collection-1.0.0.tar.gz.

6. Publishing the Collection
   Publish to Ansible Galaxy:

   ansible-galaxy collection publish my_namespace-my_collection-1.0.0.tar.gz --api-key=<your-galaxy-api-key>

   Or distribute via a private repository.

3. Managing Dependencies with requirements.yml

When working with collections, managing dependencies is crucial. Dependencies are specified
in requirements.yml, which allows Ansible to install all required collections automatically.

Basic Example of requirements.yml

collections:
  - name: community.general
    version: 5.5.0
  - name: ansible.utils
    version: ">=1.0.0,<2.0.0"

Then install dependencies with:

ansible-galaxy collection install -r requirements.yml

Installing from Different Sources

You can specify sources like Galaxy, GitHub, or local paths:

collections:
  - name: community.general            # Install from Ansible Galaxy
  - name: ansible.utils
    source: https://github.com/ansible-collections/ansible.utils.git
    type: git
  - name: my_namespace.my_collection
    source: /path/to/local/collection
    type: dir

Using Collections in ansible.cfg

To control where Ansible searches for installed collections, set collections_paths in ansible.cfg:

[defaults]
collections_paths = ./collections

(To avoid typing the FQCN for every task, you can also list collections under the collections: keyword at the play level.)

Interview Questions & Answers on Ansible Collections

1. What is an Ansible Collection, and why is it used?


A collection is a way to bundle and distribute modules, roles, plugins, and documentation. It
simplifies reuse, reduces complexity, and makes automation more modular.

2. How do you install and use a collection in Ansible?


Use ansible-galaxy collection install <collection_name> to install, and reference it in playbooks using
FQCN.

3. What is the purpose of requirements.yml?


It manages collection dependencies, ensuring they are installed before execution.

4. What is the difference between Ansible Galaxy and a Collection?


Ansible Galaxy is a platform for sharing roles and collections. Collections are structured
packages of Ansible content.
5. How do you create a custom module inside an Ansible Collection?
Place a Python script inside plugins/modules/, ensure it uses AnsibleModule, and include it in
meta/runtime.yml.

This deep dive covers everything needed for an interview.

Performance Optimization in Ansible

Performance optimization in Ansible is crucial when dealing with large infrastructure, complex
playbooks, or frequent execution. Let’s break down each aspect in-depth.

1. Using Forks and Parallel Execution


What are Forks?

 Ansible executes tasks on remote systems using SSH connections.


 By default, Ansible runs up to 5 parallel processes (forks) at a time.
 Increasing the number of forks can significantly improve performance, especially in large
environments.

How to Configure Forks?

There are two ways to adjust the number of forks:

1. Temporary Adjustment (per command execution):

   ansible-playbook -i inventory playbook.yml --forks 20

   This increases parallelism to 20 hosts at a time.

2. Permanent Adjustment (in Ansible configuration). Modify ansible.cfg:

   [defaults]
   forks = 20

   This ensures all playbooks use 20 forks by default.

When to Adjust Forks?

 Increase forks when dealing with hundreds/thousands of hosts to speed up execution.


 Decrease forks if running into resource limitations (CPU/memory bottlenecks) on the
Ansible control node.

How Forks Affect Performance


 More forks mean more simultaneous SSH connections, reducing overall playbook
runtime.
 However, excessive forks can cause network congestion or resource exhaustion.

Example:

Number of Hosts   Forks   Expected Improvement
10                5       Default, handles 5 at a time
100               20      Speeds up execution
1000              50      May reach SSH/network limits
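The effect of forks can be simulated with a thread pool: the same 20 "hosts" are processed with different levels of parallelism. This is a rough sketch — real gains depend on network latency and control-node CPU, and the host names and sleep are stand-ins:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_task(host):
    time.sleep(0.05)  # stand-in for an SSH task on one host
    return host

hosts = [f"host{i}" for i in range(20)]

def run_with_forks(forks):
    # Run the task across all hosts with at most `forks` in flight at once
    start = time.time()
    with ThreadPoolExecutor(max_workers=forks) as pool:
        list(pool.map(run_task, hosts))
    return time.time() - start

t5 = run_with_forks(5)    # 4 waves of 5 hosts each
t20 = run_with_forks(20)  # all hosts at once
print(f"forks=5: {t5:.2f}s, forks=20: {t20:.2f}s")
```

With 20 hosts, forks=5 needs four sequential waves while forks=20 finishes in one, so the second run completes noticeably faster.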

2. Asynchronous Tasks and Polling


Why Use Asynchronous Execution?

 Normally, Ansible waits for tasks to finish before moving to the next.
 If a task takes a long time (e.g., installing software, database backups), it can block
execution.
 Using asynchronous execution allows Ansible to start a task and move on, checking
status later.

How to Use async and poll

Basic Asynchronous Task

- name: Perform a long-running task asynchronously
  shell: sleep 60  # Simulating a long task
  async: 120
  poll: 10

 async: 120 → Maximum time allowed for execution (in seconds).


 poll: 10 → Ansible will check every 10 seconds for task completion.

Fully Asynchronous (Fire & Forget)

- name: Run a task and don't wait
  shell: sleep 300
  async: 300
  poll: 0

 poll: 0 makes Ansible not wait for completion.
 Check on the job later with the async_status module (see the next section) — note that ansible-playbook --check is dry-run mode, not an async status check.
Fetching Job Status Later

1. Start the Async Task

   - name: Start a long-running job
     shell: sleep 300
     async: 300
     poll: 0
     register: async_job

2. Check the Status

   - name: Check status later
     async_status:
       jid: "{{ async_job.ansible_job_id }}"
     register: job_result
     until: job_result.finished
     retries: 30
     delay: 10

   This method is useful when checking job completion at a later stage.
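The until/retries/delay pattern used by async_status is just a bounded poll loop. In plain Python it looks like the sketch below (wait_for and fake_status are hypothetical names, used only to illustrate the pattern):

```python
import time

def wait_for(check, retries=30, delay=0.01):
    # Poll check() until it reports finished, up to `retries` attempts,
    # sleeping `delay` seconds between attempts -- like until/retries/delay.
    for _ in range(retries):
        result = check()
        if result.get("finished"):
            return result
        time.sleep(delay)
    raise TimeoutError("job did not finish within the retry budget")

# Simulated job that finishes on its third status check
state = {"calls": 0}
def fake_status():
    state["calls"] += 1
    return {"finished": state["calls"] >= 3}

outcome = wait_for(fake_status)
print(outcome, state["calls"])
```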

3. Accelerated Mode
What is Accelerated Mode?

 Ansible normally uses SSH to connect to remote hosts, which is slow due to connection
overhead.
 Accelerated Mode (introduced in Ansible 1.9) reduces this overhead by:
o Keeping persistent connections
o Using a lightweight daemon on the remote host

Enabling Accelerated Mode

1. Modify ansible.cfg:

   [defaults]
   transport = accelssh

2. Ensure Python and acceleration support is available on remote hosts.

Advantages:
 Reduces SSH setup time.
 Speeds up execution for multiple tasks on the same host.

Note: accelerated mode is legacy — it was deprecated and removed in later Ansible 2.x releases. On current versions, achieve the same goal with SSH pipelining and persistent connections (ControlPersist):

[ssh_connection]
pipelining = True

When to Use Accelerated Mode?

 Useful in large-scale deployments where repeated SSH connections slow down


execution.
 Best suited for environments where SSH key authentication is already set up.
4. Using fact_caching for Performance
What is Fact Caching?

 Ansible collects system information (called facts) before execution.
 Normally, facts are recomputed every time a playbook runs.
 Fact caching stores these facts, avoiding redundant computations and improving performance.

How to Enable Fact Caching?

Modify ansible.cfg:

[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts
fact_caching_timeout = 600

 fact_caching = jsonfile → Stores facts in JSON format.
 fact_caching_connection = /tmp/ansible_facts → Defines storage location.
 fact_caching_timeout = 600 → Facts expire after 10 minutes.

Other Fact Caching Backends

Backend Type   Description
jsonfile       Stores facts in JSON files (default)
redis          Stores facts in Redis (faster retrieval)
memory         Stores facts temporarily (lost on restart)
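As an illustration, switching to the Redis backend (assuming a Redis server on localhost; the connection string format is host:port:db) could look like:

```ini
[defaults]
fact_caching = redis
# host:port:db of the Redis instance holding the cache
fact_caching_connection = localhost:6379:0
fact_caching_timeout = 3600
```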

Example: Using Cached Facts


- hosts: all
  gather_facts: no   # Avoid redundant fact gathering
  tasks:
    - name: Retrieve cached facts
      setup:
      register: cached_facts

    - name: Display OS version from cache
      debug:
        msg: "OS is {{ cached_facts.ansible_facts.ansible_distribution }}"

 gather_facts: no → Avoids automatic fact gathering at play start; with caching enabled, previously cached facts remain available as variables.
 The explicit setup: task refreshes facts on demand and stores the results in the cache for later runs.

Summary
Optimization Technique Benefit
Forks & Parallel Execution Improves speed by increasing simultaneous connections
Async & Polling Prevents blocking, allowing long tasks to run in the background
Accelerated Mode Reduces SSH overhead by using persistent connections
Fact Caching Avoids redundant fact gathering, improving execution speed

Interview Preparation: Sample Questions & Answers


Q1: How does increasing forks improve Ansible performance?

A: Increasing forks allows Ansible to run tasks on multiple hosts simultaneously, reducing the
total execution time. However, excessive forks can cause network congestion and CPU
overload.

Q2: When would you use async and poll in Ansible?

A: Async and poll are useful for long-running tasks (e.g., software installation, backups). Instead
of blocking execution, async allows tasks to run in the background, and poll periodically checks
for completion.

Q3: What is Ansible’s accelerated mode, and when should you use it?

A: Accelerated mode reduces SSH connection overhead by using persistent connections. It is


useful in large-scale environments where repeated SSH handshakes slow down performance.

Q4: How does fact caching improve performance?

A: Fact caching stores gathered facts to prevent redundant computation. This significantly
speeds up playbook execution, especially in large environments where gathering facts takes
time.

Final Thoughts
Mastering Ansible’s performance optimization techniques allows you to handle large-scale
automation efficiently. By tuning forks, leveraging async tasks, enabling accelerated mode, and
using fact caching, you can ensure optimal performance for your infrastructure.

In-Depth Guide to Debugging and Troubleshooting in Ansible


Debugging and troubleshooting are essential skills in Ansible, as issues often arise from
misconfigurations, syntax errors, or connectivity problems. Mastering these techniques ensures
smooth automation and allows you to confidently answer interview questions.

1. Debugging Playbooks with -v, -vv, -vvv


Ansible provides different levels of verbosity when running playbooks. The more v's you add, the
more detail you get.

1.1 Verbose Modes

Flag    Output Level
-v      Basic details like task results
-vv     More detailed module arguments and results
-vvv    Full JSON output of the modules, including network requests
-vvvv   Connection details, SSH debugging, and more (useful for deep debugging)

1.2 Examples

Running with -v (Basic Verbosity)

ansible-playbook myplaybook.yml -v

 Shows task execution summary and simple error messages.

Running with -vv (Medium Verbosity)

ansible-playbook myplaybook.yml -vv

 Displays detailed module arguments and their results.

Running with -vvv (High Verbosity)

ansible-playbook myplaybook.yml -vvv


 Provides JSON-formatted output, which is useful for debugging complex failures.

Running with -vvvv (Maximum Verbosity)

ansible-playbook myplaybook.yml -vvvv

 Shows SSH connection details (helpful for network issues).

2. Using the Debug Module


The debug module helps inspect variables, messages, and other runtime details.

2.1 Basic Debug Example


- name: Debug example
  hosts: localhost
  tasks:
    - name: Print a message
      debug:
        msg: "This is a debug message"

Output:

TASK [Print a message] ****************************************************************
ok: [localhost] => {
    "msg": "This is a debug message"
}

2.2 Debugging Variables

Sometimes, you want to see the value of a variable.

- name: Debug a variable
  hosts: localhost
  vars:
    my_var: "Hello, Ansible!"
  tasks:
    - name: Print variable
      debug:
        var: my_var

Output:

TASK [Print variable] ****************************************************************
ok: [localhost] => {
    "my_var": "Hello, Ansible!"
}

2.3 Debugging All Variables

To see all available variables:

- name: Print all variables
  debug:
    var: hostvars
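Dumping all of hostvars is verbose; a narrower sketch (assuming a host named web1 in the inventory whose facts have been gathered) pulls a single value out of it:

```yaml
- name: Show one fact from another host
  hosts: localhost
  tasks:
    - name: Print web1's default IPv4 address
      debug:
        msg: "{{ hostvars['web1']['ansible_default_ipv4']['address'] }}"
```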

3. Logging and Error Handling


3.1 Enabling Logging

By default, Ansible does not log output to a file. You can enable logging by setting log_path in
ansible.cfg:

[defaults]
log_path = /var/log/ansible.log

Now, every playbook run will log details to /var/log/ansible.log.

3.2 Handling Errors with ignore_errors

If you want a playbook to continue even if a task fails:

- name: Continue despite failure
  command: /bin/false
  ignore_errors: yes

 Use case: If one task fails, but you want the playbook to keep running.
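A common refinement, sketched here with an illustrative task, is to register the result and report the failure instead of silently ignoring it:

```yaml
- name: Try an optional step
  command: /bin/false
  register: optional_step
  ignore_errors: yes

- name: Report the failure without aborting the play
  debug:
    msg: "Optional step failed with rc={{ optional_step.rc }}"
  when: optional_step.rc != 0
```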

3.3 Using failed_when to Define Failures

Sometimes, a task does not return a failure even if something went wrong. You can force a
failure condition using failed_when:

- name: Fail if output is not "success"
  shell: echo "error"
  register: result
  failed_when: "'success' not in result.stdout"

 Use case: Useful when commands return unexpected output.


3.4 Using rescue and always for Error Handling

Ansible supports a block structure to handle errors cleanly.

- name: Error handling example
  hosts: localhost
  tasks:
    - block:
        - name: This will fail
          command: /bin/false
      rescue:
        - name: Handling failure
          debug:
            msg: "This task failed, but we handled it."
      always:
        - name: Always runs
          debug:
            msg: "Cleanup task."

 block: Contains the main tasks.
 rescue: Runs if the block fails.
 always: Runs no matter what (great for cleanup).

4. Common Issues and Solutions


4.1 YAML Syntax Errors

Issue: Indentation errors in YAML files.

- name: Example playbook
  hosts: localhost
  tasks:
    - name: Incorrect indentation
        debug:   # <-- Wrong indentation
        msg: "Hello, world!"

Solution: Fix indentation.

- name: Corrected playbook
  hosts: localhost
  tasks:
    - name: Correct indentation
      debug:
        msg: "Hello, world!"

4.2 Undefined Variables

Issue: Using an undefined variable.

- name: Undefined variable example
  hosts: localhost
  tasks:
    - debug:
        var: my_var   # my_var is not defined

Solution: Define the variable before use.

- hosts: localhost
  vars:
    my_var: "I am defined!"
  tasks:
    - debug:
        var: my_var
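Another option, sketched below, is the default filter, which substitutes a fallback value instead of failing on the undefined variable:

```yaml
- hosts: localhost
  tasks:
    - debug:
        msg: "{{ my_var | default('not set') }}"   # falls back when my_var is undefined
```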

4.3 SSH Connection Issues

Issue: Host unreachable or SSH authentication failures.

fatal: [192.168.1.100]: UNREACHABLE! => {"msg": "Failed to connect"}

Solution:

 Check if the host is reachable (ping 192.168.1.100).
 Ensure SSH key authentication is set up (ssh-copy-id user@192.168.1.100).
 Use -vvvv for more connection details.

4.4 Module Not Found

Issue: Trying to use a module that isn't installed.

ERROR! no action detected in task. This often indicates a misspelled module name

Solution:

 Check module name spelling.
 Ensure the module's collection is installed (ansible-galaxy collection install <namespace.collection>, if required).
4.5 JSON Parsing Errors

Issue: JSON output isn't formatted properly.

- name: JSON issue
  shell: echo '{ "name": "Alice" '
  register: json_output
  failed_when: json_output.stdout | from_json is none

Solution: Validate JSON before parsing.

- name: Correct JSON parsing
  shell: echo '{ "name": "Alice" }'
  register: json_output
  failed_when: json_output.stdout | from_json is none

4.6 Idempotency Issues

Issue: A playbook keeps making changes when it should be idempotent.

Solution:

 Use the changed_when condition.
 Use proper modules instead of command or shell.

Example:

- name: Ensure package is installed
  apt:
    name: nginx
    state: present
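To illustrate the changed_when tip above, a sketch (the script path and output string are hypothetical):

```yaml
- name: Run a sync script and report change only when it did something
  shell: /usr/local/bin/sync-data   # hypothetical script
  register: sync_result
  changed_when: "'updated' in sync_result.stdout"
```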

Final Tips
1. Use -vvv for deep debugging.
2. Use debug to inspect variables.
3. Enable logging in ansible.cfg.
4. Use failed_when, rescue, and always for robust error handling.
5. Check SSH and network connectivity when hosts are unreachable.
6. Ensure YAML syntax is correct (use yamllint).

With this level of understanding, you should be able to confidently answer any interview
question on Ansible debugging and troubleshooting.
Ansible Tower/AWX (Enterprise Automation) - In-Depth Guide

1. Overview of Ansible Tower/AWX


What are Ansible Tower and AWX?

 Ansible Tower is a commercial product developed by Red Hat to provide a web-based interface,
REST API, and centralized control for Ansible automation.
 AWX is the open-source upstream project of Ansible Tower, providing almost the same features
without enterprise support.

Key Features:

1. Web-Based UI – A centralized dashboard for managing Ansible playbooks and workflows.


2. REST API – Automate and integrate with other tools via a robust API.
3. Job Scheduling – Automate task execution with job templates and schedules.
4. RBAC (Role-Based Access Control) – Secure access by assigning specific roles to users.
5. Inventory Management – Organize and synchronize inventories dynamically.
6. Credential Management – Securely store and use SSH keys, API tokens, and passwords.
7. Workflows & Notifications – Chain jobs together and get alerts based on execution status.
8. Monitoring & Logging – Track job executions and store logs for auditing.

2. Job Templates and Scheduling


What is a Job Template?

A job template in Ansible Tower/AWX defines how an Ansible Playbook is executed. It contains
details like:

 The playbook to run.


 The inventory used.
 The credentials needed.
 Any extra variables or parameters.
 Execution environment (containerized runtimes).

Creating a Job Template

1. Navigate to Templates > Add Job Template.


2. Provide a name and select the playbook.
3. Choose an inventory and set credentials.
4. Define limit (optional) – restrict execution to specific hosts.
5. Set job type (Run or Check mode).
6. Define extra variables if needed.
7. Click Save and launch manually or schedule it.

Scheduling Jobs

You can schedule jobs to run at specific times using the scheduling feature:

1. Go to Templates > Job Template and select a template.


2. Click on Schedule > Add Schedule.
3. Define a name, set the frequency (one-time, recurring, cron-like).
4. Set start date & time.
5. Define time zone for execution.
6. Save and activate the schedule.

Use Case: Automating security patch deployment every Sunday at 2 AM.

3. Managing Credentials Securely


Types of Credentials in Ansible Tower/AWX

1. Machine Credentials – SSH keys or passwords for remote hosts.


2. Vault Credentials – Securely store Ansible Vault passwords.
3. Cloud Provider Credentials – AWS, Azure, GCP API keys.
4. Git/SCM Credentials – Authenticate against Git repositories.
5. Container Registry Credentials – For pulling container images securely.

How to Secure Credentials

1. Encryption at Rest – Credentials are stored encrypted in Tower/AWX.


2. Role-Based Access Control (RBAC) – Restrict who can use credentials.
3. Vault Password Prompting – Use runtime prompts instead of storing secrets.
4. Environment Variables & HashiCorp Vault – Use external vaults for secrets.

Adding a New Credential

1. Navigate to Resources > Credentials > Add Credential.


2. Provide a name and select a credential type.
3. Enter authentication details (username/password, SSH key, token).
4. Assign appropriate RBAC permissions.
5. Save the credential and use it in job templates.
4. RBAC (Role-Based Access Control)
Why is RBAC Important?

 Prevents unauthorized access to inventories, credentials, projects, and job templates.


 Limits execution permissions to specific teams, organizations, or users.
 Enhances security and compliance in enterprise environments.

User Roles in AWX/Tower

1. System Administrator – Full access, can manage all configurations.


2. Auditor – Read-only access for monitoring job executions.
3. Normal User – Can execute jobs but has restricted admin access.
4. Project Admin – Can manage and sync projects.
5. Inventory Admin – Can create and update inventories.

Team-Based Access Control

 Users can be grouped into Teams.


 Permissions are assigned at the team level, avoiding individual management overhead.

How to Assign RBAC Permissions

1. Go to Users or Teams > Select User/Team.


2. Click Permissions > Add Role.
3. Select from predefined roles or create custom roles.
4. Assign access to templates, inventories, or credentials.
5. Save and apply permissions.

5. Monitoring and Logging


Job Execution Monitoring

 View active, pending, and completed jobs via the Jobs Dashboard.
 Filter jobs based on status (failed, running, successful).
 Check job execution history to track playbook runs.

Live Output Logs

 Real-time job output can be viewed from the web UI.


 Logs include host-level execution details.
 Errors and failed tasks are highlighted.
Log Storage & Retention

 AWX/Tower logs can be stored in:


o Local Storage (default)
o Centralized Logging Systems (ELK Stack, Splunk, Graylog)
o Cloud Storage & SIEM Tools (AWS CloudWatch, Azure Monitor)

Setting Up External Logging

1. Configure logging aggregators under Settings > Logging.


2. Define log forwarding to a remote service (Syslog, HTTP, etc.).
3. Enable structured JSON logging for better parsing.

Health Monitoring

 Use Grafana & Prometheus for performance metrics.


 Monitor CPU, Memory, and Disk Usage.
 Set up alerts for failures or anomalies.

Interview Preparation & Example Questions


Basic Questions

1. What is Ansible Tower/AWX, and how does it differ from Ansible CLI?
2. How do job templates simplify automation in Ansible Tower?
3. Explain the different types of credentials stored in AWX.

Advanced Questions

4. How would you configure RBAC to limit access to sensitive job templates?
5. Describe how you would integrate AWX logs with a centralized logging system.
6. How do you schedule and manage recurring automation tasks?

Scenario-Based Questions

7. A playbook execution fails in AWX. How do you troubleshoot?


8. Your company requires an audit log of all automation runs. How do you enable this?
9. You need to deploy an application only to specific hosts in an inventory. How do you achieve
this using job templates?
Final Thoughts
Mastering Ansible Tower/AWX requires a solid understanding of automation workflows,
security practices, and system integrations. Focus on hands-on practice by deploying AWX in a
test environment, setting up job templates, managing credentials securely, and configuring
RBAC. By doing so, you’ll be well-prepared for any interview!


Ansible for Network Automation is a powerful tool that simplifies the management and
configuration of network devices. It allows network engineers and DevOps teams to automate
tasks such as configuration changes, device provisioning, and compliance checks across multiple
vendors like Cisco, Juniper, and Arista.

Let’s break this down into its key components:

1. Managing Network Devices with Ansible


Ansible is an agentless automation tool, meaning it does not require software to be installed on
target devices. It operates over SSH or API connections to interact with network devices. The
key benefits of using Ansible for network automation include:

 Consistency – Ensures that all network devices have uniform configurations.


 Scalability – Automates changes across thousands of devices at once.
 Simplified Management – Uses human-readable YAML playbooks.
 Reduction in Human Error – Automates repetitive tasks to eliminate mistakes.
 Vendor Agnostic – Supports multiple vendors like Cisco, Juniper, Arista, and others.

How Ansible Connects to Network Devices

Ansible uses different connection methods depending on the device type:

 SSH – Most commonly used for CLI-based interactions with network devices.
 API (REST, NETCONF, gNMI) – Used for more advanced, structured communication.
 Network CLI – A special Ansible connection method (ansible_connection: network_cli)
designed for network devices.

Common Network Automation Tasks with Ansible

 Gathering device facts (system information, version, interfaces)


 Pushing configuration changes (VLANs, routing, ACLs)
 Backing up configurations
 Checking compliance against security policies
 Automating software upgrades
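One of these tasks, configuration backup, can be sketched as a short playbook (the backup directory is an assumption):

```yaml
- name: Back up device configurations
  hosts: routers
  gather_facts: no
  tasks:
    - name: Save the running config to a local backup file
      cisco.ios.ios_config:
        backup: yes
        backup_options:
          dir_path: ./backups   # assumed local directory
```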

2. Using Ansible Modules for Cisco, Juniper, Arista, etc.


Ansible provides vendor-specific modules to manage devices from different manufacturers.
These modules abstract the complexity of device-specific commands and provide a common
automation framework.

Cisco Modules

For Cisco devices, Ansible supports:

 IOS (Cisco routers & switches) → cisco.ios collection


 NX-OS (Cisco Nexus switches) → cisco.nxos collection
 ASA (Cisco firewalls) → cisco.asa collection
 IOS-XR (Cisco service provider routers) → cisco.iosxr collection

Common Cisco modules:

 ios_command → Runs show commands


 ios_config → Pushes configuration changes
 ios_facts → Collects system facts
 ios_interfaces → Manages interfaces

Example Playbook for a Cisco Router:

---
- name: Configure a Cisco IOS router
  hosts: routers
  gather_facts: no
  tasks:
    - name: Show version
      cisco.ios.ios_command:
        commands: show version
      register: version_output

    - name: Display output
      debug:
        var: version_output.stdout_lines

Juniper Modules
For Juniper devices, Ansible provides the juniper.device collection, which supports:

 Junos routers and switches → juniper.device.junos_command, juniper.device.junos_config, juniper.device.junos_facts

Example Playbook for Juniper:

---
- name: Configure a Juniper device
  hosts: juniper_routers
  gather_facts: no
  tasks:
    - name: Run a show command
      juniper.device.junos_command:
        commands: show interfaces terse
      register: interfaces_output

    - name: Display output
      debug:
        var: interfaces_output.stdout_lines

Arista Modules

For Arista devices, Ansible provides the arista.eos collection.

Common modules:

 eos_command → Runs show commands


 eos_config → Pushes configuration changes
 eos_facts → Collects system information

Example Playbook for Arista:

---
- name: Configure an Arista switch
  hosts: arista_switches
  gather_facts: no
  tasks:
    - name: Show interfaces
      arista.eos.eos_command:
        commands: show interfaces
      register: output

    - name: Display output
      debug:
        var: output.stdout_lines

3. Network CLI Connections and YAML Parsing
Ansible primarily interacts with network devices using Network CLI (for SSH-based devices) and
API connections.

Network CLI Connection Method

To automate network devices using CLI, Ansible uses the network_cli connection type. This must
be specified in the inventory file.

Example inventory.yml:

all:
  children:
    network_devices:
      hosts:
        router1:
          ansible_host: 192.168.1.1
          ansible_network_os: cisco.ios
          ansible_connection: network_cli
          ansible_user: admin
          ansible_password: password

In a playbook, you don’t need to specify the connection again if it's in the inventory.

YAML Parsing in Ansible

Ansible playbooks and configuration files use YAML ("YAML Ain't Markup Language"), which is
human-readable and structured.

YAML basics in Ansible:

 Uses indentation (spaces, not tabs) for hierarchy.
 Uses - for lists.
 Uses : for key-value pairs.

Example:

- name: Basic Playbook
  hosts: routers
  tasks:
    - name: Show interfaces
      cisco.ios.ios_command:
        commands: show ip interface brief
      register: interfaces_output

    - name: Debug Output
      debug:
        var: interfaces_output.stdout_lines

Parsing and Extracting Data from Command Outputs

When executing commands on network devices, the output is returned as structured data
(lists/dictionaries).

Example of extracted output:

"stdout_lines": [
    "Interface        IP-Address    OK? Method Status                  Protocol",
    "FastEthernet0/0  192.168.1.1   YES manual up                      up",
    "FastEthernet0/1  unassigned    YES unset  administratively down   down"
]

To extract the IP of FastEthernet0/0, you can use Ansible filters like json_query or simple loops.

Example:

- name: Extract IP Address
  set_fact:
    interface_ip: "{{ interfaces_output.stdout_lines[1].split()[1] }}"

- name: Display IP Address
  debug:
    var: interface_ip

This will extract 192.168.1.1 from the output.
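A slightly more robust sketch matches the line by interface name rather than position (assuming the same registered interfaces_output):

```yaml
- name: Find the FastEthernet0/0 line
  set_fact:
    fa0_line: "{{ interfaces_output.stdout_lines | select('match', '^FastEthernet0/0') | first }}"

- name: Show its IP column
  debug:
    msg: "{{ fa0_line.split()[1] }}"
```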

Common Interview Questions & Answers


1. What is Ansible’s role in network automation?

Ansible simplifies the management of network devices by automating tasks like configuration,
monitoring, and compliance. It uses an agentless approach, relying on SSH or API-based
communication.

2. How does Ansible connect to network devices?

Ansible connects via network_cli (for SSH-based management) or API (NETCONF, REST). The
connection type is defined in the inventory.

3. What are some important Ansible modules for network automation?


 Cisco: ios_command, ios_config
 Juniper: junos_command, junos_config
 Arista: eos_command, eos_config

4. How do you store and manage network device credentials securely in Ansible?

Use Ansible Vault to encrypt credentials and sensitive data:

ansible-vault encrypt credentials.yml
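An encrypted file can still be loaded like any variables file; a sketch (the file and variable names are assumptions):

```yaml
- name: Use vaulted credentials
  hosts: network_devices
  vars_files:
    - credentials.yml   # encrypted with ansible-vault
  tasks:
    - name: Confirm a vaulted variable is available
      debug:
        msg: "Connecting as {{ vault_username }}"
```

Run the play with ansible-playbook site.yml --ask-vault-pass (or --vault-password-file) so Ansible can decrypt the file at runtime.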

5. How do you verify a configuration change before applying it?

Use Ansible’s check mode:

ansible-playbook network_config.yml --check

This guide should give you deep knowledge about Ansible for network automation, allowing
you to confidently answer any interview question.

Ansible for Cloud Automation

Ansible is a powerful open-source automation tool primarily used for configuration


management, application deployment, and cloud automation. It provides a simple way to
automate provisioning and managing cloud resources across platforms like AWS, Azure, and
Google Cloud Platform (GCP). Unlike other Infrastructure as Code (IaC) tools, Ansible is
agentless and operates over SSH or WinRM, making it easy to use and maintain.

1. Managing AWS, Azure, and GCP Resources with Ansible


1.1 Why Use Ansible for Cloud Automation?

 Declarative & Idempotent: Ensures resources are always in the desired state.
 Agentless: No need to install agents on managed nodes.
 Multi-cloud support: Can automate infrastructure across AWS, Azure, and GCP.
 Modular & Extensible: Uses modules and plugins to support cloud operations.

1.2 Ansible with AWS

Setting Up Ansible for AWS


To manage AWS with Ansible, you'll need:

1. AWS Credentials stored in ~/.aws/credentials or environment variables.


2. Boto3 and Botocore (Python libraries for AWS).
3. Ansible AWS modules (e.g., ec2, s3, rds).

Example: Creating an EC2 Instance with Ansible

- name: Create an EC2 instance
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Launch EC2 instance
      amazon.aws.ec2_instance:
        name: "web-server"
        key_name: my-key
        instance_type: t2.micro
        image_id: ami-12345678
        region: us-east-1
        state: running
        tags:
          Environment: production

1.3 Ansible with Microsoft Azure

Setting Up Ansible for Azure

1. Install Azure SDK for Python: pip install azure-mgmt-resource


2. Install Ansible Azure Collection: ansible-galaxy collection install azure.azcollection
3. Set up Azure authentication using a service principal.

Example: Deploying a Virtual Machine on Azure

- name: Create a VM in Azure
  hosts: localhost
  tasks:
    - name: Create a resource group
      azure.azcollection.azure_rm_resourcegroup:
        name: myResourceGroup
        location: eastus

    - name: Create a VM
      azure.azcollection.azure_rm_virtualmachine:
        resource_group: myResourceGroup
        name: myVM
        vm_size: Standard_B1s
        admin_username: azureuser
        admin_password: "P@ssword123!"
        image:
          offer: UbuntuServer
          publisher: Canonical
          sku: "18.04-LTS"
          version: latest

1.4 Ansible with Google Cloud Platform (GCP)

Setting Up Ansible for GCP

1. Install Google Cloud SDK and authenticate.


2. Install Ansible GCP collection: ansible-galaxy collection install google.cloud
3. Use a service account with the required permissions.

Example: Creating a Compute Instance on GCP

- name: Create a VM in GCP
  hosts: localhost
  tasks:
    - name: Create GCP instance
      google.cloud.gcp_compute_instance:
        name: my-instance
        machine_type: n1-standard-1
        zone: us-central1-a
        project: my-project-id
        auth_kind: serviceaccount
        disks:
          - auto_delete: true
            boot: true
            initialize_params:
              source_image: projects/debian-cloud/global/images/family/debian-10

2. Using Boto3 for AWS Automation


Boto3 is the official AWS SDK for Python, allowing you to automate AWS services
programmatically. While Ansible provides a declarative way to manage AWS, Boto3 is
imperative, giving you full control over AWS resources.

2.1 Installing Boto3


pip install boto3

2.2 Configuring AWS Credentials

Set credentials in ~/.aws/credentials:

[default]
aws_access_key_id=YOUR_ACCESS_KEY
aws_secret_access_key=YOUR_SECRET_KEY

Set the default region in ~/.aws/config (boto3 reads the region from the config file, not the credentials file):

[default]
region=us-east-1

2.3 Boto3 Example: Launching an EC2 Instance


import boto3

ec2 = boto3.resource('ec2')

instance = ec2.create_instances(
    ImageId='ami-12345678',
    MinCount=1,
    MaxCount=1,
    InstanceType='t2.micro',
    KeyName='my-key'
)

print("Created instance:", instance[0].id)

2.4 Boto3 vs. Ansible for AWS Automation

Feature       Ansible                   Boto3
Ease of Use   Easier (YAML playbooks)   Requires Python programming
Idempotency   Ensures desired state     Must handle idempotency manually
Flexibility   Limited to modules        Full AWS API access
Multi-cloud   Yes (AWS, Azure, GCP)     No (AWS only)

When to Use Boto3?

 When you need full control over AWS resources.


 When handling complex logic or workflows.
 When integrating AWS with Python applications.

3. Terraform vs. Ansible for Cloud Infrastructure


Terraform and Ansible are often compared because both can automate cloud infrastructure,
but they serve different purposes.

3.1 Key Differences

Feature            Terraform                                Ansible
Purpose            Infrastructure provisioning              Configuration management
Language           HCL (HashiCorp Configuration Language)   YAML
State Management   Uses a state file                        Stateless
Idempotency        Built-in state tracking                  Idempotent modules
Multi-cloud        Yes                                      Yes
Declarative        Yes                                      Yes

3.2 Example: Creating an AWS EC2 Instance

Terraform

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}

Ansible

- name: Create EC2 instance
  hosts: localhost
  tasks:
    - name: Launch EC2
      amazon.aws.ec2_instance:
        name: "web-server"
        image_id: ami-12345678
        instance_type: t2.micro
        region: us-east-1

3.3 When to Use Terraform vs. Ansible

Use Case                                                  Best Tool
Provisioning cloud resources (VMs, networks, databases)   Terraform
Configuration management (installing software, updates)   Ansible
Immutable infrastructure (rebuilding from scratch)        Terraform
Hybrid cloud automation                                   Ansible

3.4 Using Terraform and Ansible Together

 Use Terraform to provision infrastructure.
 Use Ansible to configure servers post-provisioning.
 Example workflow:
1. Terraform deploys AWS EC2 instances.
2. Ansible configures software on those instances.

4. Summary
 Ansible is great for cloud automation, managing AWS, Azure, and GCP.
 Boto3 provides a Python-based API for AWS but requires manual idempotency.
 Terraform vs. Ansible: Terraform is better for provisioning, while Ansible excels at
configuration management.
 Best Practice: Combine Terraform for infrastructure and Ansible for post-deployment
configuration.


Mastering CI/CD with Ansible


Ansible is a powerful automation tool used to provision infrastructure, configure systems, and
deploy applications. When integrated into a CI/CD pipeline, it enables organizations to
automate and streamline software delivery.

This guide will provide an in-depth understanding of using Ansible in CI/CD, so you can
confidently answer any interview question on this topic.

1. Integrating Ansible with Jenkins, GitHub Actions, and GitLab


CI/CD
Ansible can be integrated into various CI/CD tools to automate deployment and configuration
tasks.

1.1 Ansible with Jenkins

Jenkins is a widely used CI/CD tool that integrates well with Ansible for automation.

Steps to Integrate Ansible with Jenkins

1. Install Ansible on Jenkins Server
o Ensure the Jenkins server has Ansible installed:

sudo apt update && sudo apt install -y ansible

2. Install Required Plugins in Jenkins
o Go to Manage Jenkins → Plugin Manager → Install "Ansible Plugin".
o Install SSH Agent Plugin (if SSH is used for Ansible connections).
3. Configure Ansible in Jenkins
o Go to Manage Jenkins → Global Tool Configuration.
o Find the Ansible section and add the path to your Ansible binary.
4. Create a Jenkins Pipeline for Ansible
o In Jenkins, create a Pipeline project.
o Add the following pipeline script to call an Ansible playbook:

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/your-repo.git'
            }
        }
        stage('Run Ansible') {
            steps {
                ansiblePlaybook credentialsId: 'ansible-ssh-key',
                                inventory: 'inventory.ini',
                                playbook: 'playbook.yml'
            }
        }
    }
}

o The ansiblePlaybook step runs an Ansible playbook inside the pipeline.
5. Run the Pipeline
o Click "Build Now" to execute the pipeline.

1.2 Ansible with GitHub Actions

GitHub Actions enables CI/CD workflows in GitHub repositories.

Steps to Integrate Ansible with GitHub Actions

1. Create a GitHub Actions Workflow
o Add a .github/workflows/deploy.yml file to your repository.
2. Define the Workflow
o Example workflow to run an Ansible playbook:

name: Deploy with Ansible
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Install Ansible
        run: sudo apt update && sudo apt install -y ansible

      - name: Run Ansible Playbook
        run: ansible-playbook -i inventory.ini playbook.yml

The workflow:
o Triggers on push to the main branch.
o Installs Ansible.
o Runs an Ansible playbook.
3. Secure Access Using Secrets
o Store SSH keys or environment variables using GitHub Secrets.
o Modify the playbook execution:

- name: Run Ansible Playbook
  env:
    ANSIBLE_HOST_KEY_CHECKING: False
  run: ansible-playbook -i inventory.ini playbook.yml --user=${{ secrets.SSH_USER }}

1.3 Ansible with GitLab CI/CD

GitLab CI/CD runs pipelines using .gitlab-ci.yml files.

Steps to Integrate Ansible with GitLab CI/CD

1. Define a .gitlab-ci.yml File

stages:
  - deploy

deploy_with_ansible:
  stage: deploy
  image: python:3.9
  before_script:
    - apt update && apt install -y ansible
  script:
    - ansible-playbook -i inventory.ini playbook.yml

2. Commit and Push
o This triggers the pipeline to run Ansible on every commit.
2. Automating Infrastructure Provisioning in CI/CD Pipelines
Why Automate Infrastructure with Ansible?

 Ensures consistent environments across development, staging, and production.


 Automates cloud provisioning in AWS, Azure, or GCP.
 Works seamlessly with Terraform for infrastructure as code (IaC).

Example: Provisioning an AWS EC2 Instance with Ansible

1. Install AWS Ansible Collection

ansible-galaxy collection install amazon.aws

2. Define an Ansible Playbook for EC2 Provisioning

- name: Launch EC2 instance
  hosts: localhost
  tasks:
    - name: Create an EC2 instance
      amazon.aws.ec2_instance:
        name: "MyServer"
        key_name: "my-key"
        instance_type: "t2.micro"
        image_id: "ami-12345678"
        region: "us-east-1"
        count: 1
        state: "running"

3. Integrate into a CI/CD Pipeline

 Add this playbook to a Jenkins, GitHub Actions, or GitLab CI/CD pipeline.
 Automatically spin up AWS instances when deploying applications.

3. Testing Ansible Playbooks with Molecule

Molecule is a framework for testing Ansible roles and playbooks.

Why Use Molecule?

 Ensures playbooks work correctly before deployment.
 Supports Docker, Vagrant, and cloud providers for testing.
 Can be integrated into CI/CD pipelines.
1. Install Molecule
pip install molecule docker

2. Create a New Molecule Test

molecule init scenario --driver-name docker

 This sets up a test environment for Ansible roles.

3. Define a Playbook for Testing

Modify molecule/default/converge.yml:

- name: Verify Playbook
  hosts: all
  tasks:
    - name: Install Nginx
      ansible.builtin.apt:
        name: nginx
        state: present

4. Run Molecule Tests

molecule test

 This creates a container, runs the playbook, and verifies results.

5. Integrate Molecule in CI/CD Pipelines

 Example GitHub Actions workflow:

name: Molecule Test

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Install dependencies
        run: pip install molecule docker

      - name: Run Molecule Tests
        run: molecule test

 Automates playbook validation in CI/CD pipelines.


Conclusion
By mastering Ansible in CI/CD, you can automate deployment, test playbooks, and manage
infrastructure efficiently. Here's a quick summary of what you should know for interviews:

✅Integrating Ansible with CI/CD tools
✅Automating infrastructure provisioning in pipelines
✅Testing playbooks using Molecule


Writing Custom Ansible Modules and Plugins (In-Depth Guide)

Ansible is an automation tool that allows IT professionals to manage configurations, deploy
applications, and orchestrate workflows. While Ansible provides a vast collection of built-in
modules and plugins, sometimes you need to create custom ones to meet specific
requirements. This guide gives you a deep understanding of writing custom Ansible modules
and plugins so you can confidently answer interview questions.

1. Writing a Custom Ansible Module in Python

What is an Ansible Module?

An Ansible module is a standalone script that Ansible executes on target machines. Modules
return JSON output and follow Ansible’s standard response format.
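As an illustration of that response format, here is a plain-Python sketch (not Ansible code) of the JSON document a simple module might emit on success:

```python
import json

# Shape of a typical successful module response -- the same dict
# a real module would pass to module.exit_json().
response = {
    "changed": False,
    "path": "/etc/passwd",
    "exists": True,
}
print(json.dumps(response))
# -> {"changed": false, "path": "/etc/passwd", "exists": true}
```

Ansible parses this JSON on the control node to decide whether the task changed anything, failed, or succeeded.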

Why Write a Custom Module?

 When no built-in module fulfills your requirements.
 To interact with proprietary software or APIs.
 To encapsulate business logic in automation.

Structure of a Custom Module

A minimal custom Ansible module consists of:

1. Importing required libraries.
2. Parsing input arguments.
3. Performing the desired operations.
4. Returning results in JSON format.

Basic Example: Creating a Custom Module

Let’s create a module called my_module.py that checks if a file exists.

Step 1: Define the Module

#!/usr/bin/python

from ansible.module_utils.basic import AnsibleModule
import os


def main():
    module_args = dict(
        path=dict(type='str', required=True)
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=True
    )

    path = module.params['path']
    file_exists = os.path.exists(path)

    result = dict(
        changed=False,
        path=path,
        exists=file_exists
    )

    if file_exists:
        module.exit_json(**result)
    else:
        module.fail_json(msg="File not found", **result)


if __name__ == '__main__':
    main()

Step 2: Using the Module in a Playbook

Save my_module.py in library/ (relative to your playbook).

- name: Test Custom Module
  hosts: localhost
  tasks:
    - name: Check if file exists
      my_module:
        path: "/etc/passwd"
      register: result

    - debug:
        var: result

Step 3: Running the Playbook

ansible-playbook test.yml

Key Concepts for Interviews

 AnsibleModule: A helper that simplifies argument parsing, error handling, and JSON
response formatting.
 module.exit_json(): Used to return a successful response.
 module.fail_json(): Used to return a failure response.
 supports_check_mode: Allows the module to run in dry-run mode (--check).

2. Writing Callback and Connection Plugins

Ansible Plugins Overview

Ansible plugins extend its core functionality. The most important types are:

 Callback Plugins: Modify output and logging.
 Connection Plugins: Define how Ansible connects to remote hosts.
 Action Plugins: Enhance module execution logic.

Writing a Callback Plugin

A callback plugin is used to modify Ansible’s output format or send notifications.

Step 1: Create a Custom Callback Plugin

Save this file in callback_plugins/custom_callback.py:

from ansible.plugins.callback import CallbackBase


class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'stdout'
    CALLBACK_NAME = 'custom_callback'

    def v2_runner_on_ok(self, result, **kwargs):
        host = result._host.get_name()
        self._display.display(f"✔SUCCESS: {host} - {result._result}")

    def v2_runner_on_failed(self, result, **kwargs):
        host = result._host.get_name()
        self._display.display(f"❌FAILURE: {host} - {result._result}", color='red')

Step 2: Enable the Callback Plugin

Edit ansible.cfg:

[defaults]
callback_whitelist = custom_callback

(On ansible-core 2.11 and later, callback_whitelist is deprecated in favor of callbacks_enabled.)

Step 3: Run Ansible with Custom Output

ansible-playbook test.yml

Writing a Connection Plugin

A connection plugin determines how Ansible connects to remote machines. Common
connection types are SSH, WinRM, and local.

Example: Writing a Custom Connection Plugin

Save it in connection_plugins/custom_connection.py :

from ansible.plugins.connection import ConnectionBase


class Connection(ConnectionBase):
    transport = 'custom'

    def __init__(self, *args, **kwargs):
        super(Connection, self).__init__(*args, **kwargs)

    def exec_command(self, cmd, in_data=None, sudoable=True):
        return 0, "Custom Connection Executed: " + cmd, ""

    def put_file(self, in_path, out_path):
        pass

    def fetch_file(self, in_path, out_path):
        pass

Then specify the connection in your playbook:

- name: Test Custom Connection
  hosts: all
  connection: custom
  tasks:
    - command: echo "Hello, Custom Connection"

Key Concepts for Interviews

 CallbackBase: Used to define custom logging or reporting.
 ConnectionBase: Defines how Ansible connects to hosts.
 exec_command(): Runs shell commands in a connection plugin.
 put_file()/fetch_file(): Transfers files between the controller and remote hosts.

3. Using the Ansible SDK for Automation

The Ansible SDK allows developers to integrate Ansible into Python applications.

Installing Ansible SDK

pip install ansible

Running Ansible Playbooks Programmatically

from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.parsing.dataloader import DataLoader
from ansible.inventory.manager import InventoryManager
from ansible.vars.manager import VariableManager


def run_playbook(playbook_path, inventory_path):
    loader = DataLoader()
    inventory = InventoryManager(loader=loader, sources=[inventory_path])
    variable_manager = VariableManager(loader=loader, inventory=inventory)

    executor = PlaybookExecutor(
        playbooks=[playbook_path],
        inventory=inventory,
        variable_manager=variable_manager,
        loader=loader,
        passwords={}
    )

    executor.run()


run_playbook('test.yml', 'inventory.ini')

Using Ansible Runner

Ansible Runner provides a more structured way to execute Ansible in Python applications.

Installing Ansible Runner

pip install ansible-runner

Executing a Playbook

import ansible_runner

r = ansible_runner.run(playbook='test.yml', inventory='inventory.ini')
print(r.stdout)

Key Concepts for Interviews

 Ansible SDK: Enables Python applications to interact with Ansible.
 PlaybookExecutor: Runs playbooks programmatically.
 Ansible Runner: Provides a structured way to integrate Ansible with other tools.
 InventoryManager & VariableManager: Manage inventory and variables dynamically.

Final Thoughts
By mastering custom modules, callback & connection plugins, and the Ansible SDK, you can
confidently tackle any interview question on this topic. The best way to solidify your knowledge
is through hands-on practice. Try writing different modules and plugins, integrate Ansible with
Python applications, and experiment with real-world automation scenarios.


Ansible best practices are essential for writing maintainable, scalable, and efficient automation.
Below is a deep dive into the key best practices.

1. Structuring Projects Efficiently

Proper structure makes playbooks easier to read, debug, and maintain. It also improves
scalability when working on larger projects.

Recommended Project Structure


ansible-project/
│── inventories/
│ ├── production
│ ├── staging
│── roles/
│ ├── webserver/
│ │ ├── tasks/
│ │ │ ├── main.yml
│ │ │ ├── install.yml
│ │ │ ├── configure.yml
│ │ ├── handlers/
│ │ │ ├── main.yml
│ │ ├── templates/
│ │ │ ├── nginx.conf.j2
│ │ ├── files/
│ │ │ ├── index.html
│ │ ├── vars/
│ │ │ ├── main.yml
│ │ ├── defaults/
│ │ │ ├── main.yml
│ │ ├── meta/
│ │ │ ├── main.yml
│── group_vars/
│ ├── all.yml
│ ├── webservers.yml
│── host_vars/
│ ├── server1.yml
│── playbooks/
│ ├── site.yml
│ ├── webservers.yml
│── ansible.cfg
│── requirements.yml
│── README.md

Key Best Practices

 Use roles for reusability: Each role should have a clear purpose (e.g., "webserver").
 Separate variables: Use group_vars and host_vars instead of hardcoding values.
 Use inventory directories: Separate staging, production, and testing environments.
 Keep playbooks in a dedicated directory: Helps in organizing execution logic.
 Use ansible.cfg to define paths. Example:

[defaults]
inventory = inventories/production
roles_path = ./roles

2. Using Tags and include_tasks Wisely

Using Tags

Tags allow you to run specific tasks selectively without executing the entire playbook.

Tagging Best Practices

1. Tagging Individual Tasks

- name: Install Nginx
  apt:
    name: nginx
    state: present
  tags:
    - install
    - nginx

o Now, you can run:
o ansible-playbook site.yml --tags "install"

2. Tagging Entire Plays

- name: Setup Webserver
  hosts: webservers
  roles:
    - webserver
  tags:
    - web

3. Skipping Tags

ansible-playbook site.yml --skip-tags "install"

4. Tagging Debugging Steps

- name: Debug system information
  debug:
    msg: "This is a debug message"
  tags:
    - never
    - debug

o This ensures it only runs when explicitly specified:
o ansible-playbook site.yml --tags "debug"

Using include_tasks Wisely

include_tasks dynamically includes tasks during runtime, while import_tasks is static.

Best Practices

1. Using include_tasks for Conditional Execution

- name: Install Database
  include_tasks: install_db.yml
  when: db_required | default(false)

o This prevents unnecessary execution.

2. Using include_tasks with Loops

- name: Configure multiple services
  include_tasks: setup_service.yml
  loop:
    - nginx
    - mysql
    - redis
  loop_control:
    loop_var: service_name

o Inside setup_service.yml, you can reference service_name.

3. Combining with Handlers

- name: Restart Service if Config Changes
  include_tasks: restart.yml
  notify: Restart Service

3. Keeping Playbooks Idempotent

Idempotency ensures that running a playbook multiple times produces the same result without
unnecessary changes.

Key Techniques for Idempotency

1. Use state: present Instead of state: latest

- name: Install Nginx
  apt:
    name: nginx
    state: present

o latest may cause unnecessary package updates.

2. Use creates and removes with Commands

- name: Extract archive only if not extracted
  command: tar xvf myfile.tar.gz
  args:
    creates: /path/to/extracted_folder

3. Use changed_when for Custom Commands

- name: Check disk space
  shell: df -h | grep /dev/sda1
  register: disk_output
  changed_when: false

4. Avoid command or shell When Possible

o Prefer using built-in modules like copy, file, and template.

5. Ensure template and copy Modules Are Idempotent

- name: Deploy Configuration
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    owner: root
    group: root
    mode: '0644'
  notify: Restart Nginx

6. Use Handlers to Avoid Unnecessary Restarts

- name: Restart Nginx
  service:
    name: nginx
    state: restarted
  listen: "Restart Nginx"

4. Using assert and fail for Validation

Assertions help prevent incorrect configurations, while fail explicitly stops execution.

Using assert for Validation

1. Ensure Required Variables Are Set

- name: Validate Required Variables
  assert:
    that:
      - my_variable is defined
      - my_variable | length > 0

o Prevents playbooks from running without required variables.

2. Check OS Compatibility

- name: Verify OS is Ubuntu
  assert:
    that: ansible_distribution == "Ubuntu"

3. Validate Configuration Files

- name: Check if Configuration is Valid
  assert:
    that:
      - config_value >= 10
      - config_value <= 100

Using fail for Custom Error Messages

1. Stop Playbook on Critical Conditions

- name: Stop if disk space is low
  fail:
    msg: "Disk space is critically low!"
  when: ansible_mounts[0].size_available < 500000000

2. Ensure a Service is Installed Before Continuing

- name: Fail if MySQL is not installed
  fail:
    msg: "MySQL is not installed!"
  when: "'mysql' not in ansible_facts.packages"

o Note: ansible_facts.packages is only populated after a package_facts task has run.

Conclusion

By following these best practices, you ensure your Ansible automation is efficient,
maintainable, and reliable. These techniques help in structuring projects properly, using tags
and includes effectively, ensuring idempotency, and enforcing validation with assert and fail.


Let's break down each of these advanced Ansible topics thoroughly so you can confidently
answer any interview question about them.

1. Event-Driven Automation with Ansible Rulebooks

What is Event-Driven Automation?

Event-driven automation is a system where predefined rules trigger automation workflows
based on events. Instead of manually running playbooks, actions are executed automatically in
response to events such as infrastructure changes, application alerts, or security incidents.

What are Ansible Rulebooks?


Ansible Rulebooks are part of Ansible Event-Driven Automation (EDA). They define how
Ansible should respond to specific events by processing incoming data and triggering playbooks
or tasks.

Key Components of Ansible Rulebooks

1. Sources – The event sources that generate triggers (e.g., monitoring tools, webhooks,
API calls).
2. Rules – Conditions that determine when an event should trigger an automation action.
3. Actions – The tasks executed when a rule is matched (e.g., running an Ansible
Playbook).

How It Works

1. An event is detected by an event source (e.g., ServiceNow ticket creation, Kubernetes
alert, or security log event).
2. The event is processed by the Ansible Rulebook Engine.
3. If a rule condition matches, an action is executed, such as running a playbook or sending
a notification.

Example: Automating Incident Response with Ansible Rulebooks


- name: Automatically restart a failed service
  hosts: localhost
  sources:
    - name: watch syslog for service failures
      syslog:
        facility: daemon
  rules:
    - name: Restart service on failure
      condition: event.msg contains "Service XYZ failed"
      action:
        run_playbook:
          name: restart_service.yml

Use Cases

 IT Operations: Automatically remediating system failures.
 Security: Blocking IPs when a threat is detected.
 Infrastructure: Scaling Kubernetes pods when CPU usage is high.

Why Use Event-Driven Ansible?

 Improves efficiency by reducing manual intervention.
 Faster response times to incidents.
 Better integration with monitoring and security tools.

2. Using Ansible with ServiceNow, Kubernetes, and Docker

Ansible can automate ServiceNow, Kubernetes, and Docker through modules and plugins,
enabling better DevOps and ITSM workflows.

Ansible & ServiceNow

What is ServiceNow?
ServiceNow is an IT service management (ITSM) platform that helps organizations handle
incidents, changes, and service requests.

How Ansible Integrates with ServiceNow

 Uses the servicenow.itsm module to create, update, and query incidents.
 Can automate change requests and approvals.
 Enables bidirectional integration (Ansible updates ServiceNow, and ServiceNow triggers
Ansible).
Example Playbook: Creating a ServiceNow Incident

- name: Create an incident in ServiceNow
  hosts: localhost
  tasks:
    - name: Create a ServiceNow incident
      servicenow.itsm.incident:
        state: new
        short_description: "Server outage detected"
        description: "The database server is down."
        impact: 1
        urgency: 1
        caller: "admin"
      register: incident

    - debug:
        var: incident

Use Cases

 Automate incident creation for failed servers.
 Auto-close tickets when an issue is resolved.
 Sync configuration management with ServiceNow CMDB.

Ansible & Kubernetes

Why Use Ansible with Kubernetes?

 Simplifies cluster management without needing kubectl commands.
 Automates deployments, scaling, and rolling updates.

Key Ansible Modules for Kubernetes

 k8s: Manages Kubernetes resources (deployments, services, etc.).
 helm: Manages Helm charts for package deployment.
 k8s_auth: Handles Kubernetes authentication.

Example Playbook: Deploying an App to Kubernetes

- name: Deploy an application to Kubernetes
  hosts: localhost
  tasks:
    - name: Create a deployment
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: my-app
          spec:
            replicas: 3
            selector:
              matchLabels:
                app: my-app
            template:
              metadata:
                labels:
                  app: my-app
              spec:
                containers:
                  - name: my-app
                    image: my-app:latest
                    ports:
                      - containerPort: 80

Use Cases

 Deploy applications automatically.
 Scale services based on demand.
 Manage Kubernetes networking (ingress, service discovery).

Ansible & Docker

Why Use Ansible with Docker?

 Automates container creation, management, and deployment.
 Replaces manual Docker CLI commands.

Key Ansible Modules for Docker

 community.docker.docker_container – Manages containers.
 community.docker.docker_image – Handles container images.
 community.docker.docker_network – Manages Docker networks.

Example Playbook: Deploying a Docker Container

- name: Deploy a web server using Docker
  hosts: localhost
  tasks:
    - name: Pull the latest Nginx image
      community.docker.docker_image:
        name: nginx
        source: pull

    - name: Run the Nginx container
      community.docker.docker_container:
        name: nginx_server
        image: nginx
        state: started
        ports:
          - "8080:80"

Use Cases

 Deploy microservices-based applications.
 Automate containerized CI/CD pipelines.
 Manage multi-container environments.

3. Scaling Ansible in Large Environments

Challenges of Scaling Ansible

 Performance bottlenecks when running playbooks on thousands of hosts.
 Concurrency limits with large inventories.
 Network latency in global environments.

Best Practices for Scaling Ansible

1. Using Ansible AWX/Automation Controller

 Ansible AWX (open-source) and Ansible Automation Controller (enterprise version)
provide a UI, role-based access, and API automation.
 Distributes workload across multiple execution nodes.

2. Optimizing Inventory Management

 Use dynamic inventory scripts to pull real-time host lists from AWS, Azure, or VMware.
 Organize large inventories into groups with host_vars and group_vars.
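As an illustration, a dynamic inventory for AWS can be a small YAML file consumed by the amazon.aws.aws_ec2 inventory plugin. A minimal sketch (the file name and the "Role" tag key are assumptions):

```yaml
# inventories/production/aws_ec2.yml
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
keyed_groups:
  # Build groups from the EC2 "Role" tag, e.g. tag_Role_webserver
  - key: tags.Role
    prefix: tag_Role
```

Ansible then resolves hosts at runtime: ansible-inventory -i inventories/production/aws_ec2.yml --list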

3. Parallel Execution & Forks

 Increase parallelism by adjusting the forks setting in ansible.cfg:

[defaults]
forks = 50

 Use Mitogen plugin for faster execution.
4. Efficient Playbook Design

 Minimize unnecessary tasks by using:
o Handlers (run tasks only when needed).
o Async tasks (background execution).
o Check mode (--check) for dry runs.
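The async point above can be sketched as follows: a long-running command is launched in the background and later polled with async_status (the script path is hypothetical):

```yaml
- name: Run long upgrade in background
  ansible.builtin.command: /usr/local/bin/long_upgrade.sh   # hypothetical script
  async: 600   # allow up to 10 minutes
  poll: 0      # return immediately; do not block the play
  register: upgrade_job

- name: Wait for the upgrade to finish
  ansible.builtin.async_status:
    jid: "{{ upgrade_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 20
```

With poll: 0 the play continues to other tasks (or hosts) while the command runs, which is what makes async useful at scale.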

5. Caching Facts to Reduce Overhead

 Enable fact caching in ansible.cfg to avoid gathering facts repeatedly:

[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts

6. Load Balancing with Multiple Control Nodes

 Use Ansible Execution Environments to distribute workloads.
 Implement Message Queue Systems like Redis for task distribution.

Conclusion
By mastering these topics, you can confidently tackle any advanced Ansible interview question.
Here's a quick recap:

 Ansible Rulebooks enable event-driven automation.
 ServiceNow, Kubernetes, and Docker integrations enhance automation across IT and
DevOps.
 Scaling Ansible requires efficient inventory management, parallel execution, and control
node optimization.

