Ansible Quick Guidance On AIX

This document provides guidance on configuring Ansible on AIX systems to manage nodes. It outlines steps to prepare the master node such as installing Ansible, configuring SSH access, and editing the ansible.cfg file. It also discusses preparing managed nodes by installing Python and configuring the inventory file to define host groups. Examples are provided to test connectivity and use modules to manage users and files on remote hosts.

Ansible Quick Guidance on AIX

Environment
master_node:
External IP: 169.48.22.141
Internal IP: 192.168.143.141

managed_node1:
External IP: 169.48.22.138
Internal IP: 192.168.143.138
Preparation
1- Check /etc/ssh/sshd_config on each node and make sure the following settings are in place:
LogLevel DEBUG
PermitRootLogin yes
AuthorizedKeysFile .ssh/authorized_keys
Subsystem sftp /usr/sbin/sftp-server
PasswordAuthentication yes

Restart sshd to apply any changes:
# stopsrc -s sshd
# startsrc -s sshd

2- Install ansible on your master_node using yum:


# yum install ansible
3- Linux and UNIX managed hosts need Python 2 (version 2.6 or later) or Python 3 (version 3.5 or later) installed for most modules to work.
For Red Hat Enterprise Linux 8, you may be able to depend on the platform-python package.
You can also enable and install the python36 application stream (or the python27 application stream):
[root@managed_node1]# yum module install python36
[root@managed_node1]# yum module install python27
[..]

4- Configure master_node ansible.cfg


# mkdir /ansible_project
# cd /ansible_project
# vi ansible.cfg
[defaults]
inventory = ./inventory
remote_user = root
ask_pass = false
roles_path = ./roles
[privilege_escalation]
become = true
#become_method = sudo
become_user = root
become_ask_pass = false
Definitions of ansible.cfg attributes
inventory
Specifies the path to the inventory file.
remote_user
The name of the user to log in as on the managed hosts. If not specified, the current user's name is used.
ask_pass
Whether or not to prompt for an SSH password. Can be false if using SSH public-key authentication.
become
Whether to automatically switch user on the managed host (typically to root) after connecting. This can also be specified by a play.
become_method
How to switch user (typically sudo, which is the default, but su is an option).
become_user
The user to switch to on the managed host (typically root, which is the default).
become_ask_pass
Whether to prompt for a password for your become_method. Defaults to false.
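The privilege-escalation settings above can also be set per play rather than globally in ansible.cfg. A minimal sketch (the play below is illustrative, not from the original environment):

```yaml
# Illustrative play that overrides ansible.cfg privilege escalation
- name: Run tasks with explicit privilege escalation
  hosts: all
  become: true          # switch user after connecting
  become_user: root     # the user to switch to on the managed host
  tasks:
    - name: Show the effective user
      command: id
```

Play-level keywords take precedence over the corresponding ansible.cfg settings for that play only.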
5- Configure master_node inventory [example]
[AIX_HOSTS]
managed_node1
#################EOF#############

managed_node1 must be resolvable (for example, by adding it to the /etc/hosts file):


# host managed_node1
managed_node1 is 192.168.143.138
Make sure the master_node's public key is transferred to all managed nodes (for example with ssh-copy-id), so that the master node can connect to the managed nodes without a password.
Extra notes for inventory file manipulations

- You can define several host groups in your master_node inventory; see the sample below:
[webservers]
web1.example.com
web2.example.com
192.0.2.42

[db-servers]
db1.example.com
db2.example.com
Or nested groups, like the below:
[usa]
washington1.example.com
washington2.example.com
[canada]
ontario01.example.com
ontario02.example.com
[north-america:children]
canada
usa
Or even a range in your master_node inventory:
[usa]
washington[1:2].example.com

[canada]
ontario[01:02].example.com

Note: Two host groups always exist:
• The "all" host group contains every host explicitly listed in the inventory.
• The "ungrouped" host group contains every host explicitly listed in the inventory that is not a member of any other group.
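For reference, the same nested groups can also be expressed in Ansible's YAML inventory format, where the implicit "all" group is written out explicitly (host names taken from the INI samples above):

```yaml
# YAML-format inventory equivalent to the INI samples above
all:
  children:
    north-america:
      children:
        usa:
          hosts:
            washington1.example.com:
            washington2.example.com:
        canada:
          hosts:
            ontario01.example.com:
            ontario02.example.com:
```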
Check and verify ansible master_node
It is always recommended to check the locations of the ansible binaries and create the symbolic links if needed:
# ln -s /opt/freeware/bin/ansible /usr/bin/ansible
# ln -s /opt/freeware/bin/ansible-playbook /usr/bin/ansible-playbook
# ln -s /opt/freeware/bin/ansible-doc /usr/bin/ansible-doc
# ln -s /opt/freeware/bin/ansible-galaxy /usr/bin/ansible-galaxy

# ansible --version
ansible 2.9.10
config file = /ansible_project/ansible.cfg
configured module search path = ['/.ansible/plugins/modules',
'/usr/share/ansible/plugins/modules']
ansible python module location = /opt/freeware/lib/python3.7/site-
packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.9 (default, Sep 14 2020, 06:09:55) [GCC 8.3.0]
Check ansible master_node hosts and attempt ad-hoc commands

# cd /ansible_project
# ansible all --list-hosts
hosts (1):
managed_node1
# ansible AIX_HOSTS --list-hosts
hosts (1):
managed_node1
Modules are the tools that ad hoc commands use to accomplish tasks.
Ansible provides hundreds of modules which do different things.
You can usually find a tested, special-purpose module that does what
you need as part of the standard installation.

# ansible -m ping all


[WARNING]: Platform aix on host managed_node1 is using the
discovered Python interpreter at /usr/bin/python, but future
installation of another Python interpreter could change this. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
managed_node1 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"}
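To avoid the interpreter-discovery warning above, the Python interpreter can be pinned for the AIX hosts, for example in a hypothetical group_vars/AIX_HOSTS.yml file next to the inventory (the path matches the interpreter discovered above):

```yaml
# group_vars/AIX_HOSTS.yml (hypothetical file) - pin the Python interpreter
ansible_python_interpreter: /usr/bin/python
```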

- Use ad-hoc command to retrieve hostnames:


# ansible -m command -a hostname all -o
managed_node1 | CHANGED | rc=0 | (stdout) managed_node1
- Use the "user" module to create a mash user with UID 4000 on all managed hosts:
# ansible -m user -a 'name=mash uid=4000 state=present' all
managed_node1 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"comment": "",
"create_home": true,
"group": 1,
"home": "/home/mash",
"name": "mash",
"shell": "/usr/bin/ksh",
"state": "present",
"system": false,
"uid": 4000
}

- Make sure that the mash user has been created, using the "shell" module:
# ansible -m shell -a 'id mash' all -o
managed_node1 | CHANGED | rc=0 | (stdout) uid=4000(mash)
gid=1(staff)
# ansible -m shell -a 'cat /etc/passwd | grep mash' all -o
managed_node1 | CHANGED | rc=0 | (stdout)
mash:*:4000:1::/home/mash:/usr/bin/ksh

- Checking IP addresses:
# ansible all -m shell -a 'ifconfig -a'
managed_node1 | CHANGED | rc=0 >>
en0:
flags=1e084863,814c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
inet 192.168.143.138 netmask 0xfffffff8 broadcast 192.168.143.143
en1:
flags=1e084863,814c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),LARGESEND,CHAIN>
lo0:
flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
- Copying /etc/hosts to a date-stamped name:
# ansible all -m copy -a "src=/etc/hosts dest=/etc/hosts.$(date +'%m%d')"
managed_node1 | CHANGED => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "bbf3711f44da38c3326bf02a5f4ffabd89cafa85",
"dest": "/etc/hosts.1110", #####<<==========
"gid": 0,
"group": "system",
"md5sum": "b1ff6ce99b6047a18cc47168f71b1c5c",
"mode": "0644",
"owner": "root",
"size": 2101,
"src": "/.ansible/tmp/ansible-tmp-1605015730.7945442-5374244-264973000993393/source",
"state": "file",
"uid": 0
}

# ansible all -m shell -a 'ls -l /etc/hosts.1110'


managed_node1 | CHANGED | rc=0 >>
-rw-r--r-- 1 root system 2101 Nov 10 07:43 /etc/hosts.1110
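The same backup operation can be written as a playbook task instead of an ad-hoc command; a sketch (the pipe lookup for the date suffix is one possible approach, and it runs on the master node):

```yaml
# Illustrative playbook equivalent of the ad-hoc copy command above
- name: Back up /etc/hosts on all managed hosts
  hosts: all
  tasks:
    - name: Copy /etc/hosts to a date-stamped file
      copy:
        src: /etc/hosts
        dest: "/etc/hosts.{{ lookup('pipe', 'date +%m%d') }}"
```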
Ansible Documentation
The ansible-doc -l command lists all modules installed on a system. You can use the ansible-doc command to view the documentation of particular modules by name, and to find information about what arguments the modules take as options. For example, the following command displays documentation for the ping module:

# ansible-doc -l | grep -i ping


[..]
ping Try to connect to host, verify a usable python and return `pong'...
[..]

Example of a full module documentation


# ansible-doc ping
> PING (/opt/freeware/lib/python3.7/site-packages/ansible/modules/system/ping.py)
A trivial test module, this module always returns `pong' on successful contact. It does not make sense in playbooks, but it is useful from `/usr/bin/ansible' to verify the ability to login and that a usable Python is configured. This is NOT ICMP ping, this is just a trivial test module that requires Python on the remote node. For Windows targets, use the [win_ping] module instead. For Network targets, use the [net_ping] module instead.

* This module is maintained by The Ansible Core Team


OPTIONS (= is mandatory):
- data
Data to return for the `ping' return value.
If this parameter is set to `crash', the module will cause an
exception.
[Default: pong]
type: str
SEE ALSO:
* Module net_ping
The official documentation on the net_ping module.

https://docs.ansible.com/ansible/2.9/modules/net_ping_module.html
* Module win_ping
The official documentation on the win_ping module.

https://docs.ansible.com/ansible/2.9/modules/win_ping_module.html
AUTHOR: Ansible Core Team, Michael DeHaan
METADATA:
status:
- stableinterface
supported_by: core
EXAMPLES:
# Test we can logon to 'webservers' and execute python with json lib.
# ansible webservers -m ping

# Example from an Ansible Playbook
- ping:

# Induce an exception to see what happens
- ping:
    data: crash
RETURN VALUES:
ping:
  description: value provided with the data parameter
  returned: success
  type: str
  sample: pong
YAML playbooks

##########################################################
WRITING_PLAY_BOOK_FOR_USER_CREATION
##########################################################
- name: Create user
  hosts: all
  tasks:
    - name: Add the user 'mash' with a bash shell, appending the groups 'staff' and 'security' to the user's groups
      user:
        name: mash
        comment: Ahmed Mashhour
        uid: 1040
        shell: /usr/bin/bash
        groups: staff,security
        append: yes
        password: "{ssha512}06$cQR6AB1peJePClfd$1nPY3QfFkPMWR9iu2WXgxVmM8wOy/qQjBcX9awgwszDRDa7qtQpRB1N7v7iGsnO7.tTDF6you.FiLU2TUK5S.."
<<Where the above password hash is an encryption of 123456, so the password will be 123456>>
#####################RUNNING_THE_PLAY_BOOK##############
# ansible-playbook user_create.yml --syntax-check
playbook: user_create.yml
# ansible-playbook user_create.yml
PLAY [Create user]
**********************************************************
ok: [managed_node1]
TASK [Add the user 'mash' with a bash shell, appending the group 'staff'
and 'security' to the user's groups] ***
changed: [managed_node1]
PLAY RECAP
**********************************************************
managed_node1 : ok=2 changed=1 unreachable=0 failed=0
skipped=0 rescued=0 ignored=0

managed_node1# grep -p "mash:" /etc/security/passwd | grep password
password = {ssha512}06$cQR6AB1peJePClfd$1nPY3QfFkPMWR9iu2WXgxVmM8wOy/qQjBcX9awgwszDRDa7qtQpRB1N7v7iGsnO7.tTDF6you.FiLU2TUK5S..
##########################################################
WRITING_PLAY_BOOK_FOR_CREATING_VOLUME_GROUP
##########################################################
- name: Create a volume group on a variable disk name
  hosts: all
  tasks:
    - name: Create a backupvg volume group
      aix_lvg:
        vg: backupvg
        pp_size: 128
        vg_type: scalable
        pvs: "{{ hdisk }}"
        state: present
#####################RUNNING_THE_PLAY_BOOK##############
# ansible-playbook mkvg.yml --syntax-check
playbook: mkvg.yml
# ansible-playbook mkvg.yml --extra-vars "hdisk=hdisk0"
PLAY [Create a volume group on a variable disk name]
ok: [managed_node1]
TASK [Create a backupvg volume group]
changed: [managed_node1]
PLAY RECAP
managed_node1 : ok=2 changed=1 unreachable=0 failed=0
skipped=0 rescued=0 ignored=0
##########################################################
WRITING_PLAY_BOOK_FOR_CREATING_FILESYSTEM
##########################################################
- name: Create a filesystem on a variable VG name
  hosts: all
  tasks:
    - name: create /mksysb filesystem
      aix_filesystem:
        filesystem: /mksysb
        size: 1G
        state: present
        vg: "{{ vg }}"
#####################RUNNING_THE_PLAY_BOOK##############
# ansible-playbook mksysb_fs.yml --syntax-check
playbook: mksysb_fs.yml
# ansible-playbook mksysb_fs.yml --extra-vars "vg=backupvg"
PLAY [Create a filesystem on a variable VG name]
ok: [managed_node1]
TASK [create /mksysb filesystem]
changed: [managed_node1]
managed_node1 : ok=2 changed=1 unreachable=0 failed=0
skipped=0 rescued=0 ignored=0
##########################################################
WRITING_PLAY_BOOK_FOR_MOUNTING_FILESYSTEM
##########################################################
- name: Mount a filesystem
  hosts: all
  tasks:
    - name: Mount /mksysb filesystem
      aix_filesystem:
        filesystem: "{{ fs_name }}"
        state: mounted
#####################RUNNING_THE_PLAY_BOOK##############
# ansible-playbook mount_fs.yml --syntax-check
playbook: mount_fs.yml

# ansible-playbook mount_fs.yml --extra-vars "fs_name=/mksysb"


PLAY [Mount a filesystem]
ok: [managed_node1]
TASK [Mount /mksysb filesystem]
changed: [managed_node1]
PLAY RECAP
managed_node1 : ok=2 changed=1 unreachable=0 failed=0
skipped=0 rescued=0 ignored=0
##########################################################
WRITING_PLAY_BOOK_FOR_TAKING_MKSYSB
##########################################################
- name: Take mksysb backup
  hosts: all
  tasks:
    - name: Run mksysb backup and store it under /mksysb
      mksysb:
        name: myserver.mksysb
        storage_path: /mksysb
        # exclude_files: yes
        exclude_wpar_files: yes

#####################RUNNING_THE_PLAY_BOOK##############
# ansible-playbook mksysb1.yml --syntax-check
playbook: mksysb1.yml

PLAY [Take mksysb backup]


ok: [managed_node1]
TASK [Run mksysb backup and store it under /mksysb]
changed: [managed_node1]
managed_node1 : ok=2 changed=1 unreachable=0 failed=0
skipped=0 rescued=0 ignored=0
##########################################################
WRITING_PLAY_BOOK_FOR_TAKING_MKSYSB_NAMED_BY_HOSTNAME
##########################################################
- name: Take mksysb backup
  hosts: all
  tasks:
    - name: run a hostname command
      shell: hostname
      register: host_name
    - name: Run mksysb backup and store it under /mksysb
      mksysb:
        name: "{{ host_name.stdout }}.mksysb"
        storage_path: /mksysb
        # exclude_files: yes
        exclude_wpar_files: yes
#####################RUNNING_THE_PLAY_BOOK##############
# ansible-playbook mksysb2.yml --syntax-check
playbook: mksysb2.yml
# ansible-playbook mksysb2.yml
PLAY [Take mksysb backup]
[..]
managed_node1 : ok=2 changed=1 unreachable=0 failed=0
skipped=0 rescued=0 ignored=0
##########################################################
WRITING_PLAY_BOOK_FOR_VULNERABILITY_FIXES
##########################################################
- name: FLRT
  hosts: all
  tasks:
    - name: Download patches for security vulnerabilities
      flrtvc:
        path: /mksysb
        verbose: yes
        apar: sec
        # download_only: yes
#####################RUNNING_THE_PLAY_BOOK##############
# ansible-playbook flrt.yml --syntax-check
playbook: flrt.yml

# ansible-playbook flrt.yml
PLAY [FLRT]
ok: [managed_node1]
ok: [managed_node1]
[..]
managed_node1 : ok=2 changed=0 unreachable=0 failed=0
skipped=0 rescued=0 ignored=0
##########################################################
WRITING_PLAY_BOOK_FOR_MOUNTING_NFS
##########################################################
- name: Mount a file system
  hosts: all
  tasks:
    - name: Mount NFS share
      mount:
        node: 192.168.143.141
        mount_dir: /usr/sys/inst.images/installp/ppc
        mount_over_dir: /my_nfs_dir

#####################RUNNING_THE_PLAY_BOOK##############
# ansible-playbook nfs_mount.yml --syntax-check
playbook: nfs_mount.yml

# ansible-playbook nfs_mount.yml
PLAY [Mount a file system]
ok: [managed_node1]
TASK [Mount NFS share]
changed: [managed_node1]
managed_node1 : ok=2 changed=1 unreachable=0 failed=0
skipped=0 rescued=0 ignored=0
##########################################################
WRITING_PLAY_BOOK_FOR_INSTALLP
##########################################################
- name: Install filesets
  hosts: all
  tasks:
    - name: Install selected Java filesets and expand file systems if necessary
      installp:
        extend_fs: yes
        agree_licenses: yes
        # dependencies: yes
        device: /my_nfs_dir
        # updates_only: yes
        force: yes
        install_list: Java8_64.jre,Java8_64.sdk
#####################RUNNING_THE_PLAY_BOOK##############
# ansible-playbook installp.yml --syntax-check
playbook: installp.yml
# ansible-playbook installp.yml
[..]
managed_node1 : ok=2 changed=0 unreachable=0 failed=0
skipped=0 rescued=0 ignored=0
Dealing with collective YAML playbooks
You can have one collective YAML file, like the following example:
# cat collective1.yml
###################COLLECTIVE1.YML_FILE_START##############
- name: Collective1 Playbook
  hosts: all
  tasks:
    - name: Add the user 'mash' with a bash shell, appending the groups 'staff' and 'security' to the user's groups
      user:
        name: mash
        comment: Ahmed Mashhour
        uid: 1040
        shell: /usr/bin/bash
        groups: staff,security
        append: yes
        password: "{ssha512}06$cQR6AB1peJePClfd$1nPY3QfFkPMWR9iu2WXgxVmM8wOy/qQjBcX9awgwszDRDa7qtQpRB1N7v7iGsnO7.tTDF6you.FiLU2TUK5S.."
    - name: Create a backupvg volume group
      aix_lvg:
        vg: backupvg
        pp_size: 128
        vg_type: scalable
        pvs: "{{ hdisk }}"
        state: present
    - name: Mount NFS share
      mount:
        node: 192.168.143.141
        mount_dir: /usr/sys/inst.images/installp/ppc
        mount_over_dir: /my_nfs_dir
    - name: Install selected Java filesets and expand file systems if necessary
      installp:
        extend_fs: yes
        agree_licenses: yes
        device: /my_nfs_dir
        force: yes
        install_list: Java8_64.jre,Java8_64.sdk
###################COLLECTIVE1.YML_FILE_END################
You may run the collective1.yml file, which will run all the tasks included. Moreover, you can run the playbook starting from a certain task:
# ansible-playbook collective1.yml --start-at-task="Mount NFS share"
## The above will start execution of the collective1.yml playbook from the "Mount NFS share" task onward, until the end of the file.

- If you want to run only a certain task inside a playbook, you can use tags. See the example below:

# cat collective2.yml
##################COLLECTIVE2.YML_FILE_START###############
- name: Collective2 Playbook
  hosts: all
  tasks:
    - name: Add the user 'mash' with a bash shell, appending the groups 'staff' and 'security' to the user's groups
      user:
        name: mash
        comment: Ahmed Mashhour
        uid: 1040
        shell: /usr/bin/bash
        groups: staff,security
        append: yes
        password: "{ssha512}06$cQR6AB1peJePClfd$1nPY3QfFkPMWR9iu2WXgxVmM8wOy/qQjBcX9awgwszDRDa7qtQpRB1N7v7iGsnO7.tTDF6you.FiLU2TUK5S.."
      tags: mash user create
    - name: Create a backupvg volume group
      aix_lvg:
        vg: backupvg
        pp_size: 128
        vg_type: scalable
        pvs: "{{ hdisk }}"
        state: present
      tags: vgcreate
    - name: Mount NFS share
      mount:
        node: 192.168.143.141
        mount_dir: /usr/sys/inst.images/installp/ppc
        mount_over_dir: /my_nfs_dir
      tags: nfsmount
    - name: Install selected Java filesets and expand file systems if necessary
      installp:
        extend_fs: yes
        agree_licenses: yes
        # dependencies: yes
        device: /my_nfs_dir
        # updates_only: yes
        force: yes
        install_list: Java8_64.jre,Java8_64.sdk
      tags: java install
####################COLLECTIVE2.YML_FILE_END###############

# ansible-playbook collective2.yml --tags="mash user create"


The above will read the collective2.yml playbook and will run only the user-creation task, which has the tag "mash user create".
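Note that a tag containing spaces, such as "mash user create", is treated as a single tag. To keep tags unambiguous, they can also be written in YAML list form; a sketch of the user task trimmed to the relevant keys (the tag names here are illustrative):

```yaml
# Illustrative task using list-form tags instead of a single string
- name: Add the user 'mash'
  user:
    name: mash
    state: present
  tags:
    - users
    - mash_create
```

Multiple tags can then be selected with a comma-separated list, e.g. --tags=users,mash_create, or excluded with --skip-tags.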
Module collections
You can also download extra modules for managing IBM AIX from
ansible-galaxy:
- Download manually a tarball from:
https://ansible-galaxy.s3.amazonaws.com/artifact/60/5bf851b950e117b28fcb239c5b5b2be13653a1b4bf82af34fe7c6ca0191eec?response-content-disposition=attachment%3B%20filename%3Dibm-power_aix-1.1.1.tar.gz&AWSAccessKeyId=AKIAJZZ23S6M5JUH2EOA&Signature=wkjHAOhDo6%2F3FNzUPkS4W9UF%2BIU%3D&Expires=1605024808

- Or just install them directly from your AIX server:
# ansible-galaxy collection install ibm.power_aix
Process install dependency map
Starting collection install process
Installing 'ibm.power_aix:1.1.1' to
'/.ansible/collections/ansible_collections/ibm/power_aix'
Bear in mind that sometimes a module does not exist for something you want to do. As an end user, you can also write your own private modules or get modules from a third party.

Ansible searches for custom modules in the location specified by the ANSIBLE_LIBRARY environment variable or, if that is not set, by a library keyword in the current Ansible configuration file. Ansible also searches for custom modules in the ./library directory relative to the playbook currently being run.
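For example, assuming a hypothetical custom module saved as ./library/my_aix_check.py next to the playbook, it can be invoked like any built-in module (the module name and its parameter are illustrative):

```yaml
# Illustrative playbook calling a hypothetical custom module from ./library
- name: Use a custom module
  hosts: all
  tasks:
    - name: Call the hypothetical my_aix_check module
      my_aix_check:
        level: full
```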
Thank you very much for taking the time to read through this guide. I
hope it has been not only helpful but an easy read. If you have questions
or you feel you found any inconsistencies, please don’t hesitate to
contact me at:
[email protected]
Or
[email protected]

Ahmed (Mash) Mashhour
