Red Hat Enterprise Linux 8: Automating System Administration by Using RHEL System Roles
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
The Red Hat Enterprise Linux (RHEL) system roles are a collection of Ansible roles, modules, and
playbooks that help automate the consistent and repeatable administration of RHEL systems. With
RHEL system roles, you can efficiently manage large inventories of systems by running
configuration playbooks from a single system.
Table of Contents

PROVIDING FEEDBACK ON RED HAT DOCUMENTATION

CHAPTER 1. INTRODUCTION TO RHEL SYSTEM ROLES

CHAPTER 2. PREPARING A CONTROL NODE AND MANAGED NODES TO USE RHEL SYSTEM ROLES
    2.1. PREPARING A CONTROL NODE ON RHEL 8
    2.2. PREPARING A MANAGED NODE

CHAPTER 3. ANSIBLE VAULT

CHAPTER 4. ANSIBLE IPMI MODULES IN RHEL
    4.1. THE RHEL_MGMT COLLECTION
    4.2. USING THE IPMI_BOOT MODULE
    4.3. USING THE IPMI_POWER MODULE

CHAPTER 5. THE REDFISH MODULES IN RHEL
    5.1. THE REDFISH MODULES
    5.2. REDFISH MODULES PARAMETERS
    5.3. USING THE REDFISH_INFO MODULE
    5.4. USING THE REDFISH_COMMAND MODULE
    5.5. USING THE REDFISH_CONFIG MODULE

CHAPTER 6. INTEGRATING RHEL SYSTEMS INTO AD DIRECTLY BY USING THE RHEL SYSTEM ROLE
    6.1. THE AD_INTEGRATION RHEL SYSTEM ROLE
    6.2. CONNECTING A RHEL SYSTEM DIRECTLY TO AD BY USING THE AD_INTEGRATION RHEL SYSTEM ROLE

CHAPTER 7. REQUESTING CERTIFICATES BY USING THE RHEL SYSTEM ROLE
    7.1. THE CERTIFICATE RHEL SYSTEM ROLE
    7.2. REQUESTING A NEW SELF-SIGNED CERTIFICATE BY USING THE CERTIFICATE RHEL SYSTEM ROLE
    7.3. REQUESTING A NEW CERTIFICATE FROM IDM CA BY USING THE CERTIFICATE RHEL SYSTEM ROLE
    7.4. SPECIFYING COMMANDS TO RUN BEFORE OR AFTER CERTIFICATE ISSUANCE BY USING THE CERTIFICATE RHEL SYSTEM ROLE

CHAPTER 8. INSTALLING AND CONFIGURING WEB CONSOLE BY USING THE RHEL SYSTEM ROLE
    8.1. INSTALLING THE WEB CONSOLE BY USING THE COCKPIT RHEL SYSTEM ROLE

CHAPTER 9. SETTING A CUSTOM CRYPTOGRAPHIC POLICY BY USING THE RHEL SYSTEM ROLE
    9.1. SETTING A CUSTOM CRYPTOGRAPHIC POLICY BY USING THE CRYPTO_POLICIES RHEL SYSTEM ROLE

CHAPTER 10. CONFIGURING FIREWALLD BY USING THE RHEL SYSTEM ROLE
    10.1. INTRODUCTION TO THE FIREWALL RHEL SYSTEM ROLE
    10.2. RESETTING THE FIREWALLD SETTINGS BY USING THE FIREWALL RHEL SYSTEM ROLE
    10.3. FORWARDING INCOMING TRAFFIC IN FIREWALLD FROM ONE LOCAL PORT TO A DIFFERENT LOCAL PORT BY USING THE FIREWALL RHEL SYSTEM ROLE
    10.4. MANAGING PORTS IN FIREWALLD BY USING THE FIREWALL RHEL SYSTEM ROLE
    10.5. CONFIGURING A FIREWALLD DMZ ZONE BY USING THE FIREWALL RHEL SYSTEM ROLE

CHAPTER 11. CONFIGURING A HIGH-AVAILABILITY CLUSTER BY USING THE RHEL SYSTEM ROLE
    11.1. VARIABLES OF THE HA_CLUSTER RHEL SYSTEM ROLE
    11.2. SPECIFYING AN INVENTORY FOR THE HA_CLUSTER RHEL SYSTEM ROLE
        11.2.1. Configuring node names and addresses in an inventory

CHAPTER 12. CONFIGURING THE SYSTEMD JOURNAL BY USING THE RHEL SYSTEM ROLE
    12.1. CONFIGURING PERSISTENT LOGGING BY USING THE JOURNALD RHEL SYSTEM ROLE

CHAPTER 13. CONFIGURING AUTOMATIC CRASH DUMPS BY USING THE RHEL SYSTEM ROLE
    13.1. CONFIGURING THE KERNEL CRASH DUMPING MECHANISM BY USING THE KDUMP RHEL SYSTEM ROLE

CHAPTER 14. CONFIGURING KERNEL PARAMETERS PERMANENTLY BY USING THE RHEL SYSTEM ROLE
    14.1. INTRODUCTION TO THE KERNEL_SETTINGS RHEL SYSTEM ROLE
    14.2. APPLYING SELECTED KERNEL PARAMETERS BY USING THE KERNEL_SETTINGS RHEL SYSTEM ROLE

CHAPTER 15. CONFIGURING LOGGING BY USING THE RHEL SYSTEM ROLE
    15.1. THE LOGGING RHEL SYSTEM ROLE
    15.2. APPLYING A LOCAL LOGGING RHEL SYSTEM ROLE
    15.3. FILTERING LOGS IN A LOCAL LOGGING RHEL SYSTEM ROLE
    15.4. APPLYING A REMOTE LOGGING SOLUTION BY USING THE LOGGING RHEL SYSTEM ROLE
    15.5. USING THE LOGGING RHEL SYSTEM ROLE WITH TLS
        15.5.1. Configuring client logging with TLS
        15.5.2. Configuring server logging with TLS
    15.6. USING THE LOGGING RHEL SYSTEM ROLES WITH RELP
        15.6.1. Configuring client logging with RELP
        15.6.2. Configuring server logging with RELP

CHAPTER 16. MONITORING PERFORMANCE BY USING THE RHEL SYSTEM ROLE
    16.1. INTRODUCTION TO THE METRICS RHEL SYSTEM ROLE
    16.2. USING THE METRICS RHEL SYSTEM ROLE TO MONITOR YOUR LOCAL SYSTEM WITH VISUALIZATION
    16.3. USING THE METRICS RHEL SYSTEM ROLE TO SET UP A FLEET OF INDIVIDUAL SYSTEMS TO MONITOR THEMSELVES
    16.4. USING THE METRICS RHEL SYSTEM ROLE TO MONITOR A FLEET OF MACHINES CENTRALLY USING YOUR LOCAL MACHINE
    16.5. SETTING UP AUTHENTICATION WHILE MONITORING A SYSTEM BY USING THE METRICS RHEL SYSTEM ROLE
    16.6. USING THE METRICS RHEL SYSTEM ROLE TO CONFIGURE AND ENABLE METRICS COLLECTION FOR SQL SERVER

CHAPTER 17. CONFIGURING MICROSOFT SQL SERVER BY USING THE ANSIBLE SYSTEM ROLES
    17.1. INSTALLING AND CONFIGURING SQL SERVER WITH AN EXISTING TLS CERTIFICATE BY USING THE MICROSOFT.SQL.SERVER ANSIBLE SYSTEM ROLE
    17.2. INSTALLING AND CONFIGURING SQL SERVER WITH A TLS CERTIFICATE ISSUED FROM IDM BY USING THE MICROSOFT.SQL.SERVER ANSIBLE SYSTEM ROLE
    17.3. INSTALLING AND CONFIGURING SQL SERVER WITH CUSTOM STORAGE PATHS BY USING THE MICROSOFT.SQL.SERVER ANSIBLE SYSTEM ROLE
    17.4. INSTALLING AND CONFIGURING SQL SERVER WITH AD INTEGRATION BY USING THE MICROSOFT.SQL.SERVER ANSIBLE SYSTEM ROLE

CHAPTER 18. CONFIGURING NBDE BY USING RHEL SYSTEM ROLES
    18.1. INTRODUCTION TO THE NBDE_CLIENT AND NBDE_SERVER RHEL SYSTEM ROLES (CLEVIS AND TANG)
    18.2. USING THE NBDE_SERVER RHEL SYSTEM ROLE FOR SETTING UP MULTIPLE TANG SERVERS
    18.3. SETTING UP MULTIPLE CLEVIS CLIENTS BY USING THE NBDE_CLIENT RHEL SYSTEM ROLE

CHAPTER 19. CONFIGURING NETWORK SETTINGS BY USING THE RHEL SYSTEM ROLE
    19.1. CONFIGURING AN ETHERNET CONNECTION WITH A STATIC IP ADDRESS BY USING THE NETWORK RHEL SYSTEM ROLE WITH AN INTERFACE NAME
    19.2. CONFIGURING AN ETHERNET CONNECTION WITH A STATIC IP ADDRESS BY USING THE NETWORK RHEL SYSTEM ROLE WITH A DEVICE PATH
    19.3. CONFIGURING AN ETHERNET CONNECTION WITH A DYNAMIC IP ADDRESS BY USING THE NETWORK RHEL SYSTEM ROLE WITH AN INTERFACE NAME
    19.4. CONFIGURING AN ETHERNET CONNECTION WITH A DYNAMIC IP ADDRESS BY USING THE NETWORK RHEL SYSTEM ROLE WITH A DEVICE PATH
    19.5. CONFIGURING VLAN TAGGING BY USING THE NETWORK RHEL SYSTEM ROLE
    19.6. CONFIGURING A NETWORK BRIDGE BY USING THE NETWORK RHEL SYSTEM ROLE
    19.7. CONFIGURING A NETWORK BOND BY USING THE NETWORK RHEL SYSTEM ROLE
    19.8. CONFIGURING AN IPOIB CONNECTION BY USING THE NETWORK RHEL SYSTEM ROLE
    19.9. ROUTING TRAFFIC FROM A SPECIFIC SUBNET TO A DIFFERENT DEFAULT GATEWAY BY USING THE NETWORK RHEL SYSTEM ROLE
    19.10. CONFIGURING A STATIC ETHERNET CONNECTION WITH 802.1X NETWORK AUTHENTICATION BY USING THE NETWORK RHEL SYSTEM ROLE
    19.11. SETTING THE DEFAULT GATEWAY ON AN EXISTING CONNECTION BY USING THE NETWORK RHEL SYSTEM ROLE
    19.12. CONFIGURING A STATIC ROUTE BY USING THE NETWORK RHEL SYSTEM ROLE
    19.13. CONFIGURING AN ETHTOOL OFFLOAD FEATURE BY USING THE NETWORK RHEL SYSTEM ROLE
    19.14. CONFIGURING AN ETHTOOL COALESCE SETTINGS BY USING THE NETWORK RHEL SYSTEM ROLE
    19.15. INCREASING THE RING BUFFER SIZE TO REDUCE A HIGH PACKET DROP RATE BY USING THE NETWORK RHEL SYSTEM ROLE
    19.16. NETWORK STATES FOR THE NETWORK RHEL SYSTEM ROLE

CHAPTER 20. MANAGING CONTAINERS BY USING THE PODMAN RHEL SYSTEM ROLE
    20.1. CREATING A ROOTLESS CONTAINER WITH BIND MOUNT
    20.2. CREATING A ROOTFUL CONTAINER WITH PODMAN VOLUME
    20.3. CREATING A QUADLET APPLICATION WITH SECRETS

CHAPTER 21. CONFIGURING POSTFIX MTA BY USING THE RHEL SYSTEM ROLE
    21.1. USING THE POSTFIX RHEL SYSTEM ROLE TO AUTOMATE BASIC POSTFIX MTA ADMINISTRATION

CHAPTER 22. INSTALLING AND CONFIGURING POSTGRESQL BY USING THE RHEL SYSTEM ROLE
    22.1. INTRODUCTION TO THE POSTGRESQL RHEL SYSTEM ROLE
    22.2. CONFIGURING THE POSTGRESQL SERVER BY USING THE POSTGRESQL RHEL SYSTEM ROLE

CHAPTER 23. REGISTERING THE SYSTEM BY USING THE RHEL SYSTEM ROLE
    23.1. INTRODUCTION TO THE RHC RHEL SYSTEM ROLE
    23.2. REGISTERING A SYSTEM BY USING THE RHC RHEL SYSTEM ROLE
    23.3. REGISTERING A SYSTEM WITH SATELLITE BY USING THE RHC RHEL SYSTEM ROLE
    23.4. DISABLING THE CONNECTION TO INSIGHTS AFTER THE REGISTRATION BY USING THE RHC RHEL SYSTEM ROLE
    23.5. ENABLING REPOSITORIES BY USING THE RHC RHEL SYSTEM ROLE
    23.6. SETTING RELEASE VERSIONS BY USING THE RHC RHEL SYSTEM ROLE
    23.7. USING A PROXY SERVER WHEN REGISTERING THE HOST BY USING THE RHC RHEL SYSTEM ROLE
    23.8. DISABLING AUTO UPDATES OF INSIGHTS RULES BY USING THE RHC RHEL SYSTEM ROLE
    23.9. DISABLING INSIGHTS REMEDIATIONS BY USING THE RHC RHEL SYSTEM ROLE
    23.10. CONFIGURING INSIGHTS TAGS BY USING THE RHC RHEL SYSTEM ROLE
    23.11. UNREGISTERING A SYSTEM BY USING THE RHC RHEL SYSTEM ROLE

CHAPTER 24. CONFIGURING SELINUX BY USING THE RHEL SYSTEM ROLE
    24.1. INTRODUCTION TO THE SELINUX RHEL SYSTEM ROLE
    24.2. USING THE SELINUX RHEL SYSTEM ROLE TO APPLY SELINUX SETTINGS ON MULTIPLE SYSTEMS
    24.3. MANAGING PORTS BY USING THE SELINUX RHEL SYSTEM ROLE

CHAPTER 25. RESTRICTING THE EXECUTION OF APPLICATIONS BY USING THE FAPOLICYD RHEL SYSTEM ROLE
    25.1. PREVENTING USERS FROM EXECUTING UNTRUSTWORTHY CODE BY USING THE FAPOLICYD RHEL SYSTEM ROLE

CHAPTER 26. CONFIGURING SECURE COMMUNICATION BY USING RHEL SYSTEM ROLES
    26.1. VARIABLES OF THE SSHD RHEL SYSTEM ROLE
    26.2. CONFIGURING OPENSSH SERVERS BY USING THE SSHD RHEL SYSTEM ROLE
    26.3. USING THE SSHD RHEL SYSTEM ROLE FOR NON-EXCLUSIVE CONFIGURATION
    26.4. OVERRIDING THE SYSTEM-WIDE CRYPTOGRAPHIC POLICY ON AN SSH SERVER BY USING THE SSHD RHEL SYSTEM ROLE
    26.5. VARIABLES OF THE SSH RHEL SYSTEM ROLE
    26.6. CONFIGURING OPENSSH CLIENTS BY USING THE SSH RHEL SYSTEM ROLE

CHAPTER 27. MANAGING LOCAL STORAGE BY USING THE RHEL SYSTEM ROLE
    27.1. INTRODUCTION TO THE STORAGE RHEL SYSTEM ROLE
    27.2. CREATING AN XFS FILE SYSTEM ON A BLOCK DEVICE BY USING THE STORAGE RHEL SYSTEM ROLE
    27.3. PERSISTENTLY MOUNTING A FILE SYSTEM BY USING THE STORAGE RHEL SYSTEM ROLE
    27.4. MANAGING LOGICAL VOLUMES BY USING THE STORAGE RHEL SYSTEM ROLE
    27.5. ENABLING ONLINE BLOCK DISCARD BY USING THE STORAGE RHEL SYSTEM ROLE
    27.6. CREATING AND MOUNTING AN EXT4 FILE SYSTEM BY USING THE STORAGE RHEL SYSTEM ROLE
    27.7. CREATING AND MOUNTING AN EXT3 FILE SYSTEM BY USING THE STORAGE RHEL SYSTEM ROLE
    27.8. RESIZING AN EXISTING FILE SYSTEM ON LVM BY USING THE STORAGE RHEL SYSTEM ROLE
    27.9. CREATING A SWAP VOLUME BY USING THE STORAGE RHEL SYSTEM ROLE
    27.10. CONFIGURING A RAID VOLUME BY USING THE STORAGE RHEL SYSTEM ROLE
    27.11. CONFIGURING AN LVM POOL WITH RAID BY USING THE STORAGE RHEL SYSTEM ROLE
    27.12. CONFIGURING A STRIPE SIZE FOR RAID LVM VOLUMES BY USING THE STORAGE RHEL SYSTEM ROLE
    27.13. COMPRESSING AND DEDUPLICATING A VDO VOLUME ON LVM BY USING THE STORAGE RHEL SYSTEM ROLE
    27.14. CREATING A LUKS2 ENCRYPTED VOLUME BY USING THE STORAGE RHEL SYSTEM ROLE
    27.15. EXPRESSING POOL VOLUME SIZES AS PERCENTAGE BY USING THE STORAGE RHEL SYSTEM ROLE

CHAPTER 28. MANAGING SYSTEMD UNITS BY USING THE RHEL SYSTEM ROLE
    28.1. MANAGING SERVICES BY USING THE SYSTEMD RHEL SYSTEM ROLE
    28.2. DEPLOYING SYSTEMD DROP-IN FILES BY USING THE SYSTEMD RHEL SYSTEM ROLE
    28.3. DEPLOYING SYSTEMD UNITS BY USING THE SYSTEMD RHEL SYSTEM ROLE

CHAPTER 29. CONFIGURING TIME SYNCHRONIZATION BY USING THE RHEL SYSTEM ROLE
    29.1. CONFIGURING TIME SYNCHRONIZATION OVER NTP BY USING THE TIMESYNC RHEL SYSTEM ROLE
    29.2. CONFIGURING TIME SYNCHRONIZATION OVER NTP WITH NTS BY USING THE TIMESYNC RHEL SYSTEM ROLE

CHAPTER 30. CONFIGURING A SYSTEM FOR SESSION RECORDING BY USING THE RHEL SYSTEM ROLE
    30.1. THE TLOG RHEL SYSTEM ROLE
    30.2. COMPONENTS AND PARAMETERS OF THE TLOG RHEL SYSTEM ROLE
    30.3. DEPLOYING THE TLOG RHEL SYSTEM ROLE
    30.4. DEPLOYING THE TLOG RHEL SYSTEM ROLE FOR EXCLUDING LISTS OF GROUPS OR USERS

CHAPTER 31. CONFIGURING VPN CONNECTIONS WITH IPSEC BY USING THE RHEL SYSTEM ROLE
    31.1. CREATING A HOST-TO-HOST VPN WITH IPSEC BY USING THE VPN RHEL SYSTEM ROLE
    31.2. CREATING AN OPPORTUNISTIC MESH VPN CONNECTION WITH IPSEC BY USING THE VPN RHEL SYSTEM ROLE
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
4. Enter your suggestion for improvement in the Description field. Include links to the relevant
parts of the documentation.
CHAPTER 1. INTRODUCTION TO RHEL SYSTEM ROLES
Control node
A control node is the system from which you run Ansible commands and playbooks. Your control
node can be an Ansible Automation Platform, Red Hat Satellite, or a RHEL 9, 8, or 7 host. For more
information, see Preparing a control node on RHEL 8 .
Managed node
Managed nodes are the servers and network devices that you manage with Ansible. Managed nodes
are also sometimes called hosts. Ansible does not have to be installed on managed nodes. For more
information, see Preparing a managed node .
Ansible playbook
In a playbook, you define the configuration you want to achieve on your managed nodes or a set of
steps for the system on the managed node to perform. Playbooks are Ansible’s configuration,
deployment, and orchestration language.
Inventory
In an inventory file, you list the managed nodes and specify information such as IP address for each
managed node. In the inventory, you can also organize the managed nodes by creating and nesting
groups for easier scaling. An inventory file is also sometimes called a hostfile.
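For example, a small INI-format inventory that nests two groups under a parent group could look like this (the host and group names here are placeholders):

[webservers]
managed-node-01.example.com

[databases]
managed-node-02.example.com

[datacenter:children]
webservers
databases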
Role name | Description | Documentation
certificate | Certificate Issuance and Renewal | Requesting certificates by using RHEL system roles
cockpit | Web console | Installing and configuring web console with the cockpit RHEL system role
microsoft.sql.server | Microsoft SQL Server | Configuring Microsoft SQL Server by using the microsoft.sql.server Ansible role
nbde_client | Network Bound Disk Encryption client | Using the nbde_client and nbde_server system roles
nbde_server | Network Bound Disk Encryption server | Using the nbde_client and nbde_server system roles
Additional resources
/usr/share/ansible/roles/rhel-system-roles.<role_name>/README.md file
/usr/share/doc/rhel-system-roles/<role_name>/ directory
CHAPTER 2. PREPARING A CONTROL NODE AND MANAGED NODES TO USE RHEL SYSTEM ROLES
2.1. Preparing a control node on RHEL 8
Prerequisites
RHEL 8.6 or later is installed. For more information about installing RHEL, see Interactively
installing RHEL from installation media.
NOTE
In RHEL 8.5 and earlier versions, Ansible packages were provided through Ansible
Engine instead of Ansible Core, and with a different level of support. Do not use
Ansible Engine because the packages might not be compatible with Ansible
automation content in RHEL 8.6 and later. For more information, see Scope of
support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and
later AppStream repositories.
Procedure
[root@control-node]# su - ansible
[ansible@control-node]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ansible/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): <password>
Enter same passphrase again: <password>
...
4. Optional: To prevent Ansible from prompting you for the SSH key password each time you
establish a connection, configure an SSH agent.
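One way to do this, for example, is to start an agent for the current shell session and add the key generated above:

$ eval $(ssh-agent)
$ ssh-add ~/.ssh/id_rsa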
[defaults]
inventory = /home/ansible/inventory
remote_user = ansible
[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = True
NOTE
Settings in the ~/.ansible.cfg file have a higher priority and override settings
from the global /etc/ansible/ansible.cfg file.
Uses the account set in the remote_user parameter when it establishes SSH connections to
managed nodes.
Uses the sudo utility to execute tasks on managed nodes as the root user.
Prompts for the sudo password of the remote user every time you apply a playbook. This is
recommended for security reasons.
6. Create an ~/inventory file in INI or YAML format that lists the hostnames of managed hosts.
You can also define groups of hosts in the inventory file. For example, the following is an
inventory file in the INI format with three hosts and one host group named US:
managed-node-01.example.com
[US]
managed-node-02.example.com ansible_host=192.0.2.100
managed-node-03.example.com
Note that the control node must be able to resolve the hostnames. If the DNS server cannot
resolve certain hostnames, add the ansible_host parameter next to the host entry to specify its
IP address.
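For reference, the same inventory expressed in YAML format would look like this (a sketch of the equivalent structure):

all:
  hosts:
    managed-node-01.example.com:
  children:
    US:
      hosts:
        managed-node-02.example.com:
          ansible_host: 192.0.2.100
        managed-node-03.example.com: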
On Ansible Automation Platform, perform the following steps as the ansible user:
i. Define Red Hat automation hub as the primary source for content in the ~/.ansible.cfg
file.
ii. Install the redhat.rhel_system_roles collection from Red Hat automation hub:
$ ansible-galaxy collection install redhat.rhel_system_roles
Next step
Prepare the managed nodes. For more information, see Preparing a managed node .
Additional resources
Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later
AppStream repositories
How to register and subscribe a system to the Red Hat Customer Portal using subscription-
manager
2.2. Preparing a managed node
Prerequisites
You prepared the control node. For more information, see Preparing a control node on RHEL 8 .
IMPORTANT
Direct SSH access as the root user is a security risk. To reduce this risk, you will
create a local user on this node and configure a sudo policy when preparing a
managed node. Ansible on the control node can then use the local user account
to log in to the managed node and run playbooks as different users, such as root.
Procedure
1. Create a user named ansible:
# useradd ansible
The control node later uses this user to establish an SSH connection to this host.
2. Set a password for the ansible user:
# passwd ansible
You must enter this password when Ansible uses sudo to perform tasks as the root user.
3. Install the ansible user’s SSH public key on the managed node:
a. Log in to the control node as the ansible user, and copy the SSH public key to the managed
node:
$ ssh-copy-id managed-node-01.example.com
d. Verify the SSH connection by remotely executing a command on the managed node:
$ ssh managed-node-01.example.com whoami
a. Create and edit the /etc/sudoers.d/ansible file by using the visudo command:
# visudo /etc/sudoers.d/ansible
The benefit of using visudo over a normal editor is that this utility provides basic checks,
such as for parse errors, before installing the file.
To grant permissions to the ansible user to run all commands as any user and group on
this host after entering the ansible user’s password, use:
ansible ALL=(ALL) ALL
To grant permissions to the ansible user to run all commands as any user and group on
this host without entering the ansible user’s password, use:
ansible ALL=(ALL) NOPASSWD: ALL
Alternatively, configure a more fine-grained policy that matches your security requirements.
For further details on sudoers policies, see the sudoers(5) manual page.
Verification
1. Verify that you can execute commands from the control node on all managed nodes:
$ ansible all -m ping
The hard-coded all group dynamically contains all hosts listed in the inventory file.
2. Verify that privilege escalation works correctly by running the whoami utility on all managed
nodes by using the Ansible command module:
$ ansible all -m command -a whoami
root
...
If the command returns root, you configured sudo on the managed nodes correctly.
Additional resources
CHAPTER 3. ANSIBLE VAULT
With Ansible Vault, you can encrypt, decrypt, view, and edit sensitive information, such as passwords,
keys, and other variables that your playbooks use.
You can use Ansible vault to securely manage individual variables, entire files, or even structured data
like YAML files. This data can then be safely stored in a version control system or shared with team
members without exposing sensitive information.
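For example, assuming a variable file named secrets.yml (the file name is a placeholder), the basic ansible-vault subcommands look like this:

$ ansible-vault encrypt secrets.yml
$ ansible-vault view secrets.yml
$ ansible-vault edit secrets.yml
$ ansible-vault decrypt secrets.yml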
IMPORTANT
Files are protected with symmetric encryption of the Advanced Encryption Standard
(AES256), where a single password or passphrase is used both to encrypt and decrypt
the data. Note that the way this is done has not been formally audited by a third party.
To simplify management, it makes sense to set up your Ansible project so that sensitive variables and all
other variables are kept in separate files, or directories. Then you can protect the files containing
sensitive variables with the ansible-vault command.
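For example, the vault-protected variable file that the following playbook reads could be created like this; the variable names match the playbook below, and the values shown are placeholders:

$ ansible-vault create vault.yml

The command prompts for a vault password and then opens an editor, where you add the variables, for example:

username: deploy_user
pwhash: $6$<salt>$<hashed_password>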
---
- name: Create user accounts for all servers
hosts: managed-node-01.example.com
vars_files:
- vault.yml
tasks:
- name: Create user from vault.yml file
user:
name: "{{ username }}"
password: "{{ pwhash }}"
You read in the file with the variables (vault.yml) in the vars_files section of your Ansible playbook, and
you use the curly brackets the same way as with ordinary variables. Then you either run the playbook
with the ansible-playbook --ask-vault-pass command and enter the vault password manually, or you
save the password in a separate file and run the playbook with the ansible-playbook
--vault-password-file /path/to/my/vault-password-file command.
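For example, with the playbook saved as playbook.yml (the file names here are placeholders):

$ ansible-playbook --ask-vault-pass playbook.yml
$ ansible-playbook --vault-password-file /path/to/my/vault-password-file playbook.yml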
Additional resources
Ansible vault
CHAPTER 4. ANSIBLE IPMI MODULES IN RHEL
The ipmi_boot module accepts a bootdev parameter that specifies the device to use on the next boot. Supported values:
* network
* floppy
* hd
* safe
* optical
* setup
* default
The ipmi_power module accepts a state parameter with the following supported values:
* on
* off
* shutdown
* reset
* boot
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The python3-pyghmi package is installed either on the control node or the managed nodes.
The IPMI BMC that you want to control is accessible over the network from the control node or the
managed host (if not using localhost as the managed host). Note that the host whose BMC is
being configured by the module is generally different from the managed host, as the module
contacts the BMC over the network using the IPMI protocol.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Set boot device to be used on next boot
hosts: managed-node-01.example.com
tasks:
- name: Ensure boot device is HD
redhat.rhel_mgmt.ipmi_boot:
user: <admin_user>
password: <password>
bootdev: hd
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Additional resources
/usr/share/ansible/collections/ansible_collections/redhat/rhel_mgmt/README.md file
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The python3-pyghmi package is installed either on the control node or the managed nodes.
The IPMI BMC that you want to control is accessible over the network from the control node or the
managed host (if not using localhost as the managed host). Note that the host whose BMC is
being configured by the module is generally different from the managed host, as the module
contacts the BMC over the network using the IPMI protocol.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Power management
hosts: managed-node-01.example.com
tasks:
- name: Ensure machine is powered on
redhat.rhel_mgmt.ipmi_power:
user: <admin_user>
password: <password>
state: on
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Additional resources
/usr/share/ansible/collections/ansible_collections/redhat/rhel_mgmt/README.md file
CHAPTER 5. THE REDFISH MODULES IN RHEL
1. redfish_info: The redfish_info module retrieves information about the remote Out-Of-Band
(OOB) controller such as systems inventory.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The python3-pyghmi package is installed either on the control node or the managed nodes.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage out-of-band controllers using Redfish APIs
hosts: managed-node-01.example.com
tasks:
- name: Get CPU inventory
redhat.rhel_mgmt.redfish_info:
baseuri: "<URI>"
username: "<username>"
password: "<password>"
category: Systems
command: GetCpuInventory
register: result
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
When you run the playbook, Ansible returns the CPU inventory details.
Additional resources
/usr/share/ansible/collections/ansible_collections/redhat/rhel_mgmt/README.md file
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The python3-pyghmi package is installed either on the control node or the managed nodes.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage out-of-band controllers using Redfish APIs
hosts: managed-node-01.example.com
tasks:
- name: Power on system
redhat.rhel_mgmt.redfish_command:
baseuri: "<URI>"
username: "<username>"
password: "<password>"
category: Systems
command: PowerOn
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Additional resources
/usr/share/ansible/collections/ansible_collections/redhat/rhel_mgmt/README.md file
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The python3-pyghmi package is installed either on the control node or the managed nodes.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manages out-of-band controllers using Redfish APIs
hosts: managed-node-01.example.com
tasks:
- name: Set BootMode to UEFI
redhat.rhel_mgmt.redfish_config:
baseuri: "<URI>"
username: "<username>"
password: "<password>"
category: Systems
command: SetBiosAttributes
bios_attributes:
BootMode: Uefi
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Additional resources
/usr/share/ansible/collections/ansible_collections/redhat/rhel_mgmt/README.md file
CHAPTER 6. INTEGRATING RHEL SYSTEMS INTO AD DIRECTLY BY USING THE RHEL SYSTEM ROLE
The ad_integration RHEL system role uses realmd to detect available AD domains and to configure the
underlying RHEL system services, in this case SSSD, to connect to the selected AD domain.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file
/usr/share/doc/rhel-system-roles/ad_integration/ directory
You can use the ad_integration system role to configure a direct integration between a RHEL system
and an AD domain by running an Ansible playbook.
NOTE
Starting with RHEL 8, RHEL no longer supports RC4 encryption by default. If it is not
possible to enable AES in the AD domain, you must enable the AD-SUPPORT crypto
policy and allow RC4 encryption in the playbook.
IMPORTANT
Time between the RHEL server and AD must be synchronized. You can ensure this by
using the timesync system role in the playbook.
In this example, the RHEL system joins the domain.example.com AD domain, by using the AD
Administrator user and the password for this user stored in the Ansible vault. The playbook also sets
the AD-SUPPORT crypto policy and allows RC4 encryption. To ensure time synchronization between
the RHEL system and AD, the playbook sets the adserver.domain.example.com server as the
timesync source.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The following ports on the AD domain controllers are open and accessible from the RHEL
server:
Table 6.1. Ports Required for Direct Integration of Linux Systems into AD Using the
ad_integration system role
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure a direct integration between a RHEL system and an AD domain
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.ad_integration
  vars:
    ad_integration_realm: "domain.example.com"
    # The AD Administrator password is expected in a vault-encrypted variable;
    # the variable name used here is an example.
    ad_integration_password: "{{ vault_ad_admin_password }}"
    ad_integration_manage_crypto_policies: true
    ad_integration_allow_rc4_crypto: true
    ad_integration_timesync_source: "adserver.domain.example.com"
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file
/usr/share/doc/rhel-system-roles/ad_integration/ directory
CHAPTER 7. REQUESTING CERTIFICATES BY USING THE RHEL SYSTEM ROLE
The role uses certmonger as the certificate provider, and currently supports issuing and renewing self-
signed certificates and using the IdM integrated certificate authority (CA).
You can use the following variables in your Ansible playbook with the certificate system role:
certificate_wait
to specify if the task should wait for the certificate to be issued.
certificate_requests
to represent each certificate to be issued and its parameters.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.certificate/README.md file
/usr/share/doc/rhel-system-roles/certificate/ directory
With the certificate system role, you can use Ansible Core to issue self-signed certificates.
This process uses the certmonger provider and requests the certificate through the getcert command.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.certificate
vars:
certificate_requests:
- name: mycert
dns: "*.example.com"
ca: self-sign
Set the name parameter to the desired name of the certificate, such as mycert.
Set the dns parameter to the domain to be included in the certificate, such as
*.example.com.
By default, certmonger automatically tries to renew the certificate before it expires. You can
disable this by setting the auto_renew parameter in the Ansible playbook to no.
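For example, a certificate_requests entry with renewal disabled could look like this (a sketch based on the request above):

certificate_requests:
  - name: mycert
    dns: "*.example.com"
    ca: self-sign
    auto_renew: no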
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.certificate/README.md file
/usr/share/doc/rhel-system-roles/certificate/ directory
With the certificate system role, you can use ansible-core to issue certificates while using an IdM server
with an integrated certificate authority (CA). Therefore, you can efficiently and consistently manage the
certificate trust chain for multiple systems when using IdM as the CA.
This process uses the certmonger provider and requests the certificate through the getcert command.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.certificate
vars:
certificate_requests:
- name: mycert
dns: www.example.com
principal: HTTP/[email protected]
ca: ipa
Set the name parameter to the desired name of the certificate, such as mycert.
Set the dns parameter to the domain to be included in the certificate, such as
www.example.com.
By default, certmonger automatically tries to renew the certificate before it expires. You can
disable this by setting the auto_renew parameter in the Ansible playbook to no.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.certificate/README.md file
/usr/share/doc/rhel-system-roles/certificate/ directory
In the following example, the administrator ensures that the httpd service is stopped before a self-signed
certificate for www.example.com is issued or renewed, and started again afterwards.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.certificate
vars:
certificate_requests:
- name: mycert
dns: www.example.com
ca: self-sign
run_before: systemctl stop httpd.service
run_after: systemctl start httpd.service
Set the name parameter to the desired name of the certificate, such as mycert.
Set the dns parameter to the domain to be included in the certificate, such as
www.example.com.
Set the ca parameter to the CA you want to use to issue the certificate, such as self-sign.
Set the run_before parameter to the command you want to execute before this certificate
is issued or renewed, such as systemctl stop httpd.service.
Set the run_after parameter to the command you want to execute after this certificate is
issued or renewed, such as systemctl start httpd.service.
By default, certmonger automatically tries to renew the certificate before it expires. You can
disable this by setting the auto_renew parameter in the Ansible playbook to no.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.certificate/README.md file
/usr/share/doc/rhel-system-roles/certificate/ directory
CHAPTER 8. INSTALLING AND CONFIGURING WEB CONSOLE BY USING THE RHEL SYSTEM ROLE
Configure the web console to use a custom port number (9050/tcp). By default, the web
console uses port 9090.
Allow the firewalld and selinux system roles to configure the system for opening new ports.
Set the web console to use a certificate from the ipa trusted certificate authority instead of
using a self-signed certificate.
NOTE
You do not have to call the firewall or certificate system roles in the playbook to manage
the firewall or create the certificate. The cockpit system role calls them automatically as
needed.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example, ~/playbook.yml, with the following content:
---
- name: Manage the RHEL web console
hosts: managed-node-01.example.com
tasks:
- name: Install RHEL web console
ansible.builtin.include_role:
name: rhel-system-roles.cockpit
vars:
cockpit_packages: default
cockpit_port: 9050
cockpit_manage_selinux: true
cockpit_manage_firewall: true
cockpit_certificates:
- name: /etc/cockpit/ws-certs.d/01-certificate
dns: ['localhost', 'www.example.com']
ca: ipa
cockpit_manage_selinux: true
Allow using the selinux system role to configure SELinux for setting up the correct port
permissions on the websm_port_t SELinux type.
cockpit_manage_firewall: true
Allow the cockpit system role to use the firewalld system role for adding ports.
cockpit_certificates: <YAML_dictionary>
By default, the RHEL web console uses a self-signed certificate. Alternatively, you can add
the cockpit_certificates variable to the playbook and configure the role to request
certificates from an IdM certificate authority (CA) or to use an existing certificate and
private key that is available on the managed node.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.cockpit/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.cockpit/README.md file
/usr/share/doc/rhel-system-roles/cockpit directory
CHAPTER 9. SETTING A CUSTOM CRYPTOGRAPHIC POLICY BY USING THE RHEL SYSTEM ROLE
You can use the crypto_policies system role to configure a large number of managed nodes
consistently from a single control node.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure crypto policies
hosts: managed-node-01.example.com
tasks:
- name: Configure crypto policies
ansible.builtin.include_role:
name: rhel-system-roles.crypto_policies
vars:
  crypto_policies_policy: FUTURE
  crypto_policies_reboot_ok: true
You can replace the FUTURE value with your preferred crypto policy, for example: DEFAULT,
LEGACY, and FIPS:OSPP.
The crypto_policies_reboot_ok: true setting causes the system to reboot after the system
role changes the cryptographic policy.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
WARNING
Note that your system is not CC-compliant after you set the FIPS:OSPP
cryptographic subpolicy. The only correct way to make your RHEL system compliant
with the CC standard is by following the guidance provided in the cc-config
package. See the Common Criteria section in the Compliance Activities and
Government Standards Knowledgebase article for a list of certified RHEL versions,
validation reports, and links to CC guides hosted at the National Information
Assurance Partnership (NIAP) website.
Verification
1. On the control node, create another playbook named, for example, verify_playbook.yml:
---
- name: Verification
hosts: managed-node-01.example.com
tasks:
- name: Verify active crypto policy
ansible.builtin.include_role:
name: rhel-system-roles.crypto_policies
- debug:
var: crypto_policies_active
$ ansible-playbook ~/verify_playbook.yml
TASK [debug] **************************
ok: [host] => {
"crypto_policies_active": "FUTURE"
}
The crypto_policies_active variable shows the policy active on the managed node.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file
/usr/share/doc/rhel-system-roles/crypto_policies/ directory
CHAPTER 10. CONFIGURING FIREWALLD BY USING THE RHEL SYSTEM ROLE
After you run the firewall role on the control node, the RHEL system role applies the firewalld
parameters to the managed node immediately and makes them persistent across reboots.
The rhel-system-roles.firewall role from the RHEL system roles was introduced for automated
configurations of the firewalld service. The rhel-system-roles package contains this RHEL system role,
and also the reference documentation.
To apply the firewalld parameters on one or more systems in an automated fashion, use the firewall
RHEL system role variable in a playbook. A playbook is a list of one or more plays that is written in the
text-based YAML format.
You can use an inventory file to define a set of systems that you want Ansible to configure.
With the firewall role you can configure many different firewalld parameters, for example:
Zones.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.firewall/README.md file
/usr/share/doc/rhel-system-roles/firewall/ directory
The previous:replaced parameter removes all existing user-defined settings and resets firewalld to the
defaults. If you combine the previous:replaced parameter with other settings, the firewall role removes all
existing settings before applying new ones.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Reset firewalld example
hosts: managed-node-01.example.com
tasks:
- name: Reset firewalld
ansible.builtin.include_role:
name: rhel-system-roles.firewall
vars:
firewall:
- previous: replaced
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Run this command as root on the managed node to check all the zones:
# firewall-cmd --list-all-zones
Additional resources
/usr/share/ansible/roles/rhel-system-roles.firewall/README.md file
/usr/share/doc/rhel-system-roles/firewall/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure firewalld
hosts: managed-node-01.example.com
tasks:
- name: Forward incoming traffic on port 8080 to 443
ansible.builtin.include_role:
name: rhel-system-roles.firewall
vars:
firewall:
- { forward_port: 8080/tcp;443;, state: enabled, runtime: true, permanent: true }
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
# firewall-cmd --list-forward-ports
Additional resources
/usr/share/ansible/roles/rhel-system-roles.firewall/README.md file
/usr/share/doc/rhel-system-roles/firewall/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure firewalld
hosts: managed-node-01.example.com
tasks:
- name: Allow incoming HTTPS traffic to the local host
ansible.builtin.include_role:
name: rhel-system-roles.firewall
vars:
firewall:
- port: 443/tcp
service: http
state: enabled
runtime: true
permanent: true
The permanent: true option makes the new settings persistent across reboots.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On the managed node, verify that the 443/tcp port associated with the HTTPS service is open:
# firewall-cmd --list-ports
443/tcp
Additional resources
/usr/share/ansible/roles/rhel-system-roles.firewall/README.md file
/usr/share/doc/rhel-system-roles/firewall/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure firewalld
hosts: managed-node-01.example.com
tasks:
- name: Creating a DMZ with access to HTTPS port and masquerading for hosts in DMZ
ansible.builtin.include_role:
name: rhel-system-roles.firewall
vars:
firewall:
- zone: dmz
interface: enp1s0
service: https
state: enabled
runtime: true
permanent: true
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On the managed node, view detailed information about the dmz zone:
# firewall-cmd --zone=dmz --list-all
Additional resources
/usr/share/ansible/roles/rhel-system-roles.firewall/README.md file
/usr/share/doc/rhel-system-roles/firewall/ directory
CHAPTER 11. CONFIGURING A HIGH-AVAILABILITY CLUSTER BY USING THE RHEL SYSTEM ROLE
The variables you can set for an ha_cluster system role are as follows:
ha_cluster_enable_repos
A boolean flag that enables the repositories containing the packages that are needed by the
ha_cluster system role. When this variable is set to true, the default value, you must have active
subscription coverage for RHEL and the RHEL High Availability Add-On on the systems that you will
use as your cluster members or the system role will fail.
ha_cluster_enable_repos_resilient_storage
(RHEL 8.10 and later) A boolean flag that enables the repositories containing resilient storage
packages, such as dlm or gfs2. For this option to take effect, ha_cluster_enable_repos must be set
to true. The default value of this variable is false.
ha_cluster_manage_firewall
(RHEL 8.8 and later) A boolean flag that determines whether the ha_cluster system role manages
the firewall. When ha_cluster_manage_firewall is set to true, the firewall high availability service
and the fence-virt port are enabled. When ha_cluster_manage_firewall is set to false, the
ha_cluster system role does not manage the firewall. If your system is running the firewalld service,
you must set the parameter to true in your playbook.
You can use the ha_cluster_manage_firewall parameter to add ports, but you cannot use the
parameter to remove ports. To remove ports, use the firewall system role directly.
As of RHEL 8.8, the firewall is no longer configured by default, because it is configured only when
ha_cluster_manage_firewall is set to true.
ha_cluster_manage_selinux
(RHEL 8.8 and later) A boolean flag that determines whether the ha_cluster system role manages
the ports belonging to the firewall high availability service using the selinux system role. When
ha_cluster_manage_selinux is set to true, the ports belonging to the firewall high availability
service are associated with the SELinux port type cluster_port_t. When
ha_cluster_manage_selinux is set to false, the ha_cluster system role does not manage SELinux.
If your system is running the selinux service, you must set this parameter to true in your playbook.
Firewall configuration is a prerequisite for managing SELinux. If the firewall is not installed, the
managing SELinux policy is skipped.
You can use the ha_cluster_manage_selinux parameter to add policy, but you cannot use the
parameter to remove policy. To remove policy, use the selinux system role directly.
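For illustration, a minimal vars snippet that enables repository, firewall, and SELinux management might look like the following; whether you need each flag depends on your environment:

ha_cluster_enable_repos: true
ha_cluster_enable_repos_resilient_storage: false
ha_cluster_manage_firewall: true
ha_cluster_manage_selinux: true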
ha_cluster_cluster_present
A boolean flag which, if set to true, determines that an HA cluster will be configured on the hosts
according to the variables passed to the role. Any cluster configuration not specified in the playbook
and not supported by the role will be lost.
If ha_cluster_cluster_present is set to false, all HA cluster configuration will be removed from the target hosts.
The following example playbook removes all cluster configuration on node1 and node2:

- hosts: node1 node2
  vars:
    ha_cluster_cluster_present: false
  roles:
    - rhel-system-roles.ha_cluster
ha_cluster_start_on_boot
A boolean flag that determines whether cluster services will be configured to start on boot. The
default value of this variable is true.
ha_cluster_fence_agent_packages
List of fence agent packages to install. The default value of this variable is fence-agents-all, fence-virt.
ha_cluster_extra_packages
List of additional packages to be installed. The default value of this variable is no packages.
This variable can be used to install additional packages not installed automatically by the role, for
example custom resource agents.
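As an illustrative sketch, the package-related variables can be combined as follows; the extra package name is a placeholder:

ha_cluster_start_on_boot: true
ha_cluster_fence_agent_packages:
  - fence-agents-all
  - fence-virt
ha_cluster_extra_packages:
  - <custom-resource-agent-package>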
ha_cluster_hacluster_password
A string value that specifies the password of the hacluster user. The hacluster user has full access
to a cluster. To protect sensitive data, vault encrypt the password, as described in Encrypting content
with Ansible Vault. There is no default password value, and this variable must be specified.
ha_cluster_hacluster_qdevice_password
(RHEL 8.9 and later) A string value that specifies the password of the hacluster user for a quorum
device. This parameter is needed only if the ha_cluster_quorum parameter is configured to use a
quorum device of type net and the password of the hacluster user on the quorum device is different
from the password of the hacluster user specified with the ha_cluster_hacluster_password
parameter. The hacluster user has full access to a cluster. To protect sensitive data, vault encrypt
the password, as described in Encrypting content with Ansible Vault . There is no default value for this
password.
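For illustration, one hedged approach is to keep both passwords in a vault-encrypted file and reference them as variables; the file name vault.yml and the variable names are placeholders:

- hosts: node1 node2
  vars_files:
    - vault.yml
  vars:
    ha_cluster_hacluster_password: "{{ cluster_password }}"
    ha_cluster_hacluster_qdevice_password: "{{ qdevice_password }}"
  roles:
    - rhel-system-roles.ha_cluster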
ha_cluster_corosync_key_src
The path to Corosync authkey file, which is the authentication and encryption key for Corosync
communication. It is highly recommended that you have a unique authkey value for each cluster. The
key should be 256 bytes of random data.
If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in
Encrypting content with Ansible Vault .
If no key is specified, a key already present on the nodes will be used. If nodes do not have the same
key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no
node has a key, a new key will be generated and distributed to the nodes.
ha_cluster_pacemaker_key_src
The path to the Pacemaker authkey file, which is the authentication and encryption key for
Pacemaker communication. It is highly recommended that you have a unique authkey value for each
cluster. The key should be 256 bytes of random data.
If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in
Encrypting content with Ansible Vault .
If no key is specified, a key already present on the nodes will be used. If nodes do not have the same
key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no
node has a key, a new key will be generated and distributed to the nodes.
ha_cluster_fence_virt_key_src
The path to the fence-virt or fence-xvm pre-shared key file, which is the location of the
authentication key for the fence-virt or fence-xvm fence agent.
If you specify a key for this variable, it is recommended that you vault encrypt the key, as described in
Encrypting content with Ansible Vault .
If no key is specified, a key already present on the nodes will be used. If nodes do not have the same
key, a key from one node will be distributed to other nodes so that all nodes have the same key. If no
node has a key, a new key will be generated and distributed to the nodes. If the ha_cluster system
role generates a new key in this fashion, you should copy the key to your nodes' hypervisor to ensure
that fencing works.
ha_cluster_pcsd_public_key_src, ha_cluster_pcsd_private_key_src
The path to the pcsd TLS certificate and private key. If this is not specified, a certificate-key pair
already present on the nodes will be used. If a certificate-key pair is not present, a random new one
will be generated.
If you specify a private key value for this variable, it is recommended that you vault encrypt the key,
as described in Encrypting content with Ansible Vault .
If these variables are set, ha_cluster_regenerate_keys is ignored for this certificate-key pair.
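As a hedged illustration, the key and certificate source variables point to files on the control node; the paths below are placeholders:

ha_cluster_corosync_key_src: /path/to/corosync-authkey
ha_cluster_pacemaker_key_src: /path/to/pacemaker-authkey
ha_cluster_fence_virt_key_src: /path/to/fence_xvm.key
ha_cluster_pcsd_public_key_src: /path/to/pcsd.crt
ha_cluster_pcsd_private_key_src: /path/to/pcsd.key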
ha_cluster_pcsd_certificates
(RHEL 8.8 and later) Creates a pcsd private key and certificate using the certificate system role.
If your system is not configured with a pcsd private key and certificate, you can create them in one
of two ways:
Unless you are using IPA and joining the systems to an IPA domain, the certificate system
role creates self-signed certificates. In this case, you must explicitly configure trust settings
outside of the context of RHEL system roles. System roles do not support configuring trust
settings.
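As an illustrative sketch, a self-signed certificate can be requested through this variable; FILENAME is a placeholder, and a complete procedure appears later in this chapter:

ha_cluster_pcsd_certificates:
  - name: FILENAME
    common_name: "{{ ansible_hostname }}"
    ca: self-sign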
ha_cluster_regenerate_keys
A boolean flag which, when set to true, determines that pre-shared keys and TLS certificates will be
regenerated. For more information about when keys and certificates will be regenerated, see the
descriptions of the ha_cluster_corosync_key_src, ha_cluster_pacemaker_key_src,
ha_cluster_fence_virt_key_src, ha_cluster_pcsd_public_key_src, and
ha_cluster_pcsd_private_key_src variables.
The default value of this variable is false.
ha_cluster_pcs_permission_list
Configures permissions to manage a cluster using pcsd. The items you configure with this variable are as follows:
type - user or group
name - user or group name
allow_list - Allowed actions for the specified user or group:
read - View cluster status and settings
write - Modify cluster settings except permissions and ACLs
grant - Modify cluster permissions and ACLs
full - Unrestricted access to a cluster including adding and removing nodes and access to keys and certificates
The structure of the ha_cluster_pcs_permission_list variable and its default values are as follows:
ha_cluster_pcs_permission_list:
- type: group
name: hacluster
allow_list:
- grant
- read
- write
ha_cluster_cluster_name
The name of the cluster. This is a string value with a default of my-cluster.
ha_cluster_transport
(RHEL 8.7 and later) Sets the cluster transport method. The items you configure with this variable
are as follows:
type (optional) - Transport type: knet, udp, or udpu. The udp and udpu transport types
support only one link. Encryption is always disabled for udp and udpu. Defaults to knet if not
specified.
links (optional) - List of list of name-value dictionaries. Each list of name-value dictionaries
holds options for one Corosync link. It is recommended that you set the linknumber value for
each link. Otherwise, the first list of dictionaries is assigned by default to the first link, the
second one to the second link, and so on.
For a list of allowed options, see the pcs -h cluster setup help page or the setup description in the
cluster section of the pcs(8) man page. For more detailed descriptions, see the corosync.conf(5)
man page.
The structure of the ha_cluster_transport variable is as follows:
ha_cluster_transport:
type: knet
options:
- name: option1_name
value: option1_value
- name: option2_name
value: option2_value
links:
-
- name: option1_name
value: option1_value
- name: option2_name
value: option2_value
-
- name: option1_name
value: option1_value
- name: option2_name
value: option2_value
compression:
- name: option1_name
value: option1_value
- name: option2_name
value: option2_value
crypto:
- name: option1_name
value: option1_value
- name: option2_name
value: option2_value
For an example ha_cluster system role playbook that configures a transport method, see
Configuring Corosync values in a high availability cluster .
ha_cluster_totem
(RHEL 8.7 and later) Configures Corosync totem. For a list of allowed options, see the pcs -h
cluster setup help page or the setup description in the cluster section of the pcs(8) man page. For
a more detailed description, see the corosync.conf(5) man page.
The structure of the ha_cluster_totem variable is as follows:
ha_cluster_totem:
options:
- name: option1_name
value: option1_value
- name: option2_name
value: option2_value
For an example ha_cluster system role playbook that configures a Corosync totem, see Configuring
Corosync values in a high availability cluster.
ha_cluster_quorum
(RHEL 8.7 and later) Configures cluster quorum. You can configure the following items for cluster
quorum:
device (optional) - (RHEL 8.8 and later) Configures the cluster to use a quorum device. By
default, no quorum device is used.
ha_cluster_quorum:
options:
- name: option1_name
value: option1_value
- name: option2_name
value: option2_value
device:
model: string
model_options:
- name: option1_name
value: option1_value
- name: option2_name
value: option2_value
generic_options:
- name: option1_name
value: option1_value
- name: option2_name
value: option2_value
heuristics_options:
- name: option1_name
value: option1_value
- name: option2_name
value: option2_value
For an example ha_cluster system role playbook that configures cluster quorum, see Configuring
Corosync values in a high availability cluster. For an example ha_cluster system role playbook that
configures a cluster using a quorum device, see Configuring a high availability cluster using a quorum
device.
ha_cluster_sbd_enabled
(RHEL 8.7 and later) A boolean flag which determines whether the cluster can use the SBD node
fencing mechanism. The default value of this variable is false.
For an example ha_cluster system role playbook that enables SBD, see Configuring a high
availability cluster with SBD node fencing.
ha_cluster_sbd_options
(RHEL 8.7 and later) List of name-value dictionaries specifying SBD options. Supported options are:
delay-start - defaults to no
watchdog-timeout - defaults to 5
For information about these options, see the Configuration via environment section of the sbd(8) man page.
For an example ha_cluster system role playbook that configures SBD options, see Configuring a
high availability cluster with SBD node fencing.
When using SBD, you can optionally configure watchdog and SBD devices for each node in an
inventory. For information about configuring watchdog and SBD devices in an inventory file, see
Specifying an inventory for the ha_cluster system role .
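For illustration, a minimal sketch that enables SBD with a subset of these options (the values are examples only) is as follows:

ha_cluster_sbd_enabled: true
ha_cluster_sbd_options:
  - name: delay-start
    value: 'no'
  - name: watchdog-timeout
    value: 30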
ha_cluster_cluster_properties
List of sets of cluster properties for Pacemaker cluster-wide configuration. Only one set of cluster
properties is supported.
The structure of a set of cluster properties is as follows:
ha_cluster_cluster_properties:
- attrs:
- name: property1_name
value: property1_value
- name: property2_name
value: property2_value
The following example playbook configures a cluster consisting of node1 and node2 and sets the stonith-enabled and no-quorum-policy cluster properties.

- hosts: node1 node2
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: <password>
    ha_cluster_cluster_properties:
      - attrs:
          - name: stonith-enabled
            value: 'true'
          - name: no-quorum-policy
            value: stop
  roles:
    - rhel-system-roles.ha_cluster
ha_cluster_node_options
(RHEL 8.10 and later) This variable defines various settings which vary from one cluster node to
another. It sets the options for the specified nodes, but does not specify which nodes form the
cluster. You specify which nodes form the cluster with the hosts parameter in an inventory or a
playbook.
The items you configure with this variable are as follows:
node_name (mandatory) - Name of the node for which to define Pacemaker node
attributes.
attributes (optional) - List of sets of Pacemaker node attributes for the node. Currently no
more than one set for each node is supported.
ha_cluster_node_options:
- node_name: node1
attributes:
- attrs:
- name: attribute1
value: value1_node1
- name: attribute2
value: value2_node1
- node_name: node2
attributes:
- attrs:
- name: attribute1
value: value1_node2
- name: attribute2
value: value2_node2
For an example ha_cluster system role playbook that includes node options configuration, see
Configuring a high availability cluster with node attributes .
ha_cluster_resource_primitives
This variable defines pacemaker resources configured by the system role, including fencing
resources. You can configure the following items for each resource:
id (mandatory) - ID of a resource.
instance_attrs (optional) - List of sets of the resource’s instance attributes. Currently, only
one set is supported. The exact names and values of attributes, as well as whether they are
mandatory or not, depend on the resource or fencing agent.
meta_attrs (optional) - List of sets of the resource’s meta attributes. Currently, only one set
is supported.
The structure of the resource definition that you configure with the ha_cluster system role is as
follows:
- id: resource-id
agent: resource-agent
instance_attrs:
- attrs:
- name: attribute1_name
value: attribute1_value
- name: attribute2_name
value: attribute2_value
meta_attrs:
- attrs:
- name: meta_attribute1_name
value: meta_attribute1_value
- name: meta_attribute2_name
value: meta_attribute2_value
copy_operations_from_agent: bool
operations:
- action: operation1-action
attrs:
- name: operation1_attribute1_name
value: operation1_attribute1_value
- name: operation1_attribute2_name
value: operation1_attribute2_value
- action: operation2-action
attrs:
- name: operation2_attribute1_name
value: operation2_attribute1_value
- name: operation2_attribute2_name
value: operation2_attribute2_value
For an example ha_cluster system role playbook that includes resource configuration, see
Configuring a high availability cluster with fencing and resources .
ha_cluster_resource_groups
This variable defines pacemaker resource groups configured by the system role. You can configure
the following items for each resource group:
id (mandatory) - ID of a group.
resource_ids (mandatory) - List of the group's resources. Each resource is referenced by its ID
and the resources must be defined in the ha_cluster_resource_primitives variable. At least
one resource must be listed.
meta_attrs (optional) - List of sets of the group’s meta attributes. Currently, only one set is
supported.
The structure of the resource group definition that you configure with the ha_cluster system role is
as follows:
ha_cluster_resource_groups:
- id: group-id
resource_ids:
- resource1-id
- resource2-id
meta_attrs:
- attrs:
- name: group_meta_attribute1_name
value: group_meta_attribute1_value
- name: group_meta_attribute2_name
value: group_meta_attribute2_value
For an example ha_cluster system role playbook that includes resource group configuration, see
Configuring a high availability cluster with fencing and resources .
ha_cluster_resource_clones
This variable defines pacemaker resource clones configured by the system role. You can configure
the following items for a resource clone:
meta_attrs (optional) - List of sets of the clone’s meta attributes. Currently, only one set is
supported.
The structure of the resource clone definition that you configure with the ha_cluster system role is
as follows:
ha_cluster_resource_clones:
- resource_id: resource-to-be-cloned
promotable: true
id: custom-clone-id
meta_attrs:
- attrs:
- name: clone_meta_attribute1_name
value: clone_meta_attribute1_value
- name: clone_meta_attribute2_name
value: clone_meta_attribute2_value
For an example ha_cluster system role playbook that includes resource clone configuration, see
Configuring a high availability cluster with fencing and resources .
ha_cluster_resource_defaults
(RHEL 8.9 and later) This variable defines sets of resource defaults. You can define multiple sets of
defaults and apply them to resources of specific agents using rules. The defaults you specify with the
ha_cluster_resource_defaults variable do not apply to resources which override them with their
own defined values.
You can configure the following items for each defaults set:
rule (optional) - Rule written using pcs syntax defining when and for which resources the set
applies. For information on specifying a rule, see the resource defaults set create section of
the pcs(8) man page.
ha_cluster_resource_defaults:
meta_attrs:
- id: defaults-set-1-id
rule: rule-string
score: score-value
attrs:
- name: meta_attribute1_name
value: meta_attribute1_value
- name: meta_attribute2_name
value: meta_attribute2_value
- id: defaults-set-2-id
rule: rule-string
score: score-value
attrs:
- name: meta_attribute3_name
value: meta_attribute3_value
- name: meta_attribute4_name
value: meta_attribute4_value
For an example ha_cluster system role playbook that configures resource defaults, see Configuring
a high availability cluster with resource and resource operation defaults.
ha_cluster_resource_operation_defaults
(RHEL 8.9 and later) This variable defines sets of resource operation defaults. You can define
multiple sets of defaults and apply them to resources of specific agents and specific resource
operations using rules. The defaults you specify with the ha_cluster_resource_operation_defaults
variable do not apply to resource operations which override them with their own defined values. By
default, the ha_cluster system role configures resources to define their own values for resource
operations. For information about overriding these defaults with the
ha_cluster_resource_operation_defaults variable, see the description of the
copy_operations_from_agent item in ha_cluster_resource_primitives.
Only meta attributes can be specified as defaults.
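Because only meta attributes can be specified, the structure mirrors the ha_cluster_resource_defaults variable; as a hedged sketch, with the rule and values as placeholders:

ha_cluster_resource_operation_defaults:
  meta_attrs:
    - id: defaults-set-1-id
      rule: rule-string
      score: score-value
      attrs:
        - name: meta_attribute1_name
          value: meta_attribute1_value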
ha_cluster_stonith_levels
(RHEL 8.10 and later) This variable defines STONITH levels, also known as fencing topology. Fencing
levels configure a cluster to use multiple devices to fence nodes. You can define alternative devices
in case one device fails and you can require multiple devices to all be executed successfully to
consider a node successfully fenced. For more information on fencing levels, see Configuring fencing
levels in Configuring and managing high availability clusters .
You can configure the following items when defining fencing levels:
level (mandatory) - Order in which to attempt the fencing level. Pacemaker attempts levels
in ascending order until one succeeds.
target_pattern - POSIX extended regular expression matching the names of the nodes
this level applies to.
target_attribute - Name of a node attribute that is set for the node this level applies to.
target_attribute and target_value - Name and value of a node attribute that is set for
the node this level applies to.
resource_ids (mandatory) - List of fencing resources that must all be tried for this level.
By default, no fencing levels are defined.
The structure of the fencing levels definition that you configure with the ha_cluster system role is as
follows:
ha_cluster_stonith_levels:
- level: 1..9
target: node_name
target_pattern: node_name_regular_expression
target_attribute: node_attribute_name
target_value: node_attribute_value
resource_ids:
- fence_device_1
- fence_device_2
- level: 1..9
target: node_name
target_pattern: node_name_regular_expression
target_attribute: node_attribute_name
target_value: node_attribute_value
resource_ids:
- fence_device_1
- fence_device_2
For an example ha_cluster system role playbook that configures fencing defaults, see Configuring a
high availability cluster with fencing levels.
ha_cluster_constraints_location
This variable defines resource location constraints. Resource location constraints indicate which
nodes a resource can run on. You can specify a resource by a resource ID or by a pattern, which can match more than one resource. You can specify a node by a node name or by a rule.
You can configure the following items for a resource location constraint:
A positive score value means the resource prefers running on the node.
A negative score value means the resource should avoid running on the node.
A score value of -INFINITY means the resource must avoid running on the node.
ha_cluster_constraints_location:
- resource:
id: resource-id
node: node-name
id: constraint-id
options:
- name: score
value: score-value
- name: option-name
value: option-value
The items that you configure for a resource location constraint that specifies a resource pattern are
the same items that you configure for a resource location constraint that specifies a resource ID,
with the exception of the resource specification itself. The item that you specify for the resource
specification is as follows:
pattern (mandatory) - POSIX extended regular expression resource IDs are matched
against.
The structure of a resource location constraint specifying a resource pattern and node name is as
follows:
ha_cluster_constraints_location:
- resource:
pattern: resource-pattern
node: node-name
id: constraint-id
options:
- name: score
value: score-value
- name: resource-discovery
value: resource-discovery-value
You can configure the following items for a resource location constraint that specifies a resource ID
and a rule:
role (optional) - The resource role to which the constraint is limited: Started,
Unpromoted, Promoted.
rule (mandatory) - Constraint rule written using pcs syntax. For further information, see the
constraint location section of the pcs(8) man page.
Other items to specify have the same meaning as for a resource constraint that does not
specify a rule.
The structure of a resource location constraint that specifies a resource ID and a rule is as follows:
ha_cluster_constraints_location:
- resource:
id: resource-id
role: resource-role
rule: rule-string
id: constraint-id
options:
- name: score
value: score-value
- name: resource-discovery
value: resource-discovery-value
The items that you configure for a resource location constraint that specifies a resource pattern and
a rule are the same items that you configure for a resource location constraint that specifies a
resource ID and a rule, with the exception of the resource specification itself. The item that you
specify for the resource specification is as follows:
pattern (mandatory) - POSIX extended regular expression resource IDs are matched
against.
The structure of a resource location constraint that specifies a resource pattern and a rule is as
follows:
ha_cluster_constraints_location:
- resource:
pattern: resource-pattern
role: resource-role
rule: rule-string
id: constraint-id
options:
- name: score
value: score-value
- name: resource-discovery
value: resource-discovery-value
For an example ha_cluster system role playbook that creates a cluster with resource constraints, see
Configuring a high availability cluster with resource constraints .
ha_cluster_constraints_colocation
This variable defines resource colocation constraints. Resource colocation constraints indicate that
the location of one resource depends on the location of another one. There are two types of
colocation constraints: a simple colocation constraint for two resources, and a set colocation
constraint for multiple resources.
You can configure the following items for a simple resource colocation constraint:
role (optional) - The resource role to which the constraint is limited: Started,
Unpromoted, Promoted.
resource_leader (mandatory) - The cluster will decide where to put this resource first and
then decide where to put resource_follower.
role (optional) - The resource role to which the constraint is limited: Started,
Unpromoted, Promoted.
Positive score values indicate the resources should run on the same node.
Negative score values indicate the resources should run on different nodes.
A score value of +INFINITY indicates the resources must run on the same node.
A score value of -INFINITY indicates the resources must run on different nodes.
ha_cluster_constraints_colocation:
- resource_follower:
id: resource-id1
role: resource-role1
resource_leader:
id: resource-id2
role: resource-role2
id: constraint-id
options:
- name: score
value: score-value
- name: option-name
value: option-value
You can configure the following items for a resource set colocation constraint:
ha_cluster_constraints_colocation:
- resource_sets:
- resource_ids:
- resource-id1
- resource-id2
options:
- name: option-name
value: option-value
id: constraint-id
options:
- name: score
value: score-value
- name: option-name
value: option-value
For an example ha_cluster system role playbook that creates a cluster with resource constraints, see
Configuring a high availability cluster with resource constraints .
ha_cluster_constraints_order
This variable defines resource order constraints. Resource order constraints indicate the order in
which certain resource actions should occur. There are two types of resource order constraints: a
simple order constraint for two resources, and a set order constraint for multiple resources.
You can configure the following items for a simple resource order constraint:
action (optional) - The action that must complete before an action can be initiated for
the resource_then resource. Allowed values: start, stop, promote, demote.
action (optional) - The action that the resource can execute only after the action on the
resource_first resource has completed. Allowed values: start, stop, promote, demote.
ha_cluster_constraints_order:
- resource_first:
id: resource-id1
action: resource-action1
resource_then:
id: resource-id2
action: resource-action2
id: constraint-id
options:
- name: score
value: score-value
- name: option-name
value: option-value
You can configure the following items for a resource set order constraint:
ha_cluster_constraints_order:
- resource_sets:
- resource_ids:
- resource-id1
- resource-id2
options:
- name: option-name
value: option-value
id: constraint-id
options:
- name: score
value: score-value
- name: option-name
value: option-value
For an example ha_cluster system role playbook that creates a cluster with resource constraints, see
Configuring a high availability cluster with resource constraints .
ha_cluster_constraints_ticket
This variable defines resource ticket constraints. Resource ticket constraints indicate the resources
that depend on a certain ticket. There are two types of resource ticket constraints: a simple ticket
constraint for one resource, and a ticket order constraint for multiple resources.
You can configure the following items for a simple resource ticket constraint:
role (optional) - The resource role to which the constraint is limited: Started,
Unpromoted, Promoted.
ha_cluster_constraints_ticket:
- resource:
id: resource-id
role: resource-role
ticket: ticket-name
id: constraint-id
options:
- name: loss-policy
value: loss-policy-value
- name: option-name
value: option-value
You can configure the following items for a resource set ticket constraint:
ha_cluster_constraints_ticket:
- resource_sets:
- resource_ids:
- resource-id1
- resource-id2
options:
- name: option-name
value: option-value
ticket: ticket-name
id: constraint-id
options:
- name: option-name
value: option-value
For an example ha_cluster system role playbook that creates a cluster with resource constraints, see
Configuring a high availability cluster with resource constraints .
ha_cluster_qnetd
(RHEL 8.8 and later) This variable configures a qnetd host which can then serve as an external
quorum device for clusters.
You can configure the following items for a qnetd host:
present (optional) - If true, configure a qnetd instance on the host. If false, remove qnetd
configuration from the host. The default value is false. If you set this to true, you must set
ha_cluster_cluster_present to false.
regenerate_keys (optional) - Set this variable to true to regenerate the qnetd TLS
certificate. If you regenerate the certificate, you must either re-run the role for each cluster
to connect it to the qnetd host again or run pcs manually.
You cannot run qnetd on a cluster node because fencing would disrupt qnetd operation.
For an example ha_cluster system role playbook that configures a cluster using a quorum device,
see Configuring a cluster using a quorum device .
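As an illustrative sketch, a minimal vars snippet for setting up a qnetd host (run against the qnetd host itself, not a cluster node) is as follows:

ha_cluster_cluster_present: false
ha_cluster_qnetd:
  present: true
  regenerate_keys: false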
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
11.2. SPECIFYING AN INVENTORY FOR THE HA_CLUSTER SYSTEM ROLE
For each node in an inventory, you can optionally specify the following items:
node_name - the name of a node in a cluster
pcs_address - an address used by pcs to communicate with the node. It can be a name, FQDN
or an IP address and it can include a port number.
corosync_addresses - list of addresses used by Corosync. All nodes which form a particular
cluster must have the same number of addresses and the order of the addresses matters.
The following example shows an inventory with targets node1 and node2. node1 and node2 must be either fully qualified domain names of the nodes, or it must otherwise be possible to connect to the nodes, for example, when the names are resolvable through the /etc/hosts file.
all:
hosts:
node1:
ha_cluster:
node_name: node-A
pcs_address: node1-address
corosync_addresses:
- 192.168.1.11
- 192.168.2.11
node2:
ha_cluster:
node_name: node-B
pcs_address: node2-address:2224
corosync_addresses:
- 192.168.1.12
- 192.168.2.12
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
For each node in an inventory, you can optionally specify the following items:
sbd_watchdog_modules (RHEL 8.9 and later) - Watchdog kernel modules to be loaded, which create /dev/watchdog* devices. Defaults to empty list if not set.
sbd_watchdog_modules_blocklist (RHEL 8.9 and later) - Watchdog kernel modules to be unloaded and blocked. Defaults to empty list if not set.
sbd_watchdog - Watchdog device to be used by SBD. Defaults to /dev/watchdog if not set.
sbd_devices - Devices to use for exchanging SBD messages and for monitoring. Defaults to empty list if not set.
The following example shows an inventory that configures watchdog and SBD devices for targets
node1 and node2.
all:
hosts:
node1:
ha_cluster:
sbd_watchdog_modules:
- module1
- module2
sbd_watchdog: /dev/watchdog2
sbd_devices:
- /dev/vdx
- /dev/vdy
node2:
ha_cluster:
sbd_watchdog_modules:
- module1
sbd_watchdog_modules_blocklist:
- module2
sbd_watchdog: /dev/watchdog1
sbd_devices:
- /dev/vdw
- /dev/vdz
For information about creating a high availability cluster that uses SBD fencing, see Configuring a high
availability cluster with SBD node fencing.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
11.3. CREATING PCSD TLS CERTIFICATES AND KEY FILES FOR A HIGH
AVAILABILITY CLUSTER
(RHEL 8.8 and later)
You can use the ha_cluster system role to create TLS certificates and key files in a high availability
cluster. When you run this playbook, the ha_cluster system role uses the certificate system role
internally to manage TLS certificates.
WARNING
The ha_cluster system role replaces any existing cluster configuration on the
specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The systems that you will use as your cluster members have active subscription coverage for RHEL and the RHEL High Availability Add-On.
The inventory file specifies the cluster nodes as described in Specifying an inventory for the
ha_cluster system role.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Create TLS certificates and key files in a high availability cluster
hosts: node1 node2
roles:
- rhel-system-roles.ha_cluster
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: <password>
ha_cluster_manage_firewall: true
ha_cluster_manage_selinux: true
ha_cluster_pcsd_certificates:
- name: FILENAME
common_name: "{{ ansible_hostname }}"
ca: self-sign
This playbook configures a cluster running the firewalld and selinux services and creates a
self-signed pcsd certificate and private key files in /var/lib/pcsd. The pcsd certificate has the
file name FILENAME.crt and the key file is named FILENAME.key.
When creating your playbook file for production, vault encrypt the password, as described in
Encrypting content with Ansible Vault .
2. Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

3. Run the playbook:

$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
The following procedure uses the ha_cluster system role to create a high availability cluster with no fencing configured and which runs no resources.
WARNING
The ha_cluster system role replaces any existing cluster configuration on the
specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The systems that you will use as your cluster members have active subscription coverage for
RHEL and the RHEL High Availability Add-On.
The inventory file specifies the cluster nodes as described in Specifying an inventory for the
ha_cluster system role.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Create a high availability cluster with no fencing and which runs no resources
hosts: node1 node2
roles:
- rhel-system-roles.ha_cluster
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: <password>
ha_cluster_manage_firewall: true
ha_cluster_manage_selinux: true
This example playbook file configures a cluster running the firewalld and selinux services with
no fencing configured and which runs no resources.
When creating your playbook file for production, vault encrypt the password, as described in
Encrypting content with Ansible Vault .
2. Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

3. Run the playbook:

$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
WARNING
The ha_cluster system role replaces any existing cluster configuration on the
specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The systems that you will use as your cluster members have active subscription coverage for
RHEL and the RHEL High Availability Add-On.
The inventory file specifies the cluster nodes as described in Specifying an inventory for the
ha_cluster system role.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Create a high availability cluster that includes a fencing device and resources
hosts: node1 node2
roles:
- rhel-system-roles.ha_cluster
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: <password>
ha_cluster_manage_firewall: true
ha_cluster_manage_selinux: true
ha_cluster_resource_primitives:
- id: xvm-fencing
agent: 'stonith:fence_xvm'
instance_attrs:
- attrs:
- name: pcmk_host_list
value: node1 node2
- id: simple-resource
agent: 'ocf:pacemaker:Dummy'
- id: resource-with-options
agent: 'ocf:pacemaker:Dummy'
instance_attrs:
- attrs:
- name: fake
value: fake-value
- name: passwd
value: passwd-value
meta_attrs:
- attrs:
- name: target-role
value: Started
- name: is-managed
value: 'true'
operations:
- action: start
attrs:
- name: timeout
value: '30s'
- action: monitor
attrs:
- name: timeout
value: '5'
- name: interval
value: '1min'
- id: dummy-1
agent: 'ocf:pacemaker:Dummy'
- id: dummy-2
agent: 'ocf:pacemaker:Dummy'
- id: dummy-3
agent: 'ocf:pacemaker:Dummy'
- id: simple-clone
agent: 'ocf:pacemaker:Dummy'
- id: clone-with-options
agent: 'ocf:pacemaker:Dummy'
ha_cluster_resource_groups:
- id: simple-group
resource_ids:
- dummy-1
- dummy-2
meta_attrs:
- attrs:
- name: target-role
value: Started
- name: is-managed
value: 'true'
- id: cloned-group
resource_ids:
- dummy-3
ha_cluster_resource_clones:
- resource_id: simple-clone
- resource_id: clone-with-options
promotable: yes
id: custom-clone-id
meta_attrs:
- attrs:
- name: clone-max
value: '2'
- name: clone-node-max
value: '1'
- resource_id: cloned-group
promotable: yes
This example playbook file configures a cluster running the firewalld and selinux services. The
cluster includes fencing, several resources, and a resource group. It also includes a resource
clone for the resource group.
When creating your playbook file for production, vault encrypt the password, as described in
Encrypting content with Ansible Vault .
2. Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

3. Run the playbook:

$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
WARNING
The ha_cluster system role replaces any existing cluster configuration on the
specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The systems that you will use as your cluster members have active subscription coverage for
RHEL and the RHEL High Availability Add-On.
The inventory file specifies the cluster nodes as described in Specifying an inventory for the
ha_cluster system role.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Create a high availability cluster that defines resource and resource operation defaults
hosts: node1 node2
roles:
- rhel-system-roles.ha_cluster
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: <password>
ha_cluster_manage_firewall: true
ha_cluster_manage_selinux: true
# Set a different resource-stickiness value during
# and outside work hours. This allows resources to
# automatically move back to their most
# preferred hosts, but at a time that
# does not interfere with business activities.
ha_cluster_resource_defaults:
meta_attrs:
- id: core-hours
rule: date-spec hours=9-16 weekdays=1-5
score: 2
attrs:
- name: resource-stickiness
value: INFINITY
- id: after-hours
score: 1
attrs:
- name: resource-stickiness
value: 0
# Default the timeout on all 10-second-interval
# monitor actions on IPaddr2 resources to 8 seconds.
ha_cluster_resource_operation_defaults:
meta_attrs:
- rule: resource ::IPaddr2 and op monitor interval=10s
score: INFINITY
attrs:
- name: timeout
value: 8s
This example playbook file configures a cluster running the firewalld and selinux services. The cluster includes resource and resource operation defaults.
When creating your playbook file for production, vault encrypt the password, as described in
Encrypting content with Ansible Vault .
2. Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

3. Run the playbook:

$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
WARNING
The ha_cluster system role replaces any existing cluster configuration on the
specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The systems that you will use as your cluster members have active subscription coverage for
RHEL and the RHEL High Availability Add-On.
The inventory file specifies the cluster nodes as described in Specifying an inventory for the
ha_cluster system role. For general information about creating an inventory file, see Preparing a
control node on RHEL 8.
Procedure
1. Store your sensitive variables in an encrypted file:
a. Create the vault:

$ ansible-vault create vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
cluster_password: <cluster_password>
fence1_password: <fence1_password>
fence2_password: <fence2_password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml. This example playbook file configures a
cluster running the firewalld and selinux services.
---
- name: Create a high availability cluster
hosts: node1 node2
vars_files:
- vault.yml
tasks:
- name: Configure a cluster that defines fencing levels
ansible.builtin.include_role:
name: rhel-system-roles.ha_cluster
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: "{{ cluster_password }}"
ha_cluster_manage_firewall: true
ha_cluster_manage_selinux: true
ha_cluster_resource_primitives:
- id: apc1
agent: 'stonith:fence_apc_snmp'
instance_attrs:
- attrs:
- name: ip
value: apc1.example.com
- name: username
value: user
- name: password
value: "{{ fence1_password }}"
- name: pcmk_host_map
value: node1:1;node2:2
- id: apc2
agent: 'stonith:fence_apc_snmp'
instance_attrs:
- attrs:
- name: ip
value: apc2.example.com
- name: username
value: user
- name: password
value: "{{ fence2_password }}"
- name: pcmk_host_map
value: node1:1;node2:2
# Nodes have redundant power supplies, apc1 and apc2. Cluster must
# ensure that when attempting to reboot a node, both power supplies
# are turned off before either power supply is turned back on.
ha_cluster_stonith_levels:
- level: 1
target: node1
resource_ids:
- apc1
- apc2
- level: 1
target: node2
resource_ids:
- apc1
- apc2
3. Validate the playbook syntax:

$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

4. Run the playbook:

$ ansible-playbook --ask-vault-pass ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
Ansible vault
WARNING
The ha_cluster system role replaces any existing cluster configuration on the
specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The systems that you will use as your cluster members have active subscription coverage for
RHEL and the RHEL High Availability Add-On.
The inventory file specifies the cluster nodes as described in Specifying an inventory for the
ha_cluster system role.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Create a high availability cluster with resource constraints
hosts: node1 node2
roles:
- rhel-system-roles.ha_cluster
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: <password>
ha_cluster_manage_firewall: true
ha_cluster_manage_selinux: true
# In order to use constraints, we need resources the constraints will apply
# to.
ha_cluster_resource_primitives:
- id: xvm-fencing
agent: 'stonith:fence_xvm'
instance_attrs:
- attrs:
- name: pcmk_host_list
value: node1 node2
- id: dummy-1
agent: 'ocf:pacemaker:Dummy'
- id: dummy-2
agent: 'ocf:pacemaker:Dummy'
- id: dummy-3
agent: 'ocf:pacemaker:Dummy'
- id: dummy-4
agent: 'ocf:pacemaker:Dummy'
- id: dummy-5
agent: 'ocf:pacemaker:Dummy'
- id: dummy-6
agent: 'ocf:pacemaker:Dummy'
# location constraints
ha_cluster_constraints_location:
# resource ID and node name
- resource:
id: dummy-1
node: node1
options:
- name: score
value: 20
# resource pattern and node name
- resource:
pattern: dummy-\d+
node: node1
options:
- name: score
value: 10
# resource ID and rule
- resource:
id: dummy-2
rule: '#uname eq node2 and date in_range 2022-01-01 to 2022-02-28'
# resource pattern and rule
- resource:
pattern: dummy-\d+
rule: node-type eq weekend and date-spec weekdays=6-7
# colocation constraints
ha_cluster_constraints_colocation:
# simple constraint
- resource_leader:
id: dummy-3
resource_follower:
id: dummy-4
options:
- name: score
value: -5
# set constraint
- resource_sets:
- resource_ids:
- dummy-1
- dummy-2
- resource_ids:
- dummy-5
- dummy-6
options:
- name: sequential
value: "false"
options:
- name: score
value: 20
# order constraints
ha_cluster_constraints_order:
# simple constraint
- resource_first:
id: dummy-1
resource_then:
id: dummy-6
options:
- name: symmetrical
value: "false"
# set constraint
- resource_sets:
- resource_ids:
- dummy-1
- dummy-2
options:
- name: require-all
value: "false"
- name: sequential
value: "false"
- resource_ids:
- dummy-3
- resource_ids:
- dummy-4
- dummy-5
options:
- name: sequential
value: "false"
# ticket constraints
ha_cluster_constraints_ticket:
# simple constraint
- resource:
id: dummy-1
ticket: ticket1
options:
- name: loss-policy
value: stop
# set constraint
- resource_sets:
- resource_ids:
- dummy-3
- dummy-4
- dummy-5
ticket: ticket2
options:
- name: loss-policy
value: fence
This example playbook file configures a cluster running the firewalld and selinux services. The
cluster includes resource location constraints, resource colocation constraints, resource order
constraints, and resource ticket constraints.
When creating your playbook file for production, vault encrypt the password, as described in
Encrypting content with Ansible Vault .
2. Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

3. Run the playbook:

$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
WARNING
The ha_cluster system role replaces any existing cluster configuration on the
specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The systems that you will use as your cluster members have active subscription coverage for
RHEL and the RHEL High Availability Add-On.
The inventory file specifies the cluster nodes as described in Specifying an inventory for the
ha_cluster system role.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Create a high availability cluster that configures Corosync values
hosts: node1 node2
roles:
- rhel-system-roles.ha_cluster
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: <password>
ha_cluster_manage_firewall: true
ha_cluster_manage_selinux: true
ha_cluster_transport:
type: knet
options:
- name: ip_version
value: ipv4-6
- name: link_mode
value: active
links:
-
- name: linknumber
value: 1
- name: link_priority
value: 5
-
- name: linknumber
value: 0
- name: link_priority
value: 10
compression:
- name: level
value: 5
- name: model
value: zlib
crypto:
- name: cipher
value: none
- name: hash
value: none
ha_cluster_totem:
options:
- name: block_unlisted_ips
value: 'yes'
- name: send_join
value: 0
ha_cluster_quorum:
options:
- name: auto_tie_breaker
value: 1
- name: wait_for_all
value: 1
This example playbook file configures a cluster running the firewalld and selinux services that
configures Corosync properties.
When creating your playbook file for production, vault encrypt the password, as described in
Encrypting content with Ansible Vault .
2. Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

3. Run the playbook:

$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
WARNING
The ha_cluster system role replaces any existing cluster configuration on the
specified nodes. Any settings not specified in the playbook will be lost.
This playbook uses an inventory file that loads a watchdog module (supported in RHEL 8.9 and later) as
described in Configuring watchdog and SBD devices in an inventory .
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The systems that you will use as your cluster members have active subscription coverage for
RHEL and the RHEL High Availability Add-On.
The inventory file specifies the cluster nodes as described in Specifying an inventory for the
ha_cluster system role.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Create a high availability cluster that uses SBD node fencing
hosts: node1 node2
roles:
- rhel-system-roles.ha_cluster
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: <password>
ha_cluster_manage_firewall: true
ha_cluster_manage_selinux: true
ha_cluster_sbd_enabled: yes
ha_cluster_sbd_options:
- name: delay-start
value: 'no'
- name: startmode
value: always
- name: timeout-action
value: 'flush,reboot'
- name: watchdog-timeout
value: 30
# Suggested optimal values for SBD timeouts:
# watchdog-timeout * 2 = msgwait-timeout (set automatically)
# msgwait-timeout * 1.2 = stonith-timeout
ha_cluster_cluster_properties:
- attrs:
- name: stonith-timeout
value: 72
ha_cluster_resource_primitives:
- id: fence_sbd
agent: 'stonith:fence_sbd'
instance_attrs:
- attrs:
# taken from host_vars
- name: devices
value: "{{ ha_cluster.sbd_devices | join(',') }}"
- name: pcmk_delay_base
value: 30
This example playbook file configures a cluster running the firewalld and selinux services that
uses SBD fencing and creates the SBD Stonith resource.
When creating your playbook file for production, vault encrypt the password, as described in
Encrypting content with Ansible Vault .
2. Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

3. Run the playbook:

$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
(RHEL 8.8 and later) To configure a high availability cluster with a separate quorum device by using the
ha_cluster system role, first set up the quorum device. After setting up the quorum device, you can use
the device in any number of clusters.
WARNING
The ha_cluster system role replaces any existing cluster configuration on the
specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The system that you will use to run the quorum device has active subscription coverage for
RHEL and the RHEL High Availability Add-On.
The inventory file specifies the quorum devices as described in Specifying an inventory for the
ha_cluster system role.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure a quorum device
hosts: nodeQ
roles:
- rhel-system-roles.ha_cluster
vars:
ha_cluster_cluster_present: false
ha_cluster_hacluster_password: <password>
ha_cluster_manage_firewall: true
ha_cluster_manage_selinux: true
ha_cluster_qnetd:
present: true
This example playbook file configures a quorum device on a system running the firewalld and
selinux services.
When creating your playbook file for production, vault encrypt the password, as described in
Encrypting content with Ansible Vault .
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
WARNING
The ha_cluster system role replaces any existing cluster configuration on the
specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The systems that you will use as your cluster members have active subscription coverage for
RHEL and the RHEL High Availability Add-On.
The inventory file specifies the cluster nodes as described in Specifying an inventory for the
ha_cluster system role.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure a cluster to use a quorum device
hosts: node1 node2
roles:
- rhel-system-roles.ha_cluster
vars:
ha_cluster_cluster_name: my-new-cluster
ha_cluster_hacluster_password: <password>
ha_cluster_manage_firewall: true
ha_cluster_manage_selinux: true
ha_cluster_quorum:
device:
model: net
model_options:
- name: host
value: nodeQ
- name: algorithm
value: lms
This example playbook file configures a cluster running the firewalld and selinux services that
uses a quorum device.
When creating your playbook file for production, vault encrypt the password, as described in
Encrypting content with Ansible Vault .
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
Prerequisites
You have ansible-core installed on the node from which you want to run the playbook.
NOTE
You do not need to have ansible-core installed on the cluster member nodes.
You have the rhel-system-roles package installed on the system from which you want to run
the playbook.
The systems that you will use as your cluster members have active subscription coverage for
RHEL and the RHEL High Availability Add-On.
WARNING
The ha_cluster system role replaces any existing cluster configuration on the
specified nodes. Any settings not specified in the playbook will be lost.
Procedure
1. Create an inventory file specifying the nodes in the cluster, as described in Specifying an
inventory for the ha_cluster system role.
NOTE
When creating your playbook file for production, vault encrypt the password, as
described in Encrypting content with Ansible Vault .
The following example playbook file configures a cluster running the firewalld and selinux
services with node attributes configured for the nodes in the cluster.
roles:
- linux-system-roles.ha_cluster
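For illustration, a minimal sketch of such a playbook, assuming the role's ha_cluster_node_options variable and using placeholder node names and attribute values, might look as follows:
---
- hosts: node1 node2
  roles:
    - linux-system-roles.ha_cluster
  vars:
    ha_cluster_cluster_name: my-new-cluster
    ha_cluster_hacluster_password: <password>
    ha_cluster_manage_firewall: true
    ha_cluster_manage_selinux: true
    ha_cluster_node_options:
      - node_name: node1
        attributes:
          - attrs:
              - name: attribute1
                value: value1A
      - node_name: node2
        attributes:
          - attrs:
              - name: attribute1
                value: value1B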
4. Run the playbook, specifying the path to the inventory file you created in Step 1.
WARNING
The ha_cluster system role replaces any existing cluster configuration on the
specified nodes. Any settings not specified in the playbook will be lost.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The systems that you will use as your cluster members have active subscription coverage for
RHEL and the RHEL High Availability Add-On.
The inventory file specifies the cluster nodes as described in Specifying an inventory for the
ha_cluster system role.
You have configured an LVM logical volume with an XFS file system, as described in Configuring
an LVM volume with an XFS file system in a Pacemaker cluster.
You have configured an Apache HTTP server, as described in Configuring an Apache HTTP
Server.
Your system includes an APC power switch that will be used to fence the cluster nodes.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure active/passive Apache server in a high availability cluster
hosts: z1.example.com z2.example.com
roles:
- rhel-system-roles.ha_cluster
vars:
ha_cluster_hacluster_password: <password>
ha_cluster_cluster_name: my_cluster
ha_cluster_manage_firewall: true
ha_cluster_manage_selinux: true
ha_cluster_fence_agent_packages:
- fence-agents-apc-snmp
ha_cluster_resource_primitives:
- id: myapc
agent: stonith:fence_apc_snmp
instance_attrs:
- attrs:
- name: ipaddr
value: zapc.example.com
- name: pcmk_host_map
value: z1.example.com:1;z2.example.com:2
- name: login
value: apc
- name: passwd
value: apc
- id: my_lvm
agent: ocf:heartbeat:LVM-activate
instance_attrs:
- attrs:
- name: vgname
value: my_vg
- name: vg_access_mode
value: system_id
- id: my_fs
agent: Filesystem
instance_attrs:
- attrs:
- name: device
value: /dev/my_vg/my_lv
- name: directory
value: /var/www
- name: fstype
value: xfs
- id: VirtualIP
agent: IPaddr2
instance_attrs:
- attrs:
- name: ip
value: 198.51.100.3
- name: cidr_netmask
value: 24
- id: Website
agent: apache
instance_attrs:
- attrs:
- name: configfile
value: /etc/httpd/conf/httpd.conf
- name: statusurl
value: http://127.0.0.1/server-status
ha_cluster_resource_groups:
- id: apachegroup
resource_ids:
- my_lvm
- my_fs
- VirtualIP
- Website
This example uses an APC power switch with a host name of zapc.example.com. If the cluster
does not use any other fence agents, you can optionally list only the fence agents your cluster
requires when defining the ha_cluster_fence_agent_packages variable, as in this example.
When creating your playbook file for production, vault encrypt the password, as described in
Encrypting content with Ansible Vault .
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
4. When you use the apache resource agent to manage Apache, it does not use systemd.
Because of this, you must edit the logrotate script supplied with Apache so that it does not use
systemctl to reload Apache.
Remove the following line in the /etc/logrotate.d/httpd file on each node in the cluster.
For RHEL 8.6 and later, replace the line you removed with the following three lines,
specifying /var/run/httpd-website.pid as the PID file path where website is the name of the
Apache resource. In this example, the Apache resource name is Website.
For RHEL 8.5 and earlier, replace the line you removed with the following three lines.
Verification
1. From one of the nodes in the cluster, check the status of the cluster. Note that all four
resources are running on the same node, z1.example.com.
If you find that the resources you configured are not running, you can run the pcs resource
debug-start resource command to test the resource configuration.
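For example, to test the Website resource defined in the playbook above (the resource name is taken from that example):
# pcs resource debug-start Website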
2. Once the cluster is up and running, you can point a browser to the IP address you defined as the
IPaddr2 resource to view the sample display, consisting of the simple word "Hello".
Hello
3. To test whether the resource group running on z1.example.com fails over to node
z2.example.com, put node z1.example.com in standby mode, after which the node will no
longer be able to host resources.
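For example, assuming the pcs utility is used for this step:
# pcs node standby z1.example.com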
4. After putting node z1 in standby mode, check the cluster status from one of the nodes in the
cluster. Note that the resources should now all be running on z2.
The web site at the defined IP address should still display, without interruption.
NOTE
Removing a node from standby mode does not in itself cause the resources to
fail back over to that node. This will depend on the resource-stickiness value for
the resources. For information about the resource-stickiness meta attribute,
see Configuring a resource to prefer its current node .
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ha_cluster/README.md file
/usr/share/doc/rhel-system-roles/ha_cluster/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure persistent logging
hosts: managed-node-01.example.com
vars:
journald_persistent: true
journald_max_disk_size: 2048
journald_per_user: true
journald_sync_interval: 1
roles:
- rhel-system-roles.journald
As a result, the journald service stores your logs persistently on a disk to the maximum size of
2048 MB, and keeps log data separate for each user. The synchronization happens every
minute.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
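As a generic spot check, not part of the original procedure, you can confirm on the managed node that the journal is stored on disk and inspect its size:
# ls /var/log/journal/
# journalctl --disk-usage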
Additional resources
/usr/share/ansible/roles/rhel-system-roles.journald/README.md file
/usr/share/doc/rhel-system-roles/journald/ directory
95
Red Hat Enterprise Linux 8 Automating system administration by using RHEL system roles
Using the kdump role enables you to specify where to save the contents of the system’s memory for
later analysis.
WARNING
The kdump system role replaces the kdump configuration of the managed hosts
entirely by replacing the /etc/kdump.conf file. Additionally, if the kdump role is
applied, all previous kdump settings are also replaced, even if they are not specified
by the role variables, because the role also replaces the /etc/sysconfig/kdump file.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.kdump
vars:
kdump_path: /var/crash
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
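As a generic spot check, not part of the original procedure, you can confirm on the managed node that the configured path is present in /etc/kdump.conf:
# grep ^path /etc/kdump.conf
path /var/crash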
Additional resources
/usr/share/ansible/roles/rhel-system-roles.kdump/README.md file
/usr/share/doc/rhel-system-roles/kdump/ directory
After you run the kernel_settings role from the control machine, the kernel parameters are applied to
the managed systems immediately and persist across reboots.
IMPORTANT
Note that RHEL system roles delivered over RHEL channels are available to RHEL
customers as an RPM package in the default AppStream repository. RHEL system roles
are also available as a collection to customers with Ansible subscriptions over Ansible
Automation Hub.
You can automate the configuration of kernel parameters by using the kernel_settings RHEL system role. The rhel-system-roles package contains this system role, and also the reference documentation.
To apply the kernel parameters on one or more systems in an automated fashion, use the kernel_settings role with one or more of its role variables of your choice in a playbook. A playbook is a human-readable list of one or more plays, written in the YAML format.
You can use an inventory file to define a set of systems that you want Ansible to configure according to
the playbook.
With the role variables, you can configure, for example:
Various kernel subsystems, hardware devices, and device drivers using the kernel_settings_sysfs role variable
The CPU affinity for the systemd service manager and processes it forks using the kernel_settings_systemd_cpu_affinity role variable
Additional resources
/usr/share/ansible/roles/rhel-system-roles.kernel_settings/README.md file
/usr/share/doc/rhel-system-roles/kernel_settings/ directory
Follow these steps to prepare and apply an Ansible playbook to remotely configure kernel parameters with persistent effect on multiple managed operating systems.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure kernel settings
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.kernel_settings
vars:
kernel_settings_sysctl:
- name: fs.file-max
value: 400000
- name: kernel.threads-max
value: 65536
kernel_settings_sysfs:
- name: /sys/class/net/lo/mtu
value: 65000
kernel_settings_transparent_hugepages: madvise
name: optional key which associates an arbitrary string with the play as a label and identifies
what the play is for.
hosts: key in the play which specifies the hosts against which the play is run. The value or
values for this key can be provided as individual names of managed hosts or as groups of
hosts as defined in the inventory file.
vars: section of the playbook which lists the variables with the selected kernel parameter names and the values to set them to.
roles: key which specifies which RHEL system role is going to configure the parameters and values mentioned in the vars section.
NOTE
You can modify the kernel parameters and their values in the playbook to fit
your needs.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
4. Restart your managed hosts and check the affected kernel parameters to verify that the
changes have been applied and persist across reboots.
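For example, the values from the playbook above can be spot-checked on a managed host as follows (a generic verification sketch, not part of the original procedure):
# sysctl fs.file-max kernel.threads-max
# cat /sys/class/net/lo/mtu
# cat /sys/kernel/mm/transparent_hugepage/enabled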
Additional resources
/usr/share/ansible/roles/rhel-system-roles.kernel_settings/README.md file
/usr/share/doc/rhel-system-roles/kernel_settings/ directory
Using Variables
Roles
CHAPTER 15. CONFIGURING LOGGING BY USING THE RHEL SYSTEM ROLE
Logging solutions provide multiple ways of reading logs and multiple logging outputs. For example, a logging system can read logs from:
Local files
systemd/journal
With the logging RHEL system role, you can combine the inputs and outputs to fit your scenario. For
example, you can configure a logging solution that stores inputs from journal in a local file, whereas
inputs read from files are both forwarded to another logging system and stored in the local log files.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.logging/README.md file
/usr/share/doc/rhel-system-roles/logging/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
NOTE
You do not have to have the rsyslog package installed, because the RHEL system role
installs rsyslog when deployed.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Deploying basics input and implicit files output
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.logging
vars:
logging_inputs:
- name: system_input
type: basics
logging_outputs:
- name: files_output
type: files
logging_flows:
- name: flow1
inputs: [system_input]
outputs: [files_output]
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
1. Test the syntax of the /etc/rsyslog.conf file:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run...
rsyslogd: End of config validation run. Bye.
2. Verify that the system sends messages to the log:
# logger test
# cat /var/log/messages
Aug 5 13:48:31 <hostname> root[6778]: test
Where <hostname> is the host name of the client system. Note that the log contains the
user name of the user that entered the logger command, in this case root.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.logging/README.md file
/usr/share/doc/rhel-system-roles/logging/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
NOTE
You do not have to have the rsyslog package installed, because the RHEL system role
installs rsyslog when deployed.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Deploying files input and configured files output
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.logging
vars:
logging_inputs:
- name: files_input
type: basics
logging_outputs:
- name: files_output0
type: files
property: msg
property_op: contains
property_value: error
path: /var/log/errors.log
- name: files_output1
type: files
property: msg
property_op: "!contains"
property_value: error
path: /var/log/others.log
logging_flows:
- name: flow0
inputs: [files_input]
outputs: [files_output0, files_output1]
Using this configuration, all messages that contain the error string are logged in
/var/log/errors.log, and all other messages are logged in /var/log/others.log.
You can replace the error property value with the string by which you want to filter.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
1. Test the syntax of the /etc/rsyslog.conf file:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run...
rsyslogd: End of config validation run. Bye.
2. Verify that the system sends messages that contain the error string to the log:
# logger error
# cat /var/log/errors.log
Aug 5 13:48:31 hostname root[6778]: error
Where hostname is the host name of the client system. Note that the log contains the user
name of the user that entered the logger command, in this case root.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.logging/README.md file
/usr/share/doc/rhel-system-roles/logging/ directory
Follow these steps to prepare and apply a Red Hat Ansible Core playbook to configure a remote logging
solution. In this playbook, one or more clients take logs from systemd-journal and forward them to a
remote server. The server receives remote input from remote_rsyslog and remote_files and outputs
the logs to local files in directories named by remote host names.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
NOTE
You do not have to have the rsyslog package installed, because the RHEL system role
installs rsyslog when deployed.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Deploying remote input and remote_files output
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.logging
vars:
logging_inputs:
- name: remote_udp_input
type: remote
udp_ports: [ 601 ]
- name: remote_tcp_input
type: remote
tcp_ports: [ 601 ]
logging_outputs:
- name: remote_files_output
type: remote_files
logging_flows:
- name: flow_0
inputs: [remote_udp_input, remote_tcp_input]
outputs: [remote_files_output]
The client side of the example defines the following forwarding outputs and flows:
- name: forward_output0
type: forwards
severity: info
target: <host1.example.com>
udp_port: 601
- name: forward_output1
type: forwards
facility: mail
target: <host1.example.com>
tcp_port: 601
logging_flows:
- name: flows0
inputs: [basic_input]
outputs: [forward_output0, forward_output1]
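A complete sketch of this client play, assuming a placeholder client host and a basics input named basic_input as referenced in the flows above, might look like this:
- name: Deploying basics input and forwards output
  hosts: managed-node-02.example.com
  roles:
    - rhel-system-roles.logging
  vars:
    logging_inputs:
      - name: basic_input
        type: basics
    logging_outputs:
      - name: forward_output0
        type: forwards
        severity: info
        target: <host1.example.com>
        udp_port: 601
      - name: forward_output1
        type: forwards
        facility: mail
        target: <host1.example.com>
        tcp_port: 601
    logging_flows:
      - name: flows0
        inputs: [basic_input]
        outputs: [forward_output0, forward_output1]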
NOTE
You can modify the parameters in the playbook to fit your needs.
WARNING
The logging solution works only with the ports defined in the SELinux policy
of the server or client system and open in the firewall. The default SELinux
policy includes ports 601, 514, 6514, 10514, and 20514. To use a different
port, modify the SELinux policy on the client and server systems .
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
1. On both the client and the server system, test the syntax of the /etc/rsyslog.conf file:
# rsyslogd -N 1
rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config
/etc/rsyslog.conf
# logger test
# cat /var/log/<host2.example.com>/messages
Aug 5 13:48:31 <host2.example.com> root[6778]: test
Where <host2.example.com> is the host name of the client system. Note that the log
contains the user name of the user that entered the logger command, in this case root.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.logging/README.md file
/usr/share/doc/rhel-system-roles/logging/ directory
As an administrator, you can use the logging RHEL system role to configure a secure transfer of logs
using Red Hat Ansible Automation Platform.
This procedure creates a private key and certificate, and configures TLS on all hosts in the clients group
in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs
over the network.
NOTE
You do not have to call the certificate RHEL system role in the playbook to create the
certificate. The logging RHEL system role calls it automatically.
In order for the CA to be able to sign the created certificate, the managed nodes must be
enrolled in an IdM domain.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Deploying files input and forwards output with certs
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.logging
vars:
logging_certificates:
- name: logging_cert
dns: ['localhost', 'www.example.com']
ca: ipa
logging_pki_files:
- ca_cert: /local/path/to/ca_cert.pem
cert: /local/path/to/logging_cert.pem
private_key: /local/path/to/logging_cert.pem
logging_inputs:
- name: input_name
type: files
input_log_path: /var/log/containers/*.log
logging_outputs:
- name: output_name
type: forwards
target: your_target_host
tcp_port: 514
tls: true
pki_authmode: x509/name
permitted_server: 'server.example.com'
logging_flows:
- name: flow_name
inputs: [input_name]
outputs: [output_name]
logging_certificates
The value of this parameter is passed on to certificate_requests in the certificate
RHEL system role and used to create a private key and certificate.
logging_pki_files
Using this parameter, you can configure the paths and other settings that logging uses to
find the CA, certificate, and key files used for TLS, specified with one or more of the
following sub-parameters: ca_cert, ca_cert_src, cert, cert_src, private_key,
private_key_src, and tls.
NOTE
If you are using logging_certificates to create the files on the target node, do
not use ca_cert_src, cert_src, and private_key_src, which are used to copy
files not created by logging_certificates.
ca_cert
Represents the path to the CA certificate file on the target node. Default path is
/etc/pki/tls/certs/ca.pem and the file name is set by the user.
cert
Represents the path to the certificate file on the target node. Default path is
/etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
private_key
Represents the path to the private key file on the target node. Default path is
/etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src
Represents the path to the CA certificate file on the control node which is copied to the
target host to the location specified by ca_cert. Do not use this if using
logging_certificates.
cert_src
Represents the path to a certificate file on the control node which is copied to the target
host to the location specified by cert. Do not use this if using logging_certificates.
private_key_src
Represents the path to a private key file on the control node which is copied to the target
host to the location specified by private_key. Do not use this if using logging_certificates.
tls
Setting this parameter to true ensures secure transfer of logs over the network. If you do not
want a secure wrapper, you can set tls: false.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.logging/README.md file
/usr/share/doc/rhel-system-roles/logging/ directory
This procedure creates a private key and certificate, and configures TLS on all hosts in the server group
in the Ansible inventory.
NOTE
You do not have to call the certificate RHEL system role in the playbook to create the
certificate. The logging RHEL system role calls it automatically.
In order for the CA to be able to sign the created certificate, the managed nodes must be
enrolled in an IdM domain.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Deploying remote input and remote_files output with certs
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.logging
vars:
logging_certificates:
- name: logging_cert
dns: ['localhost', 'www.example.com']
ca: ipa
logging_pki_files:
- ca_cert: /local/path/to/ca_cert.pem
cert: /local/path/to/logging_cert.pem
private_key: /local/path/to/logging_cert.pem
logging_inputs:
- name: input_name
type: remote
tcp_ports: 514
tls: true
permitted_clients: ['clients.example.com']
logging_outputs:
- name: output_name
type: remote_files
remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-
replace%.log
async_writing: true
client_count: 20
io_buffer_size: 8192
logging_flows:
- name: flow_name
inputs: [input_name]
outputs: [output_name]
logging_certificates
The value of this parameter is passed on to certificate_requests in the certificate
RHEL system role and used to create a private key and certificate.
logging_pki_files
Using this parameter, you can configure the paths and other settings that logging uses to
find the CA, certificate, and key files used for TLS, specified with one or more of the
following sub-parameters: ca_cert, ca_cert_src, cert, cert_src, private_key,
private_key_src, and tls.
NOTE
If you are using logging_certificates to create the files on the target node, do
not use ca_cert_src, cert_src, and private_key_src, which are used to copy
files not created by logging_certificates.
ca_cert
Represents the path to the CA certificate file on the target node. Default path is
/etc/pki/tls/certs/ca.pem and the file name is set by the user.
cert
Represents the path to the certificate file on the target node. Default path is
/etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
private_key
Represents the path to the private key file on the target node. Default path is
/etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src
Represents the path to the CA certificate file on the control node which is copied to the
target host to the location specified by ca_cert. Do not use this if using
logging_certificates.
cert_src
Represents the path to a certificate file on the control node which is copied to the target
host to the location specified by cert. Do not use this if using logging_certificates.
private_key_src
Represents the path to a private key file on the control node which is copied to the target
host to the location specified by private_key. Do not use this if using logging_certificates.
tls
Setting this parameter to true ensures secure transfer of logs over the network. If you do not
want a secure wrapper, you can set tls: false.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.logging/README.md file
/usr/share/doc/rhel-system-roles/logging/ directory
The RELP sender transfers log entries in the form of commands, and the receiver acknowledges them once they are processed. To ensure consistency, RELP assigns a transaction number to each transferred command for any kind of message recovery.
You can consider a remote logging system in between the RELP client and the RELP server. The RELP client transfers the logs to the remote logging system, and the RELP server receives all the logs sent by the remote logging system.
Administrators can use the logging RHEL system role to configure the logging system to reliably send
and receive log entries.
This procedure configures RELP on all hosts in the clients group in the Ansible inventory. The RELP
configuration uses Transport Layer Security (TLS) to encrypt the message transmission for secure
transfer of logs over the network.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Deploying basic input and relp output
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.logging
vars:
logging_inputs:
- name: basic_input
type: basics
logging_outputs:
- name: relp_client
type: relp
target: logging.server.com
port: 20514
tls: true
ca_cert: /etc/pki/tls/certs/ca.pem
cert: /etc/pki/tls/certs/client-cert.pem
private_key: /etc/pki/tls/private/client-key.pem
pki_authmode: name
permitted_servers:
- '*.server.example.com'
logging_flows:
- name: example_flow
inputs: [basic_input]
outputs: [relp_client]
target
This is a required parameter that specifies the host name where the remote logging system
is running.
port
The port number on which the remote logging system is listening.
tls
Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the tls variable to false. By default, the tls parameter is set to true while working with RELP and requires keys and certificates, provided as the {ca_cert, cert, private_key} and/or {ca_cert_src, cert_src, private_key_src} triplets.
If the {ca_cert, cert, private_key} triplet is set, files are expected to be on the default path before the logging configuration.
If both triplets are set, files are transferred from their local paths on the control node to the specified paths on the managed node.
ca_cert
Represents the path to CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file
name is set by the user.
cert
Represents the path to certificate. Default path is /etc/pki/tls/certs/server-cert.pem and
the file name is set by the user.
private_key
Represents the path to private key. Default path is /etc/pki/tls/private/server-key.pem and
the file name is set by the user.
ca_cert_src
Represents local CA certificate file path which is copied to the target host. If ca_cert is
specified, it is copied to the location.
cert_src
Represents the local certificate file path which is copied to the target host. If cert is
specified, it is copied to the location.
private_key_src
Represents the local key file path which is copied to the target host. If private_key is
specified, it is copied to the location.
pki_authmode
Accepts the authentication mode as name or fingerprint.
permitted_servers
List of servers that will be allowed by the logging client to connect and send logs over TLS.
inputs
List of logging input dictionaries.
outputs
List of logging output dictionaries.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.logging/README.md file
/usr/share/doc/rhel-system-roles/logging/ directory
This procedure configures RELP on all hosts in the server group in the Ansible inventory. The RELP
configuration uses TLS to encrypt the message transmission for secure transfer of logs over the
network.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Deploying remote input and remote_files output
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.logging
vars:
logging_inputs:
- name: relp_server
type: relp
port: 20514
tls: true
ca_cert: /etc/pki/tls/certs/ca.pem
cert: /etc/pki/tls/certs/server-cert.pem
private_key: /etc/pki/tls/private/server-key.pem
pki_authmode: name
permitted_clients:
- '*example.client.com'
logging_outputs:
- name: remote_files_output
type: remote_files
logging_flows:
- name: example_flow
inputs: relp_server
outputs: remote_files_output
port
The port number on which the remote logging system is listening.
tls
Ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set the tls variable to false. By default, the tls parameter is set to true while working with RELP and requires keys and certificates, provided as the {ca_cert, cert, private_key} and/or {ca_cert_src, cert_src, private_key_src} triplets.
If the {ca_cert, cert, private_key} triplet is set, files are expected to be on the default path before the logging configuration.
If both triplets are set, files are transferred from their local paths on the control node to the specified paths on the managed node.
ca_cert
Represents the path to CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file
name is set by the user.
cert
Represents the path to the certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.logging/README.md file
/usr/share/doc/rhel-system-roles/logging/ directory
CHAPTER 16. MONITORING PERFORMANCE BY USING THE RHEL SYSTEM ROLE
Additional resources
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md file
/usr/share/doc/rhel-system-roles/metrics/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
To run the playbook against the local host, the inventory contains the following entry:
localhost ansible_connection=local
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage metrics
hosts: localhost
roles:
- rhel-system-roles.metrics
vars:
metrics_graph_service: yes
metrics_manage_firewall: true
metrics_manage_selinux: true
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
To view a visualization of the metrics being collected on your machine, access the Grafana web interface as described in Accessing the Grafana web UI.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md file
/usr/share/doc/rhel-system-roles/metrics/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure a fleet of machines to monitor themselves
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.metrics
vars:
metrics_retention_days: 0
metrics_manage_firewall: true
metrics_manage_selinux: true
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md file
/usr/share/doc/rhel-system-roles/metrics/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
To run the playbook against the local host, the inventory contains the following entry:
localhost ansible_connection=local
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
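A minimal sketch of such a playbook, assuming you want the Grafana graph service, the query service, and a placeholder list of hosts to monitor centrally (the retention value and host names are examples only):
---
- name: Set up your local machine to centrally monitor a fleet of machines
  hosts: localhost
  roles:
    - rhel-system-roles.metrics
  vars:
    metrics_graph_service: yes
    metrics_query_service: yes
    metrics_retention_days: 10
    metrics_monitored_hosts: ["database.example.com", "webserver.example.com"]
    metrics_manage_firewall: true
    metrics_manage_selinux: true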
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
To view a graphical representation of the metrics being collected centrally by your machine and
to query the data, access the grafana web interface as described in Accessing the Grafana web
UI.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md file
/usr/share/doc/rhel-system-roles/metrics/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Edit an existing playbook file, for example ~/playbook.yml, and add the authentication-related
variables:
---
- name: Set up authentication by using the scram-sha-256 authentication mechanism
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.metrics
vars:
metrics_retention_days: 0
metrics_manage_firewall: true
metrics_manage_selinux: true
metrics_username: <username>
metrics_password: <password>
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
# pminfo -f -h "pcp://managed-node-01.example.com?username=<username>"
disk.dev.read
Password: <password>
disk.dev.read
inst [0 or "sda"] value 19540
Additional resources
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md file
/usr/share/doc/rhel-system-roles/metrics/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
You have installed Microsoft SQL Server for Red Hat Enterprise Linux and established a trusted
connection to an SQL server.
You have installed the Microsoft ODBC driver for SQL Server for Red Hat Enterprise Linux .
To run the playbook against the local host, the inventory contains the following entry:
localhost ansible_connection=local
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure and enable metrics collection for Microsoft SQL Server
hosts: localhost
roles:
- rhel-system-roles.metrics
vars:
metrics_from_mssql: true
metrics_manage_firewall: true
metrics_manage_selinux: true
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Use the pcp command to verify that the SQL Server PMDA agent (mssql) is loaded and running:
# pcp
platform: Linux sqlserver.example.com 4.18.0-167.el8.x86_64 #1 SMP Sun Dec 15 01:24:23
UTC 2019 x86_64
hardware: 2 cpus, 1 disk, 1 node, 2770MB RAM
timezone: PDT+7
services: pmcd pmproxy
pmcd: Version 5.0.2-1, 12 agents, 4 clients
pmda: root pmcd proc pmproxy xfs linux nfsclient mmv kvm mssql
jbd2 dm
pmlogger: primary logger: /var/log/pcp/pmlogger/sqlserver.example.com/20200326.16.31
pmie: primary engine: /var/log/pcp/pmie/sqlserver.example.com/pmie.log
Additional resources
/usr/share/ansible/roles/rhel-system-roles.metrics/README.md file
/usr/share/doc/rhel-system-roles/metrics/ directory
Performance Co-Pilot for Microsoft SQL Server with RHEL 8.2 blog post
NOTE
During the installation, the role adds repositories for SQL Server and related packages to
the managed hosts. Packages in these repositories are provided, maintained, and hosted
by Microsoft.
Depending on the RHEL version on the managed host, the version of SQL Server that you can install
differs:
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The managed node uses one of the following versions: RHEL 7.9, RHEL 8, RHEL 9.4 or later.
You stored the certificate in the sql_crt.pem file in the same directory as the playbook.
You stored the private key in the sql_cert.key file in the same directory as the playbook.
Procedure
1. Store the sensitive variables in an encrypted file:
a. Create the vault:
$ ansible-vault create vault.yml
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
sa_pwd: <sa_password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Installing and configuring Microsoft SQL Server
hosts: managed-node-01.example.com
vars_files:
- vault.yml
tasks:
- name: SQL Server with an existing private key and certificate
ansible.builtin.include_role:
name: microsoft.sql.server
vars:
mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true
mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true
mssql_accept_microsoft_sql_server_standard_eula: true
mssql_version: 2022
mssql_password: "{{ sa_pwd }}"
mssql_edition: Developer
mssql_tcp_port: 1433
mssql_manage_firewall: true
mssql_tls_enable: true
mssql_tls_cert: sql_crt.pem
mssql_tls_private_key: sql_cert.key
mssql_tls_version: 1.2
mssql_tls_force: true
mssql_tls_enable: true
Enables TLS encryption. If you enable this setting, you must also define mssql_tls_cert and
mssql_tls_private_key.
mssql_tls_cert: <path>
Sets the path to the TLS certificate stored on the control node. The role copies this file to
the /etc/pki/tls/certs/ directory on the managed node.
mssql_tls_private_key: <path>
Sets the path to the TLS private key on the control node. The role copies this file to the
/etc/pki/tls/private/ directory on the managed node.
mssql_tls_force: true
Replaces the TLS certificate and private key in their destination directories if they exist.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/microsoft.sql-server/README.md file on the control node.
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On the SQL Server host, use the sqlcmd utility with the -N parameter to establish an encrypted
connection to SQL server and run a query, for example:
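A sketch of such a check, assuming the sa account, the mssql-tools package in its default location, and a placeholder server name:
$ /opt/mssql-tools/bin/sqlcmd -N -S server.example.com -U sa -P <sa_password> -Q 'SELECT SYSTEM_USER'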
If the command succeeds, the connection to the server was TLS encrypted.
Additional resources
/usr/share/ansible/roles/microsoft.sql-server/README.md file
By using the microsoft.sql.server Ansible system role, you can automate this process. You can remotely
install and configure SQL Server with TLS encryption, and the microsoft.sql.server role uses the
certificate Ansible system role to configure certmonger and request a certificate from IdM.
Depending on the RHEL version on the managed host, the version of SQL Server that you can install
differs:
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The managed node uses one of the following versions: RHEL 7.9, RHEL 8, RHEL 9.4 or later.
You enrolled the managed node in a Red Hat Identity Management (IdM) domain.
Procedure
1. Store the sensitive variables in an encrypted file:
a. Create the vault:
$ ansible-vault create vault.yml
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
sa_pwd: <sa_password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Installing and configuring Microsoft SQL Server
hosts: managed-node-01.example.com
vars_files:
- vault.yml
tasks:
- name: SQL Server with certificates issued by Red Hat IdM
ansible.builtin.include_role:
name: microsoft.sql.server
vars:
mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true
mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true
mssql_accept_microsoft_sql_server_standard_eula: true
mssql_version: 2022
mssql_password: "{{ sa_pwd }}"
mssql_edition: Developer
mssql_tcp_port: 1433
mssql_manage_firewall: true
mssql_tls_enable: true
mssql_tls_certificates:
- name: sql_cert
dns: server.example.com
ca: ipa
mssql_tls_enable: true
Enables TLS encryption. If you enable this setting, you must also define
mssql_tls_certificates.
mssql_tls_certificates
A list of YAML dictionaries with settings for the certificate role.
name: <file_name>
Defines the base name of the certificate and private key. The certificate role stores the
certificate in the /etc/pki/tls/certs/<file_name>.crt and the private key in the
/etc/pki/tls/private/<file_name>.key file.
dns: <hostname_or_list_of_hostnames>
Sets the hostnames that the Subject Alternative Names (SAN) field in the issued certificate
contains. You can use a wildcard (*) or specify multiple names in YAML list format.
ca: <ca_type>
Defines how the certificate role requests the certificate. Set the variable to ipa if the host is
enrolled in an IdM domain or self-sign to request a self-signed certificate.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/microsoft.sql-server/README.md file on the control node.
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On the SQL Server host, use the sqlcmd utility with the -N parameter to establish an encrypted
connection to SQL server and run a query, for example:
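As in the previous section, a sketch of such a check, assuming the sa account and a placeholder server name:
$ /opt/mssql-tools/bin/sqlcmd -N -S server.example.com -U sa -P <sa_password> -Q 'SELECT SYSTEM_USER'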
If the command succeeds, the connection to the server was TLS encrypted.
Additional resources
/usr/share/ansible/roles/microsoft.sql-server/README.md file
IMPORTANT
If you change the data or log path and re-run the playbook, the previously used
directories and all their content remain at the original path. Only new databases and logs
are stored in the new location.
Table 17.1. SQL Server default settings for data and log directories
[a] If the directory exists, the role preserves the mode. If the directory does not exist, the role applies the default umask
on the managed node when it creates the directory.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The managed node uses one of the following versions: RHEL 7.9, RHEL 8, RHEL 9.4 or later.
Procedure
1. Store the sensitive variables in an encrypted file:
a. Create the vault:
$ ansible-vault create vault.yml
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
sa_pwd: <sa_password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Edit an existing playbook file, for example ~/playbook.yml, and add the storage and log-related
variables:
---
- name: Installing and configuring Microsoft SQL Server
hosts: managed-node-01.example.com
vars_files:
- vault.yml
tasks:
- name: SQL Server with custom storage paths
ansible.builtin.include_role:
name: microsoft.sql.server
vars:
mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true
mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true
mssql_accept_microsoft_sql_server_standard_eula: true
mssql_version: 2022
mssql_password: "{{ sa_pwd }}"
mssql_edition: Developer
mssql_tcp_port: 1433
mssql_manage_firewall: true
mssql_datadir: /var/lib/mssql/
mssql_datadir_mode: '0700'
mssql_logdir: /var/log/mssql/
mssql_logdir_mode: '0700'
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/microsoft.sql-server/README.md file on the control node.
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
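As a generic spot check, not part of the original text, you can list the custom directories on the managed node and confirm their modes:
$ ls -ld /var/lib/mssql/ /var/log/mssql/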
Additional resources
/usr/share/ansible/roles/microsoft.sql-server/README.md file
Depending on the RHEL version on the managed host, the version of SQL Server that you can install
differs:
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The managed node uses one of the following versions: RHEL 7.9, RHEL 8, RHEL 9.4 or later.
A reverse DNS (RDNS) zone exists in AD, and it contains Pointer (PTR) resource records for
each AD domain controller (DC).
Both the hostnames and the fully-qualified domain names (FQDNs) of the AD DCs resolve
to their IP addresses.
Procedure
1. Store the sensitive variables in an encrypted file:
a. Create the vault:
$ ansible-vault create vault.yml
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
sa_pwd: <sa_password>
sql_pwd: <SQL_AD_password>
ad_admin_pwd: <AD_admin_password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Installing and configuring Microsoft SQL Server
hosts: managed-node-01.example.com
vars_files:
- vault.yml
tasks:
- name: SQL Server with AD authentication
ansible.builtin.include_role:
name: microsoft.sql.server
vars:
mssql_accept_microsoft_odbc_driver_17_for_sql_server_eula: true
mssql_accept_microsoft_cli_utilities_for_sql_server_eula: true
mssql_accept_microsoft_sql_server_standard_eula: true
mssql_version: 2022
mssql_password: "{{ sa_pwd }}"
mssql_edition: Developer
mssql_tcp_port: 1433
mssql_manage_firewall: true
mssql_ad_configure: true
mssql_ad_join: true
mssql_ad_sql_user: sqluser
mssql_ad_sql_password: "{{ sql_pwd }}"
ad_integration_realm: ad.example.com
ad_integration_user: Administrator
ad_integration_password: "{{ ad_admin_pwd }}"
mssql_ad_configure: true
Enables authentication against AD.
mssql_ad_join: true
Uses the ad_integration RHEL system role to join the managed node to AD. The role uses
the settings from the ad_integration_realm, ad_integration_user, and
ad_integration_password variables to join the domain.
mssql_ad_sql_user: <username>
Sets the name of an AD account that the role should create in AD and SQL Server for
administration purposes.
ad_integration_user: <AD_user>
Sets the name of an AD user with privileges to join machines to the domain and to create the
AD user specified in mssql_ad_sql_user.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/microsoft.sql-server/README.md file on the control node.
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook ~/playbook.yml
5. In your AD domain, enable 128 bit and 256 bit Kerberos authentication for the AD SQL user
which you specified in the playbook. Use one of the following options:
In the user account properties in the Active Directory Users and Computers application, in the Account options list, select This account supports Kerberos AES 128 bit encryption and This account supports Kerberos AES 256 bit encryption.
6. Authorize AD users that should be able to authenticate to SQL Server. On the SQL Server,
perform the following steps:
a. Obtain a Kerberos ticket:
$ kinit <AD_user>@ad.example.com
b. Authorize an AD user:
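A sketch of this step, assuming sqlcmd authenticates with the Kerberos ticket obtained above and that AD is the NetBIOS name of the domain (both the command and the T-SQL statement are assumptions, not the original text):
$ /opt/mssql-tools/bin/sqlcmd -E -S server.example.com -Q 'CREATE LOGIN [AD\<AD_user>] FROM WINDOWS;'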
Repeat this step for every AD user who should be able to access SQL Server.
Verification
a. Obtain a Kerberos ticket for an AD user:
$ kinit <AD_user>@ad.example.com
b. Use the sqlcmd utility to log in to SQL Server and run a query, for example:
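For example, assuming integrated (Kerberos) authentication and a placeholder server name:
$ /opt/mssql-tools/bin/sqlcmd -E -S server.example.com -Q 'SELECT SYSTEM_USER'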
Additional resources
/usr/share/ansible/roles/microsoft.sql-server/README.md file
CHAPTER 18. CONFIGURING NBDE BY USING RHEL SYSTEM ROLES
RHEL 8.3 introduced Ansible roles for automated deployments of Policy-Based Decryption (PBD)
solutions using Clevis and Tang. The rhel-system-roles package contains these system roles, related
examples, and also the reference documentation.
The nbde_client system role enables you to deploy multiple Clevis clients in an automated way. Note
that the nbde_client role supports only Tang bindings, and you cannot use it for TPM2 bindings at the
moment.
The nbde_client role requires volumes that are already encrypted using LUKS. This role supports
binding a LUKS-encrypted volume to one or more Network-Bound Disk Encryption (NBDE) servers, that is, Tang servers. You can
either preserve the existing volume encryption with a passphrase or remove it. After removing the
passphrase, you can unlock the volume only using NBDE. This is useful when a volume is initially
encrypted using a temporary key or password that you should remove after you provision the system.
If you provide both a passphrase and a key file, the role uses what you have provided first. If it does not
find any of these valid, it attempts to retrieve a passphrase from an existing binding.
PBD defines a binding as a mapping of a device to a slot. This means that you can have multiple bindings
for the same device. The default slot is slot 1.
The nbde_client role also provides the state variable. Use the present value to create a new binding or to update an existing one. Unlike the clevis luks bind command, you can also use state: present to overwrite an existing binding in its device slot. The absent value removes a specified binding.
By using the nbde_server system role, you can deploy and manage a Tang server as part of an automated disk encryption solution.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.nbde_server/README.md file
/usr/share/ansible/roles/rhel-system-roles.nbde_client/README.md file
/usr/share/doc/rhel-system-roles/nbde_server/ directory
/usr/share/doc/rhel-system-roles/nbde_client/ directory
You can prepare and apply an Ansible playbook containing your Tang server settings.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.nbde_server
vars:
nbde_server_rotate_keys: yes
nbde_server_manage_firewall: true
nbde_server_manage_selinux: true
This example playbook ensures that your Tang server is deployed and that its keys are rotated.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
To ensure that networking for a Tang pin is available during early boot, use the grubby tool on the systems where Clevis is installed:
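For example, the following sketch adds the rd.neednet=1 kernel argument to all kernel entries so that the initial ramdisk brings up networking; verify that this argument fits your environment before you apply it:
# grubby --update-kernel=ALL --args="rd.neednet=1"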
Additional resources
/usr/share/ansible/roles/rhel-system-roles.nbde_server/README.md file
/usr/share/doc/rhel-system-roles/nbde_server/ directory
With the nbde_client RHEL system role, you can prepare and apply an Ansible playbook that contains
your Clevis client settings on multiple systems.
NOTE
The nbde_client system role supports only Tang bindings. Therefore, you cannot use it
for TPM2 bindings.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.nbde_client
vars:
nbde_client_bindings:
- device: /dev/rhel/root
encryption_key_src: /etc/luks/keyfile
servers:
- http://server1.example.com
- http://server2.example.com
- device: /dev/rhel/swap
encryption_key_src: /etc/luks/keyfile
servers:
- http://server1.example.com
- http://server2.example.com
This example playbook configures Clevis clients for automated unlocking of two LUKS-encrypted volumes when at least one of two Tang servers is available.
The nbde_client system role supports only scenarios with Dynamic Host Configuration
Protocol (DHCP). To use NBDE for clients with a static IP configuration, use the following playbook:
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.nbde_client
vars:
nbde_client_bindings:
- device: /dev/rhel/root
encryption_key_src: /etc/luks/keyfile
servers:
- http://server1.example.com
- http://server2.example.com
- device: /dev/rhel/swap
encryption_key_src: /etc/luks/keyfile
servers:
- http://server1.example.com
- http://server2.example.com
tasks:
- name: Configure a client with a static IP address during early boot
ansible.builtin.command:
cmd: grubby --update-kernel=ALL --args='GRUB_CMDLINE_LINUX_DEFAULT="ip={{
<ansible_default_ipv4.address> }}::{{ <ansible_default_ipv4.gateway> }}:{{
<ansible_default_ipv4.netmask> }}::{{ <ansible_default_ipv4.alias> }}:none"'
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.nbde_client/README.md file
/usr/share/doc/rhel-system-roles/nbde_client/ directory
Looking forward to Linux network configuration in the initial ramdisk (initrd) article
CHAPTER 19. CONFIGURING NETWORK SETTINGS BY USING THE RHEL SYSTEM ROLE
You can use the network RHEL system role to configure an Ethernet connection with static IP
addresses, gateways, and DNS settings, and assign them to a specified interface name.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: Ethernet connection profile with static IP address settings
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
- name: enp1s0
interface_name: enp1s0
type: ethernet
autoconnect: yes
ip:
address:
- 192.0.2.1/24
- 2001:db8:1::1/64
gateway4: 192.0.2.254
gateway6: 2001:db8:1::fffe
dns:
- 192.0.2.200
- 2001:db8:1::ffbb
dns_search:
- example.com
state: up
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify the active network settings:
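For example, a hedged sketch that gathers the facts with an ad hoc command; it assumes that managed-node-01.example.com is listed in your inventory:
# ansible managed-node-01.example.com -m ansible.builtin.setup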
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
You can use the network RHEL system role to configure an Ethernet connection with static IP
addresses, gateways, and DNS settings, and assign them to a device based on its path instead of its
name.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
You know the path of the device. You can display the device path by using the udevadm info
/sys/class/net/<device_name> | grep ID_PATH= command.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: Ethernet connection profile with static IP address settings
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
- name: example
match:
path:
- pci-0000:00:0[1-3].0
- &!pci-0000:00:02.0
type: ethernet
autoconnect: yes
ip:
address:
- 192.0.2.1/24
- 2001:db8:1::1/64
gateway4: 192.0.2.254
gateway6: 2001:db8:1::fffe
dns:
- 192.0.2.200
- 2001:db8:1::ffbb
dns_search:
- example.com
state: up
match
Defines that a condition must be met in order to apply the settings. You can only use this
variable with the path option.
path
Defines the persistent path of a device. You can set it as a fixed path or an expression. Its
value can contain modifiers and wildcards. The example applies the settings to devices that
match PCI ID 0000:00:0[1-3].0, but not 0000:00:02.0.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify the active network settings:
"mtu": 1500,
"netmask": "255.255.255.0",
"network": "192.0.2.0",
"prefix": "24",
"type": "ether"
},
"ansible_default_ipv6": {
"address": "2001:db8:1::1",
"gateway": "2001:db8:1::fffe",
"interface": "enp1s0",
"macaddress": "52:54:00:17:b8:b6",
"mtu": 1500,
"prefix": "64",
"scope": "global",
"type": "ether"
},
...
"ansible_dns": {
"nameservers": [
"192.0.2.1",
"2001:db8:1::ffbb"
],
"search": [
"example.com"
]
},
...
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
You can use the network RHEL system role to configure an Ethernet connection that retrieves its IP
addresses, gateways, and DNS settings from a DHCP server and IPv6 stateless address
autoconfiguration (SLAAC). With this role you can assign the connection profile to the specified
interface name.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The managed nodes use the NetworkManager service to configure the network.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: Ethernet connection profile with dynamic IP address settings
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
- name: enp1s0
interface_name: enp1s0
type: ethernet
autoconnect: yes
ip:
dhcp4: yes
auto6: yes
state: up
dhcp4: yes
Enables automatic IPv4 address assignment from DHCP, PPP, or similar services.
auto6: yes
Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements.
If the router announces the managed flag, NetworkManager requests an IPv6 address and
prefix from a DHCPv6 server.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify that the interface received IP
addresses and DNS settings:
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
You can use the network RHEL system role to configure an Ethernet connection that retrieves its IP
addresses, gateways, and DNS settings from a DHCP server and IPv6 stateless address
autoconfiguration (SLAAC). The role can assign the connection profile to a device based on its path
instead of an interface name.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
You know the path of the device. You can display the device path by using the udevadm info
/sys/class/net/<device_name> | grep ID_PATH= command.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: Ethernet connection profile with dynamic IP address settings
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
- name: example
match:
path:
- pci-0000:00:0[1-3].0
- &!pci-0000:00:02.0
type: ethernet
autoconnect: yes
ip:
dhcp4: yes
auto6: yes
state: up
match: path
Defines that a condition must be met in order to apply the settings. You can only use this
variable with the path option.
path: <path_and_expressions>
Defines the persistent path of a device. You can set it as a fixed path or an expression. Its
value can contain modifiers and wildcards. The example applies the settings to devices that
match PCI ID 0000:00:0[1-3].0, but not 0000:00:02.0.
dhcp4: yes
Enables automatic IPv4 address assignment from DHCP, PPP, or similar services.
auto6: yes
Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements.
If the router announces the managed flag, NetworkManager requests an IPv6 address and
prefix from a DHCPv6 server.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify that the interface received IP
addresses and DNS settings:
...
"ansible_dns": {
"nameservers": [
"192.0.2.1",
"2001:db8:1::ffbb"
],
"search": [
"example.com"
]
},
...
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
You can use the network RHEL system role to configure VLAN tagging and, if a connection profile for
the VLAN’s parent device does not exist, the role can create it as well.
NOTE
If the VLAN device requires an IP address, default gateway, and DNS settings, configure
them on the VLAN device and not on the parent device.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: VLAN connection profile with Ethernet port
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
# Ethernet profile
- name: enp1s0
type: ethernet
interface_name: enp1s0
autoconnect: yes
state: up
ip:
dhcp4: no
auto6: no
# VLAN profile
- name: enp1s0.10
type: vlan
vlan:
id: 10
ip:
dhcp4: yes
auto6: yes
parent: enp1s0
state: up
type: <profile_type>
Sets the type of the profile to create. The example playbook creates two connection
profiles: One for the parent Ethernet device and one for the VLAN device.
dhcp4: <value>
If set to yes, automatic IPv4 address assignment from DHCP, PPP, or similar services is
enabled. Disable the IP address configuration on the parent device.
auto6: <value>
If set to yes, IPv6 auto-configuration is enabled. In this case, by default, NetworkManager
uses Router Advertisements and, if the router announces the managed flag,
NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. Disable the IP
address configuration on the parent device.
parent: <parent_device>
Sets the parent device of the VLAN connection profile. In the example, the parent is the
Ethernet interface.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
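As a hedged sketch, you can display the settings of the VLAN device on the managed node; the interface name enp1s0.10 matches the example playbook:
# ansible managed-node-01.example.com -m command -a 'ip -d addr show enp1s0.10'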
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
You can use the network RHEL system role to configure a bridge and, if a connection profile for the
bridge’s parent device does not exist, the role can create it as well.
NOTE
If you want to assign IP addresses, gateways, and DNS settings to a bridge, configure
them on the bridge and not on its ports.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Two or more physical or virtual network devices are installed on the server.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: Bridge connection profile with two Ethernet ports
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
# Bridge profile
- name: bridge0
type: bridge
interface_name: bridge0
ip:
dhcp4: yes
auto6: yes
state: up
type: <profile_type>
Sets the type of the profile to create. The example playbook creates three connection
profiles: One for the bridge and two for the Ethernet devices.
dhcp4: yes
Enables automatic IPv4 address assignment from DHCP, PPP, or similar services.
auto6: yes
Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements.
If the router announces the managed flag, NetworkManager requests an IPv6 address and
prefix from a DHCPv6 server.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
1. Display the link status of Ethernet devices that are ports of a specific bridge:
2. Display the status of Ethernet devices that are ports of any bridge device:
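A hedged sketch of both checks run as ad hoc commands; the bridge name bridge0 matches the example playbook:
# ansible managed-node-01.example.com -m command -a 'ip link show master bridge0'
# ansible managed-node-01.example.com -m command -a 'bridge link show'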
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
You can use the network RHEL system role to configure a network bond and, if a connection profile for
the bond’s parent device does not exist, the role can create it as well.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Two or more physical or virtual network devices are installed on the server.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: Bond connection profile with two Ethernet ports
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
# Bond profile
- name: bond0
type: bond
interface_name: bond0
ip:
dhcp4: yes
auto6: yes
bond:
mode: active-backup
state: up
type: <profile_type>
Sets the type of the profile to create. The example playbook creates three connection
profiles: One for the bond and two for the Ethernet devices.
dhcp4: yes
Enables automatic IPv4 address assignment from DHCP, PPP, or similar services.
auto6: yes
Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements.
If the router announces the managed flag, NetworkManager requests an IPv6 address and
prefix from a DHCPv6 server.
mode: <bond_mode>
Sets the bonding mode. Possible values are:
balance-rr (default)
active-backup
balance-xor
broadcast
802.3ad
balance-tlb
balance-alb
Depending on the mode you set, you need to set additional variables in the playbook.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
Temporarily remove the network cable from one of the network devices and check whether the other device in the bond handles the traffic.
Note that there is no method to properly test link failure events using software utilities. Tools
that deactivate connections, such as nmcli, show only the bonding driver’s ability to handle port
configuration changes and not actual link failure events.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
You can use the network RHEL system role to configure IPoIB and, if a connection profile for the
InfiniBand’s parent device does not exist, the role can create it as well.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: IPoIB connection profile with static IP address settings
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
# InfiniBand connection mlx4_ib0
- name: mlx4_ib0
interface_name: mlx4_ib0
type: infiniband
type: <profile_type>
Sets the type of the profile to create. The example playbook creates two connection
profiles: One for the InfiniBand connection and one for the IPoIB device.
parent: <parent_device>
Sets the parent device of the IPoIB connection profile.
p_key: <value>
Sets the InfiniBand partition key. If you set this variable, do not set interface_name on the
IPoIB device.
transport_mode: <mode>
Sets the IPoIB connection operation mode. You can set this variable to datagram (default)
or connected.
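As a hedged sketch, an IPoIB device entry that uses the parent, p_key, and transport_mode variables could be added to network_connections as follows; the device name, partition key, and IP address are placeholders:
# IPoIB device mlx4_ib0.8002 on top of mlx4_ib0
- name: mlx4_ib0.8002
  type: infiniband
  autoconnect: yes
  infiniband:
    p_key: 0x8002
    transport_mode: datagram
  parent: mlx4_ib0
  ip:
    address:
      - 192.0.2.1/24
  state: up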
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
You can use policy-based routing to route traffic from specific subnets through different providers. For example, traffic from the internal workstations subnet is routed to provider B, while all other traffic uses provider A as the default route. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook.
You can use the network RHEL system role to configure the connection profiles, including routing tables and rules.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The managed node that you want to configure has four network interfaces:
The enp7s0 interface is connected to the network of provider A. The gateway IP in the
provider’s network is 198.51.100.2, and the network uses a /30 network mask.
The enp1s0 interface is connected to the network of provider B. The gateway IP in the
provider’s network is 192.0.2.2, and the network uses a /30 network mask.
The enp8s0 interface is connected to the 10.0.0.0/24 subnet with internal workstations.
The enp9s0 interface is connected to the 203.0.113.0/24 subnet with the company’s
servers.
Hosts in the internal workstations subnet use 10.0.0.1 as the default gateway. In the procedure,
you assign this IP address to the enp8s0 network interface of the router.
Hosts in the server subnet use 203.0.113.1 as the default gateway. In the procedure, you assign
this IP address to the enp9s0 network interface of the router.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configuring policy-based routing
hosts: managed-node-01.example.com
tasks:
- name: Routing traffic from a specific subnet to a different default gateway
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
- name: Provider-A
interface_name: enp7s0
type: ethernet
autoconnect: True
ip:
address:
- 198.51.100.1/30
gateway4: 198.51.100.2
dns:
- 198.51.100.200
state: up
zone: external
- name: Provider-B
interface_name: enp1s0
type: ethernet
autoconnect: True
ip:
address:
- 192.0.2.1/30
route:
- network: 0.0.0.0
prefix: 0
gateway: 192.0.2.2
table: 5000
state: up
zone: external
- name: Internal-Workstations
interface_name: enp8s0
type: ethernet
autoconnect: True
ip:
address:
- 10.0.0.1/24
route:
- network: 10.0.0.0
prefix: 24
table: 5000
routing_rule:
- priority: 5
from: 10.0.0.0/24
table: 5000
state: up
zone: trusted
- name: Servers
interface_name: enp9s0
type: ethernet
autoconnect: True
ip:
address:
- 203.0.113.1/24
state: up
zone: trusted
table: <value>
Assigns the route from the same list entry as the table variable to the specified routing table.
routing_rule: <list>
Defines the priority of the specified routing rule and the routing table to which traffic from a connection profile is assigned.
zone: <zone_name>
Assigns the network interface from a connection profile to the specified firewalld zone.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
b. Use the traceroute utility to display the route to a host on the internet:
# traceroute redhat.com
traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets
1 10.0.0.1 (10.0.0.1) 0.337 ms 0.260 ms 0.223 ms
2 192.0.2.1 (192.0.2.1) 0.884 ms 1.066 ms 1.248 ms
...
The output of the command displays that the router sends packets over 192.0.2.1, which is
the network of provider B.
b. Use the traceroute utility to display the route to a host on the internet:
# traceroute redhat.com
traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets
1 203.0.113.1 (203.0.113.1) 2.179 ms 2.073 ms 1.944 ms
2 198.51.100.2 (198.51.100.2) 1.868 ms 1.798 ms 1.549 ms
...
The output of the command displays that the router sends packets over 198.51.100.2, which
is the network of provider A.
3. On the RHEL router that you configured using the RHEL system role:
# ip rule list
0: from all lookup local
5: from 10.0.0.0/24 lookup 5000
32766: from all lookup main
32767: from all lookup default
By default, RHEL contains rules for the tables local, main, and default.
# firewall-cmd --get-active-zones
external
interfaces: enp1s0 enp7s0
trusted
interfaces: enp8s0 enp9s0
# firewall-cmd --info-zone=external
external (active)
target: default
icmp-block-inversion: no
interfaces: enp1s0 enp7s0
sources:
services: ssh
ports:
protocols:
masquerade: yes
...
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
You can use an Ansible playbook to copy a private key, a certificate, and the CA certificate to the client,
and then use the network RHEL system role to configure a connection profile with 802.1X network
authentication.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The following files required for the TLS authentication exist on the control node:
Procedure
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
pwd: <password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure an Ethernet connection with 802.1X authentication
hosts: managed-node-01.example.com
vars_files:
- vault.yml
tasks:
- name: Copy client key for 802.1X authentication
ansible.builtin.copy:
src: "/srv/data/client.key"
dest: "/etc/pki/tls/private/client.key"
mode: 0600
- name: Ethernet connection profile with static IP address settings and 802.1X
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
- name: enp1s0
type: ethernet
autoconnect: yes
ip:
address:
- 192.0.2.1/24
- 2001:db8:1::1/64
gateway4: 192.0.2.254
gateway6: 2001:db8:1::fffe
dns:
- 192.0.2.200
- 2001:db8:1::ffbb
dns_search:
- example.com
ieee802_1x:
identity: <user_name>
eap: tls
private_key: "/etc/pki/tls/private/client.key"
private_key_password: "{{ pwd }}"
client_cert: "/etc/pki/tls/certs/client.crt"
ca_cert: "/etc/pki/ca-trust/source/anchors/ca.crt"
domain_suffix_match: example.com
state: up
ieee802_1x
This variable contains the 802.1X-related settings.
eap: tls
Configures the profile to use the certificate-based TLS authentication method for the
Extensible Authentication Protocol (EAP).
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
Verification
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
Ansible vault
In most situations, administrators set the default gateway when they create a connection. However, you
can also set or update the default gateway setting on a previously-created connection.
WARNING
You cannot use the network RHEL system role to update only specific values in an
existing connection profile. The role ensures that a connection profile exactly
matches the settings in a playbook. If a connection profile with the same name
already exists, the role applies the settings from the playbook and resets all other
settings in the profile to their defaults. To prevent resetting values, always specify
the whole configuration of the network connection profile in the playbook, including
the settings that you do not want to change.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: Ethernet connection profile with static IP address settings
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
- name: enp1s0
type: ethernet
autoconnect: yes
ip:
address:
- 198.51.100.20/24
- 2001:db8:1::1/64
gateway4: 198.51.100.254
gateway6: 2001:db8:1::fffe
dns:
- 198.51.100.200
- 2001:db8:1::ffbb
dns_search:
- example.com
state: up
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify the active network settings:
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
WARNING
You cannot use the network RHEL system role to update only specific values in an
existing connection profile. The role ensures that a connection profile exactly
matches the settings in a playbook. If a connection profile with the same name
already exists, the role applies the settings from the playbook and resets all other
settings in the profile to their defaults. To prevent resetting values, always specify
the whole configuration of the network connection profile in the playbook, including
the settings that you do not want to change.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: Ethernet connection profile with static IP address settings
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
- name: enp7s0
type: ethernet
autoconnect: yes
ip:
address:
- 192.0.2.1/24
- 2001:db8:1::1/64
gateway4: 192.0.2.254
gateway6: 2001:db8:1::fffe
dns:
- 192.0.2.200
- 2001:db8:1::ffbb
dns_search:
- example.com
route:
- network: 198.51.100.0
prefix: 24
gateway: 192.0.2.10
- network: 2001:db8:2::
prefix: 64
gateway: 2001:db8:1::10
state: up
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
Network interface controllers can use the TCP offload engine (TOE) to offload processing certain
operations to the network controller. This improves the network throughput. You configure offload
features in the connection profile of the network interface. By using Ansible and the network RHEL
system role, you can automate this process and remotely configure connection profiles on the hosts
defined in a playbook.
WARNING
You cannot use the network RHEL system role to update only specific values in an
existing connection profile. The role ensures that a connection profile exactly
matches the settings in a playbook. If a connection profile with the same name
already exists, the role applies the settings from the playbook and resets all other
settings in the profile to their defaults. To prevent resetting values, always specify
the whole configuration of the network connection profile in the playbook, including
the settings that you do not want to change.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: Ethernet connection profile with dynamic IP address settings and offload features
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
- name: enp1s0
type: ethernet
autoconnect: yes
ip:
dhcp4: yes
auto6: yes
ethtool:
features:
gro: no
gso: yes
tx_sctp_segmentation: no
state: up
gro: no
Disables Generic receive offload (GRO).
gso: yes
Enables Generic segmentation offload (GSO).
tx_sctp_segmentation: no
Disables TX stream control transmission protocol (SCTP) segmentation.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
Query the Ansible facts of the managed node and verify the offload settings:
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
By using interrupt coalescing, the system collects network packets and generates a single interrupt for
multiple packets. This increases the amount of data sent to the kernel with one hardware interrupt, which
reduces the interrupt load, and maximizes the throughput. You configure coalesce settings in the
connection profile of the network interface. By using Ansible and the network RHEL system role, you can
automate this process and remotely configure connection profiles on the hosts defined in a playbook.
WARNING
You cannot use the network RHEL system role to update only specific values in an
existing connection profile. The role ensures that a connection profile exactly
matches the settings in a playbook. If a connection profile with the same name
already exists, the role applies the settings from the playbook and resets all other
settings in the profile to their defaults. To prevent resetting values, always specify
the whole configuration of the network connection profile in the playbook, including
the settings that you do not want to change.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: Ethernet connection profile with dynamic IP address settings and coalesce
settings
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
- name: enp1s0
type: ethernet
autoconnect: yes
ip:
dhcp4: yes
auto6: yes
ethtool:
coalesce:
rx_frames: 128
tx_frames: 128
state: up
rx_frames: <value>
Sets the number of RX frames.
tx_frames: <value>
Sets the number of TX frames.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
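As a hedged sketch, you can check the coalesce settings of the interface on the managed node with ethtool; the interface name matches the example playbook:
# ansible managed-node-01.example.com -m command -a 'ethtool --show-coalesce enp1s0'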
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
Ring buffers are circular buffers where an overflow overwrites existing data. The network card assigns a
transmit (TX) and receive (RX) ring buffer. Receive ring buffers are shared between the device driver
and the network interface controller (NIC). Data can move from the NIC to the kernel through either
hardware interrupts or software interrupts, also called SoftIRQs.
The kernel uses the RX ring buffer to store incoming packets until the device driver can process them.
The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a
kernel data structure called an sk_buff or skb to begin its journey through the kernel and up to the
application that owns the relevant socket.
The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These
ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur,
which in turn will adversely affect network performance.
You configure ring buffer settings in the NetworkManager connection profiles. By using Ansible and the
network RHEL system role, you can automate this process and remotely configure connection profiles
on the hosts defined in a playbook.
WARNING
You cannot use the network RHEL system role to update only specific values in an
existing connection profile. The role ensures that a connection profile exactly
matches the settings in a playbook. If a connection profile with the same name
already exists, the role applies the settings from the playbook and resets all other
settings in the profile to their defaults. To prevent resetting values, always specify
the whole configuration of the network connection profile in the playbook, including
the settings that you do not want to change.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
You know the maximum ring buffer sizes that the device supports.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the network
hosts: managed-node-01.example.com
tasks:
- name: Ethernet connection profile with dynamic IP address setting and increased ring
buffer sizes
ansible.builtin.include_role:
name: rhel-system-roles.network
vars:
network_connections:
- name: enp1s0
type: ethernet
autoconnect: yes
ip:
dhcp4: yes
auto6: yes
ethtool:
ring:
rx: 4096
tx: 4096
state: up
rx: <value>
Sets the maximum number of received ring buffer entries.
tx: <value>
Sets the maximum number of transmitted ring buffer entries.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.network/README.md file on the control node.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Verification
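As a hedged sketch, you can check the ring buffer sizes of the interface on the managed node with ethtool; the interface name matches the example playbook:
# ansible managed-node-01.example.com -m command -a 'ethtool --show-ring enp1s0'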
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
Using the declarative method with state configurations, you can configure interfaces, and NetworkManager creates a profile for these interfaces in the background.
With the network_state variable, you specify only the options that you want to change, and all other options remain unchanged. With the network_connections variable, however, you must specify all settings to change a network connection profile.
For example, to create an Ethernet connection with dynamic IP address settings, use the following vars
block in your playbook:
Settings with the network_state variable:
vars:
  network_state:
    interfaces:
      - name: enp7s0
        type: ethernet
        state: up
        ipv4:
          enabled: true
          auto-dns: true
          auto-gateway: true
          auto-routes: true
          dhcp: true
        ipv6:
          enabled: true
          auto-dns: true
          auto-gateway: true
          auto-routes: true
          autoconf: true
          dhcp: true
Settings with the network_connections variable:
vars:
  network_connections:
    - name: enp7s0
      interface_name: enp7s0
      type: ethernet
      autoconnect: yes
      ip:
        dhcp4: yes
        auto6: yes
      state: up
For example, to change only the connection status of the dynamic IP address settings that you created above, use the following vars block in your playbook:
Settings with the network_state variable:
vars:
  network_state:
    interfaces:
      - name: enp7s0
        type: ethernet
        state: down
Settings with the network_connections variable:
vars:
  network_connections:
    - name: enp7s0
      interface_name: enp7s0
      type: ethernet
      autoconnect: yes
      ip:
        dhcp4: yes
        auto6: yes
      state: down
Additional resources
/usr/share/ansible/roles/rhel-system-roles.network/README.md file
/usr/share/doc/rhel-system-roles/network/ directory
CHAPTER 20. MANAGING CONTAINERS BY USING THE PODMAN RHEL SYSTEM ROLE
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
- hosts: managed-node-01.example.com
vars:
podman_create_host_directories: true
podman_firewall:
- port: 8080-8081/tcp
state: enabled
- port: 12340/tcp
state: enabled
podman_selinux_ports:
- ports: 8080-8081
setype: http_port_t
podman_kube_specs:
- state: started
run_as_user: dbuser
run_as_group: dbgroup
kube_file_content:
apiVersion: v1
kind: Pod
metadata:
name: db
spec:
containers:
- name: db
image: quay.io/db/db:stable
ports:
- containerPort: 1234
hostPort: 12340
volumeMounts:
- mountPath: /var/lib/db:Z
name: db
volumes:
- name: db
hostPath:
path: /var/lib/db
- state: started
run_as_user: webapp
run_as_group: webapp
kube_file_src: /path/to/webapp.yml
roles:
- linux-system-roles.podman
This procedure creates a pod with two containers. The podman_kube_specs role variable
describes a pod.
The run_as_user and run_as_group fields specify that containers are rootless.
The kube_file_content field containing a Kubernetes YAML file defines the first container
named db. You can generate the Kubernetes YAML file using the podman kube generate
command.
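For example, a hedged sketch that exports an existing pod named db to a Kubernetes YAML file; on older Podman versions, the equivalent command is podman generate kube:
$ podman kube generate db > db.yml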
The db bind mount maps the /var/lib/db directory on the host to the /var/lib/db
directory in the container. The Z flag labels the content with a private unshared label,
therefore, only the db container can access the content.
The kube_file_src field defines the second container. The content of the
/path/to/webapp.yml file on the control node will be copied to the kube_file field on the
managed node.
Set podman_create_host_directories: true to create the directories on the host. This instructs the role to check the kube specification for hostPath volumes and to create those directories on the host. If you need more control over ownership and permissions, use the podman_host_directories variable, as shown in the sketch below.
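A minimal sketch of the podman_host_directories variable, assuming the /var/lib/db path from the example; the ownership and mode values are placeholders, and the exact format is described in the role README:
podman_host_directories:
  "/var/lib/db":
    mode: "0700"
    owner: dbuser
    group: dbgroup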
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.podman/README.md file
/usr/share/doc/rhel-system-roles/podman/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
- hosts: managed-node-01.example.com
vars:
podman_firewall:
- port: 8080/tcp
state: enabled
podman_kube_specs:
- state: started
kube_file_content:
apiVersion: v1
kind: Pod
metadata:
name: ubi8-httpd
spec:
containers:
- name: ubi8-httpd
image: registry.access.redhat.com/ubi8/httpd-24
ports:
- containerPort: 8080
hostPort: 8080
volumeMounts:
- mountPath: /var/www/html:Z
name: ubi8-html
volumes:
- name: ubi8-html
persistentVolumeClaim:
claimName: ubi8-html-volume
roles:
- linux-system-roles.podman
The procedure creates a pod with one container. The podman_kube_specs role variable
describes a pod.
The kube_file_content field containing a Kubernetes YAML file defines the container
named ubi8-httpd.
The pod mounts the existing persistent volume named ubi8-html-volume with the
mount path /var/www/html.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.podman/README.md file
/usr/share/doc/rhel-system-roles/podman/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The certificate and the corresponding private key that the web server in the container should
use are stored in the ~/certificate.pem and ~/key.pem files.
Procedure
$ cat ~/certificate.pem
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
$ cat ~/key.pem
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
root_password: <root_password>
certificate: |-
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
key: |-
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
Ensure that all lines in the certificate and key variables start with two spaces.
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
3. Create a playbook file, for example ~/playbook.yml, with the following content:
state: present
skip_existing: true
data: "{{ root_password }}"
- name: mysql-root-password-kube
state: present
skip_existing: true
data: |
apiVersion: v1
data:
password: "{{ root_password | b64encode }}"
kind: Secret
metadata:
name: mysql-root-password-kube
- name: envoy-certificates
state: present
skip_existing: true
data: |
apiVersion: v1
data:
certificate.key: {{ key | b64encode }}
certificate.pem: {{ certificate | b64encode }}
kind: Secret
metadata:
name: envoy-certificates
The procedure creates a WordPress content management system paired with a MySQL
database. The podman_quadlet_specs role variable defines a set of configurations for the
Quadlet, which refers to a group of containers or services that work together in a certain way. It
includes the following specifications:
The volume configuration for MySQL container is defined by the file_src: quadlet-demo-
mysql.volume field.
The WordPress and Envoy proxy containers and their configuration are defined by the file_src:
quadlet-demo.kube field. The kube unit refers to the previous YAML files in the [Kube]
section as Yaml=quadlet-demo.yml and ConfigMap=envoy-proxy-configmap.yml.
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.podman/README.md file
/usr/share/doc/rhel-system-roles/podman/ directory
CHAPTER 21. CONFIGURING POSTFIX MTA BY USING THE RHEL SYSTEM ROLE
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage postfix
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.postfix
vars:
postfix_conf:
relay_domains: $mydestination
relayhost: example.com
If you want Postfix to use a different hostname than the fully-qualified domain name
(FQDN) that is returned by the gethostname() function, add the myhostname parameter
under the postfix_conf: line in the file:
myhostname = smtp.example.com
If the domain name differs from the domain name in the myhostname parameter, add the
mydomain parameter. Otherwise, the $myhostname minus the first component is used.
mydomain = <example.com>
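For example, a hedged sketch of the vars block with both parameters set; the host and domain names are placeholders:
vars:
  postfix_conf:
    relay_domains: $mydestination
    relayhost: example.com
    myhostname: smtp.example.com
    mydomain: example.com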
Use the postfix_manage_firewall: true variable to ensure that the SMTP ports are open in the firewall on the servers. The role then manages the SMTP-related ports 25/tcp, 465/tcp, and 587/tcp. If the variable is set to false, the postfix role does not manage the firewall. The default is false.
NOTE
If your scenario involves using non-standard ports, set the postfix_manage_selinux: true
variable to ensure that the port is properly labeled for SELinux on the servers.
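A minimal sketch that enables both firewall and SELinux management in the same vars block:
vars:
  postfix_manage_firewall: true
  postfix_manage_selinux: true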
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.postfix/README.md file
/usr/share/doc/rhel-system-roles/postfix/ directory
CHAPTER 22. INSTALLING AND CONFIGURING POSTGRESQL BY USING THE RHEL SYSTEM ROLE
You can also use the postgresql role to optimize the database server settings and improve
performance.
The role supports the currently released and supported versions of PostgreSQL on RHEL 8 and RHEL 9
managed nodes.
You can use the postgresql RHEL system role to install, configure, manage, and start the PostgreSQL
server.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage PostgreSQL
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.postgresql
vars:
postgresql_version: "13"
Note that this command only validates the syntax and does not protect against a wrong but valid
configuration.
$ ansible-playbook ~/playbook.yml
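As a hedged sketch, you can verify on the managed node that the PostgreSQL service is running; the service name assumes a default installation from the postgresql module stream:
# systemctl status postgresql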
Additional resources
/usr/share/ansible/roles/rhel-system-roles.postgresql/README.md file
/usr/share/doc/rhel-system-roles/postgresql/ directory
Using PostgreSQL
CHAPTER 23. REGISTERING THE SYSTEM BY USING THE RHEL SYSTEM ROLE
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
activationKey: <activation_key>
username: <username>
password: <password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
To register by using an activation key and organization ID (recommended), use the following
playbook:
---
- name: Registering system using activation key and organization ID
hosts: managed-node-01.example.com
vars_files:
- vault.yml
roles:
- role: rhel-system-roles.rhc
vars:
rhc_auth:
activation_keys:
keys:
- "{{ activationKey }}"
rhc_organization: organizationID
To register by using a username and password, use the following playbook:
---
- name: Registering system with username and password
hosts: managed-node-01.example.com
vars_files:
- vault.yml
vars:
rhc_auth:
login:
username: "{{ username }}"
password: "{{ password }}"
roles:
- role: rhel-system-roles.rhc
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.rhc/README.md file
/usr/share/doc/rhel-system-roles/rhc/ directory
Ansible Vault
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Store your sensitive variables in an encrypted file:
a. Create the vault:
$ ansible-vault create vault.yml
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
activationKey: <activation_key>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Register to the custom registration server and CDN
hosts: managed-node-01.example.com
vars_files:
- vault.yml
roles:
- role: rhel-system-roles.rhc
vars:
rhc_auth:
  activation_keys:
    keys:
      - "{{ activationKey }}"
rhc_organization: organizationID
rhc_server:
hostname: example.com
port: 443
prefix: /rhsm
rhc_baseurl: http://example.com/pulp/content
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.rhc/README.md file
/usr/share/doc/rhel-system-roles/rhc/ directory
Ansible Vault
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Disable Insights connection
hosts: managed-node-01.example.com
roles:
- role: rhel-system-roles.rhc
vars:
rhc_insights:
state: absent
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.rhc/README.md file
/usr/share/doc/rhel-system-roles/rhc/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
You have details of the repositories which you want to enable or disable on the managed nodes.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
To enable a repository:
---
- name: Enable repository
hosts: managed-node-01.example.com
roles:
- role: rhel-system-roles.rhc
vars:
rhc_repositories:
- {name: "RepositoryName", state: enabled}
To disable a repository:
---
- name: Disable repository
hosts: managed-node-01.example.com
vars:
  rhc_repositories:
    - {name: "RepositoryName", state: disabled}
roles:
  - role: rhel-system-roles.rhc
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.rhc/README.md file
/usr/share/doc/rhel-system-roles/rhc/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
You know the minor RHEL version to which you want to lock the system. Note that you can only
lock the system to the RHEL minor version that the host currently runs or a later minor version.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Set Release
hosts: managed-node-01.example.com
roles:
- role: rhel-system-roles.rhc
vars:
rhc_release: "8.6"
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.rhc/README.md file
/usr/share/doc/rhel-system-roles/rhc/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Store your sensitive variables in an encrypted file:
a. Create the vault:
$ ansible-vault create vault.yml
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
username: <username>
password: <password>
proxy_username: <proxy_username>
proxy_password: <proxy_password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Register using proxy
hosts: managed-node-01.example.com
vars_files:
- vault.yml
roles:
- role: rhel-system-roles.rhc
vars:
rhc_auth:
login:
username: "{{ username }}"
password: "{{ password }}"
rhc_proxy:
hostname: proxy.example.com
port: 3128
username: "{{ proxy_username }}"
password: "{{ proxy_password }}"
To remove the proxy server from the configuration of the Red Hat Subscription Manager
service:
---
- name: To stop using proxy server for registration
hosts: managed-node-01.example.com
vars_files:
- vault.yml
vars:
rhc_auth:
login:
username: "{{ username }}"
password: "{{ password }}"
rhc_proxy: {"state":"absent"}
roles:
- role: rhel-system-roles.rhc
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.rhc/README.md file
/usr/share/doc/rhel-system-roles/rhc/ directory
Ansible Vault
By default, when you connect your system to Red Hat Insights, the automatic collection rule
updates are enabled. You can disable them by using the rhc RHEL system role.
NOTE
If you disable this feature, you risk using outdated rule definition files and not getting the
most recent validation updates.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Store your sensitive variables in an encrypted file:
a. Create the vault:
$ ansible-vault create vault.yml
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
username: <username>
password: <password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Disable Red Hat Insights autoupdates
hosts: managed-node-01.example.com
vars_files:
- vault.yml
roles:
- role: rhel-system-roles.rhc
vars:
rhc_auth:
login:
username: "{{ username }}"
password: "{{ password }}"
rhc_insights:
autoupdate: false
state: present
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.rhc/README.md file
/usr/share/doc/rhel-system-roles/rhc/ directory
Ansible Vault
NOTE
Enabling remediation with the rhc RHEL system role ensures your system is ready to be
remediated when connected directly to Red Hat. For systems connected to a Satellite, or
Capsule, enabling remediation must be achieved differently. For more information about
Red Hat Insights remediations, see Red Hat Insights Remediations Guide .
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Disable remediation
hosts: managed-node-01.example.com
roles:
- role: rhel-system-roles.rhc
vars:
rhc_insights:
remediation: absent
state: present
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.rhc/README.md file
/usr/share/doc/rhel-system-roles/rhc/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Store your sensitive variables in an encrypted file:
a. Create the vault:
$ ansible-vault create vault.yml
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
username: <username>
password: <password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Creating tags
hosts: managed-node-01.example.com
vars_files:
- vault.yml
roles:
- role: rhel-system-roles.rhc
vars:
rhc_auth:
login:
username: "{{ username }}"
password: "{{ password }}"
rhc_insights:
tags:
group: group-name-value
location: location-name-value
description:
- RHEL8
- SAP
sample_key: value
state: present
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.rhc/README.md file
/usr/share/doc/rhel-system-roles/rhc/ directory
Ansible Vault
You can unregister the system from Red Hat if you no longer need the subscription service.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Unregister the system
hosts: managed-node-01.example.com
roles:
- role: rhel-system-roles.rhc
vars:
rhc_state: absent
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.rhc/README.md file
/usr/share/doc/rhel-system-roles/rhc/ directory
CHAPTER 24. CONFIGURING SELINUX BY USING THE RHEL SYSTEM ROLE
You can use the selinux RHEL system role to clean local policy modifications related to SELinux booleans, file contexts, ports, and logins.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.selinux/README.md file
/usr/share/doc/rhel-system-roles/selinux/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Prepare your playbook. You can either start from scratch or modify the example playbook
installed as a part of the rhel-system-roles package:
# cp /usr/share/doc/rhel-system-roles/selinux/example-selinux-playbook.yml <my-selinux-
playbook.yml>
# vi <my-selinux-playbook.yml>
2. Change the content of the playbook to fit your scenario. For example, the following part ensures
that the system installs and enables the selinux-local-1.pp SELinux module:
selinux_modules:
- { path: "selinux-local-1.pp", priority: "400" }
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check <my-selinux-playbook.yml>
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook <my-selinux-playbook.yml>
Additional resources
/usr/share/ansible/roles/rhel-system-roles.selinux/README.md file
/usr/share/doc/rhel-system-roles/selinux/ directory
You can automate managing port access in SELinux either by using the seport module, which is quicker
than using the entire role, or by using the selinux RHEL system role, which is more useful when you also
make other changes in the SELinux configuration. The methods are equivalent; in fact, the selinux
RHEL system role uses the seport module when configuring ports. Each of the methods has the same
effect as entering the command semanage port -a -t http_port_t -p tcp <port_number> on the
managed node.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Optional: To verify port status by using the semanage command, the policycoreutils-python-
utils package must be installed.
Procedure
To configure just the port number without making other changes, use the seport module:
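A minimal sketch of such a task, assuming that the community.general collection, which provides the seport module, is installed on the control node:
- name: Allow the web server to listen on tcp port <port_number>
  community.general.seport:
    ports: <port_number>
    proto: tcp
    setype: http_port_t
    state: present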
Replace <port_number> with the port number to which you want to assign the http_port_t
type.
For more complex configuration of the managed nodes that involves other customizations of
SELinux, use the selinux RHEL system role. Create a playbook file, for example,
~/playbook.yml, and add the following content:
---
- name: Modify SELinux port mapping example
hosts: all
vars:
# Map tcp port <port_number> to the 'http_port_t' SELinux port type
selinux_ports:
- ports: <port_number>
proto: tcp
setype: http_port_t
state: present
tasks:
- name: Include selinux role
ansible.builtin.include_role:
name: rhel-system-roles.selinux
Replace <port_number> with the port number to which you want to assign the http_port_t
type.
Verification
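For example, assuming the policycoreutils-python-utils package is installed on the managed node, list the port mapping; the output shown is illustrative:
# semanage port --list | grep http_port_t
http_port_t                    tcp      <port_number>, 80, 81, 443, 488, 8008, 8009, 8443, 9000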
Additional resources
/usr/share/ansible/roles/rhel-system-roles.selinux/README.md file
/usr/share/doc/rhel-system-roles/selinux/ directory
IMPORTANT
The fapolicyd service prevents only the execution of unauthorized applications that run
as regular users, and not as root.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configuring fapolicyd
hosts: managed-node-01.example.com
tasks:
- name: Allow only executables installed from RPM database and specific files
ansible.builtin.include_role:
name: rhel-system-roles.fapolicyd
vars:
fapolicyd_setup_permissive: false
fapolicyd_setup_integrity: sha256
fapolicyd_setup_trust: rpmdb,file
fapolicyd_add_trusted_file:
- <path_to_allowed_command>
- <path_to_allowed_service>
fapolicyd_setup_permissive: <true|false>
Enables or disables sending policy decisions to the kernel for enforcement. Set this variable
to true only for debugging and testing purposes.
fapolicyd_setup_integrity: <integrity_type>
Defines the integrity checking method. You can set one of the following values:
size: The service compares only the file sizes of allowed applications.
sha256: The service compares the SHA-256 checksums of allowed applications.
ima: The service checks the SHA256 hash that the kernel’s Integrity Measurement
Architecture (IMA) stored in a file’s extended attribute. Additionally, the service
performs a size check. Note that the role does not configure the IMA kernel subsystem.
To use this option, you must manually configure the IMA subsystem.
fapolicyd_setup_trust: <trust_backends>
Defines the list of trust backends. If you include the file backend, specify the allowed
executable files in the fapolicyd_add_trusted_file list.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.fapolicyd/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
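For example, as a regular user on the managed node, try to run an executable that is not in any configured trust source; the path below is only a hypothetical example:
$ /tmp/unauthorized_app
bash: /tmp/unauthorized_app: Operation not permitted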
Additional resources
/usr/share/ansible/roles/rhel-system-roles.fapolicyd/README.md file
/usr/share/doc/rhel-system-roles/fapolicyd/ directory
If you do not configure these variables, the system role produces an sshd_config file that matches the
RHEL defaults.
In all cases, Booleans correctly render as yes and no in sshd configuration. You can define multi-line
configuration items using lists. For example:
sshd_ListenAddress:
- 0.0.0.0
- '::'
renders as:
ListenAddress 0.0.0.0
ListenAddress ::
Additional resources
/usr/share/ansible/roles/rhel-system-roles.sshd/README.md file
/usr/share/doc/rhel-system-roles/sshd/ directory
NOTE
You can use the sshd RHEL system role with other RHEL system roles that change SSH
and SSHD configuration, for example the Identity Management RHEL system roles. To
prevent the configuration from being overwritten, make sure that the sshd role uses
namespaces (RHEL 8 and earlier versions) or a drop-in directory (RHEL 9).
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: SSH server configuration
hosts: managed-node-01.example.com
tasks:
- name: Configure sshd to prevent root and password login except from particular subnet
ansible.builtin.include_role:
name: rhel-system-roles.sshd
vars:
sshd:
PermitRootLogin: no
PasswordAuthentication: no
Match:
- Condition: "Address 192.0.2.0/24"
PermitRootLogin: yes
PasswordAuthentication: yes
The playbook configures the managed node as an SSH server so that password and root user login
is disabled globally and enabled only from the subnet 192.0.2.0/24.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
$ ssh <username>@<ssh_server>
$ cat /etc/ssh/sshd_config
...
PasswordAuthentication no
PermitRootLogin no
...
Match Address 192.0.2.0/24
PasswordAuthentication yes
PermitRootLogin yes
...
3. Check that you can connect to the server as root from the 192.0.2.0/24 subnet:
a. Determine your IP address:
$ hostname -I
192.0.2.1
b. If the IP address is within the 192.0.2.1 - 192.0.2.254 range, you can connect to the server:
$ ssh root@<ssh_server>
Additional resources
/usr/share/ansible/roles/rhel-system-roles.sshd/README.md file
/usr/share/doc/rhel-system-roles/sshd/ directory
On RHEL 9 and later, the role applies the configuration by using files in a drop-in directory. The default
configuration file is already placed in the drop-in directory as /etc/ssh/sshd_config.d/00-ansible_system_role.conf.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Non-exclusive sshd configuration
hosts: managed-node-01.example.com
tasks:
- name: <Configure SSHD to accept some useful environment variables>
ansible.builtin.include_role:
name: rhel-system-roles.sshd
vars:
sshd_config_namespace: <my-application>
sshd:
# Environment variables to accept
AcceptEnv:
LANG
LS_COLORS
EDITOR
In the sshd_config_file variable, define the .conf file into which the sshd system role
writes the configuration options. Use a two-digit prefix, for example 42- to specify the order
in which the configuration files will be applied.
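For example, to match the verification output shown later in this procedure, you could set a variable such as the following; the file name is an assumption, not a required value:
sshd_config_file: /etc/ssh/sshd_config.d/42-my-application.conf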
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
# cat /etc/ssh/sshd_config.d/42-my-application.conf
# Ansible managed
#
# cat /etc/ssh/sshd_config
...
# BEGIN sshd system role managed block: namespace <my-application>
Match all
AcceptEnv LANG LS_COLORS EDITOR
# END sshd system role managed block: namespace <my-application>
Additional resources
/usr/share/ansible/roles/rhel-system-roles.sshd/README.md file
/usr/share/doc/rhel-system-roles/sshd/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
sshd_KexAlgorithms: You can choose key exchange algorithms, for example, ecdh-sha2-nistp256,
ecdh-sha2-nistp384, ecdh-sha2-nistp521, diffie-hellman-group14-sha1, or
diffie-hellman-group-exchange-sha256.
sshd_Ciphers: You can choose ciphers, for example, aes128-ctr, aes192-ctr, or aes256-ctr.
sshd_HostKeyAlgorithms: You can choose a public key algorithm, for example, ecdsa-sha2-nistp256,
ecdsa-sha2-nistp384, ecdsa-sha2-nistp521, ssh-rsa, or ssh-dss.
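For illustration, a playbook that sets these variables might look like the following sketch; the selected algorithms are examples only, not recommendations:
---
- name: SSH server configuration
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure sshd to use selected key exchange, cipher, and host key algorithms
      ansible.builtin.include_role:
        name: rhel-system-roles.sshd
      vars:
        sshd_KexAlgorithms: ecdh-sha2-nistp521
        sshd_Ciphers: aes256-ctr
        sshd_HostKeyAlgorithms: ecdsa-sha2-nistp521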
On RHEL 9 managed nodes, the system role writes the configuration into the
/etc/ssh/sshd_config.d/00-ansible_system_role.conf file, where cryptographic options are
applied automatically. You can change the file by using the sshd_config_file variable. However,
to ensure the configuration is effective, use a file name that lexicographically precedes the
/etc/ssh/sshd_config.d/50-redhat.conf file, which includes the configured crypto policies.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
You can verify the success of the procedure by using the verbose SSH connection and check
the defined variables in the following output:
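For example, open a verbose connection and look for the negotiated algorithms in the debug output; the debug lines shown are illustrative:
$ ssh -vvv <ssh_server>
...
debug2: KEX algorithms: ecdh-sha2-nistp521
debug2: ciphers ctos: aes256-ctr
...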
Additional resources
/usr/share/ansible/roles/rhel-system-roles.sshd/README.md file
/usr/share/doc/rhel-system-roles/sshd/ directory
If you do not configure these variables, the system role produces a global ssh_config file that matches
the RHEL defaults.
In all cases, booleans correctly render as yes or no in ssh configuration. You can define multi-line
configuration items using lists. For example:
LocalForward:
- 22 localhost:2222
- 403 localhost:4003
renders as:
LocalForward 22 localhost:2222
LocalForward 403 localhost:4003
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ssh/README.md file
/usr/share/doc/rhel-system-roles/ssh/ directory
NOTE
You can use the ssh RHEL system role with other system roles that change SSH and
SSHD configuration, for example the Identity Management RHEL system roles. To
prevent the configuration from being overwritten, make sure that the ssh role uses a
drop-in directory (default in RHEL 8 and later).
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: SSH client configuration
hosts: managed-node-01.example.com
tasks:
- name: "Configure ssh clients"
ansible.builtin.include_role:
name: rhel-system-roles.ssh
vars:
ssh_user: root
ssh:
Compression: true
GSSAPIAuthentication: no
ControlMaster: auto
ControlPath: ~/.ssh/.cm%C
Host:
- Condition: example
Hostname: server.example.com
User: user1
ssh_ForwardX11: no
This playbook configures the root user’s SSH client preferences on the managed nodes with the
following configurations:
Compression is enabled.
ControlMaster is set to auto.
GSSAPI authentication is disabled.
X11 forwarding is disabled.
Connections to hosts matching example use the host name server.example.com and the user name user1.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Verify that the managed node has the correct configuration by displaying the SSH configuration
file:
# cat /root/.ssh/config
# Ansible managed
Compression yes
ControlMaster auto
ControlPath ~/.ssh/.cm%C
ForwardX11 no
GSSAPIAuthentication no
Host example
Hostname server.example.com
User user1
Additional resources
/usr/share/ansible/roles/rhel-system-roles.ssh/README.md file
/usr/share/doc/rhel-system-roles/ssh/ directory
CHAPTER 27. MANAGING LOCAL STORAGE BY USING THE RHEL SYSTEM ROLE
Using the storage role enables you to automate administration of file systems on disks and logical
volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7.
With the storage role, you can manage, for example, complete LVM volume groups, including their
logical volumes and file systems.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
NOTE
The storage role can create a file system only on an unpartitioned, whole disk or a logical
volume (LV). It cannot create the file system on a partition.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: xfs
The volume name (barefs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.
You can omit the fs_type: xfs line because XFS is the default file system in RHEL 8.
To create the file system on an LV, provide the LVM setup under the disks: attribute,
including the enclosing volume group. For details, see Managing logical volumes by using
the storage RHEL system role.
Do not provide the path to the LV device.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
The example Ansible playbook applies the storage role to immediately and persistently mount an XFS file system.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: xfs
mount_point: /mnt/data
mount_user: somebody
mount_group: somegroup
mount_mode: 0755
This playbook adds the file system to the /etc/fstab file, and mounts the file system
immediately.
If the file system on the /dev/sdb device or the mount point directory do not exist, the
playbook creates them.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_pools:
- name: myvg
disks:
- sda
- sdb
- sdc
volumes:
- name: mylv
size: 2G
fs_type: ext4
mount_point: /mnt/data
The myvg volume group consists of the following disks: /dev/sda, /dev/sdb, and /dev/sdc.
If the myvg volume group already exists, the playbook adds the logical volume to the
volume group.
If the myvg volume group does not exist, the playbook creates it.
The playbook creates an Ext4 file system on the mylv logical volume, and persistently
mounts the file system at the /mnt/data directory.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: xfs
mount_point: /mnt/data
mount_options: discard
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: ext4
fs_label: label-name
mount_point: /mnt/data
The playbook persistently mounts the file system at the /mnt/data directory.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: all
roles:
- rhel-system-roles.storage
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: ext3
fs_label: label-name
mount_point: /mnt/data
mount_user: somebody
mount_group: somegroup
mount_mode: 0755
The playbook persistently mounts the file system at the /mnt/data directory.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
The example Ansible playbook applies the storage RHEL system role to resize an LVM logical volume
with a file system.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Create LVM pool over three disks
hosts: managed-node-01.example.com
tasks:
- name: Resize LVM logical volume with file system
ansible.builtin.include_role:
name: rhel-system-roles.storage
vars:
storage_pools:
- name: myvg
disks:
- /dev/sda
- /dev/sdb
- /dev/sdc
volumes:
- name: mylv1
size: 10 GiB
fs_type: ext4
mount_point: /opt/mount1
- name: mylv2
size: 50 GiB
fs_type: ext4
mount_point: /opt/mount2
The Ext4 file system on the mylv1 volume, which is mounted at /opt/mount1, resizes to 10
GiB.
The Ext4 file system on the mylv2 volume, which is mounted at /opt/mount2, resizes to 50
GiB.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Create a disk device with swap
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_volumes:
- name: swap_fs
type: disk
disks:
- /dev/sdb
size: 15 GiB
fs_type: swap
The volume name (swap_fs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
WARNING
Device names might change in certain circumstances, for example, when you add a
new disk to a system. Therefore, to prevent data loss, do not use specific disk
names in the playbook.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure the storage
hosts: managed-node-01.example.com
tasks:
- name: Create a RAID on sdd, sde, sdf, and sdg
ansible.builtin.include_role:
name: rhel-system-roles.storage
vars:
storage_safe_mode: false
storage_volumes:
- name: data
type: raid
disks: [sdd, sde, sdf, sdg]
raid_level: raid0
raid_chunk_size: 32 KiB
mount_point: /mnt/data
state: present
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Managing RAID
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure LVM pool with RAID
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_safe_mode: false
storage_pools:
- name: my_pool
type: lvm
disks: [sdh, sdi]
raid_level: raid1
volumes:
- name: my_volume
size: "1 GiB"
mount_point: "/mnt/app/shared"
fs_type: xfs
state: present
To create an LVM pool with RAID, you must specify the RAID type by using the raid_level
parameter.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Managing RAID
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Configure stripe size for RAID LVM volumes
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_safe_mode: false
storage_pools:
- name: my_pool
type: lvm
disks: [sdh, sdi]
volumes:
- name: my_volume
size: "1 GiB"
mount_point: "/mnt/app/shared"
fs_type: xfs
raid_level: raid1
raid_stripe_size: "256 KiB"
state: present
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Managing RAID
The example Ansible playbook applies the storage RHEL system role to enable compression and
deduplication of Logical Volumes (LVM) by using Virtual Data Optimizer (VDO).
NOTE
Because the storage system role uses LVM VDO, only one volume per pool can use
compression and deduplication.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
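For example, a playbook that enables compression and deduplication on one LVM volume might look like the following sketch; the volume group name, disk, and size values are placeholders:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.storage
  vars:
    storage_pools:
      - name: myvg
        disks:
          - /dev/sdb
        volumes:
          - name: mylv1
            compression: true
            deduplication: true
            vdo_pool_size: 10 GiB
            size: 30 GiB
            mount_point: /mnt/app/shared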
In this example, the compression and deduplication parameters are set to true, which specifies that
VDO is used. The following describes the usage of these parameters:
The deduplication parameter deduplicates redundant data stored on the storage volume.
The compression parameter compresses the data stored on the storage volume, which results
in more storage capacity.
The vdo_pool_size parameter specifies the actual size the volume takes on the device. The virtual size
of the VDO volume is set by the size parameter.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
You can use the storage role to create and configure a volume encrypted with LUKS by running an
Ansible playbook.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Create and configure a volume encrypted with LUKS
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_volumes:
- name: barefs
type: disk
disks:
- sdb
fs_type: xfs
fs_label: label-name
mount_point: /mnt/data
encryption: true
encryption_password: <password>
You can also add other encryption parameters, such as encryption_key, encryption_cipher,
encryption_key_size, and encryption_luks, to the playbook file.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
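For example, you can inspect the LUKS header on the managed node; the device name below is an assumption based on the playbook above:
# cryptsetup luksDump /dev/sdb
LUKS header information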
Version: 2
Epoch: 6
Metadata area: 16384 [bytes]
Keyslots area: 33521664 [bytes]
UUID: a4c6be82-7347-4a91-a8ad-9479b72c9426
Label: (no label)
Subsystem: (no subsystem)
Flags: allow-discards
Data segments:
0: crypt
offset: 33554432 [bytes]
length: (whole device)
cipher: aes-xts-plain64
sector: 4096 [bytes]
...
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Express volume sizes as a percentage of the pool's total size
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.storage
vars:
storage_pools:
- name: myvg
disks:
- /dev/sdb
volumes:
- name: data
size: 60%
mount_point: /opt/mount/data
- name: web
size: 30%
mount_point: /opt/mount/web
- name: cache
size: 10%
mount_point: /opt/cache/mount
This example specifies the size of LVM volumes as a percentage of the pool size, for example:
60%. Alternatively, you can also specify the size of LVM volumes in human-readable units of the
file system, for example, 10g or 50 GiB.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
With the systemd RHEL system role, you can manage services and deploy units.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content. Use only the
variables for the actions that you want to perform.
---
- name: Managing systemd services
hosts: managed-node-01.example.com
tasks:
- name: Perform action on systemd units
ansible.builtin.include_role:
name: rhel-system-roles.systemd
vars:
systemd_started_units:
- <systemd_unit_1>.service
systemd_stopped_units:
- <systemd_unit_2>.service
systemd_restarted_units:
- <systemd_unit_3>.service
systemd_reloaded_units:
- <systemd_unit_4>.service
systemd_enabled_units:
- <systemd_unit_5>.service
systemd_disabled_units:
- <systemd_unit_6>.service
systemd_masked_units:
- <systemd_unit_7>.service
systemd_unmasked_units:
- <systemd_unit_8>.service
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.systemd/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.systemd/README.md file
/usr/share/doc/rhel-system-roles/systemd/ directory
IMPORTANT
The role uses the hard-coded file name 99-override.conf to store drop-in files in
/etc/systemd/system/<name>.<unit_type>.d/. Note that it overrides existing files with
this name in the destination directory.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a Jinja2 template with the systemd drop-in file contents. For example, create the
~/sshd.service.conf.j2 file with the following content:
{{ ansible_managed | comment }}
[Unit]
After=
After=network.target sshd-keygen.target network-online.target
This drop-in file specifies the same units in the After setting as the original
/usr/lib/systemd/system/sshd.service file and, additionally, network-online.target. With this
extra target, sshd starts after the network interfaces are activated and have IP addresses
assigned. This ensures that sshd can bind to all IP addresses.
Use the <name>.<unit_type>.conf.j2 convention for the file name. For example, to add a drop-
in for the sshd.service unit, you must name the file sshd.service.conf.j2. Place the file in the
same directory as the playbook.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Managing systemd services
hosts: managed-node-01.example.com
tasks:
- name: Deploy an sshd.service systemd drop-in file
ansible.builtin.include_role:
name: rhel-system-roles.systemd
vars:
systemd_dropins:
- sshd.service.conf.j2
systemd_dropins: <list_of_files>
Specifies the names of the drop-in files to deploy in YAML list format.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.systemd/README.md file on the control node.
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Verify that the role placed the drop-in file in the correct location:
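For example, for the sshd.service drop-in deployed above, the check might look like the following:
# ls /etc/systemd/system/sshd.service.d/
99-override.conf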
Additional resources
/usr/share/ansible/roles/rhel-system-roles.systemd/README.md file
/usr/share/doc/rhel-system-roles/systemd/ directory
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a Jinja2 template with the custom systemd unit file contents. For example, create the
~/example.service.j2 file with the contents for your service:
{{ ansible_managed | comment }}
[Unit]
Description=Example systemd service unit file
[Service]
ExecStart=/bin/true
Use the <name>.<unit_type>.j2 convention for the file name. For example, to create the
example.service unit, you must name the file example.service.j2. Place the file in the same
directory as the playbook.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Managing systemd services
hosts: managed-node-01.example.com
tasks:
- name: Deploy, enable, and start a custom systemd service
ansible.builtin.include_role:
name: rhel-system-roles.systemd
vars:
systemd_unit_file_templates:
- example.service.j2
systemd_enabled_units:
- example.service
systemd_started_units:
- example.service
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.systemd/README.md file on the control node.
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
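For example, you can check the state of the deployed unit on the managed node:
# systemctl status example.service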
Additional resources
/usr/share/ansible/roles/rhel-system-roles.systemd/README.md file
/usr/share/doc/rhel-system-roles/systemd/ directory
CHAPTER 29. CONFIGURING TIME SYNCHRONIZATION BY USING THE RHEL SYSTEM ROLE
You can choose the time service that the role configures by setting the timesync_ntp_provider variable
in a playbook. If you do not set this variable, the role determines the time service based on factors such
as the operating system version of the managed node.
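For example, to explicitly select chrony as the provider, you could set the following variable in the playbook:
timesync_ntp_provider: chrony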
WARNING
The timesync RHEL system role replaces the configuration of the given
or detected provider service on the managed host. Consequently, all settings are
lost if they are not specified in the playbook.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Managing time synchronization
hosts: managed-node-01.example.com
tasks:
- name: Configuring NTP with an internal server (preferred) and a public server pool as
fallback
ansible.builtin.include_role:
name: rhel-system-roles.timesync
vars:
timesync_ntp_servers:
- hostname: time.example.com
trusted: yes
prefer: yes
iburst: yes
- hostname: 0.rhel.pool.ntp.org
pool: yes
iburst: yes
pool: <yes|no>
Flags a source as an NTP pool rather than an individual host. In this case, the service expects
that the name resolves to multiple IP addresses which can change over time.
iburst: yes
Enables fast initial synchronization.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.timesync/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
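For example, assuming chrony is the configured provider, check the synchronization status on the managed node:
# chronyc tracking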
Additional resources
/usr/share/ansible/roles/rhel-system-roles.timesync/README.md file
/usr/share/doc/rhel-system-roles/timesync/ directory
Note that you cannot mix NTS servers with non-NTS servers. In mixed configurations, NTS servers are
trusted and clients do not fall back to unauthenticated NTP sources because they can be exploited in
man-in-the-middle (MITM) attacks. For further details, see the authselectmode parameter description
in the chrony.conf(5) man page.
WARNING
The timesync RHEL system role replaces the configuration of the given
or detected provider service on the managed host. Consequently, all settings are
lost if they are not specified in the playbook.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Managing time synchronization
hosts: managed-node-01.example.com
tasks:
- name: Configuring NTP with NTS-enabled servers
ansible.builtin.include_role:
name: rhel-system-roles.timesync
vars:
timesync_ntp_servers:
- hostname: ptbtime1.ptb.de
trusted: yes
nts: yes
prefer: yes
iburst: yes
- hostname: ptbtime2.ptb.de
trusted: yes
nts: yes
iburst: yes
iburst: yes
Enables fast initial synchronization.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.timesync/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
2. For sources with NTS enabled, display information that is specific to authentication of NTP
sources:
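For example, assuming chrony is the configured provider, run the following command on the managed node:
# chronyc -N authdata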
Verify that the number of reported cookies in the Cook column is larger than 0.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.timesync/README.md file
/usr/share/doc/rhel-system-roles/timesync/ directory
You can configure the recording to take place per user or user group by means of the SSSD service.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.tlog/README.md file
/usr/share/doc/rhel-system-roles/tlog/ directory
Recording Sessions
The playbook installs the tlog RHEL system role on the system you specified. The role includes tlog-rec-
session, a terminal session I/O logging program, which acts as the login shell for a user. It also creates an
SSSD configuration drop-in file that can be used by the users and groups that you define. SSSD parses
and reads these users and groups, and replaces their user shell with tlog-rec-session. Additionally, if
the cockpit package is installed on the system, the playbook also installs the cockpit-session-
recording package, which is a Cockpit module that allows you to view and play recordings in the web
console interface.
Prerequisites
You have prepared the control node and the managed nodes .
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Deploy session recording
hosts: managed-node-01.example.com
roles:
- rhel-system-roles.tlog
vars:
tlog_scope_sssd: some
tlog_users_sssd:
- recorded-user
tlog_scope_sssd
The some value specifies you want to record only certain users and groups, not all or none.
tlog_users_sssd
Specifies the user you want to record a session from. Note that this does not add the user for
you. You must set the user by yourself.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
1. Navigate to the folder where the SSSD configuration drop file is created:
# cd /etc/sssd/conf.d/
2. Check the file content:
# cat /etc/sssd/conf.d/sssd-session-recording.conf
You can see that the file contains the parameters you set in the playbook.
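The file content is not reproduced here. As a rough sketch only, assuming the option names documented in the sssd-session-recording(5) man page and the values from the example playbook, the drop file might contain a section similar to the following:
[session_recording]
# Record only the listed users, not all of them
scope = some
# Users whose terminal sessions are recorded with tlog-rec-session
users = recorded-user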
Additional resources
/usr/share/ansible/roles/rhel-system-roles.tlog/README.md file
/usr/share/doc/rhel-system-roles/tlog/ directory
The playbook installs the tlog RHEL system role on the system you specified. The role includes tlog-rec-session, a terminal session I/O logging program that acts as the login shell for a user. It also creates an /etc/sssd/conf.d/sssd-session-recording.conf SSSD configuration drop file that applies to all users and groups except those that you defined as excluded. SSSD parses and reads these users and groups, and replaces their user shell with tlog-rec-session. Additionally, if the cockpit package is installed on the system, the playbook also installs the cockpit-session-recording package, which is a Cockpit module that allows you to view and play recordings in the web console interface.
Prerequisites
You have prepared the control node and the managed nodes.
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Deploy session recording excluding users and groups
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.tlog
  vars:
    tlog_scope_sssd: all
    tlog_exclude_users_sssd:
      - jeff
      - james
    tlog_exclude_groups_sssd:
      - admins
tlog_scope_sssd
The value all specifies that you want to record all users and groups.
tlog_exclude_users_sssd
Specifies the user names of the users you want to exclude from the session recording.
tlog_exclude_groups_sssd
Specifies the groups that you want to exclude from the session recording.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
1. Navigate to the folder where the SSSD configuration drop file is created:
# cd /etc/sssd/conf.d/
2. Check the file content:
# cat sssd-session-recording.conf
You can see that the file contains the parameters you set in the playbook.
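The file content is not reproduced here. As a rough sketch only, assuming the exclude_users and exclude_groups option names documented in the sssd-session-recording(5) man page and the values from the example playbook, the drop file might look similar to the following:
[session_recording]
# Record all users and groups except the exclusions below
scope = all
exclude_users = jeff, james
exclude_groups = admins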
Additional resources
/usr/share/ansible/roles/rhel-system-roles.tlog/README.md file
/usr/share/doc/rhel-system-roles/tlog/ directory
CHAPTER 31. CONFIGURING VPN CONNECTIONS WITH IPSEC BY USING THE RHEL SYSTEM ROLE
For host-to-host connections, the role sets up a VPN tunnel between each pair of hosts in the list of
vpn_connections using the default parameters, including generating keys as needed. Alternatively, you
can configure it to create an opportunistic mesh configuration between all hosts listed. The role assumes
that the names of the hosts under hosts are the same as the names of the hosts used in the Ansible
inventory, and that you can use those names to configure the tunnels.
NOTE
The vpn RHEL system role currently supports only Libreswan, which is an IPsec
implementation, as the VPN provider.
Prerequisites
You have prepared the control node and the managed nodes.
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
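The playbook content is not reproduced here. A minimal sketch of a host-to-host configuration between two inventory hosts could look as follows; the second managed node name is an assumption, and vpn_manage_firewall and vpn_manage_selinux are optional settings taken from the certificate-based example later in this chapter:
---
- name: Configure host-to-host VPN connections
  hosts: managed-node-01.example.com, managed-node-02.example.com
  roles:
    - rhel-system-roles.vpn
  vars:
    # One tunnel is created per pair of hosts listed under hosts
    vpn_connections:
      - hosts:
          managed-node-01.example.com:
          managed-node-02.example.com:
    # Optional: let the role open the required ports and set SELinux settings
    vpn_manage_firewall: true
    vpn_manage_selinux: true
Because no hostname is set for the inventory hosts, the role connects them by using the names from the Ansible inventory.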
To configure connections from managed hosts to external hosts that are not listed in the
inventory file, add the following section to the vpn_connections list of hosts:
vpn_connections:
  - hosts:
      managed-node-01.example.com:
      <external_node>:
        hostname: <IP_address_or_hostname>
NOTE
The connections are configured only on the managed nodes and not on the
external node.
2. Optional: You can specify multiple VPN connections for the managed nodes by using additional
sections within vpn_connections, for example, a control plane and a data plane:
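The example itself is not reproduced here. A sketch of such a layout, with hypothetical connection names and IP addresses, could look as follows; the name key for connection entries is an assumption to check against the role's README:
vpn_connections:
  # Hypothetical tunnel for control-plane traffic
  - name: control_plane_vpn
    hosts:
      managed-node-01.example.com:
        hostname: 192.0.2.1
      managed-node-02.example.com:
        hostname: 192.0.2.2
  # Hypothetical tunnel for data-plane traffic
  - name: data_plane_vpn
    hosts:
      managed-node-01.example.com:
        hostname: 10.0.0.1
      managed-node-02.example.com:
        hostname: 10.0.0.2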
3. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
1. On the managed nodes, confirm that the connection is successfully loaded:
# ipsec status | grep <connection_name>
Replace <connection_name> with the name of the connection from this node, for example managed_node1-to-managed_node2.
NOTE
By default, the role generates a descriptive name for each connection it creates
from the perspective of each system. For example, when creating a connection
between managed_node1 and managed_node2, the descriptive name of this
connection on managed_node1 is managed_node1-to-managed_node2 but
on managed_node2 the connection is named managed_node2-to-
managed_node1.
2. On the managed nodes, confirm that the connection is successfully started:
# ipsec trafficstatus | grep <connection_name>
3. Optional: If a connection does not successfully load, manually add the connection by entering the following command. This provides more specific information indicating why the connection failed to establish:
# ipsec auto --add <connection_name>
NOTE
Any errors that may occur during the process of loading and starting the
connection are reported in the /var/log/pluto.log file. Because these logs are
hard to parse, manually add the connection to obtain log messages from the
standard output instead.
Additional resources
/usr/share/ansible/roles/rhel-system-roles.vpn/README.md file
/usr/share/doc/rhel-system-roles/vpn/ directory
Prerequisites
You have prepared the control node and the managed nodes.
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
The IPsec Network Security Services (NSS) crypto library in the /etc/ipsec.d/ directory contains
the necessary certificates.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
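The playbook content is not reproduced here. A minimal sketch consistent with the explanation that follows could look like this; the managed node names and the opportunistic and auth_method variable names are assumptions to check against the role's README:
---
- name: Configure an opportunistic mesh VPN with certificate authentication
  hosts: managed-node-01.example.com, managed-node-02.example.com
  roles:
    - rhel-system-roles.vpn
  vars:
    vpn_connections:
      # Opportunistic mesh between all listed hosts, authenticated with certificates
      - opportunistic: true
        auth_method: cert
        policies:
          # Override the default policy so that it is private instead of private-or-clear
          - policy: private
            cidr: default
          # Clear policy for the control node (192.0.2.7) to prevent SSH connection loss
          # during the play; the private policy for 192.0.2.0/24 is created automatically
          - policy: clear
            cidr: 192.0.2.7/32
    vpn_manage_firewall: true
    vpn_manage_selinux: true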
In this example procedure, the control node, which is the system from which you will run the
Ansible playbook, shares the same classless inter-domain routing (CIDR) number as both of the
managed nodes (192.0.2.0/24) and has the IP address 192.0.2.7. Therefore, the control node
falls under the private policy which is automatically created for CIDR 192.0.2.0/24.
To prevent SSH connection loss during the play, a clear policy for the control node is included in
the list of policies. Note that there is also an item in the policies list where the CIDR is equal to
default. This is because this playbook overrides the rule from the default policy to make it
private instead of private-or-clear.
Because vpn_manage_firewall and vpn_manage_selinux are both set to true, the vpn role
uses the firewall and selinux roles to manage the ports used by the vpn role.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.vpn/README.md file
/usr/share/doc/rhel-system-roles/vpn/ directory