Red Hat Enterprise Linux 9
Managing file systems
Creating, modifying, and administering file systems in Red Hat Enterprise Linux 9
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
Red Hat Enterprise Linux supports a variety of file systems. Each type of file system solves different
problems, and its usage is application specific. Use the information about the key differences and
considerations to select and deploy the appropriate file system based on your specific application
requirements. The supported file systems include local on-disk file systems XFS and ext4, network
and client-and-server file systems NFS and SMB, as well as a combined local storage and file
system management solution, Stratis. You can perform several operations with a file system such as
creating, mounting, backing up, restoring, checking and repairing, as well as limiting the storage
space by using quotas.
Table of Contents
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION 7
CHAPTER 1. OVERVIEW OF AVAILABLE FILE SYSTEMS 8
1.1. TYPES OF FILE SYSTEMS 8
1.2. LOCAL FILE SYSTEMS 9
1.3. THE XFS FILE SYSTEM 9
1.4. THE EXT4 FILE SYSTEM 10
1.5. COMPARISON OF XFS AND EXT4 11
1.6. CHOOSING A LOCAL FILE SYSTEM 12
1.7. NETWORK FILE SYSTEMS 13
1.8. SHARED STORAGE FILE SYSTEMS 13
1.9. CHOOSING BETWEEN NETWORK AND SHARED STORAGE FILE SYSTEMS 14
1.10. VOLUME-MANAGING FILE SYSTEMS 14
CHAPTER 2. MANAGING LOCAL STORAGE BY USING RHEL SYSTEM ROLES 15
2.1. CREATING AN XFS FILE SYSTEM ON A BLOCK DEVICE BY USING THE STORAGE RHEL SYSTEM ROLE 15
2.2. PERSISTENTLY MOUNTING A FILE SYSTEM BY USING THE STORAGE RHEL SYSTEM ROLE 16
2.3. CREATING OR RESIZING A LOGICAL VOLUME BY USING THE STORAGE RHEL SYSTEM ROLE 17
2.4. ENABLING ONLINE BLOCK DISCARD BY USING THE STORAGE RHEL SYSTEM ROLE 18
2.5. CREATING AND MOUNTING AN EXT4 FILE SYSTEM BY USING THE STORAGE RHEL SYSTEM ROLE 20
2.6. CREATING AND MOUNTING AN EXT3 FILE SYSTEM BY USING THE STORAGE RHEL SYSTEM ROLE 21
2.7. CREATING A SWAP VOLUME BY USING THE STORAGE RHEL SYSTEM ROLE 22
2.8. CONFIGURING A RAID VOLUME BY USING THE STORAGE RHEL SYSTEM ROLE 23
2.9. CONFIGURING AN LVM POOL WITH RAID BY USING THE STORAGE RHEL SYSTEM ROLE 24
2.10. CONFIGURING A STRIPE SIZE FOR RAID LVM VOLUMES BY USING THE STORAGE RHEL SYSTEM ROLE 25
2.11. CONFIGURING AN LVM-VDO VOLUME BY USING THE STORAGE RHEL SYSTEM ROLE 27
2.12. CREATING A LUKS2 ENCRYPTED VOLUME BY USING THE STORAGE RHEL SYSTEM ROLE 28
2.13. CREATING SHARED LVM DEVICES USING THE STORAGE RHEL SYSTEM ROLE 30
2.14. RESIZING PHYSICAL VOLUMES BY USING THE STORAGE RHEL SYSTEM ROLE 32
2.15. CREATING AN ENCRYPTED STRATIS POOL BY USING THE STORAGE RHEL SYSTEM ROLE 33
CHAPTER 3. MANAGING PARTITIONS USING THE WEB CONSOLE 36
3.1. DISPLAYING PARTITIONS FORMATTED WITH FILE SYSTEMS IN THE WEB CONSOLE 36
3.2. CREATING PARTITIONS IN THE WEB CONSOLE 37
3.3. DELETING PARTITIONS IN THE WEB CONSOLE 39
CHAPTER 4. MOUNTING NFS SHARES 40
4.1. SERVICES REQUIRED ON AN NFS CLIENT 40
4.2. PREPARING AN NFSV3 CLIENT TO RUN BEHIND A FIREWALL 40
4.3. PREPARING AN NFSV4.0 CLIENT TO RUN BEHIND A FIREWALL 41
4.4. MANUALLY MOUNTING AN NFS SHARE 41
4.5. MOUNTING AN NFS SHARE AUTOMATICALLY WHEN THE SYSTEM BOOTS 42
4.6. CONNECTING NFS MOUNTS IN THE WEB CONSOLE 42
4.7. CUSTOMIZING NFS MOUNT OPTIONS IN THE WEB CONSOLE 44
4.8. SETTING UP AN NFS CLIENT WITH KERBEROS IN A RED HAT IDENTITY MANAGEMENT DOMAIN 45
4.9. CONFIGURING GNOME TO STORE USER SETTINGS ON HOME DIRECTORIES HOSTED ON AN NFS SHARE 46
4.10. FREQUENTLY USED NFS MOUNT OPTIONS 47
4.11. ENABLING CLIENT-SIDE CACHING OF NFS CONTENT 48
4.11.1. How NFS caching works 48
4.11.2. Installing and configuring the cachefilesd service 50
CHAPTER 5. MOUNTING AN SMB SHARE 53
5.1. SUPPORTED SMB PROTOCOL VERSIONS 53
5.2. UNIX EXTENSIONS SUPPORT 53
5.3. MANUALLY MOUNTING AN SMB SHARE 54
5.4. MOUNTING AN SMB SHARE AUTOMATICALLY WHEN THE SYSTEM BOOTS 55
5.5. CREATING A CREDENTIALS FILE TO AUTHENTICATE TO AN SMB SHARE 56
5.6. PERFORMING A MULTI-USER SMB MOUNT 56
5.6.1. Mounting a share with the multiuser option 57
5.6.2. Verifying if an SMB share is mounted with the multiuser option 57
5.6.3. Accessing a share as a user 57
5.7. FREQUENTLY USED SMB MOUNT OPTIONS 58
CHAPTER 6. OVERVIEW OF PERSISTENT NAMING ATTRIBUTES 60
6.1. DISADVANTAGES OF NON-PERSISTENT NAMING ATTRIBUTES 60
6.2. FILE SYSTEM AND DEVICE IDENTIFIERS 60
File system identifiers 61
Device identifiers 61
Recommendations 61
6.3. DEVICE NAMES MANAGED BY THE UDEV MECHANISM IN /DEV/DISK/ 61
6.3.1. File system identifiers 61
The UUID attribute in /dev/disk/by-uuid/ 61
The Label attribute in /dev/disk/by-label/ 62
6.3.2. Device identifiers 62
The WWID attribute in /dev/disk/by-id/ 62
The Partition UUID attribute in /dev/disk/by-partuuid 63
The Path attribute in /dev/disk/by-path/ 63
6.4. THE WORLD WIDE IDENTIFIER WITH DM MULTIPATH 63
6.5. LIMITATIONS OF THE UDEV DEVICE NAMING CONVENTION 64
6.6. LISTING PERSISTENT NAMING ATTRIBUTES 65
6.7. MODIFYING PERSISTENT NAMING ATTRIBUTES 66
CHAPTER 7. PARTITION OPERATIONS WITH PARTED 67
7.1. VIEWING THE PARTITION TABLE WITH PARTED 67
7.2. CREATING A PARTITION TABLE ON A DISK WITH PARTED 68
7.3. CREATING A PARTITION WITH PARTED 69
7.4. REMOVING A PARTITION WITH PARTED 70
7.5. RESIZING A PARTITION WITH PARTED 72
CHAPTER 8. STRATEGIES FOR REPARTITIONING A DISK 74
8.1. USING UNPARTITIONED FREE SPACE 74
8.2. USING SPACE FROM AN UNUSED PARTITION 74
8.3. USING FREE SPACE FROM AN ACTIVE PARTITION 75
8.3.1. Destructive repartitioning 75
8.3.2. Non-destructive repartitioning 76
CHAPTER 9. GETTING STARTED WITH XFS 79
9.1. THE XFS FILE SYSTEM 79
9.2. COMPARISON OF TOOLS USED WITH EXT4 AND XFS 80
CHAPTER 10. CREATING AN XFS FILE SYSTEM 81
10.1. CREATING AN XFS FILE SYSTEM WITH MKFS.XFS 81
CHAPTER 11. BACKING UP AN XFS FILE SYSTEM 82
11.1. FEATURES OF XFS BACKUP 82
11.2. BACKING UP AN XFS FILE SYSTEM WITH XFSDUMP 82
CHAPTER 12. RESTORING AN XFS FILE SYSTEM FROM BACKUP 84
12.1. FEATURES OF RESTORING XFS FROM BACKUP 84
12.2. RESTORING AN XFS FILE SYSTEM FROM BACKUP WITH XFSRESTORE 84
12.3. INFORMATIONAL MESSAGES WHEN RESTORING AN XFS BACKUP FROM A TAPE 85
CHAPTER 13. INCREASING THE SIZE OF AN XFS FILE SYSTEM 86
13.1. INCREASING THE SIZE OF AN XFS FILE SYSTEM WITH XFS_GROWFS 86
CHAPTER 14. CONFIGURING XFS ERROR BEHAVIOR 87
14.1. CONFIGURABLE ERROR HANDLING IN XFS 87
14.2. CONFIGURATION FILES FOR SPECIFIC AND UNDEFINED XFS ERROR CONDITIONS 87
14.3. SETTING XFS BEHAVIOR FOR SPECIFIC CONDITIONS 88
14.4. SETTING XFS BEHAVIOR FOR UNDEFINED CONDITIONS 88
14.5. SETTING THE XFS UNMOUNT BEHAVIOR 89
CHAPTER 15. CHECKING AND REPAIRING A FILE SYSTEM 90
15.1. SCENARIOS THAT REQUIRE A FILE SYSTEM CHECK 90
15.2. POTENTIAL SIDE EFFECTS OF RUNNING FSCK 90
15.3. ERROR-HANDLING MECHANISMS IN XFS 91
Unclean unmounts 91
Corruption 91
15.4. CHECKING AN XFS FILE SYSTEM WITH XFS_REPAIR 92
15.5. REPAIRING AN XFS FILE SYSTEM WITH XFS_REPAIR 93
15.6. ERROR HANDLING MECHANISMS IN EXT2, EXT3, AND EXT4 94
15.7. CHECKING AN EXT2, EXT3, OR EXT4 FILE SYSTEM WITH E2FSCK 94
15.8. REPAIRING AN EXT2, EXT3, OR EXT4 FILE SYSTEM WITH E2FSCK 94
CHAPTER 16. MOUNTING FILE SYSTEMS 96
16.1. THE LINUX MOUNT MECHANISM 96
16.2. LISTING CURRENTLY MOUNTED FILE SYSTEMS 96
16.3. MOUNTING A FILE SYSTEM WITH MOUNT 97
16.4. MOVING A MOUNT POINT 98
16.5. UNMOUNTING A FILE SYSTEM WITH UMOUNT 98
16.6. MOUNTING AND UNMOUNTING FILE SYSTEMS IN THE WEB CONSOLE 99
16.7. COMMON MOUNT OPTIONS 99
CHAPTER 17. SHARING A MOUNT ON MULTIPLE MOUNT POINTS 101
17.1. TYPES OF SHARED MOUNTS 101
17.2. CREATING A PRIVATE MOUNT POINT DUPLICATE 101
17.3. CREATING A SHARED MOUNT POINT DUPLICATE 102
17.4. CREATING A SLAVE MOUNT POINT DUPLICATE 104
17.5. PREVENTING A MOUNT POINT FROM BEING DUPLICATED 105
CHAPTER 18. PERSISTENTLY MOUNTING FILE SYSTEMS 106
18.1. THE /ETC/FSTAB FILE 106
18.2. ADDING A FILE SYSTEM TO /ETC/FSTAB 107
CHAPTER 19. MOUNTING FILE SYSTEMS ON DEMAND 108
19.1. THE AUTOFS SERVICE 108
19.2. THE AUTOFS CONFIGURATION FILES 108
19.3. CONFIGURING AUTOFS MOUNT POINTS 110
19.4. AUTOMOUNTING NFS SERVER USER HOME DIRECTORIES WITH AUTOFS SERVICE 111
19.5. OVERRIDING OR AUGMENTING AUTOFS SITE CONFIGURATION FILES 111
19.6. USING LDAP TO STORE AUTOMOUNTER MAPS 113
19.7. USING SYSTEMD.AUTOMOUNT TO MOUNT A FILE SYSTEM ON DEMAND WITH /ETC/FSTAB 114
19.8. USING SYSTEMD.AUTOMOUNT TO MOUNT A FILE SYSTEM ON-DEMAND WITH A MOUNT UNIT 115
CHAPTER 20. USING SSSD COMPONENT FROM IDM TO CACHE THE AUTOFS MAPS 116
20.1. CONFIGURING AUTOFS MANUALLY TO USE IDM SERVER AS AN LDAP SERVER 116
20.2. CONFIGURING SSSD TO CACHE AUTOFS MAPS 117
CHAPTER 21. SETTING READ-ONLY PERMISSIONS FOR THE ROOT FILE SYSTEM 119
21.1. FILES AND DIRECTORIES THAT ALWAYS RETAIN WRITE PERMISSIONS 119
21.2. CONFIGURING THE ROOT FILE SYSTEM TO MOUNT WITH READ-ONLY PERMISSIONS ON BOOT 120
CHAPTER 22. LIMITING STORAGE SPACE USAGE ON XFS WITH QUOTAS 121
22.1. DISK QUOTAS 121
22.2. THE XFS_QUOTA TOOL 121
22.3. FILE SYSTEM QUOTA MANAGEMENT IN XFS 121
22.4. ENABLING DISK QUOTAS FOR XFS 122
22.5. REPORTING XFS USAGE 122
22.6. MODIFYING XFS QUOTA LIMITS 123
22.7. SETTING PROJECT LIMITS FOR XFS 124
CHAPTER 23. LIMITING STORAGE SPACE USAGE ON EXT4 WITH QUOTAS 125
23.1. INSTALLING THE QUOTA TOOL 125
23.2. ENABLING QUOTA FEATURE ON FILE SYSTEM CREATION 125
23.3. ENABLING QUOTA FEATURE ON EXISTING FILE SYSTEMS 125
23.4. ENABLING QUOTA ENFORCEMENT 126
23.5. ASSIGNING QUOTAS PER USER 127
23.6. ASSIGNING QUOTAS PER GROUP 128
23.7. ASSIGNING QUOTAS PER PROJECT 129
23.8. SETTING THE GRACE PERIOD FOR SOFT LIMITS 130
23.9. TURNING FILE SYSTEM QUOTAS OFF 130
23.10. REPORTING ON DISK QUOTAS 130
CHAPTER 24. DISCARDING UNUSED BLOCKS 132
Requirements 132
24.1. TYPES OF BLOCK DISCARD OPERATIONS 132
Recommendations 132
24.2. PERFORMING BATCH BLOCK DISCARD 132
24.3. ENABLING ONLINE BLOCK DISCARD 133
24.4. ENABLING PERIODIC BLOCK DISCARD 133
CHAPTER 25. SETTING UP STRATIS FILE SYSTEMS 135
25.1. WHAT IS STRATIS 135
25.2. COMPONENTS OF A STRATIS VOLUME 135
25.3. BLOCK DEVICES USABLE WITH STRATIS 136
Supported devices 136
Unsupported devices 136
25.4. INSTALLING STRATIS 136
25.5. CREATING AN UNENCRYPTED STRATIS POOL 137
25.6. CREATING AN UNENCRYPTED STRATIS POOL BY USING THE WEB CONSOLE 138
25.7. CREATING AN ENCRYPTED STRATIS POOL 139
25.8. CREATING AN ENCRYPTED STRATIS POOL BY USING THE WEB CONSOLE 141
25.9. RENAMING A STRATIS POOL BY USING THE WEB CONSOLE 143
CHAPTER 26. EXTENDING A STRATIS VOLUME WITH ADDITIONAL BLOCK DEVICES 153
26.1. COMPONENTS OF A STRATIS VOLUME 153
26.2. ADDING BLOCK DEVICES TO A STRATIS POOL 153
26.3. ADDING A BLOCK DEVICE TO A STRATIS POOL BY USING THE WEB CONSOLE 154
26.4. ADDITIONAL RESOURCES 155
CHAPTER 27. MONITORING STRATIS FILE SYSTEMS 156
27.1. STRATIS SIZES REPORTED BY DIFFERENT UTILITIES 156
27.2. DISPLAYING INFORMATION ABOUT STRATIS VOLUMES 156
27.3. VIEWING A STRATIS POOL BY USING THE WEB CONSOLE 157
27.4. ADDITIONAL RESOURCES 158
CHAPTER 28. USING SNAPSHOTS ON STRATIS FILE SYSTEMS 159
28.1. CHARACTERISTICS OF STRATIS SNAPSHOTS 159
28.2. CREATING A STRATIS SNAPSHOT 159
28.3. ACCESSING THE CONTENT OF A STRATIS SNAPSHOT 159
28.4. REVERTING A STRATIS FILE SYSTEM TO A PREVIOUS SNAPSHOT 160
28.5. REMOVING A STRATIS SNAPSHOT 161
28.6. ADDITIONAL RESOURCES 161
CHAPTER 29. REMOVING STRATIS FILE SYSTEMS 162
29.1. COMPONENTS OF A STRATIS VOLUME 162
29.2. REMOVING A STRATIS FILE SYSTEM 162
29.3. DELETING A FILE SYSTEM FROM A STRATIS POOL BY USING THE WEB CONSOLE 163
29.4. REMOVING A STRATIS POOL 164
29.5. DELETING A STRATIS POOL BY USING THE WEB CONSOLE 165
29.6. ADDITIONAL RESOURCES 166
CHAPTER 30. GETTING STARTED WITH AN EXT4 FILE SYSTEM 167
30.1. FEATURES OF AN EXT4 FILE SYSTEM 167
30.2. CREATING AN EXT4 FILE SYSTEM 167
30.3. MOUNTING AN EXT4 FILE SYSTEM 169
30.4. RESIZING AN EXT4 FILE SYSTEM 169
30.5. COMPARISON OF TOOLS USED WITH EXT4 AND XFS 170
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
To provide feedback, enter your suggestion for improvement in the Description field and include links to
the relevant parts of the documentation.
CHAPTER 1. OVERVIEW OF AVAILABLE FILE SYSTEMS
The following sections describe the file systems that Red Hat Enterprise Linux 9 includes by default and
provide recommendations on the most suitable file system for your application.
Disk or local FS: XFS. XFS is the default file system in RHEL. Red Hat recommends deploying XFS as
your local file system unless there are specific reasons to do otherwise: for example, compatibility or
corner cases around performance.
Network or client-and-server FS: NFS. Use NFS to share files between multiple systems on the same
network.
For example, a local file system is the only choice for internal SATA or SAS disks, and is used when your
server has internal hardware RAID controllers with local drives. Local file systems are also the most
common file systems used on SAN attached storage when the device exported on the SAN is not
shared.
All local file systems are POSIX-compliant and are fully compatible with all supported Red Hat
Enterprise Linux releases. POSIX-compliant file systems provide support for a well-defined set of
system calls, such as read(), write(), and seek().
When choosing a file system, consider how large the file system needs to be, what unique features it
must have, and how it performs under your workload.
XFS
ext4
Reliability
Metadata journaling, which ensures file system integrity after a system crash by keeping a
record of file system operations that can be replayed when the system is restarted and the
file system remounted
Quota journaling. This avoids the need for lengthy quota consistency checks after a crash.
Allocation schemes
Extent-based allocation
Delayed allocation
Space pre-allocation
Other features
Online defragmentation
Extended attributes (xattr). This allows the system to associate several additional
name/value pairs per file.
Project or directory quotas. This allows quota restrictions over a directory tree.
Subsecond timestamps
Performance characteristics
XFS delivers high performance on large systems with enterprise workloads. A large system is one with a
relatively high number of CPUs, multiple HBAs, and connections to external disk arrays. XFS also
performs well on smaller systems that have a multi-threaded, parallel I/O workload.
XFS has a relatively low performance for single threaded, metadata-intensive workloads: for example, a
workload that creates or deletes large numbers of small files in a single thread.
The ext4 driver can read and write to ext2 and ext3 file systems, but the ext4 file system format is not
compatible with ext2 and ext3 drivers.
Extent-based metadata
Delayed allocation
Journal checksumming
The extent-based metadata and the delayed allocation features provide a more compact and efficient
way to track utilized space in a file system. These features improve file system performance and reduce
the space consumed by metadata. Delayed allocation allows the file system to postpone selection of the
permanent location for newly written user data until the data is flushed to disk. This enables higher
performance since it can allow for larger, more contiguous allocations, allowing the file system to make
decisions with much better information.
File system repair time using the fsck utility in ext4 is much faster than in ext2 and ext3. Some file
system repairs have demonstrated up to a six-fold increase in performance.
Running the quotacheck command on an XFS file system has no effect. The first time you turn on
quota accounting, XFS checks quotas automatically.
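For illustration, the following commands show quota accounting being enabled at mount time and then
reported with the xfs_quota tool; the device and mount point names are assumptions, not values from
this guide:
# mount -o uquota /dev/sdb1 /mnt/data
# xfs_quota -x -c 'report -h' /mnt/data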
Certain applications cannot properly handle inode numbers larger than 2^32 on an XFS file system.
These applications might cause the failure of 32-bit stat calls with the EOVERFLOW return value.
Inode numbers can exceed 2^32, for example, on very large XFS file systems.
If your application fails with large inode numbers, mount the XFS file system with the -o inode32
option to enforce inode numbers below 2^32. Note that using inode32 does not affect inodes that are
already allocated with 64-bit numbers.
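A minimal example of such a mount, assuming the file system is on the hypothetical device /dev/sdb1:
# mount -o inode32 /dev/sdb1 /mnt/data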
IMPORTANT
Do not use the inode32 option unless a specific environment requires it. The inode32
option changes allocation behavior. As a consequence, the ENOSPC error might
occur if no space is available to allocate inodes in the lower disk blocks.
XFS
For large-scale deployments, use XFS, particularly when handling large files (hundreds of
megabytes) and high I/O concurrency. XFS performs optimally in environments with high bandwidth
(greater than 200MB/s) and more than 1000 IOPS. However, it consumes more CPU resources for
metadata operations compared to ext4 and does not support file system shrinking.
ext4
For smaller systems or environments with limited I/O bandwidth, ext4 might be a better fit. It
performs better in single-threaded, lower I/O workloads and environments with lower throughput
requirements. ext4 also supports offline shrinking, which can be beneficial if resizing the file system is
a requirement.
Benchmark your application’s performance on your target server and storage system to ensure the
selected file system meets your performance and scalability requirements.
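As an illustration only, a simple sequential-write test with the fio tool might look like the following; fio is
not covered by this guide, and the file path and sizes are assumptions:
# fio --name=seqwrite --filename=/mnt/data/fio-testfile --size=1G --rw=write --bs=1M --direct=1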
Such file systems are built from one or more servers that export a set of file systems to one or more
clients. The client nodes do not have access to the underlying block storage, but rather interact with the
storage using a protocol that allows for better access control.
The most common client/server file system for RHEL customers is the NFS file system.
RHEL provides both an NFS server component to export a local file system over the network
and an NFS client to import these file systems.
RHEL also includes a CIFS client that supports the popular Microsoft SMB file servers for
Windows interoperability. The userspace Samba server provides Windows clients with a
Microsoft SMB service from a RHEL server.
Performance characteristics
Shared disk file systems do not always perform as well as local file systems running on the same
system due to the computational cost of the locking overhead. Shared disk file systems perform well
with workloads where each node writes almost exclusively to a particular set of files that are not
shared with other nodes or where a set of files is shared in an almost exclusively read-only manner
across a set of nodes. This results in a minimum of cross-node cache invalidation and can maximize
performance.
Setting up a shared disk file system is complex, and tuning an application to perform well on a shared
disk file system can be challenging.
Red Hat Enterprise Linux provides the GFS2 file system. GFS2 comes tightly integrated with
the Red Hat Enterprise Linux High Availability Add-On and the Resilient Storage Add-On.
Red Hat Enterprise Linux supports GFS2 on clusters that range in size from 2 to 16 nodes.
NFS-based network file systems are an extremely common and popular choice for
environments that provide NFS servers.
Network file systems can be deployed using very high-performance networking technologies
like Infiniband or 10 Gigabit Ethernet. This means that you should not turn to shared storage file
systems just to get raw bandwidth to your storage. If the speed of access is of prime
importance, then use NFS to export a local file system like XFS.
Shared storage file systems are not easy to set up or to maintain, so you should deploy them
only when you cannot provide your required availability with either local or network file systems.
A shared storage file system in a clustered environment helps reduce downtime by eliminating
the steps needed for unmounting and mounting that need to be done during a typical fail-over
scenario involving the relocation of a high-availability service.
Red Hat recommends that you use network file systems unless you have a specific use case for shared
storage file systems. Use shared storage file systems primarily for deployments that need to provide
high-availability services with minimum downtime and have stringent service-level requirements.
Red Hat Enterprise Linux 9 provides the Stratis volume manager. Stratis uses XFS for the
file system layer and integrates it with LVM, Device Mapper, and other components.
Stratis was first released in Red Hat Enterprise Linux 8.0. It was conceived to fill the gap created when
Red Hat deprecated Btrfs. Stratis 1.0 is an intuitive, command line-based volume manager that can
perform significant storage management operations while hiding the complexity from the user:
Volume management
Pool creation
Snapshots
Stratis offers powerful features, but currently lacks certain capabilities of other offerings that it
might be compared to, such as Btrfs or ZFS. Most notably, it does not support CRCs with self healing.
CHAPTER 2. MANAGING LOCAL STORAGE BY USING RHEL SYSTEM ROLES
Using the storage role enables you to automate administration of file systems on disks and logical
volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7.
For more information about RHEL system roles and how to apply them, see Introduction to
RHEL system roles.
NOTE
The storage role can create a file system only on an unpartitioned, whole disk or a logical
volume (LV). It cannot create the file system on a partition.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.storage
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
The volume name (barefs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.
You can omit the fs_type: xfs line because XFS is the default file system in RHEL 9.
To create the file system on an LV, provide the LVM setup under the disks: attribute,
including the enclosing volume group. For details, see Creating or resizing a logical volume
by using the storage RHEL system role.
Do not provide the path to the LV device.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
The example Ansible playbook applies the storage role to immediately and persistently mount an XFS file system.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.storage
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: xfs
        mount_point: /mnt/data
        mount_user: somebody
        mount_group: somegroup
        mount_mode: 0755
This playbook adds the file system to the /etc/fstab file, and mounts the file system
immediately.
If the file system on the /dev/sdb device or the mount point directory do not exist, the
playbook creates them.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
If the volume group does not exist, the role creates it. If a logical volume exists in the volume group, it is
resized if the size does not match what is specified in the playbook.
If you are reducing a logical volume, to prevent data loss you must ensure that the file system on that
logical volume is not using the space in the logical volume that is being reduced.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create logical volume
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: myvg
            disks:
              - sda
              - sdb
              - sdc
            volumes:
              - name: mylv
                size: 2G
                fs_type: ext4
                mount_point: /mnt/data
size: <size>
You must specify the size by using units (for example, GiB) or percentage (for example,
60%).
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Verify that the specified volume has been created or resized to the requested size:
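One way to do this from the control node is an ad hoc command; the volume group name matches the
example playbook, and the default inventory is assumed:
$ ansible managed-node-01.example.com -m command -a 'lvs myvg' -b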
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
You can mount an XFS file system with the online block discard option to automatically discard unused
blocks.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Enable online block discard
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb
            fs_type: xfs
            mount_point: /mnt/data
            mount_options: discard
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
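One way to confirm that the discard option is present in the mount options, assuming the default
inventory on the control node:
$ ansible managed-node-01.example.com -m command -a 'findmnt /mnt/data'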
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
The example Ansible playbook applies the storage role to create and mount an Ext4 file system.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.storage
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: ext4
        fs_label: label-name
        mount_point: /mnt/data
The playbook persistently mounts the file system at the /mnt/data directory.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
The example Ansible playbook applies the storage role to create and mount an Ext3 file system.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- hosts: all
  roles:
    - rhel-system-roles.storage
  vars:
    storage_volumes:
      - name: barefs
        type: disk
        disks:
          - sdb
        fs_type: ext3
        fs_label: label-name
        mount_point: /mnt/data
        mount_user: somebody
        mount_group: somegroup
        mount_mode: 0755
The playbook persistently mounts the file system at the /mnt/data directory.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Create a disk device with swap
  hosts: managed-node-01.example.com
  roles:
    - rhel-system-roles.storage
  vars:
    storage_volumes:
      - name: swap_fs
        type: disk
        disks:
          - /dev/sdb
        size: 15 GiB
        fs_type: swap
The volume name (swap_fs in the example) is currently arbitrary. The storage role identifies
the volume by the disk device listed under the disks: attribute.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
WARNING
Device names might change in certain circumstances, for example, when you add a
new disk to a system. Therefore, to prevent data loss, use persistent naming
attributes in the playbook. For more information about persistent naming attributes,
see Persistent naming attributes.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create a RAID on sdd, sde, sdf, and sdg
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_safe_mode: false
        storage_volumes:
          - name: data
            type: raid
            disks: [sdd, sde, sdf, sdg]
            raid_level: raid0
            raid_chunk_size: 32 KiB
            mount_point: /mnt/data
            state: present
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
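One possible check from the control node; the /dev/md/data device name is an assumption derived from
the volume name in the example playbook:
$ ansible managed-node-01.example.com -m command -a 'mdadm --detail /dev/md/data' -b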
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure LVM pool with RAID
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_safe_mode: false
        storage_pools:
          - name: my_pool
            type: lvm
            disks: [sdh, sdi]
            raid_level: raid1
            volumes:
              - name: my_volume
                size: "1 GiB"
                mount_point: "/mnt/app/shared"
                fs_type: xfs
                state: present
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
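One possible check from the control node; the pool name matches the example playbook, and the
segtype column shows the RAID layout of the logical volume:
$ ansible managed-node-01.example.com -m command -a 'lvs -a -o name,segtype my_pool' -b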
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Managing RAID
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure stripe size for RAID LVM volumes
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_safe_mode: false
        storage_pools:
          - name: my_pool
            type: lvm
            disks: [sdh, sdi]
            volumes:
              - name: my_volume
                size: "1 GiB"
                mount_point: "/mnt/app/shared"
                fs_type: xfs
                raid_level: raid0
                raid_stripe_size: "256 KiB"
                state: present
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
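One possible check from the control node; the stripesize reporting field is used here as an assumption
about which lvs output you want to inspect, and the pool name matches the example playbook:
$ ansible managed-node-01.example.com -m command -a 'lvs -o name,stripesize my_pool' -b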
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Managing RAID
NOTE
Because the storage system role uses LVM-VDO, only one volume can be created per pool.
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create LVM-VDO volume under volume group 'myvg'
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: myvg
            disks:
              - /dev/sdb
            volumes:
              - name: mylv1
                compression: true
                deduplication: true
                vdo_pool_size: 10 GiB
                size: 30 GiB
                mount_point: /mnt/app/shared
vdo_pool_size: <size>
The actual size that the volume takes on the device. You can specify the size in human-
readable format, such as 10 GiB. If you do not specify a unit, it defaults to bytes.
size: <size>
The virtual size of VDO volume.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
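One possible check from the control node; listing all logical volumes in the volume group shows both the
VDO pool and the mylv1 volume from the example playbook:
$ ansible managed-node-01.example.com -m command -a 'lvs -a myvg' -b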
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Store the sensitive variables in an encrypted file:
a. Create the vault:
$ ansible-vault create ~/vault.yml
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
luks_password: <password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: Create and configure a volume encrypted with LUKS
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb
            fs_type: xfs
            fs_label: <label>
            mount_point: /mnt/data
            encryption: true
            encryption_password: "{{ luks_password }}"
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
3. Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
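The output fragment below can be produced, for example, by dumping the LUKS header of the device
used in the example playbook:
# cryptsetup luksDump /dev/sdb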
4e4e7970-1822-470e-b55a-e91efe5d0f5c
Data segments:
0: crypt
offset: 16777216 [bytes]
length: (whole device)
cipher: aes-xts-plain64
sector: 512 [bytes]
...
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Ansible vault
Resource sharing
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
lvmlockd is configured on the managed node. For more information, see Configuring LVM to
share SAN disks among multiple machines.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  become: true
  tasks:
    - name: Create shared LVM device
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: vg1
            disks: /dev/vdb
            type: lvm
            shared: true
            state: present
            volumes:
              - name: lv1
                size: 4g
                mount_point: /opt/test1
        storage_safe_mode: false
        storage_use_partitions: true
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Resize LVM PV size
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: myvg
            disks: ["sdf"]
            type: lvm
            grow_to_fill: true
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
2. Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
3. Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
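One possible check from the control node; the device name matches the example playbook, and the
PSize column should now report the full device size:
$ ansible managed-node-01.example.com -m command -a 'pvs /dev/sdf' -b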
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
To secure your data, you can create an encrypted Stratis pool with the storage RHEL system role. In
addition to a passphrase, you can use Clevis and Tang or TPM protection as an encryption method.
IMPORTANT
Prerequisites
You have prepared the control node and the managed nodes
You are logged in to the control node as a user who can run playbooks on the managed nodes.
The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
1. Store the sensitive variables in an encrypted file:
a. Create the vault:
$ ansible-vault create ~/vault.yml
b. After the ansible-vault create command opens an editor, enter the sensitive data in the
<key>: <value> format:
luks_password: <password>
c. Save the changes, and close the editor. Ansible encrypts the data in the vault.
2. Create a playbook file, for example ~/playbook.yml, with the following content:
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  vars_files:
    - vault.yml
  tasks:
    - name: Create a new encrypted Stratis pool with Clevis and Tang
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: mypool
            disks:
              - sdd
              - sde
            type: stratis
            encryption: true
            encryption_password: "{{ luks_password }}"
            encryption_clevis_pin: tang
            encryption_tang_url: tang-server.example.com:7500
encryption_password
Password or passphrase used to unlock the LUKS volumes.
encryption_clevis_pin
Clevis method that you can use to encrypt the created pool. You can use tang and tpm2.
encryption_tang_url
URL of the Tang server.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-
system-roles.storage/README.md file on the control node.
3. Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
4. Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
Verify that the pool was created with Clevis and Tang configured:
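The output fragment below can be produced, for example, by running the stratis report command on the
managed node; this assumes the stratis-cli tool is installed there:
# stratis report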
"clevis_pin": "tang",
"in_use": true,
"key_description": "blivet-mypool",
Additional resources
/usr/share/ansible/roles/rhel-system-roles.storage/README.md file
/usr/share/doc/rhel-system-roles/storage/ directory
Ansible vault
CHAPTER 3. MANAGING PARTITIONS USING THE WEB CONSOLE
Besides the list of partitions formatted with file systems, you can also use the page for creating new
storage.
Prerequisites
Procedure
You can also use the drop-down menu in the top-right corner to create new local or networked
storage.
Create a partition
Prerequisites
The web console must be installed and accessible. For details, see Installing the web console .
An unformatted volume connected to the system is visible in the Storage table of the Storage
tab.
Procedure
3. In the Storage table, click the device which you want to partition to open the page and options
for that device.
4. On the device page, click the menu button, ⋮, and select Create partition table.
a. Partitioning:
Compatible with modern systems and hard disks larger than 2 TB (GPT)
No partitioning
b. Overwrite:
Select the Overwrite existing data with zeros checkbox if you want the RHEL web
console to rewrite the whole disk with zeros. This option is slower because the program
has to go through the whole disk, but it is more secure. Use this option if the disk
includes any data and you need to overwrite it.
If you do not select the Overwrite existing data with zeros checkbox, the RHEL web
console rewrites only the disk header. This increases the speed of formatting.
6. Click Initialize.
7. Click the menu button, ⋮, next to the partition table you created. It is named Free space by
default.
9. In the Create partition dialog box, enter a Name for the file system.
The XFS file system supports large logical volumes, switching physical drives online without an
outage, and growing an existing file system. Leave this file system selected if you do not
have a strong preference for a different one.
Logical volumes
An additional option is to enable encryption of the partition with LUKS (Linux Unified Key Setup),
which allows you to encrypt the volume with a passphrase.
13. Select the Overwrite existing data with zeros checkbox if you want the RHEL web console to
rewrite the whole disk with zeros. This option is slower because the program has to go through
the whole disk, but it is more secure. Use this option if the disk includes any data and you need to
overwrite it.
If you do not select the Overwrite existing data with zeros checkbox, the RHEL web console
rewrites only the disk header. This increases the speed of formatting.
14. If you want to encrypt the volume, select the type of encryption in the Encryption drop-down
menu.
If you do not want to encrypt the volume, select No encryption.
15. In the At boot drop-down menu, select when you want to mount the volume.
a. Select the Mount read only checkbox if you want to mount the volume as a read-only
logical volume.
b. Select the Custom mount options checkbox and add the mount options if you want to
change the default mount option.
If you want to create and mount the partition, click the Create and mount button.
If you want to only create the partition, click the Create only button.
Formatting can take several minutes depending on the volume size and which formatting
options are selected.
Verification
To verify that the partition has been successfully added, switch to the Storage tab and check
the Storage table and verify whether the new partition is listed.
Prerequisites
Procedure
4. On the device page and in the GPT partitions section, click the menu button, ⋮ next to the
partition you want to delete.
Verification
To verify that the partition has been successfully removed, switch to the Storage tab and check
the Storage table.
CHAPTER 4. MOUNTING NFS SHARES
rpc.idmapd (NFSv4): This process provides NFSv4 client and server upcalls, which map between
NFSv4 names (strings in the form of user@domain) and local user and group IDs.
rpc.statd (NFSv3): This service provides notification to other NFSv3 clients when the local host
reboots, and to the kernel when a remote NFSv3 host reboots.
Additional resources
Procedure
1. By default, NFSv3 RPC services use random ports. To enable a firewall configuration, configure
fixed port numbers in the /etc/nfs.conf file:
a. In the [lockd] section, set a fixed port number for the nlockmgr RPC service, for example:
port=5555
With this setting, the service automatically uses this port number for both the UDP and TCP
protocol.
b. In the [statd] section, set a fixed port number for the rpc.statd service, for example:
port=6666
With this setting, the service automatically uses this port number for both the UDP and TCP
protocol.
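The procedure then requires opening the configured ports in the client firewall. A minimal sketch,
assuming firewalld is used and the example port numbers from above:
# firewall-cmd --permanent --add-port={5555/tcp,5555/udp,6666/tcp,6666/udp}
# firewall-cmd --reload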
Prerequisites
Procedure
WARNING
You can experience conflicts in your NFSv4 clientid and its sudden expiration if your NFS clients
have the same short hostname. To avoid any possible sudden expiration of your NFSv4 clientid, use
either unique hostnames for NFS clients or configure an identifier on each container, depending on
what system you are using. For more information, see the Red Hat Knowledgebase solution NFSv4
clientid was expired suddenly due to use same hostname on several NFS clients.
Procedure
For example, to mount the /nfs/projects share from the server.example.com NFS server to
/mnt, enter:
# mount server.example.com:/nfs/projects /mnt
Verification
As a user who has permissions to access the NFS share, display the content of the mounted
share:
$ ls -l /mnt/
Procedure
1. Edit the /etc/fstab file and add a line for the share that you want to mount:
For example, to mount the /nfs/home share from the server.example.com NFS server to
/home, add a line similar to the one shown below.
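An illustrative fstab entry; the defaults mount option is an assumption, so use the options that your
environment requires:
server.example.com:/nfs/home    /home    nfs    defaults    0 0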
2. To mount the share immediately, without waiting for the next system boot, enter:
# mount /home
Verification
As a user who has permissions to access the NFS share, display the content of the mounted
share:
$ ls -l /home/
Additional resources
Prerequisites
Procedure
2. Click Storage.
5. In the New NFS Mount dialog box, enter the server or IP address of the remote server.
6. In the Path on Server field, enter the path to the directory that you want to mount.
7. In the Local Mount Point field, enter the path to the directory on your local system where you
want to mount the NFS.
8. In the Mount options check box list, select how you want to mount the NFS. You can select
multiple options depending on your requirements.
Check the Mount at boot box if you want the directory to be reachable even after you
restart the local system.
Check the Mount read only box if you do not want to change the content of the NFS.
Check the Custom mount options box and add the mount options if you want to change
the default mount option.
9. Click Add.
Verification
Open the mounted directory and verify that the content is accessible.
Custom mount options can help you to troubleshoot the connection or change parameters of the NFS
mount such as changing timeout limits or configuring authentication.
Prerequisites
Procedure
1. Log in to the RHEL 9 web console. For details, see Logging in to the web console .
2. Click Storage.
3. In the Storage table, click the NFS mount you want to adjust.
5. Click Edit.
sec=krb5: The files on the NFS server can be secured by Kerberos authentication. Both the
NFS client and server have to support Kerberos authentication.
For a complete list of the NFS mount options, enter man nfs in the command line.
8. Click Apply.
9. Click Mount.
Verification
Open the mounted directory and verify that the content is accessible.
Prerequisites
The NFS client is enrolled in a Red Hat Identity Management (IdM) domain.
Procedure
# kinit admin
IdM automatically created the host principal when you joined the host to the IdM domain.
# klist -k /etc/krb5.keytab
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
---- --------------------------------------------------------------------------
6 host/[email protected]
6 host/[email protected]
6 host/[email protected]
6 host/[email protected]
# ipa-client-automount
Searching for IPA server...
IPA server: DNS discovery
Location: default
Continue to configure the system with these values? [no]: yes
Configured /etc/idmapd.conf
Restarting sssd, waiting for it to become available.
Started autofs
Verification
1. Log in as an IdM user who has permissions to write on the mounted share.
$ kinit
$ touch /mnt/test.txt
$ ls -l /mnt/test.txt
-rw-r--r--. 1 admin users 0 Feb 15 11:54 /mnt/test.txt
Additional resources
This change affects all users on the host because it changes how dconf manages user settings and
configurations stored in the home directories.
Procedure
1. Add the following line to the beginning of the /etc/dconf/profile/user file. If the file does not
exist, create it.
service-db:keyfile/user
With this setting, dconf polls the keyfile back end to determine whether updates have been
made, so settings might not be updated immediately.
2. The changes take effect when the user logs out and logs back in.
lookupcache=mode
Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid
arguments for mode are all, none, or positive.
nfsvers=version
Specifies which version of the NFS protocol to use, where version is 3, 4, 4.0, 4.1, or 4.2. This is useful
for hosts that run multiple NFS servers, or to disable retrying a mount with lower versions. If no
version is specified, the client tries version 4.2 first, then negotiates down until it finds a version
supported by the server.
The option vers is identical to nfsvers, and is included in this release for compatibility reasons.
noacl
Turns off all ACL processing. This can be needed when interfacing with old Red Hat Enterprise Linux
versions that are not compatible with the recent ACL technology.
nolock
Disables file locking. This setting can be required when you connect to very old NFS servers.
noexec
Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a
non-Linux file system containing incompatible binaries.
nosuid
Disables the set-user-identifier and set-group-identifier bits. This prevents remote users from
gaining higher privileges by running a setuid program.
retrans=num
The number of times the NFS client retries a request before it attempts further recovery action. If
the retrans option is not specified, the NFS client tries each UDP request three times and each TCP
request twice.
timeo=num
The time in tenths of a second the NFS client waits for a response before it retries an NFS request.
For NFS over TCP, the default timeo value is 600 (60 seconds). The NFS client performs linear
backoff: After each retransmission the timeout is increased by timeo up to the maximum of 600
seconds.
port=num
Specifies the numeric value of the NFS server port. For NFSv3, if num is 0 (the default value), or not
specified, then mount queries the rpcbind service on the remote host for the port number to use.
For NFSv4, if num is 0, then mount queries the rpcbind service, but if it is not specified, the standard
NFS port number of TCP 2049 is used instead and the remote rpcbind is not checked anymore.
rsize=num and wsize=num
These options set the maximum number of bytes to be transferred in a single NFS read or write
operation.
There is no fixed default value for rsize and wsize. By default, NFS uses the largest possible value
that both the server and the client support. In Red Hat Enterprise Linux 9, the client and server
maximum is 1,048,576 bytes. For more information, see the Red Hat Knowledgebase solution What
are the default and maximum values for rsize and wsize with NFS mounts?.
sec=options
Security options to use for accessing files on the mounted export. The options value is a colon-
separated list of one or more security options.
By default, the client attempts to find a security option that both the client and the server support. If
the server does not support any of the selected options, the mount operation fails.
Available options:
sec=sys uses local UNIX UIDs and GIDs. These use AUTH_SYS to authenticate NFS
operations.
sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.
sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS
operations using secure checksums to prevent data tampering.
sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS
traffic to prevent traffic sniffing. This is the most secure setting, but it also involves the most
performance overhead.
Additional resources
FS-Cache is designed to be as transparent as possible to the users and administrators of a system. FS-
Cache allows a file system on a server to interact directly with a client’s local cache without creating an
over-mounted file system. With NFS, a mount option instructs the client to mount the NFS share with FS-Cache enabled. Mounting the share automatically loads two kernel modules: fscache and cachefiles. The cachefilesd daemon communicates with the kernel modules to implement the cache.
FS-Cache does not alter the basic operation of a file system that works over the network. It merely
provides that file system with a persistent place in which it can cache data. For example, a client can still
mount an NFS share whether or not FS-Cache is enabled. In addition, cached NFS can handle files that
will not fit into the cache (whether individually or collectively) as files can be partially cached and do not
have to be read completely up front. FS-Cache also hides all I/O errors that occur in the cache from the
client file system driver.
To provide caching services, FS-Cache needs a cache back end, the cachefiles service. FS-Cache requires a mounted block-based file system that supports block mapping (bmap) and extended attributes as its cache back end:
XFS
ext3
ext4
FS-Cache cannot arbitrarily cache any file system, whether through the network or otherwise: the
shared file system’s driver must be altered to allow interaction with FS-Cache, data storage or retrieval,
and metadata setup and validation. FS-Cache needs indexing keys and coherency data from the cached
file system to support persistence: indexing keys to match file system objects to cache objects, and
coherency data to determine whether the cache objects are still valid.
Using FS-Cache is a compromise between various factors. If FS-Cache is being used to cache NFS
traffic, it may slow the client down, but can massively reduce the network and server loading by satisfying
read requests locally without consuming network bandwidth.
Prerequisites
The file system mounted under the /var/cache/fscache/ directory is ext3, ext4, or xfs.
The file system mounted under /var/cache/fscache/ uses extended attributes, which is the
default if you created the file system on RHEL 8 or later.
Procedure
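The individual steps are not listed here; a typical setup sketch, assuming the cachefilesd package and its systemd service:
# dnf install cachefilesd
# systemctl enable --now cachefilesd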
Verification
1. Mount an NFS share with the fsc option to use the cache:
b. To mount a share permanently, add the fsc option to the entry in the /etc/fstab file:
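A sketch of both variants, assuming the server.example.com:/nfs/projects export used earlier in this chapter. To mount a share temporarily:
# mount -o fsc server.example.com:/nfs/projects /mnt/
The corresponding permanent /etc/fstab entry:
server.example.com:/nfs/projects    /mnt    nfs    fsc    0 0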
# cat /proc/fs/fscache/stats
Additional resources
/usr/share/doc/cachefilesd/README file
/usr/share/doc/kernel-doc-
<kernel_version>/Documentation/filesystems/caching/fscache.rst provided by the kernel-
doc package
To avoid coherency management problems between superblocks, all NFS superblocks that need to cache data have unique level 2 keys. Normally, two NFS mounts with the same source volume and options share a superblock, and therefore share the caching, even if they mount different directories within that volume.
The following two mounts likely share the superblock as they have the same mount options, especially because they come from the same partition on the NFS server:
If the mount options are different, they do not share the superblock:
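As a sketch, with placeholder host and export names: the first pair of mounts typically shares a superblock, while the second pair, which sets rsize differently, does not:
# mount home0:/disk0/fred /home/fred
# mount home0:/disk0/jim /home/jim
# mount -o rsize=8192 home0:/disk0/fred /home/fred
# mount -o rsize=65536 home0:/disk0/jim /home/jim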
NOTE
You cannot share caches between superblocks that have different communications or protocol parameters. For example, it is not possible to share caches between NFSv4.0 and NFSv3 or between NFSv4.1 and NFSv4.2 because they force different superblocks. Also, setting parameters such as the read size (rsize) differently prevents cache sharing because, again, it forces a different superblock.
Opening a file from a shared file system for direct I/O automatically bypasses the cache. This is
because this type of access must be direct to the server.
Opening a file from a shared file system for either direct I/O or writing flushes the cached copy
of the file. FS-Cache will not cache the file again until it is no longer opened for direct I/O or
writing.
Furthermore, this release of FS-Cache only caches regular NFS files. FS-Cache will not cache
directories, symlinks, device files, FIFOs, and sockets.
Cache culling is done on the basis of the percentage of blocks and the percentage of files available in
the underlying file system. There are settings in /etc/cachefilesd.conf which control six limits:
brun/frun: 10%
bcull/fcull: 7%
bstop/fstop: 3%
These limits are percentages of available space and available files; they are not 100 minus the percentage displayed by the df program.
IMPORTANT
Culling depends on both bxxx and fxxx pairs simultaneously; the user can not treat them
separately.
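A sketch of how these limits appear in /etc/cachefilesd.conf, using the default values listed above:
brun 10%
frun 10%
bcull 7%
fcull 7%
bstop 3%
fstop 3%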
CHAPTER 5. MOUNTING AN SMB SHARE
NOTE
In the context of SMB, you can find mentions about the Common Internet File System
(CIFS) protocol, which is a dialect of SMB. Both the SMB and CIFS protocol are
supported, and the kernel module and utilities involved in mounting SMB and CIFS shares
both use the name cifs.
Set and display Access Control Lists (ACL) in a security descriptor on SMB and CIFS shares
SMB 1
WARNING
The SMB1 protocol is deprecated due to known security issues, and is only
safe to use on a private network. The main reason that SMB1 is still
provided as a supported option is that currently it is the only SMB protocol
version that supports UNIX extensions. If you do not need to use UNIX
extensions on SMB, Red Hat strongly recommends using SMB2 or later.
SMB 2.0
SMB 2.1
SMB 3.0
SMB 3.1.1
NOTE
Depending on the protocol version, not all SMB features are implemented.
Samba uses the CAP_UNIX capability bit in the SMB protocol to provide the UNIX extensions feature.
These extensions are also supported by the cifs.ko kernel module. However, both Samba and the kernel
module support UNIX extensions only in the SMB 1 protocol.
Prerequisites
Procedure
1. Set the server min protocol parameter in the [global] section in the /etc/samba/smb.conf file
to NT1.
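A sketch of the corresponding smb.conf setting:
[global]
server min protocol = NT1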
2. Mount the share using the SMB 1 protocol by providing the -o vers=1.0 option to the mount
command. For example:
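A sketch, with placeholder server, share, and user names:
# mount -t cifs -o vers=1.0,username=user_name //server_name/share_name /mnt/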
By default, the kernel module uses SMB 2 or the highest later protocol version supported by the server. Passing the -o vers=1.0 option to the mount command forces the kernel module to use the SMB 1 protocol, which is required for using UNIX extensions.
Verification
# mount
...
//<server_name>/<share_name> on /mnt type cifs (...,unix,...)
If the unix entry is displayed in the list of mount options, UNIX extensions are enabled.
NOTE
Manually mounted shares are not mounted automatically again when you reboot the
system. To configure that Red Hat Enterprise Linux automatically mounts the share when
the system boots, see Mounting an SMB share automatically when the system boots .
Prerequisites
Procedure
Use the mount utility with the -t cifs parameter to mount an SMB share:
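One possible invocation, sketched with placeholder server, share, and user names:
# mount -t cifs -o username=user_name //server_name/share_name /mnt/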
In the -o parameter, you can specify options that are used to mount the share. For details, see
the OPTIONS section in the mount.cifs(8) man page and Frequently used mount options .
Verification
# ls -l /mnt/
total 4
drwxr-xr-x. 2 root root 8748 Dec 4 16:27 test.txt
drwxr-xr-x. 17 root root 4096 Dec 4 07:43 Demo-Directory
Prerequisites
Procedure
1. Add an entry for the share to the /etc/fstab file. For example:
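A sketch of such an entry, assuming the /root/smb.cred credentials file described later in this chapter:
//server_name/share_name    /mnt    cifs    credentials=/root/smb.cred    0 0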
IMPORTANT
To enable the system to mount a share automatically, you must store the user
name, password, and domain name in a credentials file. For details, see Creating
a credentials file to authenticate to an SMB share
In the fourth field of the row in the /etc/fstab file, specify mount options, such as the path to the
credentials file. For details, see the OPTIONS section in the mount.cifs(8) man page and
Frequently used mount options .
Verification
# mount /mnt/
Prerequisites
Procedure
1. Create a file, such as /root/smb.cred, and specify the user name, password, and domain name in that file:
username=user_name
password=password
domain=domain_name
2. Set the permissions to only allow the owner to access the file:
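A sketch, assuming the /root/smb.cred file from the previous step:
# chown user_name /root/smb.cred
# chmod 600 /root/smb.cred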
You can now pass the credentials=file_name mount option to the mount utility or use it in the
/etc/fstab file to mount the share without being prompted for the user name and password.
However, in certain situations, the administrator wants to mount a share automatically when the system
boots, but users should perform actions on the share’s content using their own credentials. The
multiuser mount option lets you configure this scenario.
IMPORTANT
To use the multiuser mount option, you must additionally set the sec mount option to a
security type that supports providing credentials in a non-interactive way, such as krb5 or
the ntlmssp option with a credentials file. For details, see Accessing a share as a user .
The root user mounts the share using the multiuser option and an account that has minimal access to
the contents of the share. Regular users can then provide their user name and password to the current
session’s kernel keyring using the cifscreds utility. If the user accesses the content of the mounted
share, the kernel uses the credentials from the kernel keyring instead of the one initially used to mount
the share.
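For example, a regular user could add credentials for the server to the session keyring as follows (a sketch; the user and server names are placeholders):
$ cifscreds add -u SMB_user_name server_name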
Optionally, verify if the share was successfully mounted with the multiuser option.
Prerequisites
Procedure
To mount a share automatically with the multiuser option when the system boots:
1. Create the entry for the share in the /etc/fstab file. For example:
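A sketch of such an entry, assuming the ntlmssp security mode and a credentials file in /root/smb.cred:
//server_name/share_name    /mnt    cifs    multiuser,sec=ntlmssp,credentials=/root/smb.cred    0 0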
# mount /mnt/
If you do not want to mount the share automatically when the system boots, mount it manually by
passing -o multiuser,sec=security_type to the mount command. For details about mounting an SMB
share manually, see Manually mounting an SMB share .
Procedure
# mount
...
//server_name/share_name on /mnt type cifs (sec=ntlmssp,multiuser,...)
If the multiuser entry is displayed in the list of mount options, the feature is enabled.
When the user performs operations in the directory that contains the mounted SMB share, the server
applies the file system permissions for this user, instead of the one initially used when the share was
mounted.
NOTE
Multiple users can perform operations using their own credentials on the mounted share
at the same time.
How the connection will be established with the server. For example, which SMB protocol
version is used when connecting to the server.
How the share will be mounted into the local file system. For example, if the system overrides
the remote file and directory permissions to enable multiple local users to access the content
on the server.
To set multiple options in the fourth field of the /etc/fstab file or in the -o parameter of a mount
command, separate them with commas. For example, see Mounting a share with the multiuser option .
Option Description
credentials=file_name Sets the path to the credentials file. See Authenticating to an SMB share
using a credentials file.
dir_mode=mode Sets the directory mode if the server does not support CIFS UNIX extensions.
file_mode=mode Sets the file mode if the server does not support CIFS UNIX extensions.
password=password Sets the password used to authenticate to the SMB server. Alternatively,
specify a credentials file using the credentials option.
seal Enables encryption support for connections using SMB 3.0 or a later
protocol version. Therefore, use seal together with the vers mount option
set to 3.0 or later. See the example in Manually mounting an SMB share.
sec=security_mode Sets the security mode, such as ntlmsspi, to enable NTLMv2 password hashing and packet signing. For a list of supported values, see the
option’s description in the mount.cifs(8) man page on your system.
If the server does not support the ntlmv2 security mode, use sec=ntlmssp,
which is the default.
For security reasons, do not use the insecure ntlm security mode.
username=user_name Sets the user name used to authenticate to the SMB server. Alternatively,
specify a credentials file using the credentials option.
vers=SMB_protocol_version Sets the SMB protocol version used for the communication with the server.
For a complete list, see the OPTIONS section in the mount.cifs(8) man page on your system.
Traditionally, non-persistent names in the form of /dev/sd(major number)(minor number) are used on
Linux to refer to storage devices. The major and minor number range and associated sd names are
allocated for each device when it is detected. This means that the association between the major and
minor number range and associated sd names can change if the order of device detection changes.
The parallelization of the system boot process detects storage devices in a different order with
each system boot.
A disk fails to power up or respond to the SCSI controller. This results in it not being detected by
the normal device probe. The disk is not accessible to the system and subsequent devices will
have their major and minor number range, including the associated sd names shifted down. For
example, if a disk normally referred to as sdb is not detected, a disk that is normally referred to
as sdc would instead appear as sdb.
A SCSI controller (host bus adapter, or HBA) fails to initialize, causing all disks connected to that
HBA to not be detected. Any disks connected to subsequently probed HBAs are assigned
different major and minor number ranges, and different associated sd names.
The order of driver initialization changes if different types of HBAs are present in the system.
This causes the disks connected to those HBAs to be detected in a different order. This might
also occur if HBAs are moved to different PCI slots on the system.
Disks connected to the system with Fibre Channel, iSCSI, or FCoE adapters might be
inaccessible at the time the storage devices are probed, due to a storage array or intervening
switch being powered off, for example. This might occur when a system reboots after a power
failure, if the storage array takes longer to come online than the system takes to boot. Although
some Fibre Channel drivers support a mechanism to specify a persistent SCSI target ID to
WWPN mapping, this does not cause the major and minor number ranges, and the associated sd
names to be reserved; it only provides consistent SCSI target ID numbers.
These reasons make it undesirable to use the major and minor number range or the associated sd
names when referring to devices, such as in the /etc/fstab file. There is the possibility that the wrong
device will be mounted and data corruption might result.
Occasionally, however, it is still necessary to refer to the sd names even when another mechanism is
used, such as when errors are reported by a device. This is because the Linux kernel uses sd names (and
also SCSI host/channel/target/LUN tuples) in kernel messages regarding the device.
File system identifiers are tied to the file system itself, while device identifiers are linked to the physical
block device. Understanding the difference is important for proper storage management.
Label
Device identifiers
Device identifiers are tied to a block device: for example, a disk or a partition. If you rewrite the device,
such as by formatting it with the mkfs utility, the device keeps the attribute, because it is not stored in
the file system.
Partition UUID
Serial number
Recommendations
Some file systems, such as logical volumes, span multiple devices. Red Hat recommends
accessing these file systems using file system identifiers rather than device identifiers.
Their content
A unique identifier
Although udev naming attributes are persistent, in that they do not change on their own across system
reboots, some are also configurable.
/dev/disk/by-uuid/3e6be9de-8139-11d1-9106-a43f08d823a6
You can use the UUID to refer to the device in the /etc/fstab file using the following syntax:
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6
You can configure the UUID attribute when creating a file system, and you can also change it later on.
For example:
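A sketch, assuming an ext4 file system on /dev/sda1; the first command sets the UUID at creation time, the second changes it on an existing file system:
# mkfs.ext4 -U new-uuid /dev/sda1
# tune2fs -U new-uuid /dev/sda1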
/dev/disk/by-label/Boot
You can use the label to refer to the device in the /etc/fstab file using the following syntax:
LABEL=Boot
You can configure the Label attribute when creating a file system, and you can also change it later on.
This identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital
Product Data (page 0x83) or Unit Serial Number (page 0x80).
Red Hat Enterprise Linux automatically maintains the proper mapping from the WWID-based device
name to a current /dev/sd name on that system. Applications can use the /dev/disk/by-id/ name to
reference the data on the disk, even if the path to the device changes, and even when accessing the
device from different systems.
NOTE
If you are using an NVMe device, you might run into a disk by-id naming change for some vendors if the serial number of your device has leading whitespace.
In addition to these persistent names provided by the system, you can also use udev rules to implement
persistent names of your own, mapped to the WWID of the storage.
/dev/disk/by-partuuid/4cd1448a-01 /dev/sda1
/dev/disk/by-partuuid/4cd1448a-02 /dev/sda2
/dev/disk/by-partuuid/4cd1448a-03 /dev/sda3
The Path attribute fails if any part of the hardware path (for example, the PCI ID, target port, or LUN
number) changes. The Path attribute is therefore unreliable. However, the Path attribute may be useful
in one of the following scenarios:
You need to identify a disk that you are planning to replace later.
If there are multiple paths from a system to a device, DM Multipath uses the WWID to detect this. DM
Multipath then presents a single "pseudo-device" in the /dev/mapper/wwid directory, such as
/dev/mapper/3600508b400105df70000e00000ac0000.
Host:Channel:Target:LUN
/dev/sd name
major:minor number
DM Multipath automatically maintains the proper mapping of each WWID-based device name to its
corresponding /dev/sd name on the system. These names are persistent across path changes, and they
are consistent when accessing the device from different systems.
When the user_friendly_names feature of DM Multipath is used, the WWID is mapped to a name of the
form /dev/mapper/mpathN. By default, this mapping is maintained in the file /etc/multipath/bindings.
These mpathN names are persistent as long as that file is maintained.
IMPORTANT
If you use user_friendly_names, then additional steps are required to obtain consistent
names in a cluster.
The device might not be accessible at the time the query is performed, because the udev mechanism relies on the ability to query the storage device when the udev rules are processed for a udev event. This is more likely to occur with Fibre Channel, iSCSI, or FCoE storage devices when the device is not located in the server chassis.
The kernel might send udev events at any time, causing the rules to be processed and possibly
causing the /dev/disk/by-*/ links to be removed if the device is not accessible.
There might be a delay between when the udev event is generated and when it is processed,
such as when a large number of devices are detected and the user-space udevd service takes
some amount of time to process the rules for each one. This might cause a delay between when
the kernel detects the device and when the /dev/disk/by-*/ names are available.
External programs such as blkid invoked by the rules might open the device for a brief period of
time, making the device inaccessible for other uses.
The device names managed by the udev mechanism in /dev/disk/ may change between major
releases, requiring you to update the links.
Procedure
To list the UUID and Label attributes, use the lsblk utility:
For example:
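One possible invocation, which prints the FSTYPE, LABEL, and UUID columns for all block devices:
$ lsblk --fs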
To list the PARTUUID attribute, use the lsblk utility with the --output +PARTUUID option:
For example:
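One possible invocation:
$ lsblk --output +PARTUUID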
To list the WWID attribute, examine the targets of symbolic links in the /dev/disk/by-id/
directory. For example:
Example 6.6. Viewing the WWID of all storage devices on the system
$ file /dev/disk/by-id/*
/dev/disk/by-id/ata-QEMU_HARDDISK_QM00001: symbolic link to ../../sda
/dev/disk/by-id/ata-QEMU_HARDDISK_QM00001-part1: symbolic link to ../../sda1
/dev/disk/by-id/ata-QEMU_HARDDISK_QM00001-part2: symbolic link to ../../sda2
/dev/disk/by-id/dm-name-rhel_rhel8-root: symbolic link to ../../dm-0
/dev/disk/by-id/dm-name-rhel_rhel8-swap: symbolic link to ../../dm-1
/dev/disk/by-id/dm-uuid-LVM-QIWtEHtXGobe5bewlIUDivKOz5ofkgFhP0RMFsNyySVihqEl2cWWbR7MjXJolD6g: symbolic link to ../../dm-1
/dev/disk/by-id/dm-uuid-LVM-QIWtEHtXGobe5bewlIUDivKOz5ofkgFhXqH2M45hD2H9nAf2qfWSrlRLhzfMyOKd: symbolic link to ../../dm-0
/dev/disk/by-id/lvm-pv-uuid-atlr2Y-vuMo-ueoH-CpMG-4JuH-AhEF-wu4QQm: symbolic link to ../../sda2
NOTE
Changing udev attributes happens in the background and might take a long time. The
udevadm settle command waits until the change is fully registered, which ensures that
your next command will be able to use the new attribute correctly.
Replace new-uuid with the UUID you want to set; for example, 1cdfbc07-1c90-4984-b5ec-
f61943f5ea50. You can generate a UUID using the uuidgen command.
Prerequisites
If you are modifying the attributes of an XFS file system, unmount it first.
Procedure
To change the UUID or Label attributes of an XFS file system, use the xfs_admin utility:
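A sketch, assuming an unmounted XFS file system on /dev/sda1 and placeholder values:
# xfs_admin -U new-uuid -L new-label /dev/sda1
# udevadm settle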
To change the UUID or Label attributes of an ext4, ext3, or ext2 file system, use the tune2fs
utility:
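A sketch, assuming an ext4 file system on /dev/sda1 and placeholder values:
# tune2fs -U new-uuid -L new-label /dev/sda1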
To change the UUID or Label attributes of a swap volume, use the swaplabel utility:
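A sketch, assuming a swap volume on /dev/sda2 and placeholder values:
# swaplabel --uuid new-uuid --label new-label /dev/sda2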
CHAPTER 7. PARTITION OPERATIONS WITH PARTED
Procedure
1. Start the parted utility. For example, the following output lists the device /dev/sda:
# parted /dev/sda
# (parted) print
For a detailed description of the print command output, see the following:
Type
Valid types are metadata, free, primary, extended, or logical.
File system
The file system type. If the File system field of a device shows no value, this means that its file
system type is unknown. The parted utility cannot recognize the file system on encrypted devices.
Flags
Lists the flags set for the partition. Available flags are boot, root, swap, hidden, raid, lvm, or lba.
Additional resources
WARNING
Formatting a block device with a partition table deletes all data stored on the
device.
Procedure
# parted block-device
# (parted) print
If the device already contains partitions, they will be deleted in the following steps.
# (parted) print
# (parted) quit
Additional resources
NOTE
Prerequisites
If the partition you want to create is larger than 2TiB, format the disk with the GUID Partition
Table (GPT).
Procedure
# parted block-device
2. View the current partition table to determine if there is enough free space:
# (parted) print
Replace part-type with primary, logical, or extended. This applies only to the MBR partition table.
Replace name with an arbitrary partition name. This is required for GPT partition tables.
Replace fs-type with xfs, ext2, ext3, ext4, fat16, fat32, hfs, hfs+, linux-swap, ntfs, or
reiserfs. The fs-type parameter is optional. Note that the parted utility does not create the
file system on the partition.
Replace start and end with the sizes that determine the starting and ending points of the
partition, counting from the beginning of the disk. You can use size suffixes, such as 512MiB,
20GiB, or 1.5TiB. The default size is in megabytes.
To create a primary partition from 1024MiB until 2048MiB on an MBR table, use:
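A sketch of that command:
(parted) mkpart primary 1024MiB 2048MiB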
4. View the partition table to confirm that the created partition is in the partition table with the
correct partition type, file system type, and size:
# (parted) print
# (parted) quit
# udevadm settle
# cat /proc/partitions
Additional resources
WARNING
Procedure
# parted block-device
Replace block-device with the path to the device where you want to remove a partition: for
example, /dev/sda.
2. View the current partition table to determine the minor number of the partition to remove:
(parted) print
(parted) rm minor-number
Replace minor-number with the minor number of the partition you want to remove.
4. Verify that you have removed the partition from the partition table:
(parted) print
(parted) quit
# cat /proc/partitions
7. Remove the partition from the /etc/fstab file, if it is present. Find the line that declares the
removed partition, and remove it from the file.
8. Regenerate mount units so that your system registers the new /etc/fstab configuration:
# systemctl daemon-reload
9. If you have deleted a swap partition or removed pieces of LVM, remove all references to the
partition from the kernel command line:
a. List active kernel options and see if any option references the removed partition:
# grubby --info=ALL
10. To register the changes in the early boot system, rebuild the initramfs file system:
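A sketch of the rebuild command:
# dracut --force --verbose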
Additional resources
Prerequisites
If the partition you want to create is larger than 2TiB, format the disk with the GUID Partition
Table (GPT).
If you want to shrink the partition, first shrink the file system so that it is not larger than the
resized partition.
NOTE
Procedure
# parted block-device
# (parted) print
The location of the existing partition and its new ending point after resizing.
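The resize command itself is sketched below, assuming partition number 1 and a new end point of 2GiB:
(parted) resizepart 1 2GiB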
Replace 1 with the minor number of the partition that you are resizing.
Replace 2 with the size that determines the new ending point of the resized partition,
counting from the beginning of the disk. You can use size suffixes, such as 512MiB, 20GiB,
or 1.5TiB. The default size is in megabytes.
4. View the partition table to confirm that the resized partition is in the partition table with the
correct size:
# (parted) print
# (parted) quit
# cat /proc/partitions
7. Optional: If you extended the partition, extend the file system on it as well.
Additional resources
NOTE
The following examples are simplified for clarity and do not reflect the exact partition
layout when actually installing Red Hat Enterprise Linux.
The first diagram represents a disk with one primary partition and an undefined partition with
unallocated space. The second diagram represents a disk with two defined partitions with allocated
space.
An unused hard disk also falls into this category. The only difference is that none of its space is part of any defined partition.
On a new disk, you can create the necessary partitions from the unused space. Most preinstalled
operating systems are configured to take up all available space on a disk drive.
To use the space allocated to the unused partition, delete the partition and then create the appropriate
Linux partition instead. Alternatively, during the installation process, delete the unused partition and
manually create new partitions.
WARNING
If you want to use an operating system (OS) on an active partition, you must
reinstall the OS. Be aware that some computers, which include pre-installed
software, do not include installation media to reinstall the original OS. Check
whether this applies to your OS before you destroy an original partition and the OS
installation.
To optimize the use of available free space, you can use the methods of destructive or non-destructive repartitioning.
After creating a smaller partition for your existing operating system, you can:
Reinstall software.
The following diagram is a simplified representation of using the destructive repartitioning method.
WARNING
This method deletes all data previously stored in the original partition.
The following is a list of methods, which can help initiate non-destructive repartitioning.
The storage location of some data cannot be changed. This can prevent the resizing of a partition to the
required size, and ultimately lead to a destructive repartition process. Compressing data in an already
existing partition can help you resize your partitions as needed. It can also help to maximize the free
space available.
To avoid any possible data loss, create a backup before continuing with the compression process.
By resizing an already existing partition, you can free up more space. Depending on your resizing
software, the results may vary. In the majority of cases, you can create a new unformatted partition of
the same type as the original partition.
The steps you take after resizing can depend on the software you use. In the following example, the best
practice is to delete the new DOS (Disk Operating System) partition, and create a Linux partition
instead. Verify what is most suitable for your disk before initiating the resizing process.
Some pieces of resizing software support Linux-based systems. In such cases, there is no need to delete
the newly created partition after resizing. Creating a new partition afterwards depends on the software
you use.
The following diagram represents the disk state, before and after creating a new partition.
CHAPTER 9. GETTING STARTED WITH XFS
Reliability
Metadata journaling, which ensures file system integrity after a system crash by keeping a
record of file system operations that can be replayed when the system is restarted and the
file system remounted
Quota journaling. This avoids the need for lengthy quota consistency checks after a crash.
Allocation schemes
Extent-based allocation
Delayed allocation
Space pre-allocation
Other features
Online defragmentation
Extended attributes (xattr). This allows the system to associate several additional
name/value pairs per file.
Project or directory quotas. This allows quota restrictions over a directory tree.
Subsecond timestamps
Performance characteristics
XFS has a high performance on large systems with enterprise workloads. A large system is one with a
relatively high number of CPUs, multiple HBAs, and connections to external disk arrays. XFS also
performs well on smaller systems that have a multi-threaded, parallel I/O workload.
XFS has a relatively low performance for single threaded, metadata-intensive workloads: for example, a
workload that creates or deletes large numbers of small files in a single thread.
NOTE
If you want a complete client-server solution for backups over a network, you can use the Bacula backup utility, which is available in RHEL 9. For more information about Bacula, see Bacula backup solution.
CHAPTER 10. CREATING AN XFS FILE SYSTEM
Procedure
If the device is a regular partition, an LVM volume, an MD volume, a disk, or a similar device,
use the following command:
# mkfs.xfs block-device
Replace block-device with the path to the block device. For example, /dev/sdb1,
/dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a, or /dev/my-
volgroup/my-lv.
When using mkfs.xfs on a block device containing an existing file system, add the -f
option to overwrite that file system.
To create the file system on a hardware RAID device, check if the system correctly detects
the stripe geometry of the device:
If the stripe geometry information is correct, no additional options are needed. Create
the file system:
# mkfs.xfs block-device
If the information is incorrect, specify stripe geometry manually with the su and sw
parameters of the -d option. The su parameter specifies the RAID chunk size, and the
sw parameter specifies the number of data disks in the RAID device.
For example:
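A sketch, assuming a 64 KiB chunk size, four data disks, and the /dev/sda3 device:
# mkfs.xfs -d su=64k,sw=4 /dev/sda3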
2. Use the following command to wait for the system to register the new device node:
# udevadm settle
Additional resources
To back up multiple file systems to a single tape device, simply write the backup to a tape that
already contains an XFS backup. This appends the new backup to the previous one. By default,
xfsdump never overwrites existing backups.
A level 1 dump is the first incremental backup after a full backup. The next incremental
backup would be level 2, which only backs up files that have changed since the last level
1 dump; and so on, to a maximum of level 9.
Exclude files from a backup using size, subtree, or inode flags to filter them.
Additional resources
Prerequisites
Another file system or a tape drive where you can store the backup.
Procedure
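A sketch of the xfsdump invocation, matching the placeholders described below:
# xfsdump -l level -f backup-destination path-to-xfs-filesystem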
Replace level with the dump level of your backup. Use 0 to perform a full backup or 1 to 9 to
perform consequent incremental backups.
Replace backup-destination with the path where you want to store your backup. The
destination can be a regular file, a tape drive, or a remote tape device. For example,
/backup-files/Data.xfsdump for a file or /dev/st0 for a tape drive.
Replace path-to-xfs-filesystem with the mount point of the XFS file system you want to
back up. For example, /mnt/data/. The file system must be mounted.
When backing up multiple file systems and saving them on a single tape device, add a
session label to each backup using the -L label option so that it is easier to identify them
when restoring. Replace label with any name for your backup: for example, backup_data.
To back up the content of XFS file systems mounted on the /boot/ and /data/ directories
and save them as files in the /backup-files/ directory:
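A sketch, with the backup file names as assumptions:
# xfsdump -l 0 -f /backup-files/boot.xfsdump /boot
# xfsdump -l 0 -f /backup-files/data.xfsdump /data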
To back up multiple file systems on a single tape device, add a session label to each backup
using the -L label option:
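A sketch, assuming the /dev/st0 tape drive and placeholder session labels:
# xfsdump -l 0 -L "backup_boot" -f /dev/st0 /boot
# xfsdump -l 0 -L "backup_data" -f /dev/st0 /data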
Additional resources
The simple mode enables users to restore an entire file system from a level 0 dump. This is the
default mode.
The cumulative mode enables file system restoration from an incremental backup: that is,
level 1 to level 9.
A unique session ID or session label identifies each backup. Restoring a backup from a tape containing
multiple backups requires its corresponding session ID or label.
To extract, add, or delete specific files from a backup, enter the xfsrestore interactive mode. The
interactive mode provides a set of commands to manipulate the backup files.
Additional resources
Prerequisites
A file or tape backup of XFS file systems, as described in Backing up an XFS file system .
Procedure
The command to restore the backup varies depending on whether you are restoring from a full
backup or an incremental one, or are restoring multiple backups from a single tape device:
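A sketch of the basic xfsrestore invocation, matching the placeholders described below:
# xfsrestore -f backup-location restoration-path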
Replace backup-location with the location of the backup. This can be a regular file, a tape
drive, or a remote tape device. For example, /backup-files/Data.xfsdump for a file or
/dev/st0 for a tape drive.
Replace restoration-path with the path to the directory where you want to restore the file
system. For example, /mnt/data/.
To restore a file system from an incremental (level 1 to level 9) backup, add the -r option.
To restore a backup from a tape device that contains multiple backups, specify the backup
using the -S or -L options.
The -S option lets you choose a backup by its session ID, while the -L option lets you choose
by the session label. To obtain the session ID and session labels, use the xfsrestore -I
command.
Replace session-id with the session ID of the backup. For example, b74a3586-e52e-4a4a-
8775-c3334fa8ea2c. Replace session-label with the session label of the backup. For
example, my_backup_session_label.
To restore the XFS backup files and save their content into directories under /mnt/:
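A sketch, with the backup file names as assumptions:
# xfsrestore -f /backup-files/boot.xfsdump /mnt/boot/
# xfsrestore -f /backup-files/data.xfsdump /mnt/data/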
To restore from a tape device containing multiple backups, specify each backup by its
session label or session ID:
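A sketch, assuming the /dev/st0 tape drive and placeholder session label and ID:
# xfsrestore -f /dev/st0 -L "backup_boot" /mnt/boot/
# xfsrestore -f /dev/st0 -S session-id /mnt/data/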
Additional resources
The informational messages keep appearing until the matching backup is found.
IMPORTANT
Prerequisites
Ensure that the underlying block device is of an appropriate size to hold the resized file system
later. Use the appropriate resizing methods for the affected block device.
Procedure
While the XFS file system is mounted, use the xfs_growfs utility to increase its size:
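A sketch of the invocation:
# xfs_growfs -D new-size file-system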
Replace file-system with the mount point of the XFS file system.
With the -D option, replace new-size with the desired new size of the file system specified in
the number of file system blocks.
To find out the block size in kB of a given XFS file system, use the xfs_info utility:
# xfs_info block-device
...
data = bsize=4096
...
Without the -D option, xfs_growfs grows the file system to the maximum size supported by
the underlying device.
Additional resources
CHAPTER 14. CONFIGURING XFS ERROR BEHAVIOR
XFS repeatedly retries the I/O operation until the operation succeeds or XFS reaches a set limit.
The limit is based either on a maximum number of retries or a maximum time for retries.
XFS considers the error permanent and stops the operation on the file system.
You can configure how XFS reacts to the following error conditions:
EIO
Error when reading or writing
ENOSPC
No space left on the device
ENODEV
Device cannot be found
You can set the maximum number of retries and the maximum time in seconds until XFS considers an
error permanent. XFS stops retrying the operation when it reaches either of the limits.
You can also configure XFS so that when unmounting a file system, XFS immediately cancels the retries
regardless of any other configuration. This configuration enables the unmount operation to succeed
despite persistent errors.
Default behavior
The default behavior for each XFS error condition depends on the error context. Some XFS errors such
as ENODEV are considered to be fatal and unrecoverable, regardless of the retry count. Their default
retry limit is 0.
/sys/fs/xfs/device/error/metadata/EIO/
For the EIO error condition
/sys/fs/xfs/device/error/metadata/ENODEV/
For the ENODEV error condition
/sys/fs/xfs/device/error/metadata/ENOSPC/
For the ENOSPC error condition
/sys/fs/xfs/device/error/default/
Common configuration for all other, undefined error conditions
Each directory contains the following configuration files for configuring retry limits:
max_retries
Controls the maximum number of times that XFS retries the operation.
retry_timeout_seconds
Specifies the time limit in seconds after which XFS stops retrying the operation.
Procedure
Set the maximum number of retries, the retry time limit, or both:
To set the maximum number of retries, write the desired number to the max_retries file:
To set the time limit, write the desired number of seconds to the retry_timeout_seconds
file:
value is a number between -1 and the maximum possible value of the C signed integer type. This
is 2147483647 on 64-bit Linux.
In both limits, the value -1 is used for continuous retries and 0 to stop immediately.
device is the name of the device, as found in the /dev/ directory; for example, sda.
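A sketch of both settings, assuming the sda device and the EIO error condition:
# echo value > /sys/fs/xfs/sda/error/metadata/EIO/max_retries
# echo value > /sys/fs/xfs/sda/error/metadata/EIO/retry_timeout_seconds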
Procedure
Set the maximum number of retries, the retry time limit, or both:
To set the maximum number of retries, write the desired number to the max_retries file:
To set the time limit, write the desired number of seconds to the retry_timeout_seconds
file:
value is a number between -1 and the maximum possible value of the C signed integer type. This
is 2147483647 on 64-bit Linux.
In both limits, the value -1 is used for continuous retries and 0 to stop immediately.
device is the name of the device, as found in the /dev/ directory; for example, sda.
If you set the fail_at_unmount option in the file system, it overrides all other error configurations during
unmount, and immediately unmounts the file system without retrying the I/O operation. This allows the
unmount operation to succeed even in case of persistent errors.
WARNING
You cannot change the fail_at_unmount value after the unmount process starts,
because the unmount process removes the configuration files from the sysfs
interface for the respective file system. You must configure the unmount behavior
before the file system starts unmounting.
Procedure
To cancel retrying all operations when the file system unmounts, enable the option:
To respect the max_retries and retry_timeout_seconds retry limits when the file system
unmounts, disable the option:
device is the name of the device, as found in the /dev/ directory; for example, sda.
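A sketch of both settings, assuming the sda device; write 1 to enable the option and 0 to disable it:
# echo 1 > /sys/fs/xfs/sda/error/fail_at_unmount
# echo 0 > /sys/fs/xfs/sda/error/fail_at_unmount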
IMPORTANT
File system checkers guarantee only metadata consistency across the file system. They
have no awareness of the actual data contained within the file system and are not data
recovery tools.
File system inconsistencies can occur for various reasons, including but not limited to hardware errors,
storage administration errors, and software bugs.
IMPORTANT
File system check tools cannot repair hardware problems. A file system must be fully
readable and writable if repair is to operate successfully. If a file system was corrupted
due to a hardware error, the file system must first be moved to a good disk, for example
with the dd(8) utility.
For journaling file systems, all that is normally required at boot time is to replay the journal, if required, which is usually a very short operation.
However, if a file system inconsistency or corruption occurs, even for journaling file systems, then the
file system checker must be used to repair the file system.
IMPORTANT
It is possible to disable file system check at boot by setting the sixth field in /etc/fstab to
0. However, Red Hat does not recommend doing so unless you are having issues with fsck
at boot time, for example with extremely large or remote file systems.
Additional resources
Generally, running the file system check and repair tool can be expected to automatically repair at least
some of the inconsistencies it finds. In some cases, the following issues can arise:
To ensure that unexpected or undesirable changes are not permanently made, ensure you follow any
precautionary steps outlined in the procedure.
Unclean unmounts
Journaling maintains a transactional record of metadata changes that happen on the file system.
In the event of a system crash, power failure, or other unclean unmount, XFS uses the journal (also
called log) to recover the file system. The kernel performs journal recovery when mounting the XFS file
system.
Corruption
In this context, corruption means errors on the file system caused by, for example:
Hardware faults
Bugs in storage firmware, device drivers, the software stack, or the file system itself
Problems that cause parts of the file system to be overwritten by something outside of the file
system
When XFS detects corruption in the file system or the file-system metadata, it may shut down the file
system and report the incident in the system log. Note that if the corruption occurred on the file system
hosting the /var directory, these logs will not be available after a reboot.
User-space utilities usually report the Input/output error message when trying to access a corrupted
XFS file system. Mounting an XFS file system with a corrupted log results in a failed mount and the
following error message:
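The error typically resembles the following (shown here as an assumed example):
mount: /mount-point: mount(2) system call failed: Structure needs cleaning.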
You must manually use the xfs_repair utility to repair the corruption.
Additional resources
NOTE
Although an fsck.xfs binary is present in the xfsprogs package, it exists only to satisfy initscripts that look for an fsck.file_system binary at boot time. fsck.xfs immediately exits with an exit code of 0.
Procedure
# mount file-system
# umount file-system
NOTE
If the mount fails with a structure needs cleaning error, the log is corrupted and
cannot be replayed. The dry run should discover and report more on-disk
corruption as a result.
2. Use the xfs_repair utility to perform a dry run to check the file system. Any errors found are printed, along with an indication of the actions that would be taken, without modifying the file system.
# xfs_repair -n block-device
# mount file-system
Additional resources
Procedure
1. Create a metadata image prior to repair for diagnostic or testing purposes using the
xfs_metadump utility. A pre-repair file system metadata image can be useful for support
investigations if the corruption is due to a software bug. Patterns of corruption present in the
pre-repair image can aid in root-cause analysis.
Use the xfs_metadump debugging tool to copy the metadata from an XFS file system to a
file. The resulting metadump file can be compressed using standard compression utilities to
reduce the file size if large metadump files need to be sent to support.
# mount file-system
# umount file-system
# xfs_repair block-device
If the mount failed with the Structure needs cleaning error, the log is corrupted and cannot
be replayed. Use the -L option (force log zeroing) to clear the log:
WARNING
# xfs_repair -L block-device
# mount file-system
Additional resources
A full file system check and repair is invoked for ext2, which is not a metadata journaling file system, and
for ext4 file systems without a journal.
For ext3 and ext4 file systems with metadata journaling, the journal is replayed in userspace and the
utility exits. This is the default action because journal replay ensures a consistent file system after a
crash.
If these file systems encounter metadata inconsistencies while mounted, they record this fact in the file
system superblock. If e2fsck finds that a file system is marked with such an error, e2fsck performs a full
check after replaying the journal (if present).
Additional resources
Procedure
# mount file-system
# umount file-system
# e2fsck -n block-device
NOTE
Any errors found are printed along with an indication of the actions that would be taken, without modifying the file system. Later phases of consistency checking may print extra errors as the utility discovers inconsistencies that would have been fixed in earlier phases if it were running in repair mode.
Additional resources
Procedure
1. Save a file system image for support investigations. A pre-repair file system metadata image
can be useful for support investigations if the corruption is due to a software bug. Patterns of
corruption present in the pre-repair image can aid in root-cause analysis.
NOTE
Severely damaged file systems may cause problems with metadata image
creation.
If you are creating the image for testing purposes, use the -r option to create a sparse file of
the same size as the file system itself. e2fsck can then operate directly on the resulting file.
If you are creating the image to be archived or provided for diagnostics, use the -Q option, which creates a more compact file format suitable for transfer.
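A sketch of both invocations, with placeholder device and file names:
# e2image -r block-device image-file
# e2image -Q block-device image-file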
# mount file-system
# umount file-system
3. Automatically repair the file system. If user intervention is required, e2fsck indicates the unfixed
problem in its output and reflects this status in the exit code.
# e2fsck -p block-device
Additional resources
On Linux, UNIX, and similar operating systems, file systems on different partitions and removable
devices (CDs, DVDs, or USB flash drives for example) can be attached to a certain point (the mount
point) in the directory tree, and then detached again. While a file system is mounted on a directory, the
original content of the directory is not accessible.
Note that Linux does not prevent you from mounting a file system to a directory with a file system
already attached to it.
When you mount a file system using the mount command without all required information, that is
without the device name, the target directory, or the file system type, the mount utility reads the
content of the /etc/fstab file to check if the given file system is listed there. The /etc/fstab file contains
a list of device names and the directories in which the selected file systems are set to be mounted as
well as the file system type and mount options. Therefore, when mounting a file system that is specified
in /etc/fstab, the following command syntax is sufficient:
# mount directory
# mount device
Additional resources
Procedure
$ findmnt
To limit the listed file systems only to a certain file system type, add the --types option:
For example:
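A sketch, assuming you want to list only ext4 file systems:
$ findmnt --types ext4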
Additional resources
Prerequisites
Verify that no file system is already mounted on your chosen mount point:
$ findmnt mount-point
Procedure
2. If mount cannot recognize the file system type automatically, specify it using the --types
option:
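A sketch of the general form, with placeholders for the type, device, and directory:
# mount --types type device directory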
Additional resources
Procedure
For example, to move the file system mounted in the /mnt/userdirs/ directory to the /home/
mount point:
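A sketch of the corresponding command:
# mount --move /mnt/userdirs /home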
$ findmnt
$ ls old-directory
$ ls new-directory
Additional resources
Procedure
1. Try unmounting the file system using either of the following commands:
By mount point:
# umount mount-point
By device:
# umount device
If the command fails with an error similar to the following, it means that the file system is in use
because a process is using resources on it:
2. If the file system is in use, use the fuser utility to determine which processes are accessing it.
For example:
/run/media/user/FlashDrive: 18351
Afterwards, stop the processes using the file system and try unmounting it again.
NOTE
You can also unmount a file system, after which the RHEL system stops using it. Unmounting
the file system enables you to delete, remove, or reformat devices.
Prerequisites
If you want to unmount a file system, ensure that the system does not use any file, service, or
application stored in the partition.
Procedure
3. In the Storage table, select the volume that contains the partition whose file system you want to mount or unmount.
4. In the GPT partitions section, click the menu button, ⋮ next to the partition whose file system
you want to mount or unmount.
The following table lists the most common options of the mount utility. You can apply these mount
options using the following syntax:
Option Description
async Enables asynchronous input and output operations on the file system.
auto Enables the file system to be mounted automatically using the mount -a
command.
exec Allows the execution of binary files on the particular file system.
noauto Default behavior disables the automatic mount of the file system using the
mount -a command.
noexec Disallows the execution of binary files on the particular file system.
nouser Disallows an ordinary user (that is, other than root) to mount and unmount the file
system.
user Allows an ordinary user (that is, other than root) to mount and unmount the file
system.
CHAPTER 17. SHARING A MOUNT ON MULTIPLE MOUNT POINTS
private
This type does not receive or forward any propagation events.
When you mount another file system under either the duplicate or the original mount point, it is not
reflected in the other.
shared
This type creates an exact replica of a given mount point.
When a mount point is marked as a shared mount, any mount within the original mount point is
reflected in it, and vice versa.
slave
This type creates a limited duplicate of a given mount point.
When a mount point is marked as a slave mount, any mount within the original mount point is
reflected in it, but no mount within a slave mount is reflected in its original.
unbindable
This type prevents the given mount point from being duplicated whatsoever.
Additional resources
Procedure
1. Create a virtual file system (VFS) node from the original mount point:
Alternatively, to change the mount type for the selected mount point and all mount points
under it, use the --make-rprivate option instead of --make-private.
4. It is now possible to verify that /media and /mnt share content but none of the mounts within
/media appear in /mnt. For example, if the CD-ROM drive contains non-empty media and
the /media/cdrom/ directory exists, use:
5. It is also possible to verify that file systems mounted in the /mnt directory are not reflected
in /media. For example, if a non-empty USB flash drive that uses the /dev/sdc1 device is
plugged in and the /mnt/flashdisk/ directory is present, use:
Additional resources
Procedure
1. Create a virtual file system (VFS) node from the original mount point:
Alternatively, to change the mount type for the selected mount point and all mount points
under it, use the --make-rshared option instead of --make-shared.
To make the /media and /mnt directories share the same content:
4. It is now possible to verify that a mount within /media also appears in /mnt. For example, if
the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, use:
5. Similarly, it is possible to verify that any file system mounted in the /mnt directory is
reflected in /media. For example, if a non-empty USB flash drive that uses the /dev/sdc1
device is plugged in and the /mnt/flashdisk/ directory is present, use:
Additional resources
Procedure
1. Create a virtual file system (VFS) node from the original mount point:
Alternatively, to change the mount type for the selected mount point and all mount points
under it, use the --make-rshared option instead of --make-shared.
This example shows how to get the content of the /media directory to appear in /mnt as well, but
without any mounts in the /mnt directory to be reflected in /media.
4. Verify that a mount within /media also appears in /mnt. For example, if the CD-ROM drive
contains non-empty media and the /media/cdrom/ directory exists, use:
5. Also verify that file systems mounted in the /mnt directory are not reflected in /media. For
example, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and
the /mnt/flashdisk/ directory is present, use:
Additional resources
Procedure
Alternatively, to change the mount type for the selected mount point and all mount points
under it, use the --make-runbindable option instead of --make-unbindable.
Any subsequent attempt to make a duplicate of this mount fails with the following error:
Additional resources
CHAPTER 18. PERSISTENTLY MOUNTING FILE SYSTEMS
1. The block device identified by a persistent attribute or a path in the /dev directory.
4. Mount options for the file system, which includes the defaults option to mount the partition at
boot time with default options. The mount option field also recognizes the systemd mount unit
options in the x-systemd.option format.
NOTE
The systemd-fstab-generator dynamically converts the entries from the /etc/fstab file
to systemd mount units. The systemd service automatically mounts LVM volumes from /etc/fstab
during manual activation unless the systemd mount unit is masked.
NOTE
The dump utility used for backup of file systems has been removed in RHEL 9, and is
available in the EPEL 9 repository.
The systemd service automatically generates mount units from entries in /etc/fstab.
Additional resources
Procedure
For example:
3. As root, edit the /etc/fstab file and add a line for the file system, identified by the UUID.
For example:
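A minimal illustration, assuming an XFS file system and a /mount/point mount point (the UUID shown is only an example):
UUID=f5755511-a714-44c1-a123-cfde0e4ac688 /mount/point xfs defaults 0 0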
4. Regenerate mount units so that your system registers the new configuration:
# systemctl daemon-reload
5. Try mounting the file system to verify that the configuration works:
# mount mount-point
Additional resources
CHAPTER 19. MOUNTING FILE SYSTEMS ON DEMAND
One drawback of permanent mounting using the /etc/fstab configuration is that, regardless of how
infrequently a user accesses the mounted file system, the system must dedicate resources to keep the
mounted file system in place. This might affect system performance when, for example, the system is
maintaining NFS mounts to many systems at one time.
An alternative to /etc/fstab is to use the kernel-based autofs service. It consists of the following
components:
Additional resources
All on-demand mount points must be configured in the master map. Mount point, host name, exported
directory, and options can all be specified in a set of files (or other supported network sources) rather
than configuring them manually for each host.
The master map file lists mount points controlled by autofs, and their corresponding configuration files
or network sources known as automount maps. The format of the master map is as follows:
mount-point
The autofs mount point; for example, /mnt/data.
map-file
The map source file, which contains a list of mount points and the file system location from which
those mount points should be mounted.
options
If supplied, these apply to all entries in the given map, if they do not themselves have options
specified.
/mnt/data /etc/auto.data
Map files
Map files configure the properties of individual on-demand mount points.
The automounter creates the directories if they do not exist. If the directories existed before the
automounter was started, the automounter does not remove them when it exits. If a timeout is specified,
the directory is automatically unmounted if it is not accessed for the timeout period.
The general format of maps is similar to the master map. However, the options field appears between
the mount point and the location instead of at the end of the entry as in the master map:
mount-point
This refers to the autofs mount point. This can be a single directory name for an indirect mount or
the full path of the mount point for direct mounts. Each direct and indirect map entry key (mount-
point) can be followed by a space separated list of offset directories (subdirectory names each
beginning with /) making them what is known as a multi-mount entry.
options
When supplied, these options are appended to the master map entry options, if any, or used instead
of the master map options if the configuration entry append_options is set to no.
location
This refers to the file system location such as a local file system path (preceded with the Sun map
format escape character : for map names beginning with /), an NFS file system or other valid file
system location.
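For illustration, a map file (for example, /etc/auto.misc) referenced from a /home master map entry might contain entries such as the following; the export paths on the personnel server are assumptions:
payroll -fstype=nfs4 personnel:/exports/payroll
sales -fstype=nfs4 personnel:/exports/sales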
The first column in the map file indicates the autofs mount point: sales and payroll from the server
called personnel. The second column indicates the options for the autofs mount. The third column
indicates the source of the mount.
Following the given configuration, the autofs mount points will be /home/payroll and /home/sales.
The -fstype= option is often omitted and is not needed if the file system is NFS, including mounts for
NFSv4 if the system default is NFSv4 for NFS mounts.
Using the given configuration, if a process requires access to an autofs unmounted directory such as
/home/payroll/2006/July.sxc, the autofs service automatically mounts the directory.
However, Red Hat recommends using the simpler autofs format described in the previous sections.
Additional resources
/usr/share/doc/autofs/README.amd-maps file
Prerequisites
Procedure
1. Create a map file for the on-demand mount point, located at /etc/auto.identifier. Replace
identifier with a name that identifies the mount point.
2. In the map file, enter the mount point, options, and location fields as described in The autofs
configuration files section.
3. Register the map file in the master map file, as described in The autofs configuration files
section.
4. Allow the service to re-read the configuration, so it can manage the newly configured autofs
mount:
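For example, assuming the autofs service is managed by systemd:
# systemctl reload autofs.service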
# ls automounted-directory
Prerequisites
Procedure
1. Specify the mount point and location of the map file by editing the /etc/auto.master file on a
server on which you need to mount user home directories. To do so, add the following line into
the /etc/auto.master file:
/home /etc/auto.home
2. Create a map file with the name of /etc/auto.home on a server on which you need to mount
user home directories, and edit the file with the following parameters:
* -fstype=nfs,rw,sync host.example.com:/home/&
You can skip the fstype parameter, as it is nfs by default. For more information, see the autofs(5) man
page on your system.
Automounter maps are stored in NIS and the /etc/nsswitch.conf file has the following
directive:
+auto.master
/home auto.home
beth fileserver.example.com:/export/home/beth
joe fileserver.example.com:/export/home/joe
* fileserver.example.com:/export/home/&
BROWSE_MODE="yes"
Procedure
This section describes examples of mounting home directories from a different server and of
augmenting auto.home with only selected entries.
Given the preceding conditions, let’s assume that the client system needs to override the NIS map
auto.home and mount home directories from a different server.
In this case, the client needs to use the following /etc/auto.master map:
/home /etc/auto.home
+auto.master
* host.example.com:/export/home/&
Because the automounter only processes the first occurrence of a mount point, the /home directory
contains the content of /etc/auto.home instead of the NIS auto.home map.
Alternatively, to augment the site-wide auto.home map with just a few entries:
1. Create an /etc/auto.home file map, and in it put the new entries. At the end, include the NIS
auto.home map. Then the /etc/auto.home file map looks similar to:
mydir someserver:/export/mydir
+auto.home
2. With these NIS auto.home map conditions, listing the content of the /home directory
outputs:
$ ls /home
This last example works as expected because autofs does not include the contents of a file map of
the same name as the one it is reading. As such, autofs moves on to the next map source in the
nsswitch configuration.
Prerequisites
LDAP client libraries must be installed on all systems configured to retrieve automounter maps
from LDAP. On Red Hat Enterprise Linux, the openldap package should be installed
automatically as a dependency of the autofs package.
Procedure
1. To configure LDAP access, modify the /etc/openldap/ldap.conf file. Ensure that the BASE,
URI, and schema options are set appropriately for your site.
2. The most recently established schema for storing automount maps in LDAP is described by the
rfc2307bis draft. To use this schema, set it in the /etc/autofs.conf configuration file by
removing the comment characters from the schema definition. For example:
Example 19.6. Setting autofs configuration
DEFAULT_MAP_OBJECT_CLASS="automountMap"
DEFAULT_ENTRY_OBJECT_CLASS="automount"
DEFAULT_MAP_ATTRIBUTE="automountMapName"
DEFAULT_ENTRY_ATTRIBUTE="automountKey"
DEFAULT_VALUE_ATTRIBUTE="automountInformation"
3. Ensure that all other schema entries are commented in the configuration. The automountKey
attribute of the rfc2307bis schema replaces the cn attribute of the rfc2307 schema. Following
is an example of an LDAP Data Interchange Format (LDIF) configuration:
Example 19.7. LDIF Configuration
# auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: top
objectClass: automountMap
automountMapName: auto.master
# auto.home, example.com
dn: automountMapName=auto.home,dc=example,dc=com
objectClass: automountMap
automountMapName: auto.home
# /, auto.home, example.com
dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: /
automountInformation: filer.example.com:/export/&
Additional resources
Procedure
1. Add the desired fstab entry as documented in Persistently mounting file systems. For example:
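A hypothetical entry; the device UUID and mount point are illustrative:
/dev/disk/by-uuid/f5755511-a714-44c1-a123-cfde0e4ac688 /mount/point xfs defaults 0 0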
2. Add x-systemd.automount to the options field of entry created in the previous step.
3. Load newly created units so that your system registers the new configuration:
# systemctl daemon-reload
Verification
# ls /mount/point
Additional resources
Managing systemd
Procedure
mount-point.mount
[Mount]
What=/dev/disk/by-uuid/f5755511-a714-44c1-a123-cfde0e4ac688
Where=/mount/point
Type=xfs
2. Create a unit file with the same name as the mount unit, but with extension .automount.
3. Open the file and create an [Automount] section. Set the Where= option to the mount path:
[Automount]
Where=/mount/point
[Install]
WantedBy=multi-user.target
4. Load newly created units so that your system registers the new configuration:
# systemctl daemon-reload
Verification
# ls /mount/point
Additional resources
Managing systemd
CHAPTER 20. USING SSSD COMPONENT FROM IDM TO CACHE THE AUTOFS MAPS
Procedure
1. Edit the /etc/autofs.conf file to specify the schema attributes that autofs searches for:
#
# Other common LDAP naming
#
map_object_class = "automountMap"
entry_object_class = "automount"
map_attribute = "automountMapName"
entry_attribute = "automountKey"
value_attribute = "automountInformation"
NOTE
You can write the attributes in either lowercase or uppercase in the
/etc/autofs.conf file.
2. Optional: Specify the LDAP configuration. There are two ways to do this. The simplest is to let
the automount service discover the LDAP server and locations on its own:
ldap_uri = "ldap:///dc=example,dc=com"
This option requires DNS to contain SRV records for the discoverable servers.
Alternatively, explicitly set which LDAP server to use and the base DN for LDAP searches:
ldap_uri = "ldap://ipa.example.com"
search_base = "cn=location,cn=automount,dc=example,dc=com"
3. Edit the /etc/autofs_ldap_auth.conf file so that autofs allows client authentication with the IdM
LDAP server.
Set the principal to the Kerberos host principal for the IdM LDAP server,
host/FQDN@REALM. The principal name is used to connect to the IdM directory as part of
GSS client authentication.
<autofs_ldap_sasl_conf
usetls="no"
tlsrequired="no"
authrequired="yes"
authtype="GSSAPI"
clientprinc="host/[email protected]"
/>
For more information about host principal, see Using canonicalized DNS host names in IdM .
Prerequisites
Procedure
# vim /etc/sssd/sssd.conf
[sssd]
domains = ldap
services = nss,pam,autofs
3. Create a new [autofs] section. You can leave this blank, because the default settings for an
autofs service work with most infrastructures.
[nss]
[pam]
[sudo]
[autofs]
[ssh]
[pac]
For more information, see the sssd.conf man page on your system.
4. Optional: Set a search base for the autofs entries. By default, this is the LDAP search base, but
a subtree can be specified in the ldap_autofs_search_base parameter.
[domain/EXAMPLE]
ldap_search_base = "dc=example,dc=com"
ldap_autofs_search_base = "ou=automount,dc=example,dc=com"
6. Check the /etc/nsswitch.conf file, so that SSSD is listed as a source for automount
configuration:
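The automount line might then look similar to the following minimal sketch:
automount: sss files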
8. Test the configuration by listing a user’s /home directory, assuming there is a master map entry
for /home:
# ls /home/userName
If this does not mount the remote file system, check the /var/log/messages file for errors. If
necessary, increase the debug level in the /etc/sysconfig/autofs file by setting the logging
parameter to debug.
CHAPTER 21. SETTING READ-ONLY PERMISSIONS FOR THE ROOT FILE SYSTEM
The default set of such files and directories is read from the /etc/rwtab file. Note that the readonly-root
package must be installed for this file to be present on your system.
dirs /var/cache/man
dirs /var/gdm
<content truncated>
empty /tmp
empty /var/cache/foomatic
<content truncated>
files /etc/adjtime
files /etc/ntp.conf
<content truncated>
copy-method path
In this syntax:
Replace copy-method with one of the keywords specifying how the file or directory is copied to
tmpfs.
The /etc/rwtab file recognizes the following ways in which a file or directory can be copied to tmpfs:
empty
An empty path is copied to tmpfs. For example:
empty /tmp
dirs
A directory tree is copied to tmpfs, but empty, that is, the directory structure is recreated without its contents. For example:
dirs /var/run
files
A file or a directory tree is copied to tmpfs intact. For example:
files /etc/resolv.conf
Procedure
1. In the /etc/sysconfig/readonly-root file, set the READONLY option to yes to mount the file
systems as read-only:
READONLY=yes
5. If you need to add files and directories to be mounted with write permissions in the tmpfs file
system, create a text file in the /etc/rwtab.d/ directory and put the configuration there.
For example, to mount the /etc/example/file file with write permissions, add this line to the
/etc/rwtab.d/example file:
files /etc/example/file
IMPORTANT
Changes made to files and directories in tmpfs do not persist across boots.
Troubleshooting
If you mount the root file system with read-only permissions by mistake, you can remount it with
read-and-write permissions again using the following command:
# mount -o remount,rw /
CHAPTER 22. LIMITING STORAGE SPACE USAGE ON XFS WITH QUOTAS
The XFS quota subsystem manages limits on disk space (blocks) and file (inode) usage. XFS quotas
control or report on usage of these items on a user, group, or directory or project level. Group and
project quotas are only mutually exclusive on older non-default XFS disk formats.
When managing on a per-directory or per-project basis, XFS manages the disk usage of directory
hierarchies associated with a specific project.
You can configure disk quotas for individual users as well as user groups on the local file systems. This
makes it possible to manage the space allocated for user-specific files (such as email) separately from
the space allocated to the projects that a user works on. The quota subsystem warns users when they
exceed their allotted limit, but allows some extra space for current work (hard limit/soft limit).
If quotas are implemented, you need to check if the quotas are exceeded and make sure the quotas are
accurate. If users repeatedly exceed their quotas or consistently reach their soft limits, a system
administrator can either help the user determine how to use less disk space or increase the user’s disk
quota.
The number of inodes, which are data structures that contain information about files in UNIX file
systems. Because inodes store file-related information, this allows control over the number of
files that can be created.
The XFS quota system differs from other file systems in a number of ways. Most importantly, XFS
considers quota information as file system metadata and uses journaling to provide a higher level
guarantee of consistency.
Additional resources
The XFS quota subsystem manages limits on disk space (blocks) and file (inode) usage. XFS quotas
control or report on usage of these items on a user, group, or directory or project level. Group and
project quotas are only mutually exclusive on older non-default XFS disk formats.
When managing on a per-directory or per-project basis, XFS manages the disk usage of directory
hierarchies associated with a specific project.
Procedure
Replace uquota with uqnoenforce to allow usage reporting without enforcing any limits.
Replace gquota with gqnoenforce to allow usage reporting without enforcing any limits.
Replace pquota with pqnoenforce to allow usage reporting without enforcing any limits.
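For example, the corresponding mount commands might look like the following, assuming the /dev/xvdb1 device and the /xfs mount point used in the fstab example below:
# mount -o uquota /dev/xvdb1 /xfs
# mount -o gquota /dev/xvdb1 /xfs
# mount -o pquota /dev/xvdb1 /xfs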
4. Alternatively, include the quota mount options in the /etc/fstab file. The following example
shows entries in the /etc/fstab file to enable quotas for users, groups, and projects, respectively,
on an XFS file system. These examples also mount the file system with read/write permissions:
# vim /etc/fstab
/dev/xvdb1 /xfs xfs rw,quota 0 0
/dev/xvdb1 /xfs xfs rw,gquota 0 0
/dev/xvdb1 /xfs xfs rw,prjquota 0 0
Additional resources
Prerequisites
Quotas have been enabled for the XFS file system. See Enabling disk quotas for XFS .
Procedure
# xfs_quota
# xfs_quota> df
4. Run the help command to display the basic commands available with xfs_quota.
# xfs_quota> help
# xfs_quota> q
Additional resources
Prerequisites
Quotas have been enabled for the XFS file system. See Enabling disk quotas for XFS .
Procedure
1. Start the xfs_quota shell with the -x option to enable expert mode:
# xfs_quota -x
For example, to display a sample quota report for /home (on /dev/blockdevice), use the
command report -h /home. This displays output similar to the following:
User quota on /home (/dev/blockdevice)
Blocks
User ID      Used   Soft   Hard Warn/Grace
---------- ---------------------------------
root            0      0      0  00 [------]
testuser   103.4G      0      0  00 [------]
For example, to set a soft and hard inode count limit of 500 and 700 respectively for user john,
whose home directory is /home/john, use the following command:
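A sketch of such a command in expert mode, assuming /home/ is the mount point to pass:
# xfs_quota -x -c 'limit isoft=500 ihard=700 john' /home/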
In this case, pass mount_point, which is the mounted XFS file system.
4. Run the help command to display the expert commands available with xfs_quota -x:
# xfs_quota> help
Additional resources
Procedure
1. Add the project-controlled directories to /etc/projects. For example, the following adds the
/var/log path with a unique ID of 11 to /etc/projects. Your project ID can be any numerical value
mapped to your project.
2. Add project names to /etc/projid to map project IDs to project names. For example, the
following associates a project called logfiles with the project ID of 11 as defined in the previous
step.
3. Initialize the project directory. For example, the following initializes the project directory /var:
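A minimal sketch of these three steps, using the illustrative ID 11, name logfiles, and /var/log path from above:
# echo 11:/var/log >> /etc/projects
# echo logfiles:11 >> /etc/projid
# xfs_quota -x -c 'project -s logfiles' /var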
Additional resources
CHAPTER 23. LIMITING STORAGE SPACE USAGE ON EXT4 WITH QUOTAS
Procedure
Procedure
NOTE
Only user and group quotas are enabled and initialized by default.
# mount /dev/sda
Additional resources
Procedure
# umount /dev/sda
NOTE
# mount /dev/sda
Additional resources
Prerequisites
Procedure
# quotaon /mnt
NOTE
Enable user, group, and project quotas for all file systems:
# quotaon -vaugP
If none of the -u, -g, or -P options is specified, only the user quotas are enabled.
Additional resources
NOTE
The text editor defined by the EDITOR environment variable is used by edquota. To
change the editor, set the EDITOR environment variable in your ~/.bash_profile file to
the full path of the editor of your choice.
Prerequisites
Procedure
# edquota username
Replace username with the user to which you want to assign the quotas.
For example, if you enable a quota for the /dev/sda partition and execute the command
edquota testuser, the following is displayed in the default editor configured on the system:
For example, the following shows that the soft and hard block limits for testuser have been set to
50000 and 55000, respectively.
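A hypothetical view of what the editor might display after such an edit; the uid and usage figures are illustrative:
Disk quotas for user testuser (uid 1001):
  Filesystem    blocks    soft    hard    inodes    soft    hard
  /dev/sda        1000   50000   55000         0       0       0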
The first column is the name of the file system that has a quota enabled for it.
The second column shows how many blocks the user is currently using.
The next two columns are used to set soft and hard block limits for the user on the file
system.
The inodes column shows how many inodes the user is currently using.
The last two columns are used to set the soft and hard inode limits for the user on the file
system.
The hard block limit is the absolute maximum amount of disk space that a user or group
can use. Once this limit is reached, no further disk space can be used.
The soft block limit defines the maximum amount of disk space that can be used.
However, unlike the hard limit, the soft limit can be exceeded for a certain amount of
time. That time is known as the grace period. The grace period can be expressed in
seconds, minutes, hours, days, weeks, or months.
Verification
Verify that the quota for the user has been set:
# quota -v testuser
Disk quotas for user testuser:
Filesystem blocks quota limit grace files quota limit grace
/dev/sda 1000* 1000 1000 0 0 0
Prerequisites
Procedure
# edquota -g groupname
# edquota -g devel
This command displays the existing quota for the group in the text editor:
Verification
Prerequisites
Procedure
1. Add the project-controlled directories to /etc/projects. For example, the following adds the
/var/log path with a unique ID of 11 to /etc/projects. Your project ID can be any numerical value
mapped to your project.
2. Add project names to /etc/projid to map project IDs to project names. For example, the
following associates a project called Logs with the project ID of 11 as defined in the previous
step.
# edquota -P 11
NOTE
You can choose the project either by its project ID (11 in this case), or by its
name (Logs in this case).
Verification
# quota -vP 11
NOTE
You can verify either by the project ID, or by the project name.
Additional resources
Procedure
# edquota -t
IMPORTANT
While other edquota commands operate on quotas for a particular user, group, or project,
the -t option operates on every file system with quotas enabled.
Additional resources
Procedure
# quotaoff -vaugP
If none of the -u, -g, or -P options is specified, only the user quotas are disabled.
The -v switch causes verbose status information to display as the command executes.
Additional resources
Procedure
# repquota
2. View the disk usage report for all quota-enabled file systems:
# repquota -augP
The -- symbol displayed after each user determines whether the block or inode limits have been
exceeded. If either soft limit is exceeded, a + character appears in place of the corresponding -
character. The first - character represents the block limit, and the second represents the inode limit.
The grace columns are normally blank. If a soft limit has been exceeded, the column contains a time
specification equal to the amount of time remaining on the grace period. If the grace period has expired,
none appears in its place.
Additional resources
CHAPTER 24. DISCARDING UNUSED BLOCKS
Requirements
The block device underlying the file system must support physical discard operations.
Physical discard operations are supported if the value in the
/sys/block/<device>/queue/discard_max_bytes file is not zero.
Batch discard
Is triggered explicitly by the user and discards all unused blocks in the selected file systems.
Online discard
Is specified at mount time and triggers in real time without user intervention. Online discard
operations discard only blocks that are transitioning from the used to the free state.
Periodic discard
Are batch operations that are run regularly by a systemd service.
All types are supported by the XFS and ext4 file systems.
Recommendations
Red Hat recommends that you use batch or periodic discard.
Prerequisites
The block device underlying the file system supports physical discard operations.
Procedure
# fstrim mount-point
# fstrim --all
a logical device (LVM or MD) composed of multiple devices, where any one of the devices does
not support discard operations,
# fstrim /mnt/non_discard
Additional resources
Procedure
When mounting a file system manually, add the -o discard mount option:
When mounting a file system persistently, add the discard option to the mount entry in the
/etc/fstab file.
Additional resources
Procedure
Verification
CHAPTER 25. SETTING UP STRATIS FILE SYSTEMS
Stratis is a local storage management system that supports advanced storage features. The central
concept of Stratis is a storage pool. This pool is created from one or more local disks or partitions, and
file systems are created from the pool.
Thin provisioning
Tiering
Encryption
Additional resources
Stratis website
Externally, Stratis presents the following volume components in the command-line interface and the
API:
blockdev
Block devices, such as a disk or a disk partition.
pool
Composed of one or more block devices.
A pool has a fixed total size, equal to the size of the block devices.
The pool contains most Stratis layers, such as the non-volatile data cache using the dm-cache
target.
Stratis creates a /dev/stratis/my-pool/ directory for each pool. This directory contains links to
devices that represent Stratis file systems in the pool.
filesystem
Each pool can contain one or more file systems, which store files.
File systems are thinly provisioned and do not have a fixed total size. The actual size of a file system
grows with the data stored on it. If the size of the data approaches the virtual size of the file system,
Stratis grows the thin volume and the file system automatically.
IMPORTANT
Stratis tracks information about file systems created using Stratis that XFS is not
aware of, and changes made using XFS do not automatically create updates in Stratis.
Users must not reformat or reconfigure XFS file systems that are managed by Stratis.
NOTE
Stratis uses many Device Mapper devices, which show up in dmsetup listings and the
/proc/partitions file. Similarly, the lsblk command output reflects the internal workings
and layers of Stratis.
Supported devices
Stratis pools have been tested to work on these types of block devices:
LUKS
MD RAID
DM Multipath
iSCSI
NVMe devices
Unsupported devices
Because Stratis contains a thin-provisioning layer, Red Hat does not recommend placing a Stratis pool
on block devices that are already thinly-provisioned.
Procedure
1. Install packages that provide the Stratis service and command-line utilities:
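For example:
# dnf install stratisd stratis-cli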
Prerequisites
The block devices on which you are creating a Stratis pool are not in use and are not mounted.
Each block device on which you are creating a Stratis pool is at least 1 GB.
On the IBM Z architecture, the /dev/dasd* block devices must be partitioned. Use the partition
device for creating the Stratis pool.
For information about partitioning DASD devices, see Configuring a Linux instance on IBM Z
NOTE
Procedure
1. Erase any file system, partition table, or RAID signatures that exist on each block device that
you want to use in the Stratis pool:
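For example, using the wipefs utility:
# wipefs --all block-device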
where block-device is the path to the block device; for example, /dev/sdb.
2. Create the new unencrypted Stratis pool on the selected block device:
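For example, where the pool name my-pool is illustrative:
# stratis pool create my-pool block-device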
NOTE
Prerequisites
The block devices on which you are creating a Stratis pool are not in use and are not mounted.
Each block device on which you are creating a Stratis pool is at least 1 GB.
NOTE
Procedure
2. Click Storage.
5. In the Create Stratis pool dialog box, enter a name for the Stratis pool.
6. Select the Block devices from which you want to create the Stratis pool.
7. Optional: If you want to specify the maximum size for each file system that is created in the pool,
select Manage filesystem sizes.
8. Click Create.
Verification
Go to the Storage section and verify that you can see the new Stratis pool in the Devices table.
When you create an encrypted Stratis pool, the kernel keyring is used as the primary encryption
mechanism. After subsequent system reboots this kernel keyring is used to unlock the encrypted Stratis
pool.
When creating an encrypted Stratis pool from one or more block devices, note the following:
Each block device is encrypted using the cryptsetup library and implements the LUKS2 format.
Each Stratis pool can either have a unique key or share the same key with other pools. These
keys are stored in the kernel keyring.
The block devices that comprise a Stratis pool must be either all encrypted or all unencrypted. It
is not possible to have both encrypted and unencrypted block devices in the same Stratis pool.
Block devices added to the data tier of an encrypted Stratis pool are automatically encrypted.
Prerequisites
Stratis v2.1.0 or later is installed. For more information, see Installing Stratis.
The block devices on which you are creating a Stratis pool are not in use and are not mounted.
The block devices on which you are creating a Stratis pool are at least 1GB in size each.
On the IBM Z architecture, the /dev/dasd* block devices must be partitioned. Use the partition
in the Stratis pool.
For information about partitioning DASD devices, see Configuring a Linux instance on IBM Z.
Procedure
1. Erase any file system, partition table, or RAID signatures that exist on each block device that
you want to use in the Stratis pool:
where block-device is the path to the block device; for example, /dev/sdb.
2. If you have not created a key set already, run the following command and follow the prompts to
create a key set to use for the encryption.
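For example:
# stratis key set --capture-key key-description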
where key-description is a reference to the key that gets created in the kernel keyring.
3. Create the encrypted Stratis pool and specify the key description to use for the encryption. You
can also specify the key path using the --keyfile-path option instead of using the key-
description option.
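For example:
# stratis pool create --key-desc key-description my-pool block-device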
where
key-description
References the key that exists in the kernel keyring, which you created in the previous step.
my-pool
Specifies the name of the new Stratis pool.
block-device
Specifies the path to an empty or wiped block device.
NOTE
When creating an encrypted Stratis pool from one or more block devices, note the following:
Each block device is encrypted using the cryptsetup library and implements the LUKS2 format.
Each Stratis pool can either have a unique key or share the same key with other pools. These
keys are stored in the kernel keyring.
The block devices that comprise a Stratis pool must be either all encrypted or all unencrypted. It
is not possible to have both encrypted and unencrypted block devices in the same Stratis pool.
Block devices added to the data tier of an encrypted Stratis pool are automatically encrypted.
Prerequisites
The block devices on which you are creating a Stratis pool are not in use and are not mounted.
Each block device on which you are creating a Stratis pool is at least 1 GB.
Procedure
2. Click Storage.
5. In the Create Stratis pool dialog box, enter a name for the Stratis pool.
6. Select the Block devices from which you want to create the Stratis pool.
7. Select the type of encryption; you can use a passphrase, a Tang keyserver, or both:
Passphrase:
i. Enter a passphrase.
Tang keyserver:
i. Enter the keyserver address. For more information, see Deploying a Tang server with
SELinux in enforcing mode.
8. Optional: If you want to specify the maximum size for each file system that is created in the pool,
select Manage filesystem sizes.
9. Click Create.
Verification
Go to the Storage section and verify that you can see the new Stratis pool in the Devices table.
Prerequisites
Stratis is installed.
The web console detects and installs Stratis by default. However, for manually installing Stratis,
see Installing Stratis.
Procedure
2. Click Storage.
3. In the Storage table, click the Stratis pool you want to rename.
4. On the Stratis pool page, click edit next to the Name field.
6. Click Rename.
If you enable overprovisioning, an API signal notifies you when your storage has been fully allocated. The
notification serves as a warning to the user to inform them that when all the remaining pool space fills up,
Stratis has no space left to extend to.
Prerequisites
Procedure
To set up the pool correctly, you have two possibilities:
By using the --no-overprovision option, the pool cannot allocate more logical space than
actual available physical space.
If set to "yes", you enable overprovisioning to the pool. This means that the sum of the
144
CHAPTER 25. SETTING UP STRATIS FILE SYSTEMS
If set to "yes", you enable overprovisioning to the pool. This means that the sum of the
logical sizes of the Stratis filesystems, supported by the pool, can exceed the amount of
available data space.
Verification
2. Check if there is an indication of the pool overprovisioning mode flag in the stratis pool list
output. The " ~ " is a math symbol for "NOT", so ~Op means no-overprovisioning.
Additional resources
NOTE
Binding a Stratis pool to a supplementary Clevis encryption mechanism does not remove
the primary kernel keyring encryption.
Prerequisites
Stratis v2.3.0 or later is installed. For more information, see Installing Stratis.
You have created an encrypted Stratis pool, and you have the key description of the key that
was used for the encryption. For more information, see Creating an encrypted Stratis pool .
You can connect to the Tang server. For more information, see Deploying a Tang server with
SELinux in enforcing mode
Procedure
where
my-pool
Specifies the name of the encrypted Stratis pool.
tang-server
Specifies the IP address or URL of the Tang server.
Additional resources
Prerequisites
Stratis v2.3.0 or later is installed. For more information, see Installing Stratis.
You have created an encrypted Stratis pool. For more information, see Creating an encrypted
Stratis pool.
Procedure
where
my-pool
Specifies the name of the encrypted Stratis pool.
key-description
References the key that exists in the kernel keyring, which was generated when you created
the encrypted Stratis pool.
Prerequisites
You have created an encrypted Stratis pool. For more information, see Creating an encrypted
Stratis pool.
Procedure
1. Re-create the key set using the same key description that was used previously:
where key-description references the key that exists in the kernel keyring, which was generated
when you created the encrypted Stratis pool.
Prerequisites
Stratis v2.3.0 or later is installed on your system. For more information, see Installing Stratis.
You have created an encrypted Stratis pool. For more information, see Creating an encrypted
Stratis pool.
Procedure
where
my-pool specifies the name of the Stratis pool you want to unbind.
Additional resources
The stopped state is recorded in the pool’s metadata. These pools do not start on the following boot,
until the pool receives a start command.
Prerequisites
You have created either an unencrypted or an encrypted Stratis pool. See Creating an
unencrypted Stratis pool
Procedure
Use the following command to start the Stratis pool. The --unlock-method option specifies the
method of unlocking the pool if it is encrypted:
Alternatively, use the following command to stop the Stratis pool. This tears down the storage
stack but leaves all metadata intact:
Verification
Use the following command to list all not previously started pools. If the UUID is specified, the
command prints detailed information about the pool corresponding to the UUID:
Prerequisites
You have created a Stratis pool. See Creating an unencrypted Stratis pool
Procedure
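To create the file system, a command of the following form can be used; this is a sketch that assumes the --size option of the stratis filesystem create subcommand:
# stratis filesystem create --size number-and-unit my-pool my-fs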
where
number-and-unit
Specifies the size of a file system. The specification format must follow the standard size
specification format for input, that is B, KiB, MiB, GiB, TiB or PiB.
my-pool
Specifies the name of the Stratis pool.
my-fs
Specifies an arbitrary name for the file system.
For example:
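A sketch using an illustrative size of 10 GiB:
# stratis filesystem create --size 10GiB my-pool my-fs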
Verification
List file systems within the pool to check if the Stratis filesystem is created:
Additional resources
Prerequisites
Procedure
2. Click Storage.
3. Click the Stratis pool on which you want to create a file system.
4. On the Stratis pool page, scroll to the Stratis filesystems section and click Create new
filesystem.
5. In the Create filesystem dialog box, enter a Name for the file system.
8. In the At boot drop-down menu, select when you want to mount your file system.
If you want to create and mount the file system, click Create and mount.
If you want to only create the file system, click Create only.
Verification
The new file system is visible on the Stratis pool page under the Stratis filesystems tab.
Prerequisites
You have created a Stratis file system. For more information, see Creating a Stratis filesystem .
Procedure
To mount the file system, use the entries that Stratis maintains in the /dev/stratis/ directory:
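For example, where the mount point is illustrative:
# mount /dev/stratis/my-pool/my-fs mount-point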
The file system is now mounted on the mount-point directory and ready to use.
Additional resources
Prerequisites
You have created a Stratis file system. See Creating a Stratis filesystem .
Procedure
As root, edit the /etc/fstab file and add a line to set up non-root filesystems:
Additional resources
CHAPTER 26. EXTENDING A STRATIS VOLUME WITH ADDITIONAL BLOCK DEVICES
Externally, Stratis presents the following volume components in the command-line interface and the
API:
blockdev
Block devices, such as a disk or a disk partition.
pool
Composed of one or more block devices.
A pool has a fixed total size, equal to the size of the block devices.
The pool contains most Stratis layers, such as the non-volatile data cache using the dm-cache
target.
Stratis creates a /dev/stratis/my-pool/ directory for each pool. This directory contains links to
devices that represent Stratis file systems in the pool.
filesystem
Each pool can contain one or more file systems, which store files.
File systems are thinly provisioned and do not have a fixed total size. The actual size of a file system
grows with the data stored on it. If the size of the data approaches the virtual size of the file system,
Stratis grows the thin volume and the file system automatically.
IMPORTANT
Stratis tracks information about file systems created using Stratis that XFS is not
aware of, and changes made using XFS do not automatically create updates in Stratis.
Users must not reformat or reconfigure XFS file systems that are managed by Stratis.
NOTE
Stratis uses many Device Mapper devices, which show up in dmsetup listings and the
/proc/partitions file. Similarly, the lsblk command output reflects the internal workings
and layers of Stratis.
Prerequisites
The block devices that you are adding to the Stratis pool are not in use and not mounted.
The block devices that you are adding to the Stratis pool are at least 1 GiB in size each.
Procedure
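To add one or more block devices to the data tier of an existing pool, a command along the following lines can be used; the device name is illustrative:
# stratis pool add-data my-pool /dev/sdc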
Additional resources
Prerequisites
The block devices on which you are creating a Stratis pool are not in use and are not mounted.
Each block device on which you are creating a Stratis pool is at least 1 GB.
Procedure
2. Click Storage.
3. In the Storage table, click the Stratis pool to which you want to add a block device.
5. In the Add block devices dialog box, select the Tier, whether you want to add a block device as
data or cache.
6. Optional: If you are adding the block device to a Stratis pool that is encrypted with a passphrase,
then you must enter the passphrase.
7. Under Block devices, select the devices you want to add to the pool.
8. Click Add.
CHAPTER 27. MONITORING STRATIS FILE SYSTEMS
Standard Linux utilities such as df report the size of the XFS file system layer on Stratis, which is 1 TiB.
This is not useful information, because the actual storage usage of Stratis is less due to thin provisioning,
and also because Stratis automatically grows the file system when the XFS layer is close to full.
IMPORTANT
Regularly monitor the amount of data written to your Stratis file systems, which is
reported as the Total Physical Used value. Make sure it does not exceed the Total Physical
Size value.
Additional resources
Prerequisites
Procedure
To display information about all block devices used for Stratis on your system:
# stratis blockdev
# stratis pool
# stratis filesystem
Additional resources
Prerequisites
Procedure
2. Click Storage.
3. In the Storage table, click the Stratis pool you want to view.
The Stratis pool page displays all the information about the pool and the file systems that you
created in the pool.
CHAPTER 28. USING SNAPSHOTS ON STRATIS FILE SYSTEMS
A snapshot and its origin are not linked in lifetime. A snapshotted file system can live longer than
the file system it was created from.
A file system does not have to be mounted to create a snapshot from it.
Each snapshot uses around half a gigabyte of actual backing storage, which is needed for the
XFS log.
Prerequisites
You have created a Stratis file system. See Creating a Stratis filesystem .
Procedure
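To create the snapshot, a command of the following form can be used; the snapshot name is illustrative:
# stratis filesystem snapshot my-pool my-fs my-fs-snapshot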
Additional resources
Prerequisites
Procedure
To access the snapshot, mount it as a regular file system from the /dev/stratis/my-pool/
directory:
Additional resources
Prerequisites
Procedure
1. Optional: Back up the current state of the file system to be able to access it later:
# umount /dev/stratis/my-pool/my-fs
# stratis filesystem destroy my-pool my-fs
3. Create a copy of the snapshot under the name of the original file system:
4. Mount the snapshot, which is now accessible with the same name as the original file system:
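A sketch of steps 3 and 4, reusing the names from this procedure; the mount point is illustrative:
# stratis filesystem snapshot my-pool my-fs-snapshot my-fs
# mount /dev/stratis/my-pool/my-fs mount-point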
The content of the file system named my-fs is now identical to the snapshot my-fs-snapshot.
Additional resources
Prerequisites
Procedure
# umount /dev/stratis/my-pool/my-fs-snapshot
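Then destroy the snapshot, for example:
# stratis filesystem destroy my-pool my-fs-snapshot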
Additional resources
CHAPTER 29. REMOVING STRATIS FILE SYSTEMS
Externally, Stratis presents the following volume components in the command-line interface and the
API:
blockdev
Block devices, such as a disk or a disk partition.
pool
Composed of one or more block devices.
A pool has a fixed total size, equal to the size of the block devices.
The pool contains most Stratis layers, such as the non-volatile data cache using the dm-cache
target.
Stratis creates a /dev/stratis/my-pool/ directory for each pool. This directory contains links to
devices that represent Stratis file systems in the pool.
filesystem
Each pool can contain one or more file systems, which store files.
File systems are thinly provisioned and do not have a fixed total size. The actual size of a file system
grows with the data stored on it. If the size of the data approaches the virtual size of the file system,
Stratis grows the thin volume and the file system automatically.
IMPORTANT
Stratis tracks information about file systems created using Stratis that XFS is not
aware of, and changes made using XFS do not automatically create updates in Stratis.
Users must not reformat or reconfigure XFS file systems that are managed by Stratis.
NOTE
Stratis uses many Device Mapper devices, which show up in dmsetup listings and the
/proc/partitions file. Similarly, the lsblk command output reflects the internal workings
and layers of Stratis.
Prerequisites
You have created a Stratis file system. See Creating a Stratis filesystem .
Procedure
# umount /dev/stratis/my-pool/my-fs
Additional resources
NOTE
Deleting a Stratis pool file system erases all the data it contains.
Prerequisites
Stratis is installed.
The web console detects and installs Stratis by default. However, for manually installing Stratis,
see Installing Stratis.
Procedure
2. Click Storage.
3. In the Storage table, click the Stratis pool from which you want to delete a file system.
4. On the Stratis pool page, scroll to the Stratis filesystems section and click the menu button ⋮
next to the file system you want to delete.
Prerequisites
Procedure
# umount /dev/stratis/my-pool/my-fs-1 \
/dev/stratis/my-pool/my-fs-2 \
/dev/stratis/my-pool/my-fs-n
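Then destroy each file system and, finally, the pool itself, for example:
# stratis filesystem destroy my-pool my-fs-1
# stratis filesystem destroy my-pool my-fs-2
# stratis pool destroy my-pool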
Additional resources
NOTE
Prerequisites
Procedure
2. Click Storage.
3. In the Storage table, click the menu button, ⋮, next to the Stratis pool you want to delete.
CHAPTER 30. GETTING STARTED WITH AN EXT4 FILE SYSTEM
Using extents: The ext4 file system uses extents, which improves performance when using large
files and reduces metadata overhead for large files.
Ext4 labels unallocated block groups and inode table sections accordingly, which allows the
block groups and table sections to be skipped during a file system check. This leads to a quicker file
system check, which becomes more beneficial as the file system grows in size.
Metadata checksum: By default, this feature is enabled in Red Hat Enterprise Linux 9.
Persistent pre-allocation
Delayed allocation
Multi-block allocation
Stripe-aware allocation
Extended attributes (xattr): This allows the system to associate several additional name and
value pairs per file.
Quota journaling: This avoids the need for lengthy quota consistency checks after a crash.
NOTE
The only supported journaling mode in ext4 is data=ordered (default). For more
information, see the Red Hat Knowledgebase solution Is the EXT journaling
option "data=writeback" supported in RHEL?.
Additional resources
Prerequisites
A partition on your disk. For information about creating MBR or GPT partitions, see Creating a
partition table on a disk with parted.
Procedure
For a regular-partition device, an LVM volume, an MD volume, or a similar device, use the
following command:
# mkfs.ext4 /dev/block_device
For striped block devices (for example, RAID5 arrays), the stripe geometry can be specified
at the time of file system creation. Using proper stripe geometry enhances the performance
of an ext4 file system. For example, to create a file system with a 64k stride (that is, 16 x
4096) on a 4k-block file system, use the following command:
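A sketch of such a command, assuming a 16-block stride and a 64-block stripe width:
# mkfs.ext4 -E stride=16,stripe-width=64 /dev/block_device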
stripe-width=value: Specifies the number of data disks in a RAID device, or the number
of stripe units in the stripe.
NOTE
Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-
41d9-b66d-96d749c02da7.
Replace /dev/block_device with the path to an ext4 file system to have the
UUID added to it: for example, /dev/sda8.
# blkid
Additional resources
Prerequisites
An ext4 file system. For information about creating an ext4 file system, see Creating an ext4 file
system.
Procedure
# mkdir /mount/point
Replace /mount/point with the directory in which the mount point of the partition must be
created.
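Then mount the file system; for example:
# mount /dev/block_device /mount/point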
To mount the file system persistently, see Persistently mounting file systems .
# df -h
Additional resources
Prerequisites
An ext4 file system. For information about creating an ext4 file system, see Creating an ext4 file
system.
An underlying block device of an appropriate size to hold the file system after resizing.
Procedure
# umount /dev/block_device
# e2fsck -f /dev/block_device
# resize2fs /dev/block_device size
Replace /dev/block_device with the path to the block device, for example /dev/sdb1.
Replace size with the required resize value using s, K, M, G, and T suffixes.
An ext4 file system may be grown while mounted using the resize2fs command:
NOTE
The size parameter is optional (and often redundant) when expanding.
resize2fs automatically expands to fill the available space of the container,
usually a logical volume or partition.
# df -h
Additional resources
NOTE
If you want a complete client-server solution for backups over a network, you can use
the Bacula backup utility, which is available in RHEL 9. For more information about Bacula, see
Bacula backup solution.