IBM Spectrum Scale
Command and Programming Reference
Version 5.1.0
IBM
GC28-3164-02
Note
Before using this information and the product it supports, read the information in “Notices” on page
1557.
This edition applies to version 5 release 1 modification 0 of the following products, and to all subsequent releases and
modifications until otherwise indicated in new editions:
• IBM Spectrum Scale Data Management Edition ordered through Passport Advantage® (product number 5737-F34)
• IBM Spectrum Scale Data Access Edition ordered through Passport Advantage (product number 5737-I39)
• IBM Spectrum Scale Erasure Code Edition ordered through Passport Advantage (product number 5737-J34)
• IBM Spectrum Scale Data Management Edition ordered through AAS (product numbers 5641-DM1, DM3, DM5)
• IBM Spectrum Scale Data Access Edition ordered through AAS (product numbers 5641-DA1, DA3, DA5)
• IBM Spectrum Scale Data Management Edition for IBM® ESS (product number 5765-DME)
• IBM Spectrum Scale Data Access Edition for IBM ESS (product number 5765-DAE)
Significant changes or additions to the text and illustrations are indicated by a vertical line (|) to the left of the change.
IBM welcomes your comments; see the topic “How to send your comments” on page xxxvi. When you send information to
IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without
incurring any obligation to you.
© Copyright International Business Machines Corporation 2015, 2021.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents
Tables.................................................................................................................. xi
mmcrcluster command........................................................................................................................... 303
mmcrfileset command............................................................................................................................ 308
mmcrfs command....................................................................................................................................315
mmcrnodeclass command...................................................................................................................... 330
mmcrnsd command.................................................................................................................................332
mmcrsnapshot command....................................................................................................................... 337
mmdefedquota command.......................................................................................................................342
mmdefquotaoff command.......................................................................................................................346
mmdefquotaon command.......................................................................................................................349
mmdefragfs command............................................................................................................................ 353
mmdelacl command................................................................................................................................ 356
mmdelcallback command....................................................................................................................... 358
mmdeldisk command.............................................................................................................................. 360
mmdelfileset command...........................................................................................................................365
mmdelfs command..................................................................................................................................369
mmdelnode command............................................................................................................................ 371
mmdelnodeclass command.................................................................................................................... 374
mmdelnsd command...............................................................................................................................376
mmdelsnapshot command......................................................................................................................378
mmdf command.......................................................................................................................................382
mmdiag command................................................................................................................................... 386
mmdsh command....................................................................................................................................393
mmeditacl command...............................................................................................................................395
mmedquota command............................................................................................................................ 398
mmexportfs command............................................................................................................................ 402
mmfsck command................................................................................................................................... 404
mmfsctl command...................................................................................................................................418
mmgetacl command................................................................................................................................422
mmgetstate command............................................................................................................................ 425
mmhadoopctl command......................................................................................................................... 428
mmhdfs command...................................................................................................................................430
mmhealth command............................................................................................................................... 435
mmimgbackup command........................................................................................................................450
mmimgrestore command........................................................................................................................ 454
mmimportfs command............................................................................................................................ 457
mmkeyserv command............................................................................................................................. 461
mmlinkfileset command..........................................................................................................................477
mmlsattr command................................................................................................................................. 479
mmlscallback command......................................................................................................................... 482
mmlscluster command............................................................................................................................484
mmlsconfig command............................................................................................................................. 487
mmlsdisk command................................................................................................................................ 489
mmlsfileset command............................................................................................................................. 493
mmlsfs command.................................................................................................................................... 498
mmlslicense command........................................................................................................................... 503
mmlsmgr command.................................................................................................................................507
mmlsmount command............................................................................................................................ 509
mmlsnodeclass command...................................................................................................................... 512
mmlsnsd command................................................................................................................................. 514
mmlspolicy command............................................................................................................................. 518
mmlspool command................................................................................................................................520
mmlsqos command................................................................................................................................. 522
mmlsquota command..............................................................................................................................527
mmlssnapshot command........................................................................................................................ 532
mmmigratefs command.......................................................................................................................... 535
mmmount command............................................................................................................................... 537
mmnetverify command........................................................................................................................... 540
mmnfs command.....................................................................................................................................552
mmnsddiscover command...................................................................................................................... 563
mmobj command.....................................................................................................................................565
mmperfmon command............................................................................................................................582
mmpmon command................................................................................................................................ 595
mmprotocoltrace command....................................................................................................................601
mmpsnap command................................................................................................................................605
mmputacl command................................................................................................................................608
mmqos command.................................................................................................................................... 610
mmquotaoff command............................................................................................................................ 641
mmquotaon command............................................................................................................................ 644
mmreclaimspace command....................................................................................................................647
mmremotecluster command...................................................................................................................650
mmremotefs command........................................................................................................................... 653
mmrepquota command...........................................................................................................................656
mmrestoreconfig command.................................................................................................................... 661
mmrestorefs command........................................................................................................................... 665
mmrestripefile command........................................................................................................................ 668
mmrestripefs command.......................................................................................................................... 672
mmrpldisk command...............................................................................................................................679
mmsdrrestore command.........................................................................................................................686
mmsetquota command........................................................................................................................... 691
mmshutdown command..........................................................................................................................695
mmsmb command...................................................................................................................................698
mmsnapdir command............................................................................................................................. 711
mmstartup command.............................................................................................................................. 715
mmtracectl command............................................................................................................................. 717
mmumount command............................................................................................................................. 721
mmunlinkfileset command......................................................................................................................724
mmuserauth command........................................................................................................................... 727
mmwatch command................................................................................................................................ 753
mmwinservctl command......................................................................................................................... 759
spectrumscale command........................................................................................................................ 762
Chapter 2. IBM Spectrum Scale Data Management API for GPFS information...... 793
Overview of IBM Spectrum Scale Data Management API for GPFS.......................................................793
GPFS-specific DMAPI events.............................................................................................................793
DMAPI functions................................................................................................................................ 795
DMAPI configuration attributes......................................................................................................... 798
DMAPI restrictions for GPFS..............................................................................................................799
Concepts of IBM Spectrum Scale Data Management API for GPFS...................................................... 800
Sessions..............................................................................................................................................800
Data management events.................................................................................................................. 800
Mount and unmount...........................................................................................................................802
Tokens and access rights................................................................................................................... 803
Parallelism in Data Management applications.................................................................................. 803
Data Management attributes............................................................................................................. 804
Support for NFS..................................................................................................................................804
Quota.................................................................................................................................................. 805
Memory mapped files........................................................................................................................ 805
Administration of IBM Spectrum Scale Data Management API for GPFS..............................................805
Required files for implementation of Data Management applications.............................................805
GPFS configuration attributes for DMAPI..........................................................................................806
Enabling DMAPI for a file system.......................................................................................................808
Initializing the Data Management application.................................................................................. 808
Specifications of enhancements for IBM Spectrum Scale Data Management API for GPFS................ 808
Enhancements to data structures..................................................................................................... 809
Usage restrictions on DMAPI functions.............................................................................................810
Definitions for GPFS-specific DMAPI functions................................................................................ 812
Semantic changes to DMAPI functions............................................................................................. 825
GPFS-specific DMAPI events.............................................................................................................826
Additional error codes returned by DMAPI functions....................................................................... 827
Failure and recovery of IBM Spectrum Scale Data Management API for GPFS.................................... 829
Single-node failure.............................................................................................................................829
Session failure and recovery..............................................................................................................830
Event recovery....................................................................................................................................830
Loss of access rights.......................................................................................................................... 831
DODeferred deletions........................................................................................................................ 831
DM application failure........................................................................................................................ 831
gpfs_ireadlink64() subroutine.................................................................................................................920
gpfs_ireadx() subroutine......................................................................................................................... 922
gpfs_iscan_t structure.............................................................................................................................924
gpfs_lib_init() subroutine........................................................................................................................ 925
gpfs_lib_term() subroutine......................................................................................................................926
gpfs_next_inode() subroutine................................................................................................................. 927
gpfs_next_inode64() subroutine............................................................................................................ 929
gpfs_next_inode_with_xattrs() subroutine............................................................................................ 931
gpfs_next_inode_with_xattrs64() subroutine........................................................................................ 933
gpfs_next_xattr() subroutine.................................................................................................................. 935
gpfs_opaque_acl_t structure...................................................................................................................937
gpfs_open_inodescan() subroutine........................................................................................................ 938
gpfs_open_inodescan64() subroutine.................................................................................................... 941
gpfs_open_inodescan_with_xattrs() subroutine....................................................................................944
gpfs_open_inodescan_with_xattrs64() subroutine............................................................................... 947
gpfs_prealloc() subroutine...................................................................................................................... 950
gpfs_putacl() subroutine......................................................................................................................... 953
gpfs_putacl_fd() subroutine....................................................................................................................955
gpfs_quotactl() subroutine...................................................................................................................... 957
gpfs_quotaInfo_t structure..................................................................................................................... 960
gpfs_seek_inode() subroutine................................................................................................................ 962
gpfs_seek_inode64() subroutine............................................................................................................ 964
gpfs_stat() subroutine............................................................................................................................. 966
gpfs_stat_inode() subroutine.................................................................................................................. 968
gpfs_stat_inode64() subroutine..............................................................................................................970
gpfs_stat_inode_with_xattrs() subroutine............................................................................................. 972
gpfs_stat_inode_with_xattrs64() subroutine......................................................................................... 974
gpfs_stat_x() subroutine......................................................................................................................... 976
gpfsFcntlHeader_t structure................................................................................................................... 978
gpfsGetDataBlkDiskIdx_t structure........................................................................................................979
gpfsGetFilesetName_t structure.............................................................................................................981
gpfsGetReplication_t structure............................................................................................................... 982
gpfsGetSetXAttr_t structure....................................................................................................................984
gpfsGetSnapshotName_t structure........................................................................................................ 986
gpfsGetStoragePool_t structure............................................................................................................. 987
gpfsListXAttr_t structure......................................................................................................................... 988
gpfsRestripeData_t structure.................................................................................................................. 989
gpfsRestripeRange_t structure............................................................................................................... 991
gpfsRestripeRangeV2_t structure........................................................................................................... 993
gpfsSetReplication_t structure............................................................................................................... 996
gpfsSetStoragePool_t structure.............................................................................................................. 998
Config: GET ........................................................................................................................................... 1036
Filesystems: GET .................................................................................................................................. 1040
Filesystems/{filesystemName}: GET ....................................................................................................1047
Filesystems/{filesystemName}/acl/{path}: GET .................................................................................. 1054
Filesystems/{filesystemName}/acl/{path}: PUT .................................................................................. 1057
Filesystems/{filesystemName}/afm/state: GET................................................................................... 1062
Filesystems/{filesystemName}/audit: PUT ..........................................................................................1065
Filesystems/{filesystemName}/directory/{path}: POST.......................................................................1069
Filesystems/{filesystemName}/directory/{path}: DELETE................................................................... 1072
Filesystems/{filesystemName}/directoryCopy/{sourcePath}: PUT..................................................... 1075
Filesystems/{filesystemName}/disks: GET ..........................................................................................1079
Filesystems/{filesystemName}/disks/{diskName}: GET ..................................................................... 1083
Filesystems/{filesystemName}/filesets: GET....................................................................................... 1087
Filesystems/{filesystemName}/filesets: POST.....................................................................................1096
Filesystems/{filesystemName}/filesets/cos: POST ............................................................................. 1101
Filesystems/{filesystemName}/filesets/{filesetName}: DELETE......................................................... 1106
Filesystems/{filesystemName}/filesets/{filesetName}: GET............................................................... 1109
Filesystems/{filesystemName}/filesets/{filesetName}: PUT............................................................... 1118
Filesystems/{filesystemName}/filesets/{filesetName}/afmctl: POST................................................. 1124
Filesystems/{filesystemName}/filesets/{filesetName}/cos/directory: POST......................................1129
Filesystems/{filesystemName}/filesets/{filesetName}/cos/download: POST.................................... 1133
Filesystems/{filesystemName}/filesets/{filesetName}/cos/evict: POST.............................................1137
Filesystems/{filesystemName}/filesets/{filesetName}/cos/upload: POST......................................... 1141
Filesystems/{filesystemName}/filesets/{filesetName}/directory/{path}: POST................................. 1145
Filesystems/{filesystemName}/filesets/{filesetName}/directory/{path}: DELETE..............................1148
Filesystems/{filesystemName}/filesets/{filesetName}/directoryCopy/{sourcePath}: PUT................ 1151
Filesystems/{filesystemName}/filesets/{filesetName}/link: DELETE..................................................1155
Filesystems/{filesystemName}/filesets/{filesetName}/link: POST......................................................1158
Filesystems/{filesystemName}/filesets/{filesetName}/psnaps: POST ...............................................1161
Filesystems/{filesystemName}/filesets/{filesetName}/psnaps/{snapshotName}: DELETE .............. 1165
Filesystems/{filesystemName}/filesets/{filesetName}/quotadefaults: GET ...................................... 1169
Filesystems/{filesystemName}/filesets/{filesetName}/defaultquotas: PUT ...................................... 1173
Filesystems/{filesystemName}/filesets/{filesetName}/quotadefaults: POST ..................................... 1177
Filesystems/{filesystemName}/filesets/{filesetName}/quotas: GET ..................................................1181
Filesystems/{filesystemName}/filesets/{filesetName}/quotas: POST ............................................... 1185
Filesystems/{filesystemName}/filesets/{filesetName}/snapshotCopy/{snapshotName}: PUT......... 1189
Filesystems/{filesystemName}/filesets/{filesetName}/snapshotCopy/{snapshotName}/path/
{sourcePath}: PUT............................................................................................................................ 1193
Filesystems/{filesystemName}/filesets/{filesetName}/snapshots: GET ............................................1197
Filesystems/{filesystemName}/filesets/{filesetName}/snapshots: POST ..........................................1201
Filesystems/{filesystemName}/filesets/{filesetName}/snapshots/{snapshotName}: DELETE..........1204
Filesystems/{filesystemName}/filesets/{filesetName}/snapshots/{snapshotName}: GET ............... 1207
Filesystems/{filesystemName}/suspend: PUT .................................................................................... 1210
Filesystems/{filesystemName}/filesets/{filesetName}/symlink/{linkpath}: POST............................. 1213
Filesystems/{filesystemName}/filesets/{filesetName}/symlink/{path}: DELETE................................1216
Filesystems/{filesystemName}/filesets/{filesetName}/watch: PUT ...................................................1219
Filesystems/{filesystemName}/mount: PUT ....................................................................................... 1223
Filesystems/{filesystemName}/owner/{path}: GET .............................................................................1227
Filesystems/{filesystemName}/owner/{path}: PUT .............................................................................1230
Filesystems/{filesystemName}/policies: GET ..................................................................................... 1233
Filesystems/{filesystemName}/policies: PUT ..................................................................................... 1236
Filesystems/{filesystemName}/quotadefaults: GET ........................................................................... 1240
Filesystems/{filesystemName}/quotadefaults: PUT ........................................................................... 1244
Filesystems/{filesystemName}/quotadefaults: POST ......................................................................... 1248
Filesystems/{filesystemName}/quotagracedefaults: GET .................................................................. 1252
Filesystems/{filesystemName}/quotagracedefaults: POST ................................................................ 1255
Filesystems/{filesystemName}/quotamanagement: PUT ................................................................... 1259
Filesystems/{filesystemName}/quotas: GET ....................................................................................... 1262
Filesystems/{filesystemName}/quotas: POST .....................................................................................1266
Filesystems/{filesystemName}/resume: PUT ......................................................................................1270
Filesystems/{filesystemName}/snapshotCopy/{snapshotName}: PUT...............................................1273
Filesystems/{filesystemName}/snapshotCopy/{snapshotName}/path/{sourcePath}: PUT............... 1277
Filesystems/{filesystemName}/snapshots: GET ................................................................................. 1281
Filesystems/{filesystemName}/snapshots: POST ............................................................................... 1284
Filesystems/{filesystemName}/snapshots/{snapshotName}: DELETE............................................... 1287
Filesystems/{filesystemName}/snapshots/{snapshotName}: GET .................................................... 1290
Filesystems/{filesystemName}/symlink/{linkpath}: POST...................................................................1293
Filesystems/{filesystemName}/symlink/{path}: DELETE..................................................................... 1296
Filesystems/{filesystemName}/unmount: PUT ................................................................................... 1299
Filesystems/{filesystemName}/watch: PUT ........................................................................................ 1303
Filesystems/{filesystemName}/watches: GET .................................................................................... 1307
Info: GET ............................................................................................................................................... 1311
Jobs: GET .............................................................................................................................................. 1314
Jobs/{jobId}: DELETE ........................................................................................................................... 1318
Jobs/{jobID}: GET ................................................................................................................................. 1321
NFS/exports: GET ................................................................................................................................. 1324
NFS/exports: POST ............................................................................................................................... 1328
NFS/exports/{exportPath}: GET ........................................................................................................... 1331
NFS/exports/{exportPath}: PUT ........................................................................................................... 1335
NFS/exports/{exportPath}: DELETE...................................................................................................... 1339
Nodeclasses: GET .................................................................................................................................1342
Nodeclasses: POST ...............................................................................................................................1345
Nodeclasses/{nodeclassName}: GET .................................................................................................. 1349
Nodeclasses/{nodeclassName}: DELETE ............................................................................................ 1352
Nodeclasses/{nodeclassName}: PUT .................................................................................................. 1355
Nodes: GET ........................................................................................................................................... 1359
Nodes: POST ......................................................................................................................................... 1365
Nodes/afm/mapping: GET ....................................................................................................................1368
Nodes/afm/mapping: POST ................................................................................................................. 1371
Nodes/afm/mapping: DELETE ..............................................................................................................1374
Nodes/afm/mapping/{mappingName}: GET ........................................................................................1377
Nodes/afm/mapping/{mappingName}: PUT........................................................................................ 1380
Nodes/{name}: DELETE ........................................................................................................................ 1383
Nodes/{name}: GET .............................................................................................................................. 1387
Nodes/{name}: PUT...............................................................................................................................1392
Nodes/{name}/health/events: GET ......................................................................................................1395
Nodes/{name}/health/states: GET .......................................................................................................1399
Nodes/{name}/services: GET ............................................................................................................... 1403
Nodes/{name}/services/{serviceName}: GET ......................................................................................1406
Nodes/{name}/services/{serviceName}: PUT...................................................................................... 1410
NSDs: GET .............................................................................................................................................1413
NSDs/{nsdName}: GET ......................................................................................................................... 1418
Perfmon/data: GET ............................................................................................................................... 1421
Querying performance data by using /perfmon/data request .......................................................1422
Perfmon/sensors/{sensorName}: GET .................................................................................................1426
Perfmon/sensors: GET ......................................................................................................................... 1428
Perfmon/sensors/{sensorName}: PUT .................................................................................................1430
Remotemount/authenticationkey: GET ............................................................................................... 1434
Remotemount/authenticationkey: POST ............................................................................................. 1436
Remotemount/authenticationkey: PUT ............................................................................................... 1439
Remotemount/owningclusters: GET ....................................................................................................1443
Remotemount/owningclusters: POST ..................................................................................................1445
Remotemount/owningclusters/{owningCluster}: DELETE .................................................................. 1449
Remotemount/owningclusters/{owningCluster}: GET ........................................................................ 1452
Remotemount/owningclusters/{owningCluster}: PUT......................................................................... 1455
Remotemount/remoteclusters: GET ....................................................................................................1459
Remotemount/remoteclusters: POST.................................................................................................. 1461
Remotemount/remoteclusters/{remoteCluster}: DELETE................................................................... 1465
Remotemount/remoteclusters/{remoteCluster}: GET ........................................................................ 1468
Remotemount/remoteclusters/{remoteCluster}: PUT......................................................................... 1471
Remotemount/remoteclusters/{remoteCluster}/access/{owningClusterFilesystem}: POST.............1475
Remotemount/remoteclusters/{remoteCluster}/access/{owningClusterFilesystem}: PUT............... 1479
Remotemount/remoteclusters/{remoteCluster}/deny/{owningClusterFilesystem}: DELETE............ 1483
Remotemount/remotefilesystems: GET .............................................................................................. 1487
Remotemount/remotefilesystems: POST ............................................................................................ 1489
Remotemount/remotefilesystems/{remoteFilesystem}: DELETE ...................................................... 1493
Remotemount/remotefilesystems/{remoteFilesystem}: GET .............................................................1496
Remotemount/remotefilesystems/{remoteFilesystem}: PUT .............................................................1498
SMB/shares: GET .................................................................................................................................. 1502
SMB/shares/{shareName}: GET ........................................................................................................... 1507
SMB/shares: POST ................................................................................................................................1511
SMB/shares/{shareName}: PUT ........................................................................................................... 1516
SMB/shares/{shareName}: DELETE...................................................................................................... 1521
SMB/shares/{shareName}/acl: DELETE................................................................................................1524
SMB/shares/{shareName}/acl: GET .....................................................................................................1527
SMB/shares/{shareName}/acl/{name}: DELETE.................................................................................. 1530
SMB/shares/{shareName}/acl/{name}: GET ........................................................................................1533
SMB/shares/{shareName}/acl/{name}: PUT ........................................................................................1536
Thresholds: GET ................................................................................................................................... 1539
Thresholds: POST ................................................................................................................................. 1542
Thresholds/{name}: DELETE ................................................................................................................ 1547
Thresholds/{name}: GET ...................................................................................................................... 1550
Notices............................................................................................................1557
Trademarks............................................................................................................................................1558
Terms and conditions for product documentation............................................................................... 1558
IBM Online Privacy Statement.............................................................................................................. 1559
Glossary.......................................................................................................... 1561
Index.............................................................................................................. 1569
Tables
2. Conventions..............................................................................................................................................xxxv
7. GPFS commands........................................................................................................................................... 1
11. Key-value.................................................................................................................................................122
16. Values returned by statvfs or statfs for different settings of linuxStatfsUnits...................................... 187
22. Contents of columns input1 and input2 depending on the value in column Buf type..........................387
24. mmkeyserv tenant show.........................................................................................................................468
49. List of request parameters................................................................................................................... 1065
74. List of request parameters................................................................................................................... 1173
99. List of request parameters................................................................................................................... 1266
124. List of request parameters................................................................................................................. 1355
149. List of request parameters................................................................................................................. 1455
About this information
This edition applies to IBM Spectrum Scale version 5.1.0 for AIX®, Linux®, and Windows.
IBM Spectrum Scale is a file management infrastructure, based on IBM General Parallel File System
(GPFS) technology, which provides unmatched performance and reliability with scalable access to critical
file data.
To find out which version of IBM Spectrum Scale is running on a particular AIX node, enter:
lslpp -l gpfs\*
To find out which version of IBM Spectrum Scale is running on a particular Linux node, enter:
rpm -qa | grep gpfs (for SLES and Red Hat Enterprise Linux)
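For example, on a Red Hat Enterprise Linux node the query might return output similar to the following; the package list and fix level shown here are illustrative only:
rpm -qa | grep gpfs
gpfs.base-5.1.0-0.x86_64
gpfs.gpl-5.1.0-0.noarch
gpfs.docs-5.1.0-0.noarch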
To find out which version of IBM Spectrum Scale is running on a particular Windows node, open Programs
and Features in the control panel. The IBM Spectrum Scale installed program name includes the version
number.
Which IBM Spectrum Scale information unit provides the information you need?
The IBM Spectrum Scale library consists of the information units listed in Table 1 on page xx.
To use these information units effectively, you must be familiar with IBM Spectrum Scale and the AIX,
Linux, or Windows operating system, or all of them, depending on which operating systems are in use at
your installation. Where necessary, these information units provide some background information relating
to AIX, Linux, or Windows. However, more commonly they refer to the appropriate operating system
documentation.
Note: Throughout this documentation, the term "Linux" refers to all supported distributions of Linux,
unless otherwise specified.
• mmcrnsd command
• mmcrsnapshot command
• mmdefedquota command
• mmdefquotaoff command
• mmdefquotaon command
• mmdefragfs command
• mmdelacl command
• mmdelcallback command
• mmdeldisk command
• mmdelfileset command
• mmdelfs command
• mmdelnode command
• mmdelnodeclass command
• mmdelnsd command
• mmdelsnapshot command
• mmdf command
• mmdiag command
• mmdsh command
• mmeditacl command
• mmedquota command
• mmexportfs command
• mmfsck command
• mmfsctl command
• mmgetacl command
• mmgetstate command
• mmhadoopctl command
• mmhdfs command
• mmhealth command
• mmimgbackup command
• mmimgrestore command
• mmimportfs command
• mmkeyserv command
• mmlsfileset command
• mmlsfs command
• mmlslicense command
• mmlsmgr command
• mmlsmount command
• mmlsnodeclass command
• mmlsnsd command
• mmlspolicy command
• mmlspool command
• mmlsqos command
• mmlsquota command
• mmlssnapshot command
• mmmigratefs command
• mmmount command
• mmnetverify command
• mmnfs command
• mmnsddiscover command
• mmobj command
• mmperfmon command
• mmpmon command
• mmprotocoltrace command
• mmpsnap command
• mmputacl command
• mmqos command
• mmquotaoff command
• mmquotaon command
• mmreclaimspace command
• mmremotecluster command
• mmremotefs command
• mmrepquota command
• mmrestoreconfig command
• mmrestorefs command
• mmrestripefile command
• mmsnapdir command
• mmstartup command
• mmtracectl command
• mmumount command
• mmunlinkfileset command
• mmuserauth command
• mmwatch command
• mmwinservctl command
• spectrumscale command
Programming reference
• IBM Spectrum Scale Data Management API for GPFS information
• GPFS programming interfaces
• GPFS user exits
• IBM Spectrum Scale management API endpoints
• Considerations for GPFS applications
IBM Spectrum Scale: Big Data and Analytics Guide
This information unit covers:
Hortonworks Data Platform 3.X
• Planning
• Installation
• Upgrading and uninstallation
• Configuration
• Administration
• Limitations
• Problem determination
Open Source Apache Hadoop
• Open Source Apache Hadoop without CES HDFS
• Open Source Apache Hadoop with CES HDFS
BigInsights® 4.2.5 and Hortonworks Data Platform 2.6
• Planning
• Installation
• Upgrading software stack
• Configuration
• Administration
• Troubleshooting
• Limitations
• FAQ
Intended users:
• System administrators of IBM Spectrum Scale systems
• Application programmers who are experienced with IBM Spectrum Scale systems and familiar with the terminology and concepts in the XDSM standard
Table 2. Conventions
Convention Usage
bold Bold words or characters represent system elements that you must use literally,
such as commands, flags, values, and selected menu options.
Depending on the context, bold typeface sometimes represents path names,
directories, or file names.
bold underlined Bold underlined keywords are defaults. These take effect if you do not specify a
different keyword.
italic Italic words or characters represent variable values that you must supply.
Italics are also used for information unit titles, for the first use of a glossary term,
and for general emphasis in text.
<key> Angle brackets (less-than and greater-than) enclose the name of a key on the
keyboard. For example, <Enter> refers to the key on your terminal or workstation
that is labeled with the word Enter.
\ In command examples, a backslash indicates that the command or coding example
continues on the next line.
{item} Braces enclose a list from which you must choose an item in format and syntax
descriptions.
[item] Brackets enclose optional items in format and syntax descriptions.
<Ctrl-x> The notation <Ctrl-x> indicates a control character sequence. For example,
<Ctrl-c> means that you hold down the control key while pressing <c>.
item... Ellipses indicate that you can repeat the preceding item one or more times.
| In synopsis statements, vertical lines separate a list of choices. In other words, a
vertical line means Or.
In the left margin of the document, vertical lines indicate technical changes to the
information.
Note: CLI options that accept a list of option values are delimited with a comma and no space between
the values. For example, to display the state of three nodes, use mmgetstate -N NodeA,NodeB,NodeC.
Exceptions to this syntax are listed specifically within the command.
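The following example illustrates two of these conventions together; the node names are placeholders. The backslash continues the command on the next line, and the -N list is comma-delimited with no spaces between the values:
mmgetstate -L \
 -N NodeA,NodeB,NodeC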
Summary of changes
for IBM Spectrum Scale version 5.1.0
as updated, December 2020
This release of the IBM Spectrum Scale licensed program and the IBM Spectrum Scale library includes
the following improvements. All improvements are available after an upgrade, unless otherwise specified.
• Feature updates
• Documented commands, structures, and subroutines
• Messages
• Deprecated items
• Changes in documentation
Important: IBM Spectrum Scale 5.1.x supports Python 3.6 or later. It is strongly recommended that
Python 3.6 be installed through the OS package manager (for example, yum install python3). If you
install Python 3.6 by other means, unexpected results might occur, such as a failure to install gpfs.base
because of prerequisite checks, and workarounds might be required.
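A minimal sketch of the recommended approach on a Red Hat Enterprise Linux node follows; the package name python3 is taken from the example above, so adjust it for your distribution's package manager:
yum install python3
python3 --version    # should report Python 3.6 or later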
AFM and AFM DR-related changes
• Support for file system-level migration by using AFM. For more information, see Data migration by
using Active File Management in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
• Performance improvement for file system-level migration by setting the afmGateway parameter
value to all (afmGateway=all); a sketch follows this list. For more information, see Data migration by
using Active File Management in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide and
the mmchfileset command in the IBM Spectrum Scale: Command and Programming Reference.
• AFM-DR pre-deployment reference checklist. For more information, see General guidelines and
recommendation for AFM-DR in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
• AFM support when the kernel is booted in the FIPS mode.
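A minimal sketch of the afmGateway setting that is mentioned in the migration item above follows; the file system name fs1 and the fileset name migrateFset are placeholders, and the verification line assumes the --afm option of the mmlsfileset command:
mmchfileset fs1 migrateFset -p afmGateway=all
mmlsfileset fs1 migrateFset --afm -L    # verify the AFM attributes of the fileset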
The mmchnode command supports changing the daemon IP address of a quorum node in a
CCR-enabled cluster
For more information, see Changing IP addresses or host names of cluster nodes in the IBM
Spectrum Scale: Administration Guide and the mmchnode command in the IBM Spectrum Scale:
Command and Programming Reference.
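A minimal sketch follows; the node name quorumNode1 and the address 192.0.2.10 are placeholders, and the sketch assumes that the GPFS daemon is stopped on the node while its address is changed. See the referenced topics for the complete procedure:
mmshutdown -N quorumNode1
mmchnode --daemon-interface=192.0.2.10 -N quorumNode1
mmstartup -N quorumNode1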
Encryption
Support for CA-signed client certificates; updating client certificates in one step
The mmkeyserv client create command can create a key client with a user-provided CA-
signed certificate or a system-generated self-signed certificate. The new mmkeyserv client
update command updates an expired or unexpired CA-signed or self-signed client certificate
in one step. It is no longer necessary to delete the old client and create a new one.
The nistCompliance attribute must be set to NIST 800-131A for clusters running at version
5.1 or later
Starting from the IBM Spectrum Scale 5.1.0 release, setting nistCompliance to off is not
allowed. In addition, updating a cluster to version 5.1 by using mmchconfig
release=LATEST requires nistCompliance to be set to NIST 800-131A. For more
information, see mmchconfig command in the IBM Spectrum Scale: Command and
Programming Reference.
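A minimal sketch of this prerequisite follows; the attribute value spelling SP800-131A is an assumption, so confirm it in the mmchconfig command topic:
mmlsconfig nistCompliance                # display the current setting
mmchconfig nistCompliance=SP800-131A     # required before the upgrade
mmchconfig release=LATEST                # complete the cluster-level upgrade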
Security
Sudo wrapper security improvement
You no longer need to add the scp, echo, and mmsdrrestore commands to the sudoers
file. For more information, see Configuring sudo in IBM Spectrum Scale: Administration Guide.
Features
New mmqos command expands QoS features
The mmqos command combines the functionality of the existing mmchqos and mmlsqos
commands, is easy to use, and supports user-created service classes and dynamic I/O service
sharing. With user-created classes and their associated filesets, you can set I/O service usage
levels for functional groups in your organization. Future QoS development will be through
mmqos. For more information, see mmchqos command in the IBM Spectrum Scale: Command
and Programming Reference.
Performance
Optimizing background space reclamation for thin-provision devices
The mmchconfig enhances the performance of background space reclamation for thin
provisioned devices with the backgroundSpaceReclaimThreshold attribute. You can
configure a threshold value to define how often you want the background space reclamation to
run. For more information, see the topic mmchconfig command in the IBM Spectrum Scale:
Command and Programming Reference.
Note: The space reclaim function instructs the device to remove unused blocks as soon as they are detected. Discarding these blocks increases the space efficiency of the device and also reduces the performance impact of write amplification if the device is an SSD, such as an NVMe disk.
The IBM performance measurement team estimates that in an IBM Spectrum Scale setup with Intel NVMe disks, file creation and update workloads can experience heavily degraded performance without any space reclaim. The worst degradation is estimated to be more than 80%. With background space reclaim, the performance degradation is limited to a maximum of 20%. Based on these estimates, you can use background space reclaim for daily workloads and manually invoke the mmreclaimspace command to reclaim all reclaimable space during a maintenance window.
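For example, to set this attribute with the mmchconfig command (the threshold value that is shown here is only an illustrative placeholder; see the mmchconfig command for the valid values):
mmchconfig backgroundSpaceReclaimThreshold=1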
Offline mmfsck directory scanning of a file system that has a very sparse inode 0 file
Performance is improved for the mmfsck command in offline mode when it is doing directory
scanning of a file system with a very sparse inode 0 file. A very sparse inode 0 file can occur
when a file system has a large number of independent filesets.
– SLES 12
– Ubuntu 16.04 and 18.04
SMB changes
SMB is supported on SUSE SLES 15 on s390x (RPQ Request required).
Python-related changes
From IBM Spectrum Scale release 5.1.0, all Python code in the IBM Spectrum Scale product is
converted to Python 3. The minimum supported Python version is 3.6.
For compatibility reasons on IBM Spectrum Scale 5.1.0.x and later on Red Hat Enterprise Linux 7.x
(7.7 and later), a few Python 2 files are packaged and they might trigger dependency-related
messages. In certain scenarios, Python 2.7 might also need to be installed. Multiple versions of
Python can co-exist on the same system. For more information, see the entry about mmadquery in
A feature is a deprecated feature if it is supported in the current release but its support might be removed in a future release. In some cases, it might be advisable to plan to discontinue the use of deprecated functionality.
A feature is a discontinued feature if it has been removed in a release and is no longer available. You need to make changes if you were using that functionality in previous releases.
Changes in documentation
IBM Spectrum Scale FAQ improvements
The IBM Spectrum Scale support information has been condensed and moved to Q2.1 What is
supported on IBM Spectrum Scale for AIX, Linux, Power, and Windows? In this new question, the
support information is available for the latest GA (5.1.0) and PTF (5.0.5.3) in the following format:
a table that details the supported operating systems and components and latest tested kernels, a
table that details the support exceptions for Linux, and a table that details the support
exceptions for AIX and Windows. It is the intention for IBM Spectrum Scale to support all features
List of documentation changes in product guides and respective IBM Knowledge Center sections
The following is a list of documentation changes including changes in topic titles, changes in
placement of topics, and deleted topics:
The following commands are specific to IBM Spectrum Scale RAID and are documented in IBM Spectrum
Scale RAID: Administration:
• mmaddcomp
• mmaddcompspec
• mmaddpdisk
• mmchcarrier
• mmchcomp
• mmchcomploc
• mmchenclosure
• mmchfirmware
gpfs.snap command
Creates an informational system snapshot at a single point in time. This system snapshot consists of
information such as cluster configuration, disk configuration, network configuration, network status, GPFS
logs, dumps, and traces.
Synopsis
gpfs.snap [-d OutputDirectory] [-m | -z]
[-a | -N {Node[,Node...] | NodeFile | NodeClass}]
[--check-space | --no-check-space | --check-space-only]
[--cloud-gateway {NONE | BASIC | FULL}] [--full-collection] [--deadlock [--quick] |
--limit-large-files {YYYY:MM:DD:HH:MM | NumberOfDaysBack | latest}]
[--exclude-aix-disk-attr] [--exclude-aix-lvm] [--exclude-merge-logs]
[--exclude-net] [--gather-logs] [--mmdf] [--performance] [--prefix]
[--protocol ProtocolType[,ProtocolType,...]] [--timeout Seconds]
[--purge-files KeepNumberOfDaysBack] [--hadoop]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the gpfs.snap command as the main tool to gather data when a GPFS problem is encountered, such
as a hung file system, a hung GPFS command, or a daemon assert.
The gpfs.snap command gathers information (for example, GPFS internal dumps, traces, and kernel
thread dumps) to solve a GPFS problem.
Note: By default, large debug files are now a delta collection, which means that they are only collected
when there are new files since the previous run of gpfs.snap. To override this default behavior, use
either the --limit-large-files or --full-collection options.
Note: This utility program is a service tool and options might change dynamically. The tool impacts
performance and occupies disk space when it runs.
Parameters
-d OutputDirectory
Specifies the output directory where the snapshot information is stored. You cannot specify a
directory that is located in a GPFS file system that is managed by the same cluster that you are
running the gpfs.snap command against. The default output directory is /tmp/gpfs.snapOut.
-m
Specifying this option is equivalent to specifying --exclude-merge-logs with -N.
-z
Collects gpfs.snap data only from the node on which the command is invoked. No master data is
collected.
-a
Directs gpfs.snap to collect data from all nodes in the cluster. This value is the default.
-N {Node[,Node ...] | NodeFile | NodeClass}
Specifies the nodes from which to collect gpfs.snap data. This option supports all defined node
classes. For more information about how to specify node names, see Specifying nodes as input to
GPFS commands in IBM Spectrum Scale: Administration Guide.
--check-space
Specifies that space checking is performed before data is collected.
--no-check-space
Specifies that no space checking is performed. This value is the default.
--check-space-only
Specifies that only space checking is performed. No data is collected.
--cloud-gateway {NONE | BASIC | FULL}
When this option is set to NONE, no Transparent cloud tiering data is collected. With the BASIC
option, when the Transparent cloud tiering service is enabled, the snap collects information such as
logs, traces, Java™ cores, along with minimal system and IBM Spectrum Scale cluster information
specific to Transparent cloud tiering. No customer sensitive information is collected.
Note: The default behavior of the gpfs.snap command includes basic information of Transparent
cloud tiering, in addition to the GPFS information.
With the FULL option, extra details such as Java Heap dump are collected, along with the information
captured with the BASIC option.
--full-collection
Specifies that all large debug files are collected instead of the default behavior that collects only new
files since the previous run of gpfs.snap.
--deadlock
Collects only the minimum amount of data necessary to debug a deadlock problem. Part of the data
that is collected is the output of the mmfsadm dump all command. This option ignores all other
options except for -a, -N, -d, and --prefix.
--quick
Collects less data when specified along with the --deadlock option. The output includes mmfsadm
dump most, mmfsadm dump kthreads, and 10 seconds of trace in addition to the usual
gpfs.snap output.
--limit-large-files {YYYY:MM:DD:HH:MM | NumberOfDaysBack | latest}
Specifies a time limit to reduce the number of large files collected.
--exclude-aix-disk-attr
Specifies that data about AIX disk attributes is not collected. Collecting data about AIX disk attributes
on an AIX node that has a large number of disks might be very time-consuming, so using this option
might help improve performance.
--exclude-aix-lvm
Specifies that data about the AIX Logical Volume Manager (LVM) is not collected.
--exclude-merge-logs
Specifies that merge logs and waiters are not collected.
--exclude-net
Specifies that network-related information is not collected.
--gather-logs
Gathers, merges, and chronologically sorts all of the mmfs.log files. The results are stored in the
directory that is specified with the -d option.
--mmdf
Specifies that mmdf output is collected.
--performance
Specifies that performance data is to be gathered. It is recommended to issue the gpfs.snap command with the -a option on all nodes or on the pmcollector node that has an ACTIVE THRESHOLD MONITOR role. Starting from IBM Spectrum Scale version 5.1.0, the performance data package also includes the performance monitoring report from the top metric for the last 24 hours.
Note: The performance script can take up to 30 minutes to run. Therefore, the script is not included
when all other types of protocol information are gathered by default. Specifying this option is the only
way to turn on the gathering of performance data.
--prefix
Specifies that the prefix name gpfs.snap is added to the tar file.
--protocol ProtocolType[,ProtocolType,...]
Specifies the type or types of protocol information to be gathered. By default, whenever any protocol
is enabled on a file system, information is gathered for all types of protocol information (except for
performance data; see the --performance option). However, when the --protocol option is
specified, the automatic gathering of all protocol information is turned off, and only the specified type
of protocol information is gathered. The following values for ProtocolType are accepted:
smb
nfs
object
authentication
ces
core
none
--timeout Seconds
Specifies the timeout value, in seconds, for all commands.
--purge-files KeepNumberOfDaysBack
Specifies that large debug files are deleted from the cluster nodes based on the KeepNumberOfDaysBack value. If 0 is specified, all of the large debug files are deleted. If a value greater than 0 is specified, large debug files that are older than the specified number of days are deleted. For example, if the value 2 is specified, the previous two days of large debug files are retained.
This option is not compatible with many of the gpfs.snap options because it only removes files and
does not collect any gpfs.snap data.
--hadoop
Specifies that Hadoop data is to be gathered.
Use the -z option to generate a non-master snapshot. This option is useful if there are many nodes on
which to take a snapshot, and only one master snapshot is needed. For a GPFS problem within a large
cluster (hundreds or thousands of nodes), one strategy might call for a single master snapshot (one
invocation of gpfs.snap with no options), and multiple non-master snapshots (multiple invocations of
gpfs.snap with the -z option).
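For example, assuming a large cluster in which node c34f2n03 is chosen to produce the master snapshot (the node names are illustrative only):
c34f2n03:# gpfs.snap
c34f2n04:# gpfs.snap -z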
Use the -N option to obtain gpfs.snap data from multiple nodes in the cluster. When the -N option is
used, the gpfs.snap command takes non-master snapshots of all the nodes that are specified with this
option and a master snapshot of the node on which it was issued.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the gpfs.snap command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see the topic Requirements for administering a GPFS file system in the IBM Spectrum
Scale: Administration Guide.
Examples
1. To collect gpfs.snap on all nodes with the default data, issue the following command:
c34f2n03:# gpfs.snap
gpfs.snap started at Fri Mar 22 13:16:12 EDT 2019.
Gathering common data...
Gathering Linux specific data...
Gathering extended network data...
Gathering local callhome data...
Gathering local sysmon data...
Gathering trace reports and internal dumps...
Gathering Transparent Cloud Tiering data at level BASIC...
gpfs.snap: The Transparent Cloud Tiering snap file was not located on the node c34f2n03.gpfs.net
Gathering cluster wide sysmon data...
Gathering cluster wide callhome data...
gpfs.snap: Spawning remote gpfs.snap calls. Master is c34f2n03.
This may take a while.
After this command has completed, send the resulting tar file to IBM service.
2. The following example collects gpfs.snap data on specific nodes and provides an output directory:
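For example, assuming the nodes c34f2n03 and c34f2n04 and the output directory /tmp/snapout (all illustrative values):
# gpfs.snap -N c34f2n03,c34f2n04 -d /tmp/snapout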
Location
/usr/lpp/mmfs/bin
mmaddcallback command
Registers a user-defined command that GPFS will execute when certain events occur.
Synopsis
mmaddcallback CallbackIdentifier --command CommandPathname
--event Event[,Event...] [--priority Value]
[--async | --sync [--timeout Seconds] [--onerror Action]]
[-N {Node[,Node...] | NodeFile | NodeClass}]
[--parms ParameterString ...]
or
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmaddcallback command to register a user-defined command that GPFS executes when
certain events occur.
The callback mechanism is intended to provide notifications when node and cluster events occur.
Invoking complex or long-running commands, or commands that involve GPFS files, might cause
unexpected and undesired results, including loss of file system availability. This is particularly true when
the --sync option is specified.
Note: For documentation about local events (callbacks) and variables for IBM Spectrum Scale RAID, see
the separate publication IBM Spectrum Scale RAID: Administration.
Parameters
CallbackIdentifier
Specifies a user-defined unique name that identifies the callback. It can be up to 255 characters long.
It cannot contain special characters (for example, a colon, semicolon, blank, tab, or comma) and it
cannot start with the letters gpfs or mm (which are reserved for GPFS internally defined callbacks).
--command CommandPathname
Specifies the full path name of the executable to run when the event occurs. On Windows,
CommandPathname must be a Korn shell script because it will be invoked in the Cygwin ksh
environment.
The executable called by the callback facility must be installed on all nodes on which the callback can
be triggered. Place the executable in a local file system (not in a GPFS file system) so that it is
accessible even when the GPFS file system is unavailable.
--event Event[,Event...]
Specifies a list of events that trigger the callback. The value defines when the callback is invoked.
There are two kinds of events: global events and local events. A global event triggers a callback on all
nodes in the cluster, such as a nodeLeave event, which informs all nodes in the cluster that a node
has failed. A local event triggers a callback only on the node on which the event occurred, such as
mounting a file system on one of the nodes.
Table 8 on page 17 lists the supported global events and their parameters.
Table 9 on page 18 lists the supported local events and their parameters.
Local events for IBM Spectrum Scale RAID are documented in IBM Spectrum Scale RAID:
Administration.
--priority Value
Specifies a floating point number that controls the order in which callbacks for a given event are run.
Callbacks with a smaller numerical value are run before callbacks with a larger numerical value.
Callbacks that do not have an assigned priority are run last. If two callbacks have the same priority,
the order in which they are run is undetermined.
--async | --sync [--timeout Seconds] [--onerror Action]
Specifies whether GPFS will wait for the user program to complete and for how long it will wait. The
default is --async (GPFS invokes the command asynchronously). --onerror Action specifies one of
the following actions that GPFS is to take if the callback command returns a nonzero error code:
continue
GPFS ignores the result from executing the user-provided command. This is the default.
quorumLoss
The node executing the user-provided command will voluntarily resign as, or refrain from taking
over as, cluster manager. This action is valid only in conjunction with the tiebreakerCheck
event.
shutdown
GPFS will be shut down on the node executing the user-provided command.
-N {Node[,Node...] | NodeFile | NodeClass}
Defines the set of nodes on which the callback is invoked. For global events, the callback is invoked
only on the specified set of nodes. For local events, the callback is invoked only if the node on which
the event occurred is one of the nodes specified by the -N option. The default is -N all. For general
information on how to specify node names, see Specifying nodes as input to GPFS commands in IBM
Spectrum Scale: Administration Guide.
This command does not support a NodeClass of mount.
--parms ParameterString ...
Specifies parameters to be passed to the executable specified with the --command parameter. The
--parms parameter can be specified multiple times.
When the callback is invoked, the combined parameter string is tokenized on white-space boundaries.
Constructs of the form %name and %name.qualifier are assumed to be GPFS variables and are
replaced with their appropriate values at the time of the event. If a variable does not have a value in
the context of a particular event, the string UNDEFINED is returned instead.
GPFS recognizes the following variables:
%blockLimit
Specifies the current hard quota limit in KB.
%blockQuota
Specifies the current soft quota limit in KB.
%blockUsage
Specifies the current usage in KB for quota-related events.
%ccrObjectName
Specifies the name of the modified object.
%ccrObjectValue
Specifies the value of the modified object.
%ccrObjectVersion
Specifies the version of the modified object.
%clusterManager[.qualifier]
Specifies the current cluster manager node.
%clusterName
Specifies the name of the cluster where this callback was triggered.
%ckDataLen
Specifies the length of data involved in a checksum mismatch.
%ckErrorCountClient
Specifies the cumulative number of errors for the client side in a checksum mismatch.
%ckErrorCountNSD
Specifies the cumulative number of errors for the NSD side in a checksum mismatch.
%ckErrorCountServer
Specifies the cumulative number of errors for the server side in a checksum mismatch.
%ckNSD
Specifies the NSD involved.
%ckOtherNode
Specifies the IP address of the other node in an NSD checksum event.
%ckReason
Specifies the reason string indicating why a checksum mismatch callback was invoked.
%ckReportingInterval
Specifies the error-reporting interval in effect at the time of a checksum mismatch.
%ckRole
Specifies the role (client or server) of a GPFS node.
%ckStartSector
Specifies the starting sector of a checksum mismatch.
%daName
Specifies the name of the declustered array involved.
%daRemainingRedundancy
Specifies the remaining fault tolerance in a declustered array.
%diskName
Specifies a disk or a comma-separated list of disk names for which this callback is triggered.
%downNodes[.qualifier]
Specifies a comma-separated list of nodes that are currently down. Only nodes local to the given
cluster are listed. Nodes which are in a remote cluster but have temporarily joined the cluster are
not included.
%eventName
Specifies the name of the event that triggered this callback.
%eventNode[.qualifier]
Specifies a node or comma-separated list of nodes on which this callback is triggered. Note that
the list might include nodes which are not local to the given cluster, but have temporarily joined
the cluster to mount a file system provided by the local cluster. Those remote nodes could leave
the cluster if there is a node failure or if the file systems are unmounted.
%eventTime
Specifies the time of the event that triggered this callback.
%filesLimit
Specifies the current hard quota limit for the number of files.
%filesQuota
Specifies the current soft quota limit for the number of files.
%filesUsage
Specifies the current number of files for quota-related events.
%filesetName
Specifies the name of a fileset for which the callback is being executed.
%filesetSize
Specifies the size of the fileset.
%fsErr
Specifies the file system structure error code.
%fsName
Specifies the file system name for file system events.
%hardLimit
Specifies the hard limit for the block.
%homeServer
Specifies the name of the home server.
%inodeLimit
Specifies the hard limit of the inode.
%inodeQuota
Specifies the soft limit of the inode.
%inodeUsage
Specifies the total number of files in the fileset.
%myNode[.qualifier]
Specifies the node where callback script is invoked.
%nodeName
Specifies the node name to which the request is sent.
%nodeNames
Specifies a space-separated list of node names to which the request is sent.
%pcacheEvent
Specifies the pcache related events.
%pdFru
Specifies the FRU (field replaceable unit) number of the pdisk.
%pdLocation
The physical location code of a pdisk.
%pdName
The name of the pdisk involved.
%pdPath
The block device path of the pdisk.
%pdPriority
The replacement priority of the pdisk.
%pdState
The state of the pdisk involved.
%pdWwn
The worldwide name of the pdisk.
%prepopAlreadyCachedFiles
Specifies the number of files that are already cached. These files are not read into the cache because the data is the same between the cache and home.
%prepopCompletedReads
Specifies the number of reads executed during a prefetch operation.
%prepopData
Specifies the total data read from the home as part of a prefetch operation.
%prepopFailedReads
Specifies the number of files for which prefetch failed. Messages are logged to indicate the failure. However, the names of the files that failed to be read are not indicated.
%quorumNodes[.qualifier]
Specifies a comma-separated list of quorum nodes.
%quotaEventType
Specifies either the blockQuotaExceeded event or the inodeQuotaExceeded event. These events are related to the soft quota limit being exceeded.
%quotaID
Specifies the numerical ID of the quota owner (UID, GID, or fileset ID).
%quotaOwnerName
Specifies the name of the quota owner (user name, group name, or fileset name).
%quotaType
Specifies the type of quota for quota-related events. Possible values are USR, GRP, or FILESET.
%reason
Specifies the reason for triggering the event. For the preUnmount and unmount events, the
possible values are normal and forced. For the preShutdown and shutdown events, the
possible values are normal and abnormal. For all other events, the value is UNDEFINED.
%requestType
Specifies the type of request to send to the target nodes.
%rgCount
The number of recovery groups involved.
%rgErr
A code from a recovery group, where 0 indicates no error.
%rgName
The name of the recovery group involved.
%rgReason
The reason string indicating why a recovery group callback was invoked.
%senseDataFormatted
Sense data for the specific fileset structure error in a formatted string output.
%senseDataHex
Sense data for the specific fileset structure error in Big endian hex output.
%snapshotID
Specifies the identifier of the new snapshot.
%snapshotName
Specifies the name of the new snapshot.
%softLimit
Specifies the soft limit of the block.
%storagePool
Specifies the storage pool name for space-related events.
%upNodes[.qualifier]
Specifies a comma-separated list of nodes that are currently up. Only nodes local to the given
cluster are listed. Nodes which are in a remote cluster but have temporarily joined the cluster are
not included.
%userName
Specifies the user name.
%waiterLength
Specifies the length of the waiter in seconds.
Variables recognized by IBM Spectrum Scale RAID are documented in IBM Spectrum Scale RAID:
Administration.
Variables that represent node identifiers accept an optional qualifier that can be used to specify how
the nodes are to be identified. When specifying one of these optional qualifiers, separate it from the
variable with a period, as shown here:
variable.qualifier
name
Specifies that GPFS should use fully-qualified node names. This is the default.
shortName
Specifies that GPFS should strip the domain part of the node names.
clusterManagerTakeOver N/A
Triggered when a new cluster manager node is
elected. This happens when a cluster first
starts up or when the current cluster manager
fails or resigns and a new node takes over as
cluster manager.
nodeJoin %eventNode
Triggered when one or more nodes join the
cluster.
nodeLeave %eventNode
Triggered when one or more nodes leave the
cluster.
quorumReached %quorumNodes
Triggered when a quorum has been established
in the GPFS cluster. This event is triggered only
on the cluster manager, not on all the nodes in
the cluster.
quorumLoss N/A
Triggered when quorum has been lost in the
GPFS cluster.
quorumNodeJoin %eventNode
Triggered when one or more quorum nodes join
the cluster.
quorumNodeLeave %eventNode
Triggered when one or more quorum nodes
leave the cluster.
preShutdown %reason
Triggered when GPFS detects a failure and is
about to shut down.
preStartup N/A
Triggered after the GPFS daemon completes its
internal initialization and joins the cluster, but
before the node runs recovery for any file
systems that were already mounted, and
before the node starts accepting user initiated
sessions.
shutdown %reason
Triggered when GPFS completes the shutdown.
startup N/A
Triggered after a successful GPFS startup
before the node is ready for user initiated
sessions. After this event is triggered, GPFS proceeds to finish starting, including mounting all file systems that are defined to mount at startup.
tiebreakerCheck N/A
Triggered when the cluster manager detects a
lease timeout on a quorum node before GPFS
runs the algorithm that decides if the node will
remain in the cluster. This event is generated
only in configurations that use tiebreaker disks.
Note: Before you add or delete the
tiebreakerCheck event, you must stop the
GPFS daemon on all the nodes in the cluster.
traceConfigChanged N/A
Triggered when GPFS tracing configuration is
changed.
Options
-S Filename | --spec-file Filename
Specifies a file with multiple callback definitions, one per line. The first token on each line must be the
callback identifier.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmaddcallback command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To register command /tmp/myScript to run after GPFS startup, issue this command:
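For example, assuming the callback identifier test1 (an illustrative name):
mmaddcallback test1 --command /tmp/myScript --event startup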
2. To register a callback on the NFS servers to export or to unexport a particular file system after it has
been mounted or before it has been unmounted, issue this command:
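For example, assuming the callback identifier NFSexport, the script path /usr/local/bin/nfs-export, and the NFS server node names nfsNode1 and nfsNode2 (all illustrative values):
mmaddcallback NFSexport --command /usr/local/bin/nfs-export --event mount,preUnmount -N nfsNode1,nfsNode2 --parms "%fsName %reason"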
See also
• “mmdelcallback command” on page 358
• “mmlscallback command” on page 482
Location
/usr/lpp/mmfs/bin
mmadddisk command
Adds disks to a GPFS file system.
Synopsis
mmadddisk Device {"DiskDesc[;DiskDesc...]" | -F StanzaFile} [-a] [-r [--strict]]
[-v {yes | no}] [-N {Node[,Node...] | NodeFile | NodeClass}]
[--qos QOSClass]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmadddisk command to add disks to a GPFS file system. When the -r flag is specified, the
command rebalances an existing file system after it adds the disks. The command does not require the
file system to be mounted. The file system can be in use.
The actual number of disks in your file system might be constrained by products other than GPFS that you
installed. See the individual product documentation.
To add disks to a GPFS file system, first decide which of the following two tasks you want to perform:
1. Create new disks with the mmcrnsd command.
In this case, you must also decide whether to create a new set of NSD and pool stanzas or use the
rewritten NSD and pool stanzas that the mmcrnsd command produces. In a rewritten file, the disk
usage, failure group, and storage pool values are the same as the values that are specified in the
mmcrnsd command.
2. Select disks no longer in use in any file system. To display the disks that are not in use, run the
following command:
mmlsnsd -F
Earlier versions of the product allowed specifying disk information with colon-separated disk descriptors.
Those disk descriptors are no longer supported.
Note: If mmadddisk fails with a NO_SPACE error, try one of the following actions:
• Rebalance the file system.
• Run the command mmfsck -y to deallocate unreferenced subblocks.
• Create a pool with larger disks and move data from the old pool to the new one.
Parameters
Device
The device name of the file system to which the disks are added. File system names need not be fully
qualified. fs0 is as acceptable as /dev/fs0.
This parameter must be first.
DiskDesc
A descriptor for each disk to be added. Each descriptor is delimited by a semicolon (;) and the entire
list must be enclosed in quotation marks (' or "). The use of disk descriptors is discouraged.
-F StanzaFile
Specifies a file that contains the NSD stanzas and pool stanzas for the disks to be added to the file
system.
%nsd:
nsd=NsdName
usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
failureGroup=FailureGroup
pool=StoragePool
servers=ServerList
device=DiskName
thinDiskType={no | nvme | scsi | auto}
where:
nsd=NsdName
The name of an NSD previously created by the mmcrnsd command. For a list of available disks,
run the mmlsnsd -F command. This clause is mandatory for the mmadddisk command.
usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
Specifies the type of data to be stored on the disk:
dataAndMetadata
Indicates that the disk contains both data and metadata. This value is the default for disks in
the system pool.
dataOnly
Indicates that the disk contains data and does not contain metadata. This value is the default
for disks in storage pools other than the system pool.
metadataOnly
Indicates that the disk contains metadata and does not contain data.
descOnly
Indicates that the disk contains no data and no file metadata. IBM Spectrum Scale uses this
type of disk primarily to keep a copy of the file system descriptor. It can also be used as a third
failure group in certain disaster recovery configurations. For more information, see the topic
Synchronous mirroring utilizing GPFS replication in the IBM Spectrum Scale: Administration
Guide.
failureGroup=FailureGroup
Identifies the failure group to which the disk belongs. A failure group identifier can be a simple
integer or a topology vector that consists of up to three comma-separated integers. The default is
-1, which indicates that the disk has no point of failure in common with any other disk.
GPFS uses this information during data and metadata placement to ensure that no two replicas of
the same block can become unavailable due to a single failure. All disks that are attached to the
same NSD server or adapter must be placed in the same failure group.
If the file system is configured with data replication, all storage pools must have two failure groups
to maintain proper protection of the data. Similarly, if metadata replication is in effect, the system
storage pool must have two failure groups.
Disks that belong to storage pools in which write affinity is enabled can use topology vectors to
identify failure domains in a shared-nothing cluster. Disks that belong to traditional storage pools
must use simple integers to specify the failure group.
pool=StoragePool
Specifies the storage pool to which the disk is to be assigned. If this name is not provided, the
default is system.
Only the system storage pool can contain metadataOnly, dataAndMetadata, or descOnly
disks. Disks in other storage pools must be dataOnly.
servers=ServerList
A comma-separated list of NSD server nodes. This clause is ignored by the mmadddisk command.
device=DiskName
The block device name of the underlying disk device. This clause is ignored by the mmadddisk
command.
%pool:
pool=StoragePoolName
blockSize=BlockSize
usage={dataOnly | metadataOnly | dataAndMetadata}
layoutMap={scatter | cluster}
allowWriteAffinity={yes | no}
writeAffinityDepth={0 | 1 | 2}
blockGroupFactor=BlockGroupFactor
where:
pool=StoragePoolName
Is the name of a storage pool.
blockSize=BlockSize
Specifies the block size of the disks in the storage pool.
usage={dataOnly | metadataOnly | dataAndMetadata}
Specifies the type of data to be stored in the storage pool:
dataAndMetadata
Indicates that the disks in the storage pool contain both data and metadata. This is the default
for disks in the system pool.
dataOnly
Indicates that the disks contain data and do not contain metadata. This is the default for disks
in storage pools other than the system pool.
metadataOnly
Indicates that the disks contain metadata and do not contain data.
layoutMap={scatter | cluster}
Specifies the block allocation map type. When allocating blocks for a given file, GPFS first uses a
round-robin algorithm to spread the data across all disks in the storage pool. After a disk is
selected, the location of the data block on the disk is determined by the block allocation map type.
If cluster is specified, GPFS attempts to allocate blocks in clusters. Blocks that belong to a
particular file are kept adjacent to each other within each cluster. If scatter is specified, the
location of the block is chosen randomly.
The cluster allocation method may provide better disk performance for some disk subsystems
in relatively small installations. The benefits of clustered block allocation diminish when the
number of nodes in the cluster or the number of disks in a file system increases, or when the file
system's free space becomes fragmented. The cluster allocation method is the default for GPFS
clusters with eight or fewer nodes and for file systems with eight or fewer disks.
The scatter allocation method provides more consistent file system performance by averaging
out performance variations due to block location (for many disk subsystems, the location of the
data relative to the disk edge has a substantial effect on performance). This allocation method is
appropriate in most cases and is the default for GPFS clusters with more than eight nodes or file
systems with more than eight disks.
The block allocation map type cannot be changed after the storage pool has been created.
allowWriteAffinity={yes | no}
Indicates whether the File Placement Optimizer (FPO) feature is to be enabled for the storage
pool. For more information on FPO, see the File Placement Optimizer section in the IBM Spectrum
Scale: Administration Guide.
writeAffinityDepth={0 | 1 | 2}
Specifies the allocation policy to be used by the node writing the data.
A write affinity depth of 0 indicates that each replica is to be striped across the disks in a cyclical
fashion with the restriction that no two disks are in the same failure group. By default, the unit of
striping is a block; however, if the block group factor is specified in order to exploit chunks, the
unit of striping is a chunk.
A write affinity depth of 1 indicates that the first copy is written to the writer node. The second
copy is written to a different rack. The third copy is written to the same rack as the second copy,
but on a different half (which can be composed of several nodes).
A write affinity depth of 2 indicates that the first copy is written to the writer node. The second
copy is written to the same rack as the first copy, but on a different half (which can be composed
of several nodes). The target node is determined by a hash value on the fileset ID of the file, or it is
chosen randomly if the file does not belong to any fileset. The third copy is striped across the disks
in a cyclical fashion with the restriction that no two disks are in the same failure group. The
following conditions must be met while using a write affinity depth of 2 to get evenly allocated
space in all disks:
1. The configuration in disk number, disk size, and node number for each rack must be similar.
2. The number of nodes must be the same in the bottom half and the top half of each rack.
This behavior can be altered on an individual file basis by using the --write-affinity-
failure-group option of the mmchattr command.
This parameter is ignored if write affinity is disabled for the storage pool.
blockGroupFactor=BlockGroupFactor
Specifies how many file system blocks are laid out sequentially on disk to behave like a single
large block. This option only works if --allow-write-affinity is set for the data pool. This
applies only to a new data block layout; it does not migrate previously existing data blocks.
See File Placement Optimizer in IBM Spectrum Scale: Administration Guide.
-a
Specifies asynchronous processing. If this flag is specified, the mmadddisk command returns after
the file system descriptor is updated and the rebalancing scan is started; it does not wait for
rebalancing to finish. If no rebalancing is requested (the -r flag not specified), this option has no
effect.
-r
Rebalances the file system to improve performance. Rebalancing attempts to distribute file blocks
evenly across the disks of the file system. In IBM Spectrum Scale 5.0.0 and later, rebalancing is
implemented by a lenient round-robin method that typically runs faster than the previous method of
strict round robin. To rebalance the file system using the strict round-robin method, include the --
strict option that is described in the following text.
--strict
Rebalances the specified files with a strict round-robin method. In IBM Spectrum Scale v4.2.3 and
earlier, rebalancing always uses this method.
Note: Rebalancing of files is an I/O intensive and time-consuming operation and is important only for
file systems with large files that are mostly invariant. In many cases, normal file update and creation
rebalances a file system over time without the cost of a complete rebalancing.
Note: Rebalancing distributes file blocks across all the disks in the cluster that are not suspended,
including stopped disks. For stopped disks, rebalancing does not allow read operations and allocates
data blocks without writing them to the disk. When the disk is restarted and replicated data is copied
onto it, the file system completes the write operations.
-v {yes | no}
Verifies that the specified disks do not belong to an existing file system. The default is -v yes. Specify -v
no only when you want to reuse disks that are no longer needed for an existing file system. If the
command is interrupted for any reason, use the -v no option on the next invocation of the command.
Important: Using -v no on a disk that already belongs to a file system corrupts that file system. This
problem is not detected until the next time that file system is mounted.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes that are to participate in the restriping of the file system after the specified disks
are available for use by GPFS. This parameter must be used with the -r option. This command
supports all defined node classes. The default is all or the current value of the
defaultHelperNodes parameter of the mmchconfig command.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure occurred.
Security
You must have root authority to run the mmadddisk command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. Assume that the file ./newNSDstanza contains the following NSD stanza:
%nsd: nsd=gpfs10nsd
servers=k148n07,k148n06
usage=dataOnly
failureGroup=5
pool=pool2
thinDiskType=nvme
To add the disk that is defined in this stanza, run the following command:
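For example, assuming the file system name fs1 (an illustrative name):
mmadddisk fs1 -F ./newNSDstanza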
See also
• “mmchdisk command” on page 210
• “mmcrnsd command” on page 332
• “mmdeldisk command” on page 360
• “mmlsdisk command” on page 489
• “mmlsnsd command” on page 514
• “mmlspool command” on page 520
Location
/usr/lpp/mmfs/bin
mmaddnode command
Adds nodes to a GPFS cluster.
Synopsis
mmaddnode -N {NodeDesc[,NodeDesc...] | NodeFile} [--accept]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmaddnode command to add nodes to an existing IBM Spectrum Scale cluster. On each new
node, a mount point directory and a character mode device are created for each GPFS file system.
Follow these rules when adding nodes to an IBM Spectrum Scale cluster:
• You may issue the command only from a node that already belongs to the IBM Spectrum Scale cluster.
• While a node may mount file systems from multiple clusters, the node itself may only be added to a
single cluster using the mmcrcluster or mmaddnode command.
• The nodes must be available for the command to be successful. If any of the nodes listed are not
available when the command is issued, a message listing those nodes is displayed. You must correct the
problem on each node and reissue the command to add those nodes.
• After the nodes are added to the cluster, use the mmchlicense command to designate appropriate IBM
Spectrum Scale licenses to the new nodes.
Parameters
-N NodeDesc[,NodeDesc...] | NodeFile
Specifies node descriptors, which provide information about nodes to be added to the cluster.
NodeFile
Specifies a file containing a list of node descriptors, one per line, to be added to the cluster.
NodeDesc[,NodeDesc...]
Specifies the list of nodes and node designations to be added to the IBM Spectrum Scale cluster.
Node descriptors are defined as:
NodeName:NodeDesignations:AdminNodeName:LicenseType
where:
NodeName
Specifies the host name or IP address of the node for GPFS daemon-to-daemon
communication.
The host name or IP address must refer to the communication adapter over which the GPFS
daemons communicate. Aliased interfaces are not allowed. Use the original address or a name
that is resolved by the host command to that original address. You can specify a node using
any of these forms:
• Short host name (for example, h135n01)
• Long, fully-qualified, host name (for example, h135n01.ibm.com)
• IP address (for example, 7.111.12.102). IPv6 addresses must be enclosed in brackets (for
example, [2001:192::192:168:115:124]).
Regardless of which form you use, GPFS will resolve the input to a host name and an IP
address and will store these in its configuration files. It is expected that those values will not
change while the node belongs to the cluster.
NodeDesignations
An optional, "-" separated list of node roles:
• manager | client – Indicates whether a node is part of the node pool from which file system
managers and token managers can be selected. The default is client.
• quorum | nonquorum – Indicates whether a node is counted as a quorum node. The default
is nonquorum.
Note: If you are designating a new node as a quorum node, and adminMode central is in
effect for the cluster, GPFS must be down on all nodes in the cluster. Alternatively, you may
choose to add the new nodes as nonquorum and once GPFS has been successfully started
on the new nodes, you can change their designation to quorum using the mmchnode
command.
AdminNodeName
Specifies an optional field that consists of a node interface name to be used by the
administration commands to communicate between nodes. If AdminNodeName is not
specified, the NodeName value is used.
Note: AdminNodeName must be a resolvable network host name. For more information, see
the topic GPFS node adapter interface names in the IBM Spectrum Scale: Concepts, Planning,
and Installation Guide.
LicenseType
Assigns a license of the specified type to the node. Valid values are server, client, and fpo.
For information about these license types, see “mmchlicense command” on page 237.
The full text of the Licensing Agreement is provided with the installation media and can be
found at the IBM Software license agreements website (www.ibm.com/software/sla/
sladb.nsf).
--accept
Specifies that you accept the terms of the applicable license agreement. If one or more node
descriptors includes a LicenseType term, this parameter causes the command to skip the prompt
for you to accept a license.
You must provide a NodeDesc for each node to be added to the IBM Spectrum Scale cluster.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmaddnode command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see the topic Requirements for administering a GPFS file system in the IBM Spectrum
Scale: Administration Guide.
Examples
1. To add nodes k164n06 and k164n07 as quorum nodes, designating k164n06 to be available as a
manager node, issue this command:
mmaddnode -N k164n06:quorum-manager,k164n07:quorum
mmlscluster
2. In the following example the mmaddnode command adds a node without specifying a license:
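For example, assuming the node name k164n08 (an illustrative name):
mmaddnode -N k164n08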
The command displays a warning message that some nodes do not have licenses.
3. In the following example the mmaddnode command specifies a license to be designated to the node:
# mmaddnode -N c6f2bc4n8:quorum::server
mmaddnode: Node c6f2bc4n8.gpfs.net will be designated as possessing server license.
Please confirm that you accept the terms of the IBM Spectrum Scale server Licensing
Agreement.
The full text can be found at www.ibm.com/software/sla
Enter "yes" or "no": yes
Tue Mar 12 11:41:51 EDT 2019: mmaddnode: Processing node c6f2bc4n8.gpfs.net
mmaddnode: Command successfully completed
mmaddnode: Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
The command prompts the user to accept the terms of the licensing agreement.
4. In the following example the mmaddnode command specifies a license and also specifies the --
accept parameter:
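For example, assuming the node name c6f2bc4n9 (an illustrative name):
# mmaddnode -N c6f2bc4n9:quorum::server --accept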
Because the --accept parameter is specified, the command does not need to prompt for acceptance
of the license agreement.
See also
• “mmchconfig command” on page 169
• “mmcrcluster command” on page 303
• “mmchcluster command” on page 164
• “mmdelnode command” on page 371
• “mmlscluster command” on page 484
Location
/usr/lpp/mmfs/bin
mmadquery command
Queries and validates Active Directory (AD) server settings.
Synopsis
mmadquery list {user | uids | gids | groups | dc | trusts | idrange} [Options]
or
or
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmadquery command to query an AD Server for users, groups, user IDs, group IDs, known
domain controllers and trusts, and to run consistency checks.
Parameters
user
Queries and lists the defined users.
uids
Queries and lists the defined users with user IDs and group IDs.
gids
Queries and lists the defined groups with group IDs.
groups
Queries and lists the defined groups.
dc
Queries and lists the defined domain controllers.
trusts
Queries and lists the defined trusts.
idrange
Queries and lists the ID range used by a given AD server.
Options
--server SERVER
Specifies the IP address of the AD server you want to query. If you do not specify a server,
mmadquery attempts to get the AD server from the /etc/resolv.conf file (nameserver).
Note: This option should be used along with the domain option, which is provided in the following
section.
--domain DOMAIN
Specifies the Windows domain. If you do not specify a domain, mmadquery uses nslookup to
determine the domain based on the server.
Note: This option should be used along with the server option.
--user USER
Specifies the AD user used to run the LDAP query against the AD server. The default is
Administrator.
--pwd-file File
Specifies the file that contains a password to use for authentication.
--filter FILTER
Specifies a search phrase to limit the number of LDAP objects; the filter is applied only to the first column of the output. Every LDAP object that begins with the search phrase is queried.
--CSV
Shows output in machine parseable (CSV) format.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
--debug or -d
Shows debugging information.
--basedn or -b
Includes basedn for LDAP objects queried in query output. This option is not supported when
querying idrange or running a 'stats' query.
--traverse
Traverses all known domains and provides query output for all domains that are detected.
--long or -L
Indicates that you want to see more details. For more information, see Level of query detail below.
This option is not supported for the "stats" queries.
Exit status
0
No errors found.
1
No arguments specified.
10
Failed a check.
11
Unable to determine the AD server to check.
12
Unable to determine the domain.
13
Failed to construct a basedn for an LDAP query.
99
Access to the AD server failed; the cause can be an incorrect password, user, or domain.
Security
You must have root authority to run the mmadquery command. For more information, see the topic
Requirements for administering a GPFS file system in the IBM Spectrum Scale: Administration Guide.
Examples
1. To show a list of users for the AD server, run this command:
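For example, assuming that the AD server and domain are taken from the local configuration (otherwise, add the --server and --domain options):
mmadquery list user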
3. To check user IDs against locally defined ID mapping range, issue the following command:
4. To show a list of users with group membership by domain, run this command:
SUBDOM1$
Administrator   Group Policy Creator Owners,Enterprise Admins,Schema Admins,Domain Admins,Administrators
krbtgt          Denied RODC Password Replication Group
aduser1         Administrators
aduser2         bla,unmapped group
aduser3
aduser4
5. To show the number of users by group and domain, run this command:
MAPPED 2
UN-MAPPED 5
7. To check group IDs against locally defined ID map, run this command:
10. To show a list of ID ranges and to check whether any IDs on the Ad server are outside of the locally
defined ID range, run this command:
Location
/usr/lpp/mmfs/bin
mmafmconfig command
Manages home caching behavior and the mapping of gateway nodes to home NFS exported servers.
Synopsis
You can use the mmafmconfig command to:
• Set up, update, or remove mappings for parallel data transfers by using the add, update, or delete options.
• Enable or disable extended attribute and sparse file support from the AFM cache.
or
or
or
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
You can use this command on the home cluster to enable support of extended attributes and sparse files on an AFM cache fileset that points to this home. You must run the mmafmconfig enable command on the
home export or target path. Running this command creates the .afm directory that contains the control-
enabled, directio '.afmctl' file. The mmafmconfig disable command removes the .afm directory
from the home export or target path and subsequently, the cache does not support synchronization of
sparse files and files with extended attributes.
You can also use the mmafmconfig command with add, update, delete, or show options on the cache
site to manage mapping of gateway node with home NFS servers for parallel data transfers.
You must run the mmafmconfig enable command at a home fileset path before you link the AFM
fileset at the cache site.
If the AFM cache fileset was linked without running the mmafmconfig enable command at the home
export path, you need to issue the following commands:
1. Issue the following command at the home:
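For example, assuming that the home export path is /gpfs/homefs/export1 (an illustrative path):
mmafmconfig enable /gpfs/homefs/export1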
Parameters
MapName
Specifies the name that uniquely identifies the mapping of the gateway nodes with the home NFS
exported servers.
--export-map ExportServerMap
Specifies a comma-separated list of pairs of home NFS exported server nodes (ExportServer) and
gateway nodes (GatewayNode), in the following format:
[ExportServer/GatewayNode][,ExportServer/GatewayNode][,...]
where:
ExportServer
Is the IP address or host name of a member node in the home cluster MapName.
GatewayNode
Specifies a gateway node in the cache cluster (the cluster where the command is issued).
--no-server-resolution
When this option is specified, AFM skips the DNS resolution of a specified hostname and creates the
mapping. The specified hostname in the mapping is not replaced with an IP address. Ensure that
the specified hostname is resolvable while the mapping is in operation.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
enable
Enables extended attributes or sparse files functions on the AFM cache. Run at the home cluster only.
disable
Disables extended attributes or sparse files functions on the AFM cache. Run at the home cluster only.
ExportPath
Specifies the root of the home exported directory for enabling or disabling the AFM features.
add
Sets up maps for parallel data transfers. Run at cache only.
delete
Deletes maps for parallel data transfers. Run at cache only.
update
Updates maps for parallel data transfers. Run at cache only.
show
Displays all the existing maps. Each map displays the mapping of a gateway node and home NFS
server pair that is separated by a comma ','.
Exit status
0
Successful completion.
Nonzero
A failure has occurred.
Security
You must have root authority to run the mmafmconfig command.
The node on which the command is issued must be able to run remote shell commands on any other node
in the cluster without the use of a password and without producing any extraneous messages. For more
information, see the topic Requirements for administering a GPFS file system in the IBM Spectrum Scale:
Administration Guide.
Example
The following is an example of a mapping for NFS targets that adds all gateway nodes and NFS servers and uses this mapping for creating AFM filesets. The four cache gateway nodes that are assumed for this example are hs22n18, hs22n19, hs22n20, and hs22n21. The gateway nodes are mapped to two home NFS servers: js22n01 with IP address 192.168.200.11 and js22n02 with IP address 192.168.200.12.
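Such a mapping is created with the add option before it is displayed in the step below. For example, assuming the mapping name mapping1 and an illustrative pairing of servers and gateway nodes:
mmafmconfig add mapping1 --export-map js22n01/hs22n18,js22n02/hs22n19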
1. Issue the following command:
# mmafmconfig show
See also
• “mmafmctl command” on page 61
• “mmafmlocal command” on page 78
• “mmchconfig command” on page 169
• “mmchfileset command” on page 222
• “mmchfs command” on page 230
• “mmcrfileset command” on page 308
• “mmcrfs command” on page 315
• “mmlsconfig command” on page 487
• “mmlsfileset command” on page 493
• “mmlsfs command” on page 498
Location
/usr/lpp/mmfs/bin
mmafmcosaccess command
Maps a directory in an AFM to cloud object storage fileset to a bucket on a cloud object storage.
Synopsis
mmafmcosaccess Device FilesetName Path {set | get | delete}
[--bucket bucket --endpoint endpoint {--akey AccessKey
--skey SecretKey | --keyfile filePath}]
or
Availability
Available on all IBM Spectrum Scale editions.
Description
This command is used to map individual directories to a bucket on the cloud object storage.
Parameters
Device
Specifies a device name of a file system to contain a new fileset.
FilesetName
Specifies a name of the AFM to cloud object storage enabled fileset that has an existing relationship.
Path
Specifies a path of a directory under an AFM to cloud object storage fileset that needs to be mapped
to a cloud object storage.
set
Stores the credentials and maps a directory and a bucket.
get
Retrieves the stored credentials that were set for a directory. AFM shows an error if the bucket and
server name combination does not exist.
delete
Deletes the mapping of a directory and a bucket.
--akey AccessKey
Specifies an access key that is used for communication with a cloud object storage.
--skey SecretKey
Specifies a secret key that is used for communication with a cloud object storage.
--bucket
Specifies a unique bucket on a cloud object storage. AFM will use this bucket as a target for a directory
and it will synchronize the data between the bucket and the directory by using the stored credentials.
--endpoint
Specifies an endpoint, which is the address at which a cloud object storage server receives requests from clients. The endpoint can use either HTTP or HTTPS, can be a DNS-qualified host name or an IP address, and also contains the port number on which the cloud object storage server is running.
--keyfile
Specifies a key file that contains an access key and a secret key. Instead of providing the access key
and the secret key on the command line, a key file can be used. The key file must contain two lines for
akey and skey separated by a colon. An example of the format of a key file /root/keyfile1 is as
follows:
akey:AccessKey
skey:SecretKey
--report
This option must be used with the get option. It generates a report of a specified directory that is
mapped to a cloud object storage.
Note: If the access and secret keys are stored by using the mmafmcoskeys command, you do not need to use the --akey, --skey, or --keyfile options with this command.
Exit status
0
Successful completion.
nonzero
A failure occurred.
Security
The node on which this command is issued must be able to run remote shell commands on any other
node in the cluster. The node must run these remote shell commands without a password and must not
produce any extraneous messages. For more information, see Requirements for administering a GPFS file
system in the IBM Spectrum Scale: Administration Guide.
Examples
1. To map a directory inside an AFM to cloud object storage fileset, issue the following command:
# mmafmcosaccess fs1 SW1 /gpfs/fs1/SW1/dir1 set --bucket vault1 --endpoint http://IP:port --keyfile /root/keyfile1
2. To retrieve a directory that is mapped inside an AFM to cloud object storage fileset, issue the following
command:
# mmafmcosaccess fs1 SW1 /gpfs/fs1/SW1/dir1 get
3. To delete a directory that is mapped inside an AFM to cloud object storage fileset, issue the following
command:
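A sketch that reuses the device, fileset, and path from the previous examples:
# mmafmcosaccess fs1 SW1 /gpfs/fs1/SW1/dir1 delete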
See also
• “mmafmcosconfig command” on page 50
• “mmafmcosctl command” on page 55
• “mmafmcoskeys command” on page 58
Location
/usr/lpp/mmfs/bin
mmafmcosconfig command
Creates and displays an AFM to cloud object storage fileset.
Synopsis
mmafmcosconfig Device FilesetName --endpoint http[s]://{[Region@]Server |
[Region@]ExportMap}[:port]
[--object-fs --cleanup --xattr --ssl-cert-verify --user-keys
{--bucket BucketName | --new-bucket BucketName} --dir Path
{--policy PolicyFile | --tmpdir DirectoryPattern --tmpfile FilePattern}
--quota-files NumberOfFiles --quota-blocks NumberOfBlocks
--uid UID --gid GID --perm Permission --mode AccessMode
--chunk-size Size --read-size Size
--acls {--vhb | --gcs} ]
or
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmafmcosconfig command constructs a new AFM to cloud object storage fileset with the specified name and parameters. A single command is provided to create and link the AFM to cloud object storage fileset. You can specify parameters such as policies, mode, and permissions to avoid running extra commands. To modify any parameter after the AFM to cloud object storage fileset is created, you need to run the appropriate command separately, for example to establish a policy or to modify the quota on the fileset. A report that has all details of the AFM to cloud object storage fileset can be generated.
Before you run this command, ensure that the gpfs.afm.cos package is installed on the gateway nodes.
For more information about filesets, see the filesets section in the IBM Spectrum Scale: Administration
Guide.
Parameters
Device
Specifies a device name of a file system to contain a new fileset.
FilesetName
Specifies a name of an AFM to cloud object storage fileset to be created.
Note: The following restrictions apply to fileset names:
• The name must be unique within the file system.
• The length of the name must be in the range 1-255.
• The name root is reserved for a fileset of the root directory of the file system.
• The name cannot be the reserved word "new". However, the character string "new" can appear
within a fileset name.
• The name cannot begin with a hyphen (-).
• The name cannot contain these characters: / ? $ & * ( ) ` # | [ ].
• The name cannot contain a white-space character such as blank space or tab.
--endpoint
Specifies an endpoint, which is the address at which a cloud object storage server receives requests from clients. The endpoint can use either HTTP or HTTPS, can be a DNS-qualified host name or an IP address, and also contains the port number on which the cloud object storage server is running.
Region
Specifies a region where a bucket is located on a cloud object server. You can specify a region with
a server or a map name by separating them with '@'.
port
Specifies the port number on which the cloud object storage server listens for client requests. The default port is 80.
--object-fs
Specifies the behavior of an AFM to cloud object storage fileset. The following two modes can be
enabled on an AFM to cloud object storage fileset:
ObjectFS
This behavior enables auto-synchronization of metadata from a cloud object storage server to the AFM to cloud object storage enabled fileset. This mode allows AFM to automatically synchronize fileset metadata when a lookup or readdir operation is performed on the fileset or when the AFM refresh intervals are triggered.
AFM downloads information about objects from the cloud object storage automatically and presents it as files on the AFM to cloud object storage fileset. An ObjectFS-enabled AFM to cloud object storage fileset behaves in a way similar to the on-demand behavior of an AFM mode fileset.
For the AFM RO, LU, and IW mode enabled AFM to cloud object storage fileset, AFM automatically
synchronizes objects from the cloud object storage to the files on the cache. Modification of
objects on cloud object storage will be refreshed on the cache.
For the SW and IW mode enabled AFM to cloud object storage fileset, modification on the cache
will queue an upload operation to be synchronized as an object to the cloud storage server.
Enable this mode if you want the AFM to cloud object storage fileset to behave like an AFM mode fileset.
This behavior generates traffic between the cloud object storage server and the AFM to cloud
object storage fileset.
ObjectOnly
If the --object-fs parameter is not specified, the ObjectOnly mode is set. This is the default behavior for an AFM to cloud object storage fileset.
With the ObjectOnly mode, refresh of metadata from the cloud object server to an AFM to cloud
object storage fileset will not be on-demand or frequent. You need to manually download data or
metadata from the cloud object storage to the AFM to cloud object storage fileset.
Meanwhile data synchronization from the AFM to cloud object storage (SW, IW mode) to the cloud
object storage will work automatically without manual intervention. This behavior is similar to the
AFM fileset mode behavior.
Enable this mode on an AFM to cloud object storage fileset to avoid frequent trips and reduce the
network contention by selectively performing the operations.
--cleanup
Deletes and cleans up an existing fileset that has the same name, along with its data, and re-creates a new AFM to cloud object storage fileset with the given parameters.
Note: Use this option only if you want to delete an existing fileset with the same name along with its data.
--xattr
Specifies user-extended attributes to be synchronized with a cloud object storage. If this option is
skipped, the AFM to cloud object storage does not synchronize user-extended attributes with the
cloud object storage.
--ssl-cert-verify
Specifies SSL certificate verification. This option is valid only with HTTPS. Default value of this
parameter is disabled.
--user-keys
Specifies adding a callback for user keys. Users must place the mmuidkeys file under /var/mmfs/etc/mmuid2keys.
--bucket
Identifies a unique bucket on a cloud object storage. AFM will use this bucket as the target for an AFM
to cloud object storage fileset, and it will synchronize data between the AFM to cloud object storage
fileset and the bucket on the cloud object storage.
Credentials to access this bucket must be already set by using the mmafmcoskeys command for this
bucket.
--new-bucket
Specifies a new bucket to be created on a cloud object storage by the AFM to cloud object storage and used as the target.
The bucket name must conform to the naming rules of the cloud object storage. Check the cloud object storage documentation for bucket name guidelines.
Credentials must be already set by using the mmafmcoskeys command to create this bucket on the
cloud object storage.
--dir
Specifies a junction path to link an AFM to cloud object storage fileset inside a file system. If not
specified, default junction path will be used.
--policy
Specifies a policy to be applied for an AFM to cloud object storage fileset. If not specified, default
policy is enabled.
--tmpdir
Specifies a directory pattern if a policy is not provided.
--quota-files
Specifies the limit on the number of files to be set as the file quota on an AFM to cloud object storage fileset. If not specified, the default file quota is in effect. The value is a number, for example, 104857600.
--quota-blocks
Specifies the data block limit to be set as the block quota on an AFM to cloud object storage fileset. If not specified, the default block quota is in effect. The value is a number.
--uid
Specifies a user ID to be set on an AFM to cloud object storage fileset. If not specified, default owner
will be set, for example, root.
--gid
Specifies a group ID to be set on an AFM to cloud object storage fileset. If not specified, default group
will be set, for example, root.
--perm
Specifies the access permission to be set on an AFM to cloud object storage fileset. The permission is in octal format, for example, 0770. If not specified, the default permission is set.
--mode
Specifies the AFM fileset mode. An AFM to cloud object storage fileset supports all AFM fileset modes: independent-writer (IW), single-writer (SW), read-only (RO), and local-updates (LU). The default is the SW mode.
--chunk-size
Specifies the chunk size, which controls the number of upload parts on a cloud object storage. The value is specified as a number. The default chunk size is 16 MB.
--read-size
Specifies the read size, which controls the number of data blocks downloaded into an AFM to cloud object storage fileset. The value is specified as a number. The default download size is 16 MB.
--acls
Enables the cache to synchronize ACLs to the cloud object storage server. If not specified, AFM does not synchronize ACLs to the cloud object storage; ACL synchronization is disabled by default.
--report
Generates colon (:) separated information about the AFM to cloud object storage enabled fileset, such as the policy and quota that are set on the fileset. This option generates a report for the specified bucket and serverName combination.
To change the configuration or a parameter of an AFM to cloud object storage fileset that is already
created, you need to run separate commands such as mmchfileset and mmchpolicy.
Exit status
0
Successful completion.
nonzero
A failure occurred.
Security
You must have root authority to run the mmafmcosconfig command.
The node on which the command is issued must be able to run remote shell commands on any other node
in the cluster without the use of a password and without producing any extraneous messages. For more
information, see the topic Requirements for administering a GPFS file system in the IBM Spectrum Scale:
Administration Guide.
Examples
1. To create an AFM to cloud object storage fileset by using the ObjectFS mode, issue the following
command:
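(A sketch; the fileset afmfileset1, the endpoint 192.168.118.121, and the existing bucket bucket1 whose keys were already set with mmafmcoskeys are all assumed values.)
# mmafmcosconfig fs1 afmfileset1 --endpoint http://192.168.118.121 --bucket bucket1 --object-fs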
2. To create an AFM to cloud object storage fileset by using the ObjectOnly mode, issue the following
command:
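(The same assumed endpoint with a second assumed fileset and bucket; omitting --object-fs leaves the fileset in the default ObjectOnly mode.)
# mmafmcosconfig fs1 afmfileset2 --endpoint http://192.168.118.121 --bucket bucket2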
3. To list the fileset report in the colon (:) separate format, issue the following command:
See also
• “mmafmconfig command” on page 45
• “mmafmcosaccess command” on page 48
• “mmafmcosctl command” on page 55
• “mmafmcoskeys command” on page 58
• “mmafmlocal command” on page 78
• “mmchattr command” on page 156
• “mmchconfig command” on page 169
• “mmchfileset command” on page 222
• “mmchfs command” on page 230
Location
/usr/lpp/mmfs/bin
mmafmcosctl command
Controls downloads and uploads between a cloud object storage and an AFM to cloud object storage
fileset.
Synopsis
mmafmcosctl Device FilesetName Path download {--object-list ObjectList | --all}
[{--metadata | --data} --no-sub-dir --prefix NamePrefix --uid UID --gid GID --perm Permission]
or
or
Availability
Available on all IBM Spectrum Scale editions.
Description
This command is used to control upload, download, and eviction on an AFM to cloud object storage
fileset. For more information, see Filesets in the IBM Spectrum Scale: Administration Guide.
Parameters
Device
Specifies a device name of a file system.
FilesetName
Specifies a name of an existing AFM to cloud object storage fileset on which the operation is to be
performed.
Path
Specifies a path under an AFM to cloud object storage fileset on which the operation needs to be
performed.
download
Specifies the download operation that is performed on an AFM to cloud object storage fileset. This
operation downloads the objects from a cloud object storage and presents them as files to the AFM to
cloud object storage fileset.
upload
Specifies the upload operation that is performed on an AFM to cloud object storage fileset.
On a local-update (LU)-mode fileset that is enabled with the AFM to cloud object storage, you can
upload data that was created or modified on the fileset locally (on the cache). You can upload this
data to the cloud object storage server as an object. This is supported for the LU mode.
evict
Specifies the eviction operation of the file data (blocks) or file metadata (inode) on an AFM to cloud
object storage fileset.
This operation removes file data and/or file metadata (inodes) only from the AFM to cloud object storage fileset. Objects on the cloud object storage are not removed. This is a local operation.
This operation helps to save space (storage blocks and inodes): lower-priority data is evicted from the local cache to make space for higher-priority data as per the quota. You can either evict all files and/or metadata by using the --all option, or evict selected data or metadata by using the --object-list option. The object list file is a line-separated file that contains the full paths of files on the fileset.
If required, you can download objects to the AFM to cloud object storage fileset again by using the mmafmcosctl download command.
Eviction is triggered on the local cache; the objects on the cloud object storage still have the data or metadata.
--object-list
Specifies a line-separated list of files that you want to download, upload, or evict. Use this option to perform the operation on selected files; otherwise, you can use the --all option. For upload and download operations, the list file can contain relative paths.
For the evict operation, full file path names must be specified. You can specify either the --object-list or the --all option.
--all
Specified with the upload, download, and evict parameters. This option performs the upload, download, or evict operation on all the data inside the given fileset. AFM internally generates a list of all the files and performs the operation on that list.
--metadata
Specified with download and evict operations. The option enables the AFM to cloud object storage to
download or evict metadata.
When this option is specified with the download operation, only the object metadata is downloaded from the cloud object storage. This helps to populate the directory tree structure on the cache from the bucket.
You can run the download operation along with the --data option to populate data blocks. This is the default option for downloading objects.
When this option is specified with the evict operation, both the data blocks and file inodes are evicted only from the AFM fileset. However, the data blocks and file inode information are still available in the bucket on the cloud object storage server. This frees space on the cache fileset for required data.
This parameter is optional for the evict operation. If this option is not specified for eviction, the default is to evict data blocks.
--data
Specified with the download operation. This option enables the AFM to cloud object storage to
download data of the given list. The --data option downloads metadata implicitly.
--no-sub-dir
Specifies that the AFM to cloud object storage skips downloading the subdirectories from a cloud object storage. Omit this option to download the subdirectories from a cloud object storage.
--prefix
Specifies a prefix for an object on a specified directory on an AFM to cloud object storage fileset.
--uid
Specifies a user ID to be set on an AFM to cloud object storage fileset. If not specified, the owner is
set as default, for example, root.
--gid
Specifies a group ID to be set on an AFM to cloud object storage fileset. If not specified, the group is
set as default, for example, root.
--perm
Specifies the access permission to be set on an AFM to cloud object storage fileset. This is in the octal
format, for example, 0770. If not specified, the permission is set to default.
Exit status
0
Successful completion.
nonzero
A failure occurred.
Security
The mmafmcosctl command execution does not require a root user. This command enables non-root
users to download, upload, or evict data from an AFM to cloud object storage fileset.
The node on which the command is issued must be able to run remote shell commands on any other node
in the cluster. The node must run these remote shell commands without a password and must not
produce any extraneous messages. For more information, see Requirements for administering a GPFS file
system in the IBM Spectrum Scale: Administration Guide.
Examples
1. To download data of selected objects from a bucket to the cache fileset by using the object-list file,
issue the following command:
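(A sketch; the device fs1, the fileset SW1, and the list file /tmp/objectlist are assumed values.)
# mmafmcosctl fs1 SW1 /gpfs/fs1/SW1 download --object-list /tmp/objectlist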
2. To download all objects from a bucket to the cache, issue the following command:
# mmafmcosctl fs1 SW1 /gpfs/fs1/SW1 download --all --metadata --uid user01 --gid gid01
5. To download files by using the --prefix parameter, issue the following command:
A prefix is a directory inside the /gpfs/fs1/iw1 fileset, and the object-list contains a path for all
files under the prefix.
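A sketch for this case, assuming the fileset iw1, the prefix directory dir1, and the list file /tmp/objectlist:
# mmafmcosctl fs1 iw1 /gpfs/fs1/iw1 download --object-list /tmp/objectlist --prefix dir1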
See also
• “mmafmcosaccess command” on page 48
• “mmafmcosconfig command” on page 50
• “mmafmcoskeys command” on page 58
Location
/usr/lpp/mmfs/bin
mmafmcoskeys command
Manages an access key and a secret key to access a bucket on a cloud object storage.
Synopsis
mmafmcoskeys bucket[:serverName] {set {akey skey | --keyfile filePath} | get | delete}
or
Availability
Available on all IBM Spectrum Scale editions.
Description
This command manages an access key and a secret key to access a bucket on a cloud object storage for
data synchronization. This command can set, get, delete, and report the access key and the secret key
credentials. The keys that are stored for a bucket by AFM have a unique identity across the cluster. If a
bucket name is common across multiple cloud object storage servers, the keys can be set along with the
server name.
The keys can be specified on the command line or provided as an input key file, with akey and skey each on its own colon-delimited line. A report can be displayed to list all access keys or secret keys that are stored for a bucket. This report has a list of all keys across the cluster.
Parameters
bucket
Identifies a unique bucket on a cloud object storage. AFM will use this bucket as a target for a fileset
and it will synchronize data between the bucket and the fileset by using the keys that are provided.
Note:
• Ensure that the bucket name conforms to the naming rules of the cloud object storage. Check the cloud object storage documentation for bucket name guidelines.
• A duplicate combination of a bucket name and a server name is not allowed.
serverName
If the same bucket exists on multiple cloud object storage servers, the bucket credentials can be stored by using the bucket:serverName format. The combination of a bucket name and a server name uniquely identifies the keys of the bucket across the cluster. AFM rejects an attempt to store a duplicate bucket name and server name combination.
set
Stores the bucket credentials, which are an access key and a secret key, with AFM. AFM stores these credentials for communication with a cloud object storage. Wrong credentials are detected only at the time of communication.
Access and secret keys can be provided to AFM either on the command line, as space-separated values, or by using a key file, which has separate lines for akey:AccessKey and skey:SecretKey.
This command can also be used to modify the stored credentials that are used for the next communication with a cloud object storage. It is recommended that you modify credentials while the communication between AFM and the cloud object storage is stopped.
You must store credentials of a bucket before you create an AFM fileset that has a cloud object
storage as a target.
get
Retrieves the stored credentials that were set for the bucket and server combination. AFM shows an
error if the bucket and server name combination does not exist.
delete
Deletes the credentials of the bucket and server name combination that is stored with AFM. The
delete operation removes entries of access keys and secret keys of the specified bucket. Before you
delete the stored credentials of a bucket, you need to ensure that the bucket is not mentioned as a
target on any AFM fileset.
--akey AccessKey
Specifies an access key that belongs to a cloud object storage that is used to access the bucket.
--skey SecretKey
Specifies a secret key that belongs to a cloud object storage that is used to access the bucket.
--keyfile
Specifies a key file that contains an access key and a secret key. Instead of providing the access key
and the secret key on the command line, a key file can be used. The key file must contain two lines for
akey and skey separated by a colon. An example of the format of a key file /root/keyfile1 is as
follows:
akey:AccessKey
skey:SecretKey
--report
This option generates a report of all buckets and serverName combination that are stored with AFM.
Exit status
0
Successful completion.
nonzero
A failure occurred.
Security
You must have root authority to run the mmafmcoskeys command.
The node on which this command is issued must be able to run remote shell commands on any other
node in the cluster. The node must run these remote shell commands without a password and must not
produce any extraneous messages. For more information, see Requirements for administering a GPFS file
system in the IBM Spectrum Scale: Administration Guide.
Examples
1. To set an access key and a secret key of a bucket, issue the following command:
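(Illustrative bucket and key values.)
# mmafmcoskeys bucket1 set AccessKey1 SecretKey1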
2. To set an access key and a secret key of a bucket by using a key file, issue the following command:
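(The bucket name and key file path are assumed.)
# mmafmcoskeys bucket1 set --keyfile /root/keyfile1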
An example of the key file is as follows:
akey:AccessKey
skey:SecretKey
Note: The key file must not have any other characters such as space and tab.
3. To set an access key and a secret key of a bucket by using the bucket:serverName combination,
issue the following command:
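(Assumed bucket and server names.)
# mmafmcoskeys bucket1:server1.example.com set AccessKey1 SecretKey1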
4. To get an access key and a secret key of a bucket, issue the following command:
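(Assumed bucket name.)
# mmafmcoskeys bucket1 get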
5. To get an access key and a secret key of a bucket by using the bucket:serverName combination,
issue the following command:
6. To delete an access key and a secret key of a bucket, issue the following command:
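(Assumed bucket name.)
# mmafmcoskeys bucket1 delete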
7. To delete an access key and a secret key of a bucket by using the bucket:serverName combination,
issue the following command:
8. To generate a report of all buckets and their credentials that are stored, issue the following command:
See also
• “mmafmcosaccess command” on page 48
• “mmafmcosconfig command” on page 50
• “mmafmcosctl command” on page 55
Location
/usr/lpp/mmfs/bin
mmafmctl command
Performs various operations on, and reports information about, AFM and AFM DR filesets. It is recommended that you read the AFM and AFM Disaster Recovery chapters in the IBM Spectrum Scale: Administration Guide along with this manual for a detailed description of the functions.
Synopsis
To use the AFM DR functions correctly, use all commands listed in this topic in accordance with the steps described in the AFM-based DR chapter in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
In this manual, AFM read-only mode is referred to as RO, single-writer mode as SW, independent-writer mode as IW, and local-updates mode as LU.
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
The usage of the options of this command for different operations on both AFM (RO/SW/IW/LU) filesets and AFM primary/secondary filesets is explained with examples.
The file system should be mounted on all gateway nodes for the mmafmctl functions to work.
Parameters
Device
Specifies the device name of the file system.
-j FilesetName
Specifies the fileset name.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
-s LocalWorkDirectory
Specifies the temporary working directory.
1. This section describes:
resync
This option is available only for SW cache. In case of inadvertent changes made at the home of an SW fileset, such as deletion of a file or change of data in a file, the administrator can correct the home by sending all contents from the cache to the home by using this option. A limitation of this option is that files renamed at home might not be fixed by resync. Using resync requires the cache to be in either the NeedsResync or Active state.
expire | unexpire
This option is available only for RO cache to manually expire or unexpire. When an RO cache is
disconnected, the cached contents are still accessible for the user. However, the administrator can
define a timeout from home beyond which access to the cached contents becomes stale. Such an
event would occur automatically after disconnection (when cached contents are no longer accessible)
and is called expiration; the cache is said to be expired. This option is used to manually force the
cache state to 'Expired'. To expire a fileset manually, the afmExpirationTimeout must be set on
the fileset.
When the home comes back or reconnects, the cache contents automatically become accessible again, and the cache is said to be unexpired. The unexpire option is used to force the cache to come out of the 'Expired' state.
The manual expiration and un-expiration can be forced on a cache even when the home is in a
connected state. If a cache is expired manually, the same cache must be unexpired manually.
stop
Run on an AFM or AFM DR fileset to stop replication. You can use this command during maintenance
or downtime, when the I/O activity on the filesets is stopped, or is minimal. After the fileset moves to
a 'Stopped' state, changes or modifications to the fileset are not sent to the gateway node for queuing.
start
Run on a 'Stopped' AFM or AFM DR fileset to start sending updates to the gateway node and resume
replication on the fileset.
2. This section describes:
getstate
This option is applicable for all AFM (RO/SW/IW/LU) and AFM primary filesets. It displays the status of
the fileset in the following fields:
Fileset Name
The name of the fileset.
Fileset Target
The host server and the exported path on it.
Gateway Node
Primary gateway of the fileset. This gateway node is handling requests for this fileset.
Queue Length
Current length of the queue on the primary gateway.
Queue numExec
Number of operations played at home since the fileset was last Active.
Cache State
• Cache states applicable for all AFM RO/SW/IW/LU filesets:
Active, Inactive, Dirty, Disconnected, Stopped, Unmounted
• Cache states applicable for RO filesets:
Expired
• Cache states applicable for SW and IW filesets:
Recovery, FlushOnly, QueueOnly, Dropped, NeedsResync, FailoverInProgress
• Cache states applicable for IW filesets:
FailbackInProgress, FailbackCompleted, NeedsFailback
• Cache states applicable for AFM primary filesets:
PrimInitInProg, PrimInitFail, Active, Inactive, Dirty, Disconnected, Unmounted,
FailbackInProg, Recovery, FlushOnly, QueueOnly, Dropped, Stopped, NeedsResync
For more information about all cache states, see the AFM and AFM-based DR chapters in the IBM
Spectrum Scale: Concepts, Planning, and Installation Guide.
resumeRequeued
This option is applicable for SW/IW and primary filesets. If there are operations in the queue that were
re-queued due to errors at home, the Administrator should correct those errors and can run this
option to retry the re-queued operations.
3. This section describes:
flushPending
Flushes all point-in-time pending messages in the normal queue on the fileset to home. Requeued
messages and messages in the priority queue for the fileset are not flushed by this command.
When --list-file ListFile is specified, the messages pending on the files listed in the list file are
flushed to home. ListFile contains a list of files that you want to flush, one file per line. All files must
have absolute path names, specified from the fileset linked path. If the list of files has filenames with
special characters, use a policy to generate the listfile. Edit to remove all entries other than the
filenames. FlushPending is applicable for SW/IW and primary filesets.
4. This section describes:
This option is applicable only for SW/IW filesets. This option pushes all the data from the cache to home. It should be used only when home is completely lost due to a disaster and a new home is being set up. Failover often takes a long time to complete; status can be checked by using the afmManualResyncComplete callback or the mmafmctl getstate command.
--new-target NewAfmTarget
Specifies a new home server and path, replacing the home server and path originally set by the
afmTarget parameter of the mmcrfileset command. Specified in either of the following formats:
nfs://{Host|Map}/Target_Path
or
gpfs://[Map]/Target_Path
where:
nfs:// or gpfs://
Specifies the transport protocol.
Host|Map
Host
Specifies the server domain name system (DNS) name or IP address.
Map
Specifies the export map name. Information about Mapping is contained in the AFM Overview
> Parallel data transfers section.
See the following examples:
1. An example of using the nfs:// protocol with a map name:
Note: If you are not specifying the map name, a '/' is still needed to indicate the path.
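An illustrative target of this form, assuming a map named map1 and a home path of /gpfs/homefs1/dir3:
nfs://map1/gpfs/homefs1/dir3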
4. An example of using the gpfs:// protocol with a map name:
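An illustrative gpfs:// target, again assuming a map named map1 and a remote path of /gpfs/remotefs1/dir3:
gpfs://map1/gpfs/remotefs1/dir3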
Target_Path
Specifies the export path.
It is possible to change the protocol along with the target using failover. For example, a cache using an
NFS target bear110:/gpfs/gpfsA/home can be switched to a GPFS target whose remote file
system is mounted at /gpfs/fs1, and vice-versa, as follows:
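A hedged sketch of such a switch, assuming the fileset is named sw1 and using the --new-target parameter described in this section (the device and fileset names are illustrative):
# mmafmctl fs1 failover -j sw1 --new-target gpfs:///gpfs/fs1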
This option is used for prefetching file contents from home before the application requests them. Prefetching reduces the network delay when the application performs data transfers on files and data that are not in the cache. You can also use this option to move files over the WAN when WAN usage is low; these files might be files that are accessed during high WAN usage. Thus, you can use this option for better WAN management.
Prefetch is an asynchronous process and you can use the fileset when prefetch is in progress. You can
monitor Prefetch using the afmPrepopEnd event. AFM can prefetch the data using the mmafmctl
prefetch command (which specifies a list of files to prefetch). Prefetch always pulls the complete file
contents from home and AFM automatically sets a file as cached when it is completely prefetched.
You can use the prefetch option to -
• populate metadata
• populate data
• view prefetch statistics
Prefetch completion can be monitored using the afmPrepopEnd event.
--retry-failed-file-list
Allows retrying prefetch of files that failed in the last prefetch operation. The list of files to retry is
obtained from .afm/.prefetchedfailed.list under the fileset.
Note: To use this option, you must enable generating a list of failed files. Add --enable-failed-
file-list to the command first.
--metadata-only
Prefetches only the metadata and not the actual data. This is useful in migration scenarios. This
option requires the list of files whose metadata you want. Hence it must be combined with a list file
option.
--enable-failed-file-list
Turns on generating a list of files which failed during prefetch operation at the gateway node. The list
of files is saved as .afm/.prefetchedfailed.list under the fileset. Failures that occur during
processing are not logged in .afm/.prefetchedfailed.list. If you observe any errors during
processing (before queuing), you might need to correct the errors and rerun prefetch.
Files listed in .afm/.prefetchedfailed.list are used when prefetch is retried.
--policy
Specifies that the list-file or home-list-file is generated by using a GPFS policy, by which sequences like '\' or '\n' are escaped as '\\' and '\\n'. If this option is specified, the input file list is treated as already escaped. The sequences are unescaped before being queued for the prefetch operation.
Note: This option can be used only if you are specifying list-file or home-list-file.
--directory LocalDirectoryPath
Specifies path to the local directory from which you want to prefetch files. A list of all files in this
directory and all its sub-directories is generated, and queued for prefetch.
--dir-list-file DirListfile
Specifies the path to a file that contains unique entries of directories under the AFM fileset that need to be prefetched. This option enables prefetching of individual directories under an AFM fileset. AFM generates a list of all files and subdirectories inside them and queues the list for prefetch. The input file can also be a policy-generated file, for which you must specify --policy.
You can specify either the --directory or the --dir-list-file option with mmafmctl prefetch.
The --policy option can be used only with --dir-list-file, not with --directory.
For example,
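the following sketch assumes a file system fs1, a fileset fileset1, and a directory list file /tmp/dirlist:
# mmafmctl fs1 prefetch -j fileset1 --dir-list-file /tmp/dirlist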
--nosubdirs
This option restricts the recursive behavior of --dir-list-file and --directory and prefetches only up to the given directory level. This option does not prefetch the subdirectories under the given directory. This parameter is optional.
This option can be used only with --dir-list-file and --directory.
For example,
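the following sketch (with the same assumed names) skips the subdirectories of the listed directories:
# mmafmctl fs1 prefetch -j fileset1 --dir-list-file /tmp/dirlist --nosubdirs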
--list-file ListFile
The specified file contains a list of files to be prefetched, one file per line. All files must have fully qualified path names.
If the files to be prefetched have file names with special characters, then a policy must be used to generate the list file. Remove entries from the file other than the file names.
An indicative list of files:
• files with fully qualified names from cache
--force
Enables forcefully fetching data from home during the migration process. This option overrides any set restrictions and helps to fetch the data forcefully to the cache. This option must be used only to forcefully fetch data that was created after the migration process completed.
For example,
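a possible invocation, assuming the device, fileset, and list-file names and that --force is combined with a standard list-file prefetch:
# mmafmctl fs1 prefetch -j fileset1 --list-file /tmp/file-list --force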
--gateway Node
Allows selection of a gateway node that is idle or less used to run the prefetch operation on a fileset. This parameter helps to distribute the prefetch work across different gateway nodes and overrides the default gateway node that is assigned to the fileset. This parameter also helps to run different prefetch operations on different gateway nodes, which might belong to the same fileset or to different filesets.
For example,
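the following sketch (assumed names) runs the prefetch on gateway node Node2 instead of the fileset's default gateway:
# mmafmctl fs1 prefetch -j fileset1 --list-file /tmp/file-list --gateway Node2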
To check the prefetch statistics of this command on gateway Node2, issue the following command:
--prefetch-threads nThreads
Specifies the number of threads to be used for the prefetch operation. Valid values are 1 - 255.
Default value is 4.
For example,
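the following sketch (assumed names) uses eight prefetch threads:
# mmafmctl fs1 prefetch -j fileset1 --list-file /tmp/file-list --prefetch-threads 8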
If you run prefetch without providing any options, it displays the statistics of the last prefetch command that was run on the fileset.
If you run the prefetch command with the data or metadata options, statistics such as queued files, total files, failed files, and total data (in bytes) are displayed, as in the following example of a command and its system output:
# mmafmctl <FileSystem> prefetch -j <fileset> --enable-failed-file-list --list-file /tmp/file-list
This option is applicable for RO/SW/IW/LU filesets. When the cache space exceeds the allocated quota, data blocks from non-dirty files are automatically de-allocated by the eviction process. This option can be used for a file that is specifically to be de-allocated based on some criteria. All options can be combined with each other.
--safe-limit SafeLimit
This is a mandatory parameter for the manual evict option, for the order and filter attributes. Specifies the target quota limit (which is used as the low water mark) for eviction, in bytes; the value must be less than the soft limit. This parameter can be used alone or combined with one of the following parameters (order or filter attributes). Specify the parameter in bytes.
--order LRU | SIZE
Specifies the order in which files are to be chosen for eviction:
LRU
Least recently used files are to be evicted first.
SIZE
Larger-sized files are to be evicted first.
--log-file LogFile
Specifies the file where the eviction log is to be stored. The default is that no logs are generated.
--filter Attribute=Value
Specifies attributes that enable you to control how data is evicted from the cache. Valid attributes are:
FILENAME=FileName
Specifies the name of a file to be evicted from the cache. This uses an SQL-type search query. If
the same file name exists in more than one directory, it will evict all the files with that name. The
complete path to the file should not be given here.
MINFILESIZE=Size
Sets the minimum size of a file to evict from the cache. This value is compared to the number of
blocks allocated to a file (KB_ALLOCATED), which may differ slightly from the file size.
MAXFILESIZE=Size
Sets the maximum size of a file to evict from the cache. This value is compared to the number of
blocks allocated to a file (KB_ALLOCATED), which may differ slightly from the file size.
--list-file ListFile
Contains a list of files that you want to evict, one file per line. All files must have fully qualified path
names. File system quotas need not be specified. If the list of files has file names with special
characters, use a policy to generate the listfile. Edit to remove all entries other than the file
names.
--file FilePath
The fully qualified name of the file that needs to be evicted. File system quotas need not be specified.
Possible combinations of safelimit, order, and filter are:
only Safe limit
Safe limit + LRU
Safe limit + SIZE
Safe limit + FILENAME
Safe limit + MINFILESIZE
Safe limit + MAXFILESIZE
Safe limit + LRU + FILENAME
Safe limit + LRU + MINFILESIZE
Safe limit + LRU + MAXFILESIZE
Safe limit + SIZE + FILENAME
Safe limit + SIZE + MINFILESIZE
Safe limit + SIZE + MAXFILESIZE
7. This section describes:
[-s LocalWorkDirectory]
There is a choice of restoring the latest snapshot data on the secondary during the failover process or leaving the data as is by using the --norestore option. After this is complete, the secondary is ready to host applications.
--norestore
Specifies that restoring from the latest RPO snapshot is not required. This is the default setting.
--restore
Specifies that data must be restored from the latest RPO snapshot.
9. This section describes:
This is to be run on a GPFS fileset or SW/IW fileset which is intended to be converted to primary.
--afmtarget Target
Specifies the secondary that needs to be configured for this primary. This option need not be used for AFM filesets because the target is already defined.
--inband
Used for inband trucking. Inband trucking copies data from the primary site to an empty secondary
site during conversion of GPFS filesets to AFM DR primary filesets. If you have already copied data to
the secondary site, AFM checks mtime of files at the primary and secondary site. Here, granularity of
mtime is in microseconds. If mtime values of both files match, data is not copied again and existing
data on the secondary site is used. If mtime values of both files do not match, existing data on the
secondary site is discarded and data from the primary site is written to the secondary site.
--check-metadata
This is the default option. Checks if the disallowed types (like immutable/append-only files) are
present in the GPFS fileset on the primary site before the conversion. Conversion with this option fails
if such files exist.
For SW/IW filesets, the presence of orphans and incomplete directories is also checked. SW/IW filesets should have established contact with home at least once for this option to succeed.
--nocheck-metadata
Used to proceed with the conversion without checking for append-only or immutable files.
--secondary-snapname SnapshotName
Used while establishing a new primary for an existing secondary or acting primary during failback.
--rpo RPO
Specifies the RPO interval in minutes for this primary fileset. Disabled by default.
10. This section describes:
This is to be run on a GPFS fileset on the secondary site. This converts a GPFS independent fileset to a
secondary and sets the primary ID.
--primaryid PrimaryId
Specifies the unique identifier of the AFM-DR primary fileset, which needs to be set at the AFM-DR secondary fileset to initiate a relationship. You can obtain this fileset identifier by running the mmlsfileset command with the --afm and -L options.
For example,
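a sketch with assumed device and fileset names; the primary identifier appears in the command output:
# mmlsfileset fs1 primary1 --afm -L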
--force
If convertToSecondary failed or was interrupted, it does not create the afmctl file at the secondary. In that case, rerun the command with the --force option.
This is used on an acting primary only. This creates a latest snapshot of the acting primary. This command
deletes any old RPO snapshots on the acting primary and creates a new initial RPO snapshot psnap0.
This RPO snapshot is used in the setup of the new primary.
13. This section describes:
This is to be run on an old primary that came back after the disaster, or on a new primary that is to be configured after an old primary went down in a disaster. The new primary should have been converted from a GPFS fileset to a primary by using the convertToPrimary option.
--start
Restores the primary to the contents from the last RPO snapshot on the primary before the disaster. This option puts the primary in read-only mode to avoid accidental corruption until the failback process is completed. For a new primary that is set up by using convertToPrimary, failback --start makes no change.
--stop
Used to complete the Failback process. This will put the fileset in read-write mode. The primary is
now ready for starting applications.
--force
Used if --stop or --start does not complete successfully because of errors and does not allow failbackToPrimary to stop or start again.
14. This section describes:
Exit status
0
Successful completion.
nonzero
A failure occurred.
Security
You must have root authority to run the mmafmctl command.
The node on which the command is issued must be able to run remote shell commands on any other node
in the cluster without the use of a password and without producing any extraneous messages. For more
information, see the topic Requirements for administering a GPFS file system in the IBM Spectrum Scale:
Administration Guide.
Examples
1. Running resync on SW:
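A sketch; the device and fileset names fs1 and sw1 are assumptions:
# mmafmctl fs1 resync -j sw1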
2. Expiring a RO fileset:
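An illustrative form, with assumed names:
# mmafmctl fs1 expire -j ro1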
3. Unexpiring a RO fileset:
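An illustrative form, with assumed names:
# mmafmctl fs1 unexpire -j ro1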
The policy created this file; it should be hand-edited to retain only the names:
11012030 65537 0 -- /gpfs/fs1/sw1/migrateDir.popFSDir.22655/file_with_posix_acl1
11012032 65537 0 -- /gpfs/fs1/sw1/migrateDir.popFSDir.22655/populateFS.log
11012033 65537 0 --
/gpfs/fs1/sw1/migrateDir.popFSDir.22655/sparse_file_0_with_0_levels_indirection
At home -
At cache -
Changing to another NFS server in the same home cluster using --target-only option:
List Policy:
RULE EXTERNAL LIST 'List' RULE 'List' LIST 'List' WHERE PATH_NAME LIKE '%'
The policy created this file; it should be hand-edited to retain only the file names.
# cat /lfile1
/gpfs/homefs1/dir3/file1
/gpfs/homefs1/dir3/dir1/file1
Inode file is created using the above policy at home, and should be used as such without
hand-editing.
List Policy:
RULE EXTERNAL LIST 'List' RULE 'List' LIST 'List' WHERE PATH_NAME LIKE '%'
# cat /lfile2
113289 65538 0 -- /gpfs/homefs1/dir3/file2
113292 65538 0 -- /gpfs/homefs1/dir3/dir1/file2
# cat /lfile2
113289 65538 0 -- /gpfs/homefs1/dir3/file2
113292 65538 0 -- /gpfs/homefs1/dir3/dir1/file2
# ls -lis /gpfs/fs1/ro2/file10M_1
12605961 10240 -rw-r--r-- 1 root root 10485760 May 21 07:44 /gpfs/fs1/ro2/file10M_1
# ls -lis /gpfs/fs1/ro2/file10M_1
12605961 0 -rw-r--r-- 1 root root 10485760 May 21 07:44 /gpfs/fs1/ro2/file10M_1
12. IW Failback:
# ls -lshi /gpfs/fs1/evictCache
total 6.0M
27858308 1.0M -rw-r--r--. 1 root root 1.0M Feb 5 02:07 file1M
27858307 2.0M -rw-r--r--. 1 root root 2.0M Feb 5 02:07 file2M
27858306 3.0M -rw-r--r--. 1 root root 3.0M Feb 5 02:07 file3M
# ls -lshi /gpfs/fs1/evictCache
total 3.0M
27858308 1.0M -rw-r--r--. 1 root root 1.0M Feb 5 02:07 file1M
27858307 2.0M -rw-r--r--. 1 root root 2.0M Feb 5 02:07 file2M
27858306 0 -rw-r--r--. 1 root root 3.0M Feb 5 02:07 file3M
# ls -lshi /gpfs/fs1/evictCache
total 3.0M
27858308 1.0M -rw-r--r--. 1 root root 1.0M Feb 5 02:07 file1M
27858307 2.0M -rw-r--r--. 1 root root 2.0M Feb 5 02:07 file2M
27858306 0 -rw-r--r--. 1 root root 3.0M Feb 5 02:07 file3M
# ls -lshi /gpfs/fs1/evictCache
total 0
27858308 0 -rw-r--r--. 1 root root 1.0M Feb 5 02:07 file1M
27858307 0 -rw-r--r--. 1 root root 2.0M Feb 5 02:07 file2M
27858306 0 -rw-r--r--. 1 root root 3.0M Feb 5 02:07 file3M
See also
• “mmafmconfig command” on page 45
• “mmafmlocal command” on page 78
• “mmchattr command” on page 156
• “mmchconfig command” on page 169
• “mmchfileset command” on page 222
• “mmchfs command” on page 230
• “mmcrfileset command” on page 308
• “mmcrfs command” on page 315
• “mmlsconfig command” on page 487
• “mmlsfileset command” on page 493
• “mmlsfs command” on page 498
• “mmpsnap command” on page 605
See the AFM and AFM-based DR chapters in IBM Spectrum Scale: Concepts, Planning, and Installation
Guide for details.
Location
/usr/lpp/mmfs/bin
mmafmlocal command
Provides a list of cached files and file statistics such as inode number, allocated blocks, and so on.
Synopsis
mmafmlocal ls [FileName ...]
or
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
The mmafmlocal command provides information about files that exist in the cache.
Parameters
ls
Lists files with data that is in the cache already. This parameter is valid for fully-cached files only.
FileName
Specifies the name of a file to be listed.
stat
Displays statistics for files. If the file is not cached already, the number of allocated blocks is zero.
This parameter is valid for partially-cached and fully-cached files.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmafmlocal command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To list information of all the cached files in a fileset:
mmafmlocal ls
total 10240
-rwxrwxrwx 1 root root 10485760 May 24 09:20 file1
mmafmlocal ls file2
File: file2
Inode number: 1582477
Device ID: 0x2C (44)
Size: 10485760
Blocks: 20480
Block size: 262144
Links: 1
Uid: 0 (root)
Gid: 0 (root)
Mode: 0100777
Access time: 1464281093 (Thu May 26 16:44:53 2016 UTC)
Modify time: 1464096038 (Tue May 24 13:20:38 2016 UTC)
Change time: 1464096038 (Tue May 24 13:20:38 2016 UTC)
See also
• “mmafmconfig command” on page 45
• “mmafmctl command” on page 61
• “mmchattr command” on page 156
• “mmchconfig command” on page 169
• “mmchfileset command” on page 222
• “mmchfs command” on page 230
• “mmcrfileset command” on page 308
• “mmcrfs command” on page 315
• “mmlsconfig command” on page 487
• “mmlsfileset command” on page 493
• “mmlsfs command” on page 498
• “mmpsnap command” on page 605
Location
/usr/lpp/mmfs/bin
mmapplypolicy command
Deletes files, migrates files between storage pools, or does file compression or decompression in a file
system as directed by policy rules.
Synopsis
mmapplypolicy {Device|Directory}
[-A IscanBuckets] [-a IscanThreads] [-B MaxFiles]
[-D yyyy-mm-dd[@hh:mm[:ss]]] [-e] [-f FileListPrefix]
[-g GlobalWorkDirectory] [-I {yes|defer|test|prepare}]
[-i InputFileList] [-L n] [-M name=value...] [-m ThreadLevel]
[-N {all | mount | Node[,Node...] | NodeFile | NodeClass}]
[-n DirThreadLevel] [-P PolicyFile] [-q] [-r FileListPathname...]
[-S SnapshotName] [-s LocalWorkDirectory]
[--choice-algorithm {best | exact | fast}]
[--maxdepth MaxDirectoryDepth]
[--max-merge-files MaxFiles] [--max-sort-bytes MaxBytes]
[--other-sort-options SortOptions] [--qos QosClass]
[--scope {filesystem | fileset | inodespace}]
[--single-instance] [--sort-buffer-size Size]
[--sort-command SortCommand] [--split-filelists-by-weight]
[--split-margin n.n]
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
You can use the mmapplypolicy command to apply rules that manage the following types of tasks:
• Migration and replication of file data to and from storage pools.
• Deleting files.
• File compression or decompression. For more information, see the topic File compression in the IBM
Spectrum Scale: Administration Guide.
For more information about policy rules, see the topic Policies for automating file management in the IBM
Spectrum Scale: Administration Guide.
Remember: When there are many millions of files to scan and process, the mmapplypolicy command
can be very demanding of memory, CPU, I/O, and storage resources. Among the many options, consider
specifying values for -N, -g, -s, -a, -m, --sort-buffer-size, and --qos to control the resource
usage of mmapplypolicy.
You can run the mmapplypolicy command from any node in the cluster that has mounted the file
system.
The mmapplypolicy command does not affect placement rules (for example, the SET POOL and
RESTORE rule) that are installed for a file system by the mmchpolicy command. To display the currently
installed rules, issue the mmlspolicy command.
A given file can match more than one list rule, but will be included in a given list only once. ListName
provides the binding to an EXTERNAL LIST rule that specifies the executable program to use when
processing the generated list.
The EXTERNAL POOL rule defines an external storage pool. This rule does not match files, but serves to
define the binding between the policy language and the external storage manager that implements the
external storage.
Any given file is a potential candidate for at most one MIGRATE or DELETE operation during one
invocation of the mmapplypolicy command. That same file may also match the first applicable LIST
rule.
A file that matches an EXCLUDE rule is not subject to any subsequent MIGRATE, DELETE, or LIST rules.
You should carefully consider the order of rules within a policy to avoid unintended consequences.
For detailed information on GPFS policies, see the IBM Spectrum Scale: Administration Guide.
This command cannot be run from a Windows node. The GPFS API functions that are documented in gpfs.h are not implemented on Windows; however, the policy language does support the Windows file attributes, so you can manage your GPFS Windows files by using the mmapplypolicy command running on an AIX or Linux node.
Note: To terminate mmapplypolicy, use the kill command to send a SIGTERM signal to the process
group running mmapplypolicy.
For example, on Linux if you wanted to terminate mmapplypolicy on a process group whose ID is 3813,
you would enter the following:
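One standard form of the kill invocation; the '--' ends option parsing so that the negative value is read as a process group ID:
# kill -TERM -- -3813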
If you need to determine which process group is running mmapplypolicy, you can use the following
command (which also tells you which process groups are running tsapolicy and mmhelp-apolicy):
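A generic approach, not necessarily the manual's exact command:
# ps -e -o pid,pgid,args | grep -E 'mmapplypolicy|tsapolicy|mmhelp-apolicy'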
Parameters
Device
Specifies the device name of the file system from which files will have the policy rules applied. File
system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0. If specified, this
must be the first parameter.
Directory
Specifies the fully-qualified path name of a GPFS file system subtree from which files will have the
policy rules applied. If specified, this must be the first parameter.
-A IscanBuckets
Specifies the number of buckets of inode numbers (number of inode/filelists) to be created by the
parallel directory scan and processed by the parallel inode scan. Affects the execution of the high-
performance protocol that is used when both -g and -N are specified.
Tip: Set this parameter to the expected number of files to be scanned divided by one million. Then
each bucket will have about one million files.
-a IscanThreads
Specifies the number of threads and sort pipelines each node will run during the parallel inode scan
and policy evaluation. It affects the execution of the high-performance protocol that is used when
both -g and -N are specified. The default is 2. Using a moderately larger number can significantly
improve performance, but might "strain" the resources of the node. In some environments a large
value for this parameter can lead to a command failure.
Tip: Set this parameter to the number of CPU "cores" implemented on a typical node in your GPFS
cluster.
-B MaxFiles
Specifies how many files are passed for each invocation of the EXEC script. The default value is 100.
If the number of files exceeds the value specified for MaxFiles, mmapplypolicy invokes the external
program multiple times.
For more information about file list records, refer to the IBM Spectrum Scale: Administration Guide.
-D yyyy-mm-dd[@hh:mm[:ss]]
Specifies a date and optionally a (UTC) time as year-month-day at hour:minute:second.
The mmapplypolicy command evaluates policy rules as if it were running on the date and time
specified by the -D flag. This can be useful for planning or testing policies, to see how the
mmapplypolicy command would act in the future. If this flag is omitted, the mmapplypolicy
command uses the current date and (UTC) time. If a date is specified but not a time, the time is
assumed to be 00:00:00.
-e
Causes mmapplypolicy to re-evaluate and revalidate the following conditions immediately before
executing the policy action for each chosen file:
• That the PATH_NAME still leads to the chosen file, and that the INODE and GENERATION values are
the same.
• That the rule (iRule) still applies to, and is a first matching rule for, the chosen file.
Note: The -e option is particularly useful with -r, but can be used apart from it. It is useful because in
the time that elapses after the policy evaluation and up to the policy execution, it is possible that the
chosen pathname no longer refers to the same inode (for example the original file was removed or
renamed), or that some of the attributes of the chosen file have changed in some way so that the
chosen file no longer satisfies the conditions of the rule. In general, the longer the elapsed time, the
more likely it is that conditions have changed (depending on how the file system is being used). For
example, if files are only written once and never renamed or erased, except by policy rules that call for
deletion after an expiration interval, then it is probably not necessary to re-evaluate with the -e
option.
For more information about -r, see IBM Spectrum Scale: Administration Guide.
-f FileListPrefix
Specifies the location (a path name or file name prefix or directory) in which the file lists for external
pool and list operations are stored when either the -I defer or -I prepare option is chosen. The
default location is LocalWorkDirectory/mmapplypolicy.processid.
-g GlobalWorkDirectory
Specifies a global work directory in which one or more nodes can store temporary files during
mmapplypolicy command processing. For more information about specifying more than one node to
process the command, see the description of the -N option. For more information about temporary
files, see the description of the -s option.
The global directory can be in the file system that mmapplypolicy is processing or in another file
system. The file system must be a shared file system, and it must be mounted and available for
reading and writing by every node that will participate in the mmapplypolicy command processing.
If the -g option is not specified, then the global work directory is the directory that is specified by the
sharedTmpDir attribute of the mmchconfig command. For more information, see “mmchconfig
command” on page 169. If the sharedTmpDir attribute is not set to a value, then the global work
directory depends on the file system format version of the target file system:
• If the target file system is at file system format version 5.0.1 or later (file system format number
19.01 or later), then the global work directory is the directory .mmSharedTmpDir at the root level
of the target file system.
• If the target file system is at a file system format version that is earlier than 5.0.1 then the command
does not use a global work directory.
If the global work directory that is specified by -g option or by the sharedTmpDir attribute begins
with a forward slash (/) then it is treated as an absolute path. Otherwise it is treated as a path that is
relative to the mount point of the file system.
If both the -g option and the -s option are specified, then temporary files can be stored in both the
specified directories. In general, the local work directory contains temporary files that are written and
read by a single node. The global work directory contains temporary files that are written and read by
more than one node.
Note: The mmapplypolicy command uses high-performance, fault-tolerant protocols in its
processing whenever both of the following conditions are true:
• The command is configured to run on multiple nodes.
• The command is configured to use a global work directory.
If the target file system is at file system format version 5.0.1 or later, then the command always uses
high-performance, fault-tolerant protocols. The reason is that in this situation the command provides
default values for running on multiple nodes (the node class managerNodes) and for sharing a global
directory (.mmSharedTmpDir) if no explicit parameters are specified. For more information, see the
following descriptions:
• The command runs on multiple nodes if any of the following circumstances are true:
– The -N option on the command line specifies a set of helper nodes to run parallel instances of the
policy code.
– The defaultHelperNodes attribute of the mmchconfig command is set. This attribute
specifies a list of helper nodes to be employed if the -N option is not specified.
– The target file system is at file system format version 5.0.1 or later (file system format number
19.01 or later). If neither the -N option nor the defaultHelperNodes attribute is set, the
members of the node class managerNodes are the helper nodes.
• The command uses a global work directory if any of the following circumstances are true:
– The -g option on the command line specifies a global work directory.
– The sharedTmpDir attribute of the mmchconfig command is set. This attribute specifies a
global work directory to be used if the -g option is not specified.
– The target file system is at file system format version 5.0.1 or later (file system format number
19.01 or later). If neither the -g option nor the sharedTmpDir attribute is set, the
directory .mmSharedTmpDir at the root level of the target file system is the global work
directory.
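For example, a run that explicitly names helper nodes and a shared global work directory (all names below are illustrative) satisfies both conditions and therefore uses the high-performance, fault-tolerant protocol:
mmapplypolicy fs1 -P policyfile -I test -N node1,node2 -g /gpfs/fs1/.policyTmp    # node and directory names are illustrative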
-I {yes | defer | test | prepare}
Specifies what actions the mmapplypolicy command performs on files:
yes
Indicates that all applicable policy rules are run, and the data movement between pools is done
during the processing of the mmapplypolicy command. All defined external lists will be
executed. This is the default action.
defer
Indicates that all applicable policy rules are run, but actual data movement between pools is
deferred until the next mmrestripefs or mmrestripefile command. See also “-f
FileListPrefix” on page 82.
test
Indicates that all policy rules are evaluated, but the mmapplypolicy command only displays the
actions that would be performed had -I defer or -I yes been specified. There is no actual
deletion of files or data movement between pools. This option is intended for testing the effects of
particular policy rules.
prepare
Indicates that all policy execution is deferred and that mmapplypolicy only prepares file lists
that are suitable for execution with the –r option. Records are written for each of the chosen files
and are stored in one or more file lists, under a path name that is specified by the -f option or in
the default local work directory. The actual data movement occurs when the command is rerun
with the -r option.
-i InputFileList
Specifies the path name for a user-provided input file list. This file list enables you to specify multiple
starter directories or files. It can be in either of the following formats:
simple format file list
A list of records with the following format:
PATH_NAME
Each record represents either a single file or a directory. When a directory is specified, the
command processes the entire subtree that is rooted at the specified path name
File names can contain spaces and special characters; however, the special characters '\' and '\n'
must be escaped with the '\' character similarly to the way mmapplypolicy writes path names in
file lists for external pool and list operations.
The end-of-record character must be \n.
Example:
/mak/ea
/mak/old news
/mak/special\\stuff
expert format file list
A list of records with the following format:
INODE:GENERATION:path-length!PATH_NAME end-of-record-character
Example:
00009a00:0:8!d14/f681
00009a01:1002:8!d14/f682
When you use an expert format file list, the directory scan phase is skipped and only the files that
are specified with the InputFileList parameter are tested against the policy rules.
For more information, see the IBM Spectrum Scale: Administration Guide.
With either format, if a path name is not fully qualified, it is assumed to be relative to one of the
following:
• the Directory parameter on the mmapplypolicy command invocation, or
• the mount point of the GPFS file system, if Device is specified as the first argument
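For example, if the starter list shown above is saved as /tmp/startlist, a test run restricted to those paths might look like the following; the device and policy file names are illustrative:
mmapplypolicy fs1 -P policyfile -i /tmp/startlist -I test    # fs1 and policyfile are illustrative names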
-L n
Controls the level of information displayed by the mmapplypolicy command. Larger values indicate
the display of more detailed information. These terms are used:
candidate file
A file that matches a MIGRATE, DELETE, or LIST policy rule.
chosen file
A candidate file that has been scheduled for action.
These are the valid values for n:
0
Displays only serious errors.
1
Displays some information as the command runs, but not for each file. This is the default.
2
Displays each chosen file and the scheduled migration or deletion action.
3
Displays the same information as 2, plus each candidate file and the applicable rule.
4
Displays the same information as 3, plus each explicitly EXCLUDEed or LISTed file, and the
applicable rule.
5
Displays the same information as 4, plus the attributes of candidate and EXCLUDEed or LISTed
files.
6
Displays the same information as 5, plus non-candidate files and their attributes.
For examples and more information on this flag, see the section: The mmapplypolicy -L command in
the IBM Spectrum Scale: Problem Determination Guide.
-M name=value...
Indicates a string substitution that will be made in the text of the policy rules before the rules are
interpreted. This allows the administrator to reuse a single policy rule file for incremental backups
without editing the file for each backup.
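For example, a rules file might reference a placeholder string that is supplied at run time; the placeholder name AGE_DAYS, the file names, and the rule itself are an illustrative sketch:
/* rules.pol -- the string AGE_DAYS is replaced by the value given with -M */
RULE 'expire' DELETE WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > AGE_DAYS
mmapplypolicy fs1 -P rules.pol -M AGE_DAYS=30 -I test    # fs1 and rules.pol are illustrative names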
-m ThreadLevel
The number of threads that are created and dispatched within each mmapplypolicy process during
the policy execution phase. The default value is 24.
-N {all | mount | Node[,Node...] | NodeFile | NodeClass}
Specifies a set of nodes to run parallel instances of policy code for better performance. The nodes
must be in the same cluster as the node from which the mmapplypolicy command is issued. All
node classes are supported. For more information about these options, see Specifying nodes as input
to GPFS commands in the IBM Spectrum Scale: Administration Guide.
If the -N option is not specified, then the command runs parallel instances of the policy code on the
nodes that are specified by the defaultHelperNodes attribute of the mmchconfig command. For
more information, see the topic “mmchconfig command” on page 169. If the defaultHelperNodes
attribute is not set, then the list of helper nodes depends on the file system format version of the
target file system. If the target file system is at file system format version 5.0.1 or later (file system
format number 19.01 or later), then the helper nodes are the members of the node class
managerNodes. Otherwise, the command runs only on the node where the mmapplypolicy
command is issued.
-n DirThreadLevel
The number of threads that will be created and dispatched within each mmapplypolicy process
during the directory scan phase. The default is 24.
-P PolicyFile
Specifies the name of the file containing the policy rules to be applied. If not specified, the policy rules
currently in effect for the file system are used. Use the mmlspolicy command to display the current
policy rules.
-q
When specified, mmapplypolicy dispatches bunches of files from the file lists specified by the -r
option in a round-robin fashion, so that the multithreaded (-m) and node parallel (-N) policy execution
works on all the file lists "at the same time." When -q is not specified, policy execution works on the
file lists sequentially. In either case bunches of files are dispatched for parallel execution to multiple
threads (-m) on each of the possibly multiple nodes (-N).
-r FileListPathname...
Specifies one or more file lists of files for policy execution. The file lists used as input for -r
must have been created by issuing mmapplypolicy with the -I prepare flag. You can specify several file lists
by doing one of the following:
• Provide the path name of a directory of file lists, or
• Specify the -r option several times, each time with the path name of a different file list.
You can use this parameter to logically continue where mmapplypolicy left off when you specified
the -I prepare option. To do this, invoke mmapplypolicy with all the same parameters and
options (except the -I prepare option), and now substitute the -r option for -f. In between the
invocations, you can process, reorder, filter, or edit the file lists that were created when you invoked -
I prepare. You can specify any or all of the resulting file lists with the -r option.
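For example, a deferred run might first write the file lists and later execute them; all path and device names in this sketch are illustrative:
mmapplypolicy fs1 -P policyfile -I prepare -f /gpfs/fs1/tmp/lists    # write file lists to an existing directory; no data movement
mmapplypolicy fs1 -P policyfile -r /gpfs/fs1/tmp/lists               # execute the prepared file lists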
The format of the records in each file list file can be expressed as:
iAggregate:WEIGHT:INODE:GENERATION:SIZE:iRule:resourceId:attr_flags:
path-length!PATH_NAME:pool-length!POOL_NAME
[;show-length!SHOW]end-of-record-character
For more information about file list records, refer to the IBM Spectrum Scale: Administration Guide.
-S SnapshotName
Specifies the name of a global snapshot for file system backup operations or for migrating snapshot
data. The name appears as a subdirectory of the .snapshots directory in the file system root and can
be found with the mmlssnapshot command.
Note: GPFS snapshots are read-only. Do not use deletion rules with -S SnapshotName.
Note: Do not use the -S option to scan files in a fileset snapshot. Instead, specify the full path of a
directory in the snapshot as the first parameter of the command, as in the following example:
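The following sketch assumes a fileset linked at /gpfs/fs1/fset1 with a fileset snapshot named fsnap1; the names and the snapshot directory layout are illustrative:
mmapplypolicy /gpfs/fs1/fset1/.snapshots/fsnap1 -P policyfile -I test    # path and file names are illustrative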
-s LocalWorkDirectory
Specifies a local directory in which one or more nodes can store temporary files during
mmapplypolicy command processing. The default local work directory is /tmp. For more
information about specifying more than one node to process the command, see the -N option of
mmapplypolicy.
The temporary files contain lists of candidate files and lists of files that are selected to be processed.
If the file system or directory that mmapplypolicy is processing contains many files, then the
temporary files can occupy a large amount of storage space. To make a rough estimate of the size of
the storage that is required, apply the formula K * AVLP * NF, where:
K
Is 3.75.
AVLP
Is the average length of the full path name of a file.
NF
Is the number of files that the command will process.
For example, if AVLP is 80, then the storage space that is required is roughly (300 * NF) bytes of
temporary space.
--scope {filesystem | fileset | inodespace}
Specifies the scope of the scan:
filesystem
The scan will involve the objects in the entire file system subtree pointed to by the Directory
parameter. This is the default.
fileset
The scope of the scan is limited to the objects in the same fileset as the directory pointed to by the
Directory parameter.
inodespace
The scope is limited to objects in the same single inode space from which the directory pointed to
by the Directory parameter is allocated. The scan may span more than one fileset, if those filesets
share the same inode space.
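For example, to restrict a test run to the fileset that contains a given directory (the path and file names below are illustrative):
mmapplypolicy /gpfs/fs1/fset1 -P policyfile -I test --scope fileset    # names are illustrative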
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
--other-sort-options SortOptions
Passes options to the sort command (either the default sort command provided with the operating
system, or an alternative sort command specified by the --sort-command parameter).
--single-instance
Ensures that, for the specified file system, only one instance of mmapplypolicy invoked with the --
single-instance option can execute at one time. If another instance of mmapplypolicy invoked
with the --single-instance option is currently executing, this invocation will do nothing but
terminate.
--sort-buffer-size Size
Sets the sort-buffer size that is passed to the sort command. This parameter limits memory usage by
the sort commands that the mmapplypolicy command calls to do sorts and merges.
The mmapplypolicy command must do multiple, potentially large sorts and merges of temporary
files as part of its processing. It calls the operating-system sort command each time that it must do a
sort. It can have multiple instances of the sort command running at the same time. If the numbers of
items to be sorted are very large, the result can be excessive memory usage. The operating system
can respond by terminating some of the sort processes, which causes the mmapplypolicy command
to run for a longer time or to return with an error.
To prevent excessive memory consumption, you can set the --sort-buffer-size parameter to a
lower value than its default. The --sort-buffer-size parameter is the value that the
mmapplypolicy command passes to the sort command in the buffer-size parameter. The
default value is 8%. If you need a lower value, you might set it to 5%.
You can specify the --sort-buffer-size parameter in any format that the sort program's
buffer-size parameter accepts, such as "5%" or "1M".
In general, accept the default value of this parameter unless the system has excessive memory
consumption that is attributable to large sort operations by the mmapplypolicy command.
See the related parameters --max-merge-files and --max-sort-bytes.
--sort-command SortCommand
Specifies the fully-qualified path name for a Posix-compliant sort command to be used instead of the
default, standard command provided by the operating system.
Before specifying an alternative sort command (and for information about a suggested sort
command), see Improving performance with the --sort-command parameter in IBM Spectrum Scale:
Administration Guide.
--split-filelists-by-weight
Specifies that each of the generated file lists contain elements with the same WEIGHT value. This can
be useful in conjunction with the LIST rule and the WEIGHT (DIRECTORY_HASH) clause. In this
case, each generated list will contain files from the same directory.
Note: If you want all of the files from a given directory to appear in just one list, you might have to
specify a sufficiently large -B value.
--split-margin n.n
A floating-point number that specifies the percentage within which the fast-choice algorithm is
allowed to deviate from the LIMIT and THRESHOLD targets specified by the policy rules. For example
if you specified a THRESHOLD number of 80% and a split-margin value of 0.2, the fast-choice
algorithm could finish choosing files when it reached 80.2%, or it might choose files that bring the
occupancy down to 79.8%. A nonzero value for split-margin can greatly accelerate the execution of
the fast-choice algorithm when there are many small files. The default is 0.2.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmapplypolicy command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. This command displays the actions that would occur if a policy were applied, but does not apply the
policy at this time:
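A representative invocation is shown below; the file system name fs1 and the policy file name policyfile are illustrative:
mmapplypolicy fs1 -P policyfile -I test    # names are illustrative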
/* Deletion rule */
RULE 'delete' DELETE FROM POOL 'sp1' WHERE NAME LIKE '%.tmp'
/* Migration rule */
RULE 'migration to system pool' MIGRATE FROM POOL 'sp1' TO POOL 'system' WHERE NAME LIKE
'%sp1%'
[I] Directories scan: 11 files, 1 directories, 0 other objects, 0 'skipped' files and/or
errors.
[I] Inodes scan: 11 files, 1 directories, 0 other objects, 0 'skipped' files and/or errors.
[I] Summary of Rule Applicability and File Choices:
Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule
0 3 1536 0 0 0 RULE 'exclude *.save files' EXCLUDE WHERE(.)
1 3 1536 3 1536 0 RULE 'delete' DELETE FROM POOL 'sp1' WHERE(.)
2 2 1024 2 1024 0 RULE 'migration to system pool' MIGRATE FROM
POOL \
'sp1' TO POOL 'system' WHERE(.)
/* Deletion rule */
RULE 'delete' DELETE FROM POOL 'sp1' WHERE NAME LIKE '%.tmp'
/* Migration rule */
RULE 'migration to system pool' MIGRATE FROM POOL 'sp1' TO POOL 'system' WHERE NAME LIKE
'%sp1%'
[I] Directories scan: 11 files, 1 directories, 0 other objects, 0 'skipped' files and/or
errors.
[I] Inodes scan: 11 files, 1 directories, 0 other objects, 0 'skipped' files and/or errors.
[I] Summary of Rule Applicability and File Choices:
Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule
0 3 3072 0 0 0 RULE 'exclude *.save files' EXCLUDE WHERE(.)
1 3 3072 3 3072 0 RULE 'delete' DELETE FROM POOL 'sp1' WHERE(.)
2 2 2048 2 2048 0 RULE 'migration to system pool' MIGRATE FROM
POOL \
'sp1' TO POOL 'system' WHERE(.)
Additional examples of GPFS policies and using the mmapplypolicy command are in the IBM Spectrum
Scale: Administration Guide.
See also
• “mmchpolicy command” on page 255
• “mmcrsnapshot command” on page 337
• “mmlspolicy command” on page 518
• “mmlssnapshot command” on page 532
• “mmsnapdir command” on page 711
Location
/usr/lpp/mmfs/bin
mmaudit command
Manages setting and viewing the file audit logging configuration in IBM Spectrum Scale.
Synopsis
mmaudit Device enable [--log-fileset FilesetName ]
[--retention Days] [--events {Event1[,Event2...] | ALL}]
[--compliant]
[{--filesets Fileset1[,Fileset2...] |
--skip-filesets Fileset1[,Fileset2...]}] [-q]
Availability
Available with IBM Spectrum Scale Advanced Edition, IBM Spectrum Scale Data Management Edition,
IBM Spectrum Scale Developer Edition, or IBM Spectrum Scale Erasure Code Edition. Available on Linux
x86 and Linux PPC LE.
Description
Enables, disables, and lists configuration data for file audit logging in a specified file system. Lists all file
audit logging enabled file systems in the cluster. Command messages are written to the /var/adm/ras/
mmaudit.log file. The audit records are stored in the audit log fileset in a /Device/.audit_log/
audit_topic/Year/Month/Day directory structure. The audit log files are named
auditLogFile_hostname_date_time. The audit log files are rotated and compressed, and a retention
date is set for each file.
Note: When file audit logging is enabled on a file system, a fileset is created in the file system that is being
audited. This fileset contains the audit logging files that contain the audit events. By default, this fileset is
created as IAM mode noncompliant, but it can be created as IAM mode compliant if file audit logging is
enabled with the --compliant option. By using either IAM mode, expiration dates are set for all files
within the audit fileset. If the fileset is created in IAM mode noncompliant (the default), then the root user
can change the expiration date to the current date so that audit files can be removed to free up disk
space. If the fileset is created in IAM mode compliant (because of the use of the --compliant option),
not even the root user can change the expiration dates of the audit logging files and they cannot be
removed until the expiration date. In addition, commands such as mmrestorefs will fail when restoring
to a snapshot that would require removal of currently immutable (non-expired) files.
Parameters
Device
Specifies the device name of the file system upon which the audit log configuration change or listing is
to occur.
all
Specifies that the command is executed against all devices configured for file audit logging. Currently,
the only supported sub-commands are list and upgradePolicies.
enable
Enables file audit logging for the given device. Enablement entails setting up configuration and putting
the audit policies in place.
The --log-fileset FilesetName option specifies the fileset name where the audit log records for
the file system will be held. The default is .audit_log. The --retention Days option specifies the
number of days to set the expiration date on all audit log record files when they are created. The
default is 365 days. The --events option specifies the list of events that will be audited. For more
information about the events that are supported, see File audit logging events' descriptions in the IBM
Spectrum Scale: Concepts, Planning, and Installation Guide. The default is ALL. The --degraded
option allows file audit logging to be enabled without as many default performance enhancements.
The --compliant option specifies that the file audit logging fileset that is created to hold the file
audit logging files will be IAM mode compliant. The default is noncompliant. In compliant mode, not
even the root user can change the expiration dates of the file audit logging files to the past to free up
space. The --filesets Fileset1[,Fileset2...] option specifies one or more filesets within the file
system to audit. File system activity is only audited within these filesets and no other areas of the file
system. The --skip-filesets Fileset1[,Fileset2...] option specifies one or more filesets within the
file system not to audit. Audit events are generated for all file system activity except activity within
this list of filesets.
disable
Disables file audit logging for the given device. Disablement removes audit policies and audit
configurations that are specific to the device. Existing file audit records are changed to immutable and
the retention period remains.
update
Updates the list of events that will be audited. The new event list will replace the existing set of
events.
list --events [-Y]
Displays the file audit logging configuration information for the given device. The all option displays
the file audit logging configuration information for all devices enabled for file audit logging. The --
events option displays the device minor number, audit generation number, and a list of events that
are being audited. The -Y option provides output in machine-readable (colon-delimited) format.
producerRestart -N { NodeName[,NodeName...] | NodeFile | NodeClass }
Restarts the producers for all file systems under audit on the nodes specified by the -N option. The -N
option supports a comma-separated list of nodes, a full path name to a file containing node names, or
a predefined node class.
Note: Issuing this command causes the event producers for clustered watch folder to be restarted as
well.
upgradePolicies
Updates IBM Spectrum Scale policies that are associated with file audit logging enabled file systems
to allow remotely mounted file systems to generate file audit logging events.
-q
Suppresses all [I] informational messages.
Exit status
0
Successful completion.
nonzero
A failure has occurred. Errors are written to /var/adm/ras/mmaudit.log and /var/log/
messages.
Security
You must have root authority to run the mmaudit command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages.
Examples
1. To enable a file system with the default settings, issue this command:
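A possible form of the command; the device name fs0 is illustrative:
mmaudit fs0 enable    # fs0 is an illustrative device name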
2. To enable a file system for a specific set of events, issue this command:
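A possible form of the command; the device name and the event names are illustrative:
mmaudit fs0 enable --events CREATE,RENAME    # device and event names are illustrative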
3. To enable a file system with a different retention period, issue this command:
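A possible form of the command, setting a 30-day retention period instead of the default 365 days; the device name is illustrative:
mmaudit fs0 enable --retention 30    # fs0 is an illustrative device name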
5. To disable a file system that was previously enabled, issue this command:
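A possible form of the command; the device name watch1 matches the output that follows:
mmaudit watch1 disable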
[I] Successfully deleted File Audit Logging policy partition(s) for device: watch1
[I] Successfully updated File Audit Logging configuration for device: watch1
[I] Successfully checked or removed File Audit Logging global catchall and config fileset
skip partitions for device: watch1
[I] Successfully disabled File Audit Logging for device: watch1
6. To update the list of events that are being audited for a specific file system to available events, issue
this command:
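A possible form of the command; the device name is illustrative:
mmaudit fs0 update --events ALL    # fs0 is an illustrative device name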
7. To see which compliance type the file audit logging fileset is configured with, issue this command:
8. To see which file systems are currently configured for file audit logging, issue this command:
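A possible form of the command:
mmaudit all list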
9. To see which events are currently enabled for a file system, issue this command:
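A possible form of the command; the device name is illustrative:
mmaudit fs0 list --events    # fs0 is an illustrative device name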
See also
• “mmwatch command” on page 753
Location
/usr/lpp/mmfs/bin
mmauth command
Manages secure access to GPFS file systems.
Synopsis
mmauth genkey {new | commit | propagate [-N {Node[,Node...] | NodeFile | NodeClass}]}
or
mmauth grant {RemoteClusterName | all} -f {Device | all} [-a {rw | ro}] [-r {uid:gid | no}]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmauth command prepares a cluster to grant secure access to file systems owned locally. The
mmauth command also prepares a cluster to receive secure access to file systems owned by another
cluster. Use the mmauth command to generate a public/private key pair for the local cluster. A public/
private key pair must be generated on both the cluster owning the file system and the cluster desiring
access to the file system. The administrators of the clusters are responsible for exchanging the public
portion of the public/private key pair. Use the mmauth command to add or delete permission for a cluster
to mount file systems owned by the local cluster.
When a cluster generates a new public/private key pair, administrators of clusters participating in remote
file system mounts are responsible for exchanging their respective public key file /var/mmfs/ssl/
id_rsa.pub generated by this command.
The administrator of a cluster desiring to mount a file system from another cluster must provide the
received key file as input to the mmremotecluster command. The administrator of a cluster allowing
another cluster to mount a file system must provide the received key file to the mmauth command.
The keyword appearing after mmauth determines which action is performed:
add
Adds a cluster and its associated public key to the list of clusters authorized to connect to this cluster
for the purpose of mounting file systems owned by this cluster.
delete
Deletes a cluster and its associated public key from the list of clusters authorized to mount file
systems owned by this cluster.
deny
Denies a cluster the authority to mount a specific file system owned by this cluster.
gencert
Creates a client keystore with the keys and certificates required to communicate with the ISKLM key
server.
genkey
Controls the generation and propagation of the OpenSSL key files:
new
Generates a new public/private key pair for this cluster. The key pair is placed in /var/mmfs/ssl.
This must be done at least once before cipherList, the GPFS configuration parameter that
enables GPFS with OpenSSL, is set.
The new key is in addition to the currently in effect committed key. Both keys are accepted until
the administrator runs mmauth genkey commit.
commit
Commits the new public/private key pair for this cluster. Once mmauth genkey commit is run,
the old key pair will no longer be accepted, and remote clusters that have not updated their keys
(by running mmauth update or mmremotecluster update) will be disconnected.
propagate
Ensures that the currently in effect key files are placed in /var/mmfs/ssl on the nodes specified
with the -N parameter. This may be necessary if the key files are lost and adminMode central is
in effect for the cluster.
grant
Allows a cluster to mount a specific file system owned by this cluster.
show
Shows the list of clusters authorized to mount file systems owned by this cluster.
update
Updates the public key and other information associated with a cluster authorized to mount file
systems owned by this cluster.
When the local cluster name (or '.') is specified, mmauth update -l can be used to set the cipherList
value for the local cluster. Note that you cannot use this command to change the name of the local
cluster. Use the mmchcluster command for this purpose.
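For example, to set the security mode of the local cluster to AUTHONLY (one of the supported modes described under the -l option):
mmauth update . -l AUTHONLY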
Parameters
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes on which the key files should be restored. The default is -N all.
For general information on how to specify node names, see Specifying nodes as input to GPFS(tm)
commands in IBM Spectrum Scale: Administration Guide.
This command does not support a NodeClass of mount.
RemoteClusterName
Specifies the remote cluster name requesting access to local GPFS file systems.
all
Indicates all remote clusters defined to the local cluster.
ciphers
Shows the supported ciphers.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Options
-a {rw | ro}
Specifies the type of access allowed:
ro
Specifies read-only access.
rw
Specifies read/write access. This is the default.
-C NewClusterName
Specifies a new, fully-qualified cluster name for the already-defined cluster RemoteClusterName.
-f {Device | all}
Specifies the device name for a file system owned by this cluster. The Device argument is required. If
all is specified, the command applies to all file systems owned by this cluster at the time that the
command is issued.
-k KeyFile
Specifies the public key file generated by the mmauth command in the cluster requesting to remotely
mount the local GPFS file system.
-l CipherList
Sets the security mode for communications between the current cluster and the remote cluster that is
specified in the RemoteClusterName parameter. There are three security modes:
EMPTY
The sending node and the receiving node do not authenticate each other, do not encrypt
transmitted data, and do not check data integrity.
AUTHONLY
The sending and receiving nodes authenticate each other, but they do not encrypt transmitted
data and do not check data integrity. This mode is the default in IBM Spectrum Scale V4.2 or later.
Cipher
The sending and receiving nodes authenticate each other, encrypt transmitted data, and check
data integrity. To set this mode, you must specify the name of a supported cipher, such as
AES128-GCM-SHA256.
For more information about the security mode and supported ciphers, see the topic Security mode in
the IBM Spectrum Scale: Administration Guide.
-r {uid:gid | no}
Specifies a root credentials remapping (root squash) option. The UID and GID of all processes with
root credentials from the remote cluster will be remapped to the specified values. The default is not to
remap the root UID and GID. The uid and gid must be specified as unsigned integers or as symbolic
names that can be resolved by the operating system to a valid UID and GID. Specifying no, off, or
DEFAULT turns off the remapping.
For more information, see the IBM Spectrum Scale: Administration Guide and search on root squash.
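For example, a grant that gives a remote cluster read-only access and remaps remote root users to an unprivileged ID; the cluster name, device name, and ID values here are illustrative:
mmauth grant clustB.kgn.ibm.com -f fs1 -a ro -r 1001:100    # names and IDs are illustrative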
--cname CanonicalName
Specifies the canonical name of the client used in the certificate.
--cert ServerCertificateFile
Specifies the path name to a file containing an ISKLM certificate.
--out OutputKeystoreFile
Specifies the path name for the file that will contain the keystore.
--pwd-file KeystorePasswordFile
Specifies the keystore password file. If omitted, you will be prompted to enter the keystore password.
A maximum of 20 characters are allowed.
The --pwd KeystorePassword option is considered deprecated and may be removed in a future
release.
--label ClientCertificateLabel
Specifies the label of the client certificate within the keystore. A maximum of 20 characters are
allowed.
Exit status
0
Successful completion. After a successful completion of the mmauth command, the configuration
change request will have been propagated to all nodes in the cluster.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmauth command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. This is an example of an mmauth genkey new command:
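The command takes no additional operands:
mmauth genkey new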
For clustB.kgn.ibm.com, the mmauth genkey new command has been issued, but the mmauth
genkey commit command has not yet been issued.
For more information on the SHA digest, see The SHA digest in IBM Spectrum Scale: Problem
Determination Guide.
See also
• “mmremotefs command” on page 653
• “mmremotecluster command” on page 650
See also the topic about accessing GPFS file systems from other GPFS clusters in the IBM Spectrum Scale:
Administration Guide.
Location
/usr/lpp/mmfs/bin
mmbackup command
Performs a backup of a GPFS file system or independent fileset to an IBM Spectrum Protect server.
Synopsis
mmbackup {Device | Directory} [-t {full | incremental}]
[-N {Node[,Node...] | NodeFile | NodeClass}]
[-g GlobalWorkDirectory] [-s LocalWorkDirectory]
[-S SnapshotName] [-f] [-q] [-v] [-d]
[-a IscanThreads] [-n DirThreadLevel]
[-m ExecThreads | [[--expire-threads ExpireThreads] [--backup-threads BackupThreads |
[[--selective-backup-threads selBackupThreads]
[--incremental-backup-threads incBackupThreads]]]]]
[-B MaxFiles | [[--max-backup-count MaxBackupCount |
[[--max-incremental-backup-count MaxIncBackupCount]
[--max-selective-backup-count MaxSelBackupCount]]]
[--max-expire-count MaxExpireCount]]]
[--max-backup-size MaxBackupSize] [--qos QosClass] [--quote | --noquote]
[--rebuild] [--scope {filesystem | inodespace}]
[--backup-migrated | --skip-migrated] [--tsm-servers TSMServer[,TSMServer...]]
[--tsm-errorlog TSMErrorLogFile] [-L n] [-P PolicyFile]
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
Use the mmbackup command to back up the user data from a GPFS file system or independent fileset to
an IBM Spectrum Protect server or servers. The mmbackup command can be used only to back up file
systems that are owned by the local cluster.
Attention: In IBM Spectrum Scale V4.1 and later, a full backup (-t full) with mmbackup is
required if a full backup was never performed with GPFS 3.3 or later. For more information, see the
topic File systems backed up using GPFS 3.2 or earlier versions of mmbackup in the IBM Spectrum
Scale: Administration Guide.
The IBM Spectrum Protect Backup-Archive client must be installed and at the same version on all the
nodes that are executing the mmbackup command or are named in a node specification with -N. For more
information about IBM Spectrum Protect requirements for the mmbackup command, see the topic
Firewall recommendations for using IBM Spectrum Protect with IBM Spectrum Scale in the IBM Spectrum
Scale: Administration Guide.
You can run multiple simultaneous instances of mmbackup if they are on different file systems or filesets.
See the topic Backup considerations for using IBM Spectrum Protect in the IBM Spectrum Scale: Concepts,
Planning, and Installation Guide and the topic Configuration reference for using IBM Spectrum Protect with
IBM Spectrum Scale in the IBM Spectrum Scale: Administration Guide.
Parameters
Device
The device name for the file system to be backed up. File system names do not need to be fully
qualified. fs0 is as acceptable as /dev/fs0.
Directory
Specifies either the mount point of a GPFS file system or the path to an independent fileset root to
back up. The path /gpfs/fs0 can be used to specify the GPFS file system called fs0.
Notes:
• Do not specify a subdirectory path. Doing so can result in inconsistent backups.
• Do not specify a snapshot directory path. To back up a snapshot, use -S SnapshotName.
• If you specify the path of an independent fileset root, you must also specify --scope inodespace.
-t {full | incremental}
Specifies whether to perform a full backup of all of the files in the file system, or an incremental
backup of only those files that changed since the last backup was performed. The default is an
incremental backup.
Note on GPFS 3.2: A full backup expires all GPFS 3.2 format TSM inventory if all previous backups
were incremental and the GPFS 3.2 backup format was in use previously. If mmbackup on GPFS 3.2
was first used, and then only incremental backups were done on GPFS 3.4 or 3.5, then TSM still
contains 3.2 format backup inventory. This inventory will automatically be marked for expiration by
mmbackup after a successful full or incremental backup.
Note: Do not use -t full with an unlinked fileset. Use an EXCLUDE statement to exclude directories
or files from the backup operation.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies a list of nodes to run parallel instances of the backup process for better performance. The
IBM Spectrum Protect Backup-Archive client must be installed on each node in the list. All the nodes
must be in the same cluster as the node from which the mmbackup command is issued. All node
classes are supported. For more information about these options, see Specifying nodes as input to
GPFS commands in the IBM Spectrum Scale: Administration Guide.
If the -N option is not specified, then the command runs parallel instances of the backup process on
the nodes that are specified by the defaultHelperNodes attribute of the mmchconfig command. If
the defaultHelperNodes attribute is not set, then the command runs only on the node where the
mmbackup command is issued. For more information about the defaultHelperNodes attribute, see
the topic “mmchconfig command” on page 169.
-g GlobalWorkDirectory
Specifies a global work directory in which one or more nodes can store temporary files during
mmbackup command processing. For more information about specifying more than one node to
process the command, see the description of the -N option.
The global directory must be in a shared file system, and the file system must be mounted and
available for reading and writing by every node that will participate in the mmbackup command
processing.
If the -g option is not specified, then the global work directory is the directory that is specified by the
sharedTmpDir attribute of the mmchconfig command. For more information, see “mmchconfig
command” on page 169. If the sharedTmpDir attribute is not set to a value, then the global work
directory depends on the file system format version of the target file system:
• If the target file system is at file system format version 5.0.1 or later (file system format number
19.01 or later), then the global work directory is the directory .mmSharedTmpDir at the root level
of the target file system.
• If the target file system is at a file system format version that is earlier than 5.0.1, then the
command does not use a global work directory.
If the global work directory that is specified by -g option or by the sharedTmpDir attribute begins
with a forward slash (/) then it is treated as an absolute path. Otherwise it is treated as a path that is
relative to the mount point of the file system.
If both the -g option and the -s option are specified, then temporary files can be stored in both the
specified directories. In general, the local work directory contains temporary files that are written and
read by a single node. The global work directory contains temporary files that are written and read by
more than one node.
Note: The specified global work directory and its contents are excluded from the file system or fileset
backup. If the directory is located in the file system that is being backed up, make sure that you use a
directory that is dedicated to the working files. For example, if you specify the root directory of the file
system, the entire contents of the file system are excluded.
-s LocalWorkDirectory
Specifies the local directory to be used for temporary storage during mmbackup command processing.
The default directory is /tmp. A LocalWorkDirectory must exist on each node that is used to run the
mmbackup command or specified in a node specification with -N.
-S SnapshotName
Specifies the name of a global snapshot for any backup operations or a fileset-level snapshot if --
scope inodespace is also specified for a fileset backup. The snapshot must be created before the
mmbackup command is used. Snapshot names can be found with the mmlssnapshot command. If a
fileset is not present in the named snapshot and fileset-level backup is invoked with --scope
inodespace, an error is returned. If a fileset-level snapshot name is used with a file system backup,
an error is returned.
The use of -S SnapshotName is recommended because it provides mmbackup and IBM Spectrum
Protect a consistent view of GPFS from which to perform backup. Deletion of the snapshot that is used
for backup can be performed with the mmdelsnapshot command after mmbackup completes.
Note: The SnapshotName that is specified must be unique to the file system.
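A minimal sketch of this workflow, with illustrative device and snapshot names:
mmcrsnapshot fs0 backupsnap     # create the snapshot (names are illustrative)
mmbackup fs0 -S backupsnap      # back up from the snapshot
mmdelsnapshot fs0 backupsnap    # delete the snapshot after the backup completes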
-f
Specifies that processing continues when unlinked filesets are detected. All files that belong to
unlinked filesets are ignored.
Note: Because -f has a large impact on performance, avoid using it unless it is necessary to perform
a backup operation with unlinked filesets.
-q
Performs a query operation before issuing mmbackup. The IBM Spectrum Protect server might have
data stored already that is not recognized as having been backed up by mmbackup and its own
shadow database. To properly compute the set of files that currently need to be backed up, mmbackup
can perform an IBM Spectrum Protect query and process the results to update its shadow database.
Use the -q switch to perform this query and then immediately commence the requested backup
operation.
Note: Do not use -q with the --rebuild parameter.
-v
Specifies verbose message output. Use this flag to cause mmbackup to issue more verbose messages
about its processing. See also “Environment” on page 106.
-d
Gathers debugging information that is useful to the IBM Support Center for diagnosing problems.
-a IscanThreads
Specifies the number of threads and sort pipelines each node runs during parallel inode scan and
policy evaluation. The default value is 2.
-n DirThreadLevel
Specifies the number of threads that are created and dispatched within each mmapplypolicy
process during the directory scan phase. The default value is 24.
-m ExecThreads
Specifies the number of threads that are created and dispatched within each mmapplypolicy
process during the policy execution phase. The default value for mmapplypolicy is 24; however, the
default value for mmbackup is 1. This option cannot be used with the --expire-threads, --
backup-threads, --selective-backup-threads, or --incremental-backup-threads
options.
--expire-threads ExpireThreads
Specifies the number of worker threads permitted on each node to perform dsmc expire tasks in
parallel. This option cannot be used with the -m option. Valid values are 1 - 32. The default value is 4.
--backup-threads BackupThreads
Specifies the number of worker threads that are permitted on each node to perform dsmc
selective or dsmc incremental tasks in parallel. This option cannot be used with the -m,
--selective-backup-threads, or --incremental-backup-threads options.
--quote | --noquote
Specifies whether to decorate (or not to decorate) file-list entries with quotation marks. The
mmbackup command uses file lists to convey the lists of files to the IBM Spectrum Protect Backup-
Archive client program. Depending on IBM Spectrum Protect client configuration options, the file lists
might not require each file name to be surrounded by quotation marks. If certain IBM Spectrum
Protect client configuration options are in use, do not add quotation marks to file-list entries. Use the
--noquote option in these instances.
--rebuild
Specifies whether to rebuild the mmbackup shadow database from the inventory of the IBM Spectrum
Protect server. This option is similar to the -q option; however, no backup operation proceeds after
the shadow database is rebuilt. Use this option if the shadow database of mmbackup is known to be
out of date, but the rebuilding operation must be done at a time when the IBM Spectrum Protect
server is less loaded than during the normal time mmbackup is run.
If backup files with the old snapshot name /Device/.snapshots/.mmbuSnapshot exist in the
inventory of the IBM Spectrum Protect server, those files will be expired from the IBM Spectrum
Protect server after the shadow database is rebuilt and any successful incremental or full backup
completes.
Note: Do not use --rebuild with the -q parameter.
--scope {filesystem | inodespace}
Specifies that one of the following traversal scopes be applied to the policy scan and backup
candidate selection:
filesystem
Scans all the objects in the file system that are specified by Device or that are mounted at the
Directory specified. This is the default behavior.
inodespace
Specifies that the scan is limited in scope to objects in the same single inode space from which the
Directory is allocated. The scan might span more than one fileset if those filesets share inode
space; for example, dependent filesets.
--backup-migrated | --skip-migrated
HSM migrated objects are normally skipped even if they need to be backed up.
--backup-migrated
When specified, HSM migrated objects that need to be backed up are sent for backup. This can cause
an HSM recall and other associated costs. When the same IBM Spectrum Protect server instance is
used for HSM and backup, this option protects migrated, changed objects more efficiently. These
objects are copied internally on the IBM Spectrum Protect server from the HSM storage pool directly
to the BACKUP storage pool without incurring object recalls and their local storage costs. After
backup, these objects will remain migrated. This occurs only if the object has no ACL or XAttrs
attached.
--skip-migrated
When specified, HSM migrated objects that need to be backed up are not sent for backup. The path
names of these objects are listed in a file, mmbackup.hsmMigFiles.TSMServer, that is located in
the Directory or on the Device that is being backed up. The path names that are listed can then be
recalled by the administrator in a staged manner. This is the default action that is taken for HSM
migrated objects.
--tsm-servers TSMServer[,TSMServer...]
Specifies the name of the IBM Spectrum Protect server or servers that are to be used for this backup.
Each of the servers is used for the specified backup task.
If this option is not specified, the mmbackup command will backup to the servers that are specified in
the dsm.sys file.
--tsm-errorlog TSMErrorLogFile
Specifies the path name of the log file to pass to IBM Spectrum Protect Backup-Archive client
commands.
-L n
Controls the level of information that is displayed by the mmapplypolicy command that is invoked
by using the mmbackup command. This option and its value are passed directly to the
mmapplypolicy command (the use of this option is generally for debugging purposes). Refer to the
mmapplypolicy command for an explanation of supported values.
-P PolicyFile
Specifies a customized policy rules file for the backup.
Environment
The behavior of mmbackup can be influenced by several environment variables when set.
Variables that apply to IBM Spectrum Protect Backup-Archive client program dsmc
MMBACKUP_DSMC_MISC
The value of this variable is passed as arguments to dsmc restore and dsmc query
{backup,inclexcl,session} commands.
MMBACKUP_DSMC_BACKUP
The value of this variable is passed as arguments to dsmc, dsmc selective, and dsmc
incremental commands.
MMBACKUP_DSMC_EXPIRE
The value of this variable is passed as arguments to dsmc expire commands.
Variables that change mmbackup output progress reporting
MMBACKUP_PROGRESS_CONTENT
Controls what progress information is displayed to the user as mmbackup runs. It is a bit field with
the following bit meanings:
0x01
Specifies that basic text progress for each server is to be displayed.
0x02
Specifies that additional text progress for phases within each server is to be displayed.
0x04
Specifies that numerical information about files being considered is to be displayed.
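Because the value is a bit field, the categories can be combined; for example, the following setting (illustrative) enables all three:
export MMBACKUP_PROGRESS_CONTENT=7    # 0x01 + 0x02 + 0x04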
MMBACKUP_PROGRESS_INTERVAL
Controls how frequently status callouts are made. The value is the minimum number of seconds
between calls to the MMBACKUP_PROGRESS_CALLOUT script or program. It does not affect how
frequently messages are displayed, except for the messages of MMBACKUP_PROGRESS_CONTENT
category 0x04.
MMBACKUP_PROGRESS_CALLOUT
Specifies the path to a program or script to be called with a formatted argument, as described in
the topic MMBACKUP_PROGRESS_CALLOUT environment variable in the IBM Spectrum Scale:
Administration Guide.
Variables that change mmbackup debugging facilities
In case of a failure, certain debugging and data collection can be enabled by setting the specified
environment variable value.
DEBUGmmbackup
This variable controls what debugging features are enabled. It is interpreted as a bit mask with the
following bit meanings:
0x001
Specifies that basic debug messages are printed to STDOUT. Because mmbackup comprises
multiple components, the debug message prefixes can vary. Some examples include:
mmbackup:mbackup.sh
DEBUGtsbackup33:
0x002
Specifies that temporary files are to be preserved for later analysis.
0x004
Specifies that all dsmc command output is to be mirrored to STDOUT.
DEBUGmmcmi
This variable controls debugging facilities in the mmbackup helper program mmcmi, which is used
when the cluster minReleaseLevel is less than 3.5.0.11.
DEBUGtsbuhelper
This variable controls debugging facilities in the mmbackup helper program tsbuhelper, which is
used when the cluster minReleaseLevel is greater than or equal to 3.5.0.11.
Variables that change mmbackup record locations
MMBACKUP_RECORD_ROOT
Specifies an alternative directory name for storing all temporary and permanent records for the
backup. The directory name that is specified must be an existing directory and it cannot contain
special characters (for example, a colon, semicolon, blank, tab, or comma).
The directory that is specified for MMBACKUP_RECORD_ROOT must be accessible from each node
that is specified with the -N option.
Exit status
0
Successful completion. All of the eligible files were backed up.
1
Partially successful completion. Some files, but not all eligible files, were backed up. The shadow
database or databases reflect the correct inventory of the IBM Spectrum Protect server. Invoke
mmbackup again to complete the backup of eligible files.
2
A failure occurred that prevented backing up some or all files or recording any progress in the shadow
database or databases. Correct any known problems and invoke mmbackup again to complete the
backup of eligible files. If some files were backed up, using the -q or --rebuild option can help
avoid backing up some files additional times.
Security
You must have root authority to run the mmbackup command.
The node on which the command is issued, and all other IBM Spectrum Protect Backup-Archive client
nodes, must be able to execute remote shell commands on any other node in the cluster without the use
of a password and without producing any extraneous messages. For more information, see Requirements
for administering a GPFS file system in IBM Spectrum Scale: Administration Guide.
Examples
1. To perform an incremental backup of the file system gpfs0, issue this command:
mmbackup gpfs0
--------------------------------------------------------
mmbackup: Backup of /gpfs/gpfs0 begins at Mon Apr 7 15:37:50 EDT 2014.
--------------------------------------------------------
Mon Apr 7 15:38:04 2014 mmbackup:Scanning file system gpfs0
Mon Apr 7 15:38:14 2014 mmbackup:Determining file system changes for gpfs0 [balok1].
Mon Apr 7 15:38:14 2014 mmbackup:changed=364, expired=0, unsupported=0 for server [balok1]
Mon Apr 7 15:38:14 2014 mmbackup:Sending files to the TSM server [364 changed, 0 expired].
mmbackup: TSM Summary Information:
Total number of objects inspected: 364
Total number of objects backed up: 364
3. To perform an incremental backup of the file system gpfs0 with more progress information displayed,
first issue this command:
export MMBACKUP_PROGRESS_CONTENT=3
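Then start the backup. Because an incremental backup is the default, the command can be as simple as the following:
mmbackup gpfs0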
Fri Oct 26 13:13:09 2018 mmbackup:Policy for expiry returned 0 Highest TSM error 0
Fri Oct 26 13:13:09 2018 mmbackup:Performing backup operations
Fri Oct 26 13:13:15 2018 mmbackup:Backup job finished: processed:11 failed:0 rc:0
Fri Oct 26 13:13:15 2018 mmbackup:Completed policy backup run with 0 policy errors, 0 files failed, 0 severe errors, returning rc=0.
Fri Oct 26 13:13:15 2018 mmbackup:Policy for backup returned 0 Highest TSM error 0
Fri Oct 26 13:13:15 2018 mmbackup:Performing backup operations
Fri Oct 26 13:13:21 2018 mmbackup:Backup job finished: processed:10 failed:0 rc:0
Fri Oct 26 13:13:21 2018 mmbackup:Backup job finished: processed:10 failed:0 rc:0
Fri Oct 26 13:13:21 2018 mmbackup:Completed policy backup run with 0 policy errors, 0 files failed, 0 severe errors, returning rc=0.
Fri Oct 26 13:13:21 2018 mmbackup:Policy for backup returned 0 Highest TSM error 0
mmbackup: TSM Summary Information:
Total number of objects inspected: 41
Total number of objects backed up: 31
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 10
Total number of objects failed: 0
Total number of objects encrypted: 0
Total number of bytes inspected: 731973
Total number of bytes transferred: 739603
Fri Oct 26 13:13:21 2018 mmbackup:analyzing: results from SERVER1.
Fri Oct 26 13:13:21 2018 mmbackup:Copying updated shadow file to the TSM server
Fri Oct 26 13:13:23 2018 mmbackup:Done working with files for TSM Server: SERVER1.
Fri Oct 26 13:13:23 2018 mmbackup:Completed backup and expire jobs.
Fri Oct 26 13:13:23 2018 mmbackup:TSM server SERVER1
had 0 failures or excluded paths and returned 0.
Its shadow database has been updated. Shadow DB state:updated
Fri Oct 26 13:13:23 2018 mmbackup:Completed successfully. TSM exit status: exit 0
----------------------------------------------------------
mmbackup: Backup of /gpfs/gpfs0 completed successfully at Fri Oct 26 13:13:23 EDT 2018.
----------------------------------------------------------
4. To perform an incremental backup of the objects in the inode space of the /gpfs/testfs/infs2
directory to the balok1 server, issue this command:
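A likely form of the command, sketched from the parameters described earlier in this topic:
mmbackup /gpfs/testfs/infs2 --scope inodespace --tsm-servers balok1    # sketch based on the example description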
--------------------------------------------------------
mmbackup: Backup of /gpfs/testfs/infs2 begins at Wed May 27 12:58:39 EDT 2015.
--------------------------------------------------------
Wed May 27 12:58:48 2015 mmbackup:Scanning fileset testfs.indfs2
Wed May 27 12:58:53 2015 mmbackup:Determining fileset changes for testfs.indfs2 [balok1].
Wed May 27 12:58:53 2015 mmbackup:changed=2, expired=2, unsupported=0 for server [balok1]
Wed May 27 12:58:53 2015 mmbackup:Sending files to the TSM server [2 changed, 2 expired].
mmbackup: TSM Summary Information:
Total number of objects inspected: 4
Total number of objects backed up: 2
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 2
Total number of objects failed: 0
Total number of objects encrypted: 0
Total number of bytes inspected: 53934
Total number of bytes transferred: 53995
----------------------------------------------------------
mmbackup: Backup of /gpfs/testfs/infs2 completed successfully at Wed May 27 12:59:31 EDT
2015.
----------------------------------------------------------
5. To perform an incremental backup of a global snapshot that is called backupsnap6 to the balok1
server, issue this command:
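(A representative command follows; -S names the global snapshot to back up, and --tsm-servers selects
the balok1 server.)
mmbackup testfs -S backupsnap6 --tsm-servers balok1 -t incremental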
--------------------------------------------------------
mmbackup: Backup of /gpfs/testfs begins at Wed May 27 13:08:45 EDT 2015.
--------------------------------------------------------
Wed May 27 13:08:50 2015 mmbackup:Scanning file system testfs
Wed May 27 13:08:53 2015 mmbackup:Determining file system changes for testfs [balok1].
Wed May 27 13:08:53 2015 mmbackup:changed=130, expired=100, unsupported=0 for server [balok1]
Wed May 27 13:08:53 2015 mmbackup:Sending files to the TSM server [130 changed, 100 expired].
Wed May 27 13:08:59 2015 mmbackup:Policy for expiry returned 9 Highest TSM error 0
mmbackup: TSM Summary Information:
Total number of objects inspected: 230
Total number of objects backed up: 130
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 100
Total number of objects failed: 0
Total number of objects encrypted: 0
Total number of bytes inspected: 151552
Total number of bytes transferred: 135290
----------------------------------------------------------
mmbackup: Backup of /gpfs/testfs completed successfully at Wed May 27 13:09:05 EDT 2015.
----------------------------------------------------------
See also
• “mmapplypolicy command” on page 80
Location
/usr/lpp/mmfs/bin
mmbackupconfig command
Collects GPFS file system configuration information.
Synopsis
mmbackupconfig Device -o OutputFile
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
The mmbackupconfig command, in conjunction with the mmrestoreconfig command, can be used to
collect basic file system configuration information that can later be used to restore the file system. The
configuration information backed up by this command includes block size, replication factors, number and
size of disks, storage pool layout, filesets and junction points, policy rules, quota information, and a
number of other file system attributes.
This command does not back up user data or individual file attributes.
For more information about the mmimgbackup and mmimgrestore commands, see the topic Scale out
Backup and Restore (SOBAR) in the IBM Spectrum Scale: Administration Guide.
Parameters
Device
The device name of the file system to be backed up. File system names need not be fully-qualified.
fs0 is as acceptable as /dev/fs0.
This must be the first parameter.
-o OutputFile
The path name of a file to which the file system information is to be written. This file must be provided
as input to the subsequent mmrestoreconfig command.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmbackupconfig command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To back up file system fsiam2 to the output file backup.config.fsiam2, issue:
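mmbackupconfig fsiam2 -o backup.config.fsiam2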
See also
• “mmimgbackup command” on page 450
• “mmimgrestore command” on page 454
• “mmrestoreconfig command” on page 661
Location
/usr/lpp/mmfs/bin
mmbuildgpl command
Manages prerequisite packages for Linux and builds the GPFS portability layer.
Synopsis
mmbuildgpl [--quiet] [--build-package] [-v]
Availability
Available on all IBM Spectrum Scale editions. Available and needed only on Linux.
Description
Use the mmbuildgpl command to manage and verify prerequisite packages for Linux and build the GPFS
portability layer. If all packages are installed correctly, mmbuildgpl builds the GPFS portability layer. If
any packages are missing, the package names are displayed. The missing packages can be installed
manually.
Tip: You can configure a cluster to rebuild the GPL automatically whenever a new level of the Linux kernel
is installed or whenever a new level of IBM Spectrum Scale is installed. For more information, see the
description of the autoBuildGPL attribute in the topic mmchconfig command in the IBM Spectrum Scale:
Command and Programming Reference.
Parameters
--quiet
Specifies that when there are any missing packages, the mmbuildgpl command installs the
prerequisite packages automatically by using the default package manager.
--build-package
Builds an installable package (gpfs.gplbin) for the portability layer binaries after compilation is
successful. This option builds an RPM package on SLES and RHEL Linux and a Debian package on
Debian and Ubuntu Linux.
When the command finishes, it displays the location of the generated package as in the following
examples:
Wrote: /root/rpmbuild/RPMS/x86_64/gpfs.gplbin-3.10.0-229.el7.x86_64-5.1.0-x.x86_64.rpm
or
Wrote: /tmp/deb/gpfs.gplbin-4.4.0-127-generic_5.1.0-x_amd64.deb
You can then copy the generated package to other machines for deployment. By default, the
generated package can be deployed only to machines whose architecture, distribution level, Linux
kernel, and IBM Spectrum Scale maintenance level are identical with those of the machine on which
the gpfs.gplbin package was built. However, you can install the generated package on a machine
with a different Linux kernel by setting the MM_INSTALL_ONLY environment variable before you
install the generated package. If you install the gpfs.gplbin package, you do not need to install the
gpfs.gpl package.
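For example, to install a previously built package on a machine with a different Linux kernel, commands
similar to the following might be used (the package file name is taken from the earlier example, and
setting the variable to 1 is an assumption):
export MM_INSTALL_ONLY=1
rpm -ivh gpfs.gplbin-3.10.0-229.el7.x86_64-5.1.0-x.x86_64.rpm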
Note: During the package generation, temporary files are written to the /tmp/rpm or /tmp/deb
directory, so be sure there is sufficient space available. By default, the generated package goes
to /usr/src/packages/RPMS/<arch> for SUSE Linux Enterprise Server, /usr/src/redhat/RPMS/
<arch> for Red Hat Enterprise Linux, and /tmp/deb for Ubuntu Linux.
Important:
• The GPFS portability layer is specific to both the current kernel and the GPFS version. If either the
kernel or the GPFS version changes, a new GPFS portability layer needs to be built.
• Although operating system kernels might upgrade to a new version, they are not active until after a
reboot. Thus, a GPFS portability layer for this new kernel must be built after a reboot of the
operating system.
• Before you install a new GPFS portability layer, make sure to uninstall the prior version of the GPFS
portability layer first.
-v
Specifies that the output is verbose and contains information for debugging purposes.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmbuildgpl command.
Examples
To build the GPFS portability layer, issue the mmbuildgpl command with no parameters, as in the
following example:
# mmbuildgpl
--------------------------------------------------------
mmbuildgpl: Building GPL (5.0.3.0) module begins at Tue Mar 19 09:09:21 EDT 2019.
--------------------------------------------------------
Verifying Kernel Header...
kernel version = 31000229 (31000229000000, 3.10.0-229.el7.x86_64, 3.10.0-229)
module include dir = /lib/modules/3.10.0-229.el7.x86_64/build/include
module build dir = /lib/modules/3.10.0-229.el7.x86_64/build
kernel source dir = /usr/src/linux-3.10.0-229.el7.x86_64/include
Found valid kernel header file under /usr/src/kernels/3.10.0-229.el7.x86_64/include
Verifying Compiler...
make is present at /bin/make
cpp is present at /bin/cpp
gcc is present at /bin/gcc
g++ is present at /bin/g++
ld is present at /bin/ld
Verifying Additional System Headers...
Verifying kernel-headers is installed ...
Command: /bin/rpm -q kernel-headers
The required package kernel-headers is installed
make World ...
make InstallImages ...
--------------------------------------------------------
mmbuildgpl: Building GPL module completed successfully at Tue Mar 19 09:09:34 EDT 2019.
--------------------------------------------------------
See also
• Building the GPFS portability layer on Linux nodes in IBM Spectrum Scale: Concepts, Planning, and
Installation Guide.
Location
/usr/lpp/mmfs/bin
mmcachectl command
Displays information about files and directories in the local page pool cache.
Synopsis
mmcachectl
show [--device Device [{--fileset Fileset | --inode-num InodeNum}]] [--show-filename] [-Y]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmcachectl command displays information about files and directories in the local page pool cache.
You can display information for a single file, for the files in a fileset, or for all the files in a file system. If
you issue mmcachectl show with no parameters, it displays information for all the files in all the file
systems that have file data cached in the local page pool.
The command lists the following information for each file:
• The file system name.
• The fileset ID.
• The inode number.
• The snapshot ID.
• The file type (inode type): file, directory, link, or special.
• The number of instances of the file that are open.
• The number of instances of the file that are open for direct I/O.
• The file size in bytes.
• The size of the file data that is in the page pool in bytes.
• The cache location of the file attributes:
F
The file attributes are stored in the file cache.
FD
The file attributes are stored in the file cache and the file data is stored in the inode (data-in-inode).
For information about data-in-inode, see the topic Use of disk storage and file structure within a
GPFS file system in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
C
The file attributes are stored in the stat cache. For information about the stat cache, see the topic
Non-pinned memory in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
• The full path name of the file. For a file with multiple hard links, the command displays an asterisk (*)
before the file name to indicate that it is one of the path names of the file. For more information, see the
description of the --show-filename parameter later in this topic.
Parameters
--device Device
Specifies the device name of a file system for which you want to display information.
--fileset Fileset
Specifies the name of a fileset for which you want to display information.
--inode-num InodeNum
Specifies the inode number of a file for which you want to display information.
--show-filename
Causes the command to display the full path name of each file.
Note:
• This parameter is supported only on Linux nodes with a Linux kernel of 3.10.0-0123 or later and is
dependent on the path name being available in the Linux directory cache.
• The parameter is not supported on AIX or on the Windows operating system.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmcachectl command.
The node on which you enter the command must be able to execute remote shell commands on any other
administration node in the cluster. It must be able to do so without the use of a password and without
producing any extraneous messages. For more information, see Requirements for administering a GPFS
file system in the IBM Spectrum Scale: Administration Guide.
Examples
1. The following command lists information for all the files in all the file systems that have file data
cached in the local page pool. In this example, the only file system that has data cached in the local
page pool is gpfs_fs0:
# mmcachectl show
FSname   Fileset Inode SnapID FileType  NumOpen   NumDirect Size    Cached       Cached
         ID                             Instances IO        (Total) (InPagePool) (InFileCache)
------------------------------------------------------------------------------------------------------------------
gpfs_fs0 0 40 0 special 1 0 524288 8192 F
gpfs_fs0 0 56833 0 file 0 0 0 0 F
gpfs_fs0 0 4 0 special 1 0 524288 524288 F
gpfs_fs0 0 18229 0 link 0 0 14 0 FD
gpfs_fs0 0 3 0 directory 0 0 262144 262144 F
gpfs_fs0 0 56836 0 directory 0 0 131072 0 F
gpfs_fs0 0 56832 0 file 0 0 166611 0 F
gpfs_fs0 0 39 0 special 1 0 524288 8192 F
gpfs_fs0 0 40704 0 file 0 0 10 0 FD
gpfs_fs0 0 41 0 special 1 0 524288 8192 F
gpfs_fs0 0 55554 0 directory 0 0 3968 0 FD
gpfs_fs0 0 42 0 special 0 0 524288 524288 F
gpfs_fs0 0 55552 0 file 0 0 5 0 FD
gpfs_fs0 0 56834 0 file 0 0 8192000000 0 F
gpfs_fs0 0 55553 0 file 0 0 13 0 FD
gpfs_fs0 0 22016 0 file 0 0 10 0 FD
2. The following command lists information for all the files in all the file systems that have file data
cached in the local page pool. Because the --show-filename parameter is specified, the command
also lists the full path name of each file in the last column:
# mmcachectl show --show-filename
FSname Fileset Inode SnapID FileType NumOpen NumDirect Size Cached Cached FileName
ID Instances IO (Total) (InPagePool) (InFileCache)
--------------------------------------------------------------------------------------------------------------------------------
gpfs_fs0 0 40 0 special 1 0 524288 8192 F -
gpfs_fs0 0 56833 0 file 0 0 0 0 F * /gpfs_fs0/big_rand_link1
gpfs_fs0 0 4 0 special 1 0 524288 524288 F -
gpfs_fs0 0 18229 0 link 0 0 14 0 FD /gpfs_fs0/test2_soft_link
gpfs_fs0 0 3 0 directory 0 0 262144 262144 F /gpfs_fs0/
gpfs_fs0 0 56836 0 directory 0 0 131072 0 F /gpfs_fs0/temp_dir
gpfs_fs0 0 56832 0 file 0 0 166611 0 F /gpfs_fs0/bis_csc
gpfs_fs0 0 39 0 special 1 0 524288 8192 F -
gpfs_fs0 0 40704 0 file 0 0 10 0 FD /gpfs_fs0/test
gpfs_fs0 0 41 0 special 1 0 524288 8192 F -
gpfs_fs0 0 55554 0 directory 0 0 3968 0 FD /gpfs_fs0/dir
gpfs_fs0 0 42 0 special 0 0 524288 524288 F -
gpfs_fs0 0 55552 0 file 0 0 5 0 FD /gpfs_fs0/test1
gpfs_fs0 0 56834 0 file 0 0 8192000000 0 F /gpfs_fs0/big_zero
gpfs_fs0 0 55553 0 file 0 0 13 0 FD /gpfs_fs0/test2
gpfs_fs0 0 22016 0 file 0 0 10 0 FD /gpfs_fs0/test3
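The output can also be limited to a single file by combining the --device and --inode-num parameters
with --show-filename; for example (the device name and inode number are illustrative):
# mmcachectl show --device gpfs_fs0 --inode-num 55552 --show-filename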
See also
• mmfsadm command in the IBM Spectrum Scale: Problem Determination Guide.
Location
/usr/lpp/mmfs/bin
mmcallhome command
Manages the call home operations.
Synopsis
mmcallhome group add groupName server [--node {all | childNode[,childNode...]}]
or
mmcallhome run SendFile --file file [--desc DESC | --pmr {xxxxx.yyy.zzz | TSxxxxxxxxx}]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmcallhome command to configure, enable, run, schedule, and monitor call home related tasks
in the IBM Spectrum Scale cluster.
By using this command, predefined data from each node can be collected on a regular basis or on demand
and uploaded to IBM. IBM support and development teams can use this data to understand how the
customers are using IBM Spectrum Scale. In case of issues, the data can be referenced for problem
analysis. The data can also possibly be used to provide advice to customers regarding failure prevention.
Since an IBM Spectrum Scale cluster consists of multiple nodes, the call home feature introduces the
concept of the call home group to manage them. A call home group consists of one gateway node (which
is defined as a call home node) and one or more client nodes (which are defined as call home child
nodes). The call home node initiates the data collection from the call home child nodes and uploads data
to IBM using the HTTPS protocol. Since each call home group can be configured independently, the group
concept can be used for special conditions, such as split clusters, that require all the group members
to be on the same side to avoid unnecessary data transfer over large distances. A call home group can also
be mapped to a node group or to other cluster-specific attributes. The call home node needs to have access
to the external network via port 443. The maximum recommended number of nodes per group is 128.
Each cluster node can be a member of at most one group. Multiple call home groups can be defined
within an IBM Spectrum Scale cluster.
For more information about the call home feature, see the Monitoring the IBM Spectrum Scale system
remotely by using call home section in IBM Spectrum Scale: Problem Determination Guide.
Parameters
group
Manages topology with one of the following actions:
add
Creates a call home group, which is a group of nodes consisting of one call home node and
multiple call home child nodes. Multiple call home groups can be configured within an IBM
Spectrum Scale cluster.
The call home node initiates data collection within the call home group and uploads the data
package to the IBM server.
group
Specifies the name of the call home group.
Note: The group name can consist of any alphanumeric characters and these non-
alphanumeric characters: '-', '_', and '.'.
Important: The group name cannot be global. Call home uses global as the default name
for the group that contains the global values that are applied to all groups.
server
Specifies the name of the call home server belonging to the call home group.
Note: The server name can consist of any alphanumeric characters and these non-
alphanumeric characters: '-', '_', and '.
--node childNode
Specifies the call home child nodes.
Note: The child node name can consist of any alphanumeric characters and these non-
alphanumeric characters: '-', '_', and '.
--node all
Selects all Linux nodes in the IBM Spectrum Scale cluster. When this parameter is omitted,
only the call home node is added as a child node. The call home node is always
added to the child node group.
list
Displays the configured call home groups.
Note:
If nodes that are members of a call home group are deleted, or their long admin node names
(including domain) are changed, the mmcallhome group list command displays ------
instead of the names of such nodes. In such cases, you must delete the corresponding groups,
and then create new groups if needed. The deletion of the call home groups is not done
automatically, since in some cases this might cause the deletion of the call home groups without
re-creating them.
--long
Displays the node names as long admin node names. Without --long the node names are listed
as short admin node names.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
delete
Deletes the specified call home group.
GroupName
Specifies the name of the call home group that needs to be deleted.
auto
Enables automatic creation of call home groups.
--server ServerName
Specifies one or more call home servers. Each server must be able to access the IBM call
home servers over the internet. If no server is specified, the system detects the call home node
automatically. In this scenario, the system checks if the detected node can access the
internet. If a server is specified, the defined nodes are used as call home nodes without any
further check.
If a proxy is needed, specify the proxy by using the mmcallhome proxy command before
running the mmcallhome group auto command.
Note: If this option is specified, the corresponding nodes are assumed to be able to access the
IBM call home servers over the internet and no further checks in this regard are performed. If
this option is not used, the mmcallhome command tests all potential call home nodes for the
connectivity to the IBM call home servers. In this case, if a proxy is configured via the
mmcallhome proxy command, this proxy is used to check the connectivity, otherwise a
direct connection is attempted.
--nodes
Specifies the names of the call home child nodes to distribute into groups.
all
Specifies that all cluster nodes must be distributed into call home groups.
--force
Creates new groups after deleting the existing groups.
Note: If this option is selected but no server nodes are specified using the --server option
or detected automatically, the operation is aborted and the existing groups are not deleted.
--group-names
Specifies the names of the call home groups to create. The number of call home group names
must be greater than or equal to the number of created call home groups, or the execution of the
mmcallhome group auto command is aborted with an error. If this option is not specified,
the automatically created groups are named in the following way: autoGroup_1,
autoGroup_2, ...
--enable
Enables the cluster for call home functionality. If no other option is defined, the enable
parameter shows the license and asks for acceptance by default.
LICENSE
Shows the license and terminates.
ACCEPT
Does not show the license and assumes that the license is accepted.
--disable
Disables call home.
Note: All groups are disabled.
change
Changes an existing call home group by adding new child nodes to it or removing existing child
nodes from the group.
GroupName
Specifies the name of an existing call home group.
--add-nodes ChildNode1[,ChildNode2...]
Specifies the cluster nodes to add to the call home group as call home child nodes.
--delete-nodes ChildNode1[,ChildNode2...]
Specifies the call home child nodes that belong to the specified group, and must be
removed.
capability
Manages the overall call home activities with one of the following actions:
list
Displays the configured customer information such as the current enable or disable status, call
home node, and call home child nodes.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
enable
Enables the call home service.
Note: When you run the mmcallhome capability enable command, you are required to
accept a data privacy disclaimer. For more information, see the Data privacy with call home
section in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
disable
Disables the currently running call home service.
info
Manages customer data with one of the following actions:
list
Displays the configured parameter values.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
change
Sets parameter values.
[--key value]
Indicates a placeholder pointing to the following table:
customer-id (Customer ID)
This can consist of any alphanumeric characters and the following non-alphanumeric characters: '-', '_', and '.'.
In special cases, the following customer IDs should be used:
• For developer edition: DEVLIC
• For test edition for customers who are trying IBM Spectrum Scale before buying: TRYBUY
proxy
Configures proxy-related parameters with one of the following actions:
enable
Enables call home to use a proxy for its uploads. Requires the call home settings proxy-
location and proxy-port to be defined.
[--with-proxy-auth]
Enables user ID and password authentication to the proxy server. Requires the call home
settings proxy-username and proxy-password to be set.
disable
Disables proxy access.
list
Displays the currently configured proxy-related parameter values.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
change
Modifies the proxy configuration.
[--key value]
Indicates a placeholder for a proxy-related setting and its new value, such as proxy-location,
proxy-port, proxy-username, or proxy-password.
schedule
Configures scheduling of call home tasks with one of the following actions:
list
Shows if the scheduled data collection tasks or the event-based uploads feature are enabled. For
more information about the collected information, see the Types of call home data upload section
in the IBM Spectrum Scale: Problem Determination Guide
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
add --task {DAILY | WEEKLY | EVENT}
Enables the specified scheduled task. The specified task can be a daily or weekly data collection
and upload operation, or an event-based mini-snap creation.
delete --task {DAILY | WEEKLY | EVENT}
Disables the specified scheduled task. The specified task can be a daily or weekly data collection
and upload operation, or an event-based mini-snap creation.
run
Executes one-time gather or send tasks with one of the following options:
GatherSend
Executes a one-time gather-send task, which collects the predefined data and uploads it.
--task daily
Chooses the data to be uploaded, as specified under the daily and all data collection
schedules. For more information about the daily and all data collection schedules, see the
Scheduled data upload section in the IBM Spectrum Scale: Problem Determination Guide.
--task weekly
Chooses the data to be uploaded, as specified under the weekly and all data collection
schedules. For more information about the daily and all data collection schedules, see the
Scheduled data upload section in the IBM Spectrum Scale: Problem Determination Guide.
SendFile
Uploads a specified file to IBM. The maximum file size is limited to 8 GiB from the IBM server side,
but this limit might be increased in the future.
--file file
Specifies the name of the file that needs to be uploaded.
Note: The name can consist of any alphanumeric characters and these non-alphanumeric
characters: '-', '_', '.'
[--desc DESC]
Specifies the description of the file that needs to be uploaded. This is added to the data
package file name.
Note: This text can consist of any alphanumeric characters and these non-alphanumeric
characters: '-', '_', '.', ' ', ','
[--pmr {xxxxx.yyy.zzz | TSxxxxxxxxx}]
Specifies either the dot-delimited PMR descriptor, where x, y and z could be digits, and y might
additionally be a letter, or a Salesforce case descriptor, where each x is a digit.
status
Displays status of the call home tasks with one of the following options:
list
Displays the status of the currently running and the already completed call home tasks.
--task {DAILY | WEEKLY | SENDFILE | SENDPMRDATA}
Specifies the requested call home task type. The following types are available:
DAILY
Daily executed scheduled uploads.
WEEKLY
Weekly executed scheduled uploads.
SENDFILE
Files that are sent on demand, that are not PMR related or Salesforce case related.
SENDPMRDATA
Files that are sent on demand, that are either PMR related or Salesforce case related.
[--numbers num]
Specifies the maximum number of latest tasks that can be listed for each requested task type.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
--verbose
Lists additional information.
When the mmcallhome status list --verbose command is executed, the following information
is shown in the output:
• Group: The name of the call home group where the task was executed.
• Task: Type of the executed call home task: DAILY, WEEKLY, SENDFILE or SENDPMRDATA.
• Start Time: A time stamp, specifying the start time of the call home task. This timestamp is also
used to uniquely identify a call home task within a call home group.
• Updated Time: A time stamp of the last status update for the corresponding task.
• Status: You can get one of the following values as status information:
– success
– running
– failed
– minor error
– aborted
• RC or Step:
– For a task that is currently running, the current execution step is displayed.
– For a successful task, RC=0 is displayed.
– For a failed task, the failure details are displayed.
• Package File Name: Name of the created data package to be uploaded.
• Original Filename: Name of the transferred file for the SendFile tasks.
diff
Displays the configuration difference between any of the following data collection files:
• DAILY
• WEEKLY
• HEARTBEAT
When this command is specified without any option other than --verbose or -Y, it compares the
latest two call home data collection files, and prints out the difference.
--last-days num
Specifies the call home data collection file that was created num days back to compare with
the most recent one.
--old dcFileName
Specifies the file name of the call home data collection file to compare with the most recent
file. The first file that matches is selected. Either the full filename or just a substring such as
the creation date in the format 20200229 can be specified.
Note: If the next option --new <dcFileName> is also specified, this data collection file is
selected instead of the most recent data file to compare with the old call home data collection
file.
--new dcFileName
This parameter can only be used in combination with the --old dcFileName option. If specified,
the specified data collection file is compared with the file specified using the --old option,
instead of the most recent one. The selected file must be newer than the one specified with --
old option.
--verbose
Prints more verbose output. Instead of just the name of the configuration object that was
deleted or created, all its detailed fields are printed out.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
delete
Deletes the status log entries for the specified tasks:
--task {DAILY | WEEKLY | SENDFILE | SENDPMRDATA}
Specifies the task type for which the status log entries should be deleted.
--startTime starttime
Specifies the start time of the log to delete.
--startTimeBefore starttime
All logs older than the time specified by this option are deleted.
--all
All logs are deleted.
test
Executes detailed system checks:
connection
Checks the connectivity to the IBM e-support infrastructure. A proxy is used if the call home proxy
settings are enabled. If the proxy setting is disabled, direct connections are attempted.
If this command is executed from a node that is a member of a currently existing call home group,
the system performs the connectivity check for the call home master node of this group. If this
command is executed from a node that is not a member of any existing call home group, the
system performs a connectivity check for this node, and checks if it can become a call home
master node.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmcallhome command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To configure a call home group, issue this command:
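(The group and node names in the following command are illustrative.)
mmcallhome group add group1 test-11 --node test-11,test-12,test-13
The configured call home groups can then be displayed with the mmcallhome group list command,
which produces output similar to the following: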
Call Home Group Call Home Node Call Home Child Nodes
----------------- ---------------- -------------------------
group1 test-11 test-11,test-12,test-13
You can also give the same command with the --long option to view the configured call home
groups with their long names:
3. To change customer information such as customer name, customer ID, country code, and the system
type, issue this command:
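(A representative command follows; the option names are assumptions that mirror the settings reported
in the output, and the system type can be set in the same way.)
mmcallhome info change --customer-name SpectrumScaleTest --customer-id 1234 --country-code JP
The output is similar to the following: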
Success
Call home country-code has been set to JP
Call home customer-id has been set to 1234
Call home customer-name has been set to SpectrumScaleTest
Call home type has been set to production
6. To create a call home group automatically and enable the cluster for call home functionality by
displaying options for acceptance, issue this command:
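(Per the --enable parameter description, the license is displayed and acceptance is requested when no
further option is given.)
mmcallhome group auto --enable
The output is similar to the following: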
By accepting this request, you agree to activate the Call Home Feature of the
Program and allow IBM and its subsidiaries to store and use your contact information
and your support information anywhere they do business worldwide as further
Additional messages:
License was accepted. Call home enabled.
Note: To accept the call home functionality, type accept at the prompt. Use mmcallhome group auto
--enable accept to avoid the explicit acceptance prompt.
7. To create new call home groups after deleting the existing groups, issue this command:
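(A representative command follows; --force deletes the existing groups before creating new ones, and
--enable is assumed because the output shows the license acceptance dialog.)
mmcallhome group auto --force --enable
The output is similar to the following: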
By accepting this request, you agree to activate the Call Home Feature of the
Program and allow IBM and its subsidiaries to store and use your contact information
and your support information anywhere they do business worldwide as further
described in the Program license agreement and the documentation page
"Data privacy with Call Home". If you agree, please respond with "accept" for acceptance,
else with "not accepted" to decline.
accept
Call home enabled has been set to true
Additional messages:
License was accepted. Callhome enabled.
11. To list the registered tasks for gather-send, issue this command:
12. To set the parameters for the proxy server, issue this command:
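(A representative command follows; the host name and port are illustrative, and the option names are
assumed to correspond to the proxy-location and proxy-port settings.)
mmcallhome proxy change --proxy-location proxy.example.com --proxy-port 3128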
15. To run a one-time send command to upload a file, issue this command:
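(A representative command follows; the file name and description are illustrative.)
mmcallhome run SendFile --file /tmp/collected_debug_data.tar --desc test_upload
The output is similar to the following: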
Running sendFile... (In case of network errors, it may take over 20 minutes for retries.)
Successfully uploaded the given file
Run mmcallhome status list --verbose to see the package name
16. To view the status of the currently running and the already completed call home tasks, issue this
command:
Starting connectivity test between the call home node and IBM
Call home node: g5020-31.localnet.com
Starting time: Fri Aug 31 17:09:58 2018
------------------------------------------------------------
End time: Fri Aug 31 17:10:06 2018
18. To list the differences between two call home data collection files of different dates, issue this
command:
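(A representative command follows; the number of days is illustrative.)
mmcallhome status diff --last-days 2
The changed configuration sections are listed in output similar to the following: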
Fs Data
Nodeclass Data
See also
• “mmchconfig command” on page 169
• “mmlscluster command” on page 484
• “mmlsconfig command” on page 487
• “mmnfs command” on page 552
• “mmobj command” on page 565
• “mmsmb command” on page 698
• “mmuserauth command” on page 727
Location
/usr/lpp/mmfs/bin
mmces command
Manage CES (Cluster Export Services) configuration.
Synopsis
mmces service enable {NFS | OBJ | SMB | BLOCK | HDFS}
or
mmces service list [-N {Node[,Node...] | NodeFile | NodeClass} | -a] [-Y] [--verbose]
or
mmces events active [NFS | OBJ | SMB | BLOCK | AUTH | AUTH_OBJ | NETWORK | HDFS]
[-N {Node[,Node...] | NodeFile | NodeClass} | -a]
or
mmces events list [NFS | OBJ | SMB | BLOCK | AUTH | AUTH_OBJ | NETWORK | HDFS]
[-Y] [--time {hour | day | week | month}]
[--severity {INFO | WARNING | ERROR | SEVERE}]
[-N {Node[,Node...] | NodeFile | NodeClass} | -a]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmces command to manage protocol addresses, services, node state, logging level and load
balancing. Assignment of the CES addresses to the network adapters is done by the CES base address
specified in CES legacy mode, or by a user-specified configuration in the CES interface mode. The
protocol functions provided in this command, or any similar command, are generally referred to as CES
(Cluster Export Services). For example, protocol node and CES node are functionally equivalent terms.
Before using the mmces command, CES has to be installed and configured in the system. For more
information on how to install CES, see the Installing Cluster Export Services as part of installing IBM
Spectrum Scale on Linux systems section in the IBM Spectrum Scale: Concepts, Planning, and Installation
Guide.
CES currently supports the NFS, SMB, BLOCK, HDFS, and Object services. Each service can be enabled or
disabled with the mmces service command. Enabling a service is a CES cluster-wide operation. In
addition, enabled services can be started and stopped on individual nodes.
Clients access the CES services using one or more IP addresses in the CES address pool. Addresses can
be added to and removed from the pool using the mmces address add and mmces address remove
commands. Existing addresses can be reassigned to another node with the mmces address move
command.
Addresses can have one or more attributes associated with them. An address attribute is a tag by which
the services can identify a specific address as having a special meaning, which is defined by the service
protocol. Addresses can have multiple attributes, but an attribute can only be associated with a single
address. The supported attributes are object_singleton_node, object_database_node, and
HDFS-related attributes. The HDFS-related attributes are assigned automatically. These attributes are
used to manage HDFS and object protocol-related services. For more information, see the Object services
and object protocol nodes table in the Understanding and managing Object services section in the IBM
Spectrum Scale: Administration Guide.
Addresses can have a policy associated with them, and that policy determines how addresses are
automatically distributed. The allowed policies are none, balanced-load, node-affinity, and even-
coverage. A policy of none means addresses are not distributed automatically.
The assignment of addresses to a Network Interface Controller (NIC) can be done by using one of these
two modes:
• CES legacy mode
An address with the same subnet mask as the CES addresses, called the CES Base Address, is defined
on each CES node. CES addresses are assigned to a NIC where the CES Base Address is hosted. In this
mode, only IPv4 addresses are supported and no other non-CES address with the same subnet mask as
the CES Base Address should exist on CES nodes.
• CES interface mode
The customer defines one or more individual NICs on each CES node. The CES addresses are assigned
to these NICs. To manage subnets, as is currently done for CES nodes, a NIC can be a member of one
or more CES groups. In this case a CES address can only be assigned to a NIC if the NIC is in the same
group as the CES address. In this mode IPv6 and IPv4 addresses are supported.
Note: The address balancing done by policies works node-wide. This means that if a CES address can
be hosted on different NICs on the same node, the NIC where the address is hosted is not defined. Well-
defined address assignment on a node with multiple NICs is achieved by the use of CES groups.
Transitioning from legacy mode to interface mode: In a legacy mode, no IPv6 addresses can be added.
You can switch from the legacy mode to the interface mode only if you have a valid NIC configuration. To
switch from a legacy mode to an interface mode, add the NIC first and then switch the CES mode to
interface. When the NICs are specified in the legacy mode you are in transition. This state is allowed but
not recommended. You must switch to the interface mode, or remove the NIC configuration by using the
mmces interface mode interface command.
Transitioning from interface mode to legacy mode: Use the mmces interface mode legacy --
force command to switch back to the legacy mode while keeping the NIC assignment. The mmces
interface mode legacy --force command deletes the NIC configuration and sets the cluster in
legacy mode. Transitioning from the interface mode to the legacy mode unassigns all the IPv6 addresses.
If you do not have CES Base Addresses, all the IPv4 addresses are also unassigned.
Use the mmces interface command to manage CES modes and NIC assignment.
A CES node can be placed in a suspended state. When a node is in suspended state, all of the CES
addresses for that node are reassigned to other nodes, or set to unassigned. The node will not accept new
address assignments. Any services that are started when the node is suspended continue to run, except
mmcesmonitord. The mmcesmonitord service is stopped, and the lock on the sharedRoot file system
is released. The sharedRoot file system is still mounted, but can be unmounted. The suspended state is
persistent, which means nodes remain suspended following a reboot. After a reboot no services are
running on a suspended node. However, if a node is not suspended, the services that were enabled on the
nodes are restarted after the reboot.
The presence of a CES IP on a name node in an HDFS group activates the HDFS service. When there are
multiple name nodes in an HDFS group, only the name node that has a CES IP assigned to it can be
active. The absence of a CES IP deactivates the HDFS service from that name node. An HDFS group has
the prefix hdfs followed by at least one alphabetical or numeric character. For example, hdfsbda1. You
can assign a group or an attribute to a CES IP. If an HDFS group is assigned or unassigned from a CES IP,
the related attribute is also assigned or unassigned automatically. A CES IP assigned to an HDFS group
must not have any attributes other than the ones automatically assigned. An HDFS group must be
assigned only once to all the CES IPs in the cluster.
Parameters
service
Manages protocol services with one of the following actions:
enable
Enables and starts the specified service on all CES nodes.
disable
Disables and stops the specified service on all CES nodes.
Note: Disabling a service will discard any configuration data from the CES cluster and needs to be
used with caution. If applicable, backup any relevant configuration data. Subsequent service
enablement will start with a clean configuration.
--force
Allows the service disable action to go through without confirmation from the user.
start
Starts the specified service on the specified nodes. If neither the -N nor the -a parameter is
specified, the service is started on the local node.
stop
Stops the specified service on the specified nodes. If neither the -N nor the -a parameter is
specified, the service is stopped on the local node.
Note: If a service is stopped on a node that has CES addresses assigned, clients will not be able to
access the service using any of the addresses assigned to that node. Clients can no longer access
data through services that are stopped. This state is not persistent, so after
a reboot all the services become active again.
list
Lists the state of the enabled services.
node
Manages CES node state with one of the following actions:
suspend
Suspends the specified nodes. If neither the -N nor the -a parameter is specified, only the local
node is suspended. The -a stands for all CES nodes.
When a node is suspended, all addresses assigned to the node are reassigned to other nodes, or
set to unassigned. The node will not accept any subsequent address assignments. Suspending a
node triggers CES protocol recovery if the node has CES addresses assigned. This does not stop
any services or the mmces monitor daemon.
If the --stop option is added to the suspend command, all the CES services including the mmces
monitor daemon are stopped. This releases the lock on the sharedRoot file system; the file system
remains mounted but can be unmounted without further actions.
Note: After a reboot no services are running on a suspended node. However, if a node is not
suspended, the services that were enabled on the nodes are restarted after the reboot.
resume
Resumes the node so that CES IPs can be retrieved and starts the mmces monitor daemon if it is
not running.
Using the --start option starts all the enabled protocols. The mmces monitor daemon is also
started, if it is not already running. However, if at least one protocol was not successfully started,
then all the protocols that were already running are also stopped.
list
Lists the specified nodes along with their current node state. If the -N parameter is not specified,
all nodes are listed.
verbose
Lists the addresses assigned to the nodes.
--ces-group
Lists the nodes belonging to the specified groups.
log level
Sets the CES log level, which determines the amount of CES related information that is logged in the
file, /var/adm/ras/mmfs.log.latest. The log level values can range from 0, which is the default
value, to 3. Increasing the log level adds to the information being logged.
The values are defined as follows:
level 0
Logs only non-repeated errors and non-repeated warnings. The non-repeated warnings indicate
faulty behavior. Information about the state of an action is also logged during startup and
shutdown.
level 1
Logs all errors and all non-repeated warnings.
level 2
Logs all warnings.
level 3
Logs all important debug information.
Important: A higher log level logs the data of all the lower log levels. Therefore, log level 1 includes
the information of log level 0, log level 2 includes the information of log level 1 and log level 0, and so
on.
The following status information is also accumulated in the log:
error
When an event that degrades the functionality of the system has occurred.
warning
When an unexpected event has occurred. However, this event does not degrade the functionality
of the system. For example, an error was solved by a retry.
information
Documents the result of an operation. This is logged during startup and shutdown or in log level 3.
address
Manages CES addresses with one of the following actions:
add
Adds the addresses specified by the --ces-ip parameter to the CES address pool and assigns
them to a node. The node to which an address is assigned will configure its network interfaces to
accept network communication destined for the address. CES addresses must be different from IP
addresses used for GPFS or CNFS communication.
If --ces-node is specified with add, all addresses specified with the --ces-ip parameter will
be assigned to this node. If --ces-node is not specified, the addresses will be distributed among
the CES nodes.
If an attribute is specified with --attribute, there can only be one address specified with the
--ces-ip parameter.
If --ces-group is specified with add, all new addresses will be associated with the specified
group. The result can be viewed with the mmces address list command.
Note: The provided addresses or host names must be resolvable by forward and reverse name
resolution (DNS or /etc/hosts on all CES nodes). Otherwise, an error message is displayed.
You can also perform a manual check by running the following command: mmcmi host <ip
address>.
Ensure that the netmask (PREFIX) setting in the ifcfg-<interface> files is correct.
remove
Removes the addresses specified by the --ces-ip parameter from the CES address pool. The
node to which the address is assigned reconfigures its network interfaces to no longer accept
communication for that address.
move
Moves addresses.
If the --ces-ip parameter is specified, the addresses specified by IP are moved from one CES
node to another. The addresses are reassigned to the node specified by the --ces-node
parameter.
If the --rebalance parameter is specified, the addresses are distributed within 60 seconds
based on the currently configured distribution policy. If the policy is currently undefined or none,
the even-coverage policy is applied.
Note: The information about the address movement is printed after the rebalance is done. The
address movement is also done in the background periodically.
Use this command with caution because IP movement will trigger CES protocol recovery.
change
Changes or removes address attributes.
If the --ces-ip parameter is specified:
• The command associates the attributes that are specified by the --attribute parameter with
the address that is specified by the --ces-ip parameter. If an attribute is already associated
with another address, that association is ended.
• If the --remove-attribute parameter is specified, the command removes the attributes that
are specified by the --attribute parameter from the addresses that are specified by the --
ces-ip parameter.
• The command associates the groups that are specified by the --ces-group parameter with the
address that is specified by the --ces-ip parameter
• If the --remove-group parameter is specified, the command removes the groups that are
specified by the --ces-group parameter from the addresses that are specified by the --ces-
ip parameter.
If the --ces-ip parameter is not specified:
• If the --remove-attribute parameter is specified, the command removes the attributes that
are specified by the --attribute parameter from their current associations.
• If the --remove-group parameter is specified, the command removes the groups that are
specified by the --ces-group parameter from their current associations.
list
Lists the CES addresses along with group, attribute and node assignments.
Options:
--ces-ip List only the addresses provided.
--ces-group List only addresses whose group assignment matches one of the groups
provided.
--attribute List only addresses whose attributes match one of the attributes provided.
--by-node List addresses by node, using the output format from IBM Spectrum Scale V4.1.1
and later.
--extended-list Lists the preferred node of the given address in a new column if the
address balancing mode option is set to node-affinity.
--full-list Lists the information about the preferred node and the node names where the
given address could not be hosted in two new columns.
Note: The [-N {Node[,Node...] | NodeFile | NodeClass} | -a] option is deprecated for
IBM Spectrum Scale version 4.2.3 and might be removed in a later release.
policy
Sets the CES address distribution policy.
prefix-ipv4
Sets the default prefix value for the IPv4 address or shows this value.
prefix-value
The value to which you want to set the default prefix for the IPv4 address. The prefix-value can
be any number from 1 to 30.
show
Displays the IPv4 prefix value.
state
Shows the state of one or more nodes in the cluster.
show
Shows the state of the specified service on the nodes specified. If no service is specified, all
services will be displayed. If neither the -N nor the -a parameter is specified, the state of the local
node is shown.
cluster
Shows the combined state for the services across the whole CES cluster. If no service is specified
an aggregated state will be displayed for each service, where healthy means the service is healthy
on all nodes, degraded means the service is not healthy on one or all nodes, and failed means that
the service is not available on any node. If a service is specified the state of that service will be
listed for each node, along with the name of any event that is contributing to an unhealthy state.
--ces-node
Indicates that the command applies only to the specified CES node name.
--attribute
Specifies either a single attribute or a comma-separated list of attributes as indicated in the
command syntax.
--ces-ip
Specifies either a single or comma-separated list of DNS qualified host names or IP addresses as
indicated in the command syntax.
--rebalance
Distributes addresses immediately based on the currently configured distribution policy. If the
policy is currently undefined or none, the even-coverage policy is applied.
none
Specifies that addresses are not distributed automatically.
balanced-load
Distributes addresses dynamically in order to approach an optimized load distribution throughout
the cluster. The network and CPU load on all the nodes is monitored and addresses are moved
based on given policies.
Addresses that were recently moved or addresses with attributes are not moved.
node-affinity
Attempts to keep addresses associated with the node to which they were assigned. Address node
associations are created with the --ces-node parameter of the mmces address add command
or the mmces address move command. Automatic movements of addresses do not change the
association. Addresses that were enabled without a node specification do not have a node
association. Addresses that are associated with a node but assigned to a different node are moved
back to the associated node.
Addresses that were recently moved or addresses with attributes are not moved.
even-coverage
Attempts to evenly distribute all of the addresses among the available nodes.
Addresses that were recently moved or addresses with attributes are not moved.
--remove-attribute
Indicates that the specified attributes should be removed.
-N {Node[,Node...] | NodeFile | NodeClass}
Indicates that the command applies only to the specified node names.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
-a
Specifies that the command applies to all CES nodes.
NFS
Specifies that the command applies to the NFS service.
OBJ
Specifies that the command applies to the Object service.
SMB
Specifies that the command applies to the SMB service.
BLOCK
Specifies that the command applies to the BLOCK service.
AUTH
Specifies that the command applies to the AUTH service.
NETWORK
Specifies that the command applies to the NETWORK service.
CES
Specifies that the command applies to the CES service.
--verbose
Specifies that the output is verbose.
new-level
Sets the log level to a new value. If the new-level parameter is not specified, the current log level is
displayed.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
--time
Lists the previous events from one of the following intervals:
hour
Lists the events from the past hour.
day
Lists the events from the past day.
week
Lists the events from the past week.
month
Lists the events from the past month.
The events are listed whether or not they are currently contributing to the state of a component.
interface
Shows the state of one or more nodes in the cluster.
mode
Manage the CES mode.
legacy
Sets the cluster in CES legacy mode. IPv6 support is disabled.
Address assignment is done by CES Base Addresses. If the command is executed in the
interface mode it is rejected, because CES cannot verify if CES Base Addresses are assigned.
interface
Sets the cluster in CES interface mode. IPv6 addresses are supported.
Address assignment is done by NIC configuration. The command is rejected if problems occur
in the CES address assignments when this command is executed.
--clear
Remove the CES NIC configuration and set the cluster to legacy mode. This command
requests user validation.
--force
Overwrites any check or request for user validation.
list
Shows the CES mode. The CES mode is either legacy or interface.
--by-node
Displays the CES mode for the specified node or nodes.
--by-group
Displays the CES mode for the specified group or groups.
ALL
Displays all the NICs with at least one group assigned to them.
ANY
Displays all the NICs.
NONE
Displays all the NICs with no group assigned to them.
add
Add specified NICs to the cluster using the specified options. The command is rejected if
problems occur in the CES address assignments when this command is executed.
--nic
Specify the NICs as a comma-separated list.
--ces-group
Sets all the NICs specified in this command into the specified group or groups. Groups are
specified by a comma-separated list.
--ces-node
Limits the NIC assignment to the specified node or nodes. Nodes are specified as a
comma-separated list.
--force
Disables checking for potential problems.
remove
Removes the specified NICs from the cluster using the specified options. The command is rejected
if problems occur in the CES address assignments when this command is executed.
--nic
Specifies the NICs as a comma-separated list.
--ces-node
Limits the NIC removal to the specified node or nodes. Nodes are specified as a
comma-separated list.
--force
Disables checking for potential problems.
check
Checks the NIC configuration against the node groups and CES addresses.
If the NIC configuration matches the node groups, the configuration is valid. The command checks
whether any NIC is defined and whether any CES address cannot be assigned to a NIC. To determine
whether a CES address can be assigned to a NIC, verify whether any of the NIC's groups is
assigned to that CES address. No checks are done for suspended nodes or for NICs that are currently
down.
events
Shows one of the following CES events that occurred on a node or nodes:
active
Lists all events that are currently contributing to making the state of a component unhealthy. If no
component is specified, active events for all components are listed. If neither the -N nor the -a
parameter is specified, the active events for the local node are listed. If the command shows
multiple events, they are listed in the recommended order of resolution, with the most important
event at the top.
list
Lists the events that occurred on a node or nodes, whether or not they are currently contributing
to the state of a component. If no component is specified, events for all components are listed. If
--time is specified, only events from the chosen previous interval are listed; otherwise, all events
are listed. If --severity is specified, only events of the chosen severity are listed; otherwise, all
events are listed. If neither the -N nor the -a parameter is specified, the events for the local node
are listed.
Events older than 180 days are removed from the list. A maximum of 10,000 events are saved in
the list.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmces command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To add an address to a specified node, issue this command:
When this command is successful, the system does not display output.
2. To add several addresses to a specified node, issue this command:
When this command is successful, the system does not display output.
3. To add an address with the attribute object_singleton_node to a specified node, issue this
command:
When this command is successful, the system does not display output.
4. To add addresses which are distributed among the CES nodes, issue this command:
When this command is successful, the system does not display output.
5. To remove several addresses, issue this command:
When this command is successful, the system does not display output.
7. To remove the attribute object_singleton_node, issue this command:
When this command is successful, the system does not display output.
8. To move an address to another node, issue this command:
When this command is successful, the system does not display output.
9. To set a default prefix for the IPv4 address, issue this command:
CES CESNETMASK_IPv4 is 30
12. To resume a node and start all enabled CES services, issue this command:
14. To suspend a node and stop all CES services, issue this command:
15. To enable the Object service in the CES cluster, issue this command:
When this command is successful, the system does not display output.
16. To disable the NFS service in the CES cluster, issue this command:
When this command is successful, the system does not display output.
17. To stop the SMB service on a few nodes, issue this command:
When this command is successful, the system does not display output.
18. To start the SMB service on all CES nodes, issue this command:
When this command is successful, the system does not display output.
19. To show which services are enabled and which are running on all CES nodes, issue this command:
20. To display the current CES log level, issue this command:
22. To display the state of all CES components on the local node, issue this command:
23. To display the state of the NFS component on all nodes, issue this command:
24. To display a list of active events of all CES components on the local node, issue this command:
25. To display a list of all NFS events from the last hour on the local node, issue this command:
Block device support in Spectrum Scale is intended for use only in diskless node
remote boot (non-performance-critical), and is not suited for high-bandwidth
block device access needs. Confirm that this matches your use case before enabling
the block service. If you have any questions contact [email protected]
Do you want to continue to enable BLOCK service? (yes/no)
27. To add a NIC to all CES nodes in the cluster, run the following command:
Accepted: 3 Skipped: 0
Additional messages:
Accepted changes:
Node Node Name accepted Nic's accepted Nic's groups
------ ------------------------- ---------------- -----------------------
2 cluster-22.localnet.com eth1
3 cluster-23.localnet.com eth1
4 cluster-24.localnet.com eth1
28. To add a NIC to selected nodes in the cluster with CES group assignment, run the following
command:
Accepted: 2 Skipped: 0
Additional messages:
Accepted changes:
Node Node Name accepted Nic's accepted Nic's groups
------ ------------------------- ---------------- -----------------------
3 cluster-23.localnet.com eth1 ipv6
4 cluster-24.localnet.com eth1 ipv6
29. To remove the CES NIC configuration, run the following command:
Debug: legacy
The CES NIC configuration will be removed and IPv6 addresses are not supported anymore.
CES-Base-IPs will be needed to define NICs where CES IPs are hosted.
Type 'yes' to continue. Any other input will abort the command.
yes
The CES NIC configuration is removed. Cluster is in legacy mode using CES base IPs for IPv4
only.
Existing NIC configuration is valid, and can host all CES IPs.
Additional messages:
CES interface mode: IPv6 support enabled. Address assignment by user-defined NICs.
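30. The following invocations are illustrative sketches of other common mmces operations. The node
names, IP address, and log level shown are placeholders only; substitute values that apply to your
cluster:
mmces address add --ces-node ces1 --ces-ip 198.51.100.10
mmces service enable OBJ
mmces service list -a
mmces state show NFS -a
mmces events list NFS --time day
mmces log level 2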
See also
• “mmchconfig command” on page 169
• “mmlscluster command” on page 484
• “mmlsconfig command” on page 487
• “mmnfs command” on page 552
• “mmobj command” on page 565
• “mmsmb command” on page 698
• “mmuserauth command” on page 727
Location
/usr/lpp/mmfs/bin
mmcesdr command
Manages protocol cluster disaster recovery.
Synopsis
mmcesdr primary config --output-file-path FilePath --ip-list IPAddress[,IPAddress,...]
[--allowed-nfs-clients {--all | --gateway-nodes |
IPAddress[,IPAddress,...]}]
[--rpo RPOValue] [--inband] [-v]
Availability
Available with IBM Spectrum Scale Advanced Edition, IBM Spectrum Scale Data Management Edition,
IBM Spectrum Scale Developer Edition, or IBM Spectrum Scale Erasure Code Edition.
Description
Use the mmcesdr command to manage protocol cluster disaster recovery.
You can use the mmcesdr primary config command to perform initial configuration for protocols
disaster recovery on the primary cluster and to generate a configuration file that is used on the secondary
cluster. The protocol configuration data can be backed up using the mmcesdr primary backup
command and the backed-up data can be restored using the mmcesdr primary restore command.
The backed-up configuration information for the primary cluster can be updated by using the mmcesdr
primary update command. You can use the mmcesdr primary failback command to fail back the
client operations to the primary cluster.
You can use the mmcesdr secondary config command to perform initial configuration for protocols
disaster recovery on the secondary cluster by using the configuration file generated from the primary
cluster. The secondary read-only filesets can be converted into read-write primary filesets using the
mmcesdr secondary failover command. You can use the mmcesdr secondary failback
command to either generate a snapshot for each acting primary fileset or complete the failback process,
and convert the acting primary filesets on the secondary cluster back into secondary filesets.
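As an illustrative sketch only (the paths and IP addresses shown are placeholders), a typical protocols
disaster recovery cycle uses these subcommands in roughly the following order:
1. On the primary cluster: mmcesdr primary config --output-file-path /root/ --ip-list "9.11.102.211,9.11.102.210"
2. On the secondary cluster: mmcesdr secondary config --input-file-path /root/DR_Config
3. On the secondary cluster, after the primary cluster fails: mmcesdr secondary failover
4. On the old primary cluster, to fail back: mmcesdr primary failback --start, then mmcesdr primary
failback --apply-updates, then mmcesdr primary failback --stop, followed by mmcesdr primary restore
5. On the secondary cluster: mmcesdr secondary failback --post-failback-complete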
For information on detailed steps for protocols disaster recovery, see Protocols cluster disaster recovery in
IBM Spectrum Scale: Administration Guide.
The mmcesdr log file is at /var/adm/ras/mmcesdr.log. This log file is included with the CES
information generated by the gpfs.snap command. The gpfs.snap command generates the CES
information by default, if a protocol is enabled.
Parameters
primary
This command is run on the primary cluster.
config
Perform initial configuration of protocol cluster disaster recovery.
--output-file-path FilePath
Specifies the path to store output of the generated configuration file, which is always named
DR_Config.
--ip-list IPAddress[,IPAddress,...]
Comma-separated list of public IP addresses on the secondary cluster to be used for active
file management (AFM) DR-related NFS exports.
--allowed-nfs-clients {--all | --gateway-nodes | IPAddress[,IPAddress,...]}
Optional. Specifies the entities that can connect to the AFM DR-related NFS shares, where:
--all
Specifies that all clients must be allowed to connect to the AFM DR-related NFS shares. If
omitted, the default value of --all is used.
--gateway-nodes
Specifies the gateway nodes currently defined on the primary that must be allowed to
connect to the AFM DR-related NFS shares.
IPAddress[,IPAddress,...]
Specifies the comma-separated list of IP addresses that must be allowed to connect to the
AFM DR-related NFS shares.
--rpo RPOValue
Optional. Specifies the integer value of the recovery point objective (RPO) to be used for the
AFM DR filesets. By default, this parameter is disabled. The valid range is: 720 <= RPO <=
2147483647. The minimum value that can be set for the RPO is 720.
Note: Setting the value of RPO to less than 720 generates an error.
--inband
Optional. Specifies to use the inband (across the WAN) method of initial data transfer from
primary to secondary cluster. If omitted, the default value of outband is used.
backup
Backs up all protocol configuration and CES configuration into a dedicated, independent fileset
with each protocol in its own subdirectory.
restore
Restores object, NFS, and SMB protocol configuration and CES configuration from the
configuration data backed up.
--new-primary
Optional. Performs the restore operation to a new, failed-back primary cluster.
--input-file-path FilePath
Optional. Specifies the original configuration file that was used to set up the secondary cluster.
If not specified, the file that is saved in the configuration independent fileset is used as
default.
--file-config {--recreate | --restore}
Optional. Specifies whether SMB and NFS exports are re-created, or if the entire protocol
configuration is restored. If not specified, the SMB and NFS exports are re-created by default.
update
Updates the backed-up copy of the protocol configuration or CES configuration.
--obj
Specifies the backed up-copy of the object protocol configuration to be updated with the
current object configuration.
--nfs
Specifies the backed-up copy of the IBM NFSv4 stack protocol configuration to be updated
with the current IBM NFSv4 stack configuration.
--smb
Specifies the backed-up copy of the SMB protocol configuration to be updated with the current
SMB configuration.
--ces
Specifies the backed-up copy of the CES configuration to be updated with the current CES
configuration.
failback
Used with several options to fail back client operations to a primary cluster.
Failback involves transfer of data from the acting primary (secondary) cluster to the old primary
cluster as well as restoring protocol and possibly CES configuration information and
transformation of protected filesets to primary filesets.
--prep-outband-transfer
Creates the independent filesets to which out-of-band data is transferred.
--input-file-path FilePath
Specifies the configuration file that is the output from the mmcesdr secondary failback
--generate-recovery-snapshots command.
failback
Used with several options to fail back client operations to a primary cluster.
Failback involves transfer of data from the acting primary (secondary) cluster to the old primary
cluster as well as restoring protocol and possibly CES configuration information and
transformation of protected filesets to primary filesets.
--convert-new
Specifies that the failback is not going to the old primary but instead a new primary. This step
specifically converts the newly created independent filesets to primary AFM DR filesets.
--output-file-path FilePath
Specifies the path to store output of generated configuration file, DR_Config, with the new
AFM primary IDs.
--input-file-path FilePath
Specifies the configuration file that is the output from the mmcesdr secondary failback
--generate-recovery-snapshots command.
failback
Used with several options to fail back client operations to a primary cluster.
Failback involves transfer of data from the acting primary (secondary) cluster to the old primary
cluster as well as restoring protocol and possibly CES configuration information and
transformation of protected filesets to primary filesets.
--start
Begins the failback process and restores the data to the last RPO snapshot.
--apply-updates
Transfers data that was written to the secondary cluster while failover was in place.
Note: You might need to use this option more than once, depending on the system load.
--stop [--force]
Completes the data transfer process and puts the filesets in read-write mode.
If this step fails, you can optionally use the --force option.
Note: In addition to using these options, after stopping the data transfer, you need to use the
mmcesdr primary restore command to restore the protocol and the CES configuration.
--input-file-path FilePath
Optional. Specifies the original configuration file that was used to set up the secondary cluster.
If not provided, the default is to use the one saved in the configuration independent fileset.
secondary
This command is run on the secondary cluster.
config
Perform initial configuration of protocol cluster disaster recovery.
--prep-outband-transfer
Creates the independent filesets to which out-of-band data is transferred as part of the initial
configuration. If out-of-band data transfer is used for the DR configuration, this option must be
used before data is transferred from the primary to the secondary. In that case, this command is
run once with this option, and then again without the option after the data is transferred.
--input-file-path FilePath
Specifies the path of the configuration file generated from the configuration step of the
primary cluster.
--inband
Optional. Specifies to use the inband (across the WAN) method of initial data transfer from
primary to secondary cluster. If omitted, the default value of outband is used.
Note: If --inband is used for the primary configuration, it must also be used for the
secondary configuration.
failover
Converts secondary filesets from read-only to read-write primary filesets and converts the
secondary protocol configurations to those of the failed primary.
--input-file-path FilePath
Optional. Specifies the original configuration file that was used to set up the secondary cluster.
If not specified, the file that is saved in the configuration independent fileset is used as
default.
--file-config {--recreate | --restore}
Optional. Specifies whether SMB and NFS exports are re-created, or if the entire protocol
configuration is restored. If not specified, the SMB and NFS exports are re-created by default.
--data {--restore | --norestore}
Optional. Specifies that data must be restored from the latest RPO snapshot or that restoring
from the latest RPO snapshot is not required. Default is that restoring from the latest RPO
snapshot is not required.
failback
Runs one of the two failback options: either generates a snapshot for each acting primary fileset or
completes the failback process and converts the acting primary filesets on the secondary cluster
back into secondary filesets.
--generate-recovery-snapshots
Generates the psnap0 snapshot for each acting primary fileset and stores it in the default
snapshot location for use in creation of a new primary cluster with new primary filesets to fail
back to. The files within the snapshot need to be manually transported to the new primary.
--output-file-path FilePath
Specifies the path to store output of generated snapshot recovery configuration file.
--input-file-path FilePath
Optional. Specifies the path of the original configuration file that was used to set up the
secondary cluster. If not provided, the default is to use the one saved in the configuration
independent fileset.
failback
Runs one of the two failback options: either generates a snapshot for each acting primary fileset or
completes the failback process and converts the acting primary filesets on the secondary cluster
back into secondary filesets.
--post-failback-complete
Completes the failback process by converting the acting primary filesets back into secondary,
read-only filesets and ensures that the proper NFS exports for AFM DR exist.
--new-primary
Performs the failback operation to a new, failed-back primary cluster.
--input-file-path FilePath
Specifies the path of the updated configuration file that is created from the mmcesdr
primary failback --convert-new command, which includes updated AFM primary IDs.
--file-config {--recreate | --restore}
Optional. Specifies whether SMB and NFS exports are re-created, or if the entire protocol
configuration is restored. If not specified, the SMB and NFS exports are re-created by default.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmcesdr command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see the topic Requirements for administering a GPFS file system in the IBM Spectrum
Scale: Administration Guide.
Examples
1. Issue the following command on the primary cluster to configure independent fileset exports as AFM
DR filesets and backup configuration information:
mmcesdr primary config --output-file-path /root/ --ip-list "9.11.102.211,9.11.102.210" --rpo 720 --inband
File to be used with secondary cluster in next step of cluster DR setup: /root//DR_Config
2. Issue the following command on the secondary cluster to create the independent filesets that are a
part of the pair of AFM DR filesets associated with those on the primary cluster:
In addition to fileset creation, this command also creates the necessary NFS exports and converts the
independent filesets to AFM DR secondary filesets.
The system displays output similar to this:
Performing step 1/3, creation of independent filesets to be used for AFM DR.
Successfully completed step 1/3, creation of independent filesets to be used for AFM DR.
Performing step 2/3, creation of NFS exports to be used for AFM DR.
Successfully completed step 2/3, creation of NFS exports to be used for AFM DR.
Performing step 3/3, conversion of independent filesets to AFM DR secondary filesets.
Successfully completed step 3/3, conversion of independent filesets to AFM DR secondary filesets.
3. Issue the following command on the primary cluster to configure independent fileset exports as AFM
DR filesets, back up configuration information, and facilitate outband data transfer.
Note: The outband data transfer is the default method of data transfer from the primary cluster to the
secondary cluster when AFM DR fileset relationships are first set up.
The system displays output similar to this:
Successfully completed step 3/5, determination of protocol exports to protect with AFM DR.
Performing step 4/5, conversion of protected filesets into AFM DR primary filesets.
Successfully completed step 4/5, conversion of protected filesets into AFM DR primary filesets.
Performing step 5/5, creation of output DR configuration file.
Successfully completed step 5/5, creation of output DR configuration file.
File to be used with secondary cluster in next step of cluster DR setup: /root//DR_Config
4. Issue the following command on the secondary cluster to create the independent filesets that will
later be paired with those on the primary cluster to form AFM DR pairs as part of failing back to a new
primary cluster:
5. After all the data has been transferred to the secondary, issue the following command to complete
the setup on the secondary:
6. Issue the following command on the secondary cluster after the primary cluster has failed:
Performing step 1/4, saving current NFS configuration to restore after failback.
Successfully completed step 1/4, saving current NFS configuration to restore after failback.
Performing step 2/4, failover of secondary filesets to primary filesets.
Successfully completed step 2/4, failover of secondary filesets to primary filesets.
Performing step 3/4, protocol configuration/exports restore.
Successfully completed step 3/4, protocol configuration/exports restore.
Performing step 4/4, create/verify NFS AFM DR transport exports.
Successfully completed step 4/4, create/verify NFS AFM DR transport exports.
7. Issue the following command on the secondary cluster to prepare recovery snapshots that contain
data that is transferred to the new primary cluster:
Performing step 1/2, generating recovery snapshots for all AFM DR acting primary filesets.
Transfer all data under snapshot located on acting primary cluster at:
/gpfs/fs0/combo1/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-1 to
fileset link point of fileset fs0:combo1 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at:
/gpfs/fs0/combo2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-2 to
fileset link point of fileset fs0:combo2 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at:
/gpfs/fs0/nfs-ganesha1/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-3 to
fileset link point of fileset fs0:nfs-ganesha1 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at:
/gpfs/fs0/nfs-ganesha2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-4 to
fileset link point of fileset fs0:nfs-ganesha2 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at:
/gpfs/fs0/smb1/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-5 to
fileset link point of fileset fs0:smb1 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at:
/gpfs/fs0/smb2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-6 to
fileset link point of fileset fs0:smb2 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at:
/gpfs/fs1/.async_dr/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DECB-2 to
fileset link point of fileset fs1:async_dr on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at:
/gpfs/fs1/obj_sofpolicy1/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DECB-3 to
fileset link point of fileset fs1:obj_sofpolicy1 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at:
/gpfs/fs1/obj_sofpolicy2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DECB-4 to
fileset link point of fileset fs1:obj_sofpolicy2 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at:
/gpfs/fs1/object_fileset/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DECB-1 to
fileset link point of fileset fs1:object_fileset on new primary cluster.
Successfully completed step 1/2, generating recovery snapshots for all AFM DR acting primary filesets.
Performing step 2/2, creation of recovery output file for failback to new primary.
Successfully completed step 2/2, creation of recovery output file for failback to new primary.
File to be used with new primary cluster in next step of failback to new primary cluster: /root//DR_Config
8. Issue the following command on the primary cluster to restore the protocol and export services
configuration information:
9. Issue the following command on the secondary cluster to restore the protocol and export services
configuration information:
10. Issue the following command on the primary cluster to back up configuration:
11. Issue the following command on the primary cluster to restore configuration when the primary
cluster is not in a protocols DR relationship with another cluster:
================================================================================
= If all steps completed successfully, remove and then re-create file
= authentication on the Primary cluster.
= Once this is complete, Protocol Cluster Configuration Restore will be complete.
================================================================================
See also
• “mmafmctl command” on page 61
• “mmces command” on page 132
• “mmchconfig command” on page 169
• “mmlscluster command” on page 484
• “mmlsconfig command” on page 487
• “mmnfs command” on page 552
• “mmobj command” on page 565
• “mmpsnap command” on page 605
• “mmrestorefs command” on page 665
• “mmsmb command” on page 698
• “mmuserauth command” on page 727
Location
/usr/lpp/mmfs/bin
mmchattr command
Changes attributes of one or more GPFS files.
Synopsis
mmchattr [-m MetadataReplicas] [-M MaxMetadataReplicas]
[-r DataReplicas] [-R MaxDataReplicas] [-P DataPoolName]
[-D {yes | no}] [-I {yes | defer}] [-i {yes | no}]
[-a {yes | no}] [-l]
[{--set-attr AttributeName[=Value] [--pure-attr-create | --pure-attr-replace]} |
{--delete-attr AttributeName [--pure-attr-delete]}]
[--hex-attr] [--hex-attr-name] [--no-attr-ctime]
[--compact[=[Option][,Option]...]]
[--compression {yes | no | z | lz4 | zfast | alphae | alphah}]
[--block-group-factor BlockGroupFactor]
[--write-affinity-depth WriteAffinityDepth]
[--write-affinity-failure-group "WadfgValueString"]
[--indefinite-retention {yes | no}]
[--expiration-time yyyy-mm-dd[@hh:mm:ss]]
{--inode-number [SnapPath/]InodeNumber [[SnapPath/]InodeNumber...] |
Filename [Filename...]}
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmchattr command to change the replication attributes, storage pool assignments, retention
and immutability attributes, I/O caching policy, file compression or decompression of files in the file
system, and resolve data block replica mismatches using the gpfs.readReplicaRule extended
attribute. For more information, see the Replica mismatches topic in the IBM Spectrum Scale: Problem
Determination Guide.
The replication factor must be less than or equal to the maximum replication factor for the file. If
insufficient space is available in the file system to increase the number of replicas to the value requested,
the mmchattr command ends. However, replication factor for some blocks of the file might increase after
the mmchattr command ends. If later (when you add another disk), free space becomes available in the
file system you can then issue the mmrestripefs command with the -r or -b option to complete the
replication of the file. The mmrestripefile command can be used in a similar manner. You can use the
mmlsattr command to display the replication values.
Data of a file is stored in a specific storage pool. A storage pool is a collection of disks or RAIDs with
similar properties. Because these storage devices have similar properties, you can manage them as a
group. You can use storage pools to do the following tasks:
• Partition storage for the file system.
• Assign file storage locations.
• Improve system performance.
• Improve system reliability.
The Direct I/O caching policy bypasses file cache and transfers data directly from disk into the user space
buffer, as opposed to using the normal cache policy of placing pages in kernel memory. Applications with
poor cache hit rates or a large amount of I/O might benefit from the use of Direct I/O.
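For example, to enable the Direct I/O caching policy for a hypothetical database file (the file name is a
placeholder only), a command of the following form can be used:
mmchattr -D yes /fs1/db/table1.dat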
The mmchattr command can be run against a file in use.
You must have write permission for the files whose attributes you are changing unless you are changing
the gpfs.readReplicaRule extended attribute and you are the file owner, in which case read
permission is sufficient.
Parameters
-m MetadataReplicas
Specifies how many copies of the file system's metadata to create. Valid values are 1, 2, and 3. This
value cannot be greater than the value of the MaxMetadataReplicas attribute of the file.
-M MaxMetadataReplicas
Specifies the maximum number of copies of indirect blocks for a file. Space is reserved in the inode for
all possible copies of pointers to indirect blocks. Valid values are 1, 2, and 3. This value cannot be less
than the value of the DefaultMetadataReplicas attribute of the file.
-r DataReplicas
Specifies how many copies of the file data to create. Valid values are 1, 2, and 3. This value cannot be
greater than the value of the MaxDataReplicas attribute of the file.
-R MaxDataReplicas
Specifies the maximum number of copies of data blocks for a file. Space is reserved in the inode and
indirect blocks for all possible copies of pointers to data blocks. Valid values are 1, 2, and 3. This value
cannot be less than the value of the DefaultDataReplicas attribute of the file.
-P DataPoolName
Changes the assigned storage pool of the file to the specified DataPoolName. The caller must have
superuser or root privileges to change the assigned storage pool.
-D {yes | no}
Enable or disable the Direct I/O caching policy for files.
-I {yes | defer}
Specifies whether replication and migration between pools, or file compression or decompression, is
to be performed immediately (-I yes), or deferred until a later call to mmrestripefs or
mmrestripefile (-I defer). By deferring the operation, you can complete it when the system is
not loaded with processes or I/O. Also, if multiple files are affected, the data movement can be done
in parallel. The default is -I yes. For more information about file compression and decompression,
see the --compression option in this topic.
-i {yes | no}
Specifies whether the file is immutable (-i yes) or not immutable (-i no).
Note: The immutability attribute is specific to the current instance of the file. Restoring an image of
the file to another location does not retain the immutability option. You must set it yourself.
-a {yes | no}
Specifies whether the file is in appendOnly mode (-a yes) or not (-a no).
Notes:
1. The appendOnly setting is specific to the current instance of the file. Restoring an image of the file
to another location does not retain the appendOnly mode. You must set it yourself.
2. appendOnly mode is not supported for AFM filesets.
-l
Specifies that this command works only with regular files and directories and does not follow
symlinks. The default is to follow symlinks.
--set-attr AttributeName[=Value]
Sets the specified extended attribute name to the specified Value for each file. If no Value is specified,
--set-attr AttributeName sets the extended attribute name to a zero-length value.
--pure-attr-create
When this option is used, the command fails if the specified extended attribute exists.
--pure-attr-replace
When this option is used, the command fails if the specified extended attribute does not exist.
--delete-attr AttributeName
Removes the extended attribute.
For example, to remove wad, wadfg, and bgf, enter the following command:
--pure-attr-delete
When this option is used, the command fails if the specified extended attribute does not exist.
--hex-attr
Inputs the attribute value in hex.
--hex-attr-name
Inputs the attribute name in hex.
--no-attr-ctime
Changes the attribute without setting the ctime of the file. This is restricted to root only.
--compact[=[Option][,Option]...]
Where Option can be NumDirectoryEntries, indblk, or fragment.
Performs any of the three operations:
• Sets the minimum compaction size of the directories that are specified in the Filename parameter.
• Deletes the indirect blocks that are redundant.
• Reduces the last logical data block to the number of subblocks that are required to store the data.
NumDirectoryEntries
The minimum compaction size is the number of directory slots, including both full and empty
slots, that a directory is allowed to retain when it is automatically compacted. By default, in IBM
Spectrum Scale v4.1 or later, a directory is compacted as much as possible. However, in systems
in which many files are added to and removed from a directory in a short time, file system
performance might be improved by setting the minimum compaction size of the directory.
The compact parameter sets the minimum compaction size of a directory to the specified number
of slots. For example, if a directory contains 5,000 files and you set the minimum compaction size
to 50,000, then the file system adds 45,000 directory slots. The directory can grow beyond
50,000 entries, but the file system does not allow the directory to be compacted below 50,000
slots.
Set NumDirectoryEntries to the total number of directory slots that you want to keep, including
files that the directory already contains. You can specify the number of directory slots either as an
integer or as an integer followed by the letter k (1000 slots) or m (1,000,000 slots). If you expect
the average length of file names to be greater than 19 bytes, calculate the number of slots by the
following formula:
NumDirectoryEntries = n * (1 + ceiling((namelen - 19) / 32))
where:
n
Specifies the number of entries (file names) in the directory.
ceiling()
A function that rounds a fractional number up to the next highest integer. For example,
ceiling(1.03125) returns 2.
namelen
Specifies the expected average length of file names.
For example, if you want 50,000 entries with an average file name length of 48, then
NumDirectoryEntries = 50000 * (1 + ceiling((48 - 19) / 32)) = 50000 * (1 + 1) = 100000.
To restore the default behavior of the file system, specify --compact=0. The directory is then
compacted as far as possible.
To see the current value of this parameter, run the mmlsattr command with the -L option. For
more information, see the topic “mmlsattr command” on page 479. To set or read the value of this
parameter in a program, see the topics “gpfs_prealloc() subroutine” on page 950 and
“gpfs_fstat_x() subroutine” on page 868.
Note: The NumDirectoryEntries value is not supported if the file system was created in IBM
Spectrum Scale 4.1 or earlier. The compact parameter that is specified converts the directory to
4.1 format and compacts the directory as far as possible.
If the mmchattr --compact=NumDirectoryEntries command is run for a regular file, it fails
with the following error message: Compact failed: NumDirectoryEntries is supported
only for directory.
indblk
Deallocates all redundant indirect blocks of a regular file.
When a file is truncated to zero file size, its indirect blocks can be deallocated. However, in
versions earlier than 5.0.4.1, IBM Spectrum Scale retains the redundant indirect blocks in such
scenarios. For a file system that has a large number of files, the metadata disk space wasted by
these indirect blocks can be significant. You can use the indblk option to deallocate the
unnecessary indirect blocks.
fragment
Applies a ShrinktoFit operation on a regular file to reduce disk usage.
The fragment option reduces the last logical block of data of the file to the actual number of
subblocks that are required. When a file is closed after being written to, GPFS tries to reduce the
last logical block of data of the file to the actual number of subblocks required to save disk space.
Under normal circumstances, the file is compacted when it is closed. In a few rare cases, however,
the shrink might fail and leave a full data block for the last logical block of data of the file. The
fragment option shrinks the last logical block to the actual number of subblocks required.
Note: Specify only the --compact parameter, without any of the suboptions, to perform both the
indblk and fragment operations.
--compression {yes | no | z | lz4 | zfast | alphae | alphah}
Compresses or decompresses the specified files. The compression libraries are intended primarily for
the following uses:
z
Cold data. Favors compression efficiency over access speed.
lz4
Active, nonspecific data. Favors access speed over compression efficiency.
zfast
Active genomic data in FASTA, SAM, or VCF format.
alphae
Active genomic data in FASTQ format. Slightly favors compression efficiency over access speed.
alphah
Active genomic data in FASTQ format. Slightly favors access speed over compression efficiency.
The following table summarizes the effect of each option on compressed or uncompressed files:
You can use the -I defer option to defer the operation until a later call to mmrestripefs or
mmrestripefile. For more information, see the topic File compression in the IBM Spectrum Scale:
Administration Guide.
--block-group-factor BlockGroupFactor
Specifies how many file system blocks are laid out sequentially on disk to behave like a single large
block. This option only works if --allow-write-affinity is set for the data pool. This applies only
to a new data block layout; it does not migrate previously existing data blocks.
--write-affinity-depth WriteAffinityDepth
Specifies the allocation policy to be used. This option only works if --allow-write-affinity is set
for the data pool. This applies only to a new data block layout; it does not migrate previously existing
data blocks.
--write-affinity-failure-group "WadfgValueString"
Indicates the range of nodes (in a shared nothing architecture) where replicas of blocks in the file are
to be written. You use this parameter to determine the layout of a file in the cluster so as to optimize
the typical access patterns of your applications. This applies only to a new data block layout; it does
not migrate previously existing data blocks.
"WadfgValueString" is a semicolon-separated string identifying one or more failure groups in the
following format:
FailureGroup1[;FailureGroup2[;FailureGroup3]]
where each FailureGroupx is a comma-separated string identifying the rack (or range of racks),
location (or range of locations), and node (or range of nodes) of the failure group in the following
format:
Rack1{:Rack2{:...{:Rackx}}},Location1{:Location2{:...{:Locationx}}},ExtLg1{:ExtLg2{:...
{:ExtLgx}}}
1,1,1:2;2,1,1:2;2,0,3:4
means that the first failure group is on rack 1, location 1, extLg 1 or 2; the second failure group is on
rack 2, location 1, extLg 1 or 2; and the third failure group is on rack 2, location 0, extLg 3 or 4.
If the end part of a failure group string is missing, it is interpreted as 0. For example, the following are
interpreted the same way:
2
2,0
2,0,0
Notes:
1. Only the end part of a failure group string can be left off. The missing end part may be the third
field only, or it may be both the second and third fields; however, if the third field is provided, the
second field must also be provided. The first field must always be provided. In other words, every
comma must both follow and precede a number; therefore, none of the following are valid:
2,0,
2,
,0,0
0,,0
,,0
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have write access to the file to run the mmchattr command unless you are changing the
gpfs.readReplicaRule extended attribute and you are the file owner, in which case read permission is
sufficient.
You can issue the mmchattr command only from a node in the GPFS cluster where the file system is
mounted.
Examples
1. To change the metadata replication factor to 2 and the data replication factor to 2 for the
project7.resource file in file system fs1, issue this command:
mmchattr -m 2 -r 2 /fs1/project7.resource
mmlsattr project7.resource
replication factors
metadata(max) data(max) file [flags]
------------- --------- ---------------
2 ( 2) 2 ( 2) /fs1/project7.resource
2. Migrating data from one storage pool to another using the mmchattr command with the -I defer
option, or the mmapplypolicy command with the -I defer option causes the data to be ill-placed.
This means that the storage pool assignment for the file has changed, but the file data has not yet
been migrated to the assigned storage pool.
The mmlsattr -L command shows the ill-placed flag on files that are ill-placed. The
mmrestripefs or mmrestripefile command can be used to migrate data to the correct storage
pool, after which the ill-placed flag is cleared. This is an example of an ill-placed file:
mmlsattr -L 16Kfile6.tmp
3. The following example shows the result of using the --set-attr parameter.
4. To set the write affinity failure group for a file and to see the results, issue these commands:
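5. The following invocations are illustrative sketches only; the file names and the extended attribute
name are placeholders, and --write-affinity-failure-group takes effect only if --allow-write-affinity
is set for the data pool:
mmchattr --set-attr user.project=alpha /fs1/project7.resource
mmchattr --compression z /fs1/archive/colddata.tar
mmchattr -I defer --compression no /fs1/archive/colddata.tar
mmchattr --write-affinity-failure-group "1,1,1:2;2,1,1:2;2,0,3:4" /fs1/bigfile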
See also
• “mmcrfs command” on page 315
• “mmlsattr command” on page 479
• “mmlsfs command” on page 498
Location
/usr/lpp/mmfs/bin
mmchcluster command
Changes GPFS cluster configuration data.
Synopsis
mmchcluster --ccr-enable
or
mmchcluster --ccr-disable [--force] [--force-ccr-disable-msg]
or
mmchcluster -p LATEST
or
or
mmchcluster -C ClusterName
Note: The primary and secondary configuration server functionality is deprecated and will be removed in
a future release. The default configuration service is CCR. For now, you can switch from the CCR
configuration service to the primary and secondary configuration servers by issuing the mmchcluster
command with the --ccr-disable parameter. See the description of that parameter later in this topic.
For more information see the topic “mmcrcluster command” on page 303.
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmchcluster command serves several purposes. You can use it to do the following:
1. Change the remote shell and remote file copy programs to be used by the nodes in the cluster.
2. Change the cluster name.
3. Enable or disable the cluster configuration repository (CCR).
When using the traditional server-based (non-CCR) configuration repository, you can also do the
following:
1. Change the primary or secondary GPFS cluster configuration server.
2. Synchronize the primary GPFS cluster configuration server.
To display current system information for the cluster, issue the mmlscluster command.
For information on how to specify node names, see the topic Specifying nodes as inputs to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
When issuing the mmchcluster command with the -p or -s options, the specified nodes must be
available in order for the command to succeed. If any of the nodes listed are not available when the
command is issued, a message listing those nodes is displayed. You must correct the problem on each
node and reissue the command.
Attention: The mmchcluster command, when issued with either the -p or -s option, is designed
to operate in an environment where the current primary and secondary cluster configuration
servers are not available. As a result, the command can run without obtaining its regular
serialization locks. To assure smooth transition to a new cluster configuration server, no other
GPFS commands (mm commands) should be running when the command is issued, nor should any
other command be issued until the mmchcluster command has successfully completed.
Parameters
--ccr-enable
Enables the cluster configuration repository (CCR), which stores redundant copies of configuration
data files on all quorum nodes. The advantage of CCR over the traditional primary or backup
configuration server semantics is that when using CCR, all GPFS administration commands as well as
file system mounts and daemon startups work normally as long as a majority of quorum nodes are
accessible.
For more information, see the topic Cluster configuration data files in the IBM Spectrum Scale:
Concepts, Planning, and Installation Guide.
The CCR operation requires the use of the GSKit toolkit for authenticating network connections. As
such, the gpfs.gskit package, which is available on all Editions, should be installed.
--ccr-disable [--force] [--force-ccr-disable-msg]
Closes the CCR environment and reverts the cluster to the primary and secondary configuration
servers.
Attention: The primary and secondary configuration server functionality is deprecated and will
be removed in a future release.
Before you change from the CCR environment, consider the following issues:
• When the CCR environment is closed, the monitoring function of the mmhealth command is
automatically disabled.
• Certain services that are dependent on the CCR environment must be disabled before you can
change from the CCR environment. These include Transparent Cloud Tiering, Watch Folder and
Audit Logging, Call Home, CES, and the IBM Spectrum Scale GUI.
• You must shut down GPFS on all the nodes of the cluster before you close the CCR and restart GPFS
afterward.
By default the --ccr-disable parameter causes the mmchcluster command to take the following
precautionary actions before it closes the CCR:
• It displays a warning message and prompts you to confirm or cancel the decision to close the CCR.
The warning message states that the primary and secondary configuration feature is deprecated
and will be removed; that the monitoring function of the mmhealth command will be disabled; and
that CCR-dependent services might be running. However, you can block this message and the
confirmation prompt by specifying the --force-ccr-disable-msg parameter.
• It checks whether any services that are dependent on the CCR are running. If so the command
displays an error message and ends without closing the CCR so that you can stop the services
properly. You can block this check by specifying the --force parameter. The following services are
dependent on CCR:
– Transparent Cloud Tiering
– Watch Folder and Audit Logging.
– Call Home
– The IBM Spectrum Scale GUI interface.
– CES
- To close CES, remove all the CES nodes from the cluster.
daemon that processes the administration command specifies this non-root user ID instead of the
root ID when it needs to run internal commands on other nodes. For more information, see the
topic Root-level processes that call administration commands directly in the IBM Spectrum Scale:
Administration Guide.
To disable this feature, specify the key word DELETE instead of a user name, as in the following
example:
-C ClusterName
Specifies a new name for the cluster. If the user-provided name contains dots then the command
assumes that the user-provided name is a fully qualified domain name. Otherwise, to make the cluster
name unique, the command appends the domain of a quorum node to the user-provided name. The
maximum length of the cluster name including any appended domain name is 115 characters.
Since each cluster is managed independently, there is no automatic coordination and propagation of
changes between clusters like there is between the nodes within a cluster. This means that if you
change the name of the cluster, you should notify the administrators of all other GPFS clusters that
can mount your file systems so that they can update their own environments.
Before running this option, ensure that all GPFS daemons on all nodes have been stopped.
See the mmauth, mmremotecluster, and mmremotefs commands.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmchcluster command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see the topic Requirements for administering a GPFS file system in the IBM Spectrum
Scale: Administration Guide.
Examples
To change the primary GPFS cluster configuration server for the cluster, issue this command:
mmchcluster -p k164n06
To confirm the change, issue this command:
mmlscluster
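The following invocations are illustrative sketches only; the cluster name shown is a placeholder:
mmchcluster --ccr-enable
mmchcluster -C cluster2.example.com
Before renaming the cluster with the -C option, ensure that all GPFS daemons on all nodes have been
stopped.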
See also
• “mmaddnode command” on page 35
• “mmchnode command” on page 241
• “mmcrcluster command” on page 303
• “mmdelnode command” on page 371
• “mmlscluster command” on page 484
• “mmremotecluster command” on page 650
Location
/usr/lpp/mmfs/bin
mmchconfig command
Changes GPFS configuration parameters.
Synopsis
mmchconfig Attribute=value[,Attribute=value...] [-i | -I]
[-N {Node[,Node...] | NodeFile | NodeClass}]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmchconfig command to change the GPFS configuration attributes on a single node, a set of
nodes, or globally for the entire cluster.
Results
The configuration is updated on the specified nodes.
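As a minimal sketch, the following commands change the page pool size immediately and permanently
on a set of nodes, and then restore the default value of another attribute; the node class name
nsdNodes is a placeholder:
mmchconfig pagepool=4G -i -N nsdNodes
mmchconfig maxMBpS=DEFAULT -i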
Parameters
-I
Specifies that the changes take effect immediately, but do not persist when GPFS is restarted. This
option is valid only for the following attributes:
• deadlockBreakupDelay
• deadlockDataCollectionDailyLimit
• deadlockDataCollectionMinInterval
• deadlockDetectionThreshold
• deadlockDetectionThresholdForShortWaiters
• deadlockOverloadThreshold
• dioSmallSeqWriteBatching
• diskReadExclusionList
• dmapiMountEvent
• dmapiMountTimeout
• dmapiSessionFailureTimeout
• expelDataCollectionDailyLimit
• expelDataCollectionMinInterval
• fastestPolicyCmpThreshold
• fastestPolicyMaxValidPeriod
• fastestPolicyMinDiffPercent
• fastestPolicyNumReadSamples
• fileHeatLossPercent
• fileHeatPeriodMinutes
• ignorePrefetchLUNCount
• ignoreReplicationForQuota
• ignoreReplicationOnStatfs
• linuxStatfsUnits
• lrocData
• lrocDataMaxFileSize
• lrocDataStubFileSize
• lrocDirectories
• lrocEnableStoringClearText
• lrocInodes
• logRecoveryThreadsPerLog
• logOpenParallelism
• logRecoveryParallelism
• maxMBpS
• nfsPrefetchStrategy
• nsdBufSpace
• nsdCksumTraditional
• nsdDumpBuffersOnCksumError
• nsdInlineWriteMax
• nsdMultiQueue
• pagepool
• panicOnIOHang
• pitWorkerThreadsPerNode
• proactiveReconnect
• readReplicaPolicy
• readReplicaRuleEnabled
• seqDiscardThreshold
• syncbuffsperiteration
• systemLogLevel
• unmountOnDiskFail
• verbsRdmaRoCEToS
• worker1Threads (only when adjusting value down)
• writebehindThreshold
-i
Specifies that the changes take effect immediately and are permanent. This option is valid only for the
following attributes:
• cesSharedRoot
• cnfsGrace
• cnfsMountdPort
• cnfsNFSDprocs
• cnfsReboot
• cnfsSharedRoot
• cnfsVersions
• commandAudit
• confirmShutdownIfHarmful
• dataDiskWaitTimeForRecovery
• dataStructureDump
• deadlockBreakupDelay
• deadlockDataCollectionDailyLimit
• deadlockDataCollectionMinInterval
• deadlockDetectionThreshold
• deadlockDetectionThresholdForShortWaiters
• deadlockOverloadThreshold
• debugDataControl
• dioSmallSeqWriteBatching
• disableInodeUpdateOnFDatasync
• diskReadExclusionList
• dmapiMountEvent
• dmapiMountTimeout
• dmapiSessionFailureTimeout
• expelDataCollectionDailyLimit
• expelDataCollectionMinInterval
• fastestPolicyCmpThreshold
• fastestPolicyMaxValidPeriod
• fastestPolicyMinDiffPercent
• fastestPolicyNumReadSamples
• fileHeatLossPercent
• fileHeatPeriodMinutes
• ignorePrefetchLUNCount
• ignoreReplicationForQuota
• ignoreReplicationOnStatfs
• linuxStatfsUnits
• lrocData
• lrocDataMaxFileSize
• lrocDataStubFileSize
• lrocDirectories
• lrocEnableStoringClearText
• lrocInodes
• logRecoveryThreadsPerLog
• logOpenParallelism
• logRecoveryParallelism
• maxDownDisksForRecovery
• maxFailedNodesForRecovery
• maxMBpS
• metadataDiskWaitTimeForRecovery
• minDiskWaitTimeForRecovery
• mmfsLogTimeStampISO8601
• nfsPrefetchStrategy
• nsdBufSpace
• nsdCksumTraditional
• nsdDumpBuffersOnCksumError
• nsdInlineWriteMax
• nsdMultiQueue
• pagepool
• panicOnIOHang
• pitWorkerThreadsPerNode
• proactiveReconnect
• readReplicaPolicy
• readReplicaRuleEnabled
• restripeOnDiskFailure
• seqDiscardThreshold
• sudoUser
• syncbuffsperiteration
• systemLogLevel
• unmountOnDiskFail
• verbsRdmaRoCEToS
• worker1Threads (only when adjusting value down)
• writebehindThreshold
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the set of nodes to which the configuration changes apply. The default is -N all.
For information on how to specify node names, see the topic Specifying nodes as inputs to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
To see a complete list of the attributes for which the -N flag is valid, see the table "Configuration
attributes on the mmchconfig command" in the topic Changing the GPFS cluster configuration data in
the IBM Spectrum Scale: Administration Guide.
This command does not support a NodeClass of mount.
Attribute=value
Specifies the name of the attribute to be changed and its associated value. More than one attribute
and value pair can be specified. To restore the GPFS default setting for an attribute, specify DEFAULT
as its value.
This command accepts the following attributes:
adminMode
Specifies whether all nodes in the cluster are used for issuing GPFS administration commands or
just a subset of the nodes. Valid values are:
allToAll
Indicates that all nodes in the cluster are used for running GPFS administration commands
and that all nodes are able to execute remote commands on any other node in the cluster
without the need of a password.
central
Indicates that only a subset of the nodes is used for running GPFS commands and that only
those nodes are able to execute remote commands on the rest of the nodes in the cluster
without the need of a password.
For more information, see the topic Requirements for administering a GPFS file system in the IBM
Spectrum Scale: Administration Guide.
afmAsyncDelay
Specifies (in seconds) the amount of time by which write operations are delayed (because write
operations are asynchronous with respect to remote clusters). For write-intensive applications
that keep writing to the same set of files, this delay is helpful because it replaces multiple writes
to the home cluster with a single write containing the latest data. However, setting a very high
value weakens the consistency of data on the remote cluster.
This configuration parameter is applicable only for writer caches (SW, IW, and primary), where
data from cache is pushed to home.
Valid values are between 1 and 2147483647. The default is 15.
afmAsyncOpWaitTimeout
Specifies the time (in seconds) that AFM or AFM DR waits for completion of any inflight
asynchronous operation which is synchronizing with the home or primary cluster. Subsequently,
AFM or AFM DR cancels the operation and tries synchronization again after the home or primary
cluster is available.
The default value is 300. The valid range is 5 through 2147483647.
afmDirLookupRefreshInterval
Controls the frequency of data revalidations that are triggered by such lookup operations as ls or
stat (specified in seconds). When a lookup operation is done on a directory, if the specified
amount of time has passed, AFM sends a message to the home cluster to find out whether the
metadata of that directory has been modified since the last time it was checked. If the time
interval has not passed, AFM does not check the home cluster for updates to the metadata.
Valid values are 0 through 2147483647. The default is 60. In situations where home cluster data
changes frequently, a value of 0 is recommended.
afmDirOpenRefreshInterval
Controls the frequency of data revalidations that are triggered by such I/O operations as read or
write (specified in seconds). After a directory has been cached, open requests resulting from I/O
operations on that object are directed to the cached directory until the specified amount of time
has passed. Once the specified amount of time has passed, the open request gets directed to a
gateway node rather than to the cached directory.
Valid values are between 0 and 2147483647. The default is 60. Setting a lower value guarantees
a higher level of consistency.
afmDisconnectTimeout
Specifies the waiting period, in seconds, to detect the status of the home cluster. If the home cluster is
inaccessible, the metadata server (MDS) changes the state to 'disconnected'.
afmEnableNFSSec
If enabled at cache/primary, exported paths from home/secondary with kerberos-enabled
security levels like sys, krb5, krb5i, krb5p are mounted at cache/primary in the increasing
order of security level: sys, krb5, krb5i, krb5p. For example, if the security level of the exported
path is krb5i, then at the cache, AFM/AFM DR tries to mount with level sys, followed by krb5, and
finally mounts with the security level krb5i. If disabled at cache/primary, exported paths from
home/secondary are mounted with security level sys at cache/primary. You must configure KDC
clients on all the gateway nodes at cache/primary before enabling this parameter.
Valid values are yes and no. The default value is no.
afmExpirationTimeout
Is used with afmDisconnectTimeout (which can be set only through mmchconfig) to control
how long a network outage between the cache and home clusters can continue before the data in
the cache is considered out of sync with home. After afmDisconnectTimeout expires, cached
data remains available until afmExpirationTimeout expires, at which point the cached data is
considered expired and cannot be read until a reconnect occurs.
Valid values are 0 through 2147483647. The default is disabled.
afmFileLookupRefreshInterval
Controls the frequency of data revalidations that are triggered by such lookup operations as ls or
stat (specified in seconds). When a lookup operation is done on a file, if the specified amount of
time has passed, AFM sends a message to the home cluster to find out whether the metadata of
the file has been modified since the last time it was checked. If the time interval has not passed,
AFM does not check the home cluster for updates to the metadata.
Valid values are 0 through 2147483647. The default is 30. In situations where home cluster data
changes frequently, a value of 0 is recommended.
afmFileOpenRefreshInterval
Controls the frequency of data revalidations that are triggered by such I/O operations as read or
write (specified in seconds). After a file has been cached, open requests resulting from I/O
operations on that object are directed to the cached file until the specified amount of time has
passed. Once the specified amount of time has passed, the open request gets directed to a
gateway node rather than to the cached file.
Valid values are 0 through 2147483647. The default is 30. Setting a lower value guarantees a
higher level of consistency.
afmHardMemThreshold
Sets a limit to the maximum amount of memory that AFM can use on each gateway node to record
changes to the file system. After this limit is reached, the fileset goes into a 'dropped' state.
Exceeding the limit and the fileset going into a 'dropped' state due to accumulated pending
requests might occur if:
• the cache cluster is disconnected for an extended period of time.
• the connection with the home cluster is on a low bandwidth.
afmHashVersion
Specifies the version of the hashing algorithm that is used to assign AFM and AFM DR filesets across
gateway nodes, thus running as few recoveries as possible. This minimizes the impact of gateway
nodes joining or leaving the active cluster.
Valid values are 1, 2, 4, or 5. The default value is 2.
afmMaxParallelRecoveries
Specifies the number of filesets per gateway node on which event recovery is run. The default
value is 0. When the value is 0, event recovery is run on all filesets of the gateway node.
afmNumReadThreads
Defines the number of threads that can be used on each participating gateway node during
parallel read. The default value of this parameter is 1; that is, one reader thread will be active on
every gateway node for each big read operation qualifying for splitting per the parallel read
threshold value. The valid range of values is 1 to 64.
afmNumWriteThreads
Defines the number of threads that can be used on each participating gateway node during
parallel write. The default value of this parameter is 1; that is, one writer thread will be active on
every gateway node for each big write operation qualifying for splitting per the parallel write
threshold value. Valid values can range from 1 to 64.
afmParallelMounts
When this parameter is enabled, the primary gateway node of a fileset at a cache cluster attempts
to mount the exported path from multiple NFS servers that are defined in the mapping. Then, this
primary gateway node sends unique messages through each NFS mount to improve performance
by transferring data in parallel.
Before enabling this parameter, define the mapping between the primary gateway node and NFS
servers by issuing the mmafmconfig command.
afmParallelReadChunkSize
Defines the minimum chunk size of the read that needs to be distributed among the gateway
nodes during parallel reads. Values are interpreted in terms of bytes. The default value of this
parameter is 128 MiB, and the valid range of values is 0 to 2147483647. It can be changed cluster
wide with the mmchconfig command. It can be set at fileset level using mmcrfileset or
mmchfileset commands.
afmParallelReadThreshold
Defines the threshold beyond which parallel reads become effective. Reads are split into chunks
when file size exceeds this threshold value. Values are interpreted in terms of MiB. The default
value is 1024 MiB. The valid range of values is 0 to 2147483647. It can be changed cluster wide
with the mmchconfig command. It can be set at fileset level using mmcrfileset or
mmchfileset commands.
afmParallelWriteChunkSize
Defines the minimum chunk size of the write that needs to be distributed among the gateway
nodes during parallel writes. Values are interpreted in terms of bytes. The default value of this
parameter is 128 MiB, and the valid range of values is 0 to 2147483647. It can be changed cluster
wide with the mmchconfig command. It can be set at fileset level using mmcrfileset or
mmchfileset commands.
afmParallelWriteThreshold
Defines the threshold beyond which parallel writes become effective. Writes are split into chunks
when file size exceeds this threshold value. Values are interpreted in terms of MiB. The default
value of this parameter is 1024 MiB, and the valid range of values is 0 to 2147483647. It can be
changed cluster wide with the mmchconfig command. It can be set at fileset level using
mmcrfileset or mmchfileset commands.
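As an illustration only, the following command sets a 256 MiB chunk size (expressed in bytes) and a 2048 MiB threshold cluster wide; the values are examples, not recommendations:
mmchconfig afmParallelWriteChunkSize=268435456,afmParallelWriteThreshold=2048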
afmReadSparseThreshold
Specifies the size in MB for files in cache beyond which sparseness is maintained. For all files
below the specified threshold, sparseness is not maintained.
afmRefreshAsync
Modifies the cache data refresh operation to asynchronous mode. Cache data refresh operation in
asynchronous mode improves performance of applications that query data. Upon readdir or
lookup request, a revalidation request for files or directories is queued as an asynchronous
request to the gateway but the last known synchronized state of the cache data is returned to the
applications. Cache data is refreshed after revalidation with home is complete. Revalidation time
depends on the network availability and its bandwidth.
Valid values are 'no' and 'yes'. With the default value as 'no', cache data is validated with home
synchronously. Specify the value as 'yes' if you want the cache data refresh operation to be in
asynchronous mode.
afmRevalOpWaitTimeout
Specifies the time that AFM waits for a revalidation response from the home cluster. Revalidation
checks whether any changes (data and metadata) are available at home that need to be updated to
the cache cluster. Revalidation is performed when applications trigger operations such as lookup
or open at the cache. If revalidation is not completed within this time, AFM cancels the operation
and returns the data that is available at the cache to the application.
The default value is 180. The range of valid values is 5 through 2147483647.
afmRPO
Specifies the recovery point objective (RPO) interval for an AFM DR fileset. This attribute is
disabled by default. You can specify a value with the suffix M for minutes, H for hours, or W for
weeks. For example, for 12 hours specify 12H. If you do not add a suffix, the value is assumed to
be in minutes. The range of valid values is 720 minutes - 2147483647 minutes.
afmSecondaryRW
Specifies if the secondary is read-write or not.
yes
Specifies that the secondary is read-write.
no
Specifies that the secondary is not read-write.
afmShowHomeSnapshot
Controls the visibility of the home snapshot directory in cache. For this to be visible in cache, this
variable has to be set to yes, and the snapshot directory name in the cache and home cannot be
the same.
yes
Specifies that the home snapshot link directory is visible.
no
Specifies that the home snapshot link directory is not visible.
See Peer snapshot -psnap in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
afmSyncOpWaitTimeout
Specifies the time that AFM or AFM DR waits for completion of any inflight synchronous operation
which is synchronizing with the home or primary cluster. When an application performs a
synchronous operation at the cache or secondary, AFM or AFM DR tries to get a response from the
home or primary cluster. If the home or primary cluster is not responding, the application might
become unresponsive. If the operation does not complete in this timeout interval, AFM or AFM DR
cancels the operation.
The default value is 180. The range of valid values is 5 through 2147483647.
atimeDeferredSeconds
Controls the update behavior of atime when the relatime option is enabled. The default value is
86400 seconds (24 hours). A value of 0 effectively disables relatime and causes the behavior to
be the same as the atime setting.
For more information, see the topic Mount options specific to IBM Spectrum Scale in the IBM
Spectrum Scale: Administration Guide.
autoBuildGPL={yes | no | mmbuildgplOptions}
Causes IBM Spectrum Scale to detect when the GPFS portability layer (GPL) needs to be rebuilt
and to rebuild it automatically. A rebuild is triggered if the GPFS kernel module is missing or if a
new level of IBM Spectrum Scale is installed. The mmbuildgpl command is called to do the
rebuild. For the rebuild to be successful, the requirements of the mmbuildgpl command must be
met; in particular, the build tools and kernel headers must be present on each node. This attribute
takes effect when the GPFS daemon is restarted. For more information, see the topics
“mmbuildgpl command” on page 112 and Using the mmbuildgpl command to build the GPFS
portability layer on Linux nodes in the IBM Spectrum Scale: Concepts, Planning, and Installation
Guide.
Note: This parameter does not apply to the AIX and Windows environments.
yes
Causes the GPL to be rebuilt when necessary.
no
Takes no action when the GPL needs to be rebuilt. This is the default value.
mmbuildgplOptions
Causes the GPL to be rebuilt when necessary and causes mmbuildgpl to be called with the
indicated options. This value is a hyphen-separated list of options in any order:
quiet
Causes mmbuildgpl to be called with the --quiet parameter.
verbose
Causes mmbuildgpl to be called with the -v option.
Note that yes, no, and mmbuildgplOptions are mutually exclusive, and that mmbuildgplOptions
implies yes. You cannot specify both yes and mmbuildgplOptions. You can specify both quiet
and verbose on the command line by separating them with a hyphen, as in
autoBuildGPL=quiet-verbose. See Table 14 on page 177.
The -N flag is valid for this attribute.
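For example, to rebuild the GPL automatically and pass both options to mmbuildgpl, the combined value described above can be set as follows:
mmchconfig autoBuildGPL=quiet-verbose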
autoload
Starts GPFS automatically whenever the nodes are rebooted. Valid values are yes or no.
The -N flag is valid for this attribute.
automountDir
Specifies the directory to be used by the Linux automounter for GPFS file systems that are being
mounted automatically. The default directory is /gpfs/automountdir. This parameter does not
apply to AIX and Windows environments.
backgroundSpaceReclaimThreshold
Specifies the percentage of reclaimable blocks that must occur in an allocation space for devices
capable of space reclaim, such as NVMe and thin provisioned disks, to trigger a background space
reclaim. The default value is 0 indicating that background space reclaim is disabled. You can
enable it by setting it to a value larger than 0 but less than or equal to 100. Specifying a lower
value causes the background space reclaim to occur more frequently.
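For example, to enable background space reclaim when 20 percent of an allocation space is reclaimable (an illustrative threshold):
mmchconfig backgroundSpaceReclaimThreshold=20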
cesSharedRoot
Specifies a directory in a GPFS file system to be used by the Cluster Export Services (CES)
subsystem. For the CES shared root, the recommended value is a dedicated file system, but it is
not enforced. The CES shared root can also be a part of an existing GPFS file system. In any case,
cesSharedRoot must reside on GPFS and must be available when it is configured through
mmchconfig.
GPFS must be down on all CES nodes in the cluster when changing the cesSharedRoot attribute.
cifsBypassTraversalChecking
Controls the GPFS behavior while performing access checks for directories.
GPFS grants the SEARCH access when the following conditions are met:
• The object is a directory
• The parameter value is yes
• The calling process is a Samba process
GPFS grants the SEARCH access regardless of the mode or ACL.
cipherList
Sets the security mode for the cluster. The security mode determines the level of the security that
the cluster provides for communications between nodes in the cluster and also for
communications with other clusters. There are three security modes:
EMPTY
The sending node and the receiving node do not authenticate each other, do not encrypt
transmitted data, and do not check data integrity.
AUTHONLY
The sending and receiving nodes authenticate each other, but they do not encrypt transmitted
data and do not check data integrity. This mode is the default in IBM Spectrum Scale V4.2 or
later.
Cipher
The sending and receiving nodes authenticate each other, encrypt transmitted data, and check
data integrity. To set this mode, you must specify the name of a supported cipher, such as
AES128-GCM-SHA256.
Note: Although after mmchconfig is issued, the mmfsd daemon accepts the new
cipherList immediately, it uses the cipher only for new TCP/TLS connections to other
nodes. Existing mmfsd daemon connections remain with the prior cipherList settings. For
the new cipherList to take complete effect immediately, GPFS needs to be restarted on all
nodes in a rolling fashion, one node at a time, to prevent cluster outage. When a cipher other
than AUTHONLY or EMPTY is in effect, it can lead to significant performance degradation, as
this results in encryption and data integrity verification of the transmitted data.
For more information about the security mode and supported ciphers, see the topic Security mode
in the IBM Spectrum Scale: Administration Guide.
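For example, to enable authentication, encryption, and integrity checking with the supported cipher named above, set the security mode as follows and then restart GPFS on all nodes in a rolling fashion:
mmchconfig cipherList=AES128-GCM-SHA256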
cnfsGrace
Specifies the number of seconds a CNFS node will deny new client requests after a node failover
or failback, to allow clients with existing locks to reclaim them without the possibility of another
client being granted conflicting access. For v3, only new lock requests are denied.
For v4, new lock, read, and write requests are rejected. The cnfsGrace value also determines the
time period for the server lease.
Valid values are 10 - 600. The default is 90 seconds. While a short grace period is good for fast
server failover, it comes at the cost of increased load on server to effect lease renewal.
GPFS must be down on all CNFS nodes in the cluster when changing the cnfsGrace attribute.
cnfsMountdPort
Specifies the port number to be used for rpc.mountd. See the IBM Spectrum Scale: Administration
Guide for restrictions and additional information.
cnfsNFSDprocs
Specifies the number of nfsd kernel threads. The default is 32.
cnfsReboot
Specifies whether the node reboots when CNFS monitoring detects an unrecoverable problem
that can be handled only by node failover.
Valid values are yes or no. The default is yes, which is the recommended setting. If node reboot is
not desired for other reasons, note that clients that were communicating with the failing node are
likely to get errors or hang. CNFS failover is guaranteed only when cnfsReboot is enabled.
The -N flag is valid for this attribute.
cnfsSharedRoot
Specifies a directory in a GPFS file system to be used by the clustered NFS subsystem.
GPFS must be down on all CNFS nodes in the cluster when changing the cnfsSharedRoot
attribute.
See the IBM Spectrum Scale: Administration Guide for restrictions and additional information.
cnfsVersions
Specifies a comma-separated list of protocol versions that CNFS should start and monitor.
The default is 3,4.
GPFS must be down on all CNFS nodes in the cluster when changing the cnfsVersions attribute.
See the IBM Spectrum Scale: Administration Guide for additional information.
commandAudit
Controls the logging of audit messages for GPFS commands that change the configuration of the
cluster. This attribute is not supported on Windows operating systems. For more information, see
the topic Audit messages for cluster configuration changes in the IBM Spectrum Scale: Problem
Determination Guide.
on
Starts audit messages. Messages go to syslog and the GPFS log.
syslogOnly
Starts audit messages. Messages go to syslog only. This value is the default.
off
Stops audit messages.
The -N flag is valid for this attribute.
confirmShutdownIfHarmful={yes|no}
Specifies whether the mmshutdown command checks that shutting down the listed nodes will
cause a loss of function in the cluster. For more information, see “mmshutdown command” on
page 695.
The default value is yes.
dataDiskCacheProtectionMethod
The dataDiskCacheProtectionMethod parameter defines the cache protection method for
disks that are used for the GPFS file system. The valid values for this parameter are 0, 1, and 2.
The default value is 0. The default value indicates that the disks are Power-Protected and, when
the down disk is started, only the standard GPFS log recovery is required. If the value of this
parameter is 1, the disks are Power-Protected with no disk cache. GPFS works the same as before.
If the value of this parameter is 2, when a node stops functioning, files that have data in disk
cache must be recovered to a consistent state when the disk is started.
This parameter impacts only disks in the FPO storage pool. If the physical disk-write cache is
enabled, the value of this parameter must be set to 2. Otherwise, maintain the default.
dataDiskWaitTimeForRecovery
Specifies a period, in seconds, during which the recovery of dataOnly disks is suspended to give
the disk subsystem a chance to correct itself. This parameter is taken into account when the
affected disks belong to a single failure group. If more than one failure group is affected, the delay
is based on the value of minDiskWaitTimeForRecovery.
Valid values are 0 - 3600 seconds. The default is 3600. If restripeOnDiskFailure is no,
dataDiskWaitTimeForRecovery has no effect.
dataStructureDump
Specifies a path for storing dumps. You can specify a directory or a symbolic link. The default is to
store dumps in /tmp/mmfs. This attribute takes effect immediately whether or not -i is specified.
It is a good idea to create a directory or a symbolic link for problem determination information. Do
not put it in a GPFS file system, because it might not be available if GPFS fails. When a problem
occurs, GPFS can write 200 MiB or more of problem determination data into the directory. Copy
and delete the files promptly so that you do not get a NOSPACE error if another failure occurs.
Important: Before you change the value of dataStructureDump, stop the GPFS trace.
Otherwise you will lose GPFS trace data. Restart the GPFS trace afterward. For more information,
see the topic Generating GPFS trace reports in the IBM Spectrum Scale: Problem Determination
Guide.
The -N flag is valid for this attribute.
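For example, dumps might be redirected to a dedicated local directory on one node; the path /scratch/mmfsdumps and the node name node1 are hypothetical:
mmchconfig dataStructureDump=/scratch/mmfsdumps -N node1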
deadlockBreakupDelay
Specifies how long to wait after a deadlock is detected before attempting to break up the
deadlock. Enough time must be provided to allow the debug data collection to complete.
The default is 0, which means that the automated deadlock breakup is disabled. A positive value
enables the automated deadlock breakup. If automated deadlock breakup is to be enabled, a
delay of 300 seconds or longer is recommended.
deadlockDataCollectionDailyLimit
Specifies the maximum number of times that debug data can be collected each day.
The default is 3. If the value is 0, then no debug data is collected when a potential deadlock is
detected.
deadlockDataCollectionMinInterval
Specifies the minimum interval between two consecutive collections of debug data.
The default is 3600 seconds.
deadlockDetectionThreshold
Specifies the initial deadlock detection threshold. The effective deadlock detection threshold
adjusts itself over time. A suspected deadlock is detected when a waiter waits longer than the
effective deadlock detection threshold.
The default is 300 seconds. If the value is 0, then automated deadlock detection is disabled.
deadlockDetectionThresholdForShortWaiters
Specifies the deadlock detection threshold for short waiters. The default value is 60 seconds. Do
not set a large value, because short waiters are supposed to complete and disappear quickly.
deadlockOverloadThreshold
Specifies the threshold for detecting a cluster overload condition. If the overload index on a node
exceeds the deadlockOverloadThreshold, then the effective deadlockDetectionThreshold is
raised. The overload index is calculated heuristically and is based mainly on the I/O completion
times.
The default is 1. If the value is 0, then overload detection is disabled.
debugDataControl
Controls the amount of debug data that is collected. This attribute takes effect immediately
whether or not -i is specified. The -N flag is valid for this attribute.
none
No debug data is collected.
light
The minimum amount of debug data that is most important for debugging issues is collected.
This is the default value.
medium
More debug data is collected.
heavy
The maximum amount of debug data is collected, targeting internal test systems.
verbose
Needed only for troubleshooting special cases and can result in large dumps.
The following table provides more information about these settings:
defaultHelperNodes
Specifies a default set of nodes that can be used by commands that are able to distribute work to
multiple nodes. To specify values for this parameter, follow the rules that are described for the -N
option in the topic Specifying nodes as input to GPFS commands in the IBM Spectrum Scale:
Administration Guide.
To override this setting when you use such commands, explicitly specify the helper nodes with the
-N option of the command that you issue.
The following commands can use the nodes that this parameter provides: mmadddisk,
mmapplypolicy, mmbackup, mmchdisk, mmcheckquota, mmdefragfs, mmdeldisk,
mmdelsnapshot, mmfileid, mmfsck, mmimgbackup, mmimgrestore, mmrestorefs,
mmrestripefs, and mmrpldisk.
When the command runs, it lists the NodeClass values.
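For example, a previously defined node class (here the hypothetical class helperNodes) can be designated as the default set of helper nodes:
mmchconfig defaultHelperNodes=helperNodes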
defaultMountDir
Specifies the default parent directory for GPFS file systems. The default value is /gpfs. If an
explicit mount directory is not provided with the mmcrfs, mmchfs, or mmremotefs command, the
default mount point is set to DefaultMountDir/DeviceName.
dioSmallSeqWriteBatching={yes | no}
Controls whether GPFS enables a performance optimization that allows multiple Direct I/O (DIO)
Asynchronous Input/Output (AIO) write requests to be handled as buffered I/O and be batched
together into larger write operations. When enabled, GPFS tries to combine multiple small
sequential asynchronous Direct I/O writes when committing the writes to storage. Valid values are
yes or no. The default value is no.
When dioSmallSeqWriteBatching is set to yes, GPFS holds small (up to 64 KiB) AIO/DIO
write requests for a few microseconds to allow the held requests to be combined with additional
contiguous writes that might occur.
disableInodeUpdateOnFdatasync
Controls the inode update on fdatasync for mtime and atime updates. Valid values are yes or no.
When disableInodeUpdateOnFdatasync is set to yes, the inode object is not updated on disk
for mtime and atime updates on fdatasync() calls. File size updates are always synced to the
disk.
When disableInodeUpdateOnFdatasync is set to no, the inode object is updated with the
current mtime on fdatasync() calls. This is the default.
diskReadExclusionList
Specifies the list of NSD names that should be excluded from data block reads. It is used for
resolving data block replica mismatches. Separate the NSD names with a semicolon (;) and
enclose the list in quotes. For example:
diskReadExclusionList="gpfs1nsd;gpfs2nsd;gpfs3nsd"
diskReadExclusionList=""
diskReadExclusionList=DEFAULT
For more information, see the Replica mismatches topic in the IBM Spectrum Scale: Problem
Determination Guide.
dmapiDataEventRetry
Controls how GPFS handles data events that are enabled again immediately after the event is
handled by the DMAPI application. Valid values are as follows:
-1
Specifies that GPFS always regenerates the event as long as it is enabled. This value should be
used only when the DMAPI application recalls and migrates the same file in parallel by many
processes at the same time.
0
Specifies to never regenerate the event. Do not use this value if a file might be migrated and
recalled at the same time.
RetryCount
Specifies the number of times the data event should be retried. The default is 2.
For further information regarding DMAPI for GPFS, see GPFS-specific DMAPI events in the IBM
Spectrum Scale: Command and Programming Reference.
dmapiEventTimeout
Controls the blocking of file operation threads of NFS, while in the kernel waiting for the handling
of a DMAPI synchronous event. The parameter value is the maximum time, in milliseconds, the
thread blocks. When this time expires, the file operation returns ENOTREADY, and the event
continues asynchronously. The NFS server is expected to repeatedly retry the operation, which
eventually finds the response of the original event and continues. This mechanism applies only to
read, write, and truncate event types, and only when such events come from NFS server threads.
The timeout value is given in milliseconds. The value 0 indicates immediate timeout (fully
asynchronous event). A value greater than or equal to 86400000 (which is 24 hours) is considered
infinity (no timeout, fully synchronous event). The default value is 86400000.
For the parameter change to take effect, restart the GPFS daemon on the nodes that are specified
in the -N option. If the -N option is not used, restart the GPFS daemon on all nodes.
For further information about DMAPI for GPFS, see GPFS-specific DMAPI events in the IBM
Spectrum Scale: Command and Programming Reference.
The -N flag is valid for this attribute.
dmapiMountEvent
Controls the generation of the mount, preunmount, and unmount events. Valid values are:
all
mount, preunmount, and unmount events are generated on each node. This is the default
behavior.
SessionNode
mount, preunmount, and unmount events are generated on each node and are delivered to
the session node, but the session node does not deliver the event to the DMAPI application
unless the event originated from the session node itself.
LocalNode
mount, preunmount, and unmount events are generated only if the node is a session node.
For further information regarding DMAPI for GPFS, see GPFS-specific DMAPI events in the IBM
Spectrum Scale: Command and Programming Reference.
dmapiMountTimeout
Controls the blocking of mount operations, waiting for a disposition for the mount event to be set.
This timeout is activated, at most once on each node, by the first external mount of a file system
that has DMAPI enabled, and only if there has never before been a mount disposition. Any mount
operation on this node that starts while the timeout period is active waits for the mount
disposition. The parameter value is the maximum time, in seconds, that the mount operation
waits for a disposition. When this time expires and there is still no disposition for the mount event,
the mount operation fails, returning the EIO error. The timeout value is given in full seconds. The
value 0 indicates immediate timeout (immediate failure of the mount operation). A value greater
than or equal to 86400 (which is 24 hours) is considered infinity (no timeout, indefinite blocking
until there is a disposition). The default value is 60.
The -N flag is valid for this attribute.
For further information regarding DMAPI for GPFS, see GPFS-specific DMAPI events in the IBM
Spectrum Scale: Command and Programming Reference.
dmapiSessionFailureTimeout
Controls the blocking of file operation threads, while in the kernel, waiting for the handling of a
DMAPI synchronous event that is enqueued on a session that has experienced a failure. The
parameter value is the maximum time, in seconds, the thread waits for the recovery of the failed
session. When this time expires and the session has not yet recovered, the event is canceled and
the file operation fails, returning the EIO error. The timeout value is given in full seconds. The
value 0 indicates immediate timeout (immediate failure of the file operation). A value greater than
or equal to 86400 (which is 24 hours) is considered infinity (no timeout, indefinite blocking until
the session recovers). The default value is 0.
For further information regarding DMAPI for GPFS, see GPFS-specific DMAPI events in the IBM
Spectrum Scale: Command and Programming Reference.
The -N flag is valid for this attribute.
enableIPv6
Controls whether the GPFS daemons communicate through the IPv6 network. The following
values are valid:
no
Specifies that the GPFS daemons do not communicate through the IPv6 network. This is the
default value.
Note: If any of the node interfaces that are specified for the mmcrcluster command resolves
to an IPv6 address, the mmcrcluster command automatically enables the new cluster for
IPv6 and sets the enableIPv6 attribute to yes. For more information, see Enabling a cluster
for IPv6 in the IBM Spectrum Scale: Administration Guide.
yes
Specifies that the GPFS daemons communicate through the IPv6 network. yes requires that
the daemon be down on all nodes.
prepare
After the command completes, the daemons can be recycled on all nodes at a time chosen by
the user (before proceeding to run the command with commit specified).
commit
Verifies that all currently active daemons have received the new value, allowing the user to
add IPv6 nodes to the cluster.
Note:
Before changing the value of enableIPv6, the GPFS daemon on the primary configuration server
must be inactive. After changing the parameter, the GPFS daemon on the rest of nodes in the
cluster should be recycled. This can be done one node at a time.
To use IPv6 addresses for GPFS, the operating system must be properly configured as IPv6
enabled, and IPv6 addresses must be configured on all the nodes within the cluster.
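For example, a rolling enablement that avoids a full cluster outage might proceed as follows, based on the prepare and commit values described above:
mmchconfig enableIPv6=prepare
(recycle the GPFS daemon on each node, one node at a time)
mmchconfig enableIPv6=commit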
encryptionKeyCacheExpiration
Specifies the refresh interval, in seconds, of the file system encryption key cache that is used
internally by the mmfsd daemon. The default value of this parameter is 900 seconds. The refresh
operation of the encryption key cache requires the remote key server to be accessible and
functional. A restart of the GPFS mmfsd daemon is required for any change in the value of this
parameter to become effective. For more information, see Encryption keys in the IBM Spectrum
Scale: Administration Guide.
A value of 0 indicates that the encryption key cache does not expire and it is not periodically
refreshed.
A value of 60 (seconds) or greater indicates that the encryption key cache expires after the
specified amount of time. The encryption key cache is refreshed automatically by the mmfsd
daemon from the remote key server. No administrative action is required.
Note:
Changing the value of the encryptionKeyCacheExpiration requires cluster services (mmfsd)
to be restarted on each node in order to take effect.
Though a value of 0 means the encryption key cache does not expire and is not refreshed
periodically, connectivity to the remote key server is required from all nodes that access an
encrypted file system. Regardless of the value of the encryptionKeyCacheExpiration, access
to the remote key server is required after the following scenarios:
• When you are unmounting or mounting an encrypted file system.
• When you are using the mmchpolicy command to install a new policy for the file system, even if
the key does not change.
• When a KMIP client is registered or deregistered by using the mmkeyserv client register
or mmkeyserv client deregister command.
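For example, to shorten the key cache refresh interval to 30 minutes (1800 seconds, an illustrative value), issue the following command and then restart mmfsd on each node:
mmchconfig encryptionKeyCacheExpiration=1800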
enforceFilesetQuotaOnRoot
Controls whether fileset quotas should be enforced for the root user the same way as for any other
users. Valid values are yes or no. The default is no.
expelDataCollectionDailyLimit
Specifies the maximum number of times that debug data associated with expelling nodes can be
collected in a 24-hour period. Sometimes exceptions are made to help capture the most relevant
debug data.
The default is 3. If the value is 0, then no expel-related debug data is collected.
expelDataCollectionMinInterval
Specifies the minimum interval, in seconds, between two consecutive expel-related data
collection attempts on the same node.
The default is 3600 seconds.
failureDetectionTime
Indicates to GPFS the amount of time it takes to detect that a node has failed.
GPFS must be down on all the nodes when changing the failureDetectionTime attribute.
fastestPolicyCmpThreshold
Indicates the disk comparison count threshold, above which GPFS forces selection of this disk as
the preferred disk to read and update its current speed.
Valid values are >= 3. The default is 50. In a system with SSD and regular disks, the value of the
fastestPolicyCmpThreshold parameter can be set to a greater number to let GPFS refresh
the speed statistics for slower disks less frequently.
fastestPolicyMaxValidPeriod
Indicates the time period after which the disk's current evaluation is considered invalid (even if its
comparison count has exceeded the threshold) and GPFS prefers to read this disk in the next
selection to update its latest speed evaluation.
Valid values are >= 1 in seconds. The default is 600 (10 minutes).
fastestPolicyMinDiffPercent
A percentage value indicating how GPFS selects the fastest between two disks. For example, if
you use the default fastestPolicyMinDiffPercent value of 50, GPFS selects a disk as faster only if it
is 50% faster than the other. Otherwise, the disks remain in the existing read order.
Valid values are 0 - 100 in percentage points. The default is 50.
fastestPolicyNumReadSamples
Controls how many read samples are taken to evaluate the disk's recent speed.
Valid values are 3 - 100. The default is 5.
fileHeatLossPercent
The file heat attribute of a file increases in value when the file is accessed but decreases in value
over time if the file is not accessed. The fileHeatLossPercent attribute specifies the percent
of file access heat that an unaccessed file loses at the end of each tracking period. The valid range
is 0 - 100. The default value is 10, which indicates that an unaccessed file loses 10 percent of its
file access heat at the end of each tracking period. The tracking period is set by
fileHeatPeriodMinutes. For more information, see the topic File heat: Tracking the file access
temperature in the IBM Spectrum Scale: Command and Programming Reference.
This attribute does not take effect until the GPFS daemon is stopped and restarted.
fileHeatPeriodMinutes
A nonzero value enables file heat tracking and specifies the frequency with which the file heat
attribute is updated. A value of 0 disables file access heat tracking. The default value is 0. For
more information, see the topic File heat: Tracking the file access temperature in the IBM Spectrum
Scale: Command and Programming Reference.
This attribute does not take effect until the GPFS daemon is stopped and restarted.
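For example, file heat tracking can be enabled with a 24-hour tracking period while keeping the default 10 percent loss rate (illustrative values):
mmchconfig fileHeatPeriodMinutes=1440,fileHeatLossPercent=10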
FIPS1402mode
Controls whether GPFS uses a FIPS-140-2-compliant encryption module for encrypted
communications between nodes and for file encryption. Valid values are yes or no. The default
value is no.
When it is enabled, FIPS 140-2 mode applies only to the following two features of IBM Spectrum
Scale:
• Encryption and decryption of file data when it is transmitted between nodes in the current
cluster or between a node in the current cluster and a node in another cluster. To enable this
feature, issue the following command:
mmchconfig cipherList=SupportedCipher
where SupportedCipher is a cipher that is supported by IBM Spectrum Scale, such as AES128-
GCM-SHA256. For more information, see the following topics:
– Security mode in the IBM Spectrum Scale: Administration Guide.
– Setting security mode for internode communications in a cluster in the IBM Spectrum Scale:
Administration Guide.
• Encryption of file data as it is written to storage media and decryption of file data as it is read
from storage media. For more information about file data encryption, see the following section
of the documentation:
– Encryption in the IBM Spectrum Scale: Administration Guide.
Note: For performance reasons, do not enable FIPS 140-2 mode unless all the nodes in the
cluster are running FIPS-certified kernels in FIPS mode. This note applies only to encryption of
file data as it is written to storage media and decryption of file data as it is read from storage
media. This note does not apply to encryption and decryption of file data when it is transmitted
between nodes.
FIPS 140-2 mode does not apply to other components of IBM Spectrum Scale that use
encryption, such as object encryption.
frequentLeaveCountThreshold
Specifies the number of times a node exits the cluster within the last
frequentLeaveTimespanMinutes before autorecovery ignores the next exit of that node. If the
exit count of a node within the last frequentLeaveTimespanMinutes is greater than
frequentLeaveCountThreshold, autorecovery ignores the corresponding node exit.
The valid values are 0 - 10. The default is 0, which means that autorecovery always handles the exit
of a node no matter how frequently the node exits.
If restripeOnDiskFailure is no, frequentLeaveCountThreshold has no effect.
frequentLeaveTimespanMinutes
Specifies the time span that is used to calculate the exit frequency of a node. If the exit count of a
node within the last frequentLeaveTimespanMinutes is greater than the
frequentLeaveCountThreshold, autorecovery ignores the corresponding node exit.
The valid values are 1 - 1440. The default is 60.
If restripeOnDiskFailure is no, frequentLeaveTimespanMinutes has no effect.
ignorePrefetchLUNCount
The GPFS client node calculates the number of sequential access prefetch and write-behind
threads to run concurrently for each file system by using the count of the number of LUNs in the
file system and the value of maxMBpS. However, if the LUNs being used are composed of multiple
physical disks, this calculation can underestimate the amount of I/O that can be done concurrently.
Setting the value of the ignorePrefetchLUNCount parameter to yes excludes the LUN count
from this calculation and uses the maxMBpS value to dynamically determine the number of
prefetch threads to schedule, up to the prefetchThreads value.
This parameter impacts only the GPFS client node. The GPFS NSD server does not include this
parameter in the calculation.
The valid values for this parameter are yes and no. The default value is no and can be used in
traditional LUNs where one LUN maps to a single disk or an n+mP array. Set the value of this
parameter to yes when the LUNs presented to GPFS are made up of a large number of physical
disks.
The -N flag is valid for this attribute.
ignoreReplicationForQuota
Specifies whether the quota commands ignore data replication factor. Valid values are yes or no.
The default value is no.
The ignoreReplicationForQuota parameter hides the data replication factor for both the input
and the output of quota commands. This parameter adjusts the values of quota commands according
to the data replication factor. For example, if the data replication factor is 2, then for every
block in the file, there are effectively 2 blocks being used. For a file of 1 MB, internally 2 MB is
allocated, but for the end user the file must use only 1 MB of quota. Without the
ignoreReplicationForQuota attribute, the quota management reports the file size as 2 MB.
Similarly, quota command inputs are also adjusted with the replication factor. For example, a 1 GB
quota limits the overall sum of file sizes to 1 GB, even though internally, the data block usage is 2
GB because of the replication factor.
ignoreReplicationOnStatfs
Specifies whether df command output on GPFS file system ignores data replication factor. Valid
values are yes or no. The default value is no.
The ignoreReplicationOnStatfs parameter ignores the replication factor and helps to report
only the actual file size when a df command is used. For example, if the data replication factor is
2, then for every block in the file, there are effectively 2 blocks being used. For a file of 1 MB,
internally 2 MB is allocated, but the actual file size for the end user is 1 MB. Without the
ignoreReplicationOnStatfs attribute, the df command reports the file size as 2 MB.
linuxStatfsUnits={posix | subblock | fullblock}
Controls the values that are returned by the Linux functions statfs and statvfs for f_bsize,
f_frsize, f_blocks, and f_bfree:
Table 16. Values returned by statvfs or statfs for different settings of linuxStatfsUnits

linuxStatfsUnits   f_bsize         f_frsize        f_blocks             f_bfree
posix              Block size      Subblock size   Units of subblocks   Units of subblocks
subblock           Subblock size   Subblock size   Units of subblocks   Units of subblocks
fullblock          Block size      Block size      Units of blocks      Units of blocks
posix
Returns the correct values as they are specified by POSIX for statvfs. This setting might
break Linux applications that are written for an earlier version of the statfs function and that
incorrectly assume that file system capacity (f_blocks) and free space (f_bfree) are
reported in units given by f_bsize rather than f_frsize.
subblock
Returns values that result in correct disk space requirement calculations but that do not break
earlier Linux applications.
fullblock
Returns the same values as do versions of IBM Spectrum Scale that are earlier than 5.0.3. This
is the default value.
Note:
• The posix value is preferable for applications that use the POSIX-compliant statvfs function.
• Linux applications that were built with the earlier Linux statfs function might depend on the
behavior that is provided by the fullblock option.
For more information, see the description of the -b Blocksize option in the topic “mmcrfs
command” on page 315.
The -N flag is valid for this attribute.
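For example, to select POSIX-compliant statvfs reporting cluster wide:
mmchconfig linuxStatfsUnits=posix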
lrocData
Controls whether user data is populated into the local read-only cache. Other configuration
options can be used to select the data that is eligible for the local read-only cache. When using
more than one such configuration option, data that matches any of the specified criteria is eligible
to be saved.
Valid values are yes or no. The default value is yes.
If lrocData is set to yes, by default the data that was not already in the cache when accessed by
a user is subsequently saved to the local read-only cache. The default behavior can be overridden
using the lrocDataMaxFileSize and lrocDataStubFileSize configuration options to save
all data from small files or all data from the initial portion of large files.
lrocDataMaxFileSize
Limits the data that can be saved in the local read-only cache to only the data from small files.
A value of -1 indicates that all data is eligible to be saved. A value of 0 indicates that small files are
not to be saved. A positive value indicates the maximum size of a file to be considered for the local
read-only cache. For example, a value of 32768 indicates that files with 32 KB of data or less are
eligible to be saved in the local read-only cache. The default value is 0.
lrocDataStubFileSize
Limits the data that can be saved in the local read-only cache to only the data from the first
portion of all files.
A value of -1 indicates that all file data is eligible to be saved. A value of 0 indicates that stub data
is not eligible to be saved. A positive value indicates that the initial portion of each file that is
eligible is to be saved. For example, a value of 32768 indicates that the first 32 KB of data from
each file is eligible to be saved in the local read-only cache. The default value is 0.
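As an illustration, the following command caches whole files of 32 KB or less and the first 1 MB of larger files in the local read-only cache; both values are examples only:
mmchconfig lrocDataMaxFileSize=32768,lrocDataStubFileSize=1048576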
lrocDirectories
Controls whether directory blocks are populated into the local read-only cache. The option also
controls other file system metadata such as indirect blocks, symbolic links, and extended attribute
overflow blocks.
Valid values are yes or no. The default value is yes.
lrocEnableStoringClearText
Controls whether encrypted file data can be read into a local read-only cache (LROC) device. Valid
values are yes and no. The default value is no.
If the value is yes, encrypted files can benefit from the performance improvements that are
provided by an LROC device. However, be aware that IBM Spectrum Scale holds encrypted file
data in memory as cleartext. Because LROC storage is non-volatile, an attacker can capture the
cleartext by removing the LROC device from the system and reading the contents at some other
location.
Warning: You must take steps to protect the cleartext while it is in LROC device storage. One
method is to install an LROC device that internally encrypts data that is written into it and decrypts
data that is read from it. However, be aware that a device of this type voids the IBM Spectrum
Scale secure deletion guarantee, because IBM Spectrum Scale does not manage the encryption
key for the device.
For more information, see the following topics:
Encryption and local read-only cache (LROC) in the IBM Spectrum Scale: Administration Guide.
Local read-only cache in the IBM Spectrum Scale: Administration Guide.
lrocInodes
Controls whether inodes from open files are populated into the local read-only cache; the cache
contains the full inode, including all disk pointers, extended attributes, and data.
Valid values are yes or no. The default value is yes.
logRecoveryThreadsPerLog
Controls the number of threads that are available for recovering a single log file. The default value
is 8 and the valid range is 1 - 64. Setting a higher value expedites the recovery of single log files
that are being replayed. The improvement in processing speed depends on the log file size and
user workload.
logOpenParallelism
Controls the number of log files that can be opened in parallel during a log recovery. The default
value is 8 and the valid range is 1 - 256. Setting a higher value improves the recovery speed of the
file system when it is mounted on multiple nodes.
logRecoveryParallelism
Controls the number of log files that can be recovered concurrently. The default value is 1 and the
valid range is 1 - 64. Setting a higher value expedites the recovery of the file system when multiple
nodes fail at the same time.
maxActiveIallocSegs
Specifies the number of active inode allocation segments that are maintained on the specified
nodes. The valid range is 1 - 64. A value greater than 1 can significantly improve performance in
the following scenario:
maxFailedNodesForRecovery
Specifies the maximum number of nodes that might be unavailable before automatic disk
recovery actions are canceled.
Valid values are in the range 0 - 300. The default is 3. If restripeOnDiskFailure is no,
maxFailedNodesForRecovery has no effect.
maxFcntlRangesPerFile
Specifies the number of fcntl locks that are allowed per file. The default is 200. The minimum
value is 10 and the maximum value is 200000.
maxFilesToCache
Specifies the number of inodes to cache for open files or files that are recently closed. This
parameter does not limit the number of files that can remain concurrently open on the node.
Storing the inode of a file in cache permits faster re-access to the file. The default is 4000, but
increasing this number might improve throughput for workloads with high file reuse. However,
increasing this number excessively might cause paging at the file system manager node. The value
should be large enough to handle the number of concurrently open files plus allow caching of
recently used files.
The -N flag is valid for this attribute.
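For example, the inode cache might be raised to 10000 entries on a set of client nodes; the node class name clientNodes and the value are illustrative:
mmchconfig maxFilesToCache=10000 -N clientNodes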
maxMBpS
Specifies an estimate of how many megabytes of data can be transferred per second into or out of
a single node. The default is 2048 MiB per second. The value is used in calculating the amount of
I/O that can be done to effectively prefetch data for readers and write-behind data from writers.
By lowering this value, you can artificially limit how much I/O one node can put on all of the disk
servers.
The -N flag is valid for this attribute.
maxMissedPingTimeout
See the minMissedPingTimeout parameter.
maxReceiverThreads
Controls the maximum number of receiver threads that handle incoming TCP packets. The actual
number of receiver threads that are configured is limited by the number of logical CPUs on the
node. If the number of logical CPUs is less than maxReceiverThreads, then the number of
threads that are handling the packets is set to the number of logical CPUs.
The default value of maxReceiverThreads is 16. The valid range is 1 - 128.
This parameter should be increased on large clusters to limit the number of sockets that are
handled by each receive thread. For clusters with 2048 nodes or more, set the parameter to the
number of logical CPUs.
The -N flag is valid for this attribute.
maxStatCache
Specifies the number of inodes to keep in the stat cache. The stat cache maintains only enough
inode information to perform a query on the files kept in the stat cache as would be needed by the
'ls' command. The valid range for maxStatCache is 0 - 100,000,000.
The default value of the maxStatCache parameter depends on the value of maxFilesToCache
parameter in certain scenarios. The following table provides examples of such scenarios:
If you do not accept either of the default values, set maxStatCache to an appropriate size based
on the number of nodes in the cluster, the number of token managers in the cluster, the size of the
Local Read-Only Cache (LROC) if one is configured, and any other relevant factors.
Note: In versions of IBM Spectrum Scale earlier than 5.0.2, the stat cache is not effective on the
Linux platform unless the LROC is configured. In versions earlier than 5.0.2, follow these
guidelines:
• If LROC is not enabled on the node, set maxStatCache to 0.
• If LROC is enabled on the node, accept a default value of maxStatCache or set maxStatCache
to an appropriate size as described in the previous paragraphs.
This pre-5.0.2 restriction applies to all the versions and distributions of Linux that IBM Spectrum
Scale supports.
The -N flag is valid for this attribute.
metadataDiskWaitTimeForRecovery
Specifies a period, in seconds, during which the recovery of metadata disks is suspended to give
the disk subsystem a chance to correct itself. This parameter is taken into account when the
affected disks belong to a single failure group. If more than one failure group is affected, the delay
is based on the value of minDiskWaitTimeForRecovery.
Valid values are 0 - 3600 seconds. The default is 2400. If restripeOnDiskFailure is no,
metadataDiskWaitTimeForRecovery has no effect.
minDiskWaitTimeForRecovery
Specifies a period, in seconds, during which the recovery of disks is suspended to give the disk
subsystem a chance to correct itself. This parameter is taken into account when more than one
failure group is affected. If the affected disks belong to a single failure group, the delay is based on
the values of dataDiskWaitTimeForRecovery and metadataDiskWaitTimeForRecovery.
Valid values are 0 - 3600 seconds. The default is 1800. If restripeOnDiskFailure is no,
minDiskWaitTimeForRecovery has no effect.
minIndBlkDescs
Specifies the total number of indirect blocks in cache where the disk addresses of data blocks or
indirect blocks of files are stored. Each indirect block descriptor caches one indirect block.
Caching these indirect blocks enables faster retrieval of the disk location of the data to be read or
written, thus improving the performance of I/Os.
The default value for minIndBlkDescs is 5000. The effective minIndBlkDescs value is the
maximum of minIndBlkDescs and maxFilesToCache.
For example:
• If the user does not set a value for minIndBlkDescs, then its effective value is MAX (5000,
maxFilesToCache).
• If the user sets a value for minIndBlkDescs, then its effective value is MAX (minIndBlkDescs,
maxFilesToCache).
The minIndBlkDescs value must be large enough to handle the number of concurrently open
files, or to traverse many data blocks in large files, if maxFilesToCache is set to a small value.
The -N flag is valid for this attribute.
minMissedPingTimeout
The minMissedPingTimeout and maxMissedPingTimeout parameters set limits on the
calculation of missedPingTimeout (MPT). The MPT is the allowable time for pings sent from the
Cluster Manager (CM) to a node that has not renewed its lease to fail. The default MPT value is 5
seconds less than leaseRecoveryWait. The CM will wait the MPT seconds after the lease has
expired before declaring a node out of the cluster. The values of the minMissedPingTimeout
and maxMissedPingTimeout are in seconds; the default values are 3 and 60 respectively. If
these values are changed, only GPFS on the quorum nodes that elect the CM must be recycled for
the change to take effect.
This parameter can be used to cover over a central network switch failure timeout or other
network glitches that might be longer than leaseRecoveryWait. This might prevent false node
down conditions, but it extends the time for node recovery to finish and might block other nodes
from progressing if the failing node holds the tokens for many shared files.
As is the case with leaseRecoveryWait, a node is usually expelled from the cluster if there is a
problem with the network or the node runs out of resources like paging. For example, if there is an
application that is running on a node that is paging the machine too much or overrunning network
capacity, GPFS might not have the chance to contact the Cluster Manager node to renew the lease
within the timeout period.
The default value of this parameter is 3. A valid value is any number in the range 1 - 300.
mmapRangeLock
Specifies POSIX or non-POSIX mmap byte-range semantics. Valid values are yes or no (yes is the
default). A value of yes indicates POSIX byte-range semantics apply to mmap operations. A value
of no indicates non-POSIX mmap byte-range semantics apply to mmap operations.
If using InterProcedural Analysis (IPA), turn off this option:
mmchconfig mmapRangeLock=no -i
This allows more lenient intranode locking, but imposes internode whole file range tokens on files
using mmap while writing.
mmfsLogTimeStampISO8601
Controls the time stamp format for GPFS log entries. Specify yes to use the ISO 8601 time stamp
format for log entries or no to use the earlier time stamp format. The default value is yes. You can
specify the log time stamp format for the entire cluster or for individual nodes. You can have
different log time stamp formats on different nodes of the cluster. For more information, see the
Time stamp in GPFS log entries topic in IBM Spectrum Scale: Problem Determination Guide.
The -N flag is valid for this attribute. This attribute takes effect immediately, whether or not -i is
specified.
nfsPrefetchStrategy
With the nfsPrefetchStrategy parameter, GPFS optimizes prefetching for NFS file-style
access patterns. This parameter defines a window of the number of blocks around the current
position that are treated as fuzzy-sequential access. The value of this parameter can improve the
performance while reading large files sequentially. However, because of kernel scheduling, some
read requests that come to GPFS are not sequential. If the file system block size is smaller than
the read request sizes, increasing the value of this parameter provides a bigger window of blocks.
The default value is 0. A valid value is any number in the range 0 - 10.
Setting the value of nfsPrefetchStrategy to 1 or greater can improve the sequential read
performance when large files are accessed by using NFS and the filesystem block size is smaller
than the NFS transfer block size.
nistCompliance
Controls whether GPFS operates in the NIST 800-131A mode. (This applies to security transport
only, not to encryption, as encryption always uses NIST-compliant mechanisms.)
Valid values are:
off
Specifies that there is no compliance to NIST standards. For clusters that are operating below
the GPFS 4.1 level, this is the default. For clusters at the version 5.1 level or higher, setting
nistCompliance to off is not allowed.
SP800-131A
Specifies that security transport is to follow the NIST SP800-131A recommendations. For
clusters at the GPFS 4.1 level or higher, this is the default.
Note: In a remote cluster setup, all clusters must have the same nistCompliance value.
noSpaceEventInterval
Specifies the time interval between calling a callback script of two noDiskSpace events of a file
system. The default value is 120 seconds. If this value is set to zero, the noDiskSpace event is
generated every time the file system encounters the noDiskSpace event. The noDiskSpace
event is generated when a callback script is registered for this event with the mmaddcallback
command.
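For example, to generate the noDiskSpace callback at most once every five minutes:
mmchconfig noSpaceEventInterval=300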
nsdBufSpace
This option specifies the percentage of the page pool that is reserved for the network transfer of
NSD requests. Valid values are within the range of 10 to 70. The default value is 30. On IBM
Spectrum Scale RAID recovery group NSD servers, this value should be decreased to its minimum
of 10, since vdisk-based NSDs are served directly from the RAID buffer pool (as governed by
nsdRAIDBufferPoolSizePct). On all other NSD servers, increasing either this value or the
amount of page pool, or both, could improve NSD server performance. On NSD client-only nodes,
this parameter is ignored. For more information about IBM Spectrum Scale RAID, see IBM
Spectrum Scale RAID: Administration.
The -N flag is valid for this attribute.
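For example, to set the minimum value on the recovery group NSD servers (the node class name
rgServers is illustrative):
mmchconfig nsdBufSpace=10 -N rgServers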
nsdCksumTraditional
This attribute enables checksum data-integrity checking between a traditional NSD client node
and its NSD server. Valid values are yes and no. The default value is no. (Traditional in this context
means that the NSD client and server are configured with IBM Spectrum Scale rather than with
IBM Spectrum Scale RAID. The latter is a component of IBM Elastic Storage Server (ESS) and of
IBM GPFS Storage Server (GSS).)
The checksum procedure detects any corruption by the network of the data in the NSD RPCs that
are exchanged between the NSD client and the server. A checksum error triggers a request to
retransmit the message.
When this attribute is enabled on a client node, the client indicates in each of its requests to the
server that it is using checksums. The server uses checksums only in response to client requests
in which the indicator is set. A client node that accesses a file system that belongs to another
cluster can use checksums in the same way.
You can change the value of this attribute for an entire cluster without shutting down the
mmfsd daemon, or for one or more nodes without restarting the nodes.
Note:
• Enabling this feature can result in significant I/O performance degradation and a considerable
increase in CPU usage.
• To enable checksums for a subset of the nodes in a cluster, issue a command like the following
one:
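mmchconfig nsdCksumTraditional=yes -N Node1,Node2
where Node1,Node2 is the list of nodes on which checksums are to be enabled.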
If a node in the cluster is an NSD server or is configured for IBM Spectrum Scale RAID, this
parameter takes effect on the node only when the GPFS daemon is restarted, even if the -i or -I
option is specified.
The -N flag is valid for this attribute.
pagepoolMaxPhysMemPct
Percentage of physical memory that can be assigned to the page pool. Valid values are 10 - 90
percent. The default is 75 percent (with the exception of Windows, where the default is 50
percent).
The -N flag is valid for this attribute.
panicOnIOHang={yes | no}
Controls whether the GPFS daemon panics the node kernel when a local I/O request is pending in
the kernel for more than five minutes. This attribute applies only to disks that the node is directly
attached to.
yes
Causes the GPFS daemon to panic the node kernel.
no
Takes no action. This is the default value.
This attribute is not supported in the Microsoft Windows environment.
Note: With the diskIOHang event of the mmaddcallback command, you can add notification
and data collection scripts to isolate the reason for a long I/O wait.
The -N flag is valid for this attribute.
pitWorkerThreadsPerNode
Controls the maximum number of threads to be involved in parallel processing on each node that
is serving as a Parallel Inode Traversal (PIT) worker.
By default, when a command that uses the PIT engine is run, the file system manager asks all
nodes in the local cluster to serve as PIT workers; however, you can specify an exact set of nodes
to serve as PIT workers by using the -N option of a PIT command. Note that the current file
system manager node is a mandatory participant, even if it is not in the list of nodes you specify.
On each participating node, up to pitWorkerThreadsPerNode threads can be involved in parallel
processing. The range of accepted values is 0 to 8192. The default value varies within the 2-16
range, depending on the file system configuration. If a file system contains vdisk-based NSD disks,
the default value varies within the 8-64 range.
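For example, to allow up to 16 PIT worker threads on each participating node:
mmchconfig pitWorkerThreadsPerNode=16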
prefetchPct
GPFS uses the prefetchPct parameter as a guideline to limit the page pool space that is to be
used for prefetch and write-behind buffers for active sequential streams. The default value of the
prefetchPct parameter is 20% of the pagepool value. If the workload is sequential with very
little caching of small files or random IO, increase the value of this parameter to 60% of the
pagepool value, so that each stream can have more buffers that are cached for prefetch and
write-behind operations.
The default value of this parameter is 20. A valid value is any number in the range 0 - 60.
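For example, for a workload that is dominated by large sequential streams:
mmchconfig prefetchPct=60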
prefetchThreads
Controls the maximum number of threads that are dedicated to prefetching data for files that are
read sequentially, or to handle sequential write-behind.
Functions in the GPFS daemon dynamically determine the actual degree of parallelism for
prefetching data. The default value is 72. The minimum value is 2. The maximum value of
prefetchThreads plus worker1Threads plus nsdMaxWorkerThreads is 8192 on all 64-bit
platforms.
The -N flag is valid for this attribute.
proactiveReconnect={yes | no}
When enabled, causes nodes to proactively close problematic TCP connections with other nodes
and to reestablish new connections in their place. The default value of the proactiveReconnect
parameter is 'no' when the minimum release level of a cluster is less than 5.0.4. The default value
of the proactiveReconnect parameter is 'yes' when the minimum release level of a cluster is at
least 5.0.4.
yes
Each node monitors the state of TCP connections with other nodes in the cluster and
proactively closes and reestablishes a connection whenever the TCP congestion state and TCP
retransmission timeout (RTO) (similar to the information that is displayed by the ss -i
command in Linux) indicate that data might not be flowing.
In certain environments that are prone to short-term network outages, this feature can
prevent nodes from being expelled from a cluster when TCP connections go into error states
that are caused by packet loss in the network or in the adapter. If a TCP connection is
successfully reestablished and operates normally, nodes on either side of the connection are
not expelled.
no
Disables proactive closing and reestablishing of problematic TCP connections between nodes.
For both the yes and no settings, a message is written to the mmfs.log file whenever a TCP
connection reaches an error state.
This attribute is supported only on Linux.
The -N flag is valid for this attribute.
profile
Specifies a predefined profile of attributes to be applied. System-defined profiles are located
in /usr/lpp/mmfs/profiles/. All the configuration attributes listed under a cluster stanza are
changed as a result of this command. The following system-defined profile names are accepted:
• gpfsProtocolDefaults
• gpfsProtocolRandomIO
A user's profiles must be installed in /var/mmfs/etc/. The profile file specifies GPFS configuration
parameters with values different than the documented defaults. A user-defined profile must not
begin with the string 'gpfs' and must have the .profile suffix.
User-defined profiles consist of the following stanzas:
%cluster:
[CommaSeparatedNodesOrNodeClasses:]ClusterConfigurationAttribute=Value
...
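For example, to apply the system-defined protocol defaults profile:
mmchconfig profile=gpfsProtocolDefaults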
readReplicaPolicy
Specifies the location from which data replicas are read. The valid values are default, local, and
fastest. The default value of this parameter is default. With the fastest value, the replica on
the disk that is considered the fastest is read; GPFS determines disk speed based on the following
parameters:
• fastestPolicyCmpThreshold
• fastestPolicyMaxValidPeriod
• fastestPolicyMinDiffPercent
In a system with SSD and regular disks, the value of fastestPolicyCmpThreshold can be set
to a greater number to let GPFS refresh the speed statistics for the slower disks less frequently.
The default value is maintained for all other configurations.
To return this attribute to the default setting, specify readReplicaPolicy=DEFAULT -i.
readReplicaRuleEnabled
Specifies if the gpfs.readReplicaRule extended attribute or the diskReadExclusionList
configuration option are evaluated during the data block read. The valid values are yes and no. The
default value is no. For more information, see the Replica mismatches topic in the IBM Spectrum
Scale: Problem Determination Guide.
release=LATEST
Increases the minimum release level of a cluster to the latest version of IBM Spectrum Scale that
is supported by all the nodes of the cluster. For example, if the minimum release level of a cluster
is 5.0.1.3 but IBM Spectrum Scale 5.0.2.2 is installed on all the nodes, then release=LATEST
increases the minimum release level of the cluster to 5.0.2.0.
The effect of increasing the minimum release level is to enable the features that are installed with
the version of IBM Spectrum Scale that the new minimum release level specifies. To return to the
preceding example, increasing the minimum release level to 5.0.2.0 enables the features that are
installed with IBM Spectrum Scale 5.0.2.0.
Issuing mmchconfig with the release=LATEST parameter is one of the final steps in upgrading
the nodes of a cluster to a later version of IBM Spectrum Scale. For more information, see the
topic Completing the upgrade to a new level of IBM Spectrum Scale in the IBM Spectrum Scale:
Concepts, Planning, and Installation Guide.
Before you use this parameter, consider any possible unintended consequences. For more
information read the help topic Minimum release level of a cluster in the IBM Spectrum Scale:
Administration Guide.
To process this parameter, the mmchconfig command must access each node in the cluster to
determine the version of IBM Spectrum Scale that is installed. If the command cannot access one
or more of the nodes, it displays an error message and terminates. You must correct the
communication problem and issue the command again. Repeat this process until the command
verifies the information for all the nodes and ends successfully.
This parameter causes the mmchconfig command to fail with an error message if the
cipherList configuration attribute of the cluster is not set to AUTHONLY or higher. For more
information, see the topic Completing the migration to a new level of IBM Spectrum Scale in the
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
To display the minimum release level, issue the following command:
mmlsconfig minReleaseLevel
restripeOnDiskFailure
Specifies whether GPFS will attempt to automatically recover from certain common disk failure
situations.
When a disk experiences a failure and becomes unavailable, the recovery procedure first attempts
to restart the disk; if this fails, the disk is suspended and its data is moved to other disks.
Similarly, when a node joins the cluster, all disks for which the node is responsible are checked
and an attempt is made to restart any that are in a down state.
Whether a file system is subject to a recovery attempt is determined by the maximum replication
values for the file system. If the mmlsfs -M or -R value is greater than one, then the recovery
code is executed. The recovery actions are asynchronous and GPFS continues its processing while
the recovery attempts take place. The results from the recovery actions and any errors that are
encountered are recorded in the /var/adm/ras/autorecovery.log.<timestamp> log.
For more information on GPFS disk fail auto recovery, see Big Data best practices in the IBM
Spectrum Scale wiki in developerWorks®.
rpcPerfNumberDayIntervals
Controls the number of days that aggregated RPC data is saved. Every day the previous 24 hours
of one-hour RPC data is aggregated into a one-day interval.
The default value for rpcPerfNumberDayIntervals is 30, which allows the previous 30 days of
one-day intervals to be displayed. To conserve memory, fewer intervals can be configured to
reduce the number of recent one-day intervals that can be displayed. The values that are allowed
for rpcPerfNumberDayIntervals are in the range 4 - 60.
rpcPerfNumberHourIntervals
Controls the number of hours that aggregated RPC data is saved. Every hour the previous 60
minutes of 1-minute RPC data is aggregated into a one-hour interval.
The default value for rpcPerfNumberHourIntervals is 24, which allows the previous day's
worth of one-hour intervals to be displayed. To conserve memory, fewer intervals can be
configured to reduce the number of recent one-hour intervals that can be displayed. The values
that are allowed for rpcPerfNumberHourIntervals are 4, 6, 8, 12, or 24.
rpcPerfNumberMinuteIntervals
Controls the number of minutes that aggregated RPC data is saved. Every minute the previous 60
seconds of 1-second RPC data is aggregated into a 1-minute interval.
The default value for rpcPerfNumberMinuteIntervals is 60, which allows the previous hour's
worth of 1-minute intervals to be displayed. To conserve memory, fewer intervals can be
configured to reduce the number of recent 1-minute intervals that can be displayed. The values
that are allowed for rpcPerfNumberMinuteIntervals are 4, 5, 6, 10, 12, 15, 20, 30, or 60.
rpcPerfNumberSecondIntervals
Controls the number of seconds that aggregated RPC data is saved. Every second, RPC data is
aggregated into a 1-second interval.
The default value for rpcPerfNumberSecondIntervals is 60, which allows the previous
minute's worth of 1-second intervals to be displayed. To conserve memory, fewer intervals can be
configured to reduce the number of recent 1-second intervals that can be displayed. The values
that are allowed for rpcPerfNumberSecondIntervals are 4, 5, 6, 10, 12, 15, 20, 30, or 60.
rpcPerfRawExecBufferSize
Specifies the number of bytes to allocate for the buffer that is used to store raw RPC execution
statistics. For each RPC received by a node, 16 bytes of associated data is saved in this buffer
when the RPC completes. This circular buffer must be large enough to hold 1 second's worth of
raw execution statistics.
The default value for rpcPerfRawExecBufferSize is 10 MiB, which produces 655360 entries.
The data in this buffer is processed every second. It is a good idea to set the buffer size 10% to
20% larger than what is needed to hold 1 second's worth of data.
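For example, a node that completes about 1,000,000 RPCs per second generates roughly 16 MB of raw
execution data per second; with the suggested margin, a value of about 20 MiB is appropriate (the RPC
rate is illustrative and the value is given in bytes):
mmchconfig rpcPerfRawExecBufferSize=20971520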
rpcPerfRawStatBufferSize
Specifies the number of bytes to allocate for the buffer that is used to store raw RPC performance
statistics. For each RPC sent to another node, 56 bytes of associated data is saved in this buffer
when the reply is received. This circular buffer must be large enough to hold 1 second's worth of
raw performance statistics.
The default value for rpcPerfRawStatBufferSize is 30 MiB, which produces 561737 entries.
The data in this buffer is processed every second. It is a good idea to set the buffer size 10% to
20% larger than what is needed to hold one second's worth of data.
seqDiscardThreshold
With the seqDiscardThreshold parameter, GPFS controls what is done with a page pool buffer
after a sequential read or write access pattern is detected and the buffer is consumed or flushed
by write-behind threads. Discarding the buffer is the highest performing option when a very large file is
read or written sequentially. The default for this value is 1 MiB, which means that if a file is
sequentially read and is greater than 1 MiB, GPFS does not keep the data in cache after
consumption. There are some instances where large files are reread by multiple processes such as
data analytics. In some cases, you can improve the performance of these applications by
increasing the value of the seqDiscardThreshold parameter so that it is larger than the sets of
files that have to be cached. If the value of the seqDiscardThreshold parameter is increased,
GPFS attempts to keep as much data in cache as possible for the files that are below the
threshold.
The value of seqDiscardThreshold is a file size in bytes. The default is 1 MiB. Increase this value
if you want to cache files that are sequentially read or written and are larger than 1 MiB in size.
Ensure that there are enough buffer descriptors to cache the file data. For more information about
buffer descriptors, see the maxBufferDescs parameter.
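For example, to keep data in cache for sequentially read files of up to 2 GiB (the value is a file size in
bytes and assumes sufficient page pool space and buffer descriptors):
mmchconfig seqDiscardThreshold=2147483648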
sharedTmpDir
Specifies a default global work directory where the mmapplypolicy command or the mmbackup
command can store the temporary files that it generates during its processing. The command uses
this directory when no global work directory was specified on the command line with the -g
option. The directory must be in a file system that meets the requirements of the -g option. For
more information, see “mmapplypolicy command” on page 80.
Note: The mmapplypolicy command or the mmbackup command uses this directory regardless
of the format version of the target file system. That is, to take advantage of this attribute, you do
not need to upgrade your file system to file system format version 5.0.1 or later (file system format
number 19.01 or greater).
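For example, to designate a directory in an existing GPFS file system as the default global work
directory (the path shown is illustrative):
mmchconfig sharedTmpDir=/gpfs/fs0/globalwork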
sidAutoMapRangeLength
Controls the length of the reserved range for Windows SID to UNIX ID mapping. See Identity
management on Windows in the IBM Spectrum Scale: Administration Guide for additional
information.
sidAutoMapRangeStart
Specifies the start of the reserved range for Windows SID to UNIX ID mapping. See Identity
management on Windows in the IBM Spectrum Scale: Administration Guide for additional
information.
subnets
Specifies subnets that are used to communicate between nodes in a GPFS cluster or a remote
GPFS cluster.
The subnets option must use the following format:
subnets="Subnet[/ClusterName[;ClusterName...][ Subnet[/ClusterName[;ClusterName...]...]"
where:
Subnet
Is a subnet specification such as 192.168.2.0.
ClusterName
Can be either a cluster name or a shell-style regular expression, which is used to match cluster
names, such as:
CL[23].kgn.ibm.com
Matches CL2.kgn.ibm.com and CL3.kgn.ibm.com.
CL[0-7].kgn.ibm.com
Matches CL0.kgn.ibm.com, CL1.kgn.ibm.com, ... CL7.kgn.ibm.com.
CL*.ibm.com
Matches any cluster name that starts with CL and ends with .ibm.com.
CL?.kgn.ibm.com
Matches any cluster name that starts with CL, is followed by any one character, and then
ends with .kgn.ibm.com.
The order in which you specify the subnets determines the order in which GPFS uses these
subnets to establish connections to the nodes within the cluster. GPFS follows the network
settings of the operating system for a specified subnet address, including the network mask. For
example, if you specify subnets="192.168.2.0" and a 23-bit mask is configured, then the
subnet spans IP addresses 192.168.2.0 - 192.168.3.255. In contrast, with a 25-bit mask,
the subnet spans IP addresses 192.168.2.0 - 192.168.2.127.
GPFS does not impose limits on the number of bits in the subnet mask.
This feature cannot be used to establish fault tolerance or automatic failover. If the interface
corresponding to an IP address in the list is down, GPFS does not use the next one on the list.
When you use subnets, both the interface corresponding to the daemon address and the interface
that matches the subnet settings must be operational.
For more information about subnets, see Using remote access with public and private IP addresses
in the IBM Spectrum Scale: Administration Guide.
Specifying a cluster name or a cluster name pattern for each subnet is needed only when a private
network is shared across clusters. If the use of a private network is confined within the local
cluster, then you do not need to specify the cluster name in the subnet specification.
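For example, the following setting specifies two subnets, the second of which is restricted to clusters
whose names match CL*.kgn.ibm.com (the subnet addresses are illustrative):
mmchconfig subnets="192.168.2.0 10.10.1.0/CL*.kgn.ibm.com"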
Limitation and fix: Although there is no upper limit to the number of subnets that can be specified
in the subnets option, a limit does exist as to the number of subnets that are listed in the
subnets option that a given node can be a part of. That limit is seven for nodes that do not have a
fix that increases the limit and 64 for nodes that do have the fix. For example, the 7-subnet limit
precludes the effective use of more than seven network interfaces on a node if each interface
belongs to a distinct subnet that is listed in the subnets option.
The fix that increases the limit to 64 is included as part of the following APARs: IJ06771 for
Version 4.1.1, IJ06770 for 4.2.3, and IJ06762 for 5.0.1.
If a node exceeds the limit, then some or all of its network interfaces that belong to the subnets in
the subnets option might not be used in communicating with other nodes, with the primary GPFS
daemon interface being used instead.
sudoUser={UserName | DELETE}
Specifies a non-root admin user ID to be used when sudo wrappers are enabled and a root-level
background process calls an administration command directly instead of through sudo. The GPFS
daemon that processes the administration command specifies this non-root user ID instead of the
root ID when it needs to run internal commands on other nodes. For more information, see the
topic Root-level processes that call administration commands directly in the IBM Spectrum Scale:
Administration Guide.
UserName
Enables this feature and specifies the non-root admin user ID.
DELETE
Disables this feature, as in the following example:
mmchconfig sudoUser=DELETE
syncBuffsPerIteration
This parameter is used to expedite buffer flush and the rename operations that are done by
MapReduce jobs.
The default value is 100. It should be set to 1 for the GPFS FPO cluster for Big Data applications.
Keep it as the default value for all other cases.
syncSambaMetadataOps
Is used to enable and disable the syncing of metadata operations that are issued by the SMB
server.
If set to yes, fsync() is used after each metadata operation to provide reasonable failover
behavior on node failure. This ensures that the node taking over can see the metadata changes.
Enabling syncSambaMetadataOps can affect performance due to more sync operations.
If set to no, the additional sync overhead is avoided at the potential risk of losing metadata
updates after a failure.
systemLogLevel
Specifies the minimum severity level for messages that are sent to the system log. The severity
levels from highest to lowest priority are: alert, critical, error, warning, notice,
configuration, informational, detail, and debug. The value that is specified for this
attribute can be any severity level, or the value none can be specified so no messages are sent to
the system log. The default value is notice.
GPFS generates some critical log messages that are always sent to the system logging service.
This attribute only affects messages originating in the GPFS daemon (mmfsd). Log messages
originating in some administrative commands are only stored in the GPFS log file.
This attribute is only valid for Linux nodes.
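For example, to send only messages with severity error or higher to the system log:
mmchconfig systemLogLevel=error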
tiebreakerDisks
Controls whether GPFS uses the node-quorum-with-tiebreaker algorithm in place of the regular
node-based quorum algorithm. See the IBM Spectrum Scale: Concepts, Planning, and Installation
Guide. To enable this feature, specify the names of 1 - 3 disks. Separate the NSD names with a
semicolon (;) and enclose the list in quotes. The disks do not have to belong to any particular file
system, but must be directly accessible from the quorum nodes. For example:
tiebreakerDisks="gpfs1nsd;gpfs2nsd;gpfs3nsd"
To disable this feature, specify:
tiebreakerDisks=no
When you change the tiebreaker disks, be aware of the following requirements:
• In a CCR-based cluster, if the disks that are specified in the tiebreakerDisks parameter
belong to any file system, the GPFS daemon does not need to be up on all of the nodes in the
cluster. However, any file system that contains the disks must be available to run the mmlsfs
command.
• In a traditional server-based (non-CCR) configuration repository cluster, the GPFS daemon must
be shut down on all the nodes of the cluster.
Note: When you add or delete a tiebreakerCheck event, IBM Spectrum Scale must be down on
all the nodes of the cluster. For more information, see “mmaddcallback command” on page 12
and “mmdelcallback command” on page 358.
tscCmdPortRange=Min-Max
Specifies the range of port numbers to be used for extra TCP/IP ports that some administration
commands need for their processing. Defining a port range makes it easier for you to set firewall
rules that allow incoming traffic on only those ports. For more information, see the topic IBM
Spectrum Scale port usage in the IBM Spectrum Scale: Administration Guide.
If you used the spectrumscale installation toolkit to install a version of IBM Spectrum Scale that
is earlier than version 5.0.0, then this attribute is initialized to 60000-61000. Otherwise, this
attribute is initially undefined and the port numbers are dynamically assigned from the range of
ephemeral ports that are provided by the operating system.
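For example, to define the port range that was used by earlier versions of the installation toolkit:
mmchconfig tscCmdPortRange=60000-61000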
uidDomain
Specifies the UID domain name for the cluster.
GPFS must be down on all the nodes when changing the uidDomain attribute.
See the IBM white paper UID Mapping for GPFS in a Multi-cluster Environment (IBM Knowledge
Center (www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_uid/
uid_gpfs.html).
unmountOnDiskFail={yes | no | meta}
Controls how the GPFS daemon responds when it detects a disk failure:
yes
The GPFS daemon force-unmounts the file system that contains the failed disk. Other file
systems on the local node and all the nodes in the cluster continue to function normally,
unless they have a dependency on the failed disk.
Note: The local node can remount the file system when the disk problem is resolved.
Use this setting in the following situations:
• The cluster contains SAN-attached disks in large multinode configurations and does not use
replication.
• A node in the cluster hosts descOnly disks. Other nodes in the cluster should not set this
value. For more information, see the topic Data Mirroring and replication in the IBM
Spectrum Scale: Administration Guide.
no
This setting is the default. The GPFS daemon marks the disk as failed, notifies all nodes that
use the disk that the disk has failed, and continues to function without the failed disk as long as it
can. If the number of failure groups with a failed disk is the same as or greater than the
metadata replication factor, the daemon panics the file system. Note that the metadata
replication factor that is used for checking is the current metadata replication set or actual
replication factor in effect for log files. If all failure groups contain failed disks, then the
daemon panics the file system, instead of marking the disk down.
Note: When the disk problem is resolved, issue the mmchdisk <file system> start
command to make the disk active again.
This setting is appropriate when the node is using metadata-and-data replication, because the
cluster can work from the replica until the failed disk is active again.
meta
This setting has the same effect as no, except that the GPFS daemon does not panic the file
system unless it cannot access any replica of the metadata. In addition, even if all failure
groups contain failed data disks, the daemon still marks the data disks as down instead of
panicking the file system, as it would with the no option.
However, subsequent data flush attempts on failed data disks might still result in panicking
the file system.
Note that the intention of this setting is to allow users to list all directories and read some files,
even if some or all data disks are not available for read.
Important: Set the attribute to meta for FPO deployment or when the metadata replication
factor and the data replication factor are greater than one.
The -N flag is valid for this attribute.
usePersistentReserve
Specifies whether to enable or disable Persistent Reserve (PR) on the disks. Valid values are yes
or no (no is the default). GPFS must be stopped on all nodes when setting this attribute.
To enable PR and to obtain recovery performance improvements, your cluster requires a specific
environment:
• All disks must be PR-capable.
• On AIX, all disks must be hdisks; on Linux, they must be generic (/dev/sd*) or DM-MP
(/dev/dm-*) disks.
• If the disks have defined NSD servers, all NSD server nodes must be running the same operating
system (AIX or Linux).
• If the disks are SAN-attached to all nodes, all nodes in the cluster must be running the same
operating system (AIX or Linux).
For more information, see Reduced recovery time using Persistent Reserve in the IBM Spectrum
Scale: Concepts, Planning, and Installation Guide.
verbsHungRDMATimeout
Specifies the number of seconds that IBM Spectrum Scale waits before waking up a thread that is
waiting for a response to an RDMA request. The default value is 30 seconds. The valid range is 15
- 8640000 seconds. This feature cleans up long-waiting threads that wait for the completion
event of RDMA read, write and send work requests which might never receive a response.
A sample RDMA read log entry is shown:
2020-07-28_19:06:49.972-0400: [I] RDMA hang break: wake up thread 28097
(waiting 33.82 sec for RDMA read on index 0 cookie 1)
verbsPorts
Specifies the addresses for RDMA transfers between an NSD client and server, where an address
can be either of the following identifiers:
• InfiniBand device name, port number, and fabric number
• Network interface name and fabric number
You must enable verbsRdma to enable verbsPorts. If you want to specify a network interface
name and a fabric number, you must enable verbsRdmaCm.
The format for verbsPorts is as follows:
verbsPorts="{ibAddress | niAddress}[ {ibAddress | niAddress} ...]"
where:
ibAddress
Is an InfiniBand address with the following format:
Device[/Port[/Fabric]]
where:
Device
Is the HCA device name.
Port
Is a one-based port number, such as 1 or 2. The default value is 1. If you do not specify a
port, then the port is 1.
Fabric
Is the number of an InfiniBand (IB) fabric (IB subnet on a switch). The default value is 0. If
you do not specify a fabric number, then the fabric number is 0.
niAddress
Is a network interface address with the following format:
Interface[/Fabric]
where:
Interface
Is a network interface name.
Fabric
Is the number of an InfiniBand (IB) fabric (IB subnet on a switch). The default value is 0. If
you do not specify a fabric number, then the fabric number is 0.
For this attribute to take effect, you must restart the GPFS daemon on the nodes on which the
value of verbsPorts changed.
The -N flag is valid for this attribute.
The following examples might be helpful:
• The following assignment creates two RDMA connections between an NSD client and server that
use both ports of a dual-ported adapter with fabric number 7 on port 1 and fabric number 8 on
port 2:
verbsPorts="mlx4_0/1/7 mlx4_0/2/8"
• The following assignment, without the fabric number, creates two RDMA connections between
an NSD client and server that use both ports of a dual-ported adapter with the fabric number
defaulting to 0:
verbsPorts="mthca0/1 mthca0/2"
• The following assignment creates two RDMA connections between an NSD client and server that
use network interface names. The first connection is network interface ib3 with a default fabric
number of 0. The second connection is network interface ib2 with a fabric number of 7:
verbsPorts="ib3 ib2/7"
• The following assignment creates four RDMA connections that include both InfiniBand
addresses and network interface addresses:
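verbsPorts="mlx4_0/1/7 mlx4_0/2/8 ib3 ib2/7"
(The device and interface names in this assignment are illustrative.)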
verbsPortsWaitTimeout
Specifies the number of seconds that the GPFS daemon startup service waits for the RDMA ports
on a node to become active. The default value is 60 seconds. If the timeout expires, the startup
service takes the action that is specified by the attribute
verbsRdmaFailBackTCPIfNotAvailable. This attribute applies only to the RDMA ports that
are specified by the verbsPorts attribute. For more information, see the topic Suboptimal
performance due to VERBS RDMA being inactive in the IBM Spectrum Scale: Problem Determination
Guide.
Note: To monitor the state of the RDMA ports, the GPFS startup service requires the following
commands to be installed on the node:
/usr/sbin/ibstat
/usr/bin/ibdev2netdev
/usr/bin/netstat
The -N flag is valid for this attribute. This attribute takes effect when IBM Spectrum Scale is
restarted.
verbsRdma
Enables or disables InfiniBand RDMA using the Verbs API for data transfers between an NSD
client and NSD server. Valid values are enable or disable.
The -N flag is valid for this attribute.
verbsRdmaCm
Enables or disables the RDMA Connection Manager (RDMA CM or RDMA_CM) using the RDMA_CM
API for establishing connections between an NSD client and NSD server. Valid values are enable
or disable. You must enable verbsRdma to enable verbsRdmaCm.
If RDMA CM is enabled for a node, the node is only able to establish RDMA connections using
RDMA CM to other nodes with verbsRdmaCm enabled. RDMA CM enablement requires IPoIB (IP
over InfiniBand) with an active IP address for each port. Although IPv6 must be enabled, the GPFS
implementation of RDMA CM does not currently support IPv6 addresses, so an IPv4 address must
be used.
If verbsRdmaCm is not enabled when verbsRdma is enabled, the older method of RDMA
connection prevails.
The -N flag is valid for this attribute.
verbsRdmaFailBackTCPIfNotAvailable={yes | no}
Specifies the action for the GPFS startup service to take if the timeout that is specified by the
verbsPortsWaitTimeout attribute expires. This attribute applies only to the RDMA ports that
are specified by the verbsPorts attribute.
Note: To monitor the state of the RDMA ports, the GPFS startup service requires the following
commands to be installed on the node:
/usr/sbin/ibstat
/usr/bin/ibdev2netdev
/usr/bin/netstat
yes
The GPFS startup service configures communication for the GPFS daemon based on the
number of active RDMA ports:
• If some of the RDMA ports are active, the GPFS daemon is configured to use the active
RDMA ports for RDMA transfers.
• If none of the RDMA ports are active, the GPFS daemon is configured to use the TCP/IP
connections of the node for RDMA transfers.
no
The startup service exits, even if some of the RDMA ports are active. Correct the problem and
restart IBM Spectrum Scale by running the mmstartup command on the node.
The -N flag is valid for this attribute. This attribute takes effect when IBM Spectrum Scale is
restarted.
verbsRdmaPkey
Specifies an InfiniBand partition key for a connection between the specified node and an
Infiniband server that is included in an InfiniBand partition. This parameter is valid only if
verbsRdmaCm is set to disable.
Only one partition key is supported per IBM Spectrum Scale cluster.
The -N flag is valid for this attribute.
verbsRdmaRoCEToS
Specifies the Type of Service (ToS) value for clusters using RDMA over Converged Ethernet (RoCE).
Acceptable values for this parameter are 0, 8, 16, and 24. The default value is -1.
If the user-specified value is neither the default nor an acceptable value, the script exits with an
error message to indicate that no change has been made. However, a RoCE cluster continues to
operate with an internally set ToS value of 0 even if the mmchconfig command failed. Different
ToS values can be set for different nodes or groups of nodes.
The -N flag is valid for this attribute.
The verbsPorts parameter can use IP netmask/subnet to specify network interfaces to use for
RDMA CM. However, this format is allowed only when verbsRdmaCm=yes. Otherwise these
entries are ignored. This allows the use of VLANs and multiple IP interfaces per IB device in
general.
verbsRdmaSend
Enables or disables the use of InfiniBand RDMA send and receive rather than TCP for most GPFS
daemon-to-daemon communication. Valid values are yes or no. The default value is no. The
verbsRdma option must be enabled and valid verbsPorts must be defined before
verbsRdmaSend can be enabled.
When the attribute is set to no, only data transfers between an NSD server and an NSD client are
eligible for RDMA. When the attribute is set to yes, the GPFS daemon uses InfiniBand RDMA
connections for daemon-to-daemon communications only with nodes that are at IBM Spectrum
Scale 5.0.0 or later.
If verbsRdmaSend is enabled, then the value set for the nsdInlineWriteMax parameter is
ignored. In such cases, you can set the maximum transaction size that can be sent as embedded
data in an NSD-write RPC by using the verbsRecvBufferSize parameter.
The -N flag is valid for this attribute.
verbsRecvBufferCount
Defines the number of RDMA recv buffers created for each RDMA connection that is enabled for
RDMA send when verbsRdmaSend is enabled. The default value is 128.
The -N flag is valid for this attribute.
verbsRecvBufferSize
Defines the size, in bytes, of the RDMA send and recv buffers that are used for RDMA connections
that are enabled for RDMA send when verbsRdmaSend is enabled. This parameter also specifies
the maximum transaction size that can be sent as embedded data in an NSD-write RPC. The
default value is 4096.
The -N flag is valid for this attribute.
workerThreads
Controls an integrated group of variables that tune file system performance. Use this variable to
tune file systems in environments that are capable of high sequential or random read/write
workloads or small-file activity. For new installations of the product, this variable is preferred over
worker1Threads and prefetchThreads.
The default value is 48. If protocols are installed, then the default value is 512. The valid range is
1-8192. However, the maximum value of workerThreads plus prefetchThreads plus
nsdMaxWorkerThreads is 8192. The -N flag is valid with this variable.
This variable controls both internal and external variables. The internal variables include
maximum settings for concurrent file operations, for concurrent threads that flush dirty data and
metadata, and for concurrent threads that prefetch data and metadata. You can further adjust the
external variables with the mmchconfig command:
logBufferCount
prefetchThreads
worker3Threads
The prefetchThreads parameter is described in this help topic. See the Tuning Parameters
article in the IBM Spectrum Scale wiki in developerWorks for descriptions of the
logBufferCount and worker3Threads parameters.
Important: After you set workerThreads to a non-default value, avoid setting
worker1Threads. If you do, at first only worker1Threads is changed. But when IBM Spectrum
Scale is restarted, all corresponding variables are automatically tuned according to the value of
worker1Threads, instead of workerThreads.
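For example, to raise the value on all nodes of the cluster (the value shown is illustrative):
mmchconfig workerThreads=512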
worker1Threads
For some categories of file I/O, this variable controls the maximum number of concurrent file I/O
operations. You can increase this value to increase the I/O performance of the file system.
However, increasing this variable beyond some point might begin to degrade file system
performance.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmchconfig command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To change the maximum file system block size that is allowed to 8 MiB, issue this command:
mmchconfig maxblocksize=8M
To confirm the change, issue this command:
mmlsconfig
See also
• “mmaddnode command” on page 35
• “mmchnode command” on page 241
• “mmcrcluster command” on page 303
• “mmdelnode command” on page 371
• “mmlsconfig command” on page 487
• “mmlscluster command” on page 484
Location
/usr/lpp/mmfs/bin
mmchdisk command
Changes state or parameters of one or more disks in a GPFS file system.
Synopsis
mmchdisk Device {resume | start} -a
[-N {Node[,Node...] | NodeFile | NodeClass}]
[--inode-criteria CriteriaFile]
[-o InodeResultFile]
[--qos QOSClass]
or
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmchdisk command to change the state or the parameters of one or more disks in a GPFS file
system.
The state of a disk is a combination of its status and availability, displayed with the mmlsdisk command.
Disk status is normally either ready, emptied, suspended, or to be emptied. A transitional status
such as replacing, replacement, or being emptied might appear if a disk is being deleted or
replaced. An emptied status indicates that the disk contains no file system data or metadata and that no
new data will be placed on it; such a disk can be removed without needing to migrate data. A suspended
or being emptied disk is one on which the user has decided not to place any new data. Existing data on a
suspended or being emptied disk can still be read or updated. Typically, a disk is suspended before it is
removed from a file system. When a disk is suspended,
the mmrestripefs command migrates the data off that disk. Disk availability is either up or down.
Be sure to use stop before you take a disk offline for maintenance. You should also use stop when a disk
has become temporarily inaccessible due to a disk failure that is repairable without loss of data on that
disk (for example, an adapter failure or a failure of the disk electronics).
The Disk Usage (dataAndMetadata, dataOnly, metadataOnly, or descOnly) and Failure Group
parameters of a disk are adjusted with the change option. See the Recoverability considerations topic in
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
The mmchdisk change command does not move data or metadata that resides on the disk. After you
issue the mmchdisk change command for DiskUsage or State, you might have to issue the
mmrestripefs command with the -r option to relocate data so that it conforms to the new disk
parameters.
The mmchdisk command can be issued for a mounted or unmounted file system. When maintenance is
complete or the failure has been repaired, use the mmchdisk command with the start option. If the
failure cannot be repaired without loss of data, you can use the mmdeldisk command.
Note: The mmchdisk command does not change the NSD servers that are associated with the disk, or the
storage pool of the disk:
• To change the NSD servers use the mmchnsd command.
• To change the storage pool use the mmdeldisk and mmadddisk commands.
Prior to GPFS 3.5, the disk information for the mmchdisk change option was specified in the form of disk
descriptors defined as follows (with the second, third, sixth and seventh fields reserved):
DiskName:::DiskUsage:FailureGroup:::
For backward compatibility, the mmchdisk command still accepts the traditional disk descriptor format,
but its use is deprecated.
Parameters
Device
The device name of the file system to which the disks belong. File system names need not be fully-
qualified. fs0 is as acceptable as /dev/fs0.
This must be the first parameter.
suspend
or
empty
Instructs GPFS to stop allocating space on the specified disk. Put a disk in this state when you are
preparing to remove the file system data from the disk or if you want to prevent new data from being
put on the disk. This is a user-initiated state that GPFS never enters without an explicit command to
change the disk state. Existing data on a suspended disk may still be read or updated.
A disk remains in a suspended or to be emptied state until it is explicitly resumed. Restarting
GPFS or rebooting nodes does not restore normal access to a suspended disk.
resume
Informs GPFS that a disk previously suspended is now available for allocating new space. If the disk is
currently in a stopped state, it remains stopped until you specify the start option. Otherwise, normal
read and write access to the disk resumes.
stop
Instructs GPFS to stop any attempts to access the specified disks. Use this option to tell the file
system manager that a disk has failed or is currently inaccessible because of maintenance.
A disk remains stopped until it is explicitly started by the mmchdisk command with the start option.
You cannot run mmchdisk stop on a file system with a default replication setting of 1.
start
Informs GPFS that disks previously stopped are now accessible. This is accomplished by first
changing the disk availability from down to recovering. The file system metadata is then scanned
and any missing updates (replicated data that was changed while the disk was down) are repaired. If
this operation is successful, the availability is changed to up. If the metadata scan fails, availability is
set to unrecovered. This situation can occur if too many disks in the file system are down. The
metadata scan can be re-initiated at a later time by issuing the mmchdisk start command again.
If more than one disk in the file system is down, they must all be started at the same time by issuing
the mmchdisk Device start -a command. If you start them separately and metadata is stored on
any disk that remains down, the mmchdisk start command fails.
change
Instructs GPFS to change the disk usage parameter, the failure group parameter, or both, according to
the values specified in the NSD stanzas.
-d "DiskDesc[;DiskDesc...]"
A descriptor for each disk to be changed.
Specify only disk names when using the suspend, resume, stop, or start options. Delimit multiple
disk names with semicolons and enclose the list in quotation marks. For example,
"gpfs1nsd;gpfs2nsd"
When using the change option, include the disk name and any new Disk Usage and Failure Group
positional parameter values in the descriptor. Delimit descriptors with semicolons and enclose the list
in quotation marks; for example, "gpfs1nsd:::dataOnly;gpfs2nsd:::metadataOnly:12".
The use of disk descriptors is discouraged.
-F StanzaFile
Specifies a file containing the NSD stanzas for the disks to be changed. NSD stanzas have the
following format:
%nsd:
nsd=NsdName
usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
failureGroup=FailureGroup
pool=StoragePool
servers=ServerList
device=DiskName
thinDiskType={no | nvme | scsi | auto}
where:
nsd=NsdName
The name of the NSD to change. For a list of disks that belong to a particular file system, issue the
mmlsnsd -f Device, mmlsfs Device -d, or mmlsdisk Device command. The mmlsdisk Device
command will also show the current disk usage and failure group values for each of the disks. This
clause is mandatory for the mmchdisk command.
usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
Specifies the type of data to be stored on the disk:
dataAndMetadata
Indicates that the disk contains both data and metadata. This is the default for disks in the
system pool.
dataOnly
Indicates that the disk contains data and does not contain metadata. This is the default for
disks in storage pools other than the system pool.
metadataOnly
Indicates that the disk contains metadata and does not contain data.
descOnly
Indicates that the disk contains no data and no file metadata. IBM Spectrum Scale uses this
type of disk primarily to keep a copy of the file system descriptor. It can also be used as a third
failure group in certain disaster recovery configurations. For more information, see the topic
Synchronous mirroring utilizing GPFS replication in the IBM Spectrum Scale: Administration
Guide.
This clause is meaningful only for the mmchdisk change option.
failureGroup=FailureGroup
Identifies the failure group to which the disk belongs. A failure group identifier can be a simple
integer or a topology vector that consists of up to three comma-separated integers. The default is
-1, which indicates that the disk has no point of failure in common with any other disk.
GPFS uses this information during data and metadata placement to ensure that no two replicas of
the same block can become unavailable due to a single failure. All disks that are attached to the
same NSD server or adapter must be placed in the same failure group.
If the file system is configured with data replication, all storage pools must have two failure groups
to maintain proper protection of the data. Similarly, if metadata replication is in effect, the system
storage pool must have two failure groups.
Disks that belong to storage pools in which write affinity is enabled can use topology vectors to
identify failure domains in a shared-nothing cluster. Disks that belong to traditional storage pools
must use simple integers to specify the failure group.
pool=StoragePool
Specifies the storage pool to which the disk is to be assigned. If this name is not provided, the
default is system.
Only the system storage pool can contain metadataOnly, dataAndMetadata, or descOnly
disks. Disks in other storage pools must be dataOnly.
servers=ServerList
A comma-separated list of NSD server nodes. This clause is ignored by the mmadddisk command.
device=DiskName
The block device name of the underlying disk device. This clause is ignored by the mmadddisk
command.
thinDiskType={no | nvme | scsi | auto}
Specifies the space reclaim disk type:
no
The disk does not support space reclaim. This value is the default.
nvme
The disk is a TRIM capable NVMe device that supports the mmreclaimspace command.
scsi
The disk is a thin provisioned SCSI disk that supports the mmreclaimspace command.
auto
The type of the disk is either nvme or scsi. IBM Spectrum Scale will try to detect the actual
disk type automatically. To avoid problems, you should replace auto with the correct disk
type, nvme or scsi, as soon as you can.
Note: In IBM Spectrum Scale 5.0.5, the space reclaim auto-detection is enhanced. You are encouraged to
use the auto keyword after your cluster is upgraded to 5.0.5.
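For example, a stanza file (named nsdStanzaFile here for illustration) that changes disk gpfs1nsd to
metadataOnly in failure group 12 could contain the following stanza:
%nsd:
nsd=gpfs1nsd
usage=metadataOnly
failureGroup=12
The change would then be applied with a command like this one:
mmchdisk fs0 change -F nsdStanzaFile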
-a
Specifies to change the state of all of the disks belonging to the file system, Device. This operand is
valid only on the resume and start options.
-N {Node[,Node...] | NodeFile | NodeClass }
Specifies a list of nodes that should be used for making the requested disk changes. This command
supports all defined node classes. The default is all or the current value of the
defaultHelperNodes parameter of the mmchconfig command.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
--inode-criteria CriteriaFile
Specifies the interesting inode criteria flag, where CriteriaFile contains a list of the following flags with
one per line:
BROKEN
Indicates that a file has a data block with all of its replicas on disks that have been removed.
Note: BROKEN is always included in the list of flags even if it is not specified.
dataUpdateMiss
Indicates that at least one data block was not updated successfully on all replicas.
exposed
Indicates an inode with an exposed risk; that is, the file has data where all replicas are on
suspended disks. This could cause data to be lost if the suspended disks have failed or been
removed.
illCompressed
Indicates an inode in which file compression or decompression is deferred, or in which a
compressed file is partly decompressed to allow the file to be written into or memory-mapped.
illPlaced
Indicates an inode with some data blocks that might be stored in an incorrect storage pool.
illReplicated
Indicates that the file has a data block that does not meet the setting for the replica.
metaUpdateMiss
Indicates that there is at least one metadata block that has not been successfully updated to all
replicas.
unbalanced
Indicates that the file has a data block that is not well balanced across all the disks in all failure
groups.
Note: If a file matches any of the specified interesting flags, all of its interesting flags (even those not
specified) will be displayed.
-o InodeResultFile
Contains a list of the inodes that met the interesting inode flags that were specified on the --inode-
criteria parameter. The output file contains the following:
INODE_NUMBER
This is the inode number.
DISKADDR
Specifies a dummy address for later tsfindinode use.
SNAPSHOT_ID
This is the snapshot ID.
ISGLOBAL_SNAPSHOT
Indicates whether or not the inode is in a global snapshot. Files in the live file system are
considered to be in a global snapshot.
INDEPENDENT_FSETID
Indicates the independent fileset to which the inode belongs.
MEMO (INODE_FLAGS FILE_TYPE [ERROR])
Indicates the inode flag and file type that will be printed:
Inode flags:
BROKEN
exposed
dataUpdateMiss
illCompressed
illPlaced
illReplicated
metaUpdateMiss
unbalanced
File types:
BLK_DEV
CHAR_DEV
DIRECTORY
FIFO
LINK
LOGFILE
REGULAR_FILE
RESERVED
SOCK
*UNLINKED*
*DELETED*
Notes:
1. An error message will be printed in the output file if an error is encountered when repairing the
inode.
2. DISKADDR, ISGLOBAL_SNAPSHOT, and FSET_ID work with the tsfindinode tool
(/usr/lpp/mmfs/bin/tsfindinode) to find the file name for each inode. tsfindinode
uses the output file to retrieve the file name for each interesting inode.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the other QoS class. (Unlike other commands that have the --qos option, the mmchdisk command
runs in the other class by default.) This parameter has no effect unless the QoS service is enabled.
For more information, see the topic “mmchqos command” on page 260. Specify one of the following
QoS classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmchdisk command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To suspend active disk gpfs2nsd, issue this command:
mmchdisk fs0 suspend -d gpfs2nsd
To confirm the change, issue this command:
mmlsdisk fs0
In IBM Spectrum Scale versions earlier than V4.1.1, the product displays information similar to the
following example:
Note: In product versions earlier than V4.1.1, the mmlsdisk command lists the disk status as
suspended. In product versions V4.1.1 and later, the mmlsdisk command lists the disk status as to
be emptied with both mmchdisk suspend or mmchdisk empty commands.
mmlsdisk fs0 -L
In product version V4.1.1 and later, the system displays information similar to the following example:
3. To specify that metadata should no longer be stored on disk gpfs1nsd, issue this command:
mmchdisk fs0 change -d "gpfs1nsd:::dataOnly"
To confirm the change, issue this command:
mmlsdisk fs0
4. To start a disk and check for files matching the interesting inode criteria located on the disk, issue this
command:
mmnsddiscover: Attempting to rediscover the disks. This may take a while ...
mmnsddiscover: Finished.
vmip2.gpfs.net: GPFS: 6027-1805 [N] Rediscovered nsd server access to vmip2_nsd3.
GPFS: 6027-589 Scanning file system metadata, phase 1 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 2 ...
Scanning file system metadata for data storage pool
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 3 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-589 Scanning file system metadata, phase 4 ...
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-565 Scanning user file metadata ...
100.00 % complete on Wed Apr 15 10:20:37 2015 (65792 inodes with total 398 MB data
processed)
GPFS: 6027-552 Scan completed successfully.
GPFS: 6027-3312 No inode was found matching the criteria.
The disk was started successfully. No files matching the requested criteria were found.
See also
• Displaying GPFS disk states in the IBM Spectrum Scale: Administration Guide.
• “mmadddisk command” on page 28
• “mmchnsd command” on page 251
• “mmdeldisk command” on page 360
• “mmlsdisk command” on page 489
• “mmlsnsd command” on page 514
• “mmrpldisk command” on page 679
Location
/usr/lpp/mmfs/bin
mmcheckquota command
Checks file system user, group and fileset quotas.
Synopsis
mmcheckquota [-v] [-N {Node[,Node...] | NodeFile | NodeClass}]
[--qos QosClass] {-a | Device [:Fileset] [Device [:Fileset] ...]}
or
or
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
The mmcheckquota command serves two purposes:
1. Count inode and space usage in a file system by user, group and fileset, and write the collected data
into quota files.
Note: In cases where small files do not have an additional block allocated for them, quota usage may
show less space usage than expected.
2. Replace either the user, group, or fileset quota files, for the file system designated by Device, thereby
restoring the quota files for the file system. These files must be contained in the root directory of
Device. If a backup copy does not exist, an empty file is created when the mmcheckquota command is
issued.
The mmcheckquota command counts inode and space usage for a file system and writes the collected
data into quota files. Indications leading you to the conclusion you should run the mmcheckquota
command include:
• MMFS_QUOTA error log entries. This error log entry is created when the quota manager has a problem
reading or writing the quota file.
• Quota information is lost due to a node failure. A node failure could leave users unable to open files or
deny them disk space that their quotas should allow.
• The in-doubt value is approaching the quota limit.
The sum of the in-doubt value and the current usage may not exceed the hard limit. Consequently, the
actual block space and number of files available to the user or the group may be constrained by the in-
doubt value. If the in-doubt value approaches a significant percentage of the quota, use the
mmcheckquota command to account for the lost space and files.
• User, group, or fileset quota files are corrupted.
The mmcheckquota command is I/O-intensive and should be run when the system load is light. When
issuing the mmcheckquota command on a mounted file system, negative in-doubt values may be
reported if the quota server processes a combination of up-to-date and back-level information. This is a
transient situation and can be ignored.
If a file system is ill-replicated, the mmcheckquota command will not be able to determine exactly how
many valid replicas actually exist for some of the blocks. If this happens, the used block count results
from mmcheckquota will not be accurate. It is recommended that you run mmcheckquota to restore
accurate usage count after the file system is no longer ill-replicated.
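For example, to count inode and space usage for a single fileset after such an event, you can scope the check to that fileset (the device name fs0 and fileset name fset1 are illustrative):
mmcheckquota fs0:fset1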
Parameters
-a
Checks all GPFS file systems in the cluster from which the command is issued.
--backup BackupDirectory
Specifies a backup directory, which must be in the same GPFS file system as the root directory of
Device.
In IBM Spectrum Scale V4.1.1 and later, you can use this parameter to copy quota files. The command
copies three quota files to the specified directory.
Device
Specifies the device name of the file system. File system names do not need to be fully-qualified. fs0
is as acceptable as /dev/fs0.
Fileset
Specifies the name of a fileset that is located on a device for which inode and space usage are to be
counted.
-g GroupQuotaFileName
Replaces the current group quota file with the file indicated.
When replacing quota files with the -g option, the quota file must be in the root directory of the GPFS
file system.
-j FilesetQuotaFilename
Replaces the current fileset quota file with the file indicated.
When replacing quota files with the -j option, the quota file must be in the root directory of the GPFS
file system.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes that will participate in a parallel quota check of the file system. This command
supports all defined node classes. The default is all or the current value of the
defaultHelperNodes parameter of the mmchconfig command.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
-u UserQuotaFilename
Replaces the current user quota file with the file indicated.
When replacing quota files with the -u option, the quota file must be in the root directory of the GPFS
file system.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
Options
-v
Reports discrepancies between calculated and recorded disk quotas.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmcheckquota command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
GPFS must be running on the node from which the mmcheckquota command is issued.
Examples
1. To check quotas for file system fs0, issue this command:
mmcheckquota fs0
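2. To check quotas for all GPFS file systems in the cluster, issue this command: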
mmcheckquota -a
The system displays information only if a problem is found or if quota management is not enabled for a
file system:
3. To report discrepancies between calculated and recorded disk quotas, issue this command:
mmcheckquota -v fs1
See also
• “mmedquota command” on page 398
• “mmfsck command” on page 404
• “mmlsquota command” on page 527
• “mmquotaon command” on page 644
• “mmquotaoff command” on page 641
• “mmrepquota command” on page 656
Location
/usr/lpp/mmfs/bin
mmchfileset command
Changes the attributes of a GPFS fileset.
Synopsis
mmchfileset Device {FilesetName | -J JunctionPath}
[-j NewFilesetName] [-t NewComment] [-p afmAttribute=Value...]
[--allow-permission-change PermissionChangeMode]
[--inode-limit MaxNumInodes[:NumInodesToPreallocate]]
[--iam-mode Mode]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmchfileset command changes attributes for an existing GPFS fileset.
For information on GPFS filesets, see the IBM Spectrum Scale: Administration Guide.
Parameters
Device
The device name of the file system that contains the fileset.
File system names need not be fully qualified. fs0 is as acceptable as /dev/fs0.
FilesetName
Specifies the name of the fileset.
-J JunctionPath
Specifies the junction path name for the fileset.
A junction is a special directory entry that connects a name in a directory of one fileset to the root
directory of another fileset.
-j NewFilesetName
Specifies the new name that is to be given to the fileset. This name must be fewer than 256
characters in length. The root fileset cannot be renamed.
-t NewComment
Specifies an optional comment that appears in the output of the mmlsfileset command. This
comment must be fewer than 256 characters in length. This option cannot be used on the root fileset.
-p afmAttribute=Value
Specifies an AFM configuration attribute and its value. More than one -p option can be specified.
The following AFM configuration attributes are valid:
afmAsyncDelay
Specifies (in seconds) the amount of time by which write operations are delayed (because write
operations are asynchronous with respect to remote clusters). For write-intensive applications
that keep writing to the same set of files, this delay is helpful because it replaces multiple writes
to the home cluster with a single write containing the latest data. However, setting a very high
value weakens the consistency of data on the remote cluster.
This configuration parameter is applicable only for writer caches (Single writer, Independent
writer, and Primary) in which data from cache is pushed to home.
Valid values are 1 - 2147483647. The default is 15.
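For example, to delay asynchronous writes by 10 minutes on a single-writer fileset (the file system name fs1 and fileset name swFileset are illustrative):
mmchfileset fs1 swFileset -p afmAsyncDelay=600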
afmDirLookupRefreshInterval
Controls the frequency of data revalidations that are triggered by such lookup operations as ls or
stat (specified in seconds). When a lookup operation is done on a directory, if the specified
amount of time has passed, AFM sends a message to the home cluster to find out whether the
metadata of that directory has been modified since the last time it was checked. If the time
interval has not passed, AFM does not check the home cluster for updates to the metadata.
Valid values are 0 through 2147483647. The default is 60. In situations where home cluster data
changes frequently, a value of 0 is recommended.
afmDirOpenRefreshInterval
Controls the frequency of data revalidations that are triggered by such I/O operations as read or
write (specified in seconds). After a directory has been cached, open requests resulting from I/O
operations on that object are directed to the cached directory until the specified amount of time
has passed. Once the specified amount of time has passed, the open request gets directed to a
gateway node rather than to the cached directory.
Valid values are between 0 and 2147483647. The default is 60. Setting a lower value guarantees
a higher level of consistency.
afmEnableAutoEviction
Enables eviction on a given fileset. A yes value specifies that eviction is allowed on the fileset. A
no value specifies that eviction is not allowed on the fileset.
See also the topic about cache eviction in the IBM Spectrum Scale: Administration Guide.
afmExpirationTimeout
Is used with afmDisconnectTimeout (which can be set only through mmchconfig) to control
how long a network outage between the cache and home clusters can continue before the data in
the cache is considered out of sync with home. After afmDisconnectTimeout expires, cached
data remains available until afmExpirationTimeout expires, at which point the cached data is
considered expired and cannot be read until a reconnect occurs.
Valid values are 0 through 2147483647. The default is disable.
afmFastCreate
Enables fast create at the AFM cache and AFM-DR primary fileset level. AFM sends an RPC to the
gateway node for each update that happens on the fileset. If the workload mostly involves the
creation of new files, this parameter reduces the RPC exchanges between the application and the gateway node,
improves the application performance, and minimizes the memory queue requirement at the
gateway node.
To set or unset the afmFastCreate parameter on an AFM or AFM-DR fileset, you need to stop or
unlink the fileset. For more information, see Stop and start replication on a fileset in IBM Spectrum
Scale: Concepts, Planning, and Installation Guide.
Valid values are 'yes' and 'no'.
afmFileLookupRefreshInterval
Controls the frequency of data revalidations that are triggered by such lookup operations as ls or
stat (specified in seconds). When a lookup operation is done on a file, if the specified amount of
time has passed, AFM sends a message to the home cluster to find out whether the metadata of
the file has been modified since the last time it was checked. If the time interval has not passed,
AFM does not check the home cluster for updates to the metadata.
Valid values are 0 through 2147483647. The default is 30. In situations where home cluster data
changes frequently, a value of 0 is recommended.
afmFileOpenRefreshInterval
Controls the frequency of data revalidations that are triggered by such I/O operations as read or
write (specified in seconds). After a file has been cached, open requests resulting from I/O
operations on that object are directed to the cached file until the specified amount of time has
passed. Once the specified amount of time has passed, the open request gets directed to a
gateway node rather than to the cached file.
Valid values are 0 through 2147483647. The default is 30. Setting a lower value guarantees a
higher level of consistency.
afmGateway
Specifies a user-defined gateway node for an AFM or AFM DR fileset. This node gets preference over
the node that the internal hashing algorithm would assign. If the specified gateway node is not
available, AFM internally assigns a gateway node to the fileset from the available list. The
afmHashVersion value must already be set to 5. Before you set the afmGateway value, stop
replication on the fileset or unlink the AFM fileset. After you set the value, start replication on the
fileset or link the AFM fileset.
An example of setting the gateway node by stopping and starting the AFM fileset (the file system, fileset, and gateway node names are illustrative):
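mmafmctl fs1 stop -j fileset1
mmchfileset fs1 fileset1 -p afmGateway=node1
mmafmctl fs1 start -j fileset1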
An example of setting the gateway node by unlinking and linking the AFM fileset (the names and the junction path are illustrative):
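mmunlinkfileset fs1 fileset1
mmchfileset fs1 fileset1 -p afmGateway=node1
mmlinkfileset fs1 fileset1 -J /gpfs/fs1/fileset1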
To revert to the internal hashing algorithm of assigning gateway nodes, you must delete the
assigned gateway node value of the AFM fileset, again either by stopping and starting the AFM
fileset or by unlinking and linking the AFM fileset.
Note: Ensure that the file system is upgraded to IBM Spectrum Scale 5.0.2 or later.
This parameter also accepts 'all' as a value. afmGateway=all can be set on RO and LU mode AFM
filesets and supports file system-level migration. This value improves file system-level migration
performance by using inode-based hashing. When this value is set, all operations that belong to an
inode are queued to the same gateway node, which helps to balance the load across gateway nodes
by using the inode number.
Set afmGateway=all by stopping and starting the AFM fileset. For example (the file system and fileset names are illustrative):
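mmafmctl fs1 stop -j fileset1
mmchfileset fs1 fileset1 -p afmGateway=all
mmafmctl fs1 start -j fileset1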
afmMode
Specifies the mode in which the cache operates. Valid values are the following:
single-writer | sw
Specifies single-writer mode.
read-only | ro
Specifies read-only mode. (For mmcrfileset, this is the default value.)
local-updates | lu
Specifies local-updates mode.
independent-writer | iw
Specifies independent-writer mode.
Primary | drp
Specifies the primary mode for AFM asynchronous data replication.
Secondary | drs
Specifies the secondary mode for AFM asynchronous data replication.
Changing from single-writer or read-only mode to read-only, local-updates, or single-writer mode is
supported. When changing from read-only to single-writer, the read-only cache should be up to date.
When changing from single-writer to read-only, all requests from the cache should have been played
at home. Changing from local-updates to read-only, local-updates, or single-writer is restricted. A
typical dataset is set up to include a single cache cluster in single-writer mode (which generates
the data) and one or more cache clusters in local-updates or read-only mode. AFM single-writer/
independent-writer filesets can be converted to primary. Primary/secondary filesets cannot be
converted to AFM filesets.
In case of AFM asynchronous data replication, the mmchfileset command cannot be used to
convert to primary from secondary. For detailed information, see AFM-based Asynchronous
Disaster Recovery (AFM DR) in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
For more information, see the topic about caching modes in the IBM Spectrum Scale:
Administration Guide chapter about active file management.
afmNumFlushThreads
Defines the number of threads used on each gateway to synchronize updates to the home cluster.
The default value is 4, which is sufficient for most installations. The current maximum value is
1024, which is too high for most installations; setting this parameter to such an extreme value
should be avoided.
afmNumReadThreads
Defines the number of threads that can be used on each participating gateway node during
parallel read. The default value of this parameter is 1; that is, one reader thread will be active on
every gateway node for each big read operation qualifying for splitting per the parallel read
threshold value. The valid range of values is 1 to 64.
afmNumWriteThreads
Defines the number of threads that can be used on each participating gateway node during
parallel write. The default value of this parameter is 1; that is, one writer thread will be active on
every gateway node for each big write operation qualifying for splitting per the parallel write
threshold value. Valid values can range from 1 to 64.
afmParallelMounts
When this parameter is enabled, the primary gateway node of a fileset at a cache cluster attempts
to mount the exported path from multiple NFS servers that are defined in the mapping. Then, this
primary gateway node sends unique messages through each NFS mount to improve performance
by transferring data in parallel.
Before enabling this parameter, define the mapping between the primary gateway node and NFS
servers by issuing the mmafmconfig command.
afmParallelReadChunkSize
Defines the minimum chunk size of the read that needs to be distributed among the gateway
nodes during parallel reads. Values are interpreted in terms of bytes. The default value of this
parameter is 128 MiB, and the valid range of values is 0 to 2147483647. It can be changed cluster
wide with the mmchconfig command. It can be set at fileset level using mmcrfileset or
mmchfileset commands.
afmParallelReadThreshold
Defines the threshold beyond which parallel reads become effective. Reads are split into chunks
when file size exceeds this threshold value. Values are interpreted in terms of MiB. The default
value is 1024 MiB. The valid range of values is 0 to 2147483647. It can be changed cluster wide
with the mmchconfig command. It can be set at fileset level using mmcrfileset or
mmchfileset commands.
afmParallelWriteChunkSize
Defines the minimum chunk size of the write that needs to be distributed among the gateway
nodes during parallel writes. Values are interpreted in terms of bytes. The default value of this
parameter is 128 MiB, and the valid range of values is 0 to 2147483647. It can be changed cluster
wide with the mmchconfig command. It can be set at fileset level using mmcrfileset or
mmchfileset commands.
afmParallelWriteThreshold
Defines the threshold beyond which parallel writes become effective. Writes are split into chunks
when file size exceeds this threshold value. Values are interpreted in terms of MiB. The default
value of this parameter is 1024 MiB, and the valid range of values is 0 to 2147483647. It can be
changed cluster wide with the mmchconfig command. It can be set at fileset level using
mmcrfileset or mmchfileset commands.
afmPrefetchThreshold
Controls partial file caching and prefetching. Valid values are the following:
0
Enables full file prefetching. This is useful for sequentially accessed files that are read in their
entirety, such as image files, home directories, and development environments. The file will be
prefetched after three blocks have been read into the cache.
1-99
Specifies the percentage of file size that must be cached before the entire file is prefetched. A
large value is suitable for a file accessed either randomly or sequentially but partially, for
which it might be useful to ingest the rest of the file when most of it has been accessed.
100
Disables full file prefetching. This value only fetches and caches data that is read by the
application. This is useful for large random-access files, such as databases, that are either too
big to fit in the cache or are never expected to be read in their entirety. When all data blocks
are accessed in the cache, the file is marked as cached.
0 is the default value.
For local-updates mode, the whole file is prefetched when the first update is made.
afmPrimaryId
Specifies the unique primary ID of the primary fileset for asynchronous data replication. This is
used for connecting a secondary to a primary.
afmReadDirOnce
Enables AFM to perform a one-time readdir of a directory from the home after the data is migrated
to the cache and the application is started on the cache data. That is, the application is moved
from the home to the cache and modifies the directory, which makes the directory dirty. When this
parameter is set for a fileset, the prefetch operation is run on the fileset by using
--readdir-only to move new or modified data from the home to the cache, even if the cache
directory is dirty. After the data is migrated to the cache and the application is started on the cache
data, this parameter synchronizes new files at the home directory to the cache a single time.
Valid values are 'yes' and 'no'. This parameter can be set on AFM RO, LU, and IW filesets. This
parameter is not useful for the DMAPI migration.
afmReadSparseThreshold
Specifies the size in MB for files in cache beyond which sparseness is maintained. For all files
below the specified threshold, sparseness is not maintained.
afmRefreshOnce
Enables AFM to perform revalidation of files and directories only one time. This parameter improves
application performance during a migration from an old system to a new system, after the data is
migrated to the cache and the application is started on the cache data. When this parameter is set
to yes, revalidation of files and directories is performed only one time, so only one revalidation
request goes to the home or target.
Valid values are 'yes' and 'no'. This parameter can be set on AFM RO, LU, and IW filesets. This
parameter is not useful for the DMAPI migration.
afmRPO
Specifies the recovery point objective (RPO) interval for an AFM DR fileset. This attribute is
disabled by default. You can specify a value with the suffix M for minutes, H for hours, or W for
weeks. For example, for 12 hours specify 12H. If you do not add a suffix, the value is assumed to
be in minutes. The range of valid values is 720 minutes - 2147483647 minutes.
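For example, to set a 12-hour RPO interval on an AFM DR primary fileset (the file system name fs1 and fileset name drFileset are illustrative):
mmchfileset fs1 drFileset -p afmRPO=12H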
afmShowHomeSnapshot
Controls the visibility of the home snapshot directory in cache. For this to be visible in cache, this
variable has to be set to yes, and the snapshot directory name in the cache and home cannot be
the same.
yes
Specifies that the home snapshot link directory is visible.
no
Specifies that the home snapshot link directory is not visible.
See Peer snapshot -psnap in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
afmTarget
The only allowed value is disable. It is used to convert AFM filesets to regular independent
filesets; for example (the file system and fileset names are illustrative):
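mmchfileset fs1 fileset1 -p afmTarget=disable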
After an AFM fileset is converted to a regular fileset, the fileset cannot be changed back to an AFM
fileset.
--allow-permission-change PermissionChangeMode
Specifies the new permission change mode. This mode controls how chmod and ACL operations are
handled on objects in the fileset. Valid modes are as follows:
chmodOnly
Specifies that only the UNIX change mode operation (chmod) is allowed to change access
permissions (ACL commands and API will not be accepted).
setAclOnly
Specifies that permissions can be changed using ACL commands and API only (chmod will not be
accepted).
chmodAndSetAcl
Specifies that chmod and ACL operations are permitted. If the chmod command (or setattr file
operation) is issued, the result depends on the type of ACL that was previously controlling access
to the object:
• If the object had a Posix ACL, it will be modified accordingly.
• If the object had an NFSv4 ACL, it will be replaced by the given UNIX mode bits.
Note: This is the default setting when a fileset is created.
chmodAndUpdateAcl
Specifies that chmod and ACL operations are permitted. If chmod is issued, the ACL will be
updated by privileges derived from UNIX mode bits.
--inode-limit MaxNumInodes[:NumInodesToPreallocate]
Specifies the new inode limit for the inode space that is owned by the specified fileset. The
FilesetName or JunctionPath must refer to an independent fileset. The NumInodesToPreallocate
specifies an optional number of additional inodes to pre-allocate for the inode space. Use the mmchfs
command to change inode limits for the root fileset.
The MaxNumInodes and NumInodesToPreallocate values can be specified with a suffix, for example
100K or 2M.
Note: Preallocated inodes cannot be deleted or moved to another independent fileset. It is
recommended to avoid preallocating too many inodes because there can be both performance and
memory allocation costs associated with such preallocations. In most cases, there is no need to
preallocate inodes because GPFS dynamically allocates inodes as needed.
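For example, to raise the inode limit of an independent fileset to 2 million inodes and preallocate 100 thousand of them (the file system name fs1 and fileset name indFileset are illustrative):
mmchfileset fs1 indFileset --inode-limit 2M:100K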
--iam-mode Mode
Specifies an integrated archive manager (IAM) mode for the fileset. You can set IAM modes to modify
some of the file-operation restrictions that normally apply to immutable filesets. The IAM modes are
as follows, listed in order of increasing strictness:
ad | advisory
nc | noncompliant
co | compliant
cp | compliant-plus
Note that IAM modes can be upgraded from advisory to noncompliant to compliant to
compliant-plus, but not downgraded. When a fileset is set to one of these IAM modes, a number of
operations on files and directories are no longer allowed. Because the IAM mode for a fileset cannot
be downgraded, those disallowed operations become persistent and cannot be undone. For more
information, see the topic Immutability and appendOnly features in the IBM Spectrum Scale:
Administration Guide.
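For example, to move a fileset from advisory to noncompliant IAM mode (the file system name fs1 and fileset name archFileset are illustrative):
mmchfileset fs1 archFileset --iam-mode noncompliant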
Note: If you set new values for afmParallelReadChunkSize, afmParallelReadThreshold,
afmParallelWriteChunkSize, and afmParallelWriteThreshold; you need not relink filesets for
the new values to take effect.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority or be a fileset owner to run the mmchfileset command with the -t option.
All other options require root authority.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. The following command renames fileset fset1 to fset2 and gives it the comment "first fileset":
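mmchfileset gpfs1 fset1 -j fset2 -t "first fileset"
To confirm the change, issue this command: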
mmlsfileset gpfs1 -L
See also
• “mmchfs command” on page 230
• “mmcrfileset command” on page 308
• “mmdelfileset command” on page 365
• “mmlinkfileset command” on page 477
• “mmlsfileset command” on page 493
• “mmunlinkfileset command” on page 724
Location
/usr/lpp/mmfs/bin
mmchfs command
Changes the attributes of a GPFS file system.
Synopsis
mmchfs Device [-A {yes | no | automount}] [-D {posix | nfs4}] [-E {yes | no}]
[-k {posix | nfs4 | all}] [-K {no | whenpossible | always}]
[-L LogFileSize] [-m DefaultMetadataReplicas] [-n NumNodes]
[-o MountOptions] [-r DefaultDataReplicas] [-S {yes | no | relatime}]
[-T Mountpoint] [-t DriveLetter] [-V {full | compat}] [-z {yes | no}]
[--filesetdf | --nofilesetdf]
[--inode-limit MaxNumInodes[:NumInodesToPreallocate]]
[--log-replicas LogReplicas] [--mount-priority Priority]
[--perfileset-quota | --noperfileset-quota]
[--rapid-repair | --norapid-repair]
[--write-cache-threshold HAWCThreshold]
or
mmchfs Device [-W NewDeviceName]
or
mmchfs Device --maintenance-mode {yes [--wait] | no}
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmchfs command to change the attributes of a GPFS file system.
Parameters
Device
The device name of the file system to be changed.
File system names need not be fully qualified. fs0 is as acceptable as /dev/fs0. However, file
system names must be unique across GPFS clusters.
This must be the first parameter.
-A {yes | no | automount}
Indicates when the file system is to be mounted:
yes
When the GPFS daemon starts.
no
The file system is mounted manually.
automount
On non-Windows nodes, when the file system is first accessed. On Windows nodes, when the
GPFS daemon starts.
Note:
• The file system must be unmounted before the automount settings are changed.
• IBM Spectrum Protect for Space Management does not support file systems with the -A option
set to automount.
-D {nfs4 | posix}
Specifies whether a deny-write open lock blocks writes, which is required by NFS V4, Samba, and
Windows. File systems that support NFS V4 must have -D nfs4 set. The option -D posix allows
NFS writes even in the presence of a deny-write open lock. If you intend to export the file system on
NFS V4 or Samba, or mount your file system on Windows, you must use -D nfs4. For NFS V3 (or if
the file system is not NFS exported at all) use -D posix.
-E {yes | no}
Specifies whether to report exact mtime values. If -E no is specified, the mtime value is periodically
updated. If you want to always display exact modification times, specify -E yes.
Important: The new value takes effect the next time the file system is mounted.
-k {posix | nfs4 | all}
Specifies the type of authorization supported by the file system:
posix
Traditional GPFS ACLs only (NFS V4 and Windows ACLs are not allowed). Authorization controls
are unchanged from earlier releases.
nfs4
Support for NFS V4 and Windows ACLs only. Users are not allowed to assign traditional GPFS ACLs
to any file system objects (directories and individual files).
all
Any supported ACL type is permitted. This includes traditional GPFS (posix) and NFS V4 and
Windows ACLs (nfs4).
The administrator is allowing a mixture of ACL types. For example, fileA may have a posix ACL,
while fileB in the same file system may have an NFS V4 ACL, implying different access
characteristics for each file depending on the ACL type that is currently assigned.
Avoid specifying nfs4 or all unless files are exported to NFS V4 or Samba clients, or the file system
is mounted on Windows. NFS V4 and Windows ACLs affect file attributes (mode) and have access and
authorization characteristics that are different from traditional GPFS ACLs.
-K {no | whenpossible | always}
Specifies whether strict replication is to be enforced:
no
Strict replication is not enforced. GPFS tries to create the needed number of replicas, but still
returns EOK if it can allocate at least one replica.
whenpossible
Strict replication is enforced provided the disk configuration allows it. If there is only one failure
group, strict replication is not enforced.
always
Strict replication is enforced.
For more information, see the topic Strict replication in the IBM Spectrum Scale: Problem
Determination Guide.
-L LogFileSize
Specifies the size of the internal log files. The LogFileSize must be a multiple of the metadata block
size. The default log file size is 32 MiB in most cases. However, if the data block size (parameter -B) is
less than 512 KiB or if the metadata block size (parameter --metadata-block-size) is less than
256 KiB, then the default log file size is either 4 MiB or the metadata block size, whichever is greater.
The minimum size is 256 KiB and the maximum size is 1024 MiB. Specify this value with the K or M
character, for example: 8M. For more information, see “mmcrfs command” on page 315.
The default log size works well in most cases. An increased log file size is useful when the highly
available write cache feature (parameter --write-cache-threshold) is enabled.
The new log file size is not effective until you apply one of the two following methods:
• The first method requires you in part to restart the GPFS daemon on the manager nodes, but you
can do so one node at a time. Follow these steps:
1. Restart the GPFS daemon (mmfsd) on all the manager nodes of the local cluster. This action is
required even if the affected file system is not mounted on any of the manager nodes. You can do
this action one manager node at a time.
2. Remount the file system on all the local and remote nodes that have it mounted. You can do this
action one node at a time. The new log file size becomes effective when the file system is
remounted on the last affected node.
• The second method requires you to unmount the file system on all the affected nodes at the same
time. Follow these steps:
1. Unmount the file system on all local and remote nodes that have it mounted. The file system
must be in the unmounted state on all the nodes at the same time.
2. Remount the file system on any or all the affected nodes.
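For example, to increase the log file size to 128 MiB (the device name fs0 is illustrative; the new size becomes effective only after you complete one of the methods above):
mmchfs fs0 -L 128M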
-m DefaultMetaDataReplicas
Changes the default number of metadata replicas. Valid values are 1, 2, and 3. This value cannot be
greater than the value of MaxMetaDataReplicas set when the file system was created.
Changing the default replication settings using the mmchfs command does not change the replication
setting of existing files. After running the mmchfs command, the mmrestripefs command with the -
R option can be used to change all existing files or you can use the mmchattr command to change a
small number of existing files.
-n NumNodes
Changes the number of nodes for a file system but does not change the existing system metadata
structures. This setting is just an estimate and can be used only to affect the layout of the system
metadata for storage pools created after the setting is changed.
-o MountOptions
Specifies the mount options to pass to the mount command when mounting the file system. For a
detailed description of the available mount options, see GPFS-specific mount options in the IBM
Spectrum Scale: Administration Guide.
-Q {yes | no}
If -Q yes is specified, quotas are activated automatically when the file system is mounted. If -Q no
is specified, the quota files remain in the file system, but are not used.
For more information, see the topic Enabling and disabling GPFS quota management in the IBM
Spectrum Scale: Administration Guide.
-r DefaultDataReplicas
Changes the default number of data replicas. Valid values are 1, 2, and 3. This value cannot be greater
than the value of MaxDataReplicas set when the file system was created.
Changing the default replication settings using the mmchfs command does not change the replication
setting of existing files. After running the mmchfs command, the mmrestripefs command with the -
R option can be used to change all existing files or you can use the mmchattr command to change a
small number of existing files.
-S {yes | no | relatime}
Controls how the file attribute atime is updated.
Note: The attribute atime is updated locally in memory, but the value is not visible to other nodes
until after the file is closed. To get an accurate value of atime, an application must call subroutine
gpfs_stat or gpfs_fstat.
yes
The atime attribute is not updated. The subroutines gpfs_stat and gpfs_fstat return the
time that the file system was last mounted with relatime=no. For more information, see the
topics “mmmount command” on page 537 with the -o parameter and Mount options specific to
IBM Spectrum Scale in the IBM Spectrum Scale: Administration Guide.
no
The atime attribute is updated whenever the file is read. This option is the default if the minimum
release level (minReleaseLevel) of the cluster is less than 5.0.0 when the file system is created.
relatime
The atime attribute is updated whenever the file is read, but only if one of the following
conditions is true:
• The current file access time (atime) is earlier than the file modification time (mtime).
• The current file access time (atime) is greater than the atimeDeferredSeconds attribute. For
more information, see the topic mmchconfig command in the IBM Spectrum Scale: Command
and Programming Reference.
This setting is the default if the minimum release level (minReleaseLevel) of the cluster is 5.0.0
or greater when the file system is created.
For more information, see the topic atime values in the IBM Spectrum Scale: Concepts, Planning, and
Installation Guide.
-T Mountpoint
Change the mount point of the file system starting at the next mount of the file system.
The file system must be unmounted on all nodes before this command is issued.
-t DriveLetter
Changes the Windows drive letter for the file system.
The file system must be unmounted on all nodes before the command is issued.
-V {full | compat}
Changes the file system format to the latest format supported by the currently installed level of GPFS.
This option might cause the file system to become permanently incompatible with earlier releases of
GPFS.
Note: The -V option cannot be used to make file systems that were created earlier than GPFS 3.2.1.5
available to Windows nodes. Windows nodes can mount only file systems that were created with
GPFS 3.2.1.5 or later.
Before issuing -V, see Migration, coexistence and compatibility in IBM Spectrum Scale: Concepts,
Planning, and Installation Guide. Ensure that all nodes in the cluster have been updated to the latest
level of GPFS code and that you have successfully run the mmchconfig release=LATEST command.
For information about specific file system format and function changes when you upgrade to the
current release, see the topic File system format changes between versions of GPFS in the IBM
Spectrum Scale: Administration Guide.
full
Enables all new functionality that requires different on-disk data structures. Nodes in remote
clusters that are running an earlier version of IBM Spectrum Scale will no longer be able to mount
the file system. With this option the command fails if it is issued while any node that has the file
system mounted is running an earlier version of IBM Spectrum Scale.
compat
Enables only backward-compatible format changes. Nodes in remote clusters that were able to
mount the file system before the format changes can continue to do so afterward.
-W NewDeviceName
Assign NewDeviceName to be the device name for the file system.
-z {yes | no}
Enable or disable DMAPI on the file system. Turning this option on requires an external data
management application such as IBM Spectrum Protect hierarchical storage management (HSM)
before the file system can be mounted.
Note: IBM Spectrum Protect for Space Management does not support file systems with the -A option
set to automount.
For further information regarding DMAPI for GPFS, see GPFS-specific DMAPI events in the IBM
Spectrum Scale: Command and Programming Reference.
--filesetdf
Specifies that when quotas are enforced for a fileset (other than the root fileset), the numbers
reported by the df command are based on the quotas for the fileset (rather than the entire file
system). This option affects the df command behavior only on Linux nodes.
--nofilesetdf
Specifies that the numbers reported by the df command are not based on the quotas for a fileset. The
df command returns the numbers for the entire file system. This is the default.
--inode-limit MaxNumInodes[:NumInodesToPreallocate]
MaxNumInodes specifies the maximum number of files that can be created. Allowable values range
from the current number of created inodes (determined by issuing the mmdf command with -F),
through the maximum number of files possibly supported as constrained by the formula:
maximum number of files = (total file system space) / (inode size + subblock size)
Note: This formula works only for simpler configurations. For complex configurations, such as
separation of data and metadata, this formula might not provide an accurate result.
If your file system has additional disks added or the number of inodes was insufficiently sized at file
system creation, you can change the number of inodes and hence the maximum number of files that
can be created.
For file systems that do parallel file creates, if the total number of free inodes is not greater than 5%
of the total number of inodes, there is the potential for slowdown in file system access. Take this into
consideration when changing your file system.
NumInodesToPreallocate specifies the number of inodes that are preallocated by the system right
away. If this number is not specified, GPFS allocates inodes dynamically as needed.
The MaxNumInodes and NumInodesToPreallocate values can be specified with a suffix, for example
100K or 2M. Note that in order to optimize file system operations, the number of inodes that are
actually created may be greater than the specified value.
This option applies only to the root fileset. Preallocated inodes cannot be deleted or moved to another
independent fileset. It is recommended to avoid preallocating too many inodes because there can be
both performance and memory allocation costs associated with such preallocations. In most cases,
there is no need to preallocate inodes because GPFS dynamically allocates inodes as needed. When
there are multiple inode spaces, use the --inode-limit option of the mmchfileset command to
alter the inode limits of independent filesets. The mmchfileset command can also be used to
modify the inode limit of the root inode space. The --inode-limit option of the mmlsfs command
shows the sum of the inode limits of all inode spaces in the file system. Use the mmlsfileset
command to see the inode limit of the root fileset.
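For example, to raise the maximum number of inodes in the root fileset's inode space to 10 million without preallocating additional inodes (the device name fs0 is illustrative):
mmchfs fs0 --inode-limit 10M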
--log-replicas LogReplicas
Specifies the number of recovery log replicas. Valid values are 1, 2, 3, or DEFAULT. If DEFAULT is
specified, the number of log replicas is the same as the number of metadata replicas currently in
effect for the file system and changes when this number is changed.
Changing the default replication settings using the mmchfs command does not change the replication
setting of existing files. After running the mmchfs command, the mmrestripefs command with the -
R option can be used to change existing log files.
This option is applicable only if the recovery log is stored in the system.log storage pool. For more
information about the system.log storage pool, see the topic The system.log storage pool in the IBM
Spectrum Scale: Concepts, Planning, and Installation Guide.
--maintenance-mode Device {yes [--wait] | no}
Turns file system maintenance mode on or off when you change file system attributes. The values are
yes and no (the default).
Specifying --wait, which is valid only when you use it with the yes parameter, turns on the
maintenance mode after you have unmounted the file system. If you specify yes without --wait and
the file system is mounted, the command fails.
For more information on maintenance mode, see File system maintenance mode in the IBM Spectrum
Scale: Administration Guide.
--mount-priority Priority
Controls the order in which the individual file systems are mounted at daemon startup or when one of
the all keywords is specified on the mmmount command.
File systems with higher Priority numbers are mounted after file systems with lower numbers. File
systems that do not have mount priorities are mounted last. A value of zero indicates no priority.
--perfileset-quota
Sets the scope of user and group quota limit checks to the individual fileset level, rather than to the
entire file system.
--noperfileset-quota
Sets the scope of user and group quota limit checks to the entire file system, rather than to individual
filesets.
--rapid-repair
Keeps track of incomplete replication on an individual file block basis (as opposed to the entire file).
This may result in a faster repair time when very large files are only partially ill-replicated.
--norapid-repair
Specifies that replication status is kept on a whole file basis (rather than on individual block basis).
--write-cache-threshold HAWCThreshold
Specifies the maximum length (in bytes) of write requests that will be initially buffered in the highly-
available write cache before being written back to primary storage. Only synchronous write requests
are guaranteed to be buffered in this fashion.
A value of 0 disables this feature. 64K is the maximum supported value. Specify in multiples of 4K.
This feature can be enabled or disabled at any time (the file system does not need to be unmounted).
For more information about this feature, see the topic Highly-available write cache (HAWC) in the IBM
Spectrum Scale: Administration Guide.
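For example, to buffer synchronous write requests of up to 32 KiB in the highly-available write cache (the device name fs0 is illustrative):
mmchfs fs0 --write-cache-threshold 32K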
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmchfs command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To change the default replicas for metadata to 2 and the default replicas for data to 2 for new files created
in the fs0 file system, issue this command:
mmchfs fs0 -m 2 -r 2
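To confirm the change, issue this command: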
mmlsfs fs0 -m -r
See also
• “mmchfileset command” on page 222
• “mmcrfs command” on page 315
• “mmdelfs command” on page 369
• “mmdf command” on page 382
• “mmfsck command” on page 404
• “mmlsfs command” on page 498
• “mmrestripefs command” on page 672
Location
/usr/lpp/mmfs/bin
mmchlicense command
Controls the type of GPFS license associated with the nodes in the cluster.
Synopsis
mmchlicense {client|fpo|server} [--accept] -N {Node[,Node...] | NodeFile | NodeClass}
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmchlicense command to change the type of GPFS license associated with the nodes in the
cluster.
For information on IBM Spectrum Scale license designation, see IBM Spectrum Scale license designation
in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
Parameters
client | fpo | server
The type of GPFS license to be assigned to the nodes specified with the -N parameter.
client
The IBM Spectrum Scale Client license permits exchange of data between nodes that locally
mount the same GPFS file system. No other export of the data is permitted. The GPFS client may
not be used for nodes to share GPFS data directly through any application, service, protocol or
method, such as Network File System (NFS), Common Internet File System (CIFS), File Transfer
Protocol (FTP), or Hypertext Transfer Protocol (HTTP). For these functions, an IBM Spectrum Scale
Server license would be required. The use of any of the following components or functions of IBM
Spectrum Scale Client is not authorized:
• Configuring a virtual server in the following IBM Spectrum Scale roles: Configuration Manager,
Quorum node, Manager node, Network Shared Disk (NSD) Server node, Cluster Export Services
node (also known as Protocol node), Advanced File Management (AFM) Gateway node,
Transparent Cloud Tiering Gateway node.
• Exporting IBM Spectrum Scale data to virtual servers that do not have a valid IBM Spectrum
Scale license through any application, protocol or method, including Network File System (NFS),
Server Message Block (SMB), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP),
Object Protocol (OpenStack Swift, Amazon S3 API).
server
The IBM Spectrum Scale Server license permits the licensed node to perform GPFS management
functions such as cluster configuration manager, quorum node, manager node, and Network
Shared Disk (NSD) server. In addition, the IBM Spectrum Scale Server license permits the licensed
node to share GPFS data directly through any application, service protocol or method such as NFS,
CIFS, FTP, or HTTP. Therefore, protocol nodes also require an IBM Spectrum Scale Server license.
fpo
The IBM Spectrum Scale FPO license permits the licensed node to perform NSD server functions
for sharing GPFS data with other nodes that have an IBM Spectrum Scale FPO or IBM Spectrum
Scale Server license. This license cannot be used to share data with nodes that have an IBM
Spectrum Scale Client license or non-GPFS nodes. The use of any of the following components or
functions of IBM Spectrum Scale FPO is not authorized:
• Configuring a virtual server in the following IBM Spectrum Scale roles: Configuration Manager,
Quorum node, Manager node, Cluster Export Services node (also known as Protocol node),
Advanced File Management (AFM) Gateway node, Transparent Cloud Tiering Gateway node.
• Configuring a virtual server as an IBM Spectrum Scale Network Shared Disk (NSD) Server node
for providing IBM Spectrum Scale data access to virtual servers that do not have a valid IBM
Spectrum Scale Server or IBM Spectrum Scale FPO license entitlement.
• Exporting IBM Spectrum Scale data to virtual servers that do not have a valid IBM Spectrum
Scale license through any application, protocol or method, including Network File System (NFS),
Server Message Block (SMB), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP),
Object Protocol (OpenStack Swift, Amazon S3 API).
The full text of the Licensing Agreement is provided with the installation media and can be found at
the IBM Software license agreements website (www.ibm.com/software/sla/sladb.nsf).
--accept
Indicates that you accept the applicable licensing terms. The license acceptance prompt will be
suppressed.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes that are to be assigned the specified license type.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmchlicense command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To designate nodes k145n04 and k145n05 as possessing a GPFS server license, issue this command:
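mmchlicense server --accept -N k145n04,k145n05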
See also
• “mmlslicense command” on page 503
Location
/usr/lpp/mmfs/bin
mmchmgr command
Assigns a new file system manager node or cluster manager node.
Synopsis
mmchmgr {Device | -c} [Node]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmchmgr command assigns a new file system manager node or cluster manager node.
Parameters
Device
The device name of the file system for which the file system manager node is to be changed. File
system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
-c
Changes the cluster manager node.
Node
The target node to be appointed as either the new cluster manager node or the new file system
manager node. Target nodes for manager functions are selected according to the following criteria:
• Target nodes for the cluster manager function must be specified from the list of quorum nodes.
• Target nodes for the file system manager function should be specified from the list of manager
nodes, although this is not strictly required.
If Node is not specified, the new manager is selected automatically.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmchmgr command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. Assume the file system manager for the file system gpfs1 is currently k164n05. To migrate the file
system manager responsibilities to k164n06, issue this command:
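mmchmgr gpfs1 k164n06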
GPFS: 6027-628 Sending migrate request to current manager node 89.116.68.69 (k164n05).
GPFS: 6027-629 [N] Node 89.116.68.69 (k164n05) resigned as manager for gpfs1.
GPFS: 6027-630 [N] Node 89.116.68.70 (k164n06) appointed as manager for gpfs1.
mmlsmgr gpfs1
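2. To change the cluster manager node to c5n107, issue this command: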
mmchmgr -c c5n107
mmlsmgr -c
See also
• “mmlsmgr command” on page 507
Location
/usr/lpp/mmfs/bin
mmchnode command
Changes node attributes.
Synopsis
mmchnode change-options -N {Node[,Node...] | NodeFile | NodeClass}
[--cloud-gateway-nodeclass CloudGatewayNodeClass]
or
mmchnode {-S Filename | --spec-file=Filename}
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmchnode command to change one or more attributes on a single node or on a set of nodes. If
conflicting node designation attributes are specified for a given node, the last value is used. If any of the
attributes represent a node-unique value, the -N option must resolve to a single node.
Do not use the mmchnode command to change the gateway node role while I/O is happening on the fileset.
Run the mmafmctl flushPending command to flush any pending messages from the queues before you run
the mmchnode command to change the gateway node role.
Parameters
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes whose states are to be changed.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
[--cloud-gateway-nodeclass CloudGatewayNodeClass]
Use this option to specify a node class you will use for Transparent cloud tiering management along
with the -N option where you will specify individual node names. Both -N with a node list and
--cloud-gateway-nodeclass with a node class are required.
-S Filename | --spec-file=Filename
Specifies a file with a detailed description of the changes to be made. Each line represents the
changes to an individual node and has the following format:
node-identifier change-options
change-options
A blank-separated list of attribute[=value] pairs. The following attributes can be specified:
--admin-interface={hostname | ip_address}
Specifies the name of the node to be used by GPFS administration commands when
communicating between nodes. The admin node name must be specified as an IP address or a
hostname that is resolved by the host command to the desired IP address. If the keyword
DEFAULT is specified, the admin interface for the node is set to be equal to the daemon interface
for the node.
--client
Specifies that the node should not be part of the pool of nodes from which cluster managers, file
system managers, and token managers are selected.
--cloud-gateway-enable
Enables one or more nodes as Transparent cloud tiering nodes on the cluster based on the -N
option parameters.
--cloud-gateway-disable
Disables one or more Transparent cloud tiering nodes from the cluster based on the -N option
parameters. Only disable a Transparent cloud tiering node if you no longer need it to migrate or
recall data from the configured cloud.
--ces-enable
Enables Cluster Export Services (CES) on the node.
--ces-disable
Disables CES on the node.
--ces-group=Group[,Group...]
Adds one or more groups to the specified nodes. Each group that is listed is added to all the
specified nodes.
--noces-group=Group[,Group...]
Removes one or more groups from the specified nodes.
--cnfs-disable
Disables the CNFS functionality of a CNFS member node.
--cnfs-enable
Enables a previously-disabled CNFS member node.
--cnfs-groupid=groupid
Specifies a failover recovery group for the node. If the keyword DEFAULT is specified, the CNFS
recovery group for the node is set to zero.
For more information, see Implementing a clustered NFS environment on Linux in IBM Spectrum
Scale: Administration Guide.
--cnfs-interface=ip_address_list
A comma-separated list of host names or IP addresses to be used for GPFS cluster NFS serving.
The specified IP addresses can be real or virtual (aliased). These addresses must be configured to
be static (not DHCP) and to not start at boot time.
The GPFS daemon interface for the node cannot be a part of the list of CNFS IP addresses.
If the keyword DEFAULT is specified, the CNFS IP address list is removed and the node is no
longer considered a member of CNFS.
If adminMode central is in effect for the cluster, all CNFS member nodes must be able to execute
remote commands without the need for a password.
For more information, see Implementing a clustered NFS environment on Linux in IBM Spectrum
Scale: Administration Guide.
--daemon-interface={hostname | ip_address}
Specifies the host name or IP address to be used by the GPFS daemons for node-to-node
communication. The host name or IP address must refer to the communication adapter over
which the GPFS daemons communicate. Alias interfaces are not allowed. Use the original address
or a name that is resolved by the host command to the original address. You cannot set --
daemon-interface=DEFAULT.
You must stop all the nodes in the cluster with mmshutdown before you set this attribute. This
requirement holds true even if you are changing only one node.
See examples 8 and 9 at the end of this topic.
If the minimum release level of the cluster is IBM Spectrum Scale 5.1.0 or later, the following
features are available:
• You can specify the --daemon-interface option for a quorum node even when CCR is
enabled. For earlier versions of IBM Spectrum Scale, temporarily change the quorum node to a
nonquorum node, issue the mmchnode command with the --daemon-interface option for
the nonquorum node, and change the nonquorum node back to a quorum node.
• You can change the IP addresses or host names of cluster nodes when a node quorum is not
available. For more information, see Changing IP addresses and host names in the IBM Spectrum
Scale: Administration Guide.
--gateway | --nogateway
Specifies whether the node is to be designated as a gateway node or not.
--manager | --nomanager
Designates the node as part of the pool of nodes from which file system managers and token
managers are selected.
--nonquorum
Designates the node as a non-quorum node. If two or more quorum nodes are downgraded at the
same time, GPFS must be stopped on all nodes in the cluster. GPFS does not have to be stopped if
the nodes are downgraded one at a time.
--perfmon | --noperfmon
Specifies whether the node is to be designated as a perfmon node or not.
--nosnmp-agent
Stops the SNMP subagent and specifies that the node should no longer serve as an SNMP
collector node. For more information, see GPFS SNMP support in IBM Spectrum Scale: Problem
Determination Guide.
--quorum
Designates the node as a quorum node.
Note: If you are designating a node as a quorum node, and adminMode central is in effect for
the cluster, you must ensure that GPFS is up and running on that node (mmgetstate reports the
state of the node as active).
--snmp-agent
Designates the node as an SNMP collector node. If the GPFS daemon is active on this node, the
SNMP subagent will be started as well. For more information, see GPFS SNMP support in IBM
Spectrum Scale: Problem Determination Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmchnode command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To change nodes k145n04 and k145n05 to be both quorum and manager nodes, issue this command:
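Based on the --quorum, --manager, and -N options described earlier in this topic, a command of the
following form accomplishes this (shown as an illustrative sketch):
mmchnode --quorum --manager -N k145n04,k145n05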
2. To change nodes k145n04 and k145n05 to be both quorum and manager nodes, and node k145n06 to
be a non-quorum node, issue this command:
mmchnode -S /tmp/specFile
3. To enable all the nodes specified in the node class TCTNodeClass1 as Transparent cloud tiering nodes,
issue this command:
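Using the --cloud-gateway-enable option described earlier, an invocation of the following form
enables the node class (illustrative sketch):
mmchnode --cloud-gateway-enable -N TCTNodeClass1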
You can verify the Transparent cloud tiering nodes by issuing this command:
4. To designate only a few nodes (node1 and node2) in the node class, TCTNodeClass1, as Transparent
cloud tiering server nodes, issue this command:
Note: It only designates node1 and node2 as Transparent cloud tiering server nodes from the node
class, TCTNodeClass1. Administrators can continue to use the node class for other purposes.
5. To disable all Transparent cloud tiering nodes from the node class, TCTNodeClass1, issue this
command:
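An invocation of the following form, using the --cloud-gateway-disable option, performs the
disablement (illustrative sketch):
mmchnode --cloud-gateway-disable -N TCTNodeClass1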
6. To disable only a few nodes (node1 and node2) from the node class, TCTNodeClass1, as Transparent
cloud tiering server nodes, issue this command:
Note: It only disables node1 and node2 as Transparent cloud tiering server nodes from the node class,
TCTNodeClass1.
7. To add groups to specified nodes, issue the mmchnode --ces-group command. For example:
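A command of the following form adds the groups g1 and g2 (the node names node2 and node3 are
placeholders for the two target nodes; illustrative sketch):
mmchnode --ces-group=g1,g2 -N node2,node3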
Note: This command adds groups g1 and g2 to both nodes 2 and 3. Run the mmces node list
command to view the group allocation:
8. The following example changes the daemon interface of the quorum node node-6:
a. Before the change, the mmlscluster command shows the original daemon node name and IP
address for node-6:
# mmlscluster
GPFS cluster information
========================
GPFS cluster name: small_cluster.localnet.com
GPFS cluster id: 5072947464461061246
GPFS UID domain: small_cluster.localnet.com
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
Repository type: CCR
GPFS cluster configuration servers:
-----------------------------------
Primary server: node-6.localnet.com (not in use)
Secondary server: (none)
Node Daemon node name IP address Admin node name Designation
-----------------------------------------------------------------------------
1 node-6.localnet.com 192.168.124.36 node-6.localnet.com quorum
2 node-7.localnet.com 192.168.124.37 node-7.localnet.com quorum
3 node-8.localnet.com 192.168.124.38 node-8.localnet.com
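b. GPFS is stopped on all nodes with the mmshutdown command, and the daemon interface of node-6
is changed. Based on the --daemon-interface option described earlier and the new daemon node
name shown in step c, a command of the following form makes the change (illustrative sketch):
mmchnode --daemon-interface=node-6-2.new-localnet.com -N node-6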
c. The mmlscluster command now shows the new daemon interface for node-6. Some lines of
output are omitted:
# mmlscluster
GPFS cluster information
========================
...
Node Daemon node name IP address Admin node name Designation
-----------------------------------------------------------------------------------
1 node-6-2.new-localnet.com 10.20.40.36 node-6.localnet.com quorum
2 node-7.localnet.com 192.168.124.37 node-7.localnet.com quorum
3 node-8.localnet.com 192.168.124.38 node-8.localnet.com
9. The following example changes the daemon interface and the administration interface of multiple
nodes:
a. The data file /tmp/specfile contains the following lines:
b. The mmchnode command changes the daemon interfaces and the administration interfaces of
node-6, node-7, and node-8 based on the information in the data file:
mmchnode -S /tmp/specFile
Wed Sep 23 20:19:48 CEST 2020: mmchnode: Processing node node-6
Wed Sep 23 20:19:49 CEST 2020: mmchnode: Processing node node-7
Wed Sep 23 20:19:50 CEST 2020: mmchnode: Processing node node-8
Verifying GPFS is stopped on all nodes ...
Wed Sep 23 20:19:52 CEST 2020: mmchnode: Collecting ccr.nodes file version from all
quorum nodes ...
Wed Sep 23 20:19:53 CEST 2020: mmchnode: Applying new change to ccr.nodes version 4 on
all available quorum nodes ...
Wed Sep 23 20:19:56 CEST 2020: mmchnode: Committing new version of ccr.nodes file ...
c. The mmlscluster command now shows the new daemon interfaces and administration interfaces
for the three nodes. Some lines of output are omitted:
# mmlscluster
GPFS cluster information
========================
GPFS cluster name: small_cluster.localnet.com
...
Node Daemon node name IP address Admin node name
Designation
--------------------------------------------------------------------------------------
1 node-6-2.new-localnet.com 10.20.40.36 node-6-2.new-localnet.com quorum
2 node-7-2.new-localnet.com 10.20.40.37 node-7-2.new-localnet.com quorum
3 node-8-2.new-localnet.com 10.20.40.38 node-8-2.new-localnet.com
See also
• “mmchconfig command” on page 169
• “mmlscluster command” on page 484
Location
/usr/lpp/mmfs/bin
mmchnodeclass command
Changes user-defined node classes.
Synopsis
mmchnodeclass ClassName {add | delete | replace}
-N {Node[,Node...] | NodeFile | NodeClass}
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmchnodeclass command to make changes to existing user-defined node classes.
Parameters
ClassName
Specifies the name of an existing user-defined node class to modify.
add
Adds the nodes specified with the -N option to ClassName.
delete
Deletes the nodes specified with the -N option from ClassName.
replace
Replaces all ClassName members with a new list of nodes specified with the -N option.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the member names of nodes and node classes that will be used for the add, delete, or
replace action.
NodeClass cannot be used to add members that already contain other node classes. For example, two
user-defined node classes called siteA and siteB were used to create a new node class called
siteAandB, as follows:
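Such a node class could have been created with a command of the following form (an illustrative
sketch; see “mmcrnodeclass command” on page 330 for the exact syntax):
mmcrnodeclass siteAandB -N siteA,siteB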
The siteAandB node class cannot later be specified for NodeClass when adding to existing node
classes.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmchnodeclass command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To display the current members of a user-defined node class called siteA, issue this command:
mmlsnodeclass siteA
To add node c8f2c1vp4 to the member list of the user-defined node class siteA, issue this command:
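Based on the synopsis above, a command of the following form adds the node (illustrative sketch):
mmchnodeclass siteA add -N c8f2c1vp4
You can then display the updated membership with: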
mmlsnodeclass siteA
To delete node c8f2c4vp2 from the member list of siteA, issue this command:
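A command of the following form removes the node (illustrative sketch):
mmchnodeclass siteA delete -N c8f2c4vp2
The updated membership can then be displayed with: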
mmlsnodeclass siteA
To replace all the current members of siteA with the members of node class linuxNodes, issue this
command:
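An invocation of the following form performs the replacement (illustrative sketch):
mmchnodeclass siteA replace -N linuxNodes
The resulting membership can then be displayed with: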
mmlsnodeclass siteA
See also
• “mmcrnodeclass command” on page 330
• “mmdelnodeclass command” on page 374
• “mmlsnodeclass command” on page 512
Location
/usr/lpp/mmfs/bin
mmchnsd command
Changes Network Shared Disk (NSD) configuration attributes.
Synopsis
mmchnsd {"DiskDesc[;DiskDesc...]" | -F StanzaFile}
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmchnsd command serves several purposes. You can use it to:
• Specify a server list for an NSD that does not have one.
• Change the NSD server nodes specified in the server list.
• Delete the server list. The disk must now be SAN-attached to all nodes in the cluster on which the file
system will be mounted.
In IBM Spectrum Scale 5.0.0 or later, you do not need to unmount the file system before changing NSDs.
Note: This feature applies only to non-vdisk NSDs. To enable this feature, you must upgrade all the nodes
in the cluster to IBM Spectrum Scale 5.0.0 or later.
In versions of IBM Spectrum Scale that are earlier than 5.0.0, you must unmount the file system that
contains the NSD that is being changed before you issue the mmchnsd command.
You must follow these rules when you change NSDs:
• Identify the disks by the NSD names that were given to them by the mmcrnsd command.
• Explicitly specify values for all NSD servers on the list even if you are only changing one of the values.
• Connect the NSD to the new nodes prior to issuing the mmchnsd command.
Note: The mmchdisk command does not change the name of the NSD, the NSD servers that are
associated with the disk, the storage pool of the disk:
• The name of the NSD cannot be changed.
• To change the NSD servers use the mmchnsd command.
• To change the storage pool use the mmdeldisk and mmadddisk commands.
Prior to GPFS 3.5, the disk information was specified in the form of disk descriptors defined as:
DiskName:ServerList:
For backward compatibility, the mmchnsd command will still accept the traditional disk descriptors but
their use is discouraged.
Parameters
DiskDesc
A descriptor for each NSD to be changed. Each descriptor is separated by a semicolon (;). The entire
list must be enclosed in single or double quotation marks. The use of disk descriptors is discouraged.
-F StanzaFile
Specifies a file containing the NSD stanzas for the disks to be changed. NSD stanzas have this format:
%nsd:
nsd=NsdName
servers=ServerList
usage=DiskUsage
failureGroup=FailureGroup
pool=StoragePool
device=DiskName
thinDiskType={no | nvme | scsi | auto}
where:
nsd=NsdName
Is the NSD name that was given to the disk by the mmcrnsd command. This clause is mandatory
for the mmchnsd command.
servers=ServerList
Is a comma-separated list of NSD server nodes. You can specify up to eight NSD servers in this
list. The defined NSD will preferentially use the first server on the list. If the first server is not
available, the NSD will use the next available server on the list.
When specifying server nodes for your NSDs, the output of the mmlscluster command lists the
host name and IP address combinations recognized by GPFS. The utilization of aliased host names
not listed in the mmlscluster command output may produce undesired results.
If you do not define a ServerList, GPFS assumes that the disk is SAN-attached to all nodes in the
cluster. If all nodes in the cluster do not have access to the disk, or if the file system to which the
disk belongs is to be accessed by other GPFS clusters, you must specify a value for ServerList.
To remove the NSD server list, do not specify a value for ServerList (remove or comment out the
servers=ServerList clause of the NSD stanza).
usage=DiskUsage
Specifies the type of data to be stored on the disk. If this clause is specified, the value must match
the type of usage already in effect for the disk; mmchnsd cannot be used to change this value.
failureGroup=FailureGroup
Identifies the failure group to which the disk belongs. A failure group identifier can be a simple
integer or a topology vector that consists of up to three comma-separated integers. The default is
-1, which indicates that the disk has no point of failure in common with any other disk.
GPFS uses this information during data and metadata placement to ensure that no two replicas of
the same block can become unavailable due to a single failure. All disks that are attached to the
same NSD server or adapter must be placed in the same failure group.
If the file system is configured with data replication, all storage pools must have two failure groups
to maintain proper protection of the data. Similarly, if metadata replication is in effect, the system
storage pool must have two failure groups.
Disks that belong to storage pools in which write affinity is enabled can use topology vectors to
identify failure domains in a shared-nothing cluster. Disks that belong to traditional storage pools
must use simple integers to specify the failure group.
If this clause is specified, the value must match the failure group already in effect for the disk;
mmchnsd cannot be used to change this value.
pool=StoragePool
Specifies the storage pool to which the disk is to be assigned. If this clause is specified, the value
must match the storage pool already in effect for the disk; mmchnsd cannot be used to change this
value.
device=DiskName
Is the block device name of the underlying disk device. This clause is ignored by the mmchnsd
command.
thinDiskType={no | nvme | scsi | auto}
Specifies the space reclaim disk type:
no
The disk does not support space reclaim. This value is the default.
nvme
The disk is a TRIM-capable NVMe device that supports the mmreclaimspace command.
scsi
The disk is a thin provisioned SCSI disk that supports the mmreclaimspace command.
auto
The type of the disk is either nvme or scsi. IBM Spectrum Scale will try to detect the actual
disk type automatically. To avoid problems, you should replace auto with the correct disk
type, nvme or scsi, as soon as you can.
Note: In 5.0.5, the space reclaim auto-detection is enhanced. You are encouraged to use the auto
keyword after your cluster is upgraded to 5.0.5.
For more information, see the topic IBM Spectrum Scale with data reduction storage devices in the
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmchnsd command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
If the disk gpfs1nsd is currently defined with k145n05 as the first server and k145n07 as the second
server, and you want to replace k145n05 with k145n09, create a file ./newNSDstanza that contains:
%nsd: nsd=gpfs1nsd
servers=k145n09,k145n07
mmchnsd -F ./newNSDstanza
mmlsnsd -d gpfs1nsd
See also
• “mmchdisk command” on page 210
• “mmcrcluster command” on page 303
• “mmcrnsd command” on page 332
• “mmlsnsd command” on page 514
Location
/usr/lpp/mmfs/bin
mmchpolicy command
Establishes policy rules for a GPFS file system.
Synopsis
mmchpolicy Device PolicyFilename [-t DescriptiveName] [-I {yes | test}]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmchpolicy command to establish the rules for policy-based lifecycle management of the files
in a given GPFS file system. Some of the things that can be controlled with the help of policy rules are:
• File placement at creation time
• Snapshot data placement during file writes and deletes
• Replication factors
• Movement of data between storage pools
• File deletion
The mmapplypolicy command must be run to move data between storage pools or delete files.
Policy changes take effect immediately on all nodes that have the affected file system mounted. For
nodes that do not have the file system mounted, policy changes take effect upon the next mount of the
file system.
For file systems that are created at or upgraded to product version V4.1.1 or later: If there are no SET
POOL policy rules installed to a file system by mmchpolicy, the system acts as if the single rule SET
POOL 'first-data-pool' is in effect, where first-data-pool is the firstmost non-system pool that is available
for file data storage, if such a non-system pool is available. ("Firstmost" is the first according to an internal
index of all pools.) However, if there are no policy rules installed and there is no non-system pool, the
system acts as if SET POOL 'system' is in effect.
This change applies only to file systems that were created at or upgraded to V4.1.1 or later. Until a file
system is upgraded, if no SET POOL rules are present (set by mmchpolicy) for the file system, all data
will be stored in the 'system' pool.
For information on GPFS policies, see the IBM Spectrum Scale: Administration Guide.
Parameters
Device
Specifies the device name of the file system for which policy information is to be established or
changed. File system names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0. This
must be the first parameter.
PolicyFilename
Specifies the name of the file that contains the policy rules. If you specify DEFAULT, GPFS replaces
the current policy file with a single policy rule that assigns all newly-created files to the system
storage pool.
Options
-I {yes | test}
Specifies whether to activate the rules in the policy file PolicyFileName.
yes
The policy rules are validated and immediately activated. This is the default.
test
The policy rules are validated, but not installed.
-t DescriptiveName
Specifies an optional descriptive name to be associated with the policy rules. The string must be less
than 256 characters in length. If not specified, the descriptive name defaults to the base name
portion of the PolicyFileName parameter.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmchpolicy command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. This command validates a policy before it is installed:
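Based on the synopsis above, a command of the following form validates the rules without installing
them (the policy file name policyfile is a placeholder; illustrative sketch):
mmchpolicy fs2 policyfile -I test
After the rules are installed (with -I yes, the default), the currently installed policy can be displayed
with: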
mmlspolicy fs2
See also
• “mmapplypolicy command” on page 80
Location
/usr/lpp/mmfs/bin
mmchpool command
Modifies storage pool properties.
Synopsis
mmchpool Device {PoolName[,PoolName...] | all}
[--block-group-factor BlockGroupFactor]
[--write-affinity-depth WriteAffinityDepth]
or
Availability
Available on all IBM Spectrum Scale editions.
When running the mmchpool command, the file system must be unmounted on all nodes.
Description
Use the mmchpool command to change storage pool properties.
Parameters
Device
Specifies the device name of the file system for which storage pool information is to be changed. File
system names do not need to be fully qualified; for example, fs0 is as acceptable as /dev/fs0.
PoolName[,PoolName...]
Specifies one or more storage pools for which attributes will be changed.
all
Changes the attributes for all the storage pools in the specified file system.
--block-group-factor BlockGroupFactor
Specifies how many file system blocks are laid out sequentially on disk to behave like a single large
block. This option only works if --allow-write-affinity is set for the data pool. This applies only
to a new data block layout; it does not migrate previously existing data blocks.
--write-affinity-depth WriteAffinityDepth
Specifies the allocation policy to be used. This option only works if --allow-write-affinity is set
for the data pool. This applies only to a new data block layout; it does not migrate previously existing
data blocks.
-F PoolDescriptorFile
Specifies a file used to describe the storage pool attributes. The file contains one line per storage
pool, in the following format:
%pool:name:blockSize:diskUsage:reserved:maxDiskSize:allocationType:allowWriteAffinity:writeAffinityDepth:blockGroupFactor:
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmchpool command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Example
For example, to change the writeAffinityDepth to 2 for FPO pool pool1 of file system fs1, issue this
command:
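An invocation of the following form, using the --write-affinity-depth option described above,
makes the change (illustrative sketch):
mmchpool fs1 pool1 --write-affinity-depth 2
The mmlspool command then shows the updated value: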
# mmlspool fs1 pool1 -L
Pool:
name = pool1
poolID = 65537
blockSize = 4 MB
usage = dataOnly
maxDiskSize = 11 TB
layoutMap = cluster
allowWriteAffinity = yes
writeAffinityDepth = 2
blockGroupFactor = 128
See also
• “mmlspool command” on page 520
Location
/usr/lpp/mmfs/bin
mmchqos command
Controls Quality of Service for I/O operations (QoS) settings for a file system.
Synopsis
mmchqos Device --enable [--reset] [--force]
[--fine-stats Seconds] [--pid-stats {yes|no}]
[--stat-poll-interval Seconds] [--stat-slot-time Milliseconds]
[[-N {Node[,Node...] | NodeFile | NodeClass}]
[-C {all | all_remote | ClusterName[,ClusterName...]}]
pool=PoolName[,QOSClass={nnnIOPS | unlimited}][,QOSClass=...] ...]
or
or
Availability
Available on all IBM Spectrum Scale editions.
Description
Attention: The mmqos command provides the same functionality as both the mmchqos command
and the mmlsqos command and has additional features. Future QoS features will be added to the
mmqos command rather than to either the mmchqos command or the mmlsqos command. For
more information, see “mmqos command” on page 610.
With the mmchqos command, you can regulate I/O access to a specified storage pool by allocating shares
of I/O operations to two QoS classes:
• A maintenance class for I/O-intensive, potentially long-running GPFS commands. Typically you assign
fewer IOPS to this class to prevent the I/O-intensive commands from dominating file system
performance and significantly delaying other tasks.
• An other class for all other processes. Typically you assign more IOPS or unlimited to this class so
that normal processes have greater access to I/O resources and finish more quickly.
A third class, misc, is used to count the IOPS that some critical file system processes consume. You
cannot assign IOPS to this class, but its count of IOPS is displayed in the output of the mmlsqos
command.
When QoS is enabled, it restricts the active processes in a QoS class from collectively consuming more
than the number of IOPS that you allocate to the class. It queues further I/O attempts until more I/O
operations become available.
Important:
• You can allocate shares of IOPS separately for each storage pool.
• QoS divides each IOPS allocation equally among the specified nodes that have mounted the file system.
See Table 19 on page 264.
• Allocations persist across unmounting and remounting the file system.
• QoS stops applying allocations when you unmount the file system and resumes when you remount it.
• When you change allocations or mount the file system, a brief delay due to reconfiguration occurs
before QoS starts applying allocations.
Each QoS node collects statistics at certain intervals and periodically sends the accumulated statistics to
a QoS manager node. The QoS manager node combines and processes the statistics so that they are
available for reporting by the mmlsqos command. In larger clusters the frequency of collecting statistics
and the frequency of sending accumulated statistics can generate so many messages between QoS nodes
and a QoS manager node that the performance of QoS is degraded. To prevent that result, the values of
two internal QoS variables are dynamically adjusted based on the number of nodes that have mounted
the file system. The two internal variables are the interval between each collection of statistics and the
interval between each sending of accumulated statistics. These variables apply globally to all the QoS
nodes in the cluster. As the values of these two variables become greater, the frequency of statistics-
related messages between QoS nodes and QoS manager nodes decreases and the impact on QoS
performance diminishes.
The following table shows how QoS sets the collection interval and the message interval based on the
number of nodes that have mounted the file system:
For example, the first row of the table shows that when the cluster contains fewer than 32 nodes, each
QoS node collects statistics every second and sends accumulated statistics to a QoS manager node
every 5 seconds.
You can set the collection interval and the message interval to custom values with the mmchqos
command line parameters --stat-slot-time and --stat-poll-interval. These custom values
override the default values that are shown in the previous table and are not affected by changes in the
number of nodes that have mounted the file system. For more information see the description of these
parameters later in this topic.
For more information about the mmchqos command, see Setting the Quality of Service for I/O operations
(QoS) in IBM Spectrum Scale: Administration Guide.
Parameters
Device
The device name of the file system to which the command applies.
--enable
Causes QoS to start or to continue to apply IOPS allocations:
• If you are specifying the --enable option for the first time, then QoS sets the QoS classes of pools
that you do not specify in the command to unlimited IOPS. For QoS to be in effect, you must
enable it with IOPS allocations. For more information, see Setting the Quality of Service for I/O
operations (QoS) in IBM Spectrum Scale: Administration Guide.
• Subsequent mmchqos --enable commands cumulatively augment the IOPS allocations.
You can use the following commands to manage the accumulated allocations for a file system:
– To see the accumulated allocations, issue the following command:
mmlsqos fs0
--disable
Causes QoS to stop applying IOPS allocations. Lets the file system run without any participation by
QoS.
--reset
Causes QoS to set any QoS classes that you do not specify in the same command to unlimited
IOPS.
When you enter multiple mmchqos commands for different storage pools, QoS typically records the
settings for each pool and regulates the I/O consumption of each pool accordingly. However, with the
--reset parameter, QoS discards the settings for all pools that are not specified in the same
command and sets their QoS classes to unlimited. You can use this feature to reset the settings for
any pools that you no longer want QoS to regulate and monitor.
--force
Causes QoS to accept an IOPS value lower than 100 IOPS.
Assigning less than 100 IOPS to a class is typically ineffective, because processes in that class run for
an indefinitely long time. Therefore, the mmchqos command rejects IOPS values of less than 100
IOPS with an error message, unless you specify the --force option.
--fine-stats Seconds
Specifies how many seconds of fine-grained statistics to save in memory for the mmlsqos command
to display. The default value is 0, which indicates that no fine-grained statistics are saved. The valid
range is 0 - 3840 seconds.
Note: The value that you specify for Seconds is mapped to a larger actual value. The mmlsqos
command reports the actual value.
Fine-grained statistics are taken at one-second intervals and contain more information than regular
statistics. For more information, see the topic “mmlsqos command” on page 522.
--pid-stats {yes|no}
Enables or disables the keeping of fine-grained statistics for each QoS program that is active on each
node of the cluster. The default value is no. This parameter is effective only if --fine-stats is not
zero.
When this parameter is disabled, statistics reflect the combined data of all the QoS processes running
on each node.
[--stat-poll-interval Seconds]
Sets the interval between each sending of accumulated statistics information by a QoS node to a QoS
manager node. The value must be a multiple of the --stat-slot-time. For example, if the --
stat-slot-time is 1000 milliseconds, then the --stat-poll-interval must be one of the
following values: 1 second, 2 seconds, 3 seconds, and so on. This parameter overrides the default
values that are shown in Table 18 on page 261 and is not affected by changes in the number of nodes
that have mounted the file system.
To return to default operation, in which QoS dynamically adjusts the interval based on the number of
nodes that have mounted the file system, set this attribute to 0. See Table 18 on page 261 and the
description of this interval in the Description section of this topic.
[--stat-slot-time Milliseconds]
Sets the interval between each collection of statistics information by a QoS node. If the value is less
than 1000 milliseconds, it must be one of the following values, in milliseconds: 100, 125, 200, 250, or
500 milliseconds. This parameter overrides the values that are shown in Table 18 on page 261 and is
not affected by changes in the number of nodes that have mounted the file system. See the
description of this interval in the Description section of this topic.
To return to default operation, in which QoS dynamically adjusts the interval based on the number of
nodes that have mounted the file system, set this attribute to 0. See Table 18 on page 261 and the
description of this interval in the Description section of this topic.
pool=PoolName
Specifies a storage pool to whose maintenance or other class the IOPS are to be allocated. If you
specify an asterisk (*) as the pool name, then the IOPS are allocated to the QoS classes of the
unspecified pools. The unspecified pools are storage pools that you have not specified by name in any
previous mmchqos command.
QOSClass={nnnIOPS | unlimited}
Identifies a QoS class and allocates IOPS to it.
QOSClass
The QoS class to which IOPS are assigned. You can specify one of the following classes:
maintenance
Most I/O-intensive, potentially slow-running GPFS administration commands run in this class
by default. See the list of commands that support QoS in Table 20 on page 264. Typically, you
allocate fewer IOPS to this QoS class so that the commands that belong to it do not reduce
overall file system performance.
When you start one of these commands, you can explicitly assign it to either QoS class. The
assignment is effective only for the instance of the command that you are starting. In certain
situations, you might assign one of these commands to the other class so that it runs faster
and completes sooner.
other
All other processes that use I/O run in this class by default. Typically you assign more IOPS or
unlimited to this class so that normal processes have greater access to I/O resources and
finish more quickly.
Some I/O-intensive, potentially slow-running GPFS administration commands run in this class
by default. (Currently just one: mmchdisk.) See the list of commands that support QoS in
Table 20 on page 264. When you start one of these commands, you can explicitly assign it to
either QoS class. The assignment is effective only for the instance of the command that you
are starting. In certain situations, you might want to assign one of these commands to the
maintenance class so that normal processes can finish more quickly.
The following table lists the GPFS commands that support QoS and the QoS class that the
command runs in by default:
nnnIOPS
You can use the following values for IOPS:
• A value in the range 0IOPS - 1999999999IOPS. For an IOPS value less than 100, you must
specify the --force option. Otherwise, QoS displays an error message like the following one:
QoS divides the IOPS allocation equally among the specified nodes that have mounted the file
system.
You can type nnnIOPS either with or without the trailing characters IOPS. So either of the
following two examples is valid:
(1) maintenance=400
(2) maintenance=400IOPS
-F StanzaFile
Specifies a file that contains QoS stanzas that describe allocations of IOPS. QoS stanzas have the
following format:
%qos:
pool=PoolName
class=QOSClass
iops=Value
nodes={Node[,Node...] | NodeFile | NodeClass}
cluster={all | all_remote | ClusterName[,ClusterName...]}
where:
%qos
Identifies the stanza as a QoS stanza.
pool=PoolName
Specifies a storage pool to whose maintenance or other class the IOPS are allocated. If you
specify an asterisk (*) as the pool name, then the IOPS are allocated to the QoS classes of
unspecified pools. Unspecified pools are storage pools that you have not specified by name in any
previous mmchqos command.
class=QOSClass
Specifies a QoS class. It must be maintenance or other.
iops=Value
Specifies the IOPS that are to be allocated to the QoS class. QoS divides the allocated IOPS
among the specified nodes that have mounted the file system.
Exit status
0
Successful completion.
Nonzero
A failure occurred.
Security
You must have root authority to run the mmchqos command.
The node on which you enter the command must be able to execute remote shell commands on any other
administration node in the cluster. It must be able to do so without the use of a password and without
producing any extraneous messages. For more information, see Requirements for administering a GPFS
file system in IBM Spectrum Scale: Administration Guide.
Examples
1. The following command enables QoS. If you are using the --enable option for the first time, then QoS
sets the value of all the QoS classes to unlimited. If not, then the QoS classes retain their current
settings:
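An invocation of the following form enables QoS (the device name fs0 is assumed for illustration):
mmchqos fs0 --enable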
2. The following command disables QoS but does not change the current allocations of IOPS:
3. The following command enables QoS and allocates 123 IOPS to the maintenance class of each
unspecified pool:
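Using the pool=* notation for unspecified pools, a command of the following form makes this
allocation (device name fs0 assumed; illustrative sketch):
mmchqos fs0 --enable pool=*,maintenance=123IOPS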
4. The following command enables QoS and allocates 222 IOPS to the maintenance class of each
unspecified pool. It also allocates 576 IOPS to the maintenance class of the pool mySSDs. You might
make an allocation like this one to favor a pool of high-speed storage (mySSDs) that you expect to be
accessed frequently. The command does not change the allocations of the QoS other classes:
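A command of the following form makes these allocations (device name fs0 assumed; illustrative
sketch):
mmchqos fs0 --enable pool=*,maintenance=222IOPS pool=mySSDs,maintenance=576IOPS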
5. The following command enables QoS and allocates IOPS to the classes of the unspecified pools and
three named pools:
6. The following command enables QoS and allocates IOPS to the classes of three named pools:
7. The following command enables QoS and allocates IOPS to both classes of the system pool. Also,
because the command contains the --reset parameter, it sets both classes of all the other storage
pools in the file system to unlimited. The reset affects not only any unspecified pools, but also any
named pools that are not explicitly mentioned in this command.
8. The first part of the following command assigns IOPS to the QoS classes of the unspecified pools. It
assigns 222 IOPS to the maintenance class of each unspecified pool.
The second part of the command allocates 456 IOPS to the other class of the storage pool mySAN,
rather than allocating to it the typical value unlimited. You might make an allocation like this one to a
SAN controller that serves both GPFS and other systems.
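A sketch of such a command, with the device name fs0 assumed for illustration:
mmchqos fs0 --enable pool=*,maintenance=222IOPS pool=mySAN,other=456IOPS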
9. The following command allocates IOPS as they are specified in the stanza file qos.stanza:
%qos:
pool=sp1
class=maintenance
iops=800
nodes=node1,aixNodes
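A command of the following form applies the stanza file (device name fs0 assumed for illustration):
mmchqos fs0 --enable -F qos.stanza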
The command divides the allocated IOPS equally among node node1 and the nodes in the node class
aixNodes. Assuming that the node class contains three nodes, 200 IOPS are assigned to each of the
four nodes:
See also
• “mmlsqos command” on page 522
• Setting the Quality of Service for I/O operations (QoS) in the IBM Spectrum Scale: Administration Guide.
Location
/usr/lpp/mmfs/bin
mmclidecode command
Decodes the parseable command output field.
Synopsis
mmclidecode EncodedString
Availability
Available on all IBM Spectrum Scale editions.
Description
The parseable output of a command might contain a colon (:), which conflicts with the colon (:) that is
used as the field delimiter in the command output. Therefore, any output field that might contain a
colon must be encoded. Because percent-encoding, also known as URL encoding, is used to encode
the output fields, the characters in the following table are also encoded if they are present in the
output field.
Parameters
EncodedString
The encoded string to decode.
Exit status
0
Successful completion.
Nonzero
A failure occurred.
Security
You do not need root authority to run the mmclidecode command.
Examples
1. A field such as a path name is encoded because it contains characters that must be escaped. It can
be decoded as follows:
mmclidecode "%2Fmnt%2Fgpfs0"
/mnt/gpfs0
Location
/usr/lpp/mmfs/bin
mmclone command
Creates and manages file clones.
Synopsis
mmclone snap SourceFile [CloneParentFile]
or
or
or
or
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
Use the mmclone command to create and manage file clones. Clones are writable snapshots of individual
files. Cloning a file is similar to creating a copy of a file, but the creation process is faster and more space
efficient because no additional disk space is consumed until the clone or the original file is modified. The
keyword specified after mmclone determines which action is performed:
snap
Creates a read-only snapshot of an existing file for the purpose of cloning. This read-only snapshot
becomes known as the clone parent.
If only one file is specified with the mmclone snap command, it will convert that file to a clone parent
without creating a separate clone parent file. When using this method to create a clone parent, the
specified file cannot be open for writing or have hard links.
copy
Creates a file clone from a clone parent created with the mmclone snap command or from a file in a
snapshot.
split
Splits a file clone from all clone parents.
redirect
Splits a file clone from the immediate clone parent only.
show
Displays the current status for one or more specified files. When a file is a clone, the report will show
the parent inode number. When a file was cloned from a file in a snapshot, mmclone show displays
the snapshot and fileset information.
The Depth field in the mmclone show output denotes the distance of the file from the root of the
clone tree of which it is a member. The root of a clone tree has depth 0. This field is blank if the file in
question is not a clone. This field is not updated when a clone’s ancestor is redirected or split from the
clone tree. However, even if a clone’s ancestor has been split or redirected, the depth of the clone
should always be greater than that of each of its ancestors.
The maximum depth for a clone tree is 1000.
Note: The mmclone command does not copy extended attributes.
If a snapshot has file clones, those file clones should be deleted or split from their clone parents prior to
deleting the snapshot. Use the mmclone split or mmclone redirect command to split file clones.
Use a regular delete (rm) command to delete a file clone. If a snapshot is deleted that contains a clone
parent, any attempts to read a block that refers to the missing snapshot will return an error. A policy file
can be created to help determine if a snapshot has file clones.
For more information about file clones and policy files, see the IBM Spectrum Scale: Administration Guide.
Parameters
SourceFile
Specifies the name of a file to clone.
CloneParentFile
When CloneParentFile is specified with a mmclone snap command, it indicates the name of the read-
only clone parent that will be created from SourceFile.
When CloneParentFile is specified with a mmclone copy command, it indicates the name of a read-
only clone parent. The CloneParentFile can be a clone parent created with the mmclone snap
command or a file in a snapshot.
TargetFile
Specifies the name of the writable file clone that will be created from CloneParentFile.
Filename
Specifies the name of one or more files to split, redirect, or show.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
To run the mmclone command, you must have read access to the source file that will be cloned, and write
access to the directory where the file clone will be created.
Examples
1. To create a clone parent called base.img from a file called test01.img, issue this command:
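Based on the synopsis mmclone snap SourceFile [CloneParentFile], a command of the following form
creates the clone parent (illustrative sketch):
mmclone snap test01.img base.img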
To use this clone parent to create a file clone called test02.img, issue this command:
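A command of the following form creates the clone (the argument order, clone parent followed by
target file, is assumed here; illustrative sketch):
mmclone copy base.img test02.img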
After the file clone is created, use the mmclone show command to show information about all img
files in the current directory:
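For example (illustrative sketch):
mmclone show *.img
The output is similar to the following: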
Parent  Depth   Parent inode   File name
------  -----   ------------   ---------------
 yes      0                    base.img
  no      1         148488     test01.img
  no      1         148488     test02.img
2. To create a file clone called file1.clone from a file called file1 in the snap1 snapshot, issue this
command:
See also
• “mmcrsnapshot command” on page 337
• “mmdelsnapshot command” on page 378
Location
/usr/lpp/mmfs/bin
mmcloudgateway command
Creates and manages the cloud storage tier.
Synopsis
mmcloudgateway account create --cloud-nodeclass CloudNodeClass --account-name AccountName
--cloud-type {S3 |SWIFT3 |OPENSTACK-SWIFT | CLEVERSAFE-NEW|
AZURE}
{--username UserName [--pwd-file PasswordFile] |
--src-keystore-path SourceKeystorePath
--src-keystore-alias-name SourceKeystoreAliasName
--src-keystore-type SourceKeystoreType
[--src-keystore-pwd-file
SourceKeystorePasswordFile]}
[--tenant-id TenantID]
Availability
Available with IBM Spectrum Scale Advanced Edition, IBM Spectrum Scale Data Management Edition,
IBM Spectrum Scale Developer Edition, or IBM Spectrum Scale Erasure Code Edition.
Description
Use the mmcloudgateway command to manage and administer the Transparent cloud tiering feature.
This CLI has an extensive set of options that are organized by categories such as account, service, and
files. You can display usage information for a specific category by specifying the category with the help
option. For example, mmcloudgateway <category> -h displays the usage for only that category.
Parameters
account
Manages cloud storage accounts with one of the following actions:
create
Creates a cloud storage account.
--cloud-nodeclass CloudNodeClass
Specifies the node class that was created by using the mmcrnodeclass command.
--account-name AccountName
Specifies a name that uniquely identifies the cloud object storage account on the node class.
Note: No special characters are allowed in the name except for “-” and “_”.
--cloud-type CloudType
Specifies the name of the object storage provider.
--username Name
Specifies the user name of the cloud object storage account. For Amazon S3 and IBM Cloud®
Object Storage, it represents the access key.
Note: Skip this parameter for locked vaults.
--pwd-file PasswordFile
Specifies a file that includes the password.
Note: Skip this parameter for locked vaults.
--src-keystore-path SourceKeystorePath
Specifies the keystore path for the X.509 certificate and the private key that is used to
authenticate the vault.
--src-keystore-alias-name SourceKeystoreAliasName
Specifies the alias name that needs to be imported to the Transparent cloud tiering keystore
from the specified keystore.
--src-keystore-type SourceKeystoreType
Specifies the type of source keystore. Allowed keystore types are JKS, JCEKS.
--src-keystore-pwd-file SourceKeystorePasswordFile
Specifies the password file of the source keystore.
--tenant-id Tenant ID
Specifies the tenant ID for the cloud storage provider account.
Note: Optional for cloud type “S3” but mandatory for cloud types “Swift3” and “Swift-
Keystone.”
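A minimal sketch of creating an account for an S3 provider, following the synopsis above (the node
class, account name, user name, and password file values are hypothetical):
mmcloudgateway account create --cloud-nodeclass TCTNodeClass1 --account-name cloudacct1 --cloud-type S3 --username AccessKeyID --pwd-file /tmp/cloudPW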
update
Updates the cloud storage account information.
--cloud-nodeclass CloudNodeClass
Specifies the node class that was created by using the mmcrnodeclass command.
--account-name AccountName
Specifies a name that uniquely identifies the cloud object storage account on the node.
Note: No special characters are allowed in the name except for “-” and “_”.
--username Name
Specifies the user name of the cloud object storage account. For Amazon S3 and IBM Cloud
Object Storage, it represents the access key.
Note: Skip this parameter for locked vaults.
--pwd-file PasswordFile
Specifies a file that includes the password.
Note: Skip this parameter for locked vaults.
--src-keystore-path SourceKeystorePath
Specifies the keystore path for the X.509 certificate and the private key that is used to
authenticate the vault.
--src-keystore-alias-name SourceKeystoreAliasName
Specifies the alias name that needs to be imported to the Transparent cloud tiering keystore
from the specified keystore.
--src-keystore-type SourceKeystoreType
Specifies the type of source keystore. Allowed keystore types are JKS, JCEKS.
--src-keystore-pwd-file SourceKeystorePasswordFile
Specifies the password file of the source keystore.
delete
Removes a cloud storage account.
--cloud-nodeclass CloudNodeClass
Specifies the node class that was created by using the mmcrnodeclass command.
--account-name AccountName
Specifies the unique name that was provided to the cloud object storage account on the
Transparent cloud tiering node.
list
Lists the registered cloud accounts. Displays more information about the configured cloud account
such as the cloud provider name, cloud provider tenant ID, cloud provider URL.
--cloud-nodeclass CloudNodeClass
Specifies the node class that was created by using the mmcrnodeclass command.
--name-list
When this option is specified, the output will include only the names of the cloud accounts
(without other details) that are configured in a node class.
--account-name AccountName
Specifies the account name that you want to list. The output will display the details of the
cloud account that is specified here.
cloudStorageAccessPoint
Manages cloud storage access points (CSAPs) with one of the following actions:
create
Creates one or more cloud storage access points.
--cloud-nodeclass CloudNodeClass
Specifies the node class that was created by using the mmcrnodeclass command.
--cloud-storage-access-point-name CloudStorageAccessPointName
Specifies the CSAP name that needs to be associated with the node class.
--account-name AccountName
Specifies the cloud account name that needs to be associated with the CSAP.
--url URL
Specifies the cloud endpoint with which Cloud services open up a network connection. For S3,
the URL is implicit (derived from the region parameter automatically).
--region Region
Specifies the geographic region where the cloud service is installed and running.
Note: This parameter is only applicable for Swift and S3. To know the supported regions for
Amazon S3, see Amazon S3 in IBM Spectrum Scale: Administration Guide.
--mpu-parts-size MPUPartsSize
Specifies the multipart upload part size in bytes. Any value between 5 MB and 4 GB is allowed. The
default is 128 MB.
--server-cert-path ServerCertPath
Specifies the certificate path for the self-signed certificates that are presented by the private
object storage servers. This is required only when the cloud URL uses HTTPS.
--slice-size SliceSize
Specifies the internal unit of transferring data within Cloud service modules. Higher slice size
indicates better performance. Default value is 512 KB.
--proxy-ip ProxyIP
If you require a proxy server to access your cloud over the network, specify the proxy server IP
address in dotted decimal format. You must specify the --proxy-port option when enabling
a proxy server.
Note: This supports only IPv4.
--proxy-port ProxyPort
Specifies the port number that the proxy server listens to.
update
Updates the CSAP details.
--cloud-nodeclass CloudNodeClass
Specifies the node class that was created by using the mmcrnodeclass command.
--cloud-storage-access-point-name CloudStorageAccessPointName
Specifies the CSAP name that needs to be associated with the node class.
--url URL
Specifies the cloud endpoint with which Cloud services open up a network connection. For S3,
the URL is implicit (derived from the region parameter automatically).
--region Region
Specifies the geographic region where the cloud service is installed. Applicable only for Swift.
--server-cert-path ServerCertPath
Specifies the certificate path for the self-signed certificates that are presented by the private
object storage servers. This is required only when the cloud URL uses HTTPS.
--slice-size SliceSize
Specifies the internal unit of transferring data within Cloud service modules. Higher slice size
indicates better performance. Default value is 512 KB.
--proxy-ip ProxyIP
If you require a proxy server to access your cloud over the network, specify the proxy server IP
address in dotted decimal format. You must specify the --proxy-port option when enabling
a proxy server.
Note: This supports only IPv4.
--proxy-port ProxyPort
Specifies the port number that the proxy server listens to.
delete
Delete the CSAPs.
--cloud-nodeclass CloudNodeClass
Specifies the node class that was created by using the mmcrnodeclass command.
--cloud-storage-access-point-name CloudStorageAccessPointName
Specifies the CSAP that needs to be deleted.
list
Lists the CSAP details including account name, URL, and MPU part size.
--cloud-nodeclass CloudNodeClass
Specifies the node class that is associated with the CSAP.
--name-list
When this option is specified, the output will include only the CSAP names.
--cloud-storage-access-point-name CloudStorageAccessPointName
Specifies the CSAP whose details need to be listed.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
cloudService
Manages a cloud service that can be used either for tiering or sharing. The cloud service caters to a
specific file system or a fileset and to a specific cloud account:
create
Creates a cloud service.
--cloud-nodeclass CloudNodeClass
Specifies the node class that is associated with the cloud service.
--cloud-service-name CloudServiceName
Specifies a name to identify the cloud service.
--cloud-service-type CloudServiceType{Sharing | Tiering}
Specifies whether the cloud service is used for tiering or sharing operation. If it is used for
tiering, then "Tiering" should be specified, and if it is used for sharing, then "Sharing" should
be specified.
--account-name AccountName
Specifies the cloud account that is associated with the cloud service.
update
Updates the cloud service details.
--cloud-nodeclass CloudNodeClass
Specifies the node class associated with the cloud service whose details need to be updated.
--cloud-service-name CloudServiceName
Specifies the cloud service whose details need to be updated.
--enable | --disable
Specifies whether the cloud service needs to be enabled or disabled.
delete
Deletes the cloud service that no longer needs to be used.
--cloud-nodeclass CloudNodeClass
Specifies the node class associated with the cloud service that you want to delete.
--cloud-service-name CloudServiceName
Specifies the name of the cloud service that needs to be removed.
list
Lists the cloud service information such as name, cloud account name, and the type of cloud
service (tiering/sharing).
--cloud-nodeclass CloudNodeClass
Specifies the node class associated with the cloud service that you want to list.
--name-list
When this option is specified, the output will include only the cloud service names.
--cloud-service-name CloudServiceName
Specifies the name of the cloud service that needs to be listed.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
keyManager
Manages a key manager for encrypting data between local file system and the cloud storage tier, with
the following options:
create
Uses an SKLM key manager with Cloud services.
--cloud-nodeclass CloudNodeClass
Specifies the node class that is associated with the Cloud services.
--key-manager-name KeyManagerName
Specifies the key manager name.
--key-manager-type
Specifies the type of the key manager. If it is a remote key manager, specify "RKM," and if it is a
local key manager, specify "LKM".
--alias Alias
Specifies the alias name of the local key manager.
--sklm-hostname SKLMHostname
Specifies the host name or IP address of the IBM Security Lifecycle Manager server.
--sklm-port SKLMPort
Specifies the port number on which the IBM Security Key Lifecycle Manager server listens for
requests. Default value is 9080.
--sklm-adminuser SKLMAdminUser
Specifies the administrator user name of the IBM Security Key Lifecycle Manager server REST
Global Admin. Default value is SKLMAdmin.
--sklm-groupname SKLMGroupname
Specifies the group user name of the IBM Security Key Lifecycle Manager server REST Global
Admin.
--sklm-pwd-file SKLMPasswordFile
Specifies the password file of the IBM Security Key Lifecycle Manager server REST Global
Admin.
update
Updates the key manager details.
--cloud-nodeclass CloudNodeClass
Specifies the node class that was created by using the mmcrnodeclass command.
--key-manager-name KeyManagerName
Specifies the key manager name.
--sklm-port SKLMPort
Specifies the port number on which the IBM Security Key Lifecycle Manager server listens for
requests. Default value is 9080.
--sklm-adminuser SKLMAdminUser
Specifies the administrator user name of the IBM Security Key Lifecycle Manager server REST
Global Admin. Default value is SKLMAdmin.
--sklm-pwd-file SKLMPasswordFile
Specifies the password file of the IBM Security Key Lifecycle Manager server REST Global
Admin.
--update-certificate
Specifies this parameter if you want to update the REST certificate.
rotate
Rotates the existing key and creates a new SKLM key.
--cloud-nodeclass CloudNodeClass
Specifies the node class that was created by using the mmcrnodeclass command.
--key-manager-name KeyManagerName
Specifies the key manager name.
list
Lists the key manager information such as name, location of the certificate, and admin user ID.
--cloud-nodeclass CloudNodeClass
Specifies the node class.
--name-list
When this option is specified, the output will include only the key manager names.
--key-manager-name KeyManagerName
Specifies the key manager name.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
clientAssist
Allows administrators to enable or disable a node with outbound cloud access as a client-assist node.
This node assists in recalls from a Cloud services client node. This is a special node performing recall
operations from client nodes directly, without redirecting requests to Cloud services server node.
containerPairSet
Manages the cloud containers (data and metadata) that are associated with the cloud storage
accounts, with these options:
create
Creates a container pair set.
--cloud-nodeclass CloudNodeClass
Specifies the node class that is associated with the containers that are going to be created.
--container-pair-set-name ContainerPairSetName
Specifies a name that uniquely identifies the containers.
--cloud-service-name CloudServiceName
Specifies the cloud service that is going to be associated with the containers you are going to
create.
--scope-to-filesystem | --scope-to-fileset
Specifies whether the containers are going to be associated with a file system or a fileset.
Default value is --scope-to-filesystem.
--path Path
Specifies the path to the file system or fileset.
--data-container DataContainer
Specifies a name for the data container. If you do not specify a value here, the --container-
pair-set-name is used by default.
--meta-container MetaContainer
Specifies a name for the metadata container. If you do not specify a value here, the --
container-pair-set-name is used by default.
--cloud-directory-path CloudDirectoryPath
Specifies the path where the database (cloud directory) is maintained.
Note: By default, the database is maintained inside the .mcstore folder in the file system that is
associated with the cloud service.
--etag-enable{ENABLE|DISABLE}
Specifies whether you want to enable an integrity check on the data that is migrated to or
recalled from the cloud storage. Disabled by default.
--enc-enable{ENABLE|DISABLE}
Specifies whether you want to enable encryption on the data that is transferred to the object
storage. Disabled by default.
--data-location DataLocation
Specifies the location ID of the data container. This parameter is applicable only to IBM
Cloud™ Object Storage.
--meta-location MetaLocation
Specifies the location ID of the metadata container. This parameter is applicable only to IBM
Cloud™ Object Storage.
--key-manager KeyManager
Specifies the key manager that needs to be chosen for encryption.
--active-key Activekey
Specifies the current active encryption key.
--thumbnail-size ThumbnailSize
Specifies the number of bytes that Transparent cloud tiering must store on the local file
system for displaying thumbnail of files that are migrated to a storage tier. The value that you
specify is applicable to each file in the file system that is managed by Transparent cloud
tiering. Valid range is 1 - 1048576 bytes (1 MB). If you specify a value that is lower than the
file system block size, then the file system block size is used. For example, if you specify a
value of 128 KB and the file system block size is 256 KB, then 256 KB data of each file is
stored locally and used for thumbnail.
After the thumbnail-size parameter is enabled for a file system, you can verify the setting by using the mmcloudgateway filesystem list command. Additionally, the thumbnails are displayed when you browse the files in Windows Explorer or a similar tool.
Note: Thumbnail support is disabled by default. If you do not specify a valid value, then thumbnails are not enabled for the file system. Decide judiciously whether to enable this option, because once it is enabled, it cannot be disabled.
--transparent-recalls {ENABLE|DISABLE}
Specify whether you want to enable or disable transparent recalls for the container that you create. Enabled by default.
--destroy-event-handling {ENABLE|DISABLE}
Specify whether you want to enable or disable destroy event handling. When enabled, this option installs the policy partition that handles destroy events when files are deleted. Enabled by default.
--policy-tmp-dir
Specify the folder where the policy file needs to be temporarily stored.
--auto-spillover {ENABLE|DISABLE}
Specify whether you want to enable or disable automatic creation of a new container, after the
default limit or the specified limit in terms of number of files for an existing container is
reached. Enabled by default.
--auto-spillover-threshold AutoSpilloverThreshold
Specify the number of files after which you want a new container to be automatically created.
Default value is 100M (100,000,000).
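For example, based on the options documented above, a container pair set scoped to a file system could be created with a command similar to the following (the node class, container pair set, cloud service, and path names are hypothetical):
mmcloudgateway containerPairSet create --cloud-nodeclass TCTNodeClass1 \
    --container-pair-set-name cpair1 --cloud-service-name cloudsvc1 \
    --scope-to-filesystem --path /gpfs/fs1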
test
Validates the cloud storage account and the CSAPs before you proceed with cloud operations.
--cloud-nodeclass CloudNodeClass
Specifies the node class that is associated with the containers that are created.
--container-pair-set-name ContainerPairSetName
Specifies a name that uniquely identifies the cloud object storage account on the node.
update
Updates the information that is used for creating a container pair set.
--cloud-nodeclass CloudNodeClass
Specifies the node class that is associated with the containers that are created.
--container-pair-set-name ContainerPairSetName
Specifies a name that uniquely identifies the cloud object storage account on the node.
--etag {ENABLE|DISABLE}
Specifies whether you want to enable an integrity check on the data that is migrated to or
recalled from the cloud storage. To enable, specify "ENABLE". To disable, specify "DISABLE".
--enc {ENABLE|DISABLE}
Specifies whether you want to enable encryption on the data that is transferred to the object
storage. To enable, specify "ENABLE". To disable, specify "DISABLE".
--active-key Activekey
Specifies the current active encryption key.
--transparent-recalls {ENABLE|DISABLE}
Specifies whether you want to enable or disable transparent recalls for the container pair set
associated with the specified node class. To enable, specify "ENABLE". To disable, specify
"DISABLE".
--destroy-event-handling {ENABLE|DISABLE}
Specifies whether you want to enable or disable the destroy event handling. To enable, specify
"ENABLE". To disable, specify "DISABLE".
--policy-tmp-dir
Specify the folder where the policy file needs to be temporarily stored.
--auto-spillover {ENABLE|DISABLE}
Specify whether you want to enable or disable automatic creation of a new container, after the
default limit or the specified limit in terms of number of files for an existing container is
reached. Enabled by default.
--auto-spillover-threshold AutoSpilloverThreshold
Specify the number of files after which you want a new container to be automatically created.
Default value is 100M (100,000,000).
delete
Deletes the container pair set associated with a specific node class.
--cloud-nodeclass CloudNodeClass
Specifies the node class that is associated with the containers that are created.
--container-pair-set-name ContainerPairSetName
Specifies a name that uniquely identifies the cloud object storage account on the node.
--policy-tmp-dir
Specify the folder where the policy file needs to be temporarily stored.
list
Lists the container pair set.
--cloud-nodeclass CloudNodeClass
Specifies the node class.
--cloud-service-name CloudServiceName
Specifies the name of the cloud service that needs to be listed.
--name-list
When this option is specified, the output will include only the account names.
--container-pair-set-name ContainerPairSetName
Specifies a name that uniquely identifies the cloud object storage account on the node.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
maintenance
Configure the Transparent cloud tiering for maintenance activities. For more information, see the
Setting up maintenance tasks topic in the IBM Spectrum Scale: Administration Guide.
create
Creates a maintenance window, overriding the default values:
--cloud-nodeclass CloudNodeClass
Specifies the node class.
--maintenance-name MaintenanceName
Specifies a name for the maintenance window. All maintenance operations are considered
within a maintenance window. For example, daily_maintenance.
--daily HH:MM-HH:MM
Indicates that the maintenance window is run daily. Specify the time interval in the hh:mm-
hh:mm format. For example, 03:00-03:30. If the end time equals the start time, the
maintenance logic will only execute once. If the end time is less than the start time, it is
assumed that the end time refers to that time on the following day. Therefore, 03:00-02:59 for
instance will extend to 2:59 on the following day.
--weekly W:HH:MM-W:HH:MM
Indicates that the maintenance window is run weekly. Specify the time interval in the
w:hh:mm-w:hh:mm format, where w represents the day of the week. Day of the week can be a
number from 0 to 7 (both 0 and 7 are included and they represent Sunday). For example,
01:03:00-01:03:30. Maintenance window 1:03:00-1:02:59 will extend until 2:59 the following
week, and similarly, 1:03:00-0:03:00 will extend for 6 days (Monday through Sunday).
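For example, a daily maintenance window could be created with a command similar to the following (the node class and maintenance window names are hypothetical):
mmcloudgateway maintenance create --cloud-nodeclass TCTNodeClass1 \
    --maintenance-name daily_maintenance --daily 03:00-03:30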
update
Modifies the maintenance window, overriding the previous values:
--cloud-nodeclass CloudNodeClass
Specifies the node class.
--maintenance-name MaintenanceName
Specifies the name of the maintenance window that you want to update. For example, daily_reconcile.
--daily HH:MM-HH:MM
Indicates that the maintenance window is run daily. Specify the time interval in the hh:mm-
hh:mm format. For example, 03:00-03:30.
--weekly W:HH:MM-W:HH:MM
Indicates that the maintenance window is run weekly. Specify the time interval in the
w:hh:mm-w:hh:mm format, where w represents the day of the week. For example,
01:03:00-01:03:30.
delete
Deletes the maintenance window that is no longer needed.
--cloud-nodeclass CloudNodeClass
Specifies the node class.
--maintenance-name MaintenanceName
Specifies the name of the maintenance window that you want to delete. For example, window_for_reconcile.
list
Lists the frequencies for the tasks and the maintenance windows for the node class.
--cloud-nodeclass CloudNodeClass
Specifies the node class associated with the maintenance window that you want to list.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
setState
Specify this option to change the state of a maintenance window that is created for a specified container. For example, you can have multiple maintenance tasks for a container and can disable or enable them according to your requirements. If a maintenance task is disabled, it is no longer run starting from the next maintenance window. If a maintenance window is disabled, maintenance operations are not executed for that window. Once a window is enabled, maintenance begins executing at the start of the window.
--cloud-nodeclass CloudNodeClass
Specifies the node class associated with the maintenance window whose state needs to be
changed.
--maintenance-name MaintenanceName
Specifies the name of the maintenance window.
--state {disable | enable}
Specify one of the options listed here (disable or enable).
setFrequency
Use this option to modify the default frequency of the maintenance operations such as reconcile,
backup, and delete. By default, reconcile is done monthly, backup is done weekly, and deletion is
done daily. Use this option to change the frequency of any of these operations. For example, suppose that you want to run the reconcile and delete operations according to the default schedule but want to change the backup frequency to monthly. In this case, you can set the frequency of the backup operation to "monthly".
--cloud-nodeclass CloudNodeClass
Specifies the node class associated with the maintenance task.
--task {reconcile | backup | delete}
Specifies the name of the maintenance task whose frequency needs to be changed.
--frequency {never|daily|weekly|monthly}
Specify one of the options listed here.
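For example, the backup frequency could be changed to monthly with a command similar to the following (the node class name is hypothetical):
mmcloudgateway maintenance setFrequency --cloud-nodeclass TCTNodeClass1 \
    --task backup --frequency monthly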
status
Lists the summary of the maintenance schedule for a specified node class.
--cloud-nodeclass CloudNodeClass
Specifies the node class for which to display the maintenance status information.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
config
Configures and tunes the Transparent cloud tiering node parameters with one of the following actions:
set
Sets the following system parameters, overriding the default values:
--cloud-nodeclass CloudNodeClass
Specifies the node class.
Attribute=value
Specifies the attribute that you want to change and the value that you want to set. For example, to change the default value of the recalls-threads attribute and set it to 20, specify "recalls-threads=20". Similarly, you can set the values of other attributes.
Note: If you want to set an attribute back to its default value, specify "DEFAULT" as the value of the attribute. For example, to set the recalls-threads attribute back to its default value, specify "recalls-threads=DEFAULT".
For a list of available attributes and their description, see the Tuning Cloud services parameters
topic in the IBM Spectrum Scale: Administration Guide.
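For example, assuming recalls-threads is a valid tunable attribute, it could be set to 20 for a node class with a command similar to the following (the node class name is hypothetical):
mmcloudgateway config set --cloud-nodeclass TCTNodeClass1 recalls-threads=20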
list
Lists the current configurations such as IP address, port number, thread-pool size, tracing level,
slice size.
--cloud-nodeclass CloudNodeClass
Specifies the node class that was created by using the mmcrnodeclass command.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
node
Enables administrators to manage registration of Cloud services within a cluster and also display the
node class the nodes are part of.
list
Lists the nodes that are enabled for Cloud services.
--nodeclass-sort
Use this option to display the node list sorted according to the node class.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
service
Manages the Transparent cloud tiering service with these options:
start
Starts the Transparent cloud tiering service for a node or set of nodes and makes the service available for file movement.
-N
Specifies the nodes.
alltct
Indicates that the service will be started on all Transparent cloud tiering nodes within the
cluster.
Node[,Node...]
Specifies the list of nodes where the service needs to be started.
NodeFile
Specifies a file, containing the list of nodes where the service needs to be started.
NodeClass
Specifies the node class.
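For example, consistent with example 2 later in this topic, the service could be started on all nodes in a node class with a command similar to the following (the node class name is hypothetical):
mmcloudgateway service start -N TCTNodeClass1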
stop
Stops the Transparent cloud tiering service for a node or set of nodes.
-N
Specifies the nodes.
alltct
Indicates that the service will be stopped on all Transparent cloud tiering nodes within the
cluster.
Node[,Node...]
Specifies the list of nodes where the service needs to be stopped.
NodeFile
Specifies a file, containing the list of nodes where the service needs to be stopped.
NodeClass
Specifies the node class.
status
Displays detailed status of the Transparent cloud tiering service including running state of the
daemon service, cloud account name, and its connectivity status. For more information on various
statuses that are associated with the Transparent cloud tiering service, see the "Transparent cloud
tiering service status description" topic in the IBM Spectrum Scale: Problem Determination Guide.
-N
Specifies the nodes.
alltct
Indicates that the status will be displayed for all Transparent cloud tiering nodes within the
cluster.
Node[,Node...]
Specifies the list of nodes where the status of the service needs to be checked.
NodeFile
Specifies a file, containing the list of nodes where the status of the service needs to be checked.
NodeClass
Specifies the node class.
--cloud-storage-access-point-name
Specifies the CSAP. The report will include the status of all Cloud services associated with the
specified CSAP.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
version
Displays the Transparent cloud tiering version number associated with each node in a node class.
This also includes the type of the node (server or client).
Note: This command can run on all nodes in the cluster and displays whether or not a node is a cloud node. If it is, more column-based details are shown. This makes the command different from the others, which normally act only on cloud nodes for report generation.
-N
Use this option before specifying any node or node class.
all
Indicates that the service version will be displayed for all nodes.
alltct
Indicates that the service version will be displayed for all Transparent cloud tiering nodes
within the cluster.
Node[,Node...]
Specifies the list of nodes for which the versions need to be checked.
NodeFile
Specifies a file, containing the list of nodes for which the versions need to be checked.
NodeClass
Specifies the node class whose version is displayed at a cluster level.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
backupConfig
Backs up the Cloud services configuration from the Clustered Configuration Repository (CCR). For
more information, see the Backing up the Cloud services configuration topic in the IBM Spectrum
Scale: Administration Guide.
--backup-file BackupFile
Specifies a name to be used for the backup file. The system generates a tar file by pulling all
Cloud services-specific configuration files from the Clustered Configuration Repository (CCR).
The backed up file will be stored on your local machine at a specified location. The
backupConfig command also accepts a location on a GPFS file system that would be available
on all GPFS nodes.
restoreConfig
Restores the Cloud services configuration data in the event of any system outage or crash. For
more information, see the Restoring the Cloud services configuration topic in the IBM Spectrum
Scale: Administration Guide.
--backup-file BackupFile
Specifies the file name of the backed-up file including the path. The system restores the Cloud
services-specific configuration setting to the CCR by using this file.
files
Manages files, with the following options:
migrate
Migrates the specified files to the cloud storage tier.
-v
Specifies the verbose message.
--co-resident-state
Indicates that the files are migrated in the co-resident state, which means that files will be
available both locally and on the cloud after migration. You can open such files from the local
file system without recalling them from the cloud storage tier.
--File [File...]
Specifies multiple files that need to be migrated to the cloud storage tier. This parameter must
be a complete file name. It cannot be a fragment of a file name and it cannot be a path.
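For example, two files could be migrated in the co-resident state with a command of roughly the following form (a sketch; the file paths are hypothetical, and file names are shown positionally as in the command synopsis):
mmcloudgateway files migrate --co-resident-state /gpfs/fs1/dir1/file1 /gpfs/fs1/dir1/file2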
recall
Recalls the specified files from the cloud storage tier.
-v
Specifies the verbose message.
--local-cluster
Indicates the cluster from which the recall is done.
--remote-cluster
Indicates the remote cluster.
--File [File...]
Specifies multiple files that need to be recalled from the storage tier. This parameter must be a
complete file name. It cannot be a fragment of a file name and it cannot be a path.
restore
Restores a file or list of files from the cloud storage tier when the local files are lost. The files to be
restored along with their options can be either specified at the command line, or in a separate file
provided by the -F option. For more information, see the Restoring files topic in the IBM Spectrum
Scale: Administration Guide.
-v
Specifies the verbose message.
--overwrite
Overwrite the files if needed. If this option is not set, files will not be overwritten, and the files
that are retrieved from the cloud will remain in temporary locations.
--restore-stubs-only
Restores only the file stubs.
-F
Loads file arguments from the given file name.
--dry-run
Queries the local database and prints what would have been sent to the server. Does not
contact the server. This is intended for debugging.
--restore-location RestoreLocation
Specifies the target location of the files to be restored.
--id Id
Specifies the version ID of a file if the file has multiple versions.
File
Specifies the files to be restored.
delete
Deletes the specified files or file sets.
--delete-local-file
Deletes the local files and the corresponding cloud objects.
--recall-cloud-file
Recalls the files from the cloud before they are deleted on the cloud. The status of local files
becomes resident after the operation.
--require-local-file
Removes the extended attributes from a co-resident file and makes it resident, without
deleting the corresponding cloud objects. The option requires the file data to be present on
the file system and will not work on a non-resident file.
--keep-last-cloud-file
This option deletes all the versions of the file except the last one from the cloud. For example,
if a file has three versions on the cloud, then versions 1 and 2 are deleted and version 3 is
retained.
--File [File...]
Specifies multiple files. This parameter must be a complete file name. It cannot be a fragment
of a file name and it cannot be a path.
destroy
Manually cleans up the cloud objects of the deleted files before the retention period expires. This
cleanup will occur for all objects from the root of the file system provided by the --filesystem-
path option. For more information, see the Deleting cloud objects topic in the IBM Spectrum Scale:
Administration Guide.
--cloud-retention-period-days
Specifies the number of days for which the deleted files from the file system need to be
retained on the cloud. For example, you delete 100 files from the file system and need to keep
them on the cloud for 20 days, specify the value as 20.
--preview
Displays how many objects will be cleaned up and how much space will be reclaimed.
--timeout
Specifies the duration (in minutes) for which the command should run. If this value is not
specified, the command will run until all the candidate files are deleted, and it can be
resource-intensive.
--container-pair-set-name
Specifies the cloud container where the objects are stored.
--filesystem-path
The path to the file system where the files are migrated from. All objects under the root of the
file system of the specified file system path will be cleaned.
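For example, to preview how many cloud objects would be cleaned up for files deleted more than 20 days ago, a command similar to the following could be used (the container pair set name and file system path are hypothetical):
mmcloudgateway files destroy --cloud-retention-period-days 20 --preview \
    --container-pair-set-name cpair1 --filesystem-path /gpfs/fs1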
reconcile
Reconciles files between your file system and the cloud storage tier. For more information, see the
Reconciling files between IBM Spectrum Scale file system and the cloud storage tier topic in the IBM
Spectrum Scale: Administration Guide.
--container-pair-set-name
Specifies the cloud container where the objects are stored.
Device
Specifies the device name associated with the file system.
cloudList
Lists the files on the cloud.
--path Path
Lists files and directories under the specified path.
--recursive
List all files in all directories under the current directory.
--depth Depth
List directories up to the specified depth under the specified path. Default is to list up to the
full depth. Specify 0 to list only the current directory.
--file [File]
Specifies the names of the files that need to be listed. This parameter must be a complete file
name. It cannot be a fragment of a file name and it cannot be a path.
--file-versions File
Displays information about all versions of the files specified by the full path.
--files-usage --path Path
Displays cloud data and metadata space usage under the specified path.
--reconcile-status --path Path
Displays the progress of the reconcile operation.
--start YYYY-MM-DD[-HH:mm]
Specifies the starting time.
--end YYYY-MM-DD[-HH:mm]
Specifies the ending time.
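For example, cloud files and directories under a hypothetical path could be listed up to two directory levels deep with a command similar to the following:
mmcloudgateway files cloudList --path /gpfs/fs1/dir1 --depth 2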
backupDB
Backs up the Transparent cloud tiering database to the cloud storage tier. For more information,
see the Backing up the Cloud services database to the cloud topic in the IBM Spectrum Scale:
Administration Guide.
--container-pair-set-name
Specifies the container associated with the database that needs to be backed up.
checkDB
Verifies the integrity of the Transparent cloud tiering database after a power outage or a system
crash. For more information, see the Checking the Cloud services database integrity topic in the
IBM Spectrum Scale: Administration Guide.
--container-pair-set-name
Specifies the container associated with the database that needs to be verified.
rebuildDB
Manually rebuilds the database.
--container-pair-set-name
Specifies the container associated with the database that needs to be rebuilt.
Device
Specifies the device name associated with the file system whose database is corrupted and
which is in need of manual recovery.
defragDB
Defragments the database and releases the capacity occupied by empty space.
--container-pair-set-name
Specifies the container associated with the database that needs to be defragmented.
list
Lists the files and the associated states.
--File [File]
Specifies the names of the files that need to be listed. This parameter must be a complete file
name. It cannot be a fragment of a file name and it cannot be a path.
import
Imports data from a storage server.
--cloud-service-name
Specifies the cloud service.
--container Container
Specifies the name of the cloud container to import from. If no container option is specified,
the default configured container name is used.
--import-only-stub
Creates only a stub file; the data in the file is not imported.
--import-metadata
Attempts to restore metadata of the file from the cloud object. The data is in IBM Spectrum
Scale format. The cloud object must have been exported using the --export-metadata
option. If metadata is not attached to the cloud object, this option has no effect.
--directory
Imports files into the given directory using only the file name from the cloud. Mutually
exclusive with the --target-name and --directory-root options.
--directory-root
Imports files starting at the given directory, keeping the cloud naming hierarchy intact.
Mutually exclusive with the --directory and --target-name options.
--target-name
Imports a single file from the cloud to the specified target name. Mutually exclusive with the
--directory and --directory-root options.
--File [File]
Specifies the names of the files that need to be imported. This parameter must be a complete
file name. It cannot be a fragment of a file name and it cannot be a path.
export
Exports files to the cloud.
--tag Tag
Specifies an optional identifier to associate with the files. This ID will be stored in the manifest
file if one is specified.
--target-name TargetName
Export a single file to the cloud to the specified target name.
--container Container
Specifies the name of the cloud container to export to. If no container option is specified, the
default configured container name will be used.
--manifest-file ManifestFile
Specifies a manifest file that will contain an entry for each file exported to the cloud. Entries are in CSV format: tag, container, timestamp of the blob on the cloud, and file name.
--export-metadata
Attempts to attach a file's metadata to the cloud object. This is IBM Spectrum Scale specific
data and format, and it contains the user-defined attributes, ACLs, etc. A file exported with this
option can be fully restored by the corresponding import command. This metadata is stored in
the blob metadata, and as such there is limited space available, and the metadata might not
be written if it is too large.
--fail-if-metadata-too-big
If the metadata of the file is very large, it causes the entire export to fail. Valid only with the --
export-metadata option.
--strip-filesystem-root
Removes the root of the IBM Spectrum Scale file system from the name as stored on the
cloud. This could be used to export /filesystem1/dir/file and then import that file into a
differently named file system root directory.
--File [File]
Specifies the names of the files that need to be exported. This parameter must be a complete
file name. It cannot be a fragment of a file name and it cannot be a path.
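For example, consistent with example 22 later in this topic, a file could be exported with its metadata and recorded in a manifest file with a command similar to the following (the manifest path is hypothetical, and the file name is shown positionally as in the command synopsis):
mmcloudgateway files export --tag MRI_Images --container MyContainer \
    --manifest-file /tmp/manifest.csv --export-metadata /dir1/dir2/file1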
checkCloudObject
Displays true if the specified object or container is present on cloud, false otherwise. If you specify the
cloud service, the container associated with the cloud service is used to search the object. Use the -Y
option for parsable output.
deleteCloudObject
Deletes the specified object or container on cloud. If you specify the cloud service, the container
associated with the cloud service is used to search the object for deletion.
Security
You must have root authority to run the mmcloudgateway command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To view the registered nodes in a node class, issue this command:
2. To start the Transparent cloud tiering service on the node class TCTNodeClass1, issue this command:
3. To verify the status of the Transparent cloud tiering service, issue this command:
4. To create a cloud storage account with IBM Cloud Object Storage version 3.7.2 and above as cloud
type, issue this command:
5. To create a cloud account for the S3 cloud type, issue a command similar to this:
6. To create a cloud account for deploying a WORM solution by using locked vaults, issue a command
like the following:
Note: Please ensure to keep a backup of the Source Key Store used to import
the private key and certificates.
Transparent Cloud Tiering will remove the private key and certificate from
the trust store if the account delete command is run.
mmcloudgateway: Command completed successfully on c350f3u30.
mmcloudgateway: You can now delete the password file '/root/pwd/file.txt'
mmcloudgateway: Command completed.
7. To update a cloud account for a locked vault, issue the following command:
mmcloudgateway: Sending the command to the first successful node starting with
vm641.pk.slabs.ibm.com
mmcloudgateway: This may take a while...
Note: Please ensure to keep a backup of the Source Key Store used to import the
private key and certificates.
Transparent Cloud Tiering will remove the private key and certificate from the
trust store if the account delete command is run.
mmcloudgateway: Command completed successfully on jupiter.pk.slabs.ibm.com.
mmcloudgateway: You can now delete the password file '/root/pwd'
mmcloudgateway: Command completed.
8. To list the cloud accounts for all node classes, issue this command:
9. To list all the cloud accounts configured for the node class, cloud1, issue the following command:
userName : admin
tenantId : admin
10. To list only the names of the cloud accounts configured for all node classes present in the cluster,
issue the following command:
12. To migrate a file (file1) to the configured cloud storage tier, issue this command:
13. To migrate multiple files (file1 and file2) to the configured cloud storage tier, issue this command:
14. To verify that the file is migrated to the configured cloud storage tier, issue this command:
Note: The State is displayed as Non-resident. This means that the file is successfully migrated to the cloud storage tier.
15. To recall a file from the configured cloud storage tier, issue this command:
Note: If you run the mmcloudgateway filesystem list file1 command, the value of the State
attribute is displayed as Co-resident. This means that the file is successfully recalled.
16. To recall multiple files (file1 and file2) from the configured cloud storage tier, issue this command:
mmcloudgateway: Sending the Transparent Cloud Tiering request to the first successful
server.
mmcloudgateway: This may take a while...
mmcloudgateway: Command completed successfully on c350f2u18.
mmcloudgateway: Command completed.
18. To back up the Cloud services database associated with the container, cpair1, issue this command:
19. To verify the integrity of the Cloud services database associated with the container, cpair1, after a
system crash or an outage, issue this command:
20. To back up the Transparent cloud tiering configuration data to a file called tctbackup, issue this
command:
21. To restore the configuration to the CCR file by using the backed-up file, tctbackup.31187.tar, issue
the following command:
You are about to restore the TCT Configuration settings to the CCR.
Any new settings since the backup was made will be lost.
The TCT servers should be stopped prior to this operation.
22. To export a local file named /dir1/dir2/file1 to the cloud and store it in a container named
MyContainer, issue this command:
Note: A manifest file will be created, and the object exported to the cloud will have an entry in that
manifest file, tagged with MRI_Images.
23. To import files from the cloud, issue the following command(s):
Note: This command creates a local directory structure as necessary when importing the file from the
cloud. If the --directory option is specified, only the file name of the cloud object is used.
24. To check the Transparent cloud tiering service version of the node class, TCTNodeClass1, issue this
command:
25. To reconcile files between the file system, fs1, and the container pair set, contain1, issue this
command:
Processing /fs1
Wed Jun 28 10:34:28 EDT 2017 Reconcile started.
Wed Jun 28 10:34:28 EDT 2017 Creating snapshot of the File System...
Wed Jun 28 10:34:28 EDT 2017 Running policy on Snapshot
to generate list of files to process.
Wed Jun 28 10:36:23 EDT 2017 Removing snapshot.
Wed Jun 28 10:36:27 EDT 2017 Reconcile is using a deletion retention
period of 30 days.
Wed Jun 28 10:36:27 EDT 2017 Reconcile will be processing 5043 inode entries.
Wed Jun 28 10:36:27 EDT 2017 Processed 463 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 921 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 1372 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 1824 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 2264 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 2726 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 3161 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 3603 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 4032 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 4471 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 4912 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 4953 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 5004 entries out of 5043.
Wed Jun 28 10:36:27 EDT 2017 Processed 5043 entries out of 5043.
Wed Jun 28 10:36:28 EDT 2017 Reconcile found 1 files that had
been migrated and were not in the directory.
Wed Jun 28 10:36:28 EDT 2017 Reconcile detected 0 deleted files
that were deleted more than 30 days ago.
Wed Jun 28 10:36:28 EDT 2017 Reconcile detected 5043 migrated files
that have been deleted from the local
file system, but have not been deleted from object storage because
they are waiting for their retention policy
time to expire.
Wed Jun 28 10:36:28 EDT 2017 Please use the 'mmcloudgateway files cloudList'
command to view the progress of the deletion of the cloud objects.
Wed Jun 28 10:36:29 EDT 2017 Reconcile successfully finished.
mmcloudgateway: Command completed.
See also
• “mmchconfig command” on page 169
• “mmlscluster command” on page 484
• “mmchnode command” on page 241
• “mmlsconfig command” on page 487
• “mmnfs command” on page 552
• “mmobj command” on page 565
• “mmsmb command” on page 698
• “mmuserauth command” on page 727
Location
/usr/lpp/mmfs/bin
mmcrcluster command
Creates a GPFS cluster from a set of nodes.
Synopsis
mmcrcluster -N {NodeDesc[,NodeDesc...] | NodeFile}
[ [-r RemoteShellCommand] [-R RemoteFileCopyCommand] |
--use-sudo-wrapper [--sudo-user UserName] ]
[-C ClusterName] [-U DomainName] [-A]
[-c ConfigFile | --profile ProfileName]
Note: The primary and secondary configuration server functionality is deprecated and will be removed in
a future release. The default configuration service is CCR. The following parameters have been removed
from the mmcrcluster command: --ccr-enable, --ccr-disable, -p PrimaryServer, and -s
SecondaryServer. For more information see the topic “mmchcluster command” on page 164.
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmcrcluster command to create a GPFS cluster.
Upon successful completion of the mmcrcluster command, the /var/mmfs/gen/mmsdrfs and the /var/
mmfs/gen/mmfsNodeData files are created on each of the nodes in the cluster. Do not delete these files
under any circumstances. For more information, see Quorum in IBM Spectrum Scale: Concepts, Planning,
and Installation Guide.
Follow these rules when creating your GPFS cluster:
• While a node may mount file systems from multiple clusters, the node itself may only be added to a
single cluster using the mmcrcluster or mmaddnode command.
• The nodes must be available for the command to be successful. If any of the nodes listed are not
available when the command is issued, a message listing those nodes is displayed. You must correct the
problem on each node and issue the mmaddnode command to add those nodes.
• Designate at least one but not more than seven nodes as quorum nodes. How many quorum nodes
altogether you will have depends on whether you intend to use the node quorum with tiebreaker
algorithm or the regular node based quorum algorithm. For more information, see Quorum in IBM
Spectrum Scale: Concepts, Planning, and Installation Guide.
• After the nodes are added to the cluster, use the mmchlicense command to designate appropriate
GPFS licenses to the new nodes.
• Clusters that will include both UNIX and Windows nodes must use ssh and scp for the remote shell and
copy commands. For more information, see Installing and configuring OpenSSH on Windows nodes in
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
• Carefully consider the remote execution and remote copy tooling you want to use within your cluster.
Once a cluster has been created, it is complicated to change, especially if additional nodes are added.
The default tools as specified under -r RemoteShellCommand and -R RemoteFileCopyCommand by
default use /usr/bin/ssh and /usr/bin/scp respectively. For more information, see GPFS cluster
creation considerations in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
Parameters
-N NodeDesc[,NodeDesc...] | NodeFile
Specifies node descriptors, which provide information about nodes to be added to the cluster.
NodeFile
Specifies a file containing a list of node descriptors, one per line, to be added to the cluster.
NodeDesc[,NodeDesc...]
Specifies the list of nodes and node designations to be added to the GPFS cluster. Node
descriptors are defined as:
NodeName:NodeDesignations:AdminNodeName
where:
NodeName
Specifies the host name or IP address of the node for GPFS daemon-to-daemon
communication. For hosts with multiple adapters, see the IBM Spectrum Scale: Administration
Guide and search on Using remote access with public and private IP addresses.
The host name or IP address must refer to the communication adapter over which the GPFS
daemons communicate. Aliased interfaces are not allowed. Use the original address or a name
that is resolved by the host command to that original address. You can specify a node using
any of these forms:
• Short host name (for example, h135n01)
• Long, fully-qualified, host name (for example, h135n01.ibm.com)
• IP address (for example, 7.111.12.102). IPv6 addresses must be enclosed in brackets (for
example, [2001:192::192:168:115:124]).
Regardless of which form you use, GPFS will resolve the input to a host name and an IP
address and will store these in its configuration files. It is expected that those values will not
change while the node belongs to the cluster.
NodeDesignations
An optional, "-" separated list of node roles:
• manager | client – Indicates whether a node is part of the node pool from which file system
managers and token managers can be selected. The default is client.
• quorum | nonquorum – Indicates whether a node is counted as a quorum node. The default
is nonquorum.
AdminNodeName
Specifies an optional field that consists of a node name to be used by the administration
commands to communicate between nodes. If AdminNodeName is not specified, the
NodeName value is used.
Note: AdminNodeName must be a resolvable network host name. For more information, see
the topic GPFS node adapter interface names in the IBM Spectrum Scale: Concepts, Planning,
and Installation Guide.
You must provide a NodeDesc for each node to be added to the GPFS cluster.
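For example, a node descriptor similar to the following (the host name is hypothetical) designates the node as both a quorum node and a manager node:
node01.example.com:quorum-manager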
-r RemoteShellCommand
Specifies the fully-qualified path name for the remote shell program to be used by GPFS. The default
value is /usr/bin/ssh.
The remote shell command must adhere to the same syntax format as the ssh command, but may
implement an alternate authentication mechanism.
-R RemoteFileCopyCommand
Specifies the fully-qualified path name for the remote file copy program to be used by GPFS. The
default value is /usr/bin/scp.
The remote copy command must adhere to the same syntax format as the scp command, but may
implement an alternate authentication mechanism.
-C ClusterName
Specifies a name for the cluster. If the user-provided name contains dots then the command assumes
that the user-provided name is a fully qualified domain name. Otherwise, to make the cluster name
unique, the command appends the domain of a quorum node to the user-provided name. The
maximum length of the cluster name including any appended domain name is 115 characters.
If the -C flag is omitted, the cluster name defaults to the name of a quorum node within the cluster
definition.
-U DomainName
Specifies the UID domain name for the cluster.
-A
Specifies that GPFS daemons are to be automatically started when nodes come up. The default is not
to start daemons automatically.
-c ConfigFile
Specifies a file containing GPFS configuration parameters with values different than the documented
defaults. A sample file can be found in /usr/lpp/mmfs/samples/mmfs.cfg.sample. See the
mmchconfig command for a detailed description of the different configuration parameters.
The -c ConfigFile parameter should be used only by experienced administrators. Use this file to set
up only those parameters that appear in the mmfs.cfg.sample file. Changes to any other values may
be ignored by GPFS. When in doubt, use the mmchconfig command instead.
--profile ProfileName
Specifies a predefined profile of attributes to be applied. System-defined profiles are located
in /usr/lpp/mmfs/profiles/. All the configuration attributes listed under a cluster stanza will be
changed as a result of this command.
The following system-defined profile names are accepted:
• gpfsProtocolDefaults
• gpfsProtocolRandomIO
A user's profiles must be installed in /var/mmfs/etc/. The profile file specifies GPFS configuration
parameters with values different than the documented defaults. A user-defined profile must not begin
with the string 'gpfs' and must have the .profile suffix.
User-defined profiles consist of the following stanzas:
%cluster:
[CommaSeparatedNodesOrNodeClasses:]ClusterConfigurationAttribute=Value
...
%filesystem:
FilesystemConfigurationAttribute=Value
See the mmchconfig command for a detailed description of the different configuration parameters. A
sample file can be found in /usr/lpp/mmfs/samples/sample.profile.
Note: User-defined profiles should be used only by experienced administrators. When in doubt, use the
mmchconfig command instead.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmcrcluster command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To create a GPFS cluster made of all of the nodes that are listed in the file /u/admin/nodelist, issue the following command:
mmcrcluster -N /u/admin/nodelist
where /u/admin/nodelist has the following contents:
k164n04.kgn.ibm.com:quorum
k164n05.kgn.ibm.com:quorum
k164n06.kgn.ibm.com
To confirm that the cluster was created, issue the mmlscluster command:
mmlscluster
See also
• “mmaddnode command” on page 35
• “mmchconfig command” on page 169
• “mmdelnode command” on page 371
• “mmlscluster command” on page 484
• “mmlsconfig command” on page 487
Location
/usr/lpp/mmfs/bin
mmcrfileset command
Creates a GPFS fileset.
Synopsis
mmcrfileset Device FilesetName [-p afmAttribute=Value...] [-t Comment]
[--inode-space {new [--inode-limit MaxNumInodes[:NumInodesToPreallocate]] | ExistingFileset}]
[--allow-permission-change PermissionChangeMode]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmcrfileset command constructs a new fileset with the specified name. The new fileset is empty
except for a root directory, and does not appear in the directory namespace until the mmlinkfileset
command is issued. The mmcrfileset command is separate from the mmlinkfileset command to
allow the administrator to establish policies and quotas on the fileset before it is linked into the
namespace.
For information on filesets, see the Filesets section in the IBM Spectrum Scale: Administration Guide.
Parameters
Device
The device name of the file system to contain the new fileset.
File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
FilesetName
Specifies the name of the fileset to be created.
Note the following restrictions on fileset names:
• The name must be unique within the file system.
• The length of the name must be in the range 1-255.
• The name root is reserved for the fileset of the root directory of the file system.
• The name cannot be the reserved word new. However, the character string new can appear within a
fileset name.
• The name cannot begin with a hyphen (-).
• The name cannot contain the following characters: / ? $ & * ( ) ` # | [ ] \
• The name cannot contain a white-space character such as blank space or tab.
-p afmAttribute=Value
Specifies an AFM configuration parameter and its value. More than one -p option can be specified.
The following AFM configuration parameter is required for the mmcrfileset command:
afmTarget
Identifies the home that is associated with the cache; specified in either of the following forms:
nfs://{Host|Map}/Target_Path
or
gpfs://[Map]/Target_Path
where:
nfs:// or gpfs://
Specifies the transport protocol.
Host|Map
Host
Specifies the server domain name system (DNS) name or IP address.
Map
Specifies the export map name. Information about Mapping is contained in the AFM
Overview > Parallel data transfers section.
See the following examples:
1. An example of using the nfs:// protocol with a map name:
Note: If you do not specify the map name, a '/' is still needed to indicate the path.
4. An example of using the gpfs:// protocol with a map name:
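Hypothetical targets for these two forms (mapping1 and the paths below are assumed names) might look similar to the following:
nfs://mapping1/gpfs/homefs/target1
gpfs://mapping1/gpfs/homefs/target1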
Target_Path
Specifies the export path.
The following optional AFM configuration parameters are also valid:
afmAsyncDelay
Specifies (in seconds) the amount of time by which write operations are delayed (because write
operations are asynchronous with respect to remote clusters). For write-intensive applications
that keep writing to the same set of files, this delay is helpful because it replaces multiple writes
to the home cluster with a single write containing the latest data. However, setting a very high
value weakens the consistency of data on the remote cluster.
This configuration parameter is applicable only for writer caches (SW, IW, and primary), where
data from cache is pushed to home.
Valid values are between 1 and 2147483647. The default is 15.
afmDirLookupRefreshInterval
Controls the frequency of data revalidations that are triggered by such lookup operations as ls or
stat (specified in seconds). When a lookup operation is done on a directory, if the specified
amount of time has passed, AFM sends a message to the home cluster to find out whether the
metadata of that directory has been modified since the last time it was checked. If the time
interval has not passed, AFM does not check the home cluster for updates to the metadata.
Valid values are 0 through 2147483647. The default is 60. In situations where home cluster data
changes frequently, a value of 0 is recommended.
afmDirOpenRefreshInterval
Controls the frequency of data revalidations that are triggered by such I/O operations as read or
write (specified in seconds). After a directory has been cached, open requests resulting from I/O
operations on that object are directed to the cached directory until the specified amount of time
has passed. Once the specified amount of time has passed, the open request gets directed to a
gateway node rather than to the cached directory.
Valid values are between 0 and 2147483647. The default is 60. Setting a lower value guarantees
a higher level of consistency.
afmEnableAutoEviction
Enables eviction on a given fileset. A yes value specifies that eviction is allowed on the fileset. A
no value specifies that eviction is not allowed on the fileset.
See also the topic about cache eviction in the IBM Spectrum Scale: Administration Guide.
afmExpirationTimeout
Is used with afmDisconnectTimeout (which can be set only through mmchconfig) to control
how long a network outage between the cache and home clusters can continue before the data in
the cache is considered out of sync with home. After afmDisconnectTimeout expires, cached
data remains available until afmExpirationTimeout expires, at which point the cached data is
considered expired and cannot be read until a reconnect occurs.
Valid values are 0 through 2147483647. The default is disable.
afmFileLookupRefreshInterval
Controls the frequency of data revalidations that are triggered by such lookup operations as ls or
stat (specified in seconds). When a lookup operation is done on a file, if the specified amount of
time has passed, AFM sends a message to the home cluster to find out whether the metadata of
the file has been modified since the last time it was checked. If the time interval has not passed,
AFM does not check the home cluster for updates to the metadata.
Valid values are 0 through 2147483647. The default is 30. In situations where home cluster data
changes frequently, a value of 0 is recommended.
afmGateway
Specifies the user-defined gateway node for an AFM or AFM DR fileset, that gets preference over
internal hashing algorithm. If the specified gateway node is not available, then AFM internally
assigns a gateway node from the available list to the fileset. afmHashVersion value must be
already set as '5'.
Note: Ensure that the filesystem is upgraded to IBM Spectrum Scale 5.0.2 or later.
The following command is an example of setting a user-defined gateway node to an AFM or AFM
DR fileset -
#mmcrfileset <FileSystem> <fileset> -p
afmMode=<afmMode>,afmGateway=<GatewayNode> --inode-space new -p
afmTarget=<Target>
afmMode
Specifies the mode in which the cache operates. Valid values are the following:
single-writer | sw
Specifies single-writer mode.
read-only | ro
Specifies read-only mode. (For mmcrfileset, this is the default value.)
local-updates | lu
Specifies local-updates mode.
independent-writer | iw
Specifies independent-writer mode.
Primary | drp
Specifies the primary mode for AFM asynchronous data replication.
Secondary | drs
Specifies the secondary mode for AFM asynchronous data replication.
afmPrefetchThreshold
Controls partial file caching and prefetching. Valid values are 0 - 100.
100
Disables full file prefetching. This value only fetches and caches data that is read by the
application. This is useful for large random-access files, such as databases, that are either too
big to fit in the cache or are never expected to be read in their entirety. When all data blocks
are accessed in the cache, the file is marked as cached.
0 is the default value.
For local-updates mode, the whole file is prefetched when the first update is made.
afmPrimaryId
Specifies the unique primary ID of the primary fileset for asynchronous data replication. This is
used for connecting a secondary to a primary.
afmRPO
Specifies the recovery point objective (RPO) interval for an AFM DR fileset. This attribute is
disabled by default. You can specify a value with the suffix M for minutes, H for hours, or W for
weeks. For example, for 12 hours specify 12H. If you do not add a suffix, the value is assumed to
be in minutes. The range of valid values is 720 minutes - 2147483647 minutes.
afmShowHomeSnapshot
Controls the visibility of the home snapshot directory in cache. For this to be visible in cache, this
variable has to be set to yes, and the snapshot directory name in the cache and home cannot be
the same.
yes
Specifies that the home snapshot link directory is visible.
no
Specifies that the home snapshot link directory is not visible.
See Peer snapshot -psnap in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
-t Comment
Specifies an optional comment that appears in the output of the mmlsfileset command. This
comment must be less than 256 characters in length.
--inode-space {new | ExistingFileset}
Specifies the type of fileset to create, which controls how inodes are allocated:
new
Creates an independent fileset and its own dedicated inode space.
ExistingFileset
Creates a dependent fileset that will share inode space with the specified ExistingFileset. The
ExistingFileset can be root or any other independent fileset.
If --inode-space is not specified, a dependent fileset will be created in the root inode space.
--inode-limit MaxNumInodes[:NumInodesToPreallocate]
Specifies the inode limit for the new inode space. The NumInodesToPreallocate specifies an optional
number of inodes to preallocate when the fileset is created. This option is valid only when creating an
independent fileset with the --inode-space new parameter.
Note: Preallocated inodes cannot be deleted or moved to another independent fileset. It is
recommended to avoid preallocating too many inodes because there can be both performance and
memory allocation costs associated with such preallocations. In most cases, there is no need to
preallocate inodes because GPFS dynamically allocates inodes as needed.
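For example, an independent fileset with a limit of one million inodes, of which 100,000 are preallocated, could be created with a command similar to the following (the file system and fileset names are hypothetical):
mmcrfileset fs1 indfset1 --inode-space new --inode-limit 1000000:100000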
--allow-permission-change PermissionChangeMode
Specifies the new permission change mode. This mode controls how chmod and ACL operations are
handled on objects in the fileset. Valid modes are as follows:
chmodOnly
Specifies that only the UNIX change mode operation (chmod) is allowed to change access
permissions (ACL commands and API will not be accepted).
setAclOnly
Specifies that permissions can be changed using ACL commands and API only (chmod will not be
accepted).
chmodAndSetAcl
Specifies that chmod and ACL operations are permitted. If the chmod command (or setattr file
operation) is issued, the result depends on the type of ACL that was previously controlling access
to the object:
• If the object had a Posix ACL, it will be modified accordingly.
• If the object had an NFSv4 ACL, it will be replaced by the given UNIX mode bits.
Note: This is the default setting when a fileset is created.
chmodAndUpdateAcl
Specifies that chmod and ACL operations are permitted. If chmod is issued, the ACL will be
updated by privileges derived from UNIX mode bits.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmcrfileset command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. This example creates a fileset in file system gpfs1:
2. This example adds fset2 in file system gpfs1 with the comment "another fileset":
mmlsfileset gpfs1 -L
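Based on the parameters documented above, the commands for these two examples would look similar to the following (the output is not shown here); the mmlsfileset gpfs1 -L command shown above confirms the results:
mmcrfileset gpfs1 fset1
mmcrfileset gpfs1 fset2 -t "another fileset"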
See also
• “mmchfileset command” on page 222
• “mmdelfileset command” on page 365
• “mmlinkfileset command” on page 477
• “mmlsfileset command” on page 493
• “mmunlinkfileset command” on page 724
Location
/usr/lpp/mmfs/bin
mmcrfs command
Creates a GPFS file system.
Synopsis
mmcrfs Device {"DiskDesc[;DiskDesc...]" | -F StanzaFile}
[-A {yes | no | automount}] [-B BlockSize] [-D {posix | nfs4}]
[-E {yes | no}] [-i InodeSize] [-j {cluster | scatter}]
[-k {posix | nfs4 | all}] [-K {no | whenpossible | always}]
[-L LogFileSize] [-m DefaultMetadataReplicas]
[-M MaxMetadataReplicas] [-n NumNodes] [-Q {yes | no}]
[-p afmAttribute=Value[,afmAttribute=Value...]...]
[-r DefaultDataReplicas] [-R MaxDataReplicas]
[-S {yes | no | relatime}] [-T Mountpoint] [-t DriveLetter]
[-v {yes | no}] [-z {yes | no}] [--filesetdf | --nofilesetdf]
[--inode-limit MaxNumInodes[:NumInodesToPreallocate]]
[--log-replicas LogReplicas] [--metadata-block-size MetadataBlockSize]
[--perfileset-quota | --noperfileset-quota]
[--mount-priority Priority] [--version VersionString]
[--write-cache-threshold HAWCThreshold]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmcrfs command to create a GPFS file system. The first parameter must be Device and it must
be followed by either DiskDescList or -F StanzaFile. You can mount a maximum of 256 file systems in an
IBM Spectrum Scale cluster at any one time, including remote file systems.
The performance of a file system is affected by the values that you set for block size, replication, and the
maximum number of files (number of inodes).
For information about block size see the descriptions in this help topic of the -B BlockSize parameter and
the --metadata-block-size parameter. Here are some general facts from those descriptions:
• The block size, subblock size, and number of subblocks per block of a file system are set when the file
system is created and cannot be changed later.
• All the data blocks in a file system have the same block size and the same subblock size. Data blocks
and subblocks in the system storage pool and those in user storage pools have the same sizes. An
example of a valid block size and subblock size is a 4 MiB block with an 8 KiB subblock.
• All the metadata blocks in a file system have the same block size and the same subblock size. The
metadata blocks and subblocks are set to the same sizes as data blocks and subblocks, unless the --
metadata-block-size parameter is specified.
• If the system storage pool contains only metadataOnly NSDs, the metadata block can be set to a
different size than the data block size with the --metadata-block-size parameter.
Note: This setting can result in a change in the data subblock size and in the number of subblocks in a
data block. For an example, see the subsection "Subblocks" in the description of the -B parameter later
in this help topic.
• The data blocks and metadata blocks must have the same number of subblocks, even when the data
block size and the metadata block size are different.
• The number of subblocks per block is derived from the smallest block size of any storage pool in the file
system, including the system metadata pool.
• The block size cannot exceed the value of the cluster attribute maxblocksize, which can be set by the
mmchconfig command.
For more information, see the topic Block size in the IBM Spectrum Scale: Concepts, Planning, and
Installation Guide.
For information about replication factors, see the descriptions of the -m, -M, -r, and -R parameters in this
help topic.
For information about the maximum number of files (number of inodes), see the description of the --
inode-limit parameter later in this help topic.
Results
Upon successful completion of the mmcrfs command, these tasks are completed on all the nodes of the
cluster:
• The mount point directory is created.
• The file system is formatted.
In GPFS v3.4 and earlier, disk information for the mmcrfs command was specified with disk descriptors,
which have the following format. The second, third, and sixth fields are reserved:
DiskName:::DiskUsage:FailureGroup::StoragePool:
For compatibility with earlier versions, the mmcrfs command still accepts the traditional disk descriptors,
but their use is deprecated.
Parameters
Device
The device name of the file system to be created.
File system names need not be fully qualified. fs0 is as acceptable as /dev/fs0. However, file
system names must be unique within a GPFS cluster. Do not specify an existing entry in /dev.
This must be the first parameter.
"DiskDesc[;DiskDesc...]"
A descriptor for each disk to be included. Each descriptor is separated by a semicolon (;). The entire
list must be enclosed in quotation marks (' or "). The use of disk descriptors is discouraged.
-F StanzaFile
Specifies a file that contains the NSD stanzas and pool stanzas for the disks that are to be added to
the file system. NSD stanzas have the following format:
%nsd:
nsd=NsdName
usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
failureGroup=FailureGroup
pool=StoragePool
servers=ServerList
device=DiskName
thinDiskType={no | nvme | scsi | auto}
where:
nsd=NsdName
Specifies the name of an NSD that was previously created by the mmcrnsd command. For a list of
available disks, issue the mmlsnsd -F command. This clause is mandatory for the mmcrfs
command.
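usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
Specifies the type of data to be stored on the disk: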
dataAndMetadata
Indicates that the disk contains both data and metadata. This is the default for disks in the
system pool.
dataOnly
Indicates that the disk contains data and does not contain metadata. This is the default for
disks in storage pools other than the system pool.
metadataOnly
Indicates that the disk contains metadata and does not contain data.
descOnly
Indicates that the disk contains no data and no file metadata. IBM Spectrum Scale uses this
type of disk primarily to keep a copy of the file system descriptor. It can also be used as a third
failure group in certain disaster recovery configurations. For more information, see the topic
Synchronous mirroring utilizing GPFS replication in the IBM Spectrum Scale: Administration
Guide.
failureGroup=FailureGroup
Identifies the failure group to which the disk belongs. A failure group identifier can be a simple
integer or a topology vector that consists of up to three comma-separated integers. The default is
-1, which indicates that the disk has no point of failure in common with any other disk.
GPFS uses this information during data and metadata placement to ensure that no two replicas of
the same block can become unavailable due to a single failure. All disks that are attached to the
same NSD server or adapter must be placed in the same failure group.
If the file system is configured with data replication, all storage pools must have two failure groups
to maintain proper protection of the data. Similarly, if metadata replication is in effect, the system
storage pool must have two failure groups.
Disks that belong to storage pools in which write affinity is enabled can use topology vectors to
identify failure domains in a shared-nothing cluster. Disks that belong to traditional storage pools
must use simple integers to specify the failure group.
pool=StoragePool
Specifies the storage pool to which the disk is to be assigned. The default is the system storage
pool. To specify the system storage pool explicitly, type system:
pool=system
Only the system storage pool can contain metadataOnly, dataAndMetadata, or descOnly
disks. Disks in other storage pools must be dataOnly.
servers=ServerList
A comma-separated list of NSD server nodes. This clause is ignored by the mmcrfs command.
device=DiskName
The block device name of the underlying disk device. This clause is ignored by the mmcrfs
command.
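thinDiskType={no | nvme | scsi | auto}
Specifies the space reclaim disk type:
no
The disk device does not support space reclaim. This value is the default.
nvme
The disk is a TRIM capable NVMe device that supports the mmreclaimspace command.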
scsi
The disk is a thin provisioned SCSI disk that supports the mmreclaimspace command.
auto
The type of the disk is either nvme or scsi. IBM Spectrum Scale will try to detect the actual
disk type automatically. To avoid problems, you should replace auto with the correct disk
type, nvme or scsi, as soon as you can.
Note: In 5.0.5, space reclaim auto-detection is enhanced. You are encouraged to use the auto keyword after your cluster is upgraded to 5.0.5.
For more information, see the topic IBM Spectrum Scale with data reduction storage devices in the
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
Pool stanzas have the following format:
%pool:
pool=StoragePoolName
blockSize=BlockSize
usage={dataOnly | metadataOnly | dataAndMetadata}
layoutMap={scatter | cluster}
allowWriteAffinity={yes | no}
writeAffinityDepth={0 | 1 | 2}
blockGroupFactor=BlockGroupFactor
where:
pool=StoragePoolName
Is the name of a storage pool.
blockSize=BlockSize
Specifies the block size of the disks in the storage pool.
usage={dataOnly | metadataOnly | dataAndMetadata}
Specifies the type of data to be stored in the storage pool:
dataAndMetadata
Indicates that the disks in the storage pool contain both data and metadata. This is the default
for disks in the system pool.
dataOnly
Indicates that the disks contain data and do not contain metadata. This is the default for disks
in storage pools other than the system pool.
metadataOnly
Indicates that the disks contain metadata and do not contain data.
layoutMap={scatter | cluster}
Specifies the block allocation map type. When allocating blocks for a given file, GPFS first uses a
round-robin algorithm to spread the data across all disks in the storage pool. After a disk is
selected, the location of the data block on the disk is determined by the block allocation map type.
If cluster is specified, GPFS attempts to allocate blocks in clusters. Blocks that belong to a
particular file are kept adjacent to each other within each cluster. If scatter is specified, the
location of the block is chosen randomly.
The cluster allocation method may provide better disk performance for some disk subsystems
in relatively small installations. The benefits of clustered block allocation diminish when the
number of nodes in the cluster or the number of disks in a file system increases, or when the file
system's free space becomes fragmented. The cluster allocation method is the default for GPFS
clusters with eight or fewer nodes and for file systems with eight or fewer disks.
The scatter allocation method provides more consistent file system performance by averaging
out performance variations due to block location (for many disk subsystems, the location of the
data relative to the disk edge has a substantial effect on performance). This allocation method is
appropriate in most cases and is the default for GPFS clusters with more than eight nodes or file
systems with more than eight disks.
The block allocation map type cannot be changed after the storage pool has been created.
allowWriteAffinity={yes | no}
Indicates whether the File Placement Optimizer (FPO) feature is to be enabled for the storage
pool. For more information on FPO, see the File Placement Optimizer section in the IBM Spectrum
Scale: Administration Guide.
writeAffinityDepth={0 | 1 | 2}
Specifies the allocation policy to be used by the node writing the data.
A write affinity depth of 0 indicates that each replica is to be striped across the disks in a cyclical
fashion with the restriction that no two disks are in the same failure group. By default, the unit of
striping is a block; however, if the block group factor is specified in order to exploit chunks, the
unit of striping is a chunk.
A write affinity depth of 1 indicates that the first copy is written to the writer node. The second
copy is written to a different rack. The third copy is written to the same rack as the second copy,
but on a different half (which can be composed of several nodes).
A write affinity depth of 2 indicates that the first copy is written to the writer node. The second
copy is written to the same rack as the first copy, but on a different half (which can be composed
of several nodes). The target node is determined by a hash value on the fileset ID of the file, or it is
chosen randomly if the file does not belong to any fileset. The third copy is striped across the disks
in a cyclical fashion with the restriction that no two disks are in the same failure group. The
following conditions must be met while using a write affinity depth of 2 to get evenly allocated
space in all disks:
1. The configuration in disk number, disk size, and node number for each rack must be similar.
2. The number of nodes must be the same in the bottom half and the top half of each rack.
This behavior can be altered on an individual file basis by using the --write-affinity-
failure-group option of the mmchattr command.
This parameter is ignored if write affinity is disabled for the storage pool.
blockGroupFactor=BlockGroupFactor
Specifies how many file system blocks are laid out sequentially on disk to behave like a single
large block. This option only works if --allow-write-affinity is set for the data pool. This
applies only to a new data block layout; it does not migrate previously existing data blocks. For
more information, see the topic File placement optimizer in the IBM Spectrum Scale:
Administration Guide.
-A {yes | no | automount}
Indicates when the file system is to be mounted:
yes
When the GPFS daemon starts. This is the default.
no
The file system is mounted manually.
automount
On non-Windows nodes, when the file system is first accessed. On Windows nodes, when the
GPFS daemon starts.
Note: IBM Spectrum Protect for Space Management does not support file systems with the -A
option set to automount.
-B BlockSize
Specifies the size of data blocks in the file system. By default this parameter sets the block size and
subblock size for all the data blocks and metadata blocks in the file system. This statement applies to
all the data blocks and metadata blocks in the system storage pool and all the data blocks in user
storage pools.
Note: You can specify a different size for metadata blocks and subblocks, and this setting can change
the size and number of data subblocks. For more information see the description of the --metadata-
block-size parameter later in this help topic.
Specify the value of the -B parameter with the character K or M. For example, to set the block size to 4
MiB with an 8 KiB subblock, type "-B 4M". The following table shows the supported block sizes with
their subblock size:
Attention:
• A data block size of 4 MiB provides good sequential performance, makes efficient use of disk
space, and provides good performance for small files. It works well for the widest variety of
workloads.
• For information about suggested block sizes for different types of I/O and for different
workloads and configuration types, see the topic Block size in the IBM Spectrum Scale:
Concepts, Planning, and Installation Guide.
Subblocks
By default the data blocks and metadata blocks in a file system are set to the same block size and
the same subblock size. The block size and subblock size are determined either by a setting from
Table 21 on page 320 (if -B is specified) or by the default sizes (if -B is not specified).
However, if metadata blocks are set to a different size than data blocks by the --metadata-
block-size parameter, which is described later in this help topic, the following steps are taken
automatically to determine the subblock sizes for data blocks and metadata blocks:
1. Determine the number of subblocks. This step is necessary because data blocks and metadata
blocks must have the same number of subblocks:
a. Choose the block type with the smaller block size (usually the metadata block).
b. Set the subblock size from the appropriate row in Table 21 on page 320.
c. Find the number of subblocks by dividing the block size by the subblock size. This value will
be the number of subblocks for both data blocks and metadata blocks.
For example, suppose that initially the block sizes are set to 16 MiB for data blocks and 1 MiB
for metadata blocks. The smaller block size is 1 MiB for metadata blocks. From Table 21 on
page 320, the subblock size for a block size of 1 MiB is 8 KiB. Therefore the number of
subblocks is (1 MiB / 8 KiB) or 128 subblocks. Thus the following settings are determined:
• For metadata blocks, from the table, the block size is 1 MiB and the metadata subblock size
is 8 KiB.
• Both data blocks and metadata blocks must have 128 subblocks.
2. Determine the subblock size for the other block type (usually the data block) by dividing the
block size by the number of subblocks from Step 1. Continuing the example from Step 1, a data
block must have 128 subblocks. Therefore the subblock size for data blocks is (16 MiB / 128)
or 128 KiB. Note that this is different from the standard subblock size of 16 KiB for a 16 MiB block.
-D {nfs4 | posix}
Specifies whether a deny-write open lock blocks write operations, as it is required to do by NFS V4.
File systems supporting NFS V4 must have -D nfs4 set. The option -D posix allows NFS writes
even in the presence of a deny-write open lock. If you intend to export the file system using NFS V4 or
Samba, you must use -D nfs4. For NFS V3 (or if the file system is not NFS exported at all) use -D
posix. The default is -D nfs4.
-E {yes | no}
Specifies whether to report exact mtime values (-E yes), or to periodically update the mtime value
for a file system (-E no). If it is more desirable to display exact modification times for a file system,
specify or use the default -E yes.
-i InodeSize
Specifies the byte size of inodes. Supported inode sizes are 512, 1024, and 4096 bytes. The default is
4096.
-j {cluster | scatter}
Specifies the default block allocation map type to be used if layoutMap is not specified for a given
storage pool.
-k {posix | nfs4 | all}
Specifies the type of authorization supported by the file system:
posix
Traditional GPFS ACLs only (NFS V4 and Windows ACLs are not allowed). Authorization controls
are unchanged from earlier releases.
nfs4
Support for NFS V4 and Windows ACLs only. Users are not allowed to assign traditional GPFS ACLs
to any file system objects (directories and individual files).
all
Any supported ACL type is permitted. This includes traditional GPFS (posix) and NFS V4 and
Windows ACLs (nfs4).
The administrator is allowing a mixture of ACL types. For example, fileA might have a posix
ACL, while fileB in the same file system may have an NFS V4 ACL, implying different access
characteristics for each file depending on the ACL type that is currently assigned. The default is -k
all.
Avoid specifying nfs4 or all unless files are to be exported to NFS V4 or Samba clients, or the file
system is mounted on Windows. NFS V4 and Windows ACLs affect file attributes (mode) and have
access and authorization characteristics that are different from traditional GPFS ACLs.
-Q {yes | no}
Activates quotas automatically when the file system is mounted. The default is -Q no. Issue the
mmdefedquota command to establish default quota values. Issue the mmedquota command to
establish explicit quota values.
To activate GPFS quota management after the file system has been created:
1. Mount the file system.
2. To establish default quotas:
a. Issue the mmdefedquota command to establish default quota values.
b. Issue the mmdefquotaon command to activate default quotas.
3. To activate explicit quotas, issue the mmedquota command to establish explicit quota values.
--filesetdf
Specifies that when quotas are enforced for a fileset (other than the root fileset), the numbers
reported by the df command are based on the quotas for the fileset (rather than the entire file
system). This option affects the df command behavior only on Linux nodes.
--nofilesetdf
Specifies that the numbers reported by the df command are not based on the quotas for a fileset. The
df command returns the numbers for the entire file system. This is the default.
--inode-limit MaxNumInodes[:NumInodesToPreallocate]
Specifies the maximum number of files in the file system.
In a file system that performs parallel file creates, the number of free inodes must be greater than 5% of the total number of inodes; otherwise, file system performance can be degraded. To increase the number of inodes, issue the mmchfs command.
The parameter NumInodesToPreallocate specifies the number of inodes that the system immediately
preallocates. If you do not specify a value for NumInodesToPreallocate, GPFS dynamically allocates
inodes as needed.
You can specify the MaxNumInodes and NumInodesToPreallocate values with a suffix, for example 100K or 2M. Note that, to optimize file system operations, the number of inodes that are actually created might be greater than the specified value.
Note: Preallocated inodes created using the mmcrfs command are allocated only to the root fileset,
and these inodes cannot be deleted or moved to another independent fileset. It is recommended to
avoid preallocating too many inodes because there can be both performance and memory allocation
costs associated with such preallocations. In most cases, there is no need to preallocate inodes
because GPFS dynamically allocates inodes as needed.
--log-replicas LogReplicas
Specifies the number of recovery log replicas. Valid values are 1, 2, 3, or DEFAULT. If not specified, or
if DEFAULT is specified, the number of log replicas is the same as the number of metadata replicas
currently in effect for the file system.
This option is applicable only if the recovery log is stored in the system.log storage pool. For more
information about the system.log storage pool, see the topic The system.log storage pool in the IBM
Spectrum Scale: Concepts, Planning, and Installation Guide.
--metadata-block-size MetadataBlockSize
Sets the metadata block size. Setting the metadata block size to a smaller value than the data block
size can improve file system performance, especially when the data block size is greater than 1 MiB.
By default the data blocks and metadata blocks in a file system are set to the same block size and the
same subblock size. For more information about these settings see the description of the -B BlockSize
parameter earlier in this help topic.
To set metadata blocks to a different size than data blocks, you must define a metadata-only system
pool and specify the --metadata-block-size parameter when you issue the mmcrfs command.
Follow these steps:
1. Define a pool stanza for a metadata-only system pool. Include the following settings:
• Set pool to a valid pool name.
• Do not set a value for blockSize. The metadata block size is set in the --metadata-block-
size parameter of the mmcrfs command.
• Set usage to metadataOnly.
2. Define an NSD stanza that includes the following settings:
• Set usage to metadataOnly.
• Set pool to the name of the pool stanza that you defined in Step 1.
3. Include the NSD stanza and the pool stanza in the stanza file that you will pass to the mmcrfs
command.
4. When you run the mmcrfs command, include the --metadata-block-size parameter and
specify a valid block size from Table 21 on page 320.
Note: When metadata blocks are set to a different size than data blocks, the subblock sizes are
ultimately determined by an automatic sequence of steps in the mmcrfs command processing. For
more information see the "Subblocks" subtopic in the description of the -B BlockSize parameter
earlier in this help topic.
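A minimal sketch of these steps follows. The NSD name meta1, the device name fs1, and the stanza file path /tmp/stanzafile are illustrative, and the block sizes mirror the subblock example earlier in this topic; dataOnly NSDs for a user storage pool would also be listed in the stanza file but are not shown here.
%pool:
pool=system
usage=metadataOnly
%nsd:
nsd=meta1
usage=metadataOnly
pool=system
mmcrfs fs1 -F /tmp/stanzafile -B 16M --metadata-block-size 1M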
--perfileset-quota
Sets the scope of user and group quota limit checks to the individual fileset level (rather than the
entire file system).
--noperfileset-quota
Sets the scope of user and group quota limit checks to the entire file system (rather than per
individual fileset). This is the default.
--mount-priority Priority
Controls the order in which the individual file systems are mounted at daemon startup or when one of
the all keywords is specified on the mmmount command.
File systems with higher Priority numbers are mounted after file systems with lower numbers. File
systems that do not have mount priorities are mounted last. A value of zero indicates no priority. This
is the default.
--version VersionString
Specifies the file system format version of the new file system, such as 4.2.3.0. A file system format
version is associated with a file system format number (for example, 17.0) that determines the
features that are enabled in the new file system. For more information about these values, see the
topic File system format changes between versions of IBM Spectrum Scale in the IBM Spectrum Scale:
Administration Guide.
If you do not specify this parameter, the file system format version of the new file system defaults to
the version of IBM Spectrum Scale that is installed on the node where you issue the command. For
example, if IBM Spectrum Scale version 4.2.3 is installed on the node where you issue the command,
then the default file system format version for the new file system is 4.2.3.0.
Whether you specify the file system format version or let it assume the default value, the file system
format version must be in the range 4.1.1.0 - mRL, where mRL is the minimum release level of the
cluster (minReleaseLevel).
The file system format version also affects the default value of the -B BlockSize parameter. For more
information, see the description of that parameter earlier in this help topic.
Important:
• A remote node with an installed product version of IBM Spectrum Scale (for example, 4.2.3) that is
less than the file system format version of the new file system (such as 5.0.0) will not be able to
access the file system.
• Windows nodes can mount only file systems with a file system format version greater than or equal
to 3.2.1.5.
• If you do not specify this parameter and the installed product version of the node where you issue
the command is greater than the minimum release level (minReleaseLevel) of the cluster, then
the command returns with an error message and prompts you to upgrade the minimum release
level. To avoid this result, specify a file system format version with the --version parameter.
• In many contexts you might want to let the file system format version assume its default value.
However, specifying an explicit file system format version can be useful or necessary in the following
contexts:
– When nodes in the cluster are running different versions of IBM Spectrum Scale.
– When you want to make the file system available to remote clusters in which nodes are running
an earlier version of IBM Spectrum Scale.
--profile ProfileName
Specifies a predefined profile of attributes to be applied. System-defined profiles are located
in /usr/lpp/mmfs/profiles/. All the file system attributes listed under a file system stanza are changed
as a result of this command. The following system-defined profile names are accepted:
• gpfsProtocolDefaults
• gpfsProtocolRandomIO
The file system attributes are applied at file system creation. If a current profile is in place on the system (use mmlsconfig profile to check), then the file system is created with the attributes and values listed in the profile's file system stanza. The default is to use whatever attributes and values are associated with the current profile setting.
Furthermore, any and all file system attributes from an installed profile file can be bypassed with '--profile=userDefinedProfile', where userDefinedProfile is a profile file that has been installed by the user in /var/mmfs/etc/.
User-defined profiles consist of the following stanzas:
%cluster:
[CommaSeparatedNodesOrNodeClasses:]ClusterConfigurationAttribute=Value
...
%filesystem:
FilesystemConfigurationAttribute=Value
...
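afmTarget
Specifies the AFM target in the following format: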
nfs://{Host|Map}/Source_Path
Where:
nfs://
Specifies the transport protocol.
Source_Path
Specifies the export path.
Host
Specifies the server domain name system (DNS) name or IP address.
Map
Specifies the export map name. For more information about mapping, see Parallel data
transfers in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
The afmTarget parameter examples are as follows:
1. Use NFS protocol without mapping.
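For example, an afmTarget value without mapping might look like the following (the server name and export path are illustrative):
afmTarget=nfs://homeServer/gpfs/homefs1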
afmMode
Specifies the AFM fileset mode. Valid values are as follows:
read-only | ro
Specifies the read-only mode. You can fetch data into the ro-mode fileset for read-only purposes.
local-updates | lu
Specifies the local-updates mode. You can fetch data into the lu-mode fileset and update it locally. The modified data is not synchronized with home and remains local.
Conversion of the ro mode to the lu mode is supported for file system-level migration. For more
information, see Caching modes in the IBM Spectrum Scale: Concepts, Planning, and Installation
Guide.
To disable the AFM relationship from the file system, complete the following steps:
1. Unmount the file system on all cluster nodes.
2. Disable the AFM relationship by issuing the following command:
Warning! Once disabled, AFM cannot be re-enabled on this fileset. Do you wish to
continue? (yes/no) yes
Warning! Fileset should be verified for uncached files and orphans. If already
verified, then skip this step. Do you wish to verify same? (yes/no) no
afmDirLookupRefreshInterval
Controls the frequency of data revalidations that are triggered by lookup operations such as ls or
stat (specified in seconds). When a lookup operation is performed on a directory and the specified amount of time has passed, AFM sends a message to the home cluster to find out whether the metadata of that directory has been modified since the last time it was checked. If the time interval has not passed, AFM does not check the home cluster for updates to the metadata.
Valid values are 0 – 2147483647. The default is 60. If the home cluster data changes frequently, a value of 0 is recommended.
afmDirOpenRefreshInterval
Controls the frequency of data revalidations that are triggered by such I/O operations as read or
write (specified in seconds). After a directory is cached, open requests that result from I/O operations on that object are directed to the cached directory until the specified amount of time has passed. After that time has passed, the open request is directed to a gateway node rather than to the cached directory.
Valid values are 0 - 2147483647. The default is 60. Set a lower value for a higher level of
consistency.
afmFileLookupRefreshInterval
Controls the frequency of data revalidations that are triggered by lookup operations such as ls or
stat (specified in seconds). When a lookup operation is performed on a file and the specified amount of time has passed, AFM sends a message to the home cluster to find out whether the metadata of the file has been modified since the last time it was checked. If the time interval has not passed, AFM does not check the home cluster for updates to the metadata.
Valid values are 0 – 2147483647. The default is 30. If the home cluster data changes frequently, a value of 0 is recommended.
afmFileOpenRefreshInterval
Controls the frequency of data revalidations that are triggered by I/O operations such as read or
write (specified in seconds). After a file is cached, open requests from I/O operations on that object are directed to the cached file until the specified amount of time has passed. After that time has passed, the open request is directed to a gateway node rather than to the cached file.
Valid values are 0 – 2147483647. The default is 30. Set a lower value for a higher level of
consistency.
afmParallelReadChunkSize
Defines the minimum chunk size of the read that needs to be distributed among the gateway
nodes during parallel reads. Values are interpreted in bytes. The default value of this parameter is
128 MiB, and the valid range of values is 0 – 2147483647. It can be changed cluster wide with the
mmchconfig command. It can be set at fileset level by using the mmcrfileset or
mmchfileset commands.
afmParallelReadThreshold
Defines the threshold beyond which parallel reads become effective. Reads are split into chunks
when file size exceeds this threshold value. Values are interpreted in MiB. The default value is
1024 MiB. The valid range of values is 0 – 2147483647. It can be changed cluster wide with the
mmchconfig command. It can be set at fileset level by using mmcrfileset or mmchfileset
commands.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmcrfs command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
This example shows how to create a file system named gpfs1 using three disks, each with a block size of
512 KiB, allowing metadata and data replication to be 2, turning quotas on, and creating /gpfs1 as the
mount point. The NSD stanzas describing the three disks are assumed to have been placed in the file /tmp/freedisks. To complete this task, issue the command:
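mmcrfs gpfs1 -F /tmp/freedisks -B 512K -M 2 -R 2 -Q yes -T /gpfs1
The -M 2 -R 2 options reflect an assumption that the replication factor of 2 refers to the maximum metadata and data replicas; use -m and -r instead if the default replication factors are intended.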
See also
• “mmchfs command” on page 230
• “mmdelfs command” on page 369
• “mmdf command” on page 382
• “mmedquota command” on page 398
• “mmfsck command” on page 404
• “mmlsfs command” on page 498
• “mmlspool command” on page 520
Location
/usr/lpp/mmfs/bin
mmcrnodeclass command
Creates user-defined node classes.
Synopsis
mmcrnodeclass ClassName -N {Node[,Node...] | NodeFile | NodeClass}
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmcrnodeclass command to create user-defined node classes. After a node class is created, it
can be specified as an argument on commands that accept the -N NodeClass option.
Parameters
ClassName
Specifies a name that uniquely identifies the user-defined node class to create. An existing node name cannot be specified. Class names that end with 'nodes', as well as the system-defined node class names, are reserved for use by GPFS.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes and node classes that will become members of the user-defined node class
ClassName.
NodeClass cannot be a node class that already contains other node classes. For example, two user-
defined node classes called siteA and siteB could be used to create a new node class called
siteAandB, as follows:
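mmcrnodeclass siteAandB -N siteA,siteB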
The siteAandB node class cannot later be specified for NodeClass when creating new node classes.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmcrnodeclass command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To create a user-defined node class called siteA that contains nodes c8f2c4vp1 and c8f2c4vp2, issue
this command:
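mmcrnodeclass siteA -N c8f2c4vp1,c8f2c4vp2
To confirm the change, issue this command: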
mmlsnodeclass siteA
See also
• “mmchnodeclass command” on page 248
• “mmdelnodeclass command” on page 374
• “mmlsnodeclass command” on page 512
Location
/usr/lpp/mmfs/bin
mmcrnsd command
Creates Network Shared Disks (NSDs) used by GPFS.
Synopsis
mmcrnsd -F StanzaFile [-v {yes | no}]
Availability
Available on all IBM Spectrum Scale editions.
Note: If the cluster is running the IBM Spectrum Scale Developer Edition, the mmcrnsd command
calculates the total disk size for all the proposed new NSDs along with any previously created NSDs. If the
total exceeds the license limit, the mmcrnsd command fails with an error message. For more information,
see the “mmlslicense command” on page 503.
Description
The mmcrnsd command is used to create cluster-wide names for NSDs used by GPFS.
This is the first GPFS step in preparing disks for use by a GPFS file system. The input to this command
consists of a file containing NSD stanzas describing the properties of the disks to be created. This file can
be updated as necessary by the mmcrnsd command and can be supplied as input to the mmcrfs,
mmadddisk, or mmrpldisk command.
The names that are created by the mmcrnsd command are necessary since disks connected to multiple
nodes might have different disk device names on each node. The NSD names uniquely identify each disk.
This command must be run for all disks that are to be used in GPFS file systems. The mmcrnsd command
is also used to assign each disk an NSD server list that can be used for I/O operations on behalf of nodes
that do not have direct access to the disk.
To identify that a disk has been processed by the mmcrnsd command, a unique NSD volume ID is written
to the disk. All of the NSD commands (mmcrnsd, mmlsnsd, and mmdelnsd) use this unique NSD volume
ID to identify and process NSDs.
After the NSDs are created, the GPFS cluster data is updated and they are available for use by GPFS.
Note: It is customary to use whole LUNs as NSDs. This generally provides the best performance and fault
isolation. When SCSI-3 PR is in use, whole LUN use is required. In other deployment scenarios, it is
possible to use disk partitions rather than whole LUNs, as long as care is taken to ensure that sharing of
the same LUN through multiple partitions does not have an undesirable performance impact.
On Windows, GPFS will only create NSDs from empty disk drives. mmcrnsd accepts Windows Basic disks
or Unknown/Not Initialized disks. It always re-initializes these disks so that they become Basic GPT Disks
with a single GPFS partition. NSD data is stored in GPFS partitions. This allows other operating system components to recognize that the disks are in use. mmdelnsd deletes the partition tables that are created by
mmcrnsd.
Results
Upon successful completion of the mmcrnsd command, these tasks are completed:
• NSDs are created.
• The StanzaFile contains NSD names to be used as input to the mmcrfs, mmadddisk, or the mmrpldisk
commands.
• A unique NSD volume ID to identify each disk as an NSD has been written to the disk.
• An entry for each new disk is created in the GPFS cluster data.
Parameters
-F StanzaFile
Specifies the file containing the NSD stanzas for the disks to be created. NSD stanzas have this format:
%nsd: device=DiskName
nsd=NsdName
servers=ServerList
usage={dataOnly | metadataOnly | dataAndMetadata | descOnly | localCache}
failureGroup=FailureGroup
pool=StoragePool
thinDiskType={no | nvme | scsi | auto}
where:
device=DiskName
On UNIX, specifies the block device name that appears in /dev for the disk that you want to
define as an NSD. Examples of disks that are accessible through a block device are SAN-attached
disks.
Important: If a server node is specified, DiskName must be the /dev name for the disk device of
the first listed NSD server node. If no server node is specified, DiskName must be the name of the
disk device for the node from which the mmcrnsd command is issued.
On Windows, the disk number (for example, 3) of the disk you want to define as an NSD. Disk
numbers appear in Windows Disk Management console and the DISKPART command line utility.
Important: If a server node is specified, DiskName must be the disk number from the first NSD
server node that is defined in the server list. If no server node is specified, DiskName must be the
name of the disk device for the node from which the mmcrnsd command is issued.
For the latest supported disk types, see the IBM Spectrum Scale FAQ in the IBM Knowledge
Center.
This clause is mandatory for the mmcrnsd command.
nsd=NsdName
Specifies the name of the NSD to be created. This name must not already be used as another
GPFS disk name, and it must not begin with the reserved string 'gpfs'.
Note: This name can contain only the following characters: 'A' through 'Z', 'a' through 'z', '0' through '9', or '_' (the underscore). No other characters are valid.
If you do not specify this clause, GPFS generates a unique name for the disk and adds the
appropriate nsd=NsdName clause to the stanza file. The NSD is assigned a name according to the
convention:
gpfsNNnsd
where NN is a unique nonnegative integer that is not used in any prior NSD.
servers=ServerList
Specifies a comma-separated list of NSD server nodes. You can specify up to eight NSD servers in
this list. The defined NSD preferentially uses the first server on the list. If the first server is not
available, the NSD uses the next available server on the list.
When you specify server nodes for your NSDs, the output of the mmlscluster command lists the
host name and IP address combinations that are recognized by GPFS. The utilization of aliased
host names that are not listed in the mmlscluster command output might produce undesired
results.
There are two cases where a server list either must be omitted or is optional:
• For IBM Spectrum Scale RAID, a server list is not allowed. The servers are determined from the
underlying vdisk definition. For more information about IBM Spectrum Scale RAID, see the IBM
Spectrum Scale RAID: Administration and Programming Reference.
• For SAN configurations where the disks are SAN-attached to all nodes in the cluster, a server list
is optional. However, if all nodes in the cluster do not have access to the disk, or if the file
system to which the disk belongs is to be accessed by other GPFS clusters, you must specify a
server list.
usage={dataOnly | metadataOnly | dataAndMetadata | descOnly | localCache}
Specifies the type of data to be stored on the disk:
dataAndMetadata
Indicates that the disk contains both data and metadata. This is the default for disks in the
system pool.
dataOnly
Indicates that the disk contains data and does not contain metadata. This is the default for
disks in storage pools other than the system pool.
metadataOnly
Indicates that the disk contains metadata and does not contain data.
descOnly
Indicates that the disk contains no data and no file metadata. Such a disk is used solely to
keep a copy of the file system descriptor, and can be used as a third failure group in certain
disaster recovery configurations. For more information, see the topic Synchronous mirroring
utilizing GPFS replication in the IBM Spectrum Scale: Administration Guide.
localCache
Indicates that the disk is to be used as a local read-only cache device.
This clause is ignored by the mmcrnsd command and is passed unchanged to the output file
produced by the mmcrnsd command.
failureGroup=FailureGroup
Identifies the failure group to which the disk belongs. A failure group identifier can be a simple
integer or a topology vector that consists of up to three comma-separated integers. The default is
-1, which indicates that the disk has no point of failure in common with any other disk.
GPFS uses this information during data and metadata placement to ensure that no two replicas of
the same block can become unavailable due to a single failure. All disks that are attached to the
same NSD server or adapter must be placed in the same failure group.
If the file system is configured with data replication, all storage pools must have two failure groups
to maintain proper protection of the data. Similarly, if metadata replication is in effect, the system
storage pool must have two failure groups.
Disks that belong to storage pools in which write affinity is enabled can use topology vectors to
identify failure domains in a shared-nothing cluster. Disks that belong to traditional storage pools
must use simple integers to specify the failure group.
This clause is ignored by the mmcrnsd command, and is passed unchanged to the output file
produced by the mmcrnsd command.
pool=StoragePool
Specifies the name of the storage pool to which the NSD is assigned. This clause is ignored by the
mmcrnsd command and is passed unchanged to the output file produced by the mmcrnsd
command.
The default value for pool is system.
thinDiskType={no | nvme | scsi | auto}
Specifies the space reclaim disk type:
no
The disk device does not support space reclaim. This value is the default.
nvme
The disk is a TRIM capable NVMe device that supports the mmreclaimspace command.
scsi
The disk is a thin provisioned SCSI disk that supports the mmreclaimspace command.
auto
The type of the disk is either nvme or scsi. IBM Spectrum Scale will try to detect the actual
disk type automatically. To avoid problems, you should replace auto with the correct disk
type, nvme or scsi, as soon as you can.
Note: In 5.0.5, space reclaim auto-detection is enhanced. You are encouraged to use the auto keyword after your cluster is upgraded to 5.0.5.
For more information, see the topic IBM Spectrum Scale with data reduction storage devices in the
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
-v {yes | no}
Verify that the disks are not already formatted as an NSD.
A value of -v yes specifies that the NSDs are to be created only if each disk has not been formatted
by a previous invocation of the mmcrnsd command, as indicated by the NSD volume ID on the disk. A
value of -v no specifies that the disks are to be created irrespective of their previous state. The
default is -v yes.
Important: Using -v no when a disk already belongs to a file system can corrupt that file system by
making that physical disk undiscoverable by that file system. This will not be noticed until the next
time that file system is mounted.
Upon successful completion of the mmcrnsd command, the StanzaFile file is rewritten to reflect changes
made by the command, as follows:
• If an NSD stanza is found to be in error, the stanza is commented out.
• If an nsd=NsdName clause is not specified, and an NSD name is generated by GPFS, an nsd= clause
will be inserted in the corresponding stanza.
You must have write access to the directory where the StanzaFile file is located in order to rewrite the
created NSD information.
The disk usage, failure group, and storage pool specifications are preserved only if you use the rewritten
file produced by the mmcrnsd command. If you do not use this file, you must either accept the default
values or specify new values when creating NSD stanzas for other commands.
For backward compatibility, the mmcrnsd command will still accept the traditional disk descriptors, but
their use is discouraged. For additional information about GPFS stanzas, see the help topic "Stanza files"
in the IBM Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmcrnsd command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To create two new NSDs from the stanza file /tmp/newNSDstanza containing:
%nsd: device=/dev/sdav1
servers=k145n05,k145n06
failureGroup=4
%nsd:
device=/dev/sdav2
nsd=sd2pA
servers=k145n06,k145n05
usage=dataOnly
failureGroup=5
pool=poolA
issue this command:
mmcrnsd -F /tmp/newNSDstanza
As a result, two NSDs are created. The first one is assigned a name by the system, for example,
gpfs1023nsd. The second disk is assigned the name sd2pA (as indicated by the nsd= clause in the
stanza).
The newNSDstanza file is rewritten and looks like this (note the addition of an nsd= clause in the first
stanza):
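%nsd:
device=/dev/sdav1
nsd=gpfs1023nsd
servers=k145n05,k145n06
failureGroup=4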
%nsd:
device=/dev/sdav2
nsd=sd2pA
servers=k145n06,k145n05
usage=dataOnly
failureGroup=5
pool=poolA
See also
• “mmadddisk command” on page 28
• “mmchnsd command” on page 251
• “mmcrfs command” on page 315
• “mmdeldisk command” on page 360
• “mmdelnsd command” on page 376
• “mmlsnsd command” on page 514
• “mmrpldisk command” on page 679
Location
/usr/lpp/mmfs/bin
mmcrsnapshot command
Creates a snapshot of a file system or fileset at a single point in time.
Synopsis
mmcrsnapshot Device [[Fileset]:]Snapshot[,[[Fileset]:]Snapshot]...
[-j FilesetName[,FilesetName...]]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmcrsnapshot command to create global snapshots or fileset snapshots at a single point in
time. System data and existing snapshots are not copied. The snapshot function allows a backup or mirror
program to run concurrently with user updates and still obtain a consistent copy of the file system as of
the time the copy was created. Snapshots also provide an online backup capability that allows easy
recovery from common problems such as accidental deletion of a file, and comparison with older versions
of a file.
In IBM Spectrum Scale Release 4.2.1 and later, snapshot commands support the specification of multiple
snapshots. Users can easily create and delete multiple snapshots. Also, system performance is increased
by batching operations and reducing overhead.
In this release, the following new usages of the mmcrsnapshot command have been introduced:
A global snapshot is an exact copy of changed data in the active files and directories of a file system.
Snapshots of a file system are read-only and they appear in a .snapshots directory located in the file
system root directory. The files and attributes of the file system can be changed only in the active copy.
A fileset snapshot is an exact copy of changed data in the active files and directories of an independent
fileset plus all dependent filesets. Fileset snapshots are read-only and they appear in a .snapshots
directory located in the root directory of the fileset. The files and attributes of the fileset can be changed
only in the active copy.
Snapshots may be deleted only by issuing the mmdelsnapshot command. The .snapshots directory
cannot be deleted, though it can be renamed with the mmsnapdir command using the -s option.
Because global snapshots are not full, independent copies of the entire file system, they should not be
used as protection against media failures. For information about protection against media failures, see the
Recoverability considerations topic in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
For more information on global snapshots, see Creating and maintaining snapshots of GPFS file systems in
the IBM Spectrum Scale: Administration Guide.
For more information on fileset snapshots, see Fileset-level snapshots in the IBM Spectrum Scale:
Administration Guide.
Parameters
Device
The device name of the file system for which the snapshot is to be created. File system names need
not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
This must be the first parameter.
Fileset
The name of the fileset that contains the fileset snapshot to be created. If Fileset is not specified, the
mmcrsnapshot command creates a global snapshot named Snapshot.
Note: Ensure that multiple snapshots and multiple filesets are not used together.
Snapshot
Specifies the name given to the snapshot.
The snapshot names are separated by a comma.
The snapshot specifier describes global and fileset snapshots. For example, Fileset1:Snapshot1
specifies a fileset snapshot named Snapshot1 for fileset Fileset1. If Fileset1 is empty, Snapshot1 is a
global snapshot named Snapshot1.
Each global snapshot name must be unique from any other global snapshots. If you do not want to
traverse the root of the file system to access the global snapshot, a more convenient mechanism that
enables a connection in each directory of the active file system can be enabled with the -a option of
the mmsnapdir command.
Note: Ensure that the snapshot name is using the "@GMT-yyyy.MM.dd-HH.mm.ss" format in order to
be identifiable by the Windows VSS.
For a fileset snapshot, Snapshot appears as a subdirectory of the .snapshots directory in the root
directory of the fileset. Fileset snapshot names can be duplicated across different filesets. A fileset
snapshot can also have the same name as a global snapshot. The mmsnapdir command provides an
option to make global snapshots also available through the .snapshots in the root directory of all
independent filesets.
Note:
• Ensure that the snapshot name does not include a colon (:) or a comma (,).
• Ensure that multiple snapshots and multiple filesets are not used together.
-j FilesetName
Creates a snapshot that includes the specified fileset and all the dependent filesets that share the
same inode space. FilesetName refers to an independent fileset. If -j is not specified, the
mmcrsnapshot command creates a global snapshot that includes all filesets.
If -j is specified, fileset names can be considered as a list separated by commas.
Note:
• When a list of snapshots separated by a comma (,) is used with the -j option, the fileset applies to each snapshot that does not use the colon (:) syntax. The fileset name must not contain white space.
• Only one global snapshot and one snapshot per fileset can be specified. Therefore, do not create multiple snapshots with the -j option. IBM recommends including at most one global snapshot and one fileset snapshot in each independent fileset. If multiple snapshots are specified with the -j option, a snapshot with the same name is created for each fileset that is listed.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmcrsnapshot command when creating global snapshots.
Independent fileset owners can run the mmcrsnapshot command to create snapshots of the filesets
they own.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To create a global snapshot snap1, for the file system fs1, issue this command:
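mmcrsnapshot fs1 snap1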
Before issuing the command, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
After the command has been issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
If a second snapshot were to be created at a later time, the first snapshot would remain as is.
Snapshots are made only of active file systems, not existing snapshots. For example:
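mmcrsnapshot fs1 snap2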
After the command has been issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
/fs1/.snapshots/snap2/file1
/fs1/.snapshots/snap2/userA/file2
/fs1/.snapshots/snap2/userA/file3
2. To create a snapshot Snap3 of the fileset FsetF5-V2 for the file system fs1, issue this command:
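mmcrsnapshot fs1 FsetF5-V2:Snap3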
To display the snapshot that contains the FsetF5-V2 fileset, issue this command:
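mmlssnapshot fs1 -j FsetF5-V2
The -j option of the mmlssnapshot command, which restricts the listing to the specified fileset, is assumed here.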
3. To create a snapshot of the gpfs0 file system that can be viewed over SMB protocol with Windows VSS,
issue this command:
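mmcrsnapshot gpfs0 @GMT-2021.03.30-06.00.00
The timestamp in the snapshot name is illustrative; use the current GMT time in the "@GMT-yyyy.MM.dd-HH.mm.ss" format.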
4. To create a snapshot for the fset1, fset2, fset3 filesets for the file system fs1, run the following
command:
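mmcrsnapshot fs1 snap1 -j fset1,fset2,fset3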
Flushing dirty data for snapshot fset1:snap1 fset2:snap1 fset3:snap1 (1..3) of 3...
Quiescing all file system operations.
Snapshot fset1:snap1 created with id 1.
Snapshot fset2:snap1 created with id 2.
Snapshot fset3:snap1 created with id 3.
5. To specify different snapshot names for each fileset (fset1, fset2, and fset3) for the file system fs1,
run the following command:
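mmcrsnapshot fs1 fset1:snapA,fset2:snapB,fset3:snapC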
Flushing dirty data for snapshot fset1:snapA fset2:snapB fset3:snapC (1..3) of 3...
Quiescing all file system operations.
Snapshot fset1:snapA created with id 4.
Snapshot fset2:snapB created with id 5.
Snapshot fset3:snapC created with id 6.
mmlssnapshot fs1
See also
• “mmdelsnapshot command” on page 378
• “mmlssnapshot command” on page 532
• “mmrestorefs command” on page 665
• “mmsnapdir command” on page 711
Location
/usr/lpp/mmfs/bin
mmdefedquota command
Sets default quota limits.
Synopsis
mmdefedquota {-u | -g | -j} Device
or
mmdefedquota {-u | -g} Device:Fileset
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
Use the mmdefedquota command to set or change default quota limits. Default quota limits can be set
for new users, groups, and filesets for a specified file system. Default quota limits can also be applied at a
more granular level for new users and groups in a specified fileset.
Default quota limits can be set or changed only if the -Q yes option is in effect for the file system and if
quotas are enabled with the mmdefquotaon command. To set default quotas at the fileset level, the --
perfileset-quota option must also be in effect. If --perfileset-quota is in effect, users and groups in the root fileset are not affected by default quotas unless the quotas are explicitly set. The -Q yes and --perfileset-quota options are specified when you create a file system with the mmcrfs command or when you change file system attributes with the mmchfs command. Use the mmlsfs command to display the current settings of these quota options.
The mmdefedquota command displays the current values for these limits, if any, and prompts you to
enter new values in your default editor:
• The current block usage: The amount of disk space that is used by this user, group, or fileset, in 1 KB
units; display only.
• The current inode usage: Display only.
• Inode soft limit.
• Inode hard limit.
• Block soft limit: The amount of disk space that this user, group, or fileset is allowed to use during normal
operation.
• Block hard limit: The amount of disk space that this user, group, or fileset is allowed to use during the
grace period.
Note on block limits:
– The command displays the current block limits in KB.
– When you specify a block limit, you can add a suffix to the number to indicate the unit of measure: g,
G, k, K, m, M, p, P, t, or T. If you do not specify a suffix, the command assumes that the number is in
bytes.
– The maximum block limit is 999999999999999 K (about 931322 T). For values greater than
976031318016 K (909 T) you must specify the equivalent value with the suffix K, M, or G or without
any suffix.
Note: A block or inode limit of 0 indicates no limit.
The mmdefedquota command waits for the edit window to be closed before it checks and applies new
values. If an incorrect entry is made, reissue the command and enter the correct values.
When you set quota limits for a file system, consider replication within the file system. For more
information, see the topic Listing quotas in the IBM Spectrum Scale: Administration Guide.
The EDITOR environment variable must contain a complete path name, for example:
export EDITOR=/bin/vi
Parameters
Device
The device name of the file system to have default quota values set for.
File system names need not be fully qualified.
Fileset
The name of a fileset in the file system to have default quota values set for.
Options
-g
Specifies that the default quota value is to be applied for new groups that access the specified file
system or fileset.
-j
Specifies that the default quota value is to be applied for new filesets in the specified file system.
-u
Specifies that the default quota value is to be applied for new users that access the specified file
system or fileset.
Note:
• The maximum files limit is 2147483647.
• See the Note on block limits earlier in this topic.
• If you want to display the current grace period, issue the command mmrepquota -t.
Exit status
0
Successful completion.
Nonzero
A failure has occurred.
Security
You must have root authority to run the mmdefedquota command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
GPFS must be running on the node from which the mmdefedquota command is issued.
Examples
1. To set default quotas for new users of the file system gpfs1, issue the following command:
mmdefedquota -u gpfs1
The following code block shows how to change the soft block limit to 19 GB, the hard block limit to 20
GB, the inode soft limit to 1 K, and the inode hard limit to 20 K:

*** Edit quota limits for USR DEFAULT entry
NOTE: block limits will be rounded up to the next multiple of the block size.
block units may be: K, M, G, T or P, inode units may be: K, M or G.
gpfs1: blocks in use: 0K, limits (soft = 19G, hard = 20G)
inodes in use: 0, limits (soft = 1K, hard = 20K)
After the edit window is closed, issue the following command to confirm the change:
mmlsquota -d -u gpfs1
2. To set default quotas for new users of fileset fset1 in file system gpfs1, issue the following
command:
mmdefedquota -u gpfs1:fset1
*** Edit quota limits for USR DEFAULT entry for fileset fset1
NOTE: block limits will be rounded up to the next multiple of the block size.
block units may be: K, M, G, T or P, inode units may be: K, M or G.
gpfs1: blocks in use: 0K, limits (soft = 0K, hard = 31457280K)
inodes in use: 0, limits (soft = 0, hard = 0)
Change the soft block limit to 3 GB and the hard block limit to 6 GB, as shown in the following code
block:
*** Edit quota limits for USR DEFAULT entry for fileset fset1
NOTE: block limits will be rounded up to the next multiple of the block size.
block units may be: K, M, G, T or P, inode units may be: K, M or G.
gpfs1: blocks in use: 0K, limits (soft = 3G, hard = 6G)
inodes in use: 0, limits (soft = 0, hard = 0)
After the edit window is closed, issue this command to confirm the change:
mmlsquota -d gpfs1:fset1
See also
• “mmchfs command” on page 230
• “mmcheckquota command” on page 218
• “mmcrfs command” on page 315
• “mmdefquotaoff command” on page 346
• “mmdefquotaon command” on page 349
Location
/usr/lpp/mmfs/bin
mmdefquotaoff command
Deactivates default quota limit usage.
Synopsis
mmdefquotaoff [-u] [-g] [-j] [-v] [-d] {Device [Device...] | -a}
or
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmdefquotaoff command deactivates default quota limits for file systems and filesets. If default
quota limits are deactivated, new users, groups, or filesets will then have a default quota limit of 0,
indicating no limit.
If none of the following options are specified, the mmdefquotaoff command deactivates all default
quotas:
-u
-j
-g
If the -a option is not used, Device must be the last parameter specified.
Parameters
Device
The device name of the file system to have default quota values deactivated.
If more than one file system is listed, the names must be delimited by a space. File system names
need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
Fileset
The name of a fileset in the file system to have default quota values deactivated.
Options
-a
Deactivates default quotas for all GPFS file systems in the cluster. When used in combination with the
-g option, only group quotas are deactivated. When used in combination with the -u or -j options,
only user or fileset quotas, respectively, are deactivated.
-d
Resets quota limits to zero for users, groups, or filesets.
When --perfileset-quota is not in effect for the file system, this option will reset quota limits to
zero only for users, groups, or filesets that have default quotas established.
When --perfileset-quota is in effect for the file system, this option will reset quota limits to zero
for users, groups, or filesets that have default quotas established only if both the file system and
fileset-level default quotas are zero. If either file system or fileset-level default quotas exist, the
default quotas will be switched to the level that is non-zero.
If this option is not chosen, existing quota entries remain in effect.
-g
Specifies that default quotas for groups are to be deactivated.
-j
Specifies that default quotas for filesets are to be deactivated.
-u
Specifies that default quotas for users are to be deactivated.
-v
Prints a message for each file system or fileset in which default quotas are deactivated.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmdefquotaoff command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
GPFS must be running on the node from which the mmdefquotaoff command is issued.
Examples
1. To deactivate default user quotas on file system fs0, issue this command:
mmdefquotaoff -u fs0
To confirm the change, issue this command:

mmlsquota -d -u fs0
2. To deactivate default group quotas on all file systems, issue this command:
mmdefquotaoff -g -a
To confirm the change, issue this command:

mmlsquota -d -g
3. To deactivate both user and group default quotas for fileset fset1 on file system gpfs1, issue this
command:
mmdefquotaoff -d gpfs1:fset1
To confirm the change, issue this command:

mmlsquota -d gpfs1:fset1
See also
• “mmcheckquota command” on page 218
• “mmdefedquota command” on page 342
• “mmdefquotaon command” on page 349
• “mmedquota command” on page 398
• “mmlsquota command” on page 527
• “mmquotaoff command” on page 641
• “mmrepquota command” on page 656
Location
/usr/lpp/mmfs/bin
mmdefquotaon command
Activates default quota limit usage.
Synopsis
mmdefquotaon [-u] [-g] [-j] [-v] [-d] {Device [Device... ] | -a}
or
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmdefquotaon command activates default quota limits for file systems and filesets. If default quota
limits are not applied, new users, groups, or filesets will have a quota limit of 0, indicating no limit.
To use default quotas, the -Q yes option must be in effect for the file system. To use default quotas at
the fileset level, the --perfileset-quota option must also be in effect. The -Q yes and --
perfileset-quota options are specified when creating a file system with the mmcrfs command or
changing file system attributes with the mmchfs command. Use the mmlsfs command to display the
current settings of these quota options.
If none of the following options are specified, the mmdefquotaon command activates all default quota
limits:
-u
-j
-g
If the -a option is not used, Device must be the last parameter specified.
Default quotas are established for new users, groups of users or filesets by issuing the mmdefedquota
command. Under the -d option, all users without an explicitly set quota limit will have a default quota
limit assigned.
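For example, a minimal sequence might first enable quota enforcement, then activate and verify default
user quotas. The file system name fs1 is an assumption, and the mmchfs step is needed only if -Q yes
was not already set when the file system was created:

mmchfs fs1 -Q yes
mmdefquotaon -u fs1
mmlsfs fs1 -Q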
Parameters
Device
The device name of the file system to have default quota values activated.
If more than one file system is listed, the names must be delimited by a space. File system names
need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
Fileset
The name of a fileset in the file system to have default quota values activated.
Options
-a
Activates default quotas for all GPFS file systems in the cluster. When used in combination with the -g
option, only group quotas are activated. When used in combination with the -u or -j options, only
user or fileset quotas, respectively, are activated.
-d
Assigns default quota limits to existing users, groups, or filesets when the mmdefedquota command
is issued.
When --perfileset-quota is not in effect for the file system, this option will only affect existing
users, groups, or filesets with no established quota limits.
When --perfileset-quota is in effect for the file system, this option will affect existing users,
groups, or filesets with no established quota limits, and it will also change existing users or groups
that refer to default quotas at the file system level into users or groups that refer to fileset-level
default quota. For more information about default quota priorities, see the topic Default quotas in the
IBM Spectrum Scale: Administration Guide
.
If this option is not chosen, existing quota entries remain in effect and are not governed by the default
quota rules.
-g
Specifies that default quotas for groups are to be activated.
-j
Specifies that default quotas for filesets are to be activated.
-u
Specifies that default quotas for users are to be activated.
-v
Prints a message for each file system or fileset in which default quotas are activated.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmdefquotaon command.
The node on which you enter the command must be able to execute remote shell commands on any other
administration node in the cluster. It must be able to do so without the use of a password and without
producing any extraneous messages. For more information, see Requirements for administering a GPFS
file system in the IBM Spectrum Scale: Administration Guide.
GPFS must be running on the node from which the mmdefquotaon command is issued.
Examples
1. To activate default user quotas on file system fs0, issue this command:
mmdefquotaon -u fs0
To confirm the change, issue this command:

mmlsfs fs0 -Q
2. To activate default group quotas on all file systems in the cluster, issue this command:
mmdefquotaon -g -a
To confirm the change, individually for each file system, issue this command:
mmlsfs fs1 -Q
3. To activate user, group, and fileset default quotas on file system fs2, issue this command:
mmdefquotaon fs2
To confirm the change, issue this command:

mmlsfs fs2 -Q
4. To activate user default quota for fileset fset1 on file system gpfs1, issue this command:
mmdefquotaon -d -u gpfs1:fset1
To confirm the change, issue this command:

mmlsquota -d gpfs1:fset1
In this example, notice the entryType for user quota displays default on. To also activate group
default quota for fset1 on file system gpfs1, issue this command:
mmdefquotaon -d -g gpfs1:fset1
To confirm the change, issue this command:

mmlsquota -d gpfs1:fset1
In this example, notice that the entryType for group quota also displays default on now.
See also
• “mmcheckquota command” on page 218
• “mmchfs command” on page 230
Location
/usr/lpp/mmfs/bin
mmdefragfs command
Reduces disk fragmentation by increasing the number of full free blocks available to the file system.
Synopsis
mmdefragfs Device [-i] [-u BlkUtilPct] [-P PoolName]
[-N {Node[,Node...] | NodeFile | NodeClass}] [--qos QOSClass]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmdefragfs command to reduce fragmentation of a file system. The mmdefragfs command
moves existing file system data within a disk to make more efficient use of disk blocks. The data is
migrated to unused sub-blocks in partially allocated blocks, thereby increasing the number of free full
blocks.
The mmdefragfs command can be run against a mounted or unmounted file system. However, best
results are achieved when the file system is unmounted. When a file system is mounted, the allocation
status of blocks can change, which causes the command to retry finding a suitable unused sub-block.
Note: On a file system that has a very low level of fragmentation, negative numbers can be seen in the
output of mmdefragfs for free sub-blocks. This indicates that the block usage has in fact increased after
running the mmdefragfs command. If negative numbers are seen, it does not indicate a problem and you
do not need to rerun the mmdefragfs command.
Parameters
Device
The device name of the file system to have fragmentation reduced. File system names need not be
fully-qualified. fs0 is as acceptable as /dev/fs0.
This must be the first parameter.
-P PoolName
Specifies the pool name to use.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes that can be used in this disk defragmentation. This parameter supports all defined
node classes. The default is all or the current value of the defaultHelperNodes parameter of the
mmchconfig command.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
Options
-i
Specifies to query the current disk fragmentation state of the file system. Does not perform the actual
defragmentation of the disks in the file system.
-u BlkUtilPct
The average block utilization goal for the disks in the file system. The mmdefragfs command reduces
the number of allocated blocks by increasing the percent utilization of the remaining blocks. The
command automatically goes through multiple iterations until BlkUtilPct is achieved on all of the disks
in the file system or until no progress is made in achieving BlkUtilPct from one iteration to the next, at
which point it exits.
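For example, a combined invocation might first query one storage pool and then defragment it toward a
target utilization; the file system and pool names in this sketch are assumptions:

mmdefragfs fs1 -i -P datapool
mmdefragfs fs1 -u 90 -P datapool --qos maintenance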
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmdefragfs command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To query the fragmentation state of file system fs0, issue this command:
mmdefragfs fs0 -i
2. To reduce fragmentation of the file system fs0 on all defined, accessible disks that are not stopped or
suspended, issue this command:
mmdefragfs fs0
3. To reduce fragmentation of all files in the fs1 file system until the disks have 100% full block
utilization, issue this command:

mmdefragfs fs1 -u 100
See also
• “mmdf command” on page 382
Location
/usr/lpp/mmfs/bin
mmdelacl command
Deletes a GPFS access control list.
Synopsis
mmdelacl [-d] Filename
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
Use the mmdelacl command to delete the extended entries of an access ACL of a file or directory, or to
delete the default ACL of a directory.
Parameters
Filename
The path name of the file or directory for which the ACL is to be deleted. If the -d option is specified,
Filename must contain the name of a directory.
Options
-d
Specifies that the default ACL of a directory is to be deleted.
Since there can be only one NFS V4 ACL (no separate default), specifying the -d flag for a file with an
NFS V4 ACL is an error. Deleting an NFS V4 ACL necessarily removes both the ACL and any inheritable
entries contained in it.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
The mmdelacl command may be issued only by the file or directory owner, the root user, or by someone
with control (c) authority in the ACL for the file.
You may issue the mmdelacl command only from a node in the GPFS cluster where the file system is
mounted.
Examples
To delete the default ACL for a directory named project2, issue this command:
mmdelacl -d project2
To confirm the deletion, issue this command:

mmgetacl -d project2
#owner:uno
#group:system
See also
• “mmeditacl command” on page 395
• “mmgetacl command” on page 422
• “mmputacl command” on page 608
Location
/usr/lpp/mmfs/bin
mmdelcallback command
Deletes one or more user-defined callbacks from the GPFS system.
Synopsis
mmdelcallback CallbackIdentifier[,CallbackIdentifier...]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmdelcallback command to delete one or more user-defined callbacks from the GPFS system.
Parameters
CallbackIdentifier
Specifies a user-defined unique name that identifies the callback to be deleted. Use the
mmlscallback command to see the name of the callbacks that can be deleted.
Note: Before you add or delete the tiebreakerCheck event, you must stop the GPFS daemon on all
the nodes in the cluster.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmdelcallback command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To delete the test1 callback from the GPFS system, issue this command:
mmdelcallback test1
See also
• “mmaddcallback command” on page 12
• “mmlscallback command” on page 482
Location
/usr/lpp/mmfs/bin
mmdeldisk command
Deletes disks from a GPFS file system.
Synopsis
mmdeldisk Device {"DiskName[;DiskName...]" | -F DescFile} [-a] [-c]
[-m | -r | -b [--strict]] [-N {Node[,Node...] | NodeFile | NodeClass}]
[--inode-criteria CriteriaFile] [-o InodeResultFile]
[--qos QOSClass]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmdeldisk command migrates all data that would otherwise be lost to the remaining disks in the file
system. It then removes the disks from the file system descriptor, preserves replication at all times, and
optionally rebalances the file system after removing the disks.
The mmdeldisk command has the following two functions:
• Copying unreplicated data off the disks and removing references to the disks (deldisk step).
• Rereplicating or rebalancing blocks across the remaining disks (restripe step).
These two functions can be done in one pass over the file system, or in two passes if the -a option is
specified.
Run the mmdeldisk command when system demand is low.
If a replacement for a failing disk is available, use the mmrpldisk command in order to keep the file
system balanced. Otherwise, use one of these procedures to delete a disk:
• If the file system is replicated, replica copies can be preserved at all times by using the default -r
option or the -b option.
• Using the -m option will not preserve replication during the deldisk step because it will only copy the
minimal amount of data off the disk being deleted so that every block has at least one copy. Also, using
the -a option will not preserve replication during the deldisk step, but will then re-establish
replication during the subsequent restripe step.
• If you want to move all data off the disk before running mmdeldisk, use mmchdisk to suspend all the
disks that will be deleted and run mmrestripefs with the -r or -b option. This step is no longer
necessary, now that mmdeldisk does the same function. If mmdeldisk fails (or is canceled), it leaves
the disks in the suspended state, and mmdeldisk can be retried when the problem that caused
mmdeldisk to stop is corrected.
If some disks in the file system are marked as suspended or to be emptied and you run the mmdeldisk
command against any disk in that file system, an mmrestripefs -r operation is triggered by default. All
the data on both the disks that are being deleted and the suspended or to-be-emptied disks is restriped.
This slows down the mmdeldisk command and triggers unexpected data movement. To avoid this, run
the mmchdisk fsName resume command to return the suspended or to-be-emptied disks to the ready
status before you run the mmdeldisk command.
• If the disk is permanently damaged and the file system is not replicated, or if the mmdeldisk command
repeatedly fails, see the IBM Spectrum Scale: Problem Determination Guide and search for Disk media
failure.
If the last disk in a storage pool is deleted, the storage pool is deleted. The mmdeldisk command is not
permitted to delete the system storage pool. A storage pool must be empty in order for it to be deleted.
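For example, a minimal sketch of the resume-before-delete procedure described earlier; the file system
and disk names are assumptions:

mmchdisk fs1 resume -d "gpfs2nsd;gpfs3nsd"
mmdeldisk fs1 gpfs5nsd -r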
Results
Upon successful completion of the mmdeldisk command, these tasks are completed:
• Data that has not been replicated from the target disks is migrated to other disks in the file system.
• Remaining disks are rebalanced, if specified.
Parameters
Device
The device name of the file system to delete the disks from. File system names need not be fully-
qualified. fs0 is as acceptable as /dev/fs0. This must be the first parameter.
"DiskName[;DiskName...]"
Specifies the names of the disks to be deleted from the file system. If there is more than one disk to
be deleted, delimit each name with a semicolon (;) and enclose the list in quotation marks.
-F DiskFile
Specifies a file that contains the names of the disks (one name per line) to be deleted from the file
system.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes that participate in the restripe of the file system after the specified disks have
been removed. This command supports all defined node classes. The default is all or the current
value of the defaultHelperNodes parameter of the mmchconfig command.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
--inode-criteria CriteriaFile
Specifies the interesting inode criteria flag, where CriteriaFile contains a list of the following flags with
one per line:
BROKEN
Indicates that a file has a data block with all of its replicas on disks that have been removed.
Note: BROKEN is always included in the list of flags even if it is not specified.
dataUpdateMiss
Indicates that at least one data block was not updated successfully on all replicas.
exposed
Indicates an inode with an exposed risk; that is, the file has data where all replicas are on
suspended disks. This could cause data to be lost if the suspended disks have failed or been
removed.
illCompressed
Indicates an inode in which file compression or decompression is deferred, or in which a
compressed file is partly decompressed to allow the file to be written into or memory-mapped.
illPlaced
Indicates an inode with some data blocks that might be stored in an incorrect storage pool.
illReplicated
Indicates that the file has a data block that does not meet the setting for the replica.
metaUpdateMiss
Indicates that there is at least one metadata block that has not been successfully updated to all
replicas.
unbalanced
Indicates that the file has a data block that is not well balanced across all the disks in all failure
groups.
Note: If a file matches any of the specified interesting flags, all of its interesting flags (even those not
specified) will be displayed.
-o InodeResultFile
Contains a list of the inodes that met the interesting inode flags that were specified on the --inode-
criteria parameter. The output file contains the following:
INODE_NUMBER
This is the inode number.
DISKADDR
Specifies a dummy address for later tsfindinode use.
SNAPSHOT_ID
This is the snapshot ID.
ISGLOBAL_SNAPSHOT
Indicates whether or not the inode is in a global snapshot. Files in the live file system are
considered to be in a global snapshot.
INDEPENDENT_FSETID
Indicates the independent fileset to which the inode belongs.
MEMO (INODE_FLAGS FILE_TYPE [ERROR])
Indicates the inode flag and file type that will be printed:
Inode flags:
BROKEN
exposed
dataUpdateMiss
illCompressed
illPlaced
illReplicated
metaUpdateMiss
unbalanced
File types:
BLK_DEV
CHAR_DEV
DIRECTORY
FIFO
LINK
LOGFILE
REGULAR_FILE
RESERVED
SOCK
*UNLINKED*
*DELETED*
Notes:
1. An error message will be printed in the output file if an error is encountered when repairing the
inode.
2. DISKADDR, ISGLOBAL_SNAPSHOT, and FSET_ID work with the tsfindinode tool
(/usr/lpp/mmfs/bin/tsfindinode) to find the file name for each inode. tsfindinode
uses the output file to retrieve the file name for each interesting inode.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
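For example, a sketch of combining the interesting-inode options described above; the file system, disk,
and path names are assumptions, and the criteria file contains flags such as exposed and
illReplicated, one per line:

mmdeldisk fs1 gpfs3nsd -r --inode-criteria /tmp/criteria.txt -o /tmp/affected_inodes.txt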
Options
-a
Specifies that the mmdeldisk command not wait for rereplicating or rebalancing to complete before
returning. When this flag is specified, the mmdeldisk command runs asynchronously and returns
after the file system descriptor is updated and the rebalancing scan is started, but it does not wait for
rebalancing to finish. If no rebalancing is requested (-r option is not specified), this option has no
effect.
If -m is specified, this option has no effect. If -r or -b is specified (or if neither is specified, in which
case -r is the default), then the deldisk step is done using -m, and the restripe step is done using the
specified option.
-b
Rebalances the file system to improve performance. Rebalancing removes file blocks from disks that
are being deleted and attempts to distribute file blocks evenly across the remaining disks of the file
system. In IBM Spectrum Scale 5.0.0 and later, rebalancing is implemented by a lenient round-robin
method that typically runs faster than the previous method of strict round robin. To rebalance the file
system using the strict round-robin method, include the --strict option that is described in the
following text.
--strict
Rebalances the specified files with a strict round-robin method. In IBM Spectrum Scale v4.2.3 and
earlier, rebalancing always uses this method.
Note: This option might result in much more data being moved than with the -r option.
Note: Rebalancing of files is an I/O intensive and time-consuming operation and is important only for
file systems with large files that are mostly invariant. In many cases, normal file update and creation
rebalances a file system over time without the cost of a complete rebalancing.
-c
Specifies that processing continues even in the event that unreadable data exists on the disks being
deleted. Data that has not been replicated is lost. Replicated data is not lost as long as the disks
containing the replication are accessible.
-m
Does minimal data copying to preserve any data that is located only on the disks being removed. This
is the fastest way to get a disk out of the system, but it could reduce replication of some blocks of the
files and metadata.
Note: This might be I/O intensive if there is a lot of data to be copied or re-replicated off the disks that
are being deleted.
-r
Preserves replication of all files and metadata during the mmdeldisk operation (except when the -a
option is specified). This is the default.
Note: This might be I/O intensive if there is a lot of data to be copied or re-replicated off the disks that
are being deleted.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmdeldisk command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Example
To delete gpfs1016nsd from file system fs1 and rebalance the files across the remaining disks, issue
this command:

mmdeldisk fs1 gpfs1016nsd -b
See also
• “mmadddisk command” on page 28
• “mmchdisk command” on page 210
• “mmlsdisk command” on page 489
• “mmrpldisk command” on page 679
Location
/usr/lpp/mmfs/bin
mmdelfileset command
Deletes a GPFS fileset.
Synopsis
mmdelfileset Device FilesetName [-f] [--qos QOSClass]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmdelfileset command deletes a GPFS fileset. When deleting a fileset, consider these points:
• The root fileset cannot be deleted.
• A fileset that is not empty cannot be deleted unless the -f flag is specified.
Remember: If you use the -f flag, make sure that you remove all NFS exports defined on or inside the
junction path of the fileset before you delete it. If you do not remove those exports before you delete
the fileset, you might experience unexpected errors or issues when you create new filesets and exports.
• A fileset that is currently linked into the namespace cannot be deleted until it is unlinked with the
mmunlinkfileset command.
• A dependent fileset can be deleted at any time.
• An independent fileset cannot be deleted if it has any dependent filesets or fileset snapshots.
• Deleting a dependent fileset that is included in a fileset or global snapshot removes it from the active
file system, but it remains part of the file system in a deleted state.
• Deleting an independent fileset that is included in any global snapshots removes it from the active file
system, but it remains part of the file system in a deleted state.
• A fileset in the deleted state is displayed in the mmlsfileset output with the fileset name in
parenthesis. If the -L flag is specified, the latest including snapshot is also displayed. The --deleted
option of the mmlsfileset command can be used to display only deleted filesets.
• The contents of a deleted fileset are still available in the snapshot, through some path name containing
a .snapshots component, because it was saved when the snapshot was created.
• When the last snapshot that includes the fileset has been deleted, the fileset is fully removed from the
file system.
For information on GPFS filesets, see Information Lifecycle Management for GPFS in IBM Spectrum Scale:
Administration Guide.
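For example, a minimal sketch of deleting a fileset that is still linked; the file system and fileset names
are assumptions:

mmunlinkfileset gpfs1 fset1
mmdelfileset gpfs1 fset1
mmlsfileset gpfs1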
Parameters
Device
The device name of the file system that contains the fileset.
File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
FilesetName
Specifies the name of the fileset to be deleted.
-f
Forces the deletion of the fileset. All fileset contents are deleted. Any child filesets are first unlinked.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmdelfileset command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. This sequence of commands illustrates what happens when attempting to delete a fileset that is
linked.
a. Command:
mmlsfileset gpfs1
b. Command:
c. Command:
mmlsfileset gpfs1
2. This sequence of commands illustrates what happens when attempting to delete a fileset that
contains user files.
a. Command:
mmlsfileset gpfs1
b. Command:
c. Command:
mmlsfileset gpfs1
See also
• “mmchfileset command” on page 222
Location
/usr/lpp/mmfs/bin
mmdelfs command
Removes a GPFS file system.
Synopsis
mmdelfs Device [-p]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmdelfs command removes all the structures for the specified file system from the nodes in the
cluster.
Before you can delete a file system using the mmdelfs command, you must unmount it on all nodes.
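For example, a minimal sketch, assuming a file system named fs0 that is still mounted on some nodes:

mmumount fs0 -a
mmdelfs fs0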
Results
Upon successful completion of the mmdelfs command, these tasks are completed on all nodes:
• Deletes the character device entry from /dev.
• Removes the mount point directory where the file system had been mounted.
Parameters
Device
The device name of the file system to be removed. File system names need not be fully-qualified. fs0
is as acceptable as /dev/fs0.
This must be the first parameter.
-p
Indicates that the disks are permanently damaged and the file system information should be removed
from the GPFS cluster data even if the disks cannot be marked as available.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmdelfs command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To delete file system fs0, issue this command:
mmdelfs fs0
GPFS: 6027-573 All data on the following disks of fs0 will be destroyed:
gpfs9nsd
gpfs10nsd
gpfs15nsd
gpfs17nsd
GPFS: 6027-574 Completed deletion of file system fs0.
mmdelfs: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
See also
• “mmcrfs command” on page 315
• “mmchfs command” on page 230
• “mmlsfs command” on page 498
Location
/usr/lpp/mmfs/bin
mmdelnode command
Removes one or more nodes from a GPFS cluster.
Synopsis
mmdelnode {-a | -N Node[,Node...] | NodeFile | NodeClass}
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmdelnode command to delete one or more nodes from the GPFS cluster. You may issue the
mmdelnode command on any GPFS node.
A node cannot be deleted if any of the following are true:
1. It is a primary or secondary GPFS cluster configuration server.
The node being deleted cannot be the primary or secondary GPFS cluster configuration server unless
you intend to delete the entire cluster.
You can determine whether a node is the primary or secondary configuration server by issuing the
mmlscluster command. If the node is listed as one of the servers and you still want to delete it without
deleting the cluster, first use the mmchcluster command to assign another node as the server.
2. Before you can delete a node, unmount all of the GPFS file systems and stop GPFS on the node to be
deleted.
3. Exercise caution when shutting down GPFS on quorum nodes. If the number of remaining quorum
nodes falls below the requirement for a quorum, you will be unable to perform file system operations.
For more information, see Quorum in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
Each GPFS cluster is managed independently, so there is no automatic coordination and propagation of
changes between clusters like there is between the nodes within a cluster. This means that if you
permanently delete nodes that are being used as contact nodes by other GPFS clusters that can mount
your file systems, you should notify the administrators of those GPFS clusters so that they can update
their own environments.
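For example, a minimal sketch of the prerequisite steps for deleting a single node; the node name nodeA
is an assumption:

mmumount all -N nodeA
mmshutdown -N nodeA
mmdelnode -N nodeA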
Results
Upon successful completion of the mmdelnode command, the specified nodes are deleted from the GPFS
cluster.
Parameters
-a
Delete all nodes in the cluster.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the set of nodes to be deleted from the cluster.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
This command does not support a NodeClass of mount.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmdelnode command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
You may issue the mmdelnode command from any node that will remain in the GPFS cluster.
Examples
1. To delete all of the nodes in the cluster, issue this command:
mmdelnode -a
2. To delete nodes k145n12, k145n13, and k145n14 from the cluster, issue this command:

mmdelnode -N k145n12,k145n13,k145n14
See also
• “mmaddnode command” on page 35
• “mmcrcluster command” on page 303
• “mmchconfig command” on page 169
• “mmlsfs command” on page 498
• “mmlscluster command” on page 484
Location
/usr/lpp/mmfs/bin
mmdelnodeclass command
Deletes user-defined node classes.
Synopsis
mmdelnodeclass ClassName[,ClassName...]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmdelnodeclass command to delete existing user-defined node classes.
Parameters
ClassName
Specifies an existing user-defined node class to delete.
If ClassName was used to change configuration attributes with mmchconfig, and the configuration
attributes are still referencing ClassName, then ClassName cannot be deleted. Use the mmchconfig
command to remove the references to ClassName before deleting this user-defined node class.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmdelnodeclass command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To display the current user-defined node classes, issue this command:
mmlsnodeclass --user
To delete the user-defined node class siteA, issue this command:

mmdelnodeclass siteA
To display the updated list of user-defined node classes, issue this command:
mmlsnodeclass --user
See also
• “mmcrnodeclass command” on page 330
• “mmchnodeclass command” on page 248
• “mmlsnodeclass command” on page 512
Location
/usr/lpp/mmfs/bin
mmdelnsd command
Deletes Network Shared Disks (NSDs) from the GPFS cluster.
Synopsis
mmdelnsd {"DiskName[;DiskName...]" | -F DiskFile}
or

mmdelnsd -p NSDId [-N Node[,Node...]]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmdelnsd command serves two purposes:
1. To delete NSDs from the GPFS cluster.
2. To remove the unique NSD volume ID left on a disk after the failure of a previous invocation of the
mmdelnsd command. The NSD had been successfully deleted from the GPFS cluster but there was a
failure to clear the unique volume ID from the disk.
NSDs being deleted cannot be part of any file system. To determine if an NSD belongs to a file system or
not, issue the mmlsnsd -d DiskName command. If an NSD belongs to a file system, either the
mmdeldisk or the mmdelfs command must be issued prior to deleting the NSDs from the GPFS cluster.
NSDs being deleted cannot be tiebreaker disks. To list the tiebreaker disks, issue the mmlsconfig
tiebreakerDisks command. Use the mmchconfig command to assign new tiebreaker disks prior to
deleting NSDs from the cluster. For information on tiebreaker disks, see Quorum in IBM Spectrum Scale:
Concepts, Planning, and Installation Guide.
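For example, before you delete an NSD you can verify that it is not part of a file system and is not a
tiebreaker disk; the NSD name in this sketch is an assumption:

mmlsnsd -d "gpfs47nsd"
mmlsconfig tiebreakerDisks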
Results
Upon successful completion of the mmdelnsd command, these tasks are completed:
• All references to the disks are removed from the GPFS cluster data.
• Each disk is cleared of its unique NSD volume ID.
• On Windows, the disk's GPT partition table is removed leaving the disk Unknown/Not Initialized.
Parameters
DiskName[;DiskName...]
Specifies the names of the NSDs to be deleted from the GPFS cluster. Specify the names generated
when the NSDs were created. Use the mmlsnsd -F command to display disk names. If there is more
than one disk to be deleted, delimit each name with a semicolon (;) and enclose the list of disk names
in quotation marks.
-F DiskFile
Specifies a file containing the names of the NSDs, one per line, to be deleted from the GPFS cluster.
-N Node[,Node...]
Specifies the nodes to which the disk is attached. If no nodes are listed, the disk is assumed to be
directly attached to the local node.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
-p NSDId
Specifies the NSD volume ID of an NSD that needs to be cleared from the disk as indicated by the
failure of a previous invocation of the mmdelnsd command.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmdelnsd command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To delete gpfs47nsd from the GPFS cluster, issue this command:
mmdelnsd "gpfs47nsd"
2. If the mmdelnsd command fails because the disk cannot be found, run mmdelnsd -p NSDId to clear
the NSD volume ID. For example:

mmdelnsd -p COA8910B626630E

This command removes the NSD definition from the GPFS configuration even if the NSD volume ID
cannot be cleared from the physical disk because the disk has been permanently lost.
See also
• “mmcrnsd command” on page 332
• “mmlsnsd command” on page 514
Location
/usr/lpp/mmfs/bin
mmdelsnapshot command
Deletes a GPFS snapshot.
Synopsis
mmdelsnapshot Device [[Fileset]:]Snapshot[,[[Fileset]:]Snapshot...]
     [-j FilesetName[,FilesetName...]] [--qos QOSClass]
     [-N {all | mount | Node[,Node...] | NodeFile | NodeClass}]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmdelsnapshot command to delete a GPFS snapshot.
Once the command is issued, the snapshot is marked for deletion and cannot be recovered.
If the node from which the mmdelsnapshot command is issued or the file system manager node fails,
the snapshot might not be completely deleted. The mmlssnapshot command shows these snapshots
with status DeleteRequired. Reissue the mmdelsnapshot from another node to complete the removal,
or allow the snapshot to be cleaned up automatically by a later mmdelsnapshot command. A snapshot in
this state cannot be accessed.
Any files open in the snapshot are forcibly closed. The user receives an errno of ESTALE on the next file
access.
If a snapshot has file clones, you must delete the file clones or split them from their clone parents before
you delete the snapshot. Use the mmclone split or mmclone redirect command to split file clones.
Use a regular delete (rm) command to delete a file clone. If a snapshot is deleted that contains a clone
parent, any attempts to read a block that refers to the missing snapshot returns an error. A policy file can
be created to help determine whether a snapshot has file clones. See the IBM Spectrum Scale:
Administration Guide for more information about file clones and policy files.
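For example, a minimal sketch of splitting a file clone before deleting the snapshot that contains its
clone parent; the file system, snapshot, and path names are assumptions:

mmclone show /fs1/userA/file2
mmclone split /fs1/userA/file2
mmdelsnapshot fs1 snap1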
In IBM Spectrum Scale 4.2.1 and later, snapshot commands support the specification of multiple
snapshots. Users can easily delete multiple snapshots for maintenance and cleanup operations. Also,
system performance is increased by batching operations and reducing overhead.
In this release, the following new usages of the mmdelsnapshot command have been introduced:
mmdelsnapshot fs [[Fileset]:]Snapshot[,[[Fileset]:]Snapshot...]
Parameters
Device
The device name of the file system for which the snapshot is to be deleted. File system names do not
need to be fully qualified.
Fileset
Specifies the name of the fileset that contains the fileset snapshot to be deleted. If Fileset is not
specified, the mmdelsnapshot command deletes a global snapshot named Snapshot.
Note: Ensure that multiple snapshots and multiple filesets are not used together.
Snapshot
Specifies the name of the snapshot to be deleted.
The snapshot names are separated by a comma.
The snapshot specifier describes global and fileset snapshots. For example, Fileset1:Snapshot1
specifies a fileset snapshot named Snapshot1 for fileset Fileset1. If Fileset1 is empty, Snapshot1 is a
global snapshot named Snapshot1.
Note: Ensure that the snapshot name does not include a colon (:) or a comma (,). If the snapshot name
contains whitespace, the name must be quoted so that the whitespace is treated as part of the snapshot
name. For example, to delete the snapshots snap1 and snap A, use the command
'mmdelsnapshot pk2 "snap A, snap1"'.
-j FilesetName
Specifies the name of the fileset that contains the fileset snapshot to be deleted (SnapshotName). If -
j is not specified, the mmdelsnapshot command attempts to delete a global snapshot named
SnapshotName.
Note: When a list of snapshots separated by a comma (,) is used with the -j option, the fileset is
applicable to each snapshot that does not use the colon (:) syntax. The fileset name must not contain
whitespace.
-N {all | mount | Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes that participate in deleting the snapshot. This command supports all defined node
classes. The default is all or the current value of the defaultHelperNodes parameter of the
mmchconfig command.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
Nonzero
A failure occurred.
Security
You must have root authority to run the mmdelsnapshot command when you delete global snapshots.
Independent fileset owners can run the mmdelsnapshot command to delete snapshots of filesets that
they own.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To delete the snapshot snap1 for the file system fs1, run the following command:

mmdelsnapshot fs1 snap1
Before you issue the command, the directory might have the following structure:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
After you issue the command, the directory has the following structure:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.snapshots
2. To delete the snap1 snapshot from the fset1, fset2, and fset3 filesets for the file system fs1, run the
following command:

mmdelsnapshot fs1 snap1 -j fset1,fset2,fset3
3. To specify the snapshot names that must be deleted from each fileset in the file system fs1, run the
following command:

mmdelsnapshot fs1 fset1:snapA,fset2:snapB,fset3:snapC
See also
• “mmclone command” on page 271
• “mmcrsnapshot command” on page 337
• “mmlssnapshot command” on page 532
• “mmrestorefs command” on page 665
• “mmsnapdir command” on page 711
Location
/usr/lpp/mmfs/bin
mmdf command
Queries available file space on a GPFS file system.
Synopsis
mmdf Device [-d] [-F] [-m] [-P PoolName] [-Y |--block-size {BlockSize | auto}]
[--qos QOSClass]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmdf command to display available file space on a GPFS file system. For each disk in the GPFS
file system, the mmdf command displays this information, by failure group and storage pool:
• The size of the disk.
• The failure group of the disk.
• Whether the disk is used to hold data, metadata, or both.
• Available space in full blocks.
• Available space in subblocks ("fragments").
Displayed values are rounded down to a multiple of 1024 bytes. If the subblock ("fragment") size used by
the file system is not a multiple of 1024 bytes, then the displayed values may be lower than the actual
values. This can result in the display of a total value that exceeds the sum of the rounded values displayed
for individual disks. The individual values are accurate if the subblock size is a multiple of 1024 bytes.
For the file system, the mmdf command displays the total number of inodes and the number available.
The mmdf command may be run against a mounted or unmounted file system.
Notes:
1. This command is I/O intensive and should be run when the system load is light.
2. An asterisk at the end of a line means that this disk is in a state where it is not available for new block
allocation.
Parameters
Device
The device name of the file system to be queried for available file space. File system names need not
be fully-qualified. fs0 is as acceptable as /dev/fs0.
This must be the first parameter.
-d
List only disks that can hold data.
-F
List the number of inodes and how many of them are free.
-m
List only disks that can hold metadata.
-P PoolName
Lists only disks that belong to the requested storage pool.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
--block-size {BlockSize | auto}
BlockSize
Specifies the unit in which the number of blocks is displayed. The value must be of the form [n]K,
[n]M, [n]G, or [n]T, where:
n
Is an optional variable in the range 1 - 1023.
K, M, G, T
Stand for KiB, MiB, GiB, and TiB. For example, if you specify 1M, the number of blocks is
displayed in units of 1 MiB.
auto
Causes the command to automatically scale the number of blocks to an easy-to-read value.
The default value is 1K.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
If you are a root user, the node on which the command is issued must be able to execute remote shell
commands on any other node in the cluster without the use of a password and without producing any
extraneous messages. For more information, see Requirements for administering a GPFS file system in IBM
Spectrum Scale: Administration Guide.
If you are a non-root user, you may specify only file systems that belong to the same cluster as the node
on which the mmdf command was issued.
Examples
1. To query all disks in the fs2 file system that can hold data, issue this command:
mmdf fs2 -d
Disks in storage pool: sp1 (Maximum disk size allowed is 359 GB)
gpfs1002nsd 8897968 1 no yes 8342016 (94%) 928 (0%)
--------- ------------- ------------
(pool total) 8897968 8342016 (94%) 928 (0%)
2. To query all disks in the fs1 file system with the number of blocks automatically scaled to an easy-to-
read value, issue this command:

mmdf fs1 --block-size auto
Disks in storage pool: sp1 (Maximum disk size allowed is 484 GB)
gpfs1006nsd      33.4G     4   no   yes       33.4G (100%)      1.562M ( 0%)
gpfs1007nsd      33.4G     4   no   yes       33.4G (100%)      1.562M ( 0%)
gpfs1008nsd      33.4G     4   no   yes       33.4G (100%)      1.562M ( 0%)
gpfs1010nsd      33.4G     4   no   yes       33.4G (100%)      1.562M ( 0%)
gpfs1011nsd      33.4G     4   no   yes       33.4G (100%)      1.562M ( 0%)
gpfs1012nsd      33.4G     5   no   yes       33.4G (100%)      1.562M ( 0%)
gpfs1013nsd      33.4G     5   no   yes       33.4G (100%)      1.562M ( 0%)
gpfs1014nsd      33.4G     5   no   yes       33.4G (100%)      1.562M ( 0%)
gpfs1015nsd      33.4G     5   no   yes       33.4G (100%)      1.562M ( 0%)
gpfs1016nsd      33.4G     5   no   yes       33.4G (100%)      1.562M ( 0%)
                 -------------                --------------------  -------------------
(pool total)     334G                          334G (100%)      15.62M ( 0%)
                 =============                ====================  ===================
(data)           334G                          334G (100%)      15.62M ( 0%)
(metadata)       167G                         166.2G (100%)     11.56M ( 0%)
                 =============                ====================  ===================
(total)          501G                         500.2G (100%)     27.19M ( 0%)
Inode Information
-----------------
Number of used inodes: 4043
Number of free inodes: 497717
Number of allocated inodes: 501760
Maximum number of inodes: 514048
3. To display only the inode information for the file system fs1, issue this command:

mmdf fs1 -F
Inode Information
-----------------
Number of used inodes: 4043
Number of free inodes: 497717
Number of allocated inodes: 501760
Maximum number of inodes: 514048
See also
• “mmchfs command” on page 230
• “mmcrfs command” on page 315
• “mmdelfs command” on page 369
• “mmlsfs command” on page 498
Location
/usr/lpp/mmfs/bin
mmdiag command
Displays diagnostic information about the internal GPFS state on the current node.
Synopsis
mmdiag [--afm [fileset={all|device[:filesetName]}|gw][-Y]]
[--all [-Y]] [--version [-Y]] [--waiters [-Y]] [--deadlock [-Y]] [--threads [-Y]]
[--lroc [-Y]] [--memory [-Y]] [--network [-Y]] [--config [-Y]] [--trace [-Y]]
[--iohist [verbose] [-Y]] [--tokenmgr [-Y]] [--commands [-Y]]
[--dmapi [session|event|token|disposition|all]]
[--rpc [node[=name]|size|message|all|nn{S|s|M|m|H|h|D|d}] [-Y]]
[--stats [-Y]][--nsd [all] [-Y]] [--eventproducer [-Y]]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmdiag command to query various aspects of the GPFS internal state for troubleshooting and
tuning purposes. The mmdiag command displays information about the state of GPFS on the node where
it is executed. The command obtains the required information by querying the GPFS daemon process
(mmfsd), and thus functions only when the GPFS daemon is running.
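For example, to look at the longest current waiters and the recent I/O history on the local node, you
might issue:

mmdiag --waiters
mmdiag --iohist verbose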
Results
The mmdiag command displays the requested information and returns 0 if successful.
Parameters
--afm
Displays status and statistics of linked AFM and AFM DR filesets that are assigned to the gateway
node. Accepts the following options:
Note: If you do not specify any option, status and statistics of all filesets are displayed.
fileset=all
Displays status and statistics of all active filesets.
fileset=device
Displays status and statistics of all active filesets on a specified device.
fileset=device:filesetName
Displays status and statistics of a specified fileset on the specified device.
gw
Displays gateway statistics like queue length and memory.
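For example, to display the status and statistics of a single fileset, an invocation of the following form can be used (the device and fileset names are illustrative):
mmdiag --afm fileset=fs1:fileset1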
--all
Displays all available information. This option is the same as specifying all of the mmdiag parameters.
--commands
Displays all the commands currently running on the local node.
--config
Displays configuration parameters and their settings. The list of configuration parameters that are
shown here consists of configuration parameters that are known to mmfsd. Note that some
configuration parameters (for example, trace settings) are scanned only by the layers of code above
mmfsd, and those parameters are shown in mmlsconfig output but not here.
While mmlsconfig displays only a subset of configuration parameters (generally those parameters
that have nondefault settings), the list here shows a larger parameter set. All of the documented
mmfsd configuration parameters are shown, plus some of the undocumented parameters (generally
those parameters that are likely to be helpful in tuning and troubleshooting).
Note that parameter values that are shown here are those parameters that are currently in effect (as
opposed to the values shown in mmlsconfig output, which can show the settings that become
effective on the next GPFS restart).
--deadlock
Displays the longest waiters that exceed the deadlock detection thresholds.
If a deadlock situation occurs, administrators can use this information from all nodes in a cluster to
help decide how to break up the deadlock.
--dmapi
Displays various DMAPI information. If no other options are specified, summary information is
displayed for sessions, pending events, cached tokens, stripe groups, and events that are waiting for
reply. The --dmapi parameter accepts the following options:
session
Displays a list of sessions.
event
Displays a list of pending events.
token
Displays a list of cached tokens, stripe groups, and events that are waiting for reply.
disposition
Displays the DMAPI disposition for events.
all
Displays all of the session, event, token, and disposition information with more details.
Note: -Y is not supported with the --dmapi option.
--eventproducer
Displays statistics for file audit logging and clustered watch folder producers. The statistics include
counts of how many messages have been sent, how many messages have been delivered (the target
sink has acknowledged that the message has been received), how many messages the producer failed
to deliver, the number of bytes sent, a breakdown of the types of messages that were sent and
delivered, and information on the status of the producer. For more information about the producer
state and state changes, use the mmhealth command.
--iohist [verbose]
Displays recent I/O history. Information about I/O requests recently submitted by GPFS code is
shown here. It can provide insight into various aspects of GPFS I/O, such as the type of data or
metadata being read or written, the distribution of I/O sizes, and I/O completion times for individual
I/Os. This information can be useful in performance tuning and troubleshooting.
verbose
Displays additional columns of information info1, info2, context, and thread. The contents
of the columns are as follows:
info1, info2
The contents of columns info1 and info2 depend on the buffer type. The buffer type is
displayed in the Buf type column of the command output:
Table 22. Contents of columns info1 and info2 depending on the value in column Buf type

Buf type (Buffer type)                          info1                           info2
data                                            The inode number of the file    The block number of the file
metadata                                        The inode number of the file    (For internal use by IBM)
LLIndBlock                                      The inode number of the file    (For internal use by IBM)
inode                                           (For internal use by IBM)       The inode number of the file
Other types, such as diskDesc, sgDesc, others   (For internal use by IBM)       (For internal use by IBM)
context
The I/O context that started this I/O.
thread
The name of the thread that started this I/O.
The node that the command is issued from determines the I/O completion time that is shown.
If the command is issued from a Network Shared Disk (NSD) server node, the command shows the
time that is taken to complete or serve the read or write I/O operations that are sent from the client
node. This refers to the latency of the operations that are completed on the disk by the NSD server.
If the command is issued on an NSD client node that does not have local access to the disk, the
command shows the complete time (as requested by the client node) that is taken by the read or write
I/O operations to complete. This refers to the latency of the I/O request to the NSD server plus the
latency of the I/O operations that are completed on the disk by the NSD server.
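For example, to include the additional columns in the I/O history output (an illustrative invocation):
mmdiag --iohist verbose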
--lroc
Displays status and statistics for local read-only cache (LROC) devices. This parameter is valid for
x86_64, PPC64, and PPC64LE Linux nodes.
--memory
Displays information about mmfsd memory usage. Several distinct memory regions are allocated and
used by mmfsd, and it can be important to know the memory usage situation for each one.
Heap memory that is allocated by mmfsd
This area is managed by the OS and is not associated with a preset limit that is enforced by GPFS.
Memory pools 1 and 2
Both of these pools refer to a single memory area, also known as the shared segment. It is used to
cache various kinds of internal GPFS metadata and for many other internal uses. This memory
area is allocated by a special, platform-specific mechanism and is shared between user space and
kernel code. The preset limit on the maximum shared segment size, current usage, and some prior
usage information are shown here.
Memory pool 3
This area is also known as the token manager pool. This memory area is used to store the token
state on token manager servers. The preset limit on the maximum memory pool size, current
usage, and some prior-usage information are shown here.
This information can be useful when you are troubleshooting ENOMEM errors that are returned by
GPFS to a user application and memory allocation failures reported in a GPFS log file.
--network
Displays information about mmfsd network connections and pending Remote Procedure Calls (RPCs).
Basic information and statistics about all existing mmfsd network connections to other nodes is
displayed, including information about broken connections. If any RPCs are pending (that is, sent but
not yet replied to), the information about each one is shown, including the list of RPC destinations and
the status of the request for each destination. This information can be helpful in following a multinode
chain of dependencies during a deadlock or performance-problem troubleshooting.
--nsd [all]
Displays status and queue statistics for NSD queues that contain pending requests.
all
Displays status and queue statistics for all NSD queues.
--rpc
Displays RPC performance statistics. The --rpc parameter accepts the following options:
node[=name]
Displays all per node statistics (channel wait, send time TCP, send time verbs, receive time TCP,
latency TCP, latency verbs, and latency mixed). If name is specified, all per node statistics for just
the specified node are displayed.
size
Displays per size range statistics.
message
Displays per message type RPC execution time.
all
Displays everything.
nn{S|s|M|m|H|h|D|d}
Displays per node RPC latency statistics for the latest number of intervals, which are specified by
nn, for the interval specified by one of the following characters:
S|s
Displays second intervals only.
M|m
Displays first the second intervals since the last-minute boundary followed by minute
intervals.
H|h
Displays first the second and minute intervals since their last minute and hour boundary
followed by hour intervals.
D|d
Displays first the second, minute, and hour intervals since their last minute, hour, and day
boundary followed by day intervals.
Averages are displayed as a number of milliseconds with three decimal places (one-microsecond
granularity).
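For example, to display per-node RPC latency statistics for the latest 10 one-second intervals (an illustrative invocation):
mmdiag --rpc 10s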
--stats
Displays some general GPFS statistics.
GPFS uses a diverse array of objects to maintain the file system state and cache various types of
metadata. The statistics about some of the more important object types are shown here.
OpenFile
This object is needed to access an inode. The target maximum number of cached OpenFile objects
is governed by the maxFilesToCache configuration parameter. Note that more OpenFile objects
can be cached, depending on the workload.
CompactOpenFile
These objects contain an abbreviated form of an OpenFile, and are collectively known as stat
cache. The target maximum number of cached CompactOpenFile objects is governed by the
maxStatCache parameter of the mmchconfig command.
OpenInstance
This object is created for each open file instance (file or directory that is opened by a distinct
process).
BufferDesc
This object is used to manage buffers in the GPFS page pool.
indBlockDesc
This object is used to cache indirect block data.
All of these objects use the shared segment memory. For each object type, a preset target exists,
which is derived from configuration parameters and the memory available in the shared segment. The
information about current object usage can be helpful in performance tuning.
--threads
Displays mmfsd thread statistics and the list of active threads. For each thread, its type and kernel
thread ID are shown. All non-idle mmfsd threads are shown. For those threads that are currently
waiting for an event, the wait reason and wait time in seconds are shown. This information provides
more detail than the data displayed by mmdiag --waiters.
--tokenmgr
Displays information about token management. For each mounted GPFS file system, one or more
token manager nodes is appointed. The first token manager is always colocated with the file system
manager, while other token managers can be appointed from the pool of nodes with the manager
designation. The information that is shown here includes the list of currently appointed token
manager nodes and, if the current node is serving as a token manager, some statistics about prior
token transactions.
--trace
Displays current trace status and trace levels. During GPFS troubleshooting, it is often necessary to
use the trace subsystem to obtain the debug data necessary to understand the problem. See Trace
facility in IBM Spectrum Scale: Problem Determination Guide. It is important to have trace levels set
correctly, per instructions provided by the IBM Support Center. The information that is shown here
makes it possible to check the state of tracing and to see the trace levels currently in effect.
--version
Displays information about the GPFS build currently running on this node. This information helps in
troubleshooting installation problems. The information that is displayed here can be more
comprehensive than the version information that is available from the OS package management
infrastructure, in particular when an e-fix is installed.
--waiters
Displays mmfsd threads that are waiting for events. This information can be helpful in troubleshooting
deadlocks and performance problems. For each thread, the thread name, wait time in seconds, and
wait reason are typically shown. Only non-idle threads that are currently waiting for some event to
occur are displayed. Note that only mmfsd threads are shown; any application I/O threads that might
be waiting in GPFS kernel code would not be present here.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmdiag command.
Examples
1. To display a list of waiters, enter the following command:
mmdiag --waiters
In this example, all waiters have a short wait duration and represent a typical snapshot of normal GPFS
operation.
2. To display information about memory use, enter the following command:
mmdiag --memory
In this example, a typical memory usage picture is shown. None of the memory pools are close to
being full, and no prior allocation failures occurred.
3. To display information about the network, enter the following command:
mmdiag --network
Pending messages:
(none)
Inter-node communication configuration:
tscTcpPort 1191
my address 9.114.53.217/25 (eth2) <c0n2>
my addr list 9.114.53.217/25 (eth2)
my node number 4
TCP Connections between nodes:
Device null:
hostname node destination status err sock sent(MB) recvd(MB)
ostype
c941f1n05.pok.stglabs.ibm.com <c0n1> 9.114.78.25 broken 233 -1 0 0
Linux/L
Device eth2:
hostname node destination status err sock sent(MB) recvd(MB)
ostype
c941f3n03.pok.stglabs.ibm.com <c0n0> 9.114.78.43 connected 0 61 0 0
Linux/L
c870f4ap06 <c0n3> 9.114.53.218 connected 0 64 0 0
Linux/B
Connection details:
<c0n1> 9.114.78.25/0 (c941f1n05.pok.stglabs.ibm.com)
connection info:
retry(success): 0(0)
<c0n0> 9.114.78.43/0 (c941f3n03.pok.stglabs.ibm.com)
connection info:
retry(success): 0(0)
tcp connection state: established tcp congestion state: open
packet statistics:
lost: 0 unacknowledged: 0
retrans: 0 unrecovered retrans: 0
network speed(µs):
rtt(round trip time): 456 medium deviation of rtt: 127
pending data statistics(byte):
read/write calls pending: 0
GPFS Send-Queue: 0 GPFS Recv-Queue: 0
Socket Send-Queue: 0 Socket Recv-Queue: 0
<c0n3> 9.114.53.218/0 (c870f4ap06)
connection info:
retry(success): 0(0)
tcp connection state: established tcp congestion state: open
packet statistics:
lost: 0 unacknowledged: 0
retrans: 0 unrecovered retrans: 0
network speed(µs):
rtt(round trip time): 8813 medium deviation of rtt: 13754
pending data statistics(byte):
read/write calls pending: 0
GPFS Send-Queue: 0 GPFS Recv-Queue: 0
Socket Send-Queue: 0 Socket Recv-Queue: 0
Device details:
devicename speed mtu duplex rx_dropped rx_errors tx_dropped tx_errors
eth2 1000 1500 full 0 0 0 0
diag verbs: VERBS RDMA class not initialized
4. To display information about status and statistics of all AFM and AFM DR relationships, enter the
following command:
mmdiag --afm
The command displays output similar to the following example:
5. To display gateway statistics, enter the following command:
mmdiag --afm gw
The command displays output similar to the following example:
Location
/usr/lpp/mmfs/bin
mmdsh command
Runs commands on multiple nodes or network connected hosts at the same time.
Synopsis
mmdsh -N {Node[,Node...] | NodeFile | NodeClass}
[-l LoginName] [-i] [-s] [-r RemoteShellPath]
[-v [-R ReportFile]] [-f FanOutValue] Command
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmdsh command to remotely execute a command concurrently on each of the nodes that are
specified in the -N option.
Note: For security reasons, the mmdsh command is limited to the list of allowable remote commands
when sudo wrappers are implemented. For more information on how to configure sudo wrappers, see
Configuring sudo in IBM Spectrum Scale: Administration Guide.
CAUTION: The mmdsh command runs any authorized command that you specify concurrently
against the list of nodes that you specify. To avoid accidentally damaging or corrupting your
clusters or file systems, ensure that you have specified the correct command and the correct list of
nodes before you run mmdsh.
Parameters
-N {Node[,Node...] | NodeFile | NodeClass}
Runs the command on the nodes in the given node specification. The node specification
argument can be a comma-separated list of nodes, a node file, or a node class. The nodes in the list or
the file can be specified as long or short admin or daemon node names, node numbers, node number
ranges, or IP addresses.
-l LoginName
Allows the user to specify a login name for the nodes. The login name is entered on the command
line.
-i
Displays the set of nodes before the command is run on those nodes.
-s
Suppresses the prepending of the hostname string to each line of output generated by running the
command on the remote node.
-r RemoteShellPath
Specifies the full path of the remote shell command that must be used.
-v
Verifies that a node is reachable with an ICMP echo command (network ping) before adding it to the
set of nodes on which the command must be run.
-R ReportFile
Reports the list of hosts removed from the working collective when host verification (host ping) fails.
The report is written to the specified file with one host per line. The report is generated only when
combined with the -v parameter.
-f FanOutValue
Specifies a fanout value used for concurrent execution. The default value is 64.
Command
To be run on the remote hosts.
Exit status
The return code from this command does not reliably indicate the success or failure of the command that
is executed on the remote node. To determine the overall command status, review the messages that are
returned by the remote commands.
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmdsh command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To run the command on a list of nodes specified in NodeFile, run the following command:
2. To run the command on a specified list of hosts, run the following command:
mmdsh -N host1,host2,host3 ls
3. To run the command on the quorum nodes specified by a node class, run the following command:
4. To run the command on the hosts listed in the hostfile, run the following command:
5. To run the command on a specified node along with a log-in name, run the following command:
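Illustrative forms for examples 1, 3, 4, and 5 are shown below; the node file, node class, host, and login names are hypothetical, and any authorized command can be substituted for date:
mmdsh -N /tmp/nodefile date
mmdsh -N quorumclass date
mmdsh -N /tmp/hostfile date
mmdsh -N host1 -l admin date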
mmeditacl command
Creates or changes a GPFS access control list.
Synopsis
mmeditacl [-d] [-k {nfs4 | posix | native}] Filename
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmeditacl command for interactive editing of the ACL of a file or directory. This command uses
the default editor, specified in the EDITOR environment variable, to display the current access control
information, and allows the file owner to change it. The command verifies the change request with the
user before making permanent changes.
This command cannot be run from a Windows node.
The EDITOR environment variable must contain a complete path name, for example:
export EDITOR=/bin/vi
For information about NFS V4 ACLs, see the topics Managing GPFS access control lists and NFS and GPFS
in the IBM Spectrum Scale: Administration Guide.
Users may need to see ACLs in their true form as well as how they are translated for access evaluations.
There are four cases:
1. By default, mmeditacl returns the ACL in a format consistent with the file system setting, specified
using the -k flag on the mmcrfs or mmchfs commands.
• If the setting is posix, the ACL is shown as a traditional ACL.
• If the setting is nfs4, the ACL is shown as an NFS V4 ACL.
• If the setting is all, the ACL is returned in its true form.
2. The command mmeditacl -k nfs4 always produces an NFS V4 ACL.
3. The command mmeditacl -k posix always produces a traditional ACL.
4. The command mmeditacl -k native always shows the ACL in its true form regardless of the file
system setting.
The following describes how mmeditacl works for POSIX and NFS V4 ACLs:
In the case of NFS V4 ACLs, there is no concept of a default ACL. Instead, there is a single ACL and the
individual access control entries can be flagged as being inherited (either by files, directories, both, or
neither). Consequently, specifying the -d flag for an NFS V4 ACL is an error. By its nature, storing an NFS
V4 ACL implies changing the inheritable entries (the GPFS default ACL) as well.
Depending on the file system's -k setting (posix, nfs4, or all), mmeditacl may be restricted. The
mmeditacl command is not allowed to store an NFS V4 ACL if -k posix is in effect, and is not allowed
to store a POSIX ACL if -k nfs4 is in effect. For more information, see the description of the -k flag for
the mmchfs, mmcrfs, and mmlsfs commands.
Parameters
Filename
The path name of the file or directory for which the ACL is to be edited. If the -d option is specified,
Filename must contain the name of a directory.
Options
-d
Specifies that the default ACL of a directory is to be edited.
-k {nfs4 | posix | native}
nfs4
Always produces an NFS V4 ACL.
posix
Always produces a traditional ACL.
native
Always shows the ACL in its true form regardless of the file system setting.
This option should not be used for routine ACL manipulation. It is intended to provide a way to show
the translations that are done, for example, how a POSIX ACL is translated for NFS V4. Be aware that if
the -k nfs4 flag is used, but the file system does not allow NFS V4 ACLs, you will not be able to store
the ACL that is returned. If the file system does support NFS V4 ACLs, the -k nfs4 flag is an easy way
to convert an existing POSIX ACL to nfs4 format.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You may issue the mmeditacl command only from a node in the GPFS cluster where the file system is
mounted.
The mmeditacl command may be used to display an ACL. POSIX ACLs may be displayed by any user with
access to the file or directory. NFS V4 ACLs have a READ_ACL permission that is required for non-
privileged users to be able to see an ACL. To change an existing ACL, the user must either be the owner,
the root user, or someone with control permission (WRITE_ACL is required where the existing ACL is of
type NFS V4).
Examples
To edit the ACL for a file named project2.history, issue this command:
mmeditacl project2.history
The current ACL entries are displayed using the default editor, provided that the EDITOR environment
variable specifies a complete path name. When the file is saved, the system displays information similar
to:
See also
• “mmdelacl command” on page 356
• “mmgetacl command” on page 422
• “mmputacl command” on page 608
Location
/usr/lpp/mmfs/bin
mmedquota command
Sets quota limits.
Synopsis
mmedquota {-u [-p [ProtoFileset:]ProtoUser] [Device:Fileset:]User ... |
-g [-p [ProtoFileset:]ProtoGroup] [Device:Fileset:]Group ... |
-j [-p ProtoFileset] Device:Fileset ... |
-d {-u User ... | -g Group ... | -j Device:Fileset ...} |
-t {{-u | -g | -j} [--reset]}}
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
The mmedquota command serves two purposes:
1. Sets or changes quota limits or grace periods for users, groups, and filesets in the cluster from which
the command is issued.
2. Reestablishes user, group, or fileset default quotas for all file systems with default quotas enabled in
the cluster.
Important: Quota limits are not enforced for root users (by default). For information on managing quotas,
see Managing GPFS quotas in the IBM Spectrum Scale: Administration Guide.
The mmedquota command displays the current values for these limits, if any, and prompts you to enter
new values with the default editor:
• The current block usage: The amount of disk space that is used by this user, group, or fileset, in 1 KB
units; display only.
• The current inode usage: Display only.
• Inode soft limit.
• Inode hard limit.
• Block soft limit: The amount of disk space that this user, group, or fileset is allowed to use during normal
operation.
• Block hard limit: The amount of disk space that this user, group, or fileset is allowed to use during the
grace period.
Note on block limits:
– The command displays the current block limits in KB.
– When you specify a block limit, you can add a suffix to the number to indicate the unit of measure: g,
G, k, K, m, M, p, P, t, or T. If you do not specify a suffix, the command assumes that the number is in
bytes.
– The maximum block limit is 999999999999999 K (about 931322 T). For values greater than
976031318016 K (909 T) you must specify the equivalent value with the suffix K, M, or G or without
any suffix.
Note: A block or inode limit of 0 indicates no limit.
The mmedquota command waits for the edit window to be closed before it checks and applies the new
values. If an incorrect entry is made, reissue the command and enter the correct values.
You can also use the mmedquota command to change the file system-specific grace periods for block and
file usage if the default of one week is unsatisfactory. The grace period is the time during which users can
exceed the soft limit. If the user, group, or fileset does not show reduced usage below the soft limit before
the grace period expires, the soft limit becomes the new hard limit.
When you set quota limits for a file system, consider replication in the file system. See the topic Listing
quotas in the IBM Spectrum Scale: Administration Guide.
The EDITOR environment variable must contain a complete path name, for example:
export EDITOR=/bin/vi
Parameters
Device
Specifies the device name of the file system for which quota information is to be displayed. File
system names need not be fully qualified. fs0 is as acceptable as /dev/fs0.
Fileset
Specifies the name of a fileset that is on the Device for which quota information is to be displayed.
User
Name or user ID of target user for quota editing.
Group
Name or group ID of target group for quota editing.
-d
Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set
by a previous invocation of the mmedquota command.
-g
Sets quota limits or grace times for groups.
-j
Sets quota limits or grace times for filesets.
-p
Applies already-established limits to a particular user, group, or fileset.
When invoked with the -u option, [ProtoFileset:]ProtoUser limits are automatically applied to the
specified User or space-delimited list of users.
When invoked with the -g option, [ProtoFileset:]ProtoGroup limits are automatically applied to the
specified Group or space-delimited list of groups.
When invoked with the -j option, ProtoFileset limits are automatically applied to the specified fileset
or space-delimited list of fileset names.
You can specify any user as a ProtoUser for another User, or any group as a ProtoGroup for another
Group, or any fileset as a ProtoFileset for another Fileset.
-p cannot propagate a prototype quota from a user, group, or fileset on one file system to a user,
group, or fileset on another file system.
-t
Sets the grace period during which usage can exceed the soft limit before the soft limit is imposed as a hard limit.
The default grace period is one week.
This flag is followed by one of the following flags: -u, -g, or -j to specify whether the changes apply
to users, groups, or filesets.
-u
Sets quota limits or grace times for users.
--reset
With this option, when grace time is modified, all relative quota entries are scanned and updated if
necessary; without this option, when grace time is updated, quota entries are not scanned and
updated.
Note:
• The maximum files limit is 2147483647.
• See the Note on block limits earlier in this topic.
• If you want to display the current grace period, issue the command mmrepquota -t.
Exit status
0
Successful completion.
Nonzero
A failure has occurred.
Security
You must have root authority to run the mmedquota command.
GPFS must be running on the node from which the mmedquota command is issued.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To set user quotas for user ID pfs001, issue this command:
mmedquota -u pfs001
2. To reset default group quota values for the group blueteam, issue this command:
mmedquota -d -g blueteam
To verify the change, issue this command:
mmrepquota -q fs1
3. To change the grace periods for all users, issue this command:
mmedquota -t -u
4. To set user quotas for device gpfs2, fileset fset3, and userid pfs001, issue this command:
mmedquota -u gpfs2:fset3:pfs001
5. To apply already-established limits of user pfs002 to user pfs001, issue this command:
6. To apply already-established limits of user pfs002 in fileset fset2 to user pfs001 in fileset fset1
and file system fs1, issue this command:
7. To apply an already-established fileset quota (from fileset1 to fileset2) in two different file
systems (gpfstest1 and gpfstest2), issue this command:
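Illustrative forms for examples 5, 6, and 7 are shown below; the invocations are sketches that follow the synopsis, using the user, fileset, and file system names from the example descriptions:
mmedquota -u -p pfs002 pfs001
mmedquota -u -p fset2:pfs002 fs1:fset1:pfs001
mmedquota -j -p fileset1 gpfstest1:fileset2 gpfstest2:fileset2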
See also
• “mmcheckquota command” on page 218
• “mmdefedquota command” on page 342
• “mmdefquotaoff command” on page 346
• “mmdefquotaon command” on page 349
• “mmlsquota command” on page 527
• “mmquotaon command” on page 644
• “mmquotaoff command” on page 641
• “mmrepquota command” on page 656
Location
/usr/lpp/mmfs/bin
mmexportfs command
Retrieves the information needed to move a file system to a different cluster.
Synopsis
mmexportfs {Device | all} -o ExportfsFile
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmexportfs command, in conjunction with the mmimportfs command, can be used to move one or
more GPFS file systems from one GPFS cluster to another GPFS cluster, or to temporarily remove file
systems from the cluster and restore them at a later time. The mmexportfs command retrieves all
relevant file system and disk information and stores it in the file specified with the -o parameter. This file
must later be provided as input to the mmimportfs command. When running the mmexportfs
command, the file system must be unmounted on all nodes.
When all is specified in place of a file system name, any disks that are not associated with a file system
will be exported as well.
Exported file systems remain unusable until they are imported back with the mmimportfs command to
the same or a different GPFS cluster.
Results
Upon successful completion of the mmexportfs command, all configuration information pertaining to the
exported file system and its disks is removed from the configuration data of the current GPFS cluster and
is stored in the user specified file ExportfsFile.
Parameters
Device | all
The device name of the file system to be exported. File system names need not be fully-qualified. fs0
is as acceptable as /dev/fs0. Specify all to export all GPFS file systems, as well as all disks that do
not currently belong to a file system.
If the specified file system device is an IBM Spectrum Scale RAID-based file system, then all affected
IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered
arrays, vdisks, and any other file systems that are based on these objects. For more information about
IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration.
This must be the first parameter.
-o ExportfsFile
The path name of a file to which the file system information is to be written. This file must be provided
as input to the subsequent mmimportfs command.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmexportfs command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To export all file systems in the current cluster, issue this command:
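(An invocation of the following form, in which the output file name is illustrative, produces output similar to what follows.)
mmexportfs all -o /tmp/allfs.exp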
mmexportfs: Processing disks that do not belong to any file system ...
mmexportfs: 6027-1371 Propagating the cluster configuration data to all
affected nodes. This is an asynchronous process.
See also
• “mmimportfs command” on page 457
Location
/usr/lpp/mmfs/bin
mmfsck command
Checks and repairs a GPFS file system.
Synopsis
or
The file system must be unmounted before you can run the mmfsck command with any option other than
-o.
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmfsck command in offline mode is intended to be used only in situations where disk or
communications failures cause MMFS_FSSTRUCT error log entries to be issued, or where you know that
disks have been forcibly removed or are otherwise permanently unavailable for use in the file system and
you see other unexpected symptoms. In general, it is unnecessary to run mmfsck in offline mode except
under the direction of the IBM Support Center. For more information about error logs, see the topic
Operating system error logs in the IBM Spectrum Scale: Problem Determination Guide.
Note: When the mmhealth command displays an fsstruct error, the command prompts you to run a file
system check. When the problem is resolved, issue the following command to clear the fsstruct error
from the mmhealth command. You must specify the file system name twice:
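A minimal sketch of such a clearing command, assuming the mmsysmonc event interface and a file system named fs1 (both the interface form and the name are assumptions):
mmsysmonc event filesystem fsstruct_fixed fs1 fs1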
If neither the -n nor -y flag is specified, the mmfsck command runs interactively and prompts you for
permission to repair each consistency error as it is reported. It is suggested that you run the mmfsck
command interactively (the default) in all but the most severely damaged file systems.
I/O errors or error messages with an instruction to run the mmfsck command might indicate file system
inconsistencies. If so, issue the mmfsck command to check file system consistency and to interactively
repair the file system.
For information about file system maintenance and repair, see the topic Checking and repairing a file
system in the IBM Spectrum Scale: Administration Guide. In online mode the mmfsck command checks for
the following inconsistencies:
• Blocks that are marked as allocated but that do not belong to any file (lost blocks). The corrective action
is to mark the block free in the allocation map. A possible indicator of lost blocks is that I/O operations
fail with an out-of-space error after repeated node failures.
• Corruptions in the block allocation map. The corrective action is to repair the corrupted blocks. This
check detects and repairs structural corruptions, such as corrupt block headers and corrupt disk
addresses. It does not detect and repair non-structural corruptions, such as bad allocation map bits.
Possible indicators of corruptions in the block allocation map are MMFS_FSSTRUCT entries in the
system logs after an attempt to create or delete a file.
In offline mode the command checks for the two inconsistencies in the previous list and also checks for
the following inconsistencies:
• Files for which an inode is allocated and no directory entry exists (orphaned files). The corrective action
is to create directory entries for these files in a lost+found subdirectory of the fileset to which the
orphaned file or directory belongs. The index number of the inode is assigned as the name. If you do not
allow the mmfsck command to reattach an orphaned file, it asks for permission to delete the file.
• Directory entries that point to an inode that is not allocated. The corrective action is to remove the
directory entry.
• Incorrectly formed directory entries. A directory file contains the inode number and the generation
number of the file to which it refers. When the generation number in the directory does not match the
generation number that is stored in the inode of the file, the corrective action is to remove the directory
entry.
• Incorrect link counts on files and directories. The corrective action is to update them with accurate
counts.
• Policy files that are not valid. The corrective action is to delete the file.
• Various problems that are related to filesets. Such problems include missing or corrupted fileset
metadata, inconsistencies in directory structure related to filesets, missing or corrupted fileset root
directories, or other problems in internal data structures. The repaired filesets are renamed as Fileset
FilesetId and put into unlinked state.
You can also run the mmfsck command in read-only offline mode to get a list of inodes within filesets that
might be deleted or modified by the repair. You can mount the file system in read-only mode and run the
tsfindinode command to find the path names (which allow you to restore a file from backup after
repair) of all affected inodes. Then run mmfsck again in repair mode to fix any problems with the file
system. If you do not need to know the path names of affected files, you can save time by running a
full-scan mmfsck repair (-y) directly.
Use the --patch repair option to expedite an mmfsck repair by using data for corrupt inodes that was
detected during a previous run of read-only offline mmfsck instead of scanning the file system again for
corruptions. Generally, the --patch option is much faster than a full mmfsck scan repair. However:
1. The --patch option is faster only for large file systems. For smaller file systems you can use either the
--patch option or the -y option.
2. The --patch option uses a single thread to repair the file system. This can be slower than the full scan
repair of the file system if there are many (more than 10,000) corruptions.
3. The --patch option is safe only if the file system has not been modified after generating the --
patch-file from a previous read-only offline mmfsck run. Because the --patch option uses a list of
known corruptions, if the file system is changed (for example, by mounting it as read-write after the
read-only mmfsck run), it does not know about any new corruptions that are introduced because of
existing inconsistencies in the file system. This means that it can only perform a partial repair.
Important: This might give the impression that file system is clean even if it is not.
4. The --patch option cannot be used to repair certain types of corruptions in system files. In some
cases, you might have to run a full scan repair. This can be determined by viewing the last record of the
--patch-file that contains the string need_full_fsck_scan:
• If it is false, the --patch option can be used.
• If it is true, the -y option must be used.
If you do plan to run read-only offline mmfsck and are not sure which approach to consider, use the --
patch-file option. Once the patch file is generated, you can use it to repair the file system.
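A minimal sketch of this workflow, assuming a file system named fs1 and a patch file location of /tmp/fs1.patch (both illustrative):
mmfsck fs1 -n --patch-file /tmp/fs1.patch
mmfsck fs1 --patch --patch-file /tmp/fs1.patch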
If you are repairing a file system because of node failure and if quotas are enabled in the file system, it is a
good idea to run the mmcheckquota command to make sure that the quota accounting is consistent.
Indications that might lead you to run the mmfsck command include the following events:
• An MMFS_FSSTRUCT along with an MMFS_SYSTEM_UNMOUNT error log entry on any node that indicates
that some critical piece of the file system is inconsistent. For more information about error logs, see the
topic Operating system error logs in the IBM Spectrum Scale: Problem Determination Guide.
• Disk media failures.
• Partial disk failure.
• EVALIDATE=214, which indicates that an invalid checksum or other consistency check failure either
occurred on a disk data structure, or was reported in an error log, or was returned to an application.
For more information on recovery actions and how to contact the IBM Support Center, see the IBM
Spectrum Scale: Problem Determination Guide.
If you are running the online mmfsck command to free allocated blocks that do not belong to any files,
plan to repair the file system when system demand is low. File system repairs are I/O intensive and can
affect system performance.
While the mmfsck command is working, you can run another instance of mmfsck with the --status-
report parameter at any time to display a consolidated status report from all the nodes that are
participating in the mmfsck run. For more information, see the --status-report parameter.
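For example, from any node in the cluster (the file system name is illustrative):
mmfsck fs1 --status-report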
Results
If the file system is inconsistent, the mmfsck command displays information about the inconsistencies
and, depending on the option that is entered, might prompt you for permission to repair them. The
mmfsck command tries to avoid actions that might result in loss of data. However, in some cases it might
report the destruction of a damaged file.
All corrective actions except recovering lost disk blocks (blocks that are marked as allocated but do not
belong to any file) require that the file system be unmounted on all nodes. If the mmfsck command is run
on a mounted file system, lost blocks are recovered but any other inconsistencies are only reported, not
repaired.
If a bad disk is detected, the mmfsck command stops the disk and writes an entry to the error log. The
operator must manually start and resume the disk when the problem is fixed.
The file system must be unmounted on all nodes before the mmfsck command can repair file system
inconsistencies.
When the command is running in verbose or semi-verbose mode, the command provides a summary of
the errors and the severity of each error:
CRITICAL
The command found a critical corruption in the file system. Using the file system without repairing the
corruption might cause more corruptions or file system panics.
NONCRITICAL
The command found a non-critical corruption in the file system. You can use the file system without
repairing the corruption, but some metadata might not be accessible.
HARMLESS
The command found a harmless problem in the file system that indicates that some unused metadata
can be freed to reclaim space.
Parameters
Device
The device name of the file system to be checked and repaired. File system names need not be fully
qualified. fs0 is as acceptable as /dev/fs0.
This parameter must be the first parameter.
-n
Specifies a no response to all file system error repair prompts from the mmfsck command. This option
reports inconsistencies but it does not change the file system. To save this information, redirect it to
an output file when you issue the mmfsck command.
Note: If the mmfsck command is run offline with the -n parameter and it detects errors, it panics the
file system to force the file system to go through cleanup before any new command can be started.
-y
Specifies a yes response to all file system error repair prompts from the mmfsck command. Use this
option only on severely damaged file systems. It allows the mmfsck command to take any action that
is necessary for repairs.
--patch
Specifies that the file system will be repaired using the information stored in the patch file that is
specified with --patch-file FileName.
Note: Use of the --patch option causes the command to run only on the node where the command is
executed as a single instance. Any information provided with the -N option is ignored.
--patch-file FileName
Specifies the name of a patch file. When the --patch parameter is not specified, information about
file system inconsistencies (detected during an mmfsck run with the -n parameter) is stored in the
patch file that is specified by FileName. FileName must be accessible from the file system manager node. The
information that is stored in the patch file can be viewed as a report of the problems in the file system.
For more information about patch files, see the topic Checking and repairing a file system in the IBM
Spectrum Scale: Administration Guide.
When this option is specified with the --patch parameter, the information in the patch file is read
and used to repair the file system.
-c
If the file system log is lost and the file system is replicated, this option causes the mmfsck command
to attempt corrective action by comparing the replicas of metadata and data. If this error condition
occurs, it is indicated by an error log entry.
-m
Has the same meaning as -c, except that mmfsck checks only the metadata replica blocks. Therefore
it runs faster than with -c.
-o
Runs the command in online mode. See the list of conditions that the command can detect and repair
in online mode in the Description section earlier in this topic.
--skip-inode-check
Causes the command to run faster by skipping its inode-check phase. Include this option only if you
know that the inodes are valid and that only directories need to be checked. In this mode, the product
does not scan all parts of the file system and therefore might not detect all corruptions in the file
system.
--skip-directory-check
Causes the command to run faster by skipping its directory-check phase. Include this option if you
want to check only the inodes. In this mode, the product does not scan all parts of the file system and
therefore might not detect all corruptions in the file system.
-s
Specifies that the output is semi-verbose.
-v
Specifies that the output is verbose.
-V
Specifies that the output is verbose and contains information for debugging purposes.
-t TmpDirPath
Specifies the directory that GPFS uses for temporary storage during mmfsck command processing.
This directory must be available on all nodes that are participating in mmfsck and that are designated
as either a manager or quorum node. In addition to the location requirement, the storage directory
has a minimum space requirement of 4 GB. The default directory for mmfsck processing is /tmp.
--threads Num
The number of threads that are created to run mmfsck. The default is 16.
--estimate-only
Causes the command to display an estimate of the time that would be required to run the mmfsck
command in offline mode with the parameters that you are specifying. To use this option, type the
mmfsck command on the command line with the parameters that you plan to use; include the --
estimate-only option; and press Enter. The command displays the time estimate and does not do a
file system scan.
This feature is available even while the target file system is mounted. The estimate is based on the
parameters that you specify for the mmfsck command, on the characteristics of the target file system,
and on the disk and network I/O throughput of the nodes that would participate in running the
mmfsck command.
Note: To enable this feature, you must update all the nodes in the cluster to IBM Spectrum Scale 5.0.2
or later.
--use-stale-replica
Specifies whether to read possible stale replica blocks from unrecovered disks. The default behavior
is for offline fsck to not read replica blocks from disks that are in unrecovered state and with a lower
failure-config-version than other blocks in the same replica set. This is because disks with a lower
failure-config-version are likely to have stale versions of a block. In case the higher failure-config-
version disk is damaged, the block read fails and fsck reports it as a data loss. In such situations,
using this option overrides default offline fsck behavior and allows fsck to read possible stale
replica blocks from unrecovered disks. This is useful if the data loss when using this option is found to
be less than the default behavior. This can be determined by running read-only offline fsck twice,
once without --use-stale-replica and once with --use-stale-replica. Then compare the
number of corruptions reported in each case.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
--status-report
Displays a consolidated status report from all the nodes that are participating in a currently running
instance of the mmfsck command. While a long-running mmfsck instance is working, you can run
another instance of mmfsck with the --status-report parameter to verify that the long-running
instance is still working and to get the status. You can issue this command from any node in the
cluster, even if the node is not participating in the mmfsck run.
Note: To enable this feature, you must update all the nodes in the cluster to IBM Spectrum Scale 5.0.0
or later.
Exit status
0
Successful completion.
2
The command was interrupted before it completed checks or repairs.
4
The command changed the file system and it must now be restarted.
8
The file system contains damage that was not repaired.
16
The problem cannot be fixed.
64
Do a full offline file system check to verify the integrity of the file system.
The exit string is a combination of three error values:
1. The value of the Exit errno variable.
2. An internal value that helps to explain the source of the value in the errno variable.
3. The OR of several status bits.
Security
You must have root authority to run the mmfsck command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see the topic Requirements for administering a GPFS file system in the IBM Spectrum
Scale: Administration Guide.
Examples
1. The following command checks file system fs2 and displays inconsistencies but does not try to make
repairs:
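(An invocation of the following form is an illustrative way to do this.)
mmfsck fs2 -n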
Checking "fs2"
FsckFlags 0x2000009
Stripe group manager <c0n0>
NeedNewLogs 0
...
...
...
...
Error in inode 33536 snap 0: Directory block 0 has entry whose filesetId does not match
filesetId of directory
file entry inode 33537 "a"
Delete entry? No
Error in inode 33536 snap 0: Directory block 0 has entry whose filesetId does not match
filesetId of directory
file entry inode 33538 "b"
Delete entry? No
Error in inode 33536 snap 0: Directory block 0 has entry whose filesetId does not match
filesetId of directory
file entry inode 33539 "c"
Delete entry? No
...
Error in inode 3 snap 0: Directory block 0 has entry referring to a deleted inode
subdir entry inode 131075 "fset1"
Delete entry? No
Error in inode 3 snap 0: Directory block 0 has entry with invalid filesetId
subdir entry inode 33536 "dir1"
Delete entry? No
...
197120 inodes
59 allocated
3 repairable
0 repaired
3 damaged
0 deallocated
3 orphaned
0 attached
0 corrupt ACL references
4194304 subblocks
217682 allocated
14 unreferenced
0 duplicates
0 deletable
0 deallocated
1598 addresses
0 suspended
0 duplicates
0 reserved file holes found
0 reserved file holes repaired
Critical corruptions were found. Using file system without repairing the corruptions
may cause more corruptions and/or file system panics.
File system contains unrepaired damage.
Exit status 0:0:8.
mmfsck: 6027-1639 Command failed. Examine previous error messages to determine cause.
2. The following command checks file system fs2, displays inconsistencies, and makes repairs:
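(An invocation of the following form is an illustrative way to do this.)
mmfsck fs2 -y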
Checking "fs2"
FsckFlags 0x400000A
Stripe group manager <c0n0>
NeedNewLogs 0
...
...
...
...
Error in inode 33536 snap 0: Directory block 0 has entry whose filesetId does not match
filesetId of directory
file entry inode 33537 "a"
Delete entry? Yes
Error in inode 33536 snap 0: Directory block 0 has entry whose filesetId does not match
filesetId of directory
file entry inode 33538 "b"
Delete entry? Yes
Error in inode 33536 snap 0: Directory block 0 has entry whose filesetId does not match
filesetId of directory
file entry inode 33539 "c"
Delete entry? Yes
...
Error in inode 3 snap 0: Directory block 0 has entry referring to a deleted inode
subdir entry inode 131075 "fset1"
Delete entry? Yes
Error in inode 3 snap 0: Directory block 0 has entry whose filesetId does not match
filesetId of directory
subdir entry inode 33536 "dir1"
Delete entry? Yes
...
197120 inodes
61 allocated
4 repairable
4 repaired
0 damaged
0 deallocated
6 orphaned
6 attached
0 corrupt ACL references
4194304 subblocks
217682 allocated
14 unreferenced
0 duplicates
0 deletable
14 deallocated
1598 addresses
0 suspended
0 duplicates
0 reserved file holes found
0 reserved file holes repaired
Critical corruptions were found in the file system which were repaired.
File system is clean.
3. The following command checks file system FSchk, displays inconsistencies, and stores a record of
each inconsistency in the patch file path-towrite-patchfile:
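(An invocation of the following form is an illustrative way to do this.)
mmfsck FSchk -n --patch-file path-towrite-patchfile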
...
Error in inode 50688 snap 0: Indirect block 0 level 1 has bad disk addr at offset 21
replica 1 addr 2:318976 is a duplicate address
Delete disk address? No
Cannot fix lost blocks if not deleting duplicate address.
Error in inode 50688 snap 0: Indirect block 0 level 1 has bad disk addr at offset 20
replica 0 addr 2:318976 is a duplicate address
Delete disk address? No
Error in inode 50689 snap 0: Indirect block 0 level 1 has bad disk addr at offset 40
replica 0 addr 1:371200 is a duplicate address
Delete disk address? No
Error in inode 50690 snap 0: Indirect block 0 level 1 has bad disk addr at offset 44
replica 0 addr 2:318976 is a duplicate address
Delete disk address? No
Error in inode 50690 snap 0: Indirect block 0 level 1 has bad disk addr at offset 792
replica 0 addr 1:371200 is a duplicate address
Delete disk address? No
...
...
Error in inode 50691 snap 0: Directory block 0 has entry with incorrect generation number
subdir entry inode 50692 "subdir7"
Delete entry? No
Error in inode 50691 snap 0: Directory block 0 has entry with incorrect generation number
subdir entry inode 13834 "subdir10"
Delete entry? No
...
65792 inodes
98 allocated
4 repairable
0 repaired
0 damaged
0 deallocated
2 orphaned
0 attached
0 corrupt ACL references
4194304 subblocks
286532 allocated
160 unreferenced
64 duplicates
0 deletable
0 deallocated
1598 addresses
0 suspended
5 duplicates
0 reserved file holes found
0 reserved file holes repaired
Critical corruptions were found. Using file system without repairing the corruptions may
cause
more corruptions and/or file system panics.
File system contains unrepaired damage.
Exit status 0:0:8.
Patch file written to "node3:path-towrite-patchfile" with 21 patch entries.
mmfsck: 6027-1639 Command failed. Examine previous error messages to determine cause.
4. The following command uses the information in patch file path-towrite-patchfile to repair the
file system:
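(An invocation of the following form is an illustrative way to do this; the -y flag answers the repair prompts automatically.)
mmfsck FSchk -y --patch --patch-file path-towrite-patchfile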
Checking "FSchk"
FsckFlags 0x1C00000A
Stripe group manager <c0n0>
NeedNewLogs 0
...
Error in inode 50688 snap 0: Indirect block 0 level 1 has bad disk addr at offset 21
replica 1 addr 2:318976 is a duplicate address
Delete disk address? Yes
Error in inode 50688 snap 0: Indirect block 0 level 1 has bad disk addr at offset 20
replica 0 addr 2:318976 is a duplicate address
Delete disk address? Yes
Error in inode 50689 snap 0: Indirect block 0 level 1 has bad disk addr at offset 40
replica 0 addr 1:371200 is a duplicate address
Delete disk address? Yes
Error in inode 50690 snap 0: Indirect block 0 level 1 has bad disk addr at offset 44
replica 0 addr 2:318976 is a duplicate address
Delete disk address? Yes
Error in inode 50690 snap 0: Indirect block 0 level 1 has bad disk addr at offset 792
replica 0 addr 1:371200 is a duplicate address
Delete disk address? Yes
Error in inode 50691 snap 0: Directory block 0 has entry with incorrect generation number
subdir entry inode 50692 "subdir7"
Delete entry? Yes
Error in inode 50691 snap 0: Directory block 0 has entry with incorrect generation number
subdir entry inode 13834 "subdir10"
Delete entry? Yes
5. The following command checks file system fs1 in online mode, detects corruptions in the block
allocation map, and makes repairs.
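(An invocation of the following form is an illustrative way to do this.)
mmfsck fs1 -o -y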
Checking "fs1"
FsckFlags 0x400001A
FsckFlags2 0x0
FsckFlags3 0x0
Stripe group manager <c0n0>
Is offline fsck running 0
NeedNewLogs 0
...
InodeProblemList: 1 entries
iNum snapId status keep delete noScan new error
------------- ---------- ------ ---- ------ ------ --- ------------------
38 0 3 0 0 0 1 0x00000050
AddrCorrupt RepAddrCorrup
...
InodeProblemList: 1 entries
iNum snapId status keep delete noScan new error
------------- ---------- ------ ---- ------ ------ --- ------------------
38 0 3 0 0 0 0 0x00000050
AddrCorrupt RepAddrCorrup
...
7864320 subblocks
217094 allocated
5233594 unreferenced
5233594 deallocated
331 addresses
0 suspended
InodeProblemList: 1 entries
iNum snapId status keep delete noScan new error
------------- ---------- ------ ---- ------ ------ --- ------------------
38 0 3 0 0 0 0 0x00000050
AddrCorrupt RepAddrCorrup
File system is clean.
Fsck completed in 0 hours 0 minutes 13 seconds
Exit status 0:10:0.
See also
• “mmcheckquota command” on page 218
• “mmcrfs command” on page 315
• “mmdelfs command” on page 369
• “mmdf command” on page 382
• “mmlsfs command” on page 498
Location
/usr/lpp/mmfs/bin
mmfsctl command
Issues a file system control request.
Synopsis
mmfsctl Device {suspend | suspend-write | resume}
or
mmfsctl Device {exclude | include} {-d "DiskName[;DiskName...]" | -F DiskFile | -G FailureGroup}
or
mmfsctl Device syncFSconfig {-n RemoteNodesFile | -C RemoteClusterName} [-S SpecFile]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmfsctl command to issue control requests to a particular GPFS file system. The command is
used to temporarily suspend the processing of all application I/O requests, and later resume them, as
well as to synchronize the file system's configuration state between peer clusters in disaster recovery
environments.
See Establishing disaster recovery for your GPFS cluster in IBM Spectrum Scale: Administration Guide.
Using mmfsctl suspend and mmfsctl resume
Before creating a FlashCopy® image of the file system, the user must run mmfsctl suspend to
temporarily quiesce all file system activity and flush the internal buffers on all nodes that mount this
file system. The on-disk metadata will be brought to a consistent state, which provides for the
integrity of the FlashCopy snapshot. If a request to the file system is issued by the application after
the invocation of this command, GPFS suspends this request indefinitely, or until the user issues
mmfsctl resume.
Once the FlashCopy image has been taken, the mmfsctl resume command can be issued to resume
the normal operation and complete any pending I/O requests.
Using mmfsctl syncFSconfig
The mmfsctl syncFSconfig command extracts the file system's related information from the local
GPFS configuration data, transfers this data to one of the nodes in the peer cluster, and attempts to
import it there.
Once the GPFS file system has been defined in the primary cluster, users run this command to import
the configuration of this file system into the peer recovery cluster. After producing a FlashCopy image
of the file system and propagating it to the peer cluster using Peer-to-Peer Remote Copy (PPRC),
users similarly run this command to propagate any relevant configuration changes made in the cluster
after the previous snapshot.
The primary cluster configuration server of the peer cluster must be available and accessible using
remote shell and remote copy at the time of the invocation of the mmfsctl syncFSconfig
command, and remote nodes must be reachable by the ping utility. Also, the peer GPFS clusters
should be defined to use the same remote shell and remote copy mechanism, and they must be set
up to allow nodes in peer clusters to communicate without the use of a password.
Note: In a cluster that is CCR-enabled, you cannot run mmfsctl syncFSconfig on a file system that
has tiebreaker disks.
Not all administrative actions performed on the file system necessitate this type of resynchronization.
It is required only for those actions that modify the file system information maintained in the local
GPFS configuration data. These actions include:
• Adding, removing, and replacing disks (commands mmadddisk, mmdeldisk, mmrpldisk)
• Modifying disk attributes (command mmchdisk)
• Changing the file system's mount point (command mmchfs -T)
• Changing the file system device name (command mmchfs -W)
The process of synchronizing the file system configuration data can be automated by utilizing the
syncfsconfig user exit.
Using mmfsctl exclude
The mmfsctl exclude command is to be used only in a disaster recovery environment, only after a
disaster has occurred, and only after ensuring that the disks in question have been physically
disconnected. Otherwise, unexpected results may occur.
The mmfsctl exclude command can be used to manually override the file system descriptor
quorum after a site-wide disaster. See Establishing disaster recovery for your GPFS cluster in IBM
Spectrum Scale: Administration Guide. This command enables users to restore normal access to the
file system with less than a quorum of available file system descriptor replica disks, by effectively
excluding the specified disks from all subsequent operations on the file system descriptor. After
repairing the disks, the mmfsctl include command can be issued to restore the initial quorum
configuration.
Parameters
Device
The device name of the file system. File system names need not be fully-qualified. fs0 is just as
acceptable as /dev/fs0. If all is specified with the syncFSconfig option, this command is
performed on all GPFS file systems defined in the cluster.
The following options can be specified after Device:
suspend
Instructs GPFS to flush the internal buffers on all nodes, bring the file system to a consistent state
on disk, and suspend the processing of all subsequent application I/O requests.
suspend-write
Suspends the execution of all new write I/O requests coming from user applications, flushes all
pending requests on all nodes, and brings the file system to a consistent state on disk.
resume
Instructs GPFS to resume the normal processing of I/O requests on all nodes.
exclude
Instructs GPFS to exclude the specified group of disks from all subsequent operations on the file
system descriptor, and change their availability state to down, if the conditions in the following
Note are met.
If necessary, this command assigns additional disks to serve as the disk descriptor replica
holders, and migrates the disk descriptor to the new replica set. The excluded disks are not deleted
from the file system, and still appear in the output of the mmlsdisk command.
Note: The mmfsctl exclude command is to be used only in a disaster recovery environment,
only after a disaster has occurred, and only after ensuring that the disks in question have been
physically disconnected. Otherwise, unexpected results may occur.
include
Informs GPFS that the previously excluded disks have become operational again. This command
writes the up-to-date version of the disk descriptor to each of the specified disks, and clears the
excl tag.
-d "DiskName[;DiskName...]"
Specifies the names of the NSDs to be included or excluded by the mmfsctl command. Separate
the names with semicolons (;) and enclose the list of disk names in quotation marks.
-F DiskFile
Specifies a file containing the names of the NSDs, one per line, to be included or excluded by the
mmfsctl command.
-G FailureGroup
A failure group identifier for the disks to be included or excluded by the mmfsctl command.
syncFSconfig
Synchronizes the configuration state of a GPFS file system between the local cluster and its peer
in two-cluster disaster recovery configurations.
The following options can be specified after syncFSconfig:
-n RemoteNodesFile
Specifies a list of contact nodes in the peer recovery cluster that GPFS uses when importing
the configuration data into that cluster. Although any node in the peer cluster can be specified
here, users are advised to specify the identities of the peer cluster's primary and secondary
cluster configuration servers, for efficiency reasons.
-C RemoteClusterName
Specifies the name of the GPFS cluster that owns the remote GPFS file system.
-S SpecFile
Specifies the description of changes to be made to the file system, in the peer cluster during
the import step. The format of this file is identical to that of the ChangeSpecFile used as input
to the mmimportfs command. This option can be used, for example, to define the assignment
of the NSD servers for use in the peer cluster.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Results
The mmfsctl command returns 0 if successful.
Security
You must have root authority to run the mmfsctl command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
This sequence of commands creates a FlashCopy image of the file system and propagates this image to
the recovery cluster using the Peer-to-Peer Remote Copy technology. The following configuration is
assumed:
Site LUNs
Primary cluster (site A) lunA1, lunA2
Recovery cluster (site B) lunB1
lunA1
FlashCopy source
lunA2
FlashCopy target, PPRC source
lunB1
PPRC target
A single GPFS file system named fs0 has been defined in the primary cluster over lunA1.
1. In the primary cluster, suspend all file system I/O activity and flush the GPFS buffers:
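Based on the synopsis shown earlier in this topic, a command of the following form is used here (fs0 is the
file system defined for this example):
mmfsctl fs0 suspend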
2. Establish a FlashCopy pair using lunA1 as the source and lunA2 as the target.
3. Resume the file system I/O activity:
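A command of the following form resumes I/O; this is a sketch using the example's file system fs0:
mmfsctl fs0 resume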
Resuming operations.
4. Establish a Peer-to-Peer Remote Copy (PPRC) path and a synchronous PPRC volume pair lunA2-lunB1
(primary-secondary). Use the 'copy entire volume' option and leave the 'permit read from secondary'
option disabled.
5. Wait for the completion of the FlashCopy background task. Wait for the PPRC pair to reach the duplex
(fully synchronized) state.
6. Terminate the PPRC volume pair lunA2-lunB1.
7. If this is the first time the snapshot is taken, or if the configuration state of fs0 changed since the
previous FlashCopy snapshot, propagate the most recent configuration to site B:
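A command of the following form is used; this is a sketch in which fs0 is the example file system and the
contact node file name is illustrative:
mmfsctl fs0 syncFSconfig -n nodes.siteB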
Location
/usr/lpp/mmfs/bin
mmgetacl command
Displays the GPFS access control list of a file or directory.
Synopsis
mmgetacl [-d] [-o OutFilename] [-k {nfs4 | posix | native}] Filename
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
Use the mmgetacl command to display the ACL of a file or directory.
For information about NFS V4 ACLs, see the topics Managing GPFS access control lists and NFS and GPFS
in the IBM Spectrum Scale: Administration Guide.
Users may need to see ACLs in their true form as well as how they are translated for access evaluations.
There are four cases:
1. By default, mmgetacl returns the ACL in a format consistent with the file system setting, specified
using the -k flag on the mmcrfs or mmchfs commands.
If the setting is posix, the ACL is shown as a traditional ACL.
If the setting is nfs4, the ACL is shown as an NFS V4 ACL.
If the setting is all, the ACL is returned in its true form.
2. The command mmgetacl -k nfs4 always produces an NFS V4 ACL.
3. The command mmgetacl -k posix always produces a traditional ACL.
4. The command mmgetacl -k native always shows the ACL in its true form regardless of the file
system setting.
The following describes how mmgetacl works for POSIX and NFS V4 ACLs:
Command ACL mmcrfs -k Display -d (default)
------------------- ----- --------- ------------- --------------
mmgetacl posix posix Access ACL Default ACL
mmgetacl posix nfs4 NFS V4 ACL Error[1]
mmgetacl posix all Access ACL Default ACL
mmgetacl nfs4 posix Access ACL[2] Default ACL[2]
mmgetacl nfs4 nfs4 NFS V4 ACL Error[1]
mmgetacl nfs4 all NFS V4 ACL Error[1]
mmgetacl -k native posix any Access ACL Default ACL
mmgetacl -k native nfs4 any NFS V4 ACL Error[1]
mmgetacl -k posix posix any Access ACL Default ACL
mmgetacl -k posix nfs4 any Access ACL[2] Default ACL[2]
mmgetacl -k nfs4 any any NFS V4 ACL Error[1]
---------------------------------------------------------------------
[1] NFS V4 ACLs include inherited entries. Consequently, there cannot
be a separate default ACL.
[2] Only the mode entries (owner, group, everyone) are translated.
The rwx values are derived from the
NFS V4 file mode attribute. Since the NFS V4 ACL is more granular
in nature, some information is lost in this translation.
---------------------------------------------------------------------
Parameters
Filename
The path name of the file or directory for which the ACL is to be displayed. If the -d option is
specified, Filename must contain the name of a directory.
Options
-d
Specifies that the default ACL of a directory is to be displayed.
-k {nfs4 | posix | native}
nfs4
Always produces an NFS V4 ACL.
posix
Always produces a traditional ACL.
native
Always shows the ACL in its true form regardless of the file system setting.
-o OutFilename
The path name of a file to which the ACL is to be written.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have read access to the directory where the file exists to run the mmgetacl command.
You may issue the mmgetacl command only from a node in the GPFS cluster where the file system is
mounted.
Examples
1. To display the ACL for a file named project2.history, issue this command:
mmgetacl project2.history
#owner:paul
#group:design
user::rwxc
group::r-x-
other::r-x-
2. This is an example of an NFS V4 ACL displayed using mmgetacl. Each entry consists of three lines
reflecting the greater number of permissions in a text format. An entry is either an allow entry or a
deny entry. An X indicates that the particular permission is selected; a minus sign (–) indicates that it
is not selected. The following access control entry explicitly allows READ, EXECUTE, and READ_ATTR to
the staff group on a file:
group:staff:r-x-:allow
(X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (-)READ_ACL (X)READ_ATTR
(-)READ_NAMED
(-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR
(-)WRITE_NAMED
3. This is an example of a directory ACL, which may include inherit entries (the equivalent of a default
ACL). These do not apply to the directory itself, but instead become the initial ACL for any objects
created within the directory. The following access control entry explicitly denies READ/LIST,
READ_ATTR, and EXEC/SEARCH to the sys group.
group:sys:----:deny:DirInherit
(X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (-)READ_ACL (X)READ_ATTR
(-)READ_NAMED
(-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR
(-)WRITE_NAMED
See also
• “mmeditacl command” on page 395
• “mmdelacl command” on page 356
• “mmputacl command” on page 608
Location
/usr/lpp/mmfs/bin
mmgetstate command
Displays the state of the GPFS daemon on one or more nodes.
Synopsis
mmgetstate [-L] [-s] [-v] [-Y] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmgetstate command displays the state of the GPFS daemon on the specified nodes.
Parameters
-a
Displays the state of the GPFS daemon on all nodes in the cluster.
-N {Node[,Node...] | NodeFile | NodeClass}
Displays the state of the GPFS daemon information on the specified nodes.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
This command does not support a NodeClass of mount.
Options
-L
Displays the number of quorum nodes, the number of nodes that are up, the total number of nodes,
the state of the GPFS daemon, and other information.
-s
Displays summary information, such as the number of local and remote nodes that are joined in the
cluster and the number of quorum nodes.
-v
Displays intermediate error messages.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
The command recognizes and displays the following GPFS states:
active
The GPFS daemon is ready for operations.
arbitrating
A node is trying to form a quorum with the other available nodes.
down
The GPFS daemon is not running on the node or is recovering from an internal error.
unknown
An unknown value. The command cannot connect with the node or some other error occurred.
unresponsive
The GPFS daemon is running but is not responding.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmgetstate command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To display the quorum, the number of nodes up, and the total number of nodes for the GPFS cluster,
issue the following command:
mmgetstate -a -L
Node number Node name Quorum Nodes up Total nodes GPFS state Remarks
----------------------------------------------------------------------------
1 c5n92 3 5 12 active
2 c5n94 3 5 12 active
3 c5n95 3 5 12 active quorum node
4 c5n96 3 5 12 active
5 c5n97 3 5 12 active quorum node
6 c5n98 3 5 12 active
7 c5n107 3 5 12 active quorum node
8 c5n108 3 5 12 active
9 c5n109 3 5 12 active quorum node
10 c5n110 3 5 12 down
11 c5n111 3 5 12 active quorum node
12 c5n112 3 5 12 active
The 3 under the Quorum column means that you must have three quorum nodes up to achieve
quorum.
2. In the following example, the cluster uses node quorum with tiebreaker disks. The asterisk (*) in the
Quorum field indicates that tiebreaker disks are being used:
mmgetstate -a -L
Node number Node name Quorum Nodes up Total nodes GPFS state Remarks
------------------------------------------------------------------------
1 k5n91 5* 8 21 active
2 k5n92 5* 8 21 active quorum node
3 k5n94 5* 8 21 active
5 k5n96 5* 8 21 active
6 k5n97 5* 8 21 active quorum node
7 k5n98 5* 8 21 active
8 k5n99 5* 8 21 active quorum node
mmgetstate -s
Summary information
---------------------
Number of nodes defined in the cluster: 12
Number of local nodes active in the cluster: 12
Number of remote nodes joined in this cluster: 0
Number of quorum nodes defined in the cluster: 5
Number of quorum nodes active in the cluster: 5
Quorum = 3, Quorum achieved
See also
• “mmchconfig command” on page 169
• “mmcrcluster command” on page 303
• “mmshutdown command” on page 695
• “mmstartup command” on page 715
Location
/usr/lpp/mmfs/bin
mmhadoopctl command
Installs and sets up the GPFS connector for a Hadoop distribution; starts or stops the GPFS connector
daemon on a node.
Synopsis
mmhadoopctl connector {start | stop | getstate}
or
mmhadoopctl connector syncconf path [--nocheck]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmhadoopctl command to install and set up the GPFS connector for a Hadoop distribution, or to
start or stop the GPFS connector daemon on a node.
Parameters
connector
Controls the GPFS connector daemon with one of the following actions:
start
Starts the connector daemon.
stop
Stops the connector daemon.
getstate
Detects whether the connector daemon is running and shows its process ID.
syncconf
Synchronizes the connector configuration in the cluster.
--nocheck
If --nocheck is used, the sanity check is not performed. Therefore, use this option with caution.
path
Specifies the path to the config directory or file.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmhadoopctl command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To start the GPFS connector daemon, issue this command:
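Based on the synopsis shown earlier in this topic:
mmhadoopctl connector start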
Location
/usr/lpp/mmfs/bin
mmhdfs command
The mmhdfs command configures and manages the IBM Spectrum Scale HDFS Transparency
components.
Synopsis
mmhdfs {hdfs | hdfs-nn | hdfs-dn} {start | stop | restart | status}
or
mmhdfs {namenode | datanode} {start | stop | restart | status} [-N Node[,Node...]]
or
mmhdfs config upload
or
mmhdfs config set filename -k key1=value1 [-k key2=value2 ...]
or
mmhdfs config del filename -k key1 [-k key2 ...]
or
mmhdfs config get filename -k key1 [-k key2 ...]
or
mmhdfs config import [--nocheck] dirpath {filename [filename ...] | all}
or
mmhdfs config export [--nocheck] dirpath {filename [filename ...] | all}
or
mmhdfs worker {add | remove} Node[,Node...]
Availability
Available on all IBM Spectrum Scale editions.
Description
Note: From IBM Spectrum Scale 5.0.4.2, the mmhdfs command is available when HDFS Transparency
3.1.1 or later is installed.
Use the mmhdfs command to perform the following:
• Start and Stop HDFS Transparency.
• Add and Remove the HDFS Transparency worker nodes.
• Manage the HDFS Transparency configurations.
Parameters
hdfs
Specifies the NameNode and DataNode service of all the HDFS Transparency nodes.
hdfs-nn
Specifies the NameNode service of all the NameNodes.
hdfs-dn
Specifies the DataNode service of all the DataNodes.
start
Starts the specified component.
stop
Stops the specified component.
restart
Restarts the specified component.
status
Shows the running status of the specified component.
namenode
Specifies the NameNode service of the specified nodes.
datanode
Specifies the DataNode service of the specified nodes.
start
Starts the specified component.
stop
Stops the specified component.
restart
Restarts the specified component.
status
Shows the running status of the specified component.
-N Node[,Node...]
Specifies the nodes on which the command should run.
config
Specifies the configuration operation.
upload
Uploads the configuration into CCR.
set
Sets the specified config option of the specified configuration file to the specified value.
filename
Specifies the name of the file.
-k key1=value1
Specifies the key value pair to set.
del
Deletes the specified config option from the specified configuration file.
filename
Specifies the name of the configuration file.
-k key1
Specifies the key to delete.
get
Gets the value of the specified config option for the specified configuration file.
filename
Specifies the name of the configuration file.
-k key1
Specifies the key whose value is to be retrieved.
import
Imports the specified configuration files from the specified directory.
--nocheck
If this option is not specified, only the following configuration files are supported:
hadoop-env.sh, core-site.xml, hadoop-policy.xml, hdfs-site.xml, httpfs-site.xml, gpfs-site.xml,
kms-acls.xml, kms-site.xml, ranger-hdfs-audit.xml, ranger-hdfs-security.xml, ranger-
policymgr-ssl.xml, ranger-security.xml, ssl-client.xml, ssl-server.xml, yarn-site.xml, kms-
log4j.properties, log4j.properties, hadoop-metrics2.properties, hadoop-metrics.properties
and workers.
dirpath
Specifies the directory from which to import the files.
filename
Configuration files to be imported.
all
Import all configuration files from the directory.
export
Exports the specified configuration files to the specified directory.
--nocheck
If this option is not specified, only the following configuration files are supported:
hadoop-env.sh, core-site.xml, hadoop-policy.xml, hdfs-site.xml, httpfs-site.xml, gpfs-site.xml,
kms-acls.xml, kms-site.xml, ranger-hdfs-audit.xml, ranger-hdfs-security.xml, ranger-
policymgr-ssl.xml, ranger-security.xml, ssl-client.xml, ssl-server.xml, yarn-site.xml, kms-
log4j.properties, log4j.properties, hadoop-metrics2.properties, hadoop-metrics.properties
and workers.
dirpath
Specifies the directory to which the files are exported.
filename
Configuration files to be exported.
all
Export all configuration files to the directory.
worker
Adds or removes the DataNode(s).
add
Adds the specified nodes as DataNodes.
remove
Removes the specified nodes from the DataNodes.
Node[,Node...]
Specifies a comma-separated list of the nodes to be added to or removed from the set of DataNodes.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmhdfs command.
The node on which the command is issued must be able to execute the remote shell commands on any
other node in the cluster without the use of a password and without producing any extraneous messages.
Examples
1. To start all the HDFS Transparency components, run the following command:
2. To stop all the HDFS Transparency components, run the following command:
3. To check the HDFS Transparency status for all the Namenodes, run the following command:
7. To import configuration files from a local directory, run the following command:
8. To import all configuration files from a local directory, run the following command:
10. To export all configuration files to a local directory, run the following command:
11. To add Datanodes into the HDFS Transparency cluster, run the following command:
12. To upload configuration files into CCR and other nodes, run the following command:
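The commands used in these examples are of the following general forms; this is a sketch in which the
directory path, file names, and node names are illustrative and the option order follows the parameter
descriptions earlier in this topic:
Example 1: mmhdfs hdfs start
Example 2: mmhdfs hdfs stop
Example 3: mmhdfs hdfs-nn status
Example 7: mmhdfs config import /tmp/hadoop-conf core-site.xml
Example 8: mmhdfs config import /tmp/hadoop-conf all
Example 10: mmhdfs config export /tmp/hadoop-conf all
Example 11: mmhdfs worker add dn1,dn2
Example 12: mmhdfs config upload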
See also
• CES HDFS topic in IBM Spectrum Scale: Big Data and Analytics Guide
• Installing IBM Spectrum Scale on Linux nodes and deploying protocols topic in IBM Spectrum Scale:
Concepts, Planning, and Installation Guide
• Configuring with the installation toolkit topic in IBM Spectrum Scale: Administration Guide
• “mmces command” on page 132
• “mmlsfs command” on page 498
Location
/usr/lpp/mmfs/hadoop/sbin
mmhealth command
Monitors health status of nodes.
Synopsis
mmhealth node show [component]
[-N {Node[,Node..] | NodeFile | NodeClass}]
[-Y] [--verbose] [--unhealthy]
[--color | --nocolor]
[--resync]
or
or
or
or
or
or
or
or
or
mmhealth thresholds add {metric[:sum | avg | min | max | rate] | measurement}
         [--errorlevel {threshold error limit} [--warnlevel {threshold warn limit}] | --direction {high | low}]
         [--sensitivity {bucketsize}] [--hysteresis {percentage}]
         [--filterBy] [--groupBy] [--name {ruleName}]
         [--errormsg {user defined action description}]
         [--warnmsg {user defined action description}]
or
or
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmhealth command to monitor the health of the node and services hosted on the node in IBM
Spectrum Scale.
The IBM Spectrum Scale administrator can monitor the health of each node and the services hosted on
that node using the mmhealth command. The mmhealth command also shows the events that are
responsible for the unhealthy status of the services hosted on the node. This data can be used to monitor
and analyze the reasons for the unhealthy status of the node. The mmhealth command acts as a problem
determination tool to identify which services of the node are unhealthy, and find the events responsible
for the unhealthy state of the service.
The mmhealth command also monitors the state of all the IBM Spectrum Scale RAID components such
as array, physicaldisk, virtualdisk, and enclosure of the nodes that belong to the recovery group.
For more information about the system monitoring feature, see the Monitoring system health by using the
mmhealth command section in the IBM Spectrum Scale: Problem Determination Guide.
The mmhealth command shows the details of threshold rules. This detail helps to avoid out-of-space
errors for file systems. The space availability of the file system component depends upon the occupancy
level of fileset-inode spaces and the capacity usage in each data or metadata pool. The violation of any
single rule triggers the parent file system's capacity-issue events. The capacity metrics are frequently
compared with the rule boundaries by an internal monitoring process. If any metric value exceeds its
threshold limit, the system health daemon receives an event notification from the monitoring process
and generates a RAS event for the file system to report the space issue. For the predefined capacity
utilization rules, the warn level is set to 80% and the error level to 90%. For the memory utilization rule, the
warn level is set to 100 MB and the error level to 50 MB. You can use the mmlsfileset and the
mmlspool commands to track the inode and pool space usage.
Parameters
node
Displays the health status, specifically, at node level.
show
Displays the health status of the specified component.
[component]
The value of the component can be one of the following:
GPFS | NETWORK | FILESYSTEM | DISK | CES | AUTH | AUTH_OBJ | BLOCK |
CESNETWORK | NFS | OBJECT | SMB | HADOOPCONNECTOR | HDFS_DATANODE |
HDFS_NAMENODE | CLOUDGATEWAY | GUI | PERFMON | THRESHOLD | AFM |
FILEAUDITLOG | MSGQUEUE | CESIP | NATIVE_RAID | ARRAY | PHYSICALDISK |
VIRTUALDISK | RECOVERY GROUP | NODE | ENCLOSURE | WATCHFOLDER | CALLHOME |
NVME | CANISTER | FILESYSMGR
Displays the detailed health status of the specified component.
The following components are specific to Elastic Storage Server (ESS):
• Native RAID
• Array
• Physical disk
• Virtual disk
• Recovery group
• Enclosure
For more information on these components, see Elastic Storage Server.
Important:
The HADOOPCONNECTOR component is used only in old deployments, where HDFS is
configured outside the CES, while the HDFS_DATANODE and the HDFS_NAMENODE
components are used when HDFS is configured as a CES service.
The canister component is specific to the IBM Elastic Storage® System 3000. For more
information on this component, see the Events section in the IBM Elastic Storage Server
3000 documentation.
UserDefinedSubComponent
Displays services that are named by the customer, categorized by one of the other
hosted services. For example, a file system named gpfs0 is a subcomponent of file
system.
-N
Allows the system to make remote calls to the other nodes in the cluster for:
Node[,Node....]
Specifies the node or list of nodes for which the health status is displayed.
NodeFile
Specifies a file containing a list of node descriptors, one per line, for which the health
status is displayed.
NodeClass
Specifies the node class for which the health status is displayed.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
--verbose
Shows the detailed health status of a node, including its sub-components.
--unhealthy
Displays the unhealthy components only.
--color | --nocolor
If you run the command without --color or --nocolor, the system automatically detects if a tty
is attached and uses the color mode.
If the --color is specified, it uses the color output even if no tty is attached.
If the --nocolor is specified, it uses the non-colored output.
--resync
Use this option to resync all the health states and events of the current node with the cluster
state manager. The cluster state manager collects the cluster wide health data.
eventlog
Shows the event history for a specified period of time. If no time period is specified, it displays all
the events by default:
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
[--hour | --day | --week| --month]
Displays the event history for the specified time period.
[--clear]
Clears the event log's database. This action cannot be reversed.
CAUTION: The events database is used by the mmhealth node eventlog as well as the
mmces events list. If you clear the database, it will also affect the mmces events list.
Ensure that you use the --clear option with caution.
[--verbose]
Displays additional information about the event like component name and event ID in the
eventlog.
[--show-state-changes]
Displays additional information about the changes in the state of the components of a node.
Note: This option is only valid if the cluster has a minimum release level of 5.0.2.
--color | --nocolor
If you run the command without --color or --nocolor, it automatically detects if a tty is attached
and uses the color mode.
If the --color is specified, it uses the color output even if no tty is attached.
If the --nocolor is specified, it uses the non-colored output.
event
Gives the details of various events:
show
Shows the detailed description of the specified event:
EventName
Displays the detailed description of the specified event name.
EventID
Displays the detailed description of the specified event ID.
hide
Hides the specified TIP events.
unhide
Reveals the TIP events that were previously hidden by using the hide option.
resolve
Manually resolves error events. All events starting with fserr* can be manually resolved with this
command. The only other event that can be manually resolved is out_of_memory.
list HIDDEN
Shows all the TIP events that are added to the list of hidden events.
cluster
Displays the health status of all nodes and monitored node components in the cluster.
show
Displays the health status of the specified component.
[component]
The value of the component can be one of the following:
Inactive
A rule is not running. Either no keys for the defined metric measurement is found in the
performance monitoring tool metadata or the corresponding sensor is not enabled. As soon as
the metric keys and metric values are detected for a rule, the state of the rule switches to
active.
Unknown
The state of a rule could not be determined. Probably an issue with querying internal data.
thresholds add
Creates a new thresholds rule for the specified metric or measurement, and activates monitoring
process stores for this rule.
Note: A measurement is a value calculated using more than one metric in a pre-defined formula.
metric [: SUM | AVG | MIN | MAX | RATE ]
Creates a threshold for the specified metric. All metrics that are supported by the performance
monitoring tool, and use raw values or are downsampled by aggregators (sum, avg, min, max,
rate) can be used. For a list of metrics supported by the performance monitoring tool, see the List
of performance metrics section in the IBM Spectrum Scale: Problem Determination Guide.
measurement
Creates a threshold for the specified measurement. The following measurements are supported:
DataPool_capUtil
Data Pool Capacity Utilization. Calculated as:
(sum(gpfs_pool_total_dataKB)-sum(gpfs_pool_free_dataKB))/sum(gpfs_pool_total_dataKB)
DiskIoLatency_read
Average time in milliseconds spent for a read operation on the physical disk. Calculated as:
disk_read_time/disk_read_ios
DiskIoLatency_write
Average time in milliseconds spent for a write operation on the physical disk. Calculated as:
disk_write_time/disk_write_ios
Fileset_inode
Fileset Inode Capacity Utilization. Calculated as:
(sum(gpfs_fset_allocInodes)-sum(gpfs_fset_freeInodes))/sum(gpfs_fset_maxInodes)
FsLatency_diskWaitRd
Average disk wait time per read operation on the IBM Spectrum Scale client. Calculated as:
sum(gpfs_fs_tot_disk_wait_rd)/sum(gpfs_fs_read_ops)
FsLatency_diskWaitWr
Average disk wait time per write operation on the IBM Spectrum Scale client. Calculated as:
sum(gpfs_fs_tot_disk_wait_wr)/sum(gpfs_fs_write_ops)
MetaDataPool_capUtil
MetaData Pool Capacity Utilization. Calculated as:
(sum(gpfs_pool_total_metaKB)-sum(gpfs_pool_free_metaKB))/sum(gpfs_pool_total_metaKB)
NFSNodeLatency_read
Time taken for NFS read operations. Calculated as:
sum(nfs_read_lat)/sum(nfs_read_ops)
NFSNodeLatency_write
Time taken for NFS write operations. Calculated as:
sum(nfs_write_lat)/sum(nfs_write_ops)
SMBNodeLatency_read
Total amount of time spent for all type of SMB read requests. Calculated as:
avg(op_time)/avg(op_count)
SMBNodeLatency_write
Total amount of time spent for all type of SMB write requests. Calculated as:
avg(op_time)/avg(op_count)
MemoryAvailable_percent
Estimated available memory percentage. Calculated as:
• For the nodes having less than 40 GB total memory allocation:
(mem_memfree+mem_buffers+mem_cached)/mem_memtotal
• For the nodes having equal to or greater than 40 GB memory allocation:
(mem_memfree+mem_buffers+mem_cached)/40000000
--errorlevel
Defines the threshold error limit. The threshold error limit can be a percentage or an integer,
depending on the metric on which the threshold value is being set.
--warnlevel
Defines the threshold warn limit. The threshold warn limit can be a percentage or an integer,
depending on the metric on which the threshold value is being set.
--direction
Defines the direction for the threshold limit. The allowed values are high or low.
--groupby
Groups the result based on the group key. The following values are allowed for the group key:
• gpfs_cluster_name
• gpfs_disk_name
• gpfs_diskpool_name
• gpfs_disk_usage_name
• gpfs_fset_name
• gpfs_fs_name
• mountPoint
• netdev_name
• node
--filterby
Filters the result based on the filter key. The following values are allowed for the filter key:
• gpfs_cluster_name
• gpfs_disk_name
• gpfs_diskpool_name
• gpfs_disk_usage_name
• gpfs_fset_name
• gpfs_fs_name
• mountPoint
• netdev_name
• node
--sensitivity
Defines the sample interval value in seconds. It is set to 300 by default. If a sensor is configured
with a time interval greater than 300 seconds, then the --sensitivity is set to the same value as the
sensor's period. The minimum value allowed is 120 seconds. If a sensor is configured with a time
interval less than 120 seconds, the --sensitivity is set to 120 seconds.
Starting from IBM Spectrum Scale version 5.0.4, the user is allowed to specify the -min and -max
suffix for the sensitivity value to evaluate the original outlier data points within the specified
sample interval. The outlier observation works only for the sensitivity time interval that is greater
than the sensor's time interval.
--hysteresis
Defines the percentage that the observed value must be under (or over) the current threshold
level to switch back to the previous state. The default value is 0.0, while the recommended value
is 5.0.
--name
Defines the name of the rule. It can be an alphanumeric string of up to 30 characters. If a rule
name is not specified, a default name is assigned. The default name consists of the metric name
followed by an underscore and the word "custom".
--errormsg
Specifies a user-defined message that is included in the error-level state change event. The message
can be up to 256 bytes long and must be enclosed in double quotation marks (""); otherwise the
command returns an error.
--warnmsg
Specifies a user-defined message that is included in the warning-level state change event. The message
can be up to 256 bytes long and must be enclosed in double quotation marks (""); otherwise the
command returns an error.
Important:
• The mathematical aggregations: AVG, SUM, MAX, MIN, RATE could be used to determine how to
merge the metric values in the evaluation source. The aggregation operations are not supported for
measurements.
• For each rule the user can configure up to two conditions, --errorlevel and --warnlevel, that trigger event
state changes. At least one level limit setting is required; the thresholds add command must include the
--errorlevel option, the --warnlevel option, or both.
• The customer can also influence the measuring quantity and precision by specifying the sensitivity,
groupby, filterby, hysteresis, or rule name option settings.
• For each condition level the customer can leave an output message text by using the --errormsg or --
warnmsg options, which is integrated into the state change event message. The state change event
message is triggered when this condition is exceeded.
• An existing threshold rule is deactivated in the following cases:
1. If no metric keys exist for the defined threshold rule in the performance monitoring tool
metadata.
2. If the sensor corresponding to the rule is not enabled.
As soon as the metric keys and metric values are detected for a rule, the state of the rule switches to
active.
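For example, a rule similar to the predefined data pool capacity utilization rule could be created with a
command of the following form; this is a sketch in which the rule name is illustrative:
mmhealth thresholds add DataPool_capUtil --errorlevel 90.0 --warnlevel 80.0 --name DataCapUtil_custom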
thresholds delete
Deletes the threshold rules from the system.
ruleName
Deletes a specific threshold rule.
all
Deletes all the threshold rules.
Note: Using the mmhealth thresholds delete command to delete a rule will accomplish the
following tasks:
• The rule will be removed from the thresholds rules specification file and active monitoring process.
• All the current health information created by this particular rule will be removed as well.
config interval
Sets the monitoring interval for the whole cluster.
off
The monitoring will be off for the whole cluster.
low
Monitoring is set for every (default monitoring time *10) seconds.
medium
Monitoring is set for every (default monitoring time *5) seconds.
default
Monitoring is set for every 15-30 seconds based on the service being monitored.
high
Monitoring is set for every (default monitoring time /2) seconds.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmhealth command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. See
the information about the requirements for administering a GPFS system in the IBM Spectrum Scale:
Administration Guide.
Examples
1. To show the health status of the current node, issue this command:
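Based on the synopsis shown earlier in this topic:
mmhealth node show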
3. To view the health status of all the nodes, issue this command:
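A command of the following form can be used; this is a sketch that passes the built-in all node class to the -N option:
mmhealth node show -N all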
4. To view the detailed health status of the component and its sub-component, issue this command:
5. To view the health status of only unhealthy components, issue this command:
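Based on the --unhealthy option described earlier in this topic:
mmhealth node show --unhealthy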
6. To view the health status of sub-components of a node's component, issue this command:
rg_gssio2-hs/e2d3s12 HEALTHY -
rg_gssio2-hs/e2d4s07 HEALTHY -
rg_gssio2-hs/e2d4s08 HEALTHY -
rg_gssio2-hs/e2d4s09 HEALTHY -
rg_gssio2-hs/e2d4s10 HEALTHY -
rg_gssio2-hs/e2d4s11 HEALTHY -
rg_gssio2-hs/e2d4s12 HEALTHY -
rg_gssio2-hs/e2d5s07 HEALTHY -
rg_gssio2-hs/e2d5s08 HEALTHY -
rg_gssio2-hs/e2d5s09 HEALTHY -
rg_gssio2-hs/e2d5s10 HEALTHY -
rg_gssio2-hs/e2d5s11 HEALTHY -
rg_gssio2-hs/e2d5s12ssd HEALTHY -
rg_gssio2-hs/n1s02 HEALTHY -
rg_gssio2-hs/n2s02 HEALTHY -
RECOVERYGROUP DEGRADED gnr_rg_failed
rg_gssio1-hs FAILED gnr_rg_failed
rg_gssio2-hs HEALTHY -
VIRTUALDISK DEGRADED -
rg_gssio2_hs_Basic1_data_0 HEALTHY -
rg_gssio2_hs_Basic1_system_0 HEALTHY -
rg_gssio2_hs_Basic2_data_0 HEALTHY -
rg_gssio2_hs_Basic2_system_0 HEALTHY -
rg_gssio2_hs_Custom1_data1_0 HEALTHY -
rg_gssio2_hs_Custom1_system_0 HEALTHY -
rg_gssio2_hs_Data_8M_2p_1_gpfs0 HEALTHY -
rg_gssio2_hs_Data_8M_3p_1_gpfs1 HEALTHY -
rg_gssio2_hs_MetaData_1M_3W_1_gpfs0 HEALTHY -
rg_gssio2_hs_MetaData_1M_4W_1_gpfs1 HEALTHY -
rg_gssio2_hs_loghome HEALTHY -
rg_gssio2_hs_logtip HEALTHY -
rg_gssio2_hs_logtipbackup HEALTHY -
PERFMON HEALTHY -
7. To view the eventlog history of the node for the last hour, issue this command:
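Based on the eventlog parameter and the --hour option described earlier in this topic:
mmhealth node eventlog --hour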
8. To view the eventlog history of the node for the last hour, issue this command:
9. To view the detailed description of an event, issue the mmhealth event show command. This is an
example for quorum_down event:
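Based on the event show parameter described earlier in this topic:
mmhealth event show quorum_down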
Cause: The local node does not have quorum. The cluster service might not be running.
User Action: Check if the cluster quorum nodes are running and can be reached over the network.
Check local firewall settings
Severity: ERROR
State: DEGRADED
10. To view the list of hidden events, issue the mmhealth event list HIDDEN command:
Event scope
--------------------------------------
gpfs_pagepool_small -
nfsv4_acl_type_wrong fs1
nfsv4_acl_type_wrong fs2
11. To view the detailed description of the cluster, issue the mmhealth cluster show command:
Note: The cluster must have a minimum release level of 4.2.2.0 or higher to use the mmhealth
cluster show command.
Also, this command is not supported on the Windows operating system.
12. To view more information of the cluster health status, issue this command:
14. To view the list of threshold rules defined for the system, issue this command:
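A command of the following form can be used (a sketch):
mmhealth thresholds list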
gpfs_diskpool_name
DataCapUtil_Rule DataPool_capUtil 90.0 80.0 high gpfs_cluster_name, 300
gpfs_fs_name,
gpfs_diskpool_name
MemFree_Rule mem_memfree 50000 100000 low node 300
MetaDataCapUtil_Rule MetaDataPool_capUtil 90.0 80.0 high gpfs_cluster_name, 300
gpfs_fs_name,
gpfs_diskpool_name
15. To view the detailed health status of the file system component, issue this command:
17. To set the monitoring interval to low, issue the following command:
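Based on the config interval parameter described earlier in this topic:
mmhealth config interval low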
18. To solve one of the events that is manually solvable, issue the following command:
To manually solve one of the events with an entity name, run the following command:
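A command of the following form is used; this is a sketch based on the resolve parameter, with the event
and entity names taken from the output that follows:
mmhealth event resolve fserrallocblock gpfs0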
Successfully resolved event fserrallocblock for entity gpfs0 with event fsstruct_fixed.
See also
• “mmchconfig command” on page 169
• “mmlscluster command” on page 484
• “mmobj command” on page 565
• “mmsmb command” on page 698
Location
/usr/lpp/mmfs/bin
mmimgbackup command
Performs a backup of a single GPFS file system metadata image.
Synopsis
mmimgbackup Device [-g GlobalWorkDirectory]
[-L n] [-N {Node[,Node...] | NodeFile | NodeClass}]
[-S SnapshotName] [--image ImageSetName] [--notsm | --tsm]
[--qos QOSClass] [--tsm-server ServerName] [POLICY-OPTIONS]
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
The mmimgbackup command performs a backup of a single GPFS file system metadata image.
You must run the mmbackupconfig command before you run the mmimgbackup command. For more
information, see the topic Scale Out Backup and Restore (SOBAR) in the IBM Spectrum Scale:
Administration Guide.
Parameters
Device
The device name of the file system whose metadata image is to be backed up. File system names
need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
This must be the first parameter.
-g GlobalWorkDirectory
The directory to be used for temporary files that need to be shared between the mmimgbackup
worker nodes and to hold backup images until sent to archive. The default is:
mount_point_for_Device/.mmimgbackup
-L n
Controls the level of information displayed by the mmimgbackup command. The default for
mmimgbackup is 1. Larger values indicate the display of more detailed information. n should be one of
the following values:
0
Displays only serious errors.
1
Displays some information as the command executes, but not for each file. This is the default.
2
Displays each chosen file and the scheduled action.
3
Displays the same information as 2, plus each candidate file and the applicable rule.
4
Displays the same information as 3, plus each explicitly EXCLUDEd file and the applicable rule.
5
Displays the same information as 4, plus the attributes of candidate and EXCLUDEd files.
6
Displays the same information as 5, plus non-candidate files and their attributes.
--image ImageSetName
Specifies the image set name that is used to name the generated metadata image files, which are named:
ImageSetName_YYYYMMDD_hh.mm.ss_BBB.sbr
or
ImageSetName_YYYYMMDD_hh.mm.ss.idx
where:
ImageSetName
The default ImageSetName is ImageArchive.
YYYY
A four-digit year.
MM
A two-digit month.
DD
A two-digit day.
hh
A two-digit hour.
mm
A two-digit minute.
ss
A two-digit second.
BBB
A three-digit bucket number.
--notsm | --tsm
Disables (--notsm) or enables (--tsm) archiving of the image fileset to IBM Spectrum Protect through the dsmc command.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
--tsm-server ServerName
Specifies the server name to provide to the IBM Spectrum Protect dsmc command used to store the
image data files in the IBM Spectrum Protect server.
POLICY-OPTIONS
The following mmapplypolicy options may also be used with mmimgbackup:
-A IscanBuckets
Specifies the number of buckets of inode numbers (number of inode/filelists) to be created and
processed by the parallel inode scan. The default is 17. A bucket will typically represent
1,000,000 in-use inodes.
-a IscanThreads
Specifies the number of threads and sort pipelines each node will run during parallel inode scan
and policy evaluation. The default is 4.
-D yyyy-mm-dd[@hh:mm[:ss]]
Specifies the date and (UTC) time to be used by the mmimgbackup command when evaluating the
policy rules. The default is the current date and time. If only a date is specified, the time will
default to 00:00:00.
-M name=value
Indicates a user defined macro specification. There can be more than one -M argument. These
macro specifications are passed on to the m4 preprocessor as -D specifications.
-n DirThreadLevel
Specifies the number of threads that will be created and dispatched within each mmimgbackup
process during the directory scan phase. The default is 24.
-s LocalWorkDirectory
Specifies the directory to be used for local temporary storage during command processing. The
default directory is /tmp.
--sort-buffer-size Size
Specifies the size of the main-memory buffer to be used by the sort command.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmimgbackup command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To create a backup image with an ImageSetName of sobar.cluster.fs9, for the snapshot snap1 of file
system fs9 where the image is stored in the file system /backup_images on the IBM Spectrum Protect
server, issue:
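A command of the following form is used; this is a sketch that combines the parameters described earlier
in this topic with the names given in this example:
mmimgbackup fs9 -S snap1 -g /backup_images --image sobar.cluster.fs9 --tsm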
To show that the images are stored on the IBM Spectrum Protect server, issue this command:
dsmls /backup_images
/backup_images/4063536/mmPolicy.4260220.51A8A6BF:
1088 1088 0 r sobar.cluster.fs9_20121129_16.19.55.idx
4520 4520 0 r sobar.cluster.fs9_20121129_16.19.55_000.sbr
See also
• “mmapplypolicy command” on page 80
• “mmbackupconfig command” on page 110
• “mmimgrestore command” on page 454
• “mmrestoreconfig command” on page 661
Location
/usr/lpp/mmfs/bin
mmimgrestore command
Restores a single GPFS file system from a metadata image.
Synopsis
mmimgrestore Device ImagePath [-g GlobalWorkDirectory]
[-L n] [-N {Node[,Node...] | NodeFile | NodeClass}]
[--image ImageSetName] [--qos QOSClass] [POLICY-OPTIONS]
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
The mmimgrestore command restores a single GPFS file system from a metadata image.
The mmrestoreconfig command must be run prior to running the mmimgrestore command. For more
information, see Scale Out Backup and Restore (SOBAR) in IBM Spectrum Scale: Administration Guide.
Parameters
Device
The device name of the file system whose metadata image is to be restored. The file system must be
empty and mounted read-only. File system names need not be fully-qualified. fs0 is as acceptable
as /dev/fs0.
This must be the first parameter.
ImagePath
The fully-qualified path name to an image fileset containing GPFS backup images. The path must be
accessible by every node participating in the restore.
-g GlobalWorkDirectory
The directory to be used for temporary files that need to be shared between the mmimgrestore
worker nodes. If not specified, the default working directory will be the ImagePath specified.
-L n
Controls the level of information displayed by the mmimgrestore command. The default for
mmimgrestore is 1. Larger values indicate the display of more detailed information. n should be one
of the following values:
0
Displays only serious errors.
1
Displays some information as the command executes, but not for each file. This is the default.
2
Displays each chosen file and the scheduled action.
3
Displays the same information as 2, plus each candidate file and the applicable rule.
4
Displays the same information as 3, plus each explicitly EXCLUDEd file and the applicable rule.
5
Displays the same information as 4, plus the attributes of candidate and EXCLUDEd files.
6
Displays the same information as 5, plus non-candidate files and their attributes.
POLICY-OPTIONS
The following mmapplypolicy options may also be used with mmimgrestore:
-m ThreadLevel
The number of threads that will be created and dispatched within each image restore process
during the policy execution phase of restore. The default is calculated to divide the work of
processing all image files being restored evenly among all nodes specified with -N. The valid range
is 1 to 20.
-s LocalWorkDirectory
Specifies the directory to be used for local temporary storage during command processing. The
default directory is /tmp.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmimgrestore command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To restore file system fs9 with data stored in the image with an ImageSetName of
sobar.cluster.fs9_20121129_16.19.55 and execute the restore only on AIX nodes, issue:
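A command of the following form is used; this is a sketch in which the image path /backup_images and the
node class aixNodes (containing the AIX nodes) are illustrative:
mmimgrestore fs9 /backup_images --image sobar.cluster.fs9_20121129_16.19.55 -N aixNodes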
See also
• “mmapplypolicy command” on page 80
• “mmbackupconfig command” on page 110
• “mmimgbackup command” on page 450
• “mmrestoreconfig command” on page 661
Location
/usr/lpp/mmfs/bin
mmimportfs command
Imports into the cluster one or more file systems that were created in another GPFS cluster.
Synopsis
mmimportfs {Device | all} -i ImportfsFile [-S ChangeSpecFile]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmimportfs command, in conjunction with the mmexportfs command, can be used to move into
the current GPFS cluster one or more file systems that were created in another GPFS cluster. The
mmimportfs command extracts all relevant file system and disk information from the ExportFilesysData
file specified with the -i parameter. This file must have been created by the mmexportfs command.
When all is specified in place of a file system name, any disks that are not associated with a file system
will be imported as well.
If the file systems being imported were created on nodes that do not belong to the current GPFS cluster,
the mmimportfs command assumes that all disks have been properly moved, and are online and
available to the appropriate nodes in the current cluster.
Note: If the disks are part of an IBM Spectrum Scale RAID configuration, this explicitly means moving all
the disks and respective storage enclosures.
If any node in the cluster, including the node on which you are running the mmimportfs command, does
not have access to one or more disks, use the -S option to assign NSD servers to those disks.
The mmimportfs command attempts to preserve any NSD server assignments that were in effect when
the file system was exported.
After the mmimportfs command completes, use mmlsnsd to display the NSD server names that are
assigned to each of the disks in the imported file system. Use mmchnsd to change the current NSD server
assignments as needed.
After the mmimportfs command completes, use mmlsdisk to display the failure groups to which each
disk belongs. Use mmchdisk to make adjustments if necessary.
If you are importing file systems into a cluster that already contains GPFS file systems, it is possible to
encounter name conflicts. You must resolve such conflicts before the mmimportfs command can
succeed. You can use the mmchfs command to change the device name and mount point of an existing
file system. If there are disk name conflicts, use the mmcrnsd command to define new disks and specify
unique names (rather than let the command generate names). Then replace the conflicting disks using
mmrpldisk and remove them from the cluster using mmdelnsd.
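For illustration, a typical invocation is of the following form; this is a sketch in which the export data file and
the change specification file names are illustrative:
mmimportfs fs1 -i /tmp/fs1.exportData -S /tmp/fs1.changeSpec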
Results
Upon successful completion of the mmimportfs command, all configuration information pertaining to the
file systems being imported is added to configuration data of the current GPFS cluster.
Parameters
Device | all
The device name of the file system to be imported. File system names need not be fully-qualified. fs0
is as acceptable as /dev/fs0. Specify all to import all GPFS file systems, as well as all disks that do
not currently belong to a file system.
If the specified file system device is an IBM Spectrum Scale RAID-based file system, then all affected
IBM Spectrum Scale RAID objects will be imported as well. This includes recovery groups,
declustered arrays, vdisks, and any other file systems that are based on these objects. For more
information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration.
This must be the first parameter.
-i ImportfsFile
The path name of the file containing the file system information. This file must have previously been
created with the mmexportfs command.
-S ChangeSpecFile
The path name of an optional file containing disk stanzas or recovery group stanzas, or both,
specifying the changes that are to be made to the file systems during the import step.
Prior to GPFS 3.5, the disk information was specified in the form of disk descriptors defined as:
DiskName:ServerList:
For backward compatibility, the mmimportfs command will still accept the traditional disk
descriptors, but their use is discouraged.
Disk stanzas have the following format:
%nsd:
nsd=NsdName
servers=ServerList
usage=DiskUsage
failureGroup=FailureGroup
pool=StoragePool
device=DiskName
thinDiskType={no | nvme | scsi | auto}
where:
nsd=NsdName
Is the name of a disk from the file system being imported. This clause is mandatory for the
mmimportfs command.
servers=ServerList
Is a comma-separated list of NSD server nodes. You can specify up to eight NSD servers in this
list. The defined NSD will preferentially use the first server on the list. If the first server is not
available, the NSD will use the next available server on the list.
When you specify server nodes for your NSDs, use the host name and IP address combinations that
are recognized by GPFS, as listed in the output of the mmlscluster command. Using aliased host
names that are not listed in the mmlscluster command output may produce undesired results.
If you do not define a ServerList, GPFS assumes that the disk is SAN-attached to all nodes in the
cluster. If all nodes in the cluster do not have access to the disk, or if the file system to which the
disk belongs is to be accessed by other GPFS clusters, you must specify a ServerList.
To remove the NSD server list, do not specify a value for ServerList (remove or comment out the
servers=ServerList clause of the NSD stanza).
usage=DiskUsage
Specifies the type of data to be stored on the disk. If this clause is specified, the value must match
the type of usage already in effect for the disk; mmimportfs cannot be used to change this value.
failureGroup=FailureGroup
Identifies the failure group to which the disk belongs. If this clause is specified, the value must
match the failure group already in effect for the disk; mmimportfs cannot be used to change this
value.
pool=StoragePool
Specifies the storage pool to which the disk is to be assigned. If this clause is specified, the value
must match the storage pool already in effect for the disk; mmimportfs cannot be used to change
this value.
device=DiskName
The block device name of the underlying disk device. This clause is ignored by the mmimportfs
command.
thinDiskType={no | nvme | scsi | auto}
Specifies the space reclaim disk type:
no
The disk does not support space reclaim. This value is the default.
nvme
The disk is a TRIM capable NVMe device that supports the mmreclaimspace command.
scsi
The disk is a thin provisioned SCSI disk that supports the mmreclaimspace command.
auto
The type of the disk is either nvme or scsi. IBM Spectrum Scale will try to detect the actual
disk type automatically. To avoid problems, you should replace auto with the correct disk
type, nvme or scsi, as soon as you can.
Note: In IBM Spectrum Scale 5.0.5, space reclaim auto-detection is enhanced. You are encouraged
to use the auto keyword after your cluster is upgraded to 5.0.5.
For more information, see the topic IBM Spectrum Scale with data reduction storage devices in the
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
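For example, a ChangeSpecFile that assigns NSD servers to a single imported disk might contain a
stanza like the following; the NSD name must be a disk from the imported file system, and the server
names shown here are illustrative:
%nsd:
  nsd=gpfs1001nsd
  servers=nodeA,nodeB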
Recovery group stanzas have the following format:
%rg: rgName=RecoveryGroupName
servers=Primary[,Backup]
where:
RecoveryGroupName
Specifies the name of the recovery group being imported.
Primary[,Backup]
Specifies the primary server and, optionally, a backup server to be associated with the recovery
group.
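For example, a recovery group stanza that assigns a primary and a backup server to an imported
recovery group might look like the following; the recovery group and server names are illustrative:
%rg: rgName=rgLeft
  servers=serverA,serverB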
Notes:
1. You cannot change the name of a disk. You cannot change the disk usage or failure group
assignment with the mmimportfs command. Use the mmchdisk command for this purpose.
2. All disks that do not have stanzas in ChangeSpecFile are assigned the NSD servers that they had at
the time the file system was exported. All disks with NSD servers that are not valid are assumed to
be SAN-attached to all nodes in the cluster. Use the mmchnsd command to assign new or change
existing NSD server nodes.
3. Use the mmchrecoverygroup command to activate recovery groups that do not have stanzas in
ChangeSpecFile. The mmchrecoverygroup command is documented in IBM Spectrum Scale
RAID: Administration.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmimportfs command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To import into the current cluster all of the file systems, and any free disks, that are recorded in an export
file, issue this command:
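The export file name shown here is illustrative; specify the file that was previously created by the
mmexportfs command:
mmimportfs all -i /tmp/exportDataFile
The command displays output similar to the following: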
mmimportfs: Processing disks that do not belong to any file system ...
mmimportfs: Processing disk gpfs6nsd
mmimportfs: Processing disk gpfs1001nsd
See also
• “mmexportfs command” on page 402
Location
/usr/lpp/mmfs/bin
mmkeyserv command
Manages encryption key servers and clients.
Synopsis
mmkeyserv server {add | update}
ServerName [--port RestPortNumber] [--user-id RestUserID]
[--server-pwd PasswordFile] [--accept] [--kmip-cert CertFilesPrefix]
[--backup ServerName[,ServerName...]] [--distribute | --nodistribute]
[--timeout ConnectionTimeout] [--retry ConnectionAttempts]
[--interval Microseconds]
(The remaining synopsis forms, which cover the mmkeyserv tenant, key, client, and rkm actions that are
described in the Parameters section, are not reproduced here.)
Availability
Available with IBM Spectrum Scale Advanced Edition, IBM Spectrum Scale Data Management Edition,
IBM Spectrum Scale Developer Edition, or IBM Spectrum Scale Erasure Code Edition.
Description
With the mmkeyserv command, you can configure a cluster and a remote key manager (RKM) server so
that nodes in the cluster can retrieve master encryption keys when they need to. You must set up an RKM
server before you run this command. The RKM server software must be IBM Security Key Lifecycle
Manager (SKLM). Nodes in the cluster must have direct network access to the RKM server.
With this command you can connect to an RKM server, create GPFS tenants, create encryption keys, and
create and register key clients. The command automatically generates and exchanges certificates and
sets up a local keystore. You can also use this command to securely delete key clients, encryption keys,
and tenants. You can run this command from any node in the cluster. Each node has a configuration file
and a copy of the local keystore. Configuration changes affect all nodes in the cluster.
Password files: Several of the command options require a password file as a parameter. A password file
is a text file that contains a password at the beginning. A password must be 1 - 20 characters in length.
Because the password file is a security-sensitive file, it must have the following characteristics:
• It must be a regular file.
• It must be owned by the root user.
• Only the root user must have permission to read or write it.
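For example, a password file that meets these requirements could be created as follows; the file name
and password are illustrative:
echo "Sklm4dminPw" > /root/sklmsrv.pwd
chown root:root /root/sklmsrv.pwd
chmod 600 /root/sklmsrv.pwd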
The following terms are used:
client or key client
An entity in the cluster that represents the nodes that access encrypted files. The key client receives
master encryption keys from the tenant of the RKM server.
IBM Security Key Lifecycle Manager (SKLM)
Required key management server software.
Master encryption key (MEK)
A key for encrypting file encryption keys.
Parameters
server
Manages a connection with an RKM server.
add
Adds an RKM server connection to the IBM Spectrum Scale cluster. You can adjust the values of
some of these options later with the mmkeyserv rkm change command.
ServerName
Specifies the host name or IP address of the RKM server.
--port RestPortNumber
Specifies the port number for the Representational State Transfer (REST) interface on the
SKLM server:
• If SKLM is configured to use its default REST port for communications with its clients, you do
not need to specify this parameter. IBM Spectrum Scale automatically tries to connect with
SKLM through the default REST port number of each of the supported versions of SKLM
serially, starting with the earliest supported version. If IBM Spectrum Scale successfully
connects with SKLM through the default REST port and successfully retrieves the REST
certificates, it stops searching for a port and uses the successful port number for future
communications with SKLM.
Note: The default SKLM REST port number depends on the version of SKLM that is installed
on the RKM server. For more information, see Firewall recommendations for IBM SKLM in the
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
• If SKLM is not configured to use its default REST port number, you must specify the --port
parameter with the correct port number so that IBM Spectrum Scale can connect with SKLM.
If you do not specify a port number or if you specify the incorrect port number, IBM
Spectrum Scale fails to connect with SKLM and displays an error message.
For more information, see Part 2 of Simplified setup: Using SKLM with a self-signed certificate
in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
--user-id RestUserID
Specifies the user ID for the RKM server. The default value is SKLMAdmin.
--server-pwd PasswordFile
Specifies a password file that contains a password for accessing the RKM server. See the
requirements for password files in the Description section of this topic. If this option is
omitted, the command prompts for a password.
--accept
Configures the command to automatically accept certificates from the RKM server. The
acceptance prompt is suppressed.
--kmip-cert CertFilesPrefix
Specifies the path and the file name prefix of non-self-signed certificate files in a certificate
chain.
Important:
• You must use this option when the specified key server is using a chain of certificates from a
certificate authority (CA) or other non-self-signed certificate chain for communication on the
KMIP port.
• If you omit this option, the command assumes that the specified key server is using a self-
signed certificate for communication on the KMIP port. The command retrieves the self-
signed certificate from the key server automatically.
For more information about adding certificates of either kind to the configuration, see the topic
Configuring encryption with SKLM: Simplified setup in the IBM Spectrum Scale: Concepts,
Planning, and Installation Guide.
The certificate files must be formatted as PEM-encoded X.509 certificates. You must manually
retrieve the certificate files from the key server. Copy the files to the node on which you are
issuing the mmkeyserv command. Rename the files so that the full path and the file name of
each file in the chain has the following format:
CertFilesPrefix.n.cert
where:
CertFilesPrefix
Is the full path and the file name prefix of the certificate file.
n
Is an integer that identifies the place of the certificate in the certificate chain:
0 indicates that the file is the root CA certificate.
An integer in the range 1 - (n-1) indicates that the file is an intermediate CA
certificate.
n indicates that the file is the endpoint certificate.
Note: A valid certificate chain can contain zero or more intermediate certificates.
cert
Is the suffix of the certificate file.
For example, in the following set of certificate files, the CertFilesPrefix is /tmp/
certificate/sklmChain:
/tmp/certificate/sklmChain.0.cert contains the root certificate.
/tmp/certificate/sklmChain.1.cert contains an intermediate certificate.
/tmp/certificate/sklmChain.2.cert contains the endpoint certificate.
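For example, certificate files that were retrieved from the key server could be copied into this naming
scheme as follows; the source file names are illustrative:
cp root-ca.pem /tmp/certificate/sklmChain.0.cert
cp intermediate-ca.pem /tmp/certificate/sklmChain.1.cert
cp sklm-endpoint.pem /tmp/certificate/sklmChain.2.cert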
--backup ServerName[,ServerName...]
Specifies a comma-separated list of server names that you want to add to the list of backup
RKM servers in the RKM.conf file. If an IBM Spectrum Scale node cannot retrieve a master
encryption key from its main RKM server, it tries each backup server in the list until it either
retrieves a key or exhausts the list.
Note: The mmkeyserv command itself does not attempt to contact backup servers or to
replicate client information across backup servers. The system administrator is
responsible for maintaining replication across backup servers.
--distribute | --nodistribute
--distribute
Attempts to arrange the list of RKM server names (main RKM server and backup RKM
servers) in the RKM.conf file in a different order on each node so that each node connects
with the servers in a different order. This option provides some performance advantage in
retrieving MEKs. This option is the default.
--nodistribute
Does not attempt to arrange the list of backup RKM server names in the RKM.conf file.
--timeout ConnectionTimeout
Sets the connection timeout in seconds for retrieving an MEK from an RKM server. The valid
range is 1 - 120 seconds. The default value is 60 seconds.
--retry ConnectionAttempts
Sets the number of attempts to retry a connection to an RKM server. The valid range is 1 - 10
retries. The default value is three retries.
--interval Microseconds
Specifies the number of microseconds to wait between connection retries. The valid range is 1
- 1000000000. The default value is 10000 (0.1 seconds).
update
Updates a connection between an IBM Spectrum Scale cluster and an RKM server.
• The command always gets a fresh server certificate from the RKM server.
• If you do not specify the --port option, the command first tries to connect with SKLM through
the most recently used REST interface port number. If this connection fails, the command tries
to connect with SKLM through the default REST interface port number of each of the supported
versions of SKLM serially, starting with the earliest supported version. For more information, see
the description of the --port option for the mmkeyserv server add command earlier in this
topic.
ServerName
Specifies the host name or IP address of the RKM server.
--port RestPortNumber
Specifies the port number for the Representational State Transfer (REST) interface.
--user-id RestUserID
Specifies the user ID for the RKM server. The default value is SKLMAdmin.
--server-pwd PasswordFile
Specifies a password file that contains a password for accessing the RKM server. See the
requirements for password files in the Description section of this topic. If this option is
omitted, the command prompts for a password if one is required.
--accept
Configures the command to automatically accept certificates from the RKM server. The
acceptance prompt is suppressed.
--kmip-cert CertFilesPrefix
Specifies the path and the file name prefix of non-self-signed certificate files in a certificate
chain.
Important:
• You must use this option when the specified key server is using a chain of certificates from a
certificate authority or other non-self-signed certificate chain for communication on the
KMIP port.
• If you omit this option, the command assumes that the specified key server is using a self-
signed certificate for communication on the KMIP port. The command retrieves the self-
signed certificate from the key server automatically.
For more information about adding certificates of either kind to the configuration, see the topic
Configuring encryption with SKLM: Simplified setup in the IBM Spectrum Scale: Concepts,
Planning, and Installation Guide.
The certificate files must be formatted as PEM-encoded X.509 certificates. You must manually
retrieve the certificate files from the key server. Copy the files to the node on which you are
issuing the mmkeyserv command. Rename the files so that the full path and the file name of
each file in the chain has the following format:
CertFilesPrefix.n.cert
where:
CertFilesPrefix
Is the full path and the file name prefix of the certificate file.
n
Is an integer that identifies the place of the certificate in the certificate chain:
0 indicates that the file is the root CA certificate.
An integer in the range 1 - (n-1) indicates that the file is an intermediate CA
certificate.
n indicates that the file is the endpoint certificate.
Note: A valid certificate chain can contain zero or more intermediate certificates.
cert
Is the suffix of the certificate file.
For example, in the following set of certificate files, the CertFilesPrefix is /tmp/
certificate/sklmChain:
/tmp/certificate/sklmChain.0.cert contains the root certificate.
/tmp/certificate/sklmChain.1.cert contains an intermediate certificate.
/tmp/certificate/sklmChain.2.cert contains the endpoint certificate.
--backup ServerName[,ServerName...]
Specifies a comma-separated list of server names that you want to add to the backup RKM
servers that are listed in the RKM.conf file. If an IBM Spectrum Scale node cannot retrieve a
master encryption key from its main RKM server, it tries each backup server in the list until it
either retrieves a key or exhausts the list.
Important: To remove the backup list, specify the keyword delete instead of a list of server
names, as in --backup delete.
ServerName
Specifies the host name or IP address of an RKM server.
tenant
Manages tenants on RKM servers. A tenant is an SKLM device group for holding encryption keys.
add
Specifies the name of a tenant to add to the IBM Spectrum Scale cluster.
• If the tenant is already added to the cluster, the command returns with an error.
• If the tenant exists on the RKM server but is not added to the cluster, the command adds the
tenant to the cluster.
• If the tenant does not exist on the RKM server, the command creates the tenant on the server
and adds the tenant to the cluster.
TenantName
Specifies the name of the tenant that you want to create.
--server ServerName
Specifies the name of the RKM server to which the tenant belongs.
--server-pwd PasswordFile
Specifies a password file that contains a password for accessing the RKM server. See the
requirements for password files in the Description section of this topic. If this option is
omitted, the command prompts for a password.
delete
Deletes a tenant from an RKM server.
Note:
• If you delete a tenant that has encryption keys on the key server, the command deletes the
tenant from the cluster configuration but not from the key server.
• If you delete a tenant that has no encryption keys on the key server, the command deletes the
tenant from both the cluster configuration and the key server.
TenantName
Specifies the name of the tenant that you want to delete.
--server ServerName
Specifies the name of the RKM server to which the tenant belongs.
--server-pwd PasswordFile
Specifies a password file that contains a password for accessing the RKM server. See the
requirements for password files in the Description section of this topic. If this option is
omitted, the command prompts for a password.
show
Displays information about tenants and RKM servers. The information that is displayed depends on
the combination of the following options that you specify:
TenantName
Specifies the name of a tenant.
--server ServerName
Specifies the name of an RKM server.
key
Manages encryption keys.
create
Creates encryption keys in a tenant and displays the key IDs on the console.
Note: Make a note of the key IDs. You must specify an encryption key ID and an RKM ID when you
write an encryption policy rule.
--server ServerName
Specifies the host name or IP address of an RKM server.
--server-pwd PasswordFile
Specifies a password file that contains a password for accessing the RKM server. See the
requirements for password files in the Description section of this topic. If this option is
omitted, the command prompts for a password.
--tenant TenantName
Specifies the name of the tenant in which you want to create the encryption keys.
--count NumberOfKeys
Specifies the number of keys to create. The default value is 1.
delete
Deletes encryption keys from a tenant.
CAUTION: When you delete an encryption key, any data that was encrypted by that key
becomes unrecoverable.
--server ServerName
Specifies the host name or IP address of an RKM server.
--server-pwd PasswordFile
Specifies a password file that contains a password for accessing the RKM server. See the
requirements for password files in the Description section of this topic. If this option is
omitted, the command prompts for a password.
--all --tenant TenantName
Deletes all the encryption keys in the specified tenant.
--file ListOfKeysFile
Specifies a file that contains a list of the key IDs of encryption keys that you want to delete,
one key per line.
show
Displays information about the encryption keys in a tenant.
--server ServerName
Specifies the host name or IP address of an RKM server.
--server-pwd PasswordFile
Specifies a password file that contains a password for accessing the RKM server. See the
requirements for password files in the Description section of this topic. If this option is
omitted, the command prompts for a password.
--tenant TenantName
Specifies the name of the tenant that contains the keys that you want to display.
client
Manages key clients. The following facts are important:
• You need only one key client per cluster per RKM server. However, you can create and use multiple
key clients on the same RKM server.
• You can register only one key client per tenant per cluster. However, you can register one key client
to more than one tenant in the same RKM server.
create
Creates a key client to communicate with the RKM server.
ClientName
Specifies the name of the key client that you want to create. A key client name must be 1 - 16
characters in length and must be unique within an IBM Spectrum Scale cluster.
--server ServerName
Specifies the name of the RKM server to which the key client belongs.
--server-pwd PasswordFile
Specifies a password file that contains a password for accessing the RKM server. See the
requirements for password files in the Description section of this topic. If this option is
omitted, the command prompts for a password.
• If you are providing a client certificate from a CA for communicating with the remote key
server on the KMIP port, you must specify either the --ca-cert option or the --ca-chain
option.
• If you omit the --cert option, the command generates a self-signed client certificate for
communicating with the remote key server on the KMIP port.
For more information, see the following topics:
Simplified setup: Using SKLM with a self-signed certificate in the IBM Spectrum Scale:
Administration Guide
Simplified setup: Using SKLM with a certificate chain in the IBM Spectrum Scale:
Administration Guide
The following requirements must be met:
• The contents of a private key file must be PEM-encoded and unencrypted.
• The CA certificate chain can be specified either as a certificate chain file that contains all the
CA certificates or as a set of files, one file for each CA certificate in the chain.
• If a certificate chain file is used, the certificates in it must be in PEM-encoded x509 format
and must be concatenated. The CA root certificate must be first, followed by the
intermediate CA certificates in order, followed by the final CA certificate that signed the
client certificate.
• If certificate files are used, one file for each certificate, the certificates must be in PEM-
encoded x509 format. Each file must be renamed in the format
<CACertFilesPrefix><n>.cert, where <CACertFilesPrefix> is the full path prefix
for the CA certificate files, such as /tmp/CA/certfiles, and <n> is a CA certificate index.
The index is 0 for the CA root certificate and n - 1 for the last intermediate CA certificate that
signed the client certificate. In the following example, the chain consists of a CA root
certificate file and two intermediate CA certificate files. The full path prefix is /tmp/CA/
certfiles:
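Following the naming format that is described above, the files in this example would be named:
/tmp/CA/certfiles0.cert (the root CA certificate)
/tmp/CA/certfiles1.cert (the first intermediate CA certificate)
/tmp/CA/certfiles2.cert (the second intermediate CA certificate, which signed the client certificate)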
--days DaysToExpiration
Specifies the number of days until the newly created client certificate expires. The valid range
is 1 - 18262. The default value is 1095. This parameter is not available when you specify a CA-
signed certificate chain for the client certificate. The certificates in the CA certificate chain
specify their expiration dates.
--keystore-pwd PasswordFile
Specifies a password file that contains a client keystore password. See the requirements for
password files in the Description section of this topic. If this parameter is omitted the
command prompts for a keystore password.
delete
Deletes a key client.
ClientName
Specifies the name of the key client that you want to delete.
register
Registers a key client to a tenant.
ClientName
Specifies the name of the key client that you want to register.
--rkm-id RkmID
Specifies a new RKM ID. An RKM ID must be unique within the cluster, must be 1 - 21
characters in length, and can contain only alphanumeric characters or underscore (_). It must
begin with a letter or an underscore. An RKM ID identifies an RKM stanza in the RKM.conf file.
The stanza contains the information that a node needs to retrieve a master encryption key
(MEK) from an RKM.
--tenant TenantName
Specifies the name of the tenant to which you want to register the client.
--server-pwd PasswordFile
Specifies a password file that contains a password for accessing the RKM server. See the
requirements for password files in the Description section of this topic. If this option is
omitted, the command prompts for a password.
deregister
Unregisters a key client from a tenant.
ClientName
Specifies the name of the key client that you want to unregister.
--tenant TenantName
Specifies the name of the tenant that you want to unregister the key client from.
--server-pwd PasswordFile
Specifies a password file that contains a password for accessing the RKM server. See the
requirements for password files in the Description section of this topic. If this option is
omitted, the command prompts for a password.
show
Displays information about a key client.
ClientName
Specifies the name of the client whose information you want to display.
--server ServerName
Specifies the name of the server to which the client belongs.
update
Replaces an expired or unexpired client certificate with a new one in the specified key client.
ClientName
Specifies the name of the key client that you want to update.
--client NewClientName
Specifies a new name for the key client. A key client name must be 1 - 16 characters in length and
must be unique within an IBM Spectrum Scale cluster. If this option is omitted, the client
name is not changed.
--force
Generates a self-signed client certificate for the key client. This option is required only if both
the following conditions are true:
• You want the key client to have a self-signed client certificate.
• The key client was created with or was previously updated with a client CA-signed certificate
and chain.
This option is not required in either of the following situations:
• You want to replace a self-signed certificate with a CA-signed certificate and chain.
• You want to replace a CA-signed certificate and chain with another CA-signed certificate and
chain.
--days DaysToExpiration
Specifies the number of days until the new client certificate expires. The valid range is 1 -
18262. If this option is omitted, the new client certificate expires in 1095 days. This
parameter is not available when you specify a CA-signed certificate chain for the client
certificate. The certificates in the CA certificate chain specify their expiration dates.
--keystore-pwd PasswordFile
Specifies a password file that contains a client keystore password. See the requirements for
password files in the Description section of this topic. If this option is omitted, the current
password is not changed.
--server-pwd PasswordFile
Specifies a password file that contains a password for accessing the RKM server. See the
requirements for password files in the Description section of this topic. If this option is
omitted, the command prompts for a password.
rkm
change
Changes the properties of an RKM stanza. For more information about an RKM stanza, see the
Description section of this topic.
RkmID
Specifies the RKM ID of the stanza whose properties you want to change.
--rkm-id RkmID
Specifies a new RKM ID. An RKM ID must be unique within the cluster, must be 1 - 21
characters in length, and can contain only alphanumeric characters or underscore (_). It must
begin with a letter or an underscore. An RKM ID identifies an RKM stanza in the RKM.conf file.
The stanza contains the information that a node needs to retrieve a master encryption key
(MEK) from an RKM.
--backup ServerName[,ServerName...]
Specifies a comma-separated list of server names that you want to add to the backup RKM
servers that are listed in the RKM.conf file. If an IBM Spectrum Scale node cannot retrieve a
master encryption key from its main RKM server, it tries each backup server in the list until it
either retrieves a key or exhausts the list.
Important: To remove the backup list, specify the keyword delete instead of a list of server
names, as in --backup delete.
--distribute | --nodistribute
--distribute
Attempts to arrange the list of RKM server names (main RKM server and backup RKM
servers) in the RKM.conf file in a different order on each node so that each node connects
with the servers in a different order. This option provides some performance advantage in
retrieving MEKs.
--nodistribute
Does not attempt to arrange the list of backup RKM server names in the RKM.conf file.
--timeout ConnectionTimeout
Sets the connection timeout in seconds for retrieving an MEK from an RKM server. The valid
range is 1 - 120 seconds. To restore the system default value (60 seconds), you can specify
either 60 or the keyword default as the ConnectionTimeout value.
This option does not change the RKM.conf file.
--retry ConnectionAttempts
Sets the number of attempts to retry a connection to an RKM server. The valid range is 1 - 10
retries. To restore the system default value (3 retries), you can specify either 3 or the keyword
default as the ConnectionAttempts value.
This option does not change the RKM.conf file.
--interval Microseconds
Specifies the number of microseconds to wait between connection retries. The valid range is 1
- 1000000000. To restore the system default value (10000, which is 0.1 seconds), you can
specify either 10000 or the keyword default as the Microseconds value.
This option does not change the RKM.conf file.
show
Displays information about all the RKM stanzas in the RKM.conf file of the node.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Exit status
0
Successful completion.
Nonzero
A failure occurred.
Security
You must have root authority to run the mmkeyserv command.
The node on which you enter the command must be able to execute remote shell commands on any other
administration node in the cluster. It must be able to do so without the use of a password and without
producing any extraneous messages. For more information, see the topic Requirements for administering
a GPFS file system in the IBM Spectrum Scale: Administration Guide.
Examples
Examples 1 - 5 illustrate the steps in configuring an RKM server and a key client and generating an
encryption key:
1. The following command makes an RKM server known to an IBM Spectrum Scale cluster. The name
keyserver01 is the host name of the SKLM server:
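A command of the following form can be used; because no --server-pwd file is specified, the
command prompts for the SKLM administrator password:
mmkeyserv server add keyserver01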
The following command displays information about all RKM servers that are known to the cluster. At
the moment, the only one is keyserver01:
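This is typically done with the server show action; the following invocation is illustrative:
mmkeyserv server show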
FIPS1402: off
Backup Key Servers:
Distribute: yes
Retrieval Timeout: 120
Retrieval Retry: 3
Retrieval Interval: 10000
REST Certificate Expiration: 2030-12-22 21:48:53 (-0500)
KMIP Certificate Expiration: 2021-11-02 13:03:52 (-0400)
2. The following command creates a tenant in the server that you defined in Example 1, keyserver01.
The name of the tenant is devG1:
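An illustrative form of the command, using the documented tenant add parameters, is:
mmkeyserv tenant add devG1 --server keyserver01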
3. The following command adds a key client to the tenant that you created in Example 2. The command
does not specify password files for the server and the new keystore, so the command prompts for the
passwords. The name of the key client is c34f2n03Client1:
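An illustrative form of the command is the following; because neither --server-pwd nor
--keystore-pwd is given, the command prompts for both passwords:
mmkeyserv client create c34f2n03Client1 --server keyserver01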
The following command displays information about all the key clients on the RKM server:
4. The following command registers the key client from Example 3 to the tenant from Example 2. To
ensure uniqueness in RKM IDs, it is a good practice to create the RKM ID name by combining the
names of the RKM server and the tenant. However, the RKM ID cannot be longer than 21 characters. In
this example the RKM ID is keyserver01_devG1:
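An illustrative form of the command is:
mmkeyserv client register c34f2n03Client1 --tenant devG1 --rkm-id keyserver01_devG1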
mmkeyserv: [I] Client currently does not have access to the key. Continue the registration
process ...
mmkeyserv: Successfully accepted client certificate
The following two commands now show that key client c34f2n03Client1 is registered to tenant
devG1:
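For example, illustrative invocations of the tenant show and client show actions are:
mmkeyserv tenant show devG1 --server keyserver01
mmkeyserv client show c34f2n03Client1 --server keyserver01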
The following command shows the contents of the new RKM stanza that was added to the RKM.conf
file:
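An illustrative invocation of the rkm show action is:
mmkeyserv rkm show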
You can also show the contents of the RKM.conf file by routing the contents of the file to the console:
# cat /var/mmfs/ssl/keyServ/RKM.conf
keyserver01_devG1 {
type = SKLM
kmipServerUri = tls://192.0.2.59:5696
keyStore = /var/mmfs/ssl/keyServ/serverKmip.1_keyserver01.c34f2n03Client1.1.p12
passphrase = pw4c34f2n03Client1
clientCertLabel = c34f2n03Client1
tenantName = devG1
}
5. The following example creates an encryption key in the tenant from Example 4. In the third line, the
command displays the new encryption key (KEY-d4e83148-e827-4f54-8e5b-5e1b5cc66de1):
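An illustrative form of the command is:
mmkeyserv key create --server keyserver01 --tenant devG1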
6. In the following example, assume that client certificate c6f2bc3n9client expires in a few days. The
system administrator issues the mmkeyserv client update command to replace the old certificate
with a new one that expires in 90 days.
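An illustrative form of the command follows; the keystore password file name is hypothetical:
mmkeyserv client update c6f2bc3n9client --days 90 --keystore-pwd /root/keystore.pwd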
Note: This command also updates the keystore password. If the --keystore-pwd
parameter is omitted, the keystore password remains the same.
See also
Location
/usr/lpp/mmfs/bin
mmlinkfileset command
Creates a junction that references the root directory of a GPFS fileset.
Synopsis
mmlinkfileset Device FilesetName [-J JunctionPath]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmlinkfileset command creates a junction at JunctionPath that references the root directory of
FilesetName. The junction is a special directory entry, much like a POSIX hard link, that connects a name
in a directory of one fileset, the parent, to the root directory of a child fileset. From the user's viewpoint, a
junction always appears as if it were a directory, but the user is not allowed to issue the unlink or rmdir
commands on a junction. Instead, the mmunlinkfileset command must be used to remove a junction.
If JunctionPath is not specified, the junction is created in the current directory with the name
FilesetName. The user may use the mv command to move the junction to a new location within the
parent fileset, but the mv command is not allowed to move the junction to a different fileset.
For information on GPFS filesets, see the IBM Spectrum Scale: Administration Guide.
Parameters
Device
The device name of the file system that contains the fileset.
File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
FilesetName
Specifies the name of the fileset to be linked. It must not already be linked into the namespace.
There are no restrictions on linking independent filesets, but a dependent fileset can only be linked
inside its own inode space.
-J JunctionPath
Specifies the name of the junction. The name must not refer to an existing file system object.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmlinkfileset command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
This command links fileset fset1 in file system gpfs1 to junction path /gpfs1/fset1:
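mmlinkfileset gpfs1 fset1 -J /gpfs1/fset1
The updated fileset list can then be displayed with the following command: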
mmlsfileset gpfs1
See also
• “mmchfileset command” on page 222
• “mmcrfileset command” on page 308
• “mmdelfileset command” on page 365
• “mmlsfileset command” on page 493
• “mmunlinkfileset command” on page 724
Location
/usr/lpp/mmfs/bin
mmlsattr command
Queries file attributes.
Synopsis
mmlsattr [-L] [-l]
[-d | --dump-attr]
[-n AttributeName | --get-attr AttributeName]
[-X | --hex-attr] [--hex-attr-name]
[-D | --dump-data-block-disk-numbers]
{--inode-number [SnapPath/]InodeNumber [[SnapPath/]InodeNumber...] |
Filename[ Filename...]}
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmlsattr command to display attributes of a file.
Results
For the specified file, the mmlsattr command lists:
• The current number of copies of data for a file and the maximum value
• The number of copies of the metadata for a file and the maximum value
• Whether the Direct I/O caching policy is in effect for a file
• The disk number distribution of all data block replicas for a file
Parameters
-l
Specifies that this command works only with regular files and directories and does not follow
symlinks. The default is to follow symlinks.
-L
Displays additional file attributes:
• The assigned storage pool name of the file.
• The name of the fileset that includes the file.
• If a file is a snapshot file, the name of the snapshot that includes the file is shown. If the file is a
regular file, an empty string is displayed.
• Whether the file is exposed, ill replicated, ill placed, or unbalanced (displayed under the flags
heading).
• Whether the file is immutable.
• Whether the file is in appendOnly mode.
• The creation time of the file.
• If the compact attribute is nonzero, then this parameter displays the number of directory slots that
were set by the mmchattr command with the --compact option or by the gpfs_prealloc
subroutine. For more information, see the topics “mmchattr command” on page 156 and
“gpfs_prealloc() subroutine” on page 950.
-L can be combined with -d | --dump-attr to display all extended attribute names and values for
each file.
-d | --dump-attr
Displays the names of all extended attributes for each file.
-n AttributeName | --get-attr AttributeName
Displays the name and value of the specified extended attribute for each file.
-X | --hex-attr
Displays the attribute value in hex.
--hex-attr-name
Displays the attribute name in hex.
-D | --dump-data-block-disk-numbers
Displays the disk number distribution for all replicas of all data blocks for each file. It is used for
resolving data block replica mismatches. For more information, see the Replica mismatches topic in
the IBM Spectrum Scale: Problem Determination Guide.
--inode-number [SnapPath/]InodeNumber
The inode number of the file to be queried. You must enter at least one inode number or file name, but
not both; if you specify more than one inode number, delimit each inode number by a space. If the
current working directory is not already inside the active file system or snapshot, then the inode
number has to be prefixed by the path to the active file system or snapshot. For example:
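(The file system path and the inode number in the following invocation are illustrative.)
mmlsattr --inode-number /gpfs/fs0/215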
Exit status
0
Successful completion.
nonzero
A failure has occurred. The return code equals the number of files from which the command was not
able to get attribute information.
Security
You must have read access to run the mmlsattr command.
You may issue the mmlsattr command only from a node in the GPFS cluster where the file system is
mounted.
Examples
1. To list the attributes of a file, issue the following command:
mmlsattr -L newfile
2. To show the attributes for all files in the root directory of file system fs0, issue the following
command:
mmlsattr /fs0/*
replication factors
metadata(max) data(max) file [flags]
------------- --------- ---------------
1 ( 1) 1 ( 1) /fs0/project4.sched
1 ( 1) 1 ( 1) /fs0/project4.hist
1 ( 1) 1 ( 1) /fs0/project5.plan
3. To show all extended attribute names and values for the file /ba1/newfile.fastq, issue the
following command:
mmlsattr -d -L /ba1/newfile.fastq
gpfs.CompressLibs: 0x0x4141616C7068616500
4. To show the disk number distribution of all data block replicas for the file /fs0/file1, which has 2
DataReplicas and 3 MaxDataReplicas, issue the following command:
mmlsattr -D /fs0/file1
See also
• “mmchattr command” on page 156
Location
/usr/lpp/mmfs/bin
mmlscallback command
Lists callbacks that are currently registered in the GPFS system.
Synopsis
mmlscallback [-Y] [CallbackIdentifier[,CallbackIdentifier...] | user | system | all]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmlscallback command to list some or all of the callbacks that are currently registered in the
GPFS system.
Parameters
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
CallbackIdentifier
Indicates the callback for which information is displayed.
user
Indicates all user-defined callbacks. This is the default.
system
Indicates all system-defined callbacks.
all
Indicates all callbacks currently registered with the system.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmlscallback command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To list all of the callbacks that are currently in the GPFS system, issue this command:
mmlscallback
test1
command = /tmp/myScript
event = startup
test2
command = /tmp/myScript2
event = shutdown
parms = %upNodes
To list a specific callback (for example, test2) that is currently in the GPFS system, issue this command:
mmlscallback test2
test2
command = /tmp/myScript2
event = shutdown
parms = %upNodes
See also
• “mmaddcallback command” on page 12
• “mmdelcallback command” on page 358
Location
/usr/lpp/mmfs/bin
mmlscluster command
Displays the current configuration information for a GPFS cluster.
Synopsis
mmlscluster [-Y] [--ces] [--cnfs] [--cloud-gateway]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmlscluster command to display the current configuration information for an IBM Spectrum
Scale cluster.
For the IBM Spectrum Scale cluster, the mmlscluster command displays:
• The cluster name
• The cluster ID
• The UID domain
• The remote shell command being used
• The remote file copy command being used
• The repository type (CCR or server-based)
• The primary cluster configuration server (if server-based repository)
• The secondary cluster configuration server (if server-based repository)
• A list of nodes belonging to the IBM Spectrum Scale cluster
For each node, the command displays:
• The node number assigned to the node by IBM Spectrum Scale
• GPFS daemon node interface name
• Primary network IP address
• IBM Spectrum Scale administration node interface name
• Designation, such as whether the node is any of the following:
– quorum node - A node in the cluster that is counted to determine if a quorum exists. Members of a
cluster use the quorum node to determine if it is safe to continue I/O operations when a
communications failure occurs.
– manager node - The file system node that provides the file system manager services to all of the
nodes using the file system, including: file system configuration, disk space allocation, token
management, and quota management.
– snmp_collector node - The designated SNMP collector node for the cluster. The GPFS SNMP subagent
runs on the designated SNMP collector node. For additional information see GPFS SNMP support in
IBM Spectrum Scale: Problem Determination Guide.
– gateway node - Ensures primary and Disaster Recovery cluster communication during failover.
– perfmon node - Performance monitoring nodes collect metrics and performance information and
send the information to one or more performance collection nodes.
Parameters
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
--ces
Displays information about protocol nodes.
--cnfs
Displays information about clustered NFS.
--cloud-gateway
Displays information about Transparent cloud tiering nodes.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmlscluster command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To display the current configuration information for the GPFS cluster, issue the mmlscluster
command with no parameters:
mmlscluster
The command displays the cluster information. The following example is typical:
2. To display the configuration information about the Transparent cloud tiering nodes, issue this
command:
mmlscluster --cloud-gateway
See also
• “mmaddnode command” on page 35
• “mmchcluster command” on page 164
• “mmcrcluster command” on page 303
• “mmdelnode command” on page 371
Location
/usr/lpp/mmfs/bin
mmlsconfig command
Displays the current configuration data for a GPFS cluster.
Synopsis
mmlsconfig [Attribute[,Attribute...]] [-Y]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmlsconfig command to display the requested configuration attributes for a GPFS cluster. If no
specific attributes are requested, the command displays all values that were set explicitly by the user.
Depending on your configuration, additional information that is set by GPFS might be displayed. If a
configuration attribute is not shown in the output of this command, the default value for that attribute, as
documented in the mmchconfig command, is in effect.
Parameters
Attribute
Specifies the name of an attribute to display with its value. If no name is specified, the command
displays a default list of attributes with their values. See Example 1.
For descriptions of the attributes, see the topic “mmchconfig command” on page 169. The
mmlsconfig command, which lists the values of attributes, and the mmchconfig command, which
sets the values of attributes, use the same attribute names.
Exception: To update the minimum release level, issue the mmchconfig command with the attribute
release=LATEST. To display the value of the minimum release level, issue the mmlsconfig
command with the minReleaseLevel parameter. For more information, see the topic Minimum
release level of a cluster in the IBM Spectrum Scale: Administration Guide.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Exit status
0
Successful completion.
nonzero
A failure occurred.
Security
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster. It must be able to do so without the use of a password and without producing any
extraneous messages. For more information, see Requirements for administering a GPFS file system in IBM
Spectrum Scale: Administration Guide.
Examples
1. To display the current configuration data for the GPFS cluster that you are running on, issue this
command:
mmlsconfig
2. To display the current values for the maxblocksize and pagepool attributes, issue the following
command:
mmlsconfig maxblocksize,pagepool
maxblocksize 4M
pagepool 1G
pagepool 512M [c6f1c3vp3]
3. To display the current value for the cipherList attribute, issue this command:
mmlsconfig cipherList
cipherList AUTHONLY
See also
• “mmchcluster command” on page 164
• “mmchconfig command” on page 169
• “mmcrcluster command” on page 303
Location
/usr/lpp/mmfs/bin
mmlsdisk command
Displays the current configuration and state of the disks in a file system.
Synopsis
mmlsdisk Device [-d "DiskName[;DiskName...]"] [-e | -Y] [-L]
or
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmlsdisk command to display the current state of the disks in the file system.
The mmlsdisk command may be run against a mounted or unmounted file system.
For each disk in the list, the mmlsdisk command displays the following:
• Disk name
• Driver type
• Logical sector size (under the heading "sector size")
• Failure group
• Whether it holds metadata
• Whether it holds data
• Status:
ready
Normal status.
suspended
or
to be emptied
Indicates that data is to be migrated off this disk.
being emptied
Transitional status in effect while a disk deletion is pending.
emptied
Indicates that data is already migrated off this disk.
replacing
Transitional status in effect for old disk while replacement is pending.
replacement
Transitional status in effect for new disk while replacement is pending.
• Availability:
up
The disk is available to GPFS for normal read and write operations.
down
No read and write operations can be performed on this disk.
recovering
An intermediate state for disks coming up, during which GPFS verifies and corrects data. Write
operations can be performed while a disk is in this state, but read operations cannot (because data
on the disk being recovered might be stale until the mmchdisk start command completes).
unrecovered
The disk was not successfully brought up.
• Disk ID
• Storage pool to which the disk is assigned
• Remarks: A tag is displayed if the disk is a file system descriptor replica holder, an excluded disk, or the
disk supports space reclaim.
Parameters
Device
The device name of the file system to which the disks belong. File system names need not be fully-
qualified. fs0 is as acceptable as /dev/fs0.
This must be the first parameter.
-d "DiskName[;DiskName...]"
The name of the disks for which you want to display current configuration and state information. When
you enter multiple values for DiskName, separate them with semicolons and enclose the list in
quotation marks.
"gpfs3nsd;gpfs4nsd;gpfs5nsd"
Options
-e
Displays all of the disks in the file system that do not have an availability of up and a status of ready. If
all disks in the file system are up and ready, the command displays a message stating that all disks
are up and ready.
-L
Displays an extended list of disk parameters that includes the disk id column and the remarks
column. The remarks column can contain one or more of the following tags:
desc
The disk is a file system descriptor replica holder.
excl
The disk is excluded by the mmfsctl command.
{nvme(t) | scsi(t) | auto(t)}
The disk is a device that supports space reclaim:
nvme(t)
The disk is a TRIM capable NVMe device that supports the mmreclaimspace command.
scsi(t)
The disk is a thin provisioned SCSI disk that supports the mmreclaimspace command.
auto(t)
The disk is either an nvme(t) device or a scsi(t) device. IBM Spectrum Scale will try to
detect the actual disk type automatically. To avoid problems, you should replace auto with
the correct disk type, nvme or scsi, as soon as you can.
Note: In IBM Spectrum Scale 5.0.5, space reclaim auto-detection is enhanced. You are encouraged
to use the auto keyword after your cluster is upgraded to 5.0.5.
For more information, see the topic IBM Spectrum Scale with data reduction storage devices in the
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
-M
Displays whether I/O requests to the disk are satisfied on the local node, or using an NSD server. If the
I/O is done using an NSD server, shows the NSD server name and the underlying disk name on that
server node.
-m
Displays whether I/O requests to the disk are satisfied on the local node, or using an NSD server. The
scope of this option is the node on which the mmlsdisk command is issued.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
If you are a root user, the node on which the command is issued must be able to execute remote shell
commands on any other node in the cluster without the use of a password and without producing any
extraneous messages. For more information, see Requirements for administering a GPFS file system in IBM
Spectrum Scale: Administration Guide.
When run as root, the command can also display disk information for remote file systems.
If you are a non-root user, you may specify only file systems that belong to the same cluster as the node
on which the mmlsdisk command was issued.
The mmlsdisk command does not work if GPFS is down.
Examples
1. To display the current state of disk gpfs2nsd in file system fs0, issue this command:
mmlsdisk fs0 -d gpfs2nsd
To display an extended list of disk parameters for file system fs0, issue this command:
mmlsdisk fs0 -L
In IBM Spectrum Scale V4.1.1 and later, the system displays information similar to the following
example:
4. To display whether the I/O is performed locally or using an NSD server, the NSD server name, and the
underlying disk name for the file system named test, issue this command:
mmlsdisk test -M
5. To display the same information as in the previous example, but limited to the node on which the
command is issued, issue this command:
mmlsdisk test -m
See also
• “mmadddisk command” on page 28
• “mmchdisk command” on page 210
• “mmdeldisk command” on page 360
• “mmrpldisk command” on page 679
Location
/usr/lpp/mmfs/bin
mmlsfileset command
Displays attributes and status for GPFS filesets.
Synopsis
mmlsfileset Device
[[Fileset[,Fileset...]] [-J Junction[,Junction...]] | -F FileName]
[-d [--block-size {BlockSize | auto}]] [-i] [-L] [-X] [-Y]
[--afm] [--deleted] [--iam-mode]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmlsfileset command to display information for the filesets that belong to a given GPFS file
system. The default is to display information for all filesets in the file system. You may choose to display
information for only a subset of the filesets.
When the -L flag is specified, the command omits the attributes that are otherwise listed without it,
namely the status and the junction path. In addition, if the fileset has a status of Deleted, then -L
displays the name of the latest snapshot that includes the fileset in place of the root inode number and
parent fileset identifier.
The attributes displayed are:
• Name of the fileset
• Status of the fileset (when the -L flag is omitted)
• Junction path to the fileset (when the -L flag is omitted)
• Fileset identifier (when the -L flag is included)
• Root inode number, if not deleted (when the -L flag is included)
• Parent fileset identifier, if not deleted (when the -L flag is included)
• Latest including snapshot, if deleted (when the -L flag is included)
• Creation time (when the -L flag is included)
• Inode space (when the -L flag is included)
• Number of inodes in use (when the -i flag is included)
• Data size (when the -d flag is included)
• Comment (when the -L flag is included)
• Caching-related information (when the --afm flag is included)
• Value of the permission change flag (when the -X flag is used to generate stanza output)
• Integrated archive manager (IAM) mode information
For information on GPFS filesets, see the IBM Spectrum Scale: Administration Guide.
Parameters
Device
The device name of the file system that contains the fileset.
File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
This must be the first parameter.
Fileset
Specifies a comma-separated list of fileset names.
-J Junction
Specifies a comma-separated list of path names. They are not restricted to fileset junctions, but may
name any file or directory within the filesets to be listed.
Note: The base of the junction path that is displayed is always the default mount point of the file
system in the cluster that owns the file system. For example, suppose that a file system has a default
mount point of /fs0 in the owning cluster and a default mount point of /remote_fs0 in a remote
cluster that accesses the file system. If you issue the mmlsfileset command from the accessing
cluster against /remote_fs0, the junction path that is displayed begins with /fs0, not /remote_fs0.
-F FileName
Specifies the name of a file containing either fileset names or path names. Each line must contain a
single entry. All path names must be fully-qualified.
-d
Displays the amount of storage in use for the fileset.
This operation requires an amount of time that is proportional to the size of the file system; therefore,
it can take several minutes or even hours on a large and heavily-loaded file system.
This optional parameter can impact overall system performance. Avoid running the mmlsfileset
command with this parameter frequently or during periods of high file system activity.
This option is not valid if the Device parameter is a remote file system.
--block-size {BlockSize | auto}
Specifies the unit in which the number of blocks is displayed. The value must be of the form [n]K, [n]M,
[n]G or [n]T, where n is an optional integer in the range 1 to 1023. The default is 1K. If auto is
specified, the number of blocks is automatically scaled to an easy-to-read value.
-i
Displays the number of inodes in use for the fileset.
This option is not valid if the Device parameter is a remote file system.
This operation requires an amount of time that is proportional to the number of inodes in the file
system; therefore, it can take several minutes or even hours on a large and heavily-loaded file system.
Information about the number of inodes in the fileset can be retrieved more efficiently with the
following command, if quota management has been enabled for the file system:
mmrepquota -j FileSystem
-L
Displays additional information for the fileset. This includes:
• Fileset identifier
• Root inode number
• Parent identifier
• Fileset creation time
• Inode space
• User defined comments, if any
If the fileset is a dependent fileset, dpnd will be displayed next to the inode space identifier.
-X
Generates stanza output containing the following:
• The same information presented by the -L flag
• The value of the permission change flag
• The same information presented by the --afm flag
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
--afm
Displays caching-related information for the fileset.
--deleted
Displays only the filesets with a status of Deleted.
--iam-mode
Displays integrated archive manager (IAM) mode information. For more information, see “mmchfileset
command” on page 222.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
Fileset owners can run the mmlsfileset command with the -L, -d, and -i options.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. This command displays fileset information for all filesets in file system gpfs1:
mmlsfileset gpfs1
2. These commands display information for a file system with filesets and snapshots. Note that deleted
filesets that are saved in snapshots are displayed with the name enclosed in parentheses. For example:
mmlsfileset fs1 -d -i
mmlsfileset fs1 -L
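A further sketch using the same file system name fs1 (output omitted; the exact result depends on the configuration): the --afm flag adds caching-related information, and --block-size auto scales the -d output to an easy-to-read unit:
mmlsfileset fs1 --afm
mmlsfileset fs1 -d --block-size auto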
See also
• “mmchfileset command” on page 222
• “mmcrfileset command” on page 308
• “mmdelfileset command” on page 365
• “mmlinkfileset command” on page 477
• “mmunlinkfileset command” on page 724
Location
/usr/lpp/mmfs/bin
mmlsfs command
Displays file system attributes.
Synopsis
mmlsfs {Device | all | all_local | all_remote} [-A] [-B] [-d] [-D]
[-E] [-f] [-i] [-I] [-j] [-k] [-K] [-L] [-m] [-M] [-n] [-o]
[-P] [-Q] [-r] [-R] [-S] [-t] [-T] [-V] [-Y] [-z]
[--create-time] [--encryption] [--fastea] [--file-audit-log]
[--filesetdf] [--inode-limit] [--is4KAligned] [--log-replicas]
[--maintenance-mode] [--mount-priority] [--perfileset-quota]
[--rapid-repair] [--subblocks-per-full-block] [--write-cache-threshold]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmlsfs command to list the attributes of a file system. You can issue the mmlsfs command for
file systems that belong to the current cluster or for file systems that are owned by other clusters.
Depending on your configuration, additional information that is set by GPFS can be displayed to help
problem determination when contacting the IBM Support Center.
Results
If you do not specify any options, all attributes of the file system are displayed. When you specify options,
only those attributes that are specified are listed, in the order issued in the command. Some parameters
are preset for optimum performance and, although they are displayed in the mmlsfs command output,
you cannot change them.
Parameters
The following parameter must be the first parameter:
Device | all | all_local | all_remote
Device
Indicates the device name of the file system for which information is displayed. File system names
do not need to be fully qualified. fs0 is as acceptable as /dev/fs0.
all
Indicates all file systems that are known to this cluster.
all_local
Indicates all file systems that are owned by this cluster.
all_remote
Indicates all file systems that are owned by another cluster.
This must be the first parameter.
The following optional parameters, when used, must be provided after the Device | all | all_local |
all_remote parameter:
-A
Displays if and when the file system is automatically mounted.
-B
Displays the block size of the file system in bytes. For more information about block size, see the
description of the -B BlockSize parameter in “mmcrfs command” on page 315.
-d
Displays the names of all of the disks in the file system.
-D
Displays the type of file locking semantics that are in effect (nfs4 or posix).
-E
Displays the exact mtime values reported.
-f
Displays the minimum fragment (subblock) size of the file system in bytes. The subblock size and the
number of subblocks in a block are determined by the block size. For more information, see the
description of the -B BlockSize parameter in “mmcrfs command” on page 315.
-i
Displays the inode size, in bytes.
-I
Displays the indirect block size, in bytes.
-j
Displays the block allocation type.
-k
Displays the type of authorization that is supported by the file system.
-K
Displays the strict replication enforcement.
-L
Displays the internal log file size.
-m
Displays the default number of metadata replicas.
-M
Displays the maximum number of metadata replicas.
-n
Displays the estimated number of nodes for mounting the file system.
-o
Displays the additional mount options.
-P
Displays the storage pools that are defined within the file system.
-Q
Displays which quotas are currently enforced on the file system.
-r
Displays the default number of data replicas.
-R
Displays the maximum number of data replicas.
-S
Displays whether the updating of atime is suppressed for the gpfs_stat(), gpfs_fstat(),
stat(), and fstat() calls.
-t
Displays the Windows drive letter.
-T
Displays the default mount point.
-V
Displays the current format version of the file system.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
-z
Displays whether DMAPI is enabled for this file system.
--create-time
Displays the creation time of the file system.
--encryption
Displays a yes or no value to indicate whether encryption is enabled. This value cannot be changed
with the mmchfs command. When the file system is created, this value is set to no. When an encryption
policy is established for the file system, the value is set to yes.
--fastea
Displays a yes or no value to indicate whether fast external attributes are enabled. Displays a
migrating value if migration was initiated with mmmigratefs --fastea but is not yet complete.
--file-audit-log
Displays whether file audit logging is enabled or disabled.
--filesetdf
Displays a yes or no value to indicate whether filesetdf is enabled. If yes, the df command
reports numbers based on the quotas for the fileset and not for the total file system. This option
affects the df command behavior only on Linux nodes.
--inode-limit
Displays the maximum number of files in the file system.
--is4KAligned
Displays whether file systems are formatted to be 4K aligned.
--log-replicas
Displays the number of recovery log replicas. If a value of 0 is displayed, the number of recovery log
replicas is the same as the number of metadata replicas currently in effect for the file system.
--maintenance-mode
Displays a yes or no value to indicate whether file system maintenance mode is on or off:
• The value yes indicates that file system maintenance mode is on.
• The value no, which is the default, indicates that file system maintenance mode is off.
For more information on file system maintenance mode, see File system maintenance mode in IBM
Spectrum Scale: Administration Guide.
Note: Another possible output value can be request in progress. This output value means that
the file system has been asked to enable or disable file system maintenance mode, and that request is
in progress.
--mount-priority
Displays the assigned mount priority.
--perfileset-quota
Displays the per-fileset quota.
--rapid-repair
Displays a yes or no value indicating whether the per-block replication tracking and repair feature is
enabled.
--write-cache-threshold
Displays the threshold below which synchronous writes are initially buffered in the highly available
write cache before being written back to primary storage.
--subblocks-per-full-block
Displays the number of subblocks in a file system data block.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Note: The command treats the following conditions as failures:
• The file system that you specified was not found.
• You specified all, all_local, or all_remote and no file systems were found.
Security
If you are a root user, the node on which the command is issued must be able to execute remote shell
commands on any other node in the cluster without the use of a password and without producing any
extraneous messages. For more information, see Requirements for administering a GPFS file system in IBM
Spectrum Scale: Administration Guide.
Examples
1. The following command displays the attributes of the file system gpfs1:
mmlsfs gpfs1
2. The following command displays the automatic mount attribute for all file systems that are known to this cluster:
mmlsfs all -A
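As an additional sketch (the device name gpfs1 is reused from the first example; output not shown), several attribute flags can be combined so that only the selected attributes are listed, in the order given on the command line:
mmlsfs gpfs1 -B -T --inode-limit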
See also
• “mmcrfs command” on page 315
• “mmchfs command” on page 230
• “mmdelfs command” on page 369
Location
/usr/lpp/mmfs/bin
mmlslicense command
Displays information about the IBM Spectrum Scale node licensing designation or about disk and cluster
capacity.
Synopsis
mmlslicense [-Y] [-L | --capacity [--formatted] | --licensed-usage | --ilmt-data]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmlslicense command to display the number of IBM Spectrum Scale client, FPO, and server
licenses assigned to the nodes in the cluster.
For information on IBM Spectrum Scale license designation, see IBM Spectrum Scale license designation
in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
Parameters
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
-L
Displays information about the license type that is associated with each of the nodes in the cluster. An
asterisk after the license type indicates insufficient license level for the roles that the node performs.
--capacity [--formatted]
Displays disk and cluster size information.
--formatted
Inserts commas to separate groups of three numerals for readability.
--licensed-usage
Displays the names and sizes of the NSDs, the total size of all the NSDs, and the licensed capacity
limit. See Example 4. This option is valid only in the IBM Spectrum Scale Developer Edition.
Note: The licensed capacity of the IBM Spectrum Scale Developer Edition is 12 TB
(12,000,000,000,000 bytes) of storage. The mmcrnsd command calculates the size of the current
NSDs plus the size of the proposed new NSDs and fails with an error message if the licensed capacity
would be exceeded. For more information see the topic “mmcrnsd command” on page 332.
--ilmt-data
Writes the software identification information, such as the product edition and the product ID, into
the file /var/adm/ras/ILMT_Data.slmtag. This information is used by the IBM License Metric Tool
(ILMT). The option also writes the DECIMAL_TERABYTE metric into the log file. This metric represents
the terabytes of storage capacity, as a decimal number, for all the NSDs in the cluster.
This option must be used independently and not with any other option, including the -Y option. It is
valid only in the following editions:
• IBM Spectrum Scale Erasure Code Edition
• IBM Spectrum Scale Data Management Edition
Exit status
0
Successful completion.
nonzero
A failure occurred.
Security
You must have root authority to run the mmlslicense command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster. The shell commands must be executed without the use of a password and must not
produce any extraneous messages. For more information, see Requirements for administering a GPFS file
system in IBM Spectrum Scale: Administration Guide.
Examples
1. The following command displays summary information about the IBM Spectrum Scale licenses of the
nodes in the cluster:
#mmlslicense
Summary information
---------------------
Number of nodes defined in the cluster: 4
Number of nodes with server license designation: 1
Number of nodes with FPO license designation: 0
Number of nodes with client license designation: 2
Number of nodes still requiring server license designation: 1
Number of nodes still requiring client license designation: 1
This node runs IBM Spectrum Scale Advanced Edition
2. The following command displays the types of IBM Spectrum Scale licenses that are associated with the
nodes in the cluster:
#mmlslicense -L
Node name Required license Designated license
-------------------------------------------------------------------
k145n05.kgn.ibm.com server server
k145n06.kgn.ibm.com server client *
k145n07.kgn.ibm.com client client
k145n08.kgn.ibm.com client none *
Summary information
---------------------
Number of nodes defined in the cluster: 4
Number of nodes with server license designation: 1
Number of nodes with FPO license designation: 0
Number of nodes with client license designation: 2
Number of nodes still requiring server license designation: 1
Number of nodes still requiring client license designation: 1
This node runs IBM Spectrum Scale Advanced Edition
3. The following command displays disk and cluster size information with the numerals grouped by commas for readability:
#mmlslicense --capacity --formatted
Cluster Summary:
======================
Cluster Total Capacity: 21,474,836,480 Bytes
4. The following command displays the names and sizes of the NSDs, the total size of all the NSDs, and
the licensed capacity limit:
#mmlslicense --licensed-usage
NSD Name NSD Size (Bytes)
------------------------------------------------------------------
de1_c35f1m4n09_sda 299,439,751,168
de1_c35f1m4n09_sdf 99,488,890,880
de1_mmimport_test 73,284,976,640
5. The following command displays disk capacities in an environment that contains both vdisks and
physical disks:
# mmlslicense --capacity
NSD Summary:
======================
Total Number of NSDs: 40
RG001LG001VS002: 53957623808 Bytes
RG001LG002VS002: 53957623808 Bytes
RG001LG003VS002: 53957623808 Bytes
RG001LG004VS002: 53957623808 Bytes
RG001LG005VS002: 53957623808 Bytes
RG001LG006VS002: 53957623808 Bytes
RG002LG001VS016: 16544432128 Bytes
RG002LG002VS016: 16544432128 Bytes
RG002LG003VS016: 16544432128 Bytes
RG002LG004VS016: 16544432128 Bytes
RG002LG005VS016: 16544432128 Bytes
RG003LG001VS001: 2002248531968 Bytes
RG003LG001VS004: 252160507904 Bytes
RG003LG001VS007: 252160507904 Bytes
RG003LG002VS001: 2002248531968 Bytes
RG003LG002VS004: 252160507904 Bytes
RG003LG002VS007: 252160507904 Bytes
RG003LG003VS001: 2002248531968 Bytes
RG003LG003VS004: 252160507904 Bytes
RG003LG003VS007: 252160507904 Bytes
RG003LG004VS001: 2002248531968 Bytes
RG003LG004VS004: 252160507904 Bytes
RG003LG004VS007: 252160507904 Bytes
RG003LG005VS001: 2002248531968 Bytes
RG003LG005VS004: 252160507904 Bytes
RG003LG005VS007: 252160507904 Bytes
RG003LG006VS001: 2002248531968 Bytes
RG003LG006VS004: 252160507904 Bytes
RG003LG006VS007: 252160507904 Bytes
RG003LG007VS001: 2002248531968 Bytes
RG003LG007VS004: 252160507904 Bytes
RG003LG007VS007: 252160507904 Bytes
RG003LG008VS001: 2002248531968 Bytes
RG003LG008VS007: 252160507904 Bytes
sdb: 2097152000 Bytes
sdc: 2097152000 Bytes
sdd: 2097152000 Bytes
sde: 2097152000 Bytes
sdf: 2097152000 Bytes
sdg: 2097152000 Bytes
Cluster Summary:
======================
Cluster Total Capacity: 20219446689792 Bytes
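A brief additional sketch, not taken from a live system: the -Y option can be combined with -L to obtain the per-node license designations in a colon-delimited, machine-readable form:
mmlslicense -Y -L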
See also
• “mmchlicense command” on page 237
Location
/usr/lpp/mmfs/bin
mmlsmgr command
Displays which node is the file system manager for the specified file systems or which node is the cluster
manager.
Synopsis
mmlsmgr [Device [Device...]] [-Y]
or
mmlsmgr -C RemoteClusterName
or
mmlsmgr -c
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmlsmgr command to display which node is the file system manager or cluster manager for the
file system.
If you do not provide a Device operand, file system managers for all file systems within the current cluster
for which a file system manager has been appointed are displayed.
Parameters
Device
The device names of the file systems for which the file system manager information is displayed.
If more than one file system is listed, the names must be delimited by a space. File system names
need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
If no file system is specified, information about all file systems is displayed.
-C RemoteClusterName
Displays the name of the nodes that are file system managers in cluster RemoteClusterName.
-c
Displays the current cluster manager node.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
If you are a root user, the node on which the command is issued must be able to execute remote shell
commands on any other node in the cluster without the use of a password and without producing any
extraneous messages. For more information, see Requirements for administering a GPFS file system in IBM
Spectrum Scale: Administration Guide.
A root user can also issue the mmlsmgr command against remote file systems.
If you are a non-root user, you may specify only file systems that belong to the same cluster as the node
on which the mmlsmgr command was issued.
Examples
1. To display the file system manager node information for all the file systems, issue this command:
mmlsmgr
The output shows the device name of the file system and the file system manager's node number and
name, in parentheses, as they are recorded in the GPFS cluster data.
2. To display the file system manager information for file systems gpfs2 and gpfs3, issue this
command:
mmlsmgr gpfs2 gpfs3
See also
• “mmchmgr command” on page 239
Location
/usr/lpp/mmfs/bin
mmlsmount command
Lists the nodes that have a given GPFS file system mounted.
Synopsis
mmlsmount {Device | all | all_local | all_remote | {-F DeviceFileName}} [-L]
[-Y] [-C {all | all_remote | ClusterName[,ClusterName...]}]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmlsmount command reports if a file system is in use at the time the command is issued. A file
system is considered to be in use if it is explicitly mounted with the mount or mmmount command, or if it
is mounted internally for the purposes of running some other GPFS command. For example, when you run
the mmrestripefs command, the file system will be internally mounted for the duration of the
command. If mmlsmount is issued in the interim, the file system will be reported as being in use by the
mmlsmount command but, unless it is explicitly mounted, will not show up in the output of the mount or
df commands.
Parameters
Device | all | all_local | all_remote | {-F DeviceFileName}
Indicates the file system or file systems for which information is displayed.
Device
Indicates the device name of the file system for which information is displayed. File system names
need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
all
Indicates all file systems known to this cluster.
all_local
Indicates all file systems owned by this cluster.
all_remote
Indicates all file systems owned by another cluster.
-F DeviceFileName
Specifies a file containing the device names, one per line, of the file systems for which information
is displayed.
This must be the first parameter.
Options
-L
Specifies to list the nodes that have the file system mounted.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
-C {all | all_remote | ClusterName[,ClusterName...]}
Specifies the clusters for which mount information is requested.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
If you are a root user, the node on which the command is issued must be able to execute remote shell
commands on any other node in the cluster without the use of a password and without producing any
extraneous messages. For more information, see Requirements for administering a GPFS file system in IBM
Spectrum Scale: Administration Guide.
If you are a non-root user, you may specify only file systems that belong to the same cluster as the node
on which the mmlsmount command was issued.
Examples
1. To see how many nodes have file system fs2 mounted, issue this command:
mmlsmount fs2
2. To display mount information for all file systems that are known to this cluster, issue this command:
mmlsmount all
3. To display mount information for all file systems that are owned by other clusters, issue this command:
mmlsmount all_remote
4. To list the nodes that have any of the file systems mounted, issue this command:
mmlsmount all -L
If a file system is not mounted anywhere, the output includes a line similar to the following:
File system gpfs1 is not mounted.
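As an additional sketch based on the synopsis (file system fs2 as in example 1; output not shown), the -C option can widen the report to particular clusters, for example to list the mounting nodes in all clusters that access the file system:
mmlsmount fs2 -L -C all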
See also
• “mmmount command” on page 537
• “mmumount command” on page 721
Location
/usr/lpp/mmfs/bin
mmlsnodeclass command
Displays node classes defined in the system.
Synopsis
mmlsnodeclass [ClassName[,ClassName...] | --user | --system | --all] [-Y]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmlsnodeclass command to display node classes defined in the system.
Parameters
ClassName
Displays the specified node class.
--user
Displays all user-defined node classes. This is the default.
--system
Displays all system-defined node classes.
--all
Displays both the system-defined and user-defined node classes.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmlsnodeclass command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To display the current user-defined node classes, issue this command:
mmlsnodeclass
2. To display all node classes defined in the system, issue this command:
mmlsnodeclass --all
3. To display only the nodes that are quorum nodes, issue this command:
mmlsnodeclass quorumNodes
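A short additional sketch: combining --all with -Y lists every system-defined and user-defined node class in colon-delimited form, which can be convenient for scripted parsing:
mmlsnodeclass --all -Y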
See also
• “mmchnodeclass command” on page 248
• “mmcrnodeclass command” on page 330
• “mmdelnodeclass command” on page 374
Location
/usr/lpp/mmfs/bin
mmlsnsd command
Displays Network Shared Disk (NSD) information for the GPFS cluster.
Synopsis
mmlsnsd [-a | -F | -f Device | -d "DiskName[;DiskName...]"]
[-L | -m | -M | -X] [-Y | -v]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmlsnsd command to display the current information for the NSDs belonging to the GPFS cluster.
The default is to display information for all NSDs defined to the cluster (-a). Otherwise, you may choose
to display the information for a particular file system (-f) or for all disks that do not belong to any file
system (-F).
Parameters
-a
Displays information for all of the NSDs belonging to the GPFS cluster. This is the default.
-f Device
Specifies the device name of the file system for which you want NSD information displayed. File
system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
-F
Displays the NSDs that are not in use.
-d DiskName[;DiskName...]
Specifies the name of the NSDs for which you want information displayed. When you enter multiple
DiskNames, separate them with semicolons and enclose the entire string of disk names in quotation
marks:
"gpfs3nsd;gpfs4nsd;gpfs5nsd"
Options
-L
Displays the information in a long format that shows the NSD identifier.
-m
Maps the NSD name to its disk device name on the local node and, if applicable, on the NSD server
nodes.
-M
Maps the NSD names to their disk device names on all nodes.
This is a slow operation; use it only for problem determination.
-v
Specifies that the output should contain error information, where available.
-X
Maps the NSD name to its disk device name on the local node and, if applicable, on the NSD server
nodes. The -X option also displays extended information for the NSD volume ID and information such
as NSD server status and Persistent Reserve (PR) enablement in the Remarks field. Using the -X
option is a slow operation and is recommended only for problem determination.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to issue the mmlsnsd command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To display the default information for all of the NSDs belonging to the cluster, issue this command:
mmlsnsd
2. To display all of the NSDs attached to the node from which the command is issued, issue this
command:
mmlsnsd -m
3. To display all of the NSDs in the GPFS cluster in extended format, issue this command:
mmlsnsd -L
4. To display extended disk information about disks hd3n97, sdfnsd, and hd5n98, issue this command:
mmlsnsd -X -d "hd3n97;sdfnsd;hd5n98"
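As a sketch only (the device name fs1 is assumed), the -f and -m options can be combined to map just the NSDs of one file system to their device names on the local node and the NSD servers:
mmlsnsd -f fs1 -m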
See also
• “mmcrnsd command” on page 332
• “mmdelnsd command” on page 376
Location
/usr/lpp/mmfs/bin
mmlspolicy command
Displays policy information.
Synopsis
mmlspolicy Device [-L] [-Y]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmlspolicy command displays policy information for a given file system. The information displayed
includes:
• When the policy file was installed.
• The user who installed the policy file.
• The node on which the policy file was installed.
• The first line of the original policy file.
For information about GPFS policies and file placement, see Information Lifecycle Management in IBM
Spectrum Scale: Administration Guide.
Parameters
Device
The device name of the file system for which policy information is to be displayed. File system names
need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
-L
Displays the entire original policy file. If this flag is not specified, only the first line of the original policy
file is displayed.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. This command displays basic information for the policy installed for file system fs2:
mmlspolicy fs2
2. This command displays extended information for the policy installed for file system fs2:
mmlspolicy fs2 -L
/* Exclude Rule */
RULE 'Exclude root users files' EXCLUDE WHERE USER_ID = 0 AND
name like '%org%'
/* Delete Rule */
RULE 'delete files' DELETE WHERE PATH_NAME like '%tmp%'
/* Migrate Rule */
RULE 'sp4.files' MIGRATE FROM POOL 'sp4' TO POOL 'sp5' WHERE
name like '%sp4%'
/* End of Policy */
3. In this example, no policy file was installed for the specified file system:
mmlspolicy fs4 -L
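A brief additional sketch using the file system name fs2 from the earlier examples: the -Y option returns the policy summary in colon-delimited, machine-readable form:
mmlspolicy fs2 -Y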
See also
• “mmapplypolicy command” on page 80
• “mmchpolicy command” on page 255
Location
/usr/lpp/mmfs/bin
mmlspool command
Displays information about the known storage pools.
Synopsis
mmlspool Device {StoragePool[,StoragePool...] | all} [-L]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmlspool command displays basic or detailed information about the storage pools in a file system.
Parameters
Device
Specifies the device name of the file system for which storage pool information is to be displayed. File
system names do not need to be fully qualified; for example, fs0 is as acceptable as /dev/fs0.
StoragePool[,StoragePool...]
Specifies one or more storage pools for which information is to be displayed.
all
Displays information about all the storage pools in the specified file system.
-L
Displays detailed information about each storage pool.
Exit status
0
Successful completion.
nonzero
A failure occurred.
Security
If you are a root user, the node on which the command is issued must be able to execute remote shell
commands on any other node in the cluster without the use of a password and without producing any
extraneous messages. For more information, see Requirements for administering a GPFS file system in IBM
Spectrum Scale: Administration Guide.
If you are a non-root user, you may specify only file systems that belong to the same cluster as the node
on which the mmlspool command was issued.
Examples
1. To show basic information about all storage pools in file system fs1, issue this command:
mmlspool fs1 all
The system displays information similar to the following:
Name Id
system 0
sataXXX 65537
2. To show detailed information about storage pool p1 in file system fs1, issue this command:
mmlspool fs1 p1 -L
Pool:
name = p1
poolID = 65537
blockSize = 4 MB
usage = dataOnly
maxDiskSize = 497 GB
layoutMap = cluster
allowWriteAffinity = no
writeAffinityDepth = 0
blockGroupFactor = 1
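As a further sketch using the names from the examples above, a comma-separated list of pools can be given instead of all, with or without -L:
mmlspool fs1 system,p1 -L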
See also
• “mmlsattr command” on page 479
• “mmlscallback command” on page 482
• “mmlscluster command” on page 484
• “mmlsconfig command” on page 487
• “mmlsdisk command” on page 489
• “mmlspolicy command” on page 518
• “mmlsquota command” on page 527
• “mmlssnapshot command” on page 532
See also the following IBM Spectrum Scale RAID: Administration topics:
• "mmlsrecoverygroup command"
• "mmlsvdisk command"
Location
/usr/lpp/mmfs/bin
mmlsqos command
Displays the I/O performance values of a file system, when you enable Quality of Service for I/O
operations (QoS) with the mmchqos command.
Synopsis
mmlsqos Device [--seconds Seconds] [--sum-classes {yes | no}]
        [--sum-nodes {yes | no}] [--pool {all | PoolName}]
        [--fine-stats DisplayRange] [-Y]
Availability
Available on all IBM Spectrum Scale editions.
Description
Attention: The mmqos command provides the same functionality as both the mmchqos command
and the mmlsqos command and has additional features. Future QoS features will be added to the
mmqos command rather than to either the mmchqos command or the mmlsqos command. For
more information, see “mmqos command” on page 610.
With the mmlsqos command, you can display the consumption of I/O operations by processes that
access designated storage pools. With the mmchqos command, you can regulate I/O access to a specified
storage pool by allocating shares of I/O operations to two QoS classes:
maintenance
The default QoS class for some I/O intensive, potentially long-running GPFS commands, such as
mmbackup, mmrestore
other
The default QoS class for all other processes.
A third class, misc, is used to count the IOPS that some critical file system processes consume. You
cannot assign IOPS to this class, but its count of IOPS is displayed in the output of the mmlsqos
command.
Remember the following points:
• Allocations persist across unmounting and remounting the file system.
• QoS stops applying allocations when you unmount the file system and resumes when you remount it.
• When you change allocations or mount the file system, a brief delay due to reconfiguration occurs
before QoS starts applying allocations.
For more information about this command, see the topic Setting the Quality of Service for I/O operations
(QoS) in the IBM Spectrum Scale: Administration Guide.
When the file system is mounted, the command displays information about the QoS classes of both
explicitly named pools and unnamed pools. Unnamed pools are storage pools that you have not specified
by name in any mmchqos command. When the file system is unmounted, the command displays
information about only the QoS classes of explicitly named pools.
Parameters
Device
The device name of the file system to which the QoS action applies.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
--fine-stats DisplayRange
Displays the fine-grained statistics that are currently in memory. The DisplayRange values specify the
indexes of the first and last blocks of statistics in the range of blocks that you want to display, such as
0-4. If you specify only one integer, such as 2, the command displays all the blocks in memory
starting with that index and going to the end. The last line of the output displays the index of the next
block in memory. You can avoid re-displaying statistics by having the next mmlsqos command display
statistics beginning at this block index.
Fine-grained statistics are taken at 1-second intervals and contain more information than regular
statistics. They are intended to be used as input for programs that analyze and display data, such as
the example plotting program at /usr/lpp/mmfs/samples/charts/qosplotfine.pl. For the
content of the statistics, see the subtopic later in this topic.
--pool {all | PoolName}
Displays the I/O performance values for all QoS pools if all is specified, or for the named pool if a
pool name is specified. The default is all.
--seconds Seconds
Displays the I/O performance values for the previous number of seconds. The valid range of seconds
is 1-999. The default value is 60 seconds. The values are displayed for subperiods within the period
that you specify. The subperiods might be every 5 seconds over the last 60 seconds, or every 60
seconds over the last 600 seconds. You cannot configure the number or length of subperiods.
--sum-classes {yes | no}
Displays the I/O performance for each QoS class separately if no is specified, or summed across all
the QoS classes if yes is specified. The default is no.
--sum-nodes {yes | no}
If yes is specified, displays the I/O performance summed across all the nodes in the cluster. If no is
specified, displays the I/O performance for each node separately. The default is yes.
Exit status
0
Successful completion.
Nonzero
A failure occurred.
Security
You must have root authority to run the mmlsqos command.
The node on which you enter the command must be able to execute remote shell commands on any other
administration node in the cluster. It must be able to do so without the use of a password and without
producing any extraneous messages. For more information, see Requirements for administering a GPFS
file system in the IBM Spectrum Scale: Administration Guide.
QOS values::
Displays, for each storage pool that you configured, the name of the storage pool and the IOPS that
you assigned to the other class and the maintenance class. In the following example fragment, the
command shows that the system storage pool is configured with the value of inf for both QoS
classes:
The qualifier /all_local after maintenance indicates that the maintenance IOPS are applied to all
the file systems that are owned by the cluster. This value is the default for the maintenance class.
QOS status::
Indicates whether QoS is regulating the consumption of IOPS ("throttling") and also whether QoS is
recording ("monitoring") the consumption of IOPS of each storage pool.
The following sample output is complete:
# mmlsqos fs --seconds 30
QOS config:: enabled
QOS values:: pool=system,other=inf,maintenance/all_local=inf:pool=fpodata,other=inf,maintenance/
all_local=inf
QOS status:: throttling active, monitoring active
=== for pool fpodata
01:31:45 misc iops=11 ioql=0.016539 qsdl=1.2e-06 et=5
=== for pool system
01:31:45 misc iops=8.2 ioql=0.013774 qsdl=2e-06 et=5
The command mmlsqos fs --seconds 30 requests a display of I/O performance values for all QoS
pools over the previous 30 seconds. Because the parameters --sum-classes and --sum-nodes are
omitted, the command also requests I/O performance for each QoS class separately and summed
across all the nodes of the cluster.
The information that is displayed for the two configured pools, fpodata and system, indicates that IOPS
occurred only for processes in the misc class. The meaning of the categories in each line is as follows:
First column
The time when the measurement period ends.
Second column
The QoS class for which the measurement is made.
iops=
The performance of the class in I/O operations per second.
ioql=
The average number of I/O requests in the class that are pending for reasons other than being queued
by QoS. This number includes, for example, I/O requests that are waiting for network or storage
device servicing.
qsdl=
The average number of I/O requests in the class that are queued by QoS. When the QoS system
receives an I/O request from the file system, QoS first finds the class to which the I/O request
belongs. It then finds whether the class has any I/O operations available for consumption. If not, then
QoS queues the request until more I/O operations become available for the class. The Qsdl value is
the average number of I/O requests that are held in this queue.
et=
The interval in seconds during which the measurement was made.
You can calculate the average service time for an I/O operation as (Ioql + Qsdl)/Iops. For a system that is
running IO-intensive applications, you can interpret the value (Ioql + Qsdl) as the number of threads in the
I/O-intensive applications. This interpretation assumes that each thread spends most of its time waiting
for an I/O operation to complete.
specified in the mmchqos command, then each line displays the data for one QoS program that is running
on the specified node.
Time, Class, Node, Iops, TotSctrs, Pool, Pid, RW, SctrI, AvgTm, SsTm, MinTm, MaxTm, AvgQd, SsQd
1480606227,misc,172.20.0.21,285,2360,system,0,R,16,0.000260,0.000197,0.000006,0.005894,0.000001,0.000000
1480606228,misc,172.20.0.21,631,5048,system,0,R,16,0.000038,0.000034,0.000006,0.002463,0.000000,0.000000
1480606239,other,172.20.0.21,13,112,system,26724,R,16,0.000012,0.000000,0.000008,0.000022,0.000001,0.000000
1480606239,other,172.20.0.21,1,512,system,26724,R,512,0.000375,0.000000,0.000375,0.000375,0.000002,0.000000
1480606239,other,172.20.0.21,30,15360,system,26724,W,512,0.000910,0.000004,0.000451,0.002042,0.000001,0.000000
1480606241,misc,172.20.0.21,5,48,system,14680072,W,16,0.000135,0.000000,0.000025,0.000258,0.000000,0.000000
1480606240,other,172.20.0.21,4,32,system,26724,R,16,0.000017,0.000000,0.000009,0.000022,0.000001,0.000000
1480606240,other,172.20.0.21,48,24576,system,26724,W,512,0.000916,0.000006,0.000490,0.002141,0.000001,0.000000
1480606241,other,172.20.0.21,10,80,system,26724,R,16,0.000017,0.000000,0.000007,0.000039,0.000001,0.000000
1480606241,other,172.20.0.21,34,17408,system,26724,W,512,0.000873,0.000007,0.000312,0.001957,0.000001,0.000000
1480606241,misc,172.20.0.21,12,104,system,15597572,W,16,0.000042,0.000000,0.000024,0.000096,0.000000,0.000000
1480606241,misc,172.20.0.21,1,512,system,15597572,W,512,0.000192,0.000000,0.000192,0.000192,0.000000,0.000000
1480606241,misc,172.20.0.21,9,72,system,14680071,W,16,0.000029,0.000000,0.000024,0.000040,0.000000,0.000000
## conti=4
MinTm
The minimum time that was required for an I/O operation to be completed.
MaxTm
The maximum time that was required for an I/O operation to be completed.
AvgQd
The mean time for which QoS imposed a delay of the read or write operation.
SsQd
The sum of the squares of differences from the mean value that is displayed for AvgQd.
The last line in the example indicates that the index of the next block to be displayed is 4. You can avoid
re-displaying statistics by having the next mmlsqos command display statistics beginning at this block
index.
Examples
1. The following command displays the I/O performance values for all the pools in the file system over
the previous 60 seconds. It does so for each QoS class separately and summed across all the nodes in
the cluster.
2. The following command displays the I/O performance values for the named pool over the previous 60
seconds. It does so for each QoS class separately and for each node separately.
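The commands for the two examples above are not shown; a hedged sketch, assuming a file system named fs0 and the pool name fpodata from the earlier sample output, might look like the following. The first invocation relies on the defaults (--pool all, --seconds 60, --sum-classes no, --sum-nodes yes); the second names a pool and reports each node separately:
mmlsqos fs0
mmlsqos fs0 --pool fpodata --sum-nodes no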
See also
• “mmchqos command” on page 260
• Setting the Quality of Service for I/O operations (QoS) in the IBM Spectrum Scale: Administration Guide.
Location
/usr/lpp/mmfs/bin
mmlsquota command
Displays quota information for a user, group, or fileset.
Synopsis
mmlsquota [-u User | -g Group] [-v | -q] [-e] [-C ClusterName]
[-Y] [--block-size {BlockSize | auto}] [Device[:Fileset] ...]
or
or
or
Availability
Available on all IBM Spectrum Scale editions.
Description
For the specified User, Group, or Fileset the mmlsquota command displays information about quota limits
and current usage on each file system in the cluster. This information is displayed only if quota limits have
been established and the user has consumed some amount of storage. If you want quota information for a
User, Group, or Fileset that has no file system storage allocated at the present time, you must specify -v.
Important: Quota limits are not enforced for root users (by default). For information on managing quotas,
see Managing GPFS quotas in the IBM Spectrum Scale: Administration Guide.
If neither the -g, -u, or -j option is specified, the default is to display only user quotas for the user who
issues the command.
Replication:
• Replicated data is included in data block current usage. See the note under "Current usage" in the
"Block limits" list later in this topic.
• Data replication and metadata replication do not affect the current number of files that are reported for
quotas. See the note under "Current number of files" in the "File limits" list later in this topic.
For each file system in the cluster, the mmlsquota command displays:
1. Block limits:
• Quota type (USR or GRP or FILESET)
• Current usage
Note: If data replication is enabled, the data block current usage includes the replicated data. For
more information, see the topic Listing quotas in the IBM Spectrum Scale: Administration Guide.
• Soft limit
• Hard limit
• Space in doubt
• Grace period
2. File limits:
• Current number of files
Note: The current number of files is not increased by data replication or metadata replication. For
more information, see the topic Listing quotas in the IBM Spectrum Scale: Administration Guide.
• Soft limit
• Hard limit
• Files in doubt
• Grace period
Note:
• In cases where small files do not have an additional block allocated for them, quota usage might
show less space usage than expected.
• If you want to check the grace period that is set, specify mmrepquota -t.
Because the sum of the in-doubt value and the current usage may not exceed the hard limit, the actual
block space and number of files available to the user, group, or fileset may be constrained by the in-doubt
value. If the in-doubt value approaches a significant percentage of the quota, run the mmcheckquota
command to account for the lost space and files. For more information, see Listing quotas in IBM
Spectrum Scale: Administration Guide.
This command cannot be run from a Windows node.
Parameters
-C ClusterName
Specifies the name of the cluster from which the quota information is obtained (from the file systems
within that cluster). If -C is omitted, the local cluster is assumed. The cluster name specified by the -C
flag must be part of the same multicluster group as the node issuing the mmlsquota command. A
node that is part of a remote cluster can only see the file systems that it has been given authority to
mount from the local cluster.
Device
Specifies the device name of the file system for which quota information is to be displayed. File
system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
Fileset
Specifies the name of a fileset located on Device for which quota information is to be displayed.
-d
Displays the default quota limits for user, group, or fileset quotas. When specified in combination with
the -u, -g, or -j options, default file system quotas are displayed. When specified without any of the
-u, -g, or -j options, default fileset-level quotas are displayed.
-e
Specifies that mmlsquota is to collect updated quota usage data from all nodes before displaying
results. If -e is not specified, there is the potential to display negative usage values as the quota
server may process a combination of up-to-date and back-level information.
-g Group
Displays quota information for the user group or group ID specified in the Group parameter.
-j Fileset
Displays quota information for the named fileset.
-q
Prints a terse message containing information only about file systems with usage over quota.
-u User
Displays quota information for the user name or user ID specified in the User parameter.
-v
Displays quota information on file systems where the User, Group or Fileset limit has been set, but the
storage has not been allocated.
--block-size {BlockSize | auto}
Specifies the unit in which the number of blocks is displayed. The value must be of the form [n]K, [n]M,
[n]G or [n]T, where n is an optional integer in the range 1 to 1023. The default is 1K. If auto is
specified, the number of blocks is automatically scaled to an easy-to-read value.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note:
• Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that might be
encoded, see the command documentation of mmclidecode. Use the mmclidecode command to
decode the field.
• -Y disregards the value of --block-size and returns the results in KB.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
If you are a root user:
• You may view quota information for all users, groups, and filesets.
• The node on which the command is issued must be able to execute remote shell commands on any
other node in the cluster without the use of a password and without producing any extraneous
messages. For more information, see Requirements for administering a GPFS file system in IBM Spectrum
Scale: Administration Guide.
If you are a non-root user, you may view only fileset quota information, your own quota information, and
quota information for any groups to which you belong.
You must be a root user to use the -d option.
GPFS must be running on the node from which the mmlsquota command is issued.
Examples
1. The user ID paul issues this command:
mmlsquota
This output shows the quotas for user paul in file system fsn set to a soft limit of 100096 KB, and a
hard limit of 200192 KB. 728 KB is currently allocated to paul. 4880 KB is also in doubt, meaning that
the quota system has not yet been updated as to whether this space has been used by the nodes, or
whether it is still available. No grace period appears because paul has not exceeded his quota. If he
exceeds the soft limit, the grace period is set and he has that amount of time to bring his usage below
the quota values. If he fails to do so, he can not allocate any more space.
The soft limit for files (inodes) is set at 30 and the hard limit is 50. 35 files are currently allocated to
this user, and the quota system does not yet know whether the 10 in doubt have been used or are still
available. A grace period of six days appears because the user has exceeded his quota. The user would
have this amount of time to bring his usage below the quota values. If the user fails to do so, the user
is not allocated any more space.
2. To show the quotas for user pfs001, device gpfs2, and fileset fset4, issue this command:
mmlsquota -u pfs001 gpfs2:fset4
3. To show user and group default quotas for all filesets in the gpfs1 file system, issue this command:
mmlsquota -d gpfs1
4. To show user and group default quotas for fileset fset1 in the gpfs1 file system, issue this command:
mmlsquota -d gpfs1:fset1
5. To show the quotas for fileset fset0 in file system fs1, issue this command:
mmlsquota -j fset0 fs1
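As one more sketch (user name paul and device fsn taken from example 1; output omitted), the --block-size option can be combined with a user query so that block values are scaled automatically:
mmlsquota -u paul --block-size auto fsn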
See also
• “mmcheckquota command” on page 218
• “mmdefedquota command” on page 342
• “mmdefquotaoff command” on page 346
• “mmdefquotaon command” on page 349
• “mmedquota command” on page 398
• “mmrepquota command” on page 656
Location
/usr/lpp/mmfs/bin
mmlssnapshot command
Displays GPFS snapshot information.
Synopsis
mmlssnapshot Device [-d [--block-size {BlockSize | auto}]]
[-s {all | global | [[Fileset]:]Snapshot[,[[Fileset]:]Snapshot...]} | -j
Fileset[,Fileset...]]
[--qos QOSClass] [-Y]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmlssnapshot command to display GPFS snapshot information for the specified file system or
fileset. You can optionally display the amount of storage that is used by the snapshot.
Parameters
Device
The device name of the file system for which snapshot information is to be displayed. File system
names do not need to be fully qualified. fs0 is as acceptable as /dev/fs0.
-d
Displays the amount of storage that is used by the snapshot.
This operation requires an amount of time that is proportional to the size of the file system; therefore,
it can take several minutes or even hours on a large and heavily-loaded file system.
This optional parameter can impact overall system performance. Avoid running the mmlssnapshot
command with this parameter frequently or during periods of high file system activity.
--block-size {BlockSize | auto}
Specifies the unit in which the number of blocks is displayed. The value must be of the form [n]K, [n]M,
[n]G or [n]T, where n is an optional integer in the range 1 - 1023. The default is 1 K. If auto is
specified, the number of blocks is automatically scaled to an easy-to-read value.
-s
Displays the attributes for the specified snapshots.
all
Displays information for all snapshots. This option is the default.
global
Displays information for global snapshots.
[[Fileset]:]
:
A colon (:) followed by a snapshot name indicates a global snapshot. For example, :SS01
indicates a global snapshot with the name SS01. If a global snapshot with that name exists,
the command displays information about it.
Fileset:
A fileset name followed by a colon (:) followed by a snapshot name indicates a fileset
snapshot. For example, fset02:SS01 indicates a snapshot of fileset fset02 with the name
SS01. If a snapshot of the fileset with that snapshot name exists, then the command displays
information about it.
Snapshot[,Snapshot...]
Displays information for the specified snapshots.
-j Fileset[,Fileset...]
Displays only snapshots that contain the specified filesets; including all global snapshots.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
-Y
Displays the command output in a parseable format with a colon (:) as the field delimiter. Each column
is described by a header.
Note: Fields having a colon are encoded to prevent confusion. If a field contains a % (percent sign)
character, it is most likely encoded. Use the mmclidecode command to decode the field.
Exit status
0
Successful completion.
nonzero
A failure occurred.
Security
You must be a root user or fileset owner to use the -d parameter.
If you are a root user, the node on which the command is issued must be able to execute remote shell
commands on any other node in the cluster without the use of a password and without producing any
extraneous messages. For more information, see Requirements for administering a GPFS file system in IBM
Spectrum Scale: Administration Guide.
If you are a non-root user, you can specify only file systems that belong to the same cluster as the node
on which the mmlssnapshot command was issued.
Examples
Note: Ensure that the snapshot name does not include a colon (:).
The following command displays information about all the existing snapshots in file system fs1:
mmlssnapshot fs1
The following command displays information about snapshots named SS01. The snapshots can be global
snapshots or fileset snapshots:
mmlssnapshot fs1 -s SS01
The following command displays information about a global snapshot named gSS01:
mmlssnapshot fs1 -s :gSS01
The following command displays information about a fileset snapshot with the name fsSS01 that is a
snapshot of fileset fset02:
mmlssnapshot fs1 -s fset02:fsSS01
The following command displays information about global snapshots with the names gSS01 and gSS02
and a fileset snapshot of fileset fset02 named fsSS02:
mmlssnapshot fs1 -s :gSS01,:gSS02,fset02:fsSS02
The following command displays information about global snapshots and fileset snapshots that contain
the filesets fset02 and fset03:
mmlssnapshot fs1 -j fset02,fset03
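A final sketch under the same assumption of a file system named fs1: the -d and --qos options can be combined so that the storage used by each snapshot is reported while the command runs in the other QoS class:
mmlssnapshot fs1 -d --block-size auto --qos other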
See also
• “mmcrsnapshot command” on page 337
• “mmdelsnapshot command” on page 378
• “mmrestorefs command” on page 665
• “mmsnapdir command” on page 711
Location
/usr/lpp/mmfs/bin
mmmigratefs command
Performs needed conversions to support new file system features.
Synopsis
mmmigratefs Device [--fastea] [--online | --offline]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmmigratefs command to enable features that require existing on-disk data structures to be
converted to a new format.
Before issuing the mmmigratefs command, see upgrade, coexistence, and compatibility considerations
in Upgrading in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide. You must ensure that
all nodes in the cluster have been upgraded to the latest level of GPFS code and that you have
successfully run the mmchconfig release=LATEST command. You must also ensure that the new
features have been enabled by running mmchfs -V full.
The mmmigratefs command can be run with the file system mounted or unmounted. If mmmigratefs is
run without the --online or --offline parameters specified, the command will determine the mount
status of the file system and run in the appropriate mode.
Parameters
Device
The device name of the file system to be migrated. File system names need not be fully qualified; for
example, fs0 is just as acceptable as /dev/fs0. This must be the first parameter.
--fastea
Convert the existing extended attributes to the new format required for storing the attributes in the
file's inode and thereby allowing for faster extended-attribute access.
--online
Allows the mmmigratefs command to run while the file system is mounted.
--offline
Allows the mmmigratefs command to run while the file system is unmounted.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmmigratefs command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To enable fast extended attribute access for file system fs3, issue this command:
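A sketch that uses the --fastea option described earlier in this topic:
mmmigratefs fs3 --fastea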
See also
• “mmchconfig command” on page 169
• “mmchfs command” on page 230
Location
/usr/lpp/mmfs/bin
mmmount command
Mounts GPFS file systems on one or more nodes in the cluster.
Synopsis
mmmount {Device | DefaultMountPoint | DefaultDriveLetter |
all | all_local | all_remote | {-F DeviceFileName}}
[-o MountOptions] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
or
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmmount command mounts the specified GPFS file system on one or more nodes in the cluster. If no
nodes are specified, the file systems are mounted only on the node from which the command was issued.
A file system can be specified using its device name or its default mount point, as established by the
mmcrfs, mmchfs or mmremotefs commands.
When all is specified in place of a file system name, all GPFS file systems will be mounted. This also
includes remote GPFS file systems to which this cluster has access.
Important: The mmmount command fails if you run it while file system maintenance mode is enabled. For
more information about file system maintenance mode, see the topic File system maintenance mode in
the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
Note: The time that is required to mount a file system for the first time is greater if the file system uses
one or more thin provisioned disks. The extra time is used by the mount operation to reserve physical
space for error recovery on each thin provisioned disk. For more information, see the topic IBM Spectrum
Scale with data reduction storage devices in the IBM Spectrum Scale: Concepts, Planning, and Installation
Guide.
Parameters
Device | DefaultMountPoint | DefaultDriveLetter | all | all_local | all_remote | {-F DeviceFileName}
Indicates the file system or file systems to be mounted.
Device
The device name of the file system to be mounted. File system names need not be fully-qualified.
fs0 is just as acceptable as /dev/fs0.
DefaultMountPoint
The mount point associated with the file system as a result of the mmcrfs, mmchfs, or
mmremotefs commands.
DefaultDriveLetter
The Windows drive letter associated with the file system as a result of the mmcrfs or mmchfs
command.
all
Indicates all file systems known to this cluster.
all_local
Indicates all file systems owned by this cluster.
all_remote
Indicates all file systems owned by another cluster to which this cluster has access.
-F DeviceFileName
Specifies a file containing the device names, one per line, of the file systems to be mounted.
This must be the first parameter.
DriveLetter
The location where the file system is to be mounted. If not specified, the file system is mounted at its
default drive letter. This option can be used to mount a file system at a drive letter other than its
default one or to mount a file system that does not have an established default drive letter.
MountPoint
The location where the file system is to be mounted. If not specified, the file system is mounted at its
default mount point. This option can be used to mount a file system at a mount point other than its
default mount point.
Options
-a
Mount the file system on all nodes in the GPFS cluster.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes on which the file system is to be mounted.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
This command does not support a NodeClass of mount.
-o MountOptions
Specifies the mount options to pass to the mount command when mounting the file system. For a
detailed description of the available mount options, see Mount options specific to IBM Spectrum Scale
in IBM Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmmount command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see the topic Requirements for administering a GPFS file system in the IBM Spectrum
Scale: Administration Guide.
Examples
1. To mount all GPFS file systems on all of the nodes in the cluster, issue this command:
mmmount all -a
Mon Mar 23 06:56:53 EDT 2020: mmmount: Mounting file systems ...
2. To mount file system fs2 read-only on the local node, issue this command:
mmmount fs2 -o ro
Mon Mar 23 08:14:49 EDT 2020: mmmount: Mounting file systems ...
3. To mount file system fs1 on all NSD server nodes, issue this command:
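A plausible form, assuming that the system-defined node class for NSD server nodes is named nsdsnodes:
mmmount fs1 -N nsdsnodes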
Mon Mar 23 08:18:53 EDT 2020: mmmount: Mounting file systems ...
See also
• “mmumount command” on page 721
• “mmlsmount command” on page 509
Location
/usr/lpp/mmfs/bin
mmnetverify command
Verifies network configuration and operation in a cluster.
Synopsis
mmnetverify [Operation[ Operation...]] [-N {Node[,Node...] | all}]
[--target-nodes {Node[,Node...] | all}]
[--configuration-file File] [--log-file File]
[--verbose | -Y] [--min-bandwidth Number]
[--max-threads Number] [--ces-override] [--ping-packet-size Number]
[--subnets Addr[,Addr...]][--cluster Name Node[,Node...]]
or
mmnetverify --remote-cluster-daemon [--remote-cluster-port PortNumber]
Availability
Available on all IBM Spectrum Scale editions.
Description
Note: The mmnetverify command is a diagnostic tool and is not intended to be issued continually during
normal IBM Spectrum Scale cluster operations. Issue this command only to validate a cluster network
before going into production or when you suspect network problems.
With the mmnetverify command, you can verify the network configuration and operation of a group of
nodes before you organize them into an IBM Spectrum Scale cluster. You can also run the command to
analyze network problems after you create a cluster.
If you have not created an IBM Spectrum Scale cluster yet, you must run the command with a
configuration file. See the --configuration-file option in the Parameters section.
The command uses the concepts of local nodes and target nodes. A local node is a node from which a
network test is run. The command can be started from one node and run from several separate local
nodes. A target node is a node against which a test is run.
The command has the following requirements:
• IBM Spectrum Scale must be installed on all the nodes that are involved in the test, including both local
nodes and target nodes.
• Each node must be able to issue remote shell commands to all nodes, including itself, without a
password. (This requirement is tested in the shell check.)
The following table lists the types of output messages and where they are sent. Messages are not added
to a log file unless you specify the log-file name on the command line:
If sudo wrappers are enabled, the command uses a sudo wrapper when it communicates with remote
nodes. Note the following restrictions:
• If you are working with an existing cluster, the cluster must be at IBM Spectrum Scale v4.2.3 or later.
• You must run the command on an administration node.
• The -N option is not supported.
• You must run the sudo command as the gpfsadmin user.
Parameters
Operation[ Operation...]
Specifies one or more operations, which are separated by blanks, that are to be verified against the
target nodes. The operations are described in Table 27 on page 546. Shortcut terms are described in
Table 26 on page 545.
If you do not specify any operations, the command does all the operations except data-large,
flood-node, and flood-cluster against the target nodes.
-N {Node[,Node...] | all}
Specifies a list of nodes on which to run the command. If you specify more than one node, the
command is run on all the specified nodes in parallel. Each node tests all the specified target nodes. If
you do not include this parameter, the command runs the operations only from the node where you
enter the command.
Note: The --max-threads parameter specifies how many nodes can run in parallel at the same
time. If the limit is exceeded, the command still tests all the specified nodes, starting the surplus
nodes as other nodes finish.
Node[,Node...]
Specifies a list of nodes in the local cluster. This parameter accepts node classes. You can specify
system node classes, such as aixnodes, and node classes that you define with the
mmcrnodeclass command.
all
Specifies all the nodes in the local cluster.
[--configuration-file File]
Specifies the path of a configuration file. You must use a configuration file if you have not created an
IBM Spectrum Scale cluster yet. You can also specify a configuration file if you have created a cluster
but you do not want the command to run with the IBM Spectrum Scale cluster configuration values.
Only the node parameter is required. The other parameters revert to their default values if they are
not specified.
Note: When you specify this parameter, the --ces-override parameter is automatically set.
The following code block shows the format of the file:
node Node [AdminName]
rshPath Path
rcpPath Path
tscTcpPort Port
mmsdrservPort Port
tscCmdPortRange Min-Max
subnets Addr[,Addr...]
cluster Name Node[,Node...]
where:
node Node [AdminName]
Specifies a node name, followed optionally by the node's admin name. If you do not specify an
admin name, the command uses the node name as the admin name.
You can have multiple node parameters. Add a node parameter for each node that you want to be
included in the testing, either as a local node or as a target node. You must include the node from
which you are running the command.
rshPath Path
Optional. Specifies the path of the remote shell command to be used. The default value
is /usr/bin/ssh. Specify this parameter only if you want to use a different remote shell
command.
rcpPath Path
Optional. Specifies the path of the remote file copy command to be used. The default value
is /usr/bin/scp. Specify this parameter only if you want to use a different remote copy
command.
tscTcpPort Port
Optional. Specifies the TCP port number to be used by the local GPFS daemon when it contacts a
remote cluster. The default value is 1191. Specify this value only if you want to use a different
port.
mmsdrservPort Port
Optional. Specifies the TCP port number to be used by the mmsdrserv service to provide access
to configuration data to the rest of the nodes in the cluster. The default value is the value that is
stored in mmfsdPort.
tscCmdPortRange Min-Max
Specifies the range of port numbers to be used for extra TCP/IP ports that some administration
commands need for their processing. Defining a port range makes it easier for you to set firewall
rules that allow incoming traffic on only those ports. For more information, see the topic IBM
Spectrum Scale port usage in the IBM Spectrum Scale: Administration Guide.
If you used the spectrumscale installation toolkit to install a version of IBM Spectrum Scale that
is earlier than version 5.0.0, then this attribute is initialized to 60000-61000. Otherwise, this
attribute is initially undefined and the port numbers are dynamically assigned from the range of
ephemeral ports that are provided by the operating system.
subnets Addr[,Addr...]
Optional. Specifies a list of subnet addresses to be searched for a subnet that both the local node
and the target node are connected to. If such a subnet is found, the command runs the specified
network check between the connections of the local node and target node to that subnet.
Otherwise, the command runs the network check across another connection that the local node
and the target node have in common. The command goes through this process for each local node
and target node that are specified for the command to process. The default value of this
parameter is no subnets.
You must specify subnet addresses in dot-decimal format, such as 10.168.0.0. You cannot
specify a cluster name as part of a subnet address. For more information about specifying a list of
subnets, see the description of the parameter --subnets later in this topic.
cluster Name Node[,Node...]
Optional. Specifies the name of a remote cluster followed by at least one contact node identifier.
The remote-cluster operation includes this cluster in its connectivity checks. You can specify
this parameter multiple times in a configuration file. For more information, see the description of
the --cluster parameter later in this topic.
[--log-file File]
Specifies the path of a file to contain the output messages from the network checks. If you do not
specify this parameter, messages are displayed only on the console. See Table 25 on page 540.
--verbose
Causes the command to generate verbose output messages. See Table 25 on page 540.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
[--min-bandwidth Number]
Specifies the minimum acceptable bandwidth for the data bandwidth check.
--max-threads Number
Specifies how many nodes can run the command in parallel at the same time. The valid range is 1 -
64. The default value is 32. For more information, see the -N parameter.
--ces-override
Causes the command to consider all the nodes in the configuration to be CES-enabled. This
parameter overrides the requirement of the protocol-ctdb and the protocol-object network
checks that the local node and the target nodes must be CES-enabled with the mmchnode command.
This parameter is automatically set when you specify the --configuration-file parameter.
[--ping-packet-size Number]
Specifies the size in bytes of the ICMP echo request packets that are sent between the local node and
the target node during the ping test. The size must not be greater than the MTU of the network
interface.
If the MTU size of the network interface changes, for example to support jumbo frames, you can
specify this parameter to verify that all the nodes in the cluster can handle the new MTU size.
[--subnets Addr[,Addr...]]
Specifies a list of subnets that the command searches in sequential order for a subnet that both the
local node and the target node are connected to. If such a subnet is found, the command runs the
network check between the connections of the local node and target node to that subnet. (The
command runs the network check only for the first such subnet that it finds in the list.) If such a
subnet is not found, the command runs the network check across another connection that the local
node and the target node have in common. The command goes through this process for each local
node and target node that are specified on the command line.
This parameter affects only the network checks that are included in the port, data, and bandwidth
shortcuts. For a list of these network checks, see Table 26 on page 545 following.
Before you use this parameter, ensure that all nodes in the cluster or group are running IBM Spectrum
Scale 5.0.0 or later and that you have successfully run the command mmchconfig
release=LATEST on all the nodes. For more information, see the chapter Migration, coexistence, and
compatibility in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
Addr[,Addr...]
Specifies a list of subnet addresses to be searched. You can specify a subnet address either as a
literal network address in dot-decimal format, such as 10.168.0.0, or as a shell-style regular
expression that can match multiple subnet addresses.
Note: You cannot specify a cluster name as part of the subnet address.
The following types of regular expression are supported:
• Character classes
– A set of numerals enclosed in square brackets. For example, the expression 192.168.[23].0
matches 192.168.2.0 and 192.168.3.0.
– A range of numerals enclosed in square brackets. For example, the expression 192.168.
[2-16].0 matches the range 192.168.2.0 through 192.168.16.0.
• Quantifiers
– The pattern X* signifies X followed by 0 or more characters. For example, the expression
192.168.*.0 matches 192.168.0.0, 192.168.1.0, and so on up to 192.168.255.0.
– The pattern X? signifies X followed by 0 or 1 characters. For example, the expression
192.168.?.0 matches 192.168.0.0, 192.168.1.0, and so on up to 192.168.9.0.
Tip: You can specify a list of subnets for the mmnetverify command in three locations:
• In the subnets attribute of the mmchconfig command. For more information, see “mmchconfig
command” on page 169.
• In the subnets entry of the mmnetverify configuration file. See the description of the --
configuration-file parameter earlier in this topic.
• In the --subnets parameter of the mmnetverify command.
If you specify a list of subnets in the second location (the subnets entry of the mmnetverify
configuration file) the command ignores any subnets that are specified in the first location. Similarly, if
you specify a list of subnets in the third location (the --subnets parameter of the mmnetverify
command) the command ignores any subnets that are specified in the first two locations.
[--cluster Name Node[,Node...]]
Specifies a remote cluster for the remote-cluster operation to check. This parameter can occur
multiple times on the command line, so that you can specify multiple remote clusters. By default the
remote-cluster operation checks only the known remote clusters -- that is, the remote clusters
that are listed in the mmsdrfs file, where they are put by the mmremotecluster command. With the
--cluster parameter, you can specify other remote clusters to be checked. For more information
about the remote-cluster operation, see the entry for the operation in Table 27 on page 546 later
in this topic.
Node[,Node...]
Specifies one or more contact nodes through which the remote-cluster operation can get
information about the remote cluster. You must specify at least one contact node.
The remote-cluster operation evaluates a remote cluster in two stages. In the first stage it gathers
information about the nodes in the remote cluster. In the second stage it checks the nodes for
connectivity. The first stage has two phases:
1. In the first phase the remote-cluster operation tries to get information through the contact
nodes of the remote cluster. If this attempt succeeds, the first stage ends and the operation goes
to the second stage, which is testing the remote nodes for connectivity. But if the attempt fails, the
operation goes to the second phase.
2. In the second phase the operation tries to get information through a special daemon called a
remote-cluster daemon that can be running on a remote contact node. If this attempt succeeds, the
first stage ends and the operation goes to the second stage, which is checking the nodes for
connectivity. If the attempt fails, the operation displays an error message with instructions to start
a remote-cluster daemon on a contact node and try the remote-cluster operation again.
To start a remote-cluster daemon, open a console window on a contact node and issue the following
command:
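A sketch of the form described under the --remote-cluster-daemon parameter later in this topic; 61147 is the default port:
mmnetverify --remote-cluster-daemon --remote-cluster-port 61147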
Note: The --remote-cluster-port parameter is optional. The default port number is 61147. For
more information, see the description of the --remote-cluster-daemon parameter later in this
topic.
You can now issue the mmnetverify command again with the remote-cluster operation to run
checks against the remote cluster. If you also specify the --remote-cluster-port parameter and
the port number on which the remote-cluster daemon listens, the remote-cluster operation skips
the first phase of the search and immediately starts the second phase. For example,
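a command of the following form might be used; the cluster name remoteCluster1 and contact node contactNode1 are illustrations:
mmnetverify remote-cluster --cluster remoteCluster1 contactNode1 --remote-cluster-port 61147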
For more information, see the entry for the remote-cluster operation in Table 27 on page 546
later in this topic.
--remote-cluster-daemon [--remote-cluster-port PortNumber]
Causes the mmnetverify command to start a remote-cluster daemon on the node where the
command is issued. The purpose of this daemon is to provide information about the nodes of the
remote cluster to the mmnetverify command when it is running a remote-cluster operation from
a node in a local cluster.
--remote-cluster-port PortNumber
Specifies the port that the remote-cluster daemon listens on. If you do not specify a port number,
the daemon listens on the default port 61147.
For more information, see the description of the --cluster parameter earlier in this topic and the
description of the remote-cluster operation in Table 27 on page 546 later in this topic.
The following list describes the operations (network checks) that you can specify. Separate these
operations with a blank on the command line. For example, the following command runs the interface
and copy checks on the local node against all the nodes in the cluster:
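A plausible form, assuming that interface and copy are accepted as operation names as the preceding sentence suggests:
mmnetverify interface copy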
ping (Network connectivity via ping)
The local node can ping the target node with its name, daemon node name, rel_hostname,
admin_shortname, and IP address entries in the mmsdrfs file.
shell (Remote shell command)
The command verifies the following conditions:
• The local node can issue remote shell commands to the target node's admin interface without
requiring a password.
• The target node's daemon and admin names refer to the same node.
copy (Remote copy)
The local node can issue a remote copy command to the target node's admin interface without
requiring a password.
time (Date and time)
The time and date on the local node and target node do not differ by a wide margin.
daemon-port¹ (GPFS daemon connectivity)
The target node can establish a TCP connection to the local node on the mmfsd daemon port of the
local node:
• The target node uses the port that is specified in the cluster configuration property tscTcpPort.
The default value is 1191.
• If mmfsd and mmsdrserv are not running on the local node, the command starts an echo server
on the daemon port of the local node for this test.
sdrserv-port¹ (GetObject daemon connectivity)
The target node can establish a TCP connection directed to the local node on the port that is
specified in the cluster configuration property mmsdrservPort:
• The default value of this port is the value that is specified in the cluster configuration property
tscTcpPort.
• If mmfsd and mmsdrserv are not running on the local node, the command starts an echo server
on the daemon port of the local node for this test.
data-small¹ (Small data exchange)
The target node can establish a TCP connection to the local node and exchange a series of
small-sized data messages without network errors:
• The command starts an echo server on the local node for this test.
• If the tscCmdPortRange property is set, then the echo server listens on a port in the specified
range.
• If not, then the echo server listens on an ephemeral port that is provided by the operating system.
data-medium¹ (Medium data exchange)
The target node can establish a TCP connection to the local node and exchange a series of
medium-sized data messages without network errors:
• The command starts an echo server on the local node for this test.
• If the tscCmdPortRange property is set, then the echo server listens on a port in the specified
range.
• If not, then the echo server listens on an ephemeral port that is provided by the operating system.
data-large¹ (Large data exchange)
The target node can establish a TCP connection to the local node and exchange a series of
large-sized data messages without network errors:
• The command starts an echo server on the local node for this test.
• If the tscCmdPortRange property is set, then the echo server listens on a port in the specified
range.
• If not, then the echo server listens on an ephemeral port that is provided by the operating system.
bandwidth-node¹ (Network bandwidth one-to-one)
The target node can establish a TCP connection to the local node and send a large amount of data
with adequate bandwidth:
• Bandwidth is measured on the target node.
• If the min-bandwidth parameter was specified on the command line, the command verifies that
the actual bandwidth exceeds the specified minimum bandwidth.
gnr-bandwidth (Overall bandwidth)
All target nodes can establish a TCP connection to the local node and send data to it.
• The bandwidth is measured on each target node.
• If the min-bandwidth parameter is specified on the command line, the command verifies that
the actual bandwidth exceeds the specified minimum bandwidth.
Note the following differences between this test and the bandwidth-cluster test:
• The total bandwidth is measured rather than the bandwidth per node.
• All target nodes must be active and must participate in the test.
• The bandwidth value that is reported does not include ramp-up time for TCP to reach full
capability.
flood-node (Flood one-to-one)
When the local node is flooded with datagrams, the target node can successfully send datagrams to
the local node:
• The target node tries to flood the local node with datagrams.
• The command records packet loss.
• The command verifies that some of the datagrams were received.
flood-cluster (Flood many-to-one)
When the local node is flooded with datagrams from all the target nodes in parallel, each target node
can successfully send datagrams to the local node:
• The command records packet loss for each target node.
• The command checks that each target node received some datagrams.
• The command checks that none of the target nodes has a packet loss significantly higher than the
other nodes.
protocol-object (Object-protocol connectivity)
The target node can establish a connection with the local node through the ports that are used by
the object server, the container server, the account server, the object-server-sof, and the keystone
daemons. The command does not run this test in the following situations:
• The Object protocol service is not enabled.
• The local node or a target node is not a CES-enabled node. A node is considered to be
CES-enabled if it is in an existing cluster and if it was enabled with the mmchnode --ces-enable
command. This requirement is not enforced in the following situations:
– You override the requirement by specifying the --ces-override option.
– You specify the --configuration-file parameter.
rdma-connectivity (RDMA connectivity)
The command verifies the following conditions on nodes on which RDMA is configured:
• The configuration that is specified in the verbPorts cluster attribute matches the active
InfiniBand interfaces on the nodes.
• The nodes can connect to each other through the active InfiniBand interfaces.
For more information see the verbPorts cluster attribute in the topic “mmchconfig command” on
page 169.
Note: These checks require that the commands ibv_devinfo and ibtracert be installed on the
nodes on which RDMA is configured.
¹ For this type of test, if a list of subnets is specified, the command searches the list sequentially for a
subnet that both the local node and the target node are connected to. If the command finds such a
subnet, the command runs the specified test across the subnet. You can specify a list of subnets in the
subnets attribute in the mmchconfig command, in the subnets entry of the mmnetverify
configuration file, or in the --subnets parameter of the mmnetverify command. For more
information, see the description of the --subnets parameter earlier in this topic.
Exit status
0
The command completed successfully and all the tests were completed successfully.
1
The command encountered problems with options or with running tests.
2
The command completed successfully, but one or more tests were unsuccessful.
Security
You must have root authority to run the mmnetverify command.
The node on which you enter the command must be able to execute remote shell commands on any other
administration node in the cluster. It must be able to do so without the use of a password and without
producing any extraneous messages. For more information, see Requirements for administering a GPFS
file system in the IBM Spectrum Scale: Administration Guide.
Examples
1. The following command runs all checks, except data-large, flood-node, and flood-cluster,
from the node where you enter the command against all the nodes in the cluster:
mmnetverify
2. The following command runs connectivity checks from the node where you enter the command against
all the nodes in the cluster:
mmnetverify connectivity
3. The following command runs connectivity checks from nodes c49f04n11 and c49f04n12 against all
the nodes in the cluster:
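A plausible form, using the connectivity shortcut with the -N option described earlier in this topic:
mmnetverify connectivity -N c49f04n11,c49f04n12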
4. The following command runs all checks, except flood-node and flood-cluster, from the node
where you enter the command against nodes c49f04n11 and c49f04n12:
5. The following command runs network port checks on nodes c49f04n07 and c49f04n08, each checking
against nodes c49f04n11 and c49f04n12:
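A plausible form, combining the -N and --target-nodes options with the port shortcut that is mentioned earlier in this topic:
mmnetverify port -N c49f04n07,c49f04n08 --target-nodes c49f04n11,c49f04n12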
See also
• “mmlsqos command” on page 522
Location
/usr/lpp/mmfs/bin
mmnfs command
Manages NFS exports and configuration.
Synopsis
mmnfs config list [--exportdefs][-Y]
or
or
or
or
or
or
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmnfs export commands to add, change, list, load, or remove NFS export declarations for IP
addresses on nodes that are configured as CES types.
Use the mmnfs config commands to list and change NFS configuration.
The protocol functions provided in this command, or any similar command, are generally referred to as
CES (Cluster Export Services). For example, protocol node and CES node are functionally equivalent
terms.
Parameters
export
Manages the NFS export configuration for the cluster with one of the following actions:
add
Creates a new configuration file for the NFS server if one does not yet exist. If there is already
an export configuration file, then it is extended with the provided additional export parameters.
This export configuration file is used by the NFS server to create an NFS export for the Path so that
clients can connect to it. If there is already an existing export for the Path then an error is shown.
Each export configuration set has internally its own unique identifier number. This number is
automatically incremented for each added export. The mmnfs export add command attempts
to add the new export also to running NFS server instances, and can fail if one or more instances
are not running. This is not a critical issue, because the configuration changes are made in the
repository and are applied later when restarting the NFS server instance.
The authentication method must be established before an NFS export can be defined.
The export Path must be an existing path in the GPFS file system.
Note: The paths that are not within the GPFS file system cannot be exported using the commands.
Creating nested exports (such as /path/to/folder and /path/to/folder/subfolder) is
not recommended because doing so might lead to serious data consistency issues. If a higher-level
export prevents the NFSv4 client from descending through the NFSv4 virtual file system path,
remove that export. If nested exports cannot be avoided, ensure that the export with the common
path (the top-level export) grants all the required permissions to this NFSv4 client. Also, an NFSv4
client that mounts the parent (/path/to/folder) export does not see the child export subtree
(/path/to/folder/inside/subfolder) unless the same client is explicitly allowed to access
the child export as well.
--client {ClientOption[;ClientOption...]}
Declares the client-specific settings. You can specify a list of one or more client definitions. To
avoid incorrect parsing by the interpreter, put quotation marks around the argument list. For a
list of client definitions that can be specified with the --client option, see List of supported
client options for the mmnfs export {add | change} command.
Some export configuration commands accept multiple client declarations, and separators are used
to distinguish them.
The following separators can be used:
• Colon to separate multiple allowed values for a specified attribute. For example, the key/value
pair "Protocols=3:4" allows the NFS protocols v3 and v4 to be declared for an export.
• Comma to separate key/value pairs within a client declaration list.
• Semicolon to separate client declaration lists.
For example:
--client "192.0.2.0/20(Access_Type=RW,Protocols=3:4); \
198.51.100.0/20(Access_Type=RO,Protocols=3:4,Transports=TCP:UDP)"
--nfsdefs Path
Lists the export configuration details for the specified Path.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
--nfsdefs-match Path
Lists the export configuration details of all export paths, which contain the specified string.
load
Overwrites (deletes) all existing NFS export declarations in the repository, if any. The export
declarations are fetched from a file provided to the load operation, which could contain a larger
number of export declarations. Some basic format checks are done during export load. After
loading export declarations from a file, the NFS service is restarted across all the nodes in the
cluster.
ExportCFGFile
The file name for the new exports declarations. This file is loaded and stored in the repository
to be published on all CES nodes running the NFS server. This load procedure can be used to
load a set of export declarations; it removes any previous configuration. The NFS
servers are restarted in order to apply the changes.
List of supported client options for the mmnfs export {add | change} command:
ACCESS_TYPE
Allowed values are none, RW, RO, MDONLY, and MDONLY_RO. The default value is none.
PROTOCOLS
Allowed values are 3, 4, NFS3, NFS4, V3, V4, NFSv3, and NFSv4. The default value is 3,4.
TRANSPORTS
Allowed values are TCP and UDP. The default value is TCP.
ANONYMOUS_UID
Allowed values are between -2147483648 and 4294967295. The default value is -2.
ANONYMOUS_GID
Allowed values are between -2147483648 and 4294967295. The default value is -2.
SECTYPE
Allowed values are none, sys, krb5, krb5i, and krb5p. The default value is sys.
PRIVILEGEDPORT
Allowed values are true and false. The default value is false.
MANAGE_GIDS=true
In this configuration, the NFS server discards the list of GIDs that is passed via the RPC and fetches
the group information itself. To get the group list, the server uses the default authentication that is
configured on the system (CES node). Depending on the configuration, the list is obtained from
/etc/passwd and /etc/group, from LDAP, or from AD. To reach AD and get the list of GIDs, the
client must use sec=krb5; otherwise the NFS server cannot get the list from AD.
MANAGE_GIDS=false
The NFS server uses the list of GIDs that is passed by the RPC.
SQUASH
Allowed values are root, root_squash, all, all_squash, allsquash, no_root_squash,
none, and noidsquash. The default value is root_squash.
NFS_COMMIT
Allowed values are true and false. The default value is false.
Important: Use NFS_COMMIT very carefully because it changes the behavior of how transmitted
data is committed on the server side to NFS v2 like sync-mode on every write action.
CLIENTS
Allowed values are IP addresses in IPv4 or IPv6 notation, hostnames, netgroups, or * for all. A
netgroup name must not start with a numeric character; the remaining characters can be alphanumeric
characters, underscores, hyphens, dots, or spaces ("[a-zA-Z_][0-9a-zA-Z_\- .]*"). The default value is *.
config
Manages NFS configuration for a CES cluster:
list
Displays the NFS configuration parameters and their values. This command also displays all the
default export configurations. These values are used as the defaults by the mmnfs export add command
if no other client attributes are specified. The output can be formatted to be human readable or
machine readable.
--exportdefs
If this option is specified, the command displays the default export configuration parameters.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
change
Modifies the NFS configuration parameters. NFS is restarted across all the nodes on which NFS is
running, when this command is executed. Only some configuration options can be modified by this
command.
The configuration options that can be modified and their allowed values are as follows:
NFS_PROTOCOLS
Allowed values are 3, 4, NFS3, NFS4, V3, V4, NFSv3, and NFSv4. The default value is 3,4.
MINOR_VERSIONS
Allowed values are 0 and 1. The default value is 0, which means that only NFS 4.0 is supported.
The value 1 enables NFS 4.1 support.
MNT_PORT
Specifies the port for the NFSv3 Mount protocol. Allowed values are between 0 and 65535.
The default value is 0.
NLM_PORT
Specifies the NLM port for NFSv3. Allowed values are between 0 and 65535. The default value
is 0.
RQUOTA_PORT
Specifies the RQUOTA port for NFSv3. Allowed values are between 0 and 65535. The default
value is 0.
STATD_PORT
Specifies the STATD port for NFSv3. Allowed values are between 0 and 65535. The default
value is 0.
LEASE_LIFETIME
Specifies the value in seconds during which an NFS server requires an NFS client to do some
operation to renew its lease. This is specific to NFSV4. Allowed values are between 0 and 120.
The default value is 60.
GRACE_PERIOD
Specifies the time in seconds during which an NFS client is required to reclaim its locks, in the
case of NFSv3, and additionally its state, in the case of NFSv4. Allowed values are between 0
and 180. The default value is 90.
DOMAINNAME
String. The default value is the host's fully-qualified DNS domain name.
IDMAPD_DOMAIN
String. Domain in the ID Mapd configuration. The default value is the host's fully-qualified DNS
domain name.
LOCAL_REALMS
String. Local-Realms in the ID Mapd configuration. The default value is the host's default realm
name.
LOG_LEVEL
Allowed values are NULL, FATAL, MAJ, CRIT, WARN, EVENT, INFO, DEBUG, MID_DEBUG, and
FULL_DEBUG. The default value is EVENT.
ENTRIES_HWMARK
The high water mark for NFS cache entries. Beyond this point, NFS tries to evict some objects
from its cache. The default is 1500000.
RPC_IOQ_THRDMAX
Specifies the maximum number of RPC threads that process NFS requests from NFS clients.
The valid values are 2 - 131072. The default value is 512.
If the system has high bandwidth but high latency, you can increase the value of this
parameter to improve the system performance. However, increasing this parameter causes
Ganesha to use more memory under a heavy workload.
The NB_WORKER parameter of Ganesha 2.3 and 2.5 is obsolete; use the RPC_IOQ_THRDMAX
parameter in Ganesha 2.7.
Note: Specifying a port number with the value 0 in the NFS service configuration means that
the service picks a port number dynamically. This port number might change across service
restarts. If a firewall is to be established between the NFS server and the NFS clients, a specific port
number can be configured through the command so that discrete firewall rules can be established.
Note that NFS_PORT 2049 is a well-known and established convention, and NFS servers and clients
typically expect this port number. Changing the NFS service port numbers impacts existing clients,
and a remount of the clients is required.
The export defaults that can be set are:
ACCESS_TYPE
Allowed values are none, RW, RO, MDONLY, and MDONLY_RO. The default value is none.
Note: Changing this option to any value other than none exposes data to all NFS clients that
can access your network, even if the export is created by using add --client ClientOptions to
limit that client's access. All clients, even if they are not declared with --client in mmnfs
export add, have access to the data. The global value applies to an unseen *, even if
showmount -e CESIP does not display it. Use caution if you change it in this global
definition.
ANONYMOUS_UID
Allowed values are between -2147483648 and 4294967295. The default value is -2.
ANONYMOUS_GID
Allowed values are between -2147483648 and 4294967295. The default value is -2.
MANAGE_GIDS
Allowed values are true and false. The default value is false.
NFS_COMMIT
Allowed values are true and false. The default value is false.
Important: Use NFS_COMMIT very carefully because it changes the behavior of how
transmitted data is committed on the server side to NFS v2 like sync-mode on every write
action.
PRIVILEGEDPORT
Allowed values are true and false. The default value is false.
PROTOCOLS
Allowed values are 3, 4, NFS3, NFS4, V3, V4, NFSv3, and NFSv4. The default value is 3,4.
SECTYPE
Allowed values are none, sys, krb5, krb5i, and krb5p. The default value is sys.
SQUASH
Allowed values are root, root_squash, all, all_squash, allsquash, no_root_squash,
none, and noidsquash. The default value is root_squash.
Note: Changing this option to no_root_squash exposes data to root on all NFS clients, even
if the export is created using add --client ClientOptions to limit the root access of that
client.
TRANSPORTS
Allowed values are TCP and UDP. The default value is TCP.
If export defaults are set, then new exports that are created pick up the export default values. If
an export configuration value is not specified, the current export default value is used.
Note: For exports created before IBM Spectrum Scale 5.0.2, the export default value at the time
of its creation is used.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmnfs command.
The node on which the command is issued must be able to execute remote shell commands on any other
CES node in the cluster without the use of a password and without producing any extraneous messages.
For more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To create an NFS export (using a netgroup), issue this command:
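A plausible form; the export path /gpfs/fs1/export_ng and the netgroup name ngroup1 are illustrations, and the @ prefix for netgroups follows the convention shown in the sample output later in this topic:
mmnfs export add /gpfs/fs1/export_ng --client "@ngroup1(Access_Type=RW)"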
Note: To specify an NFS client, use an IP address in IPv4 or IPv6 notation, hostname, netgroup, or *
for all.
2. To create an NFS export (using a client IP), issue this command:
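A plausible sequence; the export path matches the sample output that follows, and the list command uses the --nfsdefs option described earlier in this topic:
mmnfs export add /gpfs/fs1/export_1 --client "192.0.2.8(Access_Type=RW)"
mmnfs export list --nfsdefs /gpfs/fs1/export_1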
Path Delegations Clients Access_Type Protocols Transports Squash Anonymous_uid Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids
NFS_Commit
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-----
/gpfs/fs1/export_1 none 192.0.2.8 RW 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE none FALSE
FALSE
Now add a more restrictive client definition for a different client, 198.51.100.10, by issuing the
command:
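A plausible form, assuming that the mmnfs export change command accepts an --nfsadd option for adding a client declaration to an existing export:
mmnfs export change /gpfs/fs1/export_1 --nfsadd "198.51.100.10(Access_Type=MDONLY,Squash=allsquash)"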
Now add a client definition that is very permissive but we want it to be last on the list so that the more
restrictive attributes for client 192.0.2.8 take precedence for that one client, by issuing the
command:
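A plausible form under the same --nfsadd assumption; the subnet client options match the last row of the output that follows, and listing the export afterward produces output similar to the following:
mmnfs export change /gpfs/fs1/export_1 --nfsadd "192.0.2.0/20(Access_Type=RW,Squash=no_root_squash)"
mmnfs export list --nfsdefs /gpfs/fs1/export_1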
Path Delegations Clients Access Protocols Transports Squash Anonymous Anonymous SecType Privileged Default
Manage_Gids NFS_Commit
_Type _uid _gid Port
Delegation
--------------------------------------------------------------------------------------------------------------------------------------------
--------------------
/gpfs/fs1/export_1 none 192.0.2.8 RW 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE none
FALSE FALSE
/gpfs/fs1/export_1 none 198.51.100.10 MDONLY 3,4 TCP allsquash -2 -2 SYS FALSE none
FALSE FALSE
/gpfs/fs1/export_1 none 192.0.2.0/20 RW 3,4 TCP no_root_squash -2 -2 SYS FALSE none
FALSE FALSE
Note: This command removes a single client definition, for the IP "1.2.3.1", from the NFS export. If
this is the last client definition for the export, then the export is also removed.
5. To modify an NFS export, issue this command:
LEASE_LIFETIME: 60
GRACE_PERIOD: 90
DOMAINNAME: VIRTUAL1.COM
DELEGATIONS: Disabled
==========================
STATD Configuration
==========================
STATD_PORT: 0
==========================
CacheInode Configuration
==========================
ENTRIES_HWMARK: 1500000
==========================
Export Defaults
==========================
ACCESS_TYPE: NONE
PROTOCOLS: 3,4
TRANSPORTS: TCP
ANONYMOUS_UID: -2
ANONYMOUS_GID: -2
SECTYPE: SYS
PRIVILEGEDPORT: FALSE
MANAGE_GIDS: FALSE
SQUASH: ROOT_SQUASH
NFS_COMMIT: FALSE
==========================
Log Configuration
==========================
LOG_LEVEL: EVENT
==========================
Idmapd Configuration
==========================
LOCAL-REALMS: LOCALREALM
DOMAIN: LOCALDOMAIN
==========================
7. To change STATD_PORT configuration, issue this command (When a port is assigned, STATD is started
on the given port):
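A plausible form, assuming that mmnfs config change accepts Attribute=Value pairs; port 2323 is only an illustration:
mmnfs config change "STATD_PORT=2323"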
NFS Configuration successfully changed. NFS server restarted on all NFS nodes
on which NFS server is running.
Path Delegations Clients Access_ Protocols Transports Squash Anonymous Anonymous SecType Privileged Default MANAGE
NFS
Type _uid _gid Port Delegations_Gids
_Commit
--------------------------------------------------------------------------------------------------------------------------------------------
-------
/mnt/gpfs0/p1 none 203.0.113.2 RO 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE none FALSE
FALSE
/mnt/gpfs0/p1 none * RW 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE none FALSE
FALSE
/mnt/gpfs0/p1 none 203.0.113.1 RO 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE none FALSE
FALSE
10. To list export configuration details for the path /gpfs/gpfs1/export1, issue this command:
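A plausible form, using the --nfsdefs option described earlier in this topic:
mmnfs export list --nfsdefs /gpfs/gpfs1/export1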
Path Deleg. Clients Access_Type Protocols Transports Squash Anon_uid Anon_gid SecType PrivPort DefDeleg
Manage_Gids NFS_Commit
--------------------------------------------------------------------------------------------------------------------------------------------
----------------
/gpfs/gpfs1/export1 NONE c49f04n10 RO 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE NONE
FALSE FALSE
/gpfs/gpfs1/export1 NONE * RW 3 TCP ROOT_SQUASH -2 -2 SYS FALSE NONE
FALSE FALSE
/gpfs/gpfs1/export1 NONE c49f04n12 RO 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE NONE
FALSE FALSE
/gpfs/gpfs1/export1 NONE c49f04n11 NONE 3,4 TCP NO_ROOT_SQUASH -2 -2 SYS FALSE NONE
FALSE FALSE
11. To list export configuration details for all paths that contain the string /gpfs/gpfs1/export1, issue this
command:
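A plausible form, using the --nfsdefs-match option described earlier in this topic:
mmnfs export list --nfsdefs-match /gpfs/gpfs1/export1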
Path Deleg. Clients Access_Type Protocols Transports Squash Anon_uid Anon_gid SecType PrivPort DefDeleg
Manage_Gids NFS_Commit
--------------------------------------------------------------------------------------------------------------------------------------------
----------------
/gpfs/gpfs1/export1 NONE c49f04n10 RO 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE NONE
FALSE FALSE
/gpfs/gpfs1/export1 NONE * RW 3 TCP ROOT_SQUASH -2 -2 SYS FALSE NONE
FALSE FALSE
/gpfs/gpfs1/export1 NONE c49f04n12 RO 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE NONE
FALSE FALSE
/gpfs/gpfs1/export1 NONE c49f04n11 NONE 3,4 TCP NO_ROOT_SQUASH -2 -2 SYS FALSE NONE
FALSE FALSE
/gpfs/gpfs1/export11NONE * NONE 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE NONE
FALSE FALSE
Path Delegations Clients Access_Type Protocols Transports Squash Anonymous_uid Anonymous_gid SecType PrivilegedPort DefaultDelegations
Manage_Gids NFS_Commit
--------------------------------------------------------------------------------------------------------------------------------------------
----------------------
/gpfs/FS1/fset_io NONE @host.linux RW 3,4 TCP NO_ROOT_SQUASH -2 -2 SYS FALSE
NONE FALSE FALSE
/gpfs/FS1/fset_io NONE @host.hsn61_linux RW 3,4 TCP NO_ROOT_SQUASH -2 -2 SYS FALSE
NONE FALSE FALSE
See also
• “mmces command” on page 132
• “mmchconfig command” on page 169
• “mmlscluster command” on page 484
• “mmlsconfig command” on page 487
• “mmobj command” on page 565
• “mmsmb command” on page 698
• “mmuserauth command” on page 727
Location
/usr/lpp/mmfs/bin
mmnsddiscover command
Rediscovers paths to the specified network shared disks.
Synopsis
mmnsddiscover [-a | -d "Disk[;Disk...]" | -F DiskFile] [-C ClusterName]
[-N {Node[,Node...] | NodeFile | NodeClass}]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmnsddiscover command is used to rediscover paths for GPFS NSDs on one or more nodes. If you
do not specify a node, GPFS rediscovers NSD paths on the node from which you issued the command.
On server nodes, mmnsddiscover causes GPFS to rediscover access to disks, thus restoring paths which
may have been broken at an earlier time. On client nodes, mmnsddiscover causes GPFS to refresh its
choice of which NSD server to use when an I/O operation occurs.
In general, after the path to a disk is fixed, first run the mmnsddiscover command on the server
that lost the path to the NSD. After that, run the command on all client nodes that need to access the NSD
on that server. You can achieve the same effect with a single mmnsddiscover invocation if you use the
-N option to specify a node list that contains all the NSD servers and clients that need to rediscover paths.
Parameters
-a
Rediscovers paths for all NSDs. This is the default.
-d "DiskName[;DiskName]"
Specifies a list of NSDs whose paths are to be rediscovered.
-F DiskFile
Specifies a file that contains the names of the NSDs whose paths are to be rediscovered.
-C ClusterName
Specifies the name of the cluster to which the NSDs belong. This defaults to the local cluster if not
specified.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes on which the rediscovery is to be done.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmnsddiscover command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To rediscover the paths for all of the NSDs in the local cluster on the local node, issue the command:
mmnsddiscover
mmnsddiscover: Finished.
2. To rediscover the paths for all of the NSDs in the local cluster on all nodes in the local cluster, issue the
command:
mmnsddiscover -a -N all
mmnsddiscover: Finished.
3. To rediscover the paths for a given list of the NSDs on a node in the local cluster, issue the command:
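A plausible form, using the -d and -N options described earlier in this topic; the NSD and node names follow the sample output below:
mmnsddiscover -d "gpfs1nsd;gpfs2nsd" -N c6f2c2vp5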
mmnsddiscover: Attempting to rediscover the disks. This may take a while ...
c6f2c2vp5.ppd.pok.ibm.com: GPFS: 6027-1805 [N] Rediscovered nsd server access to gpfs1nsd.
c6f2c2vp5.ppd.pok.ibm.com: GPFS: 6027-1805 [N] Rediscovered nsd server access to gpfs2nsd.
mmnsddiscover: Finished.
See also
• “mmchnsd command” on page 251
• “mmcrnsd command” on page 332
• “mmdelnsd command” on page 376
• “mmlsnsd command” on page 514
Location
/usr/lpp/mmfs/bin
mmobj command
Manages configuration of Object protocol service, and administers storage policies for object storage,
unified file and object access, and multi-region object deployment.
Synopsis
mmobj swift base -g GPFSMountPoint --cluster-hostname CESHostName
[-o ObjFileset] [-i MaxNumInodes] [--ces-group CESGroup]
{{--local-keystone} | [--remote-keystone-url URL]
[--configure-remote-keystone]}
[--admin-user AdminUser] [--swift-user SwiftUser ][--pwd-file PasswordFile]
[--enable-file-access] [--enable-s3] [--enable-multi-region]
[--join-region-file RegionFile] [--region-number RegionNumber]
or
mmobj config list --ccrfile CCRFile [--section Section [--property PropertyName]] [-Y]
or
mmobj config change --ccrfile CCRFile --section Section --property PropertyName --value Value
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
mmobj s3 enable
or
mmobj s3 disable
or
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmobj command to modify and change the Object protocol service configuration, and to
administer storage policies for Object Storage, unified file and Object access, and multi-region Object
deployment.
Note: The mmobj config list and the mmobj config change commands are used to list and
change the configuration values for the underlying Swift service that is stored in the Cluster Configuration
Repository (CCR).
At least one CES IP address is required in the node that is running the mmobj swift base command to
set object_singleton_node and object_database_node attributes.
• The node with the object_database_node attribute runs the keystone database.
• The node with the object_singleton_node attribute runs unique object services across the CES
cluster.
You can verify the address by using the mmces address list command. The IP address can be added by using
the mmces address add command.
If there is an existing Object authentication that is configured, use the mmuserauth service remove
--data-access-method object command to remove the Object authentication. Then, use the mmces
service disable OBJ command for necessary cleanup before you run the mmobj swift base
command again.
Object authentication can be either local or remote. If the Object authentication is local, the Keystone
identity service and the Keystone database run in the CES cluster and are handled by the CES cluster. If
the Object authentication is remote, the Keystone server must be fully configured and running before
Object services are installed.
In the unified file and Object access environments, the ibmobjectizer service makes files from the file
interfaces such as POSIX, NFS, and CIFS accessible through Object interfaces such as curl and SWIFT.
The ibmobjectizer service runs periodically and makes files available for the Object interface. The
file-access parameter of the mmobj command can be used to enable and disable the file-access
capability and the ibmobjectizer service. The ibmobjectizer service runs on the CES IP address
with the object_singleton_node attribute.
In a data lake environment, you can access files that are already stored in an existing fileset through
the Object interface (swift or curl) by using the mmobj file-access link-fileset command.
This command can be used to make the existing fileset data accessible under a unified file and Object
access storage policy container. By default, this command allows the existing fileset to have Object
access without updating the container and metadata. However, you can use the --update-listing
option to update container listing with files that are existing in the linked fileset. If the --update-
listing option of the mmobj file-access link-fileset command is used, then the
sourcefset-path must be the exact fileset junction path and the fileset must be derived from the
Object file system.
Note: The mmobj file-access link-fileset command does not enable and disable the unified file
and Object access feature. It is only used to link the existing filesets to provide Object access. Unlinking
the Object access-enabled filesets is not supported.
Parameters
swift
Configures the underlying Swift services.
Note: The object protocol is not supported in IBM Spectrum Scale 5.1.0.0. If you want to deploy
object, install the IBM Spectrum Scale 5.1.0.1 or a later release.
base
Specifies the configuration of the Object protocol service.
-g GPFSMountPoint
Specifies the mount path of the GPFS file system that is used by Swift.
GPFSMountPoint is the mount path of the GPFS file system that is used by the Object store.
--cluster-hostname CESHostName
Specifies the host name that is used to return one of the CES IP addresses. The returned CES IP
address is used in the endpoint for the identity and Object-store values that are stored in
Keystone.
CESHostName is the value for the cluster host name that is used in the identity and Object-store endpoint definitions in Keystone. Ideally, the host name is one that returns one of the CES IP addresses, such as a round-robin DNS entry. It might also be a fixed IP address of a load balancer that distributes requests to one of the CES nodes. Using an ordinary CES IP address is not recommended because all identity and Object-store requests would be routed to the single node with that address, which might cause performance issues.
--local-keystone
Specifies that a new Keystone server is installed and configured locally in the cluster.
--admin-user User
Specifies the name of the admin user in Swift. The default value is admin.
--swift-user User
Specifies the user for the Swift services. The default value is swift.
--pwd-file PasswordFile
Specifies the file that contains the administrative user passwords that are used for Object access
protocol authentication configuration. You must save the password file under /var/mmfs/ssl/
keyServ/tmp on the node from which you are running the command. The password file is a
security-sensitive file that must have the following characteristics:
• The password file must be a regular file.
• The password file must be owned by the root user.
• Only the root user must have permission to read or write it.
Important: Passwords cannot contain the following characters:
• Forward slash (/)
• Colon (:)
• Backward slash (\)
• At symbol (@)
• Dollar sign ($)
• Left curly bracket ({)
• Right curly bracket (})
• Space
The password file for Object protocol configuration must have the following format:
%objectauth:
ksAdminPwd=ksAdminPwdpassword
ksSwiftPwd=ksSwiftPwdpassword
ksDatabasePwd=ksDatabasePassword
ksAdminPwd
Specifies the Keystone administrator's password.
ksSwiftPwd
Specifies the Swift user's password. If not specified, this value defaults to the Keystone
administrator's password.
ksDatabasePwd
Specifies the password for the Keystone user in the postgres database. If not specified, this
value defaults to the Keystone administrator's password.
--remote-keystone-url URL
Specifies the URL to an existing Keystone service.
--configure-remote-keystone
Specifies that when a remote Keystone server is used, the remote Keystone is configured as necessary.
The required users, roles, and endpoints that are needed by the Swift services are added to the
Keystone server. Keystone authentication information needs to be specified with the --pwd-file
flag to enable the configuration. If this flag is not specified, the remote Keystone is not modified
and the administrator must add the appropriate entries for the Swift configuration after the
installation is complete.
-o ObjFileset
Specifies the name of the fileset to be created in GPFS for the Object Storage.
ObjFileset is the name of the independent fileset that is created for the Object store. By default,
object_fileset is created.
-i MaxNumInodes
Specifies the maximum number of inodes for the Object fileset.
MaxNumInodes
The maximum number of inodes for the Object fileset. By default, 8000000 is set.
--ces-group
Specifies the CES group that contains the IP addresses to be used for the Swift ring files. This
means that you can specify a subset of the overall collection of CES IP addresses to be used by
the object protocol.
--enable-s3
Sets the S3 capability (Amazon S3 API support) to true. By default, S3 API is not enabled.
--enable-file-access
Sets the file access capability initially to true. Further configuration is still necessary by using the
mmobj file-access command. By default, the file-access capability is not enabled.
--enable-multi-region
Sets the multi-region capability initially to true. By default, multi-region capability is not enabled.
--join-region-file RegionFile
Specifies that this object installation joins an existing object multi-region Swift cluster. RegionFile
is the region data file created by the mmobj multiregion export command from the existing
multi-region cluster.
Note: The use of the --configure-remote-keystone flag is recommended so that the region-
specific endpoints for this region are automatically created in Keystone.
--region-number RegionNumber
Specifies the Swift region number for this cluster. If it is not specified, the default value is set to 1.
In a multi-region configuration, this flag is required and must be a unique region number that is
not used by another region in the multi-region environment.
config
Administers the Object configuration:
list
Lists configuration values of the underlying Swift or Keystone service that is stored in the CCR.
--section Section
Retrieves values for the specified section only.
The section is the heading that is enclosed in brackets ([]) in the associated configuration file.
--property PropertyName
Retrieves values for the specified property only.
Note: Use the -Y parameter to display the command output in a parseable format with a colon
(:) as a field delimiter.
change
Enables modifying Swift or Keystone configuration files. After you modify the configuration files,
the CES monitoring framework downloads them from the CCR and distributes them to all the CES
nodes in the cluster. The framework also automatically restarts the services that depend on the
modified configuration files.
Note: It is recommended to not directly modify the configuration files in /etc/swift and /etc/
keystone folders as they can be overwritten at any time by the files that are stored in the CCR.
--section Section
Specifies the section in the file that contains the parameter.
The section is the heading that is enclosed in brackets ([]) in the associated configuration file.
--property PropertyName
Specifies the name of the property to be set.
--value NewValue
Specifies the value of the PropertyName.
--merge-file MergeFile
Specifies a file in the openstack-config.conf format that contains multiple values to be
changed in a single operation. The properties in MergeFile can represent new properties or
properties to be modified. If a section or property name in MergeFile begins with a '-'
character, that section or property is deleted from the file. For example, a MergeFile with the
following contents would delete the ldap section, set the connections property to 512, and
delete the noauth property from the database section.
[-ldap]
[database]
connections = 512
-noauth =
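For example, such a change could be applied with a command of the following form (the target configuration file and the merge file path are illustrative):
# the configuration file and merge file path are illustrative
mmobj config change --ccrfile keystone.conf --merge-file /tmp/changes.conf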
manage
Performs management tasks on the object configuration.
Note: The object protocol is not supported in IBM Spectrum Scale 5.1.0.0. If you want to deploy object, upgrade to IBM Spectrum Scale 5.1.0.1 or a later release.
--version-sync
After an upgrade of IBM Spectrum Scale, migrates the object configuration to be consistent
with the level of installed packages.
Parameter common to both the mmobj config list and mmobj config change commands:
--ccrfile CCRFile
Indicates the name of the Swift, Keystone, or Object configuration file stored in the CCR.
Some of the configuration files stored in the Cluster Configuration Repository (CCR) are:
• account-server.conf
• container-reconciler.conf
• container-server.conf
• object-expirer.conf
• object-server.conf
• proxy-server.conf
• swift.conf
• keystone.conf
• keystone-paste.ini
• spectrum-scale-object.conf
• object-server-sof.conf
• spectrum-scale-objectizer.conf
policy
Administers the storage policies for Object Storage:
list
Lists storage policies for Object Storage.
--policy-name PolicyName
Lists details of the specified storage policy, if it exists.
--policy-function PolicyFunction
Lists details of the storage policies with the specified function, if any exist.
--verbose
Lists the functions that are enabled for the storage policies.
Note: Use the -Y parameter to display the command output in a parseable format with a colon (:)
as a field delimiter.
create
Creates a storage policy for Object Storage. The associated configuration files are updated and the
ring files are created for the storage policy. The CES monitoring framework distributes the changes
to the protocol nodes and restarts the associated services.
PolicyName
Specifies the name of the storage policy.
The policy name must be unique (case-insensitive), without spaces, and it must contain only
letters, digits, or a dash.
-f FilesetName
Specifies the name of an existing fileset to be used for this storage policy. An existing fileset
can be used provided it is not being used for an existing storage policy.
If no fileset name is specified with the command, the policy name is reused for the fileset with
the prefix obj_.
--file-system FilesystemName
Specifies the name of the file system on which the fileset is created.
--i MaxNumInodes
Specifies the inode limit for the new inode space.
--enable-compression
Enables a compression policy. The Swift policy type is replication. If --enable-
compression is used, --compression-schedule must be specified too and vice versa.
Every object stored within a container that is linked to this storage policy is compressed on a
scheduled basis. This compression occurs as a background process. For object retrieval, no
decompression is needed because it occurs automatically in the background.
--compression-schedule: "MM:HH:dd:ww"
Specifies the compression schedule if --enable-compression is used. The schedule must
be given in this format, MM:HH:dd:ww:
MM = 0-59 minutes
Indicates the minute after the hour in which to run the job. The range is 0 - 59.
HH = 0-23
Indicates the hour in which to run the job. Hours are represented as numbers in the range
0 - 23.
dd = 1-31
Indicates the day of a month on which to run the job. Days are represented as numbers in
the range 1 - 31.
--add-region-number RegionNumber
In a multi-region environment, adds a region to the specified storage policy.
After the region is added, the multi-region configuration needs to be synced with the other regions by using the mmobj multiregion command.
Note: By default, a storage policy stores only objects in the region on which it was created. If
the cluster is defined as multi-region, a storage policy can also be made multi-region by
adding more regions to its definition.
--remove-region-number RegionNumber
In a multi-region environment, removes a region from the specified storage policy. The
associated fileset for the storage policy is not modified.
After the region is removed, the multi-region configuration needs to be synced with the other
regions by using the mmobj multiregion command.
--compression-schedule: "MM:HH:dd:ww"
Specifies the compression schedule if --enable-compression is used. The schedule must
be given in this format, MM:HH:dd:ww:
MM = 0-59 minutes
Indicates the minute after the hour in which to run the job. The range is 0 - 59.
HH = 0-23
Indicates the hour in which to run the job. Hours are represented as numbers in the range
0 - 23.
dd = 1-31
Indicates the day of a month on which to run the job. Days are represented as numbers in
the range 1 - 31.
ww = 0-7 (0=Sun, 7=Sun)
Indicates the days of the week on which to run the job. One or more values can be
specified (comma-separated). Days are represented as numbers in the range 0 - 7. 0 and 7
represent Sunday. All days of a week are represented by *. This parameter is optional, and
the default is 0.
• Use * to specify every instance of a unit. For example, dd = * means that the job is
scheduled to run every day.
• Comma-separated lists are allowed. For example, dd = 1,3,5 means that the job is scheduled to run on the first, third, and fifth day of a month.
• If ww and dd are both specified, the union is used.
• Ranges that are specified with a hyphen (-) are not supported.
• Empty values are allowed for dd and ww. If empty, dd and ww are not considered.
• Empty values for MM and HH are treated as *.
file-access
Manages the file-access capability, the ibmobjectizer service, and object access for files (objectization) in a unified file and object access environment.
enable
Enables the file-access capability and the ibmobjectizer service.
If the file access capability is already enabled and the ibmobjectizer service is stopped, this
option starts the ibmobjectizer service.
disable
Disables the file-access capability and the ibmobjectizer service.
--objectizer
This option stops only the ibmobjectizer service; it does not disable the file-access capability.
objectize
Enables files for object access (objectizes) in a unified file and object access environment.
--object-path ObjectPath
The fully qualified path of a file or a directory that you want to enable for access through the
object interface. If a fully qualified path to a directory is specified, the command enables all
the files from that directory for access through the object interface. This parameter is
mandatory for the mmobj file-access command if the --storage-policy parameter is
not specified.
--storage-policy PolicyName
The name of the storage policy for which you want to enable files for the object interface. This
is a mandatory parameter for the mmobj file-access command if the --object-path
parameter is not specified. If only this parameter is specified, the command enables all files
for object interface from the fileset that is associated with the specified storage policy.
--account-name AccountName
The account name for which you want to enable files for access through the object interface.
The --storage-policy parameter is mandatory if you are using this parameter.
--container-name ContainerName
The container name for which you want to enable files for access through the object interface.
You must specify the --storage-policy and the --account-name with this parameter.
--object-name ObjectName
The object name for which you want to enable files for access through the object interface.
You must specify --storage-policy, --account-name, and --container-name
parameters with this parameter.
-N | --node NodeName
The node on which to run the command. This parameter is optional. If it is not specified, the command is run on the current node if it is a protocol node. If the current node is not a protocol node, an available protocol node is selected.
link-fileset
Enables object access to the specified fileset path.
--sourcefset-path FilesetPath
The fully qualified non-object fileset path that must be enabled for object access.
--account-name AccountName
The name of the swift account or the keystone project. The contents of the fileset are
accessible from this account when it is accessed by using the object interface.
--container-name ContainerName
The name of the container. The contents of the fileset belong to this container when it is
accessed by using the object interface.
--fileaccess-policy-name PolicyName
The name of the file-access enabled storage policy.
--update-listing
Updates the container database with the existing files in the provided fileset.
If the --update-listing option is used, then the sourcefset-path must be the exact
fileset junction path and the fileset must be derived from the object file system.
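For example, an existing fileset could be linked for object access with a command of the following form (the fileset path and the account, container, and policy names are illustrative):
# the path and names are illustrative
mmobj file-access link-fileset --sourcefset-path /ibm/cesfs/legacy_fset --account-name admin --container-name legacy --fileaccess-policy-name sof_policy1 --update-listing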
multiregion
Administers multi-region object deployment. For more information on multi-region object deployment
and capabilities, see Overview of multi-region object deployment in IBM Spectrum Scale: Concepts,
Planning, and Installation Guide.
list
Lists the information about the region.
Note: Use the -Y parameter to display the command output in a parseable format with a colon (:)
as a field delimiter.
enable
Enables the cluster for multi-region support.
Only the first cluster of the region can run the enable command. Subsequent regions join the
multi-region cluster during installation with the use of the --join-region-file flag of the
mmobj swift base command.
export
Exports the multi-region configuration environment so that other regions can be installed into the
multi-region cluster or other regions can be synced to this region.
If successful, a region checksum is printed in the output. This checksum can be used to ensure
that different regions are in sync when the mmobj multiregion import command is run.
Note: When region-related information changes, such as CES IPs and storage policies, all regions
must be updated with the changes.
--region-file RegionFile
Specifies a path to store the multi-region data.
This file is created.
import
Imports the specified multi-region configuration environment into this region.
If successful, a region checksum for this region is printed in the output. If the local region
configuration matches the imported configuration, the checksums match. If they differ, then it
means that some configuration information in the local region needs to be synced to the other
regions. This can happen when a configuration change in the local region, such as adding CES IPs
or storage policies, is not yet synced with the other regions. If so, the multi-region configuration
for the local region needs to be exported and synced to the other regions.
--region-file RegionFile
Specifies the path to a multi-region data file created by using the mmobj multiregion
export command.
remove
Completely removes a region from the multi-region environment. The removed region is no longer
accessible by other regions.
After the region is removed, the remaining regions need to have their multi-region information synced with this change by using the mmobj multiregion export and mmobj multiregion import commands.
--region-number RegionNumber
Specifies the region number that you need to remove from the multi-region configuration.
--force
Indicates that all the configuration information for the specified region needs to be
permanently deleted.
s3
Enables and disables the S3 API without manually changing the configuration.
enable
Enables the S3 API.
disable
Disables the S3 API.
list
Verifies whether the S3 API is enabled or disabled.
Note: Use the -Y parameter to display the command output in a parseable format with a colon (:)
as a field delimiter.
-Y
Displays headers and output in a machine-readable and colon-delimited format.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Exit status
0
Successful completion.
nonzero
A failure occurs.
Security
You must have root authority to run the mmobj command.
The node on which the command is run must be able to run remote shell commands on any other node in
the cluster without the use of a password and without producing any extraneous messages. For more
information, see Requirements for administering a GPFS file system in IBM Spectrum Scale: Administration
Guide.
Examples
1. To specify configuration of Object protocol service with local keystone and S3 enabled, run the
following command:
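A command of the following form, built from the parameters described above, would perform such a configuration (the mount point, host name, and password file name are illustrative):
# the mount point, host name, and password file name are illustrative
mmobj swift base -g /ibm/cesfs --cluster-hostname protocols.example.com --local-keystone --pwd-file mmobjpwdfile --enable-s3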
2. To list object configuration settings for proxy-server.conf, DEFAULT section, run the following
command:
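Based on the config list parameters described above, a command of the following form applies:
mmobj config list --ccrfile proxy-server.conf --section DEFAULT
The command returns output similar to the following: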
[DEFAULT]
bind_port = 8080
workers = auto
user = swift
log_level = ERROR
3. To change the number of worker processes that each object server starts, update the object-
server.conf file as shown here:
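For example, a config change command of the following form could be used (the property name workers and its value are illustrative; workers also appears in the proxy-server.conf listing above):
# the property name and value are illustrative
mmobj config change --ccrfile object-server.conf --section DEFAULT --property workers --value 4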
5. To migrate the configuration data after an upgrade of IBM Spectrum Scale, run the following
command:
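Based on the manage parameter described above, the command is of the following form:
mmobj config manage --version-sync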
6. To create a new storage policy CompressionTest with the compression function enabled and with
the compression schedule specified, run the following command:
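A command of the following form matches the schedule described below (the schedule string follows the MM:HH:dd:ww format; additional options, such as --file-system, might be required in a given environment):
# the schedule string is derived from the description that follows
mmobj policy create CompressionTest --enable-compression --compression-schedule "20:5,11,17,23:*:*"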
This compression schedule indicates that the fileset that is associated with the policy is compressed
at the 20th minute of the 5th, 11th, 17th, and 23rd hour of the day on every day of every week.
7. To list storage policies for Object Storage with details of functions available with those storage
policies, run the following command:
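Based on the policy list parameters described above, a command of the following form applies:
mmobj policy list --verbose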
10. To enable object access for a container, run the following command:
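Per the objectize parameters described above, a command of the following form can be used (the storage policy, account, and container names are illustrative):
# the storage policy, account, and container names are illustrative
mmobj file-access objectize --storage-policy sof_policy1 --account-name admin --container-name container1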
11. To enable object access on a file when you specify a storage policy, run the following command:
13. To list the information about a region, run the following command:
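Based on the multiregion parameters described above, the command is of the following form:
mmobj multiregion list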
14. To set up the initial multi-region environment on the first region, run the following command:
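Based on the multiregion enable parameter described above, the command is of the following form:
mmobj multiregion enable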
15. To export multi-region data for use by other clusters to join multi-region, run the following command:
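Per the export parameters described above, a command of the following form can be used (the region file path is illustrative):
# the region file path is illustrative
mmobj multiregion export --region-file /tmp/multiregion.dat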
16. To import the specified multi-region configuration environment created by the export command into a
region, run the following command:
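Per the import parameters described above, a command of the following form can be used (the region file path is illustrative):
# the region file path is illustrative
mmobj multiregion import --region-file /tmp/multiregion.dat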
17. To remove a region that is designated by region number 2 from a multi-region environment and to
remove all configuration information of the specified region, run the following command:
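Based on the remove parameters described above, a command of the following form applies:
mmobj multiregion remove --region-number 2 --force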
18. To create a policy with no force add where only the default GPFS policy rule is established, run the
following command:
The system creates the policy, adds and establishes the rule within the GPFS policy rules, and
displays the following output:
[I] Creating new unique index and building the object rings
[I] Updating the configuration
[I] Uploading the changed configuration
19. To create a policy with force add, run the following command:
The system creates the policy, adds and establishes the rule within GPFS policy rules, and displays
the following output:
[I] Creating new unique index and building the object rings
[I] Updating the configuration
[I] Uploading the changed configuration
20. To create a policy with no force add, run the following command:
The system creates the policy, adds the rule but does not establish the rule within the GPFS policy
rules, and displays the following output:
[I] Creating new unique index and building the object rings
[I] Updating the configuration
[I] Uploading the changed configuration
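21. To enable the file-access capability and start the ibmobjectizer service, a command of the following form can be used (per the file-access enable parameter described above):
mmobj file-access enable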
The system enables the file-access capability and starts the ibmobjectizer service.
22. To disable the ibmobjectizer service, run the following command:
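Based on the file-access disable parameter described above, the command is of the following form:
mmobj file-access disable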
The system disables the file-access capability and stops the ibmobjectizer service.
23. To use the disable --objectizer-daemon parameter, run the following command:
25. To use the storage-policy parameter for file objectization, run the following command:
26. To use the node parameter for running the command on a remote node, run the following command:
The command is run on the gpfs_node1 node and all the files from all containers within the admin account are objectized. If --node or -N is not specified, the mmobj file-access objectize command checks whether the current node is a CES node. If the current node is a CES node, the command is run on the current node. If the current node is not a CES node, the command is run on a random remote CES node:
Performing objectization
Objectization complete
See also
• “mmces command” on page 132
• “mmchconfig command” on page 169
• “mmlscluster command” on page 484
• “mmlsconfig command” on page 487
• “mmnfs command” on page 552
• “mmsmb command” on page 698
• “mmuserauth command” on page 727
Location
/usr/lpp/mmfs/bin
mmperfmon command
Configures the Performance Monitoring tool and lists the performance metrics.
Synopsis
mmperfmon config generate --collectors CollectorNode[,CollectorNode...]
[ --config-file InputFile ]
or
or
or
or
or
or
or
or
Availability
Available on all IBM Spectrum Scale editions.
The protocol functions provided in this command, or any similar command, are generally referred to as
CES (Cluster Export Services). For example, protocol node and CES node are functionally equivalent
terms.
Description
mmperfmon config modifies the performance monitoring tool by updating the configuration stored in
IBM Spectrum Scale. It can be used to generate an initial configuration, to update reporting periods of
different sensors, or to restrict sensors to a given set of nodes.
mmperfmon query is used to query metrics in a cluster from the performance metrics collector. Output can be delivered in a raw format, as a formatted table layout, or as a CSV export.
In addition to metrics known by the performance collector, the mmperfmon query command can also run predefined named queries or use predefined computed metrics. You can specify a bucket size in seconds for each record to return and the number of buckets to retrieve. You can also specify the duration or a time range for which the query can run.
When using the performance monitoring tool in a container environment, the transient network interfaces and file system mount points can increase the amount of metadata or keys within the performance monitoring tool dramatically and slow down the metric queries. To prevent this, use the filter attribute of the Network and DiskFree sensors. Usage of the mmperfmon command to set the filter conditions is shown in Example 4.
Parameters
config
generate
Generates the configuration of the performance monitoring tool.
Note: After the configuration is generated, remember to turn on monitoring through the mmchnode command.
--collectors CollectorNode[,CollectorNode...] specifies the set of collectors to which the sensors report their performance measurements. The number of collectors that each sensor reports to can be specified through the colRedundancy parameter in the template sensor configuration file (see --config-file). Federated collectors are automatically configured between these collectors.
For more information on federated collectors, see the Configuring multiple collectors section in the
IBM Spectrum Scale: Problem Determination Guide.
--config-file InputFile specifies the template sensor configuration file to use. If this option is not
provided, the /opt/IBM/zimon/defaults/ZIMonSensors.cfg file is used.
add
Adds a new sensor to the performance monitoring tool.
--sensors SensorFile adds the sensors specified in SensorFile to the sensor configuration. Multiple
sensors in the configuration file need to be separated by a comma. Following is a sample
SensorFile:
sensors = {
name = "MySensor"
# sensor disabled by default
period = 0
type = "Generic"
}
The generic sensor and a sensor-specific configuration file need to be installed on all the nodes
where the generic sensor is to be activated.
update
Updates the existing configuration.
--collectors CollectorNode[,CollectorNode...] updates the collectors to be used by the sensors and
for federation (see config generate for details).
--config-file InputFile specifies a template sensor configuration file to use. This overwrites the
currently used configuration with the configuration specified in InputFile.
Attribute=value ... specifies a list of attribute value assignments. This sets the value of attribute
Attribute to value. Attribute is a combination of a sensor name and one of its parameters separated
by a dot as shown: <sensor>.<parameter>. For example, CPU.period.
The following attributes are supported:
<sensor>.period
Specifies the seconds between each invocation of the sensor. This parameter is supported for
all sensors.
<sensor>.restrict
Specifies a node or node class. The invocation of the sensor is limited to the nodes specified.
This parameter is supported for all sensors.
<sensor>.filter
Specifies a sensor-specific element to be ignored when the sensor retrieves data. This
parameter is supported for sensors Network and DiskFree.
delete
Removes configuration of the performance monitoring tool or the specified sensors.
--sensors Sensor[,Sensor...] removes the sensors with the specified names from the performance
monitoring configuration.
--all removes the entire performance monitoring configuration from IBM Spectrum Scale.
show
Displays the currently active performance monitoring configuration. Specifies the following
options:
--config-file OutputFile specifies that the output will be saved to the OutputFile. This option is
optional.
query
Metric[,Metric...] specifies a comma-separated list of metrics to display in the output.
Key[,Key...] specifies a key that consists of a node name, a sensor group, optional additional filters, and a metric, separated by the pipe symbol (|). For example:
"cluster1.ibm.com|CTDBStats|locking|db_hop_count_bucket_00"
• --short displays the column names in a short form when there are too many to fit into a row.
• --nice displays the column headers in the output in a bold and underlined typeface.
• --resolve displays the resolved computed metrics and metrics that are used.
• --list {computed | metrics | keys | filters | queries | expiredKeys | all} lists the following information:
– computed displays the computed metrics.
– metrics displays the metrics.
– keys lists the keys.
– filters lists the filters.
– queries lists the available predefined queries.
– expiredKeys lists the group keys for the entities that have not returned any metrics values within
the default retention period of 14 days.
– all displays the computed metrics, metrics, keys, filters, and queries.
report
Returns a report.
top
Returns a report of the top processes by CPU usage.
StartTime specifies the start timestamp for query in the YYYY-MM-DD-hh:mm:ss format.
EndTime specifies the end timestamp for a query in the YYYY-MM-DD-hh:mm:ss format. If it is
not specified, the query returns results until the present time.
Duration specifies the number of seconds into the past from the present time or EndTime.
Options specifies the following options:
• -N or --Node NODENAME specifies the node from where the metrics should be retrieved.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
• --bucket-size BUCKET_SIZE specifies the bucket size in number of seconds. Default is 1.
• --number-buckets NUMBER_BUCKETS specifies the number of buckets to display. Default is 10.
• --json provides the output in a JSON format.
• --raw provides the output in a RAW format.
Note: The --json and --raw options display similar outputs.
• --filter FILTER specifies the filter criteria for the query to run. To see the list of filters in the node
use the mmperfmon query --list filters command. Currently, the only supported filter is
node.
delete
Removes expired keys from the performance monitoring tool database.
--key Key[,Key...] specifies the key or list of keys that have to be removed from the performance monitoring tool database, if they are expired keys. The keys are specified as a comma-separated string.
--expiredKeys specifies that all the expired keys have to be removed from the performance monitoring tool database.
Note: A group key is the part of the metric key string representing a base entity. For example, for the
keys:
gpfsgui-cluster-1.novalocal|GPFSInodeCap|nfs_shareFS|gpfs_fs_inode_alloc
gpfsgui-cluster-1.novalocal|GPFSInodeCap|nfs_shareFS|gpfs_fs_inode_free
gpfsgui-cluster-1.novalocal|GPFSInodeCap|nfs_shareFS|gpfs_fs_inode_max
gpfsgui-cluster-1.novalocal|GPFSInodeCap|nfs_shareFS|gpfs_fs_inode_used
the group key is gpfsgui-cluster-1.novalocal|GPFSInodeCap|nfs_shareFS.
Exit status
0
Successful completion.
1
Invalid arguments given
2
Invalid option
3
No node found with a running performance collector
4
Performance collector backend signaled bad query, for example, no data for this query.
Security
You must have root authority to run the mmperfmon command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. The
performance monitoring tool uses the GPFS™ cluster daemon node names and network to communicate
between nodes. For more information, see Requirements for administering a GPFS file system in IBM
Spectrum Scale: Administration Guide.
Examples
1. To generate configuration for the c89f8v03 collector node, issue the command:
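Based on the config generate parameters described above, the command is of the following form:
mmperfmon config generate --collectors c89f8v03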
2. To add /tmp/SensorFile sensor to the performance monitoring tool, issue the command:
}
smbstat = ""
4. To update the Network.filter value to ignore multiple network interfaces, issue the following
command:
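Using the Attribute=value syntax described above, a command of the following form can be used (the filter expression is illustrative; adjust it to match the interface names to be ignored):
# the filter expression is illustrative
mmperfmon config update Network.filter="netdev_name=veth.*|docker.*"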
6. To display the currently active performance monitoring configuration, issue the command:
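The command is of the following form:
mmperfmon config show
The output is similar to the following: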
cephMon = "/opt/IBM/zimon/CephMonProxy"
cephRados = "/opt/IBM/zimon/CephRadosProxy"
colCandidates = "c89f8v03"
colRedundancy = 1
collectors = {
host = ""
port = "4739"
}
config = "/opt/IBM/zimon/ZIMonSensors.cfg"
ctdbstat = ""
daemonize = T
hostname = ""
ipfixinterface = "0.0.0.0"
logfile = "/var/log/zimon/ZIMonSensors.log"
loglevel = "info"
mmcmd = "/opt/IBM/zimon/MMCmdProxy"
mmdfcmd = "/opt/IBM/zimon/MMDFProxy"
mmpmon = "/opt/IBM/zimon/MmpmonSockProxy"
piddir = "/var/run"
release = "4.2.0-0"
sensors = {
name = "CPU"
period = 1
},
{
name = "Load"
period = 1
},
{
name = "Memory"
period = 1
},
{
name = "Network"
period = 1
},
{
name = "Netstat"
period = 0
},
{
name = "Diskstat"
period = 0
},
{
name = "DiskFree"
period = 600
},
{
name = "GPFSDisk"
period = 0
},
{
name = "GPFSFilesystem"
period = 1
},
{
name = "GPFSNSDDisk"
period = 1
restrict = "nsdNodes"
},
{
name = "GPFSPoolIO"
period = 0
},
{
name = "GPFSVFS"
period = 1
},
{
name = "GPFSIOC"
period = 0
},
{
name = "GPFSVIO"
period = 0
},
{
name = "GPFSPDDisk"
period = 1
restrict = "nsdNodes"
},
{
name = "GPFSvFLUSH"
period = 0
},
{
name = "GPFSNode"
period = 1
},
{
name = "GPFSNodeAPI"
period = 1
},
{
name = "GPFSFilesystemAPI"
period = 1
},
{
name = "GPFSLROC"
period = 0
},
{
name = "GPFSCHMS"
period = 0
},
{
name = "GPFSAFM"
period = 0
},
{
name = "GPFSAFMFS"
period = 0
},
{
name = "GPFSAFMFSET"
period = 0
},
{
name = "GPFSRPCS"
period = 0
},
{
name = "GPFSFilesetQuota"
period = 3600
},
{
name = "GPFSDiskCap"
period = 0
},
{
name = "NFSIO"
period = 0
proxyCmd = "/opt/IBM/zimon/GaneshaProxy"
restrict = "cesNodes"
type = "Generic"
},
{
name = "SwiftAccount"
period = 1
restrict = "cesNodes"
type = "generic"
},
{
name = "SwiftContainer"
period = 1
restrict = "cesNodes"
type = "generic"
},
{
name = "SwiftObject"
period = 1
restrict = "cesNodes"
type = "generic"
},
{
name = "SwiftProxy"
period = 1
restrict = "cesNodes"
type = "generic"
}
smbstat = ""
7. To list metrics by key, for a given node, sensor group, and metric, issue this command:
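A query of the following form can be used (the key is the illustrative key shown in the query parameter description above; -n and -b set the number of buckets and the bucket size, as shown in Example 11):
# the key is illustrative
mmperfmon query "cluster1.ibm.com|CTDBStats|locking|db_hop_count_bucket_00" -n 10 -b 1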
2 2015-04-08-12:54:54 0
3 2015-04-08-12:54:55 0
4 2015-04-08-12:54:56 0
5 2015-04-08-12:54:57 0
6 2015-04-08-12:54:58 0
7 2015-04-08-12:54:59 0
8 2015-04-08-12:55:00 0
9 2015-04-08-12:55:01 0
10 2015-04-08-12:55:02 0
8. To list the two metrics nfs_read_lat and nfs_write_lat for a specific time range, filtered by an
export and NFS version with 60-seconds-buckets (one record represents 60 seconds), issue this
command:
mmperfmon query nfs_read_lat,nfs_write_lat 2014-12-19-11:15:00 2014-12-19-11:20:00 --filter export=/ibm/gpfs/nfsexport,nfs_ver=NFSv3 -b 60
Available Filters:
node
gpfs-21.localnet.com
gpfs-22.localnet.com
protocol
smb2
db_name
account_policy
autorid
brlock
ctdb
dbwrap_watchers
g_lock
group_mapping
leases
locking
netlogon_creds_cli
notify_index
passdb
registry
secrets
serverid
share_info
smbXsrv_open_global
smbXsrv_session_global
smbXsrv_tcon_global
smbXsrv_version_global
gpfs_fs_name
fs0
gpfs0
gpfs_cluster_name
gpfs-cluster-2.localnet.com
mountPoint
/
/boot
/dev
/dev/shm
/gpfs/fs0
/mnt/gpfs0
/run
/sys/fs/cgroup
operation
break
cancel
close
create
find
flush
getinfo
ioctl
keepalive
lock
logoff
negprot
notify
read
sesssetup
setinfo
tcon
tdis
write
sensor
CPU
CTDBDBStats
CTDBStats
DiskFree
GPFSFilesystemAPI
GPFSVFS
Load
Memory
Network
SMBGlobalStats
SMBStats
netdev_name
eth0
lo
10. To run a named query for export /ibm/gpfs/nfsexport and nfs_ver NFSv3, using the default bucket size of 1 second and showing the last 10 buckets, issue this command:
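Following the pattern shown in Example 11 below, a command of the following form applies (the -b option is omitted because the default bucket size of 1 second is used):
mmperfmon query nfsIOrate --filter export=/ibm/gpfs/nfsexport,nfs_ver=NFSv3 -n 10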
Legend:
1: cluster1.ibm.com|NFSIO|/ibm/gpfs/nfsexport|NFSv3|nfs_read_ops
2: cluster2.ibm.com|NFSIO|/ibm/gpfs/nfsexport|NFSv3|nfs_write_ops
11. To run a named query for export /ibm/gpfs/nfsexport and nfs_ver NFSv3, using bucket size of
1 minute, showing last 20 buckets (= 20 minutes), issue this command:
mmperfmon query nfsIOrate --filter export=/ibm/gpfs/nfsexport,nfs_ver=NFSv3,node=cluster1.ibm.com -n 20 -b 60
Legend:
1: cluster1.ibm.com|NFSIO|/ibm/gpfs/nfsexport|NFSv3|nfs_read_ops
2: cluster2.ibm.com|NFSIO|/ibm/gpfs/nfsexport|NFSv3|nfs_write_ops
12. To run a compareNodes query for the cpu_user metric, issue this command:
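A command of the following form can be used (the compareNodes named query is assumed to take the metric name as shown; use --list queries to display the available predefined queries):
# the query invocation form is illustrative
mmperfmon query compareNodes cpu_user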
Legend:
1: cluster1.ibm.com|CPU|cpu_user
2: cluster2.ibm.com|CPU|cpu_user
16. To list the top CPU report, issue the following command:
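Based on the report top parameters described above, a command of the following form applies (the node name is illustrative and matches the output shown below):
# the node name is illustrative
mmperfmon report top -N node-11.localnet.com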
Time node-11.localnet.com
2020-04-03-17:11:00 11684 33 12
9205 23 302
26331 9 0
27671 9 1
1 3 2
9 3 0
386 1 0
707 1 0
1012 1 5
1016 1 2
[root@node-11 ~]#
See also
• "mmdumpperfdata command"
Location
/usr/lpp/mmfs/bin
mmpmon command
Manages performance monitoring and displays performance information.
Synopsis
mmpmon [-i CommandFile] [-d IntegerDelayValue] [-p]
[-r IntegerRepeatValue] [-s] [-t IntegerTimeoutValue]
Availability
Available on all IBM Spectrum Scale editions.
Description
Before you attempt to use mmpmon, it is a good idea to review the current command topic and read the
topic Monitoring I/O performance with the mmpmon command in the IBM Spectrum Scale: Problem
Determination Guide.
Use the mmpmon command to manage GPFS performance monitoring functions and display performance
monitoring data. The mmpmon command reads requests from an input file or standard input (stdin), and
writes responses to standard output (stdout). Error messages go to standard error (stderr). Prompts, if not
suppressed, go to stderr.
You can run mmpmon so that it continually reads input from a pipe. That is, the driving script or application
never sends an end of file. In this scenario, it is a good idea to set the -r option to 1, or to use the default
value of 1. This setting prevents the command from caching input records. In doing so it avoids
unnecessary memory consumption.
This command cannot be run from a Windows node.
Results
The performance monitoring request is sent to the GPFS daemon that is running on the same node that is
running the mmpmon command.
All results from the request are written to stdout.
The command has two output formats:
• Human readable, intended for direct viewing.
In this format, the results are keywords that describe the value presented, followed by the value. Here
is an example:
disks: 2
• Machine readable, an easily parsed format intended for further analysis by scripts or applications.
In this format, the results are strings with values presented as keyword/value pairs. The keywords are
delimited by underscores (_) and blanks to make them easier to locate.
For details on how to interpret the mmpmon command results, see the topic Monitoring GPFS I/O
performance with the mmpmon command in the IBM Spectrum Scale: Administration Guide.
Parameters
-i CommandFile
The input file contains mmpmon command requests, one per line. Use of the -i flag implies use of the
-s flag. For interactive use, omit the -i flag. In this case, the input is then read from stdin, allowing
mmpmon to take keyboard input or output piped from a user script or application program.
Leading blanks in the input file are ignored. A line beginning with a number sign (#) is treated as a
comment. Leading blanks in a line whose first non-blank character is a number sign (#) are ignored.
The input requests to the mmpmon command are as follows:
fs_io_s
Displays I/O statistics per mounted file system.
io_s
Displays I/O statistics for the entire node.
nlist add name [name...]
Adds node names to a list of nodes for mmpmon processing.
nlist del
Deletes a node list.
nlist new name [name...]
Creates a node list.
nlist s
Shows the contents of the current node list.
nlist sub name [name...]
Deletes node names from a list of nodes for mmpmon processing.
once request
Indicates that the request is to be performed only once.
qosio
Displays statistics for Quality of Service for I/O operations (QoS).
reset
Resets statistics to zero.
rhist nr
Changes the request histogram facility request size and latency ranges.
rhist off
Disables the request histogram facility. This value is the default.
rhist on
Enables the request histogram facility.
rhist p
Displays the request histogram facility pattern.
rhist reset
Resets the request histogram facility data to zero.
rhist s
Displays the request histogram facility statistics values.
rpc_s
Displays the aggregation of execution time for remote procedure calls (RPCs).
rpc_s size
Displays the RPC execution time according to the size of messages.
ver
Displays mmpmon version.
vio_s [f rg RecoveryGroupName [da DeclusteredArray [v Vdisk]]] [reset]
Displays IBM Spectrum Scale RAID vdisk I/O statistics. For more information about IBM Spectrum
Scale RAID, see IBM Spectrum Scale RAID: Administration.
vio_s_reset [f rg RecoveryGroupName [da DeclusteredArray [v Vdisk]]]
Resets IBM Spectrum Scale RAID vdisk I/O statistics. For more information about IBM Spectrum
Scale RAID, see IBM Spectrum Scale RAID: Administration.
loc_io_s [f fs fsName [p poolName]]
Displays locality I/O statistics. It displays the amount of data that is written and read locally and remotely. It supports file system and pool filters.
Options
-d IntegerDelayValue
Specifies a number of milliseconds to sleep after one invocation of all the requests in the input file.
The default value is 1000. This value must be an integer greater than or equal to 500 and less than or
equal to 8000000.
The command processes the input file in the following way:
1. The command processes each request in the input file sequentially. It reads a request, processes
it, sends it to the GPFS daemon, and waits for the response. When it receives the response, the
command processes it and displays the results of the request. The command then goes on to the
next request in the input file.
2. When the command processes all the requests in the input file, it sleeps for the specified number
of milliseconds.
3. When the command wakes, it begins another cycle of processing, beginning with Step 1. The
number of repetitions depends on the value of the -r flag.
-p
Indicates to generate output that can be parsed by a script or program. If this option is not specified,
human-readable output is produced.
-r IntegerRepeatValue
Specifies the number of times to run all the requests in the input file.
The default value is one. Specify an integer between zero and 8000000. Zero means to run forever, in
which case processing continues until it is interrupted. This feature is used, for example, by a driving
script or application program that repeatedly reads the result from a pipe.
The once prefix directive can be used to override the -r flag. See the description of once in the topic
Monitoring GPFS I/O performance with the mmpmon command in the IBM Spectrum Scale:
Administration Guide.
-s
Indicates to suppress the prompt on input.
Use of the -i flag implies use of the -s flag. For use in a pipe or with redirected input (<), the -s flag
is preferred. If not suppressed, the prompts go to standard error (stderr).
-t IntegerTimeoutValue
Specifies a number of seconds that the command waits for responses from the GPFS daemon before
it fails.
The default value is 60. This value must be an integer greater than or equal to 1 and less than or equal
to 8000000.
Exit status
0
Successful completion.
1
Various errors, including insufficient memory, input file not found, incorrect option, and others.
3
Either no commands were entered interactively, or the input file did not contain any mmpmon
commands. The input file was empty, or consisted of all blanks or comments.
4
mmpmon terminated due to a request that was not valid.
5
An internal error occurred.
111
An internal error occurred. A message follows.
Restrictions
1. Up to five instances of mmpmon can be run on a node concurrently. However, concurrent users might
interfere with each other. See the topic Monitoring GPFS I/O performance with the mmpmon command
in the IBM Spectrum Scale: Administration Guide.
2. Do not alter the input file while mmpmon is running.
3. The input file must contain valid input requests, one per line. When mmpmon finds an invalid request, it
issues an error message and terminates. The command processes input requests that appear in the
input file before the first invalid request.
Security
The mmpmon command must be run by a user with root authority, on the node for which you want
statistics.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. Assume that infile contains these requests:
ver
io_s
fs_io_s
rhist off
The requests in the input file are run 10 times, with a delay of 5000 milliseconds (5 seconds) between
invocations.
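Based on the -i, -r, and -d options described above, the invocation is of the following form:
mmpmon -i infile -r 10 -d 5000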
2. This example uses the same parameters as the previous example, but with the -p flag:
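That is, an invocation of the following form:
mmpmon -i infile -r 10 -d 5000 -p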
3. This example uses the fs_io_s option with a mounted file system:
4. This example is the same as the previous one, but with the -p flag:
_fs_io_s_ _n_ 198.168.1.8 _nn_ node1 _rc_ 0 _t_ 1093352061 _tu_ 93867 _cl_ node1.localdomain
_fs_ gpfs1 _d_ 1 _br_ 52428800 _bw_ 87031808 _oc_ 6 _cc_ 4 _rdc_ 51 _wc_ 83 _dir_ 0 _iu_ 10
_fs_io_s_ _n_ 198.168.1.8 _nn_ node1 _rc_ 0 _t_ 1093352061 _tu_ 93867 _cl_ node1.localdomain
_fs_ gpfs2 _d_ 2 _br_ 87031808 _bw_ 52428800 _oc_ 4 _cc_ 3 _rdc_ 12834 _wc_ 50 _dir_ 0
_iu_ 8
6. This example is the same as the previous one, but with the -p flag:
_io_s_ _n_ 198.168.1.8 _nn_ node1 _rc_ 0 _t_ 1093351982 _tu_ 356420 _br_ 139460608
_bw_ 139460608 _oc_ 10 _cc_ 7 _rdc_ 0 _wc_ 133 _dir_ 0 _iu_ 14
8. This example is the same as the previous one, but with the f fs sncfs p datapool filters.
Location
/usr/lpp/mmfs/bin
mmprotocoltrace command
Starts, stops, and monitors tracing for the CES protocols.
Synopsis
mmprotocoltrace start Identifier [Identifier...] [-c ClientIP]
[-d Duration] [-l LogFileDir] [-N Nodes] [-f]
or
or
or
or
or
mmprotocoltrace {config}
Availability
Available on all IBM Spectrum Scale editions.
Notice: This command has function in common with other existing commands. As such, the function might, at any time in a future release, be rolled into other commands and be immediately deprecated from use without prior notice. Information about the change and about the commands that replace it would be provided in some format at the time of that change. Users should avoid using this command in any type of automation or scripting, or be advised that a future change might break that automation without prior notice.
Description
Use the mmprotocoltrace command to trace Winbind, SMB, or network operations. You can start, stop,
reset, or display the status of a trace with this command. It also controls the timeouts for the traces to
avoid excessive logging.
Note: The protocol functions provided in this command, or any similar command, are generally referred to
as CES (Cluster Export Services). For example, protocol node and CES node are functionally equivalent
terms.
For more information about this command, see the topic CES tracing and debug data collection in the IBM
Spectrum Scale: Problem Determination Guide.
Parameters
options
Specifies one of the following trace options:
-d Duration
Specifies the trace duration in minutes. The default is 10.
-l LogFileDir
Specifies the name of the directory that contains the log and tar files that are created by the trace.
The directory name cannot be a shared directory. The default is a directory in /tmp/mmfs that is
named by the trace type and time.
If the sudo wrapper mode is used, then the internal value for LogFileDir is set to the value that
is configured for dataStructureDump in the IBM Spectrum Scale configuration using the
mmlsconfig dataStructureDump command.
-N Nodes
Specifies a comma-separated list of names of the CES nodes where you want tracing to be done.
The default is all the CES nodes. For more information, see the topic Tips for using
mmprotocoltrace in the IBM Spectrum Scale: Problem Determination Guide.
-c ClientIPs
Specifies a comma-separated list of client IP addresses to trace. The CES nodes that you specified
in the -N parameter will trace their connections with these clients. This parameter applies only to
SMB traces and Network traces. For more information, see the topic Tips for using
mmprotocoltrace in the IBM Spectrum Scale: Problem Determination Guide.
-f
Forces an action to occur. Affects the clear command. This parameter also disables the prompt for smb and smbsyscalls.
-v
Verbose output. Affects only the status command.
command
Specifies one of the following trace commands:
start
Starts tracing for the specified component.
stop
Stops tracing for the specified component. If no component is specified, this option stops all
active traces.
status
Displays the status of the specified component. If no component is specified, this option displays
the status of all current traces.
config
Displays the protocol tracing configuration settings that are currently active for the node on which
the command is run. For example, the limit for the maximal trace size. These settings are defined
in the file /var/mmfs/ces/mmprotocoltrace.conf and are specific for each CES node.
clear
Clears the trace records from the trace list. If no component is specified, this option clears all
current traces.
reset
Resets the nodes to the default state that is defined in the configuration file.
Identifier
Specifies one of the following components:
smb
Traces the SMB service. This enables detailed tracing for incoming SMB connections from the specified SMB client IP addresses. When using this trace, it is recommended to have only a few incoming connections from the specified IP addresses, because too many traced connections can impact the cluster performance.
network
Traces the Network service.
smbsyscalls
Collects the strace logs for SMB. This collects a trace for all system calls issued from SMB connections from the specified SMB client IP addresses. When using this trace, it is recommended to have only a few incoming connections from the specified IP addresses, because too many traced connections can impact the cluster performance.
winbind
Traces the winbind service that is used for user authentication.
Exit status
0
Successful completion.
Nonzero
A failure occurred.
Security
You must have root authority to run the mmprotocoltrace command.
The node on which the command is run must be able to process remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. See
the information about the requirements for administering a GPFS system in the IBM Spectrum Scale:
Administration Guide.
Examples
1. To start an SMB trace, issue this command:
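Per the synopsis, a command of the following form can be used (the client IP address is illustrative):
# the client IP address is illustrative
mmprotocoltrace start smb -c 10.0.100.42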
Starting traces for the given clients will stop their connections prior to tracing.
Open files on these connections might get corrupted so please close them first.
Do you really want to perform the operation? (yes/no - default no): yes
Setting up traces
Trace 'f5d75a67-621e-4f09-8d00-3f9efc4093f2' created successfully for 'smb'
Stopping traces
See also
• “mmaddcallback command” on page 12
• “mmces command” on page 132
• “mmchconfig command” on page 169
• “mmlscluster command” on page 484
• “mmlsconfig command” on page 487
• “mmsmb command” on page 698
• “mmuserauth command” on page 727
Location
/usr/lpp/mmfs/bin
mmpsnap command
Creates or deletes identical snapshots on the cache and home clusters, or shows the status of snapshots
that have been queued up on the gateway nodes.
Synopsis
mmpsnap Device create -j FilesetName [{[--comment Comment] [--uid ClusterUID]} | --rpo] [--wait]
or
or
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
The mmpsnap command creates or deletes identical snapshots on the cache and home clusters, or shows
the status of snapshots that have been queued up on the gateway nodes. You can use this command only
in a Single writer (SW) cache. You cannot use it for Read only (RO), Independent writer (IW), or Local updates (LU) caches. Peer snapshots are not allowed on a Single writer (SW) cache that uses the NSD
protocol for communicating with home.
Parameters
Device
Specifies the device name of the file system.
create
Creates a fileset level snapshot in cache and a snapshot with the same name at home. The snapshot
at home could be fileset level or file system level, depending on whether the exported path is an
independent fileset or file system.
-j FilesetName
Specifies the name of the fileset.
--comment Comment
Optional; specifies user-defined text to be prepended to the snapshot name (thereby customizing the
name of the snapshot).
Note: You must use alphanumeric characters when you customize the snapshot name.
--uid ClusterUID
Optional; specifies a unique identifier for the cache site. If not specified, this defaults to the GPFS
cluster ID.
--rpo
Optional; specifies that user recovery point objective (RPO) snapshots are to be created for a primary
fileset. This option cannot be specified with the --uid option.
--wait
Optional; makes the creation of cache and home snapshots a synchronous process. When specified,
mmpsnap does not return until the snapshot is created on the home cluster. When not specified,
mmpsnap is asynchronous and returns immediately rather than waiting for the snapshot to be created
at home.
delete
Deletes the local and remote copies of the specified snapshot; AFM automatically figures out the
remote device and fileset.
-s SnapshotName
Specifies the name of the snapshot to be deleted. A snapshot name is constructed as follows:
{commentString}-psnap-{clusterId}-{fsUID}-{fsetID}-{timestamp}
psnap-14133737607146558608-C0A8AA04:4EDD34DF-1-11-12-05-14-32-10
status
Shows the status of snapshots that have been queued up on the gateway nodes. The status includes
the following pieces of information:
• Last successful snapshot (obtained through mmlsfileset --afm)
• Status of the current snapshot process.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmpsnap command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To create a fileset level snapshot in cache of a single-writer fileset called sw in file system fs1, issue this command:
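Based on the synopsis, the command is of the following form:
mmpsnap fs1 create -j sw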
2. To verify that the snapshot is created in the cache, issue this command:
mmlssnapshot fs1 -j sw
3. To show that the snapshot is also created at home, issue this command:
mmlssnapshot fs1
See also
• “mmafmconfig command” on page 45
• “mmafmctl command” on page 61
• “mmafmlocal command” on page 78
• “mmchattr command” on page 156
• “mmchconfig command” on page 169
• “mmchfileset command” on page 222
• “mmchfs command” on page 230
• “mmcrfileset command” on page 308
• “mmcrfs command” on page 315
• “mmlsconfig command” on page 487
• “mmlsfileset command” on page 493
• “mmlsfs command” on page 498
Location
/usr/lpp/mmfs/bin
mmputacl command
Sets the GPFS access control list for the specified file or directory.
Synopsis
mmputacl [-d] [-i InFilename] Filename
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
Use the mmputacl command to set the ACL of a file or directory.
If the -i option is not used, the command expects the input to be supplied through standard input, and
waits for your response to the prompt.
For information about NFS V4 ACLs, see the topic Managing GPFS access control lists in the IBM Spectrum
Scale: Administration Guide.
Any output from the mmgetacl command can be used as input to mmputacl. The command is extended
to support NFS V4 ACLs. In the case of NFS V4 ACLs, there is no concept of a default ACL. Instead, there
is a single ACL and the individual access control entries can be flagged as being inherited (either by files,
directories, both, or neither). Consequently, specifying the -d flag for an NFS V4 ACL is an error. By its
nature, storing an NFS V4 ACL implies changing the inheritable entries (the GPFS default ACL) as well.
The following describes how mmputacl works for POSIX and NFS V4 ACLs:
Depending on the file system's -k setting (posix, nfs4, or all), mmputacl may be restricted. The
mmputacl command is not allowed to store an NFS V4 ACL if -k posix is in effect. The mmputacl
command is not allowed to store a POSIX ACL if -k nfs4 is in effect. For more information, see the
description of the -k flag for the mmchfs, mmcrfs, and mmlsfs commands.
Note that the test to see if the given ACL is acceptable based on the file system's -k setting cannot be
done until after the ACL is provided. For example, if mmputacl file1 is issued (no -i flag specified) the
user then has to input the ACL before the command can verify that it is an appropriate ACL given the file
system settings. Likewise, the command mmputacl -d dir1 (again the ACL was not given with the -i
flag) requires that the ACL be entered before file system ACL settings can be tested. In this situation, the
-i flag may be preferable to manually entering a long ACL, only to find out it is not allowed by the file
system.
Parameters
Filename
The path name of the file or directory for which the ACL is to be set. If the -d option is specified,
Filename must be the name of a directory.
Options
-d
Specifies that the default ACL of a directory is to be set. This flag cannot be used on an NFS V4 ACL.
-i InFilename
The path name of a source file from which the ACL is to be read.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You may issue the mmputacl command only from a node in the GPFS cluster where the file system is
mounted.
You must be the file or directory owner, the root user, or someone with control permission in the ACL, to
run the mmputacl command.
Examples
To use the entries in a file named standard.acl to set the ACL for a file named project2.history,
issue this command:
mmputacl -i standard.acl project2.history
where standard.acl contains the following entries:
user::rwxc
group::rwx-
other::--x-
mask::rw-c
user:alpha:rwxc
group:audit:rwx-
group:system:-w--
To confirm the change, issue this command:
mmgetacl project2.history
The system displays information similar to the following:
#owner:paul
#group:design
user::rwxc
group::rwx-
other::--x-
mask::rw-c
user:alpha:rwxc
group:audit:rwx-
group:system:-w--
See also
• “mmeditacl command” on page 395
• “mmdelacl command” on page 356
• “mmgetacl command” on page 422
Location
/usr/lpp/mmfs/bin
mmqos command
Controls the Quality of Service for I/O operations (QoS) settings for a file system.
Synopsis
mmqos class create Device --class ClassName [--fileset FilesetName[,FilesetName...]]
or
mmqos class update Device --class ClassName [{ {add | replace} --fileset FilesetName[,FilesetName...] |
--remove --fileset FilesetName}]
or
mmqos report list Device [--pool {all | PoolName}] [--seconds Seconds] [--sum-classes {yes | no}]
[--sum-nodes {yes | no}] [{-Y | --fine-stats-display-range FineStatsDisplayRange}]
Availability
Available with IBM Spectrum Scale Advanced Edition, IBM Spectrum Scale Data Management Edition,
IBM Spectrum Scale Developer Edition, or IBM Spectrum Scale Erasure Code Edition.
Description
With the mmqos command, you can regulate I/O accesses to disk storage pools to prevent I/O-intensive
processes from dominating I/O access and significantly delaying other processes that access the pools.
The disk storage pools can be the system storage pool or user storage pools.
• With the QoS system classes, you can prevent certain I/O-intensive long running IBM Spectrum Scale
maintenance commands from dominating I/O access to storage pools. This feature allows the
maintenance commands to be run during regular system operation without significantly slowing other
processes that access the same storage pools.
• With QoS user classes, you can regulate I/O access to files in filesets. This feature allows you to assign
different allotments of I/O access to groups in your business organization, such as departments or task
forces, based on their business priorities. For example, you can put the frequently accessed files of
Department HR into filesets FS01 and FS02. You can then assign FS01 and FS02 to a QoS user class
CL01 and set an I/O limit per second for the class. At run time, the QoS component allows I/O accesses
per second to filesets FS01 and FS02 up to the I/O limit per second that you set for class CL01.
You can assign I/O limits to QoS classes either as I/O operations per second (IOPS) or as megabytes per
second (MB/s). You can display regular or fine-grained statistics of I/O accesses by processes over time.
The mmqos command provides all the features that were introduced with the mmchqos and mmlsqos
commands, supports QoS user classes, and has an easy-to-use command syntax.
Warning: mmqos is the future command for all QoS functions and was built to replace the
configuration capability of the mmchqos command. You can use either mmchqos or mmqos on a
file system at any time, but not both. If you start to configure a QoS file system with mmchqos, or
have been using it for a while, and you then try to use mmqos to change that configuration, mmqos
issues a usage error. Likewise, if you start to configure a QoS file system with the new mmqos command
and you then try to use mmchqos to change that configuration, mmchqos issues a usage error. If
you are ready to switch from mmchqos to mmqos, first remove the existing mmchqos configuration.
Both commands have reset functions that remove their configuration from a file system. After
removing the existing configuration, you can move to using the other command.
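For example, if you later decide to return from mmqos to mmchqos, a minimal sketch of removing an existing mmqos configuration (assuming a file system named fs1) is:
# mmqos filesystem disable fs1
# mmqos filesystem reset fs1
The mmqos filesystem reset command fails while QoS is enabled, which is why the disable step comes first.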
For more information, see Setting the Quality of Service for I/O operations (QoS) in the IBM Spectrum
Scale: Administration Guide.
maintenance class
The maintenance class includes certain long-running, I/O-intensive IBM Spectrum Scale maintenance
commands. Typically you assign a smaller share of IOPS or MB/s to the maintenance class than you do
to the class named other. A smaller share of IOPS or MB/s prevents these maintenance commands from
dominating file system performance and significantly delaying other processes that use I/O.
When the maintenance class is configured with I/O limits, QoS restricts the processes in the class
from collectively consuming more I/O resources than are set in the I/O limits. The default limit is
unlimited, which means that QoS does not restrict the I/O at all.
other class
The other class includes almost all other processes that use file system I/O. Typically you assign a
larger share of IOPS or MB/s or the constant unlimited to this class so that normal processes have
greater access to I/O resources and finish more quickly.
The mmchdisk command by default runs in the class named other. However, when you issue the
mmchdisk command you can specify that you want to assign it to the maintenance class. This
assignment is effective only for the instance of the mmchdisk command that you are starting. Use this
option in situations in which you need to limit the I/O of the mmchdisk command so that normal
processes that use I/O can finish more quickly. For more information, see “mmchdisk command” on
page 210.
When the other class is configured with I/O limits, QoS restricts the processes in the class from
collectively consuming more I/O resources than are set in the I/O limits. The default limit is
unlimited, which means that QoS does not restrict the I/O at all.
misc class
The misc class stores the count of IOPS or MB/s that some critical file system processes consume.
You cannot assign IOPS or MB/s to this class, but its count of IOPS or MB/s is displayed along with the
statistics of the other classes by the mmqos report list command.
mdio-all-sharing class
The mdio-all-sharing class is used for class sharing. For more information, see the subtopic
"Class sharing" later in this topic.
Note: Enabling and disabling mmqos QoS services is somewhat analogous to running mmshutdown and
mmstartup for the file system. The QoS services within the daemon are either started or stopped, and
the action to enable or disable them should be infrequent.
Each invocation of the mmqos command that changes the configuration with actions like create,
update, delete, set, enable, or disable causes the daemon to reread the QoS configuration from the
CCR. If the mmqos QoS services are enabled, this causes a small delay of a couple of seconds in
throttling accuracy while new statistics are generated with the updated configuration. If the mmqos QoS
services are enabled, it might also take additional time to process these actions. That is, large changes
to the configuration are more efficient when the mmqos QoS services are disabled.
Device
The file system. An existing file system.
--pool
The disk pool. An existing disk pool.
--class
A QoS user class or a QoS system class. An existing QoS user class that was created or updated with
one or more filesets, which become part of the I/O context.
These are required elements of the I/O context. There are no default values.
• Create: These elements must be specified in the mmqos throttle create command.
• Modify: These elements cannot be changed after the throttle is created. To make a change, you must
delete the throttle and re-create it with different elements.
For more information, see the description of the mmqos throttle create command later in this topic.
Notes:
• Throttle scope
– The throttle scope identifies the nodes across which the I/O limit of the throttle is to be applied. The
throttle scope can be set by either the -N option or the -C option.
– For all QoS user classes and the QoS system class other, the default initial throttle scope is all
nodes, both local and remote. For the QoS system class maintenance, the default initial throttle
scope is all_local. This setting means that the I/O limits are applied across all the nodes in the
local cluster.
– If the throttle scope is not specified in the -N option or the -C option of the mmqos throttle
create command, then the default value is all_local for the maintenance class and all for all
other classes. The setting all_local means all nodes in the local cluster that have mounted the file
system. The setting all means all nodes in the local cluster and in remote clusters that have
mounted the file system.
– For example, suppose that you create a throttle with the following command:
# mmqos throttle create qos2 --pool system --class HR_Dept2 -N TCTNodeClass1 --maxmbs 100
If you later issue the mmqos throttle update command to change the I/O limit of this throttle to
200 MB/s, you must specify exactly the same throttle description:
# mmqos throttle update qos2 --pool system --class HR_Dept2 -N TCTNodeClass1 --maxmbs 200
If you changed the throttle scope to -C all_local after you created the throttle, you must specify
the new throttle scope when you change the I/O limit:
# mmqos throttle update qos2 --pool system --class HR_Dept2 -C all_local --maxmbs 200
If you accepted the default throttle scope when you created the throttle and you never changed the
throttle scope afterward, then you do not need to specify the -N option or the -C option in the mmqos
throttle update command:
# mmqos throttle create qos2 --pool system --class HR_Dept2 --maxmbs 100
...
# mmqos throttle update qos2 --pool system --class HR_Dept2 --maxmbs 200
– If you specify a non-existent throttle description, the command responds with an informative
message.
– To see the current throttle descriptions, issue the mmqos throttle list command:
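For example, assuming the same file system qos2 that is used in the preceding throttle examples:
# mmqos throttle list qos2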
As the following table shows, the I/O limit of a throttle consists of an IOPS limit or an MB/s limit. These
elements occur as options in the mmqos throttle create and mmqos throttle update
commands:
Table 29. Elements of I/O limits
Notes:
• The default value for either the maximum IOPS or the maximum MB/s of a throttle is unlimited. This
setting indicates that QoS does not restrict the I/O of the nodes that are specified in the throttle. However,
to create a throttle, at least one of the two settings must be given a non-default value so that throttling is
actually implemented.
• Although the I/O limit can include both a maximum IOPS setting and a maximum MB/s setting, at run
time the more restrictive setting controls the I/O usage. For example, if the I/O limit is unlimited IOPS
and 40 MB/s, then at run time the nodes that are specified in the throttle are collectively limited to 40
MB/s of I/O.
• No advantage results from setting both the maximum IOPS and the maximum MB/s to specific values. If
both types of limit are set, QoS tracks both the IOPS usage and the MB/s usage at run time and restricts
the I/O when either the maximum IOPS setting or the maximum MB/s setting is reached. To avoid
uncertainty about which limit might be controlling the I/O usage for the throttle, it is a good idea to
specify either MB/s or IOPS but not both.
• The maximum IOPS or the maximum MB/s settings are divided among the nodes that are specified in
the throttle. For more information, see the subtopic "QoS operation" later in this topic.
QoS operation
The following text describes how QoS regulates the consumption of a specified number of IOPS or MB/s
by processes that access files in the disk storage pools of a QoS class. Where the term "IOPS" appears,
the intended meaning is "either IOPS or MB/s":
• For each class, QoS divides the specified IOPS among the nodes in the throttle scope that have
mounted the file system. If the allocation is static, QoS assigns an equal number of IOPS to each node
and does not change the assignments. If the allocation is Mount Dynamic I/O, QoS periodically adjusts
the assignment of IOPS to each node based on the relative frequency of I/O accesses to the disk
storage pools of the class by the processes that are running on each node. For more information, see
the topic "Mount dynamic I/O (MDIO)."
• If the class is a QoS user class, QoS regulates I/O accesses to files for which the following conditions are
true:
– The file is in a fileset that belongs to the specified user class.
– The file data is located in the disk pool that is specified by the throttle.
If the class is one of the QoS system classes maintenance or other, QoS regulates I/O accesses to
files for which the following conditions are true:
– The process that initiates the I/O access belongs to the specified QoS system class.
– The file data is located in the disk pool that is specified by the throttle.
• A QoS component on each node monitors the I/O accesses of the processes that are running on the
node. During each one-second period, for each QoS class, the QoS component allows a QoS I/O access
if the IOPS or MB/s that are allocated to the node have not been exhausted during the current one-
second period. Otherwise, QoS queues the I/O access until the next one-second period begins.
• When you change IOPS allocations, a brief delay due to reconfiguration occurs before QoS begins
applying the new allocations.
• The following QoS operations apply to unmounting and mounting a file system:
– QoS stops applying IOPS allocations to a file system when you unmount it and resumes when you
remount it.
– IOPS allocations persist across unmounting and remounting a file system.
– When you mount a file system, a brief delay due to reconfiguration occurs before QoS starts applying
allocations.
Class sharing
Class sharing is an easy and powerful way to integrate throttles that use the same storage pool and to give
them the ability to share surplus I/O accesses dynamically among themselves. To participate in class
sharing, a class must be mdio-enabled. For more information, see the subtopic "Mount dynamic I/O
(MDIO)".
The following steps provide an overview of class sharing:
1. Two or more existing throttles regulate I/O accesses to the same storage pool P01. You want to change
the limit on their combined I/O accesses to P01 but keep the relative proportion of their I/O limits the
same. You also want any unused I/O accesses to be shared with throttles that need more I/O
accesses.
2. You can create a throttle for the QoS system class mdio-all-sharing that regulates I/O accesses to
the common storage pool. Set the I/O limit of the throttle to the total number of IOPS or MB/s that you
want to allow for the combined throttles.
3. Now the individual throttles are collectively limited to the I/O limit that you configured for the mdio-
all-sharing throttle. Each individual throttle now has an actual I/O limit that is based on the ratio of
its original I/O limit to the total original I/O limits of all the individual throttles. If one individual throttle
has surplus I/O accesses, they are shared proportionally among the other individual throttles.
For example, suppose that you have three existing throttles that regulate I/O accesses to the system pool
and that have I/O limits of 200 IOPS, 300 IOPS, and 500 IOPS for a total of 1000 IOPS. You create a
throttle for the mdio-all-sharing class and the system pool and assign an I/O limit of 1200 IOPS to
the new throttle. Now the three individual throttles are collectively limited to 1200 IOPS. The individual
throttles now have actual I/O limits of 240 IOPS (200/1000 x 1200), 360 IOPS (300/1000 x 1200), and
600 IOPS (500/1000 x 1200). If at some point in time the third throttle has a surplus of 200 IOPS, then
80 IOPS of the surplus (200/500 x 200) are dynamically assigned to the first throttle and 120 IOPS of the
surplus (300/500 x 200) are dynamically assigned to the second throttle.
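A sketch of the command that would configure the sharing throttle in this example, assuming the file system is named fs1:
# mmqos throttle create fs1 --pool system --class mdio-all-sharing --maxiops 1200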
QoS statistics
On each node, a QoS component collects I/O statistics for each process that accesses a QoS disk storage
pool. Periodically, the QoS component sends a report of the accumulated statistics to a QoS manager
node. A QoS manager node stores and manages the statistics of QoS classes and nodes so that the
statistics can be displayed by an mmqos report list command.
As the number of statistics reports increases, it can become so great that the QoS manager node is unable
to process reports in a timely manner. If so, the statistics that are reported by the mmqos report list
command might not reflect the actual number of I/O accesses.
For each class, two QoS attributes affect the frequency with which reports are sent to a QoS manager
node. These attributes are set by the mmqos config command:
stat-poll-interval=Seconds
This attribute specifies the length of the time period during which class statistics are collected. The
following events occur during this period:
1. The QoS statistics variables for the class are reset to zero.
2. Statistics are collected until the collection interval ends.
3. The accumulated statistics are sent to a QoS manager node.
The length of the collection period determines the fine-grainedness of data that is collected. The
shorter that the collection period is, the more fine-grained the data becomes.
stat-slot-time=Milliseconds
This attribute specifies the amount of time that QoS waits after the end of one stat-poll-
interval before beginning the next. The purpose of this attribute is to reduce the number of
statistics reports that are sent for the class to the QoS manager node. The longer that the slot time
period is, the fewer are the class-statistic reports that are sent.
The number of statistics reports also increases or decreases as more or fewer nodes mount the file
system. To counter these effects, it is a good idea to adjust the values of the stat-poll-interval and
the stat-slot-time attributes for each class as the number of nodes that mount the file system
significantly increases or decreases. You can adjust the values manually for each class, or you can activate
automatic adjustment for a class by setting both stat-poll-interval and stat-slot-time to zero.
The following table shows the values that automatic adjustment sets. If you adjust the values manually,
use the values in the table as guidelines:
Limitations
The mmqos command is subject to the following limitations:
• The mmqos command is available only for the Linux operating system. However, the daemon supports
QoS throttling functions on both Linux and AIX nodes within a cluster.
• The QoS system classes have the following limitations:
– They cannot use filesets.
– They do not support MDIO by default.
• The mmqos filesystem reset command does not make a backup copy of the QoS configuration
before it deletes it.
• To avoid creating stale data within the mmqos configuration, you must first remove any QoS
configuration information that is associated with, or that references, other IBM Spectrum Scale system
objects, such as a file system, pool, fileset, node, or cluster name. Before you remove such objects from
the cluster, ensure the following:
– Before you delete a fileset (mmdelfileset), issue the mmqos class delete command to remove
the fileset from the mmqos configuration. If the class contains a fileset that is referenced by a throttle
object, delete that throttle object first with the mmqos throttle delete command and then
remove the class that contains the fileset.
– Before you delete a node (mmdelnode), a storage pool (mmdeldisk), or a cluster name
(mmremotecluster delete), issue the mmqos throttle delete command to remove the
throttle object that contains any of those system objects from the QoS configuration.
– Before you delete a file system, issue the mmqos filesystem reset command first to remove any
QoS configuration information that is associated with the file system.
Note:
• No limit is imposed on the number of user classes, other than limits that might be imposed by the
operating system or hardware.
• No limit is imposed on the number of filesets that can be associated with a user class, other than limits
that might be imposed by the operating system or hardware.
Parameters
class
Commands for working with QoS classes.
Note: The class create, class update, class updatekey, and class delete commands
apply only to QoS user classes, not to QoS system classes. You cannot create, update, do an update
key on, or delete a QoS system class.
create
Creates a QoS user class.
Device
The device name of the file system.
--class ClassName
The name of the user class.
[--fileset FilesetName [,FilesetName...]]
The names of zero or more filesets to be included in the user class. If you do not specify a
fileset, you can add filesets to the class later with the class update add command.
update
Adds, replaces, or removes filesets from a QoS user class.
Device
The device name of the file system.
--class ClassName
The name of the QoS user class to be updated.
{--add | --replace | --remove}
The action to be done.
--add
Adds one or more filesets to the class.
--replace
Removes all filesets from the class and adds the specified filesets to the class. If a class
has ten filesets and you specify --replace with two filesets, the command removes the
ten filesets and adds the two filesets that you specify.
--remove
Removes a fileset from the class. You can remove only one fileset at a time.
Note: If the class includes only one fileset, you cannot remove it. If you want to keep the
class but delete the fileset, you can add a second fileset to the class and then remove the
first fileset. If you do not want to keep the class, you can delete the class with the class
delete command.
[--fileset FilesetName [,FilesetName...]]
The names of one or more filesets to be added to or removed from the QoS user class. If the
command is remove, you can specify only one fileset per command invocation.
delete
Deletes a QoS user class. If a class has throttles, you must delete the throttles before you can
delete the class.
Device
The device name of the file system.
--class ClassName
The user class to be deleted.
set
Sets the value of an attribute for a QoS user class or a QoS system class.
Note: Other QoS attributes exist and are set at the file system level for all QoS user classes in the
file system. For more information, see the description of the mmqos config set command.
Device
The device name of the file system.
--class ClassName
The user class or system class for which the attributes are set.
Attribute=Value[, Attribute=Value...]
The following attribute can be set:
mdio-enabled={yes | no}
Enables or disables MDIO for the class. By default, the value of this class attribute
shadows the value of the file-system-level mdio-enabled attribute. When you set the
value of this class attribute here, the value becomes fixed and no longer shadows the value
of the file-system level attribute.
By default, MDIO is enabled for QoS user classes and is disabled for QoS system classes.
For more information, see the subtopic "Mount dynamic I/O (MDIO)" earlier in this topic.
list
Lists the QoS system classes and QoS user classes that are defined in the specified file system.
For user classes, the command also lists the filesets that belong to the class.
Device
The device name of the file system.
[--config]
Instead of listing the filesets for each user class, the command lists the configuration settings
for each user class. For more information, see the description of the mmqos class set
option.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
throttle
Commands for working with QoS throttles. These commands apply to both system classes and user
classes. For more information, see the subtopic "Creating and modifying throttles" earlier in this topic.
create
Creates a QoS throttle for the specified file system, storage pool, and QoS class.
Device
The device name of the file system.
--pool [PoolName | default]
A storage pool.
--class ClassName
A QoS user class or a QoS system class. For more information, see the subtopic "QoS
operation" earlier in this topic.
-N {Node | NodeClass }
The nodes among which the IOPS or MB/s for this throttle are to be divided. Also referred to as
the throttle scope. For more information, see the subtopic "Creating and modifying throttles"
earlier in this topic.
Node
The specified node.
NodeClass
The nodes in the specified class. The node class mount is not supported.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
-C {all | all_local | all_remote | ClusterName}
The nodes among which the IOPS or MB/s for this throttle are to be divided. Also referred to as
the throttle scope. For more information, see the subtopic "Creating and modifying throttles"
earlier in this topic.
all
All nodes that have mounted the file system, no matter which cluster the node belongs to.
This group includes nodes in the cluster that owns the file system and nodes that belong to
clusters that requested access to the file system.
all_local
All nodes that have mounted the file system and that belong to the cluster that owns the
file system.
all_remote
All nodes that have mounted the file system and that belong to any cluster that requested
access to the file system.
ClusterName
All nodes that have mounted the file system and that belong to the specified cluster.
--maxiops MaxIOPS
The IOPS that are to be divided among the nodes that are governed by this throttle. The
default value is unlimited. The valid range is 0 - 1999999999. To specify a value less than
100, you must specify the force option. It is a good idea not to specify both an IOPS limit and
an MB/s limit for the same throttle.
--maxmbs maxMBS
The megabytes per second of data transfer that are to be divided among the nodes that are
governed by this throttle. The default value is unlimited. The valid range is 0 - 1999999999.
To specify a value less than 100, you must specify the force option. It is a good idea not to
specify both an IOPS limit and an MB/s limit for the same throttle.
--force
Overrides the lower limit on the number of IOPS or MB/s that can be allocated to a throttle.
QoS enforces a lower limit of 100 IOPS or 100 MB/s on the value that the --maxiops option
or the --maxmbs option can be set to. Setting a value below the limit with the --force option
typically causes the processes that are affected by the throttle to run for an indefinitely long
time.
update
Updates the IOPS limit or the MB/s limit of the specified throttle.
Important: If you specified the -N or -C option in the mmqos throttle create command or
if you used the -N or -C option in the mmqos throttle updatekey command to change the
throttle scope, you must specify the same -N or -C option here as the current throttle scope. If
you did not specify the -N or the -C option in either command, you do not have to specify it here,
because the default value is being used. For more information, see the subtopic "Creating and
modifying throttles" earlier in this topic.
Device
The device name of the file system.
--pool [PoolName | default]
A storage pool. Be sure to specify the correct storage pool for the throttle that you want to
update. For more information, see the subtopic "Creating and modifying throttles" earlier in
this topic.
--class ClassName
A QoS user class or system class. Be sure to specify the correct class for the throttle that you
want to update. For more information, see the subtopic "Creating and modifying throttles"
earlier in this topic.
-N {Node | NodeClass }
The nodes among which the IOPS or MB/s for this throttle are to be divided. See the
information about this option in the description of the mmqos throttle create command.
-C {all | all_local | all_remote | ClusterName}
The nodes among which the IOPS or MB/s for this throttle are to be divided. See the
information about this option in the description of the mmqos throttle create command.
--maxiops maxIOPS
The IOPS limit for this throttle. See the information about this option in the description of the
mmqos throttle create command.
--maxmbs maxMBS
The MB/s limit for this throttle. See the information about this option in the description of the
mmqos throttle create command.
--force
Overrides the lower limit on the number of IOPS or MB/s that can be allocated to a throttle.
See the information about this option in the description of the mmqos throttle create
command.
updatekey
Updates the throttle scope, which specifies the nodes across which the I/O limit of the throttle is
to be applied. The throttle scope can be set by either the -new-N option or the -new-C option. For
more information, see the subtopic "Creating and modifying throttles" earlier in this topic.
Important: If you specified the -N or -C option in the mmqos throttle create command or
if you used the -N or -C option in the mmqos throttle updatekey command to change the
throttle scope, you must specify the same -N or -C option here as the current throttle scope. If
you did not specify the -N or the -C option in either command, you do not have to specify it here,
because the default value is being used. For more information, see the subtopic "Creating and
modifying throttles" earlier in this topic.
Device
The device name of the file system.
--pool [PoolName | default]
A storage pool. Be sure to specify the correct storage pool for the throttle that you want to
update. For more information, see the subtopic "Creating and modifying throttles" earlier in
this topic.
--class [ClassName | default]
A QoS user class or system class. Be sure to specify the correct class for the throttle that you
want to update. For more information, see the subtopic "Creating and modifying throttles"
earlier in this topic.
-N {Node | NodeClass }
The current throttle scope. See the information about this option in the description of the
mmqos throttle create command.
-C {all | all_local | all_remote | ClusterName}
The current throttle scope. See the information about this option in the description of the
mmqos throttle create command.
list
Lists the throttle system objects in the file system. You can specify either --pool or --class but
not both. If you specify neither option, the command lists the throttle system objects for all pools
and classes.
Device
The device name of the file system.
--pool PoolName
A storage pool. The command lists only the throttles that are configured to monitor the
specified pool.
--class ClassName
A QoS class. The command lists only the throttles that are created for the specified class.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
config
Controls QoS attributes that apply at the file system level.
Note: To work with the same QoS attributes at the class level, see the descriptions of the mmqos
class set and mmqos class list commands.
Some of these attributes supply the initial value for the corresponding class-level attribute in each
new class. For example, if the file-system-level fileset-stats attribute is set to no, then
fileset-stats is initialized to no in each subsequently created class. Such a change does not
affect the value of fileset-stats in existing classes.
When QoS is initialized, each file-system-level attribute starts at the default value that is given in its
description below.
set
Sets the value of one or more QoS attributes for the specified file system.
Device
The device name of the file system.
Attribute=Value[,Attribute=Value...]
The name of the attribute and the value to set for the attribute. The following attributes can be
set:
fileset-stats={yes | no}
Specifies whether QoS keeps statistics for each fileset that is associated with the class.
The default value is no, which means that QoS does not keep statistics.
fine-stats=Seconds
Specifies how many seconds of fine-grained statistics to save in memory so that the
mmqos report list command can list them. The default value is 0, which means that
QoS does not save any fine-grained statistics. The valid range is 0 - 3840 seconds.
Fine-grained statistics are collected at specified intervals and contain more information
than regular statistics. The interval at which fine-grained statistics are collected is set with
the stat-slot-time file-system-level attribute. The display of fine-grained statistics is
controlled by the --fine-stats-display-range option of the mmqos report list
command.
mdio-enabled={yes | no}
Enables or disables MDIO. When MDIO is enabled for a class, QoS dynamically balances
the allocation of IOPS or MB/s among the nodes that have mounted the file system, based
on the relative proportions of their attempted I/O accesses to the storage pool of the class.
When MDIO is disabled for a class, QoS allocates a fixed number of IOPS or MB/s among
the nodes that have mounted the file system and never changes the allocation.
By default, MDIO is enabled for QoS user classes and is disabled for QoS system classes.
You can change the value of this attribute for individual classes with the mmqos class
set command.
For more information, see the subtopic "Mount dynamic I/O (MDIO)" earlier in this topic.
skim-factor=Fraction
This attribute applies only to QoS system classes. If throttles from two or more QoS
classes are operating and one class has a surplus of unused I/O capacity, QoS
appropriates or "skims" the unused I/O capacity and assigns it to other classes that need
more I/O capacity. The skim-factor attribute specifies the fraction of I/O capacity that
cannot be skimmed from the class by other classes. The default value is .987, which
means that only 1.3 percent of the I/O capacity of the class can be skimmed by other
classes.
It is a good idea not to modify the value of this attribute without a strong reason.
stat-poll-interval=Seconds
Specifies the interval in seconds between each sending of accumulated QoS statistics by a
node to the QoS manager node. The QoS manager node is an internally selected node that
stores and manages the statistics of a QoS class so that the statistics can be displayed by
an mmqos report list command. The QoS statistics are described in the following
subtopics, which appear later in this topic:
“Analyzing regular output from the mmqos report list command” on page 629
“Analyzing fine-grained output from the mmqos report list command” on page 631
At the beginning of each stat-poll-interval, the QoS component on the node sets
the node's QoS statistics variables to zero. During the stat-poll-interval interval, the
QoS component adds an amount to the appropriate statistics variable whenever a relevant
event occurs. For example, when a process on a node consumes some number of IOPS,
the QoS component adds the IOPS count to the statistics variable for the number of IOPS
consumed by the node. At the end of the interval, the QoS component on the node sends
the accumulated value for QoS statistics to the QoS manager node.
The valid range of values for stat-poll-interval is 0 - 120 seconds in one-second
intervals. The default value is 5 seconds, which means that a node collects QoS statistics
for 5 seconds before it sends the accumulated QoS statistics to the QoS manager node.
The stat-poll-interval must be a multiple of the value of the stat-slot-time
attribute. For more information, see the description of that attribute.
Reducing the frequency of QoS statistics reports:
For any QoS class, the number of statistics reports that the QoS manager node must
process depends on the following factors:
The number of nodes that mount the file system
As the number of nodes increases, the number of reports also increases.
The value of stat-poll-interval
As the value decreases, the frequency of reports increases. In other words, the shorter
that the data collection interval is, the greater are the number of reports that are sent
per second or per minute to the QoS manager node.
list
Lists the current file-system-level QoS configuration settings. Settings that have not been explicitly
changed are listed with their default values.
Device
The device name of the file system.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
filesystem
Commands for controlling QoS processing for the specified file system.
init
Initializes the QoS environment for the specified file system. QoS initializes itself automatically if
you issue a command that sets a configuration value, such as mmqos class create.
Device
The device name of the file system.
enable
Enables QoS processing for the specified file system. QoS begins or resumes regulating I/O
operations and accumulating I/O performance statistics. This command does not modify any
customer-created QoS configuration and does not discard accumulated performance statistics.
Device
The device name of the file system.
disable
Disables QoS processing for the specified file system. QoS stops regulating I/O operations and
stops accumulating performance statistics. This command does not modify any customer-created
QoS configuration and does not discard accumulated performance statistics.
Device
The device name of the file system.
reset
Removes all QoS customer-created QoS configuration information and accumulated performance
statistics from the specified file system. Resets the QoS configuration to the same state that
mmqos filesystem init sets. This command fails if QoS is in the enabled state.
Warning: Issue this command only if you are sure that you no longer need QoS functions in
this file system. If you remove all the QoS configuration information and later find that you
need QoS functions, you must re-create the configuration step-by-step.
Device
The device name of the file system.
refresh
Refreshes the active QoS configuration in the IBM Spectrum Scale daemon from the master QoS
configuration that is stored in the CCR.
Device
The device name of the file system.
list [-Y]
Lists the file systems in the current cluster that are configured for QoS. Notice that this command
does not require a device name as the first option.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
status [-Y]
Displays the current status of QoS for the specified file system.
Device
The device name of the file system.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters
that might be encoded, see the command documentation of mmclidecode. Use the
mmclidecode command to decode the field.
The following example displays the QoS status for file system fs1:
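A representative invocation is shown here; the output itself depends on the current QoS state of the file system:
# mmqos filesystem status fs1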
The command displays QoS status information for the file system both as it exists in the CCR and
as it exists in the file system:
State
Indicates whether QoS is initialized, enabled, disabled, or reset.
Version
Shows how many configuration changes have occurred since the first QoS configuration action
was done for this file system.
Throttling
Indicates whether any throttle rules are created.
Note: In mmqos output the value "--" indicates that the measurement is not applicable in the
current context. For example, in the CCR portion of the status output, it indicates that throttling and
monitoring do not occur in the CCR environment.
Monitoring
Indicates whether performance statistics are being collected.
report
Commands for working with reports.
list
Writes a report to STDOUT.
Device
The device name of the file system.
--pool {all | PoolName}
The storage pools for which statistics are to be listed.
all
Lists statistics for all the pools that are assigned to QoS filesets.
PoolName
Lists statistics for the specified storage pool.
--seconds Seconds
Lists the I/O performance values that were collected during the previous specified number of
seconds. The valid range is 1 - 999 seconds. The default is 60 seconds. The values are listed
over a number of subperiods within the period that you specify. You cannot configure the
number or length of subperiods. Examples of subperiods are every 5 seconds over the last 60
seconds or every 60 seconds over the last 600 seconds.
Exit status
0
Successful completion.
Nonzero
A failure occurred.
Security
You must have root authority to run the mmqos command.
The node on which you enter the command must be able to run remote shell commands on any other
administration node in the cluster. It must be able to do so without the use of a password and without
producing any extraneous messages. For more information, see the topic Requirements for administering
a GPFS file system in the IBM Spectrum Scale: Administration Guide.
QOS config::
This line indicates whether QoS is actively regulating I/O accesses (enabled) or not (disabled).
QOS values::
This line displays information for each pool that is part of a throttle configuration. For each pool, the
information consists of the pool name followed by information about each QoS class that accesses the
pool. For each class, the information includes the IOPS or MB/s of the throttles and in some cases the
throttle scope. In the following example fragment, the QOS values line shows that the system
storage pool is used by a throttle for the other class that is set to 500 IOPS and a throttle for the
maintenance class that is set to 200 MB/s. The line also displays the throttle scope of the
maintenance class, all_local, which is the default throttle scope for the class:
Note: The QOS values line does not display a system class if the IOPS and MB/s of the class are set
to unlimited.
QOS status::
This line indicates whether QoS is regulating the consumption of IOPS ("throttling") and also whether
QoS is recording ("monitoring") the consumption of IOPS of each storage pool.
The following example shows the complete output of an mmqos report list command:
# mmqos report list gpfs1 --seconds 30
QOS config:: enabled -- --ccr:ccr-file-version=3
QOS values:: pool=system,other=100Iops
QOS status:: throttling active, monitoring active
QOS IO stats for pool: system
03:22:25 misc iops=2.80000 MBs=0.06172 maxIOPS=unlimited maxMBS=unlimited ioql=0.00136 qsdl=0.00000 et=5 wideOpen=yes(1:1:1)
03:22:25 other iops=50.00000 MBs=197.60469 maxIOPS=50.0 maxMBS=unlimited ioql=3.04175 qsdl=22.94107 et=5 wideOpen=no(0:0:0)
03:22:30 other iops=49.40000 MBs=197.60000 maxIOPS=50.0 maxMBS=unlimited ioql=2.99324 qsdl=27.92655 et=5 wideOpen=no(0:0:0)
03:22:35 misc iops=2.20000 MBs=0.04219 maxIOPS=unlimited maxMBS=unlimited ioql=0.00058 qsdl=0.00000 et=5 wideOpen=yes(1:1:1)
03:22:35 other iops=43.20000 MBs=170.40469 maxIOPS=50.0 maxMBS=unlimited ioql=1.67852 qsdl=17.50021 et=5 wideOpen=no(0:0:0)
03:22:40 misc iops=0.80000 MBs=0.00859 maxIOPS=unlimited maxMBS=unlimited ioql=0.00031 qsdl=0.00000 et=5 wideOpen=yes(1:1:1)
03:23:10 misc iops=1.20000 MBs=0.81484 maxIOPS=unlimited maxMBS=unlimited ioql=0.00905 qsdl=0.00000 et=5 wideOpen=yes(1:1:1)
03:23:20 misc iops=0.20000 MBs=0.00078 maxIOPS=unlimited maxMBS=unlimited ioql=0.00006 qsdl=0.00000 et=5 wideOpen=yes(1:1:1)
mmqos: Command completed.
The command requests a list of I/O performance values for all QoS pools over the previous 30 seconds.
Because the --sum-classes and --sum-nodes options are not specified, the command also requests I/O
performance for each storage pool separately and summed across all the nodes of the cluster.
Only one pool is part of a throttle configuration, the system storage pool. The output indicates that IOPS
and MB/s occurred only for processes in the other class and in the misc class. The meaning of the
categories in each line is as follows:
First column
The time when the measurement period ends.
Second column
The QoS class for which the measurement is made.
iops=
The performance of the class in I/O operations per second.
MBs=
The performance of the class in megabytes per second.
maxIOPS=
The maxIOPS throttling limitation for the interval.
maxMBS=
The maxMBS throttling limitation for the interval.
ioql=
The average number of I/O requests in the class that are pending for reasons other than being queued
by QoS. This number includes, for example, I/O requests that are waiting for network or storage
device servicing.
qsdl=
The average number of I/O requests in the class that are queued by QoS. When the QoS system
receives an I/O request from the file system, QoS first finds the class to which the I/O request
belongs. It then finds whether the class has any I/O operations available for consumption. If not, then
QoS queues the request until more I/O operations become available for the class. The Qsdl value is
the average number of I/O requests that are held in this queue.
et=
The interval in seconds during which the measurement was made.
wideOpen=
Whether the throttling of the class in that interval is unlimited (1) or not (0).
You can calculate the average service time for an I/O operation as ((Ioql + Qsdl) / Iops). For a
system that is running IO-intensive applications, you can interpret the value (Ioql + Qsdl) as the
number of threads in the I/O-intensive applications. This interpretation assumes that each thread spends
most of its time in waiting for an I/O operation to complete.
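For example, using the 03:22:25 values for the other class in the preceding output (Ioql = 3.04175, Qsdl = 22.94107, Iops = 50), the average service time is approximately (3.04175 + 22.94107) / 50 ≈ 0.52 seconds per I/O operation.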
The following table shows the same output in table format. Only the first 10 columns are shown:
Time Class Node Iops TotSctrs Pool Pid RW Sctrl AvgTime
Node
The IP address of the node on which the process was running.
Iops
The number of IOPS that the process consumed during the sample period.
MB/s
The number of MB/s that the process consumed during the sample period.
TotSctrs
The number of sectors for which data was read or written. By default, one sector is 512 bytes.
Note: The number of TotSctrs from one process or one node might not be equal to the actual
number of bytes that are read or written. For cached I/O in IBM Spectrum Scale, if the modified data
from one application I/O operation is less than fgdbRangeSize (4 KiB by default), IBM Spectrum
Scale writes the number of bytes that are specified by fgdbRangeSize from the page pool to
backend disks. If the application updates only one byte of the file, the value that is displayed by
TotSctrs is eight sectors.
Pool
The storage pool where the I/O operations were done.
Pid
The process ID of the process that initiated the I/O. When the --pid-stats option of the mmqos
report list command is not specified, this value is always 0.
RW
The type of I/O operation, read or write.
SctrI
The number of sectors that were affected by the I/O operation. This value is expressed in terms of one
of the following measures:
A single sector.
1/32 or less of a full block.
Less than a full block.
A full block.
For example, if the disk block size is 512, the command displays one of the following values: 1, 16,
511, or 512.
AvgTm
The mean time that was required for an I/O operation to be completed.
SsTm
The sum of the squares of differences from the mean value that is displayed for AvgTm. You can use
this value to calculate the variance of I/O service times.
MinTm
The minimum time that was required for an I/O operation to be completed.
MaxTm
The maximum time that was required for an I/O operation to be completed.
AvgQd
The mean time for which QoS imposed a delay of the read or write operation.
SsQd
The sum of the squares of differences from the mean value that is displayed for AvgQd.
The last line in the example indicates that the index of the next block to be displayed is 4:
## conti=4
You can avoid redisplaying statistics by having the next mmqos report list command display statistics
beginning at this block index.
Examples
1. This example shows the steps for creating a new QoS configuration:
a. Issue the mmqos class create command to create a user class HR_Dept_1 and to associate it
with an existing fileset testfs1:
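A sketch of the invocation, assuming the file system gpfs1 that is used in the later steps of this example:
# mmqos class create gpfs1 --class HR_Dept_1 --fileset testfs1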
b. Optionally, you can issue the mmqos class list command to verify that user class HR_Dept_1
has been created and is associated with fileset testfs1:
c. Issue the mmqos throttle create command to create a throttle; a sketch of the invocation follows
the list. In this example, the command
1) It creates a new throttle.
2) It associates the throttle with an existing storage pool C and with the new user class
HR_Dept_1.
3) It assigns 100 IOPS to the throttle. When user-launched processes access storage pool C, they
will collectively be able to consume up to 100 I/O operations per second.
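A sketch of the invocation described above, again assuming file system gpfs1:
# mmqos throttle create gpfs1 --pool C --class HR_Dept_1 --maxiops 100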
d. Optionally, issue the mmqos throttle list command to verify that the throttle has been
created:
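For example:
# mmqos throttle list gpfs1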
e. Issue the mmqos filesystem enable command to enable QoS processing for file system
gpfs1:
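For example:
# mmqos filesystem enable gpfs1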
f. Optionally, issue the mmqos filesystem status command to list the QoS status for file system
gpfs1
g. Issue the mmqos config set command to cause QoS to collect statistics for every process that
accesses pool C:
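A sketch of a possible invocation, assuming that the per-process statistics attribute is named pid-stats and takes yes or no values like the other Boolean attributes (the attribute name is inferred from step h and from the --pid-stats report option described earlier):
# mmqos config set gpfs1 pid-stats=yes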
h. Optionally, issue the mmqos config list command to display the current configuration for
collecting QoS statistics for file system gpfs1. Because only pid-stats has been modified, the
values that are shown for the other configuration settings are the default values:
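For example:
# mmqos config list gpfs1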
2. The following example shows how to issue the mmqos report list command to get QoS status
information for a file system and QoS statistics information for a storage pool. In this example, the file
system is gpfs1 and the pool is C:
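A sketch of the invocation for this example:
# mmqos report list gpfs1 --pool C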
3. The following example shows how to limit the I/O activity of the long running IBM Spectrum Scale
maintenance commands that belong to the QoS maintenance system class. The mmqos class
list command shows that only the four QoS system classes exist. No QoS user classes have been
created:
The following mmqos throttle create command creates a throttle for the maintenance class
with an I/O limit of 200 MB/s. The command does not specify the -N or -C option, so QoS sets the
scope of the new throttle to the default throttle scope value for the maintenance class, which is
all_local:
# mmqos throttle create gpfs1 --pool system --class maintenance --maxMBS 200
mmqos: Detected the -N and -C options were not optionally supplied with the Maintenance
class.
mmqos: Processing the new QoS configuration...
mmqos: File System Manager QoS service validating the configuration (host is
gpfsnode1.gpfs.net)...
The mmqos throttle list command shows that the throttle has been created:
This example assumes that QoS is already enabled in the file system, so it is not necessary to issue the
mmqos filesystem enable command to start the new throttle working. The last step is to change
the I/O limit of the throttle to 400 MB/s:
# mmqos throttle update gpfs1 --pool system --class maintenance -C all_local --maxMBS 400
mmqos: Processing the new QoS configuration...
mmqos: File System Manager QoS service validating the configuration (host is
gpfsnode1.gpfs.net)...
The mmqos throttle list command shows that the maximum MB/s setting has changed to 400:
4. The following example shows how to create a QoS user class and modify the throttle scope and the I/O
limits:
a. The following mmqos class list command shows that no QoS user classes have been created
yet:
b. The following mmqos class create command creates the QoS user class HR_Dept_2, which
includes filesets hrfset1 and hrfset2:
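A sketch of the invocation for this step:
# mmqos class create gpfs1 --class HR_Dept_2 --fileset hrfset1,hrfset2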
The mmqos class list command shows that the user class HR_Dept_2 is created:
c. A throttle is created for the new QoS user class. The --force option is used to set the I/O limit
below 100 IOPS to 90 IOPS:
# mmqos throttle create gpfs1 --pool system --class HR_Dept_2 --maxiops 90 --force
mmqos: Processing the new QoS configuration...
mmqos: File System Manager QoS service validating the configuration (host is
gpfsnode1.gpfs.net)...
The mmqos throttle list command shows that the throttle is created:
d. The mmqos throttle update command is issued to change the maximum IOPS setting of the
throttle to 300 IOPS:
# mmqos throttle update gpfs1 --pool system --class HR_Dept_2 --maxiops 300
mmqos: Processing the new QoS configuration...
mmqos: File System Manager QoS service validating the configuration (host is
gpfsnode1.gpfs.net)...
The mmqos throttle list command shows that the maximum IOPS for the throttle is now 300
IOPS:
e. The mmqos throttle updatekey command is issued to narrow the scope of the throttle to a
single node gpfsnode2.gpfs.net:
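Based on the updatekey options described earlier, which set the new scope with -new-N or -new-C, a possible sketch of the invocation is:
# mmqos throttle updatekey gpfs1 --pool system --class HR_Dept_2 -new-N gpfsnode2.gpfs.net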
The mmqos throttle list command shows that the scope of the throttle is now set to node
gpfsnode2.gpfs.net:
f. The mmqos throttle update command is issued to change the maximum MB/s I/O limit to 400
MB/s. Notice that the new throttle scope -N gpfsnode2.gpfs.net must be specified explicitly to
enable QoS to identify the throttle:
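A sketch of the invocation for this step, specifying the new throttle scope explicitly:
# mmqos throttle update gpfs1 --pool system --class HR_Dept_2 -N gpfsnode2.gpfs.net --maxmbs 400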
The mmqos throttle list command shows that the throttle now has a maximum IOPS setting
of 300 IOPS and a maximum MB/s setting of 400 MB/s:
g. Finally, the mmqos throttle delete command is issued to delete the throttle:
The mmqos throttle list command no longer lists the QoS user throttle, because it has been
deleted:
5. The following example shows how to create a throttle for the mdio-all-sharing class to configure
class sharing within the system pool and QoS user classes HR_Dept_1 and HR_Dept_2. Initially the
mmqos class list command shows only the four system classes and the two user classes, each of
which includes a fileset:
The mmqos throttle list command lists a throttle for the maintenance class and a throttle for
each of the two user classes. The pool for all three throttles is the system storage pool:
-------------------------------------------------------------------------------------
system maintenance - all_local unlimited 400
system HR_Dept_1 - - unlimited 200
system HR_Dept_2 - - unlimited 300
The following mmqos throttle create command creates a throttle for the mdio-all-sharing
system class with an I/O limit of 500 MB/s:
# mmqos throttle create gpfs1 --pool system --class mdio-all-sharing --maxmbs 500
mmqos: Processing the new QoS configuration...
mmqos: File System Manager QoS service validating the configuration (host is
gpfsnode1.gpfs.net)...
The mmqos throttle list command shows that the throttle for the mdio-all-sharing class is
created:
6. If one of the system objects that can be elements of a QoS class or a QoS throttle, such as a fileset, a
pool, a node, a node class, or a cluster, is deleted before the class or throttle is deleted, the
configuration of the class or throttle contains references to non-existent objects. These references can
cause the QoS configuration validation phase to fail and display error messages when you try to revise
or delete the class or throttle.
To revise or delete the class or throttle, you can use the special option QOSSkipValidate=1 to
override the QoS configuration validation phase. This option must be used only in this situation. Using
the option in any other situation can result in an invalid configuration.
The following two examples illustrate how to use this option. In the first example, the fileset hrfset1,
which is included in QoS user class HR_Dept_2, was accidentally deleted from the system with the
command mmdelfileset. When you try to remove the fileset from the class, the command fails with
a configuration validation error. To remove the fileset from the class, issue the following command:
# QOSSkipValidate=1 mmqos class update gpfs1 --class HR_Dept_2 --remove --fileset hrfset1
mmqos: Processing the new QoS configuration...
mmqos: The configuration data was written to the CCR successfully
mmqos: Calling the daemon to update the configuration within the cluster...
In the second example, the node gpfsnode1.gpfs.net, which defines the throttle scope of QoS
user class HR_Dept_2, was removed from the cluster. When you try to delete the throttle, the
command fails with a configuration validation error. To delete the throttle, issue the following
command:
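A possible sketch of the invocation, modeled on the preceding class update example and assuming that the throttle is identified by its pool, class, and throttle scope:
# QOSSkipValidate=1 mmqos throttle delete gpfs1 --pool system --class HR_Dept_2 -N gpfsnode1.gpfs.net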
7. The following example shows how to issue the mmqos filesystem reset command to reset all QoS
configuration information for a file system to the default values. Follow these steps:
a. Issue the mmqos filesystem reset command to reset all the configuration information for file
system gpfs1:
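For example:
# mmqos filesystem reset gpfs1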
mmqos: Are you certain you want to remove all specific QOS configuration for this device
(gpfs1)?
This action will reset all QoS configuration to default values.
This action cannot be undone.
Permanently remove all specific QoS configuration data for device gpfs1 (yes/no)?
yes
QOS configuration has been installed and broadcast to all nodes
Note:
• Before you do this action, save any QoS configuration information that you might want to use
again either with this file system or with another file system. Issue mmqos commands to display
the current configuration information. Then copy the information to a secure file.
• Instead of resetting the QoS configuration information to default values, consider issuing the
mmqos filesystem disable command to disable all QoS activity for the file system. When
you are ready, issue the mmqos filesystem enable command to enable QoS activity for the
file system.
b. Optionally, issue the mmqos filesystem status command to verify that the QoS configuration
information has been reset:
Class Objects:
System Classes: 4
User Classes: 0
Filesets: 0
Throttle Objects: 0
mmquotaoff command
Deactivates quota limit checking.
Synopsis
mmquotaoff [-u] [-g] [-j] [-v] {Device [Device ...] | -a}
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmquotaoff command disables quota limit checking by GPFS.
If none of -u, -j, or -g is specified, the mmquotaoff command deactivates quota limit checking for
users, groups, and filesets.
If the -a option is not specified, Device must be the last parameter entered.
Parameters
Device [Device ...]
The device name of the file system to have quota limit checking deactivated.
If more than one file system is listed, the names must be delimited by a space. File system names
need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
Options
-a
Deactivates quota limit checking for all GPFS file systems in the cluster. When used in combination
with the -g option, only group quota limit checking is deactivated. When used in combination with the
-u or -j options, only user or fileset quota limit checking, respectively, is deactivated.
-g
Specifies that only group quota limit checking is to be deactivated.
-j
Specifies that only quota checking for filesets is to be deactivated.
-u
Specifies that only user quota limit checking is to be deactivated.
-v
Prints a message for each file system in which quotas are deactivated.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmquotaoff command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
GPFS must be running on the node from which the mmquotaoff command is issued.
Examples
1. To deactivate user quota limit checking on file system fs0, issue this command:
mmquotaoff -u fs0
To confirm the change, issue this command:
mmlsfs fs0 -Q
2. To deactivate group quota limit checking on all file systems, issue this command:
mmquotaoff -g -a
To confirm the change, individually for each file system, issue this command:
mmlsfs fs2 -Q
3. To deactivate all quota limit checking on file system fs0, issue this command:
mmquotaoff fs0
To confirm the change, issue this command:
mmlsfs fs0 -Q
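As a further illustration of the options that are described above, the following command deactivates only
fileset quota checking on a hypothetical file system fs1 and prints a confirmation message:
mmquotaoff -j -v fs1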
See also
• “mmcheckquota command” on page 218
• “mmdefedquota command” on page 342
• “mmdefquotaoff command” on page 346
• “mmdefquotaon command” on page 349
• “mmedquota command” on page 398
• “mmlsquota command” on page 527
• “mmquotaon command” on page 644
• “mmrepquota command” on page 656
Location
/usr/lpp/mmfs/bin
mmquotaon command
Activates quota limit checking.
Synopsis
mmquotaon [-u] [-g] [-j] [-v] {Device [Device...] | -a}
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmquotaon command enables quota limit checking by GPFS.
If none of -u, -j, or -g is specified, the mmquotaon command activates quota limit checking for users,
groups, and filesets.
If the -a option is not used, Device must be the last parameter specified.
After quota limit checking has been activated by issuing the mmquotaon command, issue the
mmcheckquota command to count inode and space usage.
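For example, for a hypothetical file system fs2, this sequence might look like the following:
mmquotaon fs2
mmcheckquota fs2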
Parameters
Device [Device...]
The device name of the file system to have quota limit checking activated.
If more than one file system is listed, the names must be delimited by a space. File system names
need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
Options
-a
Activates quota limit checking for all of the GPFS file systems in the cluster. When used in combination
with the -g option, only group quota limit checking is activated. When used in combination with the
-u or -j option, only user or fileset quota limit checking, respectively, is activated.
-g
Specifies that only group quota limit checking is to be activated.
-j
Specifies that only fileset quota checking is to be activated.
-u
Specifies that only user quota limit checking is to be activated.
-v
Prints a message for each file system in which quota limit checking is activated.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmquotaon command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
GPFS must be running on the node from which the mmquotaon command is issued.
Examples
1. To activate user quotas on file system fs0, issue this command:
mmquotaon -u fs0
To confirm the change, issue this command:
mmlsfs fs0 -Q
2. To activate group quota limit checking on all file systems, issue this command:
mmquotaon -g -a
To confirm the change, individually for each file system, issue this command:
mmlsfs fs1 -Q
3. To activate user, group, and fileset quota limit checking on file system fs2, issue this command:
mmquotaon fs2
To confirm the change, issue this command:
mmlsfs fs2 -Q
See also
• “mmcheckquota command” on page 218
• “mmdefedquota command” on page 342
• “mmdefquotaoff command” on page 346
• “mmdefquotaon command” on page 349
• “mmedquota command” on page 398
• “mmlsquota command” on page 527
• “mmquotaoff command” on page 641
• “mmrepquota command” on page 656
Location
/usr/lpp/mmfs/bin
mmreclaimspace command
Reclaims free space on a GPFS file system.
Synopsis
mmreclaimspace Device [-Y] [-P PoolName] [--qos QosClass]
{--reclaim-threshold Percentage | --emergency-reclaim}
Availability
Available on all IBM Spectrum Scale editions.
Space reclamation
For more information about space reclamation in IBM Spectrum Scale and a list of supported operating
systems and verified storage systems, see the topic IBM Spectrum Scale with data reduction storage
devices in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
Description
Use the mmreclaimspace command to reclaim unused device sectors in a GPFS file system. This
command is useful for solid-state drives (SSDs) and thin provisioned storage. Reclaiming means informing
the device that data that is saved on these sectors is no longer used by the file system. These sectors can
be reused by the device for other purposes, such as improving internal storage management in SSDs or
reusing the space for this or another volume.
Note: The mmreclaimspace command is effective only if the file system includes at least one SSD or thin
provisioned disk. For more information, see the topic IBM Spectrum Scale with data reduction storage
devices in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
For each disk in the GPFS file system, the mmreclaimspace command displays the following
information, by failure group and by storage pool:
• The size of the disk.
• The failure group of the disk.
• Whether the disk is used to hold data, metadata, or both.
• Available space in full blocks.
• Available space in subblocks.
• Reclaimed space in subblocks. This value indicates the number of subblocks that are discarded on the
device by this instance of the command.
Displayed values are rounded down to a multiple of 1024 bytes. If the subblock size that is used by the
file system is not a multiple of 1024 bytes, then the displayed values can be lower than the actual values.
This situation can result in the display of a total value that exceeds the sum of the rounded values that are
displayed for individual disks. The individual values are accurate if the subblock size is a multiple of 1024
bytes.
The mmreclaimspace command can be run against a mounted or unmounted file system.
Note:
• This command is I/O-intensive and should be run only when the system load is light.
• An asterisk at the end of a line indicates that the disk is in a state where it is not available for new block
allocation.
Parameters
Device
The device name of the file system to be queried for available file space. File system names need not
be fully qualified. fs0 is as acceptable as /dev/fs0.
This parameter must be the first parameter in the command.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
-P PoolName
Reclaim space only for disks that belong to the specified storage pool.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
--reclaim-threshold Percentage
Specifies the threshold of reclaimable space, as a percentage value, beyond which a block allocation
segment will be selected to be reclaimed. A percentage of 0 means that all segments that have
reclaimable space are selected for reclaiming; a percentage of 90 means that only segments that have
reclaimable space over 90% are selected for reclaiming.
Because a block allocation segment is exclusively locked while space on it is being reclaimed, it is
good practice to choose a larger value, such as 90, if the application is latency-sensitive, and to
choose a smaller value, such as 0, to reclaim all reclaimable space when the system is almost idle.
The block allocation segment is a logical concept that is used to manage the disk space in a GPFS file
system. Each segment contains free space and reclaimable space information.
--emergency-reclaim
Performs emergency recovery and releases pre-reserved space for the next GPFS file system
recovery. If the back-end storage is thin provisioned, it is possible that writing to a file in a GPFS file
system might fail because the thin-provisioned storage is out of physical space. That situation can
result in the file system being unmounted and not being recovered. To bring the file system back into
operation, an emergency recovery procedure is necessary. Issuing the mmreclaimspace command
with the --emergency-reclaim option is part of the recovery process.
Note: This option works only if the thin-provisioned storage is read-writable and the file system is
mounted in restrict mode.
For more information, see the topic IBM Spectrum Scale with data reduction storage devices in the IBM
Spectrum Scale: Concepts, Planning, and Installation Guide.
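As an illustrative sketch of such an emergency invocation, assuming a hypothetical file system fs1 that is
mounted in restricted mode:
mmreclaimspace fs1 --emergency-reclaim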
Exit status
0
Successful completion.
Nonzero
A failure has occurred.
Security
You must have root authority to run the mmreclaimspace command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To reclaim space on all of the disks that support space reclamation in the foofs file system, issue the
following command:
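An illustrative form of that command, based on the synopsis above (the threshold value of 0, which selects
all segments that have reclaimable space, is an assumption):
mmreclaimspace foofs --reclaim-threshold 0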
Disks in storage pool: data (Maximum disk size allowed is 8.93 TB)
gpfs2nsd 1169920000 1 No Yes 1161445376 ( 99%) 20320 ( 0%) 1161465696 (99%)
gpfs3nsd 1169920000 2 No Yes 1161445376 ( 99%) 20320 ( 0%) 1161465696 (99%)
------------ -------------------- ------------- ----------------
(pool total) 2339840000 2322890752 ( 99%) 40640 ( 0%) 2322931392 (99%)
See also
• “mmcrnsd command” on page 332
• “mmcrfs command” on page 315
• “mmchdisk command” on page 210
• “mmadddisk command” on page 28
• “mmrpldisk command” on page 679
• “mmlsfs command” on page 498
• IBM Spectrum Scale with data reduction storage devices in the IBM Spectrum Scale: Concepts, Planning,
and Installation Guide.
Location
/usr/lpp/mmfs/bin
mmremotecluster command
Manages information about remote GPFS clusters.
Synopsis
mmremotecluster add RemoteClusterName [-n ContactNodes] [-k KeyFile]
or
or
or
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmremotecluster command is used to make remote GPFS clusters known to the local cluster, and
to maintain the attributes associated with those remote clusters. The keyword appearing after
mmremotecluster determines which action is performed:
add
Adds a remote GPFS cluster to the set of remote clusters known to the local cluster.
delete
Deletes the information for a remote GPFS cluster.
show
Displays information about a remote GPFS cluster.
update
Updates the attributes of a remote GPFS cluster.
To be able to mount file systems that belong to some other GPFS cluster, you must first make the nodes in
this cluster aware of the GPFS cluster that owns those file systems. This is accomplished with the
mmremotecluster add command. The information that the command requires must be provided to you
by the administrator of the remote GPFS cluster. You will need this information:
• The name of the remote cluster.
• The names or IP addresses of a few nodes that belong to the remote GPFS cluster.
• The public key file generated by the administrator of the remote cluster by issuing the mmauth genkey
command for the remote cluster.
Since each cluster is managed independently, there is no automatic coordination and propagation of
changes between clusters like there is between the nodes within a cluster. This means that once a remote
cluster is defined with the mmremotecluster command, the information about that cluster is
automatically propagated across all nodes that belong to this cluster. But if the administrator of the
remote cluster renames it, deletes some or all of the contact nodes, or changes the public key file,
the information in this cluster becomes obsolete. It is the responsibility of the administrator of the
remote GPFS cluster to notify you of such changes so that you can update your information using the
appropriate options of the mmremotecluster update command.
Parameters
RemoteClusterName
Specifies the cluster name associated with the remote cluster that owns the remote GPFS file system.
The value all indicates all remote clusters defined to this cluster, when using the
mmremotecluster delete or mmremotecluster show commands.
-C NewClusterName
Specifies the new cluster name to be associated with the remote cluster.
-k KeyFile
Specifies the name of the public key file provided to you by the administrator of the remote GPFS
cluster.
-n ContactNodes
A comma-separated list of nodes that belong to the remote GPFS cluster, in this format:
[tcpPort=NNNN,]node1[,node2 ...]
where:
tcpPort=NNNN
Specifies the TCP port number to be used by the local GPFS daemon when contacting the remote
cluster. If not specified, GPFS will use the default TCP port number 1191.
node1[,node2...]
Specifies a list of nodes that belong to the remote cluster. The nodes can be identified through their
host names or IP addresses.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Exit status
0
Successful completion. After successful completion of the mmremotecluster command, the new
configuration information is propagated to all nodes in the cluster.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmremotecluster command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. This command adds remote cluster k164.kgn.ibm.com to the set of remote clusters known to the
local cluster, specifying k164n02 and k164n03 as remote contact nodes. File k164.id_rsa.pub is
the name of the public key file provided to you by the administrator of the remote cluster.
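Based on the synopsis above, an illustrative form of that command is:
mmremotecluster add k164.kgn.ibm.com -n k164n02,k164n03 -k k164.id_rsa.pub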
For more information on the SHA digest, see the IBM Spectrum Scale: Problem Determination Guide
and search on SHA digest.
3. This command updates information for the remote cluster k164.kgn.ibm.com, changing the remote
contact nodes to k164n02 and k164n01. The TCP port to be used when contacting cluster
k164.kgn.ibm.com is defined to be 6667.
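An illustrative form of that command, using the -n contact node format that is described above, is:
mmremotecluster update k164.kgn.ibm.com -n tcpPort=6667,k164n02,k164n01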
The mmremotecluster show command can then be used to see the changes.
For more information on the SHA digest, see the IBM Spectrum Scale: Problem Determination Guide
and search on SHA digest.
4. This command deletes information for remote cluster k164.kgn.ibm.com from the local cluster.
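An illustrative form of that command is:
mmremotecluster delete k164.kgn.ibm.com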
See also
• “mmauth command” on page 96
• “mmremotefs command” on page 653
See also the topic about accessing GPFS file systems from other GPFS clusters in the IBM Spectrum Scale:
Administration Guide.
Location
/usr/lpp/mmfs/bin
mmremotefs command
Manages information needed for mounting remote GPFS file systems.
Synopsis
mmremotefs add Device -f RemoteDevice -C RemoteClusterName
[-T MountPoint] [-t DriveLetter]
[-A {yes | no | automount}] [-o MountOptions] [--mount-priority Priority]
or
or
or
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmremotefs command is used to make GPFS file systems that belong to other GPFS clusters known
to the nodes in this cluster, and to maintain the attributes associated with these file systems. The keyword
appearing after mmremotefs determines which action is performed:
add
Define a new remote GPFS file system.
delete
Delete the information for a remote GPFS file system.
show
Display the information associated with a remote GPFS file system.
update
Update the information associated with a remote GPFS file system.
Use the mmremotefs command to make the nodes in this cluster aware of file systems that belong to
other GPFS clusters. The cluster that owns the given file system must have already been defined with the
mmremotecluster command. The mmremotefs command is used to assign a local name under which
the remote file system will be known in this cluster, the mount point where the file system is to be
mounted in this cluster, and any local mount options that you may want.
Once a remote file system has been successfully defined and a local device name associated with it, you
can issue normal commands using that local name, the same way you would issue them for file systems
that are owned by this cluster.
When you run the mmremotefs command with the delete or update option, the file system must be
unmounted on the local cluster. However, it can be mounted elsewhere.
Parameters
Device
Specifies the name by which the remote GPFS file system will be known in the cluster.
-C RemoteClusterName
Specifies the name of the GPFS cluster that owns the remote GPFS file system.
-f RemoteDevice
Specifies the actual name of the remote GPFS file system. This is the device name of the file system as
known to the remote cluster that owns the file system.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
Options
-A {yes | no | automount}
Indicates when the file system is to be mounted:
yes
When the GPFS daemon starts.
no
Manual mount. This is the default.
automount
When the file system is first accessed.
-o MountOptions
Specifies the mount options to pass to the mount command when mounting the file system. For a
detailed description of the available mount options, see GPFS-specific mount options in IBM Spectrum
Scale: Administration Guide.
-T MountPoint
The local mount point directory of the remote GPFS file system. If it is not specified, the mount point
will be set to DefaultMountDir/Device. The default value for DefaultMountDir is /gpfs, but it can be
changed with the mmchconfig command.
-t DriveLetter
Specifies the drive letter to use when the file system is mounted on Windows.
--mount-priority Priority
Controls the order in which the individual file systems are mounted at daemon startup or when one of
the all keywords is specified on the mmmount command.
File systems with higher Priority numbers are mounted after file systems with lower numbers. File
systems that do not have mount priorities are mounted last. A value of zero indicates no priority.
--force
The --force flag can be used only with the delete option. It overrides the error that can occur when
you try to delete a remote mount whose remote cluster has already been removed. If the original delete
attempt fails with an error stating that the command cannot check whether the mount is in use, use the
--force flag to override the check and allow the deletion to complete.
Exit status
0
Successful completion. After successful completion of the mmremotefs command, the new
configuration information is propagated to all nodes in the cluster.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmremotefs command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see the topic Requirements for administering a GPFS file system in the IBM Spectrum
Scale: Administration Guide.
Examples
This command adds remote file system gpfsn, owned by remote cluster k164.kgn.ibm.com, to the
local cluster, assigning rgpfsn as the local name for the file system, and /gpfs/rgpfsn as the local
mount point.
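Based on the synopsis above, an illustrative form of that command is:
mmremotefs add rgpfsn -f gpfsn -C k164.kgn.ibm.com -T /gpfs/rgpfsn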
Local Name  Remote Name  Cluster name                Mount Point  Mount Options  Automount  Drive
rgpfs1      gpfs1        gpfs-n60-win.fvtdomain.net  /rgpfs1      rw             no         K
See also
• “mmauth command” on page 96
• “mmremotecluster command” on page 650
See also the topic about accessing GPFS file systems from other GPFS clusters in the IBM Spectrum Scale:
Administration Guide.
Location
/usr/lpp/mmfs/bin
mmrepquota command
Displays file system user, group, and fileset quotas.
Synopsis
mmrepquota [-u] [-g] [-e] [-n] [-v]
[--block-size {BlockSize | auto} | -Y] {-a | Device:Fileset ...}
or
or
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmrepquota command reports file system usage and quota information for a user, group, or fileset.
This command cannot be run from a Windows node.
Important: Quota limits are not enforced for root users (by default). For information on managing quotas,
see Managing GPFS quotas in the IBM Spectrum Scale: Administration Guide.
If none of -g, -j, or -u is specified, user, group, and fileset quotas are listed.
If -a is not specified, Device must be the last parameter entered.
For each file system in the cluster, the mmrepquota command displays:
1. Block limits (displayed in number of data blocks in 1 KB units or a unit that is defined by the --block-
size parameter):
• Quota type (USR, GRP, or FILESET)
• Current usage (the amount of disk space used by this user, group, or fileset, in 1 KB units or a unit
defined by the --block-size parameter)
• Soft limit (the amount of disk space that this user, group, or fileset is allowed to use during normal
operation, in 1 KB units or a unit defined by the --block-size parameter)
• Hard limit (the total amount of disk space that this user, group, or fileset is allowed to use during the
grace period, in 1 KB units or a unit defined by the --block-size parameter)
• Space in doubt
• Grace period
2. File limits:
• Current number of files
• Soft limit
• Hard limit
• Files in doubt
• Grace period
Note: In cases where small files do not have an extra block that is allocated for them, quota usage
might show less space usage than expected.
3. Entry Type
default on
Default quotas are enabled for this file system.
default off
Default quotas are not enabled for this file system.
e
Explicit quota limits are set by using the mmedquota command.
d_fsys
The quota limits are the default file system values set by using the mmdefedquota command.
d_fset
The quota limits are the default fileset-level values set by using the mmdefedquota command.
i
Default quotas were not enabled when this initial entry was established. Initial quota limits have a
value of zero indicating no limit.
Because the sum of the in-doubt value and the current usage must not exceed the hard limit, the actual
block space and number of files available to the user, group, or fileset might be constrained by the in-
doubt value. If the in-doubt values approach a significant percentage of the quota, run the
mmcheckquota command to account for the lost space and files.
For more information, see Listing quotas in the IBM Spectrum Scale: Administration Guide.
Parameters
Device
The device name of the file system to be listed.
If more than one file system is listed, the names must be delimited by a space. File system names
need not be fully qualified. fs0 is as acceptable as /dev/fs0.
Fileset
Specifies an optional fileset to be listed.
-a
Lists quotas for all file systems in the cluster. A header line is printed automatically with this option.
-e
Specifies that the mmrepquota command is to collect updated quota usage data from all nodes
before displaying results. If this option is not specified, there is the potential to display negative usage
values because the quota server might process a combination of up-to-date and back-level
information.
-g
Lists only group quotas.
-j
Lists only fileset quotas.
-n
Displays a numerical user ID.
-q
Shows whether file system quota enforcement and default quota enforcement are active.
-t
Lists global user, group, and fileset block and inode grace times.
-u
Lists only user quotas.
-v
Prints a header line for the file systems that are being queried and adds an entryType description for
each quota entry.
--block-size {BlockSize | auto}
Specifies the unit in which the number of blocks is displayed. The value must be of the form [n]K, [n]M,
[n]G or [n]T, where n is an optional integer in the range 1 - 1023. The default is 1 K. If auto is
specified, the number of blocks is automatically scaled to an easy-to-read value.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmrepquota command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see the topic Requirements for administering a GPFS file system in the IBM Spectrum
Scale: Administration Guide.
GPFS must be running on the node from which the mmrepquota command is issued.
Examples
1. To report on user quotas for file system fs2 and display a header line, issue this command:
mmrepquota -u -v fs2
2. To determine whether quota enforcement and default quota enforcement are active for file system
fs2, issue this command:
mmrepquota -q fs2
3. To report on user quotas for file system gpfs2, issue this command:
mmrepquota -u gpfs2
4. To report on user quotas for file system gpfs2 in fileset fset4, issue this command:
mmrepquota -u gpfs2:fset4
5. To list global user, group, and fileset block and inode grace times, issue this command:
mmrepquota -u -t gpfs_s
User: block default grace time 7days, inode default grace time 7days
Group: block default grace time 7days, inode default grace time 7days
Fileset: block default grace time 7days, inode default grace time 7days
Block Limits | File Limits
Name type KB quota limit in_doubt grace | files quota limit in_doubt grace
root USR 0 0 0 0 none | 50 0 0 0 none
ftp USR 0 0 0 0 none | 50 0 0 0 none
Note: In any mmrepquota listing, when the type is FILESET, the Name column heading is meant to
indicate the fileset name (root, fset0, and fset1 in this example), and the value for the fileset
column heading (root, in this example) can be ignored.
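As a further illustration of the --block-size option that is described above, the following command
reports user quotas for file system fs2 with block counts scaled to an easy-to-read unit:
mmrepquota -u --block-size auto fs2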
See also
• “mmcheckquota command” on page 218
• “mmdefedquota command” on page 342
• “mmdefquotaoff command” on page 346
• “mmdefquotaon command” on page 349
• “mmedquota command” on page 398
• “mmlsquota command” on page 527
• “mmquotaoff command” on page 641
• “mmquotaon command” on page 644
Location
/usr/lpp/mmfs/bin
mmrestoreconfig command
Restores file system configuration information.
Synopsis
mmrestoreconfig Device -i InputFile [-I {yes | test}]
[-Q {yes | no | only}] [-W NewDeviceName]
or
or
or
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
The mmrestoreconfig command allows you to query, restore, or both query and restore the file system
configuration information that is contained in the output file of the mmbackupconfig command.
In the query phase, the mmrestoreconfig command uses the output file generated by the
mmbackupconfig command as an input parameter, and then creates a configuration file. Users can then
edit the configuration file to fit their current file system configuration. You can use the definitions in the
configuration file to create the appropriate network shared disks (NSDs) and file systems required for the
restore.
In the image restore phase, the mmrestoreconfig command uses the input file (the output of the
mmbackupconfig command) to restore the backed-up file system configuration in the newly created file
system. The newly created file system must not be mounted before the mmimgrestore command runs;
therefore, the quota settings are turned off for the image restore. They can be reactivated after the
mmimgrestore command completes by using the -Q only flag of the mmrestoreconfig command.
This command cannot be run from a Windows node.
Parameters
Device
Specifies the name of the file system to be restored.
-i InputFile
Specifies the file generated by the mmbackupconfig command. The input file contains the file
system configuration information.
-I {yes | test}
Specifies the action to be taken during the restore phase:
yes
Test and proceed on the restore process. This is the default action.
test
Test all the configuration settings before the actual restore is performed.
Use -I continue to restart mmrestoreconfig from the last known successful configuration
restore.
-F QueryResultFile
Specifies the pathname of the configuration query result file generated by mmrestoreconfig. The
configuration query result file is a report file that you can edit and use as a guide to mmcrnsd or
mmcrfs.
--image-restore
Restores the configuration data in the proper format for Scale Out Backup and Restore (SOBAR).
-Q {yes | no | only}
Specifies whether quota settings are enforced during the file system restore. If set to no, the quota
settings are ignored.
To restore quota settings after the mmimgrestore command has successfully run, the -Q only
option must be specified.
-W NewDeviceName
Restores the backed-up file system information to the specified new device name.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmrestoreconfig command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. Run mmrestoreconfig -F QueryResultFile to specify the pathname of the configuration query result
file to be generated.
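An illustrative invocation, with hypothetical file names, and assuming that the -F form accepts the same
Device and -i arguments as the restore form:
mmrestoreconfig fs1 -i /tmp/fs1.backup.config -F /tmp/fs1.query.result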
------------------------------------------------------------
Configuration test restore of fs1 begins at Wed Mar 14 16:00:16 EDT 2012.
------------------------------------------------------------
mmrestoreconfig: Checking disk settings for fs1:
mmrestoreconfig: Checking the number of storage pools defined for fs1.
The restored filesystem currently has 1 pools defined.
mmrestoreconfig: Checking storage pool names defined for fs1.
Storage pool 'system' defined.
mmrestoreconfig: Checking storage pool size for 'system'.
mmrestoreconfig: Storage pool size 127306752 was defined for 'system'.
4. To restore the fs9 file system configuration data prior to the mmimgrestore command, issue:
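An illustrative form of that command, with a hypothetical input file name:
mmrestoreconfig fs9 -i /tmp/fs9.backup.config --image-restore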
5. To restore the quota settings for file system fs9, after the mmimgrestore command, issue:
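An illustrative form of that command, with a hypothetical input file name:
mmrestoreconfig fs9 -i /tmp/fs9.backup.config -Q only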
--------------------------------------------------------
Configuration restore of fs9 begins at Thu Nov 29 17:13:51 EST 2012.
--------------------------------------------------------
See also
• “mmbackupconfig command” on page 110
• “mmimgbackup command” on page 450
• “mmimgrestore command” on page 454
Location
/usr/lpp/mmfs/bin
mmrestorefs command
Restores a file system or an independent fileset from a snapshot.
Synopsis
mmrestorefs Device SnapshotName [-j FilesetName]
[-N {Node[,Node...] | NodeFile | NodeClass}]
[--log-quiet] [--preserve-encryption-attributes]
[--suppress-external-attributes] [--threads MaxNumThreads]
[--work-unit FilesPerThread]
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
Use the mmrestorefs command to restore user data and attribute files to a file system or an
independent fileset using those of the specified snapshot. Data will be restored by mmrestorefs without
regard for file system or fileset quotas unless the enforceFilesetQuotaOnRoot configuration attribute
of the mmchconfig command is set to yes. The mmrestorefs command does not restore the file
system and fileset quota configuration information.
In versions before IBM Spectrum Scale 4.1.1, ensure that the file system is unmounted before you run the
mmrestorefs command. When restoring from an independent fileset snapshot (using the -j option), link
the fileset from nodes in the cluster that are to participate in the restore. It is preferable to run the
mmrestorefs command when there are no user operations (either from commands, applications, or
services) in progress on the file system or fileset. If there are user operations in progress on the file
system or fileset while mmrestorefs is running, the restore might fail. For these failures, stop the user
operations and run the mmrestorefs command again to complete the restore. For better performance,
run the mmrestorefs command when the system is idle. While the restore is in progress, do not unlink
the fileset, unmount the file system, or delete the fileset, fileset snapshot, or file system.
The mmrestorefs command cannot restore a fileset that was deleted after a global snapshot was
created. In addition, the filesets in a global snapshot that are in deleted or unlinked state cannot be
restored.
Snapshots are not affected by the mmrestorefs command. When a failure occurs during a restore, try
repeating the mmrestorefs command except when there are ENOSPC or quota exceeded errors. In these
cases, fix the errors then try the mmrestorefs command again.
For information on how GPFS policies and snapshots interact, see the IBM Spectrum Scale: Administration
Guide.
Because snapshots are not copies of the entire file system, they should not be used as protection against
media failures. For protection against media failures, see the IBM Spectrum Scale: Concepts, Planning,
and Installation Guide and search on "recoverability considerations".
The mmrestorefs command can cause a compressed file in the active file system to become
decompressed if it is overwritten by the restore process. To recompress the file, run the
mmrestripefile command with the -z option.
CAUTION:
• Do not run file compression or decompression while an mmrestorefs command is running. This
caution applies to compression or decompression with the mmchattr command or with the
mmapplypolicy command.
• Do not run the mmrestripefs or mmrestripefile command while an mmrestorefs
command is running.
Parameters
Device
The device name of the file system that contains the snapshot to use for the restore. File system
names need not be fully-qualified. fs0 is just as acceptable as /dev/fs0.
This must be the first parameter.
SnapshotName
Specifies the name of the snapshot that will be used for the restore.
-j FilesetName
Specifies the name of a fileset covered by this snapshot.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes that are to participate in the restore. The default is all or the current value of the
defaultHelperNodes parameter of the mmchconfig command.
Starting with IBM Spectrum Scale 4.1.1, -N can be used for both fileset and global snapshot restores.
(In GPFS 4.1, -N can be used for fileset snapshot restore only. In GPFS 3.5 and earlier, there is no -N
parameter.)
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
--log-quiet
Suppresses detailed thread log output.
--preserve-encryption-attributes
Preserves the encryption extended attributes. Files that were removed after the snapshot was taken
are restored with the same encryption attributes (including FEK) of the file in the snapshot. If this
option is not used, the file is recreated with the encryption policy in place at the time the file is
restored.
--suppress-external-attributes
Specifies that external attributes will not be restored.
--threads MaxNumThreads
Specifies the maximum number of concurrent restore operations. The default is 24.
--work-unit FilesPerThread
Specifies the number of files each thread will process at a time. The default is 100.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmrestorefs command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
Suppose that you have the following directory structure:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
The directory userA and its files are then deleted from the active file system, while the snapshot still
contains them:
/fs1/file1
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
The directory userB is then created using the inode originally assigned to userA, and another snapshot is
taken:
/fs1/file1
/fs1/userB/file2b
/fs1/userB/file3b
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
/fs1/.snapshots/snap2/file1
/fs1/.snapshots/snap2/userB/file2b
/fs1/.snapshots/snap2/userB/file3b
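To return the active file system to the state that was captured in the first snapshot, the file system can be
restored from snap1. Based on the synopsis above, an illustrative invocation is:
mmrestorefs fs1 snap1
After the restore completes, the directory userA and its files reappear in the active file system, while the
existing snapshots remain unchanged: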
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.snapshots/snap1/file1
/fs1/.snapshots/snap1/userA/file2
/fs1/.snapshots/snap1/userA/file3
/fs1/.snapshots/snap2/file1
/fs1/.snapshots/snap2/userB/file2b
/fs1/.snapshots/snap2/userB/file3b
See also
• “mmcrsnapshot command” on page 337
• “mmdelsnapshot command” on page 378
• “mmlssnapshot command” on page 532
• “mmsnapdir command” on page 711
Location
/usr/lpp/mmfs/bin
mmrestripefile command
Rebalances or restores the replication factor of the specified files, performs any incomplete or deferred
file compression or decompression, or detects and repairs data and directory block replica mismatches in
the specified files.
Synopsis
mmrestripefile {-m | -r | -p | -b [--strict] | -l | -c [--read-only] | -z}
{--inode-number [SnapPath/]InodeNumber [[SnapPath/]InodeNumber...] |
--inode-number-file InodeNumberFile |
-F FilenameFile | Filename [Filename...]}
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmrestripefile command attempts to repair the specified files, performs any deferred or
incomplete compression or decompression of the specified files, or detects and repairs data and directory
block replica mismatches in the specified files. You can use the -F option to specify a file that contains
the list of file names to be processed, or the --inode-number-file option to specify a file that contains
the list of inode numbers to be processed, with one entry per line.
The repair options are rebalancing (-b), restoring replication factors (-r), migrating data (-m), migrating
file data to the proper pool (-p), relocating the block placement of the files (-l), data and directory block
replica compare and repair (-c), and performing any deferred or incomplete compression or
decompression (-z). The -b option not only rebalances files but also performs all the operations of the -m
and -r options. For more information, see Restriping a GPFS file system in IBM Spectrum Scale:
Administration Guide.
CAUTION: Do not run the mmrestripefs or mmrestripefile command while an mmrestorefs
command is running.
Parameters
--inode-number [SnapPath/]InodeNumber
Specifies the inode number of the file to be restriped. If the current working directory is not already
inside the active file system or snapshot, then the inode number has to be prefixed by the path to the
active file system or snapshot. For example:
Options
-m
Migrates critical data from any suspended disk for a list of specified files. Critical data is all data that
would be lost if currently suspended disks were removed.
-r
Migrates all data for a list of files from suspended disks. If a disk failure or removal makes some
replicated data inaccessible, this command also restores replicated files to their designated level of
replication. Use this option immediately after a disk failure to protect replicated data against a
subsequent failure. You can also use this option before you take a disk offline for maintenance to
protect replicated data against the failure of another disk during the maintenance process.
-p
Moves the data of ill-placed files to the correct storage pool.
Some utilities, including the mmchattr command, can assign a file to a different storage pool without
moving the data of the file to the new pool. Such files are called ill-placed. The -p parameter causes
the command to move the data of ill-placed files to the correct storage pools.
-b [--strict]
Rebalances the specified files to improve file system performance. Rebalancing attempts to distribute
file blocks evenly across the disks of the file system. In IBM Spectrum Scale 5.0.0 and later,
rebalancing is implemented by a lenient round-robin method that typically runs faster than the
previous method of strict round robin. To rebalance the file system using the strict round-robin
method, include the --strict option.
--strict
Rebalances the specified files with a strict round-robin method. In IBM Spectrum Scale v4.2.3 and
earlier, rebalancing always uses this method.
Note: Rebalancing distributes file blocks across all the disks in the cluster that are not suspended,
including stopped disks. For stopped disks, rebalancing does not allow read operations and allocates
data blocks without writing them to the disk. When the disk is restarted and replicated data is copied
onto it, the file system completes the write operations.
-l
Relocates the block placement of the file. The location of the blocks depends on the current write
affinity depth, write affinity failure group setting, block group factor, and the node from which the
command is run. For example, for an existing file, regardless of how its blocks are distributed on disks
currently, if mmrestripefile -l is run from node A, the final block distribution looks as if the file
was created from scratch on node A.
To specify the write affinity failure group where the replica is put, before you run mmrestripefile -
l, enter a command like the following example:
mmlsfs <file_system_name> -i
--read-only
Only compares replicas and does not attempt to fix any mismatches.
-z
Performs any deferred or incomplete compression or decompression of files. For more information,
see the topic File compression in the IBM Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have write access to the file to run the mmrestripefile command unless you run with the -c
--read-only option, in which case read access to the file is sufficient.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
This example illustrates restriping a file that is named testfile0. The following command confirms that
the file is ill-placed:
mmlsattr -L testfile0
The following command migrates the file data to the correct storage pool:
mmrestripefile -p testfile0
The following command confirms that the file is no longer ill-placed:
mmlsattr -L testfile0
The following command compresses or decompresses a file for which compression or decompression is
deferred or incomplete:
mmrestripefile -z largefile.data
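As a further illustration, the -F option that is described above processes a list of file names; the list
file name here is hypothetical:
mmrestripefile -b -F /tmp/filelist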
See also
• “mmadddisk command” on page 28
• “mmapplypolicy command” on page 80
Location
/usr/lpp/mmfs/bin
mmrestripefs command
Rebalances or restores the replication factor of all the files in a file system. Alternatively, this command
performs any incomplete or deferred file compression or decompression of all the files in a file system.
Synopsis
mmrestripefs Device {-m | -r | -b [--strict] | -R | -p | -z | --check-conflicting-replicas}
[-N {Node[,Node...] | NodeFile | NodeClass}] [-o InodeResultFile]
[-P PoolName] [--inode-criteria CriteriaFile] [--qos QOSClass]
or
Availability
Available on all IBM Spectrum Scale editions.
Description
Issue the mmrestripefs command to rebalance or restore the replication of all files in a file system. The
command moves existing file system data between different disks in the file system based on changes to
the disk state made by the mmchdisk, mmadddisk, and mmdeldisk commands. It also attempts to
restore the metadata or data replication of all the files in the file system.
Tip: The mmrestripefs command can take a long time to run if there are many files or a large amount of
data to rebalance or restore. If you are adding, deleting, or replacing multiple disks at the same time
(mmadddisk, mmdeldisk, or mmrpldisk) you can run the mmrestripefs command after you have
added, deleted, or replaced all the disks, rather than after each disk.
Alternatively, you can issue the mmrestripefs command to perform any deferred or incomplete file
compression or decompression in all the files of a file system.
You must specify one of the options (-m, -r, -b, -R, -p, -z, or --check-conflicting-replicas) to
indicate how much file system data to move or whether to perform file compression or decompression.
You can issue this command against a mounted or unmounted file system.
If the file system uses replication, then restriping the file system also replicates it. Also, if the file system
uses replication, the -r and -m options treat suspended disks differently. The -r option removes all data
from a suspended disk, but the -m option leaves data on a suspended disk if at least one
replica of the data remains on a disk that is not suspended.
The -b option performs all the operations of the -m and -r options.
Use the -z option to perform any deferred or incomplete file compression or decompression.
CAUTION: Do not issue the mmrestripefs or mmrestripefile command while an
mmrestorefs command is running.
Consider the necessity of restriping and the current demands on the system. New data that is added to
the file system is correctly striped. Restriping a large file system requires many insert and delete
operations and might affect system performance. Plan to perform this task when system demand is low.
Parameters
Device
The device name of the file system to be restriped. File system names need not be fully qualified.
Device must be the first parameter. It can take the following parameters:
-m
Migrates all critical data off any suspended disk in this file system. Critical data is all data that
would be lost if currently suspended disks were removed.
-r
Migrates all data off suspended disks. It also restores all replicated files in the file system to their
designated degree of replication when a previous disk failure or removal of a disk makes some
replica data inaccessible. Use this parameter either immediately after a disk failure to protect
replicated data against a subsequent failure, or before you take a disk offline for maintenance to
protect replicated data against failure of another disk during the maintenance process.
Note: If the file system uses replication, before running mmrestripefs Device -r, you should
run mmlsdisk Device -L to check the number of failure groups available. If the number of
failure groups available is less than your default replication, you should not run mmrestripefs
Device -r because it will remove the data replica for files that have replicas that are located on
suspended or to be emptied disks.
-b [--strict]
Rebalances the file system to improve performance. Rebalancing attempts to distribute file blocks
evenly across the disks of the file system. In IBM Spectrum Scale 5.0.0 and later, rebalancing is
implemented by a lenient round-robin method that typically runs faster than the previous method
of strict round robin. To rebalance the file system with the strict round-robin method, include the
--strict option that is described in the following text.
--strict
Rebalances the file system with a strict round-robin method. In IBM Spectrum Scale v4.2.3
and earlier, rebalancing always uses this method.
Note: Rebalancing of files is an I/O intensive and time-consuming operation and is important only
for file systems with large files that are mostly invariant. In many cases, normal file update and
creation rebalance a file system over time without the cost of a complete rebalancing.
Note: Rebalancing distributes file blocks across all the disks in the cluster that are not suspended,
including stopped disks. For stopped disks, rebalancing does not allow read operations and
allocates data blocks without writing them to the disk. When the disk is restarted and replicated
data is copied onto it, the file system completes the write operations.
-R
Changes the replication settings of each file, directory, and system metadata object so that they
match the default file system settings (see the mmchfs command -m and -r options) as long as
the maximum (-M and -R) settings for the object allow it. Next, it replicates or unreplicates the
object as needed to match the new settings. This option can be used to replicate all of the existing
files that were not previously replicated or to unreplicate the files if replication is no longer needed
or wanted. All data is also migrated off disks that have either a suspended or to be emptied
status.
--check-conflicting-replicas
Scans the file system and compares replicas of metadata and data for conflicts. Each such conflict
is reported.
Attention: If this option reports inconsistent replicas, contact IBM Service for guidance.
-c [--read-only]
Deprecated in favor of the --check-conflicting-replicas option. Both the -c option and
the -c --read-only option now have exactly same function as the --check-conflicting-
replicas option. The function of -c in earlier releases, which was to attempt to fix replica
conflicts, is no longer available. For more information see the description of the option --check-
conflicting-replicas.
--metadata-only
Limits the specified operation to metadata blocks. Data blocks are not affected. This option is
valid only with the -r, -b, -R, or --check-conflicting-replicas option.
The mmrestripefs command with this option completes its operation quicker than a full
restripe, replication, or replica compare of data and metadata.
Use this option when you want to prioritize the mmrestripefs operation on the metadata. This
option ensures that the mmrestripefs operation has a reduced impact on the file system
performance when compared to running the mmrestripefs command on the metadata and data.
After you run the mmrestripefs command on the metadata with --metadata-only option, you
can issue the mmrestripefs command without this option to restripe the data and any metadata
that requires to be restriped.
Note: This option does not run until all the nodes in the cluster are upgraded to IBM Spectrum
Scale 4.2.1 release. If any of the nodes is not upgraded, the system displays the following error
message:
mmrestripefs: The --metadata-only option support has not been enabled yet.
Issue "mmchconfig release=LATEST" to activate the new function.
mmrestripefs: Command failed. Examine previous error messages to determine cause.
-p
Directs mmrestripefs to repair the file placement within the storage pool.
Files that are assigned to one storage pool, but with data in a different pool, have their data
migrated to the correct pool. Such files are referred to as ill-placed. Utilities, such as the
mmchattr command, might change a file's storage pool assignment, but not move the data. The
mmrestripefs command might then be invoked to migrate all of the data at once, rather than
migrating each file individually. The placement option (-p) rebalances only the files that it moves.
In contrast, the rebalance operation (-b) performs data placement on all files.
-z
Performs any deferred or incomplete file compression or decompression of files in the file system.
For more information, see the topic File compression in the IBM Spectrum Scale: Administration
Guide.
-P PoolName
Directs mmrestripefs to repair only files assigned to the specified storage pool. This option is
convenient for migrating ill-placed data blocks between pools, for example after you change a file's
storage pool assignment with mmchattr or mmapplypolicy with the -I defer flag.
Do not use for other tasks, in particular, for any tasks that require metadata processing, such as re-
replication. By design, all GPFS metadata is kept in the system pool, even for files that have blocks in
other storage pools. Therefore a command that must process all metadata must not be restricted to a
specific storage pool.
-N {Node[,Node...] | NodeFile | NodeClass}
Specify the nodes that participate in the restripe of the file system. This command supports all
defined node classes. The default is all or the current value of the defaultHelperNodes
parameter of the mmchconfig command.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
-o InodeResultFile
Contains a list of the inodes that met the interesting inode flags that were specified on the --inode-
criteria parameter. The output file contains the following:
INODE_NUMBER
This is the inode number.
DISKADDR
Specifies a dummy address for later tsfindinode use.
SNAPSHOT_ID
This is the snapshot ID.
ISGLOBAL_SNAPSHOT
Indicates whether or not the inode is in a global snapshot. Files in the live file system are
considered to be in a global snapshot.
INDEPENDENT_FSETID
Indicates the independent fileset to which the inode belongs.
MEMO (INODE_FLAGS FILE_TYPE [ERROR])
Indicates the inode flag and file type that will be printed:
Inode flags:
BROKEN
exposed
dataUpdateMiss
illCompressed
illPlaced
illReplicated
metaUpdateMiss
unbalanced
File types:
BLK_DEV
CHAR_DEV
DIRECTORY
FIFO
LINK
LOGFILE
REGULAR_FILE
RESERVED
SOCK
*UNLINKED*
*DELETED*
Notes:
1. An error message will be printed in the output file if an error is encountered when repairing the
inode.
2. DISKADDR, ISGLOBAL_SNAPSHOT, and FSET_ID work with the tsfindinode tool
(/usr/lpp/mmfs/bin/tsfindinode) to find the file name for each inode. tsfindinode
uses the output file to retrieve the file name for each interesting inode.
--inode-criteria CriteriaFile
Specifies the interesting inode criteria flag, where CriteriaFile contains a list of the following flags with
one per line:
BROKEN
Indicates that a file has a data block with all of its replicas on disks that have been removed.
Note: BROKEN is always included in the list of flags even if it is not specified.
dataUpdateMiss
Indicates that at least one data block was not updated successfully on all replicas.
exposed
Indicates an inode with an exposed risk; that is, the file has data where all replicas are on
suspended disks. This could cause data to be lost if the suspended disks have failed or been
removed.
illCompressed
Indicates an inode in which file compression or decompression is deferred, or in which a
compressed file is partly decompressed to allow the file to be written into or memory-mapped.
illPlaced
Indicates an inode with some data blocks that might be stored in an incorrect storage pool.
illReplicated
Indicates that the file has a data block that does not meet the replication setting.
metaUpdateMiss
Indicates that there is at least one metadata block that has not been successfully updated to all
replicas.
unbalanced
Indicates that the file has a data block that is not well balanced across all the disks in all failure
groups.
Note: If a file matches any of the specified interesting flags, all of its interesting flags (even those not
specified) will be displayed.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
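For example, to run a rebalance explicitly in the maintenance class (fs1 is a hypothetical file system
name), issue a command of this form:
mmrestripefs fs1 -b --qos maintenance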
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to issue the mmrestripefs command.
The node on which you issue the command must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To move all critical data from any suspended disk in file system fs1, issue the following command:
mmrestripefs fs1 -m
2. To rebalance all files in file system fs1 across all defined, accessible disks that are not stopped or
suspended, issue the following command:
mmrestripefs fs1 -b
3. The following command scans file system gpfs1 for replica conflicts of metadata and data:
4. To fix the pool placement of files in file system fs1 and also determine which files are illReplicated (for
example, as a result of a failed disk), issue the following command:
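An invocation of this general form, combining the -p, --inode-criteria, and -o parameters described
above, would do this; the criteria and result file names are hypothetical and the exact invocation may
differ:
# cat /tmp/criteriafile
illReplicated
# mmrestripefs fs1 -p --inode-criteria /tmp/criteriafile -o /tmp/resultfile
The result file then contains entries similar to the following: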
24325 0:0 0 1 0 illreplicated unbalanced REGULAR_FILE
24323 0:0 0 1 0 illreplicated unbalanced REGULAR_FILE
24326 0:0 0 1 0 illreplicated unbalanced REGULAR_FILE
24327 0:0 0 1 0 illreplicated unbalanced REGULAR_FILE
24328 0:0 0 1 0 illreplicated unbalanced REGULAR_FILE
24329 0:0 0 1 0 illreplicated unbalanced REGULAR_FILE
See also
• “mmadddisk command” on page 28
• “mmapplypolicy command” on page 80
• “mmchattr command” on page 156
• “mmchdisk command” on page 210
• “mmchfs command” on page 230
• “mmdeldisk command” on page 360
• “mmrpldisk command” on page 679
• “mmrestripefile command” on page 668
Location
/usr/lpp/mmfs/bin
mmrpldisk command
Replaces the specified disk.
Synopsis
mmrpldisk Device DiskName {DiskDesc | -F StanzaFile} [-v {yes | no}]
[-N {Node[,Node...] | NodeFile | NodeClass}]
[--inode-criteria CriteriaFile] [-o InodeResultFile]
[--qos QOSClass]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmrpldisk command to replace an existing disk in the GPFS file system with a new one. All data
on the old disk is migrated to the new disk.
To replace a disk in a GPFS file system, you must first decide if you will:
1. Create a new disk using the mmcrnsd command.
In this case, use the rewritten disk stanza file produced by the mmcrnsd command or create a new
disk stanza. When using the rewritten file, the disk usage and failure group specifications remain the
same as specified on the mmcrnsd command.
2. Select a disk no longer in any file system. Issue the mmlsnsd -F command to display the available
disks.
The disk may then be used to replace a disk in the file system using the mmrpldisk command.
Notes:
• Do not replace a stopped disk under any circumstances. You must start the disk before replacing it. If
the disk cannot be started, delete it with the mmdeldisk command. For more information, see the topic
Disk media failure in the IBM Spectrum Scale: Problem Determination Guide.
• A disk cannot be replaced if it is the only disk in the file system.
• The replacement disk must have the same thin disk type as the disk being replaced. Otherwise the
mmrpldisk command fails with an error message. For more information see the description of the
thinDiskType parameter later in this help topic.
• The mmrpldisk command can be run while a file system is mounted.
Results
Upon successful completion of the mmrpldisk command, the disk is replaced in the file system and data
is copied to the new disk without restriping.
Parameters
Device
The device name of the file system where the disk is to be replaced. File system names need not be
fully-qualified. fs0 is as acceptable as /dev/fs0.
This must be the first parameter.
DiskName
The name of the disk to be replaced. To display the names of disks that belong to the file system,
issue the mmlsnsd -f, mmlsfs -d, or mmlsdisk command. The mmlsdisk command will also
show the current disk usage and failure group values for each of the disks.
DiskDesc
A descriptor for the replacement disk.
Prior to GPFS 3.5, the disk information for the mmrpldisk command was specified in the form of a
disk descriptor defined as follows (with the second, third, sixth, and seventh fields reserved):
DiskName:::DiskUsage:FailureGroup:::
For backward compatibility, the mmrpldisk command will still accept a traditional disk descriptor as
input, but this use is discouraged.
-F StanzaFile
Specifies a file containing the NSD stanzas for the replacement disk. NSD stanzas have this format:
%nsd:
nsd=NsdName
usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
failureGroup=FailureGroup
pool=StoragePool
servers=ServerList
device=DiskName
thinDiskType={no | nvme | scsi | auto}
where:
nsd=NsdName
The name of an NSD previously created by the mmcrnsd command. For a list of available disks,
issue the mmlsnsd -F command. This clause is mandatory for the mmrpldisk command.
usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
Specifies the type of data to be stored on the disk:
dataAndMetadata
Indicates that the disk contains both data and metadata. This is the default for disks in the
system pool.
dataOnly
Indicates that the disk contains data and does not contain metadata. This is the default for
disks in storage pools other than the system pool.
metadataOnly
Indicates that the disk contains metadata and does not contain data.
descOnly
Indicates that the disk contains no data and no file metadata. IBM Spectrum Scale uses this
type of disk primarily to keep a copy of the file system descriptor. It can also be used as a third
failure group in certain disaster recovery configurations. For more information, see the topic
Synchronous mirroring utilizing GPFS replication in the IBM Spectrum Scale: Administration
Guide.
This clause is optional for the mmrpldisk command. If omitted, the new disk will inherit the
usage type of the disk being replaced.
failureGroup=FailureGroup
Identifies the failure group to which the disk belongs. A failure group identifier can be a simple
integer or a topology vector that consists of up to three comma-separated integers. The default is
-1, which indicates that the disk has no point of failure in common with any other disk.
GPFS uses this information during data and metadata placement to ensure that no two replicas of
the same block can become unavailable due to a single failure. All disks that are attached to the
same NSD server or adapter must be placed in the same failure group.
If the file system is configured with data replication, all storage pools must have two failure groups
to maintain proper protection of the data. Similarly, if metadata replication is in effect, the system
storage pool must have two failure groups.
Disks that belong to storage pools in which write affinity is enabled can use topology vectors to
identify failure domains in a shared-nothing cluster. Disks that belong to traditional storage pools
must use simple integers to specify the failure group.
This clause is optional for the mmrpldisk command. If omitted, the new disk will inherit the
failure group of the disk being replaced.
pool=StoragePool
Specifies the storage pool to which the disk is to be assigned. This clause is ignored by the
mmrpldisk command.
servers=ServerList
A comma-separated list of NSD server nodes. This clause is ignored by the mmrpldisk command.
device=DiskName
The block device name of the underlying disk device. This clause is ignored by the mmrpldisk
command.
thinDiskType={no | nvme | scsi | auto}
Specifies the space reclaim disk type:
Attention: The replacement disk must have the same thin disk type as the disk being replaced.
Otherwise the mmrpldisk command fails with an error message. For more information,
see the topic IBM Spectrum Scale with data reduction storage devices in the IBM Spectrum
Scale: Concepts, Planning, and Installation Guide.
no
The disk does not support space reclaim. This value is the default.
nvme
The disk is a TRIM capable NVMe device that supports the mmreclaimspace command.
scsi
The disk is a thin provisioned SCSI disk that supports the mmreclaimspace command.
auto
The type of the disk is either nvme or scsi. IBM Spectrum Scale will try to detect the actual
disk type automatically. To avoid problems, you should replace auto with the correct disk
type, nvme or scsi, as soon as you can.
Note: In IBM Spectrum Scale 5.0.5, space reclaim auto-detection is enhanced. You are encouraged
to use the auto keyword after your cluster is upgraded to 5.0.5.
For more information, see the topic IBM Spectrum Scale with data reduction storage devices in the
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
Note: While it is not absolutely necessary to specify the same parameters for the new disk as the old
disk, it is suggested that you do so. If the new disk is equivalent in size to the old disk, and if the disk
usage and failure group parameters are the same, the data and metadata can be completely migrated
from the old disk to the new disk. A disk replacement in this manner allows the file system to maintain
its current data and metadata balance.
If the new disk has a different size, disk usage parameter, or failure group parameter, the operation
may leave the file system unbalanced and require a restripe. Additionally, a change in size or in the
disk usage parameter may cause the operation to fail because other disks in the file system may not have
sufficient space to absorb more data or metadata. In this case, first use the mmadddisk command to
add the new disk, the mmdeldisk command to delete the old disk, and finally the mmrestripefs
command to rebalance the file system.
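A minimal sketch of a replacement that uses a stanza file; the NSD names and file names are
hypothetical, and the omitted usage and failureGroup clauses are inherited from the disk being replaced:
# cat /tmp/replace.stanza
%nsd:
nsd=newdisknsd
# mmrpldisk fs1 olddisknsd -F /tmp/replace.stanza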
-v {yes | no}
Verify the new disk does not belong to an existing file system. The default is -v yes. Specify -v no
only when you want to reuse a disk that is no longer needed for an existing file system. If the
command is interrupted for any reason, use the -v no option on the next invocation of the command.
Important: Using -v no on a disk that already belongs to a file system will corrupt that file system.
This will not be noticed until the next time that file system is mounted.
The --inode-criteria CriteriaFile and -o InodeResultFile parameters shown in the synopsis specify the
interesting inode criteria and the output file, as for the mmrestripefs command. The output file can
report the following inode flags and file types:
Inode flags:
BROKEN
exposed
dataUpdateMiss
illCompressed
illPlaced
illReplicated
metaUpdateMiss
unbalanced
File types:
BLK_DEV
CHAR_DEV
DIRECTORY
FIFO
LINK
LOGFILE
REGULAR_FILE
RESERVED
SOCK
*UNLINKED*
*DELETED*
Notes:
1. An error message will be printed in the output file if an error is encountered when repairing the
inode.
2. DISKADDR, ISGLOBAL_SNAPSHOT, and FSET_ID work with the tsfindinode tool
(/usr/lpp/mmfs/bin/tsfindinode) to find the file name for each inode. tsfindinode
uses the output file to retrieve the file name for each interesting inode.
--qos QOSClass
Specifies the Quality of Service for I/O operations (QoS) class to which the instance of the command is
assigned. If you do not specify this parameter, the instance of the command is assigned by default to
the maintenance QoS class. This parameter has no effect unless the QoS service is enabled. For
more information, see the topic “mmchqos command” on page 260. Specify one of the following QoS
classes:
maintenance
This QoS class is typically configured to have a smaller share of file system IOPS. Use this class for
I/O-intensive, potentially long-running GPFS commands, so that they contribute less to reducing
overall file system performance.
other
This QoS class is typically configured to have a larger share of file system IOPS. Use this class for
administration commands that are not I/O-intensive.
For more information, see the topic Setting the Quality of Service for I/O operations (QoS) in the IBM
Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmrpldisk command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To replace disk hd27n01 in fs1 with a new disk, hd16vsdn10 allowing the disk usage and failure
group parameters to default to the corresponding values of hd27n01, and have only nodes c154n01,
c154n02, and c154n09 participate in the migration of the data, issue this command:
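Based on the synopsis, a command of this form would accomplish that (an illustrative sketch; the exact
invocation may differ):
mmrpldisk fs1 hd27n01 hd16vsdn10 -N c154n01,c154n02,c154n09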
2. To replace disk vmip3_nsd1 from storage pool GOLD on file system fs2 and to search for any
interesting files handled during the mmrpldisk at the same time, issue this command:
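A sketch of such an invocation, with hypothetical stanza, criteria, and result file names:
mmrpldisk fs2 vmip3_nsd1 -F /tmp/vmip2_nsd3.stanza --inode-criteria /tmp/criteria -o /tmp/result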
GPFS: 6027-531 The following disks of fs2 will be formatted on node vmip1:
vmip2_nsd3: size 5120 MB
Extending Allocation Map
Checking Allocation Map for storage pool GOLD
59 % complete on Wed Apr 15 10:52:44 2015
Note: The mmrpldisk command will report any interesting inodes that it finds during routine
processing, but the list might not be 100% accurate or complete.
See also
• “mmadddisk command” on page 28
• “mmchdisk command” on page 210
• “mmcrnsd command” on page 332
• “mmlsdisk command” on page 489
• “mmlsnsd command” on page 514
• “mmrestripefs command” on page 672
Location
/usr/lpp/mmfs/bin
mmsdrrestore command
Restores the latest GPFS system files on the specified nodes.
Synopsis
mmsdrrestore [-p NodeName] [-F mmsdrfsFile] [-R remoteFileCopyCommand]
[-a | -N {Node[,Node...] | NodeFile | NodeClass}]
or
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmsdrrestore command is intended for use by experienced system administrators.
The mmsdrrestore command restores the latest GPFS system files on the specified nodes. If no nodes
are specified, the command restores the configuration information only on the node on which the
command is issued. If the local GPFS configuration file is missing, the file that is specified with the -F
option from the node that is specified with the -p option is used instead.
This command works best when the -F option specifies a backup file that is created by the mmsdrbackup
user exit. If the Cluster Configuration Repository (CCR) is enabled, the mmsdrbackup user exit creates a
CCR backup file. If the CCR is not enabled, the user exit creates an mmsdrfs backup file. For more
information, see “mmsdrbackup user exit” on page 1002.
The mmsdrrestore command cannot restore a cluster configuration unless a majority of the quorum
nodes in the cluster are accessible. However, this requirement does not apply if the -F option specifies a
CCR backup file or if the --ccr-repair option is specified.
Parameters
-p NodeName
Specifies the node from which to obtain a valid GPFS configuration file. The node must be either the
primary configuration server or a node that has a valid backup copy of the mmsdrfs file. If this
parameter is not specified, the command uses the configuration file on the node from which the
command is issued.
-F mmsdrfsFile
Specifies the path name of the GPFS configuration file for the mmsdrrestore command to use. This
configuration file might be the current one on the primary server, or it might be a configuration file that
is obtained from the mmsdrbackup user exit. If not specified, /var/mmfs/gen/mmsdrfs is used.
If the configuration file is a CCR backup file, you must also specify the -a option of the
mmsdrrestore command. The command restores any nodes in the cluster that need to be restored.
If the configuration file is an mmsdrfs file, you can specify the nodes to be restored with the -N option
or you can issue the command from a node that needs to be restored.
-R remoteFileCopyCommand
Specifies the fully qualified path name for the remote file copy program to be used for obtaining the
GPFS configuration file. The default is /usr/bin/rcp.
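For example, on clusters where scp is used instead of rcp, the command might be invoked as follows
(a sketch; the node name is hypothetical):
mmsdrrestore -p primaryServer -R /usr/bin/scp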
-a
Restores the GPFS configuration files on all nodes in the cluster.
• If the node is running IBM Spectrum Scale 5.0.5.x, Python 2 must be installed on the node:
– On Linux, Python 2 is usually installed automatically.
– On AIX, manually install Python 2.7.5 or a later version of Python 2 from the AIX Toolbox for Linux
Applications.
– On Windows, manually install Python 2 under the Cygwin environment as described in the
following steps. Windows native (non-Cygwin) distributions of Python 2 are not supported.
1. From http://www.cygwin.com, download and run the Cygwin 64-bit setup program setup-
x86_64.exe.
2. In the "Select Packages" window, click View > Category.
3. Click All > Python > Python2 and select the latest level.
4. Follow the instructions to complete the installation.
Important: The use of the --ccr-repair option does not guarantee the recovery of the most recent
state of all the configuration files in the CCR. Instead, the option brings the CCR back into a consistent
state with the most recent available version of each configuration file.
Exit status
0
Successful completion.
nonzero
A failure occurred.
Security
You must have root authority to run the mmsdrrestore command.
The node on which the command is issued must be able to run remote shell commands on any other node
in the cluster without the use of a password and without producing any extraneous messages. For more
information, see Requirements for administering a GPFS file system in IBM Spectrum Scale: Administration
Guide.
Examples
1. To restore the latest GPFS system files on the local node using the GPFS configuration file /var/
mmfs/gen/mmsdrfs from the node that is named primaryServer, issue the following command:
mmsdrrestore -p primaryServer
2. To restore the GPFS system files on all nodes in the cluster using GPFS configuration file /
GPFSconfigFiles/mmsdrfs.120605 on the node that is named GPFSarchive, issue the following
command from the node named localNode:
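A command of the following general form would do this (a sketch based on the parameters described
above; the exact invocation may differ):
mmsdrrestore -p GPFSarchive -F /GPFSconfigFiles/mmsdrfs.120605 -a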
3. The following command restores the GPFS system files from a CCR backup file. The -a option is
required. The command restores any nodes in the cluster that need to be restored:
mmsdrrestore -F /GPFSbackupFiles/CCRBackup.2015.10.14.10.01.25.tar.gz -a
4. The following example shows how the mmsdrrestore command can be run with the --ccr-repair
option to repair missing or corrupted files in the CCR committed directory of all of the quorum nodes.
Note: The command returns with an error message if the CCR committed directory of any of the
quorum nodes does not have corrupted or lost files.
a. Issue a command like the following one to shut down the GPFS daemon on all of the quorum
nodes:
# mmshutdown -a
Wed Mar 13 01:57:39 EDT 2019: mmshutdown: Starting force unmount of GPFS file systems
Wed Mar 13 01:57:44 EDT 2019: mmshutdown: Shutting down GPFS daemons
Wed Mar 13 01:58:07 EDT 2019: mmshutdown: Finished
b. Issue the following command to repair the missing or corrupted CCR files:
# mmsdrrestore --ccr-repair
mmsdrrestore: Checking CCR on all quorum nodes ...
mmsdrrestore. Invoking CCR restore in dry run mode ...
ccrrestore: +++ DRY RUN: CCR state on quorum nodes and tiebreaker disks will not be
restored +++
ccrrestore: 1/10: Test tool chain successful
ccrrestore: 2/10: Setup local working directories successful
ccrrestore: 3/10: Read CCR Paxos state from tiebreaker disks successful
ccrrestore: 4/10: Copy Paxos state files from quorum nodes successful
ccrrestore: 5/10: Getting most recent Paxos state file successful
ccrrestore: 6/10: Get cksum of files in committed directory successful
ccrrestore: 7/10: WARNING: Intact ccr.nodes file missing in committed directory
ccrrestore: 7/10: INFORMATION: Intact mmsysmon.json found (file id: 3 version: 1)
ccrrestore: 7/10: INFORMATION: Intact mmsdrfs found (file id: 4 version: 901)
ccrrestore: 7/10: INFORMATION: Intact mmLockFileDB found (file id: 5 version: 1)
ccrrestore: 7/10: INFORMATION: Intact genKeyData found (file id: 6 version: 1)
ccrrestore: 7/10: INFORMATION: Intact genKeyDataNew found (file id: 7 version: 1)
ccrrestore: 7/10: Parsing committed file list successful
ccrrestore: 8/10: Get cksum of CCR files successful
ccrrestore: 9/10: Pulling committed files from quorum nodes successful
ccrrestore: 10/10: File name: 'ccr.nodes' file state: UPDATED remark: 'OLD (v1, ((n1,e1),
0),
9a5d4266)'
ccrrestore: 10/10: File name: 'ccr.disks' file state: MATCHING remark: 'none'
ccrrestore: 10/10: File name: 'mmsysmon.json' file state: MATCHING remark: 'none'
ccrrestore: 10/10: File name: 'mmsdrfs' file state: MATCHING remark: 'none'
ccrrestore: 10/10: File name: 'mmLockFileDB' file state: MATCHING remark: 'none'
ccrrestore: 10/10: File name: 'genKeyData' file state: MATCHING remark: 'none'
ccrrestore: 10/10: File name: 'genKeyDataNew' file state: MATCHING remark: 'none'
ccrrestore: 10/10: Patching Paxos state successful
mmsdrrestore: Review the dry run report above to see what will be changed and decide if
you
want to continue the restore or not. Do you want to continue? (yes/no) yes
ccrrestore: 1/17: Test tool chain successful
ccrrestore: 2/17: Test GPFS shutdown successful
ccrrestore: 3/17: Setup local working directories successful
ccrrestore: 4/17: Archiving CCR directories on quorum nodes successful
ccrrestore: 5/17: Read CCR Paxos state from tiebreaker disks successful
ccrrestore: 6/17: Kill GPFS mmsdrserv daemon successful
ccrrestore: 7/17: Copy Paxos state files from quorum nodes successful
ccrrestore: 8/17: Getting most recent Paxos state file successful
ccrrestore: 9/17: Get cksum of files in committed directory successful
ccrrestore: 10/17: WARNING: Intact ccr.nodes file missing in committed directory
ccrrestore: 10/17: INFORMATION: Intact mmsysmon.json found (file id: 3 version: 1)
ccrrestore: 10/17: INFORMATION: Intact mmsdrfs found (file id: 4 version: 901)
ccrrestore: 10/17: INFORMATION: Intact mmLockFileDB found (file id: 5 version: 1)
ccrrestore: 10/17: INFORMATION: Intact genKeyData found (file id: 6 version: 1)
ccrrestore: 10/17: INFORMATION: Intact genKeyDataNew found (file id: 7 version: 1)
ccrrestore: 10/17: Parsing committed file list successful
ccrrestore: 11/17: Get cksum of CCR files successful
ccrrestore: 12/17: Pulling committed files from quorum nodes successful
ccrrestore: 13/17: File name: 'ccr.nodes' file state: UPDATED remark: 'OLD (v1, ((n1,e1),
0),
9a5d4266)'
ccrrestore: 13/17: File name: 'ccr.disks' file state: MATCHING remark: 'none'
ccrrestore: 13/17: File name: 'mmsysmon.json' file state: MATCHING remark: 'none'
ccrrestore: 13/17: File name: 'mmsdrfs' file state: MATCHING remark: 'none'
ccrrestore: 13/17: File name: 'mmLockFileDB' file state: MATCHING remark: 'none'
ccrrestore: 13/17: File name: 'genKeyData' file state: MATCHING remark: 'none'
ccrrestore: 13/17: File name: 'genKeyDataNew' file state: MATCHING remark: 'none'
ccrrestore: 13/17: Patching Paxos state successful
ccrrestore: 14/17: Pushing CCR files successful
ccrrestore: 15/17: Started GPFS mmsdrserv daemon successful
ccrrestore: 16/17: Ping GPFS mmsdrserv daemon successful
ccrrestore: 17/17: Write CCR Paxos state to tiebreaker disks
c. Issue a command like the following one to restart the GPFS daemon on all the quorum nodes:
# mmstartup -a
Tue Mar 19 22:31:35 EDT 2019: mmstartup: Starting GPFS ...
See also
• “mmsdrbackup user exit” on page 1002
Location
/usr/lpp/mmfs/bin
mmsetquota command
Sets quota limits.
Synopsis
mmsetquota Device{[:FilesetName]
[--user IdOrName[,IdOrName]] [--group IdOrName[,IdOrName]]}
{[--block SoftLimit[:HardLimit]] [--files SoftLimit[:HardLimit]]}
or
or
or
or
mmsetquota -F StanzaFile
Availability
Available on all IBM Spectrum Scale editions. Available on AIX and Linux.
Description
The mmsetquota command sets quota limits, default quota limits, or grace periods for users, groups, and
file sets in the specified file system.
When setting quota limits for a file system, consider replication within the file system. For an
explanation, see Listing quotas in the IBM Spectrum Scale: Administration Guide.
Important: Quota limits are not enforced for root users (by default). For information on managing quotas,
see Managing GPFS quotas in the IBM Spectrum Scale: Administration Guide.
Parameters
Device
Specifies the device name of the file system.
FilesetName
Specifies the name of a fileset located on Device for which quota information is to be set.
IdOrName
Specifies a numeric ID, user name, or group name.
SoftLimit
Specifies the amount of data or the number of files the user, group, or fileset will be allowed to use.
HardLimit
Specifies the amount of data or the number of files the user, group, or fileset will be allowed to use
during a grace period. If omitted, the default is no limit. See note.
GracePeriod
Specifies the file-system grace period during which quotas can exceed the soft limit before it is
imposed as a hard limit. See note.
StanzaFile
Specifies a file containing quota stanzas.
--block
Specifies the quota limits or grace period for data.
--files
Specifies the quota limits or grace period for files.
--default
Sets the default quota for the user, group, or fileset.
--grace
Sets the grace period for the user, group, or fileset.
-F StanzaFile
Specifies a file containing the quota stanzas for set quota, set default quota, or set grace period.
Quota stanzas have this format:
%quota:
device=Device
command={setQuota|setDefaultQuota|setGracePeriod}
type={USR|GRP|FILESET}
id=IdList
fileset=FilesetName
blockQuota=Number
blockLimit=Number
blockGrace=Period
filesQuota=Number
filesLimit=Number
filesGrace=Period
where:
device=Device
The device name of the file system.
command={setQuota|setDefaultQuota|setGracePeriod}
Specifies the command to be executed for this stanza.
setQuota
Sets the quota limits. This command ignores blockGrace and filesGrace attributes.
setDefaultQuota
Sets the default quota limits. This command ignores id, blockGrace and filesGrace
attributes.
setGracePeriod
Sets the grace periods. The command ignores id, fileset, and quota limit attributes. Grace
periods can be set for each quota type in the file system.
type={USR|GRP|FILESET}
Specifies whether the command applies to user, group, or fileset.
id=IdList
Specifies a list of numeric IDs or user, group, or fileset names.
fileset=FilesetName
Specifies the fileset name for the per-fileset quota setting. This attribute is ignored for
type=FILESET.
blockQuota=Number
Specifies the block soft limit. The number can be specified using the suffix K, M, G, or T. See note.
blockLimit=Number
Specifies the block hard limit. The number can be specified using the suffix K, M, G, or T. See note.
filesQuota=Number
Specifies the inode soft limit. The number can be specified using the suffix K, M, or G. See note.
filesLimit=Number
Specifies the inode hard limit. The number can be specified using the suffix K, M, or G. See note.
blockGrace=Period
Specifies the file-system grace period during which the block quotas can exceed the soft limit
before it is imposed as a hard limit. The period can be specified in days, hours, minutes, or
seconds.
filesGrace=Period
Specifies the file-system grace period during which the files quota can exceed the soft limit before
it is imposed as a hard limit. The period can be specified in days, hours, minutes, or seconds.
Note:
• The maximum files limit is 2147483647.
• The maximum block limit is 999999999999999K. For values greater than 976031318016K (909T) and
up to the maximum limit of 999999999999999K (about 931322T), you must specify the equivalent
value with the suffix K, M, or G.
• To check the grace period that is set, issue the mmrepquota -t command.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmsetquota command.
GPFS must be running on the node from which the mmsetquota command is issued.
You may issue the mmsetquota command only from a node in the GPFS cluster where the file system is
mounted.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see the topic Requirements for administering a GPFS file system in the IBM Spectrum
Scale: Administration Guide.
Examples
1. The following command sets the block soft and hard limit to 25G and 30G and files soft and hard limit
to 10K and 11K, respectively for user user234:
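A sketch of the command, following the syntax shown in the synopsis (fs1 is the file system used in the
stanza example below):
mmsetquota fs1 --user user234 --block 25G:30G --files 10K:11K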
2. If perfileset quota is enabled, the following command sets block soft and hard limit to 5G and 7G,
respectively, for group fvt090 and for fileset ifset2:
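A sketch of the corresponding command, using the Device:FilesetName form from the synopsis:
mmsetquota fs1:ifset2 --group fvt090 --block 5G:7G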
3. To change the user grace period for block data to 10 days, issue the following command:
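A possible form of the command, assuming that the --grace option takes the quota type (user, group,
or fileset) as its argument:
mmsetquota fs1 --grace user --block 10days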
4. All of the previous examples can be done in one invocation of mmsetquota by using a quota stanza file.
The stanza file /tmp/quotaExample may look like this:
%quota:
device=fs1
command=setquota
type=USR
id=user234
blockQuota=25G
blockLimit=30G
filesQuota=10K
filesLimit=11K
%quota:
device=fs1
command=setquota
type=GRP
id=fvt090
fileset=ifset2
blockQuota=5G
blockLimit=7G
%quota:
device=fs1
command=setgraceperiod
type=user
blockGrace=10days
# mmsetquota -F /tmp/quotaExample
See also
• “mmcheckquota command” on page 218
• “mmdefedquota command” on page 342
• “mmdefquotaoff command” on page 346
• “mmdefquotaon command” on page 349
• “mmedquota command” on page 398
• “mmlsquota command” on page 527
• “mmquotaon command” on page 644
• “mmquotaoff command” on page 641
• “mmrepquota command” on page 656
Location
/usr/lpp/mmfs/bin
mmshutdown command
Unmounts all GPFS file systems and stops GPFS on one or more nodes.
Synopsis
mmshutdown [-t UnmountTimeout] [-a | -N {Node[,Node...] | NodeFile | NodeClass}] [--accept]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmshutdown command to stop the GPFS daemons on one or more nodes. If no operand is
specified, GPFS is stopped only on the node from which the command was issued.
The mmshutdown command first attempts to unmount all GPFS file systems. If the unmount does not
complete within the specified timeout period, the GPFS daemons shut down anyway.
If shutting down the specified nodes will cause problems in the cluster, the command prompts for
confirmation. These checks can be bypassed by setting the confirmShutdownIfHarmful parameter
value. To bypass these checks, issue the following command:
mmchconfig confirmShutdownIfHarmful=no
Currently, the command checks only whether the shutdown will cause loss of node quorum.
Results
Upon successful completion of the mmshutdown command, these tasks are completed:
• GPFS file systems are unmounted.
• GPFS daemons are stopped.
Parameters
-a
Stop GPFS on all nodes in a GPFS cluster.
-N {Node[,Node...] | NodeFile | NodeClass}
Directs the mmshutdown command to process a set of nodes.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
This command does not support a NodeClass of mount.
--accept
Bypasses the user confirmation, if the command detects that shutting down the specified nodes will
cause a harmful condition.
Options
-t UnmountTimeout
The maximum amount of time, in seconds, that the unmount command is given to complete. The
default timeout period is equal to:
60 + 3 × number of nodes
If the unmount does not complete within the specified amount of time, the command times out and
the GPFS daemons shut down.
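For example, to give the unmount up to two minutes to complete before GPFS is stopped on the local
node, a command of this form could be used (a sketch):
mmshutdown -t 120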
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmshutdown command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To stop GPFS on all nodes in the GPFS cluster, issue this command:
mmshutdown -a
2. To stop GPFS on node k164n04 and display the progress messages, issue this command:
mmshutdown -N k164n04
Thu Mar 15 14:00:12 EDT 2012: mmshutdown: Starting force unmount of GPFS file systems
k164n04: forced unmount of /gpfs/fs1
Thu Mar 15 14:00:22 EDT 2012: mmshutdown: Shutting down GPFS daemons
k164n04: Shutting down!
k164n04: 'shutdown' command about to kill process 7274548
Thu Mar 15 14:00:45 EDT 2012: mmshutdown: Finished
See also
• “mmgetstate command” on page 425
• “mmlscluster command” on page 484
• “mmstartup command” on page 715
Location
/usr/lpp/mmfs/bin
mmsmb command
Administers SMB shares, export ACLs, and global configuration.
Synopsis
mmsmb export list [ListofSMBExports ][ -Y ][ --option Arg ][ --export-regex Arg ]
[ --header N ][ --all ][ --key-info Arg]
or
or
or
or
or
or
or
or
or
or
or
or
Note: For mmsmb export, you can specify --option or --remove multiple times, but you cannot specify
both of these options simultaneously.
Availability
Available on all IBM Spectrum Scale editions.
The protocol functions provided in this command, or any similar command, are generally referred to as
CES (Cluster Export Services). For example, protocol node and CES node are functionally equivalent
terms.
Description
Use the mmsmb command to administer SMB shares and global configuration.
Using the mmsmb export command, you can do the following tasks:
• Create the specified SMB share. The mmsmb export add command creates the specified export for
the specified path. Any supported SMB option can be specified by repeating --option. Also the
substitution values %D for the domain, %U for session user name and %G for the primary group of %U
are supported as part of the specified path. The % character is not allowed in any other context. If the
export exists but the path does not exist or if it is not inside the GPFS file system, the command returns
with an error. When one or more substitution variables are used, only the sub-path to the first
substitution variable is checked. If authentication on the cluster is not enabled, this command will
terminate with an error.
• Change the specified SMB share using the mmsmb export change command.
• Delete an SMB share using the mmsmb export remove command. Existing connections to the
deleted SMB share will be disconnected. This can result in data loss for files being currently open on the
affected connections.
• List the SMB shares by using the mmsmb export list command. The command displays the
configuration of options for each SMB share. If no specific options are specified, the command displays
all SMB shares with the SMB options browseable; guest ok; smb encrypt as a table. Each row
represents an SMB share and each column represents an SMB option.
Using the mmsmb config command, you can do the following tasks:
• Change, add or remove the specified SMB option for the SMB configuration. Use the mmsmb config
change command to change the global configuration.
• List the global configuration of SMB shares. Use the mmsmb config list command to display the
global configuration options of SMB shares. If no specific options are specified, the command displays
all SMB option-value pairs.
• Enable SMB fruit support
Using the mmsmb exportacl command, you can do the following tasks:
• Retrieve the ID of the specified user/group/system.
• List, change, add, remove, replace and delete the ACL associated with an export.
• The add option has two mandatory arguments: --access and --permissions.
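As an illustrative sketch (the share name, path, and option value are hypothetical), an SMB share could
be created and then listed as follows:
mmsmb export add smbexport /gpfs/fs0/smbexport --option "browseable=yes"
mmsmb export list smbexport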
Parameters
mmsmb export
list
Lists the SMB shares.
ListofSMBExports
Specifies the SMB shares to be listed, as a list of blank-separated strings.
--option {key | all | unsupported}
key
Specifies only the supported SMB option to be listed.
all
Displays all used SMB options.
unsupported
Detects and displays all SMB shares with unsupported SMB options. The paths of all
unsupported options are listed for each SMB share.
--export-regex arg
arg is a regular expression against which the export names are matched. Only matching
exports are shown. If this option is not specified and if ListofSMBExports are also not specified,
all existing exports are displayed.
--all
Displays all defined SMB options. Similar to --option all.
add
Creates the specified SMB share on a GPFS file system with NFSv4 ACLs enforced. You can verify
whether your GPFS file system has been configured correctly by using the mmlsfs command. For
example, mmlsfs gpfs0 -k.
path
Specifies the path of the SMB share that needs to be added.
--option SMBoption=value
Specifies the SMB option for the SMB protocol. If it is not a supported SMB option or the value
is not allowable for this SMB option, the command terminates with an error. If this option is
not specified, the default options: guest ok = no and smb encrypt = auto are set.
Note: You cannot change the "guest ok" option by using the mmsmb command as it is an
unsupported option.
change
Modifies the specified SMB share.
--option SMBOption=value
Specifies the SMB option for the SMB protocol. If the SMB option is not configured for the
specified export, it will be added with the specified value. If the SMB option is not supported
or the value is not allowable for this SMB option the command terminates with an error. If no
value is specified, the specified SMB option is set to default by removing the current setting
from the configuration.
--remove SMBOption
Specifies the SMB option that is to be removed. If the SMB option is supported it will be
removed from the specified export. The default value becomes active. If the SMB option is not
supported, the command terminates with an error.
--vfs-fruit-enable
Enables the vfs_fruit module and alternate data streams support for better support of Mac
OS SMB2 clients. Setting this option requires the SMB service to be down on all CES nodes.
Disabling it requires the help of IBM support, because the Apple file metadata that has been moved
to extended attributes would no longer be accessible unless it is restored back to the files. For
more information, see the Support of vfs_fruit topic of the IBM Spectrum Scale: Administration
Guide.
remove
Deletes the specified SMB share.
--force
Suppresses confirmation questions.
SMBExport
Specifies the SMB share that is to be removed.
List of supported SMB options for the mmsmb export {list | add | change | remove}
command:
admin users
Using this option, administrative users can be defined in the format of admin
users=user1,user2,..,usern. This is a list of users who will be granted administrative
privileges on the share. This means that they will do all file operations as the super user (root). You
should use this option very carefully, as any user in this list will be able to do anything they like on
the share, irrespective of file permissions.
Default: admin users =
Example: admin users = win-dom\jason
browseable
If the value is set as yes, the export is shown in the Windows Explorer browser when browsing the
file server. By default, this option is enabled.
comment
Description of the export.
csc policy
csc policy stands for client-side caching policy, and specifies how clients that are capable of
offline caching cache the files in the share. The valid values are: manual and disable. Setting
csc policy = disable disables offline caching. For example, this can be used for shares
containing roaming profiles. By default, this option is set to the value manual.
fileid:algorithm
This option allows you to control the level of enforced data integrity. If the data integrity is ensured on
the application level, it can be beneficial in cluster environments to reduce the level of enforced
integrity for performance reasons.
fsname is the default value that ensures data integrity in the entire cluster by managing
concurrent access to files and directories cluster-wide.
The fsname_norootdir value disables synchronization of directory locks for the root directory
of the specified export only and keeps locking enabled for all files and directories within and
underneath the share root.
The fsname_nodirs value disables synchronization of directory locks across the cluster nodes,
but keeps locking enabled for files.
The hostname value completely disables cross-node locking for both directories and files on the
selected share.
Note: Data integrity is ensured if an application does not use multiple processes to access the
data at the same time, for example, reading of file content does not happen while another process
is still writing to the file. Without locking, the consistency of files is no longer guaranteed on
protocol level. If data integrity is not ensured on application level this can lead to data corruption.
For example, if two processes modify the same file in parallel, assuming that they have exclusive
access.
gpfs:leases
gpfs:leases enables cross-protocol oplocks (opportunistic locks). An SMB client can lock a file,
which gives the user improved performance while reading or writing the file because no other user
reads or writes the file at the same time. If the value is set as yes, clients accessing the file over the
other protocols can break the oplock of an SMB client, and the user is informed when another
user is accessing the same file at the same time.
gpfs:recalls
If the value is set as yes, files that have been migrated from disk are recalled on access. By
default, this is enabled. If gpfs:recalls = no, files are not recalled on access and the client
receives an ACCESS_DENIED message.
gpfs:sharemodes
An application can set share modes. If you set gpfs:sharemodes = yes, using the mmsmb
export change SMBexport --option "gpfs:sharemodes = yes" the sharemodes
specified by the application will be respected by all protocols and not only by the SMB protocol. If
you set gpfs:sharemodes = no the sharemodes specified by the application will only be
respected by the SMB protocol. For example, the NFS protocol will ignore the sharemode set by
the application.
The application can set the following sharemodes: SHARE_READ or SHARE_WRITE or
SHARE_READ and SHARE_WRITE or no sharemodes.
gpfs:syncio
If the value is set as yes, the files in an export for which the setting is enabled are
opened with the O_SYNC flag. Accessing a file can be faster if gpfs:syncio is set to yes.
Performance for certain workloads can be improved when SMB accesses the file with the O_SYNC
flag set. For example, updating only small blocks in a large file as observed with database
applications. The underlying GPFS behavior is then changed to not read a complete block if there
is only a small update to it. By default, this option is disabled.
hide unreadable
If the value is set as yes, all files and directories that the user has no permission to read are
hidden from directory listings in the export. The hide unreadable = yes option is also known as
access-based enumeration because when a user is listing (enumerating) the directories and files
within the export, they only see the files and directories that they have read access to. By default,
this option is disabled.
Warning: Enabling this option has a negative impact on directory listing performance,
especially for large directories as the ACLs for all directory entries have to be read and
evaluated. This option is disabled by default.
oplocks
If the value is set as yes, a client may request an opportunistic lock (oplock) from an SMB server
when it opens a file. If the server grants the request, the client can cache large chunks of the file
without informing the server what it is doing with the cached chunks until the task is completed.
Caching large chunks of a file saves a lot of network I/O round-trip time and enhances
performance. By default, this option is enabled.
Warning: While oplocks can enhance performance, they can also contribute to data loss
in case of SMB connection breaks/timeouts. To avoid the loss of data in case of an interface
node failure or storage timeout, you might want to disable oplocks.
Opportunistic locking allows a client to notify the SMB server that it will be the exclusive writer of
the file. It also notifies the SMB server that it will cache its changes to that file on its own system
and not on the SMB server to speed up file access for that client. When the SMB server is notified
about a file being opportunistically locked by a client, it marks its version of the file as having an
opportunistic lock and waits for the client to complete work on the file. The client has to send the
final changes back to the SMB server for synchronization. If a second client requests access to
that file before the first client has finished working on it, the SMB server can send an oplock
break request to the first client. This request informs the client to stop caching its changes and
return the current state of the file to the server so that the interrupting client can use it. An
opportunistic lock, however, is not a replacement for a standard deny-mode lock. There are
many cases in which the interrupting process is granted an oplock break only to discover
that the original process also has a deny-mode lock on the file.
posix locking
If the value is set as yes, the server tests whether a byte-range (fcntl) lock is already present on the
requested portion of the file before granting a byte-range lock to an SMB client. For improved
performance on SMB-only shares this option can be disabled, which is the default behavior.
Disabling locking on cross-protocol shares can result in data integrity issues when clients
concurrently set locks on a file via multiple protocols, for example, SMB and NFS.
read only
If the value is set as yes, files cannot be modified or created on this export independent of the
ACLs. By default, the value is no.
smb encrypt
This option controls whether the remote client is allowed or required to use SMB encryption.
Possible values are auto, mandatory, disabled, and desired. This value is set when the export is
created; the default value is auto. Clients may choose to encrypt the entire session, not just
traffic to a specific export. The server returns an access-denied message to all non-encrypted
requests on such an export. Selecting encrypted traffic reduces throughput, because smaller
packet sizes must be used and because of the overhead of encrypting and signing all the data. If SMB
encryption is selected, the message integrity is guaranteed so that signing is implicit. When set to
auto, SMB encryption is offered, but not enforced. When set to mandatory, SMB encryption is
required and if set to disabled, SMB encryption cannot be negotiated.
This setting controls SMB encryption setting for the individual SMB share. Supported values are:
• auto/default: This will enable negotiation of encryption but will not turn on data encryption
globally.
• mandatory: This will enable SMB encryption and turn on data encryption for this share. Clients
that do not support encryption will be denied access to this SMB share.
Note: This requires the global mmsmb config smb encrypt setting to be either auto/
default or mandatory.
• disabled: Disables the encryption feature for a specific SMB share.
• desired: Enables negotiation and turns on data encryption on sessions and share connections
for those clients that support it.
syncops:onclose
This option ensures that the file system synchronizes data to the disk each time a file is closed
after writing. The data written is flushed to the disk, but this can reduce performance for
workloads writing many files. Enabling this option will decrease the risk of possible data loss in
case of a node failure. This option is disabled by default. Enabling this should not be necessary, as
data is flushed to disk periodically by the file system and also when requested from an application
on a SMB client.
Default: "syncops:onclose = no"
Allowable values: yes, no
Example: "syncops:onclose = yes"
Warning: Enabling this option can have a negative performance impact on workloads
creating many files from an SMB client.
hide dot files
Setting this to "no" will remove hidden attributes for dot files in an SMB share so that those files
become visible when you access that share.
msdfs proxy
This option enables the IBM Spectrum Scale customers to configure DFS redirects for SMB shares.
If an SMB client attempts to connect to such a share, the client is redirected to one or multiple
proxy shares by using the SMB-DFS protocol.
The "msdfs proxy" option requires a value, which is the list of proxy shares. The syntax for
this list is:
\<smbserver1>\<someshare>[,\<smbserver2>\<someshare>,...]
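For example, a DFS redirect to two hypothetical proxy shares could be configured with a command of
this form (a sketch):
mmsmb export change dfsshare --option "msdfs proxy=\\server1\share1,\\server2\share1"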
mmsmb config
list
Lists the global configuration options of SMB shares.
ListofSMBOptions
Specifies the list of SMB options that needs to be listed.
--supported
Displays all changeable SMB options and their values.
change
Modifies the global configuration options of SMB shares.
--option SMBOption=value
Sets the value of the specified SMB option. If no value is given, the SMB option is removed.
Specifies the SMB option for the SMB protocol. If the SMB option is not configured for global
configuration, it will be added with the specified value. If no value is specified, the specified
SMB option is set to default and removed from the global configuration. If the SMB option is
not supported or the value is not allowable for the SMB option, the command terminates with
an error.
--remove SMBOption
Specifies the SMB option that is to be removed. If the SMB option is supported it will be
removed from the global configuration. The default value becomes active. If the SMB option is
not supported, the command terminates with an error.
List of supported SMB options by the mmsmb config {list | change} command:
gpfs:dfreequota
gpfs:dfreequota stands for disk free quota. If the value is set to yes, the free space and size
reported to an SMB client for a share are adjusted according to the applicable quotas. The
applicable quotas are the quota of the user requesting this information, the quota of the user's
primary group and the quota of the fileset containing the export.
restrict anonymous
The setting of this parameter determines whether access to information is allowed or restricted
for anonymous users. The options are:
restrict anonymous = 2: anonymous users are restricted from accessing information. This is
the default setting.
restrict anonymous = 0: anonymous users are allowed to access information.
restrict anonymous = 1: is not supported
server string
server string stands for Server Description. It specifies the server description for SMB
protocol. Server description with special characters must be provided in single quotes.
smb encrypt
This setting controls the global SMB encryption setting that applies to each SMB connection.
Supported values are:
• auto/default: This will enable negotiation of encryption but will not turn on data encryption
globally.
• mandatory: This will enable negotiation and turn on data encryption for all SMB shares. Clients
that do not support encryption will be denied access to the server.
• disabled: This will completely disable the encryption feature for all connections.
Note: With smb encrypt = disabled on the share and smb encrypt = mandatory in the
global section, access will be denied for all clients.
mmsmb exportacl
getid
Retrieve the ID of the specified user/group/system.
list
List can only take viewing options.
mmsmb exportacl list myExport
Shows the export ACL for this export name.
mmsmb exportacl list
Shows all the export ACLs.
mmsmb exportacl list myExport --viewsddl
Shows the export ACL for this export name in SDDL format.
add
Add will add a new permission to the export ACL. It will include adding a user, group or system.
The options are:
• {User/group/system name} If you do not specify the type of name, the system will prioritize in
this order:
• --user
• --group
• --system, or
• --SID (this can be the SID for a user, group or system).
Mandatory arguments are:
• --access: ALLOWED or DENIED
• --permissions: One of FULL, CHANGE, or READ or any combination of RWXDPO.
Examples:
change
Change will update the specified ACE in an export ACL. The options are:
• {User/group/system name} If you do not specify the type of name, the system will prioritize in
this order:
• --user
• --group
• --system
• --SID (This can be the SID for a user, group or system).
Mandatory arguments are:
• --access: ALLOWED or DENIED
• --permissions: One of FULL, CHANGE, or READ or any combination of RWXDPO.
Examples:
mmsmb exportacl change myExport --user myUser --access ALLOWED --permissions RWX
mmsmb exportacl change myExport --group allUsers --access ALLOWED --permissions R
remove
Remove will remove the ACE for the specified user/group/system from the ACL.
The user, group, or system will be removed automatically for a specified name.
If the system is unable to locate an ACE within the export ACL from which to remove the
permissions as instructed, an error is issued to inform the user.
The options are:
• {User/group/system name} If you do not specify the type of name, the system will prioritize in
this order:
• --user
• --group
• --system
• --SID (This can be the SID for a user, group or system).
Optional arguments are:
• --access: ALLOWED or DENIED
• --permissions: One of FULL, CHANGE, or READ or any combination of RWXDPO.
Examples:
replace
The replace command replaces all the permissions in an export ACL with those indicated in its ACE
specification. It is therefore a potentially destructive command and includes a confirmation.
This confirmation can be overridden with the --force option. The options are:
• {User/group/system name} If you do not specify the type of name, the system will prioritize in
this order:
• --user
• --group
• --system
• --SID (This can be the SID for a user, group or system)
• --force.
Mandatory arguments are:
• --access: ALLOWED or DENIED
• --permissions: One of FULL, CHANGE, or READ or any combination of RWXDPO.
Examples:
mmsmb exportacl replace myExport01 --user user01 --access ALLOWED --permissions FULL
mmsmb exportacl replace myExport02 --group group01 --access ALLOWED --permissions READ --
force
mmsmb exportacl replace myUser --access ALLOWED --permissions FULL
delete
The delete command removes an entire export ACL. It therefore does not require a system, ID,
or user to be identified, because they are not appropriate in this case. All it needs is the name of
the export for which the export ACL will be deleted.
Delete includes a confirmation. This can be overridden by using the --force parameter.
Examples:
Parameters common for both mmsmb export and mmsmb config commands
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each
column is described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
--header n
Repeats the output table header every n lines for a table that is spread over multiple pages. The
value n can be any integer.
--key-info arg
Displays the supported SMB options and their possible values.
arg SMBoption | supported
SMBoption
Specifies the SMB option.
supported
Displays descriptions for all the supported SMB options.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmsmb command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
mmsmb config list
1. Show descriptions of all the supported SMB configuration options.
2. List the SMB option that specifies whether anonymous access is allowed or not.
add share command:aio read size:aio write size:aio_pthread%3Aaio open:async smb echo handler:
auth methods:change notify:change share command:client NTLMv2 auth:ctdb locktime warn threshold:
debug hires timestamp:delete share command:dfree cache time:disable netbios:disable spoolss:
dmapi support:ea support:fileid%3Amapping:force unknown acl user:gencache%3Astabilize_count:
gpfs%3Adfreequota:gpfs%3Ahsm:gpfs%3Aleases:gpfs%3Aprealloc:gpfs%3Asharemodes:
Note: The output of this command depends on the configuration options supported by the system.
2. Change an SMB configuration option.
You can confirm the change by using this mmsmb config list "restrict anonymous" command.
3. Remove an SMB configuration option.
Warning:
Unused options suppressed in display:
server string
2. To list the SMB option csc policy for the SMB share myexport and for all SMB shares whose names
start with foo, followed by any number of 1 characters, and end in 2 or 3, issue this command:
3. To list all exports where unsupported SMB options are set, issue this command:
4. To list the DFS redirect option setting for a share, issue this command:
5. To list all share options (including the "msdfs proxy" setting), issue this command:
2. To add a new share and set "msdfs proxy" share option, issue this command:
You can confirm the change by using this mmsmb export list --option "oplocks" myExport
command. The system displays output similar to this:
2. To remove the SMB option oplocks from SMB share myExport, issue the following commands:
You can confirm the change by using this mmsmb export list --option all myExport
command.
3. To change the "msdfs proxy" share option, issue this command:
See also
• “mmnfs command” on page 552
• “mmces command” on page 132
• “mmlsfs command” on page 498
Location
/usr/lpp/mmfs/bin
mmsnapdir command
Controls how the special directories that connect to snapshots appear.
Synopsis
mmsnapdir Device [-r | -a]
[--show-global-snapshots {rootfileset | allfilesets}]
[{[--fileset-snapdir FilesetSnapDirName] [--global-snapdir GlobalSnapDirName]}
| {{--snapdir | -s} SnapDirName}]
or

mmsnapdir Device [-q]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmsnapdir command to control how the special directories that connect to snapshots appear.
Both the name of the directories and where they are accessible can be changed.
Global snapshots appear in a subdirectory in the root directory of the file system, whose default name
is .snapshots. Fileset snapshots appear in a similar .snapshots subdirectory located in the root
directory of each independent fileset. These special subdirectories are collectively referred to as
snapdirs. Note that the root directory of the file system and the root directory of the root fileset are the
same, so global snapshots and fileset snapshots of the root fileset will both appear in the same snapdir.
If you prefer to access the snapshots from each directory rather than traversing through the root
directory, you can use an invisible directory to make the connection by issuing the mmsnapdir command
with the -a option (see "Examples"). The -a option enables an invisible directory in each directory in the
active file system (they do not appear in directories in snapshots) that contains a subdirectory for each
existing snapshot of the file system (in the root fileset) or fileset (in other independent filesets). These
subdirectories correspond to the copy of the active directory in the snapshot with the same name. For
example, if you enter ls -a /fs1/userA, the (invisible) .snapshots directory is not listed. However,
you can use ls /fs1/userA/.snapshots, for example, to confirm that .snapshots is present and
contains the snapshots holding copies of userA. When the -a option is enabled, the
paths /fs1/.snapshots/Snap17/userA and /fs1/userA/.snapshots/Snap17 refer to the same
directory, namely userA at the time when Snap17 was created. The -r option (root-directories-only),
which is the default, reverses the effect of the -a option (all-directories), and disables access to
snapshots via snapdirs in non-root directories.
If you prefer to access global snapshots from the root directory of all independent filesets, use the
mmsnapdir command with the --show-global-snapshots allfilesets option. With this option,
global snapshots will also appear in the snapdir in the fileset root directory. The global snapshots will also
appear in the snapdirs in each non-root directory if all-directories (the -a option) is enabled. To return to
the default setting use --show-global-snapshots rootfileset, and global snapshots will only be
available in root of the file system, or the root fileset, if all-directories is enabled.
The name of the snapdir directories can be changed using the --snapdir (or -s) option. This name is
used for both global and fileset snapshots in both fileset root directories and, if all-directories is enabled,
non-root directories also. The snapdir name for global and fileset snapshots can be specified separately
using the --global-snapdir and --fileset-snapdir options. If these names are different, two
snapdirs will appear in the file system root directory, with the global and fileset snapshots listed
separately. When --show-global-snapshots is set to allfilesets, two snapdirs will appear in
fileset root directories also, and when all-directories (the -a option) is specified, the two snapdirs will be
available in non-root directories as well. If --global-snapdir is specified by itself, the fileset snapdir
name is left unchanged, and vice versa if --fileset-snapdir option is used. Setting both snapdirs to
the same name is equivalent to using the --snapdir option. The snapdir name enabled in non-root
directories by all-directories is always the same as the name used in root directories.
For more information on global snapshots, see Creating and maintaining snapshots of file systems in the
IBM Spectrum Scale: Administration Guide.
For more information on fileset snapshots, see Fileset-level snapshots in the IBM Spectrum Scale:
Administration Guide.
Parameters
Device
The device name of the file system. File system names need not be fully-qualified. fs0 is just as
acceptable as /dev/fs0.
This must be the first parameter.
-a
Adds a snapshots subdirectory to all subdirectories in the file system.
-r
Reverses the effect of the -a option. All invisible snapshot directories are no longer accessible. The
snapshot directory under the file system root directory is not affected.
--show-global-snapshots {rootfileset | allfilesets}
This option controls whether global snapshots are accessible through a subdirectory under the root
directory of all independent filesets (allfilesets) or only in the file system root (rootfileset).
For example, issuing the following command:
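(The following invocation is an illustrative sketch; the file system name fs1 and the snapdir names .gsnaps and .fsnaps are assumed values chosen to match the paths described below.)

mmsnapdir fs1 --show-global-snapshots allfilesets --global-snapdir .gsnaps --fileset-snapdir .fsnaps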
specifies that the root directory of each independent fileset will contain a .gsnaps subdirectory
listing all global snapshots, such as /fs1/junctions/FsetA/.gsnaps and /fs1/junctions/
FsetA/.fsnaps. This can be used to make global snapshots accessible to clients, for example NFS
users, that do not have access to the file system root directory.
Specifying rootfileset reverses this feature, restoring the default condition, so that global
snapshots are only visible in the root fileset.
--fileset-snapdir FilesetSnapDirName
--global-snapdir GlobalSnapDirName
The --global-snapdir option specifies the name for the directory where global snapshots are
listed. The --fileset-snapdir option specifies the name for the directory where fileset snapshots
are listed. These options can be specified together or separately, in which case only the corresponding
snapdir is changed. Neither option may be specified with --snapdir, which sets both to the same
name.
For example, after issuing the command:
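(The following invocation is an illustrative sketch; fs1, .gsnaps, and .fsnaps are assumed names.)

mmsnapdir fs1 --global-snapdir .gsnaps --fileset-snapdir .fsnaps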
the directory /fs1/.gsnaps will list all global snapshots and /fs1/.fsnaps will only list fileset
snapshots of the root fileset. Fileset snapshots of other independent filesets will be listed in .fsnaps
under the root directory of each independent fileset, such as /fs1/junctions/FsetA/.fsnaps.
--snapdir | -s SnapDirName
Changes the name of the directory for both global and fileset snapshots to SnapDirName. This affects
both the directory in the file system root as well as the invisible directory in the other file system
directories if the -a option has been enabled. The root and non-root snapdirs cannot be given
different names.
-q
Displays current snapshot settings. The -q option cannot be specified with any other options. This is
the default if no other options are specified.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
If you are a root user, the node on which the command is issued must be able to execute remote shell
commands on any other node in the cluster without the use of a password and without producing any
extraneous messages. For more information, see the topic Requirements for administering a GPFS file
system in the IBM Spectrum Scale: Administration Guide.
You must be a root user to use all of the mmsnapdir options. Non-root users can only use the -q option.
If you are a non-root user, you may only specify file systems that belong to the same cluster as the node
on which the mmsnapdir command was issued.
Examples
1. To rename the .snapshots directory (the default snapshots directory name) to .link for file system
fs1, issue the command:
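(A sketch of a suitable invocation, using the --snapdir | -s option described above:)

mmsnapdir fs1 -s .link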
After the command has been issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.link/snap1/file1
/fs1/.link/snap1/userA/file2
/fs1/.link/snap1/userA/file3
2. To add the .link subdirectory to all subdirectories in the file system, issue:
mmsnapdir fs1 -a
After the command has been issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/userA/.link/snap1/file2
/fs1/userA/.link/snap1/file3
/fs1/.link/snap1/file1
/fs1/.link/snap1/userA/file2
/fs1/.link/snap1/userA/file3
The .link subdirectory under the root directory and under each subdirectory of the tree provides two
different paths to each snapshot copy of a file. For example, /fs1/userA/.link/snap1/file2
and /fs1/.link/snap1/userA/file2 are two different paths that access the same snapshot copy
of /fs1/userA/file2.
3. To reverse the effect of the previous command, issue:
mmsnapdir fs1 -r
After the command has been issued, the directory structure would appear similar to:
/fs1/file1
/fs1/userA/file2
/fs1/userA/file3
/fs1/.link/snap1/file1
/fs1/.link/snap1/userA/file2
/fs1/.link/snap1/userA/file3
4. To display the current snapshot directory settings for file system fs1, issue:
mmsnapdir fs1 -q
If there are independent filesets, fileset snapshots, or the global and fileset snapshot directory names
are different in the file system, the system displays output similar to:
See also
• “mmcrsnapshot command” on page 337
• “mmdelsnapshot command” on page 378
• “mmlssnapshot command” on page 532
• “mmrestorefs command” on page 665
Location
/usr/lpp/mmfs/bin
mmstartup command
Starts the GPFS subsystem on one or more nodes.
Synopsis
mmstartup [-a | -N {Node[,Node...] | NodeFile | NodeClass}] [-E EnvVar=value ...]
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmstartup command to start the GPFS daemons on one or more nodes. If no operand is
specified, GPFS is started only on the node from which the command was issued.
Parameters
-a
Start GPFS on all nodes in a GPFS cluster.
-N {Node[,Node...] | NodeFile | NodeClass}
Directs the mmstartup command to process a set of nodes.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
This command does not support a NodeClass of mount.
-E EnvVar=value
Specifies the name and value of an environment variable to be passed to the GPFS daemon. You can
specify multiple -E options.
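For example, a hypothetical variable could be passed to the daemon on two nodes as follows (the variable name, value, and node names are illustrative placeholders):

mmstartup -N node1,node2 -E MYVAR=1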
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmstartup command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To start GPFS on all nodes in the GPFS cluster, issue this command:
mmstartup -a
Thu Aug 12 13:22:40 EDT 2004: 6027-1642 mmstartup: Starting GPFS ...
See also
• “mmgetstate command” on page 425
• “mmlscluster command” on page 484
• “mmshutdown command” on page 695
Location
/usr/lpp/mmfs/bin
mmtracectl command
Sets up and enables GPFS tracing.
Synopsis
mmtracectl { --start | --stop | --off | --set | --status}
[--trace={io | all | def | "Class Level [Class Level ...]" }]
[--trace-recycle={off | local | global | globalOnShutdown }]
[--aix-trace-buffer-size=BufferSize]
[--tracedev-buffer-size=BufferSize]
[--trace-file-size=FileSize] [--trace-dispatch={yes | no }]
[--tracedev-compression-level=Level]
[--tracedev-write-mode={blocking | overwrite }]
[--tracedev-timeformat={relative | absolute | calendar }]
[--tracedev-overwrite-buffer-size=Size]
[--format | --noformat]
[-N {Node [,Node...] | NodeFile | NodeClass }]
Availability
Available on all IBM Spectrum Scale editions.
Description
Attention: Use this command only under the direction of the IBM Support Center.
Results
GPFS tracing can be started or stopped, and related configuration options can be set.
Parameters
--start | --stop | --off | --set | --status
Specifies the actions that the mmtracectl command performs, where:
--start
Starts the trace.
--stop
Stops the trace.
--off
Clears all of the setting variables and stops the trace.
--set
Sets the trace variables.
--status
Displays the tracing status of the specified nodes. This parameter is rejected with a usage error
message unless all the nodes in the cluster are running IBM Spectrum Scale 5.0.0 or later.
In the following table, the letter X indicates that the specified type of information is displayed:
--tracedev-buffer-size=BufferSize
Specifies the trace buffer size for Linux trace in blocking mode. If --tracedev-write-mode is set to
blocking, this parameter will be used. It should be no less than 4K and no more than 64M. The default
is 4M.
Note: This option applies only to Linux nodes.
--trace-file-size=FileSize
Controls the size of the trace file. The default is 128M on Linux and 64M on other platforms.
--trace-dispatch={yes | no}
Enables AIX thread dispatching trace hooks.
--tracedev-compression-level=Level
Specifies the trace raw data compression level. Valid values are 0 to 9. A value of zero indicates no
compression. A value of 9 provides the highest compression ratio, but at a lower speed. The default is
6.
Note: This option applies only to Linux nodes.
--tracedev-write-mode={blocking | overwrite}
Specifies when to overwrite the old data, where:
blocking
Specifies that if the trace buffer is full, wait until the trace data is written to the local disk and the
buffer becomes available again to overwrite the old data.
overwrite
Specifies that if the trace buffer is full, overwrite the old data. This is the default.
Note: This option applies only to Linux nodes.
--tracedev-timeformat={relative | absolute | calendar}
Controls time formatting in the trace records. The following values are accepted:
relative
Displays the trace time stamp in relative format, showing the number of seconds from the
beginning time stamp. This is the default.
absolute
Displays the trace time stamp in absolute format, showing the number of seconds since 1/1/1970.
calendar
Displays the trace time stamp in local calendar format, showing day of the week, month, day,
hours, minutes, seconds, and year.
--tracedev-overwrite-buffer-size=Size
Specifies the trace buffer size for Linux trace in overwrite mode. If --tracedev-write-mode is set to
overwrite, this parameter will be used. It should be no less than 16M. The default is 64M.
Note: This option applies only to Linux nodes.
--format | --noformat
Enables or disables formatting.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes that will participate in the tracing of the file system. This option supports all
defined node classes (with the exception of mount). The default value is all.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmtracectl command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
To set trace levels to the defined group def and start the traces on all nodes when GPFS starts, issue the
following command:
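A command of the following form sets these values (an illustrative sketch that combines the --set, --trace, and --trace-recycle options described above):

mmtracectl --set --trace=def --trace-recycle=global

The resulting configuration can then be confirmed with the mmlsconfig command: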
mmlsconfig trace,traceRecycle
trace all 4 tm 2 thread 1 mutex 1 vnode 2 ksvfs 3 klockl 2 io 3 pgalloc 1 mb 1 lock 2 fsck 3
traceRecycle global
mmtracectl --start
See also
• “mmchconfig command” on page 169
See the mmtrace shell script.
Location
/usr/lpp/mmfs/bin
mmumount command
Unmounts GPFS file systems on one or more nodes in the cluster.
Synopsis
mmumount {Device | MountPoint | DriveLetter |
all | all_local | all_remote | {-F DeviceFileName}}
[-f] [-a | -N {Node[,Node...] | NodeFile | NodeClass}]
or
Availability
Available on all IBM Spectrum Scale editions.
Description
Another name for the mmumount command is the mmunmount command. Either name can be used.
The mmumount command unmounts a previously mounted GPFS file system on one or more nodes in the
cluster. If no nodes are specified, the file systems are unmounted only on the node from which the
command was issued. The file system can be specified using its device name or the mount point where it
is currently mounted.
Use the first form of the command to unmount file systems on nodes that belong to the local cluster.
Use the second form of the command with the -C option when it is necessary to force an unmount of file
systems that are owned by the local cluster, but are mounted on nodes that belong to another cluster.
When a file system is unmounted by force with the second form of the mmumount command, the affected
nodes may still show the file system as mounted, but the data will not be accessible. It is the
responsibility of the system administrator to clear the mount state by issuing the umount command.
When multiple nodes are affected and the unmount target is identified via a mount point or a Windows
drive letter, the mount point is resolved on each of the target nodes. Depending on how the file systems
were mounted, this may result in different file systems being unmounted on different nodes. When in
doubt, always identify the target file system with its device name.
Parameters
Device | MountPoint | DriveLetter | all | all_local | all_remote | {-F DeviceFileName}
Indicates the file system or file systems to be unmounted.
Device
Is the device name of the file system to be unmounted. File system names do not need to be fully
qualified. fs0 is as acceptable as /dev/fs0.
MountPoint
Is the location where the GPFS file system to be unmounted is currently mounted.
DriveLetter
Identifies a file system by its Windows drive letter.
all
Indicates all file systems that are known to this cluster.
all_local
Indicates all file systems that are owned by this cluster.
all_remote
Indicates all file systems that are owned by another cluster to which this cluster has access.
-F DeviceFileName
Specifies a file containing the device names, one per line, of the file systems to be unmounted.
This must be the first parameter.
Options
-a
Unmounts the file system on all nodes in the GPFS cluster.
-f
Forces the unmount to take place even though the file system may still be in use.
Use this flag with extreme caution. Using this flag may cause outstanding write operations to be lost,
so forcing an unmount can cause data integrity failures.
The mmumount command relies on the native umount command to carry out the unmount operation.
The semantics of forced unmount are platform-specific. On some platforms (such as Linux), even
when forced unmount is requested, a file system cannot be unmounted if it is still referenced by the
system kernel. Examples of such cases are:
• Open files are present in the file system
• A process uses a subdirectory in the file system as the current working directory
• The file system is NFS-exported
To unmount a file system successfully in such a case, it may be necessary to identify and stop the
processes that are referencing the file system. System utilities like lsof and fuser could be used for
this purpose.
-C {all_remote | ClusterName}
Specifies the cluster on which the file system is to be unmounted by force. all_remote denotes all
clusters other than the one from which the command was issued.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes on which the file system is to be unmounted.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
This command does not support a NodeClass of mount.
When the -N option is specified in conjunction with -C ClusterName, the specified node names are
assumed to refer to nodes that belong to the specified remote cluster (as identified by the
mmlsmount command). The mmumount command cannot verify the accuracy of this information.
NodeClass and NodeFile are not supported in conjunction with the -C option.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmumount command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. To unmount file system fs1 on all nodes in the cluster, issue this command:
mmumount fs1 -a
Fri Feb 10 15:51:25 EST 2006: mmumount: Unmounting file systems ...
2. To force unmount file system fs2 on the local node, issue this command:
mmumount fs2 -f
Fri Feb 10 15:52:20 EST 2006: mmumount: Unmounting file systems ...
forced unmount of /fs2
See also
• “mmmount command” on page 537
• “mmlsmount command” on page 509
Location
/usr/lpp/mmfs/bin
mmunlinkfileset command
Removes the junction to a GPFS fileset.
Synopsis
mmunlinkfileset Device {FilesetName | -J JunctionPath} [-f]
Availability
Available on all IBM Spectrum Scale editions.
Description
The mmunlinkfileset command removes the junction to the fileset. The junction can be specified by
path or by naming the fileset that is its target. The unlink fails if there are files open in the fileset, unless
the -f flag is specified. The root fileset may not be unlinked.
Attention: If you are using the IBM Spectrum Protect Backup-Archive client, use caution when you
unlink filesets that contain data backed up by IBM Spectrum Protect. IBM Spectrum Protect tracks
files by pathname and does not track filesets. As a result, when you unlink a fileset, it appears to
IBM Spectrum Protect that you deleted the contents of the fileset. Therefore, the IBM Spectrum
Protect Backup-Archive client inactivates the data on the IBM Spectrum Protect server which may
result in the loss of backup data during the expiration process.
For information on GPFS filesets, see the IBM Spectrum Scale: Administration Guide.
Parameters
Device
The device name of the file system that contains the fileset.
File system names need not be fully-qualified. fs0 is as acceptable as /dev/fs0.
FilesetName
Specifies the name of the fileset to be removed.
-J JunctionPath
Specifies the name of the junction to be removed.
A junction is a special directory entry that connects a name in a directory of one fileset to the root
directory of another fileset.
-f
Forces the unlink to take place even though there may be open files. This option forcibly closes any
open files, causing an errno of ESTALE on their next use of the file.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the mmunlinkfileset command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
1. This command indicates the current configuration of filesets for file system gpfs1:
mmlsfileset gpfs1
2. This command indicates the current configuration of filesets for file system gpfs1:
mmlsfileset gpfs1
This command unlinks junction path /gpfs1/fset1 from file system gpfs1:
mmunlinkfileset gpfs1 -J /gpfs1/fset1
This command confirms the result:
mmlsfileset gpfs1
See also
• “mmchfileset command” on page 222
• “mmcrfileset command” on page 308
• “mmdelfileset command” on page 365
• “mmlinkfileset command” on page 477
• “mmlsfileset command” on page 493
Location
/usr/lpp/mmfs/bin
mmuserauth command
Manages the authentication configuration of file and object access protocols. The configuration allows
protocol access methods to authenticate users who need to access data that is stored on the system over
these protocols.
Synopsis
mmuserauth service create --data-access-method {file|object}
--type {ldap|local|ad|nis|userdefined}
--servers [IP address/hostname]
{[--pwd-file PasswordFile] --user-name | --enable-anonymous-bind}
[--base-dn]
[--enable-server-tls] [--enable-ks-ssl]
[--enable-kerberos] [--enable-nfs-kerberos]
[--user-dn] [--group-dn] [--netgroup-dn]
[--netbios-name] [--domain]
[--idmap-role {master|subordinate}] [--idmap-range] [--idmap-range-size]
[--user-objectclass] [--group-objectclass] [--user-name-attrib]
[--user-id-attrib] [--user-mail-attrib] [--user-filter]
[--ks-dns-name] [--ks-ext-endpoint]
[--kerberos-server] [--kerberos-realm]
[--unixmap-domains] [--enable-overlapping-unixmap-ranges] [--ldapmap-domains]
Or
Or
Or
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the mmuserauth commands to create and manage IBM Spectrum Scale protocol authentication and
ID mappings.
Parameters
service
Manages the authentication configuration of file and object access protocols with one of the following
actions:
create
Configures authentication for file and object access protocols. The authentication method for file
and object access protocols cannot be configured together. The mmuserauth service create
command must be run separately to configure authentication for the file access protocols and for
the object access protocol.
list
Displays the details of the authentication method that is configured for both file and object access
protocols.
check
Verifies the authentication method configuration details for file and object access protocols.
Validates the connectivity to the configured authentication servers. It also supports corrections to
the configuration details on the erroneously configured protocol nodes.
remove
Removes the authentication method configuration of file and object access protocols and ID maps
if any.
If you plan to remove both the authentication method configuration and the ID maps, remove the
authentication method configuration first and then remove the ID maps. That is, first submit the
mmuserauth service remove command without the --idmapdelete option to remove the
authentication method configuration, and then submit the same command with the
--idmapdelete option to remove the ID maps.
CAUTION: Deleting the authentication method configuration with the ID maps can lead to
irrecoverable loss of access to data. Use this option with proper planning.
--data-access-method {file|object}
Specifies the access protocols for which the authentication method needs to be configured. The IBM
Spectrum Scale system supports file access protocols such as SMB and NFS along with Object access
protocols to access data that is stored on the system.
The file data access method is meant for authorizing the users who access data over SMB and NFS
protocols.
--type {ldap|local|ad|nis|userdefined}
Specifies the authentication method to be configured for accessing data over file and object access
protocols.
ldap - Defines an external LDAP server as the authentication server. This authentication type is valid
for both file and object access protocols.
ad - Defines an external Microsoft Active Directory server as the authentication server. This
authentication type is valid for both file and object access protocols.
local - Defines an internal database stored on IBM Spectrum Scale protocol nodes for
authenticating users who access data over the object access protocol. This authentication type is
valid for the object access protocol only.
nis - Defines an external NIS server as the authentication server. This authentication type is valid
for the NFS file access protocol only. The NIS configuration with an IPv6 address is not supported.
userdefined - Defines user-defined (system administrator defined) authentication method for data
access. This authentication type is valid for both file and object access protocols.
--servers [AuthServer1[:Port],AuthServer2[:Port],AuthServer3[:Port] ...]
Specifies the host name or IP address of the authentication server that is used for file and object
access protocols.
This option is only valid with --type {ldap|ad|nis}.
With --type ldap, the input value format is "serverName/serverIP:[port]".
Specifying the port value is optional. Default port is 389.
For example,
--servers ldapserver.mydomain.com:1389.
For file access protocol, multiple LDAP servers can be specified by using a comma as a separator.
For the object access protocol, only one authentication server must be specified. If multiple servers
are specified by using a comma, only the first server in the list is considered as the authentication
server for configuration.
With --type ad, the input value format is "serverName/serverIP".
For example,
--servers ldapserver.mydomain.com.
For the file access protocol, only one authentication server must be specified. Specifying multiple
servers is invalid. The AD server that is accepted during configuration is used to fetch the details
required for validation and configuration of the authentication method. After successful
configuration, each CES node queries DNS to look up the available domain controllers serving the AD
domain it is joined to, and binds with the best available domain controller from the returned list.
For the object access protocol, only one authentication server must be specified. If multiple servers
are specified by using a comma, only the first server in the list is considered as the authentication
server.
With --type nis, the input value format is "serverName/serverIP".
For example,
--servers nisserver.mydomain.com.
Multiple NIS servers can be specified by using a comma separator. At least one of the specified
servers must be available and reachable while configuring the authentication method. This is
important for the verification of the specified NIS domain, against which the availability of either
passwd.byname or netgroup map is validated.
When you enter an IPv6 address, ensure that the address is enclosed within square brackets to work
correctly.
For example,
--servers [2001:192::e61f:122:feb7:5df5]
--base-dn ldapBase
Specifies the LDAP base DN of the authentication server. This option is only valid with --type
{ldap|ad} for --data-access-method object and --type ldap for --data-access-method
file.
--enable-anonymous-bind
Specifies whether to enable anonymous binding with authentication server for various validation
operations.
This option is only valid with --type {ldap|ad} and --data-access-method {object}. This
option is mutually exclusive with --user-name and password combination.
--user-name userName
Specifies the user name to be used to perform operations against the authentication server. This
option is only valid with --type {ldap|ad} and --data-access-method {file|object}.
This option combined with password is mutually exclusive with --enable-anonymous-bind. The
specified user name must have sufficient permissions to read user and group attributes from the
authentication server.
In case of --type {ad|ldap} with --data-access-method object, the user name must be
specified in complete DN format.
In the case of --type ad with --data-access-method file, the specified user name is used to join the
cluster to the AD domain. This creates a machine account for the cluster, based on the
--netbios-name specified in the command. After successful configuration, the cluster connects with
its machine account, not with the user that was used during the domain join. Therefore, after the
domain join, the specified user name plays no role in communication with the AD domain controller
and can even be deleted from the AD server; the cluster continues to use AD for authentication
through the machine account that was created.
--pwd-file PasswordFile
Specifies the file containing passwords of administrative users for authentication configuration of file
and object access protocols. The password file must be saved under /var/mmfs/ssl/keyServ/tmp
on the node from which you are running the command. If this option is omitted, the command
prompts for a password. The password file is a security-sensitive file and hence, must have the
following characteristics:
• It must be a regular file.
• It must be owned by the root user.
• Only the root user must have permission to read or write it.
A password file for file protocol configuration must have the following format:
%fileauth:
password=userpassword
where:
fileauth
Stanza name for file protocol
password
Specifies the password of --user-name.
Note: With --type ad for file authentication, the specified password is only required during the
domain joining period. After joining the domain, the password of the machine account of the cluster is
used for accessing Active Directory.
A password file for object protocol configuration must have the following format:
%objectauth:
password=userpassword
ksAdminPwd=ksAdminPwdpassword
ksSwiftPwd=ksSwiftPwdpassword
where:
objectauth
Stanza name for object protocol
password
Specifies the password of --user-name.
ksAdminPwd
Specifies the Keystone Administrator's password.
ksSwiftPwd
Specifies the Swift service user's password.
Note: Passwords cannot contain any of the following characters: / : \ @ $ { } and space.
The passwords are stored in the associated Keystone and Swift configuration files. You can change
these passwords by using the following commands:
• To change the stored AD or LDAP password, issue the following command:
# mmobj config change --ccrfile keystone.conf --section ldap --property password --value NewPassword
• To change the stored password for the Swift user, issue the following command:
# mmobj config change --ccrfile proxy-server.conf --section filter:authtoken --property password
--value NewPassword
--enable-server-tls
Specifies whether to enable TLS communication with the authentication server. With --data-
access-method object, this option is only valid with --type {ldap|ad}. With --data-
access-method file, this option is only valid with --type {ldap}.
This option is disabled by default.
For file access protocol configuration, ensure that the CA certificate is placed in the /var/mmfs/tmp/
directory with the name ldap_cacert.pem on the node that the command is to be run.
For object access protocol configuration, ensure that the CA certificate is placed in the /var/
mmfs/tmp/ directory with the name object_ldap_cacert.pem on the node that the command is
to be run.
--enable-nfs-kerberos
Specifies whether to enable Kerberized logins for users gaining access by using the NFSv3 and NFSv4
file access protocols.
This option is only valid with --type {ad} and --data-access-method {file}.
This option is disabled by default.
Note: Kerberized NFSv3 and NFSv4 access is only supported for users from AD domains that are
configured for fetching the UID/GID information from Active Directory (RFC2307 schema attributes).
Such AD domain definition is specified by using the --unixmap-domains option.
--user-dn ldapUserDN
Specifies the LDAP user DN. Restricts the search of users to the specified sub-tree. For CIFS
access, the value of this parameter is ignored and a search is performed on the baseDN.
This option is only valid with --type {ldap} and --data-access-method {file}. If this
parameter is not set, the system uses the value that is set for baseDN as the default value.
--group-dn ldapGroupDN
Specifies the LDAP group suffix. Restricts search of groups within a specified sub-tree.
This option is only valid with --type {ldap} and --data-access-method {file}. If this
parameter is not set, the system uses the value that is set for baseDN as the default value.
--netgroup-dn ldapGroupDN
Specifies the LDAP netgroup suffix. The system searches the netgroups based on this suffix. The value
must be specified in complete DN format.
This option is only valid with --type {ldap} and --data-access-method {file}. Default value
is baseDN.
--user-objectclass userObjectClass
Specifies the object class of users on the authentication server. Only users with the specified object
class, along with any other filters, are treated as valid users.
If the --data-access-method is object, this option is only valid with --type {ldap|ad}.
If the --data-access-method is file, this option is only valid with --type {ldap}. With --type
ldap, the default value is posixAccount and with --type ad the default value is
organizationalPerson.
--group-objectclass groupObjectClass
Specifies the object class of group on the authentication server. This option is only valid with --type
{ldap} and --data-access-method {file}.
--netbios-name netBiosName
Specifies the unique identifier of the resources on a network that are running NetBIOS. This option is
only valid with --type {ad|ldap} and --data-access-method {file}.
The NetBIOS name is limited to 15 ASCII characters and must not contain any white space or one of
the following characters: / : * ? . " ; |
If AD is selected as the authentication method, the NetBIOS name must be selected carefully. If there
are name collisions across multiple IBM Spectrum Scale clusters, or between the AD Domain and the
NetBIOS name, the configuration does not work properly. Consider the following points while planning
for a naming strategy:
• There must not be a NetBIOS name collision between two IBM Spectrum Scale clusters that are
configured against the same Active Directory server; otherwise, the domain join of the later cluster
revokes the join of the earlier one.
• The NetBIOS name and the domain name must not collide.
• The NetBIOS name and the short name of the Domain Controllers hosting the domain must not
collide.
--domain domainName
Specifies the name of the NIS domain. This option is only valid with --type {nis} and --data-
access-method {file}.
The NIS domain that is specified must be served by one of the servers specified with --server. This
option is mandatory when NIS-based authentication is configured for file access.
--idmap-role {master|subordinate}
Specifies the ID map role of the IBM Spectrum Scale system. The ID map role of a stand-alone
(singular) system deployment must be set to master. The value of the ID map role is important in
AFM-based deployments.
This option is only valid with --type {ad} and --data-access-method {file}.
You can use AD with automatic ID mapping to set up two or more storage subsystems in AFM
relationship. The two or more systems configured in a master-subordinate relationship provide a
means to synchronize the UIDs and GIDs generated for NAS clients on one system with UIDs and
GIDs on the other systems. In the AFM relationship, only one system can be configured as master and
other systems must be configured as subordinates. The ID map roles of the master and subordinate
systems are as follows:
• Master: System creates ID maps on its own.
• Subordinate: System does not create ID maps on its own. ID maps must be exported from the
master to the subordinate.
While using automatic ID mapping, in order to have the same ID maps on systems that share an AFM
relationship, you need to export the ID mappings from the master to the subordinate. The NAS file services
are inactive on the subordinate system. If you need to export and import ID maps from one system to
another, contact the IBM Support Center.
--idmap-range lowerValue-higherValue
Specifies the range of values from which the IBM Spectrum Scale UIDs and GIDs are assigned by the
system to the Active Directory users and groups. This option is only valid with --type {ad} and --
data-access-method {file}. The default value is 10000000-299999999. The lower value of the
range must be at least 1000. After configuring the IBM Spectrum Scale system with AD
authentication, only the higher value can be increased (this essentially increases the number of
ranges).
--idmap-range-size rangeSize
Specifies the total number of UIDs and GIDs that are assignable per domain. For example, if --
idmap-range is defined as 10000000-299999999, and range size is defined as 1000000, 290
domains can be mapped, each consisting of 1000000 IDs.
Choose a value for range size that allows for the highest anticipated RID value among all of the
anticipated AD users and AD groups in all of the anticipated AD domains. Choose the range size value
carefully because range size cannot be changed after the first AD domain is defined on the IBM
Spectrum Scale system.
This option is only valid with --type {ad} and --data-access-method {file}. Default value is
1000000.
--unixmap-domains unixDomainMap
Specifies the AD domains for which user ID and group ID should be fetched from the AD server. This
option is only valid with --type {ad} and --data-access-method {file}. The unixDomainMap
takes value in this format: DOMAIN1(L1-H1:{win|unix})[;DOMAIN2(L2-H2:{win|unix})
[;DOMAIN3(L3-H3:{win|unix})....]]
DOMAIN
Use DOMAIN to specify an AD domain for which ID mapping services are to be configured. The
name of the domain to be specified must be the NetBIOS domain name. The UIDs and GIDs of the
users and groups for the specified DOMAIN are read from the UNIX attributes that are populated
in the RFC2307 schema extension of AD server. Any users or groups, from this domain, with
missing UID/GID attributes are denied access. Use the L-H format to specify the ID range. All the
users or groups from DOMAIN that need access to exports need to have their UIDs or GIDs in the
specified range.
--ldapmap-domains ldapDomainMap
Specifies the AD domains for which the user ID and group ID are fetched from an external LDAP
server that stores them in RFC2307 schema format. This option is only valid with --type {ad} and
--data-access-method {file}. The ldapDomainMap definition takes the following attributes:
DOMAIN
Use DOMAIN to specify an AD domain for which ID mapping services are to be configured. The
name of the domain to be specified must be the Pre-Win2K domain name. The UID and GID of the
users and groups for the specified DOMAIN are read from the objects stored on the LDAP server in
RFC2307 schema attributes. Any users or groups from this domain with missing UID/GID
attributes are denied access.
type
Defines the type of LDAP server to use.
Supported value: stand-alone.
range
This attribute takes a value in the L-H format. Users or groups from DOMAIN that need access to
exports must have their UIDs or GIDs in the specified range. The specified range must not
intersect with:
• The range specified using --idmap-range option of the command
• The range specified for other AD DOMAIN for which ID mapping needs to be done from Active
Directory (RFC2307 schema attributes) specified in --unixmap-domains option
• The range specified for other AD DOMAIN for which ID mapping needs to be done from LDAP
server specified in --ldapmap-domains option
This is intended to avoid ID collisions among users and groups from different domains.
ldap_srv
Defines the name or IP address of the LDAP server from which the UID or GID of user or group
records is fetched. The user and group objects must be in RFC2307 schema format. Only a single
LDAP server can be specified.
When you enter an IPv6 address, ensure that the address is enclosed within square brackets to
work correctly.
For example,
--servers [2001:192::e61f:122:feb7:5df5]
usr_dn
Defines the bind tree on the LDAP server where user objects shall be found.
grp_dn
Defines the bind tree on the LDAP server where the group objects shall be found.
bind_dn
Optional attribute.
Defines the user DN that should be used for authentication against the LDAP server. If not
specified, anonymous bind shall be performed against the LDAP server.
bind_dn_pwd
Optional attribute.
Defines the password of the user DN specified in bind_dn to be used for authentication against the
LDAP server. Must be specified when bind_dn attribute is specified for binding with the LDAP
server in the DOMAIN definition.
The password cannot contain special characters such as the semicolon (;) or colon (:).
For example,
--ldapmap-domains "MYDOMAIN1(type=stand-alone:range=10000-50000
:ldap_srv=myldapserver.mydomain.com :usr_dn=ou=People,dc=mydomain,dc=com
:grp_dn=ou=Groups,dc=mydomain,dc=com :bind_dn=cn=ldapuser,dc=mydomain,dc=com
:bind_dn_pwd=MYPASSWORD);MYDOMAIN2(type=stand-alone :range=70000-100000
:ldap_srv=myldapserver.example.com:usr_dn=ou=People,dc=example,dc=com
:grp_dn=ou=Groups,dc=example,dc=com)"
--enable-kerberos
Specifies whether to enable Kerberized logins for users who are gaining access by using file access
protocols.
This option is only valid with --type {ldap} and --data-access-method {file}.
This option is disabled by default.
Note: Ensure that the legitimate keytab file is placed in the /var/mmfs/tmp directory and is named
as krb5.keytab on the node that the authentication method configuration command is to be run.
--kerberos-server kerberosServer
Specifies the Kerberos server. This option is only valid with --type {ldap} and --data-access-
method {file}.
When you enter an IPv6 address, ensure that the address is enclosed within square brackets to work
correctly.
For example,
--servers [2001:192::e61f:122:feb7:5df5]
--kerberos-realm kerberosRealm
Indicates the Kerberos server authentication administrative domain. The realm name is usually the
all-uppercase version of the domain name. This option is case-sensitive.
--user-name-attrib UserNameAttribute
Specifies the attribute to be used to search for user name on authentication server.
If the --data-access-method is object, this option is only valid with --type {ldap|ad}.
If the --data-access-method is file, this option is only valid with --type {ldap}. With --type
ldap, default value is cn and with --type ad, the default value is sAMAccountName.
--user-id-attrib UserIDAttribute
Specifies the attribute to be used to search for user ID on the authentication server.
If --data-access-method is object, this option is only valid with --type {ldap|ad}.
If --data-access-method is file, this option is only valid with --type {ldap}. For --type
ldap, default value is uid and for --type ad the default value is CN.
--user-mail-attrib UserMailAttribute
Specifies the attribute to be used to search for email on authentication server. If the --data-
access-method is object, this option is only valid with --type {ldap|ad}. For --data-access-
method file, this option is only valid with --type {ldap}. Default value is mail.
--user-filter userFilter
Specifies the additional filter to be used to search for user in the authentication server. The filter must
be specified in LDAP filter format. This option is only valid with --type {ldap|ad} and --data-
access-method {object}. By default, no filter is used.
--ks-dns-name keystoneDnsName
Specifies the DNS name for keystone service. The specified name must be resolved on all protocol
nodes for proper functioning. This is optional with --data-access-method {object}. If the value
is not specified for this parameter, the mmuserauth service create command uses the value that
is used during the IBM Spectrum Scale system installation.
--ks-admin-user keystoneAdminName
Specifies the Keystone server administrative user. This user must be a valid user on authentication
server if --type {ldap|ad} is specified. In case of --type local, new user is created, and admin
role is assigned in Keystone. This option is mandatory with --data-access-method {object}.
For --type {ldap|ad}, do not specify user name in DN format for --ks-admin-user. The name must
be the base or short name that is written against the specified user-id-attrib or user-name-
attrib of user on the LDAP server.
--enable-ks-ssl
Specifies whether the SSL communication must be enabled with the Keystone service. It enables a
secured way to access the Keystone service over the HTTPS protocol. The default communication
option with the Keystone service is over HTTP protocol, which has security risks.
This option is only valid with --data-access-method {object}.
By default, this option is disabled.
With --type local | ad | ldap, ensure that the valid certificate files are placed in the /var/
mmfs/tmp directory on the node that the command has to be run:
The SSL certificate at: /var/mmfs/tmp/ssl_cert.pem
The private key at: /var/mmfs/tmp/ssl_key.pem
The CA certificate at: /var/mmfs/tmp/ssl_cacert.pem
With --type userdefined, ensure that the valid certificate files are placed in the /var/mmfs/tmp
directory on the node that the command has to be run:
The CA certificate at: /var/mmfs/tmp/ssl_cacert.pem
--ks-swift-user keystoneSwiftName
Specifies the user name to be used as the swift user in proxy-server.conf. If AD or LDAP-based
authentication is used, this user must be available in the AD or LDAP authentication server. If the local
authentication method is used, a new user with this name is created in the local database. This option
is only valid with --data-access-method {object}.
For --type {ldap|ad}, do not specify user name in DN format for --ks-swift-user. The name must
be the base or short name that is written against the specified user-id-attrib or user-name-
attrib of user on the LDAP server.
--ks-ext-endpoint externalendpoint
Specifies the endpoint URL of external keystone. Only API v3 and HTTP are supported. This option is
only valid with --data-access-method {object} and --type {userdefined}
--idmapdelete
Specifies whether to delete the current ID maps (SID to UID/GID mappings) from the ID mapping
databases for the file access method and user-role-project-domain mappings stored in local keystone
database for object access method.
This option is only valid with the mmuserauth service remove command. Unless the --data-
access-method parameter is specified on the command line, ID maps for file and object access
protocols are erased by default. To delete the ID maps of a particular access protocol, explicitly
specify the --data-access-method parameter on command line along with the valid access
protocol name.
The authentication method configuration and ID maps cannot be deleted together. The authentication
method configuration must be deleted before the ID maps.
CAUTION: Deleting ID maps can lead to irrecoverable loss of access to data. Use this option
with proper planning.
-N|--nodes {node-list|cesNodes}
Verifies the authentication configuration on each node. If a specified node is not a protocol node, it is
ignored. If cesNodes is specified, the system checks the configuration on all protocol nodes. If
you do not specify a node, the system checks the configuration of only the current node.
-Y
Displays the command output in a parseable format with a colon (:) as a field delimiter. Each column is
described by a header.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
-r|--rectify
Rectifies the authentication configurations and missing SSL and TLS certificates.
--server-reachability
Without this flag, the mmuserauth service check command only validates whether the
authentication configuration files are consistent across the protocol nodes. Use this flag to also check
whether the external authentication server is reachable from each protocol node.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Note: When you reconfigure Object protocol access, messages suggesting that a duplicate key
value violates unique constraint might appear in the system log. Disregard these messages.
Security
You must have root authority to run the mmuserauth command.
The node on which the command is issued must be able to run remote shell commands on any other node
in the cluster without the use of a password and without producing any extraneous messages. For more
information, see Requirements for administering a GPFS file system in IBM Spectrum Scale: Administration
Guide.
Examples
1. To configure Microsoft Active Directory (AD) based authentication with automatic ID mapping for file
access, issue this command:
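A command of the following general form performs this configuration (an illustrative sketch; the server, account, and NetBIOS names are placeholders, and the command prompts for the password because --pwd-file is omitted):

mmuserauth service create --data-access-method file --type ad --servers myADserver
  --user-name administrator --netbios-name specscale --idmap-role master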
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
2. To configure Microsoft Active Directory (AD) based authentication with RFC2307 ID mapping for file
access, issue this command:
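An illustrative sketch (the names and the ID range are placeholders; the domain listed in --unixmap-domains has its UIDs and GIDs read from the RFC2307 attributes on the AD server):

mmuserauth service create --data-access-method file --type ad --servers myADserver
  --user-name administrator --netbios-name specscale --idmap-role master
  --unixmap-domains "DOMAIN1(5000-20000:unix)"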
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
3. To configure Microsoft Active Directory (AD) based authentication with LDAP ID mapping for file
access, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
4. To configure Microsoft Active Directory (AD) based authentication with LDAP ID mapping for file
access (anonymous binding with LDAP), issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
IDMAP_ROLE master
IDMAP_RANGE 10000000-299999999
IDMAP_RANGE_SIZE 1000000
UNIXMAP_DOMAINS none
LDAPMAP_DOMAINS DOMAIN(type=stand-alone: range=1000-10000:ldap_srv=myLDAPserver:
usr_dn=ou=People,dc=example,dc=com:grp_dn=ou=Groups,dc=example,dc=com)
5. To configure AD-based authentication with overlapping ID map ranges, issue this command:
To verify the authentication configuration, the mmuserauth service list command as shown in
the following example:
6. To configure LDAP-based authentication with TLS encryption for file access, issue this command:
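An illustrative sketch (the server name, DNs, and NetBIOS name are placeholders; the CA certificate must already be in place as described in the note that follows):

mmuserauth service create --data-access-method file --type ldap --servers myLDAPserver
  --base-dn dc=example,dc=com --user-name cn=ldapuser,dc=example,dc=com
  --netbios-name specscale --enable-server-tls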
Note: Before issuing the mmuserauth service create command to configure LDAP with TLS,
ensure that the CA certificate for LDAP server is placed under /var/mmfs/tmp directory with the
name "ldap_cacert.pem" specifically on the protocol node where the command is issued.
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
7. To configure LDAP-based authentication with Kerberos for file access, issue this command:
Note: Before issuing the mmuserauth service create command to configure LDAP with
Kerberos, ensure that the keytab file is also placed under /var/mmfs/tmp directory name as
"krb5.keytab" specifically on the node where the command is run.
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
KERBEROS_SERVER myKerberosServer
KERBEROS_REALM MYREAL.com
8. To configure LDAP with TLS and Kerberos for file access, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
9. To configure LDAP without TLS and without Kerberos for file access, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
ENABLE_SERVER_TLS false
ENABLE_KERBEROS false
USER_NAME cn=ldapuser,dc=example,dc=com
SERVERS myLDAPserver
NETBIOS_NAME specscale
BASE_DN dc=example,dc=com
USER_DN none
GROUP_DN none
NETGROUP_DN none
USER_OBJECTCLASS posixAccount
GROUP_OBJECTCLASS posixGroup
USER_NAME_ATTRIB cn
USER_ID_ATTRIB uid
KERBEROS_SERVER none
KERBEROS_REALM none
10. To configure NIS-based authentication for file access, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
11. To configure user-defined authentication for file access, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
12. To configure local authentication for object access, issue this command:
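One possible form of the command, with ksDNSname and admin as placeholder values:
mmuserauth service create --type local --data-access-method object
--ks-dns-name ksDNSname --ks-admin-user admin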
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
13. To configure AD without TLS authentication for object access, issue this command:
# mmuserauth service create --type ad --data-access-method object
--user-name "cn=adUser,cn=Users,dc=example,dc=com" --base-dn "dc=example,DC=com"
--ks-dns-name ksDNSname --ks-admin-user admin --servers myADserver --user-id-attrib cn
--user-name-attrib sAMAccountName --user-objectclass organizationalPerson --user-dn
"cn=Users,dc=example,dc=com"
--ks-swift-user swift
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
14. To configure AD with TLS authentication for object access, issue this command:
# mmuserauth service create --type ad --data-access-method object
--user-name "cn=adUser,cn=Users,dc=example,dc=com" --base-dn
"dc=example,DC=com" --enable-server-tls --ks-dns-name ksDNSname --ks-admin-user admin --servers
myADserver --user-id-attrib cn --user-name-attrib sAMAccountName --user-objectclass organizationalPerson
--user-dn "cn=Users,dc=example,dc=com" --ks-swift-user swift
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
15. To configure LDAP-based authentication for object access, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
USER_OBJECTCLASS posixAccount
USER_NAME_ATTRIB cn
USER_ID_ATTRIB uid
USER_MAIL_ATTRIB mail
USER_FILTER none
ENABLE_KS_CASIGNING false
KS_ADMIN_USER admin
16. To configure LDAP with TLS-based authentication for object access, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
17. To remove the authentication method that is configured for file access, issue this command:
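For example, a command of this form could be used:
mmuserauth service remove --data-access-method file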
Note: Authentication configuration and ID maps cannot be deleted together. To remove ID maps,
remove the authentication configuration first and then remove ID maps. Also, you cannot delete ID
maps that are used for file and object access together. That is, when you delete the ID maps, the
value that is specified for --data-access-method must be either file or object.
18. To remove the authentication method that is configured for object access, issue this command:
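For example, a command of this form could be used:
mmuserauth service remove --data-access-method object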
Note: Authentication configuration and ID maps cannot be deleted together. To remove ID maps,
remove the authentication configuration first and then remove the ID maps. Also, you cannot delete
ID maps that are used for file and object access together. That is, when you delete the ID maps, the
value that is specified for --data-access-method must be either file or object.
19. To check whether the authentication configuration is consistent across the cluster and the required
services are enabled and running, issue this command:
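One possible form of the command, assuming that all is accepted as the --data-access-method value for the check action:
mmuserauth service check --data-access-method all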
20. To check whether the file authentication configuration is consistent across the cluster and the
required services are enabled and running, and if you want to correct the situation, issue this
command:
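For example, a command of this form could be used:
mmuserauth service check --data-access-method file --rectify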
21. To check that all object configuration files (including certificates) are present, and if not, rectify the
situation by issuing the following command:
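For example, a command of this form could be used:
mmuserauth service check --data-access-method object --rectify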
22. To check if the external authentication server is reachable by each protocol node, use the following
command:
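One possible form of the command, assuming that all is accepted as the --data-access-method value for the check action:
mmuserauth service check --data-access-method all --server-reachability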
a. If file is not configured, object is configured, and there are no errors, the system displays output
similar to this:
b. If file is not configured, object is configured, and there is a single error, the system displays output
similar to this:
c. If file and object are configured and there are no errors, the system displays output similar to this:
d. If file and object are configured and there is a single error, the system displays output similar to
this:
Checking keystone.conf: OK
Checking wsgi-keystone.conf: OK
Checking /etc/keystone/ssl/certs/signing_cert.pem: OK
Checking /etc/keystone/ssl/private/signing_key.pem: OK
Checking /etc/keystone/ssl/certs/signing_cacert.pem: OK
e. If file and object are configured and there are multiple errors, the system displays output similar
to this:
Note: The --rectify or -r option cannot fix server reachability errors. Specifying that option with --
server-reachability can fix only erroneous configuration files and service-related errors.
23. To configure AD authentication by using a password file for File protocol configuration, use the
following command:
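A possible form of the command, mirroring example 24 but for file access; the password file name fileauth is an assumption:
mmuserauth service create --type ad --data-access-method file
--netbios-name specscale --user-name adUser --idmap-role master
--servers myADserver --idmap-range-size 1000000
--idmap-range 10000000-299999999 --pwd-file fileauth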
%fileauth:
password=Passw0rd
Here, Passw0rd is the password for the user to bind with the authentication server.
24. To configure AD authentication by using a password file for Object protocol configuration, use the
following command:
mmuserauth service create --type ad --data-access-method object
--base-dn "dc=example,DC=com" --servers myADserver --user-id-attrib cn
--user-name-attrib sAMAccountName --user-objectclass organizationalPerson
--user-dn "cn=Users,dc=example,dc=com" --pwd-file objectauth
%objectauth:
password=Passw0rd
ksAdminPwd=Passw0rd1
ksSwiftPwd=Passw0rd2
Here, Passw0rd is the password for the user to bind with the authentication server. Passw0rd1 is the
Keystone administrator's password, and Passw0rd2 is the Swift Service user's password.
25. To check whether the DNS configuration is correct when the cluster is already configured with the file
AD authentication scheme, issue this command:
mmuserauth service check --data-access-method file --nodes cesNodes
can be looked up from this node. Validate correct DNS server is populated in network
configuration.)
Found errors in configuration.
26. To configure Microsoft Active Directory (AD)-based authentication when the time difference between
the node and domain controller is more than five minutes, run the following command:
mmuserauth service create --type ad --data-access-method file --netbios-name
specscale --user-name adUser --idmap-role master --servers myADserver
--idmap-range-size 1000000 --idmap-range 10000000-299999999
WARNING: Time difference between current node and domain controller is 4073 seconds.
It is greater than max allowed clock skew 300 seconds. File Authentication configuration
completed successfully.
27. To configure LDAP-based authentication for file access with an IPv6 address of the authentication
server, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
28. To configure LDAP-based authentication with TLS encryption for file access with an IPv6 address of
the authentication server, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
29. To configure LDAP-based authentication with Kerberos for file access with an IPv6 address of the
authentication server, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
PARAMETERS VALUES
-------------------------------------------------
30. To configure Microsoft Active Directory (AD)-based authentication with the automatic ID mapping for
file access with an IPv6 address of the authentication server, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
31. To configure Microsoft Active Directory (AD)-based authentication with RFC2307 ID mapping for file
access with IPv6 address of the authentication server, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
PARAMETERS VALUES
-------------------------------------------------
32. To configure Microsoft Active Directory (AD)-based authentication with LDAP ID mapping for file
access with IPv6 address of the authentication server, issue this command:
To verify the authentication configuration, use the mmuserauth service list command as shown
in the following example:
See also
• “mmces command” on page 132
• “mmchconfig command” on page 169
• “mmlscluster command” on page 484
• “mmlsconfig command” on page 487
• “mmnfs command” on page 552
• “mmobj command” on page 565
• “mmsmb command” on page 698
Location
/usr/lpp/mmfs/bin
mmwatch command
Administers clustered watch folder watches.
Synopsis
or
or
or
or
or
or
or
or
Availability
Available with IBM Spectrum Scale Advanced Edition, IBM Spectrum Scale Data Management Edition,
IBM Spectrum Scale Developer Edition, or IBM Spectrum Scale Erasure Code Edition.
Description
The mmwatch command is used to enable, disable, list, and generally administer persistent and fault-
tolerant clustered watches. The purpose of clustered watch folder is to monitor file system event activity
and produce events to an external event handler, which is referenced as a sink. Errors are logged
to /var/adm/ras/mmwatch.log, /var/adm/ras/mmwfclient.log, and /var/log/messages.
Remember: To use this command to enable, disable, and administer clustered watch folder, your cluster
code level must be at IBM Spectrum Scale 5.0.3, and the file system on which the clustered watch folder
is set must be at IBM Spectrum Scale 5.0.2.
Parameters
Device
Specifies the device name of the file system upon which the clustered watch folder configuration
change or listing is to occur.
Note: You must specify the Device or use the all keyword for mmwatch operations.
all
Lists the active clustered watches for all file systems.
Note: You must specify the Device or use the all keyword for mmwatch operations.
list [--events] [-Y]
Displays the details of active clustered watches for the specified device. The optional --events
option lists the monitored clustered watch folder events for each specific watch for the specified
device. The -Y parameter provides output in machine-readable (colon-delimited) format.
Note: Fields that have a colon (:) are encoded to prevent confusion. For the set of characters that
might be encoded, see the command documentation of mmclidecode. Use the mmclidecode
command to decode the field.
producerRestart -N { NodeName[,NodeName...] | NodeFile | NodeClass }
Restarts the producers for all file systems under watch on the nodes specified by the -N option. The -
N option supports a comma-separated list of nodes, a full path name to a file containing node names,
or a predefined node class.
Note: Issuing this command causes the event producers for file audit logging to be restarted as well.
-q
Suppresses all [I] informational messages.
upgrade
Converts all active clustered watches to use the IBM Spectrum Scale system health infrastructure.
During the conversion process, conduits are disabled and then re-enabled. In the period of time
between disablement and re-enablement of the consumer conduits, some events might be missed. In
addition, this command also disables any clustered watches that are currently in the suspended state.
status
Provides the status of the nodes sending watch events to the external Kafka sink, and is subject to the
following:
• If all is used in place of a device name, the status is provided for all clustered watches that are
enabled within the cluster. If no clustered watches are found, a statement stating this is returned
but this scenario is not considered an error.
• If a Device is specified without a watch ID, then the status is provided for all clustered watches
that are enabled within the cluster associated with that device. If no clustered watches exist for the
device, a statement stating this is returned and this is considered an error.
• If Device and --watch-id WatchID are both specified, then the status is provided for the single
clustered watch that is associated with the given device and watch ID. If the clustered watch cannot
be found, a statement stating this is returned and this is considered an error.
Note: The -v flag can only be used when a watch ID is given. It provides up to the last 10 entries for
the given watch ID for each component.
enable
Enables a clustered watch for the given device. The scope of the watch is the whole file system unless
the --fileset option is provided. If --fileset is provided, then the watch type will be fileset
watch for dependent filesets and inode space watch for independent filesets. Enablement entails
setting up and validating the configuration, and applying the respective policy partition rules based on
the watched events.
-F ConfigFilePath
Specifies the path to the configuration file.
The configuration file is populated with key:value pairs, which include all of the required parameters
for enable and any optional fields desired in the following format:
• FILESET:<fileset name>
• EVENTS:<Event1>,<Event2> | ALL
• EVENT_HANDLER:kafkasink
• SINK_BROKERS:<sink broker:port>
• SINK_TOPIC:<sink topic>
• SINK_AUTH_CONFIG:</path/to/auth/config>
• WATCH_DESCRIPTION:<description_string>
Note: If this option is given, then the other command line options that are used to enable a watch are
invalid and produce a syntax error. Using the --fileset, --events, --event-handler, --sink-
brokers, --sink-topic, or --sink-auth-config options in combination with the -F option
produces a syntax error.
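For illustration, a configuration file in this format might look like the following; the fileset name, broker address, and topic are placeholders:
FILESET:fset1
EVENTS:IN_CREATE,IN_DELETE
EVENT_HANDLER:kafkasink
SINK_BROKERS:broker1.example.com:9092
SINK_TOPIC:watchTopic
SINK_AUTH_CONFIG:/path/to/auth/config
WATCH_DESCRIPTION:Watch for create and delete events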
--fileset FSetName
Specifies the name of the fileset within the given file system to watch. If the fileset is dependent, then
the watch type is fileset. No nested filesets are watched within a dependent fileset. If the fileset is
independent, then the watch type is inodespace. Only nested dependent filesets are included in the
watch of an independent fileset. The --fileset option is optional.
--description Description
The watch description is an optional parameter that identifies how a watch is being used. It can be
any combination of letters, numbers, and spaces, but it is limited to 50 total characters.
--config
Displays the configuration details for the specified active watch WatchID.
--events {Event[,Event...] | ALL}
Defines a comma-separated string of events to monitor. The --events option is optional. The
following events can be watched:
• IN_ACCESS
• IN_ATTRIB
• IN_CLOSE_NOWRITE
• IN_CLOSE_WRITE
• IN_CREATE
• IN_DELETE
• IN_DELETE_SELF
• IN_MODIFY
• IN_MOVED_FROM
• IN_MOVED_TO
• IN_MOVE_SELF
• IN_OPEN
If the --events option is not included, all events are watched.
--event-handler handlertype
The only type of event handler that is currently supported is kafkasink. The --event-handler
option is required.
--sink-brokers BrokerIP:Port[,BrokerIP:Port...]
Includes a comma-separated list of broker:port pairs for the sink Kafka cluster (the external Kafka
cluster where the events are sent). The --sink-brokers option is required.
When specifying IP addresses in IPv6 format, you must enclose the IP address with []. For example:
[2002:90b:e006:86:19:1:186:13]:9092,[2002:90b:e006:86:19:1:186:13]:9092.
--sink-topic Topic
The topic that producers write to in the sink Kafka cluster. The --sink-topic option is required.
--sink-auth-config Path
The full path to the file that contains the authentication details to the sink Kafka cluster. The --sink-
auth-config option is optional. For an example of how to use this flag, see Interaction between
clustered watch folder and the external Kafka sink in the IBM Spectrum Scale: Concepts, Planning, and
Installation Guide.
disable
Disables a clustered watch for the given device. Disablement removes the configuration and deletes
the policy partition rules for the watch.
Exit status
0
Successful completion.
nonzero
A failure occurs.
Security
You must have root authority to run the mmwatch command.
Examples
1. To list all current clustered watches that are active, issue this command:
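For example, a command of this form could be used:
mmwatch all list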
2. To list all current clustered watches with associated events, issue this command:
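For example, a command of this form could be used:
mmwatch all list --events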
----------------------------------------------------------------------------------
10019023689338312536 CLW1601405441 IN_ACCESS,IN_DELETE
3. To list the current configuration for a clustered watch, issue this command:
5. To enable a clustered watch using an input configuration file, issue this command:
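One possible form of the command, with fs1 and the configuration file path as placeholders:
mmwatch fs1 enable -F /tmp/watchConfig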
CLW1601410684
[I] Successfully enabled Clustered Watch: CLW1601410684
See also
• “mmaudit command” on page 92
Location
/usr/lpp/mmfs/bin
mmwinservctl command
Manages the mmwinserv Windows service.
Synopsis
mmwinservctl set [--account AccountName [--password Password]] [--remote-shell {yes | no}]
[-N {Node[,Node...] | NodeFile | NodeClass}] [-v]
or
Availability
Available on all IBM Spectrum Scale editions. Available on Windows.
Description
mmwinserv is a GPFS for Windows service that is needed for the proper functioning of the GPFS daemon
on nodes running Windows. Optionally, the service can be configured to provide a remote execution
facility for GPFS administration commands.
Use the mmwinservctl command to manage the mmwinserv service. You can set the log on account and
password for the service, enable or disable the service, enable or disable the service's remote execution
facility, or query its current state.
The mmwinservctl command must be run on a Windows node and it has no effect on nodes running
other operating systems.
If the remote execution facility of mmwinserv is enabled, a Windows GPFS cluster can be configured to
use mmwinrsh and mmwinrcp as the remote shell and remote file copy commands:
• mmwinrsh (/usr/lpp/mmfs/bin/mmwinrsh) uses Windows Named Pipes to pass the command to
the target node.
• mmwinrcp (/usr/lpp/mmfs/bin/mmwinrcp) is a wrapper module that invokes the Cygwin cp
command to copy the files that are needed by the mm commands. The path names on remote hosts are
translated into path names based on the standard Windows ADMIN$ share.
An account must be given the right to log on as a service before it can be used to run mmwinserv. The
right to log on as a service is controlled by the Local Security Policy of each Windows node. You can use
the Domain Group Policy to set the Local Security Policy on all Windows nodes in a GPFS cluster.
For more information on the mmwinserv service, see Configuring the GPFS Administration service in the
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
Parameters
set
Sets the service configuration options and restarts the service if it is running.
enable
Sets the service to automatic startup and starts the service.
disable
Sets the service to disabled and stops the service.
query
Returns information about the service's configuration and current state.
--account AccountName
Specifies the log on account name for the mmwinserv service. By default, mmwinserv is configured to
run using the LocalSystem account.
--password Password
Specifies the log on password for the mmwinserv service.
--remote-shell {yes | no}
Specifies whether or not remote connections are allowed.
-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the list of nodes on which to perform the action. The default is the node on which the
mmwinservctl command is issued.
If the node on which the mmwinservctl command is issued belongs to a GPFS cluster, the nodes
specified with the -N parameter must belong to the cluster.
If the node on which the mmwinservctl command is issued does not belong to a GPFS cluster, the
nodes specified with the -N parameter must be identified by their host names or IP addresses. Node
classes and node numbers cannot be used.
For general information on how to specify node names, see Specifying nodes as input to GPFS
commands in the IBM Spectrum Scale: Administration Guide.
-v
Displays progress and intermediate error messages.
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must be a member of the Domain Admins group to run the mmwinservctl command.
Examples
1. To specify 'gpfs\root' as the log on account name for the mmwinserv service and enable the remote
command execution facility on nodes ls21n19 and ls21n20, issue:
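Based on the synopsis shown earlier, a command of the following form could be used:
mmwinservctl set --account gpfs\root --remote-shell yes -N ls21n19,ls21n20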
2. To display the current state of the mmwinserv service on all nodes in the cluster, issue:
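One possible form of the command, assuming that the predefined node class all can be passed to -N to select all nodes:
mmwinservctl query -N all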
Location
/usr/lpp/mmfs/bin
spectrumscale command
Installs and configures GPFS; adds nodes to a cluster; deploys and configures protocols, performance
monitoring tools, and authentication services; configures call home and file audit logging; and upgrades
GPFS and protocols.
Synopsis
spectrumscale setup [ -i SSHIdentity ] [ -s ServerIP ]
[ -st { "ss","SS","ess","ESS","ece","ECE" } ]
[ --storesecret ]
or
spectrumscale node add [-g] [-q] [-m] [-a] [-n] [-e] [-c] [-p] [-so] Node
or
spectrumscale node load [-g] [-q] [-m] [-a] [-n] [-e] [-c] [-p] [-so] NodeFile
or
or
or
or
or
or
Important: The Object protocol requires OpenStack 16 repositories to be available to satisfy the
necessary dependencies. Ensure that the following repositories are enabled on all protocol nodes before
attempting installation: openstack-16-for-rhel-8-x86_64-rpms and codeready-builder-for-
rhel-8-x86_64-rpms.
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
-bs { 1M | 2M | 4M | 8M | 16M }
-ss VdiskSetSize
or
or
or
or
or
or
or
or
or
or
or
or
or
or
or
CAUTION: Disabling object service discards OpenStack Swift configuration and ring files from the
CES cluster. If OpenStack Keystone configuration is configured locally, disabling object storage
also discards the Keystone configuration and database files from the CES cluster. However, the
data is not removed. For subsequent object service enablement with a clean configuration and
new data, remove object store fileset and set up object environment. See the mmobj swift base
command. For more information, contact the IBM Support Center.
or
or
or
or
or
or
or
or
Availability
Available on all IBM Spectrum Scale editions.
Description
Use the spectrumscale command (the installation toolkit) to do the following:
• Install and configure GPFS.
• Add GPFS nodes to an existing cluster.
• Deploy and configure SMB, NFS, object (OpenStack Swift), HDFS, and performance monitoring tools on
top of GPFS.
• Configure authentication services for protocols.
• Enable and configure the file audit logging function.
Parameters
setup
Installs Chef and its components, and configures the install node in the cluster definition file. The
IP address passed in should be that of the node from which the installation toolkit will be run. The SSH
key passed in should be the key that the installer uses for passwordless SSH to all other nodes.
This is the first command you will run to set up IBM Spectrum Scale. This option accepts the following
arguments:
-i SSHIdentity
Adds the path to the SSH identity file into the configuration.
-s ServerIP
Adds the control node IP into the configuration.
-st {"ss","SS","ess","ESS","ece","ECE"}
Specifies the setup type.
The allowed values are ess, ece, and ss. The default value is ss.
• If you are using the installation toolkit in a cluster containing ESS, specify the setup type as ess.
• If you are using the installation toolkit in an IBM Spectrum Scale Erasure Code Edition cluster,
specify the setup type as ece.
• The setup type ss specifies an IBM Spectrum Scale cluster containing no ESS nodes.
Regardless of the mode, the installation toolkit contains safeguards to prevent changing of a tuned
ESS configuration. While adding a node to the installation toolkit, it looks at whether the node is
currently in an existing cluster and if so, it checks the node class. ESS I/O server nodes are
detected based upon existence within the gss_ppc64 node class. ESS EMS nodes are detected
based upon existence within the ems node class. ESS I/O server nodes are not allowed to be
added to the installation toolkit and must be managed by the ESS toolsets contained in the EMS
node. A single ESS EMS node is allowed to be added to the installation toolkit. Doing so adds this
node as an admin node of the installation toolkit functions. While the installation toolkit runs from
a non-ESS node, it uses the designated admin node (an EMS node in this case) to run mm
commands on the cluster as a whole. Once in the ESS mode, the following assumptions and
restrictions apply:
• File audit logging is not configurable using the installation toolkit.
• Call home is not configurable using the installation toolkit.
• EMS node will be the only admin node designated in the installation toolkit. This designation will
automatically occur when the EMS node is added.
• EMS node will be the only GUI node allowed in the installation toolkit. Additional existing GUI
nodes can exist but they cannot be added.
• EMS node will be the only performance monitoring collector node allowed within the installation
toolkit. Additional existing collectors can exist but they cannot be added.
• EMS node cannot be designated as an NSD or a protocol node.
• I/O server nodes cannot be added to the installation toolkit. These nodes must be managed
outside the installation toolkit by ESS toolsets contained in the EMS node.
• NSDs and file systems managed by the I/O server nodes cannot be added to the installation
toolkit.
• File systems managed by the I/O server nodes can be used for placement of the Object fileset as
well as CESSharedRoot file system. Simply point the installation toolkit to the path.
• The cluster name is set upon addition of the EMS node to the installation toolkit. It is
determined by mmlscluster being run from the EMS node.
• EMS node must have passwordless SSH set up to all nodes, including any protocol, NSD, and
client nodes being managed by the installation toolkit.
• EMS node can be a different architecture or operating system than the protocol, NSD, and client
nodes being managed by the installation toolkit.
• If the config populate function is used, an EMS node of a different architecture or operating
system than the protocol, NSD, and client nodes can be used.
• If the config populate function is used, a mix of architectures within the non-ESS nodes being
added or currently within the cluster cannot be used. To handle this case, use the installation
toolkit separately for each architecture grouping. Run the installation toolkit from a node with
similar architecture to add the required nodes. Add the EMS node and use the setup type ess.
--storesecret
Disables the prompts for the encryption secret.
CAUTION: If you use this option, passwords will not be securely stored.
node
Used to add, remove, or list nodes in the cluster definition file. This command only interacts with this
configuration file and does not directly configure nodes in the cluster itself. The nodes that have an
entry in the cluster definition file will be used during install, deploy, or upgrade. This option accepts
the following arguments:
add Node
Adds the specified node and configures it according to the following arguments:
-g
Adds GPFS Graphical User Interface servers to the cluster definition file.
-q
Configures the node as a quorum node.
-m
Configures the node as a manager node.
-a
Configures the node as an admin node.
-n
Specifies the node as an NSD server.
-e
Specifies the node as the EMS node of an ESS system. This node is automatically specified as
the admin node.
-c
Specifies the node as a call home node.
-p
Configures the node as a protocol node.
-so
Specifies the node as a scale-out node. The setup type must be ece for adding this type of
nodes in the cluster definition.
Node
Specifies the node name.
load NodeFile
Loads the specified file containing a list of nodes, one per line; adds the nodes specified in
the file and configures them according to the following:
-g
Sets the nodes as GPFS Graphical User Interface server.
-q
Sets the nodes as quorum nodes.
-m
Sets the nodes as manager nodes.
-a
Sets the nodes as admin nodes.
-n
Sets the nodes as NSD servers.
-e
Sets the node as the EMS node of an ESS system. This node is automatically specified as the
admin node.
-c
Sets the nodes as call home nodes.
-p
Sets the nodes as protocol nodes.
-so
Sets the nodes as scale-out nodes. The setup type must be ece for adding this type of nodes
in the cluster definition.
delete Node
Removes the specified node from the configuration. The following option is accepted.
-f
Forces the action without manual confirmation.
clear
Clears the current node configuration. The following option is accepted:
-f
Forces the action without manual confirmation.
list
Lists the nodes configured in your environment.
config
Used to set properties in the cluster definition file that will be used during install, deploy, or upgrade.
This command only interacts with this configuration file and does not directly configure these
properties on the GPFS cluster. This option accepts the following arguments:
gpfs
Sets any of the following GPFS-specific properties to be used during GPFS installation and
configuration:
-l
Lists the current settings in the configuration.
-c ClusterName
Specifies the cluster name.
-p
Specifies the profile to be set on cluster creation. The following values are accepted:
default
Specifies that the GpfsProtocolDefaults profile is to be used.
randomio
Specifies that the GpfsProtocolRandomIO profile is to be used.
-r RemoteShell
Specifies the remote shell binary to be used by GPFS. If no remote shell is specified in the
cluster definition file, /usr/bin/ssh will be used as the default.
-rc RemoteFileCopy
Specifies the remote file copy binary to be used by GPFS. If no remote file copy binary is
specified in the cluster definition file, /usr/bin/scp will be used as the default.
-e EphemeralPortRange
Specifies an ephemeral port range to be set on all GPFS nodes. If no port range is specified in
the cluster definition, 60000-61000 will be used as default.
For information about ephemeral port range, see the topic about GPFS port usage in the
Miscellaneous advanced administration tasks in IBM Spectrum Scale: Administration Guide.
protocols
Provides details of the GPFS environment that will be used during protocol deployment, according
to the following options:
-l
Lists the current settings in the configuration.
-f FileSystem
Specifies the file system.
-m MountPoint
Specifies the shared file system mount point or path.
-e ExportIPPool
Specifies a comma-separated list of additional CES export IPs to configure on the cluster.
object
Sets any of the following Object-specific properties to be used during Object deployment and
configuration:
-l
Lists the current settings in the configuration.
-f FileSystem
Specifies the file system.
-m MountPoint
Specifies the mount point.
-e EndPoint
Specifies the host name that will be used for access to the object store. This should be a
round-robin DNS entry that maps to all CES IP addresses or the address of a load balancer
front end; this will distribute the load of all keystone and object traffic that is routed to this
host name. Therefore the endpoint is an IP address in a DNS or in a load balancer that maps to
a group of export IPs (that is, CES IPs that were assigned on the protocol nodes).
-o ObjectBase
Specifies the object base.
-i InodeAllocation
Specifies the inode allocation.
-t AdminToken
Specifies the admin token.
-au AdminUser
Specifies the user name for the admin.
-ap AdminPassword
Specifies the admin user password. This credential is for the Keystone administrator. This user
can be local or on remote authentication server based on the authentication type used.
Note: You will be prompted to enter a Secret Encryption Key which will be used to securely
store the password. Choose a memorable pass phrase which you will be prompted for each
time you enter the password.
-su SwiftUser
Specifies the Swift user name. This credential is for the Swift services administrator. All Swift
services are run in this user's context. This user can be local or on remote authentication
server based on the authentication type used.
-sp SwiftPassword
Specifies the Swift user password.
Note: You will be prompted to enter a Secret Encryption Key which will be used to securely
store the password. Choose a memorable pass phrase which you will be prompted for each
time you enter the password.
-dp DataBasePassword
Specifies the object database user password.
Note: You will be prompted to enter a Secret Encryption Key which will be used to securely
store the password. Choose a memorable pass phrase which you will be prompted for each
time you enter the password.
-mr MultiRegion
Enables the multi-region option.
-rn RegionNumber
Specifies the region number.
-s3 on | off
Specifies whether s3 is to be turned on or off.
hdfs
Sets Hadoop Distributed File System (HDFS) related configuration in the cluster definition file:
new
Defines configuration for a new HDFS cluster.
-n HDFSClusterName
Specifies the name of the new HDFS cluster.
-nn NameNodes
Specifies the name node host names in a comma-separated list.
-dn DataNodes
Specifies the data node host names in a comma-separated list.
-f FileSystem
Specifies the IBM Spectrum Scale file system name.
-d DataDirectoryName
Specifies the IBM Spectrum Scale data directory name.
import
Imports an existing HDFS configuration.
-l LocalDirectoryName
Specifies the local directory that contains the existing HDFS configuration.
add
Adds name nodes or data nodes in an existing HDFS configuration.
-n HDFSClusterName
Specifies the name of the existing HDFS cluster.
-nn NameNodes
Specifies the name node host names in a comma-separated list.
-dn DataNodes
Specifies the data node host names in a comma-separated list.
list
Lists the current HDFS configuration.
clear
Clears the current HDFS configuration from the cluster definition file.
-f
Forces action without manual confirmation.
perfmon
Sets performance monitoring specific properties to be used during installation and configuration:
-r on | off
Specifies if the install toolkit can reconfigure performance monitoring.
Note: When set to on, reconfiguration might move the collector to different nodes and it might
reset sensor data. Custom sensors and data might be erased.
-d on | off
Specifies if performance monitoring should be disabled (not installed).
Note: When set to on, pmcollector and pmsensor packages are not installed or upgraded.
Existing sensor or collector state remains as is.
-l
Lists the current settings in the configuration.
ntp
Used to add, list, or remove NTP nodes to the configuration. NTP nodes will be configured on the
cluster as follows: the admin node will point to the upstream NTP servers that you provide to
determine the correct time. The rest of the nodes in the cluster will point to the admin node to
obtain the time.
-s Upstream_Server
Specifies the host name that will be used. You can use an upstream server that you have
already configured, but it cannot be part of your IBM Spectrum Scale cluster.
Note: NTP works best with at least four upstream servers. If you provide fewer than four, you
will receive a warning during installation advising that you add more.
-l
Lists the current settings of your NTP setup.
-e on | off
Specifies whether NTP is enabled or not. If this option is turned to off, you will receive a
warning during installation.
clear
Removes specified properties from the cluster definition file:
gpfs
Removes GPFS related properties from the cluster definition file:
-c
Clears the GPFS cluster name.
-p
Clears the GPFS profile to be applied on cluster creation. The following values are
accepted:
default
Specifies that the GpfsProtocolDefaults profile is to be cleared.
randomio
Specifies that the GpfsProtocolRandomIO profile is to be cleared.
-r RemoteShell
Clears the absolute path name of the remote shell command GPFS uses for node
communication. For example, /usr/bin/ssh.
-rc RemoteFileCopy
Clears the absolute path name of the remote copy command GPFS uses when transferring
files between nodes. For example, /usr/bin/scp.
-e EphemeralPortRange
Clears the GPFS daemon communication port range.
--all
Clears all settings in the cluster definition file.
protocols
Removes protocols related properties from the cluster definition file:
-f
Clears the shared file system name.
-m
Clears the shared file system mount point or path.
-e
Clears a comma-separated list of additional CES export IPs to configure on the cluster.
--all
Clears all settings in the cluster definition file.
object
Removes object related properties from the cluster definition file:
-f
Clears the object file system name.
-m
Clears the absolute path to your file system on which the objects reside.
-e
Clears the host name which maps to all CES IP addresses in a round-robin manner.
-o
Clears the GPFS fileset to be created or used as the object base.
-i
Clears the GPFS fileset inode allocation to be used by the object base.
-t
Clears the admin token to be used by Keystone.
-au
Clears the user name for the admin user.
-ap
Clears the password for the admin user.
-su
Clears the user name for the Swift user.
-sp
Clears the password for the Swift user.
-dp
Clears the password for the object database.
-s3
Clears the S3 API setting, if it is enabled.
-mr
Clears the multi-region data file path.
-rn
Clears the region number for the multi-region configuration.
--all
Clears all settings in the cluster definition file.
update
Updates operating system and CPU architecture fields in the cluster definition file. This update is
automatically done if you run the upgrade precheck while upgrading to IBM Spectrum Scale
release 4.2.2 or later.
populate
Populates the cluster definition file with the current cluster state. In the following upgrade
scenarios, you might need to update the cluster definition file with the current cluster state:
• A manually created cluster in which you want to use the installation toolkit to perform
administration tasks on the cluster such as adding protocols, adding nodes, and upgrading.
• A cluster created using the installation toolkit in which manual changes were done without using
the toolkit wherein you want to synchronize the installation toolkit with the updated cluster
configuration that was performed manually.
--node Node
Specifies an existing node in the cluster that is used to query the cluster information. If you
want to use the spectrumscale config populate command to retrieve data from a
cluster containing ESS, you must specify the EMS node with the --node flag.
nsd
Used to add, remove, list or balance NSDs, as well as add file systems in the cluster definition file. This
command only interacts with this configuration file and does not directly configure NSDs on the
cluster itself. The NSDs that have an entry in the cluster definition file will be used during install. This
option accepts the following arguments:
add
Adds an NSD to the configuration, according to the following specifications:
-p Primary
Specifies the primary NSD server name.
-s Secondary
Specifies the secondary NSD server names. You can use a comma-separated list to specify up
to seven secondary NSD servers.
-fs FileSystem
Specifies the file system to which the NSD is assigned.
-po Pool
Specifies the file system pool.
-u
Specifies NSD usage. The following values are accepted:
dataOnly
dataAndMetadata
metadataOnly
descOnly
localCache
-fg FailureGroup
Specifies the failure group to which the NSD belongs.
--no-check
Specifies not to check for the device on the server.
PrimaryDevice
Specifies the device name on the primary NSD server.
balance
Balances the NSD preferred node between the primary and secondary nodes. The following
options are accepted:
--node Node
Specifies the node to move NSDs from when balancing.
--all
Specifies that all NSDs are to be balanced.
delete NSD
Removes the specified NSD from the configuration.
modify NSD
Modifies the NSD parameters on the specified NSD, according to the following options:
-n Name
Specifies the name.
-u
Specifies NSD usage. The following values are accepted:
dataOnly
dataAndMetadata
metadataOnly
descOnly
localCache
-po Pool
Specifies the file system pool.
-fs FileSystem
Specifies the file system.
-fg FailureGroup
Specifies the failure group.
servers
Adds and removes servers, and sets the primary server for NSDs.
clear
Clears the current NSD configuration. The following option is accepted:
-f
Forces the action without manual confirmation.
list
Lists the NSDs configured in your environment.
filesystem
Used to list or modify file systems in the cluster definition file. This command only interacts with this
configuration file and does not directly modify file systems on the cluster itself. To modify the
properties of a file system in the cluster definition file, the file system must first be added with
spectrumscale nsd. This option accepts the following arguments:
modify
Modifies the file system attributes. This option accepts the following arguments:
-B
Specifies the file system block size. This argument accepts the following values: 64K, 128K,
256K, 512K, 1M, 2M, 4M, 8M, 16M.
-m MountPoint
Specifies the mount point.
-r
Specifies the number of copies of each data block for a file. This argument accepts the
following values: 1, 2, 3.
-mr
Specifies the number of copies of inodes and directories. This argument accepts the following
values: 1, 2, 3.
-MR
Specifies the default maximum number of copies of inodes and directories. This argument
accepts the following values: 1, 2, 3.
-R
Specifies the default maximum number of copies of each data block for a file. This argument
accepts the following values: 1, 2, 3.
--metadata_block_size
Specifies the file system metadata block size. This argument accepts the following values:
64K, 128K, 256K, 512K, 1M, 2M, 4M, 8M, 16M.
--fileauditloggingenable
Enables file audit logging on the specified file system.
--fileauditloggingdisable
Disables file audit logging on the specified file system.
--logfileset LogFileset
Specifies the log fileset name for file audit logging. The default value is .audit_log.
--retention RetentionPeriod
Specifies the file audit logging retention period in number of days. The default value is 365
days.
FileSystem
Specifies the file system to be modified.
define
Adds file system attributes in an IBM Spectrum Scale Erasure Code Edition environment. The
setup type must be ece for using this option.
Note: If you are planning to deploy protocols in the IBM Spectrum Scale Erasure Code Edition
cluster, you must define a CES shared root file system before initiating the installation toolkit
deployment phase by using the following command.
-fs FileSystem
Specifies the file system to which the vdisk set is to be assigned.
-vs VdiskSet
Specifies the vdisk sets to be affected by a file system operation.
--mmcrfs MmcrfsParams
Specifies that all command line parameters following the --mmcrfs flag must be passed to
the IBM Spectrum Scale mmcrfs command and they must not be interpreted by the mmvdisk
command.
list
Lists the file systems configured in your environment.
fileauditlogging
Enable, disable, or list the file audit logging configuration in the cluster definition file.
enable
Enables the file audit logging configuration in the cluster definition file.
disable
Disables the file audit logging configuration in the cluster definition file.
list
Lists the file audit logging configuration in the cluster definition file.
recoverygroup
Define, undefine, change, list, or clear recovery group related configuration in the cluster definition file
in an IBM Spectrum Scale Erasure Code Edition environment. The setup type must be ece for using
this option.
define
Defines recovery groups in the cluster definition file.
-rg RgName
Sets the name of the recovery group.
-nc ScaleOutNodeClassName
Sets the name of the scale-out node class.
--node Node
Specifies the scale-out node within an existing IBM Spectrum Scale Erasure Code Edition
cluster for the server node class.
undefine
Undefines specified recovery group from the cluster definition file.
RgName
The name of the recovery group that is to be undefined.
change
Changes the recovery group name.
ExistingRgName
The name of the recovery group that is to be modified.
-rg NewRgName
The new name of the recovery group.
clear
Clears the current recovery group configuration from the cluster definition file.
-f
Forces operation without manual confirmation.
list
Lists the current recovery group configuration in the cluster definition file.
vdiskset
Define, undefine, list, or clear vdisk set related configuration in the cluster definition file in an IBM
Spectrum Scale Erasure Code Edition environment. The setup type must be ece for using this option.
define
Defines vdisk sets in the cluster definition file.
-vs VdiskSet
Sets the name of the vdisk set.
-rg RgName
Specifies an existing recovery group with which the defined vdisk set is to be associated.
-code
Defines the erasure code. This argument accepts the following values: 4+2P, 4+3P, 8+2P, and
8+3P.
-bs
Specifies the block size for a vdisk set definition. This argument accepts the following values:
1M, 2M, 4M, 8M, and 16M.
-ss VdiskSetSize
Defines the vdisk set size in percentage of the available storage space.
undefine
Undefines specified vdisk set from the cluster definition file.
VdiskSet
The name of the vdisk set that is to be undefined.
clear
Clears the current vdisk set configuration from the cluster definition file.
-f
Forces operation without manual confirmation.
list
Lists the current vdisk set configuration in the cluster definition file.
callhome
Used to enable, disable, configure, schedule, or list call home configuration in the cluster definition
file.
enable
Enables call home in the cluster definition file.
disable
Disables call home in the cluster definition file. The call home function is enabled by default in the
cluster definition file. If you disable it in the cluster definition file, the call home packages are
installed on the nodes but no configuration is done by the installation toolkit.
config
Configures call home settings in the cluster definition file.
-n CustName
Specifies the customer name for the call home configuration.
-i CustID
Specifies the customer ID for the call home configuration.
-e CustEmail
Specifies the customer email address for the call home configuration.
-cn CustCountry
Specifies the customer country code for the call home configuration.
-s ProxyServerIP
Specifies the proxy server IP address for the call home configuration. This is an optional
parameter.
If you are specifying the proxy server IP address, the proxy server port must also be specified.
-pt ProxyServerPort
Specifies the proxy server port for the call home configuration. This is an optional parameter.
If you are specifying the proxy server port, the proxy server IP address must also be specified.
-u ProxyServerUserName
Specifies the proxy server user name for the call home configuration. This is an optional
parameter.
-pw ProxyServerPassword
Specifies the proxy server password for the call home configuration. This is an optional
parameter.
If you do not specify a password on the command line, you are prompted for a password.
-a
When you specify the call home configuration settings by using the ./spectrumscale
callhome config command, you are prompted to accept or decline the support information collection
message. Use the -a parameter to accept that message in advance. This is an optional
parameter.
If you do not specify the -a parameter on the command line, you are prompted to accept or
decline the support information collection message.
clear
Clears the specified call home settings from the cluster definition file.
--all
Clears all call home settings from the cluster definition file.
-n
Clears the customer name from the call home configuration in the cluster definition file.
-i
Clears the customer ID from the call home configuration in the cluster definition file.
-e
Clears the customer email address from the call home configuration in the cluster definition
file.
-cn
Clears the customer country code from the call home configuration in the cluster definition
file.
-s
Clears the proxy server IP address from the call home configuration in the cluster definition
file.
-pt
Clears the proxy server port from the call home configuration in the cluster definition file.
-u
Clears the proxy server user name from the call home configuration in the cluster definition
file.
-pw
Clears the proxy server password from the call home configuration in the cluster definition file.
schedule
Specifies the call home data collection schedule in the cluster definition file.
By default, the call home data collection is enabled in the cluster definition file and it is set for a
daily and a weekly schedule. Daily data uploads are by default executed at 02:xx AM each day.
Weekly data uploads are by default executed at 03:xx AM each Sunday. In both cases, xx is a
random number from 00 to 59. You can use the spectrumscale callhome schedule
command to set either a daily or a weekly call home data collection schedule.
-d
Specifies a daily call home data collection schedule.
If call home data collection is scheduled daily, data uploads are executed at 02:xx AM each
day. xx is a random number from 00 to 59.
-w
Specifies a weekly call home data collection schedule.
If call home data collection is scheduled weekly, data uploads are executed at 03:xx AM each
Sunday. xx is a random number from 00 to 59.
-c
Clears the call home data collection schedule in the cluster definition file.
The call home configuration can still be applied without a schedule being set. In that case, you
either need to manually run and upload data collections, or you can set the call home schedule
to the desired interval at a later time by using ./spectrumscale callhome schedule -d
(daily), ./spectrumscale callhome schedule -w (weekly), or
./spectrumscale callhome schedule -d -w (both daily and weekly).
list
Lists the call home configuration specified in the cluster definition file.
auth
Used to configure either Object or File authentication on protocols in the cluster definition file. This
command only interacts with this configuration file and does not directly configure authentication on
the protocols. To configure authentication on the GPFS cluster during a deploy, authentication settings
must be provided through the use of a template file. This option accepts the following arguments:
file
Specifies file authentication.
One of the following must be specified:
ldap
ad
nis
none
object
Specifies object authentication.
Either of the following options are accepted:
--https
One of the following must be specified:
local
external
ldap
ad
Both file and object authentication can be set up with the authentication backend server specified.
Running this command will open a template settings file to be filled out before installation.
commitsettings
Merges authentication settings into the main cluster definition file.
clear
Clears your current authentication configuration.
enable
Used to enable Object, SMB, HDFS, or NFS in the cluster definition file. This command only interacts
with this configuration file and does not directly enable any protocols on the GPFS cluster itself. The
default configuration is that all protocols are disabled. If a protocol is enabled in the cluster definition
file, this protocol will be enabled on the GPFS cluster during deploy. This option accepts the following
arguments:
obj
Object
nfs
NFS
hdfs
HDFS
smb
SMB
disable
Used to disable Object, SMB, HDFS, or NFS in the cluster definition file. This command only interacts
with this configuration file and does not directly disable any protocols on the GPFS cluster itself. The
default configuration is that all protocols are disabled, so this command is only necessary if a protocol
has previously been enabled in the cluster definition file, but is no longer required.
Note: Disabling a protocol in the cluster definition will not disable this protocol on the GPFS cluster
during a deploy, it merely means that this protocol will not be enabled during a deploy.
This option accepts the following arguments:
obj
Object
CAUTION: Disabling object service discards OpenStack Swift configuration and ring files
from the CES cluster. If OpenStack Keystone configuration is configured locally, disabling
object storage also discards the Keystone configuration and database files from the CES
cluster. However, the data is not removed. For subsequent object service enablement with
a clean configuration and new data, remove object store fileset and set up object
environment. See the mmobj swift base command. For more information, contact the IBM
Support Center.
nfs
NFS
hdfs
HDFS
smb
SMB
install
Installs GPFS, creates a GPFS cluster, creates NSDs, and adds nodes to an existing GPFS cluster. The
installation toolkit will use the environment details in the cluster definition file to perform these tasks.
If all configuration steps have been completed, this option can be run with no arguments (and pre-
install and post-install checks will be performed automatically).
For a "dry-run," the following arguments are accepted:
-pr
Performs a pre-install environment check.
-po
Performs a post-install environment check.
-s SecretKey
Specifies the secret key on the command line required to decrypt sensitive data in the cluster
definition file and suppresses the prompt for the secret key.
-f
Forces action without manual confirmation.
--skip
Bypasses the specified precheck and suppresses prompts. For example, specifying --skip ssh
bypasses the SSH connectivity check. Specifying --skip chef suppresses Chef related prompts
and all answers are considered as yes.
deploy
Creates file systems, deploys protocols, and configures protocol authentication on an existing GPFS
cluster. The installation toolkit will use the environment details in the cluster definition file to perform
these tasks. If all configuration steps have been completed, this option can be run with no arguments
(and pre-deploy and post-deploy checks will be performed automatically). However, the secret key
will be prompted for unless it is passed in as an argument using the -s flag.
For a "dry-run," the following arguments are accepted:
-pr
Performs a pre-deploy environment check.
-po
Performs a post-deploy environment check.
-s SecretKey
Specifies the secret key on the command line required to decrypt sensitive data in the cluster
definition file and suppresses the prompt for the secret key.
-f
Forces action without manual confirmation.
--skip
Bypasses the specified precheck and suppresses prompts. For example, specifying --skip ssh
bypasses the SSH connectivity check. Specifying --skip chef suppresses Chef related prompts
and all answers are considered as yes.
upgrade
Performs upgrade procedure, upgrade precheck, upgrade postcheck, and upgrade related
configuration to add nodes as offline, or exclude nodes from the upgrade run.
precheck
Performs health checks on the cluster prior to the upgrade.
During the upgrade precheck, the installation toolkit displays messages in a number of scenarios
including:
• If the installed Chef version is different from the supported versions.
• If there are AFM relationships in the cluster. All file systems that have associated AFM primary
or cache filesets are listed and reference to procedure for stopping and restarting replication is
provided.
config
Manage upgrade related configuration in the cluster definition file.
offline
Designates specified nodes in the cluster as offline for the upgrade run. For entities designated
as offline, only the packages are upgraded during the upgrade; the services are not restarted
after the upgrade. You can use this option to designate those nodes as offline that have
services down or stopped, or that have unhealthy components that are flagged in the upgrade
precheck.
-N Node
You can specify one or more nodes that are a part of the cluster that is being upgraded
with -N in a comma-separated list. For example: node1,node2,node3
If the nodes being specified as offline are protocol nodes then all components (GPFS,
SMB, NFS, and object) are added as offline in the cluster configuration. If the nodes being
specified as offline are not protocol nodes then GPFS is added as offline in the cluster
configuration.
--clear
Clears the offline nodes information from the cluster configuration.
exclude
Designates specified nodes in a cluster to be excluded from the upgrade run. For nodes
designated as excluded, the installation toolkit does not perform any action during the
upgrade. This option allows you to upgrade a subset of a cluster.
Note: Nodes that are designated as excluded must be upgraded at a later time to complete
the cluster upgrade.
-N Node
You can specify one or more nodes that are a part of the cluster that is being upgraded
with -N in a comma-separated list. For example: node1,node2,node3
--clear
Clears the excluded nodes information from the cluster configuration.
workload
Sets the installation toolkit to prompt users to shut down their workloads before an upgrade.
The setup type must be ece to use this option.
-p { on | off }
Enables or disables the prompt to users to shut down their workloads before an upgrade.
-l
Lists the current workload prompt related configuration in the cluster definition file.
list
Lists the upgrade related configuration information in the cluster definition file.
clear
Clears the upgrade related configuration in the cluster definition file.
run
Upgrades components of an existing IBM Spectrum Scale cluster.
This command can be used even if not all protocols are enabled. If a protocol is not enabled,
the corresponding packages are still upgraded, but the corresponding service is not started. The
installation toolkit uses the environment details in the cluster definition file to perform upgrade tasks,
and it can determine whether an upgrade is being run for the first time or is a rerun of a failed upgrade.
To perform environment health checks before and after the upgrade, run the ./spectrumscale
upgrade command with the precheck and postcheck arguments. This is not required,
however, because specifying upgrade run with no arguments also runs these checks (see the
command sequence after this option list).
--skip
Bypasses the specified precheck and suppresses prompts. For example, specifying --skip
ssh bypasses the SSH connectivity check. Specifying --skip chef suppresses Chef-related
prompts, and all answers are treated as yes.
postcheck
Performs health checks on the cluster after the upgrade has been completed.
showversions
Shows installed versions of GPFS and protocols and available versions of these components in the
configured repository.
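The following sequence illustrates a typical upgrade flow with these options. It is shown for
illustration only; the node names are placeholders, and the commands are run from the installer
directory:
./spectrumscale upgrade precheck
./spectrumscale upgrade config offline -N node1,node2
./spectrumscale upgrade run
./spectrumscale upgrade postcheck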
Exit status
0
Successful completion.
nonzero
A failure has occurred.
Security
You must have root authority to run the spectrumscale command.
The node on which the command is issued must be able to execute remote shell commands on any other
node in the cluster without the use of a password and without producing any extraneous messages. For
more information, see Requirements for administering a GPFS file system in IBM Spectrum Scale:
Administration Guide.
Examples
Creating a new IBM Spectrum Scale cluster
1. To instantiate your chef zero server, issue a command similar to the following:
2. To designate NSD server nodes in your environment to use for the installation, issue this command:
3. To add four non-shared NSDs seen by a primary NSD server only, issue this command:
4. To add four non-shared NSDs seen by both a primary NSD server and a secondary NSD server, issue
this command:
5. To define a shared root file system using two NSDs and a file system fs1 using two NSDs, issue these
commands:
6. To designate GUI nodes in your environment to use for the installation, issue this command:
7. To designate additional client nodes in your environment to use for the installation, issue this
command:
8. To allow the installation toolkit to reconfigure Performance Monitoring if it detects any existing
configurations, issue this command:
10. To configure the call home function with the mandatory parameters, issue this command:
If you do not want to use call home, disable it by issuing the following command:
For more information, see Enabling and configuring call home using the installation toolkit in IBM
Spectrum Scale: Concepts, Planning, and Installation Guide.
11. To review the configuration prior to installation, issue these commands:
12. To start the installation on your defined environment, issue these commands:
13. To deploy file systems after a successful installation, do one of the following depending on your
requirement:
• If you want to use only the file systems, issue these commands:
• If you want to deploy protocols also, see the examples in the Deploying protocols on an existing
cluster section.
Deploying protocols on an existing cluster
Note: If your cluster contains ESS, see the Adding protocols to a cluster containing ESS section.
1. To instantiate your chef zero server, issue a command similar to the following:
2. To designate protocol nodes in your environment to use for the deployment, issue this command:
3. To designate GUI nodes in your environment to use for the deployment, issue this command:
4. To configure protocols to point to a file system that will be used as a shared root, issue this command:
For more information, see Enabling and configuring file audit logging using the installation toolkit in
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
10. To review the configuration prior to deployment, issue these commands:
Fill out the template and save the information, and then issue the following commands:
2. To enable Object authentication with AD server on all protocol nodes, issue this command:
Fill out the template and save the information, and then issue the following commands:
./Spectrum_Scale_Protocols_Standard-5.1.x.x-xxxxx
2. Copy the cluster definition file from the prior installation to the latest installer location by issuing this
command:
cp -p /usr/lpp/mmfs/5.0.1.0/installer/configuration/clusterdefinition.txt\
/usr/lpp/mmfs/5.1.0.x/installer/configuration/
Note: This is a command example of a scenario where you are upgrading the system from 5.0.x.x to
5.1.x.x.
You can populate the cluster definition file with the current cluster state by issuing the
spectrumscale config populate command.
3. Run the upgrade precheck from the installer directory of the latest code level extraction by issuing
commands similar to the following:
cd /usr/lpp/mmfs/Latest_Code_Level_Directory/installer
./spectrumscale upgrade precheck
Note: If you are upgrading to IBM Spectrum Scale version 4.2.2, the upgrade precheck updates the
operating system and CPU architecture fields in the cluster definition file. You can also update the
operating system and CPU architecture fields in the cluster definition file by issuing the
spectrumscale config update command.
4. [Optional] If services running on certain nodes are stopped or down, designate those nodes as offline
by issuing the following command.
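For example, by using the upgrade config offline syntax that is documented earlier (the node
names are placeholders):
./spectrumscale upgrade config offline -N node1,node2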
5. [Optional] Exclude nodes that you do not want to upgrade at this point by issuing the following
command.
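For example (the node name is a placeholder):
./spectrumscale upgrade config exclude -N node3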
cd /usr/lpp/mmfs/Latest_Code_Level_Directory/installer
./spectrumscale upgrade run
• NSD nodes:
• Protocol nodes:
• GUI nodes:
c. If protocol nodes are being added, deploy protocols using the following commands:
mmlsnsd
./spectrumscale nsd list
4. To enable another protocol on an existing cluster that has protocols enabled, do the following steps
depending on your configuration:
a. Enable NFS on all protocol nodes using the following command:
You can add other types of nodes such as client nodes, NSD servers, and so on depending on your
requirements. For more information, see Defining configuration options for the installation toolkit in
IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
2. Specify one of the newly added protocol nodes as the installer node and specify the setup type as ess
by issuing the following command.
The installer node is the node on which the installation toolkit is extracted and from where the
installation toolkit command, spectrumscale, is initiated.
3. Specify the EMS node of the ESS system to the installation toolkit by issuing the following command.
This node is also automatically specified as the admin node. The admin node, which must be the EMS
node in an ESS configuration, is the node that has access to all other nodes to perform configuration
during the installation.
4. Proceed with specifying other configuration options, installing, and deploying by using the installation
toolkit. For more information, see Defining configuration options for the installation toolkit, Installing
GPFS and creating a GPFS cluster, and Deploying protocols in IBM Spectrum Scale: Concepts, Planning,
and Installation Guide.
For more information, see ESS awareness with the installation toolkit in IBM Spectrum Scale: Concepts,
Planning, and Installation Guide.
Manually adding protocols to a cluster containing ESS
For information on preparing a cluster that contains ESS for adding protocols, see Preparing a cluster that
contains ESS for adding protocols in IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
After you have prepared your cluster that contains ESS for adding protocols, you can use commands
similar to the ones listed in the Deploying protocols on an existing cluster section.
Using the installation toolkit in an IBM Spectrum Scale Erasure Code Edition environment
• Specify the installer node and the setup type as ece in the cluster definition file for IBM Spectrum Scale
Erasure Code Edition.
• Add scale-out nodes for IBM Spectrum Scale Erasure Code Edition in the cluster definition file.
• Define the recovery group for IBM Spectrum Scale Erasure Code Edition in the cluster definition file.
• Define vdisk sets for IBM Spectrum Scale Erasure Code Edition in the cluster definition file.
./spectrumscale vdiskset define -rg RgName -code RaidCode -bs BlockSize -ss SetSize
• Define the file system for IBM Spectrum Scale Erasure Code Edition in the cluster definition file.
./spectrumscale install
./spectrumscale deploy
Adding name nodes or data nodes in an existing HDFS cluster by using the installation toolkit
1. Define the properties for the new HDFS cluster.
./spectrumscale deploy
./spectrumscale deploy
See also
• Installing IBM Spectrum Scale on Linux nodes and deploying protocols in IBM Spectrum Scale: Concepts,
Planning, and Installation Guide.
• Configuring with the spectrumscale installation toolkit in IBM Spectrum Scale: Administration Guide.
• “mmchconfig command” on page 169
• “mmlscluster command” on page 484
• “mmlsconfig command” on page 487
Location
/usr/lpp/mmfs/5.1.0.x/installer
The Data Management Application Programming Interface (DMAPI) for GPFS is based on The Open
Group's System Management: Data Storage Management (XDSM) API Common Applications Environment
(CAE) Specification C429, The Open Group, ISBN 1-85912-190-X. The implementation is compliant with
the standard, although some optional features are not implemented.
The XDSM DMAPI model is intended mainly for a single-node environment. Some of the key concepts,
such as sessions, event delivery, and recovery, required enhancements for a multiple-node environment
such as GPFS.
GPFS supports the following DMAPI events:
File system events
• mount
• preunmount
• unmount
• nospace
Namespace events
• create, postcreate
• remove, postremove
• rename, postrename
• symlink, postsymlink
• link, postlink
Data events
• read
• write
• truncate
Metadata events
• attribute
• destroy
• close
Pseudo event
• user event
GPFS guarantees that asynchronous events are delivered, except when the GPFS daemon fails. Events are
enqueued to the session before the corresponding file operation completes. For further information on
failures, see “Failure and recovery of IBM Spectrum Scale Data Management API for GPFS” on page 829.
The following optional XDSM events are not supported by GPFS:
File system event
• debut
Metadata event
• cancel
GPFS-specific attribute events that are not part of the DMAPI standard
GPFS generates the following attribute events for DMAPI that are specific to GPFS and not part of the
DMAPI standard:
• Pre-permission change
• Post-permission change
For additional information, refer to “GPFS-specific DMAPI events” on page 826.
dm_getall_sessions
Return all extant sessions.
dm_getall_tokens
Return a session's outstanding tokens.
dm_handle_cmp
Compare file handles.
dm_handle_free
Free a handle's storage.
dm_handle_hash
Hash the contents of a handle.
dm_handle_is_valid
Check a handle's validity.
dm_handle_to_fshandle
Return the file system handle associated with an object handle.
dm_handle_to_path
Return a path name from a file system handle.
dm_init_attrloc
Initialize a bulk attribute location offset.
dm_init_service
Initialization processing that is implementation-specific.
dm_move_event
Move an event from one session to another.
dm_path_to_fshandle
Create a file system handle using a path name.
dm_path_to_handle
Create a file handle using a path name.
dm_query_right
Determine an object's access rights.
dm_query_session
Query a session.
dm_read_invis
Read a file without using DMAPI events.
dm_release_right
Release an object's access rights.
dm_request_right
Request an object's access rights.
dm_respond_event
Issue a response to an event.
dm_send_msg
Send a message to a session.
dm_set_disp
For a given session, set the disposition of all file system's events.
dm_set_eventlist
For a given object, set the list of events to be enabled.
dm_set_fileattr
Set a file's time stamps, ownership and mode.
dm_set_region
Set a file's managed regions.
dm_write_invis
Write to a file without using DMAPI events.
dm_getall_inherit
Return a file system's inheritable attributes.
dm_mkdir_by_handle
Define a directory object using a handle.
dm_obj_ref_hold
Put a hold on a file system object.
dm_obj_ref_query
Determine if there is a hold on a file system object.
dm_obj_ref_rele
Release the hold on a file system object.
dm_pending
Notify FS of slow DM application processing.
dm_set_inherit
Indicate that an attribute is inheritable.
dm_symlink_by_handle
Define a symbolic link using a DM handle.
Attribute value DM_CONFIG_TOTAL_ATTRIBUTE_SPACE is per file. The entire space is available for
opaque attributes. Non-opaque attributes (event list and managed regions) use separate space.
Concepts of IBM Spectrum Scale Data Management API for GPFS
The XDSM standard is intended mainly for a single-node environment. Some of the key concepts in the
standard such as sessions, event delivery, mount and unmount, and failure and recovery, are not well
defined for a multiple-node environment such as GPFS.
For a list of restrictions and coexistence considerations, see “Usage restrictions on DMAPI functions” on
page 810.
All DMAPI APIs must be called from nodes that are in the cluster where the file system is created.
Key concepts of DMAPI for GPFS include these areas:
• “Sessions” on page 800
• “Data management events” on page 800
• “Mount and unmount” on page 802
• “Tokens and access rights” on page 803
• “Parallelism in Data Management applications” on page 803
• “Data Management attributes” on page 804
• “Support for NFS” on page 804
• “Quota” on page 805
• “Memory mapped files” on page 805
Sessions
In GPFS, a session is associated only with the node on which the session was created. This node is known
as the session node.
Events are generated at any node where the file system is mounted. The node on which a given event is
generated is called the source node of that event. The event is delivered to a session queue on the session
node.
There are restrictions as to which DMAPI functions can and cannot be called from a node other than the
session node. In general, functions that change the state of a session or event can only be called on the
session node. In addition, the maximum number of DMAPI sessions that can be created on a node is
4000. See “Usage restrictions on DMAPI functions” on page 810 for details.
Session ids are unique over time within a GPFS cluster. When an existing session is assumed, using
dm_create_session, the new session id returned is the same as the old session id.
A session fails when the GPFS daemon fails on the session node. Unless this is a total failure of GPFS on
all nodes, the session is recoverable. The DM application is expected to assume the old session, possibly
on another node. This will trigger the reconstruction of the session queue. All pending synchronous events
from surviving nodes are resubmitted to the recovered session queue. Such events will have the same
token id as before the failure, except mount events. Asynchronous events, on the other hand, are lost
when the session fails. See “Failure and recovery of IBM Spectrum Scale Data Management API for GPFS”
on page 829 for information on failure and recovery.
Reliable DMAPI destroy events
A metadata destroy event is generated when the operating system has destroyed an object. This type of
event is different from a remove event, which is a namespace event and is not related to the destruction
of an object. A reliable destroy event supports synchronous destroy events in the same way that other
synchronous events do. When a synchronous event is generated, a user process is suspended in the
kernel; it will be suspended until a DM application issues an explicit response to the event. The DM
application at the session that supports the reliable destroy event must be capable of handling the
synchronous destroy event. In other words, it must respond to the DM_EVENT_DESTROY event with
DM_RESPOND_EVENT. Otherwise, the event will wait forever at the session node for a response. Based on
this, it is recommended that the cluster not be made up of nodes that are running back-level code and
new code, because the destroy event is not reliable in a mixed environment.
Note: The DM_REMOTE_MOUNT flag is redundant in the dm_mount_event structure obtained from the
mount event (as opposed to the dm_get_mountinfo function).
• On a given session node, multiple DM application threads can access the same file in parallel, using the
same session. There is no limit on the number of threads that can invoke DMAPI functions
simultaneously on each node.
• Multiple sessions, each with event dispositions for a different file system, can be created on separate
nodes. Thus, files in different file systems can be accessed independently and simultaneously, from
different session nodes.
• Dispositions for events of the same file system can be partitioned among multiple sessions, each on a
different node. This distributes the management of one file system among several session nodes.
• Although GPFS routes all events to a single session node, data movement may occur on multiple nodes.
The function calls dm_read_invis, dm_write_invis, dm_probe_hole, and dm_punch_hole are
honored from a root process on another node, provided it presents a session ID for an established
session on the session node.
A DM application may create a worker process, which exists on any node within the GPFS cluster. This
worker process can move data to or from GPFS using the dm_read_invis and dm_write_invis
functions. The worker processes must adhere to these guidelines:
1. They must run as root.
2. They must present a valid session ID that was obtained on the session node.
3. All writes to the same file which are done in parallel must be done in multiples of the file system
block size, to allow correct management of disk blocks on the writes.
4. No DMAPI calls other than dm_read_invis, dm_write_invis, dm_probe_hole, and
dm_punch_hole may be issued on nodes other than the session node. This means that any rights
required on a file must be obtained within the session on the session node, prior to the data
movement.
5. There is no persistent state on the nodes hosting the worker process. It is the responsibility of the
DM application to recover any failure which results from the failure of GPFS or the data movement
process.
Quota
GPFS supports user quota. When dm_punch_hole is invoked, the file owner's quota is adjusted by the
disk space that is freed. The quota is also adjusted when dm_write_invis is invoked and additional disk
space is consumed.
Since dm_write_invis runs with root credentials, it will never fail due to insufficient quota. However, it
is possible that the quota of the file owner will be exceeded as a result of the invisible write. In that case
the owner will not be able to perform further file operations that consume quota.
dmapi.h
This header file must be included in the source files of the DM application.
The file is installed in directory: /usr/lpp/mmfs/include.
dmapi_types.h
The header file that contains the C declarations of the data types for the DMAPI functions and event
messages.
The header file dmapi.h includes this header file.
The file is installed in directory: /usr/lpp/mmfs/include.
libdmapi.a
The library that contains the DMAPI functions.
The library libdmapi.a consists of a single shared object, which is built with auto-import of the
system calls that are listed in the export file dmapi.exp.
The file is installed in directory: /usr/lpp/mmfs/lib.
dmapi.exp
The export file that contains the DMAPI system call names.
The file dmapi.exp needs to be explicitly used only if the DM application is to be explicitly built with
static binding, using the binder options -bnso -bI:dmapi.exp.
The file is installed in directory: /usr/lpp/mmfs/lib.
dmapicalls, dmapicalls64
Module loaded during processing of the DMAPI functions.
The module is installed in directory: /usr/lpp/mmfs/bin.
Notes:
• On Linux nodes running DMAPI, the required files libdmapi.a, dmapi.exp, dmapicalls, and
dmapicalls64 are replaced by libdmapi.so.
• If you are compiling with a non-IBM compiler on AIX nodes, you must compile DMAPI applications with
-D_AIX.
Enabling DMAPI for a file system
DMAPI must be enabled individually for each file system.
DMAPI can be enabled for a file system when the file system is created, using the -z yes option on the
mmcrfs command. The default is -z no. The setting can be changed when the file system is not mounted
anywhere, using the -z yes | no option on the mmchfs command. The setting is persistent.
The current setting can be queried using the -z option on the mmlsfs command.
While DMAPI is disabled for a given file system, no events are generated by file operations of that file
system. Any DMAPI function calls referencing that file system fail with an EPERM error.
When mmchfs -z no is used to disable DMAPI, existing event lists, extended attributes, and managed
regions in the given file system remain defined, but will be ignored until DMAPI is re-enabled. The
command mmchfs -z no should be used with caution, since punched holes, if any, are no longer protected
by managed regions.
For more information about GPFS commands, see Chapter 1, “Command reference,” on page 1.
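For example, to enable DMAPI for an existing, unmounted file system and to verify the setting, commands
similar to the following can be used (fs1 is a placeholder device name):
mmchfs fs1 -z yes
mmlsfs fs1 -z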
DMEV_ISDISJ(eset1, eset2)
Check if eset1 and eset2 are disjoint
DMEV_ISSUB(eset1, eset2)
Check if eset1 is a subset of eset2
DMEV_NORM(eset)
Normalize the internal format of eset, clearing all unused bits
• DMAPI for GPFS provides a set of macros for comparison of token ids (value of type dm_token_t).
DM_TOKEN_EQ (x,y)
Check if x and y are the same
DM_TOKEN_NE (x,y)
Check if x and y are different
DM_TOKEN_LT (x,y)
Check if x is less than y
DM_TOKEN_GT (x,y)
Check if x is greater than y
DM_TOKEN_LE (x,y)
Check if x is less than or equal to y
DM_TOKEN_GE (x,y)
Check if x is greater than or equal to y
• For C declarations of all functions in DMAPI for GPFS, refer to the dmapi.h file located in the /usr/lpp/
mmfs/include directory.
dm_handle_to_snap
Extracts a snapshot ID from a handle.
Synopsis
int dm_handle_to_snap(
void *hanp, /* IN */
size_t hlen, /* IN */
dm_snap_t *isnapp /* OUT */
);
Description
Use the dm_handle_to_snap function to extract a snapshot ID from a handle. dm_handle_to_snap()
is a GPFS-specific DMAPI function. It is not part of the open standard.
Parameters
void *hanp (IN)
A pointer to an opaque DM handle previously returned by DMAPI.
size_t hlen (IN)
The length of the handle in bytes.
dm_snap_t *isnapp (OUT)
A pointer to the snapshot ID.
Return values
Zero is returned on success. On error, -1 is returned, and the global errno is set to one of the following
values:
[EBADF]
The file handle does not refer to an existing or accessible object.
[EFAULT]
The system detected an invalid address in attempting to use an argument.
[EINVAL]
The argument token is not a valid token.
[ENOMEM]
DMAPI could not obtain the required resources to complete the call.
[ENOSYS]
Function is not supported by the DM implementation.
[EPERM]
The caller does not hold the appropriate privilege.
See also
“dm_make_xhandle” on page 814
dm_make_xhandle
Converts a file system ID, inode number, inode generation count, and snapshot ID into a handle.
Synopsis
int dm_make_xhandle(
dm_fsid_t *fsidp, /* IN */
dm_ino_t *inop, /* IN */
dm_igen_t *igenp, /* IN */
dm_snap_t *isnapp, /* IN */
void **hanpp, /* OUT */
size_t *hlenp /* OUT */
);
Description
Use the dm_make_xhandle() function to convert a file system ID, inode number, inode generation
count, and snapshot ID into a handle. dm_make_xhandle() is a GPFS-specific DMAPI function. It is not
part of the open standard.
Parameters
dm_fsid_t *fsidp (IN)
The file system ID.
dm_ino_t *inop (IN)
The inode number.
dm_igen_t *igenp (IN)
The inode generation count.
dm_snap_t *isnapp (IN)
The snapshot ID.
void **hanpp (OUT)
A DMAPI initialized pointer that identifies a region of memory containing an opaque DM handle. The
caller is responsible for freeing the allocated memory.
size_t *hlenp (OUT)
The length of the handle in bytes.
Return values
Zero is returned on success. On error, -1 is returned, and the global errno is set to one of the following
values:
[EBADF]
The file handle does not refer to an existing or accessible object.
[EFAULT]
The system detected an invalid address in attempting to use an argument.
[EINVAL]
The argument token is not a valid token.
[ENOMEM]
DMAPI could not obtain the required resources to complete the call.
[ENOSYS]
Function is not supported by the DM implementation.
[EPERM]
The caller does not hold the appropriate privilege.
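The following sketch shows one way to combine dm_make_xhandle(), dm_handle_to_snap(), and the
standard dm_handle_free() call. The fsid, inode number, generation count, and snapshot ID values are
assumed to have been obtained earlier (for example, from a bulk attribute scan), so this is an illustration
rather than a complete program:

#include <dmapi.h>
#include <stdio.h>

/* Build a handle from previously obtained identifiers, then extract
   the snapshot ID again and release the handle storage. */
static int demo_xhandle(dm_fsid_t fsid, dm_ino_t ino,
                        dm_igen_t igen, dm_snap_t snap)
{
    void *hanp = NULL;
    size_t hlen = 0;
    dm_snap_t snapOut;

    if (dm_make_xhandle(&fsid, &ino, &igen, &snap, &hanp, &hlen) != 0) {
        perror("dm_make_xhandle");
        return -1;
    }
    if (dm_handle_to_snap(hanp, hlen, &snapOut) != 0) {
        perror("dm_handle_to_snap");
        dm_handle_free(hanp, hlen);
        return -1;
    }
    dm_handle_free(hanp, hlen);   /* the caller must free the handle memory */
    return 0;
}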
dm_remove_dmattr_nosync
Asynchronously removes the specified attribute.
Synopsis
int dm_remove_dmattr_nosync(
dm_sessid_t sid,
void *hanp,
size_t hlen,
dm_token_t token,
int setdtime,
dm_attrname_t *attrnamep
);
Description
Use the dm_remove_dmattr_nosync function to asynchronously remove the attribute specified by
attrname.
dm_remove_dmattr_nosync is a GPFS-specific DMAPI function; it is not part of the open standard. It
has the same purpose, parameters, and return values as the standard DMAPI dm_remove_dmattr
function, except that the update that it performs is not persistent until some other activity on that file (or
on other files in the file system) happens to flush it to disk. To be certain that your update is made
persistent, use one of the following functions:
• Standard DMAPI dm_sync_by_handle function, which flushes the file data and attributes
• GPFS-specific dm_sync_dmattr_by_handle function, which flushes only the attributes.
Parameters
dm_sessid_t sid (IN)
The identifier for the session of interest.
void *hanp (IN)
The handle for the file for which the attributes should be removed.
size_t hlen (IN)
The length of the handle in bytes.
dm_token_t token (IN)
The token referencing the access right for the handle. The access right must be DM_RIGHT_EXCL, or
the token DM_NO_TOKEN may be used and the interface acquires the appropriate rights.
int setdtime (IN)
If setdtime is non-zero, updates the file's attribute time stamp.
dm_attrname_t *attrnamep (IN)
The attribute to be removed.
Return values
Zero is returned on success. On error, -1 is returned, and the global errno is set to one of the following
values:
[EACCES]
The access right referenced by the token for the handle is not DM_RIGHT_EXCL.
[EBADF]
The file handle does not refer to an existing or accessible object.
[EFAULT]
The system detected an invalid address in attempting to use an argument.
See also
“dm_set_dmattr_nosync” on page 818, “dm_sync_dmattr_by_handle” on page 824
dm_set_dmattr_nosync
Asynchronously creates or replaces the value of the named attribute with the specified data.
Synopsis
int dm_set_dmattr_nosync(
dm_sessid_t sid,
void *hanp,
size_t hlen,
dm_token_t token,
dm_attrname_t *attrnamep,
int setdtime,
size_t buflen,
void *bufp
);
Description
Use the dm_set_dmattr_nosync function to asynchronously create or replace the value of the named
attribute with the specified data.
dm_set_dmattr_nosync is a GPFS-specific DMAPI function; it is not part of the open standard. It has
the same purpose, parameters, and return values as the standard DMAPI dm_set_dmattr function,
except that the update that it performs is not persistent until some other activity on that file (or on other
files in the file system) happens to flush it to disk. To be certain that your update is made persistent, use
one of the following functions:
• Standard DMAPI dm_sync_by_handle function, which flushes the file data and attributes
• GPFS-specific dm_sync_dmattr_by_handle function, which flushes only the attributes.
Parameters
dm_sessid_t sid (IN)
The identifier for the session of interest.
void *hanp (IN)
The handle for the file for which the attributes should be created or replaced.
size_t hlen (IN)
The length of the handle in bytes.
dm_token_t token (IN)
The token referencing the access right for the handle. The access right must be DM_RIGHT_EXCL, or
the token DM_NO_TOKEN may be used and the interface acquires the appropriate rights.
dm_attrname_t *attrnamep (IN)
The attribute to be created or replaced.
int setdtime (IN)
If setdtime is non-zero, updates the file's attribute time stamp.
size_t buflen (IN)
The size of the buffer in bytes.
void *bufp (IN)
The buffer containing the attribute data.
Return values
Zero is returned on success. On error, -1 is returned, and the global errno is set to one of the following
values:
[E2BIG]
The attribute value exceeds one of the implementation defined storage limits.
See also
“dm_remove_dmattr_nosync” on page 816, “dm_sync_dmattr_by_handle” on page 824
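The following sketch illustrates the batching pattern described above: several attribute updates are made
with the nosync call, and a single dm_sync_dmattr_by_handle() call on the last file makes them all
persistent. The session, handle arrays, and the attribute name "hsmtag" are assumptions for illustration;
DM_NO_TOKEN lets the interface acquire the required rights.

#include <dmapi.h>
#include <string.h>

/* Tag each file in a list with a small DM attribute, then flush all of
   the updates with one synchronous call on the last handle. */
static int tag_files(dm_sessid_t sid, void *hanps[], size_t hlens[], int nfiles)
{
    dm_attrname_t attrname;
    const char value[] = "migrated";
    int i;

    memset(&attrname, 0, sizeof(attrname));
    memcpy(attrname.an_chars, "hsmtag", 6);   /* illustrative attribute name */

    for (i = 0; i < nfiles; i++) {
        if (dm_set_dmattr_nosync(sid, hanps[i], hlens[i], DM_NO_TOKEN,
                                 &attrname, 0, sizeof(value), (void *)value) != 0)
            return -1;
    }
    /* One sync on the last file commits all unsynchronized updates on this node. */
    if (nfiles > 0 &&
        dm_sync_dmattr_by_handle(sid, hanps[nfiles - 1],
                                 hlens[nfiles - 1], DM_NO_TOKEN) != 0)
        return -1;
    return 0;
}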
dm_set_eventlist_nosync
Asynchronously sets the list of events to be enabled for an object.
Synopsis
int dm_set_eventlist_nosync(
dm_sessid_t sid,
void *hanp,
size_t hlen,
dm_token_t token,
dm_eventset_t *eventsetp,
u_int maxevent
);
Description
Use the dm_set_eventlist_nosync function to asynchronously set the list of events to be enabled for
an object.
dm_set_eventlist_nosync is a GPFS-specific DMAPI function; it is not part of the open standard. It
has the same purpose, parameters, and return values as the standard DMAPI dm_set_eventlist
function, except that the update that it performs is not persistent until some other activity on that file (or
on other files in the file system) happens to flush it to disk. To be certain that your update is made
persistent, use one of the following functions:
• Standard DMAPI dm_sync_by_handle function, which flushes the file data and attributes
• GPFS-specific dm_sync_dmattr_by_handle function, which flushes only the attributes.
Parameters
dm_sessid_t sid (IN)
The identifier for the session of interest.
void *hanp (IN)
The handle for the object. The handle can be either the system handle or a file handle.
size_t hlen (IN)
The length of the handle in bytes.
dm_token_t token (IN)
The token referencing the access right for the handle. The access right must be DM_RIGHT_EXCL, or
the token DM_NO_TOKEN may be used and the interface acquires the appropriate rights.
dm_eventset_t *eventsetp (IN)
The list of events to be enabled for the object.
u_int maxevent (IN)
The number of events to be checked for dispositions in the event set. The events from 0 to
maxevent-1 are examined.
Return values
Zero is returned on success. On error, -1 is returned, and the global errno is set to one of the following
values:
[EACCES]
The access right referenced by the token for the handle is not DM_RIGHT_EXCL.
[EBADF]
The file handle does not refer to an existing or accessible object.
[EFAULT]
The system detected an invalid address in attempting to use an argument.
See also
“dm_sync_dmattr_by_handle” on page 824
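The following sketch shows one way to enable data events for a file with the asynchronous variant.
The session and handle are assumed to exist already, and the DMEV_ZERO and DMEV_SET macros and the
DM_EVENT_* constants are the standard XDSM definitions from dmapi.h:

#include <dmapi.h>

/* Enable read, write, and truncate events on a file, without forcing
   the update to disk immediately. */
static int enable_data_events(dm_sessid_t sid, void *hanp, size_t hlen)
{
    dm_eventset_t eventset;

    DMEV_ZERO(eventset);
    DMEV_SET(DM_EVENT_READ, eventset);
    DMEV_SET(DM_EVENT_WRITE, eventset);
    DMEV_SET(DM_EVENT_TRUNCATE, eventset);

    return dm_set_eventlist_nosync(sid, hanp, hlen, DM_NO_TOKEN,
                                   &eventset, DM_EVENT_MAX);
}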
dm_set_region_nosync
Asynchronously replaces the set of managed regions for a file.
Synopsis
int dm_set_region_nosync(
dm_sessid_t sid,
void *hanp,
size_t hlen,
dm_token_t token,
u_int nelem,
dm_region_t *regbufp,
dm_boolean_t *exactflagp
);
Description
Use the dm_set_region_nosync function to asynchronously replace the set of managed regions for a
file.
dm_set_region_nosync is a GPFS-specific DMAPI function; it is not part of the open standard. It has
the same purpose, parameters, and return values as the standard DMAPI dm_set_region function,
except that the update that it performs is not persistent until some other activity on that file (or on other
files in the file system) happens to flush it to disk. To be certain that your update is made persistent, use
one of the following functions:
• Standard DMAPI dm_sync_by_handle function, which flushes the file data and attributes
• GPFS-specific dm_sync_dmattr_by_handle function, which flushes only the attributes.
Parameters
dm_sessid_t sid (IN)
The identifier for the session of interest.
void *hanp (IN)
The handle for the regular file to be affected.
size_t hlen (IN)
The length of the handle in bytes.
dm_token_t token (IN)
The token referencing the access right for the handle. The access right must be DM_RIGHT_EXCL, or
the token DM_NO_TOKEN may be used and the interface acquires the appropriate rights.
u_int nelem (IN)
The number of input regions in regbufp. If nelem is 0, then all existing managed regions are cleared.
dm_region_t *regbufp (IN)
A pointer to the structure defining the regions to be set. May be NULL if nelem is zero.
dm_boolean_t *exactflagp (OUT)
If DM_TRUE, the file system did not alter the requested managed region set.
Valid values for the rg_flags field of the region structure are created by OR'ing together one or more of
the following values:
DM_REGION_READ
Enable synchronous event for read operations that overlap this managed region.
DM_REGION_WRITE
Enable synchronous event for write operations that overlap this managed region.
DM_REGION_TRUNCATE
Enable synchronous event for truncate operations that overlap this managed region.
Return values
Zero is returned on success. On error, -1 is returned, and the global errno is set to one of the following
values:
[E2BIG]
The number of regions specified by nelem exceeded the implementation capacity.
[EACCES]
The access right referenced by the token for the handle is not DM_RIGHT_EXCL.
[EBADF]
The file handle does not refer to an existing or accessible object.
[EFAULT]
The system detected an invalid address in attempting to use an argument.
[EINVAL]
The argument token is not a valid token.
[EINVAL]
The file handle does not refer to a regular file.
[EINVAL]
The regions passed in are not valid because they overlap or some other problem.
[EINVAL]
The session is not valid.
[EIO]
An I/O error resulted in failure of operation.
[ENOMEM]
The DMAPI could not acquire the required resources to complete the call.
[EPERM]
The caller does not hold the appropriate privilege.
[EROFS]
The operation is not allowed on a read-only file system.
See also
“dm_sync_dmattr_by_handle” on page 824
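The following sketch establishes a single managed region that covers the whole file and generates read,
write, and truncate events. The session and handle are assumed, and the dm_region_t field names
(rg_offset, rg_size, rg_flags) and the "rg_size of zero extends to end of file" behavior are the standard
XDSM definitions:

#include <dmapi.h>
#include <string.h>

/* Set one managed region covering the entire file, generating
   synchronous events on read, write, and truncate. */
static int set_whole_file_region(dm_sessid_t sid, void *hanp, size_t hlen)
{
    dm_region_t region;
    dm_boolean_t exactflag;

    memset(&region, 0, sizeof(region));
    region.rg_offset = 0;
    region.rg_size   = 0;   /* in XDSM, zero means the region extends to end of file */
    region.rg_flags  = DM_REGION_READ | DM_REGION_WRITE | DM_REGION_TRUNCATE;

    return dm_set_region_nosync(sid, hanp, hlen, DM_NO_TOKEN,
                                1, &region, &exactflag);
}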
dm_sync_dmattr_by_handle
Synchronizes one or more files' in-memory attributes with those on the physical medium.
Synopsis
int dm_sync_dmattr_by_handle(
dm_sessid_t sid,
void *hanp,
size_t hlen,
dm_token_t token
);
Description
Use the dm_sync_dmattr_by_handle function to synchronize one or more files' in-memory attributes
with those on the physical medium.
dm_sync_dmattr_by_handle is a GPFS-specific DMAPI function; it is not part of the open standard. It
has the same purpose, parameters, and return values as the standard DMAPI dm_sync_by_handle
function, except that it flushes only the attributes, not the file data.
Like dm_sync_by_handle, dm_sync_dmattr_by_handle commits all previously unsynchronized
updates for that node, not just the updates for one file. Therefore, if you update a list of files and call
dm_sync_dmattr_by_handle on the last file, the attribute updates to all of the files in the list are made
persistent.
Parameters
dm_sessid_t sid (IN)
The identifier for the session of interest.
void *hanp (IN)
The handle for the file whose attributes are to be synchronized.
size_t hlen (IN)
The length of the handle in bytes.
dm_token_t token (IN)
The token referencing the access right for the handle. The access right must be DM_RIGHT_EXCL, or
the token DM_NO_TOKEN may be used and the interface acquires the appropriate rights.
Return values
Zero is returned on success. On error, -1 is returned, and the global errno is set to one of the following
values:
[EACCES]
The access right referenced by the token for the handle is not DM_RIGHT_EXCL.
[EBADF]
The file handle does not refer to an existing or accessible object.
[EFAULT]
The system detected an invalid address in attempting to use an argument.
[EINVAL]
The argument token is not a valid token.
[ENOMEM]
The DMAPI could not acquire the required resources to complete the call.
[ENOSYS]
The DMAPI implementation does not support this optional function.
See also
“dm_remove_dmattr_nosync” on page 816, “dm_set_dmattr_nosync” on page 818,
“dm_set_eventlist_nosync” on page 820, and “dm_set_region_nosync” on page 822
– dm_handle_to_igen
– dm_handle_to_ino
– dm_handle_to_snap
• dm_handle_to_fshandle converts a file handle to a file system handle without checking the validity
of either handle.
• dm_handle_is_valid does not check if the handle references a valid file. It verifies only that the
internal format of the handle is correct.
• dm_init_attrloc ignores all of its arguments, except the output argument locp. In DMAPI for GPFS,
the location pointer is initialized to a constant. Validation of the session, token, and handle arguments is
done by the bulk access functions.
• When dm_query_session is called on a node other than the session node, it returns only the first eight
bytes of the session information string.
• dm_create_session can be used to move an existing session to another node, if the current session
node has failed. The call must be made on the new session node. See “Failure and recovery of IBM
Spectrum Scale Data Management API for GPFS” on page 829 for details on session node failure and
recovery.
• Assuming an existing session, using dm_create_session does not change the session id. If the
argument sessinfop is NULL, the session information string is not changed.
• The argument maxevent in the functions dm_set_disp and dm_set_eventlist is ignored. In GPFS
the set of events is implemented as a bitmap, containing a bit for each possible event.
• The value pointed to by the argument nelemp, on return from the functions dm_get_eventlist and
dm_get_config_events, is always DM_EVENT_MAX-1. The argument nelem in these functions is
ignored.
• The dt_nevents field in the dm_stat_t structure, which is returned by the dm_get_fileattr and
dm_get_bulkall functions, has a value of DM_EVENT_MAX-1 when the file has a file-system–wide
event enabled by calling the dm_set_eventlist function. The value will always be 3 when there is no
file-system–wide event enabled. A value of 3 indicates that there could be a managed region enabled
for the specific file, which might have enabled a maximum of three events: READ, WRITE, and
TRUNCATE.
• The functions dm_get_config and dm_get_config_events ignore the arguments hanp and hlen.
This is because the configuration is not dependent on the specific file or file system.
• The function dm_set_disp, when called with the global handle, ignores any events in the event set
being presented, except the mount event. When dm_set_disp is called with a file system handle, it
ignores the mount event.
• The function dm_handle_hash, when called with an individual file handle, returns the inode number of
the file. When dm_handle_hash is called with a file system handle, it returns the value 0.
• The function dm_get_mountinfo returns two additional flags in the me_mode field in the
dm_mount_event structure. The flags are DM_MOUNT_LOCAL and DM_MOUNT_REMOTE. See “Mount and
unmount” on page 802 for details.
Table 34. Specific DMAPI functions and associated error codes.
Name of function - Error codes and descriptions
dm_downgrade_right(), dm_upgrade_right()
EINVAL - The session or token is not valid.
dm_get_region()
EPERM - The caller does not hold the appropriate privilege.
dm_init_service()
EFAULT - The system detected an invalid address in attempting to use an argument.
dm_move_event(), dm_respond_event()
EINVAL - The token is not valid.
dm_find_eventmsg(), dm_get_bulkall(), dm_get_bulkattr(), dm_get_dirattrs(), dm_get_events(),
dm_get_mountinfo(), dm_getall_disp(), dm_getall_dmattr(), dm_handle_to_path(),
dm_get_alloc_info(), dm_getall_sessions(), dm_getall_tokens()
EINVAL - The argument nelem is too large; DMAPI cannot acquire sufficient resources.
Single-node failure
In DMAPI for GPFS, single-node failure means that DMAPI resources are lost on the failing node, but not
on any other node.
The most common single-node failure is when the local GPFS daemon fails. This renders any GPFS file
system at that node inaccessible. Another possible single-node failure is file system forced unmount.
When just an individual file system is forced unmounted on some node, its resources are lost, but the
sessions on that node, if any, survive.
Single-node failure has a different effect when it occurs on a session node or on a source node:
session node failure
When the GPFS daemon fails, all session queues are lost, as well as all nonpersistent local file system
resources, particularly DM access rights. The DM application may or may not have failed. The missing
resources may in turn cause DMAPI function calls to fail with errors such as ENOTREADY or ESRCH.
Events generated at other source nodes remain pending despite any failure at the session node. Also,
client threads remain blocked on such events.
source node failure
Events generated by that node are obsolete. If such events have already been enqueued at the
session node, the DM application will process them, even though this may be redundant since no
client is waiting for the response.
According to the XDSM standard, sessions are not persistent. This is inadequate for GPFS. Sessions must
be persistent to the extent of enabling recovery from single-node failures. This is in compliance with a
basic GPFS premise that single-node failures do not affect file access on surviving nodes. Consequently,
after session node failure, the session queue and the events on it must be reconstructed, possibly on
another node.
Session recovery is triggered by the actions of the DM application. The scenario depends on whether or
not the DM application itself has failed.
If the DM application has failed, it must be restarted, possibly on another node, and assume the old
session by id. This will trigger reconstruction of the session queue and the events on it, using backup
information replicated on surviving nodes. The DM application may then continue handling events. The
session id is never changed when a session is assumed.
If the DM application itself survives, it will notice that the session has failed by getting certain error codes
from DMAPI function calls (ENOTREADY, ESRCH). The application could then be moved to another node
and recover the session queue and events on it. Alternatively, the application could wait for the GPFS
daemon to recover. There is also a possibility that the daemon will recover before the DM application even
notices the failure. In these cases, session reconstruction is triggered when the DM application invokes
the first DMAPI function after daemon recovery.
Event recovery
Synchronous events are recoverable after session failure.
The state of synchronous events is maintained both at the source node and at the session node. When the
old session is assumed, pending synchronous events are resubmitted by surviving source nodes.
All the events originating from the session node itself are lost during session failure, including user events
generated by the DM application. All file operations on the session node fail with the ESTALE error code.
When a session fails, all of its tokens become obsolete. After recovery, the dm_getall_tokens function
returns an empty list of tokens, and it is therefore impossible to identify events that were outstanding
when the failure occurred. All recovered events return to the initial non-received state, and must be
explicitly received again. The token id of a recovered event is the same as prior to the failure (except for
the mount event).
Deferred deletions
The asynchronous recovery code supports deferred deletions if there are no external mounts at the time
of recovery.
Once a node successfully generates a mount event for an external mount, the sgmgr node starts
deferred deletions if they are needed. Internal mounts bypass deferred deletions if the file system
is DMAPI enabled.
DM application failure
If only the DM application fails, the session itself remains active, events remain pending, and client
threads remain blocked waiting for a response. New events will continue to arrive at the session queue.
Note: GPFS is unable to detect that the DM application has failed.
The failed DM application must be recovered on the same node, and continue handling the events. Since
no DMAPI resources are lost in this case, there is little purpose in moving the DM application to another
node. Assuming an existing session on another node is not permitted in GPFS, except after session node
failure.
If the DM application fails simultaneously with the session node, the gpfsready shell script can be used
to restart the DM application on the failed node. See “Initializing the Data Management application” on
page 808. In the case of simultaneous failures, the DM application can also be moved to another node
and assume the failed session there. See “Single-node failure” on page 829.
gpfs_acl_t structure
Contains buffer mapping for the gpfs_getacl() and gpfs_putacl() subroutines.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_acl
{
gpfs_aclLen_t acl_len; /* Total length of this ACL in bytes */
gpfs_aclLevel_t acl_level; /* Reserved (must be zero) */
gpfs_aclVersion_t acl_version; /* POSIX or NFS4 ACL */
gpfs_aclType_t acl_type; /* Access, Default, or NFS4 */
gpfs_aclCount_t acl_nace; /* Number of Entries that follow */
union
{
gpfs_ace_v1_t ace_v1[1]; /* when GPFS_ACL_VERSION_POSIX */
gpfs_ace_v4_t ace_v4[1]; /* when GPFS_ACL_VERSION_NFS4 */
v4Level1_t v4Level1; /* when GPFS_ACL_LEVEL_V4FLAGS */
};
} gpfs_acl_t;
Description
The gpfs_acl_t structure contains size, version, and ACL type information for the gpfs_getacl() and
gpfs_putacl() subroutines.
Members
acl_len
The total length (in bytes) of this gpfs_acl_t structure.
acl_level
Reserved for future use. Currently must be zero.
acl_version
This field contains the version of the GPFS ACL. GPFS supports the following ACL versions:
GPFS_ACL_VERSION_POSIX and GPFS_ACL_VERSION_NFS4. On input to the gpfs_getacl()
subroutine, set this field to zero.
acl_type
On input to the gpfs_getacl() subroutine, set this field to:
• Either GPFS_ACL_TYPE_ACCESS or GPFS_ACL_TYPE_DEFAULT for POSIX ACLs
• GPFS_ACL_TYPE_NFS4 for NFS ACLs.
These constants are defined in the gpfs.h header file.
acl_nace
The number of ACL entries that are in the array (ace_v1 or ace_v4).
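The following sketch shows one way to read a file's access ACL into a gpfs_acl_t buffer by using the
gpfs_getacl() subroutine. The buffer size and the GPFS_GETACL_STRUCT flag are assumptions taken from
the gpfs_getacl() interface rather than from this topic, and the program must be linked with -lgpfs:

#include <gpfs.h>
#include <stdio.h>
#include <stdlib.h>

#define BUFSZ 4096   /* illustrative buffer size */

/* Read the access ACL of a file and report how many entries it contains. */
static int show_acl_summary(const char *path)
{
    gpfs_acl_t *acl = (gpfs_acl_t *)malloc(BUFSZ);

    if (acl == NULL)
        return -1;
    acl->acl_len     = BUFSZ;                 /* total buffer length */
    acl->acl_level   = 0;                     /* reserved, must be zero */
    acl->acl_version = 0;                     /* let GPFS fill in the version */
    acl->acl_type    = GPFS_ACL_TYPE_ACCESS;  /* POSIX access ACL */

    if (gpfs_getacl(path, GPFS_GETACL_STRUCT, acl) != 0) {
        perror("gpfs_getacl");
        free(acl);
        return -1;
    }
    printf("%s: %d ACL entries, version %d\n",
           path, (int)acl->acl_nace, (int)acl->acl_version);
    free(acl);
    return 0;
}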
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_clone_copy() subroutine
Creates a file clone of a read-only clone parent file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_clone_copy(const char *sourcePathP, const char *destPathP);
Description
The gpfs_clone_copy() subroutine creates a writeable file clone from a read-only clone parent file.
Parameters
sourcePathP
The path of a read-only source file to clone. The source file can be a file in a snapshot or a clone
parent file created with the gpfs_clone_snap() subroutine.
destPathP
The path of the destination file to create. The destination file will become the file clone.
Exit status
If the gpfs_clone_copy() subroutine is successful, it returns a value of 0 and creates a file clone from
the clone parent.
If the gpfs_clone_copy() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EACCES
Permission denied when writing to the destination path or reading from the source path.
EEXIST
The destination file already exists.
EFAULT
The input argument points outside the accessible address space.
EINVAL
The source or destination does not refer to a regular file or a GPFS file system.
EISDIR
The specified destination file is a directory.
ENAMETOOLONG
The source or destination path name is too long.
ENOENT
The source file does not exist.
ENOSPC
The file system has run out of disk space.
ENOSYS
The gpfs_clone_copy() subroutine is not available.
EPERM
The source file is a directory or is not a regular file.
EXDEV
The source file and destination file are not in the same file system.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
Related reference
“gpfs_clone_snap() subroutine” on page 840
Creates a read-only clone parent from a source file.
“gpfs_clone_split() subroutine” on page 842
Splits a file clone from its clone parent.
“gpfs_clone_unsnap() subroutine” on page 844
Changes a clone parent with no file clones back to a regular file.
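The following short sketch combines gpfs_clone_snap() and gpfs_clone_copy(): a read-only clone parent
is created from a source file, and a writable clone of that parent is then created. The path names are
placeholders, and the program must be linked with -lgpfs:

#include <gpfs.h>
#include <stdio.h>

/* Create a clone parent from a source file, then create a writable
   file clone from that parent. All paths are illustrative. */
static int make_clone(void)
{
    const char *src    = "/gpfs/fs1/golden/image.img";
    const char *parent = "/gpfs/fs1/golden/image.parent";
    const char *clone  = "/gpfs/fs1/vm1/image.img";

    if (gpfs_clone_snap(src, parent) != 0) {
        perror("gpfs_clone_snap");
        return -1;
    }
    if (gpfs_clone_copy(parent, clone) != 0) {
        perror("gpfs_clone_copy");
        return -1;
    }
    return 0;
}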
gpfs_clone_snap() subroutine
Creates a read-only clone parent from a source file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_clone_snap(const char *sourcePathP, const char *destPathP);
Description
The gpfs_clone_snap() subroutine creates a read-only clone parent from a source file.
Parameters
sourcePathP
The path of the source file to clone.
destPathP
The path of the destination file to create. The destination file will become a read-only clone parent file.
If destPathP is NULL, then the source file will be changed in place into a read-only clone parent.
When using this method to create a clone parent, the specified file cannot be open for writing or have
hard links.
Exit status
If the gpfs_clone_snap() subroutine is successful, it returns a value of 0 and creates a read-only clone
parent from the source file.
If the gpfs_clone_snap() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EACCES
Permission denied when writing to the destination path or reading from the source path.
EEXIST
The destination file already exists.
EFAULT
The input argument points outside accessible address space.
EINVAL
The source or destination does not refer to a regular file or a GPFS file system.
EISDIR
The specified destination file is a directory.
ENAMETOOLONG
The source or destination path name is too long.
ENOENT
The source file does not exist.
ENOSPC
The file system has run out of disk space.
ENOSYS
The gpfs_clone_snap() subroutine is not available.
EPERM
The source file is a directory or is not a regular file, or you tried to create a clone file with depth greater
than 1000.
EXDEV
The source file and destination file are not in the same file system.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
Related reference
“gpfs_clone_copy() subroutine” on page 838
Creates a file clone of a read-only clone parent file.
“gpfs_clone_split() subroutine” on page 842
Splits a file clone from its clone parent.
“gpfs_clone_unsnap() subroutine” on page 844
Changes a clone parent with no file clones back to a regular file.
gpfs_clone_split() subroutine
Splits a file clone from its clone parent.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_clone_split(gpfs_file_t fileDesc, int ancLimit);
Description
The gpfs_clone_split() subroutine splits a file clone from its clone parent. The gpfs_declone()
subroutine must be called first to remove all references to the clone parent.
Parameters
fileDesc
File descriptor for the file clone to split from its clone parent.
ancLimit
The ancestor limit specified with one of these values:
GPFS_CLONE_ALL
Remove references to all clone parents.
GPFS_CLONE_PARENT_ONLY
Remove references from the immediate clone parent only.
Exit status
If the gpfs_clone_split() subroutine is successful, it returns a value of 0.
If the gpfs_clone_split() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EACCES
Permission denied when writing to the target file.
EBADF
The file descriptor is not valid or is not a GPFS file.
EINVAL
An argument to the function was not valid.
ENOSYS
The gpfs_clone_split() subroutine is not available.
EPERM
The file descriptor does not refer to a regular file or a file clone.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
Related reference
“gpfs_clone_copy() subroutine” on page 838
Creates a file clone of a read-only clone parent file.
“gpfs_clone_snap() subroutine” on page 840
Creates a read-only clone parent from a source file.
“gpfs_clone_unsnap() subroutine” on page 844
Changes a clone parent with no file clones back to a regular file.
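The following sketch shows the split sequence described above: gpfs_declone() is called in a loop to copy
the parent blocks into the clone, and gpfs_clone_split() then detaches the clone. The path, the block count
per call, and the use of the open() file descriptor as a gpfs_file_t are assumptions for illustration:

#include <gpfs.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Remove all references to the clone parent(s), then split the clone. */
static int split_clone(const char *clonePath)
{
    int fd = open(clonePath, O_RDWR);
    gpfs_off64_t offset = 0;

    if (fd < 0) {
        perror("open");
        return -1;
    }
    /* Copy parent blocks into the clone, 1024 blocks at a time, until
       gpfs_declone() reports that no blocks remain (offset == -1). */
    while (offset != -1) {
        if (gpfs_declone(fd, GPFS_CLONE_ALL, 1024, &offset) != 0) {
            perror("gpfs_declone");
            close(fd);
            return -1;
        }
    }
    if (gpfs_clone_split(fd, GPFS_CLONE_ALL) != 0) {
        perror("gpfs_clone_split");
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}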
gpfs_clone_unsnap() subroutine
Changes a clone parent with no file clones back to a regular file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_clone_unsnap(gpfs_file_t fileDesc);
Description
The gpfs_clone_unsnap() subroutine changes a clone parent with no file clones back to a regular file.
Parameters
fileDesc
File descriptor for the clone parent to convert back to a regular file.
Exit status
If the gpfs_clone_unsnap() subroutine is successful, it returns a value of 0.
If the gpfs_clone_unsnap() subroutine is unsuccessful, it returns a value of -1 and sets the global
error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EACCES
Permission denied when writing to the target file.
EBADF
The file descriptor is not valid or is not a GPFS file.
EINVAL
An argument to the function was not valid.
ENOSYS
The gpfs_clone_unsnap() subroutine is not available.
EPERM
The file descriptor does not refer to a regular file or a clone parent.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
Related reference
“gpfs_clone_copy() subroutine” on page 838
gpfs_close_inodescan() subroutine
Closes an inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
void gpfs_close_inodescan(gpfs_iscan_t *iscan);
Description
The gpfs_close_inodescan() subroutine closes the scan of the inodes in a file system or snapshot
that was opened with the gpfs_open_inodescan() subroutine. The gpfs_close_inodescan()
subroutine frees all storage used for the inode scan and invalidates the iscan handle.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
iscan
Pointer to the inode scan handle.
Exit status
The gpfs_close_inodescan() subroutine returns void.
Exceptions
None.
Error status
None.
Examples
For an example using gpfs_close_inodescan(), see /usr/lpp/mmfs/samples/util/
tsgetusage.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_cmp_fssnapid() subroutine
Compares two file system snapshot IDs.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_cmp_fssnapid(const gpfs_fssnap_id_t *fssnapId1,
const gpfs_fssnap_id_t *fssnapId2,
int *result);
Description
The gpfs_cmp_fssnapid() subroutine compares two snapshot IDs for the same file system to
determine the order in which the two snapshots were taken. The result parameter is set as follows:
• result less than zero indicates that snapshot 1 was taken before snapshot 2.
• result equal to zero indicates that snapshot 1 and 2 are the same.
• result greater than zero indicates that snapshot 1 was taken after snapshot 2.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapId1
File system snapshot ID of the first snapshot.
fssnapId2
File system snapshot ID of the second snapshot.
result
Pointer to an integer indicating the outcome of the comparison.
Exit status
If the gpfs_cmp_fssnapid() subroutine is successful, it returns a value of 0 and the result
parameter is set.
If the gpfs_cmp_fssnapid() subroutine is unsuccessful, it returns a value of -1 and the global error
variable errno is set to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EDOM
The two snapshots cannot be compared because they were taken from two different file systems.
ENOSYS
The gpfs_cmp_fssnapid() subroutine is not available.
GPFS_E_INVAL_FSSNAPID
fssnapId1 or fssnapId2 is not a valid snapshot ID.
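Examples
The following programming segment is a sketch; it assumes that prevId holds the fssnapId saved by a previous backup and that currId was just obtained with gpfs_get_fssnapid_from_fssnaphandle():

gpfs_fssnap_id_t prevId, currId;
int result;
int rc;

rc = gpfs_cmp_fssnapid(&prevId, &currId, &result);
if (rc != 0)
  perror("gpfs_cmp_fssnapid");
else if (result < 0)
  printf("the saved snapshot is older than the current one\n");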
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_declone() subroutine
Removes file clone references to clone parent blocks.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_declone(gpfs_file_t fileDesc, int ancLimit, gpfs_off64_t nBlocks,
gpfs_off64_t *offsetP);
Description
The gpfs_declone() subroutine removes all file clone references to a clone parent by copying the clone
parent blocks to the file clone.
Parameters
fileDesc
The file descriptor for the file clone.
ancLimit
The ancestor limit specified with one of these values:
GPFS_CLONE_ALL
Remove references to all clone parents.
GPFS_CLONE_PARENT_ONLY
Remove references from the immediate clone parent only.
nBlocks
The maximum number of GPFS blocks to copy.
offsetP
A pointer to the starting offset within the file clone. This pointer will be updated to the offset of the
next block to process, or -1 if there are no more blocks.
Exit status
If the gpfs_declone() subroutine is successful, it returns a value of 0.
If the gpfs_declone() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EACCES
Permission denied when writing to the target file.
EBADF
The file descriptor is not valid or is not a GPFS file.
EFAULT
The input argument points outside the accessible address space.
EINVAL
An argument to the function was not valid.
ENOSPC
The file system has run out of disk space.
ENOSYS
The gpfs_declone() subroutine is not available.
EPERM
The file descriptor does not refer to a regular file.
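Examples
A sketch that removes all clone parent references from a file clone, a group of blocks at a time; fd is assumed to be an open descriptor for the file clone:

gpfs_off64_t offset = 0;
int rc = 0;

while (offset != -1)
{
  rc = gpfs_declone(fd, GPFS_CLONE_ALL, 1024, &offset);   /* copy up to 1024 blocks per call */
  if (rc != 0)
    break;                                                /* errno indicates the nature of the error */
}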
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_direntx_t structure
Contains attributes of a GPFS directory entry.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_direntx
{
int d_version; /* this struct's version */
unsigned short d_reclen; /* actual size of this struct including
null terminated variable length d_name */
unsigned short d_type; /* Types are defined below */
gpfs_ino_t d_ino; /* File inode number */
gpfs_gen_t d_gen; /* Generation number for the inode */
char d_name[256]; /* null terminated variable length name */
} gpfs_direntx_t;
Description
The gpfs_direntx_t structure contains the attributes of a GPFS directory entry.
Members
d_version
The version number of this structure.
d_reclen
The actual size of this structure including the null-terminated variable-length d_name field.
To allow some degree of forward compatibility, careful callers should use the d_reclen field for the
size of the structure rather than the sizeof() function.
d_type
The type of the directory entry.
d_ino
The inode number of the file that the directory entry refers to.
d_gen
The generation number for the inode.
d_name
Null-terminated variable-length name of the directory entry.
Examples
For an example using gpfs_direntx_t, see /usr/lpp/mmfs/samples/util/tsfindinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_direntx64_t structure
Contains attributes of a GPFS directory entry.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_direntx64
{
int d_version; /* this struct's version */
unsigned short d_reclen; /* actual size of this struct including
null terminated variable length d_name */
unsigned short d_type; /* Types are defined below */
gpfs_ino64_t d_ino; /* File inode number */
gpfs_gen64_t d_gen; /* Generation number for the inode */
unsigned int d_flags; /* Flags are defined below */
char d_name[1028]; /* null terminated variable length name */
/* (1020+null+7 byte pad to double word) */
/* to handle up to 255 UTF-8 chars */
} gpfs_direntx64_t;
Description
The gpfs_direntx64_t structure contains the attributes of a GPFS directory entry.
Members
d_version
The version number of this structure.
d_reclen
The actual size of this structure including the null-terminated variable-length d_name field.
To allow some degree of forward compatibility, careful callers should use the d_reclen field for the
size of the structure rather than the sizeof() function.
d_type
The type of the directory entry.
d_ino
The inode number of the file that the directory entry refers to.
d_gen
The generation number for the inode.
d_flags
The flags for the directory entry.
d_name
Null-terminated variable-length name of the directory entry.
Examples
See the gpfs_direntx_t example in /usr/lpp/mmfs/samples/util/tsfindinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fcntl() subroutine
Performs operations on an open file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_fcntl(gpfs_file_t fileDesc, void* fcntlArgP);
Description
The gpfs_fcntl() subroutine is used to pass file access pattern information and to control certain file
attributes on behalf of an open file. More than one operation can be requested with a single invocation of
gpfs_fcntl(). The type and number of operations are determined by the second operand, fcntlArgP,
which is a pointer to a data structure built in memory by the application. This data structure consists of:
• A fixed length header, mapped by gpfsFcntlHeader_t.
• A variable list of individual file access hints, directives or other control structures:
– File access hints:
- gpfsAccessRange_t
- gpfsFreeRange_t
- gpfsMultipleAccessRange_t
- gpfsClearFileCache_t
– File access directives:
- gpfsCancelHints_t
– Platform-independent extended attribute operations:
- gpfsGetSetXAttr_t
- gpfsListXAttr_t
– Other file attribute operations:
- gpfsGetDataBlkDiskIdx_t
- gpfsGetFilesetName_t
- gpfsGetReplication_t
- gpfsGetSnapshotName_t
- gpfsGetStoragePool_t
- gpfsRestripeData_t
- gpfsSetReplication_t
- gpfsSetStoragePool_t
These hints, directives and other operations may be mixed within a single gpfs_fcntl() subroutine,
and are performed in the order that they appear. A subsequent hint or directive may cancel out a
preceding one.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fileDesc
The file descriptor identifying the file to which GPFS applies the hints and directives.
fcntlArgP
A pointer to the list of operations to be passed to GPFS.
Exit status
If the gpfs_fcntl() subroutine is successful, it returns a value of 0.
If the gpfs_fcntl() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable
errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EBADF
The file descriptor is not valid.
EINVAL
The file descriptor does not refer to a GPFS file or a regular file.
The system call is not valid.
ENOSYS
The gpfs_fcntl() subroutine is not supported under the current file system format.
Examples
1. This programming segment releases all cache data held by the file handle and tells GPFS that the
application will write the portion of the file with file offsets between 2 GB and 3 GB minus one:
struct
{
  gpfsFcntlHeader_t hdr;
  gpfsClearFileCache_t rel;
  gpfsAccessRange_t acc;
} arg;

arg.hdr.totalLength = sizeof(arg);
arg.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;
arg.hdr.fcntlReserved = 0;
arg.rel.structLen = sizeof(arg.rel);
arg.rel.structType = GPFS_CLEAR_FILE_CACHE;
arg.acc.structLen = sizeof(arg.acc);
arg.acc.structType = GPFS_ACCESS_RANGE;
arg.acc.start = 2LL * 1024LL * 1024LL * 1024LL;
arg.acc.length = 1024 * 1024 * 1024;
arg.acc.isWrite = 1;
rc = gpfs_fcntl(handle, &arg);
2. This programming segment gets the storage pool name and fileset name of a file from GPFS.
struct {
  gpfsFcntlHeader_t hdr;
  gpfsGetStoragePool_t pool;
  gpfsGetFilesetName_t fileset;
} fcntlArg;

fcntlArg.hdr.totalLength = sizeof(fcntlArg.hdr) + sizeof(fcntlArg.pool) +
                           sizeof(fcntlArg.fileset);
fcntlArg.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;
fcntlArg.hdr.fcntlReserved = 0;
fcntlArg.pool.structLen = sizeof(fcntlArg.pool);
fcntlArg.pool.structType = GPFS_FCNTL_GET_STORAGEPOOL;
fcntlArg.fileset.structLen = sizeof(fcntlArg.fileset);
fcntlArg.fileset.structType = GPFS_FCNTL_GET_FILESETNAME;
rc = gpfs_fcntl(fd, &fcntlArg);
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fgetattrs() subroutine
Retrieves all extended file attributes in opaque format.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_fgetattrs(gpfs_file_t fileDesc,
int flags,
void *bufferP,
int bufferSize,
int *attrSizeP);
Description
The gpfs_fgetattrs() subroutine, together with gpfs_fputattrs(), is intended for use by a backup
program to save (gpfs_fgetattrs()) and restore (gpfs_fputattrs()) extended file attributes such
as ACLs, DMAPI attributes, and other information for the file. If the file has no extended attributes, the
gpfs_fgetattrs() subroutine returns a value of 0, but sets attrSizeP to 0 and leaves the contents of
the buffer unchanged.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fileDesc
The file descriptor identifying the file whose extended attributes are being retrieved.
flags
Must have one of the following values:
GPFS_ATTRFLAG_DEFAULT
Saves the attributes for file placement and the currently assigned storage pool.
GPFS_ATTRFLAG_NO_PLACEMENT
Does not save attributes for file placement or the currently assigned storage pool.
GPFS_ATTRFLAG_IGNORE_POOL
Saves attributes for file placement but does not save the currently assigned storage pool.
GPFS_ATTRFLAG_USE_POLICY
Uses the restore policy rules to determine the pool ID.
GPFS_ATTRFLAG_INCL_DMAPI
Includes the DMAPI attributes.
GPFS_ATTRFLAG_FINALIZE_ATTRS
Finalizes immutability attributes.
GPFS_ATTRFLAG_SKIP_IMMUTABLE
Skips immutable attributes.
GPFS_ATTRFLAG_INCL_ENCR
Includes encryption attributes.
GPFS_ATTRFLAG_SKIP_CLONE
Skips clone attributes.
GPFS_ATTRFLAG_MODIFY_CLONEPARENT
Allows modification on the clone parent.
bufferP
Pointer to a buffer to store the extended attribute information.
bufferSize
The size of the buffer that was passed in.
attrSizeP
If successful, returns the actual size of the attribute information that was stored in the buffer. If the
bufferSize was too small, returns the minimum buffer size.
Exit status
If the gpfs_fgetattrs() subroutine is successful, it returns a value of 0.
If the gpfs_fgetattrs() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EBADF
The file descriptor is not valid.
EFAULT
The address is not valid.
EINVAL
The file descriptor does not refer to a GPFS file.
ENOSPC
bufferSize is too small to return all of the attributes. On return, attrSizeP is set to the required
size.
ENOSYS
The gpfs_fgetattrs() subroutine is not supported under the current file system format.
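Examples
A sketch that saves the extended attributes of an open GPFS file; fd is assumed to be an open descriptor:

char attrBuf[4096];
int attrSize, rc;

rc = gpfs_fgetattrs(fd, GPFS_ATTRFLAG_DEFAULT, attrBuf, sizeof(attrBuf), &attrSize);
if (rc != 0 && errno == ENOSPC)
{
  /* attrSize now holds the minimum buffer size; retry with a larger buffer */
}
else if (rc == 0 && attrSize == 0)
{
  /* the file has no extended attributes; the buffer is unchanged */
}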
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fputattrs() subroutine
Sets all the extended file attributes for a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_fputattrs(gpfs_file_t fileDesc,
int flags,
void *bufferP);
Description
The gpfs_fputattrs() subroutine, together with gpfs_fgetattrs(), is intended for use by a backup
program to save (gpfs_fgetattrs()) and restore (gpfs_fputattrs()) all of the extended attributes
of a file. This subroutine also sets the storage pool for the file and sets data replication to the values that
are saved in the extended attributes.
If the saved storage pool is not valid or if the GPFS_ATTRFLAG_IGNORE_POOL flag is set, GPFS will select
the storage pool by matching a PLACEMENT rule using the saved file attributes. If GPFS fails to match a
placement rule or if there are no placement rules installed, GPFS assigns the file to the system storage
pool.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fileDesc
The file descriptor identifying the file whose extended attributes are being set.
flags
Must have one of the following values:
GPFS_ATTRFLAG_DEFAULT
Restores the previously assigned storage pool and previously assigned data replication.
GPFS_ATTRFLAG_NO_PLACEMENT
Does not change storage pool and data replication.
GPFS_ATTRFLAG_IGNORE_POOL
Selects storage pool and data replication by matching the saved attributes to a placement rule
instead of restoring the saved storage pool.
GPFS_ATTRFLAG_USE_POLICY
Uses the restore policy rules to determine the pool ID.
GPFS_ATTRFLAG_INCL_DMAPI
Includes the DMAPI attributes.
GPFS_ATTRFLAG_FINALIZE_ATTRS
Finalizes immutability attributes.
GPFS_ATTRFLAG_SKIP_IMMUTABLE
Skips immutable attributes.
GPFS_ATTRFLAG_INCL_ENCR
Includes encryption attributes.
GPFS_ATTRFLAG_SKIP_CLONE
Skips clone attributes.
GPFS_ATTRFLAG_MODIFY_CLONEPARENT
Allows modification on the clone parent.
Non-placement attributes such as ACLs are always restored, regardless of the value of the flag.
bufferP
A pointer to the buffer containing the extended attributes for the file.
If you specify a value of NULL, all extended ACLs for the file are deleted.
Exit status
If the gpfs_fputattrs() subroutine is successful, it returns a value of 0.
If the gpfs_fputattrs() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EBADF
The file descriptor is not valid.
EINVAL
The buffer pointed to by bufferP does not contain valid attribute data, or the file descriptor does not
refer to a GPFS file.
ENOSYS
The gpfs_fputattrs() subroutine is not supported under the current file system format.
Examples
To copy extended file attributes from file f1 to file f2:
char buf[4096];
int f1, f2, attrSize, rc;
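The rest of the segment might look like the following sketch; f1 and f2 are assumed to be descriptors that are already open on the two GPFS files, and error handling is abbreviated:

rc = gpfs_fgetattrs(f1, GPFS_ATTRFLAG_DEFAULT, buf, sizeof(buf), &attrSize);
if (rc == 0)
  rc = gpfs_fputattrs(f2, GPFS_ATTRFLAG_DEFAULT, buf);
if (rc != 0)
  perror("copying extended attributes");   /* if errno is ENOSPC, attrSize holds the required size */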
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fputattrswithpathname() subroutine
Sets all of the extended file attributes for a file and invokes the policy engine for RESTORE rules.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_fputattrswithpathname(gpfs_file_t fileDesc,
int flags,
void *bufferP,
const char *pathName);
Description
The gpfs_fputattrswithpathname() subroutine sets all of the extended attributes of a file. In
addition, gpfs_fputattrswithpathname() invokes the policy engine using the saved attributes to
match a RESTORE rule to set the storage pool and the data replication for the file. The caller should
include the full path to the file (including the file name) to allow rule selection based on file name or path.
If the file fails to match a RESTORE rule or if there are no RESTORE rules installed, GPFS selects the
storage pool and data replication as it does when calling gpfs_fputattrs().
Note: Compile any program that uses this subroutine with the -lgpfs flag from one of the following
libraries:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fileDesc
Is the file descriptor that identifies the file whose extended attributes are to be set.
flags
Must have one of the following values:
GPFS_ATTRFLAG_DEFAULT
Uses the saved attributes to match a RESTORE rule to set the storage pool and the data
replication for the file.
GPFS_ATTRFLAG_NO_PLACEMENT
Does not change storage pool and data replication.
GPFS_ATTRFLAG_IGNORE_POOL
Checks the file to see if it matches a RESTORE rule. If the file fails to match a RESTORE rule, GPFS
ignores the saved storage pool and selects a pool by matching the saved attributes to a
PLACEMENT rule.
GPFS_ATTRFLAG_USE_POLICY
Uses the restore policy rules to determine the pool ID.
GPFS_ATTRFLAG_INCL_DMAPI
Includes the DMAPI attributes.
GPFS_ATTRFLAG_FINALIZE_ATTRS
Finalizes immutability attributes.
GPFS_ATTRFLAG_SKIP_IMMUTABLE
Skips immutable attributes.
GPFS_ATTRFLAG_INCL_ENCR
Includes encryption attributes.
GPFS_ATTRFLAG_SKIP_CLONE
Skips clone attributes.
GPFS_ATTRFLAG_MODIFY_CLONEPARENT
Allows modification on the clone parent.
Non-placement attributes such as ACLs are always restored, regardless of the value of the flag.
bufferP
A pointer to the buffer containing the extended attributes for the file.
If you specify a value of NULL, all extended ACLs for the file are deleted.
pathName
A pointer to the path name to a file or directory.
Exit status
If the gpfs_fputattrswithpathname() subroutine is successful, it returns a value of 0.
If the gpfs_fputattrswithpathname() subroutine is unsuccessful, it returns a value of -1 and sets
the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EBADF
The file descriptor is not valid.
EINVAL
The buffer to which bufferP points does not contain valid attribute data.
ENOENT
No such file or directory.
ENOSYS
The gpfs_fputattrswithpathname() subroutine is not supported under the current file system
format.
Examples
Refer to “gpfs_fputattrs() subroutine” on page 859 for examples.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_free_fssnaphandle() subroutine
Frees a GPFS file system snapshot handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
void gpfs_free_fssnaphandle(gpfs_fssnap_handle_t *fssnapHandle);
Description
The gpfs_free_fssnaphandle() subroutine frees the snapshot handle that is passed. The return
value is always void.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
File system snapshot handle.
Exit status
The gpfs_free_fssnaphandle() subroutine always returns void.
Exceptions
None.
Error status
None.
Examples
For an example using gpfs_free_fssnaphandle(), see /usr/lpp/mmfs/samples/util/
tstimes.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fssnap_handle_t structure
Contains a handle for a GPFS file system or snapshot.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_fssnap_handle gpfs_fssnap_handle_t;
Description
A file system or snapshot is uniquely identified by an fssnapId of type gpfs_fssnap_id_t. While the
fssnapId is permanent and global, a shorter fssnapHandle is used by the backup application
programming interface to identify the file system and snapshot being accessed. The fssnapHandle, like
a POSIX file descriptor, is volatile and may be used only by the program that created it.
There are three ways to create a file system snapshot handle:
1. By using the name of the file system and snapshot
2. By specifying the path through the mount point
3. By providing an existing file system snapshot ID
Additional subroutines are provided to obtain the permanent, global fssnapId from the fssnapHandle,
or to obtain the path or the names for the file system and snapshot, if they are still available in the file
system.
The file system must be mounted in order to use the backup application programming interface. If the
fssnapHandle is created by the path name, the path may be relative and may specify any file or
directory in the file system. Operations on a particular snapshot are indicated with a path to a file or
directory within that snapshot. If the fssnapHandle is created by name, the file system's unique name
may be specified (for example, fs1) or its device name may be provided (for example, /dev/fs1). To
specify an operation on the active file system, the pointer to the snapshot's name should be set to NULL
or a zero-length string provided.
The name of the directory under which all snapshots appear may be obtained by the
gpfs_get_snapdirname() subroutine. By default this is .snapshots, but it can be changed using the
mmsnapdir command. The gpfs_get_snapdirname() subroutine returns the currently set value,
which is the one that was last set by the mmsnapdir command, or the default, if it was never changed.
Members
gpfs_fssnap_handle
File system snapshot handle
Examples
For an example using gpfs_fssnap_handle_t, see /usr/lpp/mmfs/samples/util/
tsgetusage.c.
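The following fragment illustrates the three ways of creating a handle; the names fs1 and snap1, the mount path /gpfs/fs1, and the previously saved fssnapId in savedId are placeholders, and NULL checks are omitted:

gpfs_fssnap_id_t savedId;               /* assumed to have been filled in from a previous backup */
gpfs_fssnap_handle_t *h1, *h2, *h3;

h1 = gpfs_get_fssnaphandle_by_name("fs1", "snap1");    /* by file system and snapshot name */
h2 = gpfs_get_fssnaphandle_by_path("/gpfs/fs1");       /* by a path through the mount point */
h3 = gpfs_get_fssnaphandle_by_fssnapid(&savedId);      /* by an existing fssnapId */

gpfs_free_fssnaphandle(h1);
gpfs_free_fssnaphandle(h2);
gpfs_free_fssnaphandle(h3);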
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fssnap_id_t structure
Contains a permanent identifier for a GPFS file system or snapshot.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_fssnap_id
{
char opaque[48];
} gpfs_fssnap_id_t;
Description
A file system or snapshot is uniquely identified by an fssnapId of type gpfs_fssnap_id_t. The
fssnapId is a permanent and global identifier that uniquely identifies an active file system or a read-only
snapshot of a file system. Every snapshot of a file system has a unique identifier that is also different from
the identifier of the active file system itself.
The fssnapId is obtained from an open fssnapHandle. Once obtained, the fssnapId should be stored
along with the file system's data for each backup. The fssnapId is required to generate an incremental
backup. The fssnapId identifies the previously backed up file system or snapshot and allows the inode
scan to return only the files and data that have changed since that previous scan.
Members
opaque
A 48 byte area for containing the snapshot identifier.
Examples
For an example using gpfs_fssnap_id_t, see /usr/lpp/mmfs/samples/util/tsinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fstat() subroutine
Returns exact file status for a GPFS file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_fstat(gpfs_file_t fileDesc,
gpfs_stat64_t *buffer);
Description
The gpfs_fstat() subroutine is used to obtain exact information about the file associated with the
fileDesc parameter. This subroutine is provided as an alternative to the stat() subroutine, which may
not provide exact mtime and atime values. For more information, see the topic Exceptions to Open Group
technical standards in the IBM Spectrum Scale: Administration Guide.
Read, write, or execute permission for the named file is not required, but all directories listed in the
path leading to the file must be searchable. The file information is written to the area specified by the
buffer parameter.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fileDesc
The file descriptor identifying the file for which exact status information is requested.
buffer
A pointer to the gpfs_stat64_t structure in which the information is returned. The
gpfs_stat64_t structure is defined in the gpfs.h file.
Exit status
If the gpfs_fstat() subroutine is successful, it returns a value of 0.
If the gpfs_fstat() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable
errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EBADF
The file descriptor is not valid.
EINVAL
The file descriptor does not refer to a GPFS file or a regular file.
ENOSYS
The gpfs_fstat() subroutine is not supported under the current file system format.
ESTALE
The cached file system information was not valid.
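Examples
A minimal sketch, assuming that fd is an open descriptor for a GPFS file and that the stat-like st_size member is the item of interest:

gpfs_stat64_t sb;
int rc;

rc = gpfs_fstat(fd, &sb);
if (rc != 0)
  perror("gpfs_fstat");
else
  printf("exact size: %lld bytes\n", (long long)sb.st_size);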
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_fstat_x() subroutine
Returns extended status information for a GPFS file with specified accuracy.
Library
GPFS library. Runs on AIX, Linux, and Windows.
Synopsis
#include <gpfs.h>
int gpfs_fstat_x(gpfs_file_t fileDesc,
unsigned int *st_litemask,
gpfs_iattr64_t *iattr,
size_t iattrBufLen);
Description
The gpfs_fstat_x() subroutine is similar to the gpfs_fstat() subroutine but returns more
information in a gpfs_iattr64 structure that is defined in gpfs.h. This subroutine is supported only on
the Linux operating system.
Your program must verify that the version of the gpfs_iattr64 structure that is returned in the field
ia_version is the same as the version that you are using. Versions are defined in gpfs.h with the
pattern GPFS_IA64_VERSION*.
File permissions read, write, and execute are not required for the specified file, but all the directories
in the specified path must be searchable.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.so for Linux
Parameters
gpfs_file_t fileDesc
The file descriptor of a file for which information is requested.
*st_litemask
A pointer to a bitmask specification of the items that you want to be returned exactly. Bitmasks are
defined in gpfs.h with the pattern GPFS_SLITE_*BIT. The subroutine returns exact values for the
items that you specify in the bitmask. The subroutine also sets bits in the bitmask to indicate any
other items that are exact.
*iattr
A pointer to a gpfs_iattr64_t structure in which the information is returned. The structure is
described in the gpfs.h file.
iattrBufLen
The length of your gpfs_iattr64_t structure, as given by sizeof(myStructure). The subroutine
does not write data past this limit. The field ia_reclen in the returned structure is the length of the
gpfs_iattr64_t structure that the subroutine is using.
Exit status
If the gpfs_fstat_x() subroutine is successful, it returns a value of 0.
If the gpfs_fstat_x() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following errors:
EBADF
The file handle does not refer to an existing or accessible object.
ENOSYS
The gpfs_fstat_x() subroutine is not supported under the current file system format.
ESTALE
The cached file system information was invalid.
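Examples
A minimal sketch, assuming that fd is an open descriptor for a GPFS file; a litemask of 0 requests no particular exact items, and bits following the GPFS_SLITE_*BIT pattern can be set to request exact values:

gpfs_iattr64_t iattr;
unsigned int litemask = 0;
int rc;

rc = gpfs_fstat_x(fd, &litemask, &iattr, sizeof(iattr));
if (rc != 0)
  perror("gpfs_fstat_x");
else
{
  /* verify that iattr.ia_version matches the GPFS_IA64_VERSION* value you compiled with */
  printf("ia_version %d, file size %lld bytes\n", iattr.ia_version, (long long)iattr.ia_size);
}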
Location
/usr/lpp/mmfs/lib
gpfs_get_fsname_from_fssnaphandle() subroutine
Obtains a file system name from its snapshot handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
const char *gpfs_get_fsname_from_fssnaphandle(gpfs_fssnap_handle_t *fssnapHandle);
Description
The gpfs_get_fsname_from_fssnaphandle() subroutine returns a pointer to the name of the file
system that is uniquely identified by the file system snapshot handle.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
File system snapshot handle.
Exit status
If the gpfs_get_fsname_from_fssnaphandle() subroutine is successful, it returns a pointer to the
name of the file system identified by the file system snapshot handle.
If the gpfs_get_fsname_from_fssnaphandle() subroutine is unsuccessful, it returns NULL and sets
the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOSYS
The gpfs_get_fsname_from_fssnaphandle() subroutine is not available.
EPERM
The caller does not have superuser privileges.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_fssnaphandle_by_fssnapid() subroutine
Obtains a file system snapshot handle using its snapshot ID.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_fssnap_handle_t *gpfs_get_fssnaphandle_by_fssnapid(const gpfs_fssnap_id_t *fssnapId);
Description
The gpfs_get_fssnaphandle_by_fssnapid() subroutine creates a handle for the file system or
snapshot that is uniquely identified by the permanent, unique snapshot ID.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapId
File system snapshot ID
Exit status
If the gpfs_get_fssnaphandle_by_fssnapid() subroutine is successful, it returns a pointer to the
file system snapshot handle.
If the gpfs_get_fssnaphandle_by_fssnapid() subroutine is unsuccessful, it returns NULL and sets
the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOMEM
Space could not be allocated for the file system snapshot handle.
ENOSYS
The gpfs_get_fssnaphandle_by_fssnapid() subroutine is not available.
EPERM
The caller does not have superuser privileges.
GPFS_E_INVAL_FSSNAPID
The file system snapshot ID is not valid.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_fssnaphandle_by_name() subroutine
Obtains a file system snapshot handle using its name.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_fssnap_handle_t *gpfs_get_fssnaphandle_by_name(const char *fsName,
const char *snapName);
Description
The gpfs_get_fssnaphandle_by_name() subroutine creates a handle for the file system or snapshot
that is uniquely identified by the file system's name and the name of the snapshot.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fsName
A pointer to the name of the file system whose snapshot handle is desired.
snapName
A pointer to the name of the snapshot whose snapshot handle is desired, or NULL to access the active
file system rather than a snapshot within the file system.
Exit status
If the gpfs_get_fssnaphandle_by_name() subroutine is successful, it returns a pointer to the file
system snapshot handle.
If the gpfs_get_fssnaphandle_by_name() subroutine is unsuccessful, it returns NULL and sets the
global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOENT
The file system name is not valid.
ENOMEM
Space could not be allocated for the file system snapshot handle.
ENOSYS
The gpfs_get_fssnaphandle_by_name() subroutine is not available.
EPERM
The caller does not have superuser privileges.
GPFS_E_INVAL_SNAPNAME
The snapshot name is not valid.
Examples
For an example using gpfs_get_fssnaphandle_by_name(), see /usr/lpp/mmfs/samples/util/
tsinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_fssnaphandle_by_path() subroutine
Obtains a file system snapshot handle using its path name.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_fssnap_handle_t *gpfs_get_fssnaphandle_by_path(const char *pathName);
Description
The gpfs_get_fssnaphandle_by_path() subroutine creates a handle for the file system or snapshot
that is uniquely identified by a path through the file system's mount point to a file or directory within the
file system or snapshot.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
pathName
A pointer to the path name to a file or directory within the desired file system or snapshot.
Exit status
If the gpfs_get_fssnaphandle_by_path() subroutine is successful, it returns a pointer to the file
system snapshot handle.
If the gpfs_get_fssnaphandle_by_path() subroutine is unsuccessful, it returns NULL and sets the
global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOENT
The path name is not valid.
ENOMEM
Space could not be allocated for the file system snapshot handle.
ENOSYS
The gpfs_get_fssnaphandle_by_path() subroutine is not available.
EPERM
The caller does not have superuser privileges.
Examples
For an example using gpfs_get_fssnaphandle_by_path(), see /usr/lpp/mmfs/samples/util/
tsgetusage.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_fssnapid_from_fssnaphandle() subroutine
Obtains a file system snapshot ID using its handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_get_fssnapid_from_fssnaphandle(gpfs_fssnap_handle_t *fssnapHandle,
gpfs_fssnap_id_t *fssnapId);
Description
The gpfs_get_fssnapid_from_fssnaphandle() subroutine obtains the permanent, globally unique
file system snapshot ID of the file system or snapshot identified by the open file system snapshot handle.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
File system snapshot handle.
fssnapId
File system snapshot ID.
Exit status
If the gpfs_get_fssnapid_from_fssnaphandle() subroutine is successful, it returns a value of 0
and the fssnapId parameter is set.
If the gpfs_get_fssnapid_from_fssnaphandle() subroutine is unsuccessful, it returns a value of -1
and sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EFAULT
Size mismatch for fssnapId.
EINVAL
NULL pointer given for returned fssnapId.
ENOSYS
The gpfs_get_fssnapid_from_fssnaphandle() subroutine is not available.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
Examples
For an example using gpfs_get_fssnapid_from_fssnaphandle(), see /usr/lpp/mmfs/
samples/util/tsinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_pathname_from_fssnaphandle() subroutine
Obtains a file system path name using its snapshot handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
const char *gpfs_get_pathname_from_fssnaphandle(gpfs_fssnap_handle_t *fssnapHandle);
Description
The gpfs_get_pathname_from_fssnaphandle() subroutine obtains the path name of the file system
or snapshot identified by the open file system snapshot handle.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
File system snapshot handle.
Exit status
If the gpfs_get_pathname_from_fssnaphandle() subroutine is successful, it returns a pointer to
the path name of the file system or snapshot.
If the gpfs_get_pathname_from_fssnaphandle() subroutine is unsuccessful, it returns NULL and
sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOSYS
The gpfs_get_pathname_from_fssnaphandle() subroutine is not available.
EPERM
The caller does not have superuser privileges.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
Examples
For an example using gpfs_get_pathname_from_fssnaphandle(), see /usr/lpp/mmfs/
samples/util/tsfindinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_snapdirname() subroutine
Obtains the name of the directory containing global snapshots.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_get_snapdirname(gpfs_fssnap_handle_t *fssnapHandle,
char *snapdirName,
int bufLen);
Description
The gpfs_get_snapdirname() subroutine obtains the name of the directory that contains global
snapshots.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
File system snapshot handle.
snapdirName
Buffer into which the name of the snapshot directory will be copied.
bufLen
The size of the provided buffer.
Exit status
If the gpfs_get_snapdirname() subroutine is successful, it returns a value of 0 and the snapshot
directory name is returned in the snapdirName buffer.
If the gpfs_get_snapdirname() subroutine is unsuccessful, it returns a value of -1 and the global
error variable errno is set to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_get_snapdirname() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ERANGE
The buffer is too small to return the snapshot directory name.
ESTALE
The cached file system information was not valid.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
E2BIG
The buffer is too small to return the snapshot directory name.
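Examples
A sketch that obtains the snapshot directory name for the file system mounted at a placeholder path /gpfs/fs1:

gpfs_fssnap_handle_t *fssnapHandle;
char snapDir[256];   /* assumed to be large enough for the snapshot directory name */
int rc;

fssnapHandle = gpfs_get_fssnaphandle_by_path("/gpfs/fs1");
if (fssnapHandle != NULL)
{
  rc = gpfs_get_snapdirname(fssnapHandle, snapDir, sizeof(snapDir));
  if (rc == 0)
    printf("snapshots appear under %s\n", snapDir);
  gpfs_free_fssnaphandle(fssnapHandle);
}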
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_get_snapname_from_fssnaphandle() subroutine
Obtains a snapshot name using its file system snapshot handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
const char *gpfs_get_snapname_from_fssnaphandle(gpfs_fssnap_handle_t *fssnapHandle);
Description
The gpfs_get_snapname_from_fssnaphandle() subroutine obtains a pointer to the name of a GPFS
snapshot given its file system snapshot handle. If the fssnapHandle identifies an active file system, as
opposed to a snapshot of a file system, gpfs_get_snapname_from_fssnaphandle() returns a
pointer to a zero-length snapshot name and a successful return code.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
File system snapshot handle.
Exit status
If the gpfs_get_snapname_from_fssnaphandle() subroutine is successful, it returns a pointer to
the name of the snapshot.
If the gpfs_get_snapname_from_fssnaphandle() subroutine is unsuccessful, it returns NULL and
sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOSYS
The gpfs_get_snapname_from_fssnaphandle() subroutine is not available.
EPERM
The caller does not have superuser privileges.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
GPFS_E_INVAL_SNAPNAME
The snapshot has been deleted.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_getacl() subroutine
Retrieves the access control information for a GPFS file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_getacl(const char *pathname,
int flags,
void *acl);
Description
The gpfs_getacl() subroutine, together with the gpfs_putacl() subroutine, is intended for use by a
backup program to save (gpfs_getacl()) and restore (gpfs_putacl()) the ACL information for the
file.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
pathname
The path identifying the file for which the ACLs are being obtained.
flags
Consists of one of these values:
0
Indicates that the acl parameter is to be mapped with the gpfs_opaque_acl_t structure.
The gpfs_opaque_acl_t structure should be used by backup and restore programs.
GPFS_GETACL_STRUCT
Indicates that the acl parameter is to be mapped with the gpfs_acl_t structure.
The gpfs_acl_t structure is provided for applications that need to interpret the ACL.
acl
Pointer to a buffer mapped by the structure gpfs_opaque_acl_t or gpfs_acl_t, depending on the
value of flags.
The first four bytes of the buffer must contain its total size.
Exit status
If the gpfs_getacl() subroutine is successful, it returns a value of 0.
If the gpfs_getacl() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINVAL
The path name does not refer to a GPFS file or a regular file.
ENOMEM
Unable to allocate memory for the request.
ENOTDIR
File is not a directory.
ENOSPC
The buffer is too small to return the entire ACL. The required buffer size is returned in the first four
bytes of the buffer pointed to by acl.
ENOSYS
The gpfs_getacl() subroutine is not supported under the current file system format.
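Examples
A sketch that retrieves the ACL in the opaque format (flags value 0) for a placeholder path; as required, the first four bytes of the buffer carry its total size:

int aclBuf[2048];   /* 8 KB buffer, int-aligned so the size can be stored in the first four bytes */
int rc;

aclBuf[0] = sizeof(aclBuf);
rc = gpfs_getacl("/gpfs/fs1/somefile", 0, aclBuf);
if (rc != 0)
  perror("gpfs_getacl");   /* on ENOSPC the required size is returned in the first four bytes */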
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_getacl_fd() subroutine
Retrieves the access control information for a GPFS file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_getacl_fd(gpfs_file_t fileDesc,
int flags,
void *acl);
Description
The gpfs_getacl_fd() subroutine together with the gpfs_putacl_fd() subroutine, is intended for
use by a backup program to save (gpfs_getacl_fd()) and restore (gpfs_putacl_fd()) the ACL
information for the file.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fileDesc
A file descriptor that identifies the file for which the ACLs are being obtained.
flags
Consists of one of these values:
0
Indicates that the acl parameter is to be mapped with the gpfs_opaque_acl_t structure.
The gpfs_opaque_acl_t structure is used by backup and restore programs.
GPFS_GETACL_STRUCT
Indicates that the acl parameter is to be mapped with the gpfs_acl_t structure.
The gpfs_acl_t structure is provided for applications that need to interpret the ACL.
acl
Pointer to a buffer mapped by the structure gpfs_opaque_acl_t or gpfs_acl_t, depending on the
value of flags.
The first four bytes of the buffer must contain its total size.
The gpfs_opaque_acl_t structure contains size, version, and ACL type information for the
gpfs_getacl() and gpfs_putacl() subroutines.
Exit status
If the gpfs_getacl_fd() subroutine is successful, it returns a value of 0.
If the gpfs_getacl_fd() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINVAL
The file descriptor does not refer to a GPFS file or a regular file.
EBADF
The file descriptor is not valid.
ENOMEM
Unable to allocate memory for the request.
ENOTDIR
File is not a directory.
ENOSPC
The buffer is too small to return the entire ACL. The required buffer size is returned in the first four
bytes of the buffer pointed to by acl.
ENOSYS
The gpfs_getacl_fd() subroutine is not supported under the current file system format.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iattr_t structure
Contains attributes of a GPFS inode.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_iattr
{
int ia_version; /* this struct version */
int ia_reclen; /* sizeof this structure */
int ia_checksum; /* validity check on iattr struct */
gpfs_mode_t ia_mode; /* access mode */
gpfs_uid_t ia_uid; /* owner uid */
gpfs_gid_t ia_gid; /* owner gid */
gpfs_ino_t ia_inode; /* file inode number */
gpfs_gen_t ia_gen; /* inode generation number */
gpfs_nlink_t ia_nlink; /* number of links */
short ia_flags; /* Flags (defined below) */
int ia_blocksize; /* preferred block size for io */
gpfs_mask_t ia_mask; /* Initial attribute mask (not used) */
unsigned int ia_pad1; /* reserved space */
gpfs_off64_t ia_size; /* file size in bytes */
gpfs_off64_t ia_blocks; /* 512 byte blocks of disk held by file */
gpfs_timestruc_t ia_atime; /* time of last access */
gpfs_timestruc_t ia_mtime; /* time of last data modification */
gpfs_timestruc_t ia_ctime; /* time of last status change */
gpfs_dev_t ia_rdev; /* id of device */
unsigned int ia_xperm; /* extended attributes (defined below) */
unsigned int ia_modsnapid; /* snapshot id of last modification */
unsigned int ia_filesetid; /* fileset ID */
unsigned int ia_datapoolid; /* storage pool ID for data */
unsigned int ia_pad2; /* reserved space */
} gpfs_iattr_t;
Description
The gpfs_iattr_t structure contains the various attributes of a GPFS inode.
Members
ia_version
The version number of this structure.
ia_reclen
The size of this structure.
ia_checksum
The checksum for this gpfs_iattr structure.
ia_mode
The access mode for this inode.
ia_uid
The owner user ID for this inode.
ia_gid
The owner group ID for this inode.
ia_inode
The file inode number.
ia_gen
The inode generation number.
ia_nlink
The number of links for this inode.
ia_flags
The flags defined for inode attributes.
ia_blocksize
The preferred block size for I/O.
ia_mask
The initial attribute mask (not used).
ia_pad1
Reserved space.
ia_size
The file size in bytes.
ia_blocks
The number of 512 byte blocks of disk held by the file.
ia_atime
The time of last access.
ia_mtime
The time of last data modification.
ia_ctime
The time of last status change.
ia_rdev
The ID of the device.
ia_xperm
Indicator - nonzero if file has extended ACL.
ia_modsnapid
Internal snapshot ID indicating the last time that the file was modified. Internal snapshot IDs for the
current snapshots are displayed by the mmlssnapshot command.
ia_filesetid
The fileset ID for the inode.
ia_datapoolid
The storage pool ID for data for the inode.
ia_pad2
Reserved space.
Examples
For an example using gpfs_iattr_t, see /usr/lpp/mmfs/samples/util/tsinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iattr64_t structure
Contains attributes of a GPFS inode.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_iattr64
{
int ia_version; /* this struct version */
int ia_reclen; /* sizeof this structure */
int ia_checksum; /* validity check on iattr struct */
gpfs_mode_t ia_mode; /* access mode */
gpfs_uid64_t ia_uid; /* owner uid */
gpfs_gid64_t ia_gid; /* owner gid */
gpfs_ino64_t ia_inode; /* file inode number */
gpfs_gen64_t ia_gen; /* inode generation number */
gpfs_nlink64_t ia_nlink; /* number of links */
gpfs_off64_t ia_size; /* file size in bytes */
gpfs_off64_t ia_blocks; /* 512 byte blocks of disk held by file */
gpfs_timestruc64_t ia_atime; /* time of last access */
unsigned int ia_winflags; /* windows flags (defined below) */
unsigned int ia_pad1; /* reserved space */
gpfs_timestruc64_t ia_mtime; /* time of last data modification */
unsigned int ia_flags; /* flags (defined below) */
/* next four bytes were ia_pad2 */
unsigned char ia_repl_data; /* data replication factor */
unsigned char ia_repl_data_max; /* data replication max factor */
unsigned char ia_repl_meta; /* meta data replication factor */
unsigned char ia_repl_meta_max; /* meta data replication max factor */
gpfs_timestruc64_t ia_ctime; /* time of last status change */
int ia_blocksize; /* preferred block size for io */
unsigned int ia_pad3; /* reserved space */
gpfs_timestruc64_t ia_createtime; /* creation time */
gpfs_mask_t ia_mask; /* initial attribute mask (not used) */
int ia_pad4; /* reserved space */
unsigned int ia_reserved[GPFS_IA64_RESERVED]; /* reserved space */
unsigned int ia_xperm; /* extended attributes (defined below) */
gpfs_dev_t ia_dev; /* id of device containing file */
gpfs_dev_t ia_rdev; /* device id (if special file) */
unsigned int ia_pcacheflags; /* pcache inode bits */
gpfs_snapid64_t ia_modsnapid; /* snapshot id of last modification */
unsigned int ia_filesetid; /* fileset ID */
unsigned int ia_datapoolid; /* storage pool ID for data */
gpfs_ino64_t ia_inode_space_mask; /* inode space mask of this file system */
/* This value is saved in the iattr structure
during backup and used during restore */
} gpfs_iattr64_t;

#ifdef GPFS_64BIT_INODES
#undef GPFS_IA_VERSION
#define GPFS_IA_VERSION GPFS_IA_VERSION64
#define gpfs_iattr_t gpfs_iattr64_t
#endif
Description
The gpfs_iattr64_t structure contains the various attributes of a GPFS inode.
Members
ia_version
The version number of this structure.
ia_reclen
The size of this structure.
ia_checksum
The checksum for this gpfs_iattr64 structure.
ia_mode
The access mode for this inode.
ia_uid
The owner user ID for this inode.
ia_gid
The owner group ID for this inode.
ia_inode
The file inode number.
ia_gen
The inode generation number.
ia_nlink
The number of links for this inode.
ia_size
The file size in bytes.
ia_blocks
The number of 512 byte blocks of disk held by the file.
ia_atime
The time of last access.
ia_winflags
The Windows flags.
ia_pad1
Reserved space.
ia_mtime
The time of last data modification.
ia_flags
The flags defined for inode attributes.
ia_repl_data
The data replication factor.
ia_repl_data_max
The maximum data replication factor.
ia_repl_meta
The metadata replication factor.
ia_repl_meta_max
The maximum metadata replication factor.
ia_ctime
The time of last status change.
ia_blocksize
The preferred block size for I/O.
ia_pad3
Reserved space.
ia_createtime
The creation time.
ia_mask
The initial attribute mask (not used).
ia_pad4
Reserved space.
ia_reserved
Reserved space.
ia_xperm
Indicator - nonzero if file has extended ACL.
ia_dev
The ID of the device containing the file.
ia_rdev
The ID of the device.
ia_pcacheflags
The pcache inode bits.
ia_modsnapid
Internal snapshot ID indicating the last time that the file was modified. Internal snapshot IDs for the
current snapshots are displayed by the mmlssnapshot command.
ia_filesetid
The fileset ID for the inode.
ia_datapoolid
The storage pool ID for data for the inode.
ia_dirminsize
Directory preallocation size in bytes.
ia_inode_space_mask
The inode space mask of this file system. This value is saved in the iattr structure during backup
and used during restore.
ia_unused
Reserved space.
Examples
See the gpfs_iattr_t example in /usr/lpp/mmfs/samples/util/tsinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iclose() subroutine
Closes a file given its inode file handle.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
void gpfs_iclose(gpfs_ifile_t *ifile);
Description
The gpfs_iclose() subroutine closes an open file descriptor created by gpfs_iopen().
For an overview of using gpfs_iclose() in a backup application, see the topic Using APIs to develop
backup applications in the IBM Spectrum Scale: Administration Guide.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
ifile
Pointer to gpfs_ifile_t from gpfs_iopen().
Exit status
The gpfs_iclose() subroutine returns void.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOSYS
The gpfs_iclose() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ESTALE
Cached file system information was not valid.
Examples
For an example using gpfs_iclose(), see /usr/lpp/mmfs/samples/util/tsreaddir.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_ifile_t structure
Contains a handle for a GPFS inode.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_ifile gpfs_ifile_t;
Description
The gpfs_ifile_t structure contains a handle for the file of a GPFS inode.
Members
gpfs_ifile
The handle for the file of a GPFS inode.
Examples
For an example using gpfs_ifile_t, see /usr/lpp/mmfs/samples/util/tsfindinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_igetattrs() subroutine
Retrieves extended file attributes in opaque format.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_igetattrs(gpfs_ifile_t *ifile,
void *buffer,
int bufferSize,
int *attrSize);
Description
The gpfs_igetattrs() subroutine retrieves all extended file attributes in opaque format. This
subroutine is intended for use by a backup program to save all extended file attributes (ACLs, attributes,
and so forth). If the file does not have any extended attributes, the subroutine sets attrSize to zero.
Notes:
1. This call does not return extended attributes used for the Data Storage Management (XDSM) API (also
known as DMAPI).
2. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
ifile
Pointer to gpfs_ifile_t from gpfs_iopen().
buffer
Pointer to buffer for returned attributes.
bufferSize
Size of the buffer.
attrSize
Pointer to returned size of attributes.
Exit status
If the gpfs_igetattrs() subroutine is successful, it returns a value of 0.
If the gpfs_igetattrs() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOSPC
The buffer is too small to return all attributes. Field attrSize will be set to the size necessary.
ENOSYS
The gpfs_igetattrs() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ESTALE
Cached file system information was not valid.
GPFS_E_INVAL_IFILE
Incorrect ifile parameters.
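Examples
A sketch that saves the opaque extended attributes during an inode scan; ifile is assumed to have been returned earlier by gpfs_iopen():

char attrBuf[4096];
int attrSize, rc;

rc = gpfs_igetattrs(ifile, attrBuf, sizeof(attrBuf), &attrSize);
if (rc != 0)
  perror("gpfs_igetattrs");     /* on ENOSPC, attrSize holds the size that is needed */
else if (attrSize > 0)
  printf("saving %d bytes of opaque extended attributes\n", attrSize);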
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_igetattrsx() subroutine
Retrieves extended file attributes; provides an option to include DMAPI attributes.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_igetattrsx(gpfs_ifile_t *ifile,
int flags,
void *buffer,
int bufferSize,
int *attrSize);
Description
The gpfs_igetattrsx() subroutine retrieves all extended file attributes in opaque format. It provides
the same function as gpfs_igetattrs() but includes a flags parameter that allows the caller to back
up and restore DMAPI attributes.
This function is intended for use by a backup program to save (and restore, using the related subroutine
gpfs_iputattrsx()) all extended file attributes (ACLs, user attributes, and so forth) in one call. If the
file does not have any extended attributes, the subroutine sets attrSize to zero.
Notes:
1. This call can optionally return extended attributes used for the Data Storage Management (XDSM) API
(also known as DMAPI).
2. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
ifile
Pointer to gpfs_ifile_t from gpfs_iopen().
flags
Flags must have one of the following values:
GPFS_ATTRFLAG_NO_PLACEMENT
File attributes for placement are not saved, and neither is the current storage pool.
GPFS_ATTRFLAG_IGNORE_PLACEMENT
File attributes for placement are saved, but the current storage pool is not.
GPFS_ATTRFLAG_INCL_DMAPI
File attributes for DMAPI are included in the returned buffer.
GPFS_ATTRFLAG_USE_POLICY
Uses the restore policy rules to determine the pool ID.
GPFS_ATTRFLAG_FINALIZE_ATTRS
Finalizes immutability attributes.
GPFS_ATTRFLAG_SKIP_IMMUTABLE
Skips immutable attributes.
GPFS_ATTRFLAG_INCL_ENCR
Includes encryption attributes.
GPFS_ATTRFLAG_SKIP_CLONE
Skips clone attributes.
GPFS_ATTRFLAG_MODIFY_CLONEPARENT
Allows modification on the clone parent.
buffer
A pointer to the buffer for returned attributes.
bufferSize
Size of the buffer.
attrSize
Pointer to returned size of attributes.
Exit status
If the gpfs_igetattrsx() subroutine is successful, it returns a value of 0.
If the gpfs_igetattrsx() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINVAL
Not a GPFS file, or the flags provided are not valid.
ENOSPC
The buffer is too small to return all attributes. Field attrSize will be set to the size necessary.
ENOSYS
The gpfs_igetattrsx() subroutine is not available.
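Examples
This programming segment is a minimal sketch that saves the extended attributes of an open file,
including the DMAPI attributes. The fileP handle is an assumed value returned by an earlier
gpfs_iopen() call.
gpfs_ifile_t *fileP;     /* assumed: returned by gpfs_iopen() */
char attrBuf[8192];
int attrSize = 0;

if (gpfs_igetattrsx(fileP, GPFS_ATTRFLAG_INCL_DMAPI,
                    attrBuf, sizeof(attrBuf), &attrSize) == 0
    && attrSize == 0)
{
  /* the file has no extended attributes */
}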
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_igetfilesetname() subroutine
Returns the name of the fileset defined by a fileset ID.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_igetfilesetname(gpfs_iscan_t *iscan,
unsigned int filesetId,
void *buffer,
int bufferSize);
Description
The gpfs_igetfilesetname() subroutine is part of the backup by inode interface. The caller provides
a pointer to the scan descriptor used to obtain the fileset ID. This library routine will return the name of
the fileset defined by the fileset ID. The name is the null-terminated string provided by the administrator
when the fileset was defined. The maximum string length is GPFS_MAXNAMLEN, which is defined
in /usr/lpp/mmfs/include/gpfs.h.
Notes:
1. This routine is not thread safe. Only one thread at a time is allowed to invoke this routine for the given
scan descriptor.
2. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
iscan
Pointer to gpfs_iscan_t used to obtain the fileset ID.
filesetId
The fileset ID.
buffer
Pointer to buffer for returned attributes.
bufferSize
Size of the buffer.
Exit status
If the gpfs_igetfilesetname() subroutine is successful, it returns a value of 0.
If the gpfs_igetfilesetname() subroutine is unsuccessful, it returns a value of -1 and sets the global
error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
E2BIG
The buffer is too small to return the fileset name.
EINTR
The call was interrupted. This routine is not thread safe.
EINVAL
The fileset ID is not valid.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_igetfilesetname() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ESTALE
The cached file system information was not valid.
GPFS_E_INVAL_ISCAN
The iscan parameters were not valid.
Examples
This programming segment gets the fileset name based on the given fileset ID. The returned fileset name
is stored in FileSetNameBuffer, which has a length of FileSetNameSize.
gpfs_iscan_t *fsInodeScanP;
gpfs_igetfilesetname(fsInodeScanP, FileSetId, &FileSetNameBuffer, FileSetNameSize);
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_igetstoragepool() subroutine
Returns the name of the storage pool for the given storage pool ID.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_igetstoragepool(gpfs_iscan_t *iscan,
unsigned int dataPoolId,
void *buffer,
int bufferSize);
Description
The gpfs_igetstoragepool() subroutine is part of the backup by inode interface. The caller provides
a pointer to the scan descriptor used to obtain the storage pool ID. This routine returns the name of the
storage pool for the given storage pool ID. The name is the null-terminated string provided by the
administrator when the storage pool was defined. The maximum string length is GPFS_MAXNAMLEN,
which is defined in /usr/lpp/mmfs/include/gpfs.h.
Notes:
1. This routine is not thread safe. Only one thread at a time is allowed to invoke this routine for the given
scan descriptor.
2. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
iscan
Pointer to gpfs_iscan_t used to obtain the storage pool ID.
dataPoolId
The storage pool ID.
buffer
Pointer to buffer for returned attributes.
bufferSize
Size of the buffer.
Exit status
If the gpfs_igetstoragepool() subroutine is successful, it returns a value of 0.
If the gpfs_igetstoragepool() subroutine is unsuccessful, it returns a value of -1 and sets the global
error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
E2BIG
The buffer is too small to return the storage pool name.
EINTR
The call was interrupted. This routine is not thread safe.
EINVAL
The storage pool ID is not valid.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_igetstoragepool() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ESTALE
The cached storage pool information was not valid.
GPFS_E_INVAL_ISCAN
The iscan parameters were not valid.
Examples
This programming segment gets the storage pool name based on the given storage pool ID. The returned
storage pool name is stored in StgpoolNameBuffer, which has a length of
StgpoolNameSize.
gpfs_iscan_t *fsInodeScanP;
gpfs_igetstoragepool(fsInodeScanP, StgpoolIdBuffer, &StgpoolNameBuffer, StgpoolNameSize);
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iopen() subroutine
Opens a file or directory by inode number.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_ifile_t *gpfs_iopen(gpfs_fssnap_handle_t *fssnapHandle,
gpfs_ino_t ino,
int open_flags,
const gpfs_iattr_t *statxbuf,
const char *symLink);
Description
The gpfs_iopen() subroutine opens a user file or directory for backup. The file is identified by its inode
number ino within the file system or snapshot identified by the fssnapHandle. The fssnapHandle
parameter must be the same one that was used to create the inode scan that returned the inode number
ino.
To read the file or directory, the open_flags must be set to GPFS_O_BACKUP. The statxbuf and
symLink parameters are reserved for future use and must be set to NULL.
For an overview of using gpfs_iopen() in a backup application, see Using APIs to develop backup
applications in IBM Spectrum Scale: Administration Guide.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
File system snapshot handle.
ino
The inode number.
open_flags
GPFS_O_BACKUP
Read files for backup.
O_RDONLY
For gpfs_iread().
statxbuf
This parameter is reserved for future use and should always be set to NULL.
symLink
This parameter is reserved for future use and should always be set to NULL.
Exit status
If the gpfs_iopen() subroutine is successful, it returns a pointer to the inode's file handle.
If the gpfs_iopen() subroutine is unsuccessful, it returns NULL and the global error variable errno is
set to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINVAL
Missing or incorrect parameter.
ENOENT
The file does not exist in the file system.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_iopen() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ESTALE
Cached file system information was not valid.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
GPFS_E_INVAL_INUM
Users are not authorized to open the reserved inodes.
Examples
For an example using gpfs_iopen(), see /usr/lpp/mmfs/samples/util/tsreaddir.c.
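The following programming segment is a minimal sketch of the call itself. The fssnapHandle and the
inode number ino are assumed to come from the inode scan that is driving the backup.
gpfs_fssnap_handle_t *fssnapHandle;   /* assumed: handle used for the inode scan */
gpfs_ino_t ino;                       /* assumed: inode number returned by the scan */

gpfs_ifile_t *fileP = gpfs_iopen(fssnapHandle, ino, GPFS_O_BACKUP, NULL, NULL);
if (fileP == NULL)
{
  /* errno describes the failure, for example ENOENT or EPERM */
}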
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iopen64() subroutine
Opens a file or directory by inode number.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_ifile_t *gpfs_iopen64(gpfs_fssnap_handle_t *fssnapHandle,
gpfs_ino64_t ino,
int open_flags,
const gpfs_iattr64_t *statxbuf,
const char *symLink);
Description
The gpfs_iopen64() subroutine opens a user file or directory for backup. The file is identified by its
inode number ino within the file system or snapshot identified by the fssnapHandle. The
fssnapHandle parameter must be the same one that was used to create the inode scan that returned
the inode number ino.
To read the file or directory, the open_flags must be set to GPFS_O_BACKUP. The statxbuf and
symLink parameters are reserved for future use and must be set to NULL.
For an overview of using gpfs_iopen64() in a backup application, see Using APIs to develop backup
applications in IBM Spectrum Scale: Administration Guide.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
The file system snapshot handle.
ino
The inode number.
open_flags
GPFS_O_BACKUP
Read files for backup.
O_RDONLY
For gpfs_iread().
statxbuf
This parameter is reserved for future use and should always be set to NULL.
symLink
This parameter is reserved for future use and should always be set to NULL.
Exit status
If the gpfs_iopen64() subroutine is successful, it returns a pointer to the inode's file handle.
If the gpfs_iopen64() subroutine is unsuccessful, it returns NULL and the global error variable errno
is set to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EFORMAT
The file system version number is not valid.
EINVAL
Missing or incorrect parameter.
ENOENT
The file does not exist in the file system.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_iopen64() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ESTALE
Cached file system information was not valid.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
GPFS_E_INVAL_IATTR
The iattr structure was corrupted.
GPFS_E_INVAL_INUM
Users are not authorized to open the reserved inodes.
Note: gpfs_iopen64() calls the standard library subroutines dup(), open(), and malloc(); if one of
these called subroutines returns an error, gpfs_iopen64() also returns that error.
Examples
See the gpfs_iopen() example in /usr/lpp/mmfs/samples/util/tsreaddir.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iputattrsx() subroutine
Sets the extended file attributes for a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_iputattrsx(gpfs_ifile_t *ifile,
int flags,
void *buffer,
const char *pathName);
Description
The gpfs_iputattrsx() subroutine, together with gpfs_igetattrsx(), is intended for use by a
backup program to save (gpfs_igetattrsx()) and restore (gpfs_iputattrsx()) all of the extended
attributes of a file. This subroutine also sets the storage pool for the file and sets data replication to the
values that are saved in the extended attributes.
This subroutine can optionally invoke the policy engine to match a RESTORE rule using the file's attributes
saved in the extended attributes to set the file's storage pool and data replication as when calling
gpfs_fputattrswithpathname(). When used with the policy engine, the caller should include the full
path to the file, including the file name, to allow rule selection based on file name or path.
By default, the routine will not use RESTORE policy rules for data placement. The pathName parameter
will be ignored and may be set to NULL.
If the call does not use RESTORE policy rules, if the file fails to match a RESTORE rule, or if no RESTORE
rules are installed, then the storage pool and data replication are selected as when calling
gpfs_fputattrs().
The buffer passed in should contain extended attribute data that was obtained by a previous call to
gpfs_fgetattrs().
Note: This call will restore extended attributes used for the Data Storage Management (XDSM) API (also
known as DMAPI) if they are present in the buffer.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
ifile
A pointer to gpfs_ifile_t from gpfs_iopen().
flags
Flags must have one of the following values:
GPFS_ATTRFLAG_NO_PLACEMENT
File attributes are restored, but the storage pool and data replication are unchanged.
GPFS_ATTRFLAG_IGNORE_POOL
File attributes are restored, but the storage pool and data replication are selected by matching the
saved attributes to a placement rule instead of restoring the saved storage pool.
GPFS_ATTRFLAG_USE_POLICY
File attributes are restored, but the storage pool and data replication are selected by matching the
saved attributes to a RESTORE rule instead of restoring the saved storage pool.
GPFS_ATTRFLAG_INCL_DMAPI
Includes the DMAPI attributes.
GPFS_ATTRFLAG_FINALIZE_ATTRS
Finalizes immutability attributes.
GPFS_ATTRFLAG_SKIP_IMMUTABLE
Skips immutable attributes.
GPFS_ATTRFLAG_INCL_ENCR
Includes encryption attributes.
GPFS_ATTRFLAG_SKIP_CLONE
Skips clone attributes.
GPFS_ATTRFLAG_MODIFY_CLONEPARENT
Allows modification on the clone parent.
buffer
A pointer to the buffer containing the extended attributes for the file.
pathName
A pointer to a file path and file name. NULL is a valid value for pathName.
Note: pathName is a UTF-8 encoded string. On Windows, applications can convert UTF-16 (Unicode)
to UTF-8 using the platform's WideCharToMultiByte function.
Exit status
If the gpfs_iputattrsx() subroutine is successful, it returns a value of 0.
If the gpfs_iputattrsx() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINVAL
The buffer pointed to by buffer does not contain valid attribute data, or invalid flags were provided.
ENOSYS
The gpfs_iputattrsx() subroutine is not supported under the current file system format.
EPERM
The caller of the subroutine must have superuser privilege.
ESTALE
The cached file system information was not valid.
GPFS_E_INVAL_IFILE
The ifile parameters provided were not valid.
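Examples
This programming segment is a minimal restore sketch. The attrBuf buffer is assumed to have been
filled by an earlier gpfs_igetattrsx() call, fileP is assumed to come from gpfs_iopen(), and the
path name shown is hypothetical.
gpfs_ifile_t *fileP;     /* assumed: returned by gpfs_iopen() */
char attrBuf[8192];      /* assumed: filled by gpfs_igetattrsx() */

/* Restore the attributes and let a RESTORE policy rule select the storage pool */
if (gpfs_iputattrsx(fileP, GPFS_ATTRFLAG_USE_POLICY, attrBuf,
                    "/gpfs/fs1/dir/file") != 0)   /* hypothetical path name */
{
  /* errno describes the failure */
}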
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iread() subroutine
Reads a file opened by gpfs_iopen().
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_iread(gpfs_ifile_t *ifile,
void *buffer,
int bufferSize,
gpfs_off64_t *offset);
Description
The gpfs_iread() subroutine reads data from the file indicated by the ifile parameter returned from
gpfs_iopen(). This subroutine reads data beginning at parameter offset and continuing for
bufferSize bytes into the buffer specified by buffer. If successful, the subroutine returns a value that
is the length of the data read, and sets parameter offset to the offset of the next byte to be read. A
return value of 0 indicates end-of-file.
For an overview of using gpfs_iread() in a backup application, see Using APIs to develop backup
applications in IBM Spectrum Scale: Administration Guide.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
ifile
Pointer to gpfs_ifile_t from gpfs_iopen().
buffer
Buffer for the data to be read.
bufferSize
Size of the buffer (that is, the amount of data to be read).
offset
Offset of where within the file to read. If gpfs_iread() is successful, offset is updated to the next
byte after the last one that was read.
Exit status
If the gpfs_iread() subroutine is successful, it returns the number of bytes read.
If the gpfs_iread() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable
errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EISDIR
The specified file is a directory.
EINVAL
Missing or incorrect parameter.
ENOSYS
The gpfs_iread() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ESTALE
Cached file system information was not valid.
GPFS_E_INVAL_IFILE
Incorrect ifile parameter.
GPFS_E_ISLNK
The specified file is a symbolic link. Use the gpfs_ireadlink() subroutine to read a symbolic link.
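Examples
This programming segment is a minimal read loop. The fileP handle is assumed to have been returned by
gpfs_iopen() with GPFS_O_BACKUP.
gpfs_ifile_t *fileP;     /* assumed: returned by gpfs_iopen() */
char dataBuf[65536];
gpfs_off64_t offset = 0;
int nRead;

while ((nRead = gpfs_iread(fileP, dataBuf, sizeof(dataBuf), &offset)) > 0)
{
  /* write nRead bytes to the backup medium; offset was advanced by the call */
}
if (nRead < 0)
{
  /* errno describes the failure */
}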
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_ireaddir() subroutine
Reads the next directory entry.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_ireaddir(gpfs_ifile_t *idir,
const gpfs_direntx_t **dirent);
Description
The gpfs_ireaddir() subroutine returns the next directory entry in a file system. For an overview of
using gpfs_ireaddir() in a backup application, see Using APIs to develop backup applications in IBM
Spectrum Scale: Administration Guide.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
idir
Pointer to gpfs_ifile_t from gpfs_iopen().
dirent
Pointer to returned pointer to directory entry.
Exit status
If the gpfs_ireaddir() subroutine is successful, it returns a value of 0 and sets the dirent parameter
to point to the returned directory entry. If there are no more GPFS directory entries, gpfs_ireaddir()
returns a value of 0 and sets the dirent parameter to NULL.
If the gpfs_ireaddir() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_ireaddir() subroutine is not available.
ENOTDIR
File is not a directory.
EPERM
The caller does not have superuser privileges.
ESTALE
The cached file system information was not valid.
GPFS_E_INVAL_IFILE
Incorrect ifile parameter.
Examples
For an example using gpfs_ireaddir(), see /usr/lpp/mmfs/samples/util/tsreaddir.c.
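The following programming segment is a minimal inline sketch of the read loop. The dirP handle is
assumed to come from gpfs_iopen() on a directory inode.
gpfs_ifile_t *dirP;             /* assumed: directory opened with gpfs_iopen() */
const gpfs_direntx_t *direntP;

while (gpfs_ireaddir(dirP, &direntP) == 0 && direntP != NULL)
{
  /* direntP->d_name holds the entry name and direntP->d_ino its inode number */
}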
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_ireaddir64() subroutine
Reads the next directory entry.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_ireaddir64(gpfs_ifile_t *idir,
const gpfs_direntx64_t **dirent);
Description
The gpfs_ireaddir64() subroutine returns the next directory entry in a file system. For an overview of
using gpfs_ireaddir64() in a backup application, see Using APIs to develop backup applications in
IBM Spectrum Scale: Administration Guide.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
idir
A pointer to gpfs_ifile_t from gpfs_iopen64().
dirent
A pointer to the returned pointer to the directory entry.
Exit status
If the gpfs_ireaddir64() subroutine is successful, it returns a value of 0 and sets the dirent
parameter to point to the returned directory entry. If there are no more GPFS directory entries,
gpfs_ireaddir64() returns a value of 0 and sets the dirent parameter to NULL.
If the gpfs_ireaddir64() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_ireaddir64() subroutine is not available.
ENOTDIR
File is not a directory.
EPERM
The caller does not have superuser privileges.
ESTALE
The cached file system information was not valid.
GPFS_E_INVAL_IFILE
Incorrect ifile parameter.
Examples
See the gpfs_ireaddir() example in /usr/lpp/mmfs/samples/util/tsreaddir.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_ireadlink() subroutine
Reads a symbolic link by inode number.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_ireadlink(gpfs_fssnap_handle_t *fssnapHandle,
gpfs_ino_t ino,
char *buffer,
int bufferSize);
Description
The gpfs_ireadlink() subroutine reads a symbolic link by inode number. Like gpfs_iopen(), it uses
the same fssnapHandle parameter that was used by the inode scan.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
File system snapshot handle.
ino
inode number of the link file to read.
buffer
Pointer to buffer for the returned link data.
bufferSize
Size of the buffer.
Exit status
If the gpfs_ireadlink() subroutine is successful, it returns the number of bytes read.
If the gpfs_ireadlink() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINVAL
Missing or incorrect parameter.
ENOENT
No such file or directory.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_ireadlink() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ERANGE
On AIX, the buffer is too small to return the symbolic link.
ESTALE
Cached file system information was not valid.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
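Examples
This programming segment is a minimal sketch. The fssnapHandle and the inode number linkIno of the
symbolic link are assumed to come from the inode scan.
gpfs_fssnap_handle_t *fssnapHandle;   /* assumed: handle used for the inode scan */
gpfs_ino_t linkIno;                   /* assumed: inode number of the symbolic link */
char linkBuf[1024];

int len = gpfs_ireadlink(fssnapHandle, linkIno, linkBuf, sizeof(linkBuf));
if (len >= 0)
{
  /* the first len bytes of linkBuf hold the link target */
}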
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_ireadlink64() subroutine
Reads a symbolic link by inode number.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_ireadlink64(gpfs_fssnap_handle_t *fssnapHandle,
gpfs_ino64_t ino,
char *buffer,
int bufferSize);
Description
The gpfs_ireadlink64() subroutine reads a symbolic link by inode number. Like gpfs_iopen64(), it
uses the same fssnapHandle parameter that was used by the inode scan.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
The file system snapshot handle.
ino
The inode number of the link file to read.
buffer
A pointer to buffer for the returned link data.
bufferSize
The size of the buffer.
Exit status
If the gpfs_ireadlink64() subroutine is successful, it returns the number of bytes read.
If the gpfs_ireadlink64() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINVAL
Missing or incorrect parameter.
ENOENT
No such file or directory.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_ireadlink64() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ERANGE
On AIX, the buffer is too small to return the symbolic link.
ESTALE
The cached file system information was not valid.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
Note: gpfs_ireadlink64() calls the standard library subroutine readlink(); if this called subroutine
returns an error, gpfs_ireadlink64() also returns that error.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_ireadx() subroutine
Performs block level incremental read of a file within an incremental inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_off64_t gpfs_ireadx(gpfs_ifile_t *ifile,
gpfs_iscan_t *iscan,
void *buffer,
int bufferSize,
gpfs_off64_t *offset,
gpfs_off64_t termOffset,
int *hole);
Description
The gpfs_ireadx() subroutine performs a block level incremental read on a file opened by
gpfs_iopen() within a given incremental scan opened using gpfs_open_inodescan().
For an overview of using gpfs_ireadx() in a backup application, see Using APIs to develop backup
applications in IBM Spectrum Scale: Administration Guide.
The gpfs_ireadx() subroutine returns the data that has changed since the prev_fssnapId specified
for the inode scan. The file is scanned starting at offset and terminating at termOffset, looking for
changed data. Once changed data is located, the offset parameter is set to its location, the new data is
returned in the buffer provided, and the amount of data returned is the subroutine's value.
If the change to the data is that it has been deleted (that is, the file has been truncated), no data is
returned, but the hole parameter is returned with a value of 1, and the size of the hole is returned as the
subroutine's value. The returned size of the hole may exceed the bufferSize provided. If no changed
data was found before reaching the termOffset or the end-of-file, then the gpfs_ireadx() subroutine
return value is 0.
Block level incremental backups are not available on small files (a file size smaller than the file system
block size), directories, or if the file has been deleted. The gpfs_ireadx() subroutine can still be used,
but it returns all of the file's data, operating like the standard gpfs_iread() subroutine. However, the
gpfs_ireadx() subroutine will still identify sparse files and explicitly return information on holes in the
files, rather than returning the NULL data.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
ifile
Pointer to gpfs_ifile_t returned from gpfs_iopen().
iscan
Pointer to gpfs_iscan_t from gpfs_open_inodescan().
buffer
Pointer to buffer for returned data, or NULL to query the next increment to be read.
bufferSize
Size of buffer for returned data.
offset
On input, the offset to start the scan for changes. On output, the offset of the changed data, if any was
detected.
termOffset
Read terminates before reading this offset. The caller may specify ia_size from the file's
gpfs_iattr_t or 0 to scan the entire file.
hole
Pointer to a flag returned to indicate a hole in the file. A value of 0 indicates that the gpfs_ireadx()
subroutine returned data in the buffer. A value of 1 indicates that gpfs_ireadx() encountered a
hole at the returned offset.
Exit status
If the gpfs_ireadx() subroutine is successful, it returns the number of bytes read and returned in the
buffer, or the size of the hole encountered in the file.
If the gpfs_ireadx() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EDOM
The file system stripe ID from the iscanId does not match the ifile's.
EINVAL
Missing or incorrect parameter.
EISDIR
The specified file is a directory.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_ireadx() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ERANGE
The file system snapshot ID from the iscanId is more recent than the ifile's.
ESTALE
Cached file system information was not valid.
GPFS_E_INVAL_IFILE
Incorrect ifile parameter.
GPFS_E_INVAL_ISCAN
Incorrect iscan parameter.
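Examples
This programming segment is a minimal incremental read loop. The fileP and iscanP handles and the
file size fileSize (for example, ia_size from the file's gpfs_iattr_t) are assumed to come from
earlier calls; advancing offset past each returned region before the next call is also an assumption of
this sketch.
gpfs_ifile_t *fileP;       /* assumed: from gpfs_iopen() */
gpfs_iscan_t *iscanP;      /* assumed: from gpfs_open_inodescan() */
char dataBuf[262144];
gpfs_off64_t offset = 0;
gpfs_off64_t fileSize;     /* assumed: ia_size from the file's gpfs_iattr_t */
gpfs_off64_t nReturned;
int hole;

while ((nReturned = gpfs_ireadx(fileP, iscanP, dataBuf, sizeof(dataBuf),
                                &offset, fileSize, &hole)) > 0)
{
  if (hole)
  {
    /* a hole of nReturned bytes starts at offset; record it in the backup */
  }
  else
  {
    /* dataBuf holds nReturned bytes of changed data that start at offset */
  }
  offset += nReturned;     /* advance past the data or hole just returned */
}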
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_iscan_t structure
Contains a handle for an inode scan of a GPFS file system or snapshot.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_iscan gpfs_iscan_t;
Description
The gpfs_iscan_t structure contains a handle for an inode scan of a GPFS file system or snapshot.
Members
gpfs_iscan
The handle for an inode scan for a GPFS file system or snapshot.
Examples
For an example using gpfs_iscan_t, see /usr/lpp/mmfs/samples/util/tstimes.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_lib_init() subroutine
Sets up a GPFS interface for additional calls.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_lib_init(int flags);
Description
The gpfs_lib_init() subroutine, together with the gpfs_lib_term() subroutine, is intended for use
by a program that makes repeated calls to a GPFS programming interface. This subroutine sets up the
internal structure to speed up additional interface calls.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
flags
Reserved for future use. Must be zero.
Exit status
If the gpfs_lib_init() subroutine is successful, it returns a value of 0.
If the gpfs_lib_init() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINVAL
A nonzero value was passed as the flags parameter.
ENOSYS
The gpfs_lib_init() subroutine is not supported under the current file system format.
Examples
For an example using gpfs_lib_init(), see /usr/lpp/mmfs/samples/util/tsfindinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_lib_term() subroutine
Cleans up after GPFS interface calls have been completed.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_lib_term(int flags);
Description
The gpfs_lib_term() subroutine, together with the gpfs_lib_init() subroutine, is intended for use
by a program that makes repeated calls to a GPFS programming interface. This subroutine cleans up the
internal structure previously set up by gpfs_lib_init().
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
flags
Reserved for future use. Must be zero.
Exit status
If the gpfs_lib_term() subroutine is successful, it returns a value of 0.
If the gpfs_lib_term() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINTR
The gpfs_lib_term() subroutine was interrupted by a signal that was caught. Cleanup was done.
EINVAL
A nonzero value was passed as the flags parameter.
Examples
For an example using gpfs_lib_term(), see /usr/lpp/mmfs/samples/util/tsfindinode.c.
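The following inline sketch pairs the two calls around repeated GPFS interface calls:
if (gpfs_lib_init(0) != 0)
{
  /* errno describes the failure */
}
/* ... repeated GPFS programming interface calls ... */
gpfs_lib_term(0);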
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_next_inode() subroutine
Retrieves the next inode from the inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_next_inode(gpfs_iscan_t *iscan,
gpfs_ino_t termIno,
const gpfs_iattr_t **iattr);
Description
The gpfs_next_inode() subroutine obtains the next inode from the specified inode scan and sets the
iattr pointer to the inode's attributes. The termIno parameter can be used to terminate the inode scan
before the last inode in the file system or snapshot being scanned. A value of 0 may be provided to
indicate the last inode in the file system or snapshot. If there are no more inodes to be returned before
the termination inode, the gpfs_next_inode() subroutine returns a value of 0 and the inode's attribute
pointer is set to NULL.
For an overview of using gpfs_next_inode() in a backup application, see Using APIs to develop backup
applications in IBM Spectrum Scale: Administration Guide.
To generate a full backup, invoke gpfs_open_inodescan() with NULL for the prev_fssnapId
parameter. Repeated invocations of gpfs_next_inode() then return inode information about all
existing user files, directories, and links in inode number order.
To generate an incremental backup, invoke gpfs_open_inodescan() with the fssnapId that was obtained
from a fssnapHandle at the time the previous backup was created. The snapshot that was used for the
previous backup does not need to exist at the time the incremental backup is generated. That is, the
backup application needs to remember only the fssnapId of the previous backup; the snapshot itself
can be deleted as soon as the backup is completed.
For an incremental backup, only inodes of files that have changed since the specified previous snapshot
will be returned. Any operation that changes the file's mtime or ctime is considered a change and will
cause the file to be included. Files with no changes to the file's data or file attributes, other than a change
to atime, are omitted from the scan.
Incremental backups return deleted files, but full backups do not. A deleted file is indicated by the field
ia_nlinks having a value of 0.
To read only the inodes that have been copied to a snapshot, use gpfs_open_inodescan() with
fssnapHandle of the snapshot and pass fssnapid of the fssnapHandle as prev_fssnapId.
Repeated invocations of gpfs_next_inode() then return the inodes copied to the snapshot, skipping
holes.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
iscan
Pointer to the inode scan handle.
termIno
The inode scan terminates before this inode number. The caller may specify maxIno from
gpfs_open_inodescan() or zero to scan the entire inode file.
iattr
Pointer to the returned pointer to the inode's iattr.
Exit status
If the gpfs_next_inode() subroutine is successful, it returns a value of 0 and sets the iattr pointer.
The pointer is set to NULL if there are no more inodes; otherwise, it points to the returned inode's
attributes.
If the gpfs_next_inode() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_next_inode() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ESTALE
Cached file system information was not valid.
GPFS_E_INVAL_FSSNAPID
The file system snapshot ID is not valid.
GPFS_E_INVAL_ISCAN
Incorrect parameters.
Examples
For an example using gpfs_next_inode(), see /usr/lpp/mmfs/samples/util/tstimes.c.
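The following inline sketch walks every user inode for a full backup. The fssnapHandle is assumed to
have been obtained with gpfs_get_fssnaphandle_by_path() or a related subroutine, and
gpfs_close_inodescan() releases the scan; both are documented elsewhere in this reference.
gpfs_fssnap_handle_t *fssnapHandle;   /* assumed: file system snapshot handle */
gpfs_ino_t maxIno;
const gpfs_iattr_t *iattrP;

gpfs_iscan_t *iscanP = gpfs_open_inodescan(fssnapHandle, NULL, &maxIno);
if (iscanP != NULL)
{
  while (gpfs_next_inode(iscanP, maxIno, &iattrP) == 0 && iattrP != NULL)
  {
    /* for example, iattrP->ia_inode identifies the file; open it with gpfs_iopen() */
  }
  gpfs_close_inodescan(iscanP);
}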
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_next_inode64() subroutine
Retrieves the next inode from the inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_next_inode64(gpfs_iscan_t *iscan,
gpfs_ino64_t termIno,
const gpfs_iattr64_t **iattr);
Description
The gpfs_next_inode64() subroutine obtains the next inode from the specified inode scan and sets
the iattr pointer to the inode's attributes. The termIno parameter can be used to stop the inode scan
before the last inode in the file system or snapshot being scanned. A value of 0 can be provided to
indicate the last inode in the file system or snapshot. If there are no more inodes to be returned before
the termination inode, the gpfs_next_inode64() subroutine returns a value of 0 and the inode's
attribute pointer is set to NULL.
For an overview of using gpfs_next_inode64() in a backup application, see Using APIs to develop
backup applications in IBM Spectrum Scale: Administration Guide.
To generate a full backup, invoke gpfs_open_inodescan64() with NULL for the prev_fssnapId
parameter. Repeated invocations of gpfs_next_inode64() then return inode information about all
existing user files, directories, and links in inode number order.
To generate an incremental backup, invoke gpfs_open_inodescan64() with the fssnapId that was
obtained from a fssnapHandle at the time the previous backup was created. The snapshot that was
used for the previous backup does not need to exist at the time the incremental backup is generated. That
is, the backup application needs to remember only the fssnapId of the previous backup; the snapshot
itself can be deleted as soon as the backup is completed.
For an incremental backup, only inodes of files that have changed since the specified previous snapshot
will be returned. Any operation that changes the file's mtime or ctime is considered a change and will
cause the file to be included. Files with no changes to the file's data or file attributes, other than a change
to atime, are omitted from the scan.
Incremental backups return deleted files, but full backups do not. A deleted file is indicated by the field
ia_nlinks having a value of 0.
To read only the inodes that have been copied to a snapshot, use gpfs_open_inodescan64() with
fssnapHandle of the snapshot and pass fssnapid of the fssnapHandle as prev_fssnapId.
Repeated invocations of gpfs_next_inode64() then return the inodes copied to the snapshot, skipping
holes.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
iscan
A pointer to the inode scan handle.
termIno
The inode scan terminates before this inode number. The caller may specify maxIno from
gpfs_open_inodescan64() or zero to scan the entire inode file.
iattr
A pointer to the returned pointer to the inode's iattr.
Exit status
If the gpfs_next_inode64() subroutine is successful, it returns a value of 0 and sets the iattr
pointer. The pointer is set to NULL if there are no more inodes; otherwise, it points to the returned
inode's attributes.
If the gpfs_next_inode64() subroutine is unsuccessful, it returns a value of -1 and sets the global
error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_next_inode64() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ESTALE
The cached file system information was not valid.
GPFS_E_INVAL_FSSNAPID
The file system snapshot ID is not valid.
GPFS_E_INVAL_ISCAN
Incorrect parameters.
Examples
See the gpfs_next_inode() example in /usr/lpp/mmfs/samples/util/tstimes.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_next_inode_with_xattrs() subroutine
Retrieves the next inode and its extended attributes from the inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_next_inode_with_xattrs(gpfs_iscan_t *iscan,
gpfs_ino_t termIno,
const gpfs_iattr_t **iattr,
const char **xattrBuf,
unsigned int *xattrBufLen);
Description
The gpfs_next_inode_with_xattrs() subroutine retrieves the next inode and its extended
attributes from the inode scan. The set of extended attributes returned are defined when the inode scan
was opened. The scan stops before the last inode that was specified or the last inode in the inode file
being scanned.
The data returned by gpfs_next_inode() is overwritten by subsequent calls to gpfs_next_inode(),
gpfs_seek_inode(), or gpfs_stat_inode().
The termIno parameter provides a way to partition an inode scan so it can be run on more than one
node.
The returned values for xattrBuf and xattrBufLen must be provided to gpfs_next_xattr() to
obtain the extended attribute names and values. The buffer used for the extended attributes is
overwritten by subsequent calls to gpfs_next_inode(), gpfs_seek_inode(), or
gpfs_stat_inode().
The returned pointers to the extended attribute name and value will be aligned to a double-word
boundary.
Parameters
iscan
A pointer to the inode scan descriptor.
termIno
The inode scan stops before this inode number. The caller can specify maxIno from
gpfs_open_inodescan() or zero to scan the entire inode file.
iattr
A pointer to the returned pointer to the file's iattr.
xattrBuf
A pointer to the returned pointer to the xiattr buffer.
xattrBufLen
The returned length of the xiattr buffer.
Exit status
If the gpfs_next_inode_with_xattrs() subroutine is successful, it returns a value of 0 and sets iattr
to point to the returned gpfs_iattr_t structure, or to NULL if there are no more inodes.
If the gpfs_next_inode_with_xattrs() subroutine is unsuccessful, it returns a value of -1 and sets the
global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EFAULT
The buffer data was overwritten.
ENOMEM
The buffer is too small, unable to allocate memory for the request.
ENOSYS
The gpfs_next_inode_with_xattrs() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ESTALE
The cached file system information was not valid.
GPFS_E_INVAL_ISCAN
Incorrect parameters.
GPFS_E_INVAL_XATTR
Incorrect parameters.
Examples
For an example using gpfs_next_inode_with_xattrs(), see /usr/lpp/mmfs/samples/util/
tsinode.c.
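The following inline sketch retrieves each inode together with its extended attribute buffer. The
iscanP handle is assumed to come from gpfs_open_inodescan_with_xattrs().
gpfs_iscan_t *iscanP;      /* assumed: from gpfs_open_inodescan_with_xattrs() */
const gpfs_iattr_t *iattrP;
const char *xattrBuf;
unsigned int xattrBufLen;

while (gpfs_next_inode_with_xattrs(iscanP, 0, &iattrP,
                                   &xattrBuf, &xattrBufLen) == 0
       && iattrP != NULL)
{
  /* pass xattrBuf and xattrBufLen to gpfs_next_xattr() to walk the attributes */
}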
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_next_inode_with_xattrs64() subroutine
Retrieves the next inode and its extended attributes from the inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_next_inode_with_xattrs64(gpfs_iscan_t *iscan,
gpfs_ino64_t termIno,
const gpfs_iattr64_t **iattr,
const char **xattrBuf,
unsigned int *xattrBufLen);
Description
The gpfs_next_inode_with_xattrs64() subroutine retrieves the next inode and its extended
attributes from the inode scan. The set of extended attributes returned are defined when the inode scan
was opened. The scan stops before the last inode that was specified or the last inode in the inode file
being scanned.
The data returned by gpfs_next_inode64() is overwritten by subsequent calls to
gpfs_next_inode64(), gpfs_seek_inode64(), or gpfs_stat_inode64().
The termIno parameter provides a way to partition an inode scan so it can be run on more than one
node.
The returned values for xattrBuf and xattrBufLen must be provided to gpfs_next_xattr() to
obtain the extended attribute names and values. The buffer used for the extended attributes is
overwritten by subsequent calls to gpfs_next_inode64(), gpfs_seek_inode64(), or
gpfs_stat_inode64().
The returned pointers to the extended attribute name and value will be aligned to a double-word
boundary.
Parameters
iscan
A pointer to the inode scan descriptor.
termIno
The inode scan stops before this inode number. The caller can specify maxIno from
gpfs_open_inodescan64() or zero to scan the entire inode file.
iattr
A pointer to the returned pointer to the file's iattr.
xattrBuf
A pointer to the returned pointer to the xiattr buffer. Initialize this parameter to a valid value or
NULL before calling gpfs_next_inode_with_xattrs64.
xattrBufLen
The returned length of the xiattr buffer. Initialize this parameter to a valid value or NULL before
calling gpfs_next_inode_with_xattrs64.
Exit status
If the gpfs_next_inode_with_xattrs64() subroutine is successful, it returns a value of 0 and sets
iattr to point to the returned gpfs_iattr64_t structure, or to NULL if there are no more inodes.
If the gpfs_next_inode_with_xattrs64() subroutine is unsuccessful, it returns a value of -1 and
sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EFAULT
The buffer data was overwritten.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_next_inode_with_xattrs64() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ESTALE
The cached file system information was not valid.
GPFS_E_INVAL_ISCAN
Incorrect parameters.
GPFS_E_INVAL_XATTR
Incorrect parameters.
Examples
See the gpfs_next_inode_with_xattrs() example in /usr/lpp/mmfs/samples/util/
tsinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_next_xattr() subroutine
Returns individual attributes and their values.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_next_xattr(gpfs_iscan_t *iscan,
const char **xattrBuf,
unsigned int *xattrBufLen,
const char **name,
unsigned int *valueLen,
const char **value);
Description
The gpfs_next_xattr() subroutine iterates over the extended attributes buffer returned by the
gpfs_next_inode_with_xattrs() or gpfs_next_inode_with_xattrs64() subroutine to return
the individual attributes and their values. The attribute names are null-terminated strings, whereas the
attribute value contains binary data.
Note: The caller is not allowed to modify the returned attribute names or values. The data returned by
gpfs_next_xattr() might be overwritten by subsequent calls to gpfs_next_xattr() or other GPFS
library calls.
Parameters
iscan
A pointer to the inode descriptor.
xattrBuf
A pointer to the pointer to the attribute buffer.
xattrBufLen
A pointer to the attribute buffer length.
name
A pointer to the attribute name.
valueLen
A pointer to the length of the attribute value.
value
A pointer to the attribute value.
Exit status
If the gpfs_next_xattr() subroutine is successful, it returns a value of 0 and a pointer to the attribute
name. It also sets:
• The valueLen parameter to the length of the attribute value
• The value parameter to point to the attribute value
• The xattrBufLen parameter to the remaining length of buffer
• The xattrBuf parameter to index the next attribute in buffer
If the gpfs_next_xattr() subroutine is successful, but there are no more attributes in the buffer, it
returns a value of 0 and the attribute name is set to NULL. It also sets:
• The valueLen parameter to 0
• The value parameter to NULL
If the gpfs_next_xattr() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINVAL
Incorrect parameters.
ENOSYS
The gpfs_next_xattr() subroutine is not available.
Examples
For an example using gpfs_next_xattr(), see /usr/lpp/mmfs/samples/util/tsinode.c.
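The following inline sketch walks one attribute buffer. The iscanP handle and the xattrBuf and
xattrBufLen values are assumed to come from gpfs_next_inode_with_xattrs().
gpfs_iscan_t *iscanP;      /* assumed: inode scan descriptor */
const char *xattrBuf;      /* assumed: from gpfs_next_inode_with_xattrs() */
unsigned int xattrBufLen;  /* assumed: from gpfs_next_inode_with_xattrs() */
const char *name;
const char *value;
unsigned int valueLen;

while (gpfs_next_xattr(iscanP, &xattrBuf, &xattrBufLen,
                       &name, &valueLen, &value) == 0
       && name != NULL)
{
  /* name is a null-terminated string; value points to valueLen bytes of binary data */
}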
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_opaque_acl_t structure
Contains buffer mapping for the gpfs_getacl() and gpfs_putacl() subroutines.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct
{
int acl_buffer_len;
unsigned short acl_version;
unsigned char acl_type;
char acl_var_data[1];
} gpfs_opaque_acl_t;
Description
The gpfs_opaque_acl_t structure contains size, version, and ACL type information for the
gpfs_getacl() and gpfs_putacl() subroutines.
Members
acl_buffer_len
On input, this field must be set to the total length, in bytes, of the data structure being passed to
GPFS. On output, this field contains the actual size of the requested information. If the initial size of
the buffer is not large enough to contain all of the information, the gpfs_getacl() invocation must
be repeated with a larger buffer.
acl_version
This field contains the current version of the GPFS internal representation of the ACL. On input to the
gpfs_getacl() subroutine, set this field to zero.
acl_type
On input to the gpfs_getacl() subroutine, set this field to either GPFS_ACL_TYPE_ACCESS or
GPFS_ACL_TYPE_DEFAULT, depending on which ACL is requested. These constants are defined in
the gpfs.h header file.
acl_var_data
This field signifies the beginning of the remainder of the ACL information.
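Examples
The following sketch prepares an opaque ACL buffer before a gpfs_getacl() call; the 512-byte buffer
size is an arbitrary starting value.
char aclBuf[512];
gpfs_opaque_acl_t *aclP = (gpfs_opaque_acl_t *)aclBuf;

aclP->acl_buffer_len = sizeof(aclBuf);        /* total size of the buffer passed to GPFS */
aclP->acl_version = 0;                        /* always zero on input */
aclP->acl_type = GPFS_ACL_TYPE_ACCESS;        /* or GPFS_ACL_TYPE_DEFAULT */
/* pass aclP as the buffer on the gpfs_getacl() call; if the returned acl_buffer_len is
   larger than the buffer provided, repeat the call with a larger buffer */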
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_open_inodescan() subroutine
Opens an inode scan of a file system or snapshot.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_iscan_t *gpfs_open_inodescan(gpfs_fssnap_handle_t *fssnapHandle,
const gpfs_fssnap_id_t *prev_fssnapId,
gpfs_ino_t *maxIno);
Description
The gpfs_open_inodescan() subroutine opens a scan of the inodes in the file system or snapshot
identified by the fssnapHandle parameter. The scan traverses all user files, directories and links in the
file system or snapshot. The scan begins with the user file with the lowest inode number and returns the
files in increasing order. The gpfs_seek_inode() subroutine may be used to set the scan position to an
arbitrary inode. System files, such as the block allocation maps, are omitted from the scan. The file
system must be mounted to open an inode scan.
For an overview of using gpfs_open_inodescan() in a backup application, see Using APIs to develop
backup applications in IBM Spectrum Scale: Administration Guide.
To generate a full backup, invoke gpfs_open_inodescan() with NULL for the prev_fssnapId
parameter. Repeated invocations of gpfs_next_inode() then return inode information about all
existing user files, directories, and links in inode number order.
To generate an incremental backup, invoke gpfs_open_inodescan() with the fssnapId that was
obtained from a fssnapHandle at the time the previous backup was created. The snapshot that was
used for the previous backup does not need to exist at the time the incremental backup is generated. That
is, the backup application needs to remember only the fssnapId of the previous backup; the snapshot
itself can be deleted as soon as the backup is completed.
For the incremental backup, any operation that changes the file's mtime or ctime causes the file to be
included. Files with no changes to the file's data or file attributes, other than a change to atime, are
omitted from the scan.
A full inode scan (prev_fssnapId set to NULL) does not return any inodes of nonexistent or deleted
files, but an incremental inode scan (prev_fssnapId not NULL) does return inodes for files that have
been deleted since the previous snapshot. The inodes of deleted files have a link count of zero.
If the snapshot indicated by prev_fssnapId is available, the caller may benefit from the extended read
subroutine, gpfs_ireadx(), which returns only the changed blocks within the files. Without the
previous snapshot, all blocks within the changed files are returned.
Once a full or incremental backup completes, the new_fssnapId must be saved in order to reuse it on a
subsequent incremental backup. This fssnapId must be provided to the gpfs_open_inodescan()
subroutine, as the prev_fssnapId input parameter.
To read only the inodes that have been copied to a snapshot, use gpfs_open_inodescan() with
fssnapHandle of the snapshot and pass fssnapid of the fssnapHandle as the prev_fssnapId.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
File system snapshot handle.
prev_fssnapId
Pointer to file system snapshot ID or NULL. If prev_fssnapId is provided, the inode scan returns
only the files that have changed since the previous backup. If the pointer is NULL, the inode scan
returns all user files.
If it is the same as the fssnapid of the fssnapHandle parameter, the scan only returns the inodes
copied into the corresponding snapshot.
maxIno
Pointer to inode number or NULL. If provided, gpfs_open_inodescan() returns the maximum
inode number in the file system or snapshot being scanned.
Exit status
If the gpfs_open_inodescan() subroutine is successful, it returns a pointer to an inode scan handle.
If the gpfs_open_inodescan() subroutine is unsuccessful, it returns a NULL pointer and the global
error variable errno is set to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EDOM
The file system snapshot ID passed for prev_fssnapId is from a different file system.
EINVAL
Incorrect parameters.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_open_inodescan() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ERANGE
The prev_fssnapId parameter is the same as, or more recent than, the snapId being scanned.
ESTALE
Cached file system information was not valid.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
GPFS_E_INVAL_FSSNAPID
The file system snapshot ID passed for prev_fssnapId is not valid.
Examples
For an example using gpfs_open_inodescan(), see /usr/lpp/mmfs/samples/util/tstimes.c.
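The following inline sketch opens an incremental scan. The fssnapHandle is assumed to be the current
file system snapshot handle, and prevId is assumed to be the gpfs_fssnap_id_t that was saved when
the previous backup was taken.
gpfs_fssnap_handle_t *fssnapHandle;   /* assumed: current file system snapshot handle */
gpfs_fssnap_id_t prevId;              /* assumed: fssnapId saved from the previous backup */
gpfs_ino_t maxIno;

gpfs_iscan_t *iscanP = gpfs_open_inodescan(fssnapHandle, &prevId, &maxIno);
if (iscanP == NULL)
{
  /* errno describes the failure, for example EDOM or GPFS_E_INVAL_FSSNAPID */
}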
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_open_inodescan64() subroutine
Opens an inode scan of a file system or snapshot.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_iscan_t *gpfs_open_inodescan64(gpfs_fssnap_handle_t *fssnapHandle,
const gpfs_fssnap_id_t *prev_fssnapId,
gpfs_ino64_t *maxIno);
Description
The gpfs_open_inodescan64() subroutine opens a scan of the inodes in the file system or snapshot
identified by the fssnapHandle parameter. The scan traverses all user files, directories and links in the
file system or snapshot. The scan begins with the user file with the lowest inode number and returns the
files in increasing order. The gpfs_seek_inode64() subroutine may be used to set the scan position to
an arbitrary inode. System files, such as the block allocation maps, are omitted from the scan. The file
system must be mounted to open an inode scan.
For an overview of using gpfs_open_inodescan64() in a backup application, see Using APIs to develop
backup applications in IBM Spectrum Scale: Administration Guide.
To generate a full backup, invoke gpfs_open_inodescan64() with NULL for the prev_fssnapId
parameter. Repeated invocations of gpfs_next_inode64() then return inode information about all
existing user files, directories, and links in inode number order.
To generate an incremental backup, invoke gpfs_open_inodescan64() with the fssnapId that was
obtained from a fssnapHandle at the time the previous backup was created. The snapshot that was
used for the previous backup does not need to exist at the time the incremental backup is generated. That
is, the backup application needs to remember only the fssnapId of the previous backup; the snapshot
itself can be deleted as soon as the backup is completed.
For the incremental backup, any operation that changes the file's mtime or ctime causes the file to be
included. Files with no changes to the file's data or file attributes, other than a change to atime, are
omitted from the scan.
A full inode scan (prev_fssnapId set to NULL) does not return any inodes of nonexistent or deleted
files, but an incremental inode scan (prev_fssnapId not NULL) does return inodes for files that have
been deleted since the previous snapshot. The inodes of deleted files have a link count of zero.
If the snapshot indicated by prev_fssnapId is available, the caller may benefit from the extended read
subroutine, gpfs_ireadx(), which returns only the changed blocks within the files. Without the
previous snapshot, all blocks within the changed files are returned.
Once a full or incremental backup completes, the new_fssnapId must be saved in order to reuse it on a
subsequent incremental backup. This fssnapId must be provided to the gpfs_open_inodescan64()
subroutine, as the prev_fssnapId input parameter.
To read only the inodes that have been copied to a snapshot, use gpfs_open_inodescan() with
fssnapHandle of the snapshot and pass fssnapid of the fssnapHandle as the prev_fssnapId.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
The file system snapshot handle.
prev_fssnapId
A pointer to file system snapshot ID or NULL. If prev_fssnapId is provided, the inode scan returns
only the files that have changed since the previous backup. If the pointer is NULL, the inode scan
returns all user files.
If it is the same as the fssnapid of the fssnapHandle parameter, the scan only returns the inodes copied
into the corresponding snapshot.
maxIno
A pointer to inode number or NULL. If provided, gpfs_open_inodescan64() returns the maximum
inode number in the file system or snapshot being scanned.
Exit status
If the gpfs_open_inodescan64() subroutine is successful, it returns a pointer to an inode scan
handle.
If the gpfs_open_inodescan64() subroutine is unsuccessful, it returns a NULL pointer and the global
error variable errno is set to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EDOM
The file system snapshot ID passed for prev_fssnapId is from a different file system.
EINVAL
Incorrect parameters.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_open_inodescan64() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ERANGE
The prev_fssnapId parameter is the same as, or more recent than, the snapId being scanned.
ESTALE
The cached file system information was not valid.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
GPFS_E_INVAL_FSSNAPID
The file system snapshot ID passed for prev_fssnapId is not valid.
Note: gpfs_open_inodescan64() calls the standard library subroutines dup() and malloc(); if one
of these called subroutines returns an error, gpfs_open_inodescan64() also returns that error.
Examples
See the gpfs_open_inodescan() example in /usr/lpp/mmfs/samples/util/tstimes.c.
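The following is a minimal sketch of a full scan (prev_fssnapId set to NULL); the helper name scan_all, the choice of printed fields, and the use of gpfs_get_fssnaphandle_by_path() on a caller-supplied path are assumptions made for illustration.
#include <stdio.h>
#include <gpfs.h>
/* Minimal sketch of a full scan: a NULL prev_fssnapId returns every
 * existing user file, directory, and link in inode number order. */
int scan_all(const char *pathInFs)
{
  gpfs_fssnap_handle_t *fsh = gpfs_get_fssnaphandle_by_path(pathInFs);
  gpfs_iscan_t *iscan;
  const gpfs_iattr64_t *iattr;
  gpfs_ino64_t maxIno;
  if (fsh == NULL)
    return -1;
  iscan = gpfs_open_inodescan64(fsh, NULL, &maxIno);
  if (iscan == NULL)
  {
    gpfs_free_fssnaphandle(fsh);
    return -1;
  }
  /* gpfs_next_inode64() returns 0 with a NULL iattr at the end of the scan. */
  while (gpfs_next_inode64(iscan, maxIno, &iattr) == 0 && iattr != NULL)
    printf("inode %lld, size %lld, nlink %lld\n",
           (long long)iattr->ia_inode, (long long)iattr->ia_size,
           (long long)iattr->ia_nlink);
  gpfs_close_inodescan(iscan);
  gpfs_free_fssnaphandle(fsh);
  return 0;
}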
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_open_inodescan_with_xattrs() subroutine
Opens an inode file and extended attributes for an inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_iscan_t *gpfs_open_inodescan_with_xattrs(gpfs_fssnap_handle_t *fssnapHandle,
const gpfs_fssnap_id_t *prev_fssnapId,
int nxAttrs,
const char *xattrsList[],
gpfs_ino_t *maxIno);
Description
The gpfs_open_inodescan_with_xattrs() subroutine opens an inode file and extended attributes
for an inode scan identified by the fssnapHandle parameter. The scan traverses all user files, directories
and links in the file system or snapshot. The scan begins with the user file with the lowest inode number
and returns the files in increasing order. The gpfs_seek_inode() subroutine can be used to set the
scan position to an arbitrary inode. System files, such as the block allocation maps, are omitted from the
scan. The file system must be mounted to open an inode scan.
For an overview of using gpfs_open_inodescan_with_xattrs() in a backup application, see Using
APIs to develop backup applications in IBM Spectrum Scale: Administration Guide.
To generate a full backup, invoke gpfs_open_inodescan_with_xattrs() with NULL for the
prev_fssnapId parameter. Repeated invocations of gpfs_next_inode() then return inode
information about all existing user files, directories, and links in inode number order.
To generate an incremental backup, invoke gpfs_open_inodescan_with_xattrs() with the
fssnapId that was obtained from a fssnapHandle at the time the previous backup was created. The
snapshot that was used for the previous backup does not need to exist at the time the incremental
backup is generated. That is, the backup application needs to remember only the fssnapId of the
previous backup; the snapshot itself can be deleted as soon as the backup is completed.
For the incremental backup, any operation that changes the file's mtime or ctime causes the file to be
included. Files with no changes to the file's data or file attributes, other than a change to atime, are
omitted from the scan.
A full inode scan (prev_fssnapId set to NULL) returns all inodes of existing files. An incremental inode
scan (prev_fssnapId not NULL) returns inodes for files that have changed since the previous snapshot.
The inodes of deleted files have a link count of zero.
If the snapshot indicated by prev_fssnapId is available, the caller may benefit from the extended read
subroutine, gpfs_ireadx(), which returns only the changed blocks within the files. Without the
previous snapshot, all blocks within the changed files are returned.
Once a full or incremental backup completes, the new_fssnapId must be saved in order to reuse it on a
subsequent incremental backup. This fssnapId must be provided to the
gpfs_open_inodescan_with_xattrs() subroutine, as the prev_fssnapId input parameter.
To read only the inodes that have been copied to a snapshot, use gpfs_open_inodescan() with
fssnapHandle of the snapshot and pass fssnapid of the fssnapHandle as the prev_fssnapId.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
The file system snapshot handle.
prev_fssnapId
A pointer to file system snapshot ID or NULL. If prev_fssnapId is provided, the inode scan returns
only the files that have changed since the previous backup. If the pointer is NULL, the inode scan
returns all user files.
If it is the same as the fssnapid of the fssnapHandle parameter, the scan only returns the inodes
copied into the corresponding snapshot.
nxAttrs
The count of extended attributes to be returned. If nxAttrs is set to 0, the call returns no extended
attributes, as with gpfs_open_inodescan(). If nxAttrs is set to -1, the call returns all extended
attributes.
xattrsList
A pointer to an array of pointers to names of extended attributes to be returned. xattrsList may
be NULL if nxAttrs is set to 0 or -1.
maxIno
A pointer to inode number or NULL. If provided, gpfs_open_inodescan_with_xattrs() returns
the maximum inode number in the file system or snapshot being scanned.
Exit status
If the gpfs_open_inodescan_with_xattrs() subroutine is successful, it returns a pointer to
gpfs_iscan_t.
If the gpfs_open_inodescan_with_xattrs() subroutine is unsuccessful, it returns a NULL pointer
and the global error variable errno is set to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EDOM
The file system snapshot ID passed for prev_fssnapId is from a different file system.
EINVAL
Incorrect parameters.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_open_inodescan_with_xattrs() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ERANGE
The prev_fssnapId parameter is the same as or more recent than snapId being scanned.
ESTALE
The cached file system information was not valid.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
GPFS_E_INVAL_FSSNAPID
The file system snapshot ID passed for prev_fssnapId is not valid.
Note: gpfs_open_inodescan_with_xattrs() calls the standard library subroutines dup() and
malloc(); if one of these called subroutines returns an error,
gpfs_open_inodescan_with_xattrs() also returns that error.
Examples
For an example using gpfs_open_inodescan_with_xattrs(), see /usr/lpp/mmfs/samples/
util/tsinode.c.
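The following is a minimal sketch of opening a scan that also returns selected extended attributes; the helper name open_scan_with_xattrs and the attribute names user.owner and user.project are placeholders, not names defined by GPFS.
#include <stdio.h>
#include <gpfs.h>
/* Sketch: open a scan that returns two named extended attributes for
 * every inode. The attribute names below are placeholders. */
int open_scan_with_xattrs(gpfs_fssnap_handle_t *fsh)
{
  const char *names[] = { "user.owner", "user.project" };
  gpfs_ino_t maxIno;
  gpfs_iscan_t *iscan;
  iscan = gpfs_open_inodescan_with_xattrs(fsh, NULL, 2, names, &maxIno);
  if (iscan == NULL)
  {
    perror("gpfs_open_inodescan_with_xattrs");
    return -1;
  }
  /* ... iterate with gpfs_next_inode_with_xattrs() and decode each
   * attribute buffer with gpfs_next_xattr(), then close the scan ... */
  gpfs_close_inodescan(iscan);
  return 0;
}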
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_open_inodescan_with_xattrs64() subroutine
Opens an inode file and extended attributes for an inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
gpfs_iscan_t *gpfs_open_inodescan_with_xattrs64(gpfs_fssnap_handle_t *fssnapHandle,
const gpfs_fssnap_id_t *prev_fssnapId,
int nxAttrs,
const char *xattrList[],
gpfs_ino64_t *maxIno);
Description
The gpfs_open_inodescan_with_xattrs64() subroutine opens an inode file and extended
attributes for an inode scan identified by the fssnapHandle parameter. The scan traverses all user files,
directories and links in the file system or snapshot. The scan begins with the user file with the lowest
inode number and returns the files in increasing order. The gpfs_seek_inode64() subroutine may be
used to set the scan position to an arbitrary inode. System files, such as the block allocation maps, are
omitted from the scan. The file system must be mounted to open an inode scan.
For an overview of using gpfs_open_inodescan_with_xattrs64() in a backup application, see Using
APIs to develop backup applications in IBM Spectrum Scale: Administration Guide.
To generate a full backup, invoke gpfs_open_inodescan_with_xattrs64() with NULL for the
prev_fssnapId parameter. Repeated invocations of gpfs_next_inode64() then return inode
information about all existing user files, directories, and links in inode number order.
To generate an incremental backup, invoke gpfs_open_inodescan_with_xattrs64() with the
fssnapId that was obtained from a fssnapHandle at the time the previous backup was created. The
snapshot that was used for the previous backup does not need to exist at the time the incremental
backup is generated. That is, the backup application needs to remember only the fssnapId of the
previous backup; the snapshot itself can be deleted as soon as the backup is completed.
For the incremental backup, any operation that changes the file's mtime or ctime causes the file to be
included. Files with no changes to the file's data or file attributes, other than a change to atime, are
omitted from the scan.
A full inode scan (prev_fssnapId set to NULL) returns all inodes of existing files. An incremental inode
scan (prev_fssnapId not NULL) returns inodes for files that have changed since the previous snapshot.
The inodes of deleted files have a link count of zero.
If the snapshot indicated by prev_fssnapId is available, the caller may benefit from the extended read
subroutine, gpfs_ireadx(), which returns only the changed blocks within the files. Without the
previous snapshot, all blocks within the changed files are returned.
Once a full or incremental backup completes, the new_fssnapId must be saved in order to reuse it on a
subsequent incremental backup. This fssnapId must be provided to the
gpfs_open_inodescan_with_xattrs64() subroutine, as the prev_fssnapId input parameter.
To read only the inodes that have been copied to a snapshot, use gpfs_open_inodescan() with
fssnapHandle of the snapshot and pass fssnapid of the fssnapHandle as the prev_fssnapId.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fssnapHandle
The file system snapshot handle.
prev_fssnapId
A pointer to file system snapshot ID or NULL. If prev_fssnapId is provided, the inode scan returns
only the files that have changed since the previous backup. If the pointer is NULL, the inode scan
returns all user files.
If it is the same as the fssnapid of the fssnapHandle parameter, the scan only returns the inodes copied
into the corresponding snapshot.
nxAttrs
The count of extended attributes to be returned. If nxAttrs is set to 0, the call returns no extended
attributes, as with gpfs_open_inodescan64(). If nxAttrs is set to -1, the call returns all extended
attributes.
xattrsList
A pointer to an array of pointers to names of extended attributes to be returned. xattrList may
be NULL if nxAttrs is set to 0 or -1.
maxIno
A pointer to inode number or NULL. If provided, gpfs_open_inodescan_with_xattrs64()
returns the maximum inode number in the file system or snapshot being scanned.
Exit status
If the gpfs_open_inodescan_with_xattrs64() subroutine is successful, it returns a pointer to
gpfs_iscan_t.
If the gpfs_open_inodescan_with_xattrs64() subroutine is unsuccessful, it returns a NULL pointer
and the global error variable errno is set to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EDOM
The file system snapshot ID passed for prev_fssnapId is from a different file system.
EINVAL
Incorrect parameters.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_open_inodescan_with_xattrs64() subroutine is not available.
EPERM
The caller does not have superuser privileges.
ERANGE
The prev_fssnapId parameter is the same as or more recent than snapId being scanned.
ESTALE
The cached file system information was not valid.
GPFS_E_INVAL_FSSNAPHANDLE
The file system snapshot handle is not valid.
GPFS_E_INVAL_FSSNAPID
The file system snapshot ID passed for prev_fssnapId is not valid.
Note: gpfs_open_inodescan_with_xattrs64() calls the standard library subroutines dup() and
malloc(); if one of these called subroutines returns an error,
gpfs_open_inodescan_with_xattrs64() also returns that error.
Examples
See the gpfs_open_inodescan_with_xattrs() example in /usr/lpp/mmfs/samples/util/
tsinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_prealloc() subroutine
Preallocates storage for a regular file or preallocates directory entries for a directory.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_prealloc(gpfs_file_t fileDesc,
gpfs_off64_t startOffset,
gpfs_off64_t bytesToPrealloc);
Description
The gpfs_prealloc() subroutine preallocates disk storage for a file or a directory.
In the case of a regular file, preallocation can improve I/O performance by creating a block of available
disk storage in the file immediately instead of increasing the file size incrementally as data is written. The
preallocated disk storage begins at the specified offset in the file and extends for the specified number of
bytes, rounded up to a GPFS block boundary. Existing data is not modified. Reading any of the
preallocated blocks returns zeroes. To determine how much space was actually preallocated, call the
stat() subroutine and compare the reported file size and number of blocks used with their values before
the preallocation.
In the case of a directory, preallocation can improve metadata performance by setting the minimum
compaction size of the directory. Preallocation is most effective in systems in which many files are added
to and removed from a directory in a short time. The minimum compaction size is the number of directory
slots, including both full and empty slots, that a directory is allowed to retain when it is compacted. In
IBM Spectrum Scale v4.1 or later, by default, a directory is automatically compacted as far as possible.
The gpfs_prealloc() subroutine sets the minimum compaction size of a directory to the specified
number of slots and adds directory slots if needed to reach the minimum size. For example, if a directory
contains 5,000 files and you set the minimum compaction size to 50,000, then the file system adds
45,000 directory slots. The directory can grow beyond 50,000 entries, but the file system does not allow
the directory to be compacted below 50,000 slots.
You must specify the minimum compaction size as a number of bytes. Determine the number of bytes to
allocate with the following formula:
bytesToPrealloc = n * ceiling((namelen + 16) / 32) * 32
where:
n
Specifies the number of directory entries that you want.
ceiling()
Is a function that rounds a fractional number up to the next highest integer. For example,
ceiling(1.03125) returns 2.
namelen
Specifies the expected average length of file names.
For example, if you want 20,000 entries with an average file name length of 48, then bytesToPrealloc =
20000 * 2 * 32 = 1,280,000.
To restore the default behavior of the file system for a directory, in which a directory is compacted as far
as possible, call the subroutine with bytesToPrealloc set to 0.
The number of bytes that are allocated for a directory is stored in the ia_dirminsize field in the
gpfs_iattr64_t structure that is returned by the gpfs_fstat_x() subroutine. To convert the number
of bytes to the number of directory slots, apply the following rule:
numSlots = numBytesReturned / 32
To convert the number of bytes to the number of files in the directory, apply a version of the formula that
is described above:
numFiles = numBytesReturned / (ceiling((namelen + 16) / 32) * 32)
If the file is not a directory or is a directory with no preallocation, the ia_dirminsize field is set to 0.
This attribute is reported for information only and is meaningful only in the active file system. It is not
backed up and restored and it is ignored in snapshots. It usually differs from the file size that is recorded
in ia_size or returned by stat().
To set the minimum compaction size of a directory from the command line, set the compact parameter in
the mmchattr command.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fileDesc
The file descriptor returned by open(). Note the following points:
• A file must be opened for writing.
• A directory can be opened for reading, but the caller must have write permission to the directory.
startOffset
For a file, the byte offset into the file at which to begin preallocation. For a directory, set this
parameter to 0.
bytesToPrealloc
The number of bytes to be preallocated. For a file, this value is rounded up to a GPFS block boundary.
For a directory, calculate the number of bytes with the formula that is described above.
Exit status
If the gpfs_prealloc() subroutine is successful, it returns a value of 0.
If the gpfs_prealloc() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error. If errno is set to one of the following, some storage
may have been preallocated:
• EDQUOT
• ENOSPC
The preallocation or minimum compaction setting of a directory can be obtained with the
“gpfs_fstat_x() subroutine” on page 868. The gpfs_iattr64 structure that it returns, defined in gpfs.h,
includes the ia_dirminsize field, which gives the preallocation size of a directory. For files that are not
directories, and for directories without a preallocation, the value is zero. The size is expressed in bytes
and is interpreted in the same way as for the gpfs_prealloc() function. It is 32 times the value that is
reported by mmlsattr and generally differs from the file size reported in ia_size and by stat().
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EACCES
The file or directory is not opened for writing.
EBADF
The file descriptor is not valid.
EDQUOT
A disk quota has been exceeded
EINVAL
The file descriptor does not refer to a file or directory; a negative value was specified for
startOffset or bytesToPrealloc.
ENOSPC
The file system has run out of disk space.
ENOSYS
The gpfs_prealloc() subroutine is not supported under the current file system format.
Examples
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <errno.h>
#include <gpfs.h>
int rc;
int fileHandle = -1;
char* fileNameP = "datafile";
gpfs_off64_t startOffset = 0;
gpfs_off64_t bytesToAllocate = 20*1024*1024; /* 20 MB */
/* Open the file for writing, then preallocate 20 MB starting at offset 0. */
fileHandle = open(fileNameP, O_RDWR | O_CREAT, 0644);
if (fileHandle < 0)
  { perror(fileNameP); exit(1); }
rc = gpfs_prealloc(fileHandle, startOffset, bytesToAllocate);
if (rc < 0)
  { fprintf(stderr, "gpfs_prealloc failed, errno %d\n", errno); exit(1); }
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_putacl() subroutine
Restores the access control information for a GPFS file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_putacl(const char *pathname,
int flags,
void *acl);
Description
The gpfs_putacl() subroutine together with the gpfs_getacl() subroutine is intended for use by a
backup program to save (gpfs_getacl()) and restore (gpfs_putacl()) the ACL information for the
file.
Notes:
1. The use of gpfs_fgetattrs() and gpfs_fputattrs() is preferred.
2. You must have write access to the file.
3. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
pathname
Path name of the file for which the ACL is to be set.
flags
Consists of one of these values:
0
Indicates that the acl parameter is to be mapped with the gpfs_opaque_acl_t structure.
The gpfs_opaque_acl_t structure should be used by backup and restore programs.
GPFS_PUTACL_STRUCT
Indicates that the acl parameter is to be mapped with the gpfs_acl_t structure.
The gpfs_acl_t structure is provided for applications that need to change the ACL.
acl
Pointer to a buffer mapped by the structure gpfs_opaque_acl_t or gpfs_acl_t, depending on the
value of flags.
This is where the ACL data is stored, and should be the result of a previous invocation of
gpfs_getacl().
Exit status
If the gpfs_putacl() subroutine is successful, it returns a value of 0.
If the gpfs_putacl() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINVAL
The path name does not refer to a GPFS file or a regular file.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_putacl() subroutine is not supported under the current file system format.
ENOTDIR
File is not a directory.
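Examples
A minimal sketch of the save-and-restore pattern described above follows; the helper name copy_access_acl, the 64 KB buffer size, and the choice of the access ACL type are assumptions made for illustration.
#include <stdio.h>
#include <stdlib.h>
#include <gpfs.h>
/* Sketch: save the access ACL of srcPath with gpfs_getacl() and restore
 * it on dstPath with gpfs_putacl(), using the opaque format (flags == 0). */
int copy_access_acl(const char *srcPath, const char *dstPath)
{
  int bufLen = 64 * 1024;                     /* assumed to be large enough */
  gpfs_opaque_acl_t *aclP = calloc(1, bufLen);
  int rc;
  if (aclP == NULL)
    return -1;
  aclP->acl_buffer_len = bufLen;              /* size of the supplied buffer */
  aclP->acl_type = GPFS_ACL_TYPE_ACCESS;      /* fetch the access ACL */
  rc = gpfs_getacl(srcPath, 0, aclP);         /* save */
  if (rc == 0)
    rc = gpfs_putacl(dstPath, 0, aclP);       /* restore */
  free(aclP);
  return rc;
}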
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_putacl_fd() subroutine
Restores the access control information for a GPFS file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_putacl_fd(gpfs_file_t fileDesc,
int flags,
void *acl);
Description
The gpfs_putacl_fd() subroutine together with the gpfs_getacl_fd() subroutine is intended for
use by a backup program to save (gpfs_getacl_fd()) and restore (gpfs_putacl_fd()) the ACL
information for the file.
Notes:
1. The use of gpfs_fgetattrs() and gpfs_fputattrs() is preferred.
2. You must have write access to the file.
3. Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
fileDesc
A file descriptor that identifies the file for which the ACLs are to be put.
flags
Consists of one of these values:
0
Indicates that the acl parameter is to be mapped with the gpfs_opaque_acl_t structure.
The gpfs_opaque_acl_t structure is used by backup and restore programs.
GPFS_PUTACL_STRUCT
Indicates that the acl parameter is to be mapped with the gpfs_acl_t structure.
The gpfs_acl_t structure is provided for applications that need to change the ACL.
acl
Pointer to a buffer mapped by the structure gpfs_opaque_acl_t or gpfs_acl_t, depending on the
value of flags.
ACL data is stored here and should be the result of a previous invocation of gpfs_getacl().
Exit status
If the gpfs_putacl_fd() subroutine is successful, it returns a value of 0.
If the gpfs_putacl_fd() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINVAL
The file descriptor does not refer to a GPFS file or a regular file.
EBADF
The file descriptor is not valid.
ENOMEM
Unable to allocate memory for the request.
ENOSYS
The gpfs_putacl_fd() subroutine is not supported under the current file system format.
ENOTDIR
File is not a directory.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_quotactl() subroutine
Manipulates disk quotas on file systems.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_quotactl(const char *pathname,
int cmd,
int id,
void *bufferP);
Description
The gpfs_quotactl() subroutine manipulates disk quotas. It enables, disables, and manipulates disk
quotas for file systems on which quotas have been enabled.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
pathname
Specifies the path name of any file within the mounted file system to which the quota control
command is to be applied.
cmd
Specifies the quota control command to be applied and whether it is applied to a user, group, or fileset
quota.
The cmd parameter can be constructed using GPFS_QCMD(qcmd, Type) contained in gpfs.h. The
qcmd parameter specifies the quota control command. The Type parameter specifies one of the
following quota types:
• user (GPFS_USRQUOTA)
• group (GPFS_GRPQUOTA)
• fileset (GPFS_FILESETQUOTA)
The valid values for the qcmd parameter specified in gpfs.h are:
Q_QUOTAON
Enables quotas.
Enables disk quotas for the file system specified by the pathname parameter and type specified
in Type. The id and bufferP parameters are unused. Root user authority is required to enable
quotas.
Q_QUOTAOFF
Disables quotas.
Disables disk quotas for the file system specified by the pathname parameter and type specified
in Type. The id and bufferP parameters are unused. Root user authority is required to disable
quotas.
Q_GETQUOTA
Gets quota limits and usage information.
Retrieves quota limits and current usage for a user, group, or fileset specified by the id parameter.
The bufferP parameter points to a gpfs_quotaInfo_t structure to hold the returned
information. The gpfs_quotaInfo_t structure is defined in gpfs.h.
Root authority is required if the id value is not the current id (user id for GPFS_USRQUOTA, group
id for GPFS_GRPQUOTA) of the caller.
Q_SETQUOTA
Sets quota limits
Sets disk quota limits for a user, group, or fileset specified by the id parameter. The bufferP
parameter points to a gpfs_quotaInfo_t structure containing the new quota limits. The
gpfs_quotaInfo_t structure is defined in gpfs.h. Root user authority is required to set quota
limits.
Q_SETUSE
Sets quota usage
Sets disk quota usage for a user, group, or fileset specified by the id parameter. The bufferP
parameter points to a gpfs_quotaInfo_t structure containing the new quota usage. The
gpfs_quotaInfo_t structure is defined in gpfs.h. Root user authority is required to set quota
usage.
Q_SYNC
Synchronizes the disk copy of a file system quota
Updates the on disk copy of quota usage information for a file system. The id and bufferP
parameters are unused. Root user authority is required to synchronize a file system quota.
id
Specifies the user, group, or fileset ID to which the quota control command applies. The id
parameter is interpreted according to the specified quota type.
bufferP
Points to the address of an optional, command-specific data structure that is copied in or out of the
system.
Exit status
If the gpfs_quotactl() subroutine is successful, it returns a value of 0.
If the gpfs_quotactl() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EACCES
Search permission is denied for a component of a path prefix.
EFAULT
An invalid bufferP parameter is supplied. The associated structure could not be copied in or out of
the kernel.
EINVAL
One of the following errors:
• The file system is not mounted.
• Invalid command or quota type.
• Invalid input limits: negative limits or soft limits are greater than hard limits.
• UID is not defined.
ENOENT
No such file or directory.
EPERM
The quota control command is privileged and the caller did not have root user authority.
GPFS_E_NO_QUOTA_INST
The file system does not support quotas. This is the actual errno generated by GPFS.
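Examples
A minimal sketch of querying the block quota of the calling user, which does not require root authority, follows; the helper name show_my_quota and the output format are assumptions made for illustration.
#include <stdio.h>
#include <unistd.h>
#include <gpfs.h>
/* Sketch: retrieve the block quota of the calling user for the file
 * system that contains pathInFs. */
int show_my_quota(const char *pathInFs)
{
  gpfs_quotaInfo_t qi;
  int cmd = GPFS_QCMD(Q_GETQUOTA, GPFS_USRQUOTA);
  int rc = gpfs_quotactl(pathInFs, cmd, (int)getuid(), &qi);
  if (rc != 0)
  {
    perror("gpfs_quotactl");
    return rc;
  }
  /* Block values are reported in 1 KB units. */
  printf("usage %lld KB, soft limit %lld KB, hard limit %lld KB\n",
         (long long)qi.blockUsage,
         (long long)qi.blockSoftLimit,
         (long long)qi.blockHardLimit);
  return 0;
}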
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_quotaInfo_t structure
Contains buffer mapping for the gpfs_quotactl() subroutine.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct gpfs_quotaInfo
{
  gpfs_off64_t blockUsage;
  gpfs_off64_t blockHardLimit;
  gpfs_off64_t blockSoftLimit;
  gpfs_off64_t blockInDoubt;
  int inodeUsage;
  int inodeHardLimit;
  int inodeSoftLimit;
  int inodeInDoubt;
  gpfs_uid_t quoId;
  int entryType;
  unsigned int blockGraceTime;
  unsigned int inodeGraceTime;
} gpfs_quotaInfo_t;
Description
The gpfs_quotaInfo_t structure contains detailed information for the gpfs_quotactl() subroutine.
Members
blockUsage
The current block count in 1 KB units.
blockHardLimit
The absolute limit on disk block allocation.
blockSoftLimit
The preferred limit on disk block allocation.
blockInDoubt
The distributed shares and block usage that have not been accounted for.
inodeUsage
The current number of allocated inodes.
inodeHardLimit
The absolute limit on allocated inodes.
inodeSoftLimit
The preferred inode limit.
inodeInDoubt
The distributed inode share and inode usage that have not been accounted for.
quoId
The user ID, group ID, or fileset ID.
entryType
Not used.
blockGraceTime
The time limit (in seconds since the Epoch) for excessive disk use.
inodeGraceTime
The time limit (in seconds since the Epoch) for excessive inode use.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_seek_inode() subroutine
Advances an inode scan to the specified inode number.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_seek_inode(gpfs_iscan_t *iscan,
gpfs_ino_t ino);
Description
The gpfs_seek_inode() subroutine advances an inode scan to the specified inode number.
The gpfs_seek_inode() subroutine is used to start an inode scan at some place other than the
beginning of the inode file. This is useful to restart a partially completed backup or an interrupted dump
transfer to a mirror. It can also be used to run an inode scan in parallel from multiple nodes, by
partitioning the inode number space into a separate range for each participating node. The maximum
inode number is returned when the scan is opened, and each invocation to obtain the next inode
specifies a termination inode number so that the same inode is not returned more than once.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
iscan
Pointer to the inode scan handle.
ino
The next inode number to be scanned.
Exit status
If the gpfs_seek_inode() subroutine is successful, it returns a value of 0.
If the gpfs_seek_inode() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOSYS
The gpfs_seek_inode() subroutine is not available.
GPFS_E_INVAL_ISCAN
Incorrect parameters.
Examples
For an example using gpfs_seek_inode(), see /usr/lpp/mmfs/samples/util/tsinode.c.
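The following is a minimal sketch of one node's share of a partitioned parallel scan; the helper name scan_range and its range arguments are assumptions, and the scan is expected to have been opened beforehand with gpfs_open_inodescan().
#include <stdio.h>
#include <gpfs.h>
/* Sketch: process only the inode range [firstIno, lastIno) from an open
 * scan, as one node's share of a partitioned parallel scan. */
int scan_range(gpfs_iscan_t *iscan, gpfs_ino_t firstIno, gpfs_ino_t lastIno)
{
  const gpfs_iattr_t *iattr;
  int rc = gpfs_seek_inode(iscan, firstIno);    /* position the scan */
  if (rc != 0)
    return rc;
  /* lastIno is passed as the termination inode so that the inodes from
   * lastIno onward are left for another node to process. */
  while (gpfs_next_inode(iscan, lastIno, &iattr) == 0 && iattr != NULL)
    printf("inode %u has %u links\n",
           (unsigned int)iattr->ia_inode, (unsigned int)iattr->ia_nlink);
  return 0;
}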
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_seek_inode64() subroutine
Advances an inode scan to the specified inode number.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_seek_inode64(gpfs_iscan_t *iscan,
gpfs_ino64_t ino);
Description
The gpfs_seek_inode64() subroutine advances an inode scan to the specified inode number.
The gpfs_seek_inode64() subroutine is used to start an inode scan at some place other than the
beginning of the inode file. This is useful to restart a partially completed backup or an interrupted dump
transfer to a mirror. It can also be used to run an inode scan in parallel from multiple nodes, by
partitioning the inode number space into a separate range for each participating node. The maximum
inode number is returned when the scan is opened, and each invocation to obtain the next inode
specifies a termination inode number so that the same inode is not returned more than once.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
iscan
A pointer to the inode scan handle.
ino
The next inode number to be scanned.
Exit status
If the gpfs_seek_inode64() subroutine is successful, it returns a value of 0.
If the gpfs_seek_inode64() subroutine is unsuccessful, it returns a value of -1 and sets the global
error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
ENOSYS
The gpfs_seek_inode64() subroutine is not available.
GPFS_E_INVAL_ISCAN
Incorrect parameters.
Examples
See the gpfs_seek_inode() example in /usr/lpp/mmfs/samples/util/tsinode.c.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_stat() subroutine
Returns exact file status for a GPFS file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_stat(const char *pathname,
gpfs_stat64_t *buffer);
Description
The gpfs_stat() subroutine is used to obtain exact information about the file named by the pathname
parameter. This subroutine is provided as an alternative to the stat() subroutine, which may not provide
exact mtime and atime values. For more information, see Exceptions to Open Group technical standards
in IBM Spectrum Scale: Administration Guide.
Read, write, or execute permission for the named file is not required, but all directories listed in the
path leading to the file must be searchable. The file information is written to the area specified by the
buffer parameter.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
pathname
The path identifying the file for which exact status information is requested.
buffer
A pointer to the gpfs_stat64_t structure in which the information is returned. The
gpfs_stat64_t structure is described in the sys/stat.h file.
Exit status
If the gpfs_stat() subroutine is successful, it returns a value of 0.
If the gpfs_stat() subroutine is unsuccessful, it returns a value of -1 and sets the global error variable
errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EINVAL
The path name does not refer to a GPFS file or a regular file.
ENOENT
The file does not exist.
ENOSYS
The gpfs_stat() subroutine is not supported under the current file system format.
ESTALE
The cached file system information was not valid.
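Examples
A minimal sketch follows; the helper name show_exact_status and the printed fields are assumptions, and the gpfs_stat64_t fields follow the stat64 layout described in sys/stat.h.
#include <stdio.h>
#include <gpfs.h>
/* Sketch: obtain exact status information for a file. */
int show_exact_status(const char *path)
{
  gpfs_stat64_t sb;
  int rc = gpfs_stat(path, &sb);
  if (rc != 0)
  {
    perror("gpfs_stat");
    return rc;
  }
  /* Unlike stat(), the mtime and atime values returned here are exact. */
  printf("%s: size %lld bytes, mtime %lld\n",
         path, (long long)sb.st_size, (long long)sb.st_mtime);
  return 0;
}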
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_stat_inode() subroutine
Seeks the specified inode and retrieves that inode and its extended attributes from the inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_stat_inode(gpfs_iscan_t *iscan,
gpfs_ino_t ino,
gpfs_ino_t termIno,
const gpfs_iattr_t **iattr);
Description
The gpfs_stat_inode() subroutine is used to seek the specified inode and to retrieve that inode and
its extended attributes from the inode scan. This subroutine combines gpfs_seek_inode() and
gpfs_next_inode(), but will only return the specified inode.
The data returned by gpfs_next_inode() is overwritten by subsequent calls to gpfs_next_inode(),
gpfs_seek_inode(), or gpfs_stat_inode().
The termIno parameter provides a way to partition an inode scan so it can be run on more than one
node. It is only used by this call to control prefetching.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
iscan
A pointer to an inode scan descriptor.
ino
The inode number to be returned.
termIno
Prefetches inodes up to this inode. The caller might specify maxIno from gpfs_open_inodescan()
or 0 to allow prefetching over the entire inode file.
iattr
A pointer to the returned pointer to the file's iattr.
Exit status
If the gpfs_stat_inode() subroutine is successful, it returns a value of 0 and the iattr parameter is
set to point to gpfs_iattr_t. If the gpfs_stat_inode() subroutine is successful, but there are no
more inodes before the termIno parameter, or if the requested inode does not exist, it returns a value of
0 and the iattr parameter is set to NULL.
If the gpfs_stat_inode() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EPERM
The caller must have superuser privilege.
ENOSYS
The gpfs_stat_inode() subroutine is not supported under the current file system format.
ESTALE
The cached file system information was not valid.
ENOMEM
The buffer is too small.
GPFS_E_INVAL_ISCAN
Incorrect parameters.
GPFS_E_HOLE_IN_IFILE
The inode scan is reading only the inodes that have been copied to a snapshot and this inode has not
yet been copied.
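Examples
A minimal sketch of retrieving one known inode from an open scan follows; the helper name stat_one_inode is an assumption, and the scan is expected to have been opened beforehand with gpfs_open_inodescan().
#include <stdio.h>
#include <gpfs.h>
/* Sketch: fetch the attributes of one known inode from an open scan. */
int stat_one_inode(gpfs_iscan_t *iscan, gpfs_ino_t ino)
{
  const gpfs_iattr_t *iattr;
  int rc = gpfs_stat_inode(iscan, ino, 0, &iattr);   /* 0: prefetch freely */
  if (rc != 0)
    return rc;                    /* errno describes the failure */
  if (iattr == NULL)
    printf("inode %u does not exist\n", (unsigned int)ino);
  else
    printf("inode %u: size %lld, nlink %u\n", (unsigned int)ino,
           (long long)iattr->ia_size, (unsigned int)iattr->ia_nlink);
  return 0;
}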
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_stat_inode64() subroutine
Seeks the specified inode and retrieves that inode and its extended attributes from the inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_stat_inode64(gpfs_iscan_t *iscan,
gpfs_ino64_t ino,
gpfs_ino64_t termIno,
const gpfs_iattr64_t **iattr);
Description
The gpfs_stat_inode64() subroutine is used to seek the specified inode and to retrieve that inode
and its extended attributes from the inode scan. This subroutine combines gpfs_seek_inode64() and
gpfs_next_inode64(), but will only return the specified inode.
The data returned by gpfs_next_inode64() is overwritten by subsequent calls to
gpfs_next_inode64(), gpfs_seek_inode64(), or gpfs_stat_inode64().
The termIno parameter provides a way to partition an inode scan so it can be run on more than one
node. It is only used by this call to control prefetching.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
iscan
A pointer to an inode scan descriptor.
ino
The inode number to be returned.
termIno
Prefetches inodes up to this inode. The caller might specify maxIno from gpfs_open_inodescan64()
or 0 to allow prefetching over the entire inode file.
iattr
A pointer to the returned pointer to the file's iattr.
Exit status
If the gpfs_stat_inode64() subroutine is successful, it returns a value of 0 and the iattr parameter
is set to point to gpfs_iattr64_t.
If the gpfs_stat_inode64() subroutine is unsuccessful, it returns a value of -1 and sets the global
error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EPERM
The caller must have superuser privilege.
ENOSYS
The gpfs_stat_inode64() subroutine is not supported under the current file system format.
ESTALE
The cached file system information was not valid.
ENOMEM
The buffer is too small.
GPFS_E_INVAL_ISCAN
Incorrect parameters.
GPFS_E_HOLE_IN_IFILE
The inode scan is reading only the inodes that have been copied to a snapshot and this inode has not
yet been copied.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_stat_inode_with_xattrs() subroutine
Seeks the specified inode and retrieves that inode and its extended attributes from the inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_stat_inode_with_xattrs(gpfs_iscan_t *iscan,
gpfs_ino_t ino,
gpfs_ino_t termIno,
const gpfs_iattr_t **iattr,
const char **xattrBuf,
unsigned int *xattrBufLen);
Description
The gpfs_stat_inode_with_xattrs() subroutine is used to seek the specified inode and to retrieve
that inode and its extended attributes from the inode scan. This subroutine combines
gpfs_seek_inode() and gpfs_next_inode(), but will only return the specified inode.
The data returned by gpfs_next_inode() is overwritten by subsequent calls to gpfs_next_inode(),
gpfs_seek_inode(), or gpfs_stat_inode_with_xattrs().
The termIno parameter provides a way to partition an inode scan such that it can be run on more than
one node. It is only used by this call to control prefetching.
The returned values for xattrBuf and xattrBufLen must be provided to gpfs_next_xattr() to
obtain the extended attribute names and values. The buffer used for the extended attributes is
overwritten by subsequent calls to gpfs_next_inode(), gpfs_seek_inode(), or
gpfs_stat_inode_with_xattrs().
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
iscan
A pointer to an inode scan descriptor.
ino
The inode number to be returned.
termIno
Prefetches inodes up to this inode. The caller might specify maxIno from gpfs_open_inodescan()
or 0 to allow prefetching over the entire inode file.
iattr
A pointer to the returned pointer to the file's iattr.
xattrBuf
A pointer to the returned pointer to the xattr buffer.
xattrBufLen
The returned length of the xattr buffer.
Exit status
If the gpfs_stat_inode_with_xattrs() subroutine is successful, it returns a value of 0 and the
iattr parameter is set to point to gpfs_iattr_t. If the gpfs_stat_inode_with_xattrs()
subroutine is successful, but there are no more inodes before the termIno parameter, or if the requested
inode does not exist, it returns a value of 0 and the iattr parameter is set to NULL.
If the gpfs_stat_inode_with_xattrs() subroutine is unsuccessful, it returns a value of -1 and sets
the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EPERM
The caller must have superuser privilege.
ENOSYS
The gpfs_stat_inode_with_xattrs() subroutine is not supported under the current file system
format.
ESTALE
The cached file system information was not valid.
ENOMEM
The buffer is too small.
GPFS_E_INVAL_ISCAN
Incorrect parameters.
GPFS_E_HOLE_IN_IFILE
The inode scan is reading only the inodes that have been copied to a snapshot and this inode has not
yet been copied.
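Examples
A minimal sketch of retrieving one inode together with its extended attribute names follows; the helper name list_inode_xattrs is an assumption, and the scan is expected to have been opened beforehand with gpfs_open_inodescan_with_xattrs().
#include <stdio.h>
#include <gpfs.h>
/* Sketch: retrieve one inode and list its extended attribute names. */
int list_inode_xattrs(gpfs_iscan_t *iscan, gpfs_ino_t ino)
{
  const gpfs_iattr_t *iattr;
  const char *xattrBuf;
  unsigned int xattrBufLen;
  int rc = gpfs_stat_inode_with_xattrs(iscan, ino, 0, &iattr,
                                       &xattrBuf, &xattrBufLen);
  if (rc != 0 || iattr == NULL)
    return rc;
  /* gpfs_next_xattr() consumes the returned buffer and reports a NULL
   * name once every attribute has been decoded. */
  for (;;)
  {
    const char *name;
    const char *value;
    unsigned int valueLen;
    rc = gpfs_next_xattr(iscan, &xattrBuf, &xattrBufLen,
                         &name, &valueLen, &value);
    if (rc != 0 || name == NULL)
      break;
    printf("inode %u: attribute %s (%u bytes)\n",
           (unsigned int)ino, name, valueLen);
  }
  return rc;
}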
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_stat_inode_with_xattrs64() subroutine
Seeks the specified inode and retrieves that inode and its extended attributes from the inode scan.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Synopsis
#include <gpfs.h>
int gpfs_stat_inode_with_xattrs64(gpfs_iscan_t *iscan,
gpfs_ino64_t ino,
gpfs_ino64_t termIno,
const gpfs_iattr64_t **iattr,
const char **xattrBuf,
unsigned int *xattrBufLen);
Description
The gpfs_stat_inode_with_xattrs64() subroutine is used to seek the specified inode and to
retrieve that inode and its extended attributes from the inode scan. This subroutine combines
gpfs_seek_inode64() and gpfs_next_inode64(), but will only return the specified inode.
The data returned by gpfs_next_inode64() is overwritten by subsequent calls to
gpfs_next_inode64(), gpfs_seek_inode64(), or gpfs_stat_inode_with_xattrs64().
The termIno parameter provides a way to partition an inode scan so it can be run on more than one
node. It is only used by this call to control prefetching.
The returned values for xattrBuf and xattrBufLen must be provided to gpfs_next_xattr() to
obtain the extended attribute names and values. The buffer used for the extended attributes is
overwritten by subsequent calls to gpfs_next_inode64(), gpfs_seek_inode64(), or
gpfs_stat_inode_with_xattrs64().
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.a for AIX
• libgpfs.so for Linux
Parameters
iscan
A pointer to an inode scan descriptor.
ino
The inode number to be returned.
termIno
Prefetches inodes up to this inode. The caller might specify maxIno from
gpfs_open_inodescan64() or 0 to allow prefetching over the entire inode file.
iattr
A pointer to the returned pointer to the file's iattr.
xattrBuf
A pointer to the returned pointer to the xattr buffer.
xattrBufLen
The returned length of the xattr buffer.
Exit status
If the gpfs_stat_inode_with_xattrs64() subroutine is successful, it returns a value of 0 and the
iattr parameter is set to point to gpfs_iattr64_t. If the gpfs_stat_inode_with_xattrs64()
subroutine is successful, but there are no more inodes before the termIno parameter, or if the requested
inode does not exist, it returns a value of 0 and the iattr parameter is set to NULL.
If the gpfs_stat_inode_with_xattrs64() subroutine is unsuccessful, it returns a value of -1 and
sets the global error variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following:
EPERM
The caller must have superuser privilege.
ENOSYS
The gpfs_stat_inode_with_xattrs64() subroutine is not supported under the current file
system format.
ESTALE
The cached file system information was not valid.
ENOMEM
The buffer is too small.
GPFS_E_INVAL_ISCAN
Incorrect parameters.
GPFS_E_HOLE_IN_IFILE
The inode scan is reading only the inodes that have been copied to a snapshot and this inode has not
yet been copied.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfs_stat_x() subroutine
Returns extended status information for a GPFS file with specified accuracy.
Library
GPFS library. Runs on Linux, AIX, and Windows.
Synopsis
#include <gpfs.h>
int gpfs_stat_x(const char *pathname,
unsigned int *st_litemask,
gpfs_iattr64_t *iattr,
size_t iattrBufLen);
Description
The gpfs_stat_x() subroutine is similar to the gpfs_stat() subroutine but returns more information
in a gpfs_iattr64 structure that is defined in gpfs.h. This subroutine is supported only on the Linux
operating system.
Your program must verify that the version of the gpfs_iattr64 structure that is returned in the field
ia_version is the same as the version that you are using. Versions are defined in gpfs.h with the
pattern GPFS_IA64_VERSION*.
File permissions read, write, and execute are not required for the specified file, but all the directories
in the specified path must be searchable.
Note: Compile any program that uses this subroutine with the -lgpfs flag from the following library:
• libgpfs.so for Linux
Parameters
*pathname
A pointer to the path of the file for which information is requested.
*st_litemask
A pointer to a bitmask specification of the items that you want to be returned exactly. Bitmasks are
defined in gpfs.h with the pattern GPFS_SLITE_*BIT. The subroutine returns exact values for the
items that you specify in the bitmask. The subroutine also sets bits in the bitmask to indicate any
other items that are exact.
*iattr
A pointer to a gpfs_iattr64_t structure in which the information is returned. The structure is
described in the gpfs.h file.
iattrBufLen
The length of your gpfs_iattr64_t structure, as given by sizeof(myStructure). The subroutine
does not write data past this limit. The field ia_reclen in the returned structure is the length of the
gpfs_iattr64_t structure that the system is using.
Exit status
If the gpfs_stat_x() subroutine is successful, it returns a value of 0.
If the gpfs_stat_x() subroutine is unsuccessful, it returns a value of -1 and sets the global error
variable errno to indicate the nature of the error.
Exceptions
None.
Error status
Error codes include but are not limited to the following errors:
EINVAL
The path name does not refer to a GPFS file or a regular file.
ENOENT
The file does not exist.
ENOSYS
The gpfs_stat_x() subroutine is not supported under the current file system format.
ESTALE
The cached file system information was invalid.
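Examples
A minimal sketch follows; the helper name show_exact_mtime, the use of the GPFS_SLITE_MTIME_BIT mask (one of the GPFS_SLITE_*BIT bitmasks mentioned above), and the printed fields are assumptions made for illustration.
#include <stdio.h>
#include <gpfs.h>
/* Sketch: request an exact mtime and report which fields were exact. */
int show_exact_mtime(const char *path)
{
  gpfs_iattr64_t iattr;
  unsigned int litemask = GPFS_SLITE_MTIME_BIT;   /* want an exact mtime */
  int rc = gpfs_stat_x(path, &litemask, &iattr, sizeof(iattr));
  if (rc != 0)
  {
    perror("gpfs_stat_x");
    return rc;
  }
  /* ia_version identifies the structure version that was returned. */
  printf("%s: iattr version %d, mtime %lld, exact-field mask 0x%x\n",
         path, (int)iattr.ia_version, (long long)iattr.ia_mtime.tv_sec,
         litemask);
  return 0;
}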
Location
/usr/lpp/mmfs/lib
gpfsFcntlHeader_t structure
Contains declaration information for the gpfs_fcntl() subroutine.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct
{
int totalLength;
int fcntlVersion;
int errorOffset;
int fcntlReserved;
} gpfsFcntlHeader_t;
Description
The gpfsFcntlHeader_t structure contains size, version, and error information for the gpfs_fcntl()
subroutine.
Members
totalLength
This field must be set to the total length, in bytes, of the data structure being passed in this
subroutine. This includes the length of the header and all hints and directives that follow the header.
The total size of the data structure cannot exceed the value of GPFS_MAX_FCNTL_LENGTH, as defined
in the header file gpfs_fcntl.h. The current value of GPFS_MAX_FCNTL_LENGTH is 64 KB.
fcntlVersion
This field must be set to the current version number of the gpfs_fcntl() subroutine, as defined by
GPFS_FCNTL_CURRENT_VERSION in the header file gpfs_fcntl.h. The current version number is
one.
errorOffset
If an error occurs processing a system call, GPFS sets this field to the offset within the parameter area
where the error was detected.
For example,
1. An incorrect version number in the header would cause errorOffset to be set to zero.
2. An error in the first hint following the header would set errorOffset to sizeof(header).
If no errors are found, GPFS does not alter this field.
fcntlReserved
This field is currently unused.
For compatibility with future versions of GPFS, set this field to zero.
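A minimal sketch of how the header is combined with one payload structure and passed to the gpfs_fcntl() subroutine follows; it uses the gpfsGetStoragePool_t structure that is described later in this section, and the helper name show_storage_pool and the already opened file descriptor are assumptions made for illustration.
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <gpfs.h>
#include <gpfs_fcntl.h>
/* Sketch: one header followed by one payload structure, passed to
 * gpfs_fcntl() to retrieve the storage pool name of an open file. */
int show_storage_pool(gpfs_file_t fileDesc)
{
  struct
  {
    gpfsFcntlHeader_t    hdr;
    gpfsGetStoragePool_t pool;
  } arg;
  memset(&arg, 0, sizeof(arg));
  arg.hdr.totalLength  = sizeof(arg);                 /* header + payload */
  arg.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;
  arg.pool.structLen   = sizeof(arg.pool);
  arg.pool.structType  = GPFS_FCNTL_GET_STORAGEPOOL;
  if (gpfs_fcntl(fileDesc, &arg) != 0)
  {
    fprintf(stderr, "gpfs_fcntl failed, errno %d, errorOffset %d\n",
            errno, arg.hdr.errorOffset);
    return -1;
  }
  printf("storage pool: %s\n", arg.pool.buffer);
  return 0;
}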
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsGetDataBlkDiskIdx_t structure
Obtains the FPO data block location of a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
int structLen;
int structType;
FilemapIn filemapIn;
FilemapOut filemapOut;
} gpfsGetDataBlkDiskIdx_t;
Description
The gpfsGetDataBlkDiskIdx_t structure is used to obtain the on-disk or in-memory location
information for a file's data blocks.
Members
structLen
Length of the gpfsGetDataBlkDiskIdx_t structure.
structType
The structure identifiers are GPFS_FCNTL_GET_DATABLKDISKIDX and
GPFS_FCNTL_GET_DATABLKLOC. GPFS_FCNTL_GET_DATABLKDISKIDX can be used to retrieve the
disk ID on which the data blocks are located. GPFS_FCNTL_GET_DATABLKLOC can be used to
retrieve the disk ID on which the data blocks are located and the node ID on which data block buffers
(in memory) are located.
filemapIn
Input parameters:
struct FilemapIn {
long long startOffset;
long long skipfactor;
long long length;
int mreplicas;
int reserved;
} FilemapIn;
startOffset
Start offset in bytes.
skipfactor
Number of bytes to skip before the next offset read. This value can be 0, in which case the meta
block size is used as the skipfactor.
length
Number of bytes to read; startOffset + length / skipfactor = numBlksReturned
mreplicas
Number of replicas that the user wants.
0 - all; 1 - primary; 2 - primary and 1st replica; 3 - primary, 1st and 2nd replica
reserved
Reserved field that should not be used.
filemapOut
Output data:
struct FilemapOut {
int numReplicasReturned;
int numBlksReturned;
int blockSize;
int blockSizeHigh;
char buffer [GPFS_MAX_FCNTL_LENGTH-1024];
} FilemapOut;
numReplicasReturned
Number of replicas returned.
numBlksReturned
Number of data blocks returned.
blockSize
Low 32 bits of the meta block size. Only valid if skipfactor in the input is 0. Otherwise, blockSize will
be 0.
blockSizeHigh
High 32 bits of the meta block size. Only valid if structType is GPFS_FCNTL_GET_DATABLKLOC and
skipfactor in the input is 0. Otherwise, blockSizeHigh will be 0.
buffer [GPFS_MAX_FCNTL_LENGTH-1024]
Buffer in which the disk ID and node ID are stored. If the buffer cannot store all of the requested
information, you will have to calculate a new startOffset and length in the input data according
to the output data and then repeat the call.
If structType is GPFS_FCNTL_GET_DATABLKDISKIDX, the format is:
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsGetFilesetName_t structure
Obtains the fileset name of a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
int structLen;
int structType;
char buffer[GPFS_FCNTL_MAX_NAME_BUFFER];
} gpfsGetFilesetName_t;
Description
The gpfsGetFilesetName_t structure is used to obtain a file's fileset name.
Members
structLen
Length of the gpfsGetFilesetName_t structure.
structType
Structure identifier GPFS_FCNTL_GET_FILESETNAME.
buffer
The size of the buffer may vary, but must be a multiple of eight. Upon successful completion of the
call, the buffer contains a null-terminated character string for the name of the requested object.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsGetReplication_t structure
Obtains the replication factors of a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
int structLen;
int structType;
int metadataReplicas;
int maxMetadataReplicas;
int dataReplicas;
int maxDataReplicas;
int status;
int reserved;
} gpfsGetReplication_t;
Description
The gpfsGetReplication_t structure is used to obtain a file's replication factors.
Members
structLen
Length of the gpfsGetReplication_t structure.
structType
Structure identifier GPFS_FCNTL_GET_REPLICATION.
metadataReplicas
Returns the current number of copies of indirect blocks for the file.
maxMetadataReplicas
Returns the maximum number of copies of indirect blocks for a file.
dataReplicas
Returns the current number of copies of the data blocks for a file.
maxDataReplicas
Returns the maximum number of copies of data blocks for a file.
status
Returns the status of the file.
reserved
Unused, but should be set to 0.
Error status
These values are returned in the status field:
GPFS_FCNTL_STATUS_EXPOSED
This file may have some data where the only replicas are on suspended disks; implies some data may
be lost if suspended disks are removed.
GPFS_FCNTL_STATUS_ILLREPLICATE
This file may not be properly replicated; that is, some data may have fewer or more than the desired
number of replicas, or some replicas may be on suspended disks.
GPFS_FCNTL_STATUS_UNBALANCED
This file may not be properly balanced.
GPFS_FCNTL_STATUS_DATAUPDATEMISS
This file has stale data blocks on at least one of the disks that are marked as unavailable or
recovering.
GPFS_FCNTL_STATUS_METAUPDATEMISS
This file has stale indirect blocks on at least one unavailable or recovering disk.
GPFS_FCNTL_STATUS_ILLPLACED
This file may not be properly placed; that is, some data may be stored in an incorrect storage pool.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsGetSetXAttr_t structure
Obtains or sets extended attribute values.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
int structLen;
int structType;
int nameLen;
int bufferLen;
unsigned int flags;
int errReasonCode;
char buffer[0];
} gpfsGetSetXAttr_t;
Description
The gpfsGetSetXAttr_t structure is used to obtain extended attributes.
Members
structLen
Length of the gpfsGetSetXAttr_t structure.
structType
Structure identifier GPFS_FCNTL_GET_XATTR or GPFS_FCNTL_SET_XATTR.
nameLen
Length of the attribute name. May include a trailing '\0' character.
bufferLen
For GPFS_FCNTL_GET_XATTR: Input, length of the buffer; output, length of the attribute value.
For GPFS_FCNTL_SET_XATTR: Input, length of the attribute value. Specify -1 to delete an attribute.
errReasonCode
Reason code.
flags
The following flags are recognized:
• GPFS_FCNTL_XATTRFLAG_NONE
• GPFS_FCNTL_XATTRFLAG_SYNC
• GPFS_FCNTL_XATTRFLAG_CREATE
• GPFS_FCNTL_XATTRFLAG_REPLACE
• GPFS_FCNTL_XATTRFLAG_DELETE
• GPFS_FCNTL_XATTRFLAG_NO_CTIME
• GPFS_FCNTL_XATTRFLAG_RESERVED
buffer
Buffer for the attribute name and value.
For GPFS_FCNTL_GET_XATTR:
Input: The name begins at offset 0 and must be null terminated.
Output: The name is returned unchanged; the value begins at nameLen rounded up to a multiple of
8.
For GPFS_FCNTL_SET_XATTR:
Input: The name begins at offset 0 and must be null terminated. The value begins at nameLen
rounded up to a multiple of 8.
The actual length of the buffer should be nameLen rounded up to a multiple of 8, plus the length of
the attribute value rounded up to a multiple of 8.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsGetSnapshotName_t structure
Obtains the snapshot name of a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
int structLen;
int structType;
char buffer[GPFS_FCNTL_MAX_NAME_BUFFER];
} gpfsGetSnapshotName_t;
Description
The gpfsGetSnapshotName_t structure is used to obtain a file's snapshot name. If the file is not part of
a snapshot, a zero length snapshot name will be returned.
Members
structLen
Length of the gpfsGetSnapshotName_t structure.
structType
Structure identifier GPFS_FCNTL_GET_SNAPSHOTNAME.
buffer
The size of the buffer may vary, but must be a multiple of eight. Upon successful completion of the
call, the buffer contains a null-terminated character string for the name of the requested object.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsGetStoragePool_t structure
Obtains the storage pool name of a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
int structLen;
int structType;
char buffer[GPFS_FCNTL_MAX_NAME_BUFFER];
} gpfsGetStoragePool_t;
Description
The gpfsGetStoragePool_t structure is used to obtain a file's storage pool name.
Members
structLen
Length of the gpfsGetStoragePool_t structure.
structType
Structure identifier GPFS_FCNTL_GET_STORAGEPOOL.
buffer
The size of the buffer may vary, but must be a multiple of eight. Upon successful completion of the
call, the buffer contains a null-terminated character string for the name of the requested object.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsListXAttr_t structure
Lists extended attributes.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
int structLen;
int structType;
int bufferLen;
int errReasonCode;
char buffer[0];
} gpfsListXAttr_t;
Description
The gpfsListXAttr_t structure is used to list extended attributes.
Members
structLen
Length of the gpfsListXAttr_t structure.
structType
Structure identifier GPFS_FCNTL_LIST_XATTR.
bufferLen
Input: Length of the buffer. Output: Length of the returned list of names.
The actual length of the buffer required depends on the number of attributes set and the length of
each attribute name. If the buffer provided is too small for all of the returned names, the
errReasonCode will be set to GPFS_FCNTL_ERR_BUFFER_TOO_SMALL, and bufferLen will be set
to the minimum size buffer required to list all attributes. An initial buffer length of 0 may be used to
query the attributes and determine the correct buffer size for this file.
errReasonCode
Reason code.
buffer
Buffer for the returned list of names. Each attribute name is prefixed with a one-byte name length.
The attribute name may contain embedded null bytes. The attribute name may be null-terminated in
which case the name length includes this null byte. The next attribute name follows immediately in
the buffer (and is prefixed with its own length). Following the last name, a '\0' is appended to
terminate the list. The returned bufferLen includes the final '\0'.
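The following sketch lists the extended attribute names of an open file. It is illustrative only: it uses a fixed 4 KiB name buffer instead of first querying the required size with bufferLen set to 0, and it assumes that structLen and the header's totalLength include the variable-length buffer, following the usual gpfs_fcntl directive pattern.
#include <stdio.h>
#include <stdlib.h>
#include <gpfs_fcntl.h>

#define XATTR_LIST_BUF 4096            /* illustrative fixed buffer size */

int listXAttrNames(int fd)
{
    struct listArg {
        gpfsFcntlHeader_t hdr;
        gpfsListXAttr_t   list;
    };
    struct listArg *arg = calloc(1, sizeof(*arg) + XATTR_LIST_BUF);
    if (arg == NULL)
        return -1;

    arg->hdr.totalLength  = sizeof(*arg) + XATTR_LIST_BUF;
    arg->hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;
    arg->list.structLen   = sizeof(arg->list) + XATTR_LIST_BUF;
    arg->list.structType  = GPFS_FCNTL_LIST_XATTR;
    arg->list.bufferLen   = XATTR_LIST_BUF;

    if (gpfs_fcntl(fd, arg) != 0) {
        /* On failure, arg->list.errReasonCode identifies the reason,
           for example GPFS_FCNTL_ERR_BUFFER_TOO_SMALL. */
        free(arg);
        return -1;
    }

    /* Each name is prefixed with a one-byte length; a 0 byte ends the list. */
    const char *p = arg->list.buffer;
    while (*p != '\0') {
        int len = (unsigned char)*p++;
        printf("attribute: %.*s\n", len, p);
        p += len;
    }
    free(arg);
    return 0;
}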
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsRestripeData_t structure
Restripes the data blocks of a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
int structLen;
int structType;
int options;
int errReason;
int errValue1;
int errValue2;
int reserved1;
int reserved2;
} gpfsRestripeData_t;
Description
The gpfsRestripeData_t structure is used to restripe a file's data blocks to update its replication and migrate its data. The data movement is always done immediately.
Members
structLen
Length of the gpfsRestripeData_t structure.
structType
Structure identifier GPFS_FCNTL_RESTRIPE_DATA.
options
Options for restripe command. See the mmrestripefile command for complete definitions.
GPFS_FCNTL_RESTRIPE_M
Migrate critical data off suspended disks.
GPFS_FCNTL_RESTRIPE_R
Replicate data against subsequent failure.
GPFS_FCNTL_RESTRIPE_P
Place file data in assigned storage pool.
GPFS_FCNTL_RESTRIPE_B
Rebalance file data.
errReason
Reason code describing the failure. Possible codes are defined in “Error status” on page 990.
errValue1
Returned value depending upon errReason.
errValue2
Returned value depending upon errReason.
reserved1
Unused, but should be set to 0.
reserved2
Unused, but should be set to 0.
Error status
These values are returned in the errReason field:
GPFS_FCNTL_ERR_NO_REPLICA_GROUP
Not enough replicas could be created because the desired degree of replication is larger than the
number of failure groups.
GPFS_FCNTL_ERR_NO_REPLICA_SPACE
Not enough replicas could be created because there was not enough space left in one of the failure
groups.
GPFS_FCNTL_ERR_NO_BALANCE_SPACE
There was not enough space left on one of the disks to properly balance the file according to the
current stripe method.
GPFS_FCNTL_ERR_NO_BALANCE_AVAILABLE
The file could not be properly balanced because one or more disks are unavailable.
GPFS_FCNTL_ERR_ADDR_BROKEN
All replicas were on disks that have since been deleted from the stripe group.
GPFS_FCNTL_ERR_NO_IMMUTABLE_DIR
No immutable attribute can be set on directories.
GPFS_FCNTL_ERR_NO_IMMUTABLE_SYSFILE
No immutable attribute can be set on system files.
GPFS_FCNTL_ERR_IMMUTABLE_FLAG
Immutable and indefinite retention flag is wrong.
GPFS_FCNTL_ERR_IMMUTABLE_PERM
Immutable and indefinite retention flag is wrong.
GPFS_FCNTL_ERR_APPENDONLY_CONFLICT
The appendOnly flag should be set separately.
GPFS_FCNTL_ERR_NOIMMUTABLE_ONSNAP
Cannot set immutable or appendOnly on snapshots.
GPFS_FCNTL_ERR_FILE_HAS_XATTRS
An attempt to change maxDataReplicas or maxMetadataReplicas was made on a file that has
extended attributes.
GPFS_FCNTL_ERR_NOT_GPFS_FILE
This file is not part of a GPFS file system.
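As an illustration, the following sketch rebalances the data blocks of a single file. It assumes the usual gpfs_fcntl directive pattern in which the structure follows a gpfsFcntlHeader_t; error handling is kept minimal.
#include <gpfs_fcntl.h>

/* Rebalance the data blocks of the file that fd refers to. */
int rebalanceFile(int fd)
{
    struct {
        gpfsFcntlHeader_t  hdr;
        gpfsRestripeData_t restripe;
    } arg;

    arg.hdr.totalLength   = sizeof(arg);
    arg.hdr.fcntlVersion  = GPFS_FCNTL_CURRENT_VERSION;
    arg.hdr.errorOffset   = 0;
    arg.hdr.fcntlReserved = 0;

    arg.restripe.structLen  = sizeof(arg.restripe);
    arg.restripe.structType = GPFS_FCNTL_RESTRIPE_DATA;
    arg.restripe.options    = GPFS_FCNTL_RESTRIPE_B;   /* rebalance file data */
    arg.restripe.reserved1  = 0;
    arg.restripe.reserved2  = 0;

    if (gpfs_fcntl(fd, &arg) != 0)
        return -1;          /* arg.restripe.errReason describes the failure */
    return 0;
}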
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsRestripeRange_t structure
Restripes a specific range of data blocks of a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
int structLen;
int structType;
int options;
int errReason;
int errValue1;
int errValue2;
gpfsByteRange_t range;
int reserved1;
int reserved2;
} gpfsRestripeRange_t;
Description
The gpfsRestripeRange_t structure is used to restripe a specific range of data blocks of a file to
update its replication and migrate its data. The data movement is always done immediately.
Members
structLen
Length of the gpfsRestripeRange_t structure.
structType
Structure identifier GPFS_FCNTL_RESTRIPE_RANGE.
options
Options for the restripe command. See the mmrestripefile command for complete definitions.
GPFS_FCNTL_RESTRIPE_M
Migrates critical data off suspended disks.
GPFS_FCNTL_RESTRIPE_R
Replicates data against subsequent failure.
GPFS_FCNTL_RESTRIPE_P
Places file data in assigned storage pool.
GPFS_FCNTL_RESTRIPE_B
Rebalances file data.
GPFS_FCNTL_RESTRIPE_L
Relocates file data.
GPFS_FCNTL_RESTRIPE_C
Compresses file data.
GPFS_FCNTL_RESTRIPE_RANGE_R
Indicates to restripe only a range. If it is not set, the whole file is restriped.
errReason
Provides a reason code that describes the failure. Possible codes are defined in the Error status section of this topic.
errValue1
Returned value depending upon errReason.
errValue2
Returned value depending upon errReason.
range
Used with GPFS_FCNTL_RESTRIPE_RANGE_R to specify a range. Otherwise, it should be set to 0.
reserved1
Unused, but it should be set to 0.
reserved2
Unused, but it should be set to 0.
Error status
These values are returned in the errReason field:
GPFS_FCNTL_ERR_NO_REPLICA_GROUP
Not enough replicas could be created because the desired degree of replication is larger than the
number of failure groups.
GPFS_FCNTL_ERR_NO_REPLICA_SPACE
Not enough replicas could be created because there was not enough space left in one of the failure
groups.
GPFS_FCNTL_ERR_NO_BALANCE_SPACE
There was not enough space left on one of the disks to properly balance the file according to the
current stripe method.
GPFS_FCNTL_ERR_NO_BALANCE_AVAILABLE
The file could not be properly balanced because one or more disks are unavailable.
GPFS_FCNTL_ERR_ADDR_BROKEN
All replicas were on disks that were deleted from the stripe group.
GPFS_FCNTL_ERR_NO_IMMUTABLE_DIR
No immutable attribute can be set on directories.
GPFS_FCNTL_ERR_NO_IMMUTABLE_SYSFILE
No immutable attribute can be set on system files.
GPFS_FCNTL_ERR_IMMUTABLE_FLAG
Immutable and indefinite retention flag is wrong.
GPFS_FCNTL_ERR_IMMUTABLE_PERM
Immutable and indefinite retention flag is wrong.
GPFS_FCNTL_ERR_APPENDONLY_CONFLICT
The appendOnly flag should be set separately.
GPFS_FCNTL_ERR_NOIMMUTABLE_ONSNAP
Cannot set immutable or appendOnly on snapshots.
GPFS_FCNTL_ERR_FILE_HAS_XATTRS
An attempt to change maxDataReplicas or maxMetadataReplicas was made on a file that has
extended attributes.
GPFS_FCNTL_ERR_NOT_GPFS_FILE
This file is not part of a GPFS file system.
GPFS_FCNTL_ERR_COMPRESS_SNAPSHOT
Compression is not supported for snapshot files. Used by GPFS_FCNTL_RESTRIPE_C only.
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsRestripeRangeV2_t structure
Restripes a specific range of data blocks of a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
int structLen;
int structType;
int options;
int errReason;
int errValue1;
int errValue2;
gpfsByteRange_t range;
int reserved1;
int reserved2;
char *outBuf;
int bufLen;
int reserved3;
} gpfsRestripeRangeV2_t;
Description
The gpfsRestripeRangeV2_t structure is used to restripe a specific range of data blocks of a file to
update its replication and migrate its data. The data movement is always done immediately. The
difference between gpfsRestripeRange_t and gpfsRestripeRangeV2_t is that
gpfsRestripeRangeV2_t has an outBuf, which can be used to pass information from the daemon
back to the caller. Currently, only GPFS_FCNTL_RESTRIPE_CM uses the buffer.
Members
structLen
Length of the gpfsRestripeRangeV2_t structure.
structType
Structure identifier GPFS_FCNTL_RESTRIPE_RANGE.
options
Options for the restripe command. See the mmrestripefile command for complete definitions.
GPFS_FCNTL_RESTRIPE_M
Migrates critical data off suspended disks.
GPFS_FCNTL_RESTRIPE_R
Replicates data against subsequent failure.
GPFS_FCNTL_RESTRIPE_P
Places file data in assigned storage pool.
GPFS_FCNTL_RESTRIPE_B
Rebalances file data.
GPFS_FCNTL_RESTRIPE_L
Relocates file data.
GPFS_FCNTL_RESTRIPE_CM
Compares the content of replicas of file data.
GPFS_FCNTL_RESTRIPE_C
Compresses file data.
GPFS_FCNTL_RESTRIPE_RANGE_R
Indicates to restripe only a range. If it is not set, the whole file is restriped.
errReason
Provides a reason code that describes the failure. Possible codes are defined in the Error status section of this topic.
errValue1
Returned value depending upon errReason.
errValue2
Returned value depending upon errReason.
range
Used with GPFS_FCNTL_RESTRIPE_RANGE_R to specify a range. Otherwise, it should be set to 0.
reserved1
Unused, but it should be set to 0.
reserved2
Unused, but it should be set to 0.
outBuf
Buffer to store returned information from the daemon. The caller is responsible for allocating and deallocating the buffer. Only GPFS_FCNTL_RESTRIPE_CM uses it, to return information about bad replicas.
bufLen
Length of outBuf.
reserved3
Unused, but it should be set to 0.
Error status
These values are returned in the errReason field:
GPFS_FCNTL_ERR_NO_REPLICA_GROUP
Not enough replicas could be created because the desired degree of replication is larger than the
number of failure groups.
GPFS_FCNTL_ERR_NO_REPLICA_SPACE
Not enough replicas could be created because there was not enough space left in one of the failure
groups.
GPFS_FCNTL_ERR_NO_BALANCE_SPACE
There was not enough space left on one of the disks to properly balance the file according to the
current stripe method.
GPFS_FCNTL_ERR_NO_BALANCE_AVAILABLE
The file could not be properly balanced because one or more disks are unavailable.
GPFS_FCNTL_ERR_ADDR_BROKEN
All replicas were on disks that were deleted from the stripe group.
GPFS_FCNTL_ERR_NO_IMMUTABLE_DIR
No immutable attribute can be set on directories.
GPFS_FCNTL_ERR_NO_IMMUTABLE_SYSFILE
No immutable attribute can be set on system files.
GPFS_FCNTL_ERR_IMMUTABLE_FLAG
Immutable and indefinite retention flag is wrong.
GPFS_FCNTL_ERR_IMMUTABLE_PERM
Immutable and indefinite retention flag is wrong.
GPFS_FCNTL_ERR_APPENDONLY_CONFLICT
The appendOnly flag should be set separately.
GPFS_FCNTL_ERR_NOIMMUTABLE_ONSNAP
Cannot set immutable or appendOnly on snapshots.
GPFS_FCNTL_ERR_FILE_HAS_XATTRS
An attempt to change maxDataReplicas or maxMetadataReplicas was made on a file that has
extended attributes.
GPFS_FCNTL_ERR_NOT_GPFS_FILE
This file is not part of a GPFS file system.
GPFS_FCNTL_ERR_COMPRESS_SNAPSHOT
Compression is not supported for snapshot files. Used by GPFS_FCNTL_RESTRIPE_C only.
GPFS_FCNTL_ERR_COMPARE_DATA_IN_INODE
Replica comparison does not support data-in-inode file. Used by GPFS_FCNTL_RESTRIPE_CM only.
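The following sketch compares the replicas of a file's data blocks and lets the daemon return its report in a caller-owned buffer. It is illustrative only: it assumes the usual gpfs_fcntl directive pattern, and the size and interpretation of the report buffer are left to the caller.
#include <string.h>
#include <gpfs_fcntl.h>

/* Compare the replicas of the whole file; bad-replica information,
   if any, is returned in the caller-supplied report buffer. */
int compareReplicas(int fd, char *report, int reportLen)
{
    struct {
        gpfsFcntlHeader_t     hdr;
        gpfsRestripeRangeV2_t restripe;
    } arg;

    memset(&arg, 0, sizeof(arg));       /* zeroes range and reserved fields */
    arg.hdr.totalLength  = sizeof(arg);
    arg.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;

    arg.restripe.structLen  = sizeof(arg.restripe);
    arg.restripe.structType = GPFS_FCNTL_RESTRIPE_RANGE;
    arg.restripe.options    = GPFS_FCNTL_RESTRIPE_CM;   /* compare replicas */
    arg.restripe.outBuf     = report;
    arg.restripe.bufLen     = reportLen;

    return gpfs_fcntl(fd, &arg);   /* on failure, see arg.restripe.errReason */
}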
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsSetReplication_t structure
Sets the replication factors of a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
int structLen;
int structType;
int metadataReplicas;
int maxMetadataReplicas;
int dataReplicas;
int maxDataReplicas;
int errReason;
int errValue1;
int errValue2;
int reserved;
} gpfsSetReplication_t;
Description
The gpfsSetReplication_t structure is used to set a file's replication factors. However, the directive
does not cause the file to be restriped immediately. Instead, the caller must append a
gpfsRestripeData_t directive or invoke an explicit restripe using the mmrestripefs or
mmrestripefile command.
Members
structLen
Length of the gpfsSetReplication_t structure.
structType
Structure identifier GPFS_FCNTL_SET_REPLICATION.
metadataReplicas
Specifies how many copies of the file system's metadata to create. Enter a value of 1 or 2, but not
greater than the value of the maxMetadataReplicas attribute of the file. A value of 0 indicates not
to change the current value.
maxMetadataReplicas
The maximum number of copies of indirect blocks for a file. Space is reserved in the inode for all
possible copies of pointers to indirect blocks. Valid values are 1 and 2, but cannot be less than
DefaultMetadataReplicas. The default is 1. A value of 0 indicates not to change the current
value.
dataReplicas
Specifies how many copies of the file data to create. Enter a value of 1 or 2, but not greater than the
value of the maxDataReplicas attribute of the file. A value of 0 indicates not to change the current
value.
maxDataReplicas
The maximum number of copies of data blocks for a file. Space is reserved in the inode and indirect
blocks for all possible copies of pointers to data blocks. Valid values are 1 and 2, but cannot be less
than DefaultDataReplicas. The default is 1. A value of 0 indicates not to change the current value.
errReason
Reason code describing the failure. Possible codes are defined in “Error status” on page 997.
errValue1
Returned value depending upon errReason.
errValue2
Returned value depending upon errReason.
reserved
Unused, but should be set to 0.
Error status
These values are returned in the errReason field:
GPFS_FCNTL_ERR_NONE
Command was successful or no reason information was returned.
GPFS_FCNTL_ERR_METADATA_REPLICAS_RANGE
Field metadataReplicas is out of range. Fields errValue1 and errValue2 contain the valid lower
and upper range boundaries.
GPFS_FCNTL_ERR_MAXMETADATA_REPLICAS_RANGE
Field maxMetadataReplicas is out of range. Fields errValue1 and errValue2 contain the valid
lower and upper range boundaries.
GPFS_FCNTL_ERR_DATA_REPLICAS_RANGE
Field dataReplicas is out of range. Fields errValue1 and errValue2 contain the valid lower and
upper range boundaries.
GPFS_FCNTL_ERR_MAXDATA_REPLICAS_RANGE
Field maxDataReplicas is out of range. Fields errValue1 and errValue2 contain the valid lower
and upper range boundaries.
GPFS_FCNTL_ERR_FILE_NOT_EMPTY
An attempt to change maxMetadataReplicas or maxDataReplicas or both was made on a file
that is not empty.
GPFS_FCNTL_ERR_REPLICAS_EXCEED_FGMAX
Field metadataReplicas, or dataReplicas, or both exceed the number of failure groups. Field
errValue1 contains the maximum number of metadata failure groups. Field errValue2 contains
the maximum number of data failure groups.
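Because the directive only records the new factors, a caller that wants the change applied immediately can chain a gpfsRestripeData_t directive in the same gpfs_fcntl call, as in the following sketch (same assumptions as the earlier examples in this chapter; the replication values shown are only one possible choice).
#include <gpfs_fcntl.h>

/* Set the data replication factor of the file to 2 and restripe it in
   the same call; a value of 0 leaves the other factors unchanged. */
int setDataReplicationTwo(int fd)
{
    struct {
        gpfsFcntlHeader_t    hdr;
        gpfsSetReplication_t repl;
        gpfsRestripeData_t   restripe;
    } arg;

    arg.hdr.totalLength   = sizeof(arg);
    arg.hdr.fcntlVersion  = GPFS_FCNTL_CURRENT_VERSION;
    arg.hdr.errorOffset   = 0;
    arg.hdr.fcntlReserved = 0;

    arg.repl.structLen           = sizeof(arg.repl);
    arg.repl.structType          = GPFS_FCNTL_SET_REPLICATION;
    arg.repl.metadataReplicas    = 0;
    arg.repl.maxMetadataReplicas = 0;
    arg.repl.dataReplicas        = 2;
    arg.repl.maxDataReplicas     = 0;
    arg.repl.reserved            = 0;

    arg.restripe.structLen  = sizeof(arg.restripe);
    arg.restripe.structType = GPFS_FCNTL_RESTRIPE_DATA;
    arg.restripe.options    = GPFS_FCNTL_RESTRIPE_R;   /* replicate data */
    arg.restripe.reserved1  = 0;
    arg.restripe.reserved2  = 0;

    return gpfs_fcntl(fd, &arg);   /* errReason fields report any failure */
}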
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
gpfsSetStoragePool_t structure
Sets the assigned storage pool of a file.
Library
GPFS Library (libgpfs.a for AIX, libgpfs.so for Linux)
Structure
typedef struct {
int structLen;
int structType;
int errReason;
int errValue1;
int errValue2;
int reserved;
char buffer[GPFS_FCNTL_MAX_NAME_BUFFER];
} gpfsSetStoragePool_t;
Description
The gpfsSetStoragePool_t structure is used to set a file's assigned storage pool. However, the
directive does not cause the file data to be migrated immediately. Instead, the caller must append a
gpfsRestripeData_t directive or invoke an explicit restripe with the mmrestripefs or
mmrestripefile command. The caller must have su or root privileges to change a storage pool
assignment.
Members
structLen
Length of the gpfsSetStoragePool_t structure.
structType
Structure identifier GPFS_FCNTL_SET_STORAGEPOOL.
errReason
Reason code describing the failure. Possible codes are defined in “Error status” on page 998.
errValue1
Returned value depending upon errReason.
errValue2
Returned value depending upon errReason.
reserved
Unused, but should be set to 0.
buffer
The name of the storage pool for the file's data. Only user files may be reassigned to a different storage pool. System files, including all directories, must reside in the system pool and may not be moved. The
size of the buffer may vary, but must be a multiple of eight.
Error status
These values are returned in the errReason field:
GPFS_FCNTL_ERR_NONE
Command was successful or no reason information was returned.
GPFS_FCNTL_ERR_NOPERM
User does not have permission to perform the requested operation.
GPFS_FCNTL_ERR_INVALID_STORAGE_POOL
Invalid storage pool name was given.
GPFS_FCNTL_ERR_INVALID_STORAGE_POOL_TYPE
Invalid storage pool. File cannot be assigned to given pool.
GPFS_FCNTL_ERR_INVALID_STORAGE_POOL_ISDIR
Invalid storage pool. Directories cannot be assigned to given pool.
GPFS_FCNTL_ERR_INVALID_STORAGE_POOL_ISLNK
Invalid storage pool. System files cannot be assigned to given pool.
GPFS_FCNTL_ERR_INVALID_STORAGE_POOL_ISSYS
Invalid storage pool. System files cannot be assigned to given pool.
GPFS_FCNTL_ERR_STORAGE_POOL_NOTENABLED
File system has not been upgraded to support storage pools.
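For illustration, the following sketch assigns a file to a storage pool and chains a gpfsRestripeData_t directive so that the data is migrated in the same gpfs_fcntl call. It assumes the usual gpfsFcntlHeader_t wrapper; a call such as moveFileToPool(fd, "data") uses "data" only as a placeholder pool name, and the caller needs su or root privileges.
#include <string.h>
#include <gpfs_fcntl.h>

int moveFileToPool(int fd, const char *poolName)
{
    struct {
        gpfsFcntlHeader_t    hdr;
        gpfsSetStoragePool_t pool;
        gpfsRestripeData_t   restripe;
    } arg;

    memset(&arg, 0, sizeof(arg));       /* zeroes reserved and error fields */
    arg.hdr.totalLength  = sizeof(arg);
    arg.hdr.fcntlVersion = GPFS_FCNTL_CURRENT_VERSION;

    arg.pool.structLen  = sizeof(arg.pool);
    arg.pool.structType = GPFS_FCNTL_SET_STORAGEPOOL;
    strncpy(arg.pool.buffer, poolName, sizeof(arg.pool.buffer) - 1);

    arg.restripe.structLen  = sizeof(arg.restripe);
    arg.restripe.structType = GPFS_FCNTL_RESTRIPE_DATA;
    arg.restripe.options    = GPFS_FCNTL_RESTRIPE_P;   /* place in assigned pool */

    return gpfs_fcntl(fd, &arg);   /* errReason fields report any failure */
}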
Location
/usr/lpp/mmfs/lib/libgpfs.a for AIX
/usr/lpp/mmfs/lib/libgpfs.so for Linux
mmsdrbackup user exit
Description
The /var/mmfs/etc/mmsdrbackup user exit, when properly installed on the primary GPFS
configuration server, is called asynchronously every time there is a change to the GPFS master
configuration file. You can use this user exit to create a backup of the GPFS configuration data.
Read the sample file /usr/lpp/mmfs/samples/mmsdrbackup.sample for a detailed description of
how to code and install this user exit.
The type of backup that is created depends on the configuration of the cluster:
• If the Cluster Configuration Repository (CCR) is enabled, then a CCR backup is created. This type of
backup applies to IBM Spectrum Scale V4.2.0 or later.
• Otherwise, a mmsdrfs backup is created.
For more information about the CCR, see “mmcrcluster command” on page 303.
Note: The mmsdrbackup user exit is supported in IBM Spectrum Scale 4.2.0 or later. It must not be used on clusters that have nodes running earlier versions of IBM Spectrum Scale.
Parameters
The generation number of the most recent version of the GPFS configuration data.
Exit status
The mmsdrbackup user exit returns a value of zero.
Location
/var/mmfs/etc
nsddevices user exit
Description
The /var/mmfs/etc/nsddevices user exit, when properly installed, is invoked synchronously by the
GPFS daemon during its disk discovery processing. The purpose of this procedure is to discover and verify
the physical devices on each node that correspond to the disks previously defined to GPFS with the
mmcrnsd command. The nsddevices user exit can be used either to replace or to supplement the disk discovery procedure of the GPFS daemon.
Read the sample file /usr/lpp/mmfs/samples/nsddevices.sample for a detailed description of
how to code and install this user exit.
Parameters
None.
Exit status
The nsddevices user exit should return either zero or one.
When the nsddevices user exit returns a value of zero, the GPFS disk discovery procedure is bypassed.
When the nsddevices user exit returns a value of one, the GPFS disk discovery procedure is performed
and the results are concatenated with the results from the nsddevices user exit.
Location
/var/mmfs/etc
syncfsconfig user exit
Description
The /var/mmfs/etc/syncfsconfig user exit, when properly installed, is invoked synchronously
after each command that may change the configuration of a file system. Examples of such commands are:
mmadddisk, mmdeldisk, mmchfs, and so forth. The syncfsconfig user exit can be used to keep the
file system configuration data in replicated GPFS clusters automatically synchronized.
Read the sample file /usr/lpp/mmfs/samples/syncfsconfig.sample for a detailed description of
how to code and install this user exit.
Parameters
None.
Exit status
The syncfsconfig user exit should always return a value of zero.
Location
/var/mmfs/etc
preunmount user exit
Description
You can use the /var/mmfs/etc/preunmount user exit to execute a script before an IBM Spectrum
Scale file system is unmounted.
Parameters
None.
Exit status
The preunmount user exit returns a value of zero.
Location
/var/mmfs/etc
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT bucket/keys request sets or changes keys for a bucket. For more information about the fields
in the data structures that are returned, see “mmafmcoskeys command” on page 58.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/bucket/keys
where
bucket/keys
Specifies the target of the request.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"bucket": "Bucket name",
"accessKey": "Access key",
"secretKey": "Secret key"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"bucket": "Bucket name"
Name of the bucket.
"accessKey": "Access key"
Access key of the specified bucket.
"secretKey": "Secret key"
Secret key of the specified bucket.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
Examples
The following example shows how to set or change a bucket key:
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000025,
"status" : "COMPLETED",
"submitted" : "2020-09-28 14:50:41,911",
"completed" : "2020-09-28 14:50:42,872",
"runtime" : 961,
"request" : {
"data" : {
"accessKey" : "****",
"bucket" : "mybucket",
"secretKey" : "****"
},
"type" : "PUT",
"url" : "/scalemgmt/v2/bucket/keys"
},
"result" : {
"progress" : [ ],
"commands" : [ "mmafmcoskeys 'mybucket' set **** **** " ],
"stdout" : [ ],
"stderr" : [ ],
"exitCode" : 0
},
"pids" : [ ]
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully."
}
}
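Any HTTP client can issue this request. The following libcurl sketch is an illustration only: the host name, port, credentials, and key values are placeholders, and certificate verification is disabled for brevity, which is appropriate only on test systems.
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    if (curl == NULL)
        return 1;

    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers, "Content-Type: application/json");
    headers = curl_slist_append(headers, "Accept: application/json");

    /* Placeholder bucket name and keys. */
    const char *body =
        "{ \"bucket\": \"mybucket\","
        "  \"accessKey\": \"myAccessKey\","
        "  \"secretKey\": \"mySecretKey\" }";

    curl_easy_setopt(curl, CURLOPT_URL,
        "https://gui-node.example.com:443/scalemgmt/v2/bucket/keys");
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_USERPWD, "admin:passw0rd");   /* placeholder */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);          /* test use only */

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : 1;
}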
Related reference
“mmafmconfig command” on page 45
Can be used to manage home caching behavior and mapping of gateways and home NFS exported
servers.
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE bucket/keys/{bucketName} request deletes a local bucket definition and its keys. For more
information about the fields in the data structures that are returned, see “mmafmcoskeys command” on
page 58.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/bucket/keys/{bucketName}
where
bucket/keys/{bucketName}
Specifies the bucket to be deleted. Required.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": " "
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
"completed":Time"
The time at which the job was completed.
"status":"RUNNING | COMPLETED | FAILED"
Status of the job.
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000002,
"status" : "COMPLETED",
"submitted" : "2020-09-29 14:34:07,383",
"completed" : "2020-09-29 14:34:08,296",
"runtime" : 913,
"request" : {
"data" : "myBucket",
"type" : "DELETE",
"url" : "/scalemgmt/v2/bucket/keys/myBucket"
},
"result" : {
"progress" : [ ],
"commands" : [ "mmafmcoskeys 'myBucket' delete " ],
"stdout" : [ ],
"stderr" : [ ],
"exitCode" : 0
},
"pids" : [ ]
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully."
}
}
Related reference
“mmafmcoskeys command” on page 58
Manages an access key and a secret key to access a bucket on a cloud object storage.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET ces/addresses request gets information about CES (Cluster Export Services) addresses. For
more information about the fields in the data structures that are returned, see “mmces command” on
page 132.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/ces/addresses
where
ces/addresses
Specifies CES address as the resource. Required.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
The following list of attributes is available in the response data:
{
"status":
{
"code": ReturnCode
"message": "ReturnMessage",
}
"paging":
Examples
The following example gets information about the CES addresses available in the system:
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
The cesaddresses array returns five objects in the following example. Each object contains details about one CES address.
Using the field parameter ":all:" returns entire details of the CES addresses available in the system:
Response data:
{
"cesaddresses" : [ {
"attributes" : "",
"cesAddress" : "198.51.100.35",
"cesGroup" : "",
"nodeName" : "mari-13.localnet.com",
"nodeNumber" : 3,
"oid" : 1
}, {
"attributes" : "",
"cesAddress" : "198.51.100.37",
"cesGroup" : "",
"nodeName" : "mari-12.localnet.com",
"nodeNumber" : 2,
"oid" : 2
}, {
"attributes" : "",
"cesAddress" : "198.51.100.31",
"cesGroup" : "",
"nodeName" : "mari-12.localnet.com",
"nodeNumber" : 2,
"oid" : 3
}, {
"attributes" : "",
"cesAddress" : "198.51.100.33",
"cesGroup" : "",
"nodeName" : "mari-14.localnet.com",
"nodeNumber" : 4,
"oid" : 4
}, {
"attributes" : "",
"cesAddress" : "198.51.100.27",
"cesGroup" : "",
"nodeName" : "mari-14.localnet.com",
"nodeNumber" : 4,
"oid" : 5
}
],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Related reference
“mmces command” on page 132
Manage CES (Cluster Export Services) configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET ces/addresses/{cesAddress} request gets information about a specific CES (Cluster
Export Services) address. For more information about the fields in the data structures that are returned,
see “mmces command” on page 132.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/ces/addresses/CesAddress
where
ces/addresses
Specifies CES address as the resource. Required.
CesAddress
Specifies the CES address about which you want to get information. Required.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
The following list of attributes is available in the response data:
{
"status":
{
"code": ReturnCode
"message": "ReturnMessage",
}
"paging":
{
"next": "URL"
},
"cesaddresses": [
{
"oid": "Integer"
Examples
The following example gets information about the CES address 198.51.100.8.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"cesaddresses": [
{
"attributes" : "",
"cesAddress" : "198.51.100.8",
"cesGroup" : "",
"nodeName" : "mari-13.localnet.com",
Related reference
“mmces command” on page 132
Manage CES (Cluster Export Services) configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET ces/services request gets information about the CES (Cluster Export Services) services in the
cluster. For more information about the fields in the returned data structure, see “mmces command” on
page 132.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/ces/services
where
ces/services
Specifies CES services as the resource. Required.
Request headers
Accept: application/json
Parameters
No parameters.
Request data
No request data.
Response data
{
"status": {
"code": ReturnCode,
"message": "ReturnMessage"
}
"cesservices": {
"protocolStates": [
{
"service":"Service"
"enabled":"{yes | no}",
}
]
"protocolNodes": [
{
"nodeName":"Node"
"serviceStates": [
{
"service":"Service"
"running":"{yes | no}",
}
]
}
],
},
Examples
The following example gets information about the CES services.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
The protocolNodes array returns the details of the protocols configured in each protocol node. The
protocolStates array provides the status of the protocols.
{
"cesservices" : {
"protocolNodes" : [ {
"nodeName" : "mari-12.localnet.com",
"serviceStates" : [ {
"running" : "yes",
"service" : "OBJ"
}, {
"running" : "yes",
"service" : "SMB"
}, {
"running" : "yes",
"service" : "NFS"
Related reference
“mmces command” on page 132
Manage CES (Cluster Export Services) configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET ces/services/service request gets information about a CES (Cluster Export Services)
service in the cluster. For more information about the fields in the returned data structure, see “mmces
command” on page 132.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/ces/services/service
where
ces/services
Specifies ces/services as the resource.
service
Specifies the service about which you want to get information. Required.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code": ReturnCode,
"message": "ReturnMessage"
}
"cesservices": {
"protocolStates": [
{
"service":"Service"
"enabled":"{yes | no}",
}
]
"protocolNodes": [
status:
Return status.
"code": ReturnCode,
The return code.
"message": "ReturnMessage"
The return message.
cesservices:
Information about CES services. The information consists of two arrays, protocolNodes and protocolStates. The protocolStates array contains one element for each CES service, and the protocolNodes array contains one element for each protocol node. For more information about the fields in these structures, see the link at the end of this topic.
protocolStates:
An array of information about the services. Each element describes one service:
"service":"Service"
Identifies the service such as CES, NETWORK, NFS, or SMB.
"enabled":"{yes | no}"
Indicates whether the service is enabled.
protocolNodes
An array of information about protocol nodes. Each array element describes one protocol node.
"nodeName":"Node"
The name of the node.
serviceStates:
An array of information about the services for which the node is a protocol node.
"service":"Service"
Name of the service.
"running":"{yes | no}"
Indicates whether the service is running.
Examples
The following example gets information about the SMB service.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "The request finished successfully"
},
"cesservices": {
"protocolStates": [
{
"service": "SMB",
"enabled": "yes"
}
],
"protocolNodes": [
{
"nodeName": "testnode-1.localnet.com",
"serviceStates": [
{
"service": "SMB",
"running": "yes"
}
]
}
]
}
}
Related reference
“mmces command” on page 132
Manage CES (Cluster Export Services) configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET cliauditlog request gets a record of various actions that are performed in the system. This
helps the system administrator to audit the commands and tasks the users and administrators are
performing. These logs can also be used to troubleshoot issues that are reported in the system. For more
information about the fields in the data structures that are returned, see “mmaudit command” on page
92.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/cliauditlog
where
cliauditlog
Specifies that the GET request fetches the audit details of the various actions that are performed in
the cluster.
Request headers
Content-Type: application/json
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
The following list of attributes is available in the response data:
{
"status":
}
}
"status":
Return status.
"code": ReturnCode,
The HTTP status code that was returned by the request.
"message": "ReturnMessage"
The return message.
"paging":
An array of information about the paging information that is used for displaying the details.
"next": "Next page URL"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects would be
returned by the query.
"fields": "Fields in the request"
The fields that are used in the original request.
"filter": "Filters used in the request"
The filter that is used in the original request.
"baseUrl": "URL"
The URL of the request without any parameters.
"lastId": "ID"
The ID of the last element that can be used to retrieve the next elements.
"auditLogRecords":
An array of information that provides the action that is performed.
"oid": "ID"
ID used for paging.
"arguments": "Arguments"
Arguments of a GPFS command.
"command": "Command name"
Name of the GPFS command.
"node": "Node"
Name of the node where a GPFS command was running.
"returnCode": "Return code"
Return Code of the GPFS command.
Examples
The following example gets the audit log records of the commands that were run in the cluster.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
The auditLogRecords array provides information about the actions that are performed on the cluster.
{
"status": {
"code": 200,
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets?lastId=10001",
"fields": "period,restrict,sensorName",
"filter": "usedInodes>100,maxInodes>1024",
"baseUrl": "/scalemgmt/v2/perfmon/sensor/config",
"lastId": 10001
},
"auditLogRecords": [
{
"oid": 0,
"arguments": "-A yes -D nfs4 -k nfs4",
"command": "mmchfs",
"node": "testnode-11",
"returnCode": 0,
"originator": "GUI",
"user": "root",
"pid": 7891,
"entryTime": "2017-07-30 14:00:00",
"exitTime": "2017-07-31 18:00:00"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET cluster request gets information about the local cluster. For more information about the fields
in the data structures that are returned, see the following topics: “mmlscluster command” on page 484,
“mmchfs command” on page 230, and “mmlsfs command” on page 498.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/cluster
where
cluster
Specifies the IBM Spectrum Scale cluster as the resource of the GET call.
Request headers
Accept: application/json
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
}
"cluster": {
"clusterSummary": {
"clusterName": "ClusterName",
"clusterId": ID,
"uidDomain": "Domain"
"rshPath": "RemoteShellCommand",
"rshSudoWrapper": "Path",
"rcpPath": "RemoteFileCopyCommand",
"rcpSudoWrapper": "Path",
"repositoryType": "CCR | non-CCR",
"primaryServer": "Server",
"secondaryServer": "Server",
}
"cnfsSummary": {
"cnfsSharedRoot": "Directory"
"cnfsMoundtPort": "Port",
"cnfsNFSDprocs": "Number",
"cnfsReboot": "{true | false}",
"cnfsMonitorEnabled": "enabled | disabled",
"cnfsGanesha": "ServerAddress",
}
"cesSummary": {
"cesSharedRoot": "Directory",
"enabledServices": "ServiceList",
"logLevel": "Level",
"addressPolicy": "{none | balanced-load | node-affinity | even-coverage}",
}
"capacityLicensing": {
For more information about the fields in the following data structures, see the links at the end of this
topic.
Note: The structures cesNode, cesSummary, cnfsNode, cnfsSummary, and gatewayNode appear in
the output only when the corresponding role is active on the node.
"cluster":
A data structure that describes the local cluster. It contains the following data structures:
cesSummary, clusterSummary, cnfsSummary, links, and nodes.
"clusterSummary":
Cluster information.
"clusterId": "ID"
The ID of the cluster.
"clusterName": "ClusterName"
The name of the cluster.
"primaryServer": "Server"
The primary server node for GPFS cluster data.
"rcpPath": "RemoteFileCopyCommand"
The remote file copy command.
"rcpSudoWrapper": ""
The fully qualified path of the remote file copy program for sudo wrappers.
"repositoryType": "CCR | non-CCR"
The type of repository that the cluster uses for storing configuration data.
"rshPath": "RemoteShellCommand"
The remote shell command.
"rshSudoWrapper": "Path"
The fully qualified path of the remote shell program for sudo wrappers.
"secondaryServer": "Server"
The secondary server node for GPFS cluster data.
Examples
The following example gets information about the local cluster.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"cluster": {
"clusterSummary": {
"clusterName": "gpfs-cluster-2.localnet.com",
"clusterId": "13445038716632536363",
"uidDomain": "localnet.com",
"rshPath": "/usr/bin/ssh",
"rshSudoWrapper": "no",
"rcpPath": "/usr/bin/scp",
"rcpSudoWrapper": "no",
"repositoryType": "CCR",
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET config request gets the configuration attributes of the cluster. For more information about the fields
in the data structures that are returned, see the following topics: “mmchconfig command” on page 169
and “mmlsconfig command” on page 487.
This API call gets its data from the mmlsconfig command. The mmlsconfig command displays the PERSISTED configuration values, not the ACTIVE values. That is, whether a configuration change made through the mmchconfig command is visible here depends on how you use the -I and -i parameters of that command. The following three scenarios are applicable:
• mmchconfig -I: The value is changed immediately in the daemon but is not persisted, so it is reset after a restart. Such settings do not show up in mmlsconfig at all. In this case, this API call shows the old setting even though it is temporarily changed to something else.
• mmchconfig -i: The setting takes effect immediately and is also persisted, so mmlsconfig shows it.
• mmchconfig: The setting takes effect on the next restart and is also persisted, so mmlsconfig shows it. In this case, the API call shows a future setting that is applied only after the restart.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/config
where
config
Specifies that the GET request fetches the configuration details of the cluster.
Request headers
Content-Type: application/json
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "config": [
    {
      "clusterConfig": "Configuration details"
    }
  ]
}
"status":
Return status.
"code": ReturnCode,
The HTTP status code that was returned by the request.
"message": "ReturnMessage"
The return message.
"config":
An array of information about the cluster configuration.
"clusterConfig": "Configuration details"
The cluster configuration as shown by the mmlsconfig -Y command.
The return information and the information that the command retrieves are returned in the same way as
they are for the other requests. The parameters that are returned are the same as the configuration
attributes that are displayed by the mmlsconfig command. For more information, see the “mmlsconfig
command” on page 487.
Examples
The following example gets information about the cluster configuration.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
The config array provides information about the cluster configuration.
{
"config" : {
"clusterConfig" : {
"FIPS1402mode" : "no",
"adminMode" : "central",
"afmAsyncDelay" : "15",
"afmAsyncOpWaitTimeout" : "120",
"afmAtimeXattr" : "no",
"afmDIO" : "0",
"afmDirLookupRefreshInterval" : "60",
"afmDirOpenRefreshInterval" : "60",
"afmDisconnectTimeout" : "60",
"afmEnableADR" : "no",
"afmExpirationTimeout" : "disable",
"afmFileLookupRefreshInterval" : "30",
"afmFileOpenRefreshInterval" : "30",
"afmFlushThreadDelay" : "5",
"afmGatewayQueueTransfer" : "yes",
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems request gets information about file systems in the cluster. For more information
about the fields in the data structures that are returned, see the topics “mmcrfs command” on page 315,
“mmchfs command” on page 230, and “mmlsfs command” on page 498.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems
where
filesystems
Specifies file systems as the resource of the GET call.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"paging":
{
"next": "URL"
},
For more information about the fields in the following data structures, see the links at the end of this
topic.
Examples
The following example gets information about file systems in the cluster.
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"filesystems" : [ {
"name" : "gpfs0"
}, {
"name" : "objfs"
}, {
"name" : "gpfs1"
} ],
"status" : {
Using the field parameter ":all:" returns entire details of the file systems that are configured in the system.
The filesystems array returns file system objects in the following example. Each object contains details
about one file system:
{
"status": {
"code": 200,
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets?lastId=10001",
"fields": "period,restrict,sensorName",
"filter": "usedInodes>100,maxInodes>1024",
"baseUrl": "/scalemgmt/v2/perfmon/sensor/config",
"lastId": 10001
},
"filesystems": [
{
"oid": 2,
"uuid": "0A00641F:58982753",
"name": "gpfs0",
"version": "17.00",
"type": "local",
"createTime": "Mon Feb 06 08:35:47 2017",
"block": {
"pools": "system;data",
"disks": "disk1;disk2",
"blockSize": 262144,
"metaDataBlockSize": 262144,
"indirectBlockSize": 16384,
"minFragmentSize": 8192,
"inodeSize": 4096,
"logfileSize": 4194304,
"writeCacheThreshold": 0
},
"mount": {
"mountPoint": "/mnt/gpfs0",
"automaticMountOption": "yes",
"additionalMountOptions": "none",
"mountPriority": 0,
"driveLetter": "F",
"remoteDeviceName": "gpfs0",
"readOnly": false
},
"replication": {
"defaultMetadataReplicas": 1,
"maxMetadataReplicas": 2,
"defaultDataReplicas": 1,
"maxDataReplicas": 2,
"strictReplication": "whenpossible",
"logReplicas": 0
},
"quota": {
"quotasAccountingEnabled": "user;group;fileset",
"quotasEnforced": "user;group;fileset",
"defaultQuotasEnabled": "none",
"perfilesetQuotas": true,
"filesetdfEnabled": false
},
"settings": {
"blockAllocationType": "cluster",
"fileLockingSemantics": "nfs4",
"aclSemantics": "nfs4",
"numNodes": 32,
"dmapiEnabled": true,
"exactMTime": true,
"suppressATime": "no",
"fastEAEnabled": true,
"encryption": false,
"maxNumberOfInodes": 3211520,
"is4KAligned": true,
"rapidRepairEnabled": true,
"stripeMethod": "round-robin",
"stripedLogs": true,
"fileAuditLogEnabled": true
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/filesystemName request gets information about a particular file system. For
more information about the fields in the data structures that are returned, see the topics “mmcrfs
command” on page 315, “mmchfs command” on page 230, and “mmlsfs command” on page 498.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/fileSystemName
where
filesystems
Specifies file system as the resource of the GET call. Required.
FileSystemName
The file system about which you want to get information. Required.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": "ReturnCode"
The return code.
"filesystems":
An array of elements that describe file systems. Each element describes one file system.
"oid":"Internal ID",
Internal identifier that is used for paging.
"uuid":"ID"
The UUID of the file system.
"name":"Name"
Name of the file system.
"version":"Version"
File system version.
"type":"local | remote"
File system type.
"createTime":"DateTime"
Creation time.
"block"
"pools":"Pools"
List of pools of this file system.
"disks":"List of disks"
A semicolon-separated list of the disks that are included in the file system.
"blockSize":"Block size"
The block size of the disks in the storage pool.
"metaDataBlockSize":"Metadata block size"
Block size of metadata pool.
"indirectBlockSize":"Indirect block size"
Indirect block size in bytes.
"minFragmentSize":"Minimum fragment size"
Minimum fragment size in bytes.
"inodeSize":"Inode size"
Inode size in bytes.
"logfileSize":"Log file size"
The size of the internal log files in bytes.
"writeCacheThreshold":"Threshold"
Specifies the maximum length (in bytes) of write requests that are initially buffered in the
highly‐available write cache before being written back to primary storage.
Examples
The following example gets information about file system gpfs0.
Request data:
Using the field parameter ":all:" returns entire details of the file system.
{
"status": {
"code": 200,
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets?lastId=10001",
"fields": "period,restrict,sensorName",
"filter": "usedInodes>100,maxInodes>1024",
"baseUrl": "/scalemgmt/v2/perfmon/sensor/config",
"lastId": 10001
},
"filesystems": [
{
"oid": 2,
"uuid": "0A00641F:58982753",
"name": "gpfs0",
"version": "17.00",
"type": "local",
"createTime": "Mon Feb 06 08:35:47 2017",
"block": {
"pools": "system;data",
"disks": "disk1;disk2",
"blockSize": 262144,
"metaDataBlockSize": 262144,
"indirectBlockSize": 16384,
"minFragmentSize": 8192,
"inodeSize": 4096,
"logfileSize": 4194304,
"writeCacheThreshold": 0
},
"mount": {
"mountPoint": "/mnt/gpfs0",
"automaticMountOption": "yes",
"additionalMountOptions": "none",
"mountPriority": 0,
"driveLetter": "F",
"remoteDeviceName": "gpfs0",
"readOnly": false
},
"replication": {
"defaultMetadataReplicas": 1,
"maxMetadataReplicas": 2,
"defaultDataReplicas": 1,
"maxDataReplicas": 2,
"strictReplication": "whenpossible",
"logReplicas": 0
},
"quota": {
"quotasAccountingEnabled": "user;group;fileset",
"quotasEnforced": "user;group;fileset",
"defaultQuotasEnabled": "none",
"perfilesetQuotas": true,
"filesetdfEnabled": false
},
"settings": {
"blockAllocationType": "cluster",
"fileLockingSemantics": "nfs4",
"aclSemantics": "nfs4",
"numNodes": 32,
"dmapiEnabled": true,
"exactMTime": true,
"suppressATime": "no",
"fastEAEnabled": true,
"encryption": false,
"maxNumberOfInodes": 3211520,
"is4KAligned": true,
"rapidRepairEnabled": true,
"stripeMethod": "round-robin",
"stripedLogs": true,
"fileAuditLogEnabled": true
},
"fileAuditLogConfig": {
"auditFilesetDeviceName": "logFilesystem",
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/filesystemName/acl/path request gets information about ACLs set for files
or directories within a particular file system. For more information about the fields in the data structures
that are returned, see the topics “mmgetacl command” on page 422 and “mmputacl command” on page
608.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/FileSystemName/
acl/path
where
filesystems/filesystemName
The file system to which the file or directory belongs. Required.
acl/path
The path of the file or directory about which you want to get the ACL information. Required.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"acl":
An array of elements that describe ACL.
"type":"NFSv4"
Type of the ACL.
"entries":"Access control entries"
"type":"allow | deny | alarm | audit"
Type of the entry.
"who":" special:owner@ | special:group@ | special:everyone@ | user:{name} |
group:{name}"
The name of the user or group of users for which the ACL is applicable.
"permission":"(r) read | (w) write | (m) mkdir, | (x) execute | (d) delete | (D)
delete child | (a) read attr | (A) write attr (n) read named | (N) write Named
| (c) read acl | (C) write acl | (o) change owner| (s) synchronize "
The access permissions.
"flags":"(f) file inherit | (d) dir inherit | (i) inherit only | (I) inherited
| (S) successful access | (F) failed access"
Special flags and inheritance definition.
Examples
The following example gets ACL information for the file system gpfs0 and path /rest_fset.
Request data:
Response data:
{
"status": {
"code": "200",
"message": "..."
},
"acl": {
"type": "NFSv4",
"entries": [
{
"type": "allow",
"who": "user:testuser",
"permissions": "rxancs",
"flags": "fd"
}
]
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/filesystemName/acl/path request sets ACL for files or directories within a
particular file system. For more information about the fields in the data structures that are returned, see
the topics “mmgetacl command” on page 422 and “mmputacl command” on page 608.
Note: Only users with the dataaccess role can set the ACL for a file or directory.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/FileSystemName/
acl/path
where
filesystems/filesystemName
The file system in which the file or directory is located. Required.
acl/path
The path of the file or directory for which you want to set the ACL. Required.
Request headers
Content-Type: application/json
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
  "type": "NFSv4",
  "entries": [
    {
      "type": "{allow | deny | alarm | audit}",
      "who": "User or group",
      "permissions": "Access permissions",
      "flags": "Flags"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"type":"NFSv4"
Type of the ACL.
"entries":"Access control entries"
"type":"allow | deny | alarm | audit"
Type of the entry.
"who":" special:owner@ | special:group@ | special:everyone@ | user:{name} | group:
{name}"
The name of the user or group of users for which the ACL is applicable.
"permission":"(r) read | (w) write | (m) mkdir, | (x) execute | (d) delete | (D)
delete child | (a) read attr | (A) write attr (n) read named | (N) write Named |
(c) read acl | (C) write acl | (o) change owner| (s) synchronize "
The access permissions.
"flags":"(f) file inherit | (d) dir inherit | (i) inherit only | (I) inherited |
(S) successful access | (F) failed access"
Special flags and inheritance definition.
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": " "
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
"completed":"Time"
The time at which the job was completed.
"status":"RUNNING | COMPLETED | FAILED"
Status of the job.
Examples
The following example sets ACL information for the file system gpfs0 and path mnt/gpfs0.
Request data:
{
"type": "NFSv4",
"entries": [
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000002,
"status" : "RUNNING",
"submitted" : "2017-03-14 15:50:00,493",
"completed" : "N/A",
"request" : {
"data" : {
"entries" : [
{
"type" : "allow",
"who" : "special:owner@",
"permissions" : "rwmxDaAnNcCos",
"flags" : ""
},
{
"type" : "allow",
"who" : "special:group@",
"permissions" : "rxancs",
"flags" : ""
},
{
"type" : "allow",
"who" : "special:everyone@",
"permissions" : "rxancs",
"flags" : ""
},
{
"type" : "allow",
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/{filesystemName}/afm/state request gets the AFM state of the filesets in a file system. For
more information about the fields in the data structures that are returned, see the topics “mmafmconfig
command” on page 45, “mmafmctl command” on page 61, and “mmafmlocal command” on page 78.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/afm/state
where
filesystems/filesystemName
Specifies the file system to which the AFM fileset belongs. Required.
afm/state
Specifies AFM as the resource of this GET call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request data
No request data.
Request parameters
The following parameters can be used in the request URL to customize the request:
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"filesetAfmstateList"
"filesetAfmState":"AFM state of the fileset"
Status of AFM filesets that belong to the file system.
"filesystemName": "File system name"
Name of the file system to which the AFM fileset belongs.
"filesetName": "Fileset name"
Name of the AFM fileset.
"filesetId": "Fileset ID"
Unique identifier of the AFM fileset.
"filesetTarget": "Fileset target"
The target fileset in the AFM and AFM DR configuration.
"cacheState": "Cache state"
The status of the cache site in the AFM configuration.
"gatewayNodeName": "Gateway node name"
The name of the gateway node that manages I/O in the AFM configuration.
"queueLength": "Queue length"
The length of the AFM queue.
"drState": "DR state"
The DR status of the AFM fileset.
"oid": "OID"
Internal identifier of the AFM fileset state.
Examples
The following example gets the AFM state of the filesets in the file system gpfs0.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
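A request of the following form could be used to retrieve the AFM state shown in the response below; the host address, port, and credentials are placeholders, and gpfs0 is the file system from this example.
curl -k -u admin:admin001 -X GET --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/afm/state'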
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"filesetAfmStateList" : [ {
"cacheState" : "Active",
"filesetName" : "primary_rpo",
"filesystemName" : "gpfs0"
}, {
"cacheState" : "Active",
"filesetName" : "primary",
"filesystemName" : "gpfs0"
}, {
"cacheState" : "Active",
"filesetName" : "ind_writer_mapped",
"filesystemName" : "gpfs0"
}, {
"cacheState" : "Active",
"filesetName" : "ind_writer_dnsmap",
"filesystemName" : "gpfs0"
}, {
"cacheState" : "Active",
"filesetName" : "ind_writer_cache",
"filesystemName" : "gpfs0"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT /filesystems/{filesystemName}/audit request enables or disables file audit logging for
a specific file system. For more information about the fields in the data structures that are returned, see
“mmaudit command” on page 92.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
audit
where
filesystems/filesystemName/audit
Specifies that file audit logging is going to be enabled or disabled for the specific file system.
Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"action": "enable | disable"
"fileAuditLogConfig":
{
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
Examples
The following example enables file audit logging for the file system fs1.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
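As an illustration, a request of the following form could submit this job; the host, port, and credentials are placeholders, and the body shows only the action field (the optional fileAuditLogConfig object is omitted).
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json' -d '{ "action": "enable" }' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/fs1/audit'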
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000004,
"status" : "COMPLETED",
"submitted" : "2020-09-29 15:02:11,560",
"completed" : "2020-09-29 15:02:13,131",
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/directory/{path} request creates a directory inside a
file system.
Request URL
https://management API host:port/scalemgmt/v2/filesystems/filesystemName/directory/path
where:
filesystems/filesystemName
Specifies the name of the file system to which the directory belongs. Required.
directory/path
Specifies the path of the directory. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"user": "User name",
"uid": "User ID",
"group": "Group name",
"gid": "Group ID"
}
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success and nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
Examples
The following example shows how to create a directory inside the file system gpfs0.
Request data:
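The request body and URL are echoed in the response below; a curl invocation of this form could be used to submit them (the host, port, and credentials are placeholders).
curl -k -u admin:admin001 -X POST --header 'content-type:application/json' --header 'accept:application/json' -d '{ "user": "testuser55", "uid": 1234, "group": "mygroup", "gid": 4711 }' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/directory/mydir'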
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000005,
"status" : "RUNNING",
"submitted" : "2017-03-14 15:59:34,180",
"completed" : "N/A",
"request" : {
"data" : {
"user": "testuser55",
"uid": 1234,
"group": "mygroup",
"gid": 4711
},
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/directory/mydir"
},
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE filesystems/{filesystemName}/directory/{path} request removes a directory
from a file system.
Request URL
https://management API host:port/scalemgmt/v2/filesystems/filesystemName/directory/path
where:
filesystems/filesystemName
Specifies the name of the file system to which the directory belongs. Required.
directory/path
The path of the directory to be removed.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
None.
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
"completed":Time"
The time at which the job was completed.
"status":"RUNNING | COMPLETED | FAILED"
Status of the job.
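For example, a request of the following form removes the directory myDir1 from the file system gpfs0 and returns the response data shown below; the host, port, and credentials are placeholders.
curl -k -u admin:admin001 -X DELETE --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/directory/myDir1'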
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000006,
"status" : "RUNNING",
"submitted" : "2017-03-14 16:05:30,960",
"completed" : "N/A",
"request" : {
"type" : "DELETE",
"url" : "/scalemgmt/v2/filesystems/gpfs0/directory/myDir1"
},
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/{filesystemName}/directoryCopy/{sourcePath} request copies a
directory on a particular file system. For more information about the fields in the data structures that are
returned, see the topics “mmcrfs command” on page 315, “mmchfs command” on page 230, and
“mmlsfs command” on page 498.
Request URL
https://<IP or host name of API server>:port/scalemgmt/v2/filesystems/{filesystemName}/
directoryCopy/{sourcePath}
where:
filesystems/filesystemName
Specifies the name of the file system to which the directory belongs. Required.
directoryCopy
Action to be performed on the directory. Required.
sourcePath
Path of the directory to be copied. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"targetFilesystem": "File system name",
"targetFileset": "Fileset name",
"targetPath": "Directory path",
"nodeclassName": "Name of the node class",
"force": "True | False",
}
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success and nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
Examples
The following example shows how to copy a directory that belongs to the file system gpfs0.
Request data:
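A sketch of a request that matches the tscp command shown in the result below; the field values are inferred from that command and may differ from the original example, and the host, port, and credentials are placeholders.
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json' -d '{ "targetFilesystem": "gpfs0", "targetFileset": "fset1", "targetPath": "mydir", "nodeclassName": "cesNodes", "force": "true" }' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/directoryCopy/mydir'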
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000009,
"status" : "COMPLETED",
"submitted" : "2020-09-29 15:39:55,452",
"completed" : "2020-09-29 15:39:58,675",
"runtime" : 3223,
"request" : {
"type" : "PUT",
"url" : "/scalemgmt/v2/filesystems/gpfs0/directoryCopy/mydir"
},
"result" : {
"progress" : [ ],
"commands" : [ "tscp --source '/mnt/gpfs0/mydir' --target '/mnt/gpfs0/fset1/mydir' --
nodeclass 'cesNodes' --force " ],
"stdout" : [ ],
"stderr" : [ ],
"exitCode" : 0
},
"pids" : [ ]
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully."
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/{filesystemName}/disks request gets details of the disks that belong to the
specified file system. For more information about the fields in the data structures that are returned, see
the topics “mmlsdisk command” on page 489, “mmlsnsd command” on page 514, and “mmlsfs
command” on page 498.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
disks
where
filesystems/filesystemName
Specifies the file system to which the disks belong. Required.
disks
The disks that belong to the specified file system. Required.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"disks":
An array of elements, each of which describes one disk.
"name":"DiskName"
Name of the disk.
"filesystem":"FileSystemName"
The file system to which the disk belongs.
"failureGroup":"FailureGroupID"
Failure group of the disk.
"type":"{dataOnly | metadataOnly | dataAndMetadata | descOnly | localCache}"
Specifies the type of data to be stored on the disk:
dataOnly
Indicates that the disk contains data and does not contain metadata. This is the default for
disks in storage pools other than the system pool.
metadataOnly
Indicates that the disk contains metadata and does not contain data.
dataAndMetadata
Indicates that the disk contains both data and metadata. This is the default for disks in the
system pool.
Examples
The following example gets information about disks in the file system gpfs0.
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
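A request of the following form could be used to list the disks in the file system gpfs0; the host, port, and credentials are placeholders.
curl -k -u admin:admin001 -X GET --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/disks'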
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
The disks array returns details of the disks. Each array element contains details about one disk.
{
"disks" : [ {
"fileSystem" : "gpfs0",
"name" : "disk1"
}, {
"fileSystem" : "gpfs0",
"name" : "disk8"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
{
"disks" : [ {
"availability" : "up",
"availableBlocks" : "9.15 GB",
"availableFragments" : "552 kB",
"failureGroup" : "1",
"fileSystem" : "gpfs0",
"name" : "disk1",
"nsdServers" : "mari-11.localnet.com,mari-15.localnet.com,mari-14.localnet.com,
mari-13.localnet.com,mari-12.localnet.com",
"nsdVolumeId" : "0A00640B58B82A8C",
"quorumDisk" : "no",
"remarks" : "desc",
"size" : " 10.00GiB",
"status" : "ready",
"storagePool" : "system",
"type" : "nsd"
}, {
"availability" : "up",
"availableBlocks" : "9.45 GB",
"availableFragments" : "664 kB",
"failureGroup" : "1",
"fileSystem" : "gpfs0",
"name" : "disk8",
"nsdServers" : "mari-11.localnet.com,mari-12.localnet.com,mari-13.localnet.com,
mari-14.localnet.com,mari-15.localnet.com",
"nsdVolumeId" : "0A00640B58B82AD9",
"quorumDisk" : "no",
"remarks" : "desc",
"size" : " 10.00GiB",
"status" : "ready",
"storagePool" : "data",
"type" : "nsd"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/{filesystemName}/disks/{diskName} request gets details of a specific
disk. For more information about the fields in the data structures that are returned, see the topics
“mmlsdisk command” on page 489, “mmlsnsd command” on page 514, and “mmlsfs command” on page
498.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/fileSystemName/
disks/diskName
where
filesystems/filesystemName
The file system to which the disks belong. Required.
disks/diskName
The disk about which you want to get information. Required.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"disks":
An array of elements, each of which describes one disk.
"name":"DiskName"
Name of the disk.
"filesystem":"FileSystemName"
The file system to which the disk belongs.
"failureGroup":"FailureGroupID"
Failure group of the disk.
"type":"{dataOnly | metadataOnly | dataAndMetadata | descOnly | localCache}"
Specifies the type of data to be stored on the disk:
dataOnly
Indicates that the disk contains data and does not contain metadata. This is the default for
disks in storage pools other than the system pool.
metadataOnly
Indicates that the disk contains metadata and does not contain data.
dataAndMetadata
Indicates that the disk contains both data and metadata. This is the default for disks in the
system pool.
descOnly
Indicates that the disk contains no data and no file metadata. Such a disk is used solely to
keep a copy of the file system descriptor, and can be used as a third failure group in certain disaster recovery configurations.
Examples
The following example gets information about the disk disk8 in the file system gpfs0.
Request data:
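A request of the following form could be used to retrieve the details of the disk disk8 shown in the response below; the host, port, and credentials are placeholders.
curl -k -u admin:admin001 -X GET --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/disks/disk8'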
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"paging": {
"next": "https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/disks/disk8?lastId=1001"
},
"disks": [
{
"name": "disk8",
"fileSystem": "gpfs0",
"failureGroup": "1",
"type": "dataOnly",
"storagePool": "data",
"status": "ready",
"availability": "up",
"quorumDisk": "no",
"remarks": "This is a comment",
"size": "10.00 GB",
"availableBlocks": "730.50 MB",
"availableFragments": "1.50 MB",
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/{filesystemName}/filesets request gets information about all filesets in
the specified file system. For more information about the fields in the data structures that are returned,
see the topics “mmcrfileset command” on page 308, “mmchfileset command” on page 222, and
“mmlsfileset command” on page 493.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
filesets
where:
filesystem/filesystemName
Specifies the name of the file system to which the filesets belong. Required.
filesets
Specifies that you need to get details of all filesets that belong to the specified file system. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code":"ReturnCode"
The return code.
"paging":
Paging details.
"next": "URL",
The URL to retrieve the next page. Paging is enabled when more than 1000 objects would be
returned by the query.
"fields": "Fields",
The fields used in the original request.
"filter": "Filters",
The filter used in the original request.
"baseUrl": "Base URL",
The URL of the request without any parameters.
"lastId": "ID of the last element",
The ID of the last element that can be used to retrieve the next elements.
"filesets":
An array of information about the filesets in the specified file system. Each array element describes
one fileset and can contain the data structures afm, config, links, and state. For more
information about the fields in these data structures, see the links at the end of this topic:
Examples
The following example gets information about the filesets in file system gpfs0.
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
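A request of the following form could be used to list the filesets in the file system gpfs0; the host, port, and credentials are placeholders.
curl -k -u admin:admin001 -X GET --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/filesets'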
Response data:
{
"filesets" : [ {
"config" : {
"filesetName" : "root",
"filesystemName" : "gpfs0"
}
}, {
"config" : {
"filesetName" : "fset1",
"filesystemName" : "gpfs0"
}
}, {
"config" : {
"filesetName" : "fset2",
"filesystemName" : "gpfs0"
}
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Using the field parameter ":all:" returns the complete details of the filesets that are part of the specified file
system. For example:
{
"status": {
"code": 200,
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets?lastId=10001",
"fields": "period,restrict,sensorName",
"filter": "usedInodes>100,maxInodes>1024",
"baseUrl": "/scalemgmt/v2/perfmon/sensor/config",
"lastId": 10001
},
"filesets": [
{
"filesetName": "myFset1",
"filesystemName": "gpfs0",
"config": {
"path": "/mnt/gpfs0/myFset1",
"inodeSpace": 0,
"maxNumInodes": 4096,
"permissionChangeMode": "chmodAndSetAcl",
"comment": "Comment1",
"iamMode": "off",
"oid": 158,
"id": 5,
"status": "Linked",
"parentId": 1,
"created": "2016-12-13 13.59.15",
"isInodeSpaceOwner": false,
"inodeSpaceMask": 0,
"snapId": 0,
"rootInode": 131075
},
"afm": {
"afmTarget": "nfs://10.0.100.11/gpfs/afmHomeFs/afmHomeFileset2",
"afmState": "enabled",
"afmMode": "read-only",
"afmFileLookupRefreshInterval": 30,
"afmFileOpenRefreshInterval": 0,
"afmDirLookupRefreshInterval": 60,
"afmDirOpenRefreshInterval": 60,
"afmAsyncDelay": 0,
"afmNeedsRecovery": true,
"afmExpirationTimeout": 100,
"afmRPO": 0,
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/filesets command creates a new fileset in the
specified file system. For more information about the fields in the request data structures, see the topics
“mmcrfileset command” on page 308, “mmchfileset command” on page 222, and “mmcrfs command” on
page 315.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/
{filesystemName}/filesets
where
filesystems/filesystemName
Specifies the file system in which the fileset is to be created. Required.
filesets
Specifies fileset as the resource. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"filesetName": "Fileset name",
"path": "Path",
"owner": "UserID[:GroupID]",
"permissions": "Permissions"
"inodeSpace": "{new | ExistingFileset}",
"maxNumInodes": "Inodes",
"allocInodes": "Inodes",
Response data
{
"status": {
"code":ReturnCode",
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
Examples
The following example creates and links a fileset fset1 in file system gpfs0.
Request data:
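The request body from this example is echoed in the response below; a curl invocation of this form could submit it (the host, port, and credentials are placeholders).
curl -k -u admin:admin001 -X POST --header 'content-type:application/json' --header 'accept:application/json' -d '{ "filesetName": "fset1", "owner": "scalemgmt", "path": "/mnt/gpfs0/fset1", "permissions": 700 }' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/filesets'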
Response data:
{
"jobs" : [ {
"jobId" : 1000000000003,
"status" : "COMPLETED",
"submitted" : "2017-03-14 15:54:23,042",
"completed" : "N/A",
"request" : {
"data" : {
"filesetName" : "fset1",
"owner" : "scalemgmt",
"path" : "/mnt/gpfs0/fset1",
"permissions" : 700
},
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets"
},
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/filesets/cos request creates a fileset that is related to a
corresponding bucket. For more information about the fields in the data structures that are returned, see
“mmafmcosaccess command” on page 48.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/
{filesystemName}/filesets/cos
where
filesystems/{filesystemName}/filesets/cos
Specifies the file system where the new fileset needs to be created.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"filesetName": "COS fileset name",
"endpoint": "URL of the object store",
"useObjectFs": "true | false",
"bucket": "Name of the bucket",
"newBucket": "true | false",
"dir": "Directory name",
"policy": "Policy rules for the fileset",
"tmpDir": "Directory pattern",
"tmpFile": "File pattern",
"quotaFiles": "Number",
"quotaBlocks": "Number",
"uid": "User ID",
For more information about the fields in the following data structures, see the links at the end of this
topic.
"filesetName": "COS fileset name"
Name of the Cloud Object Storage fileset.
"endpoint": "URL of the object store"
URL of your object store. Use the server's host name, IP address, or map name.
"useObjectFs": "true | false"
Whether to handle Cloud Object Storage more like a file system.
"bucket": "Name of the bucket"
Name of the bucket that is related to your fileset. You can skip this parameter if your fileset has the
same name as your bucket.
"newBucket": "true | false"
Whether to create a Cloud Object Storage bucket.
"dir": "Directory name"
Name of the directory, or the full path inside the file system, where the fileset is to be linked. If you
skip this parameter, the fileset name is used as the directory name.
"policy": "Policy rules for the fileset"
Policy rules for the fileset.
"tmpDir": "Directory pattern"
Directory pattern to keep local, if no policy is specified.
"tmpFile": "File pattern"
File pattern to keep local, if no policy is specified.
"quotaFiles": "Number"
Quota for the number of files. Enables eviction when 80% of the files are used.
"quotaBlocks": "Number"
Quota limit for blocks. Enables eviction when 80% of the quota is used.
"uid": "User ID"
User ID of the fileset owner.
"gid": "Group ID"
Group ID of the fileset users.
"permission": "Access permission"
Access permissions in octal format.
"mode": "Fileset access mode"
Fileset access mode. Possible values are: iw (independent-writer), sw (single-writer), ro (read-only),
or lu (local-updates)
"useUserKeys": "true | false"
Whether to use user keys on requests from object store.
"chunkSize": "Chunk size",
Chunk size to control number of upload multiple parts.
"readSize": "Read size"
Download size [in numeric], default is zero to get the full object.
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
Examples
The following example creates a new fileset that is related to a corresponding bucket.
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
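An abbreviated sketch of the request, using values echoed in the response below; the fields that are not shown (keys, quotas, and the use* flags) follow the same pattern, and the host, port, and credentials are placeholders.
curl -k -u admin:admin001 -X POST --header 'content-type:application/json' --header 'accept:application/json' -d '{ "filesetName": "mycosfset", "endpoint": "http://s3store.com:8080", "bucket": "mybucket", "dir": "mycosfset", "useObjectFs": true, "uid": "994", "gid": "992", "permission": "0770", "mode": "sw" }' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/filesets/cos'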
Response data:
{
"jobs" : [ {
"jobId" : 1000000000012,
"status" : "FAILED",
"submitted" : "2020-09-29 15:57:42,926",
"completed" : "2020-09-29 15:57:50,251",
"runtime" : 7325,
"request" : {
"data" : {
"accessKey" : "****",
"bucket" : "mybucket",
"chunkSize" : 5,
"dir" : "mycosfset",
"endpoint" : "http://s3store.com:8080",
"filesetName" : "mycosfset",
"gid" : "992",
"mode" : "sw",
"permission" : "0770",
"quotaBlocks" : 20,
"quotaFiles" : 1000,
"readSize" : 3,
"secretKey" : "****",
"tmpDir" : "dirpattern",
"tmpFile" : "%.log",
"uid" : "994",
"useGcs" : false,
"useNoSubdir" : false,
"useObjectFs" : true,
"useSslCertVerify" : false,
"useUserKeys" : false,
"useVhb" : false,
"useXattr" : false
},
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/cos"
},
"result" : {
"progress" : [ ],
"commands" : [ "mmafmcoskeys 'mybucket' set **** **** ", "mmafmcosconfig 'gpfs0'
'mycosfset' --endpoint 'http://s3store.com:8080' --object-fs --bucket 'mybucket' --dir
'mycosfset' --tmpdir 'dirpattern' --tmpFile '%.log' --quota-files 1000 --quota-blocks 20 --uid
'994' --gid '992' --perm '0770' --mode 'sw' --chunk-size 5 --read-size 3 " ],
"stdout" : [ ],
"stderr" : [ "EFSSG0053C Failed to execute command:\nmmafmcosconfig: Creating fileset
mycosfset failed. Check logs for details.\nmmafmcosconfig: Command failed. Examine previous
error messages to determine cause.\n." ],
"exitCode" : 8
},
"pids" : [ ]
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully."
}
}
Related reference
“mmafmcosaccess command” on page 48
Maps a directory in an AFM to cloud object storage fileset to the bucket on a cloud object storage.
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE filesystems/{filesystemName}/filesets/{filesetName} command deletes the
specified fileset. For more information on deleting filesets, see “mmdelfileset command” on page 365.
Request URL
Use the following URL to delete a fileset:
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
filesets/filesetName
where:
filesystems/filesystemName
Specifies the file system to which the fileset belongs. Required.
filesets/filesetName
Specifies the fileset to be deleted. Required.
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
"completed":Time"
The time at which the job was completed.
"status":"RUNNING | COMPLETED | FAILED"
Status of the job.
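For example, a request of the following form deletes the fileset myFset1 from the file system gpfs0 and returns the response data shown below; the host, port, and credentials are placeholders.
curl -k -u admin:admin001 -X DELETE --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1'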
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create a
new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1",
"data": "",
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
See also
• “mmdelfileset command” on page 365
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/{filesystemName}/filesets/{filesetName} request gets information
about the specified fileset in a file system. For more information about the fields in the data structures
that are returned, see the topics “mmcrfileset command” on page 308, “mmchfileset command” on page
222, and “mmlsfileset command” on page 493.
Request URL
https://<IP or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/filesets/
filesetName
where:
filesystem/filesystemName
Specifies the name of the file system to which the filesets belong. Required.
filesets/filesetName
Specifies the fileset about which you need information. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code":"ReturnCode"
The return code.
"paging":
Paging details.
"next": "URL",
The URL to retrieve the next page. Paging is enabled when more than 1000 objects would be
returned by the query.
"fields": "Fields",
The fields used in the original request.
"filter": "Filters",
The filter used in the original request.
"baseUrl": "Base URL",
The URL of the request without any parameters.
"lastId": "ID of the last element",
The ID of the last element that can be used to retrieve the next elements.
"filesets":
An array of information about the filesets in the specified file system. Each array element describes
one fileset and can contain the data structures afm, config, links, and state. For more
information about the fields in these data structures, see the links at the end of this topic:
"filesets":
Information about the fileset configuration.
"filesetName": "Fileset",
The name of the fileset.
Examples
The following example gets information about the fileset myFset1 in file system gpfs0.
Request data:
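A request of the following form could be used to retrieve the details of the fileset myFset1; the host, port, and credentials are placeholders.
curl -k -u admin:admin001 -X GET --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1'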
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/{filesystemName}/filesets/{filesetName} request modifies a specific
fileset, which is part of the specified file system. For more information about the fields in the data
structures that are returned, see the topics “mmcrfileset command” on page 308, “mmchfileset
command” on page 222, and “mmlsfileset command” on page 493.
Request URL
https://<IP or host name of API server>:port/scalemgmt/v2/filesystems/filesystemName/filesets/
filesetName
where:
filesystems/filesystemName
Specifies the name of the file system to which the fileset belongs. Required.
filesets/filesetName
Specifies fileset to be modified. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"newFilesetName": "Fileset",
"maxNumInodes": "Inodes",
"allocInodes": "Inodes",
"newFilesetName": "Fileset",
The new name for the fileset.
"maxNumInodes": "Inodes"
The inode limit for the inode space that is owned by the specified fileset.
"permissionChangeMode": "Mode"
The permission change mode. Controls how chmod and ACL commands affect objects in the fileset.
chmodOnly
Only the chmod command can change access permissions.
setAclOnly
Only the ACL commands and API can change access permissions.
chmodAndSetAcl
Both the chmod command and ACL commands can change access permissions; chmod replaces the existing ACL.
chmodAndUpdateAcl
Both the chmod command and ACL commands can change access permissions; chmod updates the existing ACL instead of replacing it.
"comment": "Comment",
A comment that appears in the output of the mmlsfileset command.
"iamMode": "Mode"
The integrated archive manager (IAM) mode for the fileset.
advisory
noncompliant
compliant
"afmTarget": Interval"Protocol://{Host | Map}/Path"
The home that is associated with the cache.
"afmAsyncDelay": Delay
The time in seconds by which to delay write operations because of the lag in updating remote
clusters.
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
Examples
The following example modifies the fileset myFset1 in file system gpfs0.
Request data:
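A sketch of a request body that uses the fields documented above; the maxNumInodes value is illustrative only, and the host, port, and credentials are placeholders.
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json' -d '{ "maxNumInodes": "8192" }' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1'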
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000007,
"status" : "FAILED",
"submitted" : "2020-09-29 15:17:52,389",
"completed" : "2020-09-29 15:17:53,388",
See also
• “mmchfileset command” on page 222
• “mmcrfileset command” on page 308
• “mmlsfileset command” on page 493
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/filesets/{filesetName}/afmctl command
defines the attributes that control the AFM function of a specific fileset. For more information about the
fields in the request data structures, see the topics “mmafmctl command” on page 61, “mmafmconfig
command” on page 45, and “mmafmlocal command” on page 78.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/{filesystemName}
/filesets/{filesetName}/afmctl
where
filesystems/filesystemName
Specifies the file system in which the fileset is present. Required.
filesets/filesetName/afmctl
Specifies the fileset for which the AFM controls are to be defined. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String"
Array of commands that are run in this job.
"progress":"String"
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
"completed":Time"
The time at which the job was completed.
"status":"RUNNING | COMPLETED | FAILED"
Status of the job.
Examples
The following example defines AFM control attributes for the fileset fset1 in file system gpfs0.
Request data:
Response data:
{
"jobs" : [ {
"jobId" : 1000000000003,
"status" : "COMPLETED",
"submitted" : "2017-03-14 15:54:23,042",
"completed" : "N/A",
"request" : {
"data" : {
"filesetName" : "fset1",
"owner" : "scalemgmt",
"path" : "/mnt/gpfs0/fset1",
"permissions" : 700
},
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/fset1/afmctl"
},
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/filesets/{filesetName}/cos/directory request creates a
directory that is related to a corresponding bucket. For more information about the fields in the data
structures that are returned, see “mmafmcosaccess command” on page 48.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/
{filesystemName}/filesets/{filesetName}/cos/directory
where
filesystems/{filesystemName}/filesets/{filesetName}/cos/directory
Specifies the fileset in which the directory needs to be created.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"dir": "Directory name",
"bucket": "Name of the bucket",
"endpoint": "URL of the object store",
"accessKey": "Access key",
For more information about the fields in the following data structures, see the links at the end of this
topic.
"dir": "Directory name"
Name of the directory, or the full path inside the file system, where the fileset is to be linked. If you
skip this parameter, the fileset name is used as the directory name.
"bucket": "Name of the bucket"
Name of the bucket that is related to your fileset. You can skip this parameter if your fileset has the
same name as your bucket.
"endpoint": "URL of the object store"
URL of your object store. Use the server's host name, IP address, or map name.
"accessKey": "Access key"
Access key for your bucket. Use together with Secret key.
"secretKey": "Secret key"
Secret key for your bucket. Use together with Access key.
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
Examples
The following example creates a directory that is related to a corresponding bucket.
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
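A sketch of the request built from the values echoed in the response below; the fileset name in the URL and the key values are placeholders, as are the host, port, and credentials.
curl -k -u admin:admin001 -X POST --header 'content-type:application/json' --header 'accept:application/json' -d '{ "dir": "mydir", "bucket": "mybucket", "endpoint": "http://s3store.com:8080", "accessKey": "myAccessKey", "secretKey": "mySecretKey" }' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/filesets/mycosfset/cos/directory'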
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000014,
"status" : "COMPLETED",
"submitted" : "2020-09-29 16:42:39,680",
"completed" : "2020-09-29 16:42:39,957",
"runtime" : 277,
"request" : {
"data" : {
"accessKey" : "****",
"bucket" : "mybucket",
"dir" : "mydir",
"endpoint" : "http://s3store.com:8080",
"secretKey" : "****"
},
Related reference
“mmafmcosaccess command” on page 48
Maps a directory in an AFM to cloud object storage fileset to the bucket on a cloud object storage.
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/filesets/{filesetName}/cos/download request
downloads files from the object store. For more information about the fields in the data structures that
are returned, see “mmafmcosaccess command” on page 48.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/
{filesystemName}/filesets/{filesetName}/cos/download
where
filesystems/{filesystemName}/filesets/{filesetName}/cos/download
Specifies the action to be performed.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"path": "Path",
"objectList": "List of files",
"all": "true | false",
"useMetadata": "true | false",
"useData": "true | false",
"useNoSubdir": "true | false",
For more information about the fields in the following data structures, see the links at the end of this
topic.
"path": "Path"
Path to a dedicated directory in a fileset. You can skip this parameter to use the default link path of your
fileset.
"objectList": "List of files"
List of files to be downloaded from Cloud Object Storage.
"all": "true | false"
Whether to download all files from Cloud Object Storage.
"useMetadata": "true | false"
Whether to download only metadata.
"useData": "true | false"
Whether to download data.
"useNoSubdir": "true | false"
Whether to create a subdirectory, if '/' is in object name.
"prefix": "Prefix"
Prefix of object names to download.
"uid": "User ID"
User ID of the fileset owner.
"gid": "Group ID"
Group ID of the fileset owner.
"permission": "Access permissions"
Access permission in octal format.
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
Examples
The following example shows how to download files from object store.
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
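The request body from this example is echoed in the response below; a curl invocation of this form could submit it and produce the response data that follows (the host, port, and credentials are placeholders).
curl -k -u admin:admin001 -X POST --header 'content-type:application/json' --header 'accept:application/json' -d '{ "all": true }' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/filesets/mfset1/cos/download'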
{
"jobs" : [ {
"jobId" : 1000000000015,
"status" : "FAILED",
"submitted" : "2020-09-29 16:52:23,236",
"completed" : "2020-09-29 16:52:25,515",
"runtime" : 2279,
"request" : {
"data" : {
"all" : true
},
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/mfset1/cos/download"
},
"result" : {
"progress" : [ ],
"commands" : [ "mmafmcosctl 'gpfs0' 'mfset1' '/mnt/gpfs0/mfset1' download --all " ],
"stdout" : [ ],
"stderr" : [ "EFSSG0053C Failed to execute command: Queued\t Failed\t
TotalData\n \t \t (approx in Bytes)\n 20\t 0\t
1153433600\n\nNot all Objects are queued for Download.\n\nmmafmcosctl: Fileset fileset1 is not
an AFM fileset.\nAFM: tspcacheu failed, rc 2 code 0\nmmafmcosctl: Command failed. Examine
previous error messages to determine cause.\n." ],
"exitCode" : 8
},
"pids" : [ ]
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully."
}
}
Related reference
“mmafmcosaccess command” on page 48
Maps a directory in an AFM to cloud object storage fileset to the bucket on a cloud object storage.
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/filesets/{filesetName}/cos/evict request evicts files
from the object store. For more information about the fields in the data structures that are returned, see
“mmafmcosaccess command” on page 48.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/
{filesystemName}/filesets/{filesetName}/cos/evict
where
filesystems/{filesystemName}/filesets/{filesetName}/cos/evict
Specifies the action to be performed.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"path": "Path",
"objectList": "List of files",
"all": "true | false",
"useMetadata": "true | false",
}
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success. A nonzero value denotes failure.
Examples
The following example shows how to evict files from object store.
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
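The request body from this example is echoed in the response below; a curl invocation of this form could submit it (the host, port, and credentials are placeholders).
curl -k -u admin:admin001 -X POST --header 'content-type:application/json' --header 'accept:application/json' -d '{ "all": true, "path": "cosdir1", "useMetadata": false }' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/filesets/mfset1/cos/evict'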
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000016,
"status" : "FAILED",
"submitted" : "2020-09-29 20:38:44,349",
"completed" : "2020-09-29 20:38:45,203",
"runtime" : 854,
"request" : {
"data" : {
"all" : true,
"path" : "cosdir1",
"useMetadata" : false
},
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/mfset1/cos/evict"
},
"result" : {
"progress" : [ ],
"commands" : [ "mmafmcosctl 'gpfs0' 'mfset1' 'cosdir1' evict --all " ],
"stdout" : [ ],
"stderr" : [ "EFSSG0053C Failed to execute command:\nmmafmctl: Fileset mfset1 is not an
AFM fileset.\nmmafmctl: Command failed. Examine previous error messages to determine
cause.\nmmafmcosctl: Evict cmd: \"/usr/lpp/mmfs/bin/mmafmctl gpfs0 evict -j mfset1 --list-
Related reference
“mmafmcosaccess command” on page 48
Maps a directory in an AFM to cloud object storage fileset to the bucket on a cloud object storage.
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/filesets/{filesetName}/cos/upload request uploads files
to the object store. For more information about the fields in the data structures that are returned, see
“mmafmcosaccess command” on page 48.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/
{filesystemName}/filesets/{filesetName}/cos/upload
where
filesystems/{filesystemName}/filesets/{filesetName}/cos/upload
Specifies the action to be performed.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"path": "Path",
"objectList": "List of files",
"all": "true | false",
}
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success. A nonzero value denotes failure.
"stderr":"Error"
CLI messages from stderr.
Examples
The following example shows how to upload files to object store.
Request data:
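The request body from this example is echoed in the response below; a curl invocation of this form could submit it (the host, port, and credentials are placeholders).
curl -k -u admin:admin001 -X POST --header 'content-type:application/json' --header 'accept:application/json' -d '{ "all": true, "path": "cosdir1" }' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/filesets/fileset1/cos/upload'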
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000017,
"status" : "FAILED",
"submitted" : "2020-09-29 20:45:50,393",
"completed" : "2020-09-29 20:45:50,899",
"runtime" : 506,
"request" : {
"data" : {
"all" : true,
"path" : "cosdir1"
},
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/fileset1/cos/upload"
},
"result" : {
"progress" : [ ],
"commands" : [ "mmafmcosctl 'gpfs0' 'fileset1' 'cosdir1' upload --all " ],
"stdout" : [ ],
"stderr" : [ "EFSSG0053C Failed to execute command:\nmmafmcosctl: Fileset fileset1 is not
an AFM fileset.\nmmafmcosctl: cosdir1 is not a valid path for the fileset
fileset1\nmmafmcosctl: Command failed. Examine previous error messages to determine
cause.\n." ],
"exitCode" : 8
},
"pids" : [ ]
} ],
"status" : {
"code" : 200,
Related reference
“mmafmcosaccess command” on page 48
Maps a directory in an AFM to cloud object storage fileset to the bucket on a cloud object storage.
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/filesets/{filesetName}/directory/{path}
request creates a directory inside a fileset.
Request URL
https://management API host:port/scalemgmt/v2/filesystems/filesystemName/filesets/filesetName/
directory/path
where:
filesystems/filesystemName
Specifies the name of the file system to which the fileset belongs. Required.
filesets/filesetName
Specifies the name of the fileset to which the directory belongs. Required.
directory/path
Specifies the directory to be created. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"user": "User name",
Response data
{
  "status": {
    "code": "ReturnCode",
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
Examples
The following example shows how to create a directory inside the fileset fs1.
Request data:
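The request body and URL are echoed in the response below; a curl invocation of this form could be used to submit them (the host, port, and credentials are placeholders).
curl -k -u admin:admin001 -X POST --header 'content-type:application/json' --header 'accept:application/json' -d '{ "user": "testuser55", "uid": 1234, "group": "mygroup", "gid": 4711 }' 'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/filesets/fs1/directory/mydir'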
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000005,
"status" : "RUNNING",
"submitted" : "2017-03-14 15:59:34,180",
"completed" : "N/A",
"request" : {
"data" : {
"user": "testuser55",
"uid": 1234,
"group": "mygroup",
"gid": 4711
},
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/fs1/directory/mydir"
},
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE filesystems/{filesystemName}/filesets/{filesetName}/directory/{path}
request removes a directory from a fileset.
Request URL
https://management API host:port/scalemgmt/v2/filesystems/filesystemName/filesets/filesetName/
directory/path
where:
filesystems/filesystemName
Specifies the name of the file system to which the fileset belongs. Required.
filesets/filesetName
Specifies the fileset. Required.
directory/path
The path of the directory to be removed.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
None.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
Examples
The following example shows how to remove a directory from the fileset myFset1.
Request data:
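No request body is needed; a curl call like the following can submit the delete request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders modeled on the other examples in this reference):
# curl -k -u admin:admin001 -X DELETE -H content-type:application/json
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/directory/myDir1"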
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000006,
"status" : "RUNNING",
"submitted" : "2017-03-14 16:05:30,960",
"completed" : "N/A",
"request" : {
"type" : "DELETE",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/directory/myDir1"
},
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/{filesystemName}/filesets/{filesetName}/directoryCopy/
{sourcePath} request copies a directory on a particular fileset. For more information about the fields in
the data structures that are returned, see the topics “mmchfileset command” on page 222, “mmcrfileset
command” on page 308, and “mmlsfileset command” on page 493.
Request URL
https://<IP or host name of API server>:port/scalemgmt/v2/filesystems/{filesystemName}/filesets/
{filesetName}/directoryCopy/{sourcePath}
where:
filesystems/filesystemName/filesets/{filesetName}
Specifies the fileset to which the directory belongs. Required.
directoryCopy
Action to be performed on the directory. Required.
sourcePath
Path of the directory to be copied. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
   "targetFilesystem": "File system name",
   "targetFileset": "Fileset name",
   "targetPath": "Directory path",
   "nodeclassName": "Name of the node class",
   "force": "True | False"
}
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
Examples
The following example shows how to copy a directory that belongs to the file system gpfs0 and fileset
fset1.
Request data:
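A curl call like the following can submit the copy request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders, and the body field values are illustrative assumptions inferred from the tscp command that appears in the response below):
# curl -k -u admin:admin001 -X PUT -H content-type:application/json -d '{"targetFilesystem": "gpfs0", "targetFileset": "fileset1", "targetPath": "cosdir10", "nodeclassName": "cesNodes", "force": true}'
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/fileset1/directoryCopy/cosdir1"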
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000019,
"status" : "COMPLETED",
"submitted" : "2020-09-29 20:55:05,814",
"completed" : "2020-09-29 20:55:08,544",
"runtime" : 2730,
"request" : {
"type" : "PUT",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/fileset1/directoryCopy/cosdir1"
},
"result" : {
"progress" : [ ],
"commands" : [ "tscp --source '/mnt/gpfs0/fileset1/cosdir1' --target '/mnt/gpfs0/fileset1/
cosdir10' --nodeclass 'cesNodes' --force " ],
"stdout" : [ ],
"stderr" : [ ],
"exitCode" : 0
},
See also
• “mmchfileset command” on page 222
• “mmcrfileset command” on page 308
• “mmlsfileset command” on page 493
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE filesystems/{filesystemName}/filesets/{filesetName}/link request unlinks
an existing fileset, which is part of the specified file system. For more information about the fields in the
data structures that are returned, see “mmunlinkfileset command” on page 724.
Request URL
https://management API host:port/scalemgmt/v2/filesystems/filesystemName/filesets/filesetName/
link
where:
filesystems/filesystemName
Specifies the name of the file system to which the fileset belongs. Required.
filesets/filesetName
Specifies the fileset to be unlinked. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{ "force": "True | False"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
Examples
The following example unlinks the fileset myFset1.
Request data:
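A curl call like the following can submit the unlink request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders, and the body matches the request data shown in the response below):
# curl -k -u admin:admin001 -X DELETE -H content-type:application/json -d '{"force": false}'
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/link"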
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000006,
"status" : "RUNNING",
"submitted" : "2017-03-14 16:05:30,960",
"completed" : "N/A",
"request" : {
"data" : {
"force" : false
},
"type" : "DELETE",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/link"
},
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
See also
• “mmunlinkfileset command” on page 724
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/filesets/{filesetName}/link request links an
existing fileset, which is part of the specified file system. For more information about the fields in the data
structures that are returned, see “mmlinkfileset command” on page 477.
Request URL
https://management API host:port/scalemgmt/v2/filesystems/filesystemName/filesets/filesetName/
link
where:
filesystems/filesystemName
Specifies the name of the file system to which the fileset belongs. Required.
filesets/filesetName
Specifies the fileset to be linked. Required.
link
Specifies the action to be performed in the POST call. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{ "path": "Path"
}
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
Examples
The following example links the fileset myFset1.
Request data:
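A curl call like the following can submit the link request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders, and the body matches the request data shown in the response below):
# curl -k -u admin:admin001 -X POST -H content-type:application/json -d '{"path": "/mnt/gpfs0/fset1"}'
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/link"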
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000005,
"status" : "RUNNING",
"submitted" : "2017-03-14 15:59:34,180",
"completed" : "N/A",
"request" : {
"data" : {
"path" : "/mnt/gpfs0/fset1"
},
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/link"
},
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
See also
• “mmlinkfileset command” on page 477
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/filesets/{filesetName}/psnaps request creates
an AFM peer snapshot. The peer snapshot function provides a snapshot at home and cache sites
separately, ensuring application consistency on both home and cache sides. For more information about
the fields in the data structures that are returned, see the topics “mmpsnap command” on page 605,
“mmafmctl command” on page 61, and “mmafmconfig command” on page 45.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/{filesystemName}
/filesets/{filesetName}/psnaps
where
filesystems/{filesystemName}/filesets/{filesetName}
Specifies the AFM fileset as the target. Required.
psnaps
Specifies that a peer snapshot needs to be taken for the AFM fileset.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
{
"comment": "Comment",
"uid": "Location",
"rpo": " yes | no",
"wait": "yes | no"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"comment":"Comment"
Comment about the peer snapshot.
"uId":"UID"
A unique identifier for the cache site. If not specified, this defaults to the IBM Spectrum Scale cluster
ID.
"rpo":"yes | no"
Specifies whether to create a user recovery point objective (RPO) snapshot for a primary fileset. This
option cannot be specified with the --comment and --uid options.
"wait":"yes | no"
Specifies whether to make the creation of cache and home snapshots a synchronous process.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
Examples
The following API command creates a peer snapshot of the fileset myFset1.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
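A curl call like the following can submit the peer snapshot request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders, and the body mirrors the request data shown in the response below):
# curl -k -u admin:admin001 -X POST -H content-type:application/json -d '{"comment": "My peer snapshot", "uid": "HONGKONG", "rpo": "no", "wait": "yes"}'
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/psnaps"
Response data: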
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "POST",
"url": "/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/psnaps",
"data": "comment": "My peer snapshot",
"uid": "HONGKONG",
"rpo": "no",
"wait": "yes",
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE filesystems/{filesystemName}/filesets/{filesetName}/psnaps/
{snapshotName} request deletes a specific AFM peer snapshot. For more information about the fields in
the data structures that are returned, see the topics “mmpsnap command” on page 605 and
“mmdelsnapshot command” on page 378.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/
{filesystemName}/filesets
/{filesetName}/psnaps/{snapshotName}
where
filesystems/{filesystemName}/filesets/{filesetName}
Specifies the AFM fileset as the target. Required.
psnaps/{snapshotName}
Specifies the peer snapshot to be deleted.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
Examples
The following API command deletes the peer snapshot snap1.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
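No request body is needed; a curl call like the following can submit the delete request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders modeled on the other examples in this reference):
# curl -k -u admin:admin001 -X DELETE -H content-type:application/json
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/psnaps/snap1"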
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/psnaps/snap1",
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/filesystemName/filesets/{filesetName}/quotadefaults request
gets information about the default quotas that are defined at the fileset level. For more information, see
“mmsetquota command” on page 691 and “mmrepquota command” on page 656.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName
/filesets/filesetName/quotadefaults
where
filesystems/filesystemName/filesets/filesetName
The fileset about which you need the default quota information. Required.
quotadefaults
Specifies that you need to get the default quota details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"paging": {
"next": "URL",
"fields": "Fields specified",
"filter": "Filter used",
"baseUrl": "Base URL",
"lastId": "Last ID"
},
"quotaDefaults": [
{
"clusterId": {
"clusterID": "string"
},
"deviceName": "string",
"filesetId": "Fileset ID",
"filesetName": "Fileset name",
"quotaType": "Quota type",
"blockSoftLimit": "Soft limit set for capacity",
"blockHardLimit": "Hard limit set for capacity",
"filesSoftLimit": "Soft limit set for number of inodes",
"filesHardLimit": "Hard limit set for number of inodes",
"entryType": "DEFAULT_ON",
},
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging":"Paging details"
"next": "URL",
URL of the next item.
"fields": "Fields specified",
Fields specified in the request.
"filter": "Filter used",
Filters used in the request.
"baseUrl": "Base URL",
Base URL
"lastId": "Last ID",
Last item's ID.
"quotasDefaults":""
"clusterId"
clusterID": "ID"
Unique cluster ID.
"deviceName": "string",
Device name.
Examples
The following example gets quota information for all filesets inside file system mari.
Request URL:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status" : {
"code" : 200,
"message" : "The request finished successfully."
},
"quotaDefaults" : [ {
"deviceName" : "mari",
"filesetId" : 0,
"filesetName" : "root",
"quotaType" : "USR",
"blockSoftLimit" : 204800,
"blockHardLimit" : 409600,
"filesSoftLimit" : 0,
"filesHardLimit" : 0,
"entryType" : "DEFAULT_ON"
}, {
"deviceName" : "mari",
"filesetId" : 0,
"filesetName" : "root",
"quotaType" : "GRP",
"blockSoftLimit" : 0,
"blockHardLimit" : 0,
"filesSoftLimit" : 0,
"filesHardLimit" : 0,
"entryType" : "DEFAULT_ON"
}, {
"deviceName" : "mari",
"filesetId" : 2,
"filesetName" : "lisa",
"quotaType" : "USR",
"blockSoftLimit" : 0,
"blockHardLimit" : 0,
"filesSoftLimit" : 0,
"filesHardLimit" : 0,
"entryType" : "DEFAULT_ON"
}, {
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/filesystemName/filesets/filesetName/quotadefaults request
enables or disables default quota limits for a fileset. For more information about the fields in the data structures
that are returned, see the topics “mmsetquota command” on page 691 and “mmrepquota command” on
page 656.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName
/filesets/filesetName/quotadefaults
where
filesystems/filesystemName/filesets/filesetName
Specifies that you need to enable default quota for the particular fileset. Required.
quotadefaults
Specifies that you need to set the default quota details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
   "user":"True | False",
   "group":"True | False",
   "fileset":"True | False",
   "assign":"True | False",
   "reset":"True | False"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"user":"True | False"
Whether to enable or disable default quota for the user.
"group":"True | False"
Whether to enable or disable default quota for the group.
"fileset":"True | False"
Whether to enable or disable default quota for the fileset.
"assign":"True | False"
Whether to assign the quota defaults to enabled user, group or fileset.
"reset":"True | False"
Whether to reset the quota defaults from disabled user, group or fileset.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
Examples
The following example enables or disables default quota for the file system gpfs0 and fileset fs1.
Use the following request to set the quota defaults:
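A curl call like the following can submit the request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders, and the True/False values in the body are illustrative assumptions based on the request data fields described above):
# curl -k -u admin:admin001 -X PUT -H content-type:application/json -d '{"user": true, "group": true, "fileset": false, "assign": true, "reset": false}'
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/fs1/quotadefaults"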
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000006,
"status" : "RUNNING",
"submitted" : "2019-02-20 13:33:56,336",
"completed" : "N/A",
"runtime" : 3,
"request" : {
"type" : "PUT",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/fs1/quotadefaults"
},
"result" : { },
"pids" : [ ]
}, {
"jobId" : 1000000000007,
"status" : "RUNNING",
"submitted" : "2019-02-20 13:33:56,338",
"completed" : "N/A",
"runtime" : 1,
"request" : {
"type" : "PUT",
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/filesystemName/filesets/{filesetName}/quotadefaults request
sets or changes default quota limits for a fileset. For more information about the fields in the data
structures that are returned, see the topics “mmsetquota command” on page 691 and “mmrepquota
command” on page 656.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName
/filesets/filesetName/quotadefaults
where
filesystems/filesystemName/filesets/filesetName
Specifies that you need to set or change default quota for the particular fileset. Required.
quotadefaults
Specifies that you need to set or change the default quota details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
   "quotaType":"USR | GROUP | FILESET",
   "blockSoftLimit":"Soft limit for capacity",
   "blockHardLimit":"Hard limit for capacity",
   "filesSoftLimit":"Soft limit for inodes",
   "filesHardLimit":"Hard limit for inodes"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
" quotaType":"USR | GROUP | FILESET"
Specify whether the quota is enabled for user, group, or fileset.
"blockSoftLimit":"Soft limit for capacity"
Soft limit set for the capacity usage. Limit can be specified in KiB, MiB, GiB, or TiB. Default is KiB.
"blockHardLimit":"Hard limit for capacity"
Hard limit set for the capacity usage. When hard limit is reached, users cannot perform data writes.
Limit can be specified in KiB, MiB, GiB, or TiB. Default is KiB.
"filesSoftLimit":"Soft limit for inodes"
Soft limit set for inode space. Limit can be specified in KiB, MiB, GiB, or TiB. Default is KiB.
"filesHardLimit":"Hard limit for inodes"
Hard limit set for inodes. Limit can be specified in KiB, MiB, GiB, or TiB. Default is KiB.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
Examples
The following example shows how to set or change default quota for the file system mari and fileset
fset1.
Now, use the following request to set the quota defaults:
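A curl call like the following can submit the request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders, and the quota values in the body are illustrative assumptions chosen to be consistent with the GET output shown later in this example):
# curl -k -u admin:admin001 -X POST -H content-type:application/json -d '{"quotaType": "GROUP", "blockSoftLimit": "12M", "blockHardLimit": "1G", "filesSoftLimit": "100K", "filesHardLimit": "1M"}'
"https://localhost:443/scalemgmt/v2/filesystems/mari/filesets/fset1/quotadefaults"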
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000009,
"status" : "RUNNING",
"submitted" : "2019-02-20 14:28:23,747",
"completed" : "N/A",
"runtime" : 0,
"request" : {
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/mari/filesets/fset1/quotadefaults"
},
"result" : { },
"pids" : [ ]
} ],
Use the GET filesystems/mari/filesets/fset1/quotadefaults request to see how the quota defaults are set:
Request: # curl -k -u admin:admin001 -X GET -H content-type:application/json
"https://localhost:443/scalemgmt/v2/filesystems/mari/filesets/fset1/
quotadefaults"
{
"status" : {
"code" : 200,
"message" : "The request finished successfully."
},
"quotaDefaults" : [ {
"deviceName" : "mari",
"filesetId" : 2,
"filesetName" : "fset1",
"quotaType" : "USR",
"blockSoftLimit" : 0,
"blockHardLimit" : 0,
"filesSoftLimit" : 0,
"filesHardLimit" : 0,
"entryType" : "DEFAULT_ON"
}, {
"deviceName" : "mari",
"filesetId" : 2,
"filesetName" : "fset1",
"quotaType" : "GRP",
"blockSoftLimit" : 12288,
"blockHardLimit" : 1048576,
"filesSoftLimit" : 102400,
"filesHardLimit" : 1048576,
"entryType" : "DEFAULT_ON"
} ]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/filesystemName/filesets/filesetName/quotas request gets
information about quotas set for filesets within a particular file system. For more information about the
fields in the data structures that are returned, see the topics “mmsetquota command” on page 691 and
“mmrepquota command” on page 656.
The quota definition at the file system level must be per-fileset to retrieve the quota information by using
this API call.
Request URL
https://<API server>:<port>/scalemgmt/v2/filesystems/FileSystemName/filesets/filesetName/quotas
where
filesystems/filesystemName
The file system that contains the fileset. Required.
filesets/filesetName
The fileset about which you need to get the quota information. Required.
quotas
Specifies that you need to get the quota details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "paging": {
      "next": "URL"
   },
   "quotas": [
      {
         "quotaID": "ID",
         "filesystemName": "File system name",
         "filesetName": "Fileset name",
         "quotaType": "Type",
         "objectName": "Name",
         "objectId": "ID",
         "blockUsage": "Usage",
         "blockQuota": "Soft limit",
         "blockLimit": "Hard limit",
         "blockInDoubt": "Space in doubt",
         "blockGrace": "Grace period",
         "filesUsage": "Number of files in usage",
         "filesQuota": "Soft limit",
         "filesLimit": "Hard limit",
         "filesInDoubt": "Files in doubt",
         "filesGrace": "Grace period",
         "isDefaultQuota": "Default"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging":"NFSv4"
"quotas":""
"quotaID":"ID"
Internal ID used for paging.
"filesystemName":"File system name"
The file system for which the quota is applicable.
"filesetName":"Fileset name"
The fileset for which the quota is applicable.
"quotaType":"USR | GRP | FILESET"
The quota type.
Examples
The following example gets quota information for the fileset myFset1, which is part of the file system
gpfs0.
Request data:
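No request body is needed; a curl call like the following can submit the query (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders modeled on the other examples in this reference):
# curl -k -u admin:admin001 -X GET -H content-type:application/json
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/quotas"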
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/quotas?
lastId=1001"
},
"quotas": [
{
"quotaId": "4711",
"filesystemName": "gpfs0",
"filesetName": "myFset1",
"quotaType": "USR",
"objectName": "myFset1",
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/filesystemName/filesets/filesetName/quotas request defines quota
limits for a fileset, which is part of the specified file system. For more information about the fields in the
data structures that are returned, see the topics “mmsetquota command” on page 691 and “mmrepquota
command” on page 656.
The quota definition at the file system level must be per-fileset to successfully run this API command.
Request URL
https://<API server>:<port>/scalemgmt/v2/filesystems/FileSystemName/filesets/filesetName/quotas
where
filesystems/filesystemName
The file system that contains the fileset. Required.
filesets/filesetName
The fileset for which you need to set the quota information. Required.
quotas
Specifies that you need to set the quota details. Required.
Request headers
Accept: application/json
Request data
{
"operationType":"Type",
"quotaType":"Type",
"blockSoftLimit":"Soft limit",
"blockHardLimit":"Hard limit",
"filesSoftLimit":"Soft limit",
"filesHardLimit":"Hard limit",
"filesGracePeriod":"Grace period",
"blockGracePeriod":"Default",
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"operationType":"Operation type"
In this case, set quota.
"quotaType":"USR | GRP | FILESET"
The quota type.
"objectName":"Name"
Name of the fileset, user, or user group for which the quota is applicable.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
Examples
The following example sets quota for the fileset myFset1, which is part of the file system gpfs0:
Request data:
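A curl call like the following can submit the request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders, the request URL is inferred from the URL pattern above, and the body mirrors the request data shown in the response below):
# curl -k -u admin:admin001 -X POST -H content-type:application/json -d '{"operationType": "setQuota", "quotaType": "user", "objectName": "adam", "blockSoftLimit": "1M", "blockHardLimit": "2M", "filesSoftLimit": "1K", "filesHardLimit": "2K"}'
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/quotas"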
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000008,
"status" : "RUNNING",
"submitted" : "2017-03-14 16:07:43,474",
"completed" : "N/A",
"request" : {
"data" : {
"operationType": "setQuota",
"quotaType": "user",
"objectName": "adam",
"blockSoftLimit": "1M",
"blockHardLimit": "2M",
"filesSoftLimit": "1K",
"filesHardLimit": "2K",
"filesGracePeriod": "null",
"blockGracePeriod": "null"
},
"type" : "POST",
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/{filesystemName}/filesets/{filesetName}/snapshotCopy/
{snapshotName} request copies a snapshot on a particular fileset. For more information about the fields
in the data structures that are returned, see the topics “mmchfileset command” on page 222,
“mmcrfileset command” on page 308, and “mmlsfileset command” on page 493.
Request URL
https://<IP or host name of API server>:port/scalemgmt/v2/filesystems/{filesystemName}/filesets/
{filesetName}/snapshotCopy/{snapshotName}
where:
filesystems/filesystemName/filesets/{filesetName}
Specifies the fileset to which the snapshot belongs. Required.
snapshotCopy
Action to be performed on the snapshot. Required.
snapshotName
Name of the snapshot to be copied. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
   "targetFilesystem": "File system name",
   "targetFileset": "Fileset name",
   "targetPath": "Directory path",
   "nodeclassName": "Name of the node class",
   "force": "True | False"
}
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
Examples
The following example shows how to copy a snapshot that belongs to the file system gpfs0 and fileset
fset1.
Request data:
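A curl call like the following can submit the copy request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders, the URL matches the one shown in the response below, and the body field values are illustrative assumptions inferred from the tscp command in the response):
# curl -k -u admin:admin001 -X PUT -H content-type:application/json -d '{"targetFilesystem": "gpfs0", "targetFileset": "fset1", "targetPath": "mydir", "nodeclassName": "cesNodes", "force": true}'
"https://localhost:443/scalemgmt/v2/filesystems/fs1/filesets/root/snapshotCopy/snap2"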
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000026,
"status" : "COMPLETED",
"submitted" : "2020-09-29 21:59:35,983",
"completed" : "2020-09-29 21:59:43,828",
"runtime" : 7845,
"request" : {
"type" : "PUT",
"url" : "/scalemgmt/v2/filesystems/fs1/filesets/root/snapshotCopy/snap2"
},
"result" : {
"progress" : [ ],
"commands" : [ "tscp --snapshot 'fs1:root:snap2' --target '/mnt/gpfs0/mydir' --nodeclass
'cesNodes' --force " ],
"stdout" : [ ],
"stderr" : [ ],
"exitCode" : 0
},
See also
• “mmchfileset command” on page 222
• “mmcrfileset command” on page 308
• “mmlsfileset command” on page 493
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/{filesystemName}/filesets/{filesetName}/snapshotCopy/
{snapshotName}/path/{sourcePath} request copies a directory from a source path relative to a
snapshot, to a target path on a fileset. For more information about the fields in the data structures that are
returned, see the topics “mmchfileset command” on page 222, “mmcrfileset command” on page 308,
and “mmlsfileset command” on page 493.
Request URL
https://<IP or host name of API server>:port/scalemgmt/v2/filesystems/{filesystemName}/filesets/
{filesetName}/snapshotCopy/{snapshotName}/path/{sourcePath}
where:
filesystems/filesystemName/filesets/{filesetName}
Specifies the fileset to which the snapshot belongs. Required.
snapshotCopy
Action to be performed on the snapshot. Required.
snapshotName
Name of the snapshot to be copied. Required.
path/sourcePath
Source path relative to a snapshot. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"targetFilesystem": "File system name",
"targetFileset": "Fileset name",
"targetPath": "Directory path",
"nodeclassName": "Name of the node class",
"force": "True | False",
}
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
Examples
The following example shows how to copy a snapshot that belongs to the file system gpfs0 and fileset
fset1. Snapshot name is snap1 and the relative source path is mydir1.
Request data:
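A curl call like the following can submit the copy request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders, the URL matches the one in the response below, and the body field values are illustrative assumptions inferred from the tscp command in the response):
# curl -k -u admin:admin001 -X PUT -H content-type:application/json -d '{"targetFilesystem": "gpfs0", "targetFileset": "fset1", "targetPath": "mydir", "nodeclassName": "cesNodes", "force": true}'
"https://localhost:443/scalemgmt/v2/filesystems/fs1/filesets/root/snapshotCopy/snap2/path/dir1"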
Response data:
{
"jobs" : [ {
"jobId" : 1000000000027,
"status" : "COMPLETED",
"submitted" : "2020-09-29 22:05:03,274",
"completed" : "2020-09-29 22:05:09,542",
"runtime" : 6268,
"request" : {
"type" : "PUT",
"url" : "/scalemgmt/v2/filesystems/fs1/filesets/root/snapshotCopy/snap2/path/dir1"
},
"result" : {
"progress" : [ ],
"commands" : [ "tscp --snapshot 'fs1:root:snap2' --source 'dir1' --target '/mnt/gpfs0/
mydir' --nodeclass 'cesNodes' --force " ],
"stdout" : [ ],
"stderr" : [ ],
"exitCode" : 0
},
"pids" : [ ]
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully."
}
}
See also
• “mmchfileset command” on page 222
• “mmcrfileset command” on page 308
• “mmlsfileset command” on page 493
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/{filesystemName}/filesets/{filesetName}/snapshots request gets
information about snapshots in the specified fileset. For more information about the fields in the data
structures that are returned, see the topics “mmcrsnapshot command” on page 337 and “mmlssnapshot
command” on page 532.
Request URL
https://IP of API server:<port>/scalemgmt/v2/filesystems/filesystemName/filesets/filesetName/
snapshots
where
filesystems/filesystemName
Specifies the file system to which the fileset belongs. Required.
filesets/filesetName
Specifies the fileset of which the snapshot is taken. Required.
snapshots
Specifies snapshot as the resource of this GET call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"paging":
{
"next": "URL"
},
"snapshots": [
   {
      "snapshotName": "Snapshot",
      "filesystemName": "Device",
      "filesetName": "Fileset",
      "oid": "ID",
      "snapID": "ID",
      "status": "Status",
      "created": "DateTime",
      "quotas": "Quotas",
      "snapType": "Type"
   }
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"snapshotName":"Snapshot"
The snapshot name.
"filesystemName":"Device"
The file system that is the target of the snapshot.
"filesetName":"Fileset"
For a fileset snapshot, the fileset that is a target of the snapshot.
"oid":"ID"
Internal identifier that is used for paging.
"snapID":"ID"
The snapshot ID.
Examples
The following example gets information about the snapshots of the fileset myFset1, which belongs to the
file system gpfs0.
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
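No request body is needed; a curl call like the following can submit the query (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders modeled on the other examples in this reference):
# curl -k -u admin:admin001 -X GET -H content-type:application/json
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/snapshots"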
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"snapshots" : [ {
"filesetName" : "myFset1",
"filesystemName" : "gpfs0",
"snapshotName" : "mySnap1"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Using the field parameter ":all:" returns the complete details of the snapshot, as shown in the following example:
{
"snapshots" : [ {
"created" : "2017-03-21 15:52:14,000",
"filesetName" : "myFset1",
"filesystemName" : "gpfs0",
"oid" : 2,
"quotas" : "",
"snapID" : 2,
"snapType" : "",
"snapshotName" : "mySnap1",
"status" : "Valid"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/filesets/{filesetName}/snapshots command
creates a snapshot of the specified fileset.
Request URL
https://IP of API server:<port>/scalemgmt/v2/filesystems/filesystemName/filesets/filesetName/
snapshots
where
filesystems/filesystemName
Specifies the file system to which the fileset belongs. Required.
filesets/filesetName
Specifies the fileset of which the snapshot is taken. Required.
snapshots
Specifies snapshot as the resource of this POST call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
"snapshotName":
Name of the snapshot to be created.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
Examples
The following example creates a snapshot snap2 of the fileset myFset1 in file system gpfs0.
Request data:
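A curl call like the following can submit the request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders, and the body matches the request data shown in the response below):
# curl -k -u admin:admin001 -X POST -H content-type:application/json -d '{"snapshotName": "snap2"}'
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/snapshots"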
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {}
"request": {
"type": "POST",
"url": "/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/snapshots",
"data": "{"snapshotName": "snap2"}"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE filesystems/{filesystemName}/filesets/{filesetName}/snapshots/
{snapshotName} command deletes the specific fileset snapshot. For more information on deleting
snapshots, see the topic “mmdelsnapshot command” on page 378.
Request URL
Use this URL to delete a snapshot:
https://IP address of API server:<port>/scalemgmt/v2/filesystems/filesystemName/filesets/filesetName/
snapshots/snapshotName
where:
filesystems/filesystemName
Specifies the file system to which the fileset belongs. Required.
filesets/filesetName
Specifies the fileset of which the snapshot is deleted. Required.
snapshots/snapshotName
Specifies the snapshot to be deleted. Required.
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
Examples
The following example deletes the fileset snapshot snap1 from the file system gpfs0.
Request URL:
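No request body is needed; a curl call like the following can submit the delete request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders modeled on the other examples in this reference):
# curl -k -u admin:admin001 -X DELETE -H content-type:application/json
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/snapshots/snap1"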
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {},
"request": {
"type": "DELETE",
"url": "scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/snapshots/snap1",
"data": "{}"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/filesystemName/filesets/filesetName/snapshots/snapshotName
request gets information about a specific fileset snapshot in a specific file system. For more information
about the fields in the data structures that are returned, see the topics “mmcrsnapshot command” on
page 337 and “mmlssnapshot command” on page 532.
Request URL
https://IP address of API server:<port>/scalemgmt/v2/filesystems/filesystemName/filesets/
filesetName
/snapshots/snapshotName
where
filesystems/filesystemName
Specifies the file system to which the fileset belongs. Required.
filesets/{filesetName}
Specifies the fileset of which the snapshot is taken. Required.
snapshots/snapshotName
Specifies a particular snapshot as the resource of this GET call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"paging":
{
"next": "URL"
},
"snapshots": [
   {
      "snapshotName": "Snapshot",
      "filesystemName": "Device",
      "filesetName": "Fileset",
      "oid": "ID",
      "snapID": "ID",
      "status": "Status",
      "created": "DateTime",
      "quotas": "Quotas",
      "snapType": "Type"
   }
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"snapshotName":"Snapshot"
The snapshot name.
"filesystemName":"Device"
The file system that is the target of the snapshot.
"filesetName":"Fileset"
For a fileset snapshot, the fileset that is a target of the snapshot.
"oid":"ID"
Internal identifier that is used for paging.
"snapID":"ID"
The snapshot ID.
Examples
The following example gets information about the snapshot snap1 of the fileset myFset1, which belongs
to the file system gpfs0.
Request data:
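No request body is needed; a curl call like the following can submit the query (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders modeled on the other examples in this reference):
# curl -k -u admin:admin001 -X GET -H content-type:application/json
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/snapshots/snap1"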
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/snapshots/
snap1?lastId=1001"
},
"snapshots": [
{
"snapshotName": "snap1",
"filesystemName": "gpfs0",
"filesetName": "myFset1",
"oid": "123",
"snapID": "5",
"status": "Valid",
"created": "2017-01-09 14.55.37",
"quotas": "string",
"snapType": "string"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/filesystemName/suspend request suspends a file system. For more
information about the fields in the data structures that are returned, see “mmfsctl command” on page
418.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/FileSystemName/
suspend
where
filesystems/filesystemName
The file system that is going to be suspended. Required.
suspend
Specifies the action to be performed on the file system. Required.
Request headers
Content-Type: application/json
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
None.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
"completed":"Time"
The time at which the job was completed.
"status":"RUNNING | COMPLETED | FAILED"
Status of the job.
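Examples
The following example suspends the file system gpfs0. A curl call like the following can submit the request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders modeled on the other examples in this reference):
# curl -k -u admin:admin001 -X PUT -H content-type:application/json
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/suspend"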
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000002,
"status" : "RUNNING",
"submitted" : "2017-03-14 15:50:00,493",
"completed" : "N/A",
"request" : {
"type" : "PUT",
"url" : "/scalemgmt/v2/filesystems/gpfs0/suspend"
},
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/filesets/{filesetName}/symlink/{linkPath}
request creates a symlink for the path of a fileset.
Request URL
https://management API host:port/scalemgmt/v2/filesystems/filesystemName/filesets/filesetName/
symlink/linkPath
where:
filesystems/filesystemName
Specifies the name of the file system to which the fileset belongs. Required.
filesets/filesetName
Specifies the name of the fileset to which the symlink belongs. Required.
symlink/linkPath
Specifies the symlink path relative to the fileset path. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
Examples
The following example shows how to create a symlink for the fileset fs1.
Request data:
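A curl call like the following can submit the request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders, and the body mirrors the request data shown in the response below):
# curl -k -u admin:admin001 -X POST -H content-type:application/json -d '{"filesystemName": "gpfs0", "filesetName": "fset1", "relativePath": "mydir"}'
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/fs1/symlink/mydir"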
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000005,
"status" : "RUNNING",
"submitted" : "2017-03-14 15:59:34,180",
"completed" : "N/A",
"request" : {
"data" : {
"filesystemName": "gpfs0",
"filesetName": "fset1",
"relativePath": "mydir"
},
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/fs1/symlink/mydir"
},
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE filesystems/{filesystemName}/filesets/{filesetName}/symlink/{path}
request removes a symlink from a fileset.
Request URL
https://management API host:port/scalemgmt/v2/filesystems/filesystemName/filesets/filesetName/
symlink/path
where:
filesystems/filesystemName
Specifies the name of the file system to which the fileset belongs. Required.
filesets/filesetName
Specifies the name of the fileset to which the symlink belongs. Required.
symlink/path
Specifies the symlink path relative to the fileset path. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
None.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
Examples
The following example shows how to remove symlink from the fileset myFset1.
Request data:
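No request body is needed; a curl call like the following can submit the delete request (a sketch only; the localhost:443 endpoint and the admin:admin001 credentials are placeholders modeled on the other examples in this reference):
# curl -k -u admin:admin001 -X DELETE -H content-type:application/json
"https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/symlink/myDir1"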
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000006,
"status" : "RUNNING",
"submitted" : "2017-03-14 16:05:30,960",
"completed" : "N/A",
"request" : {
"type" : "DELETE",
"url" : "/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/symlink/myDir1"
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/filesystemName/filesets/{filesetName}/watch request enables or
disables clustered watch for a fileset. For more information about the fields in the data structures that are
returned, see “mmwatch command” on page 753.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/
{FileSystemName}/filesets/{filesetName}/watch
where
filesystems/filesystemName/filesets/filesetName
The fileset for which clustered watch needs to be enabled.
watch
Specifies the action to be performed on the fileset.
Request headers
Content-Type: application/json
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
For more information about the fields in the following data structures, see the links at the end of this
topic.
"action": "Enable or disable file system watch"
Whether to enable or disable clustered watch for the fileset. Possible values are enable |
disable.
"watchfolderConfig":
"description": "Description"
Description of the clustered watch.
"eventTypes": "Event types"
Types of events that need to be watched.
"eventHandler": "Event handler"
Type of event handler. Only Kafkasink is supported as the event handler.
"sinkBrokers": "Sink broker"
Includes a comma-separated list of broker:port pairs for the sink Kafka cluster, which is the
external Kafka cluster where the events are sent.
"sinkTopic": "Sink topic"
The topic that producers write to in the sink Kafka cluster.
"sinkAuthConfig": "Sink authentication config"
The path to the sink authentication configuration file.
watchId": "Watch ID"
Remote cluster name from where the file system must be watched.
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
Examples
The following example shows how to enable the clustered watch for the fileset fset1.
Request data:
{
"action": "enable",
"watchfolderConfig": {
"description": "description",
"eventTypes": "IN_ACCESS,IN_ATTRIB",
"eventHandler": "kafkasink",
"sinkBrokers": "Broker1:Port,Broker2:Port",
"sinkTopic": "topic",
"sinkAuthConfig": "/mnt/gpfs0/sink-auth-config"
},
"watchId": "string"
}
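The request body shown above can be submitted with a curl call such as the following; the host address, port, and admin credentials are placeholders:
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json' \
  -d '{"action": "enable", "watchfolderConfig": {"description": "description", "eventTypes": "IN_ACCESS,IN_ATTRIB", "eventHandler": "kafkasink", "sinkBrokers": "Broker1:Port,Broker2:Port", "sinkTopic": "topic", "sinkAuthConfig": "/mnt/gpfs0/sink-auth-config"}, "watchId": "string"}' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/filesets/fset1/watch'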
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "PUT",
"url": "/scalemgmt/v2/filesystems/gpfs0/filsets/fset1/watch",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/filesystemName/mount request mounts a file system. For more information
about the fields in the data structures that are returned, see “mmmount command” on page 537.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/FileSystemName/
mount
where
filesystems/filesystemName
The file system that is going to be mounted. Required.
mount
Specifies the action to be performed on the file system. Required.
Request headers
Content-Type: application/json
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
  "nodes": "Node name",
  "mountOptions": "Mount options"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
Examples
The following example shows how to mount the file system gpfs0.
Request data:
{
"nodes": "testnode-11",
"mountOptions": "atime=yes;relatime=no"
}
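An equivalent curl call might look like this; the host address, port, and admin credentials are placeholders:
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json' \
  -d '{"nodes": "testnode-11", "mountOptions": "atime=yes;relatime=no"}' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/mount'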
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000002,
"status" : "RUNNING",
"submitted" : "2017-03-14 15:50:00,493",
"completed" : "N/A",
"request" : {
"data" : {
"entries" : [
{
"type" : "allow",
"who" : "special:owner@",
"permissions" : "rwmxDaAnNcCos",
"flags" : ""
},
{
"type" : "allow",
"who" : "special:group@",
"permissions" : "rxancs",
"flags" : ""
},
{
"type" : "allow",
"who" : "special:everyone@",
"permissions" : "rxancs",
"flags" : ""
},
{
"type" : "allow",
"who" : "user:scalemgmt",
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/filesystemName/owner/path request gets information about the owner of
files or directories within a particular file system.
Request URL
https://<IP address of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/owner/path
where
filesystems/filesystemName
The file system about which you want to get the information. Required.
owner/path
The path of the file or directory about which you want to get the owner information. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"Owner":"Owner details"
"user":"User name"
Name of the owner.
"uid":"User ID"
Unique identifier of the owner.
"group":"Group name"
Name of the user group that owns the file or directory.
"gid":"Group ID"
Unique identifier of the user group that owns the file or directory.
Examples
The following example gets owner information for the files and directories of the file system gpfs0.
Request URL:
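For example (illustrative only; the host address, admin credentials, and the URL-encoded path mydir%2Ffile1 are placeholders):
curl -k -u admin:admin001 -X GET --header 'accept:application/json' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/owner/mydir%2Ffile1'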
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."k
},
"owner": {
"user": "testuser55",
"uid": "1234",
"group": "mygroup",
"gid": "4711"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/filesystemName/owner/path request sets owner for files or directories
within a particular file system.
The file and directory ownership can be controlled by Linux commands such as stat and chown. You can
use either ID or name of the user or group to define the ownership. If you use both ID and name, the ID
takes precedence. Only the user with DataAccess role can change the owner of a file or a non-empty
directory. Empty directories can be changed by other roles such as Administrator, ProtocolAdmin,
StorageAdmin, and SecurityAdmin.
Request URL
https://<IP address of API server>:<port>/scalemgmt/v2/filesystems/FileSystemName/owner/path
where
filesystems/filesystemName
Specifies the file system to which the file or directory belongs. Required.
owner/path
The path of the file or directory for which you want to set the owner. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"path":"Path",
"user":"User name",
"uid":"User ID",
"group":"Group name",
"gid":"Group ID",
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"path":"Path"
The path of the file or directory.
"user":"User name"
Name of the owner.
"uid":"User ID"
Unique identifier of the owner.
"group":"Group name"
Name of the user group that owns the file or directory.
"gid":"Group ID"
Unique identifier of the user group that owns the file or directory.
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "owner": {
    "path": "Path",
    "user": "User name",
    "uid": "User ID",
    "group": "Group name",
    "gid": "Group ID"
  }
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
Examples
The following example sets owner for the files and directories of the file system gpfs0.
Request data:
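A call of the following form sets the owner shown in the response below; the host address, admin credentials, and the path segment xaz (relative to the file system mount point) are placeholders:
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json' \
  -d '{"user": "testuser55", "uid": "1234", "group": "mygroup", "gid": "4711"}' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/owner/xaz'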
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {},
"request": {
"type": "POST",
"url": "/scalemgmt/v2/filesystems/gpfs0/owner",
"data": "{
"path": "/mnt/gpfs0/xaz",
"user": "testuser55",
"uid": "1234",
"group": "mygroup",
"gid": "4711"}"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/{filesystemName}/policies request gets the details of the ILM policies
applicable to a specific file system. For more information about the fields in the data structures that are
returned, see “mmapplypolicy command” on page 80 and “mmlspolicy command” on page 518.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/
{filesystemName}/policies
where
filesystems/{filesystemName}/policies
Specifies that the GET request fetches the policies that are applicable to a specific file system.
Request headers
Content-Type: application/json
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
The following list of attributes are available in the response data:
"status":
Return status.
"code": ReturnCode,
The HTTP status code that was returned by the request.
"message": "ReturnMessage"
The return message.
"paging":
An array of paging information that is used for displaying the details.
"next": "Next page URL"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects would be
returned by the query.
"fields": "Fields in the request"
The fields that are used in the original request.
"filter": "Filters used in the request"
The filter that is used in the original request.
"baseUrl": "URL"
The URL of the request without any parameters.
"lastId": "ID"
The ID of the last element that can be used to retrieve the next elements.
"policies":
An array of information about the ILM policies applicable to a file system.
"filesystemName": "File system name"
The file system for which the ILM policies are applicable.
"policy": "Policy settings"
Details of the ILM policy applicable to the specified file system.
The return information and the information that the command retrieves are returned in the same way as
they are for the other requests. The parameters that are returned are the same as the configuration
attributes that are displayed by the mmapplypolicy command. For more information, see “mmlspolicy
command” on page 518 and “mmapplypolicy command” on page 80.
Examples
The following example gets information about the ILM policies that are applicable to the file system gpfs0.
Request data:
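No request body is needed for this GET call; an illustrative invocation (placeholder host, port, and admin credentials) is:
curl -k -u admin:admin001 -X GET --header 'accept:application/json' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/policies'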
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
The policies array provides information about the ILM policies applicable to the file system gpfs0.
{
"status": {
"code": 200,
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets?lastId=10001",
"fields": "period,restrict,sensorName",
"filter": "usedInodes>100,maxInodes>1024",
"baseUrl": "/scalemgmt/v2/perfmon/sensor/config",
"lastId": 10001
},
"policies": [
{
"filesystemName": "gpfs0",
"policy": "RULE 'placement' SET POOL 'system'"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/{filesystemName}/policies request applies an Information Lifecycle
Management (ILM) policy to a specific file system. For more information about the fields in the data
structures that are returned, see “mmapplypolicy command” on page 80.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/
{filesystemName}/policies
where
filesystems/{filesystemName}/policies
Specifies the policy for a specific file system as the target of the operation. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"policy": "RULE 'placement' SET POOL 'system'"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"policy":"ILM policy"
Details of the policy to be applied to the file system.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
Examples
The following API command applies an ILM placement policy to the file system gpfs0.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
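An illustrative call that applies the policy shown in the request data above (the host address, port, and admin credentials are placeholders):
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json' \
  -d "{\"policy\": \"RULE 'placement' SET POOL 'system'\"}" \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/policies'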
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "PUT",
"url": "/scalemgmt/v2/filesystems/gpfs0/policies",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/filesystemName/quotadefaults request gets information about the
default quotas that are defined at the file system level. For more information, see “mmsetquota
command” on page 691 and “mmrepquota command” on page 656
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
quotadefaults
where
filesystems/filesystemName
The file system about which you need the default quota information. Required.
quotadefaults
Specifies that you need to get the default quota details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "paging": {
    "next": "URL",
    "fields": "Fields specified",
    "filter": "Filter used",
    "baseUrl": "Base URL",
    "lastId": "Last ID"
  },
  "quotaDefaults": [
    {
      "clusterId": {
        "clusterID": "string"
      },
      "deviceName": "string",
      "filesetName": "Fileset name",
      "quotaType": "Quota type",
      "blockSoftLimit": "Soft limit set for capacity",
      "blockHardLimit": "Hard limit set for capacity",
      "filesSoftLimit": "Soft limit set for number of inodes",
      "filesHardLimit": "Hard limit set for number of inodes",
      "entryType": "DEFAULT_ON"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging":"Paging details"
"next": "URL",
URL of the next item.
"fields": "Fields specified",
Fields specified in the request.
"filter": "Filter used",
Filters used in the request.
"baseUrl": "Base URL",
Base URL
"lastId": "Last ID",
Last item's ID.
"quotasDefaults":""
"clusterId"
clusterID": "ID"
Unique cluster ID.
"deviceName": "string",
Device name.
"filesetId": "Fileset ID",
The fileset ID for which the default quota is applicable.
Examples
The following example gets default quota information for all filesets inside the file system mari.
Request URL:
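For example (the host address, port, and admin credentials are placeholders):
curl -k -u admin:admin001 -X GET --header 'accept:application/json' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/mari/quotadefaults'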
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status" : {
"code" : 200,
"message" : "The request finished successfully."
},
"quotaDefaults" : [ {
"deviceName" : "mari",
"filesetId" : 0,
"filesetName" : "root",
"quotaType" : "USR",
"blockSoftLimit" : 204800,
"blockHardLimit" : 409600,
"filesSoftLimit" : 0,
"filesHardLimit" : 0,
"entryType" : "DEFAULT_ON"
}, {
"deviceName" : "mari",
"filesetId" : 0,
"filesetName" : "root",
"quotaType" : "GRP",
"blockSoftLimit" : 0,
"blockHardLimit" : 0,
"filesSoftLimit" : 0,
"filesHardLimit" : 0,
"entryType" : "DEFAULT_ON"
}, {
"deviceName" : "mari",
"filesetId" : 2,
"filesetName" : "lisa",
"quotaType" : "USR",
"blockSoftLimit" : 0,
"blockHardLimit" : 0,
"filesSoftLimit" : 0,
"filesHardLimit" : 0,
"entryType" : "DEFAULT_ON"
}, {
"deviceName" : "mari",
"filesetId" : 2,
"filesetName" : "lisa",
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/filesystemName/quotadefaults request enables default quota limits for a
file system. For more information about the fields in the data structures that are returned, see the topics
“mmsetquota command” on page 691 and “mmrepquota command” on page 656.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
quotadefaults
where
filesystems/filesystemName
Specifies that you need to enable default quota for the particular file system. Required.
quotadefaults
Specifies that you need to enable the default quota details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
  "user": "True | False",
  "group": "True | False",
  "fileset": "True | False",
  "assign": "True | False",
  "reset": "True | False"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
Examples
The following example enables or disables default quota for the file system gpfs0.
Use the following request to enable default quotas for the file system:
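An illustrative call (the host address, admin credentials, and flag values are placeholders):
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json' \
  -d '{"user": "true", "group": "true", "fileset": "false"}' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/quotadefaults'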
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000001,
"status" : "RUNNING",
"submitted" : "2019-02-19 15:18:34,738",
"completed" : "N/A",
"runtime" : 5,
"request" : {
"type" : "PUT",
"url" : "/scalemgmt/v2/filesystems/gpfs0/quotadefaults"
},
"result" : { },
"pids" : [ ]
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing."
}
}
Use the GET filesystems/gpfs0/quotadefaults request to see how the quota defaults are set:
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/filesystemName/quotadefaults request sets or changes default quota
limits for a file system. For more information about the fields in the data structures that are returned, see
the topics “mmsetquota command” on page 691 and “mmrepquota command” on page 656.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
quotadefaults
where
filesystems/filesystemName
Specifies that you need to set or change default quota for the particular file system. Required.
quotadefaults
Specifies that you need to set or change the default quota details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
  "quotaType": "USR | GROUP | FILESET",
  "blockSoftLimit": "Soft limit for capacity",
  "blockHardLimit": "Hard limit for capacity",
  "filesSoftLimit": "Soft limit for inodes",
  "filesHardLimit": "Hard limit for inodes"
}
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
Examples
The following example shows how to set or change default quota for the file system gpfs0.
Now, use the following request to set the quota defaults:
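An illustrative call (the host address, admin credentials, and limit values are placeholders; the limit values correspond to the USR entry in the GET output shown at the end of this example):
curl -k -u admin:admin001 -X POST --header 'content-type:application/json' --header 'accept:application/json' \
  -d '{"quotaType": "USR", "blockSoftLimit": "12288", "blockHardLimit": "1048576", "filesSoftLimit": "102400", "filesHardLimit": "1048576"}' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/quotadefaults'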
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000008,
"status" : "RUNNING",
"submitted" : "2019-02-20 14:25:05,280",
"completed" : "N/A",
"runtime" : 3,
"request" : {
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/quotadefaults"
},
"result" : { },
"pids" : [ ]
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing."
}
}
{
"status" : {
"code" : 200,
"message" : "The request finished successfully."
},
"quotaDefaults" : [ {
"deviceName" : "gpfs0",
"filesetId" : 0,
"quotaType" : "USR",
"blockSoftLimit" : 12288,
"blockHardLimit" : 1048576,
"filesSoftLimit" : 102400,
"filesHardLimit" : 1048576,
"entryType" : "DEFAULT_ON"
}, {
"deviceName" : "gpfs0",
"filesetId" : 0,
"quotaType" : "GRP",
"blockSoftLimit" : 0,
"blockHardLimit" : 0,
"filesSoftLimit" : 0,
"filesHardLimit" : 0,
"entryType" : "DEFAULT_ON"
}, {
"deviceName" : "gpfs0",
"filesetId" : 0,
"quotaType" : "FILESET",
"blockSoftLimit" : 0,
"blockHardLimit" : 0,
"filesSoftLimit" : 0,
"filesHardLimit" : 0,
"entryType" : "DEFAULT_OFF"
} ]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/filesystemName/quotagracedefaults request gets information about the
default grace period set for the quota limits that are defined at the file system level. For more
information, see “mmsetquota command” on page 691 and “mmrepquota command” on page 656.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
quotagracedefaults
where
filesystems/filesystemName
The file system about which you need the default quota grace period information. Required.
quotagracedefaults
Specifies that you need to get the default quota grace period details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "paging": {
    "next": "URL",
    "fields": "Fields specified",
    "filter": "Filter used",
    "baseUrl": "Base URL",
    "lastId": "Last ID"
  },
  "quotaGraceDefaults": [
    {
      "deviceName": "File system name",
      "quotaType": "USR | GRP | FILESET",
      "blockGracePeriod": "Grace period set for capacity usage",
      "filesGracePeriod": "Grace period set for inode usage"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging":"Paging details"
"next": "URL",
URL of the next item.
"fields": "Fields specified",
Fields specified in the request.
"filter": "Filter used",
Filters used in the request.
"baseUrl": "Base URL",
Base URL
"lastId": "Last ID",
Last item's ID.
"quotasDefaults":""
"deviceName": "File system name",
File system name.
"quotaType":"USR | GRP | FILESET"
The quota type.
"blockGracePeriod": "Grace period set for capacity usage",
Grace period set for capacity usage.
"filesGracePeriod": "Grace period set for inode usage",
Grace period set for inode usage.
Examples
The following example gets grace periods set for all file systems that are created in the system.
Request URL:
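For example (the host address, admin credentials, and the file system name gpfs1 are placeholders):
curl -k -u admin:admin001 -X GET --header 'accept:application/json' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs1/quotagracedefaults'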
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status" : {
"code" : 200,
"message" : "The request finished successfully."
},
"quotaGraceDefaults" : [ {
"deviceName" : "gpfs1",
"quotaType" : "GRP",
"blockGracePeriod" : 604800,
"filesGracePeriod" : 604800
}, {
"deviceName" : "gpfs1",
"quotaType" : "FILESET",
"blockGracePeriod" : 604800,
"filesGracePeriod" : 604800
}, {
"deviceName" : "gpfs1",
"quotaType" : "USR",
"blockGracePeriod" : 1000,
"filesGracePeriod" : 864000
}, {
"deviceName" : "mari",
"quotaType" : "USR",
"blockGracePeriod" : 604800,
"filesGracePeriod" : 604800
}, {
"deviceName" : "mari",
"quotaType" : "FILESET",
"blockGracePeriod" : 604800,
"filesGracePeriod" : 604800
}, {
"deviceName" : "mari",
"quotaType" : "GRP",
"blockGracePeriod" : 36000,
"filesGracePeriod" : 3600
} ]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/filesystemName/quotagracedefaults request sets or changes grace
period for quota limits for a file system. For more information about the fields in the data structures that
are returned, see the topics “mmsetquota command” on page 691 and “mmrepquota command” on page
656.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
quotagracedefaults
where
filesystems/filesystemName
Specifies that you need to set or change grace periods for quota limits for the particular file
system. Required.
quotagracedefaults
Specifies that you need to set or change the default quota details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
  "grace": "USR | GRP | FILESET",
  "blockGracePeriod": "Grace period for capacity limits",
  "filesGracePeriod": "Grace period for inode space limits"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
" grace":"USR | GROUP | FILESET"
Specify whether grace period is set for user, group, or fileset.
"blockGracePeriod":"Grace period for capacity limits"
Grace period set for the capacity usage. Grace period can be set in seconds, minutes, hours, and days.
Default unit is seconds.
""filesGracePeriod":"Grace period for inode space limits"
Grace period set for the inode space usage. Grace period can be set in seconds, minutes, hours, and
days. Default unit is seconds.
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
Examples
The following example shows how to set or change grace period for the file system gpfs0.
Now, use the following request to set the quota grace periods:
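An illustrative call (the host address, admin credentials, and grace period values are placeholders):
curl -k -u admin:admin001 -X POST --header 'content-type:application/json' --header 'accept:application/json' \
  -d '{"grace": "USR", "blockGracePeriod": "604800", "filesGracePeriod": "604800"}' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/quotagracedefaults'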
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000008,
"status" : "RUNNING",
"submitted" : "2019-02-20 14:25:05,280",
"completed" : "N/A",
"runtime" : 3,
"request" : {
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/quotadefaults"
},
"result" : { },
"pids" : [ ]
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing."
}
}
Use the GET filesystems/gpfs0/quotagracedefaults request to see how the quota grace periods
are set:
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/filesystemName/quotamanagement request enables quota management
for a file system. For more information about the fields in the data structures that are returned, see the
topics “mmsetquota command” on page 691 and “mmrepquota command” on page 656.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
quotamanagement
where
filesystems/filesystemName
Specifies that you need to enable quota management for the particular file system. Required.
quotamanagement
Specifies that you need to enable quota management. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
  "quota": "filesystem | fileset | disabled"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
Examples
The following example enables quota management at fileset level.
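An illustrative call that enables quota management at the fileset level (the host address, port, and admin credentials are placeholders):
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json' \
  -d '{"quota": "fileset"}' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/quotamanagement'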
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/filesystemName/quotas request gets information about quotas set at the file
system level. For more information, see “mmsetquota command” on page 691 and “mmrepquota command” on
page 656.
The perfileset quota must be disabled to display the user and group quotas. The fileset quota is also
available in the response data.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
quotas
where
filesystems/filesystemName
The file system about which you need the information. Required.
quotas
Specifies that you need to get the quota details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "paging": {
    "next": "URL"
  },
  "quotas": [
    {
      "quotaID": "ID",
      "filesystemName": "File system name",
      "filesetName": "Fileset name",
      "quotaType": "Type",
      "objectName": "Name",
      "objectId": "ID",
      "blockUsage": "Usage",
      "blockQuota": "Soft limit",
      "blockLimit": "Hard limit",
      "blockInDoubt": "Space in doubt",
      "blockGrace": "Grace period",
      "filesUsage": "Number of files in usage",
      "filesQuota": "Soft limit",
      "filesLimit": "Hard limit",
      "filesInDoubt": "Files in doubt",
      "filesGrace": "Grace period",
      "isDefaultQuota": "Default"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging":"NFSv4"
"quotas":""
"quotaID":"ID"
Internal ID used for paging.
"filesystemName":"File system name"
The file system for which the quota is applicable.
"filesetName":"Fileset name"
The fileset for which the quota is applicable.
"quotaType":"USR | GRP | FILESET"
The quota type.
"objectName":"Name"
Name of the fileset, user, or user group for which the quota is applicable.
"objectId":"ID"
Unique identifier of the fileset, user, or user group.
"blockUsage":"Usage"
Current capacity quota usage.
"blockQuota":"Soft limit"
The soft limit set for the fileset, user, or user group.
"blockLimit":"Hard limit"
The hard limit set for the capacity quota usage. A grace period starts when the hard limit is
reached.
Examples
The following example gets quota information for the file system gpfs0.
Request URL:
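For example (the host address, port, and admin credentials are placeholders):
curl -k -u admin:admin001 -X GET --header 'accept:application/json' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/quotas'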
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/quotas?lastId=1001"
},
"quotas": [
{
"quotaId": "4711",
"filesystemName": "gpfs0",
"filesetName": "myFset1",
"quotaType": "USR",
"objectName": "myFset1",
"objectId": "128",
"blockUsage": "0",
"blockQuota": "2048",
"blockLimit": "4096",
"blockInDoubt": "1024",
"blockGrace": "none",
"filesUsage": "32",
"filesQuota": "50",
"filesLimit": "100",
"filesInDoubt": "3",
"filesGrace": "none",
"isDefaultQuota": false
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/filesystemName/quotas request defines quota limits for a file system. For
more information about the fields in the data structures that are returned, see the topics “mmsetquota
command” on page 691 and “mmrepquota command” on page 656.
The perfileset quota must be disabled to successfully complete this API command.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
quotas
where
filesystems/filesystemName
Specifies that you need to set quota for the particular file system. Required.
quotas
Specifies that you need to set the quota details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
  "operationType": "Type",
  "quotaType": "Type",
  "objectName": "Name",
  "blockSoftLimit": "Soft limit",
  "blockHardLimit": "Hard limit",
  "filesSoftLimit": "Soft limit",
  "filesHardLimit": "Hard limit",
  "filesGracePeriod": "Grace period",
  "blockGracePeriod": "Grace period"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"operationType":"Operation type"
In this case, set quota.
"quotaType":"USR | GRP | FILESET"
The quota type.
"objectName":"Name"
Name of the fileset, user, or user group for which the quota is applicable.
"blockSoftLimit":"Soft limit"
The soft limit set for the fileset, user, or user group.
"blockHardLimit":"Hard limit"
The hard limit set for the capacity quota usage. A grace period starts when the hard limit is reached.
"filesSoftLimit":"Soft limit"
The soft limit set for the inode quota.
"filesHardLimit":"Hard limit"
The hard limit set for the inode quota.
"filesGracePeriod":"Grace period"
The grace period set for the inode usage.
"blockGracePeriod":"Grace period"
The grace period set for the capacity quota.
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
Examples
The following example sets quota for the file system gpfs0.
Request data:
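A call of the following form submits this request; the host address, admin credentials, and limit values are placeholders that match the data echoed in the response below:
curl -k -u admin:admin001 -X POST --header 'content-type:application/json' --header 'accept:application/json' \
  -d '{"operationType": "setQuota", "quotaType": "user", "objectName": "adam", "blockSoftLimit": "1M", "blockHardLimit": "2M", "filesSoftLimit": "1K", "filesHardLimit": "2K"}' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/quotas'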
Response data:
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {},
"request": {
"type": "POST",
"url": "/scalemgmt/v2/filesystems/gpfs0/quotas",
"data": "{
"operationType": "setQuota",
"quotaType": "user",
"objectName": "adam",
"blockSoftLimit": "1M",
"blockHardLimit": "2M",
"filesSoftLimit": "1K",
"filesHardLimit": "2K",
"filesGracePeriod": "null",
"blockGracePeriod": "null""}"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/filesystemName/resume request resumes a file system operation. For more
information about the fields in the data structures that are returned, see “mmfsctl command” on page
418.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/FileSystemName/
resume
where
filesystems/filesystemName
The file system that is going to be resumed. Required.
resume
Specifies the action to be performed on the file system. Required.
Request headers
Content-Type: application/json
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
None.
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
"completed":"Time"
The time at which the job was completed.
"status":"RUNNING | COMPLETED | FAILED"
Status of the job.
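For example, a call of the following form resumes the file system gpfs0 and produces a response like the one shown below (the host address, port, and admin credentials are placeholders):
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/resume'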
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000002,
"status" : "RUNNING",
"submitted" : "2017-03-14 15:50:00,493",
"completed" : "N/A",
"request" : {
"type" : "PUT",
"url" : "/scalemgmt/v2/filesystems/gpfs0/resume"
},
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/{filesystemName}/snapshotCopy/{snapshotName} request copies a
directory from a source path relative to a snapshot, to a target path on a file system. For more information
about the fields in the data structures that are returned, see the topics “mmcrfs command” on page 315,
“mmchfs command” on page 230, and “mmlsfs command” on page 498.
Request URL
https://<IP or host name of API server>:port/scalemgmt/v2/filesystems/{filesystemName}/
snapshotCopy/{snapshotName}
where:
filesystems/filesystemName
Specifies the file system to which the snapshot belongs. Required.
snapshotCopy
Action to be performed on the snapshot. Required.
snapshotName
Name of the snapshot to be copied. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
  "targetFilesystem": "File system name",
  "targetFileset": "Fileset name",
  "targetPath": "Directory path",
  "nodeclassName": "Name of the node class"
}
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
Examples
The following example shows how to copy a snapshot that belongs to the file system fs1.
Request data:
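An illustrative call (the host address and admin credentials are placeholders; the body values are inferred from the tscp command shown in the result below):
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json' \
  -d '{"targetFilesystem": "objfs", "targetPath": "dir1", "nodeclassName": "cesNodes"}' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/fs1/snapshotCopy/snap1'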
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000024,
"status" : "COMPLETED",
"submitted" : "2020-09-29 21:25:25,513",
"completed" : "2020-09-29 21:25:41,217",
"runtime" : 15704,
"request" : {
"type" : "PUT",
"url" : "/scalemgmt/v2/filesystems/fs1/snapshotCopy/snap1"
},
"result" : {
"progress" : [ ],
"commands" : [ "tscp --snapshot 'fs1:snap1' --target '/mnt/objfs/dir1' --nodeclass
'cesNodes' --force " ],
"stdout" : [ ],
"stderr" : [ ],
"exitCode" : 0
},
"pids" : [ ]
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully."
See also
• “mmcrfs command” on page 315
• “mmchfs command” on page 230
• “mmlsfs command” on page 498
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/{filesystemName}/snapshotCopy/{snapshotName}/path/
{sourcePath} request copies a directory from a source path relative to a snapshot, to a target path on a
file system. For more information about the fields in the data structures that are returned, see the topics
“mmcrfs command” on page 315, “mmchfs command” on page 230, and “mmlsfs command” on page
498.
Request URL
https://<IP or host name of API server>:port/scalemgmt/v2/filesystems/{filesystemName}/
snapshotCopy/{snapshotName}/path/{sourcePath}
where:
filesystems/filesystemName.
Specifies the file system to which the snapshot belongs. Required.
snapshotCopy
Action to be performed on the snapshot. Required.
snapshotName
Name of the snapshot to be copied. Required.
path/sourcePath
Source path relative to a snapshot. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
        "exitCode": "Exit code",
        "stderr": "Error",
        "stdout": "String"
      },
      "request": {
        "type": "{GET | POST | PUT | DELETE}",
        "url": "URL",
        "data": ""
      },
      "jobId": "ID",
      "submitted": "Time",
      "completed": "Time",
      "status": "Job status"
    }
  ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
Examples
The following example shows how to copy a snapshot that belongs to the file system fs1. The snapshot
name is snap1 and the relative source path is dir1.
Request data:
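An illustrative call (the host address, admin credentials, and body values are placeholders; the body is assumed to use the same fields as the previous snapshotCopy request):
curl -k -u admin:admin001 -X PUT --header 'content-type:application/json' --header 'accept:application/json' \
  -d '{"targetFilesystem": "objfs", "targetPath": "dir1", "nodeclassName": "cesNodes"}' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/fs1/snapshotCopy/snap1/path/dir1'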
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000025,
"status" : "COMPLETED",
"submitted" : "2020-09-29 21:40:31,217",
"completed" : "2020-09-29 21:40:36,298",
"runtime" : 5081,
"request" : {
"type" : "PUT",
"url" : "/scalemgmt/v2/filesystems/fs1/snapshotCopy/snap1/path/dir1"
},
See also
• “mmcrfs command” on page 315
• “mmchfs command” on page 230
• “mmlsfs command” on page 498
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/{filesystemName}/snapshots request gets information about snapshots in
the specified file system. For more information about the fields in the data structures that are returned,
see the topics “mmcrsnapshot command” on page 337 and “mmlssnapshot command” on page 532.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
snapshots
where
filesystems/filesystemName
Specifies the file system of which the snapshot is taken. Required.
snapshots
Specifies snapshot as the resource of this GET call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"snapshotName":"Snapshot"
The snapshot name.
"filesystemName":"Device"
The file system that is the target of the snapshot.
"filesetName":"Fileset"
For a fileset snapshot, the fileset that is a target of the snapshot.
"oid":"ID"
Internal identifier that is used for paging.
"snapID":"ID"
The snapshot ID.
"status":"Status"
The snapshot status.
"created":"DateTime"
The date and time when the snapshot was created.
"quotas":"Quotas"
Any quotas that are applied to the fileset.
"snapType":"Type"
The AFM type of the snapshot, including "afm_snap", "afm_recovery", "afm_failover",
"afm_rpo", "afm_baserpo", and "Invalid".
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/snapshots?lastId=1001"
},
"snapshots": [
{
"snapshotName": "snap1",
"filesystemName": "gpfs0",
"filesetName": "",
"oid": "123",
"snapID": "5",
"status": "Valid",
"created": "2017-01-09 14.55.37",
"quotas": "string",
"snapType": "string"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/snapshots command creates a snapshot of the
specified file system.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
snapshots
where
filesystems/filesystemName
Specifies the file system of which the snapshot needs to be taken. Required.
snapshots
Specifies snapshot as the resource of this POST call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"snapshotName": ""
}
"snapshotName":
Name of the snapshot to be created.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
Examples
The following example creates a snapshot snap2 of the file system gpfs0.
Request data:
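A call of the following form creates the snapshot (the host address, port, and admin credentials are placeholders):
curl -k -u admin:admin001 -X POST --header 'content-type:application/json' --header 'accept:application/json' \
  -d '{"snapshotName": "snap2"}' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/snapshots'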
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {},
"request": {
"type": "POST",
"url": "/scalemgmt/v2/filesystems/gpfs0/snapshots",
"data": "{"snapshotName": "snap2"}"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE filesystems/{filesystemName}/snapshots/{snapshotName} command deletes
the specified file system snapshot. For more information on deleting snapshots, see “mmdelsnapshot
command” on page 378.
Request URL
Use this URL to delete a global snapshot:
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName/
snapshots/snapshotName
where:
filesystems/filesystemName
Specifies the file system of which the snapshot is deleted. Required.
snapshots/snapshotName
Specifies snapshot to be deleted. Required.
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
  "status": {
    "code": ReturnCode,
    "message": "ReturnMessage"
  },
  "jobs": [
    {
      "result": {
        "commands": "String",
        "progress": "String",
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
"completed":Time"
The time at which the job was completed.
"status":"RUNNING | COMPLETED | FAILED"
Status of the job.
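For example, a call of the following form deletes the snapshot snap1 of the file system gpfs0 and produces a response like the one shown below (the host address, port, and admin credentials are placeholders):
curl -k -u admin:admin001 -X DELETE --header 'accept:application/json' \
  'https://198.51.100.1:443/scalemgmt/v2/filesystems/gpfs0/snapshots/snap1'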
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/filesystems/gpfs0/snapshots/snap1",
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/filesystemName/snapshots/snapshotName request gets information
about a specific file system snapshot. For more information about the fields in the data structures that are
returned, see the topics “mmcrsnapshot command” on page 337 and “mmlssnapshot command” on page
532.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/filesystemName
/snapshots/snapshotName
where
filesystems/filesystemName
Specifies the file system of which the snapshot is taken. Required.
snapshots/snapshotName
Specifies a particular snapshot as the resource of this GET call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"snapshotName":"Snapshot"
The snapshot name.
"filesystemName":"Device"
The file system that is the target of the snapshot.
"filesetName":"Fileset"
For a fileset snapshot, the fileset that is a target of the snapshot.
"oid":"ID"
Internal identifier that is used for paging.
"snapID":"ID"
The snapshot ID.
"status":"Status"
The snapshot status.
"created":"DateTime"
The date and time when the snapshot was created.
"quotas":"Quotas"
Any quotas that are applied to the fileset.
"snapType":"Type"
The AFM type of the snapshot, including "afm_snap", "afm_recovery", "afm_failover",
"afm_rpo", "afm_baserpo", and "Invalid".
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets/myFset1/snapshots/
snap1?lastId=1001"
},
"snapshots": [
{
"snapshotName": "snap1",
"filesystemName": "gpfs0",
"filesetName": "myFset1",
"oid": "123",
"snapID": "5",
"status": "Valid",
"created": "2017-01-09 14.55.37",
"quotas": "string",
"snapType": "string"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST filesystems/{filesystemName}/symlink/{linkPath} request creates a symlink for a
path from a specific file system.
Request URL
https://management API host:port/scalemgmt/v2/filesystems/filesystemName/symlink/linkPath
where:
filesystems/filesystemName
Specifies the name of the file system to which the symlink belongs. Required.
symlink/linkPath
Specifies the symlink path relative to the file system's mount point. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"filesystemName": "File system name",
"filesetName": "Fileset name",
"relativePath": "Relative path"
}
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
Examples
The following example shows how to create a symlink for the file system gpfs0.
Request data:
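A request body of the following form creates the symlink; the values are taken from the request data that is
echoed in the job response below and are illustrative:
{
   "filesystemName": "gpfs0",
   "filesetName": "fset1",
   "relativePath": "mydir"
}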
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000005,
"status" : "RUNNING",
"submitted" : "2017-03-14 15:59:34,180",
"completed" : "N/A",
"request" : {
"data" : {
"filesystemName": "gpfs0",
"filesetName": "fset1",
"relativePath": "mydir"
},
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/symlink/mydir"
},
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE filesystems/{filesystemName}/symlink/{path} request removes a symlink from a
file system.
Request URL
https://management API host:port/scalemgmt/v2/filesystems/filesystemName/symlink/path
where:
filesystems/filesystemName
Specifies the name of the file system from which the symlink must be removed. Required.
symlink/path
The symlink path relative to the file system's mount point.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
None.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
"completed":Time"
The time at which the job was completed.
"status":"RUNNING | COMPLETED | FAILED"
Status of the job.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000006,
"status" : "RUNNING",
"submitted" : "2017-03-14 16:05:30,960",
"completed" : "N/A",
"request" : {
"type" : "DELETE",
"url" : "/scalemgmt/v2/filesystems/gpfs0/symlink/myDir1"
"result" : { }
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/filesystemName/unmount request unmounts a file system. For more
information about the fields in the data structures that are returned, see “mmmount command” on page
537.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/FileSystemName/
unmount
where
filesystems/filesystemName
The file system that is going to be unmounted. Required.
unmount
Specifies the action to be performed on the file system. Required.
Request headers
Content-Type: application/json
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"nodes":"Node name} ",
"remoteCluster": "Remote cluster name",
"force": "True | False"
}
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
Examples
The following example shows how to unmount the file system gpfs0.
Request data:
{
"nodes": "testnode-11",
"remoteCluster": "myRemoteClusterName",
"force": false
}
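As an illustrative sketch, this request body can be submitted with curl; the server address, port, and
credentials are placeholders:
curl -k -u <user>:<password> -X PUT -H "Content-Type: application/json" -H "Accept: application/json" \
   -d '{"nodes": "testnode-11", "remoteCluster": "myRemoteClusterName", "force": false}' \
   "https://<API server>:<port>/scalemgmt/v2/filesystems/gpfs0/unmount"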
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000002,
"status" : "RUNNING",
"submitted" : "2017-03-14 15:50:00,493",
"completed" : "N/A",
"request" : {
"data" : {
"entries" : [
{
"type" : "allow",
"who" : "special:owner@",
"permissions" : "rwmxDaAnNcCos",
"flags" : ""
},
{
"type" : "allow",
"who" : "special:group@",
"permissions" : "rxancs",
"flags" : ""
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT filesystems/filesystemName/watch request enables or disables clustered file system
watch. For more information about the fields in the data structures that are returned, see “mmwatch
command” on page 753.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/FileSystemName/
watch
where
filesystems/filesystemName
The file system for which clustered watch needs to be enabled.
watch
Specifies the action to be performed on the file system.
Request headers
Content-Type: application/json
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
   "action": "{enable | disable}",
   "watchfolderConfig": {
      "description": "Description",
      "eventTypes": "Event types",
      "eventHandler": "Event handler",
      "sinkBrokers": "Sink broker",
      "sinkTopic": "Sink topic",
      "sinkAuthConfig": "Sink authentication config"
   },
   "watchId": "Watch ID"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"action": "Enable or disable file system watch"
Whether to enable or disable file system watch. Possible values are enable | disable.
"watchfolderConfig":
"description": "Description"
Description of the clustered watch.
"eventTypes": "Event types"
Types of events that need to be watched.
"eventHandler": "Event handler"
Type of event handler. Only kafkasink is supported as the event handler.
"sinkBrokers": "Sink broker"
Includes a comma-separated list of broker:port pairs for the sink Kafka cluster, which is the
external Kafka cluster where the events are sent.
"sinkTopic": "Sink topic"
The topic that producers write to in the sink Kafka cluster.
"sinkAuthConfig": "Sink authentication config"
The path to the sink authentication configuration file.
watchId": "Watch ID"
Remote cluster name from where the file system must be watched.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
Examples
The following example shows how to enable the clustered file system watch for the file system gpfs0.
Request data:
{
"action": "enable",
"watchfolderConfig": {
"description": "description",
"eventTypes": "IN_ACCESS,IN_ATTRIB",
"eventHandler": "kafkasink",
"sinkBrokers": "Broker1:Port,Broker2:Port",
"sinkTopic": "topic",
"sinkAuthConfig": "/mnt/gpfs0/sink-auth-config"
},
"watchId": "string"
}
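For illustration, the request data can be saved to a file (here the hypothetical watch-request.json) and
submitted with curl; the server address, port, and credentials are placeholders:
curl -k -u <user>:<password> -X PUT -H "Content-Type: application/json" -H "Accept: application/json" \
   -d @watch-request.json \
   "https://<API server>:<port>/scalemgmt/v2/filesystems/gpfs0/watch"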
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "PUT",
"url": "/scalemgmt/v2/filesystems/gpfs0/watch",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET filesystems/{filesystemName}/watches request gets a list of clustered watches in a file
system. For more information about the fields in the data structures that are returned, see “mmwatch
command” on page 753.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/filesystems/
{filesystemName}/watches
where
filesystems/filesystemName
Specifies the file system to which the watches belong.
watches
Specifies clustered watches as the target of the GET request.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging":
Paging details.
"next": "URL",
The URL to retrieve the next page. Paging is enabled when more than 1000 objects would be
returned by the query.
"fields": "Fields",
The fields used in the original request.
"filter": "Filters",
The filter used in the original request.
"baseUrl": "Base URL",
The URL of the request without any parameters.
"lastId": "ID of the last element",
The ID of the last element that can be used to retrieve the next elements.
"watches":
An array of elements that describe watches. Each element describes one watch.
Examples
The following example gets information about the clustered watches in the file system gpfs0.
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
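A curl invocation of the following form, with placeholder server address and credentials, issues this
request:
curl -k -u <user>:<password> -X GET -H "Accept: application/json" \
   "https://<API server>:<port>/scalemgmt/v2/filesystems/gpfs0/watches"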
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
Related reference
“mmwatch command” on page 753
Administers clustered watch folder watches.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET info request gets information about the IBM Spectrum Scale management API. The information
includes the version number, the resources that are supported, and the HTTP methods that are supported
for each resource.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/info
where
info
Is the information resource.
Request headers
Content-Type: application/json
Accept: application/json
Request data
No request data.
Response data
{
"status": {
"message":"ReturnMessage",
"code":"ReturnCode"
},
"info":
{
"restAPIVersion":"Version",
"name": "API name",
"path": "Resource path"
}
}
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"info":
Information about the REST API server.
"restAPIVersion":"Version"
The IBM Spectrum Scale management API version.
Examples
The following example gets information about the REST API server.
Request URL:
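For illustration, the request can be issued with curl; the server address, port, and credentials are
placeholders:
curl -k -u <user>:<password> -X GET -H "Accept: application/json" \
   "https://<API server>:<port>/scalemgmt/v2/info"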
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"info" : {
"name" : "Spectrum Scale REST Management",
"restApiVersion" : "2.1.0",
"serverVersion" : "5.0.0-0_170818.165000",
"paths" : {
"/access" : [ "GET", "POST" ],
"/access/status" : [ "GET" ],
"/ces/addresses" : [ "GET" ],
"/ces/addresses/{cesAddress}" : [ "GET" ],
"/ces/services" : [ "GET" ],
"/ces/services/{service}" : [ "GET" ],
"/cluster" : [ "GET" ],
"/config" : [ "GET" ],
"/filesystems" : [ "GET" ],
"/filesystems/{filesystemName}" : [ "GET" ],
"/filesystems/{filesystemName}/acl/{path}" : [ "GET", "PUT" ],
"/filesystems/{filesystemName}/afm/state" : [ "GET" ],
"/filesystems/{filesystemName}/disks" : [ "GET" ],
"/filesystems/{filesystemName}/disks/{diskName}" : [ "GET" ],
"/filesystems/{filesystemName}/filesets" : [ "GET", "POST" ],
"/filesystems/{filesystemName}/filesets/{filesetName}" : [ "GET", "PUT", "DELETE" ],
"/filesystems/{filesystemName}/filesets/{filesetName}/link" : [ "POST", "DELETE" ],
"/filesystems/{filesystemName}/filesets/{filesetName}/psnaps" : [ "POST" ],
"/filesystems/{filesystemName}/filesets/{filesetName}/psnaps/{snapshotName}" : [ "DELETE" ],
"/filesystems/{filesystemName}/filesets/{filesetName}/quotas" : [ "GET", "POST" ],
"/filesystems/{filesystemName}/filesets/{filesetName}/snapshots" : [ "GET", "POST" ],
"/filesystems/{filesystemName}/filesets/{filesetName}/snapshots/{snapshotName}" : [ "GET",
"DELETE" ],
"/filesystems/{filesystemName}/owner/{path}" : [ "GET", "PUT" ],
"/filesystems/{filesystemName}/quotas" : [ "GET", "POST" ],
"/filesystems/{filesystemName}/snapshots" : [ "GET", "POST" ],
"/filesystems/{filesystemName}/snapshots/{snapshotName}" : [ "GET", "DELETE" ],
"/info" : [ "GET" ],
"/jobs" : [ "GET" ],
"/jobs/{jobId}" : [ "GET", "DELETE" ],
"/nfs/exports" : [ "GET", "POST" ],
"/nfs/exports/{exportPath}" : [ "GET", "DELETE", "PUT" ],
"/nodeclasses" : [ "GET", "POST" ],
"/nodeclasses/{nodeclassName}" : [ "GET", "DELETE", "PUT" ],
"/nodes" : [ "GET", "POST" ],
"/nodes/{name}" : [ "GET", "DELETE" ],
"/nodes/{name}/health/events" : [ "GET" ],
"/nodes/{name}/health/states" : [ "GET" ],
"/nsds" : [ "GET" ],
"/nsds/{nsdName}" : [ "GET" ],
"/perfmon/data" : [ "GET" ],
"/smb/shares" : [ "GET", "POST" ],
"/smb/shares/{shareName}" : [ "GET", "DELETE", "PUT" ],
"/thresholds" : [ "GET", "POST" ],
"/thresholds/{name}" : [ "GET", "DELETE" ]
}
},
"status" : {
"code" : 200,
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET jobs request gets information about the asynchronous jobs.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/jobs
where
jobs
Specifies running jobs as the resource of this GET call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "paging": {
      "next": "URL"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
Examples
The following example gets information about the jobs.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
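As a sketch, with placeholder server address and credentials:
curl -k -u <user>:<password> -X GET -H "Accept: application/json" \
   "https://<API server>:<port>/scalemgmt/v2/jobs"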
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000001,
"status" : "COMPLETED",
"submitted" : "2017-03-21 15:49:20,100",
"completed" : "2017-03-21 15:49:24,251"
}, {
"jobId" : 1000000000002,
"status" : "COMPLETED",
"submitted" : "2017-03-21 15:52:13,776",
"completed" : "2017-03-21 15:52:16,030"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Using the field parameter ":all:" returns entire details of the jobs. For example:
{
"jobs" : [ {
"jobId" : 1000000000001,
"status" : "COMPLETED",
"submitted" : "2017-03-21 15:49:20,100",
"completed" : "2017-03-21 15:49:24,251",
"request" : {
"data" : {
"snapshotName" : "mySnap1"
},
"type" : "POST",
"url" : "/scalemgmt/v2/filesystems/gpfs0/snapshots"
},
"result" : {
"progress" : [ ],
"commands" : [ "mmcrsnapshot 'gpfs0' 'mySnap1' " ],
"stdout" : [ "EFSSG0019I The snapshot mySnap1 has been successfully created." ],
"stderr" : [ ],
"exitCode" : 0
}
}, {
"jobId" : 1000000000002,
"status" : "COMPLETED",
"submitted" : "2017-03-21 15:52:13,776",
"completed" : "2017-03-21 15:52:16,030",
"request" : {
"data" : {
"snapshotName" : "mySnap1"
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE Jobs/{jobId} request cancels a job.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/jobs/{jobId}
where
jobs/{jobId}
Specifies the job to be canceled. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
Examples
The following API command cancels the job 1234.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
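For illustration, the job can be canceled with a curl call of this form; the server address, port, and
credentials are placeholders:
curl -k -u <user>:<password> -X DELETE -H "Accept: application/json" \
   "https://<API server>:<port>/scalemgmt/v2/jobs/1234"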
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/jobs/1234",
},
"jobId": "1234",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET jobs/jobID request gets information about the asynchronous job that is specified in the
request.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/jobs/jobID
where
jobs
Specifies running jobs as the resource of this GET call. Required.
jobs/jobID
Specifies the job about which you need to get information. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "paging": {
      "next": "URL"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
Known limitation
If there is more than one GUI node in a cluster, you must use the filter parameter to get the details of a
particular job. For example, suppose that a job is submitted on GUI2 and returns jobId=2000000000003.
If you query GUI1 for that job ID by using /jobs/2000000000003, the request returns "Invalid request"
because the job was submitted to the other GUI and cannot be found there. As a workaround, use "/jobs?
filter=jobId=2000000000003" instead of "/jobs/2000000000003" in the request URL to retrieve the job
correctly from the other GUI.
Examples
The following example gets information about the job 12345.
Request URL:
Note: If there is more than one GUI node in the cluster, use the following request URL to retrieve the
details of the job:
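As an illustrative sketch, either of the following curl calls retrieves the job; the second, filter-based form
also works when the job was submitted on another GUI node. The server address, port, and credentials are
placeholders:
curl -k -u <user>:<password> -X GET -H "Accept: application/json" \
   "https://<API server>:<port>/scalemgmt/v2/jobs/12345"

curl -k -u <user>:<password> -X GET -H "Accept: application/json" \
   "https://<API server>:<port>/scalemgmt/v2/jobs?filter=jobId=12345"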
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets?lastId=1001"
},
"jobs": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "GET",
"url": "/scalemgmt/v2/filesystems/gpfs0/filesets",
"data": "{\"config\":{\"filesetName\":\"restfs1001\",\"owner\":\"root\",\"path\":
\"/mnt/gpfs0/rest1001\",\"permissions\":\"555\"}"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nfs/exports request gets information about NFS exports. For more information about the
fields in the data structures that are returned, see “mmnfs command” on page 552.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nfs/exports
where
nfs/exports
Specifies NFS export as the resource. Required.
Request headers
Accept: application/json
Request data
No request data.
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
The following list of attributes is available in the response data:
{
"status":
{
"code": ReturnCode
"message": "ReturnMessage",
}
"NfsExports":
[
"status":
Return status.
"code": ReturnCode,
The HTTP status code that was returned by the request.
"message": "ReturnMessage"
The return message.
"exports":
An array of information about NFS exports. The array contains elements that describe the NFS
exports. For more information about the fields in this structure, see the links at the end of this topic.
"filesystemName": "FileSystemName"
The name of the file system to which the export belongs.
"path": "Path",
Specifies the path for the export.
"pseudoPath": "Pseudo path"
Specifies the path name that is used by the NFSv4 client to locate the directory in the server's file
system tree.
nfsClients
"clientName": "String"
The host name or IP address of the NFS client that is allowed to access the export.
"exportID": "Path"
The internal unique ID of the export.
"access_type": "none | RW | RO | MDONLY | MDONLY_RO"
Specifies the type of the access for the client.
"squash": "root | root_squash | all | all_squash | allsquash | no_root_squash | none"
Specifies whether the squashing mechanism is applied to the connecting client.
"anonUid": "ID"
This option explicitly sets the UID of the anonymous account.
"anonGid": "ID"
This option explicitly sets the GID of the anonymous account.
"privilegedPort": "True | False"
This option specifies whether a client that uses an ephemeral port must be rejected.
"sectype": "sys | krb5 | krb5i | krb5p | None"
The supported authentication method for the client.
Examples
The following example gets information about the NFS exports that are configured in the system:
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
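For illustration, with placeholder server address and credentials:
curl -k -u <user>:<password> -X GET -H "Accept: application/json" \
   "https://<API server>:<port>/scalemgmt/v2/nfs/exports"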
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"nfsexports" : [ {
"path" : "/mnt/gpfs0"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Using the field parameter ":all:" returns entire details of the NFS exports as shown in the following
example:
{
"nfsexports" : [ {
"delegations" : "NONE",
"filesetName" : "",
"filesystemName" : "gpfs0",
"path" : "/mnt/gpfs0"
"pseudoPath" : "/mnt/test"
},
"nfsClients" : [ {
"access_type" : "RW",
"anonGid" : "-2",
"anonUid" : "-2",
"clientName" : "198.51.100.123",
"delegations" : "NONE",
"exportid" : "1",
"manageGids" : false,
"nfsCommit" : false,
"privilegedPort" : false,
"protocols" : "3,4",
"sectype" : "SYS",
"squash" : "ROOT_SQUASH",
"transportProtocol" : "TCP"
} ]
Related reference
“mmnfs command” on page 552
Manages NFS exports and configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST nfs/exports request creates a new NFS export. For more information about the fields in the
data structures that are returned, see “mmnfs command” on page 552.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nfs/exports
where
nfs/exports
Specifies the NFS export as the resource. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"path": "Path",
"pseudoPath": "Pseudo path",
"nfsclients": "Client details"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
Examples
The following example creates an NFS export.
Request URL:
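As a sketch, the export can be created with curl; the body is taken from the request data that is echoed in
the job response below, and the server address, port, and credentials are placeholders:
curl -k -u <user>:<password> -X POST -H "Content-Type: application/json" -H "Accept: application/json" \
   -d '{"path": "/mnt/gpfs0/fset1", "pseudoPath": "/mnt/test/",
        "nfsClients": ["198.51.100.111:443(access_type=ro,squash=no_root_squash)"]}' \
   "https://<API server>:<port>/scalemgmt/v2/nfs/exports"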
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "POST",
"url": "/scalemgmt/v2/filesystems/gpfs0/filesets",
"data": "{
"path": "/mnt/gpfs0/fset1",
"pseudoPath": "/mnt/test/",
"nfsClients": [ ''198.51.100.111:443(access_type=ro,squash=no_root_squash)'',
''198.51.100.110:443(access_type=rw,squash=no_root_squash)'' ]}"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Related reference
“mmnfs command” on page 552
Manages NFS exports and configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nfs/exports/exportPath request gets information about an NFS export. For more
information about the fields in the data structures that are returned, see “mmnfs command” on page 552.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nfs/exports
where
nfs/exports
Specifies NFS export as the resource. Required.
exportPath
Specifies the NFS export about which you need the details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
The following list of attributes is available in the response data:
"status":
Return status.
"code": ReturnCode,
The HTTP status code that was returned by the request.
"message": "ReturnMessage"
The return message.
"exports":
An array of information about NFS exports. The array contains elements that describe the NFS
exports. For more information about the fields in this structure, see the links at the end of this topic.
config":
Manages NFS configuration in a cluster.
"filesystemName": "FileSystemName"
The name of the file system to which the export belongs.
"path": "Path",
Specifies the path for the export.
"delegations": "Delegations"
Specifies what delegate file operations are permitted.
"pseudoPath": "Pseudo path"
Specifies the path name that is used by the NFSv4 client to locate the directory in the server's file
system tree.
nfsClients
"clientName": "String"
The host name or IP address of the NFS client that is allowed to access the export.
"exportID": "Path"
The internal unique ID of the export.
"access_type": "none | RW | RO | MDONLY | MDONLY_RO"
Specifies the type of the access for the client.
Examples
The following example gets information about the NFS export /mnt/gpfs0/fset1 that is configured in
the system.
Request URL:
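For illustration, note that the export path must be URL-encoded in the request; the server address, port, and
credentials are placeholders:
curl -k -u <user>:<password> -X GET -H "Accept: application/json" \
   "https://<API server>:<port>/scalemgmt/v2/nfs/exports/%2Fmnt%2Fgpfs0%2Ffset1"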
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/nfs/exports/mnt%2Fgpfs0%2Ffset1?lastId=1001"
},
"exports": [
{
"config": {
"filesystemName": "gpfs0",
"path": "/mnt/gpfs0/fset1",
"delegations": "NONE",
"pseudoPath" : "/mnt/test"
},
"nfsClients": [
{
"clientName": "198.51.100.8",
"exportid": "1",
"access_type": "RO",
"squash": "root_squash",
"anonUid": "1",
"anonGid": "1",
"privilegedPort": "false",
"sectype": "sys",
Related reference
“mmnfs command” on page 552
Manages NFS exports and configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT nfs/exports/exportPath request modifies an existing NFS export. For more information
about the fields in the data structures that are returned, see “mmnfs command” on page 552.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nfs/exports/exportPath
where
nfs/exports
Specifies the NFS export as the resource. Required.
exportPath
Specifies the NFS export that you need to modify. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
   "nfsadd": "New client",
   "nfschange": "Changed client",
   "nfsremove": "Client to remove",
   "nfsposition": "Position"
}
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
Examples
The following example modifies the NFS export /mnt/gpfs0/fset1 that is configured in the system.
Request URL:
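As an illustrative sketch, with the export path URL-encoded and the body drawn from the request data that
is echoed in the job response below (placeholder server address and credentials):
curl -k -u <user>:<password> -X PUT -H "Content-Type: application/json" -H "Accept: application/json" \
   -d '{"nfsadd": "198.51.100.10:443(sectype=sys,krb5)", "nfsremove": "198.51.100.21:443"}' \
   "https://<API server>:<port>/scalemgmt/v2/nfs/exports/%2Fmnt%2Fgpfs0%2Ffset1"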
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {},
"request": {
"type": "PUT",
"url": "/scalemgmt/v2/nfs/exports/mnt%2Fgpfs0%2Ffs1",
"data": "{
"nfsadd": "198.51.100.10:443(sectype=sys,krb5)",
"nfschange": "198.51.100.11:443(sectype=sys,krb5)",
"nfsremove": "198.51.100.21:443",
"nfsposition": "1""}"
},
Related reference
“mmnfs command” on page 552
Manages NFS exports and configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE nfs/exports/exportPath command deletes the specified NFS export. For more
information, see “mmnfs command” on page 552.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nfs/exports/exportPath
where:
nfs/exports
Specifies NFS export as the resource. Required.
exportPath
Specifies the NFS export to be deleted. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
"completed":Time"
The time at which the job was completed.
"status":"RUNNING | COMPLETED | FAILED"
Status of the job.
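The following example deletes the NFS export /mnt/gpfs0/fset1. As a sketch, the request can be issued
with curl; the export path is URL-encoded, and the server address, port, and credentials are placeholders:
curl -k -u <user>:<password> -X DELETE -H "Accept: application/json" \
   "https://<API server>:<port>/scalemgmt/v2/nfs/exports/%2Fmnt%2Fgpfs0%2Ffset1"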
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/nfs/exports/%2Fmnt%2Fgpfs0%2Ffset1",
"data": "{}"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Related reference
“mmnfs command” on page 552
Manages NFS exports and configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nodeclasses request gets information about all node classes and the nodes that are part of
each node class. For more information about the fields in the data structures that are returned, see the
topics “mmcrnodeclass command” on page 330, “mmdelnodeclass command” on page 374,
“mmchnodeclass command” on page 248, and “mmlsnodeclass command” on page 512.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodeclasses
where
nodeclasses
Specifies node classes as the resource of this GET call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request data
No request data.
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "paging": {
      "next": "URL"
   },
   "nodeClasses": [
      {
         "nodeclassName": "Name",
         "members": "List of nodes",
         "type": "SYSTEM | USER"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"nodeClasses":
"nodeclassName":"Name"
The name of the node class.
"members":"List of nodes"
The list of nodes that are part of the node class.
"type": "SYSTEM | USER"
Indicates whether the node class is system-defined or user-defined.
Examples
The following example gets information about the node classes that are available in the cluster.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
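For illustration, with placeholder server address and credentials:
curl -k -u <user>:<password> -X GET -H "Accept: application/json" \
   "https://<API server>:<port>/scalemgmt/v2/nodeclasses"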
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
The nodeclasses array returns 10 objects in the following example. Each object contains details about one
node class.
{
"nodeclasses" : [ {
"nodeclassName" : "aixNodes",
"type" : "SYSTEM"
}, {
"nodeclassName" : "all",
"type" : "SYSTEM"
}, {
"nodeclassName" : "cesNodes",
"type" : "SYSTEM"
}, {
"nodeclassName" : "clientLicense",
"type" : "SYSTEM"
}, {
"nodeclassName" : "clientNodes",
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST nodeclasses request creates a node class and assigns nodes to it. For more information about
the fields in the data structures that are returned, see the topics “mmcrnodeclass command” on page
330, “mmdelnodeclass command” on page 374, “mmchnodeclass command” on page 248, and
“mmlsnodeclass command” on page 512.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodeclasses
where
nodeclasses
Specifies node class as the target. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
nodeClasses:
{
"nodeclassName": "Name",
"members": "[node1Name, node2Name, node3Name]",
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"nodeclassName":"Name"
The name of the node class.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
Examples
The following example creates a node class named cesNodes.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
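As a sketch, the node class can be created with curl; the body is drawn from the request data that is echoed
in the job response below, and the server address, port, and credentials are placeholders:
curl -k -u <user>:<password> -X POST -H "Content-Type: application/json" -H "Accept: application/json" \
   -d '{"nodeclassName": "cesNodes", "members": ["mari-11", "cloudOne1"]}' \
   "https://<API server>:<port>/scalemgmt/v2/nodeclasses"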
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "POST",
"url": "/scalemgmt/v2/nodeclasses",
"data": "{
"nodeclassName": "cesNodes",
"members": "['mari-11', 'cloudOne1]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nodeclasses/{nodeclassName} request gets information about a specific node class and the
nodes that are part of that node class. For more information about the fields in the data structures that are
returned, see the topics “mmcrnodeclass command” on page 330, “mmdelnodeclass command” on page
374, “mmchnodeclass command” on page 248, and “mmlsnodeclass command” on page 512.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodeclasses/nodeclassName
where
nodeclasses/nodeclassName
Specifies a particular node class as the resource of this GET call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request data
No request data.
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "paging": {
      "next": "URL"
   },
   "nodeClasses": [
      {
         "nodeclassName": "Name",
         "members": "List of nodes",
         "type": "SYSTEM | USER"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"nodeClasses":
"nodeclassName":"Name"
The name of the node class.
"members":"List of nodes"
The list of nodes that are part of the node class.
"type": "Type
Indicates whether the node class is system-defined or user-defined.
Examples
The following example gets information about the node class cesNodes.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
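A curl call of this form issues the request (placeholders for server address and credentials):
curl -k -u <user>:<password> -X GET -H "Accept: application/json" \
   "https://<API server>:<port>/scalemgmt/v2/nodeclasses/cesNodes"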
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"nodeClasses": [
{
"nodeclassName": "cesNodes",
"members": "[mari-12.localnet.com,mari-13.localnet.com]",
"type": "SYSTEM"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE nodeclasses/{nodeclassName} request deletes a specific node class. For more information about the fields in
the data structures that are returned, see the topics “mmcrnodeclass command” on page 330,
“mmdelnodeclass command” on page 374, “mmchnodeclass command” on page 248, and
“mmlsnodeclass command” on page 512.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodeclasses/nodeclassName
where
nodeclasses/nodeclassName
Specifies the node class to be deleted. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
Examples
The following example deletes the node class named cesNodes.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
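As a sketch, with placeholder server address and credentials:
curl -k -u <user>:<password> -X DELETE -H "Accept: application/json" \
   "https://<API server>:<port>/scalemgmt/v2/nodeclasses/cesNodes"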
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/nodeclasses/cesNodes",
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT nodeclasses/{nodeclassName} request changes the member nodes of a specific node class. For more information
about the fields in the data structures that are returned, see the topics “mmcrnodeclass command” on
page 330, “mmdelnodeclass command” on page 374, “mmchnodeclass command” on page 248, and
“mmlsnodeclass command” on page 512.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodeclasses/nodeclassName
where
nodeclasses/nodeclassName
Specifies the node class to be modified. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"operation": "Operation",
"members": "['node1', 'node2',..]"
}
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
Examples
The following example modifies the member nodes in the node class cesNodes.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
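For illustration, the membership change can be submitted with curl; the body is taken from the request data
that is echoed in the job response below, and the server address, port, and credentials are placeholders:
curl -k -u <user>:<password> -X PUT -H "Content-Type: application/json" -H "Accept: application/json" \
   -d '{"operation": "add", "members": ["mari-11", "CloudOne"]}' \
   "https://<API server>:<port>/scalemgmt/v2/nodeclasses/cesNodes"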
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "PUT",
"url": "/scalemgmt/v2/nodeclasses/cesNodes",
"data": "{
"operation": "add",
"members": "['mari-11', 'CloudOne']"
},
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nodes request gets information about all nodes in the cluster. For more information about the
fields in the data structures that are returned, see the topics “mmchnode command” on page 241 and
“mmgetstate command” on page 425.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes
where
nodes
Specifies nodes as the resource of this GET call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode",
"message":"ReturnMessage"
},
"paging":
{
"next": "URL"
},
"nodes": [
{
"adminNodename":"IP or host name",
"nodeNumber":"Node ID",
"config":
{
"adminLoginName":"Admin login name",
"designatedLicense":"Designated license",
"requiredLicense":"Required license",
},
"status":
{
"osName":"Operating system",
"nodeState":"Health status",
"gpfsState":"GPFS status",
"productVersion":"Version",
},
"network":
{
"adminIPAddress":"IP address or host name",
"daemonNodeName":"GPFS daemon node name",
"daemonIPAddress":"IP address",
"getcnfsNodeName":"Host name used by cNFS",
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"nodes":
"adminNodeName":"IP or host name"
The host name that is used by the GPFS admin network.
"nodeNumber":"Node ID"
The GPFS node ID.
"config":
"adminLoginName":"Admin login name"
The name of the admin login.
"designatedLicense":"Designated license"
The license this node is running on.
"requiredLicense":"client | FPO | server"
Controls the type of GPFS required license that is associated with the nodes in the cluster.
"status":
"osName":"Operating system"
The name of the operating system that is running on this node.
"nodeState":"Health status"
The state of the node as reported by the mmhealth node show command.
"gpfsState":"GPFS status"
The state of GPFS on this node as reported by the mmhealth node show command.
Examples
The following example gets information about the nodes.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
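A minimal sketch of how this GET request might be issued with the Python requests library; the host name and credentials are placeholders:
import requests

# GET the nodes resource; pass params={"fields": ":all:"} to return the complete node details.
response = requests.get(
    "https://gui-host.example.com:443/scalemgmt/v2/nodes",
    auth=("admin", "password"),
    headers={"Accept": "application/json"},
    verify=False,  # adjust certificate verification to match your API server setup
)
print(response.json())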
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"nodes" : [ {
"adminNodeName" : "mari-11.localnet.com"
}, {
"adminNodeName" : "mari-12.localnet.com"
}, {
"adminNodeName" : "mari-13.localnet.com"
}, {
"adminNodeName" : "mari-14.localnet.com"
}, {
"adminNodeName" : "mari-15.localnet.com"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Using the field parameter ":all:" returns the complete details of the nodes that are available in the cluster. For
example:
{
"nodes" : [ {
"nodeNumber" : 1,
"adminNodeName" : "mari-11.localnet.com",
"config" : {
"adminLoginName" : "root",
"designatedLicense" : "server",
"requiredLicense" : "server"
},
"status" : {
"gpfsState" : "HEALTHY",
"nodeState" : "HEALTHY",
"osName" : "Red Hat Enterprise Linux Server 7.2 (Maipo)",
"productVersion" : "4.2.3.0"
},
"network" : {
"adminIPAddress" : "198.52.100.11",
"daemonIPAddress" : "198.52.100.11",
"daemonNodeName" : "mari-11.localnet.com"
},
"roles" : {
"cesNode" : false,
"cloudGatewayNode" : false,
"cnfsNode" : false,
"designation" : "quorum",
"gatewayNode" : false,
"managerNode" : false,
"otherNodeRoles" : "perfmonNode",
"quorumNode" : true,
"snmpNode" : false
}
}, {
"nodeNumber" : 2,
"adminNodeName" : "mari-12.localnet.com",
"config" : {
"adminLoginName" : "root",
"designatedLicense" : "server",
"requiredLicense" : "server"
},
"status" : {
"cloudGatewayNode" : false,
"cnfsNode" : false,
"designation" : "quorum",
"gatewayNode" : false,
"managerNode" : false,
"otherNodeRoles" : "perfmonNode,cesNode,cloudNodeMarker",
"quorumNode" : true,
"snmpNode" : false
},
"cesInfo" : {
"cesGroup" : "",
"cesIpList" : "198.51.100.43,198.51.100.22,198.51.100.28,198.51.100.34,198.51.100.40,
198.51.100.46,198.51.100.19,198.51.100.25,198.51.100.31,198.51.100.37",
"cesState" : "e",
"ipAddress" : "198.52.100.12"
}
}, {
"nodeNumber" : 3,
"adminNodeName" : "mari-13.localnet.com",
"config" : {
"adminLoginName" : "root",
"designatedLicense" : "server",
"requiredLicense" : "server"
},
"status" : {
"gpfsState" : "HEALTHY",
"nodeState" : "HEALTHY",
"osName" : "Red Hat Enterprise Linux Server 7.2 (Maipo)",
"productVersion" : "4.2.3.0"
},
"network" : {
"adminIPAddress" : "198.52.100.13",
"daemonIPAddress" : "198.52.100.13",
"daemonNodeName" : "mari-13.localnet.com"
},
"roles" : {
"cesNode" : true,
"cloudGatewayNode" : false,
"cnfsNode" : false,
"designation" : "quorum",
"gatewayNode" : false,
"managerNode" : false,
"otherNodeRoles" : "perfmonNode,cesNode,cloudNodeMarker",
"quorumNode" : true,
"snmpNode" : false
},
"cesInfo" : {
"cesGroup" : "",
"cesIpList" : "198.51.100.41,198.51.100.47,198.51.100.20,198.51.100.26,198.51.100.32,
198.51.100.38,198.51.100.44,198.51.100.23,198.51.100.29,198.51.100.35",
"cesState" : "e",
"ipAddress" : "198.52.100.13"
}
}, {
"nodeNumber" : 4,
"adminNodeName" : "mari-14.localnet.com",
"config" : {
"adminLoginName" : "root",
"designatedLicense" : "server",
"requiredLicense" : "server"
},
"status" : {
"gpfsState" : "HEALTHY",
"nodeState" : "HEALTHY",
"osName" : "Red Hat Enterprise Linux Server 7.2 (Maipo)",
"productVersion" : "4.2.3.0"
},
"network" : {
"cloudGatewayNode" : false,
"cnfsNode" : false,
"designation" : "client",
"gatewayNode" : false,
"managerNode" : false,
"otherNodeRoles" : "perfmonNode,cesNode,cloudNodeMarker",
"quorumNode" : false,
"snmpNode" : false
},
"cesInfo" : {
"cesGroup" : "",
"cesIpList" : "198.51.100.45,198.51.100.24,198.51.100.30,198.51.100.36,198.51.100.42,
198.51.100.48,198.51.100.21,198.51.100.27,198.51.100.33,198.51.100.39",
"cesState" : "e",
"ipAddress" : "198.52.100.14"
}
}, {
"nodeNumber" : 5,
"adminNodeName" : "mari-15.localnet.com",
"config" : {
"adminLoginName" : "root",
"designatedLicense" : "server",
"requiredLicense" : "server"
},
"status" : {
"gpfsState" : "HEALTHY",
"nodeState" : "HEALTHY",
"osName" : "Red Hat Enterprise Linux Server 7.2 (Maipo)",
"productVersion" : "4.2.3.0"
},
"network" : {
"adminIPAddress" : "198.52.100.15",
"daemonIPAddress" : "198.52.100.15",
"daemonNodeName" : "mari-15.localnet.com"
},
"roles" : {
"cesNode" : true,
"cloudGatewayNode" : false,
"cnfsNode" : false,
"designation" : "client",
"gatewayNode" : false,
"managerNode" : false,
"otherNodeRoles" : "perfmonNode,cesNode,cloudNodeMarker",
"quorumNode" : false,
"snmpNode" : false
},
"cesInfo" : {
"cesGroup" : "",
"cesIpList" : "198.51.100.10,198.51.100.14,198.51.100.12,198.51.100.18,198.51.100.16,
198.51.100.49,198.51.100.11,198.51.100.15,198.51.100.13,198.51.100.17",
"cesState" : "e",
"ipAddress" : "198.52.100.15"
}
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST nodes request adds one or more nodes to the cluster. For more information about the fields in
the data structures that are returned, see the topic “mmaddnode command” on page 35.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes
where
nodes
Specifies nodes as the target of the operation. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"nodesDesc": "Descriptions",
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"nodesDesc":"Descriptions"
Descriptions of the nodes to be added to the cluster.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String"
Array of commands that are run in this job.
"progress":"String"
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
Examples
The following API command adds two nodes to the cluster.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
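A minimal sketch of how the add-node request might be issued with the Python requests library; the host name and credentials are placeholders, and the shape of the nodesDesc value (here a list of node descriptors) simply mirrors the example data shown below:
import requests

# POST the node descriptors to the nodes collection (placeholder host and credentials).
response = requests.post(
    "https://gui-host.example.com:443/scalemgmt/v2/nodes",
    json={"nodesDesc": ["mari-16:manager-quorum", "mari-17::mari-17_admin"]},
    auth=("admin", "password"),
    headers={"Accept": "application/json"},
    verify=False,  # adjust certificate verification to match your API server setup
)
print(response.status_code, response.json())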
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "POST",
"url": "/scalemgmt/v2/nodes",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nodes/afm/mapping request lists all node mappings for AFM and cloud object storage (COS).
For more information about the fields in the data structures that are returned, see “mmafmconfig
command” on page 45.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/afm/mapping
where
nodes/afm/mapping
Specifies the target of the request. Required.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"paging":
{
"next": "URL",
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging":
Paging details.
"next": "URL",
The URL to retrieve the next page. Paging is enabled when more than 1000 objects would be
returned by the query.
"fields": "Fields",
The fields used in the original request.
"filter": "Filters",
The filter used in the original request.
"baseUrl": "Base URL",
The URL of the request without any parameters.
"lastId": "ID of the last element",
The ID of the last element that can be used to retrieve the next elements.
"mappings":
An array of elements that describe one mapping.
"mapName" : "Map name"
Name of the mapping.
"exportMap" : "Export server and gateway node map details"
Lists the export server and gateway node maps.
Examples
The following example lists all node mappings for AFM and COS.
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
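A minimal sketch of how the mapping query might be issued with the Python requests library; the host name and credentials are placeholders:
import requests

# GET all AFM to cloud object storage node mappings.
response = requests.get(
    "https://gui-host.example.com:443/scalemgmt/v2/nodes/afm/mapping",
    auth=("admin", "password"),
    headers={"Accept": "application/json"},
    verify=False,
)
print(response.json())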
Response data:
{
"status": {
"code": 200,
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets?lastId=10001",
"fields": "period,restrict,sensorName",
"filter": "usedInodes>100,maxInodes>1024",
"baseUrl": "/scalemgmt/v2/nodes/afm/mapping",
"lastId": 10001
},
"mappings": [
{
"mapName": "myMap",
"exportMap": "[ testnode-11/node-13, testnode-11/node-14]"
}
]
}
Related reference
“mmafmconfig command” on page 45
Can be used to manage home caching behavior and mapping of gateways and home NFS exported
servers.
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST nodes/afm/mapping request creates a node mapping for AFM and Cloud Object Storage. For
more information about the fields in the data structures that are returned, see “mmafmcosconfig
command” on page 50.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/afm/mapping
where
/nodes/afm/mapping
Specifies the target of the request.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"mapName": "Map name",
"exportMap": "Export server or gateway node map details"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"mapName" : "Map name"
Name of the mapping.
"exportMap" : "Export server and gateway node map details"
Lists the export server and gateway node maps.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String"
Array of commands that are run in this job.
"progress":"String"
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero value denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
Examples
The following example shows how to create a node mapping for AFM and Cloud Object Storage.
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
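A minimal sketch of how the create-mapping request might be issued with the Python requests library. The host name and credentials are placeholders, and the exportMap value is illustrative only; it mirrors the mapping that is shown in the GET example earlier in this chapter:
import requests

# POST a new mapping named myMap (placeholder host, credentials, and map contents).
response = requests.post(
    "https://gui-host.example.com:443/scalemgmt/v2/nodes/afm/mapping",
    json={"mapName": "myMap", "exportMap": "testnode-11/node-13,testnode-11/node-14"},
    auth=("admin", "password"),
    headers={"Accept": "application/json"},
    verify=False,
)
print(response.status_code, response.json())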
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "POST",
"url": "/scalemgmt/v2/nodes/afm/mapping",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Related reference
“mmafmcosconfig command” on page 50
Creates and displays an AFM to cloud object storage fileset.
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE nodes/afm/mapping/{mappingName} request deletes a node mapping for AFM and Cloud
Object Storage. For more information about the fields in the data structures that are returned, see
“mmafmconfig command” on page 45.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/afm/
mapping/{mappingName}
where
nodes/afm/mapping
Specifies the target of the request.
{mappingName}
Specifies the map to be deleted.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result":
{
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String"
Array of commands that are run in this job.
"progress":"String"
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero value denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
"submitted":"Time"
The time at which the job was submitted.
"completed":"Time"
The time at which the job was completed.
"status":"RUNNING | COMPLETED | FAILED"
Status of the job.
Examples
The following example deletes a node mapping for AFM and Cloud Object Storage.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
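A minimal sketch of how the delete request might be issued with the Python requests library; the host name, credentials, and the mapping name myMap are placeholders:
import requests

# DELETE the mapping named myMap (placeholder host and credentials).
response = requests.delete(
    "https://gui-host.example.com:443/scalemgmt/v2/nodes/afm/mapping/myMap",
    auth=("admin", "password"),
    headers={"Accept": "application/json"},
    verify=False,
)
print(response.status_code, response.json())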
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/nodes/afm/mapping/myMap",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Related reference
“mmafmconfig command” on page 45
Can be used to manage home caching behavior and mapping of gateways and home NFS exported
servers.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nodes/afm/mapping/{mappingName} request gets the details of the specified node mapping for AFM
and Cloud Object Storage. For more information about the fields in the data structures that are returned, see
“mmafmconfig command” on page 45.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/afm/mapping/
{mappingName}
where
nodes/afm/mapping
Specifies the target of the request.
mappingName
Name of the map.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"paging":
{
"next": "URL",
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging":
Paging details.
"next": "URL",
The URL to retrieve the next page. Paging is enabled when more than 1000 objects would be
returned by the query.
"fields": "Fields",
The fields used in the original request.
"filter": "Filters",
The filter used in the original request.
"baseUrl": "Base URL",
The URL of the request without any parameters.
"lastId": "ID of the last element",
The ID of the last element that can be used to retrieve the next elements.
"mappings":
An array of elements that describe one mapping.
"mapName" : "Map name"
Name of the mapping.
"exportMap" : "Export server and gateway node map details"
Lists the export server and gateway node maps.
Examples
The following example gets the details of the node mapping for AFM and Cloud Object Storage.
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
{
"status": {
"code": 200,
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/filesystems/gpfs0/filesets?lastId=10001",
"fields": "period,restrict,sensorName",
"filter": "usedInodes>100,maxInodes>1024",
"baseUrl": "/scalemgmt/v2/nodes/afm/mapping/myMap",
"lastId": 10001
},
"mappings": [
{
"mapName": "myMap",
"exportMap": "[ testnode-11/node-13, testnode-11/node-14]"
}
]
}
Related reference
“mmafmconfig command” on page 45
Can be used to manage home caching behavior and mapping of gateways and home NFS exported
servers.
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT nodes/afm/mapping/{mappingName} request changes a node mapping for AFM and Cloud
Object Storage. For more information about the fields in the data structures that are returned, see
“mmafmconfig command” on page 45.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/afm/mapping/
{mappingName}
where
nodes/afm/mapping
Specifies the target of the request.
{mappingName}
Specifies the map to be modified.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"exportMap": "Export server or gateway node map details",
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"exportMap" : "Export server and gateway node map details"
Lists the export server and gateway node maps.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String"
Array of commands that are run in this job.
"progress":"String"
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success and a nonzero value denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
Examples
The following example shows how to change the node mapping myMap:
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
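A minimal sketch of how the change-mapping request might be issued with the Python requests library; the host name, credentials, and the exportMap value are placeholders that mirror the mapping shown in the GET example:
import requests

# PUT a changed exportMap value for the mapping myMap (placeholder host, credentials, and map contents).
response = requests.put(
    "https://gui-host.example.com:443/scalemgmt/v2/nodes/afm/mapping/myMap",
    json={"exportMap": "testnode-11/node-13,testnode-11/node-14"},
    auth=("admin", "password"),
    headers={"Accept": "application/json"},
    verify=False,
)
print(response.status_code, response.json())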
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "PUT",
"url": "/scalemgmt/v2/nodes/afm/mapping/myMap",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Related reference
“mmafmconfig command” on page 45
Can be used to manage home caching behavior and mapping of gateways and home NFS exported
servers.
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE nodes/name request deletes one or more nodes from the cluster. For more information
about the fields in the data structures that are returned, see the topic “mmdelnode command” on page
371.
Prerequisites
Ensure the following before you run this API command:
• GPFS needs to be stopped on all affected nodes.
• Affected nodes cannot be used as NSD servers.
• CES must be disabled on all affected nodes.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/name
where
nodes/name
Specifies the name of the node or all nodes in a node class to be deleted. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"name": "Name",
}
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result":
{
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request":
{
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String"
Array of commands that are run in this job.
"progress":"String"
Progress information for the request.
Examples
The following API command deletes the node node1 from the cluster.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
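A minimal sketch of how the delete-node request might be issued with the Python requests library; the host name and credentials are placeholders, and the prerequisites listed above (GPFS stopped, no NSD server role, CES disabled) must already be met:
import requests

# DELETE the node node1 from the cluster (placeholder host and credentials).
response = requests.delete(
    "https://gui-host.example.com:443/scalemgmt/v2/nodes/node1",
    auth=("admin", "password"),
    headers={"Accept": "application/json"},
    verify=False,
)
print(response.status_code, response.json())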
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/nodes/node1",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nodes/nodeName request gets information about the specified node. For more information
about the fields in the data structures that are returned, see the topics “mmchnode command” on page
241 and “mmgetstate command” on page 425.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/nodeName
where
nodes
Is the nodes resource. Required.
nodeName
Specifies the node about which you want to get information. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request data
No request data.
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
"status": {
"code":ReturnCode,
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"nodes":
Examples
The following example gets information about the node testnode1-d.localnet.com.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/node/testnode1-d.localnet.com?lastId=1001"
},
"nodes": [
{
"adminNodeName": "testnode1-a.localnet.com",
"nodeNumber": "1",
"config": {
"adminLoginName": "root",
"designatedLicense": "server",
"requiredLicense": "true"
},
"status": {
"osName": "Red Hat Enterprise Linux Server 7.2 (Maipo)",
"nodeState": "HEALTHY",
"gpfsState": "HEALTHY",
"productVersion": "4.2.3.0"
},
"network": {
"adminIPAddress": "10.0.200.21",
"daemonNodeName": "testnode1-d.localnet.com",
"daemonIPAddress": "10.0.100.21",
"getcnfsNodeName": "string"
},
"roles": {
"snmpNode": "true",
"managerNode": "false",
"gatewayNode": "false",
"cnfsNode": "true",
"cesNode": "false",
"quorumNode": "true",
"cloudGatewayNode": "true",
"otherNodeRoles": "perfmonNode,cesNode",
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT nodes/{name} request enables or disables quorum and gateway designations of a node or node
class. For more information about the fields in the data structures that are returned, see “mmchnode
command” on page 241.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/{name}
where
nodes/{name}
Specifies the target of the request. Required.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"quorum": "true | false",
"gateway": "true | false",
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"quorum": "true | false"
Set to true if you want to enable quorum. Set to false to disable it. Omit this parameter if you do not
want to change quorum.
"gateway": "true | false"
Set to true if you want to enable the gateway designation. Set to false to disable it. Omit this parameter
if you do not want to change the gateway designation.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result":
{
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request":
{
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String"
Array of commands that are run in this job.
"progress":"String"
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
Examples
The following example shows how to change quorum and gateway designation of a node:
Request data:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
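A minimal sketch of how the designation change might be issued with the Python requests library; the host name, credentials, and node name are placeholders, and the quoted "true"/"false" values follow the request data template shown above:
import requests

# PUT the new quorum and gateway designations for the node scale-node.
response = requests.put(
    "https://gui-host.example.com:443/scalemgmt/v2/nodes/scale-node",
    json={"quorum": "true", "gateway": "false"},
    auth=("admin", "password"),
    headers={"Accept": "application/json"},
    verify=False,
)
print(response.status_code, response.json())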
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "PUT",
"url": "/scalemgmt/v2/nodes/scale-node",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Related reference
“mmchnode command” on page 241
Changes node attributes.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nodes/nodeName/health/events request gets information about the system health events
that are reported in the specified node. For more information about the fields in the data structures that
are returned, see the topics “mmhealth command” on page 435 and “mmces command” on page 132.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/nodeName/health/events
where
nodes/nodeName
Specifies the node about which you want to get information. Required.
health/events
Specifies that you need to get the details of the system health events reported on the node. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"events":
An array of elements that describe the system health events. Each element describes one event.
"oid":"ID"
The internal unique ID of the event.
"component":"Component"
The component to which the event belongs.
"reportingNode":"Node Name"
The node in which the event is reported.
"type":"{TIP | STATE_CHANGE | NOTICE}"
Specifies the event type.
"activeSince":"Time"
The time at which the event occurred.
"name":"Event Name"
The name of the event.
"message":"Message"
The event message.
"entityType":"Entity Type"
The type of the entity for which the event occurred.
"entityName":"Name"
The name of the entity for which the event occurred.
"userAction":"Action"
The user action that is required to resolve the event.
Examples
The following example gets information about the events reported in the node testnode1-
d.localnet.com.
Request URL:
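A minimal sketch of how the health events query might be issued with the Python requests library; the host name and credentials are placeholders:
import requests

# GET the health events that are reported on the node testnode1-d.localnet.com.
response = requests.get(
    "https://gui-host.example.com:443/scalemgmt/v2/nodes/testnode1-d.localnet.com/health/events",
    auth=("admin", "password"),
    headers={"Accept": "application/json"},
    verify=False,
)
print(response.json())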
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"events" : [ {
"activeSince" : "2017-03-13 10:55:15,809",
"component" : "FILESYSTEM",
"description" : "The inode usage in the fileset reached a normal level.",
"entityName" : "fset1",
"entityType" : "FILESET",
"message" : "The inode usage of fileset fset1 in file system gpfs0 reached a normal level.",
"name" : "inode_normal",
"oid" : 690,
"parentName" : "gpfs0",
"reportingNode" : "mari-11.localnet.com",
"severity" : "INFO",
"state" : "HEALTHY",
"type" : "STATE_CHANGE",
"userAction" : "N/A"
}, {
"activeSince" : "2017-03-13 11:05:15,912",
"component" : "FILESYSTEM",
"description" : "The inode usage in the fileset reached a normal level.",
"entityName" : "fset2",
"entityType" : "FILESET",
"message" : "The inode usage of fileset fset2 in file system gpfs0 reached a normal level.",
"name" : "inode_normal",
"oid" : 691,
"parentName" : "gpfs0",
"reportingNode" : "mari-11.localnet.com",
"severity" : "INFO",
"state" : "HEALTHY",
"type" : "STATE_CHANGE",
"userAction" : "N/A"
}, {
"activeSince" : "2017-03-15 18:44:42,568",
"component" : "GUI",
"description" : "The GUI service is running",
"entityName" : "mari-11.localnet.com",
"entityType" : "NODE",
"message" : "GUI service as expected, state is started",
"name" : "gui_up",
"oid" : 882,
"reportingNode" : "mari-11.localnet.com",
"severity" : "INFO",
"state" : "HEALTHY",
"type" : "STATE_CHANGE",
"userAction" : "N/A"
}, {
"activeSince" : "2017-03-16 07:07:06,418",
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nodes/nodeName/health/states request gets information about the system health states
of the specified node or node class. For more information about the fields in the data structures that are
returned, see the topics “mmhealth command” on page 435 and “mmces command” on page 132.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/nodeName/health/states
where
nodes/nodeName
Specifies the node about which you want to get information. Required.
health/states
Specifies that you need to get the details of the system health state of the node. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"states":
An array of elements that describe the system health states. Each element describes the state of one component.
"oid":"ID"
The internal unique ID of the state entry.
"component":"Component"
The component to which the state belongs.
"reportingNode":"Node Name"
The node in which the state is reported.
"type":"{TIP | STATE_CHANGE | NOTICE}"
Specifies the event type.
"activeSince":"Time"
The time at which the event occurred.
"name":"Event Name"
The name of the event.
"message":"Message"
The event message.
"entityType":"Entity Type"
The type of the entity for which the state is reported.
"entityName":"Name"
The name of the entity for which the state is reported.
"userAction":"Action"
The user action that is required to resolve the event.
Examples
The following example gets information about the health states reported in the node testnode1-
d.localnet.com.
Request URL:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"states" : [ {
"activeSince" : "2017-03-02 15:21:57,431",
"component" : "THRESHOLD",
"entityName" : "mari-11.localnet.com",
"entityType" : "NODE",
"oid" : 4,
"reportingNode" : "mari-11.localnet.com",
"state" : "HEALTHY"
}, {
"activeSince" : "2017-03-02 15:21:58,284",
"component" : "NETWORK",
"entityName" : "mari-11.localnet.com",
"entityType" : "NODE",
"oid" : 5,
"reportingNode" : "mari-11.localnet.com",
"state" : "HEALTHY"
}, {
"activeSince" : "2017-03-02 15:24:57,143",
"component" : "PERFMON",
"entityName" : "mari-11.localnet.com",
"entityType" : "NODE",
"oid" : 6,
"reportingNode" : "mari-11.localnet.com",
"state" : "HEALTHY"
}, {
"activeSince" : "2017-03-02 15:21:58,284",
"component" : "NETWORK",
"entityName" : "eth0",
"entityType" : "NIC",
"oid" : 9,
"parentName" : "mari-11.localnet.com",
"reportingNode" : "mari-11.localnet.com",
"state" : "HEALTHY"
}
],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nodes/{name}/services request gets information about the services that are configured on
a specific node or node class. For more information about the fields in the returned data structure, see
“mmces command” on page 132.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/name/services
where
nodes/name/services
Specifies services in a particular node or node class as the resource of the GET call.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status":
{
"code": ReturnCode,
"message": "ReturnMessage"
},
"paging":
{
}
}
"status":
Return status.
"code": ReturnCode,
The HTTP status code that was returned by the request.
"message": "ReturnMessage"
The return message.
"paging":
An array of information about the paging information that is used for displaying the details.
"next": "Next page URL"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects would be
returned by the query.
"fields": "Fields in the request"
The fields that are used in the original request.
"filter": "Filters used in the request"
The filter that is used in the original request.
"baseUrl": "URL"
The URL of the request without any parameters.
"lastId": "ID"
The ID of the last element that can be used to retrieve the next elements.
"serviceStatus":
An array of information that provides the details of the services configured on the node.
"nodeName": "Node name"
The node where the service is hosted.
"serviceName": "Service name"
Name of the service.
"state": "State"
Status of the service.
"healthState": "Health state"
Health status of the service.
Examples
The following example gets information about the services that are configured on the node
mari-13.localnet.com.
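A minimal sketch of how the services query might be issued with the Python requests library; the host name and credentials are placeholders:
import requests

# GET the services that are configured on the node mari-13.localnet.com.
response = requests.get(
    "https://gui-host.example.com:443/scalemgmt/v2/nodes/mari-13.localnet.com/services",
    auth=("admin", "password"),
    headers={"Accept": "application/json"},
    verify=False,
)
print(response.json())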
Request data:
No request data.
Response data:
{
"serviceStatus" : [ {
"nodeName" : "mari-12.localnet.com",
"serviceName" : "NFS",
"state" : "running",
"healthState" : "HEALTHY"
}, {
"nodeName" : "mari-12.localnet.com",
"serviceName" : "SMB",
"state" : "running",
"healthState" : "HEALTHY"
}, {
"nodeName" : "mari-12.localnet.com",
"serviceName" : "OBJ",
"state" : "running",
"healthState" : "HEALTHY"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully."
}
}
Related reference
“mmces command” on page 132
Manage CES (Cluster Export Services) configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nodes/name/services/serviceName request gets information about a specific service that
is hosted on a specific node or node class. For more information about the fields in the returned data
structure, see “mmces command” on page 132.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/name/services/
serviceName
where
nodes/name
Specifies the node or node class on which the service is hosted.
/services/serviceName
Specifies the service about which you need the details.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
}
}
"status":
Return status.
"code": ReturnCode,
The HTTP status code that was returned by the request.
"message": "ReturnMessage"
The return message.
"paging":
An array of information about the paging information that is used for displaying the details.
"next": "Next page URL"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects would be
returned by the query.
"fields": "Fields in the request"
The fields that are used in the original request.
"filter": "Filters used in the request"
The filter that is used in the original request.
"baseUrl": "URL"
The URL of the request without any parameters.
"lastId": "ID"
The ID of the last element that can be used to retrieve the next elements.
"serviceStatus":
An array of information that provides the details of the services configured on the node.
"nodeName": "Node name"
The node where the service is hosted.
"serviceName": "Service name"
Name of the service.
"state": "State"
Status of the service.
"healthState": "Health state"
Health status of the service.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
The serviceStatus array returns the details of the services that are configured on the node scale01.
{
"serviceStatus" : [ {
"nodeName" : "mari-14.localnet.com",
"serviceName" : "SMB",
"state" : "running",
"healthState" : "HEALTHY"
}, {
"nodeName" : "mari-15.localnet.com",
"serviceName" : "SMB",
"state" : "running",
"healthState" : "HEALTHY"
}, {
"nodeName" : "mari-13.localnet.com",
"serviceName" : "SMB",
"state" : "running",
"healthState" : "HEALTHY"
}, {
"nodeName" : "mari-12.localnet.com",
"serviceName" : "SMB",
"state" : "running",
"healthState" : "HEALTHY"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully."
}
}
Example 2:
The following example lists the details of the NFS service that is hosted on a specific node:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
The serviceStatus array returns the details of the NFS service on the specified node.
{
"serviceStatus" : [ {
"nodeName" : "mari-13.localnet.com",
"serviceName" : "NFS",
"state" : "running",
"healthState" : "HEALTHY"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully."
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT nodes/name/services/serviceName request starts or stops a service that is hosted on a
node or node class. For more information about the fields in the returned data structure, see “mmces
command” on page 132.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nodes/name/services/
serviceName
where
nodes/name
Specifies the node or node class on which the service is hosted.
/services/serviceName
Specifies the target service of the request.
Request headers
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"action": "{start | stop}"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"action":
The action to be performed on the service. You can either start or stop the service.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result":
{
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request":
{
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String"
Array of commands that are run in this job.
"progress":"String"
Progress information for the request.
Examples
The following example stops the NFS service that is hosted on the node scale01.
Request data:
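A minimal sketch of how the stop request might be issued with the Python requests library; the host name and credentials are placeholders, and the request body carries the action field that is described above:
import requests

# PUT the stop action for the NFS service on the node scale01; poll the returned jobId for completion.
response = requests.put(
    "https://gui-host.example.com:443/scalemgmt/v2/nodes/scale01/services/nfs",
    json={"action": "stop"},
    auth=("admin", "password"),
    headers={"Accept": "application/json"},
    verify=False,
)
print(response.status_code, response.json())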
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000001,
"status" : "RUNNING",
"submitted" : "2018-03-20 12:54:01,922",
"completed" : "N/A",
"runtime" : 3,
"request" : {
"type" : "PUT",
"url" : "/scalemgmt/v2/nodes/scale01/services/nfs"
},
"result" : { },
"pids" : [ ]
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing."
}
}
Related reference
“mmces command” on page 132
Manage CES (Cluster Export Services) configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nsds request gets information about NSDs that are configured in the system. For more
information about the fields in the data structures that are returned, see the topics “mmcrnsd command”
on page 332 , “mmlsdisk command” on page 489, and “mmlsnsd command” on page 514.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nsds
where
nsds
Specifies NSDs as the resource of the GET call. Required.
Request headers
Accept: application/json
Request data
No request data.
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"paging":
{
"next": "URL"
},
"nsds":
Examples
The following example gets information about the NSDs that are configured in the system.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"nsds" : [ {
"fileSystem" : "gpfs0",
"name" : "disk1"
}, {
"fileSystem" : "gpfs0",
"name" : "disk8"
}, {
"fileSystem" : "objfs",
"name" : "disk2"
}, {
"fileSystem" : "gpfs1",
"name" : "disk3"
}, {
"fileSystem" : "gpfs1",
"name" : "disk4"
}, {
"fileSystem" : "gpfs1",
"name" : "disk5"
}, {
"name" : "disk6"
}, {
"fileSystem" : "objfs",
"name" : "disk7"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Using the field parameter ":all:" returns the complete details of the NSDs that are available in the cluster. For
example:
{
"nsds" : [ {
"availability" : "up",
"availableBlocks" : "9.15 GB",
"availableFragments" : "552 kB",
"failureGroup" : "1",
"fileSystem" : "gpfs0",
"name" : "disk1",
"nsdServers" : "mari-11.localnet.com,mari-15.localnet.com,mari-14.localnet.com,
mari-13.localnet.com,mari-12.localnet.com",
"nsdVolumeId" : "0A00640B58B82A8C",
"quorumDisk" : "no",
"remarks" : "desc",
"size" : " 10.00GiB",
"status" : "ready",
"storagePool" : "system",
"type" : "nsd"
Related reference
“mmcrnsd command” on page 332
Creates Network Shared Disks (NSDs) used by GPFS.
“mmlsnsd command” on page 514
Displays Network Shared Disk (NSD) information for the GPFS cluster.
“mmlsdisk command” on page 489
Displays the current configuration and state of the disks in a file system.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET nsds/nsdName request gets information about an NSD that is configured in the system. For
more information about the fields in the data structures that are returned, see the topics “mmcrnsd
command” on page 332 , “mmlsdisk command” on page 489, and “mmlsnsd command” on page 514.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/nsds/nsdName
where
nsds/nsdName
Specifies the NSD about which you need to get the details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"paging":
{
"next": "URL"
},
"nsds": [
{
"name": "Name",
"fileSystem": "FilesystemName",
"failureGroup": "Failure Group",
"type": "Type",
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/nsds/nsd1?lastId=1001"
},
"nsds": [
{
"name": "nsd1",
"fileSystem": "gpfs0",
"failureGroup": "1",
"type": "dataOnly",
"storagePool": "data",
"status": "ready",
"availability": "up",
"quorumDisk": "no",
"remarks": "This is a comment",
"size": "10.00 GB",
"availableBlocks": "730.50 MB",
"availableFragments": "1.50 MB",
"nsdServers": "gpfsgui-21.localnet.com",
"nsdVolumeId": "0A0064155874F5AA"
}
]
}
Related reference
“mmcrnsd command” on page 332
Creates Network Shared Disks (NSDs) used by GPFS.
“mmlsnsd command” on page 514
Displays Network Shared Disk (NSD) information for the GPFS cluster.
“mmlsdisk command” on page 489
Displays the current configuration and state of the disks in a file system.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET perfmon/data request gets performance details from the cluster with the help of queries. The
query is written in the performance monitoring tool query language format. For example, query the cpu_user
metric for the last 10 seconds. For more information about the fields in the data structures that are returned,
see “Querying performance data by using /perfmon/data request ” on page 1422.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/perfmon/data
where
perfmon/data
Specifies the performance monitoring tool as the resource of this GET call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"performanceData": {}
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
Examples
The following example shows how to use the API command to get the performance details when you use
the following query: metrics avg(cpu_user) last 30 bucket_size 1
Request URL:
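A minimal sketch of how the performance query might be issued with the Python requests library. The host name and credentials are placeholders, and it is assumed here that the query expression is passed as a URL parameter (shown as query; check the Request parameters table for the exact parameter name):
import requests

# GET performance data for the query "metrics avg(cpu_user) last 30 bucket_size 1".
response = requests.get(
    "https://gui-host.example.com:443/scalemgmt/v2/perfmon/data",
    params={"query": "metrics avg(cpu_user) last 30 bucket_size 1"},
    auth=("admin", "password"),
    headers={"Accept": "application/json"},
    verify=False,
)
print(response.json())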
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": 200,
"message": "..."
},
"performanceData": "null"
}
Query samples
metrics cpu_user from node=anaphera-dev2 last 10 bucket_size 1
Gets the metric cpu_user for the instance where the key element node equals anaphera-dev2 and returns the
last 10 buckets of a 1-second size.
metrics cpu_user, cpu_system, cpu_idle from node=anaphera-dev2
tstart 2012-11-10 08:00:00 tend 2012-11-10 18:00:00 bucket_size 30
Gets the metrics cpu_user, cpu_system, and cpu_idle for node anaphera-dev2 from the given start time to the end time,
using a bucket size of 30 seconds.
metrics sum(cpu_user), sum(netdev_bytes_r) last 30 bucket_size 10
Gets the sum of the cpu_user and netdev_bytes_r metrics for all instances and the last 30 buckets by using a
bucket size of 10 seconds.
metrics sum(netdev_bytes_r) last 50 group_by netdev_name bucket_size 1
Gets the sum of netdev_bytes_r (last 50 buckets of 1-second size), grouped by the key attribute netdev_name.
key anaphera-dev2|CPU|cpu_user last 10 bucket_size 1
Gets the metric cpu_user for the instance that is specified by the explicit key, by using a bucket size equal to 1, for 10
seconds (the result contains 10 buckets).
metrics cpu_user duration 210 bucket_size 100
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET perfmon/sensors/{sensorName} request gets details of a specific performance monitoring
sensor that is configured in the cluster. For more information about the fields in the data structures that
are returned, see the topic “mmperfmon command” on page 582.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/perfmon/sensors/{sensorName}
where
perfmon/sensors/{sensorName}
Specifies a particular performance monitoring sensor as the resource of this GET call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"performanceData": {}
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
Examples
The following example shows how to use the API command to get the details of the performance
monitoring sensor Netstat.
Request URL:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"sensorConfig" : [ {
"component" : "TCT",
"defaultPeriod" : 10,
"defaultRestriction" : "",
"description" : "",
"enabledPerDefault" : true,
"generic" : true,
"minimumPeriod" : 1,
"period" : 10,
"restrict" : [ ],
"restrictionType" : "USERNODECLASS",
"sensorName" : "TCTDebugLweDestroyStats"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully."
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET perfmon/sensors request gets details of the performance monitoring sensors that are
configured in the cluster. For more information about the fields in the data structures that are returned,
see the topic “mmperfmon command” on page 582.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/perfmon/sensors
where
perfmon/sensors
Specifies the performance monitoring sensors as the resource of this GET call. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"performanceData": {}
}
Examples
The following example shows how to use the API command to get the details of the performance
monitoring sensor configuration in the cluster.
Request URL:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"sensorConfig" : [ {
"component" : "TCT",
"defaultPeriod" : 10,
"defaultRestriction" : "",
"description" : "",
"enabledPerDefault" : true,
"generic" : true,
"minimumPeriod" : 1,
"period" : 20,
"restrict" : [ ],
"restrictionType" : "USERNODECLASS",
"sensorName" : "TCTDebugLweDestroyStats"
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully."
}
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT perfmon/sensors/sensorName request configures the sensor. For more information about
the fields in the data structures that are returned, see “mmperfmon command” on page 582.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/perfmon/sensors/sensorName
where
perfmon/sensors/sensorName
Specifies the sensor that you need to modify. Required.
Request headers
Content-Type: application/json
Accept: application/json
Parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"period": Period,
"restrict": "Node or node class"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"period": Period
The data collection interval of the sensor, in seconds.
"restrict": "Node or node class"
The node or node class on which the sensor runs.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result": {
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request": {
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
Examples
The following example sets the nodes or node class on which the sensor
TCTDebugLweDestroyStats runs and also sets the period of the sensor.
Request data:
{
"period": 20
}
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000001,
"status" : "RUNNING",
"submitted" : "2018-03-23 14:55:38,385",
"completed" : "N/A",
"runtime" : 14,
"request" : {
"data" : {
"period" : 20
},
"type" : "PUT",
"url" : "/scalemgmt/v2/perfmon/sensors/TCTDebugLweDestroyStats"
},
"result" : { },
"pids" : [ ]
} ],
"status" : {
"code" : 202,
"message" : "The request was accepted for processing."
}
}
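A minimal Python sketch of the same request follows; the host name, port, and credentials are placeholder assumptions, and the body mirrors the request data shown above.
import requests

BASE = "https://gui-node.example.com:443/scalemgmt/v2"   # placeholder GUI host and port
AUTH = ("admin", "admin-password")                        # placeholder credentials

# PUT /perfmon/sensors/{sensorName}: set the period of TCTDebugLweDestroyStats to 20 seconds.
r = requests.put(f"{BASE}/perfmon/sensors/TCTDebugLweDestroyStats",
                 json={"period": 20}, auth=AUTH,
                 headers={"Accept": "application/json"}, verify=False)
job = r.json()["jobs"][0]
# The request is asynchronous; poll the returned job until it completes.
print("job", job["jobId"], "status:", job["status"])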
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The GET /remotemount/authenticationkey request gets the public RSA authentication key of a
cluster that is used for remote mounting. For more information about the fields in the data structures that
are returned, see “mmauth command” on page 96.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/
authenticationkey
where
remotemount/authenticationkey
Specifies the target of this GET request.
Request headers
Accept: application/json
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"key": "Active public RSA key",
"newKey": "New public RSA key",
"ciphers": "Ciphers of the RSA key"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"key": "Active public RSA key"
The currently active public RSA key.
"newKey": "New public RSA key"
A newly generated public RSA key. This key becomes active when it is committed. To commit this new
key, use the endpoint PUT /scalemgmt/v2/remotemount/authenticationkey.
Examples
The following example gets the public RSA authentication key that is required for remote mounting.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": 200,
"message": "..."
},
"key": [
"string"
],
"newKey": [
"string"
],
"ciphers": "[ 'AES128-SHA', 'AES256-SHA' ]"
}
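The following Python sketch retrieves the key; the host name, port, and credentials are placeholder assumptions.
import requests

BASE = "https://gui-node.example.com:443/scalemgmt/v2"   # placeholder GUI host and port
AUTH = ("admin", "admin-password")                        # placeholder credentials

r = requests.get(f"{BASE}/remotemount/authenticationkey",
                 auth=AUTH, headers={"Accept": "application/json"}, verify=False)
r.raise_for_status()
body = r.json()
print("active key:", body["key"])
print("pending key:", body.get("newKey"))   # becomes active after it is committed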
Related reference
“mmauth command” on page 96
Manages secure access to GPFS file systems.
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The POST /remotemount/authenticationkey request generates a new RSA authentication key pair
(private and public) on a cluster that is used for remote mounting. Use GET /scalemgmt/v2/
remotemount/authenticationkey to check whether a new authentication key is already generated.
The new key is in addition to the currently active committed key. Both keys are accepted until the
administrator runs the mmauth genkey commit command or PUT /scalemgmt/v2/remotemount/
authenticationkey endpoint.
For more information about the fields in the data structures that are returned, see “mmauth command”
on page 96.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/
authenticationkey
where
remotemount/authenticationkey
Specifies the target of this POST request.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
None.
Request data
None.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result": {
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request": {
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
Examples
The following example shows how to generate a new RSA authentication key pair.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "POST",
"url": "/scalemgmt/v2/remotemount/authenticationkey",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
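The following Python sketch generates a new key pair and waits for the asynchronous job to finish. The host name, port, credentials, and polling interval are placeholder assumptions; the job is queried through the GET /scalemgmt/v2/jobs/{jobId} endpoint.
import time
import requests

BASE = "https://gui-node.example.com:443/scalemgmt/v2"   # placeholder GUI host and port
AUTH = ("admin", "admin-password")                        # placeholder credentials

# POST /remotemount/authenticationkey: generate a new RSA key pair (asynchronous job).
r = requests.post(f"{BASE}/remotemount/authenticationkey",
                  auth=AUTH, headers={"Accept": "application/json"}, verify=False)
job_id = r.json()["jobs"][0]["jobId"]

# Poll the job until it leaves the RUNNING state.
while True:
    job = requests.get(f"{BASE}/jobs/{job_id}", auth=AUTH, verify=False).json()["jobs"][0]
    if job["status"] != "RUNNING":
        print("job", job_id, "finished with status", job["status"])
        break
    time.sleep(2)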
Related reference
“mmauth command” on page 96
Manages secure access to GPFS file systems.
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The PUT /remotemount/authenticationkey request commits or propagates a new RSA
authentication key on a cluster that is used for remote mounting. Use the POST /scalemgmt/v2/
remotemount/authenticationkey request to generate a new RSA key pair (private and public key) for
remote mounting. For more information about the fields in the data structures that are returned, see
“mmauth command” on page 96.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/
authenticationkey
where
remotemount/authenticationkey
Specifies the target of the PUT request.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
Examples
The following example shows how to commit the authentication key on the node CESnode1.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "POST",
"url": "/scalemgmt/v2/remotemount/authenticationkey?action=commit&nodes=CESnode1",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
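The following Python sketch commits the pending key, mirroring the query parameters shown in the example URL above; the host name, port, and credentials are placeholder assumptions.
import requests

BASE = "https://gui-node.example.com:443/scalemgmt/v2"   # placeholder GUI host and port
AUTH = ("admin", "admin-password")                        # placeholder credentials

# PUT /remotemount/authenticationkey?action=commit&nodes=CESnode1
r = requests.put(f"{BASE}/remotemount/authenticationkey",
                 params={"action": "commit", "nodes": "CESnode1"},
                 auth=AUTH, headers={"Accept": "application/json"}, verify=False)
print(r.status_code, r.json()["status"]["message"])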
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET /remotemount/owningclusters request lists the clusters that own file systems that can be mounted
remotely. For more information about the fields in the data structures that are returned, see
“mmremotecluster command” on page 650.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/owningclusters
where
remotemount/owningclusters
Specifies the target of the GET request.
Request headers
Accept: application/json
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"owningClusters":
{
"owningCluster": "Name of the cluster",
"contactNodes": "Contact nodes of owning cluster",
"keyDigest": "SHA digest of the public key",
"filesystemPairs": "File system mapping"
}
"filesystemPair"
{
"owningClusterFilesystem": "File system on the owning cluster",
"remoteClusterFilesystem": "File system on the remote cluster"
}
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"owningClusters":
Owning cluster details.
"owningCluster": "Name of the cluster"
The owning cluster of the remote file system.
Examples
The following example shows how to get the list of the clusters that own file systems that can be mounted
remotely.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": 200,
"message": "..."
},
"owningClusters": [
{
"owningCluster": "string",
"contactNodes": [
"string"
],
"keyDigest": "string",
"filesystemPairs": "{ {'owningClusterFilesystem':'fs1',
'remoteClusterFilesystem':'fs1_remote'}, {'owningClusterFilesystem':'fs2',
'remoteClusterFilesystem':'fs2_remote'} }"
}
]
}
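The following Python sketch lists the owning clusters; the host name, port, and credentials are placeholder assumptions.
import requests

BASE = "https://gui-node.example.com:443/scalemgmt/v2"   # placeholder GUI host and port
AUTH = ("admin", "admin-password")                        # placeholder credentials

r = requests.get(f"{BASE}/remotemount/owningclusters",
                 auth=AUTH, headers={"Accept": "application/json"}, verify=False)
r.raise_for_status()
for cluster in r.json().get("owningClusters", []):
    print(cluster["owningCluster"], "contact nodes:", cluster["contactNodes"])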
Related reference
“mmremotecluster command” on page 650
Manages information about remote GPFS clusters.
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The POST remotemount/owningclusters request registers a cluster that owns file systems that can
be mounted remotely. For more information about the fields in the data structures that are returned, see
“mmremotecluster command” on page 650.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/owningclusters
where
remotemount/owningclusters
Specifies the target of the operation. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
"owningCluster": "Name of the cluster",
"contactNodes": "List of contact nodes",
"key": "Public RSA key"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"owningCluster":"Name of the cluster"
The owning cluster of the remote file system.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result": {
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request": {
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
Examples
The following example shows how to register a cluster that owns file systems that can be mounted
remotely.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The DELETE remotemount/owningclusters/{owningCluster} request removes the registration of
a cluster that owns file systems that can be mounted remotely. For more information about the fields in the
data structures that are returned, see “mmremotecluster command” on page 650.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/owningclusters/
{owningCluster}
where
remotemount/owningclusters/{owningCluster}
Specifies the cluster that needs to be unregistered. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result": {
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request": {
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, and nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
Examples
The following example shows how to remove the registration of the cluster Cluster1.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/remotemount/owningclusters/Cluster1",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
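The following Python sketch unregisters the owning cluster Cluster1; the host name, port, and credentials are placeholder assumptions.
import requests

BASE = "https://gui-node.example.com:443/scalemgmt/v2"   # placeholder GUI host and port
AUTH = ("admin", "admin-password")                        # placeholder credentials

# DELETE /remotemount/owningclusters/{owningCluster} (asynchronous job)
r = requests.delete(f"{BASE}/remotemount/owningclusters/Cluster1",
                    auth=AUTH, headers={"Accept": "application/json"}, verify=False)
print(r.status_code, r.json()["status"]["message"])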
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET /remotemount/owningclusters/{owningCluster} gets the details of the cluster that
owns the file systems that can be mounted remotely. For more information about the fields in the data
structures that are returned, see “mmremotecluster command” on page 650.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/owningclusters/
{owningCluster}
where
remotemount/owningclusters/{owningCluster}
Specifies the target of the GET request.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"owningClusters":
{
"owningCluster": "Name of the cluster",
"contactNodes": "Contact nodes of owning cluster",
"keyDigest": "SHA digest of the public key",
"filesystemPairs": "File system mapping"
}
"filesystemPair"
{
"owningClusterFilesystem": "File system on the owning cluster",
"remoteClusterFilesystem": "File system on the remote cluster"
}
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"owningClusters":
Owning cluster details.
"owningCluster": "Name of the cluster"
The owning cluster of the remote file system.
"contactNodes": "Contact nodes of owning cluster"
The contact nodes of the owning cluster used for remote mounting.
"keyDigest": "SHA digest of the public key"
The SHA digest of the public key of the owning cluster.
"filesystemPairs": "File system mapping"
The mapping of file systems of owning cluster and remote cluster.
"filesystemPair":
File system pair details.
"owningClusterFilesystem": "File system on the owning cluster"
The file system on the owning cluster.
"owningClusterFilesystem": "File system on the remote cluster"
The file system on the remote cluster.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
Examples
The following example shows how to get the details of the cluster Cluster1 that owns file systems that can
be mounted remotely.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": 200,
"message": "..."
},
"owningClusters": [
{
"owningCluster": "Cluster1",
"contactNodes": [
"string"
],
"keyDigest": "string",
"filesystemPairs": "{ {'owningClusterFilesystem':'fs1',
'remoteClusterFilesystem':'fs1_remote'}, {'owningClusterFilesystem':'fs2',
'remoteClusterFilesystem':'fs2_remote'} }"
}
]
}
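The following Python sketch reads the registration of Cluster1 and prints its file system mapping; the host name, port, and credentials are placeholder assumptions.
import requests

BASE = "https://gui-node.example.com:443/scalemgmt/v2"   # placeholder GUI host and port
AUTH = ("admin", "admin-password")                        # placeholder credentials

r = requests.get(f"{BASE}/remotemount/owningclusters/Cluster1",
                 auth=AUTH, headers={"Accept": "application/json"}, verify=False)
r.raise_for_status()
for cluster in r.json().get("owningClusters", []):
    print(cluster["owningCluster"], "->", cluster["filesystemPairs"])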
Related reference
“mmremotecluster command” on page 650
Manages information about remote GPFS clusters.
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The PUT remotemount/owningclusters/{owningCluster} request updates the registration of a cluster
that owns file systems that can be mounted remotely. For more information about the fields in the data
structures that are returned, see “mmremotecluster command” on page 650.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/owningclusters/
{owningCluster}
where
remotemount/owningclusters/{owningCluster}
Specifies the cluster for which the registration needs to be updated. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
"owningClusters":
{
"owningCluster": "Name of the cluster",
"contactNodes": "Contact nodes of owning cluster",
"keyDigest": "SHA digest of the public key",
"filesystemPairs": "File system mapping", }
"filesystemPair" { "owningClusterFilesystem ":
"File system on the owning cluster",
"owningClusterFilesystem ": "File system on the remote cluster"
}
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result": {
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request": {
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
Examples
The following example shows how to update the registration of the cluster Cluster1.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "PUT",
"url": "/scalemgmt/v2/remotemount/owningclusters/Cluster1",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
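As a sketch only, the following Python code updates the registration of Cluster1 with a new contact node list. The host name, port, credentials, and contact node names are placeholder assumptions, and the body field names follow the request data structure documented for registering an owning cluster.
import requests

BASE = "https://gui-node.example.com:443/scalemgmt/v2"   # placeholder GUI host and port
AUTH = ("admin", "admin-password")                        # placeholder credentials

payload = {
    "owningCluster": "Cluster1",
    "contactNodes": ["owning-node1", "owning-node2"],     # placeholder contact nodes
}
r = requests.put(f"{BASE}/remotemount/owningclusters/Cluster1",
                 json=payload, auth=AUTH,
                 headers={"Accept": "application/json"}, verify=False)
print(r.status_code, r.json()["status"]["message"])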
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET /remotemount/remoteclusters lists the clusters that mount file systems of an owning
cluster remotely. For more information about the fields in the data structures that are returned, see
“mmremotecluster command” on page 650.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/remoteclusters
where
remotemount/remoteclusters
Specifies the target of this GET request.
Request headers
Accept: application/json
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"remoteClusters":
{
"remoteCluster": "Name of the cluster",
"ciphers": "Ciphers of the RSA key",
"keyDigest": "SHA digest of the public key",
"owningClusterFilesystems": "Name of the file system of the owning cluster."
}
"OwningFilesystem"
{
"filesystem": "File system on the owning cluster",
"access": "Access permissions"
}
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"remoteClusters":
Remote cluster details.
"remoteCluster": "Name of the cluster"
The cluster that remotely mounts the file systems of the owning cluster
Examples
The following example shows how to get the list of the clusters that mount the file system remotely.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": 200,
"message": "..."
},
"remoteClusters": [
{
"remoteCluster": "Cluster1",
"ciphers": "[ 'AES128-SHA', 'AES256-SHA' ]",
"keyDigest": "string",
"owningClusterFilesystems": "{ {'name':'gpfs0', 'access':'rw'}, {'name':'gpfs1',
'access':'ro'} }"
}
]
}
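The following Python sketch lists the remote (accessing) clusters; the host name, port, and credentials are placeholder assumptions.
import requests

BASE = "https://gui-node.example.com:443/scalemgmt/v2"   # placeholder GUI host and port
AUTH = ("admin", "admin-password")                        # placeholder credentials

r = requests.get(f"{BASE}/remotemount/remoteclusters",
                 auth=AUTH, headers={"Accept": "application/json"}, verify=False)
r.raise_for_status()
for cluster in r.json().get("remoteClusters", []):
    print(cluster["remoteCluster"], cluster["owningClusterFilesystems"])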
Related reference
“mmremotecluster command” on page 650
Manages information about remote GPFS clusters.
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The POST remotemount/remoteclusters request registers a cluster that can mount one or more file
systems of an owning cluster. For more information about the fields in the data structures that are
returned, see “mmremotecluster command” on page 650.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/remoteclusters/
{remoteCluster}
where
remotemount/remoteclusters
Specifies the cluster that needs to be registered. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
"remoteClusters":
{
"remoteCluster": "Name of the cluster",
"ciphers": "Ciphers of the RSA key",
"key": "SHA digest of the public key",
}
"remoteClusters":
Remote cluster details.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result": {
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request": {
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
Examples
The following example shows how to register a cluster that can mount one or more file systems of an
owning cluster.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The DELETE remotemount/remoteclusters/{remoteCluster} request removes the registration of
a cluster that can mount one or more file systems of an owning cluster. For more information about the
fields in the data structures that are returned, see “mmremotecluster command” on page 650.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/remoteclusters/
{remoteCluster}
where
remotemount/remoteclusters/{remoteCluster}
Specifies the cluster for which the registration needs to be removed. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result": {
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request": {
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, and nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
Examples
The following example shows how to remove the registration of the cluster Cluster1.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/remotemount/remoteclusters/Cluster1",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET /remotemount/remoteclusters/{remoteCluster} gets the details of the cluster that mounts
file systems of an owning cluster remotely. For more information about the fields in the data structures
that are returned, see “mmremotecluster command” on page 650.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/remoteclusters/
{remoteCluster}
where
remotemount/remoteclusters/{remoteCluster}
Specifies the target of the GET request.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"remoteClusters":
{
"remoteCluster": "Name of the cluster",
"ciphers": "Ciphers of the RSA key",
"keyDigest": "SHA digest of the public key",
"owningClusterFilesystems": "Name of the file system of the owning cluster."
}
"OwningFilesystem"
{
"filesystem": "File system on the owning cluster",
"access": "Access permissions"
}
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"remoteClusters":
Remote cluster details.
"remoteCluster": "Name of the cluster"
The cluster that remotely mounts the file systems of the owning cluster
"ciphers": "Ciphers of the RSA key"
The list of ciphers of the RSA key. It sets the security mode for communications between the
current cluster and the remote cluster. For more information on the list of ciphers, see “mmauth
command” on page 96.
"keyDigest": "SHA digest of the public key"
The SHA digest of the public key of the remote cluster.
"owningClusterFilesystems": "Name of the owning cluster file systems."
The names of file system of owning cluster and the access permissions to them. Optional.
"OwningFilesystem ":
Owning file system details.
"filesystem ": "File system on the owning cluster"
The name of the owning cluster filesystem You can also use the keyword 'all' when addressing all
file systems of the owning cluster.
"access": "Access permissions"
The access permissions to the owning cluster file system.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
Examples
The following example shows how to get the details of the cluster Cluster1 that mounts file systems
remotely.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": 200,
"message": "..."
},
"remoteClusters": [
{
"remoteCluster": "Cluster1",
"ciphers": "[ 'AES128-SHA', 'AES256-SHA' ]",
"keyDigest": "string",
"owningClusterFilesystems": "{ {'name':'gpfs0', 'access':'rw'}, {'name':'gpfs1',
'access':'ro'} }"
}
]
}
Related reference
“mmremotecluster command” on page 650
Manages information about remote GPFS clusters.
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The PUT remotemount/remoteclusters/{remoteCluster} request updates registration of a cluster
that can mount one or more file systems of an owning cluster. For more information about the fields in the
data structures that are returned, see “mmremotecluster command” on page 650.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/remoteclusters/
{remoteCluster}
where
remotemount/remoteclusters/{remoteCluster}
Specifies the cluster for which the registration needs to be updated. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
"remoteClusters":
{
"remoteCluster": "Name of the cluster",
"ciphers": "Ciphers of the RSA key",
"key": "SHA digest of the public key",
}
"remoteClusters":
Remote cluster details.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result": {
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request": {
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
Examples
The following example shows how to update the registration of the cluster Cluster1.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The POST remotemount/remoteclusters/{remoteCluster}/access/{owningClusterFilesystem}
request authorizes a cluster to mount a file system remotely. For more information about the fields in the
data structures that are returned, see “mmauth command” on page 96.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/remoteclusters/
{remoteCluster}/access/{owningClusterFilesystem}
where
remotemount/remoteclusters/{remoteCluster}
Specifies the remote cluster where the file system needs to be mounted. Required.
access/{owningClusterFilesystem}
Defines the access privileges to mount the file system. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result": {
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request": {
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
Examples
The following example shows how to authorize a cluster to mount a file system remotely.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "POST",
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The PUT remotemount/remoteclusters/{remoteCluster}/access/{owningClusterFilesystem} request
changes the access permission of a cluster to mount a file system remotely. For more information about
the fields in the data structures that are returned, see “mmauth command” on page 96.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/remoteclusters/
{remoteCluster}/access/{owningClusterFilesystem}
where
remotemount/remoteclusters/{remoteCluster}
Specifies the remote cluster where the file system needs to be mounted. Required.
access/{owningClusterFilesystem}
Defines the access privileges to mount the file system. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result": {
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request": {
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
Examples
The following example shows how to change the access permission of a cluster to mount a file system
remotely.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The DELETE remotemount/remoteclusters/{remoteCluster}/deny/
{owningClusterFilesystem} request removes the access permission of a cluster to mount a file
system remotely. For more information about the fields in the data structures that are returned, see
“mmauth command” on page 96.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/remoteclusters/
{remoteCluster}/deny/{owningClusterFilesystem}
where
remotemount/remoteclusters/{remoteCluster}
Specifies the remote cluster where the file system needs to be mounted. Required.
deny/{owningClusterFilesystem}
Removes the access privileges to mount the file system. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, and nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
Examples
The following example shows how to remove the permission of a cluster to mount a file system
remotely.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/remotemount/remoteclusters/RemoteCluster1/deny/Filesystem1",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
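The following Python sketch revokes the access of RemoteCluster1 to Filesystem1, mirroring the URL in the example above; the host name, port, and credentials are placeholder assumptions.
import requests

BASE = "https://gui-node.example.com:443/scalemgmt/v2"   # placeholder GUI host and port
AUTH = ("admin", "admin-password")                        # placeholder credentials

# DELETE /remotemount/remoteclusters/{remoteCluster}/deny/{owningClusterFilesystem}
r = requests.delete(f"{BASE}/remotemount/remoteclusters/RemoteCluster1/deny/Filesystem1",
                    auth=AUTH, headers={"Accept": "application/json"}, verify=False)
print(r.status_code, r.json()["status"]["message"])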
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET /remotemount/remotefilesystems lists the remote file systems. For more information
about the fields in the data structures that are returned, see “mmremotefs command” on page 653.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/
remotefilesystems
where
remotemount/remotefilesystems
Specifies the target of the GET request.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"remoteFilesystems": [
{
"remoteFilesystem": "Remote file system name",
"owningFilesystem": "Owning file system name",
"owningCluster": "Owning cluster name",
"remoteMountPath": "Remote mount path of the file system",
"mountOptions": "Mount options",
"automount": "Status of the auto mount"
}
]
}
Examples
The following example shows how to get the list of remote file systems that belong to the owning cluster
Cluster1.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": 200,
"message": "..."
},
"remoteFilesystems": [
{
"remoteFilesystem": "filesystem1",
"owningFilesystem": "Owningfileystem1",
"owningCluster": "Cluster1",
"remoteMountPath": "string",
"mountOptions": "string",
"automount": "string"
}
]
}
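The following Python sketch lists the remote file systems; the host name, port, and credentials are placeholder assumptions.
import requests

BASE = "https://gui-node.example.com:443/scalemgmt/v2"   # placeholder GUI host and port
AUTH = ("admin", "admin-password")                        # placeholder credentials

r = requests.get(f"{BASE}/remotemount/remotefilesystems",
                 auth=AUTH, headers={"Accept": "application/json"}, verify=False)
r.raise_for_status()
for fs in r.json().get("remoteFilesystems", []):
    print(fs["remoteFilesystem"], "<-", fs["owningCluster"], "/", fs["owningFilesystem"])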
Related reference
“mmremotecluster command” on page 650
Manages information about remote GPFS clusters.
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The POST /remotemount/remotefilesystems creates a remote file system. For more information
about the fields in the data structures that are returned, see “mmremotefs command” on page 653.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/
remotefilesystems
where
remotemount/remotefilesystems
Specifies the target element that is going to be created.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
"remoteFilesystems":
{
"remoteFilesystem": "Remote file system name",
"owningFilesystem": "Owning file system name",
"owningCluster": "Owning cluster name",
"remoteMountPath": "Remote mount path of the file system",
"mountOptions": "Mount options",
"automount": "Status of the auto mount"
}
"remoteFilesystems":
Remote file system details.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result": {
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request": {
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
Examples
The following example shows how to create a remote file system.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
Related reference
“mmremotecluster command” on page 650
Manages information about remote GPFS clusters.
Availability
Available on all IBM Spectrum Scale editions.
Note: Only the users with user roles Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The DELETE /remotemount/remotefilesystems/{remoteFilesystem} deletes the details of a remote
file system. For more information about the fields in the data structures that are returned, see
“mmremotefs command” on page 653.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/
remotefilesystems/{remoteFilesystem}
where
remotemount/remotefilesystems/{remoteFilesystem}
Specifies the file system to be deleted.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"jobs": [
{
"result": {
"commands":"String",
"progress":"String",
"exitCode":"Exit code",
"stderr":"Error",
"stdout":"String"
},
"request": {
"type":"{GET | POST | PUT | DELETE}",
"url":"URL",
"data":""
},
"jobId":"ID",
"submitted":"Time",
"completed":"Time",
"status":"Job status"
}
]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, and nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
Examples
The following example shows how to delete the remote file system RemoteFilesystem1.
Request data:
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/remotemount/remotefilesystems/RemoteFilesystem1",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Related reference
“mmremotecluster command” on page 650
Manages information about remote GPFS clusters.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET /remotemount/remotefilesystems/{remoteFilesystem} gets the details of a remote file
system. For more information about the fields in the data structures that are returned, see “mmremotefs
command” on page 653.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/
remotefilesystems/{remoteFilesystem}
where
remotemount/remotefilesystems/{remoteFilesystem}
Specifies the target of the GET request.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
"status": {
"code":ReturnCode,
"message":"ReturnMessage"
},
"remoteFilesystems": [
{
"remoteFilesystem": "Remote file system name",
"owningFilesystem": "Owning file system name",
"owningCluster": "Owning cluster name",
"remoteMountPath": "Remote mount path of the file system",
"mountOptions": "Mount options",
"automount": "Status of the auto mount"
}
]
}
Examples
The following example shows how to get the details of the file system RemoteFilesystem1.
Request data:
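As an illustrative sketch (placeholder host and credentials), the query could be issued with curl:
curl -k -u admin:admin001 -X GET --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/remotemount/remotefilesystems/RemoteFilesystem1'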
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
   "status": {
      "code": 200,
      "message": "..."
   },
   "remoteFilesystems": [
      {
         "remoteFilesystem": "RemoteFilesystem1",
         "owningFilesystem": "OwningFilesystem1",
         "owningCluster": "Cluster1",
         "remoteMountPath": "string",
         "mountOptions": "string",
         "automount": "string"
      }
   ]
}
Related reference
“mmremotecluster command” on page 650
Manages information about remote GPFS clusters.
Availability
Available on all IBM Spectrum Scale editions.
Note: Only users with the user role Administrator or Container Operator have permission to use this REST
API endpoint.
Description
The PUT /remotemount/remotefilesystems/{remoteFilesystem} request updates the details of a remote file
system. For more information about the fields in the data structures that are returned, see “mmremotefs
command” on page 653.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/remotemount/
remotefilesystems/{remoteFilesystem}
where
remotemount/remotefilesystems/{remoteFilesystem}
Specifies the remote file system to be updated.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
"remoteFilesystems": [
{
"remoteFilesystem": "Remote file system name",
"owningFilesystem": "Owning file system name",
"owningCluster": "Owning cluster name",
"remoteMountPath": "Remote mount path of the file system",
"mountOptions": "Mount options",
"remoteFilesystems":
Remote file system details.
"remoteFilesystem": "Remote file system name"
The name of the remote file system on the remote cluster.
"owningFilesystem": "Owning file system name"
The name of the file system on the owning cluster.
"owningCluster": "Owning cluster name"
The owning cluster of the remote file system.
"remoteMountPath": "Remote mount path of the file system"
The remote mount path of the remote file system.
"mountOptions": "Mount options"
The mount options of the remote file system.
"automount": "Status of the auto mount"
The automount status of the remote file system.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
Examples
The following example shows how to update the remote file system RemoteFilesystem1.
Request data:
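A sketch of the update with curl; the host, credentials, and the field values in the body are illustrative only, and the body shape follows the request data schema shown above:
curl -k -u admin:admin001 -X PUT --header 'accept:application/json' --header 'content-type:application/json' -d '{ "owningFilesystem": "Owningfilesystem1", "owningCluster": "Cluster1", "remoteMountPath": "/remote/mnt", "mountOptions": "rw", "automount": "yes" }' 'https://198.51.100.1:443/scalemgmt/v2/remotemount/remotefilesystems/RemoteFilesystem1'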
Response data:
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "PUT",
"url": "/scalemgmt/v2/remotemount/remotefilesystems/RemoteFilesystem1",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Related reference
“mmremotecluster command” on page 650
Manages information about remote GPFS clusters.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET smb/shares request gets information about SMB shares. For more information about the fields
in the data structures that are returned, see “mmsmb command” on page 698.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/smb/shares
where
smb/shares
Specifies SMB shares as the resource. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
The following list of attributes is available in the response data:
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "paging": {
      "next": "URL"
Examples
The following example gets information about the SMB shares that are configured in the system.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
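An illustrative curl call (placeholder host and credentials):
curl -k -u admin:admin001 -X GET --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/smb/shares'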
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"smbShares" : [ {
"config" : {
"shareName" : "share01"
}
} ],
"status" : {
"code" : 200,
"message" : "The request finished successfully"
}
}
Using the field parameter ":all:" returns the full details of the SMB shares. For example:
{
"status": {
"code": "200",
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/smb/shares/smbShare1?lastId=1001"
},
"SmbShares": [
{
"shareName": "smbShare1",
"filesystemName": "gpfs0",
"filesetName": "fset1",
"path": "/mnt/gpfs0/fset1"
},
"smbOptions": {
"browseable": "yes",
"smbEncrypt": "auto",
"adminUsers": "admin",
"comment": "This is a comment",
"cscPolicy": "manual",
"fileIdAlgorithm": "fsname",
"gpfsLeases": "yes",
"gpfsRecalls": "yes",
"gpfsShareModes": "yes",
"gpfsSyncIo": "yes",
"hideUnreadable": "yes",
"opLocks": "yes",
"posixLocking": "yes",
"readOnly": "yes",
"syncOpsOnClose": "yes"
"hideDotFiles": "yes"
}
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET smb/shares/shareName request gets information about the specified SMB share. For more
information about the fields in the data structures that are returned, see “mmsmb command” on page
698.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/smb/shares/shareName
where
smb/shares
Specifies the SMB share as the resource. Required.
shareName
Specifies the SMB share about which you need to get the details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
The following list of attributes is available in the response data:
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "paging": {
Examples
The following example gets information about the SMB share share1.
Request URL:
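For illustration (placeholder host and credentials), the request could be issued as:
curl -k -u admin:admin001 -X GET --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/smb/shares/share1'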
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/smb/shares/smbShare1?lastId=1001"
},
"SmbShares": [
{
"shareName": "smbShare1",
"filesystemName": "gpfs0",
"filesetName": "fset1",
"path": "/mnt/gpfs0/fset1"
},
"smbOptions": {
"browseable": "yes",
"smbEncrypt": "auto",
"adminUsers": "admin",
"comment": "This is a comment",
"cscPolicy": "manual",
"fileIdAlgorithm": "fsname",
"gpfsLeases": "yes",
"gpfsRecalls": "yes",
"gpfsShareModes": "yes",
"gpfsSyncIo": "yes",
"hideUnreadable": "yes",
"opLocks": "yes",
"posixLocking": "yes",
"readOnly": "yes",
"syncOpsOnClose": "yes"
"hideDotFiles": "yes"
}
}
]
}
Related reference
“mmsmb command” on page 698
Administers SMB shares, export ACLs, and global configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST smb/shares request creates a new SMB share. For more information about the fields in the
data structures that are returned, see “mmsmb command” on page 698.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/smb/shares/shareName
/scalemgmt/v2/smb/shares
where
smb/shares
Specifies SMB share as the target. Required.
Request headers
Accept: application/json
Request data
The following list of attributes is available in the request data:
{ "shareName": "ShareName"
"path": "Path",
"smbOptions"
{
"browseable":""Browseable"
"smbEncrypt": "{auto | default | mandatory | disabled}"
"adminUsers": "Users"
"comment": "Comments"
"cscPolicy": "Client-side caching policy"
"fileIdAlgorithm": "fsname | hostname | fsnamenodirs |
fsnamenorootdir"
"gpfsLeases": "{yes | no}"
"gpfsRecalls": "{yes | no}"
"gpfsShareModes,": "{yes | no}"
"gpfsSyncIo": "{yes | no}"
"hideUnreadable": "{yes | no}"
"opLocks": "{yes | no}"
"posixLocking": "{yes | no}"
"readOnly": "{yes | no}"
"syncOpsOnClose": "{yes | no}"
"hideDotFiles": "yes | no"
}
}
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
Examples
The following example creates a new SMB share.
Request data:
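A hedged curl sketch of the creation call; the host, credentials, and option values are illustrative, and the body follows the request data schema above:
curl -k -u admin:admin001 -X POST --header 'accept:application/json' --header 'content-type:application/json' -d '{ "shareName": "smbShare", "path": "/mnt/gpfs0/fset1", "smbOptions": { "browseable": "yes" } }' 'https://198.51.100.1:443/scalemgmt/v2/smb/shares'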
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {},
"request": {
"type": "POST",
"url": "/scalemgmt/v2/smb/shares",
"data": "{
"shareName": "smbShare",
"path": "/mnt/gpfs0/fset1",
"smbOptions": {
"browseable": "yes",
"smbEncrypt": "auto",
"adminUsers": "admin",
"comment": "This is a comment",
"cscPolicy": "manual",
"fileIdAlgorithm": "fsname",
"gpfsLeases": "yes",
"gpfsRecalls": "yes",
"gpfsShareModes": "yes",
"gpfsSyncIo": "yes",
"hideUnreadable": "yes",
"opLocks": "yes",
"posixLocking": "yes",
"readOnly": "yes",
"syncOpsOnClose": "yes",
"hideDotFiles": "yes"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Related reference
“mmsmb command” on page 698
Administers SMB shares, export ACLs, and global configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT smb/shares/shareName request modifies an existing SMB share. For more information
about the fields in the data structures that are returned, see “mmsmb command” on page 698.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/smb/shares/shareName
where
smb/shares
Specifies SMB share as the resource. Required.
shareName
Specifies the SMB share to be modified. Required.
Request headers
Accept: application/json
Request data
The following list of attributes is available in the request data:
{ "shareName": "ShareName"
"path": "Path",
"smbOptions"
{
"browseable":""Browseable"
"smbEncrypt": "{auto | default | mandatory | disabled}"
"adminUsers": "Users"
"comment": "Comments"
"cscPolicy": "Client-side caching policy"
"fileIdAlgorithm": "fsname | hostname | fsnamenodirs |
fsnamenorootdir"
"gpfsLeases": "{yes | no}"
"gpfsRecalls": "{yes | no}"
"gpfsShareModes,": "{yes | no}"
"gpfsSyncIo": "{yes | no}"
"hideUnreadable": "{yes | no}"
"opLocks": "{yes | no}"
"posixLocking": "{yes | no}"
"readOnly": "{yes | no}"
"syncOpsOnClose": "{yes | no}"
"hideDotFiles": "yes | no"
}
}
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
Examples
The following example modifies the SMB share smbShare1.
Request data:
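A curl sketch of the modification; host, credentials, and the option value shown are placeholders, and the body follows the request data schema above:
curl -k -u admin:admin001 -X PUT --header 'accept:application/json' --header 'content-type:application/json' -d '{ "smbOptions": { "readOnly": "yes" } }' 'https://198.51.100.1:443/scalemgmt/v2/smb/shares/smbShare1'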
Response data:
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {},
"request": {
"type": "PUT",
"url": "/scalemgmt/v2/smb/shares/smbShare1",
"data": "{
"shareName": "smbShare",
"path": "/mnt/gpfs0/fset1",
"smbOptions": {
"browseable": "yes",
"smbEncrypt": "auto",
"adminUsers": "admin",
"comment": "This is a comment",
"cscPolicy": "manual",
"fileIdAlgorithm": "fsname",
"gpfsLeases": "yes",
"gpfsRecalls": "yes",
"gpfsShareModes": "yes",
"gpfsSyncIo": "yes",
"hideUnreadable": "yes",
"opLocks": "yes",
"posixLocking": "yes",
"readOnly": "yes",
"syncOpsOnClose": "yes",
"hideDotFiles": "yes"
},
"removeSmbOptions": [
"string"
]}"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Related reference
“mmsmb command” on page 698
Administers SMB shares, export ACLs, and global configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE smb/shares/shareName command deletes the specified SMB share. For more
information, see “mmsmb command” on page 698.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/smb/shares/shareName
where:
smb/shares
Specifies the SMB share as the resource. Required.
shareName
Specifies the SMB share to be deleted. Required.
Request headers
Accept: application/json
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
Examples
The following example deletes the SMB share share1.
Request data:
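For illustration (placeholder host and credentials):
curl -k -u admin:admin001 -X DELETE --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/smb/shares/share1'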
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000008,
"status" : "RUNNING",
"submitted" : "2017-08-24 04:45:34,034",
"completed" : "N/A",
"request" : {
"type" : "DELETE",
Related reference
“mmsmb command” on page 698
Administers SMB shares, export ACLs, and global configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE smb/shares/shareName/acl command deletes the ACLs of the specified SMB share. For
more information, see “mmsmb command” on page 698.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/smb/shares/shareName/acl
where:
smb/shares
Specifies the SMB share as the resource. Required.
shareName/acl
Specifies the SMB share of which the ACL must be deleted. Required.
Request headers
Accept: application/json
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
Examples
The following example deletes the ACL of the SMB share share1.
Request data:
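An illustrative curl call (placeholder host and credentials):
curl -k -u admin:admin001 -X DELETE --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/smb/shares/share1/acl'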
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000008,
"status" : "RUNNING",
"submitted" : "2017-08-24 04:45:34,034",
"completed" : "N/A",
"request" : {
"type" : "DELETE",
Related reference
“mmsmb command” on page 698
Administers SMB shares, export ACLs, and global configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET smb/shares/shareName/acl request gets information about the ACLs that are applicable to
the specified SMB share. For more information about the fields in the data structures that are returned,
see “mmsmb command” on page 698.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/smb/shares/shareName/acl
where
smb/shares
Specifies the SMB share as the resource. Required.
shareName/acl
Specifies the SMB share for which you need to get the ACL details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
The following list of attributes is available in the response data:
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "paging": {
Examples
The following example gets information about the ACL of the SMB share share1.
Request URL:
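As a sketch (placeholder host and credentials), the request could be issued with curl:
curl -k -u admin:admin001 -X GET --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/smb/shares/share1/acl'
Response data: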
{
"status": {
"code": 200,
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/smb/shares/share1/acl?lastId=10001",
"fields": "",
"filter": "",
"baseUrl": "smb/shares/share1/acl",
"lastId": 10001
},
"entries": [
{
"shareName": "myshare",
"name": "Domain1\\User3",
"access": "ALLOWED",
"permissions": "FULL",
"type": "USER"
}
]
}
Related reference
“mmsmb command” on page 698
Administers SMB shares, export ACLs, and global configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE smb/shares/shareName/acl/name command deletes the specified ACL entry of an SMB share.
For more information, see “mmsmb command” on page 698.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/smb/shares/shareName/acl/
name
where:
smb/shares
Specifies the SMB share as the resource. Required.
shareName/acl/name
Specifies the ACL entry to be deleted. Required.
Request headers
Accept: application/json
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
Examples
The following example deletes the ACL entry Domain1\\User3.
Request data:
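For illustration, with a placeholder host and credentials, and with the backslash in the entry name URL-encoded as %5C:
curl -k -u admin:admin001 -X DELETE --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/smb/shares/share1/acl/Domain1%5CUser3'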
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"jobs" : [ {
"jobId" : 1000000000008,
"status" : "RUNNING",
"submitted" : "2017-08-24 04:45:34,034",
"completed" : "N/A",
"request" : {
"type" : "DELETE",
Related reference
“mmsmb command” on page 698
Administers SMB shares, export ACLs, and global configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET smb/shares/shareName/acl/name request gets information about the specified ACL entry of
an SMB share. For more information about the fields in the data structures that
are returned, see “mmsmb command” on page 698.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/smb/shares/shareName/acl/
name
where
smb/shares
Specifies the SMB share as the resource. Required.
shareName/acl/name
Specifies the ACL about which you need the details. Required.
Request headers
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
The following list of attributes is available in the response data:
{
   "status": {
      "code": ReturnCode,
Examples
The following example gets information about the ACL of the SMB share share1.
Request URL:
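An illustrative curl call (placeholder host and credentials; the backslash in the entry name is URL-encoded as %5C):
curl -k -u admin:admin001 -X GET --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/smb/shares/share1/acl/Domain1%5CUser3'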
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": 200,
"message": "..."
},
"paging": {
"next": "https://localhost:443/scalemgmt/v2/smb/shares/share1/acl?lastId=10001",
"fields": "",
"filter": "",
"baseUrl": "smb/shares/share1/acl/Domain1\\User3",
"lastId": 10001
},
"entries": [
{
"shareName": "share1",
"name": "Domain1\\User3",
"access": "ALLOWED",
"permissions": "FULL",
"type": "USER"
}
]
}
Related reference
“mmsmb command” on page 698
Administers SMB shares, export ACLs, and global configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The PUT smb/shares/shareName/acl/name request adds an entry to the ACL of an SMB share. For
more information about the fields in the data structures that are returned, see “mmsmb command” on
page 698.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/smb/shares/shareName/acl/
name
where
smb/shares
Specifies SMB share as the resource. Required.
shareName/acl/name
Specifies the ACL entry to be modified. Required.
Request headers
Accept: application/json
Request data
The following list of attributes is available in the request data:
{
   "shareName": "Share name",
   "name": "ACL name",
   "access": "{Allowed | Denied}",
   "permissions": "Access permissions",
   "type": "{User | Group | System | UID}"
}
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
Examples
The following example modifies the ACL entry Domain1\\User3 of the SMB share smbShare1.
Request data:
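A curl sketch of the call; the host, credentials, and URL encoding of the entry name (%5C for the backslash) are illustrative, and the body follows the request data schema above:
curl -k -u admin:admin001 -X PUT --header 'accept:application/json' --header 'content-type:application/json' -d '{ "access": "ALLOWED", "permissions": "FULL", "type": "USER" }' 'https://198.51.100.1:443/scalemgmt/v2/smb/shares/smbShare1/acl/Domain1%5CUser3'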
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {},
"request": {
"type": "PUT",
"url": "/scalemgmt/v2/smb/shares/smbShare1/Domain1\\User3",
"data": "{
"shareName": "myshare",
"name": "Domain1\\User3",
"access": "ALLOWED",
"permissions": "FULL",
"type": "USER"
}",
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Related reference
“mmsmb command” on page 698
Administers SMB shares, export ACLs, and global configuration.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET thresholds request gets a list of all threshold rules that are configured in the system. For
more information about the fields in the data structures that are returned, see “mmhealth command” on
page 435.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/thresholds
where
thresholds
Specifies the threshold rules that are configured in the cluster as the resource of this GET call.
Required.
Request headers
Content-Type: application/json
Accept: application/json
Request data
No request data.
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
"status": {
"code": "200",
"message": "..."
},
"thresholds": [
{
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"thresholds":
"ruleName":"Name"
The name of the threshold rule.
"frequency":"Time"
The frequency at which the threshold values are evaluated.
"tags": "tags"
Maps events to components such as pool_data, pool_meta-data, fset_inode, and thresholds.
"userActionWarn": "Warning message"
A user-defined message that is included in the warning message.
"userActionError": "Error message"
A user-defined message that is included in the error message.
"type": "metric | measurement"
The type of the threshold, either metric or measurement.
"metric": "Metric name"
The metric for which the threshold rule is defined.
"metricOp": "sum | avg | min | max | rate"
The metric operation or the aggregator that is used for the threshold rule.
"sensitivity": "Sensitivity"
The sample interval value in seconds.
"computation": "Computation criteria"
A rule consists of a computation element that performs a computation on the collected data. There are
four computation criteria defined: value, stats (max, min, median, and percentile), zimonstatus,
and gpfsCapacityUtil. Currently, only 'value' is supported.
"duration": "Collection duration"
Duration of collection time (in seconds).
"filterBy": "Filter by"
Filters the result based on the filter key.
Examples
The following example gets the list of threshold rules that are defined in the system.
Request URL:
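For illustration (placeholder host and credentials):
curl -k -u admin:admin001 -X GET --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/thresholds'
Response data: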
{
"status": {
"code": "200",
"message": "..."
},
"thresholds": [
{
"ruleName": "string",
"frequency": "300",
"tags": "300",
"userActionWarn": "The cpu usage has exceeded the warning level. ",
"userActionError": "The cpu usage has exceeded the threshold level. ",
"type": "metric",
"metric": "cpu_user",
"metricOp": "avg",
"sensitivity": "300",
"computation": "value",
"duration": "300",
"filterBy": "gpfs_fs_name=gpfs0",
"groupBy": "gpfs_fs_name=gpfs0",
"errorLevel": "90.0",
"warnLevel": "80.0",
"direction": "high",
"hysteresis": "5.0"
}
]
}
Related reference
“mmhealth command” on page 435
Monitors health status of nodes.
Availability
Available on all IBM Spectrum Scale editions.
Description
The POST thresholds request creates a threshold rule that defines threshold levels for data that is
collected through performance monitoring sensors. For more information about the fields in the data
structures that are returned, see “mmhealth command” on page 435.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/thresholds
where
thresholds
Specifies thresholds as the target of the operation. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
{
   "ruleName": "Name",
   "frequency": "Frequency",
   "tags": "Tags",
   "userActionWarn": "Warning message",
   "userActionError": "Error message",
   "type": "metric",
   "metric": "Metric name",
   "metricOp": "Metric operation",
   "sensitivity": "Sensitivity",
   "computation": "value",
   "duration": "Duration",
   "filterBy": "Filter by",
   "groupBy": "Group by",
   "errorLevel": "Error level",
   "warnLevel": "Warning level",
   "direction": "Direction",
   "hysteresis": "Hysteresis"
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"ruleName":"Name"
The name of the threshold rule.
"frequency":"Time"
The frequency at which the threshold values are evaluated.
"tags": "tags"
Maps events to components such as pool_data, pool_meta-data, fset_inode, and thresholds.
"userActionWarn": "Warning message"
A user-defined message that is included in the warning message.
"userActionError": "Error message"
A user-defined message that is included in the error message.
"type": "metric | measurement"
The type of the threshold, either metric or measurement.
"metric": "Metric name"
The metric for which the threshold rule is defined.
"metricOp": "sum | avg | min | max | rate"
The metric operation or the aggregator that is used for the threshold rule.
"sensitivity": "Sensitivity"
The sample interval value in seconds.
"computation": "Computation criteria"
A rule consists of a computation element that performs a computation on the collected data. There are
four computation criteria defined: value, stats (max, min, median, and percentile), zimonstatus, and
gpfsCapacityUtil. Currently, only 'value' is supported.
"duration": "Collection duration"
Duration of collection time (in seconds).
"filterBy": "Filter by"
Filters the result based on the filter key.
"groupBy": "Group by"
Groups the result based on the group key.
"errorLevel": "Error level"
The threshold error limit that is defined for the metric. The value can be a percentage or an integer,
depending on the metric on which the threshold value is being set.
"warnLevel": "Warning level"
The threshold warning limit that is defined for the metric. The value can be a percentage or an integer,
depending on the metric on which the threshold value is being set.
"direction": "High | Low"
The direction for the threshold limit.
"hysteresis": "Hysterisis"
The percentage that the observed value must be above or below the current threshold level to switch
back to the previous state.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
Examples
The following API command creates a threshold rule rule1.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
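A curl sketch of the rule creation; host, credentials, and the body values are illustrative, and the body follows the request data schema above:
curl -k -u admin:admin001 -X POST --header 'accept:application/json' --header 'content-type:application/json' -d '{ "ruleName": "rule1", "metric": "cpu_user", "metricOp": "avg", "sensitivity": "300", "warnLevel": "80.0", "errorLevel": "90.0", "direction": "high" }' 'https://198.51.100.1:443/scalemgmt/v2/thresholds'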
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "POST",
"url": "/scalemgmt/v2/thresholds",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Availability
Available on all IBM Spectrum Scale editions.
Description
The DELETE thresholds request deletes a specific threshold rule. A threshold rule defines threshold
levels for data that is collected through performance monitoring sensors. For more information about the
fields in the data structures that are returned, see “mmhealth command” on page 435.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/thresholds/name
where
thresholds/name
Specifies the threshold rule to be deleted. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request parameters
The following parameters can be used in the request URL to customize the request:
Request data
No request data.
Response data
{
   "status": {
      "code": ReturnCode,
      "message": "ReturnMessage"
   },
   "jobs": [
      {
         "result": {
            "commands": "String",
            "progress": "String",
            "exitCode": "Exit code",
            "stderr": "Error",
            "stdout": "String"
         },
         "request": {
            "type": "{GET | POST | PUT | DELETE}",
            "url": "URL",
            "data": ""
         },
         "jobId": "ID",
         "submitted": "Time",
         "completed": "Time",
         "status": "Job status"
      }
   ]
}
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"paging"
The URL to retrieve the next page. Paging is enabled when more than 1000 objects are returned by
the query.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"jobs":
An array of elements that describe jobs. Each element describes one job.
"result"
"commands":"String'
Array of commands that are run in this job.
"progress":"String'
Progress information for the request.
"exitCode":"Exit code"
Exit code of command. Zero is success, nonzero denotes failure.
"stderr":"Error"
CLI messages from stderr.
"stdout":"String"
CLI messages from stdout.
"request"
"type":"{GET | POST | PUT | DELETE}"
HTTP request type.
"url":"URL"
The URL through which the job is submitted.
"data":" "
Optional.
"jobId":"ID",
The unique ID of the job.
Examples
The following API command deletes the threshold rule rule1.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
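For illustration (placeholder host and credentials):
curl -k -u admin:admin001 -X DELETE --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/thresholds/rule1'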
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"job": [
{
"result": {
"commands": "[''mmcrfileset gpfs0 restfs1001'', ...]",
"progress": "[''(2/3) Linking fileset'']",
"exitCode": "0",
"stderr": "[''EFSSG0740C There are not enough resources available to create
a new independent file set.'', ...]",
"stdout": "[''EFSSG4172I The file set {0} must be independent.'', ...]"
},
"request": {
"type": "DELETE",
"url": "/scalemgmt/v2/thresholds/rule1",
"data": "nodesDesc": "[ 'mari-16:manager-quorum', 'mari-17::mari-17_admin' ]"
},
"jobId": "12345",
"submitted": "2016-11-14 10.35.56",
"completed": "2016-11-14 10.35.56",
"status": "COMPLETED"
}
]
}
Related reference
“mmhealth command” on page 435
Monitors health status of nodes.
Availability
Available on all IBM Spectrum Scale editions.
Description
The GET thresholds/{name} request gets details of the specified threshold rule. For more information
about the fields in the data structures that are returned, see “mmhealth command” on page 435.
Request URL
https://<IP address or host name of API server>:<port>/scalemgmt/v2/thresholds/name
where
thresholds/name
Specifies the threshold rule about which you need the details. Required.
Request headers
Content-Type: application/json
Accept: application/json
Request data
No request data.
Request parameters
The following parameters can be used in the request URL to customize the request:
Response data
{
"status": {
"code": "200",
"message": "..."
},
"thresholds": [
{
"ruleName": "Name",
For more information about the fields in the following data structures, see the links at the end of this
topic.
"status":
Return status.
"message": "ReturnMessage",
The return message.
"code": ReturnCode
The return code.
"thresholds":
"ruleName":"Name"
The name of the threshold rule.
"frequency":"Time"
The frequency at which the threshold values are evaluated.
"tags": "tags"
Maps events to components such as pool_data, pool_meta-data, fset_inode, and thresholds.
"userActionWarn": "Warning message"
A user-defined message that is included in the warning message.
"userActionError": "Error message"
A user-defined message that is included in the error message.
"type": "metric | measurement"
The type of the threshold, either metric or measurement.
"metric": "Metric name"
The metric for which the threshold rule is defined.
"metricOp": "sum | avg | min | max | rate"
The metric operation or the aggregator that is used for the threshold rule.
"sensitivity": "Sensitivity"
The sample interval value in seconds.
"computation": "Computation criteria"
A rule consists of a computation element that performs a computation on the collected data. There are
four computation criteria defined: value, stats (max, min, median, and percentile), zimonstatus,
and gpfsCapacityUtil. Currently, only 'value' is supported.
"duration": "Collection duration"
Duration of collection time (in seconds).
"filterBy": "Filter by"
Filters the result based on the filter key.
"groupBy": "Group by"
Groups the result based on the group key.
Examples
The following example gets details of the threshold rule rule1.
Request URL:
The request URL with no field or filter parameter returns only the details that uniquely identify the object.
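An illustrative curl call (placeholder host and credentials):
curl -k -u admin:admin001 -X GET --header 'accept:application/json' 'https://198.51.100.1:443/scalemgmt/v2/thresholds/rule1'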
Response data:
Note: In the JSON data that is returned, the return code indicates whether the command is successful.
The response code 200 indicates that the command successfully retrieved the information. Error code
400 represents an invalid request and 500 represents internal server error.
{
"status": {
"code": "200",
"message": "..."
},
"thresholds": [
{
"ruleName": "rule1",
"frequency": "300",
"tags": "300",
"userActionWarn": "The cpu usage has exceeded the warning level. ",
"userActionError": The cpu usage has exceeded the threshold level.",
"type": "metric",
"metric": "cpu_user",
"metricOp": "avg",
"sensitivity": "300",
"computation": "value",
"duration": "300",
"filterBy": "gpfs_fs_name=gpfs0",
"groupBy": "gpfs_fs_name=gpfs0",
"errorLevel": "90.0",
"warnLevel": "80.0",
"direction": "high",
"hysteresis": "5.0"
}
]
}
Related reference
“mmhealth command” on page 435
Monitors health status of nodes.
Accessibility features
The following list includes the major accessibility features in IBM Spectrum Scale:
• Keyboard-only operation
• Interfaces that are commonly used by screen readers
• Keys that are discernible by touch but do not activate just by touching them
• Industry-standard devices for ports and connectors
• The attachment of alternative input and output devices
IBM Knowledge Center, and its related publications, are accessibility-enabled. The accessibility features
are described in IBM Knowledge Center (www.ibm.com/support/knowledgecenter).
Keyboard navigation
This product uses standard Microsoft Windows navigation keys.
IBM Director of Licensing IBM Corporation North Castle Drive, MD-NC119 Armonk, NY 10504-1785 US
For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual
Property Department in your country or send inquiries, in writing, to:
Intellectual Property Licensing Legal and Intellectual Property Law IBM Japan Ltd. 19-21, Nihonbashi-
Hakozakicho, Chuo-ku Tokyo 103-8510, Japan
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in
certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:
IBM Director of Licensing IBM Corporation North Castle Drive, MD-NC119 Armonk, NY 10504-1785 US
Such information may be available, subject to appropriate terms and conditions, including in some cases,
payment of a fee.
The licensed program described in this document and all licensed material available for it are provided by
IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any
equivalent agreement between us.
The performance data discussed herein is presented as derived under specific operating conditions.
Actual results may vary.
Information concerning non-IBM products was obtained from the suppliers of those products, their
published announcements or other publicly available sources. IBM has not tested those products and
Each copy or any portion of these sample programs or any derivative work must include
a copyright notice as follows:
If you are viewing this information softcopy, the photographs and color illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
Copyright and trademark information at www.ibm.com/legal/copytrade.shtml.
Intel is a trademark of Intel Corporation or its subsidiaries in the United States and other countries.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or
its affiliates.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or
both.
UNIX is a registered trademark of the Open Group in the United States and other countries.
Applicability
These terms and conditions are in addition to any terms of use for the IBM website.
Personal use
You may reproduce these publications for your personal, noncommercial use provided that all proprietary
notices are preserved. You may not distribute, display or make derivative work of these publications, or
any portion thereof, without the express consent of IBM.
Commercial use
You may reproduce, distribute and display these publications solely within your enterprise provided that
all proprietary notices are preserved. You may not make derivative works of these publications, or
reproduce, distribute or display these publications or any portion thereof outside your enterprise, without
the express consent of IBM.
Rights
Except as expressly granted in this permission, no other permissions, licenses or rights are granted, either
express or implied, to the publications or any information, data, software or other intellectual property
contained therein.
IBM reserves the right to withdraw the permissions granted herein whenever, in its discretion, the use of
the publications is detrimental to its interest or, as determined by IBM, the above instructions are not
being properly followed.
You may not download, export or re-export this information except in full compliance with all applicable
laws and regulations, including all United States export laws and regulations.
IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE PUBLICATIONS ARE
PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED,
INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT,
AND FITNESS FOR A PARTICULAR PURPOSE.
Glossary
This glossary provides terms and definitions for IBM Spectrum Scale.
The following cross-references are used in this glossary:
• See refers you from a nonpreferred term to the preferred term or from an abbreviation to the spelled-
out form.
• See also refers you to a related or contrasting term.
For other terms and definitions, see the IBM Terminology website (www.ibm.com/software/globalization/
terminology) (opens in new window).
B
block utilization
The measurement of the percentage of used subblocks per allocated blocks.
C
cluster
A loosely coupled collection of independent systems (nodes) organized into a network for the purpose
of sharing resources and communicating with each other. See also GPFS cluster.
cluster configuration data
The configuration data that is stored on the cluster configuration servers.
Cluster Export Services (CES) nodes
A subset of nodes configured within a cluster to provide a solution for exporting GPFS file systems by
using the Network File System (NFS), Server Message Block (SMB), and Object protocols.
cluster manager
The node that monitors node status using disk leases, detects failures, drives recovery, and selects
file system managers. The cluster manager must be a quorum node. The selection of the cluster
manager node favors the quorum-manager node with the lowest node number among the nodes that
are operating at that particular time.
Note: The cluster manager role is not moved to another node when a node with a lower node number
becomes active.
clustered watch folder
Provides a scalable and fault-tolerant method for watching file system activity within an IBM Spectrum Scale
file system. A clustered watch folder can watch file system activity on a fileset, inode space, or an
entire file system. Events are streamed to an external Kafka sink cluster in an easy-to-parse JSON
format. For more information, see the mmwatch command in the IBM Spectrum Scale: Command and
Programming Reference.
control data structures
Data structures needed to manage file data and metadata cached in memory. Control data structures
include hash tables and link pointers for finding cached data; lock states and tokens to implement
distributed locking; and various flags and sequence numbers to keep track of updates to the cached
data.
D
Data Management Application Program Interface (DMAPI)
The interface defined by the Open Group's XDSM standard as described in the publication System
Management: Data Storage Management (XDSM) API Common Application Environment (CAE)
Specification C429, The Open Group ISBN 1-85912-190-X.
E
ECKD
See extended count key data (ECKD).
ECKD device
See extended count key data device (ECKD device).
encryption key
A mathematical value that allows components to verify that they are in communication with the
expected server. Encryption keys are based on a public or private key pair that is created during the
installation process. See also file encryption key, master encryption key.
extended count key data (ECKD)
An extension of the count-key-data (CKD) architecture. It includes additional commands that can be
used to improve performance.
extended count key data device (ECKD device)
A disk storage device that has a data transfer rate faster than some processors can utilize and that is
connected to the processor through use of a speed matching buffer. A specialized channel program is
needed to communicate with such a device. See also fixed-block architecture disk device.
F
failback
Cluster recovery from failover following repair. See also failover.
failover
(1) The assumption of file system duties by another node when a node fails. (2) The process of
transferring all control of the ESS to a single cluster in the ESS when the other clusters in the ESS fail.
See also cluster. (3) The routing of all transactions to a second controller when the first controller fails.
See also cluster.
failure group
A collection of disks that share common access paths or adapter connections, and could all become
unavailable through a single hardware failure.
FEK
See file encryption key.
G
global snapshot
A snapshot of an entire GPFS file system.
GPFS cluster
A cluster of nodes defined as being available for use by GPFS file systems.
GPFS portability layer
The interface module that each installation must build for its specific hardware platform and Linux
distribution.
GPFS recovery log
A file that contains a record of metadata activity and exists for each node of a cluster. In the event of a
node failure, the recovery log for the failed node is replayed, restoring the file system to a consistent
state and allowing other nodes to continue working.
I
ill-placed file
A file assigned to one storage pool but having some or all of its data in a different storage pool.
ill-replicated file
A file with contents that are not correctly replicated according to the desired setting for that file. This
situation occurs in the interval between a change in the file's replication settings or suspending one of
its disks, and the restripe of the file.
independent fileset
A fileset that has its own inode space.
indirect block
A block containing pointers to other blocks.
inode
The internal structure that describes the individual files in the file system. There is one inode for each
file.
inode space
A collection of inode number ranges reserved for an independent fileset, which enables more efficient
per-fileset functions.
ISKLM
IBM Security Key Lifecycle Manager. For GPFS encryption, the ISKLM is used as an RKM server to
store MEKs.
J
journaled file system (JFS)
A technology designed for high-throughput server environments, which are important for running
intranet and other high-performance e-business file servers.
junction
A special directory entry that connects a name in a directory of one fileset to the root directory of
another fileset.
K
kernel
The part of an operating system that contains programs for such tasks as input/output, management
and control of hardware, and the scheduling of user tasks.
M
master encryption key (MEK)
A key used to encrypt other keys. See also encryption key.
MEK
See master encryption key.
metadata
Data structures that contain information that is needed to access file data. Metadata includes inodes,
indirect blocks, and directories. Metadata is not accessible to user applications.
metanode
The one node per open file that is responsible for maintaining file metadata integrity. In most cases,
the node that has had the file open for the longest period of continuous time is the metanode.
mirroring
The process of writing the same data to multiple disks at the same time. The mirroring of data
protects it against data loss within the database or within the recovery log.
N
namespace
Space reserved by a file system to contain the names of its objects.
Network File System (NFS)
A protocol, developed by Sun Microsystems, Incorporated, that allows any host in a network to gain
access to another host or netgroup and their file directories.
Network Shared Disk (NSD)
A component for cluster-wide disk naming and access.
NSD volume ID
A unique 16-digit hex number that is used to identify and access all NSDs.
node
An individual operating-system image within a cluster. Depending on the way in which the computer
system is partitioned, it may contain one or more nodes.
node descriptor
A definition that indicates how GPFS uses a node. Possible functions include: manager node, client
node, quorum node, and nonquorum node.
node number
A number that is generated and maintained by GPFS as the cluster is created, and as nodes are added
to or deleted from the cluster.
node quorum
The minimum number of nodes that must be running in order for the daemon to start.
node quorum with tiebreaker disks
A form of quorum that allows GPFS to run with as little as one quorum node available, as long as there
is access to a majority of the quorum disks.
non-quorum node
A node in a cluster that is not counted for the purposes of quorum determination.
Non-Volatile Memory Express® (NVMe)
An interface specification that allows host software to communicate with non-volatile memory
storage media.
P
policy
A list of file-placement, service-class, and encryption rules that define characteristics and placement
of files. Several policies can be defined within the configuration, but only one policy set is active at one
time.
policy rule
A programming statement within a policy that defines a specific action to be performed.
pool
A group of resources with similar characteristics and attributes.
portability
The ability of a programming language to compile successfully on different operating systems without
requiring changes to the source code.
primary GPFS cluster configuration server
In a GPFS cluster, the node chosen to maintain the GPFS cluster configuration data.
private IP address
An IP address used to communicate on a private network.
public IP address
An IP address used to communicate on a public network.
Q
quorum node
A node in the cluster that is counted to determine whether a quorum exists.
quota
The amount of disk space and number of inodes assigned as upper limits for a specified user, group of
users, or fileset.
quota management
The allocation of disk blocks to the other nodes writing to the file system, and comparison of the
allocated space to quota limits at regular intervals.
R
Redundant Array of Independent Disks (RAID)
A collection of two or more disk physical drives that present to the host an image of one or more
logical disk drives. In the event of a single physical device failure, the data can be read or regenerated
from the other disk drives in the array due to data redundancy.
recovery
The process of restoring access to file system data when a failure has occurred. Recovery can involve
reconstructing data or providing alternative routing through a different server.
remote key management server (RKM server)
A server that is used to store master encryption keys.
replication
The process of maintaining a defined set of data in more than one location. Replication consists of
copying designated changes for one location (a source) to another (a target) and synchronizing the
data in both locations.
RKM server
See remote key management server.
rule
A list of conditions and actions that are triggered when certain conditions are met. Conditions include
attributes about an object (file name, type or extension, dates, owner, and groups), the requesting
client, and the container name associated with the object.
S
SAN-attached
Disks that are physically attached to all nodes in the cluster using Serial Storage Architecture (SSA)
connections or using Fibre Channel switches.
Scale Out Backup and Restore (SOBAR)
A specialized mechanism for disaster protection that applies only to GPFS file systems that are
managed by IBM Spectrum Protect Hierarchical Storage Management (HSM).
secondary GPFS cluster configuration server
In a GPFS cluster, the node chosen to maintain the GPFS cluster configuration data in the event that
the primary GPFS cluster configuration server fails or becomes unavailable.
Secure Hash Algorithm digest (SHA digest)
A character string used to identify a GPFS security key.
T
token management
A system for controlling file access in which each application performing a read or write operation is
granted some form of access to a specific block of file data. Token management provides data
consistency and controls conflicts. Token management has two components: the token management
server, and the token management function.
token management function
A component of token management that requests tokens from the token management server. The
token management function is located on each cluster node.
token management server
A component of token management that controls tokens relating to the operation of the file system.
The token management server is located at the file system manager node.
transparent cloud tiering (TCT)
A separately installable add-on feature of IBM Spectrum Scale that provides a native cloud storage
tier. It allows data center administrators to free up on-premises storage capacity by moving cooler
data to cloud storage, thereby reducing capital and operational expenditures.
twin-tailed
A disk connected to two nodes.
U
user storage pool
A storage pool containing the blocks of data that make up user files.
V
VFS
See virtual file system.
virtual file system (VFS)
A remote file system that has been mounted so that it is accessible to the local user.
virtual node (vnode)
The structure that contains information about a file system object in a virtual file system (VFS).
W
watch folder API
A programming interface through which a custom C program can monitor inode spaces, filesets, or
directories for specific user activity-related events within IBM Spectrum Scale file systems. A sample
program named tswf is provided in the /usr/lpp/mmfs/samples/util directory on IBM Spectrum Scale
nodes and can be modified according to the user's needs.
Index
API (continued) asynchronous jobs
GET remotemount/remotecluster 1461, 1465, 1471, API
1475, 1479, 1483 GET 1314, 1321
GET remotemount/remotefilesystems 1487, 1496 atime 938, 941, 944, 947, 966, 968, 970, 976, 1553
GET smb/shares 1502, 1507 atimeDeferredSeconds attribute 176
GET smb/shares/sharename/acl 1527 attribute bit
GET smb/shares/sharename/acl/name 1533 dm_at_size 811
GET thresholds 1539, 1550 attributes
GET watches 1307 adminMode 172
POST /fileset/directory 1145 atimeDeferredSeconds 176
POST /filesystem/directory 1069 autoload 177
POST AFM mapping 1371 automountDir 177
POST afmctl 1124 backgroundSpaceReclaimThreshold 177
POST COS directory 1129 cesSharedRoot 177
POST COS download 1133 cipherList 177
POST COS evict 1137 cnfsGrace 178
POST COS fileset 1101 cnfsMountdPort 178
POST COS upload 1141 cnfsNFSDprocs 178
POST fileset/snapshots 1201 cnfsReboot 178
POST filesets 1096 cnfsSharedRoot 178
POST filesystem/snapshots 1284 cnfsVersions 178
POST link fileset 1158 commandAudit 179
POST nfs/exports 1328 configuration 798
POST nodeclass 1345 confirmShutdownIfHarmful 179
POST nodes 1161, 1165, 1318, 1365, 1547 dataDiskWaitTimeForRecovery 179
POST quotas 1173, 1177, 1244, 1248, 1255, 1259, dataStructureDump 179
1266 deadlockBreakupDelay 179
POST remotemount/remotefilesystems 1489 deadlockDataCollectionDailyLimit 180
POST smb/shares 1511 deadlockDataCollectionMinInterval 180
POST symlink 1213, 1293 deadlockDetectionThreshold 180
POST thresholds 1542 deadlockDetectionThresholdForShortWaiters 180
POST unlink fileset 1155 deadlockOverloadThreshold 180
PUT acl path 1057, 1223, 1299 debugDataControl 180
PUT AFM COS mapping 1380 defaultHelperNodes 181
PUT bucket keys 1008 defaultMountDir 181
PUT directoryCopy 1075, 1151 description 804
PUT file audit logging 1065 dioSmallSeqWriteBatching 181
PUT filesets 1118 disableInodeUpdateOnFdatasync 181
PUT filesystems/{filesystemName}/policies 1236 diskReadExclusionList 181
PUT nfs/exports 1335 dmapiDataEventRetry 182
PUT nodeclass 1355 dmapiEventTimeout 182
PUT nodes 1392 dmapiMountEvent 182
PUT nodes/{name}/services/serviceName 1410 dmapiMountTimeout 183
PUT owner path 1230 dmapiSessionFailureTimeout 183
PUT perfmon/sensors/{sensorName} 1430 enableIPv6 183
PUT remotemount/remotefilesystems 1498 encryptionKeyCacheExpiration 184
PUT resume file system 1270 enforceFilesetQuotaOnRoot 184
PUT smb/shares/shareName 1516 expelDataCollectionDailyLimit 184
PUT smb/shares/shareName/acl/name 1536 expelDataCollectionMinInterval 184
PUT snapshotCopy 1189, 1193, 1273, 1277 extended 804
PUT suspend file system 1210 failureDetectionTime 184
PUT watches 1219, 1303 fastestPolicyCmpThreshold 184
appendOnly file attribute 156, 479 fastestPolicyMaxValidPeriod 184
application failure 831 fastestPolicyMinDiffPercent 185
argument fastestPolicyNumReadSamples 185
buflen 828 fileHeatPeriodMinutes 185
flags 828 FIPS1402mode 185
hanp 826 GPFS-specific 809
hlen 826 ignoreReplicationForQuota 186
len 828 ignoreReplicationOnStatfs 186
nelem 826, 828 logOpenParallelism 188
nelemp 826 logRecoveryParallelism 188
off 828 logRecoveryThreadsPerLog 188
sessinfop 826 lrocData 187
changing (continued) changing (continued)
attributes (continued) attributes (continued)
dataDiskWaitTimeForRecovery 179 proactiveReconnect 197
dataStructureDump 179 readReplicaPolicy 197
deadlockBreakupDelay 179 readReplicaRuleEnabled 198
deadlockDataCollectionDailyLimit 180 release 198
deadlockDataCollectionMinInterval 180 restripeOnDiskFailure 198
deadlockDetectionThreshold 180 rpcPerfNumberDayIntervals 199
deadlockDetectionThresholdForShortWaiters 180 rpcPerfNumberHourIntervals 199
deadlockOverloadThreshold 180 rpcPerfNumberMinuteIntervals 199
debugDataControl 180 rpcPerfNumberSecondIntervals 199
defaultHelperNodes 181 rpcPerfRawExecBufferSize 199
defaultMountDir 181 rpcPerfRawStatBufferSize 199
dioSmallSeqWriteBatching 181 sidAutoMapRangeLength 200
disableInodeUpdateOnFdatasync 181 sidAutoMapRangeStart 200
diskReadExclusionList 181 subnets 200
dmapiDataEventRetry 182 systemLogLevel 202
dmapiEventTimeout 182 tiebreakerDisks 202
dmapiMountEvent 182 uidDomain 202
dmapiMountTimeout 183 unmountOnDiskFail 203
dmapiSessionFailureTimeout 183 usePersistentReserve 203
enableIPv6 183 verbsHungRDMATimeout 204
encryptionKeyCacheExpiration 184 verbsPorts 204
enforceFilesetQuotaOnRoot 184 verbsPortsWaitTimeout 205
expelDataCollectionDailyLimit 184 verbsRdma 205
expelDataCollectionMinInterval 184 verbsRdmaCm 205
failureDetectionTime 184 verbsRdmaFailBackTCPIfNotAvailable 206
fastestPolicyCmpThreshold 184 verbsRdmaRoCEToS 206
fastestPolicyMaxValidPeriod 184 verbsRdmaSend 207
fastestPolicyMinDiffPercent 185 worker1Threads 207
fastestPolicyNumReadSamples 185 disk parameters 210
fileHeatPeriodMinutes 185 disk states 210
FIPS1402mode 185 fileset attributes 222
ignoreReplicationForQuota 186 tracing attributes 717
ignoreReplicationOnStatfs 186 user-defined node classes 248
logOpenParallelism 188 changing Quality of Service for I/O operations (QoS) level
logRecoveryParallelism 188 260
logRecoveryThreadsPerLog 188 changing storage pool properties 258
lrocData 187 cipherList attribute 97, 177
lrocDataMaxFileSize 187 cleanup after GPFS interface calls 926
lrocDataStubFileSize 188 Client license 237
lrocDirectories 188 client node
lrocInodes 188 refresh NSD server 563
maxblocksize 189 clone, file
maxDownDisksForRecovery 189 copy 271, 838
maxFailedNodesForRecovery 190 decloning 849
maxFcntlRangesPerFile 190 redirect 271
maxFilesToCache 190 show 271
maxMBpS 190 snap 271, 840
maxStatCache 190 split 271, 842
metadataDiskWaitTimeForRecovery 191 unsnap 844
minDiskWaitTimeForRecovery 191 Cloud object storage keys 55
mmapRangeLock 192 cluster
nistCompliance 193 changing configuration attributes 169
noSpaceEventInterval 193 changing tracing attributes 717
nsdBufSpace 193 configuration data 402
nsdRAIDBufferPoolSizePct 194 cluster configuration attributes
nsdRAIDTracks 195 displaying 487
nsdServerWaitTimeForMount 195 cluster configuration data 419
nsdServerWaitTimeWindowOnMount 195 cluster configuration server 303, 418
numaMemoryInterleave 195 Cluster Export Services
pagepool 195 config 698
pagepoolMaxPhysMemPct 196 configuration 132, 552
prefetchThreads 196 mmces 132
commands (continued) Data Management API (continued)
mmremotecluster 650 restarting 831
mmremotefs 653 data replica 232
mmrepquota 656 data structures
mmrestoreconfig 661 defined 808
mmrestorefs 665 specific to GPFS implementation 809
mmrestripefile 668 data type
mmrestripefs 672 dm_eventset_t 809
mmrpldisk 679 dataDiskWaitTimeForRecovery attribute 179
mmsdrrestore 686 dataStructureDump attribute 179
mmsetquota 691 deadlockBreakupDelay attribute 179
mmshutdown 695 deadlockDataCollectionDailyLimit attribute 180
mmsmb 698 deadlockDataCollectionMinInterval attribute 180
mmsnapdir 711, 864 deadlockDetectionThreshold attribute 180
mmstartup 715 deadlockDetectionThresholdForShortWaiters attribute 180
mmtracectl 717 deadlockOverloadThreshold attribute 180
mmumount 721 debugDataControl attribute 180
mmunlinkfileset 724 declone 849
mmuserauth 727 default quotas
mmwatch 753 activating 349
mmwinservctl 759 API
spectrumscale 762 GET 1169, 1240, 1252
configuration attributes deactivating 346
DMAPI 806 editing 342
dmapiEnable 808 defaultHelperNodes attribute 181
dmapiEventTimeout defaultMountDir attribute 181
NFS (Network File System) 807 definitions
dmapiMountTimeout 802, 807 GPFS-specific DMAPI functions 812–814, 816, 818,
dmapiSessionFailureTimeout 807, 830 820, 822, 824
confirmShutdownIfHarmful attribute 179 DELETE
connector for Hadoop distributions, GPFS AFM COS mapping 1374
mmhadoopctl 428 DELETE bucket keys 1011
considerations for GPFS applications 1553 DELETE remotemount/remotefilesystems 1493
consistency checks 39 DELETE smb/shares/shareName/acl 1524
contact node 372 DELETE smb/shares/shareName/acl/name 1530
COS directory directory 1072, 1148
API fileset/snapshots 1204
POST 1129 filesets 1106
COS download filesystem/snapshots 1287
API nfs/exports 1339
POST 1133 node 1383
COS evict nodeclass 1352
API remotemount/owningcluster 1449
POST 1137 remotemount/remotecluster 1465
COS fileset smb/shares/shareName 1521
API symlink 1216, 1296
POST 1101 deleting
COS mapping disks 360
API file systems 369
POST 1371 filesets 365
COS upload nodes from a cluster 371
API snapshots 378
POST 1141 deleting links
creating snapshots 711
access control lists 608 deleting, Network Shared Disks (NSDs) 376
file systems 315 deny-write open lock 231
filesets 308 description
ctime 938, 941, 944, 947, 1553 dmapiDataEventRetry 806
dmapiFileHandleSize 807
dmapiMountEvent 807
D dioSmallSeqWriteBatching attribute 181
daRebuildFailed callback 19 direct I/O considerations 1553
Data Management API directives
failure 831 subroutine for passing 854
environment failure (continued)
multiple-node 800, 829 session node 829
single-node 800, 829 single-node 829
error code source node 829
EAGAIN 811 total system 829
EBADF 810, 811, 827 failure group 210, 679
EBUSY 802, 805 failureDetectionTime attribute 184
EINVAL 811, 827, 831 fastestPolicyCmpThreshold attribute 184
EIO 802, 808 fastestPolicyMaxValidPeriod attribute 184
ENOSYS 810 fastestPolicyMinDiffPercent attribute 185
ENOTREADY 804, 810, 830 fastestPolicyNumReadSamples attribute 185
EPERM 810, 827 field
ESRCH 811, 827, 830, 831 dt_change 809
error code, definitions 827 dt_ctime 809
events dt_dtime 809
as defined in XDSM standard 793 dt_nevents 826
asynchronous 794, 801 ev_nodeid 809
description 800 me_handle2 803
disposition 800 me_mode 802, 809, 826
enabled 801 me_name1 803
GPFS-specific attribute events that are not part of the me_roothandle 803
DMAPI standard 794 ne_mode 809
GPFS-specific DMAPI events 793, 826 rg_opaque 809
implemented uio_resid 828
data events 794 file
file system administration 793 /etc/filesystems 803
metadata events 794 access control information 880, 884, 886, 953, 955
namespace events 794 ACL information 880, 884, 886, 953, 955
pseudo events 794 block level incremental read 922
implemented in DMAPI for GPFS 793 dmapi_types.h 806
mount 802 dmapi.exp export 806
not implemented dmapi.h 805
file system administration 794 dmapicalls 806, 810
metadata 794 extended attributes 857, 859, 861, 909
optional events not implemented in DMAPI for GPFS reading 912
794 file access pattern information 854
pre-unmount 802 file attribute
preunmount 809 extended 897, 899
reliable DMAPI destroy 802 querying 479
source node 829 file attributes
synchronous 801 appendOnly 156, 479
unmount 802, 809 file audit logging
events, metadata API
DM_EVENT_POSTPERMCHANGE 827 enable or disable 1065
DM_EVENT_PREPERMCHANGE 827 file clone
exceptions to Open Group technical standards copy 271, 838
GPFS applications considerations 1553 decloning 849
expelDataCollectionDailyLimit attribute 184 redirect 271
expelDataCollectionMinInterval attribute 184 show 271
extended ACLs snap 271, 840
retrieve 857 split 271, 842
set 859, 861, 909 unsnap 844
extended attributes 479 file descriptor
extended file attributes closing 895
retrieve 857 opening 905, 907
set 859, 861, 909 file handle
error code 827
file status information
F gpfs_stat_inode_with_xattrs() 972
failure gpfs_stat_inode_with_xattrs64() 974
dm application 829 file system
GPFS daemon 794, 800 API
partial system 829 PUT 1210, 1270, 1273, 1277
session 800, 801 file system descriptor quorum 419
function (continued) GET (continued)
dm_getall_tokens 828, 830 fileset 1109
dm_handle_hash 826 fileset/snapshots 1197, 1207
dm_handle_is_valid 826 filesets 1087
dm_handle_to_fshandle 826 filesystem/snapshots 1281, 1290
dm_handle_to_fsid 825 filesystems 1040, 1047
dm_handle_to_igen 825 GET perfmon/sensors 1428
dm_handle_to_ino 825 GET remotemount/remotefilesystems 1487, 1496
dm_handle_to_path 828 GET smb/shares/sharename/acl 1527
dm_handle_to_snap 825 GET smb/shares/sharename/acl/name 1533
dm_init_attrloc 826 jobs 1314, 1321
dm_init_service 828 nfs/exports 1324, 1331
dm_make_fshandle 825 nodeclass 1349
dm_make_handle 825 nodeclasses 1342
dm_make_xhandle 825 nodes 1359, 1387
dm_mount_event 802 nodes/{name}/services 1403
dm_move_event 811, 828 nodes/{name}/services/serviceName 1406
dm_probe_hole 825, 828 nodes/nodeName/health/events 1395
dm_punch_hole 805, 811, 825, 828 nodes/nodeName/health/states 1399
dm_query_right 810 nsds 1413, 1418
dm_query_session 825, 826 owner path 1227
dm_read_invis 825, 828 perfmon/sensors/{sensorName} 1426
dm_release_right 810 performance monitoring 1421
dm_request_right 810, 811 quotas 1169, 1181, 1185, 1240, 1252, 1262
dm_respond_event 828 remotemount/authenticationkey 1434, 1436, 1439
dm_send_msg 825 remotemount/owningcluster 1443, 1452, 1459, 1468
dm_set_disp 810, 811, 825, 826 remotemount/remotecluster 1461, 1471, 1475, 1479,
dm_set_eventlist 810, 811, 825, 826 1483
dm_set_file_attr 811 smb/shares 1502, 1507
dm_set_return_on_destroy 810, 811, 825 thresholds 1539, 1550
dm_sync_by_handle 828 watches 1219, 1303, 1307
dm_upgrade_right 810, 828 GET cliauditlog 1027
dm_write_invis 805, 825, 828 GET filesystems/{filesystemName}/policies 1233
functions GPFS
implemented 795, 797 access rights
mandatory 795 loss of 831
not implemented 797 Data Management API 793
optional 797 DM application failure 831
restrictions 810 DMAPI
functions, GPFS-specific DMAPI failure 829
definitions 812 recovery 829
dm_handle_to_snap 813 enhancements 808
dm_make_xhandle 814 failure
dm_remove_dmattr_nosync 816 single-node 829
dm_set_dmattr_nosync 818 file system 793
dm_set_eventlist_nosync 820 implementation 793, 808
dm_set_region_nosync 822 installation toolkit 762
dm_sync_dmattr_by_handle 824 license designation 237, 503
licensing 237, 503
mmhealth command 435
G mmprotocoltrace command 601
gathering data to solve GPFS problems 8 programming interfaces 846, 847, 849, 851, 852, 854,
genkey 97 857, 859, 861, 863–866, 868, 870–872, 874, 876, 878,
GET 880, 882, 884, 886, 888, 891, 895–897, 899, 901, 903,
acl path 1054 905, 907, 909, 912, 914, 916, 918, 920, 922, 924–927,
AFM 1062 929, 931, 933, 935, 937, 938, 941, 944, 947, 950, 953,
AFM COS mapping 1368, 1377 955, 957, 960, 962, 964, 966, 968, 970, 972, 974, 976,
ces/addresses 1014 978, 979, 981, 982, 984, 986–989, 991, 993, 996, 998
ces/addresses/{cesAddress} 1018 session
ces/services 1021 failure 830
ces/services/{service} 1024 recovery 830
cluster 1030 stopping 695
config 1036 GPFS cluster
disks 1079, 1083 creating 303
gpfs_fputattrs() (continued) gpfs_ireadx() 922
GPFS_ATTRFLAG_SKIP_CLONE 859 gpfs_iscan_t 924
GPFS_ATTRFLAG_SKIP_IMMUTABLE 859 gpfs_lib_init() 925
GPFS_ATTRFLAG_USE_POLICY 859 gpfs_lib_term() 926
gpfs_fputattrswithpathname() gpfs_next_inode_with_xattrs() 931
GPFS_ATTRFLAG_DEFAULT 861 gpfs_next_inode_with_xattrs64() 933
GPFS_ATTRFLAG_FINALIZE_ATTRS 861 gpfs_next_inode() 927
GPFS_ATTRFLAG_IGNORE_POOL 861 gpfs_next_inode64() 929
GPFS_ATTRFLAG_INCL_DMAPI 861 gpfs_next_xattr() 935
GPFS_ATTRFLAG_INCL_ENCR 861 gpfs_opaque_acl_t 937
GPFS_ATTRFLAG_MODIFY_CLONEPARENT 861 gpfs_open_inodescan_with_xattrs() 944
GPFS_ATTRFLAG_NO_PLACEMENT 861 gpfs_open_inodescan_with_xattrs64() 947
GPFS_ATTRFLAG_SKIP_CLONE 861 gpfs_open_inodescan() 938
GPFS_ATTRFLAG_SKIP_IMMUTABLE 861 gpfs_open_inodescan64() 941
GPFS_ATTRFLAG_USE_POLICY 861 gpfs_prealloc() 950
gpfs_free_fssnaphandle() 863 gpfs_putacl_fd() 955
gpfs_fssnap_handle_t 864 gpfs_putacl() 953
gpfs_fssnap_id_t 865 gpfs_quotactl() 957
gpfs_fstat_x() 868 gpfs_quotaInfo_t 960
gpfs_fstat() 866 gpfs_seek_inode() 962
gpfs_get_fsname_from_fssnaphandle() 870 gpfs_seek_inode64() 964
gpfs_get_fssnaphandle_by_fssnapid() 871 gpfs_stat_inode_with_xattrs() 972
gpfs_get_fssnaphandle_by_name() 872 gpfs_stat_inode_with_xattrs64() 974
gpfs_get_fssnaphandle_by_path() 874 gpfs_stat_inode() 968
gpfs_get_fssnapid_from_fssnaphandle() 876 gpfs_stat_inode64() 970
gpfs_get_pathname_from_fssnaphandle() 878 gpfs_stat_x() 976
gpfs_get_snapdirname() 880 gpfs_stat() 966
gpfs_get_snapname_from_fssnaphandle() 882 GPFS-specific DMAPI events 793, 826
gpfs_getacl_fd() 886 GPFS-specific DMAPI functions
gpfs_getacl() 884 definitions 812
gpfs_iattr_t 888 dm_handle_to_snap 813
gpfs_iattr64_t 891 dm_make_xhandle 814
gpfs_iclose() 895 dm_remove_dmattr_nosync 816
gpfs_ifile_t 896 dm_set_dmattr_nosync 818
gpfs_igetattrs() 897 dm_set_eventlist_nosync 820
gpfs_igetattrsx() dm_set_region_nosync 822
GPFS_ATTRFLAG_FINALIZE_ATTRS 899 dm_sync_dmattr_by_handle 824
GPFS_ATTRFLAG_IGNORE_PLACEMENT 899 gpfs.snap command 8
GPFS_ATTRFLAG_INCL_DMAPI 899 gpfsFcntlHeader_t 978
GPFS_ATTRFLAG_INCL_ENCR 899 gpfsGetDataBlkDiskIdx_t 979
GPFS_ATTRFLAG_MODIFY_CLONEPARENT 899 gpfsGetFilesetName_t 981
GPFS_ATTRFLAG_NO_PLACEMENT 899 gpfsGetReplication_t 982
GPFS_ATTRFLAG_SKIP_CLONE 899 gpfsGetSetXAttr_t 984
GPFS_ATTRFLAG_SKIP_IMMUTABLE 899 gpfsGetSnapshotName_t 986
GPFS_ATTRFLAG_USE_POLICY 899 gpfsGetStoragePool_t 987
gpfs_igetfilesetname() 901 gpfsListXAttr_t 988
gpfs_igetstoragepool() 903 gpfsRestripeData_t 989
gpfs_iopen() 905 gpfsRestripeRange_t 991
gpfs_iopen64() 907 gpfsRestripeRangeV2_t 993
gpfs_iputattrsx() gpfsSetReplication_t 996
GPFS_ATTRFLAG_FINALIZE_ATTRS 909 gpfsSetStoragePool_t 998
GPFS_ATTRFLAG_IGNORE_POOL 909 grace period
GPFS_ATTRFLAG_INCL_DMAPI 909 changing 398, 691
GPFS_ATTRFLAG_INCL_ENCR 909 setting 398, 691
GPFS_ATTRFLAG_MODIFY_CLONEPARENT 909 group quota 343, 346, 349, 399, 527, 644, 656
GPFS_ATTRFLAG_NO_PLACEMENT 909
GPFS_ATTRFLAG_SKIP_CLONE 909
GPFS_ATTRFLAG_SKIP_IMMUTABLE 909
H
GPFS_ATTRFLAG_USE_POLICY 909 Hadoop distributions, GPFS connector for
gpfs_iread() 912 mmhadoopctl 428
gpfs_ireaddir() 914 hints
gpfs_ireaddir64() 916 subroutine for passing 854
gpfs_ireadlink() 918 hole 922
gpfs_ireadlink64() 920
IBM Spectrum Scale management API (continued) lrocData attribute 187
PUT audit logging 1065 lrocDataMaxFileSize attribute 187
PUT bucket keys 1008 lrocDataStubFileSize attribute 188
PUT directoryCopy 1075, 1151 lrocDirectories attribute 188
PUT filesets 1118 lrocInodes attribute 188
PUT filesystems/{filesystemName}/policies 1236
PUT nfs/exports 1335
PUT node class 1355
M
PUT nodeclass 1355 macro
PUT nodes 1392 DM_TOKEN_EQ (x,y) 810
PUT nodes/{name}/services/serviceName 1410 DM_TOKEN_GE (x,y) 810
PUT owner path 1230 DM_TOKEN_GT (x,y) 810
PUT perfmon/sensors/{sensorName} 1430 DM_TOKEN_LE (x,y) 810
PUT remotemount/remotefilesystems 1498 DM_TOKEN_LT (x,y) 810
PUT resume file system 1270 DM_TOKEN_NE (x,y) 810
PUT smb/shares/shareName 1516 DMEV_ADD(eset1, eset2) 809
PUT smb/shares/shareName/acl/name 1536 DMEV_ALL(eset) 809
PUT snapshotCopy 1189, 1193, 1273, 1277 DMEV_ISALL(eset) 809
PUT suspend file system 1210 DMEV_ISDISJ(eset1, eset2) 810
PUT watch 1219, 1303 DMEV_ISEQ(eset1, eset2) 809
IBM Spectrum Scale REST API 1007 DMEV_ISSUB(eset2) 810
IBM Spectrum Scale user exits 1001 DMEV_ISZERO(eset) 809
ignoreReplicationForQuota attribute 186 DMEV_NORM(eset) 810
ignoreReplicationOnStatfs attribute 186 DMEV_REM(eset1, eset2) 809
in-doubt value 218, 528, 657 DMEV_RES(eset1, eset2) 809
incremental backup 927, 929, 938, 941, 944, 947 macros, GPFS 809
info: GET macros, XDSM standard 809
REST API 1311 management API
inode DELETE AFM COS mapping 1374
attributes 888, 891 DELETE bucket keys 1011
inode file handle 895, 896 DELETE directory 1072, 1148
inode number 905, 907, 918, 920, 927, 929, 931, 933, 935, DELETE fileset 1106
938, 941, 944, 947, 962, 964 DELETE fileset/snapshots 1204
inode scan DELETE filesystem/snapshots 1287
closing 846 DELETE nfs/exports 1339
opening 938, 941, 944, 947 DELETE node 1383
inode scan handle 846, 924 DELETE nodeclass 1352
installation 762 DELETE remotemount/remotefilesystems 1493
installation requirements 805 DELETE smb/shares/shareName 1521
interface calls, cleanup after 926 DELETE smb/shares/shareName/acl 1524
interface for additional calls, setup of 925 DELETE smb/shares/shareName/acl/name 1530
iscan handle 846 DELETE symlink 1216, 1296
GET acl path 1054
K GET AFM 1062
GET AFM COS mapping 1368, 1377
kernel memory 156 GET ces/addresses 1014
GET ces/addresses/{cesAddress} 1018
GET ces/services 1021
L GET ces/services/{service} 1024
license 237 GET cliauditlog 1027
linking GET cluster 1030
filesets 477 GET config 1036
links to snapshots GET disks 1079, 1083
creating 711 GET fileset 1109
deleting 711 GET fileset/snapshots 1197, 1207
listing GET filesets 1087
snapshots 532 GET filesystem/snapshots 1281, 1290
user-defined callbacks 482 GET filesystems 1040, 1047
listing Quality of Service for I/O operations (QoS) settings GET filesystems/{filesystemName}/policies 1233
522 GET jobs 1314, 1321
logOpenParallelism attribute 188 GET nfs/exports 1324, 1331
logRecoveryParallelism attribute 188 GET nodeclass 1349
logRecoveryThreadsPerLog attribute 188 GET nodeclasses 1342
GET nodes 1359, 1387
mmcrnsd 332, 679, 1003, 1004 mmquotaon 644
mmcrsnapshot 337 mmreclaimspace 647
mmdefedquota 342 mmremotecluster 650
mmdefquotaoff 346 mmremotefs 653
mmdefquotaon 349 mmrepquota 656
mmdefragfs 353 mmrestoreconfig 661
mmdelacl 356 mmrestorefs 665
mmdelcallback 358 mmrestripefile 668
mmdeldisk 360, 1004 mmrestripefs 672
mmdelfileset 365 mmrpldisk 679
mmdelfs 369 mmsdrrestore 686
mmdelnode 371 mmsetquota 691
mmdelnodeclass 374 mmshutdown 695
mmdelnsd 376 mmsmb 698
mmdelsnapshot 378 mmsnapdir 711, 864
mmdf 382 mmstartup 715
mmdiag 386 mmtracectl 717
mmdsh 393 mmumount 721
mmeditacl 395 mmunlinkfileset 724
mmedquota 398 mmuserauth 727
mmexportfs 402 mmwatch 753
MMFS_FSSTRUCT 404 mmwinserv service
MMFS_SYSTEM_UNMOUNT 406 managing 759
mmfsck 404 mmwinservctl 759
mmfsctl 418 monitoring
mmgetacl 422 performance 595
mmgetstate 425 mount point directory 316
mmhadoopctl 428 mounting a file system 230
mmhdfs 430 mtime 321, 938, 941, 944, 947, 966, 968, 970, 976, 1553
mmhealth 435, 461 multi-region object deployment
mmimgbackup 450 mmobj command 565
mmimgrestore 454 multiple sessions 803
mmimportfs 457 multiple-node environment
mmlinkfileset 477 model for DMAPI 829
mmlsattr 479
mmlscallback 482
mmlscluster 484
N
mmlsconfig 487 Network Shared Disks (NSDs)
mmlsdisk 489 changing configuration attributes 251
mmlsfileset 493 creating 332
mmlsfs 498 displaying 514
mmlslicense 503 Network Shared Disks (NSDs), deleting 376
mmlsmgr 507 network verification tool 540
mmlsmount 509 NFS (Network File System) 804
mmlsnodeclass 512 nfs export
mmlsnsd 514 API
mmlspolicy 518 DELETE 1339
mmlspool 520 GET 1324, 1331
mmlsqos 522 POST 1328
mmlsquota 527 PUT 1335
mmlssnapshot 532, 890, 894 NFS V4 231, 321
mmmigratefs 535 NFS V4 ACL 231, 356, 395, 396, 422, 423, 608
mmmount 537 nistCompliance attribute 193
mmnetverify 540 node
mmnfs 552 API
mmnsddiscover 563 DELETE 1383
mmobj 565 node classes, user-defined
mmperfmon query 582 changing 248
mmpmon 595 creating 330
mmprotocoltrace 601 deleting 374
mmpsnap 605 listing 512
mmputacl 608 node descriptor 35, 303
mmqos 610 node designation 35, 304
mmquotaoff 641 node failure detection 184
PUT (continued) remote mount (continued)
PUT smb/shares/shareName/acl/name 1536 remote cluster 1459, 1461, 1465, 1468, 1471, 1475,
remotemount/owningcluster 1455 1479, 1483
resume file system 1270 remote file system 1487, 1489, 1493, 1496, 1498
smb/shares/shareName 1516 remote shell command
snapshotCopy 1189, 1193, 1273, 1277 changing 164
suspend file system 1210 choosing 303
replacing disks 679
replicated cluster 1004
Q replication
Quality of Service for I/O operations (QoS) level querying 479
changing 260 replication attributes
Quality of Service for I/O operations (QoS) settings changing 156
listing 522 replication factor 156
quorum 419 replication, strict 322
quorum node 303, 372, 425 REST API
quota 805 info: GET 1311
quota files restoring configuration information 661
replacing 218 restoring NSD path
quota information 960 mmnsddiscover 563
quotas restrictions
activating 644 functions 810
API restripeOnDiskFailure attribute 198
GET 1181, 1185, 1262 restriping a file 668
POST 1173, 1177, 1244, 1248, 1255, 1259, 1266 restriping a file system 672
changing 398, 691, 957 rgOpenFailed callback 25
checking 218 rgPanic callback 25
creating reports 656 root credentials 810
deactivating 641 rpcPerfNumberDayIntervals attribute 199
displaying 527 rpcPerfNumberHourIntervals attribute 199
setting 398, 691 rpcPerfNumberMinuteIntervals attribute 199
rpcPerfNumberSecondIntervals attribute 199
rpcPerfRawExecBufferSize attribute 199
R rpcPerfRawStatBufferSize attribute 199
readReplicaPolicy attribute 197
readReplicaRuleEnabled attribute 198 S
rebalancing a file 668
rebalancing a file system 672 semantic changes
recovery for the GPFS implementation 825
DODeferred deletions 831 Server license 237
mount event 831 server node
synchronous event 830 restoring NSD path 563
unmount event 831 server node, NSD
recovery groups choosing 332
stanza files 459 session
refresh NSD server failure 801, 826, 830
mmnsddiscover 563 recovery 830
registering user event commands 12 session node 800, 825, 829
release attribute 198 session, assuming a 800, 826
reliable DMAPI destroy events 802 sessions
remote cluster description 800
remote file system 1487, 1489, 1493, 1496, 1498 failure 800
remote mount 1459, 1461, 1465, 1468, 1471, 1475, information string, changing 826
1479, 1483 maximum per node 800, 810
remote copy command state of 800
changing 164 setup of interface for additional calls 925
choosing 303 shell script
remote file system gpfsready 808
API 1487, 1489, 1493, 1496, 1498 sidAutoMapRangeLength attribute 200
remote file systems 96 sidAutoMapRangeStart attribute 200
remote mount single-node 829
authentication key 1434, 1436, 1439 single-node environment 800, 829
owning cluster 1443, 1445, 1449, 1452, 1455 SMB
export 582
subroutines (continued) usage restrictions 810
gpfs_next_inode_with_xattrs64() 933 usePersistentReserve attribute 203
gpfs_next_inode() 927 user event commands, registering 12
gpfs_next_inode64() 929 user exit
gpfs_next_xattr() 935 GPFS 1004, 1005
gpfs_open_inodescan_with_xattrs() 944 IBM Spectrum Scale 1004
gpfs_open_inodescan_with_xattrs64() 947 preunmount 1005
gpfs_open_inodescan() 938 user exits
gpfs_open_inodescan64() 941 GPFS 1003
gpfs_prealloc() 950 IBM Spectrum Scale 1003
gpfs_putacl_fd() 955 mmsdrbackup 1002
gpfs_putacl() 953 nsddevices 1003
gpfs_quotactl() 957 preunmount 1005
gpfs_seek_inode() 962 syncfsconfig 1004
gpfs_seek_inode64() 964 user quota 343, 346, 349, 399, 527, 644, 656
gpfs_stat_inode_with_xattrs() 972 user space buffer 156
gpfs_stat_inode_with_xattrs64() 974 user-defined callbacks
gpfs_stat_inode() 968 deleting 358
gpfs_stat_inode64() 970 listing 482
gpfs_stat_x() 976 user-defined node classes
gpfs_stat() 966 changing 248
symbolic link creating 330
reading 918, 920 deleting 374
symlink listing 512
API using the gpfs.snap command
DELETE 1216, 1296 gathering data 8
syncFSconfig 418
system snapshots 8
systemLogLevel attribute 202
V
verbsHungRDMATimeout attribute 204
T verbsPorts attribute 204
verbsPortsWaitTimeout attribute 205
thin provisioned disks verbsRdma attribute 205
reclaiming space 647 verbsRdmaCm attribute 205
thresholds verbsRdmaFailBackTCPIfNotAvailable attribute 206
API verbsRdmaRoCEToS attribute 206
GET 1539, 1550 verbsRdmaSend attribute 207
POST 1542 verification tool, network 540
tiebreakerDisks attribute 202
timeout period 695
token, usage 803
W
tokens watches
input arguments 827 API
trace-recycle GET 1307
changing 718 PUT 1219, 1303
tracing attributes, changing 717 worker1Threads attribute 207
traditional ACL 231, 395, 396, 422, 423
traditional ACLs
NFS V4 ACL 321 X
Windows 321
XDSM standard 798, 800, 829
transparent cloud tiering
commands
mmcloudgateway 274
U
UID domain 305
uidDomain attribute 202
unified file and object access
mmobj command 565
unlinking
filesets 724
unmountOnDiskFail attribute 203
GC28-3164-02