PowerScale OneFS 9.0.0.0 Web Administration Guide
June 2020
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2016 - 2020 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be trademarks of their respective owners.
Contents
Activating trial licenses..................................................................................................................................................36
Certificates........................................................................................................................................................................... 37
Replacing or renewing the TLS certificate................................................................................................................. 37
Verify a TLS certificate update....................................................................................................................................39
TLS certificate data example....................................................................................................................................... 40
Cluster identity.....................................................................................................................................................................40
Set the cluster name and contact information..........................................................................................................40
Cluster date and time.......................................................................................................................................................... 41
Set the cluster date and time....................................................................................................................................... 41
Specify an NTP time server.......................................................................................................................................... 41
SMTP email settings............................................................................................................................................................ 41
Configure SMTP email settings....................................................................................................................................41
Configuring the cluster join mode......................................................................................................................................42
Specify the cluster join mode.......................................................................................................................................42
File system settings.............................................................................................................................................................42
Enable or disable access time tracking....................................................................................................................... 43
Specify the cluster character encoding......................................................................................................................43
Security hardening...............................................................................................................................................................43
STIG hardening profile.................................................................................................................................................. 44
Apply a security hardening profile................................................................................................................................44
Revert a security hardening profile............................................................................................................................. 45
View the security hardening status.............................................................................................................................45
Cluster monitoring............................................................................................................................................................... 46
Monitor the cluster........................................................................................................................................................46
View node status........................................................................................................................................................... 46
Monitoring cluster hardware.............................................................................................................................................. 47
View node hardware status..........................................................................................................................................47
Chassis and drive states............................................................................................................................................... 47
Check battery status.....................................................................................................................................................49
SNMP monitoring.......................................................................................................................................................... 49
Events and alerts..................................................................................................................................................................51
Events overview............................................................................................................................................................. 51
Alerts overview............................................................................................................................................................... 51
Channels overview......................................................................................................................................................... 51
Event groups overview................................................................................................................................................. 52
Viewing and modifying event groups.......................................................................................................................... 52
Managing alerts............................................................................................................................................................. 52
Managing channels........................................................................................................................................................54
Maintenance and testing.............................................................................................................................................. 56
Cluster maintenance........................................................................................................................................................... 57
Replacing node components........................................................................................................................................ 57
Upgrading node components.......................................................................................................................................58
Automatic Replacement Recognition (ARR) for drives............................................................................................58
Managing drive firmware..............................................................................................................................................59
Managing cluster nodes................................................................................................................................................62
Upgrading OneFS.......................................................................................................................................................... 63
SRS Summary...................................................................................................................................................................... 63
SRS Telemetry............................................................................................................................................................... 64
Obtain signed OneFS license file for evaluation clusters..........................................................................................64
Configuring and Enabling SRS Overview................................................................................................................... 64
Diagnostic commands and scripts...............................................................................................................................65
Enabling SRS Telemetry............................................................................................................................................... 68
Disabling SRS Telemetry...............................................................................................................................................68
Chapter 5: Authentication..............................................................................................................77
Authentication overview..................................................................................................................................................... 77
Authentication provider features....................................................................................................................................... 77
Security Identifier (SID) history overview........................................................................................................................ 78
Supported authentication providers..................................................................................................................................78
Active Directory................................................................................................................................................................... 78
LDAP..................................................................................................................................................................................... 79
NIS......................................................................................................................................................................................... 79
Kerberos authentication..................................................................................................................................................... 80
Keytabs and SPNs overview........................................................................................................................................80
MIT Kerberos protocol support................................................................................................................................... 80
File provider..........................................................................................................................................................................80
Local provider....................................................................................................................................................................... 81
Multi-factor Authentication (MFA)....................................................................................................................................81
Multi-instance Active Directory...........................................................................................................................................81
LDAP public keys..................................................................................................................................................................81
Managing Active Directory providers............................................................................................................................... 82
Configure an Active Directory provider...................................................................................................................... 82
Modify an Active Directory provider........................................................................................................................... 82
Delete an Active Directory provider............................................................................................................................ 83
Active Directory provider settings...............................................................................................................................83
Managing LDAP providers..................................................................................................................................................84
Configure an LDAP provider........................................................................................................................................ 84
Modify an LDAP provider............................................................................................................................................. 85
Delete an LDAP provider.............................................................................................................................................. 85
LDAP query settings..................................................................................................................................................... 85
LDAP advanced settings.............................................................................................................................................. 86
Managing NIS providers......................................................................................................................................................87
Configure an NIS provider............................................................................................................................................ 87
Modify an NIS provider................................................................................................................................................. 87
Delete an NIS provider.................................................................................................................................................. 88
Managing MIT Kerberos authentication........................................................................................................................... 88
Managing MIT Kerberos realms...................................................................................................................................88
Managing MIT Kerberos providers.............................................................................................................................. 89
Managing MIT Kerberos domains................................................................................................................................92
Managing file providers.......................................................................................................................................................93
Configure a file provider............................................................................................................................................... 93
Generate a password file.............................................................................................................................................. 94
Password file format..................................................................................................................................................... 94
Group file format........................................................................................................................................................... 95
Netgroup file format..................................................................................................................................................... 95
Modify a file provider.................................................................................................................................................... 96
Delete a file provider..................................................................................................................................................... 96
Managing local users and groups...................................................................................................................................... 96
View a list of users or groups by provider.................................................................................................................. 96
Create a local user.........................................................................................................................................................96
Create a local group...................................................................................................................................................... 97
Naming rules for local users and groups.....................................................................................................................98
Modify a local user.........................................................................................................................................................98
Modify a local group...................................................................................................................................................... 98
Delete a local user..........................................................................................................................................................99
Delete a local group.......................................................................................................................................................99
Identity management overview.........................................................................................................................................114
Identity types.......................................................................................................................................................................114
Access tokens..................................................................................................................................................................... 115
Access token generation................................................................................................................................................... 115
ID mapping..................................................................................................................................................................... 116
User mapping.................................................................................................................................................................117
On-disk identity............................................................................................................................................................. 118
Managing ID mappings.......................................................................................................................................................119
Create an identity mapping..........................................................................................................................................119
Modify an identity mapping......................................................................................................................................... 119
Delete an identity mapping.......................................................................................................................................... 119
View an identity mapping............................................................................................................................................ 120
Flush the identity mapping cache.............................................................................................................................. 120
View a user token.........................................................................................................................................................120
Configure identity mapping settings...........................................................................................................................121
View identity mapping settings................................................................................................................................... 121
Managing user identities.................................................................................................................................................... 121
View user identity.........................................................................................................................................................122
Create a user-mapping rule.........................................................................................................................................122
Test a user-mapping rule.............................................................................................................................................123
Merge Windows and UNIX tokens.............................................................................................................................124
Retrieve the primary group from LDAP.....................................................................................................................124
Mapping rule options................................................................................................................................................... 125
Mapping rule operators............................................................................................................................................... 126
NFS access of Windows-created files...................................................................................................................... 138
SMB access of UNIX-created files............................................................................................................................ 138
Managing access permissions.......................................................................................................................................... 138
View expected user permissions................................................................................................................................ 138
Configure access management settings...................................................................................................................139
Modify ACL policy settings......................................................................................................................................... 140
ACL policy settings...................................................................................................................................................... 140
Run the PermissionRepair job.....................................................................................................................................145
Delivering protocol audit events to multiple CEE servers............................................................................................. 177
Supported event types......................................................................................................................................................178
Sample audit log................................................................................................................................................................. 179
Managing audit settings.................................................................................................................................................... 179
Enable protocol access auditing................................................................................................................................. 179
Forward protocol access events to syslog...................................................................................................................180
Enable system configuration auditing.........................................................................................................................181
Set the audit hostname................................................................................................................................................181
Configure protocol audited zones...............................................................................................................................181
Forward system configuration changes to syslog.................................................................................................... 181
Configure protocol event filters................................................................................................................................. 182
Integrating with the Common Event Enabler.................................................................................................................182
Install CEE for Windows.............................................................................................................................................. 182
Configure CEE for Windows.......................................................................................................................................183
Configure CEE servers to deliver protocol audit events.........................................................................................184
Tracking the delivery of protocol audit events...............................................................................................................184
View the time stamps of delivery of events to the CEE server and syslog..........................................................184
Move the log position of the CEE forwarder............................................................................................................184
View the rate of delivery of protocol audit events to the CEE server..................................................................185
Clone a file from a snapshot....................................................................................................................................... 198
Managing snapshot schedules......................................................................................................................................... 198
Modify a snapshot schedule....................................................................................................................................... 198
Delete a snapshot schedule........................................................................................................................................ 199
View snapshot schedules............................................................................................................................................199
Managing snapshot aliases............................................................................................................................................... 199
Configure a snapshot alias for a snapshot schedule............................................................................................... 199
Assign a snapshot alias to a snapshot....................................................................................................................... 199
Reassign a snapshot alias to the live file system.....................................................................................................200
View snapshot aliases................................................................................................................................................. 200
Snapshot alias information......................................................................................................................................... 200
Managing with snapshot locks........................................................................................................................................ 200
Create a snapshot lock................................................................................................................................................201
Modify a snapshot lock expiration date.....................................................................................................................201
Delete a snapshot lock................................................................................................................................................ 201
Snapshot lock information..........................................................................................................................................202
Configure SnapshotIQ settings....................................................................................................................................... 202
SnapshotIQ settings....................................................................................................................................................202
Set the snapshot reserve.................................................................................................................................................203
Managing changelists....................................................................................................................................................... 203
Create a changelist..................................................................................................................................................... 203
Delete a changelist...................................................................................................................................................... 204
View a changelist.........................................................................................................................................................204
Changelist information................................................................................................................................................204
Full and differential replication....................................................................................................................................214
Controlling replication job resource consumption.................................................................................................... 215
Replication policy priority............................................................................................................................................ 215
Replication reports....................................................................................................................................................... 215
Replication snapshots........................................................................................................................................................215
Source cluster snapshots............................................................................................................................................215
Target cluster snapshots.............................................................................................................................................216
Data failover and failback with SyncIQ............................................................................................................................216
Data failover.................................................................................................................................................................. 217
Data failback..................................................................................................................................................................217
SmartLock compliance mode failover and failback.................................................................................................. 217
SmartLock replication limitations............................................................................................................................... 218
Recovery times and objectives for SyncIQ.....................................................................................................................218
RPO alerts.................................................................................................................................................... 219
Replication policy priority.................................................................................................................................................. 219
SyncIQ license functionality..............................................................................................................................................219
Replication for nodes with multiple interfaces............................................................................................................... 219
Restrict SyncIQ source nodes..........................................................................................................................................219
Creating replication policies............................................................................................................................................. 220
Excluding directories in replication............................................................................................................................ 220
Excluding files in replication....................................................................................................................................... 220
File criteria options....................................................................................................................................................... 221
Configure default replication policy settings............................................................................................................222
Create a replication policy.......................................................................................................................................... 223
Assess a replication policy.......................................................................................................................................... 227
Managing replication to remote clusters........................................................................................................................ 227
Start a replication job.................................................................................................................................................. 228
Pause a replication job................................................................................................................................................ 228
Resume a replication job.............................................................................................................................................228
Cancel a replication job............................................................................................................................................... 228
View active replication jobs........................................................................................................................................ 228
Replication job information.........................................................................................................................................229
Initiating data failover and failback with SyncIQ............................................................................................................229
Fail over data to a secondary cluster........................................................................................................................ 229
Revert a failover operation.........................................................................................................................................230
Fail back data to a primary cluster............................................................................................................................ 230
Run the ComplianceStoreDelete job in a SmartLock compliance mode domain...................................231
Performing disaster recovery for older SmartLock directories....................................................................................231
Recover SmartLock compliance directories on a target cluster............................................................................ 231
Migrate SmartLock compliance directories............................................................................................................. 232
Managing replication policies........................................................................................................................................... 232
Modify a replication policy.......................................................................................................................................... 232
Delete a replication policy........................................................................................................................................... 233
Enable or disable a replication policy......................................................................................................................... 233
View replication policies.............................................................................................................................................. 233
Replication policy information.................................................................................................................................... 234
Replication policy settings.......................................................................................................................................... 234
Managing replication to the local cluster........................................................................................................................236
Cancel replication to the local cluster....................................................................................................................... 236
Break local target association.................................................................................................................................... 236
View replication policies targeting the local cluster................................................................................................ 236
Remote replication policy information...................................................................................................................... 236
Managing replication performance rules........................................................................................................................ 237
Create a network traffic rule......................................................................................................................................237
Create a file operations rule....................................................................................................................................... 237
Modify a performance rule......................................................................................................................................... 238
Delete a performance rule.......................................................................................................................................... 238
Enable or disable a performance rule........................................................................................................................ 238
View performance rules..............................................................................................................................................238
Managing replication reports........................................................................................................................................... 238
Configure default replication report settings........................................................................................................... 239
Delete replication reports........................................................................................................................................... 239
View replication reports..............................................................................................................................................239
Replication report information................................................................................................................................... 239
Managing failed replication jobs...................................................................................................................................... 240
Resolve a replication policy........................................................................................................................................ 240
Reset a replication policy.............................................................................................................................................241
Perform a full or differential replication..................................................................................................................... 241
NDMP protocol support................................................................................................................................................... 252
Supported DMAs...............................................................................................................................................................253
NDMP hardware support................................................................................................................................................. 253
NDMP backup limitations.................................................................................................................................................253
NDMP performance recommendations......................................................................................................................... 253
Excluding files and directories from NDMP backups....................................................................................................254
Configuring basic NDMP backup settings..................................................................................................................... 255
Configure and enable NDMP backup....................................................................................................................... 255
View NDMP backup settings.....................................................................................................................................256
Disable NDMP backup................................................................................................................................................ 256
Managing NDMP user accounts..................................................................................................................................... 256
Create an NDMP administrator account..................................................................................................................256
View NDMP user accounts........................................................................................................................................256
Modify the password of an NDMP administrator account....................................................................................256
Delete an NDMP administrator account...................................................................................................................257
NDMP environment variables overview......................................................................................................................... 257
Managing NDMP environment variables..................................................................................................................257
NDMP environment variable settings....................................................................................................................... 257
Add an NDMP environment variable........................................................................................................................ 258
View NDMP environment variables.......................................................................................................................... 258
Edit an NDMP environment variable.........................................................................................................................258
Delete an NDMP environment variable.................................................................................................................... 258
NDMP environment variables....................................................................................................................................259
Setting environment variables for backup and restore operations....................................................................... 263
Managing NDMP contexts.............................................................................................................................................. 264
NDMP context settings..............................................................................................................................................264
View NDMP contexts................................................................................................................................................. 264
Delete an NDMP context........................................................................................................................................... 265
Managing NDMP sessions............................................................................................................................................... 265
NDMP session information........................................................................................................................................ 265
View NDMP sessions.................................................................................................................................................. 267
Abort an NDMP session............................................................................................................................................. 267
Managing NDMP Fibre Channel ports............................................................................................................................267
NDMP backup port settings...................................................................................................................................... 267
Enable or disable an NDMP backup port..................................................................................................................268
View NDMP backup ports..........................................................................................................................................268
Modify NDMP backup port settings......................................................................................................................... 268
Managing NDMP preferred IP settings..........................................................................................................................269
Create an NDMP preferred IP setting......................................................................................................................269
Modify an NDMP preferred IP setting..................................................................................................................... 269
List NDMP preferred IP settings...............................................................................................................................269
View NDMP preferred IP settings............................................................................................................................ 269
Delete NDMP preferred IP settings.......................................................................................................................... 270
Managing NDMP backup devices................................................................................................................................... 270
NDMP backup device settings.................................................................................................................................. 270
Detect NDMP backup devices................................................................................................................................... 271
View NDMP backup devices.......................................................................................................................................271
Modify the name of an NDMP backup device......................................................................................................... 271
Delete an entry for an NDMP backup device........................................................................................................... 271
NDMP dumpdates file overview......................................................................................................................................272
Managing the NDMP dumpdates file........................................................................................................................272
NDMP dumpdates file settings..................................................................................................................................272
View entries in the NDMP dumpdates file............................................................................................................... 272
Delete entries from the NDMP dumpdates file....................................................................................................... 272
NDMP restore operations................................................................................................................................................ 272
NDMP parallel restore operation............................................................................................................................... 273
NDMP serial restore operation.................................................................................................................................. 273
Specify an NDMP serial restore operation................................................................................................. 273
Sharing tape drives between clusters.............................................................................................................................273
Managing snapshot-based incremental backups...........................................................................................273
Enable snapshot-based incremental backups for a directory................................................................................ 274
View snapshots for snapshot-based incremental backups.................................................................................... 274
Delete snapshots for snapshot-based incremental backups................................................................................. 274
Managing cluster performance for NDMP sessions..................................................................................................... 274
Enable NDMP Redirector to manage cluster performance................................................................................... 274
Managing CPU usage for NDMP sessions.....................................................................................................................275
Enable NDMP Throttler.............................................................................................................................................. 275
Protection domain considerations...................................................................................................................................286
Create a protection domain............................................................................................................................................. 286
Delete a protection domain.............................................................................................................................................. 287
Search for quotas.........................................................................................................................................................310
Manage quotas..............................................................................................................................................................311
Export a quota configuration file.................................................................................................................................311
Import a quota configuration file.................................................................................................................................311
Managing quota notifications........................................................................................................................................... 312
Configure default quota notification settings........................................................................................................... 312
Configure custom quota notification rules................................................................................................................312
Map an email notification rule for a quota.................................................................................................................313
Email quota notification messages...................................................................................................................................313
Custom email notification template variable descriptions.......................................................................................314
Customize email quota notification templates......................................................................................................... 314
Managing quota reports....................................................................................................................................................315
Create a quota report schedule................................................................................................................................. 315
Generate a quota report..............................................................................................................................................315
Locate a quota report..................................................................................................................................................315
Basic quota settings.......................................................................................................................................................... 316
Advisory limit quota notification rules settings...............................................................................................................316
Soft limit quota notification rules settings...................................................................................................................... 317
Hard limit quota notification rules settings..................................................................................................................... 319
Limit notification settings..................................................................................................................................................319
Quota report settings....................................................................................................................................................... 320
Delete an SSD compatibility....................................................................................................................................... 332
Managing L3 cache from the web administration interface........................................................................................332
Set L3 cache as the default for node pools............................................................................................................. 332
Set L3 cache on a specific node pool....................................................................................................................... 333
Restore SSDs to storage drives for a node pool..................................................................................................... 333
Managing tiers................................................................................................................................................................... 333
Create a tier................................................................................................................................................................. 333
Edit a tier...................................................................................................................................................................... 334
Delete a tier.................................................................................................................................................................. 334
Creating file pool policies..................................................................................................................................................334
Create a file pool policy...............................................................................................................................................335
File-matching options for file pool policies............................................................................................................... 335
Valid wildcard characters........................................................................................................................................... 336
SmartPools settings.................................................................................................................................................... 337
Managing file pool policies................................................................................................................................................339
Configure default file pool protection settings........................................................................................................ 339
Default file pool requested protection settings....................................................................................................... 340
Configure default I/O optimization settings............................................................................................................. 341
Default file pool I/O optimization settings................................................................................................................ 341
Modify a file pool policy.............................................................................................................................................. 342
Prioritize a file pool policy........................................................................................................................................... 342
Create a file pool policy from a template..................................................................................................................342
Delete a file pool policy............................................................................................................................................... 342
Monitoring storage pools..................................................................................................................................................343
Monitor storage pools.................................................................................................................................................343
View subpool health................................................................................................................................... 343
View the results of a SmartPools job........................................................................................................................343
Copy an impact policy.................................................................................................................................................352
Modify an impact policy..............................................................................................................................................353
Delete an impact policy...............................................................................................................................................353
View impact policy settings........................................................................................................................................353
Viewing job reports and statistics................................................................................................................................... 353
View statistics for a job in progress.......................................................................................................................... 354
View a report for a completed job.............................................................................................................................354
Configure a connection balancing policy.................................................................................................................. 374
Configure an IP failover policy................................................................................................................................... 375
Configure an IP rebalance policy............................................................................................................................... 375
Managing network interface members...........................................................................................................................376
Add or remove a network interface.......................................................................................................................... 376
Configure link aggregation..........................................................................................................................................377
Managing node provisioning rules................................................................................................................................... 378
Create a node provisioning rule................................................................................................................................. 378
Modify a node provisioning rule................................................................................................................................. 379
Delete a node provisioning rule.................................................................................................................................. 379
View node provisioning rule settings.........................................................................................................................379
Managing routing options.................................................................................................................................................379
Enable or disable source-based routing....................................................................................................................379
Add or remove a static route..................................................................................................................................... 380
Managing DNS cache settings........................................................................................................................................ 380
Flush the DNS cache.................................................................................................................................................. 380
Modify DNS cache settings....................................................................................................................................... 380
DNS cache settings.................................................................................................................................................... 380
Managing TCP ports..........................................................................................................................................................381
Add or remove TCP ports........................................................................................................................................... 381
Managing antivirus policies............................................................................................................................................... 391
Modify an antivirus policy............................................................................................................................................391
Delete an antivirus policy.............................................................................................................................................391
Enable or disable an antivirus policy..........................................................................................................................392
View antivirus policies................................................................................................................................................. 392
Managing antivirus scans................................................................................................................................................. 392
Scan a file..................................................................................................................................................................... 392
Manually run an antivirus policy................................................................................................................................. 392
Stop a running antivirus scan.....................................................................................................................................392
Managing antivirus threats.............................................................................................................................................. 393
Manually quarantine a file........................................................................................................................................... 393
Rescan a file................................................................................................................................................................. 393
Remove a file from quarantine...................................................................................................................................393
Manually truncate a file...............................................................................................................................................393
View threats.................................................................................................................................................................394
Antivirus threat information....................................................................................................................................... 394
Managing antivirus reports.............................................................................................................................................. 394
View antivirus reports................................................................................................................................................. 394
View antivirus events..................................................................................................................................................394
1 Introduction to this guide
Topics:
• About this guide
• Scale-out NAS overview
• Where to go for support
Dell Technologies support
• Support tab on the Dell homepage: https://www.dell.com/support/incidents-online. After you identify your product, the "How to Contact Us" section gives you the option of email, chat, or telephone support.
• For questions about accessing online support, send an email to [email protected].
Node components
As a rack-mountable appliance, a pre-Generation 6 storage node includes the following components in a 2U or 4U rack-mountable chassis
with an LCD front panel: CPUs, RAM, NVRAM, network interfaces, InfiniBand adapters, disk controllers, and storage media. A PowerScale
cluster is made up of three or more nodes, up to 144. Generation 6 hardware always uses the 4U chassis, which holds four nodes, so each
node occupies a quarter of the chassis.
When you add a node to a pre-Generation 6 cluster, you increase the aggregate disk, cache, CPU, RAM, and network capacity. OneFS
groups RAM into a single coherent cache so that a data request on a node benefits from data that is cached anywhere. NVRAM is
grouped to write data with high throughput and to protect write operations from power failures. As the cluster expands, spindles and CPU
combine to increase throughput, capacity, and input-output operations per second (IOPS). The minimum cluster size for Generation 6 is four
nodes, and Generation 6 does not use NVRAM. Journals are stored in RAM, and M.2 flash serves as a backup in case of node failure.
The PowerScale F200 and F600 nodes are 1U models that require a minimum cluster size of three nodes. Clusters can be expanded to a
maximum of 252 nodes in single node increments.
There are several types of nodes, all of which can be added to a cluster to balance capacity and performance with throughput or IOPS:
Node Function
A-Series Performance Accelerator: Independent scaling for high performance
A-Series Backup Accelerator: High-speed and scalable backup-and-restore solution for tape drives over Fibre Channel connections
Cluster administration
OneFS centralizes cluster management through a web administration interface and a command-line interface. Both interfaces provide
methods to activate licenses, check the status of nodes, configure the cluster, upgrade the system, generate alerts, view client
connections, track performance, and change various settings.
In addition, OneFS simplifies administration by automating maintenance with a Job Engine. You can schedule jobs that scan for viruses,
inspect disks for errors, reclaim disk space, and check the integrity of the file system. The engine manages the jobs to minimize impact on
the cluster's performance.
With SNMP versions 2c and 3, you can remotely monitor hardware components, CPU usage, switches, and network interfaces. Dell EMC
PowerScale supplies management information bases (MIBs) and traps for the OneFS operating system.
OneFS also includes an application programming interface (API) that is divided into two functional areas: One area enables cluster
configuration, management, and monitoring functionality, and the other area enables operations on files and directories on the cluster. You
can send requests to the OneFS API through a Representational State Transfer (REST) interface, which is accessed through resource
URIs and standard HTTP methods. The API integrates with OneFS role-based access control (RBAC) to increase security. See the
PowerScale API Reference.
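For example, the cluster configuration resource can be queried with curl. The address and credentials below are placeholders for illustration, and the exact resource versions available depend on your OneFS release:

```shell
# Hypothetical cluster address; replace with a node's IP address or name.
CLUSTER="https://192.0.2.10:8080"

# Configuration, management, and monitoring resources live under /platform:
curl -k -u admin "$CLUSTER/platform/1/cluster/config"

# File and directory operations live under /namespace:
curl -k -u admin "$CLUSTER/namespace/ifs/data/"
```

The `-k` flag skips certificate verification, which is convenient against a cluster that still uses its self-signed certificate but should not be used once a trusted certificate is installed.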
Quorum
A PowerScale cluster must have a quorum to work correctly. A quorum prevents data conflicts—for example, conflicting versions of the
same file—in case two groups of nodes become unsynchronized. If a cluster loses its quorum for read and write requests, you cannot
access the OneFS file system.
For a quorum, more than half the nodes must be available over the internal network. A seven-node cluster, for example, requires a four-
node quorum. A 10-node cluster requires a six-node quorum. If a node is unreachable over the internal network, OneFS separates the node
from the cluster, an action referred to as splitting. After a cluster is split, cluster operations continue as long as enough nodes remain
connected to have a quorum.
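The majority rule above reduces to simple arithmetic, sketched here in shell (OneFS computes this internally):

```shell
# Quorum requires more than half of the nodes to be available:
# floor(n / 2) + 1 nodes.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 7    # prints 4
quorum 10   # prints 6
```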
In a split cluster, the nodes that remain in the cluster are referred to as the majority group. Nodes that are split from the cluster are
referred to as the minority group.
When split nodes can reconnect with the cluster and re-synchronize with the other nodes, the nodes rejoin the cluster's majority group, an
action referred to as merging.
A OneFS cluster contains two quorum properties:
• read quorum (efs.gmp.has_quorum)
• write quorum (efs.gmp.has_super_block_quorum)
By connecting to a node with SSH and running the sysctl command-line tool as root, you can view the status of both types of quorum.
Here is an example for a cluster that has a quorum for both read and write operations, as the command output indicates with a 1, for true:
sysctl efs.gmp.has_quorum
efs.gmp.has_quorum: 1
sysctl efs.gmp.has_super_block_quorum
efs.gmp.has_super_block_quorum: 1
The degraded states of nodes—such as smartfail, read-only, offline—affect quorum in different ways. A node in a smartfail or read-only
state affects only write quorum. A node in an offline state, however, affects both read and write quorum. In a cluster, the combination of
nodes in different degraded states determines whether read requests, write requests, or both work.
A cluster can lose write quorum but keep read quorum. Consider a four-node cluster in which nodes 1 and 2 are working normally. Node 3
is in a read-only state, and node 4 is in a smartfail state. In such a case, read requests to the cluster succeed. Write requests, however,
receive an input-output error because the states of nodes 3 and 4 break the write quorum.
A cluster can also lose both its read and write quorum. If nodes 3 and 4 in a four-node cluster are in an offline state, both write requests
and read requests receive an input-output error, and you cannot access the file system. When OneFS can reconnect with the nodes,
OneFS merges them back into the cluster. Unlike a RAID system, a PowerScale node can rejoin the cluster without being rebuilt and
reconfigured.
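The interaction of degraded states and quorum can be modeled with a small sketch. This is a simplification for illustration; actual group-state handling in OneFS is more involved:

```shell
# Offline nodes count against both quorums; smartfail and read-only
# nodes count against write quorum only. Prints 1 if quorum holds,
# 0 otherwise.
read_quorum()  { total=$1; offline=$2; echo $(( (total - offline) > total / 2 ? 1 : 0 )); }
write_quorum() { total=$1; offline=$2; degraded=$3; echo $(( (total - offline - degraded) > total / 2 ? 1 : 0 )); }

# Four-node cluster, node 3 read-only, node 4 smartfailed:
read_quorum 4 0      # prints 1 -> reads succeed
write_quorum 4 0 2   # prints 0 -> writes fail
```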
Storage pools
Storage pools segment nodes and files into logical divisions to simplify the management and storage of data.
A storage pool comprises node pools and tiers. Node pools group equivalent nodes to protect data and ensure reliability. Tiers combine
node pools to optimize storage by need, such as a frequently used high-speed tier or a rarely accessed archive.
The SmartPools module groups nodes and files into pools. If you do not activate a SmartPools license, the module provisions node pools
and creates one file pool. If you activate the SmartPools license, you receive more features. You can, for example, create multiple file pools
and govern them with policies. The policies move files, directories, and file pools among node pools or tiers. You can also define how
OneFS handles write operations when a node pool or tier is full. SmartPools reserves a virtual hot spare to reprotect data if a drive fails
regardless of whether the SmartPools license is activated.
SMB The Server Message Block (SMB) protocol enables Windows users to access the cluster. OneFS works with SMB
1, SMB 2, and SMB 2.1, as well as SMB 3.0 for Multichannel only. With SMB 2.1, OneFS supports client opportunistic
locks (oplocks) and large (1 MB) MTU sizes.
NFS The Network File System (NFS) protocol enables UNIX, Linux, and Mac OS X systems to remotely mount any
subdirectory, including subdirectories created by Windows users. OneFS works with NFS versions 3 and 4.
Access zones OneFS includes an access zones feature. Access zones allow users from different authentication providers, such
as two untrusted Active Directory domains, to access different OneFS resources based on an incoming IP
address. An access zone can contain multiple authentication providers and SMB namespaces.
RBAC for administration OneFS includes role-based access control for administration. In place of a root or administrator account, RBAC
lets you manage administrative access by role. A role limits privileges to an area of administration. For example,
you can create separate administrator roles for security, auditing, storage, and backup.
Data layout
OneFS evenly distributes data among a cluster's nodes with layout algorithms that maximize storage efficiency and performance. The
system continuously reallocates data to conserve space.
OneFS breaks data down into smaller sections called blocks, and then the system places the blocks in a stripe unit. By referencing either
file data or erasure codes, a stripe unit helps safeguard a file from a hardware failure. The size of a stripe unit depends on the file size, the
number of nodes in the cluster, and the protection setting.
Writing files
On a node, the input-output operations of the OneFS software stack split into two functional layers: A top layer, or initiator, and a bottom
layer, or participant. In read and write operations, the initiator and the participant play different roles.
When a client writes a file to a node, the initiator on the node manages the layout of the file on the cluster. First, the initiator divides the
file into blocks of 8 KB each. Second, the initiator places the blocks in one or more stripe units. At 128 KB, a stripe unit consists of 16
blocks. Third, the initiator spreads the stripe units across the cluster until they span a width of the cluster, creating a stripe. The width of
the stripe depends on the number of nodes and the protection setting.
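The block and stripe-unit sizes described above relate as follows:

```shell
# Arithmetic behind the stripe-unit description:
BLOCK_KB=8            # OneFS divides files into 8 KB blocks
STRIPE_UNIT_KB=128    # a full stripe unit is 128 KB

echo $(( STRIPE_UNIT_KB / BLOCK_KB ))   # prints 16 (blocks per stripe unit)
```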
After dividing a file into stripe units, the initiator writes the data first to non-volatile random-access memory (NVRAM) and then to disk.
NVRAM retains the information when the power is off.
During the write transaction, NVRAM guards against failed nodes with journaling. If a node fails mid-transaction, the transaction restarts
without the failed node. When the node returns, it replays the journal from NVRAM to finish the transaction. The node also runs the
AutoBalance job to check the file's on-disk striping. Meanwhile, uncommitted writes waiting in the cache are protected with mirroring. As
a result, OneFS eliminates multiple points of failure.
Reading files
In a read operation, a node acts as a manager to gather data from the other nodes and present it to the requesting client.
Because a PowerScale cluster's coherent cache spans all the nodes, OneFS can store different data in each node's RAM. A node using
the internal network can retrieve file data from another node's cache faster than from its own local disk. If a read operation requests data
that is cached on any node, OneFS pulls the cached data to serve it quickly.
For files with an access pattern of concurrent or streaming, OneFS pre-fetches in-demand data into a managing node's local cache to
further improve sequential-read performance.
Metadata layout
OneFS protects metadata by spreading it across nodes and drives.
Metadata—which includes information about where a file is stored, how it is protected, and who can access it—is stored in inodes and
protected with locks in a B+ tree, a standard structure for organizing data blocks in a file system to provide instant lookups. OneFS
replicates file metadata across the cluster so that there is no single point of failure.
Working together as peers, all the nodes help manage metadata access and locking. If a node detects an error in metadata, the node looks
up the metadata in an alternate location and then corrects the error.
Feature Description
Anti-virus OneFS can send files to servers running the Internet Content
Adaptation Protocol (ICAP) to scan for viruses and other threats.
Clones OneFS enables you to create clones that share blocks with other
files to save space.
NDMP backup and restore OneFS can back up data to tape and other devices through the
Network Data Management Protocol. Although OneFS supports
both three-way and two-way backup, two-way backup requires a
PowerScale Backup Accelerator Node.
Protection domains You can apply protection domains to files and directories to
prevent changes.
The following software modules help protect data, but you must activate a separate license to use them:
Data mirroring
You can protect on-disk data with mirroring, which copies data to multiple locations. OneFS supports two to eight mirrors. You can use
mirroring instead of erasure codes, or you can combine erasure codes with mirroring.
Mirroring, however, consumes more space than erasure codes. Mirroring data three times, for example, stores three full copies of the
data, which requires more space than erasure codes. In exchange, mirroring avoids the computational overhead of erasure coding, so it
suits transactions that require high performance.
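The space trade-off can be made concrete with a quick calculation. The 16 data + 2 code-unit layout below is an illustrative assumption, not a statement of the protection level your cluster uses:

```shell
# Space consumed by a 100 MB file under the two protection schemes:
FILE_MB=100

echo $(( FILE_MB * 3 ))         # 3x mirroring: prints 300
echo $(( FILE_MB * 18 / 16 ))   # 16 data + 2 code units: prints 112
```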
You can also mix erasure codes with mirroring. During a write operation, OneFS divides data into redundant protection groups. For files
protected by erasure codes, a protection group consists of data blocks and their erasure codes. For mirrored files, a protection group
contains all the mirrors of a set of blocks. OneFS can switch the type of protection group as it writes a file to disk. By changing the
protection group dynamically, OneFS can continue writing data despite a node failure that prevents the cluster from applying erasure
codes. After the node is restored, OneFS automatically converts the mirrored protection groups to erasure codes.
Data compression
OneFS supports inline data compression on Isilon F810 and H5600 nodes, and on PowerScale F200 and F600 nodes.
The F810 node contains a Network Interface Card (NIC) that compresses and decompresses data.
Hardware compression and decompression are performed in parallel across the 40Gb Ethernet interfaces of supported nodes as clients
read and write data to the cluster. This distributed interface model allows compression to scale linearly across the node pool as supported
nodes are added to a cluster.
You can enable inline data compression on a cluster that:
• Contains F810, H5600, F200, or F600 node pools
• Offers a 40Gb Ethernet back-end network
The following table lists the nodes and OneFS release combinations that support inline data compression.
Mixed Clusters
In a mixed cluster environment, data is stored in a compressed form on F810, H5600, F200, and F600 node pools. Data that is written or
tiered to storage pools of other node types is uncompressed when it moves between pools.
Software modules
You can access advanced features by activating licenses for Dell EMC PowerScale software modules.
SmartLock SmartLock protects critical data from malicious, accidental, or premature alteration or deletion to help you comply
with SEC 17a-4 regulations. You can automatically commit data to a tamper-proof state and then retain it with a
compliance clock.
HDFS OneFS works with the Hadoop Distributed File System protocol to help clients running Apache Hadoop, a
framework for data-intensive distributed applications, analyze big data.
SyncIQ automated failover and failback SyncIQ replicates data on another PowerScale cluster and automates failover and failback
between clusters. If a cluster becomes unusable, you can fail over to another PowerScale cluster. Failback
restores the original source data after the primary cluster becomes available again.
Security hardening Security hardening is the process of configuring your system to reduce or eliminate as many security risks as
possible. You can apply a hardening policy that secures the configuration of OneFS, according to policy guidelines.
SnapshotIQ SnapshotIQ protects data with a snapshot—a logical copy of data that is stored on a cluster. A snapshot can be
restored to its top-level directory.
SmartDedupe You can reduce redundancy on a cluster by running SmartDedupe. Deduplication creates links that can impact the
speed at which you can read from and write to files.
SmartPools SmartPools enables you to create multiple file pools governed by file-pool policies. The policies move files and
directories among node pools or tiers. You can also define how OneFS handles write operations when a node pool
or tier is full.
User interfaces
OneFS provides several interfaces for managing PowerScale clusters.
IPv4 https://<yourNodeIPaddress>:8080
IPv6 https://[<yourNodeIPaddress>]:8080
If your security certificates have not been configured, the system displays a message. Resolve any certificate configuration issues, and
then continue to the website.
2. Log in to OneFS by typing your OneFS credentials in the Username and Password fields.
After you log in to the web administration interface, there is a 4-hour login timeout.
Licensing
All PowerScale software and hardware must be licensed through Dell EMC Software Licensing Central (SLC).
A license file contains a record of your active licenses and your cluster hardware. One copy of the license file is stored in the SLC
repository, and another copy of the license file is stored on your cluster. The license file contains a record of the following license types:
Software licenses
Your OneFS license and optional software module licenses are contained in the license file on your cluster. Your license file must match
your license record in the Dell EMC Software Licensing Central (SLC) repository.
Ensure that the license file on your cluster, and your license file in the SLC repository, match your upgraded version of OneFS.
Advanced cluster features are available when you activate licenses for the following OneFS software modules:
• CloudPools
• Security hardening
• HDFS
• PowerScale Swift
• SmartConnect Advanced
• SmartDedupe
• SmartLock
• SmartPools
• SmartQuotas
• SnapshotIQ
• SyncIQ
For more information about optional software modules, contact your Dell EMC sales representative.
Hardware tiers
Your license file contains information about the PowerScale hardware that is installed in your cluster.
Your license file lists nodes by tiers. Nodes are placed into a tier according to their compute performance level, capacity, and drive type.
NOTE: Your license file contains line items for every node in your cluster. However, pre-Generation 6 hardware is not
included in the OneFS licensing model.
License status
The status of a OneFS license indicates whether the license file on your cluster reflects your current version of OneFS. The status of a
OneFS module license indicates whether the functionality provided by a module is available on the cluster.
Licenses exist in one of the following states:
Status Description
Unsigned The license has not been updated in Dell EMC Software Licensing
Central (SLC). You must generate and submit an activation file to
update your license file with your new version of OneFS.
Inactive The license has not been activated on the cluster. You cannot
access the features provided by the corresponding module.
• You can view active alerts that are related to your licenses by clicking Alerts about licenses in the upper corner of the Cluster
Management > Licensing page.
The signed license file is an XML file with a name in the following format:
ISLN_nnn_date.xml
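A quick shell check can confirm that a file name follows this pattern. The exact formats of the nnn and date fields are assumptions, so the pattern below matches only the fixed parts:

```shell
# Returns success if the name matches ISLN_<something>_<something>.xml:
is_license_file() {
    case "$1" in
        ISLN_*_*.xml) return 0 ;;
        *)            return 1 ;;
    esac
}

is_license_file "ISLN_123_20200601.xml" && echo "matches"
```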
NOTE: OneFS negotiates the highest TLS version that both the cluster and the client support.
Output similar to the following is displayed, where <old cert ID> is the ID number of the current certificate:
ID              Name            Default
<old cert ID>   Isilon Systems  True
Total: 1
3. Import the new certificate. This assumes the certificate is located in the /ifs/local directory.
Output similar to the following is displayed, where <old cert ID> is the ID number of the current certificate:
ID              Name            Default
<new cert ID>                   True
<old cert ID>   Isilon Systems  False
Total: 2
Are you sure you want to delete this certificate (ID: <old cert ID>)? (yes/[no]): yes
NOTE: If you have multiple certificates, set the new certificate as the default after you replace the old one.
Create a backup directory:
mkdir /ifs/data/backup/
4. Make backup copies of the existing server.crt and server.key files by using the cp command to copy them to the backup
directory that you just created.
NOTE: If files with the same names exist in the backup directory, either overwrite the existing files, or, to save the
old backups, rename the new files with a timestamp or other identifier.
5. Create a working directory to hold the files while you complete this procedure:
mkdir /ifs/local/
cd /ifs/local/
9. When prompted, type the information to be incorporated into the certificate request.
When you finish entering the information, a renewal certificate is created, based on the existing (stock) server key. The renewal
certificate is named server.crt and it appears in the /ifs/local directory.
10. Optional: To verify the attributes in the TLS certificate, run the following command:
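The verification command for step 10 is not reproduced above. One common way to inspect certificate attributes is with OpenSSL; the block below generates a throwaway self-signed certificate purely to demonstrate the check, where in the procedure you would point at /ifs/local/server.crt instead:

```shell
# Create a demo key and self-signed certificate (demonstration only):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=isilon.example.com" \
    -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Display the attributes embedded in the certificate:
openssl x509 -noout -subject -dates -in /tmp/demo.crt
```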
11. Run the following commands to install the certificate and key and restart the isi_webui service:
If the private key is password encrypted, you can use the isi certificate server import command's --certificate-
key-password <string> parameter to specify the password.
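The commands for step 11 are not reproduced above. A sketch of the usual sequence follows; the option names and the service-restart commands are assumptions, so confirm them against the CLI help on your cluster (for example, `isi certificate server import --help`):

```shell
# Install the new certificate and key (paths from the procedure above;
# option names are assumptions):
isi certificate server import \
    --certificate-path=/ifs/local/server.crt \
    --certificate-key-path=/ifs/local/server.key

# Restart the web administration service so it picks up the change:
isi services -a isi_webui disable
isi services -a isi_webui enable
```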
12. Run the command isi certificate server list to verify that the installation succeeded. Optionally re-run the isi
certificate server view server.crt command to confirm the certificate settings.
13. Delete the temporary working files from the /ifs/local directory:
rm /ifs/local/<common-name>.csr \
/ifs/local/<common-name>.key /ifs/local/<common-name>.crt
14. (Optional) Delete the backup files from the /ifs/data/backup directory:
rm /ifs/data/backup/server.crt.bak \
/ifs/data/backup/server.key.bak
In addition, if you are requesting a third-party CA-issued certificate, you should include additional attributes that are shown in the following
example:
Cluster identity
You can specify identity attributes for a PowerScale cluster.
Cluster name The cluster name appears on the login page, and it makes the cluster and its nodes more easily recognizable on
your network. Each node in the cluster is identified by the cluster name plus the node number. For example, the
first node in a cluster that is named Images may be named Images-1.
Cluster description The cluster description appears below the cluster name on the login page. The cluster description is useful if your
environment has multiple clusters.
Login message The login message appears as a separate box on the login page of the OneFS web administration interface, or as a
line of text under the cluster name in the OneFS command-line interface. The login message can convey cluster
information, login instructions, or warnings that a user should know before logging into the cluster. Set this
information in the Cluster Identity page of the OneFS web administration interface.
NOTE: If the cluster and Active Directory become out of sync by more than 5 minutes, authentication will not work.
4. Click Submit.
Mode Description
Manual Allows you to manually add a node to the cluster without requiring
authorization.
Secure Requires authorization of every node added to the cluster. The
node must be added through the web administration interface or
through the isi devices -a add -d
<unconfigured_node_serial_no> command in the
command-line interface.
NOTE: If you specify a secure join mode, you cannot join
a node to the cluster through serial console wizard
option [2] Join an existing cluster.
Option Description
Manual Joins can be manually initiated
Secure Joins can be initiated only by the cluster and require authentication
3. Click Submit.
1. Click File System Management > File System Settings > Access Time Tracking.
2. In the Access Time Tracking area, click the Enable access time tracking check box to track file access time stamps. This feature
is disabled by default.
3. In the Precision fields, specify how often to update the last-accessed time by typing a numeric value and by selecting a unit of
measure, such as Seconds, Minutes, Hours, Days, Weeks, Months, or Years.
For example, if you configure a Precision setting of one day, the cluster updates the last-accessed time once each day, even if some
files were accessed more often than once during the day.
4. Click Save Changes.
Security hardening
Security hardening is the process of configuring a system to reduce or eliminate as many security risks as possible.
When you apply a hardening profile on a PowerScale cluster, OneFS reads the security profile file and applies the configuration defined in
the profile to the cluster. If required, OneFS identifies configuration issues that prevent hardening on the nodes. For example, the file
permissions on a particular directory might not be set to the expected value, or the required directories might be missing. When an issue is
found, you can choose to allow OneFS to resolve the issue, or you can defer resolution and fix the issue manually.
NOTE: The intention of the hardening profile is to support the Security Technical Implementation Guides (STIGs) that
are defined by the Defense Information Systems Agency (DISA) and applicable to OneFS. Currently, the hardening
profile only supports a subset of requirements defined by DISA in STIGs. The hardening profile is meant to be primarily
used in Federal accounts.
If you determine that the hardening configuration is not right for your system, OneFS allows you to revert the security hardening profile.
Reverting a hardening profile returns OneFS to the configuration achieved by resolving issues, if any, prior to hardening.
You must have an active security hardening license and be logged in to the PowerScale cluster as the root user to apply hardening
to OneFS. To obtain a license, contact your PowerScale sales representative.
OneFS checks whether the system contains any configuration issues that must be resolved before hardening can be applied.
• If OneFS does not encounter any issues, the hardening profile is applied.
• If OneFS encounters issues, the system displays output similar to the following example:
Found the following Issue(s) on the cluster:
Issue #1 (PowerScale Control_id:isi_GEN001200_01)
Node: test-cluster-2
1: /etc/syslog.conf: Actual permission 0664; Expected permission 0654
Total: 2 issue(s)
Do you want to resolve the issue(s)?[Y/N]:
3. Resolve any configuration issues. At the prompt Do you want to resolve the issue(s)?[Y/N], choose one of the
following actions:
• To allow OneFS to resolve all issues, type Y. OneFS fixes the issues and then applies the hardening profile.
• To defer resolution and fix all of the found issues manually, type N. After you have fixed all of the deferred issues, run the isi
hardening apply command again.
NOTE: If OneFS encounters an issue that is considered catastrophic, the system prompts you to resolve the issue
manually. OneFS cannot resolve a catastrophic issue.
Total: 2 issue(s)
Do you want to resolve the issue(s)?[Y/N]:
3. Resolve any configuration issues. At the prompt Do you want to resolve the issue(s)?[Y/N], choose one of the
following actions:
• To allow OneFS to resolve all issues, type Y. OneFS sets the affected configurations to the expected state and then reverts the
hardening profile.
• To defer resolution and fix all of the found issues manually, type N. OneFS halts the revert process until all of the issues are fixed.
After you have fixed all of the deferred issues, run the isi hardening revert command again.
NOTE: If OneFS encounters an issue that is considered catastrophic, the system prompts you to resolve the issue
manually. OneFS cannot resolve a catastrophic issue.
Cluster monitoring
You can monitor the health, performance, and status of the PowerScale cluster.
Using the OneFS dashboard in the web administration interface, you can monitor the status and health of the OneFS system.
Information is available for individual nodes, including node-specific network traffic, internal and external network interfaces, and details
about node pools, tiers, and overall cluster health. You can monitor the following areas of PowerScale cluster health and performance:
Node status: Health and performance statistics for each node in the cluster, including hard disk drive (HDD) and solid-state drive (SSD) usage.
Client connections: Number of clients connected per node.
New events: List of event notifications generated by system events, including the severity, unique instance ID, start time, alert message, and scope of the event.
Cluster size: Current view: Used and available HDD and SSD space and space reserved for the virtual hot spare (VHS). Historical view: Total used space and cluster size for a one-year period.
Cluster throughput (file system): Current view: Average inbound and outbound traffic volume passing through the nodes in the cluster for the past hour. Historical view: Average inbound and outbound traffic volume passing through the nodes in the cluster for the past two weeks.
CPU usage: Current view: Average system, user, and total percentages of CPU usage for the past hour. Historical view: CPU usage for the past two weeks. You can hide or show a plot by clicking System, User, or Total in the chart legend. To view maximum usage, next to Show, select Maximum.
SUSPENDED: This state indicates that drive activity is temporarily suspended and the drive is not in use. (Reported in the command-line interface and web administration interface.)
ERASE: The drive is ready for removal but needs your attention because the data has not been erased. You can erase the drive manually to guarantee that data is removed. (Reported in the command-line interface only.)
NOTE: In the web administration interface, this state is included in Not available.
SNMP monitoring
You can use SNMP to remotely monitor the PowerScale cluster hardware components, such as fans, hardware sensors, power supplies,
and disks. Use the default Linux SNMP tools or a GUI-based SNMP tool of your choice for this purpose.
SNMP is enabled or disabled cluster-wide; nodes are not configured individually. You can monitor cluster information from any node in the
cluster. Generated SNMP traps correspond to CELOG events. SNMP notifications can also be sent by using isi event channels
create snmpchannel snmp --use-snmp-trap false.
You can configure an event notification rule that specifies the network station where you want to send SNMP traps for specific events.
When the specific event occurs, the cluster sends the trap to that server. OneFS supports SNMP version 2c (default), and SNMP version
3 in read-only mode.
OneFS does not support SNMP version 1. Although an option for --snmp-v1-v2-access exists in the OneFS command-line interface
(CLI) command isi snmp settings modify, if you enable this option, OneFS monitors only through SNMP version 2c.
You can configure settings for SNMP version 3 alone or for both SNMP version 2c and version 3.
NOTE: All SNMP v3 security levels are configurable: noAuthNoPriv, authNoPriv, authPriv.
Elements in an SNMP hierarchy are arranged in a tree structure, similar to a directory tree. As with directories, identifiers move from
general to specific as the string progresses from left to right. Unlike a file hierarchy, however, each element is not only named, but also
numbered.
For example, the SNMP entity
iso.org.dod.internet.private.enterprises.powerscale.cluster.clusterStatus.clusterName.0 maps
to .1.3.6.1.4.1.12124.1.1.1.0. The part of the name that refers to the OneFS SNMP namespace is the 12124 element.
Anything further to the right of that number is related to OneFS-specific monitoring.
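The name-to-number mapping described above can be sketched programmatically. The following is an illustrative sketch only, not an official OneFS or Net-SNMP tool; the numeric IDs are taken from the example OID in the text.

```python
# Illustrative sketch: each element of the symbolic SNMP name maps to
# one number in the dotted OID. Numeric IDs are from the example above.
SNMP_PATH = [
    ("iso", 1), ("org", 3), ("dod", 6), ("internet", 1),
    ("private", 4), ("enterprises", 1), ("powerscale", 12124),
    ("cluster", 1), ("clusterStatus", 1), ("clusterName", 1),
]

def to_numeric_oid(path, instance=0):
    """Join the numeric IDs left to right and append the instance suffix."""
    return "." + ".".join(str(number) for _, number in path) + "." + str(instance)

print(to_numeric_oid(SNMP_PATH))  # .1.3.6.1.4.1.12124.1.1.1.0
```

As in a directory tree, everything to the right of the enterprise number (12124) is specific to OneFS monitoring.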
Management Information Base (MIB) documents define human-readable names for managed objects and specify their data types and
other properties. You can download MIBs that are created for SNMP-monitoring of a PowerScale cluster from the OneFS web
administration interface or manage them using the command line interface (CLI). MIBs are stored in /usr/share/snmp/mibs/ on a
OneFS node. The OneFS ISILON-MIBs serve two purposes:
• Augment the information available in standard MIBs
• Provide OneFS-specific information that is unavailable in standard MIBs
ISILON-MIB is a registered enterprise MIB. PowerScale clusters have two separate MIBs:
ISILON-MIB Defines a group of SNMP agents that respond to queries from a network monitoring system (NMS) called OneFS
Statistics Snapshot agents. As the name implies, these agents snapshot the state of the OneFS file system at the
time that it receives a request and reports this information back to the NMS.
ISILON-TRAP-MIB Generates SNMP traps to send to an SNMP monitoring station when the circumstances occur that are defined in
the trap protocol data units (PDUs).
The OneFS MIB files map the OneFS-specific object IDs to descriptions. Download or copy MIB files to a directory where your SNMP
tool can find them, such as /usr/share/snmp/mibs/.
During SNMP configuration, it is recommended that you update the MIB mapping so that your SNMP tool can locate the OneFS MIB files.
If the MIB files are not in the default Net-SNMP MIB directory, you may need to specify the full path, as in the following example. All three
lines are a single command.
snmpwalk -m /usr/local/share/snmp/mibs/ISILON-MIB.txt:/usr \
/share/snmp/mibs/ISILON-TRAP-MIB.txt:/usr/share/snmp/mibs \
/ONEFS-TRAP-MIB.txt -v2c -C c -c public isilon
NOTE: The previous example runs the snmpwalk command on a cluster. Your SNMP version may require
different arguments.
5. If your protocol is SNMPv2, ensure that the Allow SNMPv2 Access check box is selected. SNMPv2 is selected by default.
6. In the SNMPv2 Read-Only Community Name field, enter the appropriate community name. The default is I$ilonpublic.
7. To enable SNMPv3, click the Allow SNMPv3 Access check box.
8. Configure SNMP v3 Settings:
a. In the SNMPv3 Read-Only User Name field, type the SNMPv3 security name to change the name of the user with read-only
privileges.
The default read-only user is general.
b. In the SNMPv3 Read-Only Password field, type the new password for the read-only user to set a new SNMPv3 authentication
password.
The default password is password. We recommend that you change the password to improve security. The password must
contain at least eight characters and no spaces.
c. Type the new password in the Confirm password field to confirm the new password.
Events overview
Events are individual occurrences or conditions related to the data workflow, maintenance operations, and hardware components of your
cluster.
Throughout OneFS there are processes that are constantly monitoring and collecting information on cluster operations.
When the status of a component or operation changes, the change is captured as an event and placed into a priority queue at the kernel
level.
Every event has two ID numbers that help to establish the context of the event:
• The event type ID identifies the type of event that has occurred.
• The event instance ID is a unique number that is specific to a particular occurrence of an event type. When an event is submitted to
the kernel queue, an event instance ID is assigned. You can reference the instance ID to determine the exact time that an event
occurred.
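As a conceptual sketch only (the type ID and counter below are hypothetical, not OneFS internals), the relationship between the two IDs can be modeled like this:

```python
import itertools
from dataclasses import dataclass, field

# Hypothetical model of the two event IDs described above; not OneFS code.
_next_instance = itertools.count(1)

@dataclass
class Event:
    event_type_id: str  # identifies the type of event that occurred
    # a unique number assigned when the event enters the kernel queue
    instance_id: int = field(default_factory=lambda: next(_next_instance))

first = Event(event_type_id="100010001")   # hypothetical type ID
second = Event(event_type_id="100010001")  # same type, new occurrence
assert first.event_type_id == second.event_type_id
assert first.instance_id != second.instance_id
```

Two occurrences of the same event type share a type ID but always receive distinct instance IDs.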
You can view individual events. However, you manage events and alerts at the event group level.
Alerts overview
An alert is a message that describes a change that has occurred in an event group.
At any point in time, you can view event groups to track situations occurring on your cluster. However, you can also create alerts that will
proactively notify you if there is a change in an event group.
For example, you can generate an alert when a new event is added to an event group, when an event group is resolved, or when the
severity of an event group changes.
You can configure your cluster to only generate alerts for specific event groups, conditions, severity, or during limited time periods.
Alerts are delivered through channels. You can configure a channel to determine who will receive the alert and when.
Channels overview
Channels are pathways by which event groups send alerts.
When an alert is generated, the channel that is associated with the alert determines how the alert is distributed and who receives the
alert.
You can configure a channel to deliver alerts with one of the following mechanisms: SMTP, SNMP, or Connect Home. You can also specify
the required routing and labeling information for the delivery mechanism.
View an event
You can view the details of a specific event.
1. Click Cluster Management > Events and Alerts.
2. In the Actions column of the event group that contains the event you want to view, click View Details.
3. In the Event Details area, in the Actions column for the event you want to view, click View Details.
Managing alerts
You can view, create, modify, or delete alerts to determine the information you deliver about event groups.
View an alert
You can view the details of a specific alert.
1. Click Cluster Management > Events and Alerts > Alerts.
2. In the Actions column of the alert you want to view, click View / Edit.
f. For the New event groups, New events, Interval, Severity increase, Severity decrease, and Resolved
event group conditions, enter a number and time value for how long you would like an event to exist before the alert reports on
it.
g. For the New events condition, in the Maximum Alert Limit field, edit the maximum number of alerts that can be sent out for
new events.
h. For the ONGOING condition, enter a number and time value for the interval you want between alerts related to an ongoing event.
4. Click Create Alert.
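To illustrate the Maximum Alert Limit behavior described in step g, the following is a hypothetical sketch; the class and field names are invented for illustration and are not OneFS code.

```python
# Hypothetical sketch of the "Maximum Alert Limit" throttle for the
# New events condition: once the limit is reached, further alerts for
# new events are suppressed.
class NewEventAlertChannel:
    def __init__(self, maximum_alert_limit):
        self.maximum_alert_limit = maximum_alert_limit
        self.alerts_sent = 0

    def try_send(self):
        """Return True if an alert may be sent, False once the limit is hit."""
        if self.alerts_sent >= self.maximum_alert_limit:
            return False
        self.alerts_sent += 1
        return True

channel = NewEventAlertChannel(maximum_alert_limit=2)
print([channel.try_send() for _ in range(4)])  # [True, True, False, False]
```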
Delete an alert
You can delete alerts that you created.
1. Click Cluster Management > Events and Alerts > Alerts.
2. In the Actions column of the alert you want to delete, click More.
3. In the menu that appears, click Delete.
NOTE: You can delete multiple alerts by selecting the check box next to the alert names you want to delete, then
selecting Delete Selections from the Select an action drop-down list.
Modify an alert
You can modify an alert that you created.
1. Click Cluster Management > Events and Alerts > Alerts.
2. In the Actions column of the alert you want to modify, click View / Edit.
3. Click Edit Alert.
4. Modify the alert settings as needed.
a. In the Name field, edit the alert name.
b. In the Alert Channels area, click the checkbox next to the channel you want to associate with the alert.
To associate a new channel with the alert, click Create an Alert Channel.
c. Click the checkbox next to the Event Group Categories you want to associate with the alert.
d. In the Event Group ID field, enter the ID of the event group you would like the alert to report on.
To add another event group ID to the alert, click Add Another Event Group ID.
e. Select an alert condition from the Condition drop-down list.
NOTE: Depending on the alert condition you select, other settings will appear.
f. For the New event groups, New events, Interval, Severity increase, Severity decrease, and Resolved
event group conditions, enter a number and time value for how long you would like an event to exist before the alert reports on
it.
g. For the New events condition, in the Maximum Alert Limit field, edit the maximum number of alerts that can be sent out for
new events.
h. For the ONGOING condition, enter a number and time value for the interval you want between alerts related to an ongoing event.
5. Click Save Changes.
View a channel
You can view the details of a specific channel.
1. Click Cluster Management > Events and Alerts > Alerts.
2. In the Alert Channels area, locate the channel you want to view.
3. In the Actions column of the channel you want to view, click View / Edit.
6. If you are creating an SMTP channel, you can configure the following settings:
a. In the Send to field, enter an email address you want to receive alerts on this channel.
To add another email address to the channel, click Add Another Email Address.
b. In the Send from field, enter the email address you want to appear in the from field of the alert emails.
c. In the Subject field, enter the text you want to appear on the subject line of the alert emails.
d. In the SMTP Host or Relay Address field, enter your SMTP host or relay address.
e. In the SMTP Relay Port field, enter the number of your SMTP relay port.
f. Click the Use SMTP Authentication checkbox to specify a username and password for your SMTP server.
g. For connection security, select either NONE or STARTTLS.
h. From the Notification Batch Mode dropdown, select whether alerts will be batched together, by severity, or by category.
i. From the Notification Email Template dropdown, select whether emails will be created from a standard or custom email
template.
If you specify a custom template, enter the location of the template on your cluster in the Custom Template Location field.
j. In the Master Nodes area, in the Allowed Nodes field, type the node number of a node in the cluster that is allowed to send
alerts through this channel.
To add another allowed node to the channel, click Add another Node. If you do not specify any nodes, all nodes in the cluster will
be considered allowed nodes.
k. In the Excluded Nodes field, type the node number of a node in the cluster that is not allowed to send alerts through this channel.
To add another excluded node to the channel, click Exclude another Node.
7. If you are creating a ConnectEMC channel, you can configure the following settings:
a. In the Master Nodes area, in the Allowed Nodes field, type the node number of a node in the cluster that is allowed to send
alerts through this channel.
To add another allowed node to the channel, click Add another Node. If you do not specify any nodes, all nodes in the cluster will
be considered allowed nodes.
b. In the Excluded Nodes field, type the node number of a node in the cluster that is not allowed to send alerts through this channel.
To add another excluded node to the channel, click Exclude another Node.
8. If you are creating an SNMP channel, you can configure the following settings:
a. In the Community field, enter your SNMP community string.
b. In the Host field, enter your SNMP host name or address.
c. In the Master Nodes area, in the Allowed Nodes field, type the node number of a node in the cluster that is allowed to send
alerts through this channel.
To add another allowed node to the channel, click Add another Node. If you do not specify any nodes, all nodes in the cluster will
be considered allowed nodes.
d. In the Excluded Nodes field, type the node number of a node in the cluster that is not allowed to send alerts through this channel.
Modify a channel
You can modify a channel that you created.
1. Click Cluster Management > Events and Alerts > Alerts.
2. In the Alert Channels area, locate the channel you want to modify.
3. In the Actions column of the channel you want to modify, click View / Edit.
4. Click Edit Alert Channel.
5. Click the Enable this Channel checkbox to enable or disable the channel.
6. Select the delivery mechanism for the channel from the Type drop-down list.
NOTE: Depending on the delivery mechanism you select, different settings will appear.
7. If you are modifying an SMTP channel, you can change the following settings:
a. In the Send to field, enter an email address you want to receive alerts on this channel.
To add another email address to the channel, click Add Another Email Address.
b. In the Send from field, enter the email address you want to appear in the from field of the alert emails.
c. In the Subject field, enter the text you want to appear on the subject line of the alert emails.
d. In the SMTP Host or Relay Address field, enter your SMTP host or relay address.
e. In the SMTP Relay Port field, enter the number of your SMTP relay port.
f. Click the Use SMTP Authentication checkbox to specify a username and password for your SMTP server.
g. For connection security, select either NONE or STARTTLS.
h. From the Notification Batch Mode dropdown, select whether alerts will be batched together, by severity, or by category.
i. From the Notification Email Template dropdown, select whether emails will be created from a standard or custom email
template.
If you specify a custom template, enter the location of the template on your cluster in the Custom Template Location field.
j. In the Master Nodes area, in the Allowed Nodes field, type the node number of a node in the cluster that is allowed to send
alerts through this channel.
To add another allowed node to the channel, click Add another Node. If you do not specify any nodes, all nodes in the cluster will
be considered allowed nodes.
k. In the Excluded Nodes field, type the node number of a node in the cluster that is not allowed to send alerts through this channel.
To add another excluded node to the channel, click Exclude another Node.
8. If you are modifying a ConnectHome channel, you can change the following settings:
a. In the Master Nodes area, in the Allowed Nodes field, type the node number of a node in the cluster that is allowed to send
alerts through this channel.
To add another allowed node to the channel, click Add another Node. If you do not specify any nodes, all nodes in the cluster will
be considered allowed nodes.
b. In the Excluded Nodes field, type the node number of a node in the cluster that is not allowed to send alerts through this channel.
To add another excluded node to the channel, click Exclude another Node.
9. If you are modifying an SNMP channel, you can change the following settings:
a. In the Community field, enter your SNMP community string.
b. In the Host field, enter your SNMP host name or address.
c. In the Master Nodes area, in the Allowed Nodes field, type the node number of a node in the cluster that is allowed to send
alerts through this channel.
To add another allowed node to the channel, click Add another Node. If you do not specify any nodes, all nodes in the cluster will
be considered allowed nodes.
d. In the Excluded Nodes field, type the node number of a node in the cluster that is not allowed to send alerts through this channel.
To add another excluded node to the channel, click Exclude another Node.
10. Click Save Changes.
Maintenance windows
You can schedule a maintenance window by setting a maintenance start time and duration.
During a scheduled maintenance window, the system continues to log events, but no alerts are generated. Scheduling a maintenance
window keeps channels from being flooded by benign alerts associated with cluster maintenance procedures.
Active event groups automatically resume generating alerts when the scheduled maintenance period ends.
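A minimal sketch of the suppression logic, for illustration only (the function and the window values are hypothetical, not OneFS code): events are still recorded during the window, but alert delivery is skipped.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: alerts are suppressed while "now" falls inside
# a maintenance window defined by a start time and a duration.
def alerts_suppressed(now, window_start, window_duration):
    return window_start <= now < window_start + window_duration

start = datetime(2020, 6, 1, 22, 0)   # example start: 10 PM
duration = timedelta(hours=2)         # example duration: 2 hours

print(alerts_suppressed(datetime(2020, 6, 1, 23, 0), start, duration))   # True
print(alerts_suppressed(datetime(2020, 6, 2, 0, 30), start, duration))   # False
```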
Cluster maintenance
Trained service personnel can replace or upgrade components in PowerScale nodes.
Dell EMC PowerScale Technical Support can assist you with replacing node components or upgrading components to increase
performance.
If you do not specify a node LNN, the command applies to the entire cluster.
The following example command disables ARR for the node with the LNN of 2:
4. To enable ARR for a specific node, you must perform the following steps through the command-line interface (CLI).
a. Establish an SSH connection to any node in the cluster.
b. Run the following command:
If you do not specify a node LNN, the command applies to the entire cluster.
The following example command enables ARR for the node with the LNN of 2:
NOTE: It is recommended that you contact PowerScale Technical Support before updating the drive firmware.
3. Open a secure shell (SSH) connection to any node in the cluster and log in.
4. Create or check for the availability of the directory structure /ifs/data/Isilon_Support/dsp.
5. Copy the downloaded file to the dsp directory through SCP, FTP, SMB, NFS, or any other supported data-access protocols.
6. Unpack the file by running the following command:
tar -zxvf Drive_Support_<version>.tgz
7. Install the package by running the following command:
isi_dsp_install Drive_Support_<version>.tar
NOTE:
• You must run the isi_dsp_install command to install the drive support package. Do not use the isi pkg
command.
• Running isi_dsp_install will install the drive support package on the entire cluster.
• The installation process takes care of installing all the necessary files from the drive support package followed by
the uninstallation of the package. You do not need to delete the package after its installation or prior to installing
a later version.
OneFS 8.0 or later: isi devices drive firmware list --node-lnn all
Earlier than OneFS 8.0: isi drivefirmware status
• To view the drive firmware status of drives on a specific node, run one of the following commands:
OneFS 8.0 or later: isi devices drive firmware list --node-lnn <node-number>
Earlier than OneFS 8.0: isi drivefirmware status -n <node-number>
LNN: Displays the LNN for the node that contains the drive.
Location: Displays the bay number where the drive is installed.
Firmware: Displays the version number of the firmware currently running on the drive.
Desired firmware: If the drive firmware should be upgraded, displays the version number of the drive firmware that the drive should be updated to.
Model: Displays the model number of the drive.
NOTE: The isi devices drive firmware list command displays firmware information for the drives in the local
node only. You can display drive firmware information for the entire cluster, not just the local node, by running the
following command:
isi devices drive firmware list --node-lnn all
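As an illustration of how the Firmware and Desired firmware columns relate, the rows below are hypothetical sample values, not real command output:

```python
# Hypothetical sample rows mimicking the columns described above.
# A non-empty "desired" value that differs from "firmware" means the
# drive's firmware should be updated.
rows = [
    {"lnn": 1, "location": 1, "firmware": "A204", "desired": "A250", "model": "HGST"},
    {"lnn": 1, "location": 2, "firmware": "A250", "desired": "", "model": "HGST"},
]

needs_update = [
    row["location"]
    for row in rows
    if row["desired"] and row["desired"] != row["firmware"]
]
print(needs_update)  # [1]
```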
isi config
lnnset 12 73
4. Enter commit.
You might need to reconnect to your SSH session before the new node name is automatically changed.
Alternatively, select a node, and from the Actions column, perform one of the following options:
• Click More > Shut down node to shut down the node.
• Click More > Reboot node to stop and restart the node.
Upgrading OneFS
Before upgrading OneFS software, a pre-upgrade check must be performed.
Three options are available for upgrading the OneFS operating system: a parallel upgrade, a rolling upgrade, or a simultaneous upgrade. For
more information about how to plan an upgrade, prepare the cluster for upgrade, and perform an upgrade of the operating system, see
the OneFS Upgrades Info Hub.
A parallel upgrade installs the new operating system on a subset of nodes and restarts that subset of nodes at the same time. Each subset
of nodes attempts to make a reservation for their turn to upgrade until all nodes are upgraded. Node subsets and reservations are based
on diskpool and node availability.
During a parallel upgrade, node subsets that are not being upgraded remain online and can continue serving clients. However, clients that
are connected to a restarting node are disconnected and reconnected. How the client connection behaves when a node is restarted
depends on several factors including client type, client configuration (mount type, timeout settings), IP allocation method, and how the
client connected to the cluster.
NOTE: Only upgrades from OneFS 8.2.2 to newer OneFS versions can take advantage of the parallel upgrade feature.
Upgrades that start from OneFS 8.2.1 or earlier cannot use parallel upgrades.
A rolling upgrade individually upgrades and restarts each node in the PowerScale cluster sequentially. During a rolling upgrade, the cluster
remains online and continues serving clients with no interruption in service, although some connection resets may occur on SMB clients.
Rolling upgrades are performed sequentially by node number, so a rolling upgrade takes longer to complete than a simultaneous upgrade.
The final node in the upgrade process is the node that you used to start the upgrade process.
A simultaneous upgrade installs the new operating system and restarts all nodes in the cluster at the same time. Simultaneous upgrades
are faster than rolling upgrades but require a temporary interruption of service during the upgrade process. Your data is inaccessible during
the time that it takes to complete the upgrade process.
Before beginning an upgrade, OneFS compares the current cluster and operating system with the new version to ensure that the cluster
meets certain criteria, such as configuration compatibility (SMB, LDAP, SmartPools), disk availability, and the absence of critical cluster
events. If upgrading puts the cluster at risk, OneFS warns you, provides information about the risks, and prompts you to confirm whether
to continue the upgrade.
If the cluster does not meet the pre-upgrade criteria, the upgrade does not proceed, and the unsupported statuses are listed.
NOTE: It is recommended that you run the optional pre-upgrade checks. Before starting an upgrade, OneFS checks that
your cluster is healthy enough to complete the upgrade process. Some of the pre-upgrade checks are mandatory, and
will be performed even if you choose to skip the optional checks. All pre-upgrade checks contribute to a safer upgrade.
SRS Summary
OneFS allows remote support through Secure Remote Services (SRS), which monitors the cluster and, with permission, provides remote
access for PowerScale Technical Support personnel to gather cluster data and troubleshoot issues. SRS is a secure Customer Support
system that includes 24x7 remote monitoring and secure authentication with AES 256-bit encryption and RSA digital certificates.
Although SRS is not a licensed feature, a signed elicense file must be uploaded to the OneFS cluster before you can enable
SRS. If you are using an evaluation license on the cluster, you cannot enable SRS. To evaluate SRS on an evaluation cluster, ask
Technical Support for help with obtaining a signed elicense file.
If you configure and enable remote support, PowerScale Technical Support personnel can establish a secure SSH session with the cluster
through the SRS connection. Remote access to the cluster is only in the context of an open support case. You can allow or deny the
remote session request by PowerScale Technical Support personnel. During remote sessions, support personnel can run remote support
scripts that gather diagnostic data about cluster settings and operations. Diagnostic data is sent over the secure SRS connection to Dell
EMC SRS.
The remote support user credentials are required for access to the cluster. The remote support user is a separate user, not a general
cluster user or a System Admin user. OneFS does not store the required remote support user credentials.
SRS Telemetry
SRS Telemetry is enabled when Secure Remote Services is enabled.
SRS Telemetry replaces phone home functionality:
• isi_phone_home was deprecated in OneFS 8.2.1.
• isi_phone_home was disabled in OneFS 8.2.2.
SRS Telemetry gathers configuration data (gconfig), system controls (sysctls), directory paths, and statistics at the cluster level. SRS
Telemetry also gathers API endpoints and statistics at the node level. This data is sent through Secure Remote Services for use by
CloudIQ.
For more information about SRS Telemetry, contact your OneFS support representative.
4. Upload the signed license file to your cluster, as described earlier in this guide.
Configuring SRS
You can configure and enable support for Secure Remote Services (SRS) on a PowerScale cluster in the OneFS web UI.
The OneFS software must have a signed license before you can enable and configure SRS.
Clusters running OneFS 8.1.x or later must have SRS gateway server 3.x installed and configured.
Clusters running OneFS 9.0.0.0 with PowerScale F200 or PowerScale F600 nodes must have SRS v3 installed.
The IP address pools that handle gateway connections must exist in the system and must belong to a subnet under groupnet0, which is
the default system groupnet.
1. Click Cluster Management > General Settings > Remote Support.
2. If the OneFS license is unsigned, click Update license now and follow the instructions in Licensing.
3. SRS must be configured before it can be enabled. To configure SRS, click Configure SRS.
4. In the Primary SRS gateway address field, type an IPv4 address or the name of the primary gateway server.
5. In the Secondary SRS gateway address field, type an IPv4 address or the name of the secondary gateway server.
6. In the Manage subnets and pools section, select the network pools that you want to manage.
Enabling SRS
SRS must be configured before it can be enabled.
1. Click Cluster Management > General Settings > Remote Support.
2. Click Enable SRS to connect to the gateway.
The login dialog box opens.
3. Type the User name and Password, and click Enable SRS.
If the User name or Password is incorrect, or if the user is not registered with Dell EMC, an error message is generated. Look for the
u'message section in the error text.
isi diagnostics gather settings modify --ftp-upload-host: Sets the FTP host to upload to.
isi diagnostics gather settings modify --ftp-upload-pass: Sets the FTP user's password. Also see --set-ftp-upload-pass.
isi diagnostics gather settings modify --ftp-upload-path: Sets the path on the FTP server for the upload.
isi diagnostics gather settings modify --ftp-upload-proxy: Sets the proxy server for FTP.
isi diagnostics gather settings modify --ftp-upload-proxy-port: Sets the port for the proxy server for FTP.
isi diagnostics gather settings modify --ftp-upload-user: Sets the FTP user.
isi diagnostics gather settings modify --gather-mode: Sets the type of gather: incremental or full.
isi diagnostics gather settings modify --http-upload: Sets whether to use HTTP upload on a completed gather.
isi diagnostics gather settings modify --help: Displays help for this command.
isi diagnostics gather settings modify --http-upload-host: Sets the HTTP host to upload to.
isi diagnostics gather settings modify --http-upload-path: Sets the path for the upload.
isi diagnostics gather settings modify --http-upload-proxy: Sets the proxy to use for HTTP upload.
isi diagnostics gather settings modify --http-upload-proxy-port: Sets the proxy port to use for HTTP upload.
isi diagnostics netlogger settings modify --clients: Sets the client IP address or addresses to filter.
isi diagnostics netlogger settings modify --count: Sets the number of capture files to keep after they reach the duration limit. Defaults to the last 3 files.
Get application data: Collects and uploads information about OneFS application programs.
Generate dashboard file daily: Generates daily dashboard information.
Generate dashboard file sequence: Generates dashboard information in the sequence that it occurred.
Get ABR data (as built record): Collects as-built information about hardware.
Get ATA control and GMirror status: Collects system output and invokes a script when it receives an event that corresponds to a predetermined eventid.
Get cluster data: Collects and uploads information about overall cluster configuration and operations.
Get cluster events: Gets the output of existing critical events and uploads the information.
Get cluster status: Collects and uploads cluster status details.
Get contact info: Extracts contact information and uploads a text file that contains it.
Get contents (var/crash): Uploads the contents of /var/crash.
Get job status: Collects and uploads details on a job that is being monitored.
Get domain data: Collects and uploads information about the cluster's Active Directory Services (ADS) domain membership.
Get file system data: Collects and uploads information about the state and health of the OneFS /ifs/ file system.
Get network data: Collects and uploads information about cluster-wide and node-specific network configuration settings and operations.
Get NFS clients: Runs a command to check if nodes are being used as NFS clients.
Get node data: Collects and uploads node-specific configuration, status, and operational information.
Get protocol data: Collects and uploads network status information and configuration settings for the NFS, SMB, HDFS, FTP, and HTTP protocols.
Get Pcap client stats: Collects and uploads client statistics.
Get readonly status: Warns if the chassis is open and uploads a text file of the event information.
Get usage data: Collects and uploads current and historical information about node performance and resource usage.
Access zones 69
OneFS supports overlapping data between access zones for cases where your workflows require shared data. However, the added
complexity to the access zone configuration might lead to future issues with client access. For the best results from overlapping data
between access zones, it is recommended that the access zones also share the same authentication providers. Shared providers ensure
that users have consistent identity information when accessing the same data through different access zones.
If you cannot configure the same authentication providers for access zones with shared data, ensure the following:
• Select Active Directory as the authentication provider in each access zone. This causes files to store globally unique SIDs as the on-
disk identity, eliminating the chance of users from different zones gaining access to each other's data.
• Avoid selecting local, LDAP, and NIS as the authentication providers in the access zones. These authentication providers use UIDs and
GIDs, which are not guaranteed to be globally unique. This results in a high probability that users from different zones will be able to
access each other's data.
• Set the on-disk identity to native, or preferably, to SID. When user mappings exist between Active Directory and UNIX users or if the
Services for Unix option is enabled for the Active Directory provider, OneFS stores SIDs as the on-disk identity instead of UIDs.
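The risk described above, and the reason SID is the safer on-disk identity, can be sketched in a few lines of Python. The users, UIDs, and SIDs below are hypothetical examples for illustration, not OneFS internals:

```python
# Illustrative sketch (not OneFS code): why a UID-based on-disk identity can
# collide across access zones while a SID-based identity cannot.

# Two zones with independent local providers may both hand out UID 1000.
zone_a_identity = {"user": "alice", "uid": 1000}
zone_b_identity = {"user": "bob", "uid": 1000}

# On disk, a bare UID cannot distinguish the two users.
assert zone_a_identity["uid"] == zone_b_identity["uid"]

# A SID embeds the issuing domain's identifier, so users from different
# Active Directory domains never share an on-disk identity, even when
# their relative identifiers (the trailing RID) happen to match.
zone_a_sid = "S-1-5-21-1004336348-1177238915-682003330-1000"  # domain A, RID 1000
zone_b_sid = "S-1-5-21-2052111302-1214440339-682003330-1000"  # domain B, RID 1000
assert zone_a_sid != zone_b_sid
```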
Quality of service
You can set upper bounds on quality of service by assigning specific physical resources to each access zone.
Quality of service addresses physical hardware performance characteristics that can be measured, improved, and sometimes guaranteed.
Characteristics that are measured for quality of service include but are not limited to throughput rates, CPU usage, and disk capacity.
When you share physical hardware in a PowerScale cluster across multiple virtual instances, competition exists for the following services:
• CPU
• Memory
• Network bandwidth
• Disk I/O
• Disk capacity
Access zones do not provide logical quality of service guarantees to these resources, but you can partition these resources between
access zones on a single cluster. The following table describes a few ways to partition resources to improve quality of service:
Use    Notes
NICs    You can assign specific NICs on specific nodes to an IP address pool that is associated with an access zone. By assigning these NICs, you can determine the nodes and interfaces that are associated with an access zone. This enables the separation of CPU, memory, and network bandwidth.
SmartPools    SmartPools are separated into multiple tiers of high, medium, and low performance. The data written to a SmartPool is written only to the disks in the nodes of that pool. Associating an IP address pool with only the nodes of a single SmartPool enables partitioning of disk I/O resources.
Privilege    Description
ISI_PRIV_AUDIT    • Add/remove your zone from the list of audited zones
• View/modify zone-specific audit settings for your zone
ISI_PRIV_FILE_FILTER    Use all functionality that is associated with this privilege, in your own zone.
ISI_PRIV_HDFS    Use all functionality that is associated with this privilege, in your own zone.
ISI_PRIV_NFS    • View global NFS settings, but do not modify them.
• Otherwise, use all functionality that is associated with this privilege, in your own zone.
ISI_PRIV_ROLE    Use all functionality that is associated with this privilege, but only in your own zone.
ISI_PRIV_SMB    • View global SMB settings, but do not modify them.
• Otherwise, use all functionality that is associated with this privilege, in your own zone.
ISI_PRIV_SWIFT    Use all functionality that is associated with this privilege, in your own zone.
ISI_PRIV_VCENTER    Configure VMware vCenter.
ISI_PRIV_LOGIN_PAPI    Access the WebUI from a non-System access zone.
ISI_PRIV_BACKUP    Bypass file permission checks and grant all read permissions.
ISI_PRIV_RESTORE    Bypass file permission checks and grant all write permissions.
ISI_PRIV_NS_TRAVERSE    Traverse and view directory metadata inside the zone base path.
ISI_PRIV_NS_IFS_ACCESS    Access directories inside the zone base path through RAN (RESTful Access to Namespace).
NOTE: These roles do not have any default users who are automatically assigned to them.
Zone-specific authentication providers
This section describes how authentication providers work with zRBAC.
Authentication providers are global objects in a OneFS cluster. However, as part of the zRBAC feature, an authentication provider is
implicitly associated with the access zone from which it was created, and has certain behaviors that are based on that association.
• All access zones can view and use an authentication provider that is created from the System zone. However, only a request from the
System access zone can modify or delete it.
• An authentication provider that is created from (or on behalf of) a non-System access zone can be viewed, modified, or deleted
only by that access zone and the System zone.
• A local authentication provider is implicitly created whenever an access zone is created, and is associated with that access zone.
• A local authentication provider for a non-System access zone cannot be used by another access zone. If you want to share a local
authentication provider among access zones, it must be the System zone's local provider.
• Authentication provider names remain global, so they must be unique across the cluster. For example, you cannot create two
LDAP providers named ldap5 in different access zones.
• The Kerberos provider can only be created from the System access zone.
• Creating two distinct Active Directory (AD) providers to the same AD may require the use of the AD multi-instancing feature. To
assign a unique name to the AD provider, use --instance.
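The global-uniqueness rule for provider names can be illustrated with a small Python sketch. The registry class and zone names are hypothetical examples, not OneFS code:

```python
# Hypothetical sketch of the rule that authentication provider names are
# global: two access zones cannot both create an LDAP provider named "ldap5".

class DuplicateProviderError(ValueError):
    pass

class ProviderRegistry:
    def __init__(self):
        self._names = {}  # provider name -> owning access zone

    def create(self, name, zone):
        # Names are checked cluster-wide, not per zone.
        if name in self._names:
            raise DuplicateProviderError(
                f"provider {name!r} already exists (created by zone "
                f"{self._names[name]!r})")
        self._names[name] = zone

registry = ProviderRegistry()
registry.create("ldap5", "zoneA")      # first creation succeeds
try:
    registry.create("ldap5", "zoneB")  # same name in another zone fails
except DuplicateProviderError as exc:
    print(exc)
```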
Related concepts
Data Security overview on page 69
Managing access zones on page 73
Related tasks
Associate an IP address pool with an access zone on page 74
Assign an overlapping base directory
You can create overlapping base directories between access zones for cases where your workflows require shared data.
1. Click Access > Access Zones.
2. Click View/Edit next to the access zone that you want to modify.
The system displays the View Access Zone Details window.
3. Click Edit.
The system displays the Edit Access Zone Details window.
4. In the Zone Base Directory field, type or browse to the base directory path for the access zone.
5. Click Save Changes.
The system prompts you to confirm that the directory you set overlaps with the base directory of another access zone.
6. Click Update at the system prompt to confirm that you want to allow data access to users in both access zones.
7. Click Close.
Before users can connect to an access zone, you must associate it with an IP address pool.
Related concepts
Data Security overview on page 69
Managing access zones on page 73
The system displays the Edit Pool Details window.
4. From the Access Zone list, select the access zone you want to associate with the pool.
5. Click Save Changes.
Related concepts
Data Security overview on page 69
Managing access zones on page 73
Modify a role in an access zone
You can modify zone-level roles with the help of the Current access zone list.
1. Click Access > Membership & Roles > Roles.
2. From the Current access zone list, select the access zone that contains the role that you want to modify.
The roles that are associated with that access zone are displayed.
3. In the Roles area, select a role and click View / Edit.
The View role details dialog box is displayed.
NOTE: From the More list, select Copy or Delete to either copy a role or delete a role.
4. Click Edit Role and modify the settings as needed in the Edit role details dialog box.
5. To return to the View Role Details dialog box, click Save changes.
6. Click Close.
5
Authentication
This section contains the following topics:
Topics:
• Authentication overview
• Authentication provider features
• Security Identifier (SID) history overview
• Supported authentication providers
• Active Directory
• LDAP
• NIS
• Kerberos authentication
• File provider
• Local provider
• Multi-factor Authentication (MFA)
• Multi-instance active directory
• LDAP public keys
• Managing Active Directory providers
• Managing LDAP providers
• Managing NIS providers
• Managing MIT Kerberos authentication
• Managing file providers
• Managing local users and groups
Authentication overview
You can manage authentication settings for your cluster, including authentication providers, Active Directory domains, LDAP, NIS, and
Kerberos authentication, file and local providers, multi-factor authentication, and more.
Feature    Description
Authentication    All authentication providers support cleartext authentication. You can configure some providers to also support NTLM or Kerberos authentication.
Users and groups    OneFS provides the ability to manage users and groups directly on the cluster.
Netgroups    Specific to NFS, netgroups restrict access to NFS exports.
UNIX-centric user and group properties    Login shell, home directory, UID, and GID. Missing information is supplemented by configuration templates or additional authentication providers.
Windows-centric user and group properties    NetBIOS domain and SID. Missing information is supplemented by configuration templates.
Authentication 77
Related concepts
Authentication overview on page 77
Active Directory
Active Directory is a Microsoft implementation of Lightweight Directory Access Protocol (LDAP), Kerberos, and DNS technologies that
can store information about network resources. Active Directory can serve many functions, but the primary reason for joining the cluster
to an Active Directory domain is to perform user and group authentication.
You can join the cluster to an Active Directory (AD) domain by specifying the fully qualified domain name, which can be resolved to an
IPv4 or an IPv6 address, and a user name with join permission. When the cluster joins an AD domain, a single AD machine account is
created. The machine account establishes a trust relationship with the domain and enables the cluster to authenticate and authorize users
in the Active Directory forest. By default, the machine account is named the same as the cluster. If the cluster name is more than 15
characters long, the name is hashed and displayed after joining the domain.
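The 15-character limit comes from NetBIOS machine account naming. The sketch below illustrates the idea with a hypothetical truncate-and-hash scheme; the actual hashing that OneFS applies to over-long cluster names is not documented here, so treat every detail of this function as an assumption:

```python
# Illustrative only: NetBIOS machine account names are limited to 15
# characters. OneFS hashes over-long cluster names; the exact scheme is
# undocumented here, so this truncate-plus-hash variant is hypothetical.
import hashlib

def machine_account_name(cluster_name: str) -> str:
    upper = cluster_name.upper()
    if len(upper) <= 15:
        return upper  # short names pass through unchanged
    # Hypothetical: keep a 9-char prefix and append 6 hex digest chars.
    digest = hashlib.md5(upper.encode()).hexdigest()[:6].upper()
    return upper[:9] + digest  # 9 + 6 = 15 characters

print(machine_account_name("mycluster"))                       # MYCLUSTER
print(len(machine_account_name("a-very-long-cluster-name")))   # 15
```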
OneFS supports NTLM and Microsoft Kerberos for authentication of Active Directory domain users. NTLM client credentials are obtained
from the login process and then presented in an encrypted challenge/response format to authenticate. Microsoft Kerberos client
credentials are obtained from a key distribution center (KDC) and then presented when establishing server connections. For greater
security and performance, we recommend that you implement Kerberos, according to Microsoft guidelines, as the primary authentication
protocol for Active Directory.
Each Active Directory provider must be associated with a groupnet. The groupnet is a top-level networking container that manages
hostname resolution against DNS nameservers and contains subnets and IP address pools. The groupnet specifies which networking
properties the Active Directory provider will use when communicating with external servers. The groupnet associated with the Active
Directory provider cannot be changed. Instead, you must delete the Active Directory provider and create it again with the new groupnet
association.
You can add an Active Directory provider to an access zone as an authentication method for clients connecting through the access zone.
OneFS supports multiple instances of Active Directory on a PowerScale cluster; however, you can assign only one Active Directory
provider per access zone. The access zone and the Active Directory provider must reference the same groupnet. Configure multiple
Active Directory instances only to grant access to multiple sets of mutually untrusted domains. Otherwise, configure a single Active
Directory instance if all domains have a trust relationship. You can discontinue authentication through an Active Directory provider by
removing the provider from associated access zones.
Related concepts
Authentication overview on page 77
Managing Active Directory providers on page 82
LDAP
The Lightweight Directory Access Protocol (LDAP) is a networking protocol that enables you to define, query, and modify directory
services and resources.
OneFS can authenticate users and groups against an LDAP repository in order to grant them access to the cluster. OneFS supports
Kerberos authentication for an LDAP provider.
The LDAP service supports the following features:
• Users, groups, and netgroups.
• Configurable LDAP schemas. For example, the ldapsam schema allows NTLM authentication over the SMB protocol for users with
Windows-like attributes.
• Simple bind authentication, with and without TLS.
• Redundancy and load balancing across servers with identical directory data.
• Multiple LDAP provider instances for accessing servers with different user data.
• Encrypted passwords.
• IPv4 and IPv6 server URIs.
Each LDAP provider must be associated with a groupnet. The groupnet is a top-level networking container that manages hostname
resolution against DNS nameservers and contains subnets and IP address pools. The groupnet specifies which networking properties the
LDAP provider will use when communicating with external servers. The groupnet associated with the LDAP provider cannot be changed.
Instead, you must delete the LDAP provider and create it again with the new groupnet association.
You can add an LDAP provider to an access zone as an authentication method for clients connecting through the access zone. An access
zone may include at most one LDAP provider. The access zone and the LDAP provider must reference the same groupnet. You can
discontinue authentication through an LDAP provider by removing the provider from associated access zones.
Related concepts
Authentication overview on page 77
Managing LDAP providers on page 84
NIS
The Network Information Service (NIS) provides authentication and identity uniformity across local area networks. OneFS includes an NIS
authentication provider that enables you to integrate the cluster with your NIS infrastructure.
NIS, designed by Sun Microsystems, can authenticate users and groups when they access the cluster. The NIS provider exposes the
passwd, group, and netgroup maps from an NIS server. Hostname lookups are also supported. You can specify multiple servers for
redundancy and load balancing.
Each NIS provider must be associated with a groupnet. The groupnet is a top-level networking container that manages hostname
resolution against DNS nameservers and contains subnets and IP address pools. The groupnet specifies which networking properties the
NIS provider will use when communicating with external servers. The groupnet associated with the NIS provider cannot be changed.
Instead, you must delete the NIS provider and create it again with the new groupnet association.
You can add an NIS provider to an access zone as an authentication method for clients connecting through the access zone. An access
zone may include at most one NIS provider. The access zone and the NIS provider must reference the same groupnet. You can
discontinue authentication through an NIS provider by removing the provider from associated access zones.
NOTE: NIS is different from NIS+, which OneFS does not support.
Related concepts
Authentication overview on page 77
Managing NIS providers on page 87
Kerberos authentication
For general information about Kerberos authentication, see the OneFS 9.0.0 Web Administration Guide and the OneFS 9.0.0 CLI
Administration Guide.
Related concepts
Authentication overview on page 77
File provider
A file provider enables you to supply an authoritative third-party source of user and group information to a PowerScale cluster. A
third-party source is useful in UNIX and Linux environments that synchronize /etc/passwd, /etc/group, and /etc/netgroup files
across multiple servers.
Standard BSD /etc/spwd.db and /etc/group database files serve as the file provider backing store on a cluster. You generate the
spwd.db file by running the pwd_mkdb command in the OneFS command-line interface (CLI). You can script updates to the database
files.
On a PowerScale cluster, a file provider hashes passwords with libcrypt. For the best security, it is recommended that you use the
Modular Crypt Format in the source /etc/passwd file to determine the hashing algorithm. OneFS supports the following algorithms for
the Modular Crypt Format:
• MD5
• NT-Hash
• SHA-256
• SHA-512
For information about other available password formats, run the man 3 crypt command in the CLI to view the crypt man pages.
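As a rough illustration, the algorithm of a Modular Crypt Format password field can be recognized by its prefix. The prefix-to-algorithm mapping below follows common crypt(3) conventions for the algorithms listed above; it is a sketch, not OneFS source:

```python
# Sketch: identify the hashing algorithm of a Modular Crypt Format
# password field by its "$id$" prefix. The mapping follows conventional
# crypt(3) identifiers; confirm against `man 3 crypt` on the cluster.
MCF_PREFIXES = {
    "$1$": "MD5",
    "$3$": "NT-Hash",
    "$5$": "SHA-256",
    "$6$": "SHA-512",
}

def mcf_algorithm(hash_field: str) -> str:
    for prefix, name in MCF_PREFIXES.items():
        if hash_field.startswith(prefix):
            return name
    return "unknown (see man 3 crypt)"

print(mcf_algorithm("$6$rounds=5000$salt$..."))   # SHA-512
print(mcf_algorithm("$5$salt$..."))               # SHA-256
```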
NOTE: The built-in System file provider includes services to list, manage, and authenticate against system accounts
such as root, admin, and nobody. It is recommended that you do not modify the System file provider.
Related concepts
Authentication overview on page 77
Local provider
The local provider provides authentication and lookup facilities for user accounts added by an administrator.
Local authentication is useful when Active Directory, LDAP, or NIS directory services are not configured or when a specific user or
application needs access to the cluster. Local groups can include built-in groups and Active Directory groups as members.
In addition to configuring network-based authentication sources, you can manage local users and groups by configuring a local password
policy for each node in the cluster. OneFS settings specify password complexity, password age and re-use, and password-attempt lockout
policies.
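As an illustration of what such a policy can enforce, here is a minimal password-complexity check in Python. The specific rules (minimum length, at least three of four character classes) are hypothetical examples, not the OneFS defaults:

```python
# Hypothetical sketch of a local password-complexity check of the kind a
# node-level password policy describes. Rules here are illustrative only.
import string

def meets_policy(password: str, min_length: int = 8) -> bool:
    # Count how many character classes the password uses.
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return len(password) >= min_length and sum(classes) >= 3

print(meets_policy("Sh0rt!"))        # False: under 8 characters
print(meets_policy("Str0ng-pass"))   # True: long enough, 4 classes
```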
Related concepts
Authentication overview on page 77
Nonetheless, you need a home directory on the cluster or you could get an error when you log in.
• When you join Active Directory from OneFS, cluster time is updated from the Active Directory server, as long as an
NTP server has not been configured for the cluster.
• If you migrate users to a new or different Active Directory domain, you must re-set the ACL domain information after
you configure the new provider. You can use third-party tools such as Microsoft SubInACL.
1. Click Access > Authentication Providers > Active Directory.
2. Click Join a domain.
3. In the Domain Name field, specify the fully qualified Active Directory domain name, which can be resolved to an IPv4 or an IPv6
address.
The domain name will also be used as the provider name.
4. In the User field, type the username of an account that is authorized to join the Active Directory domain.
5. In the Password field, type the password of the user account.
6. Optional: In the Organizational Unit field, type the name of the organizational unit (OU) to connect to on the Active Directory server.
Specify the OU in the format OuName or OuName1/SubName2.
7. Optional: In the Machine Account field, type the name of the machine account.
NOTE: If you specified an OU to connect to, the domain join will fail if the machine account does not reside in the
OU.
8. From the Groupnet list, select the groupnet the authentication provider will reference.
9. Optional: To enable Active Directory authentication for NFS, select Enable Secure NFS.
If you enable this setting, OneFS registers NFS service principal names (SPNs) during the domain join.
10. Optional: In the Advanced Active Directory Settings area, configure the advanced settings that you want to use. It is recommended
that you not change any advanced settings without understanding their consequences.
11. Click Join.
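When Secure NFS is enabled in step 9, the registered service principal names generally take the form nfs/&lt;host&gt;. The sketch below shows that general form only; the exact set of SPNs OneFS registers, and the example host names, are assumptions for illustration:

```python
# Sketch: the general shape of NFS service principal names (SPNs) that a
# domain join can register. Host and domain names below are hypothetical;
# the precise SPN set OneFS registers is not specified here.
def nfs_spns(machine_account: str, fqdn: str) -> list:
    # SPNs are conventionally registered against both the short NetBIOS
    # name and the fully qualified domain name.
    return [f"nfs/{machine_account.upper()}", f"nfs/{fqdn.lower()}"]

print(nfs_spns("MYCLUSTER", "mycluster.example.com"))
# ['nfs/MYCLUSTER', 'nfs/mycluster.example.com']
```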
Related concepts
Managing Active Directory providers on page 82
Related References
Active Directory provider settings on page 83
4. For each setting that you want to modify, click Edit, make the change, and then click Save.
5. Optional: Click Close.
Related concepts
Managing Active Directory providers on page 82
Setting    Description
Services For UNIX    Specifies whether to support RFC 2307 attributes for domain controllers. RFC 2307 is required for Windows UNIX Integration and Services For UNIX technologies.
Map to primary domain    Enables the lookup of unqualified user names in the primary domain. If this setting is not enabled, the primary domain must be specified for each authentication operation.
Ignore trusted domains    Ignores all trusted domains.
Trusted Domains    Specifies trusted domains to include if the Ignore Trusted Domains setting is enabled.
Domains to Ignore    Specifies trusted domains to ignore even if the Ignore Trusted Domains setting is disabled.
Send notification when domain is unreachable    Sends an alert as specified in the global notification rules.
Use enhanced privacy and encryption    Encrypts communication to and from the domain controller.
Home Directory Naming    Specifies the path to use as a template for naming home directories. The path must begin with /ifs and can contain variables, such as %U, that are expanded to generate the home directory path for the user.
Create home directories on first login    Creates a home directory the first time that a user logs in, if a home directory does not already exist for the user.
UNIX Shell    Specifies the path to the login shell to use if the Active Directory server does not provide login-shell information. This setting applies only to users who access the file system through SSH.
Query all other providers for UID    If no UID is available in Active Directory, looks up Active Directory users in all other providers for allocating a UID.
Match users with lowercase    If no UID is available in Active Directory, normalizes Active Directory user names to lowercase before lookup.
Auto-assign UIDs    If no UID is available in Active Directory, enables UID allocation for unmapped Active Directory users.
Query all other providers for GID    If no GID is available in Active Directory, looks up Active Directory groups in all other providers before allocating a GID.
Match groups with lowercase    If no GID is available in Active Directory, normalizes Active Directory group names to lowercase before lookup.
Auto-assign GIDs    If no GID is available in Active Directory, enables GID allocation for unmapped Active Directory groups.
Make UID/GID assignments for users and groups in these specific domains    Restricts user and group lookups to the specified domains.
• If you do not specify a port, the default port is used. The default port for non-secure LDAP (ldap://) is 389; for
secure LDAP (ldaps://), it is 636. If you specify non-secure LDAP, the bind password is transmitted to the server
in cleartext.
• If you specify an IPv6 address, the address must be enclosed in square brackets. For example,
ldap://[2001:DB8:170:7cff::c001] is the correct IPv6 format for this field.
5. Select the Connect to a random server on each request checkbox to connect to an LDAP server at random. If unselected, OneFS
connects to an LDAP server in the order listed in the Server URIs field.
6. In the Base distinguished name (DN) field, type the distinguished name (DN) of the entry at which to start LDAP searches.
Base DNs can include cn (Common Name), l (Locality), dc (Domain Component), ou (Organizational Unit), or other components.
For example, dc=emc,dc=com is a base DN for emc.com.
7. From the Groupnet list, select the groupnet that the authentication provider will reference.
8. In the Bind DN field, type the distinguished name of the entry at which to bind to the LDAP server.
9. In the Bind DN password field, specify the password to use when binding to the LDAP server.
Use of this password does not require a secure connection; if the connection is not using Transport Layer Security (TLS), the
password is sent in cleartext.
10. Optional: Update the settings in the following sections of the Add an LDAP provider form to meet the needs of your environment:
Option Description
Default Query Settings Modify the default settings for user, group, and netgroup queries.
User Query Settings Modify the settings for user queries and home directory provisioning.
Group Query Settings Modify the settings for group queries.
Netgroup Query Settings Modify the settings for netgroup queries.
Advanced LDAP Settings Modify the default LDAP attributes that contain user information or to modify LDAP security settings.
11. Click Add LDAP Provider.
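Two details from the steps above, the default LDAP ports and the shape of a base DN, can be illustrated with a short Python sketch. The helper functions and example hosts are hypothetical, not part of OneFS:

```python
# Sketch: default LDAP ports (389 for ldap://, 636 for ldaps://, IPv6
# literals in brackets) and deriving a base DN from a DNS domain name.
from urllib.parse import urlsplit

def ldap_port(uri: str) -> int:
    # An explicit port wins; otherwise fall back to the scheme default.
    parts = urlsplit(uri)
    if parts.port is not None:
        return parts.port
    return 636 if parts.scheme == "ldaps" else 389

def base_dn(domain: str) -> str:
    # Each DNS label becomes one dc= component: emc.com -> dc=emc,dc=com.
    return ",".join(f"dc={label}" for label in domain.split("."))

print(ldap_port("ldap://ldap.example.com"))            # 389
print(ldap_port("ldaps://[2001:DB8:170:7cff::c001]"))  # 636
print(base_dn("emc.com"))                              # dc=emc,dc=com
```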
Related concepts
Managing LDAP providers on page 84
Related References
LDAP query settings on page 85
LDAP advanced settings on page 86
Related concepts
Managing LDAP providers on page 84
Base distinguished name    Specifies the base distinguished name (base DN) of the entry at which to start LDAP searches for user, group, or netgroup objects. Base DNs can include cn (Common Name), l (Locality), dc (Domain Component), ou (Organizational Unit), or other components. For example, dc=emc,dc=com is a base DN for emc.com.
Search scope    Specifies the depth from the base DN at which to perform LDAP searches. The following values are valid:
• Default: Applies the search scope that is defined in the default query settings. This option is not available for the default query search scope.
• Base: Searches only the entry at the base DN.
• One-level: Searches all entries exactly one level below the base DN.
• Subtree: Searches the base DN and all entries below it.
• Children: Searches all entries below the base DN, excluding the base DN itself.
Search timeout    Specifies the number of seconds after which to stop retrying and fail a search. The default value is 100. This setting is available only in the default query settings.
Query filter    Specifies the LDAP filter for user, group, or netgroup objects. This setting is not available in the default query settings.
Authenticate users from this LDAP provider    Specifies whether to allow the provider to respond to authentication requests. This setting is available only in the user query settings.
Home directory naming template    Specifies the path to use as a template for naming home directories. The path must begin with /ifs and can contain variables, such as %U, that are expanded to generate the home directory path for the user. This setting is available only in the user query settings.
Automatically create user home directories on first login    Specifies whether to create a home directory the first time a user logs in, if a home directory does not exist for the user. This setting is available only in the user query settings.
UNIX shell    Specifies the path to the user's login shell, for users who access the file system through SSH. This setting is available only in the user query settings.
Name attribute    Specifies the LDAP attribute that contains UIDs, which are used as login names. The default value is uid.
Common name attribute    Specifies the LDAP attribute that contains common names (CNs). The default value is cn.
Email attribute    Specifies the LDAP attribute that contains email addresses. The default value is mail.
GECOS field attribute    Specifies the LDAP attribute that contains GECOS fields. The default value is gecos.
UID attribute    Specifies the LDAP attribute that contains UID numbers. The default value is uidNumber.
GID attribute    Specifies the LDAP attribute that contains GIDs. The default value is gidNumber.
Home directory attribute    Specifies the LDAP attribute that contains home directories. The default value is homeDirectory.
UNIX shell attribute    Specifies the LDAP attribute that contains UNIX login shells. The default value is loginShell.
Member of attribute    Sets the attribute to be used when searching LDAP for reverse memberships. This LDAP value should be an attribute of the user type posixAccount that describes the groups in which the POSIX user is a member. This setting has no default value.
Netgroup members attribute    Specifies the LDAP attribute that contains netgroup members. The default value is memberNisNetgroup.
Netgroup triple attribute    Specifies the LDAP attribute that contains netgroup triples. The default value is nisNetgroupTriple.
Group members attribute    Specifies the LDAP attribute that contains group members. The default value is memberUid.
Unique group members attribute    Specifies the LDAP attribute that contains unique group members. This attribute is used to determine which groups a user belongs to if the LDAP server is queried by the user's DN instead of the user's name. This setting has no default value.
Alternate security identities attribute    Specifies the name to be used when searching for alternate security identities. This name is used when OneFS tries to resolve a Kerberos principal to a user. This setting has no default value.
UNIX password attribute    Specifies the LDAP attribute that contains UNIX passwords. This setting has no default value.
Windows password attribute    Specifies the LDAP attribute that contains Windows passwords. A commonly used value is ntpasswdhash.
Certificate authority file    Specifies the full path to the root certificates file.
Require secure connection for passwords    Specifies whether to require a Transport Layer Security (TLS) connection.
Ignore TLS errors    Continues over a secure connection even if identity checks fail.
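The search-scope values described above can be demonstrated on a toy directory tree. The entries and helper functions below are illustrative only, not an LDAP client:

```python
# Sketch: the four LDAP search scopes applied to a toy directory tree.
# DNs are written leaf-first, so "ou=people,dc=emc,dc=com" sits one level
# below the base DN "dc=emc,dc=com". All entries are hypothetical.
entries = [
    "dc=emc,dc=com",
    "ou=people,dc=emc,dc=com",
    "ou=groups,dc=emc,dc=com",
    "uid=alice,ou=people,dc=emc,dc=com",
]

def depth_below(dn, base):
    if dn == base:
        return 0
    if dn.endswith("," + base):
        # Count RDN separators in the part left of the base DN.
        return dn[: -len(base) - 1].count(",") + 1
    return None  # not under this base DN

def search(base, scope):
    keep = {
        "base":      lambda d: d == 0,
        "one-level": lambda d: d == 1,
        "subtree":   lambda d: d >= 0,
        "children":  lambda d: d >= 1,
    }[scope]
    return [e for e in entries
            if (d := depth_below(e, base)) is not None and keep(d)]

base = "dc=emc,dc=com"
print(search(base, "base"))           # only the base entry itself
print(search(base, "one-level"))      # the two ou= entries
print(len(search(base, "subtree")))   # 4
print(len(search(base, "children")))  # 3
```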
Related concepts
Managing NIS providers on page 87
4. Click Close.
Related concepts
Managing NIS providers on page 87
Related concepts
Managing MIT Kerberos realms on page 88
4. Select or clear the Set as the default realm check box to modify the default realm setting.
5. In the Key Distribution Centers (KDCs) field, specify the IPv4 address, IPv6 address, or the hostname of each additional KDC
server.
6. In the Admin Server field, specify the IPv4 address, IPv6 address, or hostname of the administration server, which will fulfill the
role of master KDC.
7. In the Default Domain field, specify an alternate domain name for translating the service principal names (SPNs).
8. Click Save Changes to return to the View a Kerberos Realm page.
9. Click Close.
Related concepts
Managing MIT Kerberos realms on page 88
Related concepts
Managing MIT Kerberos providers on page 89
Create an MIT Kerberos realm, domain, and a provider
You can create an MIT Kerberos realm, domain, and a provider through a single workflow instead of configuring each of these objects
individually.
1. Click Access > Authentication Providers > Kerberos Provider.
2. Click Get Started.
The system displays the Create a Kerberos Realm and Provider window.
3. From the Create Realm section, type a realm name in the Realm Name field.
It is recommended that you format the realm name in uppercase characters, such as CLUSTER-NAME.COMPANY.COM.
4. Check the Set as the default realm box to set the realm as the default.
5. In the Key Distribution Centers (KDCs) field, add one or more KDCs by specifying the IPv4 address, IPv6 address, or the hostname
of each server.
6. In the Admin Server field, specify the IPv4 address, IPv6 address, or hostname of the administration server, which will fulfill the role of master KDC. If you omit this step, the first KDC that you added is used as the default admin server.
7. In the Default Domain field, specify the domain name to use for translating the service principal names (SPNs).
8. Optional: From the Create Domain(s) section, specify one or more domain names to associate with the realm in the Domain(s) field.
9. From the Authenticate to Realm section, type the name and password of a user that has permission to create SPNs in the Kerberos
realm in the User and Password fields.
10. From the Create Provider section, select the groupnet the authentication provider will reference from the Groupnet list.
11. From the Service Principal Name (SPN) Management area, select one of the following options to be used for managing SPNs:
• Use recommended SPNs
• Manually associate SPNs
If you select this option, type at least one SPN in the format service/principal@realm to manually associate it with the
realm.
12. Click Create Provider and Join Realm.
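The service/principal@realm form used for manually associated SPNs can be illustrated with a short sketch (split_spn is a hypothetical helper and the sample SPN is illustrative; neither comes from OneFS):

```python
import re

# Matches the manual-association format service/principal@realm.
SPN = re.compile(r"^(?P<service>[^/@]+)/(?P<principal>[^/@]+)@(?P<realm>[^/@]+)$")

def split_spn(spn):
    """Split an SPN of the form service/principal@realm into its parts."""
    m = SPN.match(spn)
    if not m:
        raise ValueError("SPN must use the form service/principal@realm")
    return m.group("service"), m.group("principal"), m.group("realm")

# Example: a hypothetical NFS service principal in an uppercase realm.
print(split_spn("nfs/cluster-name.company.com@CLUSTER-NAME.COMPANY.COM"))
```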
Related concepts
Creating an MIT Kerberos provider on page 89
Modify an MIT Kerberos provider
You can modify the realm authentication information and the service principal name (SPN) information for an MIT Kerberos provider.
You must be a member of the SecurityAdmin role to view and access the View / Edit button to modify an MIT Kerberos provider.
1. Click Access > Authentication Providers > Kerberos Provider.
2. In the Kerberos Provider table, select a domain and click View / Edit.
3. In the View a Kerberos Provider page, click Edit Provider.
4. In the Realm Authentication Information section, specify the credentials for a user with permissions to create SPNs in the given
Kerberos realm.
5. In the Provider Information section, select one of the following options for managing the SPNs:
• Use the recommended SPNs.
• Type an SPN in the format service/principal@realm to manually associate the SPN with the selected realm. You can add
more than one SPN for association, if necessary.
6. Click Save Changes to return to the View a Kerberos Provider page.
7. Click Close.
Related concepts
Managing MIT Kerberos providers on page 89
Selecting this check box causes Kerberos ticket requests to include ENC_TIMESTAMP as preauthentication data even if the authentication server did not request it. This is useful when working with Active Directory servers.
4. Select a check box to specify whether to use the DNS server records to locate the KDCs and other servers for a realm, if that
information is not listed for the realm.
5. Select a check box to specify whether to use the DNS text records to determine the Kerberos realm of a host.
6. Click Save Changes.
Related concepts
Managing MIT Kerberos providers on page 89
Related concepts
Managing MIT Kerberos domains on page 92
Delete an MIT Kerberos domain
You can delete one or more MIT Kerberos domain mappings.
You must be a member of the SecurityAdmin role to perform the tasks described in this procedure.
1. Click Access > Authentication Providers > Kerberos Provider.
2. In the Kerberos Domains table, select one or more domain mappings and then perform one of the following actions:
• To delete a single domain mapping, select the mapping and click More > Delete from the Actions column.
• To delete multiple domain mappings, select the mappings and then select Delete Selection from the Select a bulk action list.
Related concepts
Managing MIT Kerberos domains on page 92
Option Description
Authenticate users from this provider Specifies whether to allow the provider to respond to authentication requests.
Create home directories on first login Specifies whether to create a home directory the first time a user logs in, if a home directory does not exist for the user.
Path to home directory Specifies the path to use as a template for naming home directories. The path must begin with /ifs and can contain expansion variables, such as %U, which expand to generate the home directory path for the user. For more information, see the Home directories section of the OneFS Web Administration Guide or the OneFS CLI Administration Guide.
UNIX Shell Specifies the path to the user's login shell, for users who access the file system through SSH.
8. Click Add File Provider.
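The %U expansion variable mentioned for the Path to home directory setting can be sketched as follows (expand_home_template is a hypothetical illustration, not a OneFS tool; OneFS supports additional expansion variables not shown here):

```python
def expand_home_template(template, username):
    """Expand the %U (username) variable in a home-directory path template."""
    if not template.startswith("/ifs"):
        # The guide requires home-directory templates to begin with /ifs.
        raise ValueError("home directory template must begin with /ifs")
    return template.replace("%U", username)

print(expand_home_template("/ifs/home/%U", "jsmith"))  # /ifs/home/jsmith
```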
Related concepts
Managing file providers on page 93
Related References
Password file format on page 94
Group file format on page 95
Netgroup file format on page 95
The following command generates an spwd.db file in the /etc directory from a password file that is located at /ifs/test.passwd:
pwd_mkdb /ifs/test.passwd
The following command generates an spwd.db file in the /ifs directory from the same password file:
pwd_mkdb -d /ifs /ifs/test.passwd
Related concepts
Managing file providers on page 93
Related References
Password file format on page 94
admin:*:10:10::0:0:Web UI Administrator:/ifs/home/admin:/bin/zsh
The fields are defined below in the order in which they appear in the file.
NOTE: UNIX systems often define the passwd format as a subset of these fields, omitting the Class, Change, and Expiry
fields. To convert a file from passwd to master.passwd format, add :0:0: between the GID field and the Gecos field.
Username The user name. This field is case-sensitive. Although OneFS does not limit the length of the user name, many applications truncate the name to 16 characters.
Password The user’s encrypted password. If authentication is not required for the user, you can substitute an asterisk (*)
for a password. The asterisk character is guaranteed to not match any password.
UID The UNIX user identifier. This value must be a number in the range 0-4294967294 that is not reserved or
already assigned to a user. Compatibility issues occur if this value conflicts with an existing account's UID.
GID The group identifier of the user’s primary group. All users are a member of at least one group, which is used for
access checks and can also be used when creating files.
Class This field is not supported by OneFS and should be left empty.
Change OneFS does not support changing the passwords of users in the file provider. This field is ignored.
Expiry OneFS does not support the expiration of user accounts in the file provider. This field is ignored.
Gecos This field can store a variety of information but is usually used to store the user’s full name.
Home The absolute path to the user’s home directory.
Shell The absolute path to the user’s shell. If this field is set to /sbin/nologin, the user is denied command-line
access.
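As a sketch of the field layout above, a master.passwd line can be split into its ten colon-delimited fields (parse_master_passwd is a hypothetical helper; the entry is the example from this section):

```python
def parse_master_passwd(line):
    """Split a master.passwd entry into its ten colon-delimited fields."""
    names = ["username", "password", "uid", "gid", "class",
             "change", "expiry", "gecos", "home", "shell"]
    fields = line.split(":")
    if len(fields) != len(names):
        raise ValueError("expected %d fields, got %d" % (len(names), len(fields)))
    return dict(zip(names, fields))

entry = parse_master_passwd(
    "admin:*:10:10::0:0:Web UI Administrator:/ifs/home/admin:/bin/zsh")
print(entry["uid"], entry["shell"])  # 10 /bin/zsh
```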
Related concepts
Managing file providers on page 93
admin:*:10:root,admin
The fields are defined below in the order in which they appear in the file.
Group name The name of the group. This field is case-sensitive. Although OneFS does not limit the length of the group name,
many applications truncate the name to 16 characters.
Password This field is not supported by OneFS and should contain an asterisk (*).
GID The UNIX group identifier. Valid values are any number in the range 0-4294967294 that is not reserved or
already assigned to a group. Compatibility issues occur if this value conflicts with an existing group's GID.
Group members A comma-delimited list of user names.
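The four-field group format above can be sketched the same way (parse_group is a hypothetical helper; the entry is the example from this section):

```python
def parse_group(line):
    """Split a group entry into name, password, GID, and member list."""
    name, password, gid, members = line.split(":")
    return {"name": name, "password": password, "gid": gid,
            "members": members.split(",") if members else []}

g = parse_group("admin:*:10:root,admin")
print(g["gid"], g["members"])  # 10 ['root', 'admin']
```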
Related concepts
Managing file providers on page 93
Where <host> is a placeholder for a machine name, <user> is a placeholder for a user name, and <domain> is a placeholder for a domain
name. Any combination is valid except an empty triple: (,,).
The following sample file contains two netgroups. The rootgrp netgroup contains four hosts: two hosts are defined in member triples and
two hosts are contained in the nested othergrp netgroup, which is defined on the second line.
NOTE: A new line signifies a new netgroup. You can continue a long netgroup entry to the next line by typing a backslash
character (\) in the right-most position of the first line.
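A minimal sketch of the triple format, including the rule that the empty triple (,,) is invalid (parse_triples is a hypothetical helper and the sample entry is illustrative):

```python
import re

# A netgroup triple has the form (host,user,domain); any part may be empty.
TRIPLE = re.compile(r"\(([^,()]*),([^,()]*),([^,()]*)\)")

def parse_triples(entry):
    """Extract (host, user, domain) triples from a netgroup entry line."""
    triples = [m.groups() for m in TRIPLE.finditer(entry)]
    for t in triples:
        if not any(t):  # (,,) with every part empty is invalid
            raise ValueError("empty netgroup triple is not allowed")
    return triples

print(parse_triples("rootgrp (host1,root,) (host2,root,) othergrp"))
```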
Related concepts
Managing file providers on page 93
Modify a file provider
You can modify any setting for a file provider, except that you cannot rename the System file provider.
1. Click Access > Authentication Providers > File Provider.
2. In the File Providers table, click View details for the provider whose settings you want to modify.
3. For each setting that you want to modify, click Edit, make the change, and then click Save.
4. Click Close.
Related concepts
Managing file providers on page 93
Option Description
Users Select this tab to view all users by provider.
Groups Select this tab to view all groups by provider.
3. From the Current Access Zone list, select an access zone.
4. Select the local provider in the Providers list.
Related concepts
Managing local users and groups on page 96
5. In the User Name field, type a username for the account.
6. In the Password field, type a password for the account.
7. Optional: Configure the following additional settings as needed.
Option Description
UID If this setting is left blank, the system automatically allocates a UID for the account. This is the recommended
setting. You cannot assign a UID that is in use by another local user account.
Full Name Type a full name for the user.
Email Address Type an email address for the account.
Primary Group To specify the owner group using the Select a Primary Group dialog box, click Select group.
a. To locate a group under the selected local provider, type a group name or click Search.
b. Select a group to return to the Manage Users window.
Additional Groups To specify any additional groups to make this user a member of, click Add group.
Home Directory Type the path to the user's home directory. If you do not specify a path, a directory is automatically created
at /ifs/home/<username>.
UNIX Shell This setting applies only to users who access the file system through SSH. From the list, select a shell. By
default, the /bin/zsh shell is selected.
Account Expiration Date Click the calendar icon to select the expiration date, or type the expiration date in the field in the format <mm>/<dd>/<yyyy>.
Enable the account Select this check box to allow the user to authenticate against the local database for SSH, FTP, HTTP, and
Windows file sharing through SMB. This setting is not used for UNIX file sharing through NFS.
8. Click Create.
Related concepts
Managing local users and groups on page 96
Related References
Naming rules for local users and groups on page 98
7. Optional: For each member that you want to add to the group, click Add Members and perform the following tasks in the Select a
User dialog box:
a. Search for either Users, Groups, or Well-known SIDs.
b. If you selected Users or Groups, specify values for the following fields:
User Name
Type all or part of a user name, or leave the field blank to return all users. Wildcard characters are accepted.
Group Name
Type all or part of a group name, or leave the field blank to return all groups. Wildcard characters are accepted.
Provider
Select an authentication provider.
c. Click Search.
d. In the Search Results table, select a user and then click Select.
The dialog box closes.
8. Click Create Group.
Related concepts
Managing local users and groups on page 96
Related References
Naming rules for local users and groups on page 98
7. Click Close.
Related concepts
Managing local users and groups on page 96
6
Administrative roles and privileges
This section contains the following topics:
• Role-based access
• Roles
• Privileges
• Managing roles
Role-based access
You can assign role-based access to delegate administrative tasks to selected users.
Role-based access control (RBAC) lets you grant the right to perform particular administrative actions to any user who can authenticate to a cluster. A Security Administrator creates roles, assigns privileges to the roles, and then assigns members. All administrators, including those given privileges by a role, must connect to the System zone to configure the cluster. When these members log in to the cluster through a configuration interface, they have the privileges of their roles. All administrators can configure settings for access zones, and they always have control over all access zones on the cluster.
Roles also give you the ability to assign privileges to member users and groups. By default, only the root user and the admin user can log in
to the web administration interface through HTTP or the command-line interface through SSH. Using roles, the root and admin users can
assign others to built-in or custom roles that have login and administrative privileges to perform specific administrative tasks.
NOTE: As a best practice, assign users to roles that contain the minimum set of necessary privileges. For most
purposes, the default permission policy settings, system access zone, and built-in roles are sufficient. You can create
role-based access management policies as necessary for your particular environment.
Roles
You can permit and limit access to administrative areas of your cluster on a per-user basis through roles. OneFS includes several built-in
administrator roles with predefined sets of privileges that cannot be modified. You can also create custom roles and assign privileges.
The following list describes what you can and cannot do through roles:
• You can assign privileges to a role.
• You can create custom roles and assign privileges to those roles.
• You can copy an existing role.
• You can add any user or group of users, including well-known groups, to a role as long as the users can authenticate to the cluster.
• You can add a user or group to more than one role.
• You cannot assign privileges directly to users or groups.
NOTE: When OneFS is first installed, only users with root- or admin-level access can log in and assign users to roles.
Related concepts
Role-based access on page 100
Managing roles on page 111
Custom roles
Custom roles supplement built-in roles.
You can create custom roles and assign privileges mapped to administrative areas in your cluster environment. For example, you can
create separate administrator roles for security, auditing, storage provisioning, and backup.
Related concepts
Roles on page 100
Managing roles on page 111
Built-in roles
Built-in roles are included in OneFS and have been configured with the most likely privileges necessary to perform common administrative
functions. You cannot modify the list of privileges assigned to each built-in role; however, you can assign users and groups to built-in roles.
OneFS provides the following built-in roles:
• SecurityAdmin
• SystemAdmin
• AuditAdmin
• BackupAdmin
• VMwareAdmin
See the OneFS Web Administration Guide or the OneFS CLI Administration Guide for descriptions and a list of privileges assigned to each
of the built-in roles.
Related concepts
Roles on page 100
Managing roles on page 111
Related References
Built-in roles on page 101
Privileges
Privileges permit users to complete tasks on a cluster.
Privileges are associated with an area of cluster administration such as Job Engine, SMB, or statistics.
Privileges have one of two forms:
Action Allows a user to perform a specific action on a cluster. For example, the ISI_PRIV_LOGIN_SSH privilege allows a
user to log in to a cluster through an SSH client.
Read/Write Allows a user to view or modify a configuration subsystem such as statistics, snapshots, or quotas. For example,
the ISI_PRIV_SNAPSHOT privilege allows an administrator to create and delete snapshots and snapshot
schedules. A read/write privilege can grant either read-only or read/write access. Read-only access allows a user
to view configuration settings; read/write access allows a user to view and modify configuration settings.
Privileges are granted to the user on login to a cluster through the OneFS API, the web administration interface, SSH, or a console
session. A token is generated for the user, which includes a list of all privileges granted to the user. Each URI, web-administration interface
page, and command requires a specific privilege to view or modify the information available through any of these interfaces.
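Conceptually, each interface checks its required privilege against the token. The sketch below is a simplification, not OneFS code: the token layout and authorize helper are assumptions, although ISI_PRIV_LOGIN_SSH and ISI_PRIV_SNAPSHOT are real OneFS privilege names.

```python
def authorize(token_privileges, required, write=False):
    """Return True if the token grants `required` at the needed level.

    Read/write privileges carry a grant of "ro" or "rw"; an absent
    privilege denies access. Action privileges are modeled here as "rw".
    """
    grant = token_privileges.get(required)
    if grant is None:
        return False
    return grant == "rw" or (not write and grant == "ro")

token = {"ISI_PRIV_LOGIN_SSH": "rw", "ISI_PRIV_SNAPSHOT": "ro"}
print(authorize(token, "ISI_PRIV_SNAPSHOT"))              # True: may view
print(authorize(token, "ISI_PRIV_SNAPSHOT", write=True))  # False: may not modify
```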
In some cases, privileges cannot be granted or there are privilege limitations.
• Privileges are not granted to users that do not connect to the System Zone during login or to users that connect through the
deprecated Telnet service, even if they are members of a role.
• Privileges do not provide administrative access to configuration paths outside of the OneFS API. For example, the ISI_PRIV_SMB
privilege does not grant a user the right to configure SMB shares using the Microsoft Management Console (MMC).
Related concepts
Role-based access on page 100
Managing roles on page 111
Related concepts
Privileges on page 104
Login privileges
The login privileges listed in the following table either allow the user to perform specific actions or grant read or write access to an area of administration on the cluster.
System privileges
The system privileges listed in the following table either allow the user to perform specific actions or grant read or write access to an area of administration on the cluster.
Security privileges
The security privileges listed in the following table either allow the user to perform specific actions or grant read or write access to an area of administration on the cluster.
Configuration privileges
The configuration privileges listed in the following table either allow the user to perform specific actions or grant read or write access to an area of administration on the cluster.
Namespace privileges
The namespace privileges listed in the following table either allow the user to perform specific actions or grant read or write access to an area of administration on the cluster.
Most cluster privileges allow changes to cluster configuration in some manner. The backup and restore privileges allow access to cluster
data from the System zone, the traversing of all directories, and reading of all file data and metadata regardless of file permissions.
Users assigned these privileges can back up data to another machine over a supported protocol without generating access-denied errors and without connecting as the root user. These two privileges are supported over the following client-side protocols:
• SMB
• NFS
• OneFS API
• FTP
• SSH
Over SMB, the ISI_PRIV_IFS_BACKUP and ISI_PRIV_IFS_RESTORE privileges emulate the Windows privileges SE_BACKUP_NAME and
SE_RESTORE_NAME. The emulation means that normal file-open procedures are protected by file system permissions. To enable the
backup and restore privileges over the SMB protocol, you must open files with the FILE_OPEN_FOR_BACKUP_INTENT option, which
occurs automatically through Windows backup software such as Robocopy. Application of the option is not automatic when files are
opened through general file browsing software such as Windows File Explorer.
Related concepts
Privileges on page 104
Command-to-privilege mapping
Each CLI command is associated with a privilege. Some commands require root access.
Related References
Command-line interface privileges on page 108
Privilege-to-command mapping
Each privilege is associated with one or more commands. Some commands require root access.
Related References
Command-line interface privileges on page 108
Managing roles
You can view, add, or remove members of any role. Except for built-in roles, whose privileges you cannot modify, you can add or remove
OneFS privileges on a role-by-role basis.
NOTE: Roles take both users and groups as members. If a group is added to a role, all users who are members of that
group are assigned the privileges associated with the role. Similarly, members of multiple roles are assigned the
combined privileges of each role.
See the OneFS Web Administration Guide or the OneFS CLI Administration Guide for instructions on how to create, modify, and delete
roles.
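The note above says that members of multiple roles receive the combined privileges of each role; that union can be sketched as follows (the role memberships and the SnapAdmin role are hypothetical, although the privilege names are real OneFS privileges):

```python
def effective_privileges(user, roles):
    """Union of privileges from every role that lists the user as a member."""
    privs = set()
    for members, role_privs in roles.values():
        if user in members:
            privs |= role_privs
    return privs

roles = {
    "AuditAdmin": ({"alice"}, {"ISI_PRIV_LOGIN_PAPI", "ISI_PRIV_STATISTICS"}),
    "SnapAdmin":  ({"alice", "bob"}, {"ISI_PRIV_SNAPSHOT"}),  # hypothetical custom role
}
print(sorted(effective_privileges("alice", roles)))
```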
Related concepts
Managing roles on page 111
Modify a role
You can modify the description and the user or group membership of any role, including built-in roles. However, you can modify the name
and privileges only for custom roles.
1. Click Access > Membership & Roles > Roles.
2. In the Roles area, select a role and click View / Edit.
The View Role Details dialog box appears.
Related concepts
Managing roles on page 111
Copy a role
You can copy an existing role and add or remove privileges and members for that role as needed.
1. Click Access > Membership & Roles > Roles.
2. In the Roles area, select a role and click More > Copy.
3. Modify the role name, description, members, and privileges as needed.
4. Click Copy Role.
Related concepts
Managing roles on page 111
View a role
You can view information about built-in and custom roles.
1. Click Access > Membership & Roles > Roles.
2. In the Roles area, select a role and click View / Edit.
3. In the View Role Details dialog box, view information about the role.
4. Click Close to return to the Membership & Roles page.
Related concepts
Managing roles on page 111
View privileges
You can view user privileges.
This procedure must be performed through the command-line interface (CLI). You can view a list of your privileges or the privileges of
another user using the following commands:
1. Establish an SSH connection to any node in the cluster.
2. To view privileges, run one of the following commands.
• To view a list of your privileges, run the following command:
isi auth id
• To view a list of privileges for another user, run the following command, where <user> is a placeholder for another user by name:
Related concepts
Managing roles on page 111
Identity types
OneFS supports three primary identity types, each of which you can store directly on the file system. The identity types are the user identifier (UID) and group identifier (GID) for UNIX, and the security identifier (SID) for Windows.
When you log on to a cluster, the user mapper expands your identity to include your other identities from all the directory services,
including Active Directory, LDAP, and NIS. After OneFS maps your identities across the directory services, it generates an access token
that includes the identity information associated with your accounts. A token includes the following identifiers:
• A UNIX user identifier (UID) and a group identifier (GID). A UID or GID is a 32-bit number with a maximum value of 4,294,967,295.
• A security identifier (SID) for a Windows user account. A SID is a series of authorities and sub-authorities ending with a 32-bit relative
identifier (RID). Most SIDs have the form S-1-5-21-<A>-<B>-<C>-<RID>, where <A>, <B>, and <C> are specific to a domain or
computer and <RID> denotes the object in the domain.
• A primary group SID for a Windows group account.
• A list of supplemental identities, including all groups in which the user is a member.
The token also contains privileges that stem from administrative role-based access control.
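The SID structure described above can be decomposed with a short sketch (parse_sid is a hypothetical helper, and the sample SID is illustrative):

```python
def parse_sid(sid):
    """Split a SID of the form S-1-5-21-<A>-<B>-<C>-<RID> into its parts."""
    parts = sid.split("-")
    if parts[0] != "S":
        raise ValueError("not a SID")
    revision, authority = parts[1], parts[2]
    subauthorities = [int(p) for p in parts[3:]]
    rid = subauthorities[-1]  # the 32-bit relative identifier
    return revision, authority, subauthorities[:-1], rid

# <A>, <B>, <C> below are made-up domain-specific values; 512 is the RID.
print(parse_sid("S-1-5-21-1004336348-1177238915-682003330-512")[3])  # 512
```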
On a PowerScale cluster, a file contains permissions, which appear as an access control list (ACL). The ACL controls access to directories,
files, and other securable system objects.
When a user tries to access a file, OneFS compares the identities in the user’s access token with the file’s ACL. OneFS grants access when the file’s ACL includes an access control entry (ACE) that allows the identity in the token to access the file and does not include an ACE that denies the identity access.
Access tokens
An access token is created when the user first makes a request for access.
Access tokens represent who a user is when performing actions on the cluster and supply the primary owner and group identities during
file creation. Access tokens are also compared against the ACL or mode bits during authorization checks.
During user authorization, OneFS compares the access token, which is generated during the initial connection, with the authorization data
on the file. All user and identity mapping occurs during token generation; no mapping takes place during permissions evaluation.
An access token includes all UIDs, GIDs, and SIDs for an identity, in addition to all OneFS privileges. OneFS reads the information in the
token to determine whether a user has access to a resource. It is important that the token contains the correct list of UIDs, GIDs, and
SIDs. An access token is created from one of the following sources:
Source: Username
Authentication methods:
• SMB impersonate user
• Kerberized NFSv3
• Kerberized NFSv4
• NFS export user mapping
• HTTP
• FTP
• HDFS
Step 1: User identity lookup. Using the initial identity, the user is looked up in all configured authentication providers in the access zone, in the order in which they are listed. The user identity and group list are retrieved from the authenticating provider. Next, additional group memberships that are associated with the user and group list are looked up for all other authentication providers. All of these SIDs, UIDs, or GIDs are added to the initial token.
NOTE: An exception to this behavior occurs if the AD provider is configured to call other providers, such as LDAP or NIS.
Step 2: ID mapping. The user's identifiers are associated across directory services. All SIDs are converted to their equivalent UID/GID and vice versa. These ID mappings are also added to the access token.
Step 3: User mapping. Access tokens from other directory services are combined. If the username matches any user mapping rules, the rules are processed in order and the token is updated accordingly.
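The three steps above can be sketched as one pipeline. Everything in this sketch is hypothetical in-memory data; in OneFS the lookups are performed by the authentication daemon against real providers and the ID map.

```python
def generate_token(initial_identity, providers, id_map, mapping_rules):
    # Step 1: look up the user in each configured provider, in order,
    # collecting UIDs, GIDs, and SIDs into the initial token.
    identities = set()
    for provider in providers:
        identities |= provider.get(initial_identity, set())
    # Step 2: add ID-map equivalents (SID <-> UID/GID) for each identifier.
    identities |= {id_map[i] for i in list(identities) if i in id_map}
    # Step 3: apply user-mapping rules, in order, to the combined token.
    for rule in mapping_rules:
        identities = rule(identities)
    return identities

token = generate_token(
    "jsmith",
    providers=[{"jsmith": {"UID:2000", "GID:2000"}},
               {"jsmith": {"SID:S-1-5-21-A-B-C-1103"}}],
    id_map={"UID:2000": "SID:S-1-5-21-A-B-C-1103"},
    mapping_rules=[lambda ids: ids | {"GID:1800"}])  # rule adds a supplemental group
print(sorted(token))
```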
ID mapping
The Identity (ID) mapping service maintains relationship information between mapped Windows and UNIX identifiers to provide consistent
access control across file sharing protocols within an access zone.
NOTE: ID mapping and user mapping are different services, despite the similarity in names.
During authentication, the authentication daemon requests identity mappings from the ID mapping service in order to create access
tokens. Upon request, the ID mapping service returns Windows identifiers mapped to UNIX identifiers or UNIX identifiers mapped to
Windows identifiers. When a user authenticates to a cluster over NFS with a UID or GID, the ID mapping service returns the mapped
Windows SID, allowing access to files that another user stored over SMB. When a user authenticates to the cluster over SMB with a SID,
the ID mapping service returns the mapped UNIX UID and GID, allowing access to files that a UNIX client stored over NFS.
Mappings between UIDs or GIDs and SIDs are stored according to access zone in a cluster-distributed database called the ID map. Each
mapping in the ID map is stored as a one-way relationship from the source to the target identity type. Two-way mappings are stored as
complementary one-way mappings.
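Storing a two-way mapping as two complementary one-way mappings can be sketched as follows (a plain dictionary stands in for the cluster-distributed, per-access-zone ID map):

```python
def store_two_way(id_map, source, target):
    """Store a two-way mapping as two complementary one-way entries."""
    id_map[source] = target
    id_map[target] = source

id_map = {}
store_two_way(id_map, "UID:2000", "SID:S-1-5-21-A-B-C-1103")
print(id_map["SID:S-1-5-21-A-B-C-1103"])  # UID:2000
```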
If the ID mapping service does not locate and return a mapped UID or GID in the ID map, the authentication daemon searches other
external authentication providers configured in the same access zone for a user that matches the same name as the Active Directory user.
If a matching user name is found in another external provider, the authentication daemon adds the matching user's UID or GID to the
access token for the Active Directory user, and the ID mapping service creates a mapping between the UID or GID and the Active
Directory user's SID in the ID map. This is referred to as an external mapping.
NOTE: When an external mapping is stored in the ID map, the UID is specified as the on-disk identity for that user.
When the ID mapping service stores a generated mapping, the SID is specified as the on-disk identity.
If a matching user name is not found in another external provider, the authentication daemon assigns a UID or GID from the ID mapping
range to the Active Directory user's SID, and the ID mapping service stores the mapping in the ID map. This is referred to as a generated
mapping. The ID mapping range is a pool of UIDs and GIDs allocated in the mapping settings.
After a mapping has been created for a user, the authentication daemon retrieves the UID or GID stored in the ID map upon subsequent
lookups for the user.
ID mapping ranges
In access zones with multiple external authentication providers, such as Active Directory and LDAP, it is important that the UIDs and GIDs
from different providers that are configured in the same access zone do not overlap. Overlapping UIDs and GIDs between providers within
an access zone might result in some users gaining access to other users' directories and files.
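A simple interval check illustrates the overlap risk (the ranges below are made-up examples, not OneFS defaults):

```python
def ranges_overlap(a, b):
    """Return True if two inclusive (low, high) ID ranges intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

ad_range   = (1_000_000, 2_000_000)  # hypothetical provider allocation range
ldap_range = (1_500_000, 1_600_000)  # overlaps: users could cross-access files
nis_range  = (3_000_000, 3_100_000)  # disjoint: safe
print(ranges_overlap(ad_range, ldap_range))  # True
print(ranges_overlap(ad_range, nis_range))   # False
```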
User mapping
User mapping provides a way to control permissions by specifying a user's security identifiers, user identifiers, and group identifiers. OneFS
uses the identifiers to check file or group ownership.
With the user-mapping feature, you can apply rules to modify which user identity OneFS uses, add supplemental user identities, and
modify a user's group membership. The user-mapping service combines a user’s identities from different directory services into a single
access token and then modifies it according to the rules that you create.
NOTE: You can configure mapping rules on a per-zone basis. Mapping rules must be configured separately in each
access zone that uses them. OneFS maps users only during login or protocol access.
Use Active Directory with RFC 2307 and Windows Services for UNIX. Use Microsoft Active Directory with Windows Services for UNIX and RFC 2307 attributes to manage Linux, UNIX, and Windows systems. Integrating UNIX and Linux systems with Active Directory centralizes identity management and eases interoperability, reducing the need for user-mapping rules. Make sure your domain controllers are running Windows Server 2003 or later.
Employ a consistent username strategy. The simplest configurations name users consistently, so that each UNIX user corresponds to a similarly named Windows user. Such a convention allows rules with wildcard characters to match names and map them without explicitly specifying each pair of accounts.
Do not use overlapping ID ranges. In networks with multiple identity sources, such as LDAP and Active Directory with RFC 2307 attributes, ensure that UID and GID ranges do not overlap. It is also important that the range from which OneFS automatically allocates UIDs and GIDs does not overlap with any other ID range.
On-disk identity
After the user mapper resolves a user's identities, OneFS determines an authoritative identifier for it, which is the preferred on-disk
identity.
OneFS stores either UNIX or Windows identities in file metadata on disk. On-disk identity types are UNIX, SID, and native. Identities are
set when a file is created or a file's access control data is modified. Almost all protocols require some level of mapping to operate correctly,
so choosing the preferred identity to store on disk is important. You can configure OneFS to store either the UNIX or the Windows
identity, or you can allow OneFS to determine the optimal identity to store.
Although you can change the type of on-disk identity, the native identity is best for a network with both UNIX and Windows systems. In native on-disk identity mode, setting the UID as the on-disk identity improves NFS performance.
NOTE: The SID on-disk identity is for a homogeneous network of Windows systems managed only with Active Directory.
When you upgrade from a version earlier than OneFS 6.5, the on-disk identity is set to UNIX. When you upgrade from
OneFS 6.5 or later, the on-disk identity setting is preserved. On new installations, the on-disk identity is set to native.
The native on-disk identity type allows the OneFS authentication daemon to select the correct identity to store on disk by checking for
the identity mapping types in the following order:
NOTE: If you change the on-disk identity type, you should run the PermissionRepair job with the Convert repair type
selected to make sure that the disk representation of all files is consistent with the changed setting. For more
information, see the Run the PermissionRepair job section.
Managing ID mappings
You can create, modify, and delete identity mappings and configure ID mapping settings.
The following command deletes the identity mapping of the user with UID 4236 in the zone3 access zone:
The following command flushes the mapping of the user with UID 4236 in the zone3 access zone:
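Sketches of both commands, assuming current isi auth mapping syntax (verify the option names against your OneFS release):

```shell
# Delete the generated identity mapping for UID 4236 in the zone3 access zone.
isi auth mapping delete --source-uid=4236 --zone=zone3

# Flush the cached mapping for the same user so that it is regenerated on the
# next lookup.
isi auth mapping flush --source-uid=4236 --zone=zone3
```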
Related References
Mapping rule options on page 125
Mapping rule operators on page 126
Related tasks
Merge Windows and UNIX tokens on page 124
Retrieve the primary group from LDAP on page 124
Test a user-mapping rule on page 123
User
Name: krb_user_002
UID: 1002
SID: S-1-22-1-1001
On disk: 1001
ZID: 1
Zone: System
Privileges: -
Primary Group
Name: krb_user_001
GID: 1000
SID: S-1-22-2-1001
On disk: 1000
Supplemental Identities
Name: Authenticated Users
GID: -
SID: S-1-5-11
Related tasks
Create a user-mapping rule on page 122
Option Description
Join two users together Inserts the new identity into the token.
Append field from a user Modifies the access token by adding fields to it.
Depending on your selection, the Create a User Mapping Rule dialog box refreshes to display additional fields.
5. Populate the fields as needed.
6. Click Add Rule.
NOTE: Rules are called in the order they are listed. To ensure that each rule gets processed, list replacements first
and allow/deny rules last. You can change the order in which a rule is listed by clicking its title bar and dragging it to
a new position.
Related tasks
Test a user-mapping rule on page 123
Related tasks
Test a user-mapping rule on page 123
Related tasks
Create a user-mapping rule on page 122
2. Run a net use command, similar to the following example, on a Windows client to map the home directory for user411:
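A sketch of such a net use command; the cluster host name (cluster.example.com), the share name, and the drive letter are placeholders, not values from this guide:

```shell
# Map drive Q: to the home share on the cluster as user411.
net use q: \\cluster.example.com\home /u:user411
```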
3. Run a command similar to the following example on the cluster to view the inherited ACL permissions for the user411 share:
cd /ifs/home/user411
ls -lde .
After running this command, user Zachary will see a share named 'zachary' rather than '%U', and when Zachary tries to connect to the
share named 'zachary', he will be directed to /ifs/home/zachary. On a Windows client, if Zachary runs the following commands, he
sees the contents of his /ifs/home/zachary directory:
Similarly, if user Claudia runs the following commands on a Windows client, she sees the directory contents of /ifs/home/claudia:
Zachary and Claudia cannot access one another's home directory because only the share 'zachary' exists for Zachary and only the share
'claudia' exists for Claudia.
NOTE: The following examples refer to setting the login shell to /bin/bash. You can also set the shell to /bin/rbash.
1. Run the following command to set the login shell for all local users to /bin/bash:
2. Run the following command to set the default login shell for all Active Directory users in your domain to /bin/bash:
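Sketches of both commands, assuming the System zone and a placeholder domain name:

```shell
# Set the login shell for all users in the local provider of the System zone.
isi auth local modify System --login-shell=/bin/bash

# Set the default login shell for all Active Directory users in the domain;
# YOUR.DOMAIN.COM is a placeholder.
isi auth ads modify YOUR.DOMAIN.COM --login-shell=/bin/bash
```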
Name: System
Path: /ifs
Groupnet: groupnet0
Map Untrusted: -
Auth Providers: lsa-local-provider:System, lsa-file-provider:System
NetBIOS Name: -
User Mapping Rules: -
Home Directory Umask: 0077
Skeleton Directory: /usr/share/skel
Cache Entry Expiry: 4H
Negative Cache Entry Expiry: 1m
Zone ID: 1
In the command result, you can see that the default setting for Home Directory Umask is 0077, so created home directories receive permissions of 0700, which is equivalent to (0755 & ~(077)). You can modify the Home Directory Umask setting for a zone with the --home-directory-umask option, specifying an octal number as the umask value. This value indicates the permissions that are to be disabled, so larger mask values indicate fewer permissions. For example, a umask value of 000 or 022 yields created home directory permissions of 0755, whereas a umask value of 077 yields created home directory permissions of 0700.
2. Run a command similar to the following example to allow a group/others write/execute permission in a home directory:
In this example, user home directories will be created with mode bits 0755 masked by the umask field, set to the value of 022.
Therefore, user home directories will be created with mode bits 0755, which is equivalent to (0755 & ~(022)).
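A sketch of the step 2 command, assuming the System zone; a umask of 022 clears write permission for group and others, so home directories are created with mode 0755:

```shell
isi zone zones modify System --home-directory-umask=022
```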
ssh <your-domain>\\[email protected]
2. Run the isi zone zones modify command to modify the default skeleton directory.
The following command modifies the default skeleton directory, /usr/share/skel, in an access zone, where System is the value
for the <zone> option and /usr/share/skel2 is the value for the <path> option:
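A sketch of that command with those values:

```shell
# Set the skeleton directory for the System access zone.
isi zone zones modify System --skeleton-directory=/usr/share/skel2
```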
Default home directory settings by authentication provider:
Authentication provider: Local
  Home directory: --home-directory-template=/ifs/home/%U
  Home directory creation: Enabled (--create-home-directory=yes)
  UNIX login shell: /bin/sh (--login-shell=/bin/sh)
Related References
Supported expansion variables on page 134
%D  NetBIOS domain name (for example, YORK for YORK.EAST.EXAMPLE.COM). Expands to the user's domain name, based on the authentication provider:
• For Active Directory users, %D expands to the Active Directory NetBIOS name.
• For local users, %D expands to the cluster name in uppercase characters. For example, for a cluster named cluster1, %D expands to CLUSTER1.
• For users in the System file provider, %D expands to UNIX_USERS.
• For users in other file providers, %D expands to FILE_USERS.
• For LDAP users, %D expands to LDAP_USERS.
• For NIS users, %D expands to NIS_USERS.
%Z  Zone name (for example, ZoneABC). Expands to the access zone name. If multiple zones are activated, this variable is useful for differentiating users in separate zones. For example, for a user named user1 in the System zone, the path /ifs/home/%Z/%U is mapped to /ifs/home/System/user1.
%L  Host name (cluster host name in lowercase). Expands to the host name of the cluster, normalized to lowercase. Limited use.
%0  First character of the user name. Expands to the first character of the user name.
%1  Second character of the user name. Expands to the second character of the user name.
%2  Third character of the user name. Expands to the third character of the user name.
NOTE: If the user name includes fewer than three characters, the %0, %1, and %2 variables wrap around. For example,
for a user named ab, the variables map to a, b, and a, respectively. For a user named a, all three variables map to a.
ACLs
In Windows environments, file and directory permissions, referred to as access rights, are defined in access control lists (ACLs). Although
ACLs are more complex than mode bits, ACLs can express much more granular sets of access rules. OneFS checks the ACL processing
rules commonly associated with Windows ACLs.
A Windows ACL contains zero or more access control entries (ACEs), each of which represents the security identifier (SID) of a user or a
group as a trustee. In OneFS, an ACL can contain ACEs with a UID, GID, or SID as the trustee. Each ACE contains a set of rights that
allow or deny access to a file or folder. An ACE can optionally contain an inheritance flag to specify whether the ACE should be inherited
by child folders and files.
NOTE: Instead of the standard three permissions available for mode bits, ACLs have 32 bits of fine-grained access
rights. Of these, the upper 16 bits are general and apply to all object types. The lower 16 bits vary between files and
directories but are defined in a way that allows most applications to apply the same bits for files and directories.
Rights grant or deny access for a given trustee. You can block user access explicitly through a deny ACE or implicitly by ensuring that a
user does not directly, or indirectly through a group, appear in an ACE that grants the right.
Mixed-permission environments
When a file operation requests an object’s authorization data, for example, with the ls -l command over NFS or with the Security tab
of the Properties dialog box in Windows Explorer over SMB, OneFS attempts to provide that data in the requested format. In an
environment that mixes UNIX and Windows systems, some translation may be required when performing create file, set security, get
security, or access operations.
SID-to-UID and SID-to-GID mappings are cached in both the OneFS ID mapper and the stat cache. If a mapping has
recently changed, the file might report inaccurate information until the file is updated or the cache is flushed.
3. View mode-bits permissions for a user by running the isi auth access command.
The following command displays verbose-mode file permissions information in /ifs/ for the user that you specify in place of
<username>:
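A sketch of the command, assuming current isi auth access syntax (option names may vary by release); substitute a real user for <username>:

```shell
isi auth access <username> --path=/ifs --verbose
```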
4. View expected ACL user permissions on a file for a user by running the isi auth access command.
The following command displays verbose-mode ACL file permissions for the file file_with_acl.tx in /ifs/data/ for the user
that you specify in place of <username>:
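A sketch of the command, under the same syntax assumption as the previous step:

```shell
isi auth access <username> --path=/ifs/data/file_with_acl.tx --verbose
```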
Option Description
Send NTLMv2 Specifies whether to send only NTLMv2 responses to SMB clients with NTLM-compatible credentials.
On-Disk Identity Controls the preferred identity to store on disk. If OneFS is unable to convert an identity to the preferred
format, it is stored as is. This setting does not affect identities that are currently stored on disk. Select one of
the following settings:
native Allow OneFS to determine the identity to store on disk. This is the recommended
setting.
unix Always store incoming UNIX identifiers (UIDs and GIDs) on disk.
sid Store incoming Windows security identifiers (SIDs) on disk, unless the SID was
generated from a UNIX identifier; in that case, convert it back to the UNIX identifier
and store it on disk.
Space Replacement For clients that have difficulty parsing spaces in user and group names, specifies a substitute character.
3. Click Save.
If you changed the on-disk identity selection, it is recommended that you run the PermissionRepair job with the Convert repair type to
prevent potential permissions errors. For more information, see the Run the PermissionRepair job section.
Related References
ACL policy settings on page 140
Environment
Depending on the environment you select, the system will automatically select the General ACL Settings and Advanced ACL Settings
options that are optimal for that environment. You also have the option to manually configure general and advanced settings.
Balanced Enables PowerScale cluster permissions to operate in a mixed UNIX and Windows environment. This setting is
recommended for most PowerScale cluster deployments.
UNIX only Enables PowerScale cluster permissions to operate with UNIX semantics, as opposed to Windows semantics.
Enabling this option prevents ACL creation on the system.
Windows only Enables PowerScale cluster permissions to operate with Windows semantics, as opposed to UNIX semantics.
Enabling this option causes the system to return an error on UNIX chmod requests.
Custom Allows you to configure General ACL Settings and Advanced ACL Settings options.
environment
NOTE: Inheritable ACLs on the system take precedence over this setting. If inheritable ACLs are
set on a folder, any new files and folders that are created in that folder inherit the folder's ACL.
Disabling this setting does not remove ACLs currently set on files. If you want to clear an existing
ACL, run the chmod -b <mode> <file> command to remove the ACL and set the correct
permissions.
Use the chmod Specifies how permissions are handled when a chmod operation is initiated on a file with an ACL, either locally or
Command On Files over NFS. This setting controls any elements that affect UNIX permissions, including File System Explorer.
With Existing Enabling this policy setting does not change how chmod operations affect files that do not have ACLs. Select one
ACLs of the following options:
Remove the For chmod operations, removes any existing ACL and instead sets the chmod
existing ACL and permissions. Select this option only if you do not need permissions to be set from
set UNIX Windows.
permissions
instead
Remove the Stores the UNIX permissions in a new Windows ACL. Select this option only if you want
existing ACL and to remove Windows permissions but do not want files to have synthetic ACLs.
create an ACL
equivalent to the
UNIX permissions
Remove the Stores the UNIX permissions in a new Windows ACL only for users and groups that are
existing ACL and referenced by the old ACL. Select this option only if you want to remove Windows
create an ACL permissions but do not want files to have synthetic ACLs.
equivalent to the
UNIX permissions,
for all users/
groups referenced
in old ACL
Merge the new Merges permissions that are applied by chmod with existing ACLs. An ACE for each
permissions with identity (owner, group, and everyone) is either modified or created, but all other ACEs are
the existing ACL unmodified. Inheritable ACEs are also left unmodified to enable Windows users to continue
to inherit appropriate permissions. However, UNIX users can set specific permissions for
each of those three standard identities.
Deny permission Prevents users from making NFS and local chmod operations. Enable this setting if you
to modify the ACL do not want to allow permission sets over NFS.
Ignore operation if Prevents an NFS client from changing the ACL. Select this option if you defined an
file has an existing inheritable ACL on a directory and want to use that ACL for permissions.
ACL
CAUTION: If you try to run the chmod command on the same permissions that are currently set on
a file with an ACL, you may cause the operation to silently fail. The operation appears to be
successful, but if you were to examine the permissions on the cluster, you would notice that the
chmod command had no effect. As an alternative, you can run the chmod command away from the
current permissions and then perform a second chmod command to revert to the original
permissions. For example, if the file shows 755 UNIX permissions and you want to confirm this
number, you could run chmod 700 file; chmod 755 file.
Use the chown/ Changes the user or group that has ownership of a file or folder. Select one of the following options:
chgrp On Files
With Existing Modify only the Enables the chown or chgrp operation to perform as it does in UNIX. Enabling this
ACLs owner and/or setting modifies any ACEs in the ACL associated with the old and new owner or group.
group
Modify the owner Enables the NFS chown or chgrp operation to function as it does in Windows. When a
and/or group and file owner is changed over Windows, no permissions in the ACL are changed.
ACL permissions
Ignore operation if Prevents an NFS client from changing the owner or group.
file has an existing
ACL
NOTE: Over NFS, the chown or chgrp operation changes the permissions and user or group that
has ownership. For example, a file that is owned by user Joe with rwx------ (700) permissions
indicates rwx permissions for the owner, but no permissions for anyone else. If you run the chown
command to change ownership of the file to user Bob, the owner permissions are still rwx but they
now represent the permissions for Bob, rather than for Joe, who lost all of his permissions. This
setting does not affect UNIX chown or chgrp operations that are performed on files with UNIX
permissions, and it does not affect Windows chown or chgrp operations, which do not change any
permissions.
Access checks In UNIX environments, only the file owner or superuser has the right to run a chmod or chown operation on a file.
(chmod, chown) In Windows environments, you can implement this policy setting to give users the right to perform chmod
operations that change permissions, or the right to perform chown operations that take ownership, but do not
give away ownership. Select one of the following options:
Allow only the file Enables chmod and chown access checks to operate with UNIX-like behavior.
owner to change
the mode or owner
of the file (UNIX
model)
Allow the file Enables chmod and chown access checks to operate with Windows-like behavior.
owner and users
with WRITE_DAC
and
WRITE_OWNER
permissions to
change the mode
or owner of the
file (Windows
model)
Retain 'rwx' Generates an ACE that provides only read, write, and execute permissions.
permissions
Treat 'rwx' Generates an ACE that provides the maximum Windows permissions for a user or a group
permissions as by adding the change permissions right, the take ownership right, and the delete right.
Full Control
Group Owner Inheritance: Operating systems tend to work with group ownership and permissions in two different ways: inheriting the group owner from the file's parent folder (BSD semantics), or from the file creator's primary group (Linux and Windows semantics). If you enable a setting that causes the group owner to be inherited from the creator's primary group, you can override it on a per-folder basis by running the chmod command to set the set-gid bit. This inheritance applies only when the file is created. For more information, see the manual page for the chmod command.
Select one of the following options:
When an ACL Specifies that if an ACL exists on a file, the group owner is inherited from the file creator's
exists, use Linux primary group. If there is no ACL, the group owner is inherited from the parent folder.
and Windows
semantics,
otherwise use BSD
semantics
BSD semantics - Specifies that the group owner be inherited from the file's parent folder.
Inherit group
owner from the
parent folder
Linux and Specifies that the group owner be inherited from the file creator's primary group.
Windows
semantics - Inherit
group owner from
the creator's
primary group
chmod (007) On Specifies whether to remove ACLs when running the chmod (007) command. Select one of the following
Files With Existing options.
ACLs
chmod(007) does Sets 007 UNIX permissions without removing an existing ACL.
not remove
existing ACL
chmod(007) Removes ACLs from files over UNIX file sharing (NFS) and locally on the cluster through
removes existing the chmod (007) command. If you enable this setting, be sure to run the chmod
ACL and sets 007 command on the file immediately after using chmod (007) to clear an ACL. In most
UNIX permissions cases, you do not want to leave 007 permissions on the file.
Approximate Windows ACLs are more complex than UNIX permissions. When a UNIX client requests UNIX permissions for a file
Owner Mode Bits with an ACL over NFS, the client receives an approximation of the file's actual permissions. Running the ls -l
When ACL Exists command from a UNIX client returns a more open set of permissions than the user expects. This permissiveness
compensates for applications that incorrectly inspect the UNIX permissions themselves when determining
whether to try a file-system operation. The purpose of this policy setting is to ensure that these applications proceed with the operation to allow the file system to correctly determine user access through the ACL. Select one of the following options:
Approximate owner mode bits using all possible group ACEs in ACL: Causes the owner permissions to appear more permissive than the actual permissions on the file.
Approximate owner mode bits: Causes the owner permissions to appear more accurate, in that you see only the permissions for a particular owner and not the more permissive set. This may cause access-denied problems for UNIX clients, however.
Synthetic "deny" The Windows ACL user interface cannot display an ACL if any deny ACEs are out of canonical ACL order. To
ACEs correctly represent UNIX permissions, deny ACEs may be required to be out of canonical ACL order. Select one of
the following options:
Do not modify Prevents modifications to synthetic ACL generation and allows “deny” ACEs to be
synthetic ACLs generated when necessary.
and mode bit CAUTION: This option can lead to permissions being reordered, permanently
approximations denying access if a Windows user or an application performs an ACL get, an
ACL modification, and an ACL set to and from Windows.
Remove “deny” Does not include deny ACEs when generating synthetic ACLs.
ACEs from ACLs.
This setting can
cause ACLs to be
more permissive
than the
equivalent mode
bits
Access check You can control who can change utimes, which are the access and modification times of a file. Select one of the
(utimes) following options:
Allow only owners Allows only owners to change utimes, which complies with the POSIX standard.
to change utimes
to client-specific
times (POSIX
compliant)
Allow owners and Allows owners as well as users with write access to modify utimes, which is less
users with ‘write’ restrictive.
access to change
utimes to client-
specific times
Read-only DOS attribute: Select one of the following options:
Deny permission to modify files with DOS read-only attribute over Windows File Sharing (SMB): Duplicates DOS-attribute permissions behavior over only the SMB protocol, so that files use the read-only attribute over SMB.
Deny permission to modify files with DOS read-only attribute through NFS and SMB: Duplicates DOS-attribute permissions behavior over both NFS and SMB protocols. For example, if permissions are read-only on a file over SMB, permissions are read-only over NFS.
Related tasks
Modify ACL policy settings on page 140
Option Description
Manual The job must be started manually.
Scheduled The job is regularly scheduled. Select the schedule option from the drop-down list and specify the schedule details.
8. Click Save Changes, and then click Close.
9. Optional: From the Job Types table, click Start Job.
The Start a Job window opens.
10. Select or clear the Allow Duplicate Jobs checkbox.
11. Optional: From the Impact policy list, select an impact policy for the job to follow.
12. In the Paths field, type or browse to the directory in /ifs whose permissions you want to repair.
13. Optional: Click Add another directory path and in the added Paths field, type or browse for an additional directory in /ifs whose
permissions you want to repair.
You can repeat this step to add directory paths as needed.
14. From the Repair Type list, select one of the following methods for updating permissions:
Option Description
Clone Applies the permissions settings for the directory that is specified by the Template File or Directory setting to the
directory you set in the Paths fields.
Inherit Recursively applies the ACL of the directory that is specified by the Template File or Directory setting to each file and
subdirectory in the specified Paths fields, according to standard inheritance rules.
Convert For each file and directory in the specified Paths fields, converts the owner, group, and access control list (ACL) to the
target on-disk identity based on the Mapping Type setting.
The remaining settings options differ depending on the selected repair type.
15. In the Template File or Directory field, type or browse to the directory in /ifs that you want to copy permissions from. This setting
applies only to the Clone and Inherit repair types.
Option Description
Global Applies the system's default identity.
SID (Windows) Applies the Windows identity.
UNIX Applies the UNIX identity.
Native If a user or group does not have an authoritative UNIX identifier (UID or GID), applies the Windows identity (SID).
17. Optional: Click Start Job.
Access rights are consistently enforced across access protocols on all security models. For example, a user is granted or denied the same
rights to a file whether using SMB or NFS. Clusters running OneFS support a set of global policy settings that enable you to customize the
default access control list (ACL) and UNIX permissions settings.
NOTE: Access control lists (ACLs) are not supported with HDFS.
OneFS is configured with standard UNIX permissions on the file tree. Through Windows Explorer or OneFS administrative tools, you can
give any file or directory an ACL. In addition to Windows domain users and groups, ACLs in OneFS can include local, NIS, and LDAP users
and groups. After a file is given an ACL, the mode bits are no longer enforced and exist only as an estimate of the effective permissions.
NOTE: It is recommended that you configure ACL and UNIX permissions only if you fully understand how they interact
with one another.
Protocol Risk
NFS If a node fails, no data will be lost except in the unlikely event that a
client of that node also crashes before it can reconnect to the
cluster. In that situation, asynchronous writes that have not been
committed to disk will be lost.
SMB If a node fails, asynchronous writes that have not been committed
to disk will be lost.
We recommend that you do not disable write caching, regardless of the protocol that you are writing with. If you are writing to the cluster
with asynchronous writes, and you decide that the risks of data loss are too great, we recommend that you configure your clients to use
synchronous writes, rather than disable write caching.
SMB security
OneFS includes a configurable SMB service to create and manage SMB shares. SMB shares provide Windows clients with network access
to file system resources on the cluster. You can grant permissions to users and groups to perform operations such as reading, writing, and
setting access permissions on SMB shares.
SMB is disabled by default. To enable SMB, use the following command:
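The standard OneFS services command for this is the following (verify against your release):

```shell
isi services smb enable
```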
Related concepts
Managing SMB settings on page 154
Managing SMB shares on page 158
Increased throughput: OneFS can transmit more data to a client through multiple connections over high-speed network adapters or over multiple network adapters.
Connection failure tolerance: When an SMB Multichannel session is established over multiple network connections, the session is not lost if one of the connections has a network fault, which enables the client to continue to work.
Automatic discovery: SMB Multichannel automatically discovers supported hardware configurations on the client that have multiple available network paths and then negotiates and establishes a session over multiple network connections. You are not required to install components, roles, role services, or features.
Aggregated NICs SMB Multichannel establishes multiple network connections to the PowerScale cluster over aggregated
NICs, which results in balanced connections across CPU cores, effective consumption of combined
bandwidth, and connection fault tolerance.
NOTE: The aggregated NIC configuration inherently provides NIC fault tolerance that is
not dependent upon SMB.
SMBv3 encryption
Certain Microsoft Windows and Apple Mac client/server combinations can support data encryption in SMBv3 environments.
You can configure SMBv3 encryption on a per-share, per-zone, or cluster-wide basis. You can allow encrypted and unencrypted clients
access. Globally and for access zones, you can also require that all client connections are encrypted.
If you set encryption settings on a per-zone basis, those settings will override global server settings.
NOTE: Per-zone and per-share encryption settings can only be configured through the OneFS command line interface.
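As an example of the CLI-only configuration, a global encryption setting might be changed as follows; the option name reflects recent OneFS releases and is an assumption to verify against your release:

```shell
# Enable SMBv3 encryption support cluster-wide.
isi smb settings global modify --support-smb3-encryption=yes
```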
NOTE: You can only disable or enable SMB server-side copy for OneFS using the command line interface (CLI).
none Continuously available writes are not handled differently than other writes to the cluster. If you specify
none and a node fails, you may experience data loss without notification. This setting is not
recommended.
write-read- Writes to the share are moved to persistent storage before a success message is returned to the SMB
coherent client that sent the data. This is the default setting.
full Writes to the share are moved to persistent storage before a success message is returned to the SMB
client that sent the data, and prevents OneFS from granting SMB clients write-caching and handle-
caching leases.
For POSIX clients using Samba, you must set the following options in the [global] section of your Samba configuration file
(smb.conf) to enable Samba clients to traverse relative and absolute links:
follow symlinks=yes
wide links=yes
In this case, "wide links" in the smb.conf file refers to absolute links. The default value for both options is no.
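Placed in smb.conf, the settings above look like this:

```ini
[global]
    follow symlinks = yes
    wide links = yes
```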
When you create a symbolic link, it is designated as a file link or directory link. Once the link is set, the designation cannot be changed. You
can format symbolic link paths as either relative or absolute.
To delete symbolic links, use the del command in Windows, or the rm command in a POSIX environment.
Keep in mind that when you delete a symbolic link, the target file or directory still exists. However, when you delete a target file or
directory, a symbolic link continues to exist and still points to the old target, thus becoming a broken link.
Related concepts
SMB security on page 148
Related concepts
SMB security on page 148
Related References
File and directory permission settings on page 156
Snapshots directory settings on page 156
SMB performance settings on page 157
SMB security settings on page 157
Visible at Root Specifies whether to make the .snapshot directory visible at the
root of the share. The default value is Yes.
Related concepts
SMB security on page 148
Create Permission Sets the default source permissions to apply when a file or
directory is created. The default value is Default ACL.
Directory Create Mask Specifies UNIX mode bits that are removed when a directory is
created, restricting permissions. Mask bits are applied before mode
bits are applied.
Directory Create Mode Specifies UNIX mode bits that are added when a directory is
created, enabling permissions. Mode bits are applied after mask
bits are applied.
File Create Mask Specifies UNIX mode bits that are removed when a file is created,
restricting permissions. Mask bits are applied before mode bits are
applied.
File Create Mode Specifies UNIX mode bits that are added when a file is created,
enabling permissions. Mode bits are applied after mask bits are
applied.
Related concepts
SMB security on page 148
Related concepts
SMB security on page 148
Directory Create Mask Specifies UNIX mode bits that are removed when a directory is
created, restricting permissions. Mask bits are applied before mode
Directory Create Mode Specifies UNIX mode bits that are added when a directory is
created, enabling permissions. Mode bits are applied after mask
bits are applied. The default value is None.
File Create Mask Specifies UNIX mode bits that are removed when a file is created,
restricting permissions. Mask bits are applied before mode bits are
applied. The default value is that the user has Read, Write, and
Execute permissions.
File Create Mode Specifies UNIX mode bits that are added when a file is created,
enabling permissions. Mode bits are applied after mask bits are
applied. The default value is that the user has Execute
permissions.
Impersonate Guest Determines guest access to a share. The default value is Never.
Impersonate User Allows all file access to be performed as a specific user. This must
be a fully qualified user name. The default value is No value.
NTFS ACL Allows ACLs to be stored and edited from SMB clients. The default
value is Yes.
Access Based Enumeration Allows access based enumeration only on the files and folders that
the requesting user can access. The default value is No.
HOST ACL The ACL that defines host access. The default value is No value.
Related concepts
SMB security on page 148
Variable Expansion
For example, if a user is in a domain that is named DOMAIN and has a username of user_1, the path /ifs/home/%D/%U expands
to /ifs/home/DOMAIN/user_1.
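A minimal sketch of this substitution (the helper name is hypothetical; OneFS performs the expansion internally):

```python
def expand_share_path(template: str, domain: str, user: str) -> str:
    # %D expands to the domain name, %U to the user name,
    # matching the example above.
    return template.replace("%D", domain).replace("%U", user)

print(expand_share_path("/ifs/home/%D/%U", "DOMAIN", "user_1"))
# /ifs/home/DOMAIN/user_1
```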
7. Select Create SMB share directory if it does not exist to have OneFS create the share directory for the path you
specified if it did not previously exist.
8. Apply the initial ACL settings for the directory. You can modify these settings later.
• To apply a default ACL to the shared directory, select Apply Windows default ACLs.
NOTE: If the Create SMB share directory if it does not exist setting is selected, OneFS creates an ACL with the
equivalent of UNIX 700 mode bit permissions for any directory that is created automatically.
• To maintain the existing permissions on the shared directory, select Do not change existing permissions.
9. Optional: Configure home directory provisioning settings.
• To expand path variables such as %U in the share directory path, select Allow Variable Expansion.
• To automatically create home directories when users access the share for the first time, select Auto-Create Directories. This
option is available only if the Allow Variable Expansion option is enabled.
10. Select Enable continuous availability on the share to allow clients to create persistent handles that can be reclaimed after an
outage such as a network-related disconnection or a server failure. Servers must be using Windows 8 or Windows 2012 R2 (or higher).
11. Click Add User or Group to edit the user and group settings.
The default permissions configuration is read-only access for the well-known Everyone account. Modify settings to allow users to
write to the share.
12. Select Enable file filters in the File Filter Extensions section to enable support for file filtering. Add the file types to be applied to
the file filtering method.
13. Select Enable or Disable in the Encryption section to allow or disallow SMBv3 encrypted clients to connect to the share.
14. Optional: Click Show Advanced Settings to apply advanced SMB share settings if needed.
15. Click Create Share.
Related concepts
SMB security on page 148
Mixed protocol environments on page 147
NFS security
OneFS provides an NFS server so you can share files on your cluster with NFS clients that adhere to the RFC1813 (NFSv3) and RFC3530
(NFSv4) specifications.
NFS is disabled by default. To enable NFS, use the following command:
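A hedged sketch of the enable command, assuming the isi services syntax of recent OneFS releases (verify against your version's CLI reference):

```shell
# Assumed syntax; run on a cluster node as root.
isi services nfs enable
```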
In OneFS, the NFS server is fully optimized as a multi-threaded service running in user space instead of the kernel. This architecture load
balances the NFS service across all nodes of the cluster, providing the stability and scalability necessary to manage up to thousands of
connections across multiple NFS clients.
NFS mounts execute and refresh quickly, and the server constantly monitors fluctuating demands on NFS services and makes
adjustments across all nodes to ensure continuous, reliable performance. Using a built-in process scheduler, OneFS helps ensure fair
allocation of node resources so that no client can seize more than its fair share of NFS services.
The NFS server also supports access zones defined in OneFS, so that clients can access only the exports appropriate to their zone. For
example, if NFS exports are specified for Zone 2, only clients assigned to Zone 2 can access these exports.
To simplify client connections, especially for exports with large path names, the NFS server also supports aliases, which are shortcuts to
mount points that clients can specify directly.
For secure NFS file sharing, OneFS supports NIS and LDAP authentication providers.
Related concepts
Managing the NFS service on page 163
Managing NFS exports on page 165
NFS exports
You can manage individual NFS export rules that define mount-points (paths) available to NFS clients and how the server should perform
with these clients.
In OneFS, you can create, delete, list, view, modify, and reload NFS exports.
NFS export rules are zone-aware. Each export is associated with a zone, can only be mounted by clients on that zone, and can only
expose paths below the zone root. By default, any export command applies to the client's current zone.
Each rule must have at least one path (mount-point), and can include additional paths. You can also specify that all subdirectories of the
given path or paths are mountable. Otherwise, only the specified paths are exported, and child directories are not mountable.
NFS aliases
You can create and manage aliases as shortcuts for directory path names in OneFS. If those path names are defined as NFS exports, NFS
clients can specify the aliases as NFS mount points.
NFS aliases are designed to give functional parity with SMB share names within the context of NFS. Each alias maps a unique name to a
path on the file system. NFS clients can then use the alias name in place of the path when mounting.
Aliases must be formed as top-level Unix path names: a single forward slash followed by a name. For example, you could create an
alias named /q4 that maps to /ifs/data/finance/accounting/winter2015 (a path in OneFS). An NFS client could mount that
directory through either of:
mount cluster_ip:/q4
mount cluster_ip:/ifs/data/finance/accounting/winter2015
Aliases and exports are completely independent. You can create an alias without associating it with an NFS export. Similarly, an NFS
export does not require an alias.
Each alias must point to a valid path on the file system. While this path is absolute, it must point to a location beneath the zone root (/ifs
on the System zone). If the alias points to a path that does not exist on the file system, any client trying to mount the alias would be
denied in the same way as attempting to mount an invalid full pathname.
NFS aliases are zone-aware. By default, an alias applies to the client's current access zone. To change this, you can specify an alternative
access zone as part of creating or modifying an alias.
Each alias can only be used by clients on that zone, and can only apply to paths below the zone root. Alias names are unique per zone, but
the same name can be used in different zones—for example, /home.
When you create an alias in the web administration interface, the alias list displays the status of the alias. Similarly, using the --check
option of the isi nfs aliases command, you can check the status of an NFS alias (status can be: good, illegal path, name conflict,
not exported, or path not found).
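From the CLI, a hedged sketch of the status check (verify the exact syntax against your OneFS release):

```shell
# Assumed syntax: list NFS aliases along with their health status
# (good, illegal path, name conflict, not exported, or path not found).
isi nfs aliases list --check
```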
Related concepts
NFS security on page 162
Related References
NFS global settings on page 164
NFS export performance settings on page 168
NFS export client compatibility settings on page 169
NFS export behavior settings on page 169
Setting Description
NFS Export Service Enables or disables the NFS service. This setting is enabled by
default.
NFSv3 Enables or disables support for NFSv3. This setting is enabled by
default.
NFSv4 Enables or disables support for NFSv4. This setting is disabled by
default.
Related concepts
NFS security on page 162
Related tasks
Configure NFS file sharing on page 164
Related concepts
NFS security on page 162
If you add the same client to more than one list and the client is entered in the same format for each entry, the client
is normalized to a single list in the following order of priority:
• Root Clients
• Always Read-Write Clients
• Always Read-Only Clients
• Clients
○ Use an IPsec tunnel. This option is very secure because it authenticates the devices using secure keys.
○ Configure all of the switch ports to go inactive if they are physically disconnected. In addition, make sure that
the switch ports are MAC limited.
Related concepts
NFS security on page 162
Non-Root User Mapping User mapping is disabled by default. It is recommended that you
specify this setting on a per-export basis, when appropriate.
Failed User Mapping User mapping is disabled by default. It is recommended that you
specify this setting on a per-export basis, when appropriate.
Security Flavors Available options include UNIX (system), the default setting,
Kerberos5, Kerberos5 Integrity, and Kerberos5
Privacy.
Setting Description
Block Size The block size used to calculate block counts for NFSv3 FSSTAT
and NFSv4 GETATTR requests. The default value is 8192 bytes.
Commit Asynchronous If set to yes, allows NFSv3 and NFSv4 COMMIT operations to be
asynchronous. The default value is No.
Directory Transfer Size The preferred directory read transfer size reported to NFSv3 and
NFSv4 clients. The default value is 131072 bytes.
Read Transfer Max Size The maximum read transfer size reported to NFSv3 and NFSv4
clients. The default value is 1048576 bytes.
Read Transfer Multiple The recommended read transfer size multiple reported to NFSv3
and NFSv4 clients. The default value is 512 bytes.
Read Transfer Preferred Size The preferred read transfer size reported to NFSv3 and NFSv4
clients. The default value is 131072 bytes.
Write Datasync Action The action to perform for DATASYNC writes. The default value is
DATASYNC.
Write Filesync Action The action to perform for FILESYNC writes. The default value is
FILESYNC.
Write Filesync Reply The reply to send for FILESYNC writes. The default value is
FILESYNC.
Write Transfer Max Size The maximum write transfer size reported to NFSv3 and NFSv4
clients. The default value is 1048576 bytes.
Write Transfer Multiple The recommended write transfer size reported to NFSv3 and
NFSv4 clients. The default value is 512 bytes.
Write Transfer Preferred The preferred write transfer size reported to NFSv3 and NFSv4
clients. The default value is 524288 bytes.
Write Unstable Action The action to perform for UNSTABLE writes. The default value is
UNSTABLE.
Write Unstable Reply The reply to send for UNSTABLE writes. The default value is
UNSTABLE.
Related concepts
NFS security on page 162
Related tasks
Configure NFS file sharing on page 164
Readdirplus Enable Enables the use of the NFSv3 readdirplus service, whereby a client
can send a request and receive extended information about the
directory and files in the export. The default is Yes.
Return 32 bit File IDs Specifies whether to return 32-bit file IDs to the client. The default is No.
Related concepts
NFS security on page 162
Related tasks
Configure NFS file sharing on page 164
Encoding Overrides the general encoding settings the cluster has for the
export. The default value is DEFAULT.
Map Lookup UID Looks up incoming user identifiers (UIDs) in the local authentication
database. The default value is No.
Symlinks Informs the NFS client that the file system supports symbolic link
file types. The default value is Yes.
Time Delta Sets the server clock granularity. The default value is 1e-9
seconds (0.000000001 second).
Related concepts
NFS security on page 162
Related tasks
Configure NFS file sharing on page 164
FTP
OneFS includes a secure FTP service, vsftpd (Very Secure FTP Daemon), which you can configure for standard FTP
and FTPS file transfers.
FTP is disabled by default. To enable FTP, use the following command:
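A hedged sketch of the enable command, assuming the isi services syntax of recent OneFS releases (verify against your version's CLI reference):

```shell
# Assumed syntax; run on a cluster node as root.
isi services vsftpd enable
```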
Related tasks
Enable and configure FTP file sharing on page 171
Related concepts
FTP on page 171
OneFS supports both HTTP and its secure variant, HTTPS. Each node in the cluster runs an instance of the Apache HTTP Server to
provide HTTP access. You can configure the HTTP service to run in different modes.
Both HTTP and HTTPS are supported for file transfer, but only HTTPS is supported for API calls. The HTTPS-only requirement includes
the web administration interface. OneFS supports a form of the Web Distributed Authoring and Versioning (WebDAV) protocol that
enables users to modify and manage files on remote web servers. OneFS performs distributed authoring, but does not support
versioning and does not perform security checks. You can enable DAV in the web administration interface.
Related tasks
Enable and configure HTTP on page 172
Option Description
Enable HTTP Allows HTTP access for cluster administration and browsing content on the cluster.
Disable HTTP and redirect to the OneFS Web Administration interface Allows only administrative access to the web
administration interface. This is the default setting.
Disable HTTP Closes the HTTP port that is used for file access. Users can continue to access the web administration
interface by specifying the port number in the URL. The default port is 8080.
3. In the Protocol Settings area, in the Document root directory field, type a path name or click Browse to browse to an existing
directory in /ifs.
NOTE: The HTTP server runs as the daemon user and group. To correctly enforce access controls, you must grant
the daemon user or group read access to all files under the document root, and allow the HTTP server to traverse the
document root.
4. In the Authentication Settings area, from the HTTP Authentication list, select an authentication setting:
5. To allow multiple users to manage and modify files collaboratively across remote web servers, select Enable WebDAV.
6. Select Enable access logging.
7. Click Save Changes.
Related concepts
HTTP and HTTPS security on page 172
Related concepts
File filtering in an access zone on page 174
Auditing overview
You can audit system configuration changes and protocol activity on a PowerScale cluster. All audit data is stored and protected in the
cluster file system and organized by audit topics.
Auditing can detect many potential sources of data loss, including fraudulent activities, inappropriate entitlements, and unauthorized
access attempts. Customers in industries such as financial services, health care, life sciences, and media and entertainment, as well as in
governmental agencies, must meet stringent regulatory requirements developed to protect against these sources of data loss.
System configuration auditing tracks and records all configuration events that are handled by the OneFS HTTP API. The process involves
auditing the command-line interface (CLI), web administration interface, and OneFS APIs. When you enable system configuration auditing,
no additional configuration is required. System configuration auditing events are stored in the config audit topic directories.
Protocol auditing tracks and stores activity performed through SMB, NFS, and HDFS protocol connections. You can enable and configure
protocol auditing for one or more access zones in a cluster. If you enable protocol auditing for an access zone, file-access events through
the SMB, NFS, and HDFS protocols are recorded in the protocol audit topic directories. You can specify which events to log in each
access zone. For example, you might want to audit the default set of protocol events in the System access zone but audit only
successful attempts to delete files in a different access zone.
The audit events are logged on the individual nodes where the SMB, NFS, or HDFS client initiated the activity. The events are then stored
in a binary file under /ifs/.ifsvar/audit/logs. The logs automatically roll over to a new file after the size reaches 1 GB. The logs
are then compressed to reduce space.
The protocol audit log file is consumable by auditing applications that support the Common Event Enabler (CEE).
Syslog
Syslog is a protocol that is used to convey certain event notification messages. You can configure a PowerScale cluster to log audit events
and forward them to syslog by using the syslog forwarder.
By default, all protocol events that occur on a particular node are forwarded to the /var/log/audit_protocol.log file, regardless
of the access zone the event originated from. All the config audit events are logged to /var/log/audit_config.log by default.
Syslog is configured with an identity that depends on the type of audit event that is being sent to it. It uses the facility daemon and a
priority level of info. The protocol audit events are logged to syslog with the identity audit_protocol. The config audit events are
logged to syslog with the identity audit_config.
To configure auditing on a PowerScale cluster, you must either be a root user or you must be assigned to an administrative role that
includes auditing privileges (ISI_PRIV_AUDIT).
Syslog forwarding
The syslog forwarder is a daemon that, when enabled, retrieves configuration changes and protocol audit events in an access zone and
forwards the events to syslog. Only user-defined audit success and failure events are eligible to be forwarded to syslog.
An audit syslog forwarder daemon runs on each node and logs audit events to that node's syslog daemon.
NOTE: The CEE does not support forwarding HDFS protocol events to a third-party application.
Different SMB, NFS, and HDFS clients issue different requests, and one particular version of a platform such as Windows or Mac OS X
using SMB might differ from another. Similarly, different versions of an application such as Microsoft Word or Windows Explorer might
make different protocol requests. For example, a client with a Windows Explorer window open might generate many events if an
automatic or manual refresh of that window occurs. Applications issue requests with the logged-in user's credentials, but you should not
assume that all requests are purposeful user actions.
When enabled, OneFS auditing tracks all changes that are made to the files and directories in SMB shares, NFS exports, and HDFS data.
Related tasks
Enable protocol access auditing on page 179
Configure protocol event filters on page 182
Follow these best practices when configuring CEE servers:
• We recommend that you provide one CEE server per node. Extra CEE servers beyond the PowerScale cluster size are used
only when a selected CEE server goes offline.
NOTE: In a global configuration, there should be one CEE server per node.
• Configure the CEE server and enable protocol auditing at the same time. If not, a backlog of events might accumulate causing stale
delivery for a period of time.
You can either receive a global view of the progress of delivery of the protocol audit events or you can receive a logical node number view
of the progress by running the isi audit progress view command.
Event name: create
Example protocol activity:
• Create a file or directory
• Open a file, directory, or share
• Mount a share
• Delete a file
Audited by default: Yes
Can be exported through CEE: Yes
NOTE: While the SMB protocol allows you to set a file for deletion with the create operation, you must enable the
delete event in order for the auditing tool to log the event.
Sample audit log
You can view both configuration audit and protocol audit logs by running the isi_audit_viewer command on any node in the
PowerScale cluster.
You can view protocol access audit logs by running isi_audit_viewer -t protocol. You can view system configuration logs by
running isi_audit_viewer -t config. The following output is an example of a system configuration log:
[0: Fri Jan 23 16:17:03 2015] {"id":"524e0928-a35e-11e4-9d0c-005056302134","timestamp":1422058623106323,"payload":"PAPI config logging started."}
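Each entry carries a bracketed prefix (a sequence number and a human-readable timestamp) followed by a JSON body. A minimal sketch of splitting such an entry, using the sample line above:

```python
import json

entry = ('[0: Fri Jan 23 16:17:03 2015] {"id":"524e0928-'
         'a35e-11e4-9d0c-005056302134","timestamp":1422058623106323,'
         '"payload":"PAPI config logging started."}')

# Separate the "[seq: timestamp]" prefix from the JSON body.
prefix, _, body = entry.partition("] ")
record = json.loads(body)
print(record["payload"])   # PAPI config logging started.
```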
The OneFS CEE export service uses round-robin load balancing when exporting events to multiple CEE servers. Valid URIs start
with http:// and include the port number and path to the CEE server if necessary—for example, http://example.com:12228/cee.
b. In the Storage Cluster Name field, specify the name of the storage cluster to use when forwarding protocol events.
This name value is typically the SmartConnect zone name, but in cases where SmartConnect is not implemented, the value must
match the hostname of the cluster as the third-party application recognizes it. If the field is left blank, events from each node are
filled with the node name (clustername + lnn). This setting is required only if needed by your third-party audit application.
NOTE: Although this step is optional, be aware that a backlog of events will accumulate regardless of whether
CEE servers have been configured. When configured, CEE forwarding begins with the oldest events in the
backlog and moves toward newest events in a first-in-first-out sequence.
Related concepts
Protocol audit events on page 177
Related tasks
Forward protocol access events to syslog on page 180
The following command disables forwarding of audited protocol access events from the zone3 access zone:
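A hedged sketch, assuming the OneFS audit CLI options (verify against your release's CLI reference):

```shell
# Assumed syntax for per-zone protocol audit syslog forwarding.
isi audit settings modify --syslog-forwarding-enabled=false --zone=zone3
```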
Related concepts
Syslog on page 176
Syslog forwarding on page 177
Enable system configuration auditing
OneFS can audit system configuration events on the PowerScale cluster. All configuration events that are handled by the API,
including writes, modifications, and deletions, are tracked and recorded in the config audit topic directories. When you enable or
disable system configuration auditing, no additional configuration is required.
Configuration events are logged to /var/log/audit_config.log only if you have enabled syslog forwarding for config audit.
Configuration change logs are populated in the config topic in the audit back-end store under /ifs/.ifsvar/audit.
NOTE: Configuration events are not forwarded to the Common Event Enabler (CEE).
Related tasks
Forward system configuration changes to syslog on page 181
The following command disables forwarding of system configuration changes to syslog:
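A hedged sketch, assuming the OneFS global audit CLI options (verify against your release's CLI reference):

```shell
# Assumed syntax for config audit syslog forwarding.
isi audit settings global modify --config-syslog-enabled=false
```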
Related concepts
Syslog on page 176
Syslog forwarding on page 177
The following command creates a filter that audits the success of create, close, and delete events in the zone5 access zone:
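A hedged sketch, assuming the OneFS audit CLI options (verify against your release's CLI reference):

```shell
# Assumed syntax: audit only successful create, close, and delete
# events in the zone5 access zone.
isi audit settings modify --audit-success=create,close,delete --zone=zone5
```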
Related References
Supported event types on page 178
Related References
Supported audit tools on page 177
NOTE: You should install a minimum of two servers. We recommend that you install CEE 6.6.0 or later.
b. In the search field, type Common Event Enabler for Windows, and then click the Search icon.
c. Click Common Event Enabler <Version> for Windows, where <Version> is 6.2 or later, and then follow the instructions to open
or save the .iso file.
2. From the .iso file, extract the 32-bit or 64-bit EMC_CEE_Pack executable file that you need.
After the extraction completes, the CEE installation wizard opens.
3. Click Next to proceed to the License Agreement page.
4. Select the I accept... option to accept the terms of the license agreement, and then click Next.
5. On the Customer Information page, type your user name and organization, select your installation preference, and then click Next.
6. On the Setup Type page, select Complete, and then click Next.
7. Click Install to begin the installation.
The progress of the installation is displayed. When the installation is complete, the InstallShield Wizard Completed page appears.
8. Click Finish to exit the wizard.
9. Restart the system.
Related concepts
Integrating with the Common Event Enabler on page 182
NOTE:
• The HttpPort value must match the port in the CEE URIs that you specify during OneFS protocol audit
configuration.
• The EndPoint value must be in the format <EndPoint_Name>@<IP_Address>. You can specify multiple endpoints
by separating each value with a semicolon (;).
Related concepts
Integrating with the Common Event Enabler on page 182
Configure CEE servers to deliver protocol audit events
You can configure CEE servers with OneFS to deliver protocol audit events by adding the URI of each server to the OneFS configuration.
• Run the isi audit settings global modify command with the --cee-server-uris option to add the URIs of the CEE
servers to the OneFS configuration.
The following command adds the URIs of three CEE servers to the OneFS configuration:
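A hedged sketch using the --cee-server-uris option named above (the hostnames are placeholders; verify the syntax against your release):

```shell
# Assumed syntax: register three CEE servers with placeholder hostnames.
isi audit settings global modify --cee-server-uris=http://cee1.example.com:12228/cee,http://cee2.example.com:12228/cee,http://cee3.example.com:12228/cee
```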
View the rate of delivery of protocol audit events to the
CEE server
You can view the rate of delivery of protocol audit events to the CEE server.
• Run the isi statistics query command to view the current rate of delivery of the protocol audit events to the CEE server on
a node.
The following command displays the current rate of delivery of the protocol audit events to the CEE server:
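A hedged sketch; the statistics key name is an assumption, so confirm it with isi statistics list keys on your release:

```shell
# Assumed key name for the CEE export rate on the local node.
isi statistics query current --keys=node.audit.cee.export.rate
```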
13
Snapshots
This section contains the following topics:
Topics:
• Snapshots overview
• Data protection with SnapshotIQ
• Snapshot disk-space usage
• Snapshot schedules
• Snapshot aliases
• File and directory restoration
• Best practices for creating snapshots
• Best practices for creating snapshot schedules
• File clones
• Snapshot locks
• Snapshot reserve
• SnapshotIQ license functionality
• Creating snapshots with SnapshotIQ
• Managing snapshots
• Restoring snapshot data
• Managing snapshot schedules
• Managing snapshot aliases
• Managing with snapshot locks
• Configure SnapshotIQ settings
• Set the snapshot reserve
• Managing changelists
Snapshots overview
A OneFS snapshot is a logical pointer to data that is stored on a cluster at a specific point in time.
A snapshot references a directory on a cluster, including all data stored in the directory and its subdirectories. If the data referenced by a
snapshot is modified, the snapshot stores a physical copy of the data that was modified. Snapshots are created according to user
specifications or are automatically generated by OneFS to facilitate system operations.
To create and manage snapshots, you must activate a SnapshotIQ license on the cluster. Some applications must generate snapshots to
function but do not require you to activate a SnapshotIQ license; by default, these snapshots are automatically deleted when OneFS no
longer needs them. However, if you activate a SnapshotIQ license, you can retain these snapshots. You can view snapshots generated by
other modules without activating a SnapshotIQ license.
You can identify and locate snapshots by name or ID. A snapshot name is specified by a user and assigned to the virtual directory that
contains the snapshot. A snapshot ID is a numerical identifier that OneFS automatically assigns to a snapshot.
Snapshots do not protect against hardware or file-system issues. Snapshots reference data that is stored on a cluster, so if the data on
the cluster becomes unavailable, the snapshots will also be unavailable. Because of this, it is recommended that you back up your data to
separate physical devices in addition to creating snapshots.
Snapshot schedules
You can automatically generate snapshots according to a snapshot schedule.
With snapshot schedules, you can periodically generate snapshots of a directory without having to manually create a snapshot every time.
You can also assign an expiration period that determines when SnapshotIQ deletes each automatically generated snapshot.
Related concepts
Managing snapshot schedules on page 198
Best practices for creating snapshot schedules on page 188
Related tasks
Create a snapshot schedule on page 191
Snapshot aliases
A snapshot alias is a logical pointer to a snapshot. If you specify an alias for a snapshot schedule, the alias will always point to the most
recent snapshot generated by that schedule. Assigning a snapshot alias allows you to quickly identify and access the most recent
snapshot generated according to a snapshot schedule.
If you allow clients to access snapshots through an alias, you can reassign the alias to redirect clients to other snapshots. In addition to
assigning snapshot aliases to snapshots, you can also assign snapshot aliases to the live version of the file system. This can be useful if
clients are accessing snapshots through a snapshot alias, and you want to redirect the clients to the live version of the file system.
NOTE: If you move a directory, you cannot revert snapshots of the directory that were taken before the directory was
moved. Deleting and then re-creating a directory has the same effect as a move. You cannot revert snapshots of a
directory that were taken before the directory was deleted and then re-created.
NOTE: Snapshot schedules with a frequency of "Every Minute" are not recommended.
The following table describes snapshot schedules that follow snapshot best practices:
Table 4. Snapshot schedule configurations (continued)
Deletion type: (frequently modified data)
• Snapshot frequency: Every day, at 12:00 AM; snapshot expiration: 1 week
• Snapshot frequency: Every week, Saturday at 12:00 AM; snapshot expiration: 1 month
• Snapshot frequency: Every month, the first Saturday of the month at 12:00 AM; snapshot expiration: 3 months
Related concepts
Snapshot schedules on page 187
File clones
SnapshotIQ enables you to create file clones that share blocks with existing files in order to save space on the cluster. A file clone usually
consumes less space and takes less time to create than a file copy. Although you can clone files from snapshots, clones are primarily used
internally by OneFS.
The blocks that are shared between a clone and cloned file are contained in a hidden file called a shadow store. Immediately after a clone is
created, all data originally contained in the cloned file is transferred to a shadow store. Because both files reference all blocks from the
shadow store, the two files consume no more space than the original file; the clone does not take up any additional space on the cluster.
However, if the cloned file or clone is modified, the file and clone will share only blocks that are common to both of them, and the
modified, unshared blocks will occupy additional space on the cluster.
Over time, the shared blocks contained in the shadow store might become useless if neither the file nor clone references the blocks. The
cluster routinely deletes blocks that are no longer needed. You can force the cluster to delete unused blocks at any time by running the
ShadowStoreDelete job.
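A hedged sketch of starting that job manually, assuming the isi job syntax of recent OneFS releases (verify against your CLI reference):

```shell
# Assumed syntax: queue the ShadowStoreDelete job immediately.
isi job jobs start ShadowStoreDelete
```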
Clones cannot contain alternate data streams (ADS). If you clone a file that contains alternate data streams, the clone will not contain the
alternate data streams.
Related tasks
Clone a file from a snapshot on page 198
Shadow-store considerations
Shadow stores are hidden files that are referenced by cloned and deduplicated files. Files that reference shadow stores behave differently
than other files.
• Reading shadow-store references might be slower than reading data directly. Reading noncached shadow-store references is slower
than reading noncached data. Reading cached shadow-store references takes no more time than reading cached data.
• When files that reference shadow stores are replicated to another PowerScale cluster or backed up to a Network Data Management
Protocol (NDMP) backup device, the shadow stores are not transferred to the target PowerScale cluster or backup device. The files
are transferred as if they contained the data that they reference from shadow stores. On the target PowerScale cluster or backup
device, the files consume the same amount of space as if they had not referenced shadow stores.
• When OneFS creates a shadow store, OneFS assigns the shadow store to a storage pool of a file that references the shadow store. If
you delete the storage pool that a shadow store resides on, the shadow store is moved to a pool that contains another file that
references the shadow store.
• OneFS does not delete a shadow-store block immediately after the last reference to the block is deleted. Instead, OneFS waits until
the ShadowStoreDelete job is run to delete the unreferenced block. If many unreferenced blocks exist on the cluster, OneFS might
report a negative deduplication savings until the ShadowStoreDelete job is run.
• A shadow store is protected at least as much as the most protected file that references it. For example, if one file that references a
shadow store resides in a storage pool with +2 protection and another file that references the shadow store resides in a storage pool
with +3 protection, the shadow store is protected at +3.
• Quotas account for files that reference shadow stores as if the files contained the data that is referenced from shadow stores. From
the perspective of a quota, shadow-store references do not exist. However, if a quota includes data protection overhead, the quota
does not account for the data protection overhead of shadow stores.
Snapshots 189
Snapshot locks
A snapshot lock prevents a snapshot from being deleted. If a snapshot has one or more locks applied to it, the snapshot cannot be deleted
and is referred to as a locked snapshot. If the duration period of a locked snapshot expires, OneFS will not delete the snapshot until all
locks on the snapshot have been deleted.
OneFS applies snapshot locks to ensure that snapshots generated by OneFS applications are not deleted prematurely. For this reason, it is
recommended that you do not delete snapshot locks or modify the duration period of snapshot locks.
A limited number of locks can be applied to a snapshot at a time. If you create snapshot locks, the limit for a snapshot might be reached,
and OneFS could be unable to apply a snapshot lock when necessary. For this reason, it is recommended that you do not create snapshot
locks.
Related concepts
Managing with snapshot locks on page 200
Related tasks
Create a snapshot lock on page 201
Snapshot reserve
The snapshot reserve enables you to set aside a minimum percentage of the cluster storage capacity specifically for snapshots. If
specified, all other OneFS operations are unable to access the percentage of cluster capacity that is reserved for snapshots.
NOTE: The snapshot reserve does not limit the amount of space that snapshots can consume on the cluster. Snapshots
can consume a greater percentage of storage capacity than is specified by the snapshot reserve. It is recommended that you do
not specify a snapshot reserve.
Related tasks
Set the snapshot reserve on page 203
If a SnapshotIQ license becomes inactive, you will no longer be able to create new snapshots, all snapshot schedules will be disabled,
and you will not be able to modify snapshots or snapshot settings. However, you will still be able to delete snapshots and access data
contained in snapshots.
Creating snapshots with SnapshotIQ
To create snapshots, you must configure the SnapshotIQ license on the cluster. You can create snapshots either by creating a snapshot
schedule or manually generating an individual snapshot.
Manual snapshots are useful if you want to create a snapshot immediately, or at a time that is not specified in a snapshot schedule. For
example, if you plan to make changes to your file system, but are unsure of the consequences, you can capture the current state of the
file system in a snapshot before you make the change.
Before creating snapshots, consider that reverting a snapshot requires that a SnapRevert domain exist for the directory that is being
reverted. If you intend to revert snapshots for a directory, it is recommended that you create SnapRevert domains for those directories
while the directories are empty. Creating a domain for a directory that contains less data takes less time.
WeeklyBackup_%m-%d-%Y_%H:%M
WeeklyBackup_07-13-2014_14:21
5. In the Path field, specify the directory that you want to include in snapshots that are generated according to this schedule.
6. From the Schedule list, select how often you want to generate snapshots according to the schedule.
• To generate snapshots every day, or skip generating snapshots for a specified number of days, select Daily, and specify how often you want to generate snapshots.
• To generate snapshots on specific days of the week, and optionally skip generating snapshots for a specified number of weeks, select Weekly, and specify how often you want to generate snapshots.
• To generate snapshots on specific days of the month, and optionally skip generating snapshots for a specified number of months, select Monthly, and specify how often you want to generate snapshots.
• To generate snapshots on specific days of the year, select Yearly, and specify how often you want to generate snapshots.
NOTE: A snapshot schedule cannot span multiple days. For example, you cannot specify to begin generating
snapshots at 5:00 PM Monday and end at 5:00 AM Tuesday. To continuously generate snapshots for a period greater
than a day, you must create two snapshot schedules. For example, to generate snapshots from 5:00 PM Monday to
5:00 AM Tuesday, create one schedule that generates snapshots from 5:00 PM to 11:59 PM on Monday, and another
schedule that generates snapshots from 12:00 AM to 5:00 AM on Tuesday.
7. Optional: To assign an alternative name to the most recent snapshot that is generated by the schedule, specify a snapshot alias.
a. Next to Create an Alias, click Yes.
b. To modify the default snapshot alias name, in the Alias Name field, type an alternative name for the snapshot.
8. Optional: To specify a length of time that snapshots that are generated according to the schedule are kept before they are deleted by
OneFS, specify an expiration period.
a. Next to Snapshot Expiration, select Snapshots expire.
b. Next to Snapshots expire, specify how long you want to retain the snapshots that are generated according to the schedule.
9. Click Create Schedule.
Related concepts
Best practices for creating snapshot schedules on page 188
Related References
Snapshot naming patterns on page 192
Create a snapshot
You can create a snapshot of a directory.
1. Click Data Protection > SnapshotIQ > Snapshots.
2. Click Create a Snapshot.
The Create a Snapshot dialog box appears.
3. Optional: In the Snapshot Name field, type a name for the snapshot.
4. In the Path field, specify the directory that you want the snapshot to contain.
5. Optional: To create an alternative name for the snapshot, select Create a snapshot alias, and then type the alias name.
6. Optional: To assign a time when OneFS will automatically delete the snapshot, specify an expiration period.
a. Select Snapshot Expires on.
b. In the calendar, specify the day that you want the snapshot to be automatically deleted.
7. Click Create Snapshot.
Related References
Snapshot information on page 196
Variable Description
%A The day of the week.
%a The abbreviated day of the week. For example, if the snapshot is
generated on a Sunday, %a is replaced with Sun.
%C The first two digits of the year. For example, if the snapshot is
created in 2014, %C is replaced with 20.
%{PolicyName} The name of the replication policy that the snapshot was created
for. This variable is valid only if you are specifying a snapshot
naming pattern for a replication policy.
%R The time. This variable is equivalent to specifying %H:%M.
%{SrcCluster} The name of the source cluster of the replication policy that the
snapshot was created for. This variable is valid only if you are
specifying a snapshot naming pattern for a replication policy.
%T The time. This variable is equivalent to specifying %H:%M:%S.
%V The two-digit numerical week of the year that the snapshot was
created in. Numbers range from 01 to 53. The first day of the
week is calculated as Monday. If the week of January 1 is four or
more days in length, then that week is counted as the first week of
the year.
%v The day that the snapshot was created. This variable is equivalent
to specifying %e-%b-%Y.
%W The two-digit numerical week of the year that the snapshot was
created in. Numbers range from 00 to 53. The first day of the
week is calculated as Monday.
%w The numerical day of the week that the snapshot was created on.
Numbers range from 0 to 6. The first day of the week is calculated
as Sunday. For example, if the snapshot was created on Sunday,
%w is replaced with 0.
%X The time that the snapshot was created. This variable is equivalent
to specifying %H:%M:%S.
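These variables follow the same conventions as the POSIX strftime format codes, so you can preview how a naming pattern expands before using it. The following Python sketch is an illustration only (OneFS performs the expansion itself); the creation time is a hypothetical value:

```python
from datetime import datetime

# Hypothetical snapshot creation time, chosen to match the example above.
created = datetime(2014, 7, 13, 14, 21, 0)

# Expand the pattern WeeklyBackup_%m-%d-%Y_%H:%M as OneFS would.
name = created.strftime("WeeklyBackup_%m-%d-%Y_%H:%M")
print(name)  # WeeklyBackup_07-13-2014_14:21
```

Note that a few variables in the table, such as %{PolicyName} and %{SrcCluster}, are OneFS-specific and have no strftime equivalent.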
Managing snapshots
You can delete and view snapshots. You can also modify the name, duration period, and snapshot alias of an existing snapshot. However,
you cannot modify the data contained in a snapshot; the data contained in a snapshot is read-only.
Delete snapshots
You can delete a snapshot if you no longer want to access the data that is contained in the snapshot.
OneFS frees disk space that is occupied by deleted snapshots when the SnapshotDelete job is run. Also, if you delete a snapshot that
contains clones or cloned files, data in a shadow store might no longer be referenced by files on the cluster; OneFS deletes unreferenced
data in a shadow store when the ShadowStoreDelete job is run. OneFS routinely runs both the ShadowStoreDelete and SnapshotDelete
jobs. However, you can also manually run the jobs at any time.
1. Click Data Protection > SnapshotIQ > Snapshots.
2. In the list of snapshots, select the snapshot or snapshots that you want to delete.
a. From the Select an action list, select Delete.
b. In the confirmation dialog box, click Delete.
3. Optional: To increase the speed at which deleted snapshot data is freed on the cluster, run the SnapshotDelete job.
a. Click Cluster Management > Job Operations > Job Types.
b. In the Job Types area, locate SnapshotDelete, and then click Start Job.
The Start a Job dialog box appears.
c. Click Start Job.
4. Optional: To increase the speed at which deleted data that is shared between deduplicated and cloned files is freed on the cluster, run
the ShadowStoreDelete job.
Run the ShadowStoreDelete job only after you run the SnapshotDelete job.
a. Click Cluster Management > Job Operations > Job Types.
b. In the Job Types area, locate ShadowStoreDelete, and then click Start Job.
The Start a Job dialog box appears.
c. Click Start Job.
Related concepts
Snapshot disk-space usage on page 187
Reducing snapshot disk-space usage on page 195
Best practices for creating snapshots on page 188
Modify snapshot attributes
You can modify the name and expiration date of a snapshot.
1. Click Data Protection > SnapshotIQ > Snapshots.
2. In the list of snapshots, locate the snapshot that you want to modify, and then click View/Edit.
The View Snapshot Details dialog box appears.
3. Click Edit.
The Edit Snapshot Details dialog box appears.
4. Modify the attributes that you want to change.
5. Click Save Changes.
Related References
Snapshot information on page 196
View snapshots
You can view a list of snapshots.
Click Data Protection > SnapshotIQ > Snapshots.
The snapshots are listed in the Snapshots table.
Related References
Snapshot information on page 196
Snapshot information
You can view information about snapshots, including the total amount of space consumed by all snapshots.
The following information is displayed in the Saved Snapshots area:
Saved Snapshots: Indicates the total number of snapshots that exist on the cluster.
Snapshots Pending Deletion: Indicates the total number of snapshots that were deleted on the cluster since the last snapshot delete job was run. The space that is consumed by the deleted snapshots is not freed until the snapshot delete job is run again.
Snapshot Aliases: Indicates the total number of snapshot aliases that exist on the cluster.
Capacity Used by Snapshots: Indicates the total amount of space that is consumed by all snapshots.
Related tasks
Create a snapshot on page 192
Modify snapshot attributes on page 196
Assign a snapshot alias to a snapshot on page 196
View snapshots on page 196
Revert a snapshot
You can revert a directory back to the state it was in when a snapshot was taken. Before OneFS reverts a snapshot, OneFS generates a
snapshot of the directory being reverted, so that data that is stored in the directory is not lost. OneFS does not delete a snapshot after
reverting it.
• Create a SnapRevert domain for the directory.
• Create a snapshot of a directory.
1. Click Cluster Management > Job Operations > Job Types.
2. In the Job Types table, locate the SnapRevert job, and then click Start Job.
The Start a Job dialog box appears.
3. Optional: To specify a priority for the job, from the Priority list, select a priority.
Lower values indicate a higher priority. If you do not specify a priority, the job is assigned the default snapshot revert priority.
4. Optional: To specify the amount of cluster resources the job is allowed to consume, from the Impact Policy list, select an impact
policy.
If you do not specify a policy, the job is assigned the default snapshot revert policy.
5. In the Snapshot ID to revert field, type the name or ID of the snapshot that you want to revert, and then click Start Job.
Related tasks
Create a SnapRevert domain on page 191
Restore a file or directory through a UNIX command line
You can restore a file or directory from a snapshot through a UNIX command line.
1. Open a connection to the cluster through a UNIX command line.
2. Optional: To view the contents of the snapshot that you want to restore a file or directory from, run the ls command for a directory
contained in the snapshot's root directory.
For example, the following command displays the contents of the /archive directory contained in Snapshot2014Jun04:
ls /ifs/.snapshot/Snapshot2014Jun04/archive
3. Restore a file or directory by copying it out of the snapshot with the cp command.
For example, the following command copies file1 from Snapshot2014Jun04 to /ifs/archive:
cp -a /ifs/.snapshot/Snapshot2014Jun04/archive/file1 \
/ifs/archive/file1_copy
Clone a file from a snapshot
You can clone a file from a snapshot by using the cp command.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. To view the contents of the snapshot that you want to restore a file or directory from, run the ls command for a subdirectory of the
snapshot's root directory.
For example, the following command displays the contents of the /archive directory contained in Snapshot2014Jun04:
ls /ifs/.snapshot/Snapshot2014Jun04/archive
3. Clone a file from the snapshot by running the cp command with the -c option.
For example, the following command clones test.txt from Snapshot2014Jun04:
cp -c /ifs/.snapshot/Snapshot2014Jun04/archive/test.txt \
/ifs/archive/test_clone.text
Related concepts
File clones on page 189
Related concepts
Snapshot schedules on page 187
Best practices for creating snapshot schedules on page 188
Related References
Snapshot information on page 196
If a snapshot alias references the live version of the file system, the Target ID is -1.
3. Optional: View information about a specific snapshot alias by running the isi snapshot aliases view command.
For example, the following command displays information about latestWeekly:
isi snapshot aliases view latestWeekly
NOTE: It is recommended that you do not create, delete, or modify snapshot locks unless you are instructed to do so by
PowerScale Technical Support.
Deleting a snapshot lock that was created by OneFS might result in data loss. If you delete a snapshot lock that was created by OneFS, it
is possible that the corresponding snapshot might be deleted while it is still in use by OneFS. If OneFS cannot access a snapshot that is
necessary for an operation, the operation will malfunction and data loss might result. Modifying the expiration date of a snapshot lock
created by OneFS can also result in data loss because the corresponding snapshot can be deleted prematurely.
Related concepts
Snapshot locks on page 190
Create a snapshot lock
You can create snapshot locks that prevent snapshots from being deleted.
Although you can prevent a snapshot from being automatically deleted by creating a snapshot lock, it is recommended that you do not
create snapshot locks. To prevent a snapshot from being automatically deleted, it is recommended that you extend the duration period of
the snapshot.
This procedure is available only through the command-line interface (CLI).
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Create a snapshot lock by running the isi snapshot locks create command.
For example, the following command applies a snapshot lock to SnapshotApril2016, sets the lock to expire in one month, and adds a
description of "Maintenance Lock":
isi snapshot locks create SnapshotApril2016 --expires 1M --comment "Maintenance Lock"
Related concepts
Snapshot locks on page 190
Related References
Snapshot lock information on page 202
CAUTION: It is recommended that you do not modify the expiration dates of snapshot locks.
Related concepts
Snapshot locks on page 190
Related References
Snapshot lock information on page 202
The system prompts you to confirm that you want to delete the snapshot lock.
3. Type yes and then press ENTER.
Related concepts
Snapshot locks on page 190
Related References
SnapshotIQ settings on page 202
SnapshotIQ settings
SnapshotIQ settings determine how snapshots behave and can be accessed.
The following SnapshotIQ settings can be configured:
Root Directory Visible: Determines whether snapshot directories are visible through SMB.
Sub-directories Accessible: Determines whether snapshot subdirectories are accessible through SMB.
Related tasks
Configure SnapshotIQ settings on page 202
Related concepts
Snapshot reserve on page 190
Managing changelists
You can create and view changelists that describe the differences between two snapshots. You can create a changelist for any two
snapshots that have a common root directory.
Changelists are most commonly accessed by applications through the OneFS Platform API. For example, a custom application could
regularly compare the two most recent snapshots of a critical directory path to determine whether to back up the directory, or to trigger
other actions.
Create a changelist
You can create a changelist to view the differences between two snapshots.
1. Optional: Record the IDs of the snapshots.
a. Click Data Protection > SnapshotIQ > Snapshots.
b. In the row of each snapshot that you want to create a changelist for, click View Details, and record the ID of the snapshot.
2. Click Cluster Management > Job Operations > Job Types.
3. In the Job Types area, in the ChangelistCreate row, from the Actions column, select Start Job.
4. In the Older Snapshot ID field, type the ID of the older snapshot.
5. In the Newer Snapshot ID field, type the ID of the newer snapshot.
6. Click Start Job.
Delete a changelist
You can delete a changelist.
Run the isi_changelist_mod command with the -k option.
The following command deletes changelist 22_24:
isi_changelist_mod -k 22_24
View a changelist
You can view a changelist that describes the differences between two snapshots. This procedure is available only through the
command-line interface (CLI).
1. View the IDs of changelists by running the following command:
isi_changelist_mod -l
Changelist IDs include the IDs of both snapshots used to create the changelist. If OneFS is still in the process of creating a changelist,
inprog is appended to the changelist ID.
2. Optional: View all contents of a changelist by running the isi_changelist_mod command with the -a option.
The following command displays the contents of a changelist named 2_6:
isi_changelist_mod -a 2_6
Changelist information
You can view the information contained in changelists.
NOTE: The information contained in changelists is meant to be consumed by applications through the OneFS Platform
API.
The following information is displayed for each item in the changelist when you run the isi_changelist_mod command:
01 The item was added or moved under the root directory of the snapshots.
02 The item was removed or moved out of the root directory of the snapshots.
04 The path of the item was changed without being removed from the root directory of the
snapshot.
10 The item either currently contains or at one time contained Alternate Data Streams
(ADS).
20 The item is an ADS.
40 The item has hardlinks.
NOTE: These values are added together in the output. For example, if an ADS was added, the code
would be cl_flags=021.
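The cl_flags values are octal bit flags that combine as a bitwise OR, which is why an added ADS reports 021 in the example above. A short Python illustration (the flag names are hypothetical; only the values come from the table):

```python
# Changelist flag values from the table, interpreted as octal bit flags.
ADDED    = 0o01  # added or moved under the root directory of the snapshots
REMOVED  = 0o02  # removed or moved out of the root directory of the snapshots
PATH     = 0o04  # path changed without removal from the root directory
HAS_ADS  = 0o10  # contains or contained alternate data streams
IS_ADS   = 0o20  # the item is an ADS
HARDLINK = 0o40  # the item has hardlinks

# An ADS that was added: 01 combined with 020 yields 021.
flags = ADDED | IS_ADS
print(f"cl_flags={flags:03o}")  # cl_flags=021
```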
14
Deduplication with SmartDedupe
This section contains the following topics:
• Deduplication overview
• Deduplication jobs
• Data replication and backup with deduplication
• Snapshots with deduplication
• Deduplication considerations
• Shadow-store considerations
• SmartDedupe license functionality
• Managing deduplication
Deduplication overview
SmartDedupe enables you to save storage space on your cluster by reducing redundant data. Deduplication maximizes the efficiency of
your cluster by decreasing the amount of storage that is required to store multiple files with identical blocks.
The SmartDedupe software module deduplicates data by scanning a PowerScale cluster for identical data blocks. Each block is 8 KB. If
SmartDedupe finds duplicate blocks, SmartDedupe moves a single copy of the blocks to a hidden file called a shadow store. SmartDedupe
then deletes the duplicate blocks from the original files and replaces the blocks with pointers to the shadow store.
Deduplication is applied at the directory level, targeting all files and directories underneath one or more root directories. SmartDedupe not
only deduplicates identical blocks in different files, it also deduplicates identical blocks within a single file.
Before you deduplicate a directory, you can get an estimate of the amount of space you can expect to save. After you begin deduplicating
a directory, you can monitor the amount of space that deduplication is saving in real time.
For two or more files to be deduplicated, the files must have the same disk pool policy ID and protection policy. If either or both of these
attributes differ between two or more identical files, or files with identical 8K blocks, the files are not deduplicated.
Because it is possible to specify protection policies on a per-file or per-directory basis, deduplication can be further affected. Consider the
example of two files, /ifs/data/projects/alpha/logo.jpg and /ifs/data/projects/beta/logo.jpg. Even if the
logo.jpg files in both directories are identical, they would not be deduplicated if they have different protection policies.
If you have activated a SmartPools license on your cluster, you can also specify custom file pool policies. These file pool policies might
result in identical files or files with identical 8K blocks being stored in different node pools. Those files would have different disk pool policy
IDs and would not be deduplicated.
SmartDedupe also does not deduplicate files that are 32 KB or smaller, because doing so would consume more cluster resources than the
storage savings are worth. The default size of a shadow store is 2 GB. Each shadow store can contain up to 256,000 blocks. Each block in
a shadow store can be referenced up to 32,000 times.
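Given the stated defaults, the block count of a shadow store follows directly from its size, as this back-of-the-envelope calculation shows (an illustration of the arithmetic, not an official formula):

```python
SHADOW_STORE_BYTES = 2 * 1024**3  # default shadow store size: 2 GB
BLOCK_BYTES = 8 * 1024            # OneFS block size: 8 KB

blocks = SHADOW_STORE_BYTES // BLOCK_BYTES
print(blocks)  # 262144, roughly the 256,000 blocks cited above
```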
Deduplication jobs
A deduplication system maintenance job deduplicates data on a cluster. You can monitor and control deduplication jobs as you would any
other maintenance job on the cluster. Although the overall performance impact of deduplication is minimal, the deduplication job consumes
400 MB of memory per node.
When a deduplication job runs for the first time on a cluster, SmartDedupe samples blocks from each file and creates index entries for
those blocks. If the index entries of two blocks match, SmartDedupe scans the blocks that are next to the matching pair and then
deduplicates all duplicate blocks. After a deduplication job samples a file once, new deduplication jobs will not sample the file again until the
file is modified.
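The idea of indexing block hashes and sharing duplicate blocks can be illustrated with a simplified sketch. This is not the OneFS implementation (OneFS samples blocks rather than hashing every one, and its index and shadow stores are internal structures); it only shows the underlying technique:

```python
import hashlib

BLOCK = 8192  # 8 KB blocks, the size used by SmartDedupe

def dedupe(files):
    """Return a shared block store and per-file block references."""
    index = {}  # block hash -> position in the shared store
    store = []  # the "shadow store": one copy of each unique block
    refs = {}   # file name -> list of store positions
    for name, data in files.items():
        positions = []
        for i in range(0, len(data), BLOCK):
            block = data[i:i + BLOCK]
            key = hashlib.sha256(block).hexdigest()
            if key not in index:          # first occurrence: store the block
                index[key] = len(store)
                store.append(block)
            positions.append(index[key])  # duplicates share the stored copy
        refs[name] = positions
    return store, refs

# Two identical 16 KB files: only two unique blocks are stored,
# and both files reference the same copies.
data = b"x" * BLOCK + b"y" * BLOCK
store, refs = dedupe({"a.bin": data, "b.bin": data})
print(len(store))  # 2
```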
The first deduplication job that you run can take longer to complete than subsequent deduplication jobs. The first deduplication job must
scan all files under the specified directories to generate the initial index. If subsequent deduplication jobs take a long time to complete, the
most likely cause is that a large amount of data is being deduplicated. However, it can also indicate that users are storing large amounts of
data on the cluster.
Related tasks
Assess deduplication space savings on page 207
Specify deduplication settings on page 208
Deduplication considerations
Deduplication can significantly increase the efficiency at which you store data. However, the effect of deduplication varies depending on
the cluster.
You can reduce redundancy on a cluster by running SmartDedupe. Deduplication creates links that can impact the speed at which you can
read from and write to files. In particular, sequentially reading chunks smaller than 512 KB of a deduplicated file can be significantly slower
than reading the same small, sequential chunks of a non-deduplicated file. This performance degradation applies only if you are reading
non-cached data. For cached data, the performance for deduplicated files is potentially better than non-deduplicated files. If you stream
chunks larger than 512 KB, deduplication does not significantly impact the read performance of the file. If you intend to stream 8 KB or
less of each file at a time, and you do not plan on concurrently streaming the files, it is recommended that you do not deduplicate the files.
Shadow-store considerations
Shadow stores are hidden files that are referenced by cloned and deduplicated files. Files that reference shadow stores behave differently
than other files.
• Reading shadow-store references might be slower than reading data directly. Reading noncached shadow-store references is slower
than reading noncached data. Reading cached shadow-store references takes no more time than reading cached data.
• When files that reference shadow stores are replicated to another PowerScale cluster or backed up to a Network Data Management
Protocol (NDMP) backup device, the shadow stores are not transferred to the target PowerScale cluster or backup device. The files
are transferred as if they contained the data that they reference from shadow stores. On the target PowerScale cluster or backup
device, the files consume the same amount of space as if they had not referenced shadow stores.
• When OneFS creates a shadow store, OneFS assigns the shadow store to a storage pool of a file that references the shadow store. If
you delete the storage pool that a shadow store resides on, the shadow store is moved to a pool that contains another file that
references the shadow store.
• OneFS does not delete a shadow-store block immediately after the last reference to the block is deleted. Instead, OneFS waits until
the ShadowStoreDelete job is run to delete the unreferenced block. If many unreferenced blocks exist on the cluster, OneFS might
report a negative deduplication savings until the ShadowStoreDelete job is run.
• A shadow store is protected at least as much as the most protected file that references it. For example, if one file that references a
shadow store resides in a storage pool with +2 protection and another file that references the shadow store resides in a storage pool
with +3 protection, the shadow store is protected at +3.
• Quotas account for files that reference shadow stores as if the files contained the data that is referenced from shadow stores. From
the perspective of a quota, shadow-store references do not exist. However, if a quota includes data protection overhead, the quota
does not account for the data protection overhead of shadow stores.
Managing deduplication
You can manage deduplication on a cluster by first assessing how much space you can save by deduplicating individual directories. After
you determine which directories are worth deduplicating, you can configure SmartDedupe to deduplicate those directories specifically. You
can then monitor the actual amount of disk space you are saving.
Related concepts
Deduplication jobs on page 205
Related concepts
Deduplication jobs on page 205
Enable this job type: Select to enable the job type.
Default Priority: Set the job priority as compared to other system maintenance jobs that run at the same time. Job priority is denoted as 1-10, with 1 being the highest and 10 being the lowest. The default value is 4.
Default Impact Policy: Select the amount of system resources that the job uses compared to other system maintenance jobs that run at the same time. Select a policy value of HIGH, MEDIUM, LOW, or OFF-HOURS. The default is LOW.
Schedule: Specify whether the job must be manually started or runs on a regularly scheduled basis. When you click Scheduled, you can specify a daily, weekly, monthly, or yearly schedule. For most clusters, it is recommended that you run the Dedupe job once every 10 days.
5. Click Save Changes, and then click Close.
The new job controls are saved and the dialog box closes.
6. Click Start Job.
The Dedupe job runs with the new job controls.
Related References
Deduplication information on page 210
Related References
Deduplication job report information on page 209
Related tasks
View a deduplication report on page 209
Space Savings: The total amount of physical disk space saved by deduplication, including protection overhead and metadata. For example, if you have three identical files that are all 5 GB, the estimated physical saving would be greater than 10 GB, because deduplication saved space that would have been occupied by file metadata and protection overhead.
Deduplicated data: The amount of space on the cluster occupied by directories that were deduplicated.
Other data: The amount of space on the cluster occupied by directories that were not deduplicated.
Related tasks
View deduplication space savings on page 209
Related concepts
Creating replication policies on page 220
To configure static NAT, you would need to edit the /etc/local/hosts file on all six nodes, and associate them with their
counterparts by adding the appropriate NAT address and node name. For example, in the /etc/local/hosts file on the three nodes of
the source cluster, the entries would look like:
10.1.2.11 target-1
10.1.2.12 target-2
10.1.2.13 target-3
Similarly, on the three nodes of the target cluster, you would edit the /etc/local/hosts file, and insert the NAT address and name of
the associated node on the source cluster. For example, on the three nodes of the target cluster, the entries would look like:
10.8.8.201 source-1
10.8.8.202 source-2
10.8.8.203 source-3
When the NAT server receives packets of SyncIQ data from a node on the source cluster, the NAT server replaces the packet headers
and the node's port number and internal IP address with the NAT server's own port number and external IP address. The NAT server on
the source network then sends the packets through the Internet to the target network, where another NAT server performs a similar
process to transmit the data to the target node. The process is reversed when the data fails back.
With this type of configuration, SyncIQ can determine the correct addresses to connect with, so that SyncIQ can send and receive data.
In this scenario, no SmartConnect zone configuration is required.
For information about the ports used by SyncIQ, see the OneFS Security Configuration Guide for your OneFS version.
Related tasks
Perform a full or differential replication on page 241
Related concepts
Managing replication performance rules on page 237
Replication reports
After a replication job completes, SyncIQ generates a replication report that contains detailed information about the job, including how long
the job ran, how much data was transferred, and what errors occurred.
If a replication job is interrupted, SyncIQ might create a subreport about the progress of the job so far. If the job is then restarted,
SyncIQ creates another subreport about the progress of the job until the job either completes or is interrupted again. SyncIQ creates a
subreport each time the job is interrupted until the job completes successfully. If multiple subreports are created for a job, SyncIQ
combines the information from the subreports into a single report.
SyncIQ routinely deletes replication reports. You can specify the maximum number of replication reports that SyncIQ retains and the
length of time that SyncIQ retains replication reports. If the maximum number of replication reports is exceeded on a cluster, SyncIQ
deletes the oldest report each time a new report is created.
You cannot customize the content of a replication report.
NOTE: If you delete a replication policy, SyncIQ automatically deletes any reports that were generated for that policy.
Related concepts
Managing replication reports on page 238
Replication snapshots
SyncIQ generates snapshots to facilitate replication, failover, and failback between PowerScale clusters. Snapshots generated by SyncIQ
can also be used for archival purposes on the target cluster.
Related concepts
Initiating data failover and failback with SyncIQ on page 229
Related tasks
Fail over data to a secondary cluster on page 229
Data failback
Failback is the process of restoring primary and secondary clusters to the roles that they occupied before a failover operation. After
failback is complete, the primary cluster holds the latest data set and resumes normal operations, including hosting clients and
replicating data to the secondary cluster through the SyncIQ replication policies that are in place.
The first step in the failback process is updating the primary cluster with all of the modifications that were made to the data on the
secondary cluster. The next step is preparing the primary cluster to be accessed by clients. The final step is resuming data replication from
the primary to the secondary cluster. At the end of the failback process, you can redirect users to resume data access on the primary
cluster.
To update the primary cluster with the modifications that were made on the secondary cluster, SyncIQ must create a SyncIQ domain for
the source directory.
You can fail back data with any replication policy that meets all of the following criteria:
• The policy has been failed over.
• The policy is a synchronization policy (not a copy policy).
• The policy does not exclude any files or directories from replication.
Related tasks
Fail back data to a primary cluster on page 230
Source directory type | Target directory type | Replication allowed | Failback allowed
Non-SmartLock | Non-SmartLock | Yes | Yes
Non-SmartLock | SmartLock enterprise | Yes | Yes, unless files are committed to a WORM state on the target cluster
Non-SmartLock | SmartLock compliance | No | No
SmartLock enterprise | Non-SmartLock | Yes; however, retention dates and commit status of files are lost | Yes; however, the files do not have WORM status
SmartLock enterprise | SmartLock enterprise | Yes | Yes; any newly committed WORM files are included
SmartLock enterprise | SmartLock compliance | No | No
SmartLock compliance | Non-SmartLock | No | No
SmartLock compliance | SmartLock enterprise | No | No
SmartLock compliance | SmartLock compliance | Yes | Yes; any newly committed WORM files are included
If you are replicating a SmartLock directory to another SmartLock directory, you must create the target SmartLock directory prior to
running the replication policy. Although OneFS creates a target directory automatically if a target directory does not already exist, OneFS
does not create a target SmartLock directory automatically. If you attempt to replicate an enterprise directory before the target directory
has been created, OneFS creates a non-SmartLock target directory and the replication job succeeds. If you replicate a compliance
directory before the target directory has been created, the replication job fails.
If you replicate SmartLock directories to another PowerScale cluster with SyncIQ, the WORM state of files is replicated. However,
SmartLock directory configuration settings are not transferred to the target directory.
For example, if you replicate a directory that contains a committed file that is set to expire on March 4th, the file is still set to expire on
March 4th on the target cluster. However, if the directory on the source cluster is set to prevent files from being committed for more
than a year, the target directory is not automatically set to the same restriction.
In the scenario where a WORM exclusion domain has been created on an enterprise mode or compliance mode directory, replication of the
SmartLock exclusion on the directory occurs only if the SyncIQ policy is rooted at the SmartLock domain that contains the exclusion. If
this condition is not met, only data is replicated, and the SmartLock exclusion is not created on the target directory.
RPO Alerts
You can configure SyncIQ to create OneFS events that alert you to the fact that a specified Recovery Point Objective (RPO) has been
exceeded. You can view these events through the same interface as other OneFS events.
The events have an event ID of 400040020. The event message for these alerts has the following format:
SW_SIQ_RPO_EXCEEDED: SyncIQ RPO exceeded for policy <replication_policy>
For example, assume that you set an RPO of 5 hours. A job starts at 1:00 PM and completes at 3:00 PM, and a second job starts at
3:30 PM. If the second job does not complete by 6:00 PM, SyncIQ creates a OneFS event.
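The timing in that example can be sketched as follows (an illustration of the rule only; the deadline is measured from the start of the last successfully completed job, and the policy name is hypothetical):

```shell
# RPO deadline sketch: start of last successful job (1:00 PM) plus the
# 5-hour RPO gives a 6:00 PM deadline for the next job to complete.
rpo_hours=5
last_success_start=13    # 1:00 PM, in hours since midnight
deadline=$(( last_success_start + rpo_hours ))
now=18                   # 6:00 PM, and the second job has not completed
if [ "$now" -ge "$deadline" ]; then
  echo "SW_SIQ_RPO_EXCEEDED: SyncIQ RPO exceeded for policy newPolicy"
fi
```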
Related concepts
Replication policies and jobs on page 211
By default, all files and directories under the source directory of a replication policy are replicated to the target cluster. However, you can
prevent directories under the source directory from being replicated.
If you specify a directory to exclude, files and directories under the excluded directory are not replicated to the target cluster. If you
specify a directory to include, only the files and directories under the included directory are replicated to the target cluster; any directories
that are not contained in an included directory are excluded.
If you both include and exclude directories, any excluded directories must be contained in one of the included directories; otherwise, the
excluded-directory setting has no effect. For example, consider a policy with the following settings:
• The root directory is /ifs/data
• The included directories are /ifs/data/media/music and /ifs/data/media/movies
• The excluded directories are /ifs/data/archive and /ifs/data/media/music/working
In this example, the setting that excludes the /ifs/data/archive directory has no effect because the /ifs/data/archive
directory is not under either of the included directories. The /ifs/data/archive directory is not replicated regardless of whether the
directory is explicitly excluded. However, the setting that excludes the /ifs/data/media/music/working directory does have an
effect, because the directory would be replicated if the setting was not specified.
In addition, if you exclude a directory that contains the source directory, the exclude-directory setting has no effect. For example, if the
root directory of a policy is /ifs/data, explicitly excluding the /ifs directory does not prevent /ifs/data from being replicated.
Any directories that you explicitly include or exclude must be contained in or under the specified root directory. For example, consider a
policy in which the specified root directory is /ifs/data. In this example, you could include both the /ifs/data/media and
the /ifs/data/users/ directories because they are under /ifs/data.
Excluding directories from a synchronization policy does not cause the directories to be deleted on the target cluster. For example,
consider a replication policy that synchronizes /ifs/data on the source cluster to /ifs/data on the target cluster. If the policy
excludes /ifs/data/media from replication, and /ifs/data/media/file exists on the target cluster, running the policy does not
cause /ifs/data/media/file to be deleted from the target cluster.
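The evaluation rules above amount to prefix matching on paths. The following sketch (plain shell, not OneFS code) mimics the behavior of the example policy: an exclude nested inside an include wins, then the includes apply, and anything outside all included directories is not replicated.

```shell
# Hypothetical re-implementation of the example policy's include/exclude
# evaluation, using first-match case patterns.
is_replicated() {
  case "$1" in
    /ifs/data/media/music/working/*) echo "no"  ;;  # excluded within an include
    /ifs/data/media/music/*)         echo "yes" ;;  # included
    /ifs/data/media/movies/*)        echo "yes" ;;  # included
    *)                               echo "no"  ;;  # outside all includes
  esac
}
is_replicated /ifs/data/media/music/song.mp3          # yes
is_replicated /ifs/data/media/music/working/tmp.wav   # no
is_replicated /ifs/data/archive/old.txt               # no
```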
Related tasks
Specify source directories and files on page 224
Related References
File criteria options on page 221
Date created: Includes or excludes files based on when the file was created. This option is available for copy policies only.
You can specify a relative date and time, such as "two weeks ago", or a specific date and time, such as "January 1, 2012". Time settings are based on a 24-hour clock.
Date accessed: Includes or excludes files based on when the file was last accessed. This option is available for copy policies only, and only if the global access-time-tracking option of the cluster is enabled.
You can specify a relative date and time, such as "two weeks ago", or a specific date and time, such as "January 1, 2012". Time settings are based on a 24-hour clock.
Date modified: Includes or excludes files based on when the file was last modified. This option is available for copy policies only.
You can specify a relative date and time, such as "two weeks ago", or a specific date and time, such as "January 1, 2012". Time settings are based on a 24-hour clock.
File name: Includes or excludes files based on the file name. You can specify to include or exclude full or partial names that contain specific text. The wildcard characters *, ?, and [ ] are accepted.
NOTE: Alternatively, you can filter file names by using POSIX regular-expression (regex) text.
PowerScale clusters support IEEE Std 1003.2 (POSIX.2) regular expressions. For more
information about POSIX regular expressions, see the BSD man pages.
Path: Includes or excludes files based on the file path. This option is available for copy policies only.
You can specify to include or exclude full or partial paths that contain specified text. You can also include the wildcard characters *, ?, and [ ].
Type: Includes or excludes files based on one of the following file-system object types:
• Soft link
• Regular file
• Directory
Related concepts
Excluding files in replication on page 220
Related tasks
Specify source directories and files on page 224
3. Specify which nodes you want replication policies to connect to when a policy is run.
To connect policies to all nodes on a source cluster: Click Run the policy on all nodes in this cluster.
To connect policies only to nodes contained in a specified subnet and pool:
a. Click Run the policy only on nodes in the specified subnet and pool.
b. From the Subnet and pool list, select the subnet and pool.
Related concepts
Replication policies and jobs on page 211
Runs jobs automatically every time that a snapshot is taken of the source directory:
a. Click Whenever a snapshot of the source directory is taken.
b. To replicate only data contained in snapshots that match a specific naming pattern, type a snapshot naming pattern into the Run job if snapshot name matches the following pattern box.
c. To replicate data contained in all snapshots that were taken of the source directory before the policy was created, click Sync existing snapshots before policy creation time.
The next step in the process of creating a replication policy is specifying source directories and files.
Related References
Replication policy settings on page 234
2. Optional: Prevent specific subdirectories of the source directory from being replicated.
• To include a directory, in the Included Directories area, click Add a directory path.
• To exclude a directory, in the Excluded Directories area, click Add a directory path.
3. Optional: Prevent specific files from being replicated by specifying file matching criteria.
a. In the File Matching Criteria area, select a filter type.
b. Select an operator.
c. Type a value.
Files that do not meet the specified criteria will not be replicated to the target cluster. For example, if you specify File Type
doesn't match .txt, SyncIQ will not replicate any files with the .txt file extension. If you specify Created after
08/14/2013, SyncIQ will not replicate any files created before August 14th, 2013.
If you want to specify more than one file matching criterion, you can control how the criteria relate to each other by clicking either
Add an "Or" condition or Add an "And" condition.
4. Specify which nodes you want the replication policy to connect to when the policy is run.
Connect the policy to all nodes in the source cluster. Click Run the policy on all nodes in this cluster.
Connect the policy only to nodes contained in a specified subnet and pool:
a. Click Run the policy only on nodes in the specified subnet and pool.
b. From the Subnet and pool list, select the subnet and pool.
The next step in the process of creating a replication policy is specifying the target directory.
Related concepts
Excluding directories in replication on page 220
Excluding files in replication on page 220
Related References
File criteria options on page 221
Replication policy settings on page 234
2. In the Target Directory field, type the absolute path of the directory on the target cluster that you want to replicate data to.
CAUTION:
If you specify an existing directory on the target cluster, make sure that the directory is not the target of another
replication policy. If this is a synchronization policy, make sure that the directory is empty. All files are deleted from
the target of a synchronization policy the first time that the policy is run.
If the specified target directory does not already exist on the target cluster, the directory is created the first time that the job is run.
We recommend that you do not specify the /ifs directory. If you specify the /ifs directory, the entire target cluster is set to a
read-only state, which prevents you from storing any other data on the cluster.
If this is a copy policy, and files in the target directory share the same name as files in the source directory, the target directory files
are overwritten when the job is run.
3. If you want replication jobs to connect only to the nodes included in the SmartConnect zone specified by the target cluster, click
Connect only to the nodes within the target cluster SmartConnect Zone.
The next step in the process of creating a replication policy is to specify policy target snapshot settings.
Related References
Replication policy settings on page 234
For example, the naming pattern %{PolicyName}-on-%{SrcCluster}-latest produces a name similar to newPolicy-on-Cluster1-latest.
3. Optional: To modify the snapshot naming pattern for snapshots that are created according to the replication policy, in the Snapshot
Naming Pattern field, type a naming pattern. Each snapshot that is generated for this replication policy is assigned a name that is
based on this pattern.
For example, the following naming pattern is valid:
%{PolicyName}-from-%{SrcCluster}-at-%H:%M-on-%m-%d-%Y
This pattern produces snapshot names similar to the following:
newPolicy-from-Cluster1-at-10:30-on-7-12-2012
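The %H, %M, %m, %d, and %Y tokens follow the usual strftime conventions. As a rough illustration (substituting the policy and cluster values by hand, since %{PolicyName} and %{SrcCluster} are SyncIQ-specific variables):

```shell
# Expand the time/date portion of the example pattern with the current
# clock; the fixed text stands in for the SyncIQ policy/cluster variables.
snapname=$(date +"newPolicy-from-Cluster1-at-%H:%M-on-%m-%d-%Y")
echo "$snapname"
```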
4. Select one of the following options for how snapshots should expire:
• Click Snapshots do not expire.
• Click Snapshots expire after... and specify an expiration period.
The next step in the process of creating a replication policy is configuring advanced policy settings.
Related References
Replication policy settings on page 234
3. Optional: If you want SyncIQ to perform a checksum on each file data packet that is affected by the replication policy, select the
Validate File Integrity check box.
If you enable this option, and the checksum values for a file data packet do not match, SyncIQ retransmits the affected packet.
4. Optional: To increase the speed of failback for the policy, click Prepare policy for accelerated failback performance.
Selecting this option causes SyncIQ to perform failback configuration tasks the next time that a job is run, rather than waiting to
perform those tasks during the failback process. This will reduce the amount of time needed to perform failback operations when
failback is initiated.
6. Optional: Specify whether to record information about files that are deleted by replication jobs by selecting one of the following
options:
• Click Record when a synchronization deletes files or directories.
• Click Do not record when a synchronization deletes files or directories.
This option is applicable for synchronization policies only.
Related References
Replication policy settings on page 234
NOTE: You can assess only replication policies that have never been run before.
Related concepts
Replication policies and jobs on page 211
Status: The status of the job. The following job statuses are possible:
Related concepts
Data failover and failback with SyncIQ on page 216
Related concepts
Data failback on page 217
4. If any SmartLock directory configuration settings, such as an autocommit time period, were specified for the source directory of the
replication policy, apply those settings to the target directory.
Because autocommit information is not transferred to the target cluster, files that were scheduled to be committed to a WORM state
on the original source cluster would not be scheduled to be committed at the same time on the target cluster. To make sure that all
files are retained for the appropriate time period, you can commit all files in target SmartLock directories to a WORM state.
2. Replicate recovery data to the target directory by running the policies that you created.
You can replicate data either by manually starting the policies or by specifying a schedule.
3. Optional: To ensure that SmartLock protection is enforced for all files, commit all migrated files in the SmartLock target directory to a
WORM state.
Because autocommit information is not transferred from the recovery cluster, commit all migrated files in target SmartLock directories
to a WORM state.
For example, the following command automatically commits all files in /ifs/data/smartlock to a WORM state after one minute:
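The command itself is missing from this page; it would take a form like the following (a sketch based on the `isi worm domains modify` syntax, so verify the option names against your OneFS release):

```shell
isi worm domains modify /ifs/data/smartlock --autocommit-offset 1m
```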
This step is unnecessary if you have configured an autocommit time period for the SmartLock directories being migrated.
4. On the cluster with the migrated data, click Data Protection > SyncIQ > Local Targets.
5. In the SyncIQ Local Targets table, for each replication policy, select More > Allow Writes.
6. Optional: If any SmartLock directory configuration settings, such as an autocommit time period, were specified for the source
directories of the replication policies, apply those settings to the target directories on the cluster now containing the migrated data.
7. Optional: Delete the copy of the SmartLock data on the recovery cluster.
You cannot recover the space consumed by the source SmartLock directories until all files are released from a WORM state. If you
want to free the space before files are released from a WORM state, contact PowerScale Technical Support for information about
reformatting your recovery cluster.
Related concepts
Replication policies and jobs on page 211
Related References
Replication policy settings on page 234
Related concepts
Replication policies and jobs on page 211
If you disable a replication policy while an associated replication job is running, the running job is not interrupted.
However, the policy will not create another job until the policy is enabled.
1. Click Data Protection > SyncIQ > Policies.
2. In the SyncIQ Policies table, in the row for a replication policy, select either Enable Policy or Disable Policy.
If neither Enable Policy nor Disable Policy appears, verify that a replication job is not running for the policy. If an associated
replication job is not running, ensure that the SyncIQ license is active on the cluster.
Related concepts
Replication policies and jobs on page 211
Related References
Replication policy settings on page 234
Related tasks
View replication policies targeting the local cluster on page 236
Copy: If a file is deleted in the source directory, the file is not deleted in the target directory.
Synchronize: Deletes files in the target directory if they are no longer present on the source. This ensures that an exact replica of the source directory is maintained on the target cluster.
Run job: Determines whether jobs are run automatically according to a schedule or only when manually specified by a user.
Last Successful Run: Displays the last time that a replication job for the policy completed successfully.
Last Started: Displays the last time that the policy was run.
Source Root Directory: The full path of the source directory. Data is replicated from the source directory to the target directory.
Included Directories: Determines which directories are included in replication. If one or more directories are specified by this setting, any directories that are not specified are not replicated.
Excluded Directories: Determines which directories are excluded from replication. Any directories specified by this setting are not replicated.
File Matching Criteria: Determines which files are excluded from replication. Any files that do not meet the specified criteria are not replicated.
Validate File Integrity: Determines whether OneFS performs a checksum on each file data packet that is affected by a replication job. If a checksum value does not match, OneFS retransmits the affected file data packet.
Keep Reports For: Specifies how long replication reports are kept before they are automatically deleted by OneFS.
Log Deletions on Synchronization: Determines whether OneFS records when a synchronization job deletes files or directories on the target cluster.
The following replication policy fields are available only through the OneFS command-line interface.
Source Subnet: Specifies whether replication jobs connect to any nodes in the cluster or if jobs can connect only to nodes in a specified subnet.
Source Pool: Specifies whether replication jobs connect to any nodes in the cluster or if jobs can connect only to nodes in a specified pool.
Password Set: Specifies a password to access the target cluster.
Report Max Count: Specifies the maximum number of replication reports that are retained for this policy.
Target Compare Initial Sync: Determines whether full or differential replications are performed for this policy. Full or differential replications are performed the first time a policy is run and after a policy is reset.
Source Snapshot Archive: Determines whether snapshots generated for the replication policy on the source cluster are deleted when the next replication policy is run. Enabling archival source snapshots does not require you to activate the SnapshotIQ license on the cluster.
Source Snapshot Pattern: If snapshots generated for the replication policy on the source cluster are retained, renames snapshots according to the specified rename pattern.
Resolve: Determines whether you can manually resolve the policy if a replication job encounters an error.
After a replication policy is reset, SyncIQ performs a full or differential replication the next time the policy is run.
Depending on the amount of data being replicated, a full or differential replication can take a very long time to complete.
1. Click Data Protection > SyncIQ > Local Targets.
2. In the SyncIQ Local Targets table, in the row for a replication policy, select Break Association.
3. In the Confirm dialog box, click Yes.
Related References
Replication policy information on page 234
Related concepts
Controlling replication job resource consumption on page 215
Related concepts
Replication reports on page 215
3. In the Number of Reports to Keep Per Policy field, type the maximum number of reports you want to retain at a time for a
replication policy.
4. Click Submit.
Related concepts
Replication reports on page 215
Related References
Replication report information on page 239
Policy Name: The name of the associated policy for the job. You can view or edit settings for the policy by clicking the policy name.
Status: Displays the status of the job. The following job statuses are possible:
Source Directory: The path of the source directory on the source cluster.
Target Host: The IP address or fully qualified domain name of the target cluster.
Related tasks
View replication reports on page 239
Related concepts
Replication policies and jobs on page 211
Related concepts
Replication policies and jobs on page 211
3. Run the policy by running the isi sync jobs start command.
For example, the following command runs newPolicy:
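The command example is missing from this page; based on the isi sync jobs start syntax named in the step above, it would be:

```shell
isi sync jobs start newPolicy
```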
Related concepts
Full and differential replication on page 214
FlexProtect overview
A PowerScale cluster is designed to continuously serve data, even when one or more components simultaneously fail. OneFS ensures data
availability by striping or mirroring data across the cluster. If a cluster component fails, data that is stored on the failed component is
available on another component. After a component failure, lost data is restored on healthy components by the FlexProtect proprietary
system.
Data protection is specified at the file level, not the block level, enabling the system to recover data quickly. All data, metadata, and parity
information is distributed across all nodes: the cluster does not require a dedicated parity node or drive. No single node limits the speed of
the rebuild process.
File striping
OneFS uses a PowerScale cluster's internal network to distribute data automatically across individual nodes and disks in the cluster.
OneFS protects files as the data is being written. No separate action is necessary to protect data.
Before writing files to storage, OneFS breaks files into smaller logical chunks called stripes. The size of each file chunk is referred to as the
stripe unit size. Each OneFS block is 8 KB, and a stripe unit consists of 16 blocks, for a total of 128 KB per stripe unit. During a write,
OneFS breaks data into stripes and then logically places the data into a stripe unit. As OneFS writes data across the cluster, OneFS fills
the stripe unit and protects the data according to the number of writable nodes and the specified protection policy.
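The stripe-unit arithmetic above can be checked directly:

```shell
# 8 KB per OneFS block, 16 blocks per stripe unit -> 128 KB per stripe unit.
block_kb=8
blocks_per_unit=16
stripe_unit_kb=$(( block_kb * blocks_per_unit ))
echo "stripe unit size: ${stripe_unit_kb} KB"
```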
OneFS can continuously reallocate data and make storage space more usable and efficient. As the cluster size increases, OneFS stores
large files more efficiently.
To protect files that are 128 KB or smaller, OneFS does not break these files into smaller logical chunks. Instead, OneFS uses mirroring with
forward error correction (FEC). With mirroring, OneFS makes copies of each small file's data (N), adds an FEC parity chunk (M), and
distributes multiple instances of the entire protection unit (N+M) across the cluster.
Related concepts
Requesting data protection on page 245
Related References
Requested protection settings on page 245
Requested protection disk space usage on page 246
Smartfail
OneFS protects data stored on failing nodes or drives through a process called smartfailing.
During the smartfail process, OneFS places a device into quarantine. Data stored on quarantined devices is read only. While a device is
quarantined, OneFS reprotects the data on the device by distributing the data to other devices. After all data migration is complete,
OneFS logically removes the device from the cluster, the cluster logically changes its width to the new configuration, and the node or
drive can be physically replaced.
OneFS smartfails devices only as a last resort. Although you can manually smartfail nodes or drives, it is recommended that you first
consult PowerScale Technical Support.
Occasionally a device might fail before OneFS detects a problem. If a drive fails without being smartfailed, OneFS automatically starts
rebuilding the data to available free space on the cluster. However, because a node might recover from a failure, OneFS does not start
rebuilding data when a node fails unless the node is logically removed from the cluster.
Node failures
Because node loss is often a temporary issue, OneFS does not automatically start reprotecting data when a node fails or goes offline. If a
node reboots, the file system does not need to be rebuilt because it remains intact during the temporary failure.
If you configure N+1 data protection on a cluster, and one node fails, all of the data is still accessible from every other node in the cluster.
If the node comes back online, the node rejoins the cluster automatically without requiring a full rebuild.
To ensure that data remains protected, if you physically remove a node from the cluster, you must also logically remove the node from the
cluster. After you logically remove a node, the node automatically reformats its own drives, and resets itself to the factory default settings.
The reset occurs only after OneFS has confirmed that all data has been reprotected. You can logically remove a node using the smartfail
process. It is important that you smartfail nodes only when you want to permanently remove a node from the cluster.
If you remove a failed node before adding a new node, data stored on the failed node must be rebuilt in the free space in the cluster. After
the new node is added, OneFS distributes the data to the new node. It is more efficient to add a replacement node to the cluster before
failing the old node because OneFS can immediately use the replacement node to rebuild the data stored on the failed node.
For 4U Isilon IQ X-Series and NL-Series nodes, and IQ 12000X/EX 12000 combination platforms, the minimum cluster
size of three nodes requires a minimum protection of N+2:1.
Related concepts
Requested data protection on page 243
Related concepts
Requested data protection on page 243
Number of nodes  [+1n]      [+2d:1n]      [+2n]       [+3d:1n]    [+3d:1n1d]  [+3n]       [+4d:1n]    [+4d:2n]    [+4n]
3                2+1 (33%)  4+2 (33%)     —           6+3 (33%)   3+3 (50%)   —           8+4 (33%)   —           —
4                3+1 (25%)  6+2 (25%)     2+2 (50%)   9+3 (25%)   5+3 (38%)   —           12+4 (25%)  4+4 (50%)   —
5                4+1 (20%)  8+2 (20%)     3+2 (40%)   12+3 (20%)  7+3 (30%)   —           16+4 (20%)  6+4 (40%)   —
6                5+1 (17%)  10+2 (17%)    4+2 (33%)   15+3 (17%)  9+3 (25%)   3+3 (50%)   16+4 (20%)  8+4 (33%)   —
7                6+1 (14%)  12+2 (14%)    5+2 (29%)   15+3 (17%)  11+3 (21%)  4+3 (43%)   16+4 (20%)  10+4 (29%)  —
8                7+1 (13%)  14+2 (12.5%)  6+2 (25%)   15+3 (17%)  13+3 (19%)  5+3 (38%)   16+4 (20%)  12+4 (25%)  4+4 (50%)
9                8+1 (11%)  16+2 (11%)    7+2 (22%)   15+3 (17%)  15+3 (17%)  6+3 (33%)   16+4 (20%)  14+4 (22%)  5+4 (44%)
10               9+1 (10%)  16+2 (11%)    8+2 (20%)   15+3 (17%)  15+3 (17%)  7+3 (30%)   16+4 (20%)  16+4 (20%)  6+4 (40%)
12               11+1 (8%)  16+2 (11%)    10+2 (17%)  15+3 (17%)  15+3 (17%)  9+3 (25%)   16+4 (20%)  16+4 (20%)  8+4 (33%)
14               13+1 (7%)  16+2 (11%)    12+2 (14%)  15+3 (17%)  15+3 (17%)  11+3 (21%)  16+4 (20%)  16+4 (20%)  10+4 (29%)
16               15+1 (6%)  16+2 (11%)    14+2 (13%)  15+3 (17%)  15+3 (17%)  13+3 (19%)  16+4 (20%)  16+4 (20%)  12+4 (25%)
18               16+1 (6%)  16+2 (11%)    16+2 (11%)  15+3 (17%)  15+3 (17%)  15+3 (17%)  16+4 (20%)  16+4 (20%)  14+4 (22%)
20               16+1 (6%)  16+2 (11%)    16+2 (11%)  16+3 (16%)  16+3 (16%)  16+3 (16%)  16+4 (20%)  16+4 (20%)  16+4 (20%)
30               16+1 (6%)  16+2 (11%)    16+2 (11%)  16+3 (16%)  16+3 (16%)  16+3 (16%)  16+4 (20%)  16+4 (20%)  16+4 (20%)
Mirrored protection levels from 2x through 8x are also available.
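The percentages in the table are the protection overhead for each N+M layout: the share of blocks that carry protection data rather than file data. A minimal sketch of that arithmetic (the function name is illustrative, not part of OneFS):

```python
def protection_overhead(n, m):
    # For an N+M protection level, M of every N+M blocks hold protection
    # data, so the overhead is M / (N + M), shown here as a rounded percent.
    return round(100 * m / (n + m))

# A few entries from the table above:
print(protection_overhead(3, 1))   # 3+1  -> 25
print(protection_overhead(6, 2))   # 6+2  -> 25
print(protection_overhead(15, 3))  # 15+3 -> 17
print(protection_overhead(16, 4))  # 16+4 -> 20
```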
Related concepts
Requested data protection on page 243
NOTE: OneFS multi-stream backups are not supported with the NDMP restartable backup feature.
Related concepts
Snapshot-based incremental backups on page 251
Supported DMAs
NDMP backups are coordinated by a data management application (DMA) that runs on a backup server.
NOTE: All supported DMAs can connect to a PowerScale cluster through the IPv4 protocol. However, only some of the
DMAs support the IPv6 protocol for connecting to a PowerScale cluster.
Supported tape devices: For NDMP three-way backups, the data management application (DMA) determines the tape devices that are supported.
Supported tape libraries: For both two-way and three-way NDMP backups, OneFS supports all of the tape libraries that are supported by the DMA.
Supported virtual tape libraries: For three-way NDMP backups, the DMA determines the virtual tape libraries that are supported.
SmartConnect recommendations
• A two-way NDMP backup session with SmartConnect requires a Fibre Attached Storage node for backup and recovery operations. However, a three-way NDMP session with SmartConnect does not require Fibre Attached Storage nodes for these operations.
• For an NDMP two-way backup session with SmartConnect, connect to the NDMP session through a dedicated SmartConnect zone consisting of a pool of Network Interface Cards (NICs) on the Fibre Attached Storage nodes.
• For a two-way NDMP backup session without SmartConnect, initiate the backup session through a static IP address or fully qualified domain name of the Fibre Attached Storage node.
• For a three-way NDMP backup operation, the front-end Ethernet network or the interfaces of the nodes are used to serve the backup
traffic. Therefore, it is recommended that you configure a DMA to initiate an NDMP session only using the nodes that are not already
overburdened serving other workloads or connections.
• For a three-way NDMP backup operation with or without SmartConnect, initiate the backup session using the IP addresses of the
nodes that are identified for running the NDMP sessions.
DMA-specific recommendations
• Enable parallelism for the DMA if the DMA supports this option. This allows OneFS to back up data to multiple tape devices at the
same time.
NOTE: Double quotation marks (" ") are required for Symantec NetBackup when multiple patterns are specified. The patterns are not limited to directories.
Unanchored patterns such as home or user1 target a string of text that might belong to many files or directories. If a pattern contains '/', it is an anchored pattern. An anchored pattern is always matched from the beginning of a path; a pattern in the middle of a path is not matched. Anchored patterns target specific file path names, such as ifs/data/home. You can include or exclude either type of pattern.
If you specify both include and exclude patterns, the include patterns are processed first, followed by the exclude patterns. Any excluded files or directories under the included directories are not backed up. If the excluded directories are not found in any of the included directories, the exclude specification has no effect.
NOTE: Specifying unanchored patterns can degrade the performance of backups. It is recommended that you avoid
unanchored patterns whenever possible.
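The matching rules above can be sketched as follows. This is an illustrative approximation, not the OneFS implementation, and the exact matching semantics of the backup engine may differ:

```python
import fnmatch

def matches(path, pattern):
    # Anchored patterns (those containing '/') are matched from the
    # beginning of the path; unanchored patterns match any path component.
    if "/" in pattern:
        parts, pat = path.split("/"), pattern.strip("/").split("/")
        return parts[:len(pat)] == pat
    return any(fnmatch.fnmatch(part, pattern) for part in path.split("/"))

def backed_up(path, includes, excludes):
    # Include patterns are processed first, then exclude patterns:
    # a path is backed up only if it is included and not excluded.
    included = any(matches(path, p) for p in includes) if includes else True
    return included and not any(matches(path, p) for p in excludes)

print(backed_up("ifs/data/home/user1/file", ["ifs/data/home"], ["user1"]))  # False
print(backed_up("ifs/data/home/user2/file", ["ifs/data/home"], ["user1"]))  # True
```

Note how the anchored pattern ifs/data/home selects the subtree from the start of the path, while the unanchored pattern user1 excludes any path containing that component.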
4. In the Password and Confirm password fields, type the password for the account.
NOTE: There are no special password policy requirements for an NDMP administrator.
If you cannot set an environment variable directly on a DMA for your NDMP backup or recovery operation, log in to a PowerScale cluster through an SSH client and set the environment variable on the cluster through the isi ndmp settings variables create command.
Setting Description
Add Variables Add new path environment variables along with their values.
• To set a global environment variable for backup and recovery operations, specify the /BACKUP path for a
backup operation and the /RESTORE path for a recovery operation.
• The backup path must include .snapshot/<snapshot name> when running a backup of a user-created
snapshot.
b. Click Add Name/Value, specify an environment variable name and value, and then click Create Variable.
BASE_DATE for the next backup operation.
D: Specifies directory or node file history.
F: Specifies path-based file history.
Y: Specifies the default file history format determined by your NDMP backup settings.
N: Disables file history.
0: Performs a full NDMP backup.
1-9: Performs an incremental backup at the specified level.
10: Performs Incremental Forever backups.
restores the backed up SmartLink file as a regular file on the target cluster.
Y: OneFS updates the dumpdates file.
N: OneFS does not update the dumpdates file.
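The LEVEL values 1-9 above follow conventional dump-level semantics, in which a level-N incremental backup copies files changed since the most recent earlier backup at a lower level. A hypothetical sketch of that baseline selection (not a OneFS API):

```python
def baseline_level(history, level):
    # Conventional dump-level semantics: a level-N incremental backs up
    # files changed since the most recent earlier backup at a lower level.
    if level == 0:
        return None  # level 0 is a full backup with no baseline
    for earlier in reversed(history):  # history is ordered oldest to newest
        if earlier < level:
            return earlier
    return None  # no lower-level backup exists yet

print(baseline_level([0, 5, 3], 7))  # -> 3
print(baseline_level([0, 5, 3], 2))  # -> 0
```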
Related concepts
Excluding files and directories from NDMP backups on page 254
Setting Description
Type The context type. It can be one of backup, restartable backup, or
restore.
ID An identifier for a backup or restore job. A backup or restore job
consists of one or more streams all of which are identified by this
identifier. This identifier is generated by the NDMP backup
daemon.
Start Time The time when the context started in month date time year
format.
Actions View or delete a selected context.
Status Status of the context. The status appears as active when a backup or
restore job is initiated, and remains active until the backup
stream has completed or failed.
Path The path where all the working files for the selected context are
stored.
MultiStream Specifies whether the multistream backup process is enabled.
Lead Session ID The identifier of the first backup or restore session corresponding
to a backup or restore operation.
Sessions A table with a list of all the sessions that are associated with the
selected context.
Item Description
Session Specifies the unique identification number that OneFS assigns to the session.
Elapsed Specifies the time that has elapsed since the session started.
Transferred Specifies the amount of data that was transferred during the session.
Throughput Specifies the average throughput of the session over the past five minutes.
Client/Remote Specifies the IP address of the backup server that the data management application (DMA) is
running on. If an NDMP three-way backup or restore operation is currently running, the IP
address of the remote tape media server also appears.
Mover/Data Specifies the current state of the data mover and the data server. The first word describes the
activity of the data mover. The second word describes the activity of the data server.
The data mover and data server send data to and receive data from each other during backup
and restore operations. The data mover is a component of the backup server that receives data
during backups and sends data during restore operations. The data server is a component of
OneFS that sends data during backups and receives information during restore operations.
NOTE: When a session ID instead of a state appears, the session is automatically
redirected.
The following states might appear:
Active The data mover or data server is currently sending or receiving data.
Paused The data mover is temporarily unable to receive data. While the data
mover is paused, the data server cannot send data to the data mover.
The data server cannot be paused.
Idle The data mover or data server is not sending or receiving data.
Listen The data mover or data server is waiting to connect to the data server
or data mover.
Operation Specifies the type of operation (backup or restore) that is currently in progress. If no operation
is in progress, this field is blank.
{ a }—a is optional
a | b—a or b but not at the same time
A(B)—The session is an agent session for a redirected backup operation
M—Multi-stream backup
F—File list
L—Level-based
T—Token-based
S—Snapshot mode
s—Snapshot mode and a full backup (when the root directory is new)
r—Restartable backup
R—Restarted backup
+—Backup is running with multiple state threads for better performance
0-10—Dump level
R ({M | s} [F | D | S] {h}) Where:
A(B)—The session is an agent session for a redirected restore operation
M—Multi-stream restore
s—Single-threaded restore (when RESTORE_OPTIONS=1)
F—Full restore
D—DAR
S—Selective restore
h—Restore hardlinks by table
Source/Destination If an operation is currently in progress, specifies the /ifs directories that are affected by the
operation. If a backup is in progress, displays the path of the source directory that is being
backed up. If a restore operation is in progress, displays the path of the directory that is being
restored along with the destination directory to which the tape media server is restoring data. If
you are restoring data to the same location that you backed up your data from, the same path
appears twice.
Device Specifies the name of the tape or media changer device that is communicating with the
PowerScale cluster.
Mode Specifies how OneFS is interacting with data on the backup media server through the following
options:
The following are examples of active NDMP restore sessions, as indicated through the Operation field that is described in the previous table:
Related tasks
View NDMP sessions on page 267
Related References
NDMP session information on page 265
Setting Description
LNN Specifies the logical node number of the Fibre Attached Storage node.
Port Specifies the name and port number of the Fibre Attached Storage node.
Topology Specifies the type of Fibre Channel topology that is supported by the port. Options are:
Point to Point A single backup device or Fibre Channel switch directly connected to the port.
WWNN Specifies the world wide node name (WWNN) of the port. This name is the same for each port on a given
node.
WWPN Specifies the world wide port name (WWPN) of the port. This name is unique to the port.
Rate Specifies the rate at which data is sent through the port. The rate can be set to 1 Gb/s, 2 Gb/s, 4
Gb/s, 8 Gb/s, or Auto. 8 Gb/s is available for A100 nodes only. If set to Auto, the Fibre Channel
chip negotiates with the connected Fibre Channel switch or Fibre Channel devices to determine the rate.
Auto is the recommended setting.
Related tasks
Modify NDMP backup port settings on page 268
View NDMP backup ports on page 268
Related References
NDMP backup port settings on page 267
Run the command as shown in the following example to apply a preferred IP setting for a subnet group:
Run the command as shown in the following example to modify the NDMP preferred IP setting for a subnet:
Setting Description
Name Specifies a device name assigned by OneFS.
State Indicates whether the device is in use. If data is currently being backed up to or restored from the device,
Read/Write appears. If the device is not in use, Closed appears.
Related tasks
Detect NDMP backup devices on page 271
View NDMP backup devices on page 271
5. Optional: To remove entries for devices or paths that have become inaccessible, select the Delete inaccessible paths or devices
check box.
6. Click Submit.
For each device that is detected, an entry is added to either the Tape Devices or Media Changers tables.
Related References
NDMP backup device settings on page 270
Setting Description
Date Specifies the date when an entry was added to the dumpdates
file.
ID The identifier for an entry in the dumpdates file.
Snapshot ID Identifies changed files for the next level of backup. This ID is
applicable only for snapshot-based backups. In all the other cases,
the value is 0.
Actions Deletes an entry from the dumpdates file.
The value of the path option must match the FILESYSTEM environment variable that is set during the backup operation. The value
that you specify for the name option is case sensitive.
3. Start the restore operation.
Supported DMAs:
• NetWorker 8.0 and later
• Symantec NetBackup 7.5 and later
Supported configurations:
• Isilon Backup Accelerator node or Fibre Attached Storage node with a second Backup Accelerator node or Fibre Attached Storage node
• Isilon Backup Accelerator node or Fibre Attached Storage node with a NetApp storage system
NetWorker refers to the tape drive sharing capability as DDS (dynamic drive sharing). Symantec NetBackup uses the term SSO (shared
storage option). Consult your DMA vendor documentation for configuration instructions.
Related References
NDMP environment variables on page 259
Related concepts
Excluding files and directories from NDMP backups on page 254
Related References
NDMP environment variables on page 259
Service: False
Port: 10000
DMA: generic
Bre Max Num Contexts: 64
Context Retention Duration: 300
Smartlink File Open Timeout: 10
Enable Redirector: True
Service: False
Port: 10000
DMA: generic
Bre Max Num Contexts: 64
Context Retention Duration: 600
Smartlink File Open Timeout: 10
Enable Throttler: True
Throttler CPU Threshold: 50
3. If required, change the throttler CPU threshold as shown in the following example:
SmartLock overview
With the SmartLock software module, you can protect files on a PowerScale cluster from being modified, overwritten, or deleted. To
protect files in this manner, you must activate a SmartLock license.
With SmartLock, you can identify a directory in OneFS as a WORM domain. WORM stands for write once, read many. All files within the
WORM domain can be committed to a WORM state, meaning that those files cannot be overwritten, modified, or deleted.
After a file is removed from a WORM state, you can delete the file. However, you can never modify a file that has been committed to a
WORM state, even after it is removed from a WORM state.
In OneFS, SmartLock can be deployed in one of two modes: compliance mode or enterprise mode.
Compliance mode
SmartLock compliance mode enables you to protect your data in compliance with U.S. Securities and Exchange Commission rule 17a-4.
Rule 17a-4 is aimed at securities brokers and dealers, and specifies that records of all securities transactions must be archived in a
nonrewritable, nonerasable manner.
NOTE: You can configure a PowerScale cluster for SmartLock compliance mode only during the initial cluster
configuration process, before you activate a SmartLock license. A cluster cannot be converted to SmartLock compliance
mode after the cluster is initially configured and put into production.
Configuring a cluster for SmartLock compliance mode disables the root user. You cannot log in to that cluster through the root user
account. Instead, you can log in to the cluster through the compliance administrator account that is configured during initial SmartLock
compliance mode configuration.
When you are logged in to a SmartLock compliance mode cluster through the compliance administrator account, you can perform
administrative tasks through the sudo command.
Related tasks
Set the compliance clock on page 278
SmartLock directories
In a SmartLock directory, you can commit a file to a WORM state manually or you can configure SmartLock to commit the file
automatically. Before you can create SmartLock directories, you must activate a SmartLock license on the cluster.
You can create two types of SmartLock directories: enterprise and compliance. However, you can create compliance directories only if the
PowerScale cluster has been set up in SmartLock compliance mode during initial configuration.
Enterprise directories enable you to protect your data without restricting your cluster to comply with regulations defined by U.S. Securities
and Exchange Commission rule 17a-4. If you commit a file to a WORM state in an enterprise directory, the file can never be modified and
cannot be deleted until the retention period passes.
However, if you own a file and have been assigned the ISI_PRIV_IFS_WORM_DELETE privilege, or you are logged in through the root
user account, you can delete the file through the privileged delete feature before the retention period passes. The privileged delete feature
is not available for compliance directories. Enterprise directories reference the system clock to facilitate time-dependent operations,
including file retention.
Compliance directories enable you to protect your data in compliance with the regulations defined by U.S. Securities and Exchange
Commission rule 17a-4. If you commit a file to a WORM state in a compliance directory, the file cannot be modified or deleted before the
specified retention period has expired. You cannot delete committed files, even if you are logged in to the compliance administrator
account. Compliance directories reference the compliance clock to facilitate time-dependent operations, including file retention.
You must set the compliance clock before you can create compliance directories. You can set the compliance clock only once, after which
you cannot modify the compliance clock time. You can increase the retention time of WORM committed files on an individual basis, if
desired, but you cannot decrease the retention time.
The compliance clock is controlled by the compliance clock daemon. Root and compliance administrator users could disable the
compliance clock daemon, which would have the effect of increasing the retention period for all WORM committed files. However, this is
not recommended.
NOTE: Using WORM exclusions, files inside a WORM compliance or enterprise domain can be excluded from having a
WORM state. All files inside the excluded directory behave as normal non-SmartLock protected files. For more
information, see the OneFS CLI Administration Guide.
SmartLock considerations
• If a file is owned exclusively by the root user, and the file exists on a PowerScale cluster that is in SmartLock compliance mode, the file
will be inaccessible: the root user account is disabled in compliance mode. For example, if a file is assigned root ownership on a cluster
that has not been configured in compliance mode, and then the file is replicated to a cluster in compliance mode, the file becomes
inaccessible. This can also occur if a root-owned file is restored onto a compliance cluster from a backup.
• It is recommended that you create files outside of SmartLock directories and then transfer them into a SmartLock directory after you
are finished working with the files. If you are uploading files to a cluster, it is recommended that you upload the files to a non-
SmartLock directory, and then later transfer the files to a SmartLock directory. If a file is committed to a WORM state while the file is
being uploaded, the file will become trapped in an inconsistent state.
• Files can be committed to a WORM state while they are still open. If you specify an autocommit time period for a directory, the
autocommit time period is calculated according to the length of time since the file was last modified, not when the file was closed. If
you delay writing to an open file for more than the autocommit time period, the file is automatically committed to a WORM state, and
you will not be able to write to the file.
• In a Microsoft Windows environment, if you commit a file to a WORM state, you can no longer modify the hidden or archive attributes
of the file. Any attempt to modify the hidden or archive attributes of a WORM committed file generates an error. This can prevent
third-party applications from modifying the hidden or archive attributes.
• You cannot rename a SmartLock compliance directory. You can rename a SmartLock enterprise directory only if it is empty.
• You can only rename files in SmartLock compliance or enterprise directories if the files are uncommitted.
• You cannot move:
○ SmartLock directories within a WORM domain
○ SmartLock directories in a WORM domain into a directory in a non-WORM domain.
○ directories in a non-WORM domain into a SmartLock directory in a WORM domain.
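The autocommit behavior described above can be sketched as a simple time comparison. This illustrative helper (not a OneFS API) shows that the reference point is the file's last modification, not the time it was closed:

```python
from datetime import datetime, timedelta

def is_autocommitted(last_modified, autocommit_period, now):
    # The autocommit clock runs from the file's last modification: once the
    # period elapses without a write, the file is committed to a WORM state.
    return now - last_modified >= autocommit_period

mtime = datetime(2020, 1, 1, 9, 0)
print(is_autocommitted(mtime, timedelta(hours=2), datetime(2020, 1, 1, 10, 0)))  # False
print(is_autocommitted(mtime, timedelta(hours=2), datetime(2020, 1, 1, 11, 0)))  # True
```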
Related concepts
Compliance mode on page 276
Retention periods
A retention period is the length of time that a file remains in a WORM state before being released from a WORM state. You can configure
SmartLock directory settings that enforce default, maximum, and minimum retention periods for the directory.
If you manually commit a file, you can optionally specify the date that the file is released from a WORM state. You can configure a
minimum and a maximum retention period for a SmartLock directory to prevent files from being retained for too long or too short a time
period. It is recommended that you specify a minimum retention period for all SmartLock directories.
For example, assume that you have a SmartLock directory with a minimum retention period of two days. At 1:00 PM on Monday, you
commit a file to a WORM state, and specify the file to be released from a WORM state on Tuesday at 3:00 PM. The file will be released
from a WORM state two days later on Wednesday at 1:00 PM, because releasing the file earlier would violate the minimum retention
period.
You can also configure a default retention period that is assigned when you commit a file without specifying a date to release the file from
a WORM state.
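The Monday-to-Wednesday example above reduces to taking the later of the requested release date and the commit time plus the minimum retention period. A sketch of that calculation (the function name is illustrative):

```python
from datetime import datetime, timedelta

def release_date(commit_time, requested_release, min_retention):
    # The file is held until at least commit_time + min_retention, even if
    # an earlier release date was requested when the file was committed.
    return max(requested_release, commit_time + min_retention)

commit = datetime(2015, 6, 29, 13, 0)      # Monday 1:00 PM
requested = datetime(2015, 6, 30, 15, 0)   # Tuesday 3:00 PM
print(release_date(commit, requested, timedelta(days=2)))
# -> 2015-07-01 13:00:00 (Wednesday 1:00 PM)
```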
Related tasks
Set a retention period through a UNIX command line on page 282
Set a retention period through Windows Powershell on page 283
Override the retention period for all files in a SmartLock directory on page 284
4. From the Privileged Delete list, specify whether to enable the root user to delete files that are currently committed to a WORM
state.
NOTE: This functionality is available only for SmartLock enterprise directories.
5. In the Path field, type the full path of the directory you want to make into a SmartLock directory.
The specified path must belong to an empty directory on the cluster.
6. Optional: To specify a default retention period for the directory, click Apply a default retention span and then specify a time period.
The default retention period is assigned if you commit a file to a WORM state without specifying a date to release the file from the
WORM state.
7. Optional: To specify a minimum retention period for the directory, click Apply a minimum retention span and then specify a time
period.
The minimum retention period ensures that files are retained in a WORM state for at least the specified period of time.
8. Optional: To specify a maximum retention period for the directory, click Apply a maximum retention span and then specify a time
period.
The maximum retention period ensures that files are not retained in a WORM state for more than the specified period of time.
9. Click Create Domain.
10. Click Create.
Related concepts
Autocommit time periods on page 279
Related References
SmartLock directory configuration settings on page 281
Related concepts
Autocommit time periods on page 279
Related References
SmartLock directory configuration settings on page 281
Privileged Delete Indicates whether files committed to a WORM state in the directory can be deleted through the privileged delete
functionality. To access the privileged delete functionality, you must be assigned the
ISI_PRIV_IFS_WORM_DELETE privilege and own the file you are deleting. You can also access the privileged
delete functionality for any file if you are logged in through the root or compadmin user account.
on Files committed to a WORM state can be deleted through the isi worm files
delete command.
off Files committed to a WORM state cannot be deleted, even through the isi worm
files delete command.
disabled Files committed to a WORM state cannot be deleted, even through the isi worm
files delete command. After this setting is applied, it cannot be modified.
Related tasks
Create a SmartLock directory on page 279
Modify a SmartLock directory on page 280
View SmartLock directory settings on page 281
Other touch command input formats are also allowed to modify the access time of files. For example, the following command preserves the access time of the source file on the destination file:
cp -p <source> <destination>
Related concepts
Retention periods on page 279
3. Specify the name of the file you want to set a retention period for by creating an object.
The file must exist in a SmartLock directory.
4. Specify the retention period by setting the last access time for the file.
The following command sets an expiration date of July 1, 2015 at 1:00 PM:
Related concepts
Retention periods on page 279
Related tasks
View WORM status of a file on page 284
Related concepts
Retention periods on page 279
3. Delete the WORM committed file by running the isi worm files delete command.
The following command deletes /ifs/data/SmartLock/directory1/file:
Related tasks
Commit a file to a WORM state through a UNIX command line on page 283
Commit a file to a WORM state through Windows Explorer on page 283
Related concepts
Protection domains overview on page 286
Self-encrypting drives
Self-encrypting drives store data on a cluster that is specially designed for data-at-rest encryption.
Data-at-rest encryption on self-encrypting drives occurs when data that is stored on a device is encrypted to prevent unauthorized data
access. All data that is written to the storage device is encrypted when it is stored, and all data that is read from the storage device is
decrypted when it is read. The stored data is encrypted with a 256-bit AES data encryption key and decrypted in the same manner.
OneFS controls data access by combining the drive authentication key with on-disk data encryption keys.
NOTE: All nodes in a cluster must be of the self-encrypting drive type. Mixed nodes are not supported.
288 Data-at-rest-encryption
Data migration to a cluster with self-encrypting drives
You can have data from your existing cluster migrated to a cluster of nodes made up of self-encrypting drives (SEDs). As a result, all
migrated and future data on the new cluster will be encrypted.
NOTE: Data migration to a cluster with SEDs must be performed by PowerScale Professional Services. For more
information, contact your Dell EMC representative.
SUSPENDED: This state indicates that drive activity is temporarily suspended and the drive is not in use. The state is manually initiated and does not occur during normal cluster activity. (Command-line interface, web administration interface)
NOT IN USE: A node in an offline state affects both read and write quorum. (Command-line interface, web administration interface)
REPLACE: The drive was smartfailed successfully and is ready to be replaced. (Command-line interface only)
STALLED: The drive is stalled and undergoing stall evaluation. Stall evaluation is the process of checking drives that are slow or having other issues. Depending on the outcome of the evaluation, the drive may return to service or be smartfailed. This is a transient state. (Command-line interface only)
NEW: The drive is new and blank. This is the state that a drive is in when you run the isi dev command with the -a add option. (Command-line interface only)
USED: The drive was added and contained a PowerScale GUID but the drive is not from this node. This drive likely will be formatted into the cluster. (Command-line interface only)
PREPARING: The drive is undergoing a format operation. The drive state changes to HEALTHY when the format is successful. (Command-line interface only)
EMPTY: No drive is in this bay. (Command-line interface only)
WRONG_TYPE: The drive type is wrong for this node. For example, a non-SED drive in a SED node, or SAS instead of the expected SATA drive type. (Command-line interface only)
BOOT_DRIVE: Unique to the A100 drive, which has boot drives in its bays. (Command-line interface only)
SED_ERROR: The drive cannot be acknowledged by the OneFS system. This is an error state. (Command-line interface, web administration interface) NOTE: In the web administration interface, this state is included in Not available.
ERASE: The drive is ready for removal but needs your attention because the data has not been erased. You can erase the drive manually to guarantee that data is removed. (Command-line interface only) NOTE: In the web administration interface, this state is included in Not available.
Node 1, [ATTN]
Bay 1 Lnum 11 [REPLACE] SN:Z296M8HK 000093172YE04 /dev/da1
Bay 2 Lnum 10 [HEALTHY] SN:Z296M8N5 00009330EYE03 /dev/da2
Bay 3 Lnum 9 [HEALTHY] SN:Z296LBP4 00009330EYE03 /dev/da3
Bay 4 Lnum 8 [HEALTHY] SN:Z296LCJW 00009327BYE03 /dev/da4
Bay 5 Lnum 7 [HEALTHY] SN:Z296M8XB 00009330KYE03 /dev/da5
Bay 6 Lnum 6 [HEALTHY] SN:Z295LXT7 000093172YE03 /dev/da6
Bay 7 Lnum 5 [HEALTHY] SN:Z296M8ZF 00009330KYE03 /dev/da7
Bay 8 Lnum 4 [HEALTHY] SN:Z296M8SD 00009330EYE03 /dev/da8
Bay 9 Lnum 3 [HEALTHY] SN:Z296M8QA 00009330EYE03 /dev/da9
Bay 10 Lnum 2 [HEALTHY] SN:Z296M8Q7 00009330EYE03 /dev/da10
Bay 11 Lnum 1 [HEALTHY] SN:Z296M8SP 00009330EYE04 /dev/da11
Bay 12 Lnum 0 [HEALTHY] SN:Z296M8QZ 00009330JYE03 /dev/da12
If you run the isi dev list command while the drive in bay 3 is being smartfailed, the system displays output similar to the following
example:
Node 1, [ATTN]
Bay 1 Lnum 11 [REPLACE] SN:Z296M8HK 000093172YE04 /dev/da1
Bay 2 Lnum 10 [HEALTHY] SN:Z296M8N5 00009330EYE03 /dev/da2
Bay 3 Lnum 9 [SMARTFAIL] SN:Z296LBP4 00009330EYE03 N/A
Bay 4 Lnum 8 [HEALTHY] SN:Z296LCJW 00009327BYE03 /dev/da4
Bay 5 Lnum 7 [HEALTHY] SN:Z296M8XB 00009330KYE03 /dev/da5
Bay 6 Lnum 6 [HEALTHY] SN:Z295LXT7 000093172YE03 /dev/da6
Bay 7 Lnum 5 [HEALTHY] SN:Z296M8ZF 00009330KYE03 /dev/da7
Bay 8 Lnum 4 [HEALTHY] SN:Z296M8SD 00009330EYE03 /dev/da8
Bay 9 Lnum 3 [HEALTHY] SN:Z296M8QA 00009330EYE03 /dev/da9
Bay 10 Lnum 2 [HEALTHY] SN:Z296M8Q7 00009330EYE03 /dev/da10
Bay 11 Lnum 1 [HEALTHY] SN:Z296M8SP 00009330EYE04 /dev/da11
Bay 12 Lnum 0 [HEALTHY] SN:Z296M8QZ 00009330JYE03 /dev/da12
• To securely delete the authentication key on a single drive, smartfail the individual drive.
• To securely delete the authentication key on a single node, smartfail the node.
• To securely delete the authentication keys on an entire cluster, smartfail each node and run the
isi_reformat_node command on the last node.
Upon running the isi dev list command, the system displays output similar to the following example, showing the drive state as
ERASE:
Node 1, [ATTN]
Bay 1 Lnum 11 [REPLACE] SN:Z296M8HK 000093172YE04 /dev/da1
Bay 2 Lnum 10 [HEALTHY] SN:Z296M8N5 00009330EYE03 /dev/da2
Bay 3 Lnum 9 [ERASE] SN:Z296LBP4 00009330EYE03 /dev/da3
Drives showing the ERASE state can be safely retired, reused, or returned.
Any further access to a drive showing the ERASE state requires the authentication key of the drive to be set to its default manufactured
security ID (MSID). This action erases the data encryption key (DEK) on the drive and renders any existing data on the drive permanently
unreadable.
23
S3 Support
This section contains the following topics:
• S3
• Server Configuration
• Bucket handling
• Object handling
• Authentication
• Access key management
S3
OneFS supports the Amazon Web Services Simple Storage Service (AWS S3) protocol for reading data from and writing data to the
PowerScale platform.
The S3-on-OneFS technology enables the usage of Amazon Web Services Simple Storage Service (AWS S3) protocol to store data in the
form of objects on top of the OneFS file system storage. The data resides under a single namespace. The AWS S3 protocol becomes a
primary resident of the OneFS protocol stack, along with NFS, SMB, and HDFS. The technology allows multiprotocol access to objects
and files.
The S3 protocol supports bucket and object creation, retrieving, updating, and deletion. Object retrievals and updates are atomic. Bucket
properties can be updated. Objects are accessible using NFS and SMB as normal files, providing cross-protocol support.
To use S3, administrators generate access IDs and secret keys for authenticated users.
S3 concepts
This section describes some of the key concepts related to the S3 protocol.
Buckets: A bucket is a container for objects stored in S3. Every object is contained in a bucket. Buckets organize the S3 namespace at
the highest level, identify the account responsible for storage and data transfer charges, and play a role in access control.
Objects: Objects are the fundamental entities stored in S3. Objects consist of object data and metadata. The data portion is opaque to
S3. The metadata is a set of name-value pairs that describe the object. These include some default metadata, such as the date last
modified, and standard HTTP metadata. You can also specify custom metadata at the time the object is stored. An object is uniquely
identified within a bucket by a key.
Keys: A key is the unique identifier for an object within a bucket. Every object in a bucket has a key and a value.
Access ID and secret key: An access ID and a secret key are used to authenticate a user. The administrator creates the access ID and
secret key and maps them to users (such as UNIX, AD, or LDAP users).
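The concepts above can be made concrete with a hedged sketch of a client reaching a OneFS S3 endpoint through the third-party boto3 SDK. The hostname, credentials, and the `make_client` helper are hypothetical placeholders, not values or APIs from this guide; only the default ports (9021 for HTTPS, 9020 for HTTP) come from the server configuration described below.

```python
# Hypothetical sketch: reaching a OneFS S3 endpoint with boto3.
# Hostname, access ID, and secret key below are placeholders.
def s3_endpoint(host: str, use_https: bool = True) -> str:
    """Build the endpoint URL from the cluster hostname and the OneFS default ports."""
    return f"https://{host}:9021" if use_https else f"http://{host}:9020"

def make_client(host: str, access_id: str, secret_key: str):
    """Create an S3 client pointed at the cluster (requires the boto3 package)."""
    import boto3  # third-party AWS SDK; assumed available, not part of OneFS
    return boto3.client(
        "s3",
        endpoint_url=s3_endpoint(host),
        aws_access_key_id=access_id,       # access ID issued by the administrator
        aws_secret_access_key=secret_key,  # matching secret key
    )

# Example (not run here):
#   client = make_client("cluster.example.com", "1_jdoe_accid", "SECRET")
#   client.list_buckets()  # GET Service: lists the caller's buckets
```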
Server Configuration
The S3 settings are defined in the registry.
The server configuration settings for the S3 protocol are separated into global service configuration and per-zone configuration.
Global S3 settings
You can enable and disable the S3 service on the OneFS cluster and set the HTTP and HTTPS ports for the S3 protocol across the cluster.
You can view or modify the global S3 settings for service related parameters from the Global settings page.
Enable S3 protocol
You can enable the S3 protocol.
1. Click Protocols > Objects Storage (S3) .
2. Click the Global settings tab.
The Global settings page appears.
3. Under the View/Edit Settings area, select the Enable S3 service check box.
The S3 protocol is enabled.
Disable S3 protocol
You can disable the S3 protocol.
1. Click Protocols > Objects Storage (S3) .
2. Click the Global settings tab.
The Global settings page appears.
3. Under the View/Edit Settings area, clear the Enable S3 service check box.
The S3 protocol is disabled.
View ports
You can view already configured HTTPS and HTTP ports.
1. Click Protocols > Objects Storage (S3) .
2. Click the Global settings tab.
The Global settings page appears.
3. Under the View/Edit Settings area, view the details. The default values for the ports are:
• HTTPS: 9021
• HTTP: 9020
If the Enable S3 HTTP check box is clear, only S3 HTTPS is supported.
Modify ports
You can modify already configured HTTPS and HTTP ports.
1. Click Protocols > Objects Storage (S3) .
2. Click the Global settings tab.
The Global settings page appears.
3. Under the View/Edit Settings area, select the Enable S3 HTTP check box to enable S3 HTTP support.
If the Enable S3 HTTP check box is clear, only S3 HTTPS is supported.
4. Modify the port details. The default values for the ports are:
• HTTPS: 9021
• HTTP: 9020
You can modify the value for both the ports. Click the arrows inside the box or enter a value in the box.
Click Revert changes to go back to previous settings. Revert changes is enabled only if any changes are made.
5. Click Save changes.
Save changes is enabled only if you have modified the settings.
The changes are saved, and a confirmation message appears.
S3 zone settings
Access zones provide default locations for creating buckets.
You can view or modify specific S3 settings of an access zone from the Zone settings page.
If you are creating a bucket, and a zone ID or name is not provided, the creation of the bucket defaults to the System zone.
Certificates
Server certificates are a requirement for the server to set up a TLS handshake.
On a OneFS cluster, the certificate manager manages all the certificates. The certificate manager is designed to provide a generic
programmatic way for accessing and configuring certificates on the cluster.
The HTTPS certificates used by S3 are handled by the isi certificate manager. The Apache instance uses the same store.
Bucket handling
Buckets are the containers for objects. You can have one or more buckets. For each bucket, you can control access to it (who can create,
delete, and list objects in the bucket).
Buckets are similar in concept to exports in NFS and shares in SMB. A major difference between buckets and NFS exports is that any user
with valid credentials can create a bucket on the server, and the bucket is owned by that user.
OneFS now supports these bucket and account operations:
• PUT bucket
• GET bucket (list objects in a bucket)
• GET bucket location
• DELETE bucket
• GET Bucket acl
• PUT Bucket acl
• HEAD Bucket
• List Multipart Uploads
• GET Service
Managing buckets
You can access the S3 bucket management feature from OneFS web administration interface.
List buckets
View a list of buckets. You can also sort and filter the list.
1. Click Protocols > Objects Storage (S3) .
The Buckets page appears with the list of buckets.
2. View the list of buckets in a tabular format. The four columns are: name, path, owner, and actions.
• You can filter the buckets by zone: In the Current access zone list, select the access zone. The buckets for the given zone are
displayed. The default selected value is System zone.
• You can filter the buckets by owner: In the Owner box, enter the name of the owner and click Apply. The buckets for the given
owner are displayed.
3. Click the arrows on the table header of the columns to sort the buckets based on name, path, and owner fields.
4. Click <<, <, and > to go to the first page, previous page, and next page respectively. Click the last button to refresh and reload all the
buckets and go to page 1.
Create a bucket
You can create a bucket for a user.
1. Click Protocols > Objects Storage (S3) .
• In the Current access zone list, select the access zone where you want to create the bucket. The default selected value is
System zone.
2. On the Buckets page, click Create Bucket.
The Create a Bucket dialog box appears.
3. Enter the following information in the Create a Bucket dialog box:
• Name: Enter the name of the bucket. The name must be between 3 and 63 characters in length. Bucket name cannot contain
characters other than a-z, 0-9, and '-'. This is a mandatory field.
• Owner: Enter the name of the bucket owner. Click Select user to search for a user. Then, in the Search user dialog box, enter
the details in the User and Providers boxes and click Search. This is a mandatory field.
• Path: Enter the file path where you plan to store the bucket. Click Browse to select a path to a directory and then click Select.
Select the Create bucket path if it does not exist check box if a path does not exist. This is a mandatory field.
• Description: Enter a description for the bucket. This is an optional field.
• ACL: Click Add ACL. Then, click Select user and search for the user. In the Permissions list, select the permission for your
bucket. Whenever you want to specify ACLs, you must select the user or grantee and the permissions for that user or grantee. This is
an optional field.
4. Click Create Bucket.
The bucket is created, and a confirmation message appears.
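The bucket-name rule above (3 to 63 characters, limited to a-z, 0-9, and '-') can be checked before sending a create request. This is an illustrative sketch, not part of OneFS; the helper name is invented.

```python
import re

# Bucket-name rule from the Create a Bucket dialog box: 3-63 characters
# drawn from lowercase letters, digits, and '-'.
_BUCKET_NAME = re.compile(r"[a-z0-9-]{3,63}")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if the name satisfies the stated length and character rules."""
    return bool(_BUCKET_NAME.fullmatch(name))
```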
View bucket details
View the details of a bucket.
1. Click Protocols > Objects Storage (S3) .
2. On the Buckets page, you can see the list of buckets. Under the Actions column, click View/Edit next to the bucket whose details you
want to view.
The View and Edit bucket dialog box appears.
3. View the following information in the View and Edit bucket dialog box:
• Name: View the name of the bucket.
• Owner: View the name of the bucket owner.
• Path: View the file path where the bucket is stored.
• Description: View the description for the bucket.
• ACL: View details related to the added ACLs.
Click Cancel to exit the View and Edit bucket dialog box.
Delete a bucket
You can delete a bucket.
1. Click Protocols > Objects Storage (S3) .
2. On the Buckets page, you can see the list of buckets. Under the Actions column, click Delete next to the bucket that you want to
delete.
The Confirm delete dialog box appears.
3. Click Delete. The bucket is deleted.
Click Cancel to exit the Confirm delete dialog box and return to the list of buckets.
Object handling
An object consists of a file and optionally any metadata that describes that file. To store an object in S3, you upload the file that you want
to store to a bucket. You can set permissions on the object and any metadata.
S3 stores data in the form of objects, which are key-value pairs. An object is identified by its key, and the data is stored inside the object
as the value. In OneFS, files represent objects: an object key maps to a path name, and an object value maps to the contents of the
file.
An object can have associated metadata with size limits. There can be system metadata, which are generated for every object. Also, there
can be user metadata that applications create for selected objects.
Objects reside within buckets. The life cycle and access of an object depends on the policies and ACLs enforced on the bucket. Also, each
object can have its own ACLs.
An object key can have prefixes or delimiters, which are used to organize objects efficiently.
Object key
The object key is the path of the file from the root of the bucket directory.
For OneFS, the object key is treated as a file path from the bucket root, with "/" treated as the path separator for directories. The
limitations on object keys are:
• Cannot use "/" within a key component (it is treated as a delimiter).
• Cannot use "." or ".." as a key or as part of a prefix. For example, you can create /.a but not /../a.
• Cannot create .snapshot as a key if snapshots are already present.
• The maximum key length, including prefixes and delimiters, is 1023 bytes.
• Each prefix component (the key split by "/") is limited to 255 bytes.
• Keys can use ASCII or UTF-8.
• Other OneFS data services may have problems if the path length of a file exceeds 1024 bytes.
• Cannot place objects under the .isi_s3 directory.
• Cannot place a file object if a directory with the same name already exists.
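The key limitations listed above can be expressed as a client-side check. This sketch is illustrative only; the function name and error messages are not part of OneFS.

```python
def validate_object_key(key: str) -> list[str]:
    """Check an object key against the limits listed above; return the problems found."""
    errors = []
    if len(key.encode("utf-8")) > 1023:
        errors.append("key exceeds 1023 bytes")
    for part in key.split("/"):
        if part in (".", ".."):
            errors.append(f"component {part!r} is not allowed")
        if len(part.encode("utf-8")) > 255:
            errors.append("component exceeds 255 bytes")
    if key.startswith(".isi_s3/"):
        errors.append("keys under .isi_s3 are reserved")
    return errors
```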
Object Metadata
An object can have two types of metadata, system metadata and user-defined metadata.
Both system and user-defined metadata are defined as a set of name-value pairs. In OneFS, system metadata gets stored as an inode
attribute and the user-defined metadata gets stored as an extended attribute of the file.
Multipart upload
The S3 protocol allows you to upload a large file as multiple parts rather than as a single request.
The client initiates a multipart upload with a POST request that includes the uploads query parameter and the object key. On the cluster, a
unique userUploadId string is generated by concatenating the bucket ID and upload ID, and is returned to the client. The pair of bucket ID
and upload ID is also stored in a per-zone SBT. A directory, .s3_parts_userUploadID, is created in the target directory to store the
parts. Once created, the directory and kvstore entry persist until the multipart operation is either completed or stopped. Parts are
uploaded with a part number and stored in the temporary directory. A part has a maximum size of 5 GB and, except for the last part,
a minimum size of 5 MB. Completing the multipart upload is handled by concatenating the parts to a temporary file under the .isi_s3
directory. Once the concatenation succeeds, the temporary file is copied to the target, the .s3_parts_userUploadID directory is
deleted, and the SBT entry is removed.
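The part-size constraints can be illustrated with a small planning helper. This sketch is not OneFS code; the names and the fixed-size splitting strategy are assumptions for illustration.

```python
# Part-size limits described above: every part except the last must be
# at least 5 MB, and no part may exceed 5 GB.
MIN_PART = 5 * 1024**2   # 5 MB
MAX_PART = 5 * 1024**3   # 5 GB

def plan_parts(total_size: int, part_size: int) -> list[int]:
    """Split an upload of total_size bytes into a list of valid part sizes."""
    if not MIN_PART <= part_size <= MAX_PART:
        raise ValueError("part size must be between 5 MB and 5 GB")
    sizes = []
    remaining = total_size
    while remaining > 0:
        sizes.append(min(part_size, remaining))  # last part may be smaller
        remaining -= sizes[-1]
    return sizes
```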
Etag
Objects on AWS S3 have an MD5 hash sum stored in the object etag.
For OneFS S3, the etag is not evaluated. If Content-MD5 is passed to the server, the etag is set in the user_defined attributes and
the ifm_arhive_data md5_valid bit is set. The bit is not set if the file is modified by other protocols.
PUT object
The PUT object operation allows you to add an object to a bucket. You must have the WRITE permission on a bucket to add an object to
it.
To emulate the atomicity guarantee of an S3 PUT object, objects are written to a temporary directory, .isi_s3 before getting moved to
the target path. On PUT, directories are implicitly created from writing the object. Implicitly created directories are owned by the object
owner and have permissions that are inherited from the parent directory.
Authentication
S3 uses its own method of authentication, which relies on access keys that are generated for the user.
The access ID is sent in the HTTP request and is used to identify the user. The secret key is used in the signing algorithm.
There are two signing algorithms, Version 2 (v2) and Version 4 (v4).
S3 requests can either be signed or unsigned. A signed request contains an access ID and a signature. The access ID indicates who the
user is. The included signature value is the result of hashing several header values in the request with a secret key. The server must use
the access ID to retrieve a copy of the secret key, recompute the expected hash value of the request, and compare against the signature
sent. If they match, then the requester is authenticated, and any header value that was used in the signature is now verified to be
unchanged as well.
An S3 operation is performed only after the following criteria are met:
• The signature (AWS Signature Version 2 or AWS Signature Version 4) is verified and validated against the S3 request.
• Once verification is complete, the user credential is retrieved using the access ID.
• The user credential is authorized against the bucket ACL.
• A traversal check of the user credential is performed against the object path.
• An access check of the user credential is performed against the object ACL.
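The signing step can be made concrete with the standard AWS Signature Version 4 key-derivation chain, which is public and reproducible with only the standard library. This shows the documented AWS algorithm that the server replays when recomputing a signature, not OneFS internals.

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the AWS SigV4 signing key: HMAC-SHA256 chained over date, region, service."""
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")
```

The derived key is then used to HMAC the string-to-sign built from the request headers; if the resulting signature matches the one the client sent, the request is authenticated.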
Access keys
On OneFS, user keys are created using PAPI and stored in the kvstore.
The entry format in the kvstore is access_id:secret_key. The secret key is a randomly generated base64 string. The access key is
formatted as ZoneId_username_accid. In the S3 protocol, on receiving an authenticated request, the access key is used to retrieve
the secret key from the kvstore. The signature is then generated on the server side, using the header fields from the request and the
user's secret key. If the signature matches, the request is successfully authenticated. The username and zone information encoded in the
access ID are used to generate the user security context, and the request is performed. By default, when a new key is created, the
previous user key remains valid for 10 minutes. If you want, you can extend this period to up to 1440 minutes (24 hours).
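The stated access-key format can be illustrated with a parsing sketch. The handling of underscores inside usernames (splitting from both ends) is an assumption for illustration; OneFS's actual parsing is not documented here.

```python
# Illustrative only: the format stated above is ZoneId_username_accid.
def parse_access_key(access_key: str) -> tuple[str, str, str]:
    """Split an access key into (zone_id, username, acc_id)."""
    zone_id, rest = access_key.split("_", 1)    # zone ID before the first '_'
    username, acc_id = rest.rsplit("_", 1)      # accid after the last '_'
    return zone_id, username, acc_id
```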
Access control
In S3, permissions on objects and buckets are defined by an ACL.
S3 supports five grant permission types: READ, WRITE, READ_ACP, WRITE_ACP, and FULL_CONTROL. The FULL_CONTROL grant is
a shorthand for all grants. Each ACE consists of one grantee and one grant. The grantee can either be a user or one of the defined groups
that OneFS S3 supports, Everyone and Authenticated Users. S3 ACLs are limited to a maximum of 100 entries.
ACL concepts
In S3, you must understand some concepts that are related to an ACL.
Grantee: S3 ACL grantees can be specified as either an ID or an email address of an AWS account. In AWS, the ID is a randomly generated
value for each user. For OneFS S3, only the ID is supported, and the ID is set to the username or group of the grantee.
S3 Groups: S3 has two predefined groups, Everyone and Authenticated Users. On OneFS, Everyone is translated to the built-in World
group (SID S-1-1-0), and Authenticated Users is translated to the built-in Authenticated Users group (SID S-1-5-11).
Canned ACL: When specifying ACLs in S3, the user can either specify the ACL as a list of grants or use a canned ACL. The canned ACL
is a predefined ACL list which is added to the file. The supported canned ACLS are private, public-read, public-read-write, authenticated-
read, bucket-owner-read, and bucket-owner-full-control.
Default ACL: When objects and buckets are created in S3 by a PUT operation, the user has the option of setting the ACL. If no ACL is
specified, then the private canned ACL is used by default, granting full control to the creator.
Object ACL
S3 ACLs are a legacy access control mechanism that predates Identity and Access Management (IAM).
On OneFS objects, ACLs are translated to NTFS ACLs and stored on-disk. The table below lists the mapping of S3 grants to NTFS grants.
The difference in the OneFS S3 implementation is the WRITE grant is allowed on object ACLs. In S3, the WRITE grant has no meaning as
the S3 protocol does not allow modifying objects.
The WRITE grant instead allows an object to be modified through other access protocols. When translating S3 ACLs to NTFS ACLs, as in
the PUT object ACL operation, each entry is translated as shown in the table. When translating NTFS ACLs to S3 ACLs, as needed in the
GET object ACL operation, some entries may not be shown: because NTFS ACLs have a richer set of grants, permissions that are not in
the table are omitted. Deny ACEs are also omitted, as S3 ACLs do not support deny entries.
An S3 ACL can also have one of the following pre-defined groups as a grantee:
• Authenticated Users: Any signed request is included in this group.
• All Users: Any request, signed or unsigned, is included in this group.
• Log Delivery Group: This group represents the log server that writes server access logs in the bucket.
Object ACLs translate to the following S3 permissions:
A difference in the OneFS implementation is the implicit owner ACE permission. In S3, the object owner is implicitly granted
FULL_CONTROL, regardless of the ACL on the file. To emulate this behavior, OneFS appends an ACE granting FULL_CONTROL to the
object owner to the end of any ACL set by S3 that does not grant the owner the FULL_CONTROL privilege.
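The implicit-owner behavior described above can be sketched as follows. The representation of ACEs as (grantee, grant) pairs is invented for illustration and is not the on-disk format.

```python
# Sketch: if an ACL set through S3 does not grant the owner FULL_CONTROL,
# append such an entry, emulating the implicit owner grant described above.
def ensure_owner_full_control(acl: list[tuple[str, str]], owner: str) -> list[tuple[str, str]]:
    if (owner, "FULL_CONTROL") not in acl:
        return acl + [(owner, "FULL_CONTROL")]
    return acl
```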
Bucket ACL
S3 ACLs are a legacy access control mechanism that predates Identity and Access Management (IAM).
ACLs set on the bucket are written as part of the bucket configuration in Tardis. The ACLs define which S3 bucket operations are allowed
by which user.
Table 13. Grants for S3 operation (continued)
Operation Grant Required
PUT BUCKET ACL WRITE_ACP
Directory permissions
In S3, directories may be implicitly created on a PUT object for keys with delimiters.
For directories created this way, the user issuing the PUT object request becomes the owner of the directory, and the directory mode is
copied from the parent.
S3 Permissions
The following is a list of S3 permissions which OneFS supports.
• AbortMultipartUpload
• DeleteObject
• DeleteObjectVersion
• GetObject
• GetObjectAcl
• GetObjectVersion
• GetObjectVersionAcl
• ListMultipartUploadParts
• PutObject
• PutObjectAcl
• PutObjectVersionAcl
• CreateBucket
• DeleteBucket
• ListBucket
• ListBucketVersions
• ListAllMyBuckets
• ListBucketMultipartUploads
• GetBucketAcl
• PutBucketAcl
Some of these permissions require special handling. The following permissions are handled outside of the bucket, and may be handled in
PAPI:
Table 15. S3 Permissions (continued)
Permission: CreateBucket
Effect: This permission gives users the ability to create a bucket. It can only be used in S3 user policies. Users are allowed or denied
this permission using PAPI bucket configuration.
The following permissions interact with file system ACLs and require extra handling:
You cannot bypass file system permissions. If a user has the ListBucket permission, but does not have read permission on a directory, then
the user cannot list the files in that directory.
Anonymous authentication
Requests sent without an authentication header in S3 are run as the anonymous user.
An anonymous user is mapped to the user 'nobody'.
Managing keys
You can access the S3 key management feature from OneFS web administration interface.
3. View the information related to the default access zone (System), such as access key ID, secret key, creation date, and expiry date.
The details contain only valid (non-expired) keys. An empty response is returned if no key is present in the persistent data store.
Created S3 Key.
S3 key has been created successfully.
8. You can now view the secret key details in a tabular format. The columns are:
• Type: Displays the type of the key (existing or old).
• Secret Keys: Click Show key to view the key and Hide key to hide the key.
• Expiry time: No expiry time is shown for the existing key. If you have created more than one key, the expiry time of the old key
is displayed.
• Creation date: Displays the date and time when the key was created.
The old key expires after the time limit that you have set.
The default expiry time is 10 minutes. The maximum time that you can set is 1440 minutes (24 hours); an error appears if this limit is
exceeded.
Now that you have two secret keys, if you try to create a new key, the Force delete old key check box appears. Select the
check box if you want to forcefully create the key. The first key that you created is no longer valid. Only two keys appear at
one time.
24
SmartQuotas
This section contains the following topics:
Topics:
• SmartQuotas overview
• Quota types
• Default quota type
• Usage accounting and limits
• Disk-usage calculations
• Quota notifications
• Quota notification rules
• Quota reports
• Creating quotas
• Managing quotas
• Managing quota notifications
• Email quota notification messages
• Managing quota reports
• Basic quota settings
• Advisory limit quota notification rules settings
• Soft limit quota notification rules settings
• Hard limit quota notification rules settings
• Limit notification settings
• Quota report settings
SmartQuotas overview
The SmartQuotas module is an optional quota-management tool that monitors and enforces administrator-defined storage limits. Using
accounting and enforcement quota limits, reporting capabilities, and automated notifications, SmartQuotas manages storage use, monitors
disk storage, and issues alerts when disk-storage limits are exceeded.
Quotas help you manage storage usage according to criteria that you define. Quotas are used for tracking—and sometimes limiting—the
amount of storage that a user, group, or directory consumes. Quotas help ensure that a user or department does not infringe on the
storage that is allocated to other users or departments. In some quota implementations, writes beyond the defined space are denied, and
in other cases, a simple notification is sent.
NOTE: Do not apply quotas to /ifs/.ifsvar/ or its subdirectories. If you limit the size of the /ifs/.ifsvar/
directory through a quota, and the directory reaches its limit, jobs such as File-System Analytics fail. A quota blocks
older job reports from being deleted from the /ifs/.ifsvar/ subdirectories to make room for newer reports.
The SmartQuotas module requires a separate license. For more information about the SmartQuotas module or to activate the module,
contact your Dell EMC sales representative.
Quota types
OneFS uses the concept of quota types as the fundamental organizational unit of storage quotas. Storage quotas comprise a set of
resources and an accounting of each resource type for that set. Storage quotas are also called storage domains.
Creating a storage quota requires three identifiers:
• The directory to monitor
• Whether snapshots are tracked against the quota limit
• The quota type (directory, user, or group)
NOTE: Do not create quotas of any type on the OneFS root (/ifs). A root-level quota may significantly degrade
performance.
You can choose a quota type from the following entities:
User: Either a specific user or the default user (every user). Specific-user quotas that you configure take precedence over a default
user quota.
Group: All members of a specific group or all members of the default group (every group). Any specific-group quotas that you
configure take precedence over a default group quota. Associating a group quota with a default group quota creates a linked quota.
You can create multiple quota types on the same directory, but they must be of a different type or have a different snapshot option. You
can specify quota types for any directory in OneFS and nest them within each other to create a hierarchy of complex storage-use policies.
Nested storage quotas can overlap. For example, the following quota settings ensure that the finance directory never exceeds 5 TB, while
limiting the users in the finance department to 1 TB each:
• Set a 5 TB hard quota on /ifs/data/finance.
• Set 1 TB soft quotas on each user in the finance department.
Track storage consumption without specifying a storage limit: The accounting option tracks but does not limit disk-storage use.
Using the accounting option for a quota, you can monitor inode count and physical and logical space resources. Physical space refers
to all of the space that is used to store files and directories, including data, metadata, and data protection overhead in the domain.
There are two types of logical space:
• File system logical size: Logical size of files as per the file system. The sum of all file sizes, excluding file metadata and
data protection overhead.
• Application logical size: Logical size of the file apparent to the application. Used file capacity from the application
point of view, which is usually equal to or less than the file system logical size. However, in the case of a
sparse file, application logical size can be greater than file system logical size. Application logical size includes
capacity consumption on the cluster as well as data tiered to the cloud.
Storage consumption is tracked using file system logical size by default, which does not include protection
overhead. As an example, by using the accounting option, you can do the following:
• Track the amount of disk space that is used by various users or groups to bill each user, group, or directory for
only the disk space used.
• Review and analyze reports that help you identify storage usage patterns and define storage policies.
• Plan for capacity and other storage needs.
Specify storage limits: Enforcement limits include all of the functionality of the accounting option, plus the ability to limit disk storage
and send notifications. Using enforcement limits, you can logically partition a cluster to control or restrict how much
storage a user, group, or directory can use. For example, you can set hard- or soft-capacity limits to ensure
that adequate space is always available for key projects and critical applications and to ensure that users of the
cluster do not exceed their allotted storage capacity. Optionally, you can deliver real-time email quota notifications
to users, group managers, or administrators when they are approaching or have exceeded a quota limit.
NOTE:
If a quota type uses the accounting-only option, enforcement limits cannot be used for that quota.
The actions of an administrator who is logged in as root may push a domain over a quota threshold. For example, changing the protection
level or taking a snapshot has the potential to exceed quota parameters. System actions such as repairs also may push a quota domain
over the limit.
The system provides three types of administrator-defined enforcement thresholds.
Hard: Limits disk usage to a size that cannot be exceeded. If an operation, such as a file write, causes a
quota target to exceed a hard quota, the following events occur:
• The operation fails.
• An alert is logged to the cluster.
• A notification is issued to specified recipients.
Writes resume when the usage falls below the threshold.
Soft: Allows a limit with a grace period that can be exceeded until the grace period expires. When a soft
quota is exceeded, an alert is logged to the cluster and a notification is issued to specified recipients;
however, data writes are permitted during the grace period. If the soft threshold is still exceeded when
the grace period expires, data writes fail, and a notification is issued to the recipients you have specified.
Writes resume when the usage falls below the threshold.
Advisory: An informational limit that can be exceeded. When an advisory quota threshold is exceeded, an alert is
logged to the cluster and a notification is issued to specified recipients. Advisory thresholds do not
prevent data writes.
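The three enforcement thresholds can be summarized in a small sketch. The function, its parameters, and its return values are illustrative, not a OneFS API.

```python
# Sketch of the advisory/soft/hard threshold behavior described above.
def check_thresholds(usage: int, advisory: int, soft: int, hard: int,
                     grace_expired: bool = False) -> tuple[bool, list[str]]:
    """Return (writes_allowed, alerts) for the given usage in bytes."""
    alerts = []
    if usage > advisory:
        alerts.append("advisory threshold exceeded")  # informational only
    if usage > soft:
        alerts.append("soft threshold exceeded")      # writes fail once grace expires
    if usage > hard:
        alerts.append("hard threshold exceeded")      # writes fail immediately
    writes_allowed = usage <= hard and not (usage > soft and grace_expired)
    return writes_allowed, alerts
```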
Disk-usage calculations
For each quota that you configure, you can specify whether physical or logical space is included in future disk usage calculations.
You can configure quotas to include the following types of physical or logical space:
Physical size: Total on-disk space consumed to store files in OneFS. Apart from file data, this counts user metadata (for example,
ACLs and user-specified extended attributes) and data protection overhead. Accounts for on-premise capacity consumption with
data protection. Calculated as: file data blocks (non-sparse regions) + IFS metadata (ACLs, ExAttr, inode) + data protection overhead.
File system logical size: An approximation of disk usage on other systems, ignoring protection overhead. The space consumed
to store files with 1x protection. Accounts for on-premise capacity consumption without data protection. Calculated as: file data
blocks (non-sparse regions) + IFS metadata (ACLs, ExAttr, inode).
Application logical size: The apparent size of the file that a user or application observes; how an application sees space available
for storage regardless of whether files are cloud-tiered, sparse, deduped, or compressed. It is the offset of the file's last byte
(end-of-file). Application logical size is unaffected by the physical location of the data, on or off cluster, and therefore includes
CloudPools capacity across multiple locations. Accounts for on-premise and off-premise capacity consumption without data
protection. The physical size and file system logical size quota metrics count the number of blocks required to store file data
(block-aligned); the application logical size quota metric is not block-aligned. In general, the application logical size is smaller than
either the physical size or file system logical size, because the file system logical size counts the full size of the last block of the
file, whereas application logical size considers only the data present in the last block. However, application logical size is higher
for sparse files.
Most quota configurations do not need to include data protection overhead calculations, and therefore do not need to include physical
space, but instead can include logical space (either file system logical size, or application logical size). If you do not include data protection
overhead in usage calculations for a quota, future disk usage calculations for the quota include only the logical space that is required to
store files and directories. Space that is required for the data protection setting of the cluster is not included.
Consider an example user who is restricted by a 40 GB quota that does not include data protection overhead in its disk usage calculations.
(The 40 GB quota includes file system logical size or application logical size.) If your cluster is configured with a 2x data protection level
and the user writes a 10 GB file to the cluster, that file consumes 20 GB of space, but the 10 GB for the data protection overhead is not
counted in the quota calculation. In this example, the user has reached 25 percent of the 40 GB quota by writing a 10 GB file to the
cluster. This method of disk usage calculation is recommended for most quota configurations.
If you include data protection overhead in usage calculations for a quota, future disk usage calculations for the quota include the total
amount of space that is required to store files and directories, in addition to any space that is required to accommodate your data
protection settings, such as parity or mirroring. For example, consider a user who is restricted by a 40 GB quota that includes data
protection overhead in its disk usage calculations. (The 40 GB quota includes physical size.) If your cluster is configured with a 2x data
protection level (mirrored) and the user writes a 10 GB file to the cluster, that file actually consumes 20 GB of space: 10 GB for the file and
10 GB for the data protection overhead. In this example, the user has reached 50 percent of the 40 GB quota by writing a 10 GB file to the
cluster.
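The two accounting modes above can be sketched with a quick calculation. This is an illustrative script, not an isi command; the numbers match the 2x examples in the text:

```shell
# Illustrative comparison of logical vs. physical quota accounting.
file_gb=10          # size of the file the user writes
protection=2        # 2x mirrored data protection
quota_gb=40         # quota limit

logical=$file_gb                    # logical accounting: overhead not counted
physical=$((file_gb * protection))  # physical accounting: overhead counted

echo "logical usage: ${logical} GB ($((100 * logical / quota_gb))% of quota)"
echo "physical usage: ${physical} GB ($((100 * physical / quota_gb))% of quota)"
```

Writing the same 10 GB file therefore consumes 25 percent of the quota under logical accounting, but 50 percent under physical accounting.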
NOTE: Cloned and deduplicated files are treated as ordinary files by quotas. If the quota includes data protection
overhead, the data protection overhead for shared data is not included in the usage calculation.
You can configure quotas to include the space that is consumed by snapshots. A single path can have two quotas applied to it: one
without snapshot usage, which is the default, and one with snapshot usage. If you include snapshots in the quota, more files are included
in the calculation than are in the current directory. The actual disk usage is the sum of the current directory and any snapshots of that
directory. You can see which snapshots are included in the calculation by examining the .snapshot directory for the quota path.
NOTE: Only snapshots created after the QuotaScan job finishes are included in the calculation.
Quota notifications
Quota notifications are generated for enforcement quotas, providing users with information when a quota violation occurs. Reminders are
sent periodically while the condition persists.
Each notification rule defines the condition that is to be enforced and the action that is to be executed when the condition is true. An
enforcement quota can define multiple notification rules. When thresholds are exceeded, automatic email notifications can be sent to
specified users, or you can monitor notifications as system alerts or receive emails for these events.
Notifications can be configured globally, to apply to all quota domains, or be configured for specific quota domains.
Enforcement quotas support the following notification settings. A given quota can use only one of these settings.
Use the system settings for quota notifications: Uses the global default notification for the specified type of quota.
Create custom notification rules: Enables the creation of advanced, custom notifications that apply to the specific quota. Custom notifications can be configured for any or all of the threshold types (hard, soft, or advisory) for the specified quota.
Instant notifications: Includes the write-denied notification, triggered when a hard threshold denies a write, and the threshold-exceeded notification, triggered at the moment a hard, soft, or advisory threshold is exceeded. These are one-time notifications because they represent a discrete event in time.
Ongoing notifications: Generated on a scheduled basis to indicate a persisting condition, such as a hard, soft, or advisory threshold being over a limit or a soft threshold's grace period being expired for a prolonged period.
Quota reports
The OneFS SmartQuotas module provides reporting options that enable administrators to manage cluster resources and analyze usage
statistics.
Storage quota reports provide a summarized view of the past or present state of the quota domains. After raw reporting data is collected
by OneFS, you can produce data summaries by using a set of filtering parameters and sort types. Storage-quota reports include
information about violators, grouped by threshold types. You can generate reports from a historical data sample or from current data. In
either case, the reports are views of usage data at a given time. OneFS does not provide reports on data aggregated over time, such as
trending reports, but you can use raw data to analyze trends. There is no configuration limit on the number of reports other than the
space needed to store them.
OneFS provides the following data-collection and reporting methods:
• Scheduled reports are generated and saved on a regular interval.
• Ad hoc reports are generated and saved at the request of the user.
• Live reports are generated for immediate and temporary viewing.
Scheduled reports are placed by default in the /ifs/.isilon/smartquotas/reports directory, but the location is configurable to
any directory under /ifs. Each generated report includes quota domain definition, state, usage, and global configuration settings. By
default, ten reports are kept at a time, and older reports are purged. You can create ad hoc reports at any time to view the current state
of the storage quotas system. These live reports can be saved manually. Ad hoc reports are saved to a location that is separate from
scheduled reports to avoid skewing the timed-report sets.
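The default retention behavior (keep ten reports, purge older ones) can be sketched as follows. The `prune_reports` helper is hypothetical and purely illustrative; it is not part of OneFS:

```shell
# Hypothetical sketch of scheduled-report retention: keep the newest N
# report files in a directory and delete the rest.
prune_reports() {
  dir=$1
  keep=${2:-10}
  # List report files newest first; everything after line $keep is purged.
  ls -t "$dir"/*.xml 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm --
}
```

For example, `prune_reports /ifs/.isilon/smartquotas/reports 10` would keep only the ten newest reports in the default report directory.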
Creating quotas
You can create two types of storage quotas to monitor data: accounting quotas and enforcement quotas. Storage quota limits and
restrictions can apply to specific users, groups, or directories.
The type of quota that you create depends on your goal.
• Enforcement quotas monitor and limit disk usage. You can create enforcement quotas that use any combination of hard limits, soft
limits, and advisory limits.
NOTE: Enforcement quotas are not recommended for snapshot-tracking quota domains.
• Accounting quotas monitor, but do not limit, disk usage.
NOTE: Before using quota data for analysis or other purposes, verify that no QuotaScan jobs are running.
4. Depending on the target that you selected, select the entity that you want to apply the quota to. For example, if you selected User
quota from the Quota type list, you can target either all users or a specific user.
5. In the Path field, type the path for the quota, or click Browse, and then select a directory.
6. In the Quota accounting area, select the options that you want to use.
• To include snapshot data in the accounting quota, select Include snapshots in the storage quota.
• To include the metadata and data protection overhead in the accounting quota, select Physical size.
• To include the physical size minus the metadata and data protection overhead, select File system logical size.
• To include capacity consumption on the cluster as well as data tiered to the cloud, select Application logical size. This
accounting quota does not measure file system space, but provides the application/user view of the used file capacity.
7. In the Quota limits area, select Specify storage limits.
8. Select the check box next to each limit that you want to enforce.
9. In the fields, type the numeric values, and from the lists, select the units that you want to use for the quota.
10. Select how available space should be shown:
• Size of smallest hard or soft threshold
• Size of cluster
11. In the Quota notifications area, select the notification option that you want to apply to the quota.
12. Optional: If you selected the option to use custom notification rules, click the link to expand the custom notification type that applies to
the usage-limit selections.
13. Click Create quota.
Before using quota data for analysis or other purposes, verify that no QuotaScan jobs are in progress by checking Cluster management
> Job operations > Job summary.
Managing quotas
You can modify the configured values of a storage quota, and you can enable or disable a quota. You can also create quota limits and
restrictions that apply to specific users, groups, or directories.
Quota management in OneFS is simplified by the quota search feature, which helps you locate a quota or quotas by using filters. You can
unlink quotas that are associated with a parent quota, and configure custom notifications for quotas. You can also disable a quota
temporarily and then enable it when needed.
To clear the result set and display all storage quotas, click Reset.
Manage quotas
Quotas help you monitor and analyze the current or historical use of disk storage. You can search for quotas, and you can view, modify,
delete, and unlink a quota.
You must run an initial QuotaScan job for the default or scheduled quotas, or the data that is displayed may be incomplete.
Before you modify a quota, consider how the changes will affect the file system and end users.
NOTE:
• The options to edit or delete a quota display only when the quota is not linked to a default quota.
• The option to unlink a quota is available only when the quota is linked to a default quota.
1. Click File System > SmartQuotas > Quotas and usage.
2. Optional: In the filter bar, select the options that you want to filter by.
• From the Filters list, select the quota type that you want to find (Directory, User, Group, Default user, or Default group).
• To search for quotas that are over the limit, select Over limit from the Exceeded list.
• In the Path field, type a full or partial path. You can use the wildcard character (*) in the Path field.
• To search subdirectories, select Include children from the Recursive path list.
Quotas that match the search criteria appear in the Quotas and usage table.
3. Optional: Locate the quota that you want to manage. You can perform the following actions:
• To review or edit this quota, click View Details.
• To delete the quota, click Delete.
• To unlink a linked quota, click Unlink.
NOTE: Configuration changes for linked quotas must be made on the parent (default) quota that the linked quota
is inheriting from. Changes to the parent quota are propagated to all children. If you want to override
configuration from the parent quota, you must first unlink the quota.
The system parses the file and imports the quota settings from the configuration file. Quota settings that you configured before
importing the quota configuration file are retained, and the imported quota settings are effective immediately.
Managing quota notifications
Quota notifications can be enabled or disabled, modified, and deleted.
By default, a global quota notification is already configured and applied to all quotas. You can continue to use the global quota notification
settings, modify the global notification settings, or disable or set a custom notification for a quota.
Enforcement quotas support four types of notifications and reminders:
• Threshold exceeded
• Over-quota reminder
• Grace period expired
• Write access denied
If a directory service is used to authenticate users, you can configure notification mappings that control how email addresses are resolved
when the cluster sends a quota notification. If necessary, you can remap the domain that is used for quota email notifications and you can
remap Active Directory domains, local UNIX domains, or both.
Before using quota data for analysis or other purposes, verify that no QuotaScan jobs are in progress by checking Cluster management > Job operations > Job summary.
NOTE: You must be logged in to the web administration interface to perform this task.
quota_email_template.txt: A notification that the disk quota has been exceeded.
quota_email_grace_template.txt: A notification that the disk quota has been exceeded, including a parameter that defines a grace period in number of days.
quota_email_test_template.txt: A test message that you can use to verify that a user is receiving email notifications.
If the default email notification templates do not meet your needs, you can configure your own custom email notification templates by
using a combination of text and SmartQuotas variables. Whether you choose to create your own templates or modify the existing ones,
make sure that the first line of the template file is a Subject: line. For example:
Subject: Disk quota exceeded
If you want to include information about the message sender, include a From: line immediately under the subject line. If you use an email
address, include the full domain name for the address. For example:
From: [email protected]
In this example of the quota_email_template.txt file, a From: line is included. Additionally, the default text "Contact your system
administrator for details" at the end of the template is changed to name the administrator:
Subject: Disk quota exceeded
From: [email protected]
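A complete version of this customized template might look like the following. The message body is illustrative only: the ISI_QUOTA_* variable names are assumptions drawn from the SmartQuotas template variable list, and the administrator name is a placeholder, so verify both against your OneFS version before using them:

Subject: Disk quota exceeded
From: [email protected]

The disk quota for directory ISI_QUOTA_PATH has been exceeded.
Current usage: ISI_QUOTA_USAGE
Threshold: ISI_QUOTA_THRESHOLD
Contact Jane Anderson for details.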
cp /etc/ifs/quota_email_template.txt /ifs/data/quotanotifiers/quota_email_template_copy.txt
edit /ifs/data/quotanotifiers/quota_email_template_copy.txt
7. In the Notification Rules area, click Add a Notification Rule.
The Create a Notification Rule dialog box appears.
8. From the Rule type list, select the notification rule type that you want to use with the template.
9. In the Rule Settings area, select a notification type option.
10. Depending on the rule type that was selected, a schedule form might appear. Select the scheduling options that you want to use.
11. In the Message template field, type the path for the message template, or click Browse to locate the template.
12. Optional: Click Create Rule
• To view a list of all quota reports in the directory, run the following command:
ls -a *.xml
• To view a specific quota report in the directory, run the following command:
ls <filename>.xml
User Quota: Create a quota for every current or future user that stores data in the specified directory.
Group Quota: Create a quota for every current or future group that stores data in the specified directory.
Include snapshots in the storage quota: Count all snapshot data in usage limits. This option cannot be changed after the quota is created.
Enforce the limits for this quota based on physical size: Base quota enforcement on storage usage, which includes metadata and data protection overhead.
Enforce the limits for this quota based on file system logical size: Base quota enforcement on storage usage, which does not include metadata and data protection overhead.
Enforce the limits for this quota based on application logical size: Base quota enforcement on storage usage, which includes capacity consumption on the cluster as well as data tiered to the cloud.
Track storage without specifying a storage limit: Account for usage only.
Specify storage limits: Set and enforce advisory, soft, or absolute limits.
…separated email addresses. Duplicate email addresses are identified, and only unique addresses are stored. You can enter a maximum of 1,024 characters of comma-separated email addresses.
Message template: Type the path for the custom template, or click Browse to locate the custom template. Leave the field blank to use the default template. (Available for the Exceeded and Remains exceeded rule types.)
Message template: Type the path for the custom template, or click Browse to locate the custom template. Leave the field blank to use the default template. (Available for the Exceeded, Remains exceeded, Grace period expired, and Write access denied rule types.)
Hard limit quota notification rules settings
You can configure custom quota notification rules for hard limits for a quota. These settings are available when you select the option to
use custom notification rules.
Message template: Type the path for the custom template, or click Browse to locate the custom template. Leave the field blank to use the default template.
Use the system settings for quota notifications: Use the default notification rules that you configured for the specified threshold type.
Create custom notification rules: Provide settings to create basic custom notifications that apply only to this quota.
Report frequency: Specifies the interval for this report to run: daily, weekly, monthly, or yearly. You can use the following options to further refine the report schedule.
• Generate report every. Specify the numeric value for the selected report frequency; for example, every 2 months.
• Generate reports on. Select the day or multiple days to generate reports.
• Select report day by. Specify the date or day of the week to generate the report.
• Generate one report per specified by. Set the time of day to generate this report.
• Generate multiple reports per specified day. Set the intervals and times of day to generate the report for that day.
Scheduled report archiving: Determines the maximum number of scheduled reports that are available for viewing on the SmartQuotas Reports page.
• Limit archive size for scheduled reports to a specified number of reports. Type the integer to specify the maximum number of reports to keep.
• Archive Directory. Browse to the directory where you want to store quota reports for archiving.
Manual report archiving: Determines the maximum number of manually generated (on-demand) reports that are available for viewing on the SmartQuotas Reports page.
• Limit archive size for live reports to a specified number of reports. Type the integer to specify the maximum number of reports to keep.
• Archive Directory. Browse to the directory where you want to store quota reports for archiving.
25
Storage Pools
This section contains the following topics:
Topics:
• Storage pools overview
• Storage pool functions
• Autoprovisioning
• Node pools
• Virtual hot spare
• Spillover
• Suggested protection
• Protection policies
• SSD strategies
• Other SSD mirror settings
• Global namespace acceleration
• L3 cache overview
• Tiers
• File pool policies
• Managing node pools in the web administration interface
• Managing L3 cache from the web administration interface
• Managing tiers
• Creating file pool policies
• Managing file pool policies
• Monitoring storage pools
Autoprovisioning of node pools: Automatically groups equivalent nodes into node pools for optimal storage efficiency and protection. At least three equivalent nodes are required for autoprovisioning to work.
Tiers: Groups node pools into logical tiers of storage. If you activate a SmartPools license for this feature, you can create custom file pool policies and direct different file pools to appropriate storage tiers.
Default file pool policy: Governs all file types and can store files anywhere on the cluster. Custom file pool policies, which require a SmartPools license, take precedence over the default file pool policy.
Requested protection: Specifies a requested protection setting for the default file pool, per node pool, or even on individual files. You can leave the default setting in place, or choose the suggested protection calculated by OneFS for optimal data protection.
Virtual hot spare: Reserves a portion of available storage space for data repair in the event of a disk failure.
SSD strategies: Defines the type of data that is stored on SSDs in the cluster. For example, storing metadata for read/write acceleration.
L3 cache: Specifies that SSDs in nodes are used to increase cache memory and speed up file system performance across larger working file sets.
Global namespace acceleration: Activates global namespace acceleration (GNA), which enables data stored on node pools without SSDs to access SSDs elsewhere in the cluster to store extra metadata mirrors. Extra metadata mirrors accelerate metadata read operations.
When you activate a SmartPools license, OneFS provides the following additional functions:
Custom file pool policies: Creates custom file pool policies to identify different classes of files, and stores these file pools in logical storage tiers. For example, you can define a high-performance tier of node pools and an archival tier of high-capacity node pools. Then, with custom file pool policies, you can identify file pools based on matching criteria, and you can define actions to perform on these pools. For example, one file pool policy can identify all JPEG files older than a year and store them in an archival tier. Another policy can move all files that were created or modified within the last three months to a performance tier.
Storage pool spillover: Enables automated capacity overflow management for storage pools. Spillover defines how to handle write operations when a storage pool is not writable. If spillover is enabled, data is redirected to a specified storage pool. If spillover is disabled, new data writes fail and an error message is sent to the client that is attempting the write operation.
Autoprovisioning
When you add a node to a cluster, OneFS attempts to assign the node to a node pool. This process is known as autoprovisioning, which
helps OneFS to provide optimal performance, load balancing, and file system integrity across a cluster.
A node is not autoprovisioned to a node pool and made writable until at least three equivalent nodes are added to the cluster. If you add
only two equivalent nodes, no data is stored on these nodes until a third equivalent node is added.
If a node fails or is removed from the cluster so that fewer than three nodes remain, the node pool becomes underprovisioned. In this
case, the two remaining nodes are still writable. If only one node remains, the node is not writable, but remains readable.
Node pools
A node pool is a group of three or more nodes that forms a single pool of storage. As you add nodes to the cluster, OneFS attempts to
automatically provision the new nodes into node pools.
To autoprovision a node, OneFS requires that the new node be equivalent to the other nodes in the node pool. If the new node is
equivalent, OneFS provisions the new node to the node pool. All nodes in a node pool are peers, and data is distributed across nodes in the
pool. Each provisioned node increases the aggregate disk, cache, CPU, and network capacity of the cluster.
We strongly recommend that you let OneFS handle node provisioning. However, if you have a special requirement or use case, you can
move nodes from an autoprovisioned node pool into a node pool that you define manually. The capability to create manually-defined node
pools is available only through the OneFS command-line interface, and should be deployed only after consulting with Dell EMC PowerScale
Technical Support.
If you try to remove a node from a node pool for the purpose of adding it to a manual node pool, and the result would leave fewer than
three nodes in the original node pool, the removal fails. When you remove a node from a manually-defined node pool, OneFS attempts to
autoprovision the node back into an equivalent node pool.
If you add fewer than three equivalent nodes to your cluster, OneFS cannot autoprovision these nodes. In these cases, you can add new
node types to existing node pools. Adding the new node types can enable OneFS to provision the newly added nodes to a compatible node
pool.
Node pools can use SSDs either as storage or as L3 cache, but not both, with one exception: PowerScale F200 and F600 nodes are all-SSD nodes, so their SSDs can be used only as storage. Enabling L3 cache on F200 and F600 nodes is not an option.
NOTE: Do not use NL nodes in node pools used for NFS or SMB. It is recommended that you use high performance
nodes to handle NFS and SMB workloads.
Compatibilities
OneFS cannot autoprovision new nodes if there are compatibility restrictions between the new nodes and the existing nodes in a node
pool. To enable new nodes to join a compatible node pool, you can add a new node type to the existing node pool. You modify node pool
compatibilities using the command line interface.
For example, if your cluster already has an X410 node pool and you add a new X410 node, OneFS would attempt to autoprovision the new
node to the X410 node pool. However, if the new X410 node has different RAM than the older X410 nodes, then OneFS cannot
autoprovision the new node. To enable the new node to be provisioned into the existing X410 node pool, you must add the new X410 node
type to the existing X410 node pool. Use the isi storagepool nodetypes list command to view the node types and their IDs,
then isi storagepool nodepools modify <nodepool_name> --add-node-type-ids=<new_nodetype_id> to add
the new node type to the existing node pool. For example, suppose that your x410 nodepool name is x410_nodepool and isi
storagepool nodetypes list shows the new node type ID as 12:
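Given those assumptions (pool name x410_nodepool, node type ID 12), the commands would be similar to the following; verify the pool name and type ID on your own cluster first:

isi storagepool nodetypes list
isi storagepool nodepools modify x410_nodepool --add-node-type-ids=12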
You can also remove a node type from a node pool, for example, if you want to move that node type into its own pool. Using the above
example, to remove the X410 nodes with a different RAM capacity and node type ID 12 from the X410 node pool:
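A command similar to the following would remove the node type. The --remove-node-type-ids option shown here is an assumption mirroring the add option above, so verify the exact syntax against your OneFS version:

isi storagepool nodepools modify x410_nodepool --remove-node-type-ids=12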
Some compatibility restrictions apply because there may be performance impacts if you add a particular node type to an existing node
pool, or because a node type is incompatible with the nodes in an existing node pool. In that case, OneFS generates a message describing
the compatibility issue.
NOTE: SSD compatibilities require that all nodes have L3 cache enabled. If you attempt to move nodes with SSDs into a
node pool that does not have L3 cache enabled, the process will fail with an error message. Ensure that the existing
node pool has L3 cache enabled and try again. L3 cache can only be enabled on nodes that have fewer than 16 SSDs and
at least a 2:1 ratio of HDDs to SSDs. On Generation 6 nodes that support SSD compatibilities, SSD count is ignored. If
SSDs are used for storage, then SSD counts must be identical on all nodes in a node pool. If SSD counts are left
unbalanced, node pool efficiency and performance will be less than optimal.
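The SSD-count and ratio rule in the note above can be expressed as a quick check. The `l3_cache_eligible` helper is illustrative only, not an isi command:

```shell
# Sketch of the L3 cache eligibility rule stated in the note above:
# fewer than 16 SSDs and at least a 2:1 ratio of HDDs to SSDs.
l3_cache_eligible() {
  ssds=$1
  hdds=$2
  [ "$ssds" -lt 16 ] && [ "$hdds" -ge $((2 * ssds)) ]
}

l3_cache_eligible 6 15 && echo "eligible" || echo "not eligible"   # 15 HDDs >= 12
l3_cache_eligible 16 40 && echo "eligible" || echo "not eligible"  # 16 SSDs is too many
```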
Compatibility restrictions
OneFS enforces pool and node type restrictions for cluster configuration and node compatibility. Restrictions represent the rules
governing cluster configuration and node compatibility. They prevent performance degradation of the node types within a node pool.
OneFS supports the following restriction types.
• Hard node type restriction: A rule that prohibits a change outright. If you try to modify a cluster configuration in a way that generates a hard restriction, the modification fails. OneFS presents a message that describes the restrictions that resulted in denying the modification request.
• Soft node type restriction: A rule that is allowed but requires confirmation before being implemented. If you try to modify a cluster
configuration in a way that generates a soft restriction, OneFS presents an advisory notice. To continue, you must confirm the
modification.
NOTE: If the modification request results in both hard and soft restrictions, OneFS reports only the hard
restrictions.
• Pool restriction: A rule that exists for a node pool.
○ Hard pool restriction: A rule that represents an invalid change to a node group. For example, you cannot modify a manual node pool
or modify a pool in a way that results in that pool being underprovisioned.
○ Soft pool restriction: A rule that represents a change to a node group that requires confirmation. Requesting a modification that
results in a soft pool restriction generates an advisory notice. To continue, you must confirm the modification.
Some examples of hard and soft restrictions are as follows.
• There are hard node type restrictions for the PowerScale F200 and F600 node types.
○ F200 and F600 node types are incompatible with each other and with previous node types.
○ F200 node types can form node pools only with other compatible F200 nodes.
○ F600 node types can form node pools only with other compatible F600 nodes.
○ F200 and F600 are storage-only nodes; their SSDs cannot be used as L3 cache.
○ F200 nodes must have the same SSD size to be considered compatible.
If you try to add F200 or F600 nodes to an incompatible node pool, the modification fails.
• There is a soft node type restriction for different RAM capacities. Any difference in RAM is allowed and there are no RAM ranges for
compatibilities. If you add a node to a node pool that has different RAM than existing nodes in that pool, OneFS displays an advisory
notice. Confirm the operation to add the node to the node pool.
Spillover
When you activate a SmartPools license, you can designate a node pool or tier to receive spillover data when the hardware specified by a
file pool policy is full or otherwise not writable.
If you do not want data to spill over to a different location because the specified node pool or tier is full or not writable, you can disable this
feature.
NOTE: Virtual hot spare reservations affect spillover. If the setting Deny data writes to reserved disk space is enabled,
while Ignore reserved space when calculating available free space is disabled, spillover occurs before the file system
reports 100% utilization.
Suggested protection
Based on the configuration of your PowerScale cluster, OneFS automatically calculates the amount of protection that is recommended to maintain Dell EMC PowerScale's stringent data protection requirements.
OneFS includes a function to calculate the suggested protection for data to maintain a theoretical mean-time to data loss (MTTDL) of
5000 years. Suggested protection provides the optimal balance between data protection and storage efficiency on your cluster.
By configuring file pool policies, you can specify one of multiple requested protection settings for a single file, for subsets of files called file
pools, or for all files on the cluster.
It is recommended that you do not specify a setting below suggested protection. OneFS periodically checks the protection level on the cluster, and alerts you if data falls below the recommended protection.
Protection policies
OneFS provides a number of protection policies to choose from when protecting a file or specifying a file pool policy.
The more nodes you have in your cluster, up to 20 nodes, the more efficiently OneFS can store and protect data, and the higher levels of
requested protection the operating system can achieve. Depending on the configuration of your cluster and how much data is stored,
OneFS might not be able to achieve the level of protection that you request. For example, if you have a three-node cluster that is
approaching capacity, and you request +2n protection, OneFS might not be able to deliver the requested protection.
The available protection policies in OneFS include requested protection levels from +1n through +4n, and mirroring policies from 2x through 8x, where an Nx policy stores N mirrored copies of the data.
SSD strategies
OneFS clusters can contain nodes that include solid-state drives (SSD). OneFS autoprovisions nodes with SSDs into one or more node
pools. The SSD strategy defined in the default file pool policy determines how SSDs are used within the cluster, and can be set to increase
performance across a wide range of workflows. SSD strategies apply only to SSD storage.
You can configure file pool policies to apply specific SSD strategies as needed. When you select SSD options during the creation of a file
pool policy, you can identify the files in the OneFS cluster that require faster or slower performance. When the SmartPools job runs,
OneFS uses file pool policies to move this data to the appropriate storage pool and drive type.
The following SSD strategy options, which you can set in a file pool policy, are listed from the slowest to the fastest choice:
Avoid SSDs: Writes all associated file data and metadata to HDDs only.
CAUTION: Use this option to free SSD space only after consulting with PowerScale Technical Support. Using this strategy can negatively affect performance.
Metadata read acceleration: Writes both file data and metadata to HDDs. This is the default setting. An extra mirror of the file metadata is written to SSDs, if available. The extra SSD mirror is included in the number of mirrors, if any, required to satisfy the requested protection.
Metadata read/write acceleration: Writes file data to HDDs and metadata to SSDs, when available. This strategy accelerates metadata writes in addition to reads but requires about four to five times more SSD storage than the Metadata read acceleration setting. Enabling GNA does not affect read/write acceleration.
Data on SSDs: Uses SSD node pools for both data and metadata, regardless of whether global namespace acceleration is enabled. This SSD strategy does not result in the creation of additional mirrors beyond the normal requested protection but requires significantly increased storage requirements compared with the other SSD strategy options.
Note the following considerations for setting and applying SSD strategies.
• To use an SSD strategy that stores metadata or data on SSDs, the node pool or tier must contain SSD storage; otherwise, the strategy is ignored.
• If you specify an SSD strategy but no storage of the specified type exists, the strategy is ignored.
• If you specify an SSD strategy that stores metadata or data on SSDs but the SSD storage is full, OneFS attempts to spill data to HDD. If HDD storage is also full, OneFS raises an out-of-space error.
L3 cache overview
You can configure nodes with solid-state drives (SSDs) to increase cache memory and speed up file system performance across larger
working file sets.
OneFS caches file data and metadata at multiple levels. The following table describes the types of file system cache available on a
PowerScale cluster.
NOTE: L3 cache can only be enabled on nodes that have fewer than 16 SSDs and at least a 2:1 ratio of HDDs to SSDs.
OneFS caches frequently accessed file data and metadata in available random access memory (RAM). Caching enables OneFS to optimize data
protection and file system performance. When the RAM cache reaches capacity, OneFS normally discards the oldest cached data and
processes new data requests by accessing the storage drives. This cycle repeats each time the RAM cache fills up.
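The cache-cycling behavior described above amounts to a least-recently-used (LRU) eviction policy: when the cache is full, the oldest entry is discarded to make room for new data. The following sketch illustrates the concept only; the RamCache class and its capacity are hypothetical and not part of OneFS.

```python
from collections import OrderedDict

class RamCache:
    """Conceptual sketch of an LRU cache: when full, the oldest
    (least-recently-used) entry is discarded to make room."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None  # cache miss: caller must read from the storage drives
        self.entries.move_to_end(key)  # mark as recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry

cache = RamCache(capacity=2)
cache.put("block-a", b"...")
cache.put("block-b", b"...")
cache.get("block-a")          # block-a is now the most recently used
cache.put("block-c", b"...")  # evicts block-b, the oldest entry
print(cache.get("block-b"))   # None: block-b must be re-read from storage
```

L3 cache, described next, gives evicted data a second, larger tier to land in before a request must go all the way to the HDDs.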
You can deploy SSDs as L3 cache to reduce the cache cycling issue and further improve file system performance. L3 cache adds
significantly to the available cache memory and provides faster access to data than hard disk drives (HDD).
As L2 cache reaches capacity, OneFS evaluates data to be released and, depending on your workflow, moves the data to L3 cache. In this
way, much more of the most frequently accessed data is held in cache, and overall file system performance is improved.
Migration to L3 cache
L3 cache is enabled by default on new nodes.
You can enable L3 cache as the default for all new node pools or manually for a specific node pool, either through the command line or
from the web administration interface. L3 cache can be enabled only on node pools with nodes that contain SSDs. When you enable L3
cache, OneFS migrates data that is stored on the SSDs to HDD storage disks and then begins using the SSDs as cache.
When you enable L3 cache, OneFS displays the following message:
WARNING: Changes to L3 cache configuration can have a long completion time. If this is a
concern, please contact PowerScale Technical Support for more information.
You must confirm whether OneFS should proceed with the migration. After you confirm the migration, OneFS handles the migration as a
background process, and, depending on the amount of data stored on your SSDs, the process of migrating data from the SSDs to the
HDDs might take a long time.
NOTE: You can continue to administer your cluster while the data is being migrated.
Nodes and comments:
HD-series: For all node pools made up of HD-series nodes, L3 cache stores only metadata on SSDs and cannot be disabled.
Generation 6 A-series: For all node pools made up of Generation 6 A-series nodes, L3 cache stores only metadata on SSDs and cannot be disabled.
Tiers
A tier is a user-defined collection of node pools that you can specify as a storage pool for files. A node pool can belong to only one tier.
You can create tiers to assign your data to any of the node pools in the tier. For example, you can assign a collection of node pools to a tier
specifically created to store data that requires high availability and fast access. In a three-tier system, this classification may be Tier 1. You
can classify data that is used less frequently or that is accessed by fewer users as Tier-2 data. Tier 3 usually comprises data that is seldom
used and can be archived for historical or regulatory purposes.
FilePolicy job
You can use the FilePolicy job to apply file pool policies.
The FilePolicy job supplements the SmartPools job by scanning the file system index that the File System Analytics (FSA) job uses. You
can use this job if you are already using snapshots (or FSA) and file pool policies to manage data on the cluster. The FilePolicy job is an
efficient way to keep inactive data away from the fastest tiers. The scan is done on the index, which does not require many locks. In this
way, you can vastly reduce the number of times a file is visited before it is tiered down.
Continue to down-tier data in the ways you already do, such as with file pool policies that move data based on a fixed age, adjusting those policies based on how full the tiers are.
To ensure that the cluster is correctly laid out and adequately protected, run the SmartPools job. Run the SmartPools job after modifying the cluster (such as adding or removing nodes), after modifying SmartPools settings (such as default protection settings), and when a node is down.
To use this feature, you must schedule the FilePolicy job daily and continue running the SmartPools job at a lower frequency. You can run
the SmartPools job after events that may affect node pool membership.
You can use the following options when running the FilePolicy job:
• --directory-only: Processes directories only. Use this option to redirect the ingest of new files.
• --policy-only: Sets policies; does not restripe.
• --ingest: Uses --directory-only and --policy-only in combination.
• --nop: Calculates and reports the work that would be done without making any changes.
Managing tiers
You can move node pools into tiers to optimize file and storage management. Managing tiers requires the SmartPools or higher
administrative privilege.
Create a tier
You can create a tier that contains one or more node pools and use the tier to store specific categories of files.
1. Click File System > Storage Pools > SmartPools.
The SmartPools tab appears with two sections: Tiers and pools and Compatibilities.
Edit a tier
You can modify the name and change the node pools that are assigned to a tier.
A tier name can contain alphanumeric characters and underscores but cannot begin with a number.
1. Click File System > Storage Pools > SmartPools.
The SmartPools tab displays two groups: Tiers and pools and Compatibilities.
2. In the Tiers and pools area, click the tier you want to edit.
3. In the Edit Tier Details page, modify the following settings as needed:
Option Description
Tier Name To change the name of the tier, select and type over the existing name.
Node Pool Selection To change the node pool selection, select a node pool, and click either Add or Remove.
4. When you have finished editing tier settings, click Submit.
Delete a tier
You can delete a tier that has no assigned node pools.
If you want to delete a tier that does have assigned node pools, you must first remove the node pools from the tier.
1. Click File System > Storage Pools > SmartPools.
The SmartPools tab displays two lists: Tiers and pools and Compatibilities.
2. In the Tiers and pools list, go to the Actions column of the tier that you want to delete and click the X.
A message box asks you to confirm or cancel the operation.
3. Click Delete to confirm the operation.
The tier is removed from the Tiers and pools list.
If existing file pool policies direct data to a specific storage pool, do not configure other file pool policies with anywhere
for the Data storage target option. Because the specified storage pool is included when you use anywhere, target
specific storage pools to avoid unexpected results.
1. Click File System > Storage Pools > File Pool Policies.
2. Click Create a File Pool Policy.
3. In the Create a File Pool Policy dialog box, enter a policy name and, optionally, a description.
4. Specify the files to be managed by the file pool policy.
To define the file pool, you can specify file matching criteria by combining IF, AND, and OR conditions. You can define these conditions
with a number of file attributes, such as name, path, type, size, and timestamp information.
5. Specify SmartPools actions to be applied to the selected file pool.
You can specify storage and I/O optimization settings to be applied.
6. Click Create Policy.
The file pool policy is created and applied when the next scheduled SmartPools system job runs. By default, this job runs once a day, but
you also have the option to start the job immediately.
OneFS supports UNIX shell-style (glob) pattern matching for file name attributes and paths.
The following table lists the file attributes that you can use to define a file pool policy.
File type Includes or excludes files based on one of the following file-system object types:
• File
• Directory
• Other
Modified Includes or excludes files based on when the file was last modified.
In the web administration interface, you can specify a relative date and time, such as
"older than 2 weeks," or a specific date and time, such as "before January 1, 2012."
Time settings are based on a 24-hour clock.
Created Includes or excludes files based on when the file was created.
In the web administration interface, you can specify a relative date and time, such as
"older than 2 weeks," or a specific date and time, such as "before January 1, 2012."
Time settings are based on a 24-hour clock.
Metadata changed Includes or excludes files based on when the file metadata was last modified. This
option is available only if the global access-time-tracking option of the cluster is
enabled.
In the web administration interface, you can specify a relative date and time, such as
"older than 2 weeks," or a specific date and time, such as "before January 1, 2012."
Time settings are based on a 24-hour clock.
Accessed Includes or excludes files based on when the file was last accessed.
In the web administration interface, you can specify a relative date and time, such as
"older than 2 weeks," or a specific date and time, such as "before January 1, 2012."
Time settings are based on a 24-hour clock.
NOTE: Because it affects performance, access time tracking as a file pool
policy criterion is disabled by default.
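A relative-time criterion such as "older than 2 weeks" comes down to simple timestamp arithmetic against the file's recorded time. The sketch below illustrates the idea only; the is_older_than helper and the fixed timestamps are hypothetical, not a OneFS API.

```python
import time

TWO_WEEKS = 14 * 24 * 60 * 60  # two weeks expressed in seconds

def is_older_than(mtime, max_age_seconds, now=None):
    """Return True if a timestamp is older than the given age,
    mirroring a relative-time criterion such as 'older than 2 weeks'."""
    if now is None:
        now = time.time()
    return (now - mtime) > max_age_seconds

now = 1_700_000_000                  # fixed 'current' time for the example
recent = now - 3 * 24 * 60 * 60      # modified 3 days ago
stale = now - 30 * 24 * 60 * 60      # modified 30 days ago
print(is_older_than(recent, TWO_WEEKS, now))  # False
print(is_older_than(stale, TWO_WEEKS, now))   # True
```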
Wildcard Description
* Matches any string in place of the asterisk.
For example, m* matches movies and m123.
[a-z] Matches any characters contained in the brackets, or a range of characters separated by a hyphen.
For example, b[aei]t matches bat, bet, and bit, and 1[4-7]2 matches 142, 152, 162, and
172.
You can exclude characters within brackets by following the first bracket with an exclamation mark.
For example, b[!ie]t matches bat but not bit or bet.
You can match a bracket within a bracket if it is either the first or last character. For example,
[[c]at matches cat and [at.
You can match a hyphen within a bracket if it is either the first or last character. For example,
car[-s] matches cars and car-.
? Matches any character in place of the question mark. For example, t?p matches tap, tip, and
top.
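The wildcard behavior in the table above follows UNIX shell-style (glob) semantics, which can be checked against Python's standard fnmatch module:

```python
from fnmatch import fnmatch

# '*' matches any string
assert fnmatch("movies", "m*") and fnmatch("m123", "m*")

# '[a-z]' matches one character from a set or range
assert fnmatch("bat", "b[aei]t") and fnmatch("bet", "b[aei]t")
assert fnmatch("152", "1[4-7]2") and not fnmatch("182", "1[4-7]2")

# '[!...]' excludes the characters listed in the brackets
assert fnmatch("bat", "b[!ie]t") and not fnmatch("bit", "b[!ie]t")

# '?' matches exactly one character
assert fnmatch("tip", "t?p") and not fnmatch("trap", "t?p")

print("all wildcard examples match as described")
```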
SmartPools settings
SmartPools settings include directory protection, global namespace acceleration, L3 cache, virtual hot spare, spillover, requested
protection management, and I/O optimization management.
Enable global namespace acceleration (--global-namespace-acceleration-enabled)
Specifies whether to allow per-file metadata to use SSDs in the node pool.
• When disabled, restricts per-file metadata to the storage pool policy of the file, except in the case of spillover. This is the default setting.
• When enabled, allows per-file metadata to use the SSDs in any node pool.
This setting is available only if 20 percent or more of the nodes in the cluster contain SSDs and at least 1.5 percent of the total cluster storage is SSD-based. If nodes are added to or removed from a cluster and the SSD thresholds are no longer satisfied, GNA becomes inactive. GNA remains enabled, so that when the SSD thresholds are met again, GNA is reactivated.
Use SSDs as L3 Cache by default for new node pools (--ssd-l3-cache-default-enabled)
For node pools that include solid-state drives, deploys the SSDs as L3 cache. L3 cache extends L2 cache and speeds up file system performance across larger working file sets.
L3 cache is enabled by default on new node pools. When you enable L3 cache on an existing node pool, OneFS performs a migration, moving any existing data on the SSDs to other locations on the cluster. OneFS manages all cache levels to provide optimal data protection, availability, and performance. In case of a power failure, the data on L3 cache is retained and still available after power is restored.
Virtual Hot Spare (--virtual-hot-spare-deny-writes, --virtual-hot-spare-hide-spare, --virtual-hot-spare-limit-drives, --virtual-hot-spare-limit-percent)
Reserves a minimum amount of space in the node pool that can be used for data repair in the event of a drive failure. To reserve disk space for use as a virtual hot spare, select from the following options:
• Ignore reserved disk space when calculating available free space. Subtracts the space reserved for the virtual hot spare when calculating available free space.
• Deny data writes to reserved disk space. Prevents write operations from using reserved disk space.
• VHS Space Reserved. You can reserve a minimum number of virtual drives (1-4), as well as a minimum percentage of total disk space (0-20%).
If you configure both a minimum number of virtual drives and a minimum percentage of total disk space, the enforced minimum value satisfies both requirements. If this setting is enabled and Deny new data writes is disabled, it is possible for the file system utilization to be reported at more than 100%.
Enable global spillover (--spillover-enabled)
Specifies how to handle write operations to a node pool that is not writable.
• When enabled, redirects write operations from a node pool that is not writable either to another node pool or anywhere on the cluster (the default).
• When disabled, returns a disk space error for write operations to a node pool that is not writable.
Spillover Data Target (--spillover-target, --spillover-anywhere)
Specifies another storage pool to target when a storage pool is not writable. If spillover is enabled and it is important that data writes do not fail, select anywhere for the Spillover Data Target setting.
Manage protection settings (--automatically-manage-protection)
When this setting is enabled, SmartPools manages requested protection levels automatically. When Apply to files with manually-managed protection is enabled, SmartPools overwrites any protection settings that were configured through File System Explorer or the command-line interface.
Manage I/O optimization settings (--automatically-manage-io-optimization)
When enabled, uses SmartPools technology to manage I/O optimization. When Apply to files with manually-managed I/O optimization settings is enabled, SmartPools overwrites any I/O optimization settings that were configured through File System Explorer or the command-line interface.
--ssd-qab-mirrors (command-line only)
Either one mirror or all mirrors for the quota account block (QAB) are stored on SSDs. Improve quota accounting performance by placing all QAB mirrors on SSDs for faster I/O. By default, only one QAB mirror is stored on SSD.
--ssd-system-btree-mirrors (command-line only)
Either one mirror or all mirrors for the system B-tree are stored on SSDs. Increase file system performance by placing all system B-tree mirrors on SSDs for faster access. Otherwise, only one system B-tree mirror is stored on SSD.
--ssd-system-delta-mirrors (command-line only)
Either one mirror or all mirrors for the system delta are stored on SSDs. Increase file system performance by placing all system delta mirrors on SSDs for faster access. Otherwise, only one system delta mirror is stored on SSD.
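One plausible reading of the virtual hot spare rule that "the enforced minimum value satisfies both requirements" is that the reserve equals the larger of the two configured minimums. The sketch below illustrates that interpretation; the function name and the example figures are hypothetical, not OneFS behavior guaranteed by this guide.

```python
def vhs_reserved_bytes(total_bytes, drive_size_bytes, min_drives, min_percent):
    """Sketch of the 'satisfies both requirements' rule: the enforced
    reserve is the larger of the drive-count-based and the
    percentage-based minimums, so both constraints are met."""
    by_drives = min_drives * drive_size_bytes
    by_percent = total_bytes * min_percent / 100
    return max(by_drives, by_percent)

# 100 TB pool with 4 TB drives: reserve at least 2 drives and at least 10%
reserve = vhs_reserved_bytes(100 * 2**40, 4 * 2**40, 2, 10)
print(reserve == 10 * 2**40)  # True: the 10% minimum is the larger constraint
```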
1. Click File System > Storage Pools > File Pool Policies.
2. In the File Pool Policies tab, next to Default Policy in the list, click View/Edit.
The View Default Policy Details dialog box is displayed.
Snapshot storage target (--snapshot-storage-target, --snapshot-ssd-strategy)
Specifies the storage pool that you want to target for snapshot storage with this file pool policy. The settings are the same as those for data storage target, but apply to snapshot data. Notes for data storage target also apply to snapshot storage target.
Requested protection (--set-requested-protection)
Default of storage pool: Assigns the default requested protection of the storage pool to the filtered files. Specific level: Assigns a specified requested protection to the filtered files. To change the requested protection, select a new value from the list.
FSAnalyze (FSA)
The FSAnalyze job in OneFS gathers file system analytic information.
The FSAnalyze (FSA) job is used for namespace analysis. You can run FSA on demand or on a schedule through the command-line interface and PAPI. A successful FSA job produces an analysis result. Run FSA at least once a day.
Disk usage is a component of FSA that adds up the overall usage for any given directory. It gathers disk usage efficiently and stores the results in a results database table, one row for every directory. The FSA PAPI directory-information endpoint exposes the stored result, and the result for a directory can be compared with the result at a different time using the same endpoint.
The FSA job runs in two modes. SCAN mode scans the entire OneFS file system. INDEX mode, the default, is more efficient: it relies on snapshots and change lists, and it updates and walks a global metadata index.
The FSA or IndexUpdate job builds the metadata index. The FSA job running in INDEX mode generates the FSA index; the IndexUpdate job generates the cluster index. The disk usage component walks the metadata index.
NOTE: To initiate any Job Engine tasks, you must have the role of SystemAdmin in the OneFS system.
FlexProtectLin: Scans the file system after a device failure to ensure that all files remain protected. (Exclusion set: Restripe. Impact: Medium. Priority: 1. Operation: Manual)
PermissionRepair: Uses a template file or directory as the basis for permissions to set on a target file or directory. The target directory must always be subordinate to the /ifs path. This job must be manually started. (Exclusion set: None. Impact: Low. Priority: 5. Operation: Manual)
QuotaScan*: Updates quota accounting for domains created on an existing file tree. Available only if you activate a SmartQuotas license. Run this job manually in off-hours after setting up all quotas, and whenever setting up new quotas. (Exclusion set: None. Impact: Low. Priority: 6. Operation: Manual)
SetProtectPlus: Applies a default file policy across the cluster. Runs only if a SmartPools license is not active. (Exclusion set: Restripe. Impact: Low. Priority: 6. Operation: Manual)
ShadowStoreDelete: Frees up space that is associated with shadow stores. Shadow stores are hidden files that are referenced by cloned and deduplicated files. (Exclusion set: None. Impact: Low. Priority: 2. Operation: Scheduled)
ShadowStoreProtect: Protects shadow stores that are referenced by a logical i-node (LIN) with a higher level of protection. (Exclusion set: Restripe. Impact: Low. Priority: 6. Operation: Scheduled)
SmartPools*: Enforces SmartPools file pool policies. Available only if you activate a SmartPools license. This job runs on a regularly scheduled basis, and can also be started by the system when a change is made (for example, creating a compatibility that merges node pools). (Exclusion set: Restripe. Impact: Low. Priority: 6. Operation: Scheduled)
SmartPoolsTree: Enforces SmartPools file pool policies on a subtree. Available only if you activate a SmartPools license. (Exclusion set: Restripe. Impact: Medium. Priority: 5. Operation: Manual)
SnapRevert: Reverts an entire snapshot back to head. (Exclusion set: None. Impact: Low. Priority: 5. Operation: Manual)
SnapshotDelete: Frees space that is associated with deleted snapshots. Triggered by the system when you mark snapshots for deletion. (Exclusion set: None. Impact: Medium. Priority: 2. Operation: Manual)
TreeDelete: Deletes a specified file path in the /ifs directory. (Exclusion set: None. Impact: Medium. Priority: 4. Operation: Manual)
Upgrade: Upgrades the file system after a software version upgrade. (Exclusion set: Restripe. Impact: Medium. Priority: 3. Operation: Manual)
WormQueue: Processes the WORM queue, which tracks the commit times for WORM files. After a file is committed to WORM state, it is removed from the queue. (Exclusion set: None. Impact: Low. Priority: 6. Operation: Scheduled)
* Available only if you activate an additional license
Job operation
OneFS includes system maintenance jobs that run to ensure that your PowerScale cluster performs at peak health. Through the Job
Engine, OneFS runs a subset of these jobs automatically, as needed, to ensure file and data integrity, check for and mitigate drive and
node failures, and optimize free space. For other jobs, for example, Dedupe, you can use Job Engine to start them manually or schedule
them to run automatically at regular intervals.
The Job Engine runs system maintenance jobs in the background and prevents jobs within the same classification (exclusion set) from
running simultaneously. Two exclusion sets are enforced: restripe and mark.
Restripe job types are:
• AutoBalance
• AutoBalanceLin
• FlexProtect
• FlexProtectLin
• MediaScan
• MultiScan
• SetProtectPlus
• SmartPools
Mark job types are:
• Collect
• IntegrityScan
• MultiScan
Note that MultiScan is a member of both the restripe and mark exclusion sets. You cannot change the exclusion set parameter for a job
type.
The Job Engine is also sensitive to job priority, and can run up to three jobs, of any priority, simultaneously. Job priority is denoted as 1–10,
with 1 being the highest and 10 being the lowest. The system uses job priority when a conflict among running or queued jobs arises. For
example, if you manually start a job that has a higher priority than three other jobs that are already running, Job Engine pauses the lowest-
priority active job, runs the new job, then restarts the older job at the point at which it was paused. Similarly, if you start a job within the
restripe exclusion set, and another restripe job is already running, the system uses priority to determine which job should run (or remain
running) and which job should be paused (or remain paused).
Other job parameters determine whether jobs are enabled, their performance impact, and schedule. As system administrator, you can
accept the job defaults or adjust these parameters (except for exclusion set) based on your requirements.
When a job starts, the Job Engine distributes job segments—phases and tasks—across the nodes of your cluster. One node acts as job
coordinator and continually works with the other nodes to load-balance the work. In this way, no one node is overburdened, and system
resources remain available for other administrator and system I/O activities not originated from the Job Engine.
After completing a task, each node reports task status to the job coordinator. The node acting as job coordinator saves this task status
information to a checkpoint file. Consequently, in the case of a power outage, or when paused, a job can always be restarted from the
point at which it was interrupted. This is important because some jobs can take hours to run and can use considerable system resources.
If you want a job to use an impact policy other than the default, you can create a custom policy with new settings.
Jobs with a low impact policy have the least impact on available CPU and disk I/O resources. Jobs with a high impact policy have a
significantly higher impact. In all cases, however, the Job Engine uses CPU and disk throttling algorithms to ensure that tasks that you
initiate manually, and other I/O tasks not related to the Job Engine, receive a higher priority.
Related concepts
Managing impact policies on page 352
Job priorities
Job priorities determine which job takes precedence when more than three jobs of different exclusion sets attempt to run simultaneously.
The Job Engine assigns a priority value between 1 and 10 to every job, with 1 being the most important and 10 being the least important.
The maximum number of jobs that can run simultaneously is three. If a fourth job with a higher priority is started, either manually or
through a system event, the Job Engine pauses one of the lower-priority jobs that is currently running. The Job Engine places the paused
job into a priority queue, and automatically resumes the paused job when one of the other jobs is completed.
If two jobs of the same priority level are scheduled to run simultaneously, and two other higher priority jobs are already running, the job
that is placed into the queue first is run first.
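The pause-and-resume behavior described above can be sketched as a small priority scheduler: at most three jobs run at once, a new higher-priority job (lower number) pauses the lowest-priority running job, and paused or queued jobs resume in priority order, FIFO within a priority level. This is a conceptual illustration only; the JobEngine class, its methods, and the returned strings are hypothetical, not Job Engine internals.

```python
import heapq
import itertools

class JobEngine:
    """Conceptual sketch of Job Engine priority handling."""
    MAX_RUNNING = 3

    def __init__(self):
        self.running = {}           # job name -> priority (1 = highest)
        self.queue = []             # heap of (priority, sequence, name)
        self.seq = itertools.count()

    def start(self, name, priority):
        if len(self.running) < self.MAX_RUNNING:
            self.running[name] = priority
            return f"{name} running"
        # Find the lowest-priority running job (highest priority number).
        worst = max(self.running, key=lambda n: self.running[n])
        if priority < self.running[worst]:
            # Pause it and place it in the priority queue.
            heapq.heappush(self.queue,
                           (self.running.pop(worst), next(self.seq), worst))
            self.running[name] = priority
            return f"{name} running, {worst} paused"
        heapq.heappush(self.queue, (priority, next(self.seq), name))
        return f"{name} queued"

    def finish(self, name):
        del self.running[name]
        if self.queue:  # resume the highest-priority (then oldest) waiter
            prio, _, nxt = heapq.heappop(self.queue)
            self.running[nxt] = prio
            return f"{nxt} resumed"
        return None

engine = JobEngine()
engine.start("MediaScan", 8)
engine.start("WormQueue", 6)
engine.start("SmartPools", 6)
print(engine.start("FlexProtect", 1))  # FlexProtect running, MediaScan paused
print(engine.finish("SmartPools"))     # MediaScan resumed
```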
Related References
System jobs library on page 345
Start a job
By default, only some system maintenance jobs are scheduled to run automatically. However, you can start any of the jobs manually at any
time.
1. Click Cluster Management > Job Operations > Job Types.
2. In the Job Types list, locate the job that you want to start, and then click Start Job.
The Start a Job dialog box appears.
3. Provide the details for the job, then click Start Job.
Related References
System jobs library on page 345
Pause a job
You can pause a job temporarily to free up system resources.
1. Click Cluster Management > Job Operations > Job Summary.
2. In the Active Jobs table, click More for the job that you want to pause.
3. Click Pause Running Job in the menu that appears.
The job remains paused until you resume it.
Related References
System jobs library on page 345
Resume a job
You can resume a paused job.
1. Click Cluster Management > Job Operations > Job Summary.
2. In the Active Jobs table, click More for the job that you want to resume.
3. Click Resume Running Job in the menu that appears.
The job continues from the phase or task at which it was paused.
Cancel a job
You can permanently discontinue a running, paused, or waiting job, for example, to free up system resources.
1. Click Cluster Management > Job Operations > Job Summary.
2. In the Active Jobs table, click More for the job that you want to cancel.
3. Click Cancel Running Job in the menu that appears.
Related References
System jobs library on page 345
Update a job
You can change the priority and impact policy of a running, waiting, or paused job.
When you update a job, only the current instance of the job runs with the updated settings. The next instance of the job returns to the
default settings for that job.
NOTE: To change job settings permanently, see "Modify job type settings."
Related References
System jobs library on page 345
Related concepts
Job performance impact on page 349
Policy description a. In the Description field, type a new overview for the impact policy.
b. Click Submit.
Impact schedule a. In the Impact Schedule area, modify the schedule of the impact policy by adding, editing, or deleting impact
intervals.
b. Click Save Changes.
The modified impact policy is saved and listed in alphabetical order in the Impact Policies table.
Related concepts
Job performance impact on page 349
Networking overview
After you determine the topology of your network, you can set up and manage your internal and external networks.
There are two types of networks on a cluster:
Internal Generation 5 nodes communicate with each other using a high-speed, low latency InfiniBand network. Generation
6 nodes support using InfiniBand or Ethernet for the internal network. PowerScale F200 and F600 nodes support
only Ethernet as the backend network. You can optionally configure a second InfiniBand network to enable failover
for redundancy.
External Clients connect to the cluster through the external network with Ethernet. The PowerScale cluster supports
standard network communication protocols, including NFS, SMB, HDFS, HTTP, and FTP. The cluster includes
various external Ethernet connections, providing flexibility for a wide variety of network configurations.
While all clusters will have, at minimum, one internal InfiniBand or Ethernet network (int-a), you can enable a second internal network to
support network failover (int-b/failover). You must assign at least one IP address range for the secondary network and one range for
failover.
If any IP address ranges defined during the initial configuration are too restrictive for the size of the internal network, you can add ranges
to the int-a network or int-b/failover networks, which might require a cluster restart. Other configuration changes, such as deleting an IP
address assigned to a node, might also require that the cluster be restarted.
NOTE: Generation 5 nodes support InfiniBand for the internal network. Generation 6 nodes support both InfiniBand and
Ethernet for the internal network. PowerScale F200 and F600 nodes support Ethernet for the internal network.
Groupnets
Groupnets reside at the top tier of the networking hierarchy and are the configuration level for managing multiple tenants on your external
network. DNS client settings, such as nameservers and a DNS search list, are properties of the groupnet. You can create a separate
groupnet for each DNS namespace that you want to use to enable portions of the PowerScale cluster to have different networking
properties for name resolution. Each groupnet maintains its own DNS cache, which is enabled by default.
A groupnet is a container that includes subnets, IP address pools, and provisioning rules. Groupnets can contain one or more subnets, and
every subnet is assigned to a single groupnet. Each cluster contains a default groupnet named groupnet0 that contains an initial subnet
named subnet0, an initial IP address pool named pool0, and an initial provisioning rule named rule0.
Each groupnet is referenced by one or more access zones. When you create an access zone, you can specify a groupnet. If a groupnet is
not specified, the access zone will reference the default groupnet. The default System access zone is automatically associated with the
default groupnet. Authentication providers that communicate with an external server, such as Active Directory and LDAP, must also
reference a groupnet. You can associate the authentication provider with a specific groupnet; otherwise, the provider references the
default groupnet. You can add an authentication provider to an access zone only if both are associated with the same groupnet. Client
protocols such as SMB, NFS, HDFS, and Swift, are supported by groupnets through their associated access zones.
Related concepts
Managing groupnets on page 365
DNS name resolution on page 357
Related tasks
Specify a SmartConnect service subnet on page 372
Subnets
Subnets are networking containers that enable you to sub-divide your network into smaller, logical IP networks.
On a cluster, subnets are created under a groupnet and each subnet contains one or more IP address pools. Both IPv4 and IPv6 addresses
are supported on OneFS; however, a subnet cannot contain a combination of both. When you create a subnet, you specify whether it
supports IPv4 or IPv6 addresses.
You can configure the following options when you create a subnet:
• Gateway servers that route outgoing packets and gateway priority.
• Maximum transmission unit (MTU) that network interfaces in the subnet will use for network communications.
• SmartConnect service address, which is the IP address on which the SmartConnect module listens for DNS requests on this subnet.
• VLAN tagging to allow the cluster to participate in multiple virtual networks.
• Direct Server Return (DSR) address, if your cluster contains an external hardware load balancing switch that uses DSR.
How you set up your external network subnets depends on your network topology. For example, in a basic network topology where all
client-node communication occurs through direct connections, only a single external subnet is required. In another example, if you want
clients to connect through both IPv4 and IPv6 addresses, you must configure multiple subnets.
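The single-family rule for subnets, where a subnet and all of its IP address pools use either IPv4 or IPv6 but never both, can be illustrated with Python's standard ipaddress module. The helper functions here are hypothetical, not OneFS APIs.

```python
import ipaddress

def subnet_family(cidr):
    """Return the IP version (4 or 6) of a subnet; a subnet holds
    either IPv4 or IPv6 addresses, never a mix of both."""
    return ipaddress.ip_network(cidr).version

def pool_matches_subnet(cidr, addresses):
    """Check that every address in a pool belongs to the family
    specified by the subnet that contains the pool."""
    family = subnet_family(cidr)
    return all(ipaddress.ip_address(a).version == family for a in addresses)

print(subnet_family("192.168.10.0/24"))                          # 4
print(subnet_family("2001:db8::/64"))                            # 6
print(pool_matches_subnet("192.168.10.0/24", ["192.168.10.5"]))  # True
print(pool_matches_subnet("192.168.10.0/24", ["2001:db8::1"]))   # False
```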
Related concepts
VLANs on page 358
Managing external network subnets on page 367
IPv6 support on page 357
IPv6 support
OneFS supports both IPv4 and IPv6 address formats on a cluster.
IPv6 is the next generation of internet protocol addresses and was designed with the growing demand for IP addresses in mind. The
following table describes distinctions between IPv4 and IPv6.
IPv4                                  IPv6
32-bit addresses                      128-bit addresses
Address Resolution Protocol (ARP)     Neighbor Discovery Protocol (NDP)
You can configure the PowerScale cluster for IPv4, IPv6, or both (dual-stack) in OneFS. You set the IP family when creating a subnet, and
all IP address pools assigned to the subnet must use the selected format.
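As an illustration of the table above (a sketch using Python's standard ipaddress module, not OneFS code), the two address families differ in width, and an address of one family is never a member of a network of the other, which mirrors the rule that a subnet cannot mix IPv4 and IPv6:

```python
import ipaddress

# IPv4: 32-bit address space
v4_net = ipaddress.ip_network("192.168.10.0/24")
print(v4_net.version, v4_net.max_prefixlen)   # 4 32

# IPv6: 128-bit address space
v6_net = ipaddress.ip_network("2001:db8::/64")
print(v6_net.version, v6_net.max_prefixlen)   # 6 128

# A subnet holds one family only: an IPv6 address is never a member of an
# IPv4 network, mirroring the OneFS rule that a subnet cannot mix families.
print(ipaddress.ip_address("2001:db8::1") in v4_net)  # False
```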
Related concepts
Subnets on page 357
VLANs
Virtual LAN (VLAN) tagging is an optional setting that enables a cluster to participate in multiple virtual networks.
You can partition a physical network into multiple broadcast domains, or virtual local area networks (VLANs). You can enable a cluster to
participate in a VLAN which allows multiple cluster subnet support without multiple network switches; one physical switch enables multiple
virtual subnets.
VLAN tagging inserts an ID into packet headers. The switch refers to the ID to identify from which VLAN the packet originated and to
which network interface a packet should be sent.
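The tag insertion described above follows the IEEE 802.1Q frame format: a 4-byte tag (TPID plus tag control information carrying the VLAN ID) placed after the source MAC address. The following is a minimal sketch of that mechanism, not OneFS code; the 2-4094 ID range matches the value OneFS accepts when you enable VLAN tagging:

```python
import struct

def add_vlan_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an IEEE 802.1Q tag after the source MAC of an Ethernet frame.

    Illustrative sketch of what "VLAN tagging inserts an ID into packet
    headers" means; the switch reads this tag to identify the VLAN.
    """
    if not 2 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be 2 through 4094")
    tpid = 0x8100                      # 802.1Q Tag Protocol Identifier
    tci = (priority << 13) | vlan_id   # PCP (3 bits) + DEI (1) + VLAN ID (12)
    tag = struct.pack("!HH", tpid, tci)
    # Destination MAC (6 bytes) and source MAC (6 bytes) come first,
    # then the tag, then the original EtherType and payload.
    return frame[:12] + tag + frame[12:]

# 12-byte MAC header + EtherType 0x0800 (IPv4) + dummy payload
frame = bytes(12) + b"\x08\x00" + b"payload"
tagged = add_vlan_tag(frame, vlan_id=42)
assert len(tagged) == len(frame) + 4
assert tagged[12:14] == b"\x81\x00"                       # TPID marks the tag
assert (int.from_bytes(tagged[14:16], "big") & 0x0FFF) == 42
```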
Related tasks
Enable or disable VLAN tagging on page 369
IP address pools
IP address pools are assigned to a subnet and consist of one or more IP address ranges. You can partition nodes and network interfaces
into logical IP address pools. IP address pools are also utilized when configuring SmartConnect DNS zones and client connection
management.
Each IP address pool belongs to a single subnet. Multiple pools for a single subnet are available only if you activate a SmartConnect
Advanced license.
The IP address ranges assigned to a pool must be unique and belong to the IP address family (IPv4 or IPv6) specified by the subnet that
contains the pool.
You can add network interfaces to IP address pools to associate address ranges with a node or a group of nodes. For example, based on
the network traffic that you expect, you might decide to establish one IP address pool for storage nodes and another for accelerator
nodes.
SmartConnect settings that manage DNS query responses and client connections are configured at the IP address pool level.
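The range rules above (a pool's addresses must fall inside its subnet and use the subnet's address family) can be sketched as a validation check. This is illustrative only; the function name and behavior are assumptions, not an OneFS API, and OneFS performs this validation internally:

```python
import ipaddress

def validate_pool_range(subnet: str, low: str, high: str) -> bool:
    """Check a pool's IP range against its containing subnet: the range must
    use the subnet's address family, fall inside the subnet, and be ordered.
    """
    net = ipaddress.ip_network(subnet)
    lo, hi = ipaddress.ip_address(low), ipaddress.ip_address(high)
    if lo.version != net.version or hi.version != net.version:
        return False   # family mismatch, e.g. an IPv6 range in an IPv4 subnet
    return lo in net and hi in net and lo <= hi

print(validate_pool_range("10.1.0.0/16", "10.1.2.10", "10.1.2.50"))     # True
print(validate_pool_range("10.1.0.0/16", "2001:db8::1", "2001:db8::9")) # False
```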
Related concepts
Managing IP address pools on page 370
Link aggregation
Link aggregation, also known as network interface card (NIC) aggregation, combines the network interfaces on a physical node into a
single, logical connection to provide improved network throughput.
You can add network interfaces to an IP address pool singly or as an aggregate. A link aggregation mode is selected on a per-pool basis
and applies to all aggregated network interfaces in the IP address pool. The link aggregation mode determines how traffic is balanced and
routed among aggregated network interfaces.
Related concepts
Managing network interface members on page 376
SmartConnect module
The SmartConnect module specifies how the cluster DNS server handles connection requests from clients and the policies that assign IP
addresses to network interfaces, including failover and rebalancing.
You can think of SmartConnect as a limited implementation of a custom DNS server. SmartConnect answers only for the SmartConnect
zone names or aliases that are configured on it. Settings and policies that are configured for SmartConnect are applied per IP address
pool.
You can configure basic and advanced SmartConnect settings.
SmartConnect Basic
SmartConnect Basic is included with OneFS as a standard feature and does not require a license. SmartConnect Basic supports the
following settings:
• Specification of the DNS zone
• Round-robin connection balancing method only
• Service subnet to answer DNS requests
SmartConnect Basic enables you to add two SmartConnect Service IP addresses to a subnet.
SmartConnect Basic has the following limitations to IP address pool configuration:
• You may only specify a static IP address allocation policy.
• You cannot specify an IP address failover policy.
• You cannot specify an IP address rebalance policy.
• You may assign two IP address pools per external network subnet.
SmartConnect Advanced
SmartConnect Advanced extends the settings available from SmartConnect Basic. It requires an active license. SmartConnect Advanced
supports the following settings:
• Round-robin, CPU utilization, connection counting, and throughput balancing methods
• Static and dynamic IP address allocation
SmartConnect Advanced enables you to add a maximum of six SmartConnect Service IP addresses per subnet.
SmartConnect Advanced enables you to specify the following IP address pool configuration options:
• You can define an IP address failover policy for the IP address pool.
• You can define an IP address rebalance policy for the IP address pool.
• SmartConnect Advanced supports multiple IP address pools per external subnet to enable multiple DNS zones within a single subnet.
Related concepts
Managing SmartConnect Settings on page 372
SmartConnect Multi-SSIP
OneFS supports defining more than one SmartConnect Service IP (SSIP) per subnet. Support for multiple SmartConnect Service IPs
(Multi-SSIP) ensures that client connections continue uninterrupted if an SSIP becomes unavailable.
The additional SSIPs provide fault tolerance and a failover mechanism to ensure continued load balancing of clients according to the
selected policy. Though the additional SSIPs are in place for failover, they are active and respond to DNS server requests.
The SmartConnect Basic license allows defining two SSIPs per subnet. The SmartConnect Advanced license allows defining up to six SSIPs
per subnet.
NOTE: SmartConnect Multi-SSIP is not an additional layer of load balancing for client connections: additional SSIPs
only provide redundancy and reduce failure points in the client connection sequence. Do not configure the site DNS
server to perform load balancing for the SSIPs. Allow OneFS to perform load balancing through the selected
SmartConnect policy to ensure effective load balancing.
Configure DNS servers for SSIP failover to ensure that the next SSIP is contacted only if the first SSIP connection times out. If the SSIPs
are not configured in a failover sequence, the SSIP load balancing policy resets each time a new SSIP is contacted. The SSIPs function
independently: they do not track the current distribution status of the other SSIPs.
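The failover ordering described above can be modeled as follows (a simplified sketch, not OneFS or DNS server code; the addresses are hypothetical): the site DNS server should consult the second SSIP only when the first one times out, and so on down the configured sequence.

```python
def resolve_via_ssips(ssips, reachable):
    """Contact SSIPs in strict configured failover order and return the first
    one that answers. Simplified model of the recommended site-DNS behavior:
    SSIP 2 is consulted only if SSIP 1 times out, never in rotation.
    """
    for ssip in ssips:                 # failover order, not round-robin
        if ssip in reachable:
            return ssip
        # otherwise this SSIP timed out; fall through to the next one
    raise RuntimeError("no SmartConnect service IP reachable")

ssips = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]
# All SSIPs up: only the first is ever contacted
assert resolve_via_ssips(ssips, reachable={"10.0.0.10", "10.0.0.11"}) == "10.0.0.10"
# First SSIP down: the second answers; its balancing state is independent
assert resolve_via_ssips(ssips, reachable={"10.0.0.11"}) == "10.0.0.11"
```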
Configuring IP addresses as failover-only addresses is not supported on all DNS servers. To support Multi-SSIP as a failover-only option, it
is recommended that you use a DNS server that supports failover addresses. If a DNS server does not support failover addresses, Multi-SSIP
still provides advantages over a single SSIP. However, increasing the number of SSIPs may affect SmartConnect's ability to load balance.
NOTE: If the DNS server does not support failover addresses, test Multi-SSIP in a lab environment that mimics the
production environment to confirm the impact on SmartConnect's load balancing for a specific workflow. Only after
confirming workflow impacts in a lab environment should you update a production cluster.
You can configure a SmartConnect DNS zone name for each IP address pool. The zone name must be a fully qualified domain name. Add a
new name server (NS) record that references the SmartConnect service IP address in the existing authoritative DNS zone that contains
the cluster. Provide a zone delegation to the fully qualified domain name (FQDN) of the SmartConnect zone in your DNS infrastructure.
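The delegation described above amounts to a pair of records in the site's authoritative zone. The following is a hedged sketch in standard BIND-style zone-file syntax; the zone name cluster.example.com, the nameserver label ssip.example.com, and the address 10.1.1.100 are hypothetical placeholders for your SmartConnect zone name and SmartConnect service IP:

```
; Delegation in the site's authoritative zone (example.com), assuming
; "cluster.example.com" is the SmartConnect zone name and 10.1.1.100 is
; the SmartConnect service IP configured on the subnet.
cluster.example.com.    IN NS   ssip.example.com.
ssip.example.com.       IN A    10.1.1.100
```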
If you have a SmartConnect Advanced license, you can also specify a list of alternate SmartConnect DNS zone names for the IP address
pool.
When a client connects to the cluster through a SmartConnect DNS zone:
• SmartConnect handles the incoming DNS requests on behalf of the IP address pool.
• The service subnet distributes incoming DNS requests according to the connection balancing policy of the pool.
NOTE: Using SmartConnect zone aliases is recommended for making clusters accessible under multiple domain names.
Using CNAME records is not recommended.
Related tasks
Modify a SmartConnect DNS zone on page 372
Related tasks
Configure a SmartConnect service IP address on page 369
Suspend or resume a node on page 373
IP address allocation
The IP address allocation policy specifies how IP addresses in the pool are assigned to an available network interface.
You can specify whether to use static or dynamic allocation.
Static Assigns one IP address to each network interface added to the IP address pool, but does not guarantee that all IP
addresses are assigned.
Once assigned, the network interface keeps the IP address indefinitely, even if the network interface becomes
unavailable. To release the IP address, remove the network interface from the pool or remove it from the node.
Without a license for SmartConnect Advanced, static is the only method available for IP address allocation.
Dynamic Assigns IP addresses to each network interface added to the IP address pool until all IP addresses are assigned.
This guarantees a response when clients connect to any IP address in the pool.
If a network interface becomes unavailable, its IP addresses are automatically moved to other available network
interfaces in the pool as determined by the IP address failover policy.
This method is only available with a license for SmartConnect Advanced.
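The difference between the two policies can be sketched with a toy model (illustrative only; the function names are assumptions, and actual assignment is internal to OneFS): static hands each interface exactly one address and may leave pool addresses unassigned, while dynamic deals out every address so that each one answers somewhere.

```python
def allocate_static(addresses, interfaces):
    """Static: one IP address per interface; leftover addresses stay unassigned."""
    return {nic: [addr] for nic, addr in zip(interfaces, addresses)}

def allocate_dynamic(addresses, interfaces):
    """Dynamic: deal out every address so all of them are live on some interface."""
    assignment = {nic: [] for nic in interfaces}
    for i, addr in enumerate(addresses):
        assignment[interfaces[i % len(interfaces)]].append(addr)
    return assignment

pool = [f"10.0.1.{n}" for n in range(1, 7)]       # six pool addresses
nics = ["node1:ext-1", "node2:ext-1"]             # two interfaces

static = allocate_static(pool, nics)
dynamic = allocate_dynamic(pool, nics)
assert sum(len(v) for v in static.values()) == 2   # four addresses unassigned
assert sum(len(v) for v in dynamic.values()) == 6  # every address answers
```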
Related References
Supported IP allocation methods on page 373
Allocation recommendations based on file sharing protocols on page 374
Related tasks
Configure IP address allocation on page 373
IP address failover
When a network interface becomes unavailable, the IP address failover policy specifies how to handle the IP addresses that were assigned
to the network interface.
To define an IP address failover policy, you must have a license for SmartConnect Advanced, and the IP address allocation policy must be
set to dynamic. Dynamic IP allocation ensures that all of the IP addresses in the pool are assigned to available network interfaces.
When a network interface becomes unavailable, the IP addresses that were assigned to it are redistributed to available network interfaces
according to the IP address failover policy. Subsequent client connections are directed to the new network interfaces.
You can select one of the following connection balancing methods to determine how the IP address failover policy selects which network
interface receives a redistributed IP address:
• Round-robin
• Connection count
• Network throughput
• CPU usage
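The redistribution step can be sketched as follows, using the connection count method as the example policy (a simplified model with hypothetical names, not OneFS code): each orphaned address moves to the surviving interface that currently has the fewest connections.

```python
def fail_over(assignment, failed_nic, conn_count):
    """Move a failed interface's IP addresses to the surviving interfaces,
    giving each address to the interface with the fewest connections (the
    'connection count' failover method). Simplified model; the count is
    padded by addresses already moved so one survivor is not overloaded.
    """
    orphaned = assignment.pop(failed_nic)
    for addr in orphaned:
        target = min(assignment,
                     key=lambda nic: conn_count[nic] + len(assignment[nic]))
        assignment[target].append(addr)
    return assignment

pools = {"node1": ["10.0.1.1", "10.0.1.2"],
         "node2": ["10.0.1.3"],
         "node3": ["10.0.1.4"]}
conns = {"node2": 20, "node3": 3}
after = fail_over(pools, "node1", conns)
# Both orphaned addresses land on node3, the least-loaded survivor
assert after["node3"] == ["10.0.1.4", "10.0.1.1", "10.0.1.2"]
assert after["node2"] == ["10.0.1.3"]
```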
Related tasks
Configure an IP failover policy on page 375
Connection balancing
The connection balancing policy determines how the DNS server handles client connections to the cluster.
You can specify one of the following balancing methods:
Round-robin Selects the next available network interface on a rotating basis. This is the default method. Without a
SmartConnect license for advanced settings, this is the only method available for load balancing.
Connection count Determines the number of open TCP connections on each available network interface and selects the network
interface with the fewest client connections.
Network throughput Determines the average throughput on each available network interface and selects the network interface with
the lowest network interface load.
CPU usage Determines the average CPU utilization on each available network interface and selects the network interface
with the lightest processor usage.
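The four methods can be sketched as a single dispatcher (illustrative only; the function name, stat keys, and interface names are assumptions, and SmartConnect implements these policies internally):

```python
from itertools import cycle

def make_balancer(method, interfaces, stats):
    """Return a picker for the next interface under the given policy.
    'stats' maps interface -> {'conns': int, 'throughput': float, 'cpu': float}.
    """
    if method == "round_robin":
        rr = cycle(interfaces)          # rotate regardless of load
        return lambda: next(rr)
    key = {"connection_count": "conns",      # fewest open TCP connections
           "network_throughput": "throughput",  # lowest average throughput
           "cpu_usage": "cpu"}[method]        # lightest processor usage
    return lambda: min(interfaces, key=lambda i: stats[i][key])

stats = {"ext-1": {"conns": 12, "throughput": 80.0, "cpu": 0.9},
         "ext-2": {"conns": 3,  "throughput": 95.0, "cpu": 0.2}}
pick = make_balancer("connection_count", ["ext-1", "ext-2"], stats)
assert pick() == "ext-2"            # fewest open TCP connections wins
rr = make_balancer("round_robin", ["ext-1", "ext-2"], stats)
assert [rr(), rr(), rr()] == ["ext-1", "ext-2", "ext-1"]
```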
Related References
Supported connection balancing methods on page 375
Related tasks
Configure a connection balancing policy on page 374
IP address rebalancing
The IP address rebalance policy specifies when to redistribute IP addresses if one or more previously unavailable network interfaces
becomes available again.
To define an IP address rebalance policy, you must have a license for SmartConnect Advanced, and the IP address allocation policy must
be set to dynamic. Dynamic IP address allocation ensures that all of the IP addresses in the pool are assigned to available network
interfaces.
You can set rebalancing to occur manually or automatically:
Manual Does not redistribute IP addresses until you manually start the rebalancing process.
Upon rebalancing, IP addresses will be redistributed according to the connection balancing method specified by
the IP address failover policy defined for the IP address pool.
Automatic Automatically redistributes IP addresses according to the connection balancing method specified by the IP
address failover policy defined for the IP address pool.
Automatic rebalancing may also be triggered by changes to cluster nodes, network interfaces, or the configuration
of the external network.
NOTE: Rebalancing can disrupt client connections. Ensure the client workflow on the IP address
pool is appropriate for automatic rebalancing.
Related tasks
Manually rebalance IP addresses on page 376
Related concepts
Managing node provisioning rules on page 378
Routing options
OneFS supports source-based routing and static routes which allow for more granular control of the direction of outgoing client traffic on
the cluster.
If no routing options are defined, by default, outgoing client traffic on the cluster is routed through the default gateway, which is the
gateway with the lowest priority setting on the node. If traffic is being routed to a local subnet and does not need to route through a
gateway, the traffic will go directly out through an interface on that subnet.
Related concepts
Managing routing options on page 379
Source-based routing
Source-based routing selects which gateway to direct outgoing client traffic through based on the source IP address in each packet
header.
When enabled, source-based routing automatically scans your network configuration to create client traffic rules. If you modify your
network configuration, for example, changing the IP address of a gateway server, source-based routing adjusts the rules. Source-based
routing is applied across the entire cluster and does not support the IPv6 protocol.
In the following example, you enable source-based routing on a PowerScale cluster that is connected to SubnetA and SubnetB. Each
subnet is configured with a SmartConnect zone and a gateway, also labeled A and B. When a client on SubnetA makes a request to
SmartConnect ZoneB, the response originates from ZoneB. The result is a ZoneB address as the source IP in the packet header, and the
response is routed through GatewayB. Without source-based routing, the default route is destination-based, so the response is routed
through GatewayA.
In another example, a client on SubnetC, which is not connected to the PowerScale cluster, makes a request to SmartConnect ZoneA and
ZoneB. The response from ZoneA is routed through GatewayA, and the response from ZoneB is routed through GatewayB. In other
words, the traffic is split between gateways. Without source-based routing, both responses are routed through the same gateway.
Source-based routing is disabled by default. Enabling or disabling source-based routing goes into effect immediately. Packets in transit
continue on their original courses, and subsequent traffic is routed based on the status change. If the status of source-based routing
changes during transmission, transactions that are composed of multiple packets might be disrupted or delayed.
Source-based routing can conflict with static routes. If a routing conflict occurs, source-based routing rules are prioritized over the static
route.
Consider enabling source-based routing if you have a large network with a complex topology. For example, if your network is a multitenant
environment with several gateways, traffic is more efficiently distributed with source-based routing.
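The SubnetA/SubnetB example above can be modeled in a few lines (an illustrative sketch with hypothetical prefixes and gateway names, not OneFS code): with source-based routing, the source IP of the response selects the gateway; without it, the default gateway is used.

```python
import ipaddress

# Hypothetical per-subnet rules mirroring the SubnetA/SubnetB example
RULES = [
    (ipaddress.ip_network("10.10.0.0/16"), "GatewayA"),  # SubnetA / ZoneA
    (ipaddress.ip_network("10.20.0.0/16"), "GatewayB"),  # SubnetB / ZoneB
]
DEFAULT_GATEWAY = "GatewayA"   # the highest-priority gateway on the node

def pick_gateway(source_ip, source_based_routing=True):
    """Choose the egress gateway for an outgoing packet. With SBR the
    *source* IP (the SmartConnect zone the response originates from)
    decides the gateway; without it, the default gateway is used.
    """
    if source_based_routing:
        src = ipaddress.ip_address(source_ip)
        for net, gw in RULES:
            if src in net:
                return gw
    return DEFAULT_GATEWAY

# A response from ZoneB (source 10.20.0.5) leaves through GatewayB under
# SBR, but through the default GatewayA when SBR is disabled.
assert pick_gateway("10.20.0.5", source_based_routing=True) == "GatewayB"
assert pick_gateway("10.20.0.5", source_based_routing=False) == "GatewayA"
```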
Related tasks
Enable or disable source-based routing on page 379
Static routing
A static route directs outgoing client traffic to a specified gateway based on the IP address of the client connection.
You configure static routes by IP address pool, and each route applies to all nodes that have network interfaces as IP address pool
members.
You might configure static routing in order to connect to networks that are unavailable through the default routes or if you have a small
network that only requires one or two routes.
NOTE: If you have upgraded from a version earlier than OneFS 7.0.0, existing static routes that were added through rc
scripts will no longer work and must be re-created.
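Route selection as described above can be sketched as a longest-prefix match over the configured static routes, falling back to the default gateway (illustrative only; the prefixes and gateway addresses are hypothetical, and OneFS performs routing internally):

```python
import ipaddress

# Hypothetical static routes configured on an IP address pool:
# traffic to each destination prefix leaves through the listed gateway.
STATIC_ROUTES = [
    (ipaddress.ip_network("172.16.0.0/12"), "10.1.1.254"),
    (ipaddress.ip_network("172.16.40.0/24"), "10.1.1.253"),
]
DEFAULT_GATEWAY = "10.1.1.1"

def route(dest_ip):
    """Pick the gateway for a client destination using longest-prefix match
    over the static routes, falling back to the default gateway."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(net, gw) for net, gw in STATIC_ROUTES if dest in net]
    if not matches:
        return DEFAULT_GATEWAY
    return max(matches, key=lambda m: m[0].prefixlen)[1]

assert route("172.16.40.7") == "10.1.1.253"   # the /24 route beats the /12
assert route("172.17.5.5") == "10.1.1.254"    # only the /12 route matches
assert route("8.8.8.8") == DEFAULT_GATEWAY    # no static route applies
```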
Related tasks
Add or remove a static route on page 380
Related concepts
Internal IP address ranges on page 355
Modify the internal network netmask
You can modify the netmask value for the internal network.
If the netmask is too restrictive for the size of the internal network, you must modify the netmask settings. It is recommended that you
specify a class C netmask, such as 255.255.255.0, for the internal netmask. This netmask is large enough to accommodate future
nodes.
NOTE: For the changes in netmask value to take effect, you must reboot the cluster.
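The sizing rationale above can be checked with Python's ipaddress module (an illustrative sketch; the 128.221.253.0 range is a placeholder for your internal network): a class C (/24) netmask leaves 254 usable host addresses, while a tighter mask such as /28 would cap the internal range at 14.

```python
import ipaddress

# A 255.255.255.0 (/24) internal netmask leaves room for 254 node addresses
net = ipaddress.ip_network("128.221.253.0/255.255.255.0")
assert net.prefixlen == 24
assert net.num_addresses - 2 == 254   # minus the network and broadcast addresses

# A tighter mask such as /28 would cap the internal range at 14 hosts
small = ipaddress.ip_network("128.221.253.0/28")
assert small.num_addresses - 2 == 14
```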
Related concepts
Internal IP address ranges on page 355
Related concepts
Internal network failover on page 356
Disable internal network failover
You can disable the int-b and failover internal networks.
1. Click Cluster Management > Network Configuration > Internal Network.
2. In the Internal Network Settings area, click int-b/Failover.
3. In the State area, click Disable.
4. Click Submit.
The Confirm Cluster Reboot dialog box appears.
5. Restart the cluster by clicking Yes.
Related concepts
Internal network failover on page 356
Managing groupnets
You can create and manage groupnets on a cluster.
Create a groupnet
You can create a groupnet and configure DNS client settings.
1. Click Cluster Management > Network Configuration > External Network.
2. Click Add a groupnet.
The Create Groupnet window opens.
3. In the Name field, type a name for the groupnet that is unique in the system.
The name can be up to 32 alphanumeric characters long and can include underscores or hyphens, but cannot include spaces or other
punctuation.
4. Optional: In the Description field, type a descriptive comment about the groupnet.
The description cannot exceed 128 characters.
5. In the DNS Settings area, configure the following DNS settings you want to apply to the groupnet:
• DNS Servers
• DNS Search Suffixes
• DNS Resolver Rotate
• Server-side DNS Search
• DNS Cache
6. Click Add Groupnet.
Related concepts
Groupnets on page 356
DNS name resolution on page 357
Related References
DNS settings on page 365
DNS settings
You can assign DNS servers to a groupnet and modify DNS settings that specify DNS server behavior.
Setting Description
DNS Servers Sets a list of DNS IP addresses. Nodes issue DNS requests to
these IP addresses.
You cannot specify more than three DNS servers.
DNS Search Suffixes Sets the list of DNS search suffixes. Suffixes are appended to
domain names that are not fully qualified.
You cannot specify more than six suffixes.
Enable DNS resolver rotate Sets the DNS resolver to rotate or round-robin across DNS
servers.
Enable DNS server-side search Specifies whether server-side DNS searching is enabled, which
appends DNS search lists to client DNS inquiries handled by a
SmartConnect service IP address.
Enable DNS cache Specifies whether DNS caching for the groupnet is enabled.
Related concepts
Groupnets on page 356
Related tasks
Create a groupnet on page 365
Modify a groupnet
You can modify groupnet attributes including the name, supported DNS servers, and DNS configuration settings.
1. Click Cluster Management > Network Configuration > External Network.
2. Click the View/Edit button in the row of the groupnet you want to modify.
3. From the View Groupnet Details window, click Edit.
4. From the Edit Groupnet Details window, modify the groupnet settings as needed.
5. Click Save changes.
Related concepts
Groupnets on page 356
DNS name resolution on page 357
Related References
DNS settings on page 365
Delete a groupnet
You can delete a groupnet from the system unless it is associated with an access zone or an authentication provider, or it is the default
groupnet. Removing a groupnet from the system might affect several other areas of OneFS and should be performed with caution.
In several cases, the association between a groupnet and another OneFS component, such as an access zone or an authentication provider, is
absolute. You cannot modify these components so that they become associated with another groupnet.
If you need to delete a groupnet, we recommend that you complete these tasks in the following order:
1. Delete IP address pools in subnets associated with the groupnet.
2. Delete subnets associated with the groupnet.
3. Delete authentication providers associated with the groupnet.
4. Delete access zones associated with the groupnet.
1. Click Cluster Management > Network Configuration > External Network.
2. Click the More button in the row of the groupnet you want to delete, and then click Delete Groupnet.
3. At the Confirm Delete dialog box, click Delete.
If you did not first delete access zones associated with the groupnet, the deletion fails, and the system displays an error.
Related concepts
Groupnets on page 356
View groupnets
You can view a list of all groupnets on the system and view the details of a specific groupnet.
1. Click Cluster Management > Network Configuration > External Network.
The External Network table displays all groupnets in the system and displays the following attributes:
• Groupnet name
• DNS servers assigned to the groupnet
• The type of groupnet
• Groupnet description
2. Click the View/Edit button in a row to view the current settings for that groupnet.
The View Groupnet Details dialog box opens and displays the following settings:
• Groupnet name
• Groupnet description
• DNS servers assigned to the groupnet
• DNS search suffixes
• Whether DNS resolver is enabled
• Whether DNS search is enabled
• Whether DNS caching is enabled
3. Click the tree arrow next to a groupnet name to view subnets assigned to the groupnet.
The table displays each subnet in a new row within the groupnet tree.
4. When you have finished viewing groupnet details, click Close.
Related concepts
Groupnets on page 356
Create a subnet
You can add a subnet to the external network. Subnets are created under a groupnet.
1. Click Cluster Management > Network Configuration > External Network.
2. Click More > Add Subnet next to the groupnet that will contain the new subnet.
The system displays the Create Subnet window.
3. In the Name field, specify the name of the new subnet.
The name can be up to 32 alphanumeric characters long and can include underscores or hyphens, but cannot include spaces or other
punctuation.
4. Optional: In the Description field, type a descriptive comment about the subnet.
The comment can be no more than 128 characters.
5. From the IP family area, select one of the following IP address formats for the subnet:
• IPv4
• IPv6
All subnet settings and IP address pools added to the subnet must use the specified address format. You cannot modify the address
family once the subnet has been created.
6. In the Netmask field, specify a subnet mask or prefix length, depending on the IP family you selected.
• For an IPv4 subnet, type a dot-decimal octet (x.x.x.x) that represents the subnet mask.
• For an IPv6 subnet, type an integer (ranging from 1 to 128) that represents the network prefix length.
7. In the Gateway Address field, type the IP address of the gateway through which the cluster routes communications to systems
outside of the subnet.
8. In the Gateway Priority field, type the priority (integer) that determines which subnet gateway will be installed as the default
gateway on nodes that have more than one subnet.
A value of 1 represents the highest priority.
9. In the MTU list, type or select the size of the maximum transmission units the cluster uses in network communication. Any numerical
value is allowed, but must be compatible with your network and the configuration of all devices in the network path. Common settings
are 1500 (standard frames) and 9000 (jumbo frames).
Although OneFS supports both 1500 MTU and 9000 MTU, using a larger frame size for network traffic permits more efficient
communication on the external network between clients and cluster nodes. For example, if a subnet is connected through a 10 GbE
interface and NIC aggregation is configured for IP address pools in the subnet, we recommend that you set the MTU to 9000. To
benefit from using jumbo frames, all devices in the network path must be configured to use jumbo frames.
10. If you plan to use SmartConnect for connection balancing, in the SmartConnect Service IP field, type the IP address that will
receive all incoming DNS requests for each IP address pool according to the client connection policy. You must have at least one
subnet configured with a SmartConnect service IP in order to use connection balancing.
11. In the SmartConnect Service Name field, specify the SmartConnect service name.
12. In the Advanced Settings section, you can enable VLAN tagging if you want to enable the cluster to participate in virtual networks.
NOTE: Configuring a VLAN requires advanced knowledge of network switches. Consult your network switch
documentation before configuring your cluster for a VLAN.
13. If you enable VLAN tagging, specify a VLAN ID that corresponds to the ID number for the VLAN set on the switch, with a value from 2
through 4094.
14. In the Hardware Load Balancing IPs field, type the IP address for a hardware load balancing switch using Direct Server Return
(DSR). This routes all client traffic to the cluster through the switch. The switch determines which node handles the traffic for the
client, and passes the traffic to that node.
15. Click Remove IP to remove a hardware load balancing IP.
16. Click Add Subnet.
Related concepts
Subnets on page 357
IPv6 support on page 357
Modify a subnet
You can modify a subnet on the external network.
1. Click Cluster Management > Network Configuration > External Network.
2. Click View/Edit next to the subnet that you want to modify.
The system displays the View Subnet Details window.
3. Click Edit.
The system displays the Edit Subnet Details window.
4. Modify the subnet settings, and then click Save Changes.
Related concepts
Subnets on page 357
Delete a subnet
You can delete a subnet from the external network.
Deleting a subnet that is in use can prevent access to the cluster. Client connections to the cluster through any IP address pool that
belongs to the deleted subnet will be terminated.
1. Click Cluster Management > Network Configuration > External Network.
2. Click More > Delete Subnet next to the subnet that you want to delete.
3. At the confirmation prompt, click Delete.
Related concepts
Subnets on page 357
Related concepts
Subnets on page 357
DNS request handling on page 360
Related concepts
Subnets on page 357
VLANs on page 358
Add or remove a DSR address
If your network contains a hardware load balancing switch using Direct Server Return (DSR), you must configure a DSR address for each
subnet.
The DSR address routes all client traffic to the cluster through the switch. The switch determines which node handles the traffic for the
client and passes the traffic to that node.
1. Click Cluster Management > Network Configuration > External Network.
2. Click View/Edit next to the subnet that you want to modify.
The system displays the View Subnet Details window.
3. Click Edit.
The system displays the Edit Subnet Details window.
4. In the Hardware Load Balancing IPs field, type the DSR address.
5. Click Save Changes.
Related concepts
Subnets on page 357
Specify the range in the following format: <lower_ip_address>–<higher_ip_address>.
7. Click Add Pool.
Related concepts
IP address pools on page 358
Specify the range in the following format: low IP address - high IP address
5. To add an additional range, click Add an IP range.
The system provides fields in which you can enter the low and high IP addresses of the additional range.
6. To delete an IP address range, click Remove IP range next to the range you want to delete.
7. Click Save Changes.
Related concepts
IP address pools on page 358
Related concepts
SmartConnect zones and aliases on page 359
Related concepts
DNS name resolution on page 357
Related tasks
Configure a SmartConnect service IP address on page 369
Related concepts
IP address allocation on page 360
Static Assigns one IP address to each network interface added to the IP address pool, but does not guarantee that all IP
addresses are assigned.
Once assigned, the network interface keeps the IP address indefinitely, even if the network interface becomes
unavailable. To release the IP address, remove the network interface from the pool or remove it from the cluster.
Without a license for SmartConnect Advanced, static is the only method available for IP address allocation.
Dynamic Assigns IP addresses to each network interface added to the IP address pool until all IP addresses are assigned.
This guarantees a response when clients connect to any IP address in the pool.
If a network interface becomes unavailable, its IP addresses are automatically moved to other available network
interfaces in the pool as determined by the IP address failover policy.
This method is only available with a license for SmartConnect Advanced.
Related concepts
IP address allocation on page 360
• SMB, HTTP, HDFS, FTP, sFTP, FTPS, SyncIQ, Swift: Static
• NFSv3, NFSv4: Dynamic
Related concepts
IP address allocation on page 360
Related concepts
Connection balancing on page 361
Round-robin Selects the next available node on a rotating basis. This is the default method. Without a SmartConnect license
for advanced settings, this is the only method available for load balancing.
Connection count Determines the number of open TCP connections on each available node and selects the node with the fewest
client connections.
Network throughput Determines the average throughput on each available node and selects the node with the lowest network
interface load.
CPU usage Determines the average CPU utilization on each available node and selects the node with the lightest processor
usage.
Related concepts
Connection balancing on page 361
Related concepts
IP address failover on page 361
5. Click Save Changes.
Related concepts
IP address rebalancing on page 361
Manual Does not redistribute IP addresses until you manually issue a rebalance command through the command-line
interface.
Upon rebalancing, IP addresses will be redistributed according to the connection balancing method specified by
the IP address failover policy defined for the IP address pool.
Automatic Automatically redistributes IP addresses according to the connection balancing method specified by the IP
address failover policy defined for the IP address pool.
Automatic rebalance may also be triggered by changes to cluster nodes, network interfaces, or the configuration
of the external network.
NOTE: Rebalancing can disrupt client connections. Ensure the client workflow on the IP address
pool is appropriate for automatic rebalancing.
Related concepts
IP address rebalancing on page 361
The system displays the Edit Pool Details window.
4. To add a network interface to the IP address pool:
a. From the Pool Interface Members area, select the interface you want from the Available table.
If you add an aggregated interface to the pool, you cannot individually add any interfaces that are part of the aggregated interface.
b. Click Add.
5. To remove a network interface from the IP address pool:
a. From the Pool Interface Members area, select the interface you want from the In Pool table.
b. Click Remove.
6. Click Save Changes.
Related concepts
Managing network interface members on page 376
Link aggregation on page 358
Link Aggregation Control Protocol (LACP)  Dynamic aggregation mode that supports the IEEE 802.3ad Link Aggregation Control Protocol (LACP). You can configure LACP at the switch level, which allows the node to negotiate interface aggregation with the switch. LACP balances outgoing traffic across the interfaces based on hashed protocol header information that includes the source and destination address and the VLAN tag, if available. This option is the default aggregation mode.
Loadbalance (FEC)  Static aggregation method that accepts all incoming traffic and balances outgoing traffic over aggregated interfaces based on hashed protocol header information that includes source and destination addresses.
Active/Passive Failover  Static aggregation mode that switches to the next active interface when the primary interface becomes unavailable. The primary interface handles traffic until there is an interruption in communication; at that point, one of the secondary interfaces takes over the work of the primary.
Round-robin  Static aggregation mode that rotates connections through the nodes in a first-in, first-out sequence, handling all processes without priority. It balances outbound traffic across all active ports in the aggregated link and accepts inbound traffic on any port.
NOTE: This method is not recommended if your cluster is handling TCP/IP workloads.
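A hash-based member selection in the spirit of LACP can be sketched as follows. The CRC32 hash and key layout are illustrative choices; real LACP hashing is switch- and implementation-specific:

```python
import zlib

# Illustrative hash over source/destination address and VLAN tag; real LACP
# hashing is implementation-specific.
def select_interface(src, dst, vlan, interfaces):
    key = f"{src}|{dst}|{vlan}".encode()
    return interfaces[zlib.crc32(key) % len(interfaces)]

interfaces = ["lagg0-port0", "lagg0-port1"]
# Packets of the same flow always hash to the same member interface.
first = select_interface("10.0.0.1", "10.0.0.9", 100, interfaces)
again = select_interface("10.0.0.1", "10.0.0.9", 100, interfaces)
assert first == again and first in interfaces
```

Because the hash is deterministic, a given flow stays on one member port, which preserves packet ordering within the flow.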
Related tasks
Configure link aggregation on page 377
Related concepts
Node provisioning rules on page 362
Related concepts
Source-based routing on page 362
Add or remove a static route
You can configure static routes to direct outgoing traffic to specific destinations through a specific gateway. Static routes are configured
at the IP address pool level.
Static routes must match the IP address family (IPv4 or IPv6) of the IP address pool they are configured within.
1. Click Cluster Management > Network Configuration > External Network.
2. Click View/Edit next to the IP address pool that you want to modify.
The system displays the View Pool Details window.
3. Click Edit.
The system displays the Edit Pool Details window.
4. To add a static route:
a. From the Static Routes area, click Add static route.
The system displays the Create Static Route window.
b. In the Subnet field, specify the IPv4 or IPv6 address of the subnet that traffic will be routed to.
c. In the Netmask or Prefixlen field, specify the netmask (IPv4) or prefix length (IPv6) of the subnet you provided.
d. In the Gateway field, specify the IPv4 or IPv6 address of the gateway that traffic will be routed through.
e. Click Add Static Route.
5. To remove a static route, from the Static Routes table, click the Remove button next to the route you want to delete.
6. Click Save Changes.
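The rule that a static route must match the IP address family of its pool can be sketched with Python's standard ipaddress module. The function name and error handling are illustrative; OneFS performs its own validation when you save the pool:

```python
import ipaddress

# Hypothetical validator mirroring the family-matching rule described above.
def validate_static_route(pool_family, subnet, prefixlen, gateway):
    """Reject a route whose subnet or gateway does not match the pool family."""
    net = ipaddress.ip_network(f"{subnet}/{prefixlen}", strict=False)
    gw = ipaddress.ip_address(gateway)
    if net.version != pool_family or gw.version != pool_family:
        raise ValueError("static route must match the IP address pool family")
    return net, gw

validate_static_route(4, "192.0.2.0", 24, "192.0.2.1")  # accepted
# validate_static_route(4, "2001:db8::", 64, "2001:db8::1")  # ValueError
```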
Related concepts
Static routing on page 363
Setting Description
TTL No Error Minimum Specifies the lower boundary on time-to-live for cache hits.
The default value is 30 seconds.
TTL No Error Maximum Specifies the upper boundary on time-to-live for cache hits.
The default value is 3600 seconds.
TTL Non-existent Domain Minimum Specifies the lower boundary on time-to-live for nxdomain.
TTL Non-existent Domain Maximum Specifies the upper boundary on time-to-live for nxdomain.
The default value is 3600 seconds.
TTL Other Failures Minimum Specifies the lower boundary on time-to-live for non-nxdomain
failures.
The default value is 0 seconds.
TTL Other Failures Maximum Specifies the upper boundary on time-to-live for non-nxdomain
failures.
The default value is 60 seconds.
TTL Lower Limit For Server Failures Specifies the lower boundary on time-to-live for DNS server
failures.
The default value is 300 seconds.
TTL Upper Limit For Server Failures Specifies the upper boundary on time-to-live for DNS server
failures.
The default value is 3600 seconds.
Eager Refresh Specifies the lead time to refresh cache entries that are nearing
expiration.
The default value is 0 seconds.
Cache Entry Limit Specifies the maximum number of entries that the DNS cache can
contain.
The default value is 65536 entries.
Test Ping Delta Specifies the delta for checking the cbind cluster health.
The default value is 30 seconds.
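The way each minimum/maximum pair above bounds a cached answer's time-to-live can be illustrated with a simple clamp; the values shown mirror the documented TTL No Error defaults:

```python
# A cached answer's TTL is held between the configured lower and upper bounds.
def clamp_ttl(ttl, minimum, maximum):
    return max(minimum, min(ttl, maximum))

# "TTL No Error" defaults: 30-second floor, 3600-second ceiling.
print(clamp_ttl(5, 30, 3600))      # 30: raised to the floor
print(clamp_ttl(86400, 30, 3600))  # 3600: capped at the ceiling
```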
29
Partitioned Performance Monitoring for NFS
This section contains the following topics:
Topics:
• Partitioned Performance Monitoring using NFS
• Workload monitoring
• Additional information
Workload monitoring
Workload monitoring is key to show-back and charge-back resource accounting.
A workload is a set of identification metrics and resource consumption metrics. For example, the workload {username:bob, zone_name:System} consumed {cpu 1.2s, bytes_in 10K, bytes_out 20M, ...}.
A dataset is a specification of identification metrics to aggregate workloads by, together with the collected workloads that match that specification. For instance, the workload above would belong to a dataset that specified the identification metrics {username, zone_name}.
A filter is a method for including only workloads that match specific identification metrics. For example, take the following workloads for a dataset with the filter {zone_name:System}:
• {username:bob, zone_name:System} would be included
• {username:mary, zone_name:System} would be included
• {username:bob, zone_name:Quarantine} would not be included
A performance dataset automatically collects a list of the top workloads; pinning and filtering allow further customization of that list.
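The dataset and filter semantics above can be sketched as a small matching function. This is an illustrative model, not the OneFS implementation:

```python
# A workload is included in a filtered dataset when its identification
# metrics match every key/value pair in the filter.
def matches(workload, identification_filter):
    return all(workload.get(k) == v for k, v in identification_filter.items())

workloads = [
    {"username": "bob", "zone_name": "System"},
    {"username": "mary", "zone_name": "System"},
    {"username": "bob", "zone_name": "Quarantine"},
]
included = [w for w in workloads if matches(w, {"zone_name": "System"})]
print([w["username"] for w in included])  # ['bob', 'mary']
```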
Additional information
These pointers provide you with some tips regarding the feature.
• Name lookup failures, for example UID-to-username mappings, are reported in an additional column in the statistics output.
• Statistics are updated every 30 seconds. A newly created dataset does not show up in the statistics output until the update has occurred. Similarly, an old dataset might show up until that update occurs.
• Export ID tracking is only available post commit when upgrading. All other features and statistics are available during upgrade.
• The export_id and share_name metrics can be combined in a dataset.
○ A dataset with both metrics lists workloads with either export_id or share_name.
○ A dataset with only the share_name metric excludes NFS workloads.
○ A dataset with only the export_id metric excludes SMB workloads.
• Path Tracking is only enabled for SMB.
Anti-virus overview
You can scan the files that you store on a PowerScale cluster for viruses, malware, and other security threats by integrating with third-party scanning services through the Internet Content Adaptation Protocol (ICAP).
OneFS sends files through ICAP to a server running third-party anti-virus scanning software. These servers, called ICAP servers, scan files for viruses.
After an ICAP server scans a file, it informs OneFS of whether the file is a threat. If a threat is detected, OneFS informs system
administrators by creating an event, displaying near real-time summary information, and documenting the threat in an anti-virus scan
report. You can configure OneFS to request that ICAP servers attempt to repair infected files. You can also configure OneFS to protect
users against potentially dangerous files by truncating or quarantining infected files.
Before OneFS sends a file for scanning, it ensures that the scan is not redundant. If a scanned file has not been modified since the last
scan, the file will not be scanned again unless the virus database on the ICAP server has been updated since the last scan.
NOTE: Anti-virus scanning is available only on nodes in the cluster that are connected to the external network.
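The redundancy check described above can be sketched as a predicate; the timestamps are illustrative epoch-second values, not OneFS internals:

```python
# Rescan only if the file changed since its last scan, or the ICAP server's
# virus database was updated after that scan.
def needs_scan(file_mtime, last_scan_time, db_update_time):
    if last_scan_time is None:       # never scanned
        return True
    if file_mtime > last_scan_time:  # modified since the last scan
        return True
    return db_update_time > last_scan_time  # newer virus signatures

print(needs_scan(100, 200, 150))  # False: unchanged file, older database
print(needs_scan(100, 200, 300))  # True: database updated since the scan
```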
On-access scanning
You can configure OneFS to send files to be scanned before they are opened, after they are closed, or both. This can be done through file
access protocols such as SMB, NFS, and SSH. Sending files to be scanned after they are closed is faster but less secure. Sending files to
be scanned before they are opened is slower but more secure.
If OneFS is configured to ensure that files are scanned after they are closed, when a user creates or modifies a file on the cluster, OneFS
queues the file to be scanned. OneFS then sends the file to an ICAP server to be scanned when convenient. In this configuration, users
can always access files without any delay. However, it is possible that after a user modifies or creates a file, a second user might access
the file before the file is scanned. If a virus was introduced to the file from the first user, the second user will be able to access the
infected file. Also, if an ICAP server is unable to scan a file, the file will still be accessible to users.
If OneFS ensures that files are scanned before they are opened, when a user attempts to download a file from the cluster, OneFS first
sends the file to an ICAP server to be scanned. The file is not sent to the user until the scan is complete. Scanning files before they are
opened is more secure than scanning files after they are closed, because users can access only scanned files. However, scanning files
before they are opened requires users to wait for files to be scanned. You can also configure OneFS to deny access to files that cannot be
scanned by an ICAP server, which can increase the delay. For example, if no ICAP servers are available, users will not be able to access
any files until the ICAP servers become available again.
If you configure OneFS to ensure that files are scanned before they are opened, it is recommended that you also configure OneFS to
ensure that files are scanned after they are closed. Scanning files as they are both opened and closed will not necessarily improve security,
but it will usually improve data availability when compared to scanning files only when they are opened. If a user wants to access a file, the
file may have already been scanned after the file was last modified, and will not need to be scanned again if the ICAP server database has
not been updated since the last scan.
NOTE: When scanning, do not exclude any file types (extensions). This will ensure that any renamed files are caught.
Related tasks
Configure on-access scanning settings on page 388
Related concepts
Managing antivirus policies on page 391
Related tasks
Create an antivirus policy on page 390
• The total number of files scanned.
• The total size of the files scanned.
• The total network traffic sent.
• The network throughput that was consumed by virus scanning.
• Whether the scan succeeded.
• The total number of infected files detected.
• The names of infected files.
• The threats associated with infected files.
• How OneFS responded to detected threats.
Related concepts
Managing antivirus reports on page 394
Related tasks
Configure antivirus report retention settings on page 388
View antivirus reports on page 394
ICAP servers
The number of ICAP servers that are required to support a PowerScale cluster depends on how virus scanning is configured, the amount
of data a cluster processes, and the processing power of the ICAP servers.
If you intend to scan files exclusively through anti-virus scan policies, it is recommended that you have a minimum of two ICAP servers per
cluster. If you intend to scan files on access, it is recommended that you have at least one ICAP server for each node in the cluster.
If you configure more than one ICAP server for a cluster, ensure that the processing power of each ICAP server is relatively equal. OneFS
distributes files to the ICAP servers on a rotating basis, regardless of the processing power of the ICAP servers. If one server is more
powerful than another, OneFS does not send more files to the more powerful server.
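The sizing guidance can be summarized as a rule of thumb. This helper is only an illustration of the recommendation above, not an official formula:

```python
# Rule of thumb from the guidance above; not an official sizing formula.
def recommended_icap_servers(node_count, on_access_scanning):
    if on_access_scanning:
        return max(node_count, 1)  # at least one ICAP server per node
    return 2                       # at least two per cluster for policy scans

print(recommended_icap_servers(8, on_access_scanning=True))   # 8
print(recommended_icap_servers(8, on_access_scanning=False))  # 2
```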
CAUTION: When files are sent from the cluster to an ICAP server, they are sent across the network in cleartext. Ensure
that the path from the cluster to the ICAP server is on a trusted network. Authentication is not supported. If
authentication is required between an ICAP client and ICAP server, hop-by-hop Proxy Authentication must be used.
Related concepts
Managing ICAP servers on page 389
Related tasks
Add and connect to an ICAP server on page 389
Alert All threats that are detected cause an event to be generated in OneFS at the warning level, regardless of the
threat response configuration.
Repair The ICAP server attempts to repair the infected file before returning the file to OneFS.
Quarantine OneFS quarantines the infected file. A quarantined file cannot be accessed by any user. However, a quarantined
file can be removed from quarantine by the root user if the root user is connected to the cluster through secure
shell (SSH). If you back up your cluster through NDMP backup, quarantined files will remain quarantined when the
files are restored. If you replicate quarantined files to another PowerScale cluster, the quarantined files will
continue to be quarantined on the target cluster. Quarantines operate independently of access control lists
(ACLs).
Truncate OneFS truncates the infected file. When a file is truncated, OneFS reduces the size of the file to zero bytes to
render the file harmless.
You can configure OneFS and ICAP servers to react in one of the following ways when threats are detected:
Repair or Attempts to repair infected files. If an ICAP server fails to repair a file, OneFS quarantines the file. If the ICAP
quarantine server repairs the file successfully, OneFS sends the file to the user. Repair or quarantine can be useful if you want
to protect users from accessing infected files while retaining all data on a cluster.
Repair or truncate Attempts to repair infected files. If an ICAP server fails to repair a file, OneFS truncates the file. If the ICAP server
repairs the file successfully, OneFS sends the file to the user. Repair or truncate can be useful if you do not care
about retaining all data on your cluster, and you want to free storage space. However, data in infected files will be
lost.
Alert only Only generates an event for each infected file. It is recommended that you do not apply this setting.
Repair only Attempts to repair infected files. Afterwards, OneFS sends the files to the user, whether or not the ICAP server
repaired the files successfully. It is recommended that you do not apply this setting. If you only attempt to repair
files, users will still be able to access infected files that cannot be repaired.
Quarantine Quarantines all infected files. It is recommended that you do not apply this setting. If you quarantine files without
attempting to repair them, you might deny access to infected files that could have been repaired.
Truncate Truncates all infected files. It is recommended that you do not apply this setting. If you truncate files without
attempting to repair them, you might delete data unnecessarily.
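The combined responses can be sketched as a decision function; the policy names and return values are illustrative labels, not OneFS configuration keys:

```python
# Decision sketch: what happens to an infected file under each response.
def respond(policy, repair_succeeded):
    if policy == "repair_or_quarantine":
        return "deliver" if repair_succeeded else "quarantine"
    if policy == "repair_or_truncate":
        return "deliver" if repair_succeeded else "truncate"
    if policy == "repair_only":
        return "deliver"   # delivered even when the repair fails
    if policy == "alert_only":
        return "alert"
    return policy          # "quarantine" or "truncate", unconditionally

print(respond("repair_or_quarantine", repair_succeeded=False))  # quarantine
print(respond("repair_or_truncate", repair_succeeded=True))     # deliver
```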
Related tasks
Configure antivirus threat response settings on page 388
Related concepts
On-access scanning on page 384
Related concepts
Antivirus threat responses on page 386
Related concepts
Antivirus scan reports on page 385
Enable or disable antivirus scanning
You can enable or disable all antivirus scanning.
This procedure is available only through the web administration interface.
1. Click Data Protection > Antivirus > Summary.
2. In the Service area, select or clear Enable Antivirus service.
Related concepts
ICAP servers on page 386
Related tasks
Test an ICAP server connection on page 389
Related concepts
ICAP servers on page 386
Related tasks
Reconnect to an ICAP server on page 390
Related concepts
ICAP servers on page 386
Related tasks
Temporarily disconnect from an ICAP server on page 390
Related concepts
ICAP servers on page 386
Related tasks
Add and connect to an ICAP server on page 389
4. In the Policy Name field, type a name for the antivirus policy.
5. Optional: To specify an optional description of the policy, in the Description field, type a description.
6. In the Paths field, specify the directory that you want to scan.
Optionally, click Add another directory path to specify additional directories.
7. In the Recursion Depth area, specify how much of the specified directories you want to scan.
• To scan all subdirectories of the specified directories, click Full recursion.
• To scan a limited number of subdirectories of the specified directories, click Limit depth and then specify how many subdirectories you want to scan.
8. Optional: To scan all files regardless of whether OneFS has marked files as having been scanned, or if global settings specify that
certain files should not be scanned, select Enable force run of policy regardless of impact policy.
9. Optional: To modify the default impact policy of the antivirus scans, from the Impact Policy list, select a new impact policy.
10. In the Schedule area, specify whether you want to run the policy according to a schedule or manually.
Scheduled policies can also be run manually at any time.
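The Limit depth option can be illustrated with a depth-limited directory walk. This is a generic sketch, not how OneFS implements recursion limits:

```python
import os

# Walk a tree, pruning subdirectories deeper than the configured level.
def walk_limited(root, max_depth):
    root = root.rstrip(os.sep)
    base = root.count(os.sep)
    for dirpath, dirnames, filenames in os.walk(root):
        if dirpath.count(os.sep) - base >= max_depth:
            dirnames[:] = []  # do not descend past the limit
        yield dirpath, filenames
```

With max_depth=1, only the root and its immediate subdirectories are visited; Full recursion corresponds to walking with no pruning at all.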
Related concepts
Antivirus policy scanning on page 385
Enable or disable an antivirus policy
You can temporarily disable antivirus policies if you want to retain the policy but do not want to scan files.
1. Click Data Protection > Antivirus > Policies.
2. In the Antivirus Policies table, in the row for the antivirus policy you want to enable or disable, click More > Enable Policy or More
> Disable Policy.
Related concepts
Antivirus policy scanning on page 385
Scan a file
You can manually scan an individual file for viruses.
This procedure is available only through the command-line interface (CLI).
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Run the isi antivirus scan command.
The following command scans the /ifs/data/virus_file file for viruses:

isi antivirus scan /ifs/data/virus_file
Managing antivirus threats
You can repair, quarantine, or truncate files in which threats are detected. If you think that a quarantined file is no longer a threat, you can
rescan the file or remove the file from quarantine.
Related concepts
Antivirus threat responses on page 386
Rescan a file
You can rescan a file for viruses if, for example, you believe that a file is no longer a threat.
This procedure is available only through the command-line interface (CLI).
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Run the isi antivirus scan command.
For example, the following command scans /ifs/data/virus_file:

isi antivirus scan /ifs/data/virus_file
Related concepts
Antivirus threat responses on page 386
Related concepts
Antivirus threat responses on page 386
truncate -s 0 /ifs/data/virus_file
Related concepts
Antivirus threat responses on page 386
View threats
You can view files that have been identified as threats by an ICAP server.
1. Click Data Protection > Antivirus > Detected Threats.
2. In the Antivirus Threat Reports table, view potentially infected files.
Related concepts
Antivirus threat responses on page 386
Related references
Antivirus threat information on page 394
Name Displays the name of the detected threat as it is recognized by the ICAP server.
Path Displays the file path of the potentially infected file.
Remediation Indicates how OneFS responded to the file when the threat was detected.
Policy Displays the ID of the antivirus policy that caused the threat to be detected.
Detected Displays the time that the threat was detected.
Actions Displays actions that can be performed on the file.
Related concepts
Antivirus scan reports on page 385
31
File System Explorer
This section contains the following topics:
Topics:
• File System Explorer overview
• Browse the file system
• Create a directory
• Modify file and directory properties
• View file and directory properties
• File and directory properties
Create a directory
You can create a directory in the /ifs directory tree through the File System Explorer.
1. Click File System > File System Explorer.
2. Navigate to the directory in which you want to create the new directory.
3. Click Create Directory.
4. In the Create a Directory dialog box, in the Directory Name field, type a name for the directory.
5. In the Permissions area, assign permissions to the directory.
6. Click Create Directory.
Properties
Path Displays the absolute path of the file or directory.
File Size Displays the logical size of the file or directory.
Space Used Displays the physical size of the file or directory.
Last Modified Displays the time that the file or directory was last modified.
Last Accessed Displays the time that the file or directory was last accessed.
UNIX Permissions
User Displays the permissions assigned to the owner of the file or directory.
Group Displays the permissions assigned to the group of the file or directory.
Others Displays the permissions assigned to other users for the file or directory.