HP StoreAll Storage File System User Guide
Abstract
This guide describes how to configure and manage StoreAll software file systems and how to use NFS, SMB, FTP, and HTTP
to access file system data. The guide also describes the following file system features: quotas, remote replication, snapshots,
data retention and validation, data tiering, and file allocation. The guide is intended for system administrators managing 9300
Storage Gateway, 9320 Storage, X9720 Storage, and 9730 Storage. For the latest StoreAll guides, browse to
http://www.hp.com/support/StoreAllManuals.
Edition  Date            Software Version  Description
1        November 2009   5.3.1
         December 2009   5.3.2
         April 2010      5.4.0             Added information about file cloning, CIFS, directory tree quotas, the Statistics tool, and GUI procedures
         July 2010       5.4.1
         December 2010   5.5.0             Added information about authentication, CIFS, FTP, HTTP, SSL certificates, and remote replication
         April 2011      5.6
         September 2011  6.0               Added or updated information about data retention and validation, software snapshots, block snapshots, remote replication, HTTP, case insensitivity, and quotas
         June 2012       6.1               Added or updated information about file systems, file share creation, rebalancing segments, remote replication, user authentication, CIFS, LDAP, data retention, data tiering, file allocation, quotas, and Antivirus software
         December 2012   6.2
10       March 2013      6.3
11       May 2013        6.3               Updated information about creating StoreAll REST API shares, snapshots, creating SMB shares, and using the online quota check. Replaced references of the 9000 with StoreAll.
Contents
1 Using StoreAll software file systems.............................................................10
File system operations..............................................................................................................10
File system building blocks.......................................................................................................12
Configuring file systems...........................................................................................................12
Accessing file systems.............................................................................................................13
3 Configuring quotas...................................................................................28
How quotas work...................................................................................................................28
Enabling quotas on a file system and setting grace periods..........................................................28
Setting quotas for users, groups, and directories.........................................................................29
Using a quotas file..................................................................................................................32
Importing quotas from a file................................................................................................32
Exporting quotas to a file....................................................................................................33
Format of the quotas file.....................................................................................................33
Using online quota check........................................................................................................34
Configuring email notifications for quota events..........................................................................35
Deleting quotas......................................................................................................................35
Troubleshooting quotas............................................................................................................36
5 Using NFS...............................................................................................55
Exporting a file system............................................................................................................55
Unexporting a file system....................................................................................................58
Using case-insensitive file systems ............................................................................................58
Setting case insensitivity for all users (NFS/Linux/Windows)....................................................58
Viewing the current setting for case insensitivity......................................................................59
Clearing case insensitivity (setting to case sensitive) for all users (NFS/Linux/Windows)..............59
Log files............................................................................................................................59
Case insensitivity and operations affecting directories.............................................................60
7 Using SMB..............................................................................................76
Configuring file serving nodes for SMB......................................................................................76
Starting or stopping the SMB service and viewing SMB statistics...................................................76
Monitoring SMB services.........................................................................................................77
SMB shares...........................................................................................................................78
Configuring SMB shares with the GUI...................................................................................79
Configuring SMB signing ...................................................................................................83
Managing SMB shares with the GUI....................................................................................84
Configuring and managing SMB shares with the CLI..............................................................85
Linux permissions on files created over SMB..........................................................................87
Managing SMB shares with Microsoft Management Console...................................................88
Linux static user mapping with Active Directory...........................................................................93
Configuring Active Directory................................................................................................93
Assigning attributes............................................................................................................95
Synchronizing Active Directory 2008 with the NTP server used by the cluster.............................96
Consolidating SMB servers with common share names................................................................96
SMB clients............................................................................................................................98
Viewing quota information..................................................................................................98
8 Using FTP..............................................................................................104
Best practices for configuring FTP............................................................................................104
Managing FTP from the GUI..................................................................................................104
Configuring FTP ..............................................................................................................104
Managing the FTP configuration........................................................................................108
Managing FTP from the CLI....................................................................................................109
Configuring FTP ..............................................................................................................109
Managing the FTP configuration........................................................................................109
The vsftpd service.................................................................................................................110
Starting or stopping the FTP service manually...........................................................................110
Accessing shares..................................................................................................................111
FTP and FTPS commands for anonymous shares...................................................................111
FTP and FTPS commands for non-anonymous shares.............................................................112
FTP and FTPS commands for Fusion Manager......................................................................113
9 Using HTTP............................................................................................114
HTTP share types..................................................................................................................114
Uses for the StoreAll REST API.................................................................................................115
Features for each file share mode...........................................................................................115
Best practices for HTTP REST API shares...................................................................................115
Obtaining the HP StoreAll REST API Sample Client Application...................................................116
Checklist for creating HTTP shares...........................................................................................116
Best practices for configuring HTTP.........................................................................................117
Object mode shares and data retention...................................................................................117
Creating HTTP shares from the GUI.........................................................................................118
Creating standard HTTP shares .........................................................................................118
Creating StoreAll REST API shares......................................................................................123
Managing the HTTP configuration......................................................................................129
Tuning the socket read block size and file write block size .........................................................130
Creating HTTP shares from the CLI..........................................................................................131
Creating HTTP shares.......................................................................................................132
Managing the HTTP configuration......................................................................................132
Starting or stopping the HTTP service manually.........................................................................133
Accessing standard and file-compatible mode HTTP shares........................................................133
Configuring Windows clients to access HTTP WebDAV shares....................................................135
Troubleshooting HTTP............................................................................................................136
Delete Container..............................................................................................................150
Set Container Permission...................................................................................................150
Get Container Permission..................................................................................................150
Create/Update Object.....................................................................................................150
Retrieve Object................................................................................................................151
Delete Object..................................................................................................................151
15 Express Query......................................................................................217
Managing the metadata service.............................................................................................217
Backing up and restoring file systems with Express Query data..............................................218
Saving and importing file system metadata..........................................................................219
Metadata and continuous remote replication.......................................................................221
Metadata and synchronized server times.................................................................................221
Managing auditing...............................................................................................................222
Audit log........................................................................................................................222
Audit log reports..............................................................................................................223
22 Documentation feedback.......................................................................287
Glossary..................................................................................................288
Index.......................................................................................................290
The topology in the diagram reflects the architecture of the HP 9320, which uses a building block
of server pairs (known as couplets) with SAS-attached storage. In the diagram:
•  There are four file serving nodes, SS1–SS4. These nodes are also called segment servers.
•  SS1 and SS2 share access to segments 1–4 through SAS connections to a shared storage array.
•  SS3 and SS4 share access to segments 5–8 through SAS connections to a shared storage array.
(Specifically, a segment need not be a complete, rooted directory tree.) Segments can be any
size, and different segments can be different sizes.
2. The location of files and directories within particular segments in the file space is independent
of their respective and relative locations in the namespace. For example, a directory (Dir1)
can be located on one segment, while the files contained in that directory (File1 and File2)
are resident on other segments. The selection of segments for placing files and directories is
done dynamically when the file/directory is created, as determined by an allocation policy.
The allocation policy is set by the system administrator in accordance with the anticipated
access patterns and specific criteria relevant to the installation (such as performance and
manageability). The allocation policy can be changed at any time, even when the file system
is mounted and in use. Files can be redistributed across segments using a rebalancing utility.
For example, rebalancing can be used when some segments are too full while others have free
capacity, or when files need to be distributed across new segments.
3. Segment servers are responsible for managing individual segments of the file system. Each
segment is assigned to one segment server and each server may own multiple segments, as
shown by the color coding in the diagram. Segment ownership can be migrated between
servers with direct access to the storage volume while the file system is mounted. For example,
Seg1 can be migrated between SS1 and SS2 but not to SS3 or SS4.
Additional servers can be added to the system dynamically to meet growing performance
needs, without adding more capacity, by distributing the ownership of existing segments for
proper load balancing and utilization of all servers. Conversely, additional capacity can be
added to the file system while in active use without adding more servers; ownership of the
new segments is distributed among existing servers. Servers can be configured with failover
protection, with other servers being designated as standby servers that automatically take
control of a server's segments if a failure occurs.
4. Clients run the applications that use the file system. Clients can access the file system either
as a locally mounted cluster file system using the StoreAll Client or using standard network
attached storage (NAS) protocols such as NFS and Server Message Block (SMB).
5. Use of the StoreAll Client on a client system has some significant advantages over the NAS
approach; specifically, the StoreAll Client driver is aware of the segmented architecture of
the file system and, based on the file/directory being accessed, can route requests directly to
the correct segment server, yielding balanced resource utilization and high performance.
However, the StoreAll Client is available only for a limited range of operating systems.
6. NAS protocols such as NFS and SMB offer the benefits of multi-platform support and low cost
of administration of client software, as the client drivers for these protocols are generally
available with the base operating system. When using NAS protocols, a client must mount
the file system from one (or more) of the segment servers. As shown in the diagram, all requests
are sent to the server from which the share is mounted, which then performs the required
routing.
7. Any segment server in the namespace can access any segment. There are three cases:
   a. Selected segment is owned by the segment server initiating the operation (for example,
      SS1 accessing Seg1).
   b. Selected segment is owned by another segment server but is directly accessible at the
      block level by the segment server initiating the operation (for example, SS1 accessing
      Seg3).
   c. Selected segment is owned by another segment server and is not directly accessible by
      the segment server initiating the operation (for example, SS1 accessing Seg5).
   Each case is handled differently. The data paths are shown in heavy red broken lines in the
   diagram:
   a. The segment server initiating the operation services the read or write request to the local
      segment.
   b. In this case, reads and writes take different routes:
      1) The segment server initiating the operation can read files directly from the segment
         across the SAN; this is called a SAN READ.
      2) The segment server initiating the operation routes writes over the IP network to the
         segment server owning the segment. That server then writes data to the segment.
   c. All reads and writes must be routed over the IP network between the segment servers.
8. Step 7 assumed that the server had to go to a segment to read a file. However, every segment
server that reads a file keeps a copy of it cached in its memory regardless of which segment
it was read from (in the diagram, two servers have cached copies of File 1). The cached
copies are used to service local read requests for the file until the copy is made invalid, for
example, because the original file has been changed. The file system keeps track of which
servers have cached copies of a file and manages cache coherency using delegations, which
are StoreAll file system metadata structures used to track cached copies of data and metadata.
•  Quotas. This feature allows you to assign quotas to individual users or groups, or to a directory
   tree. Individual quotas limit the amount of storage or the number of files that a user or group
   can use in a file system. Directory tree quotas limit the amount of storage and the number of
   files that can be created on a file system located at a specific directory tree. See Configuring
   quotas (page 28).
•  Remote replication. This feature provides a method to replicate changes in a source file system
   on one cluster to a target file system on either the same cluster or a second cluster. See Using
   remote replication (page 178).
•  Data retention and validation. Data retention ensures that files cannot be modified or deleted
   for a specific retention period. Data validation scans can be used to ensure that files remain
   unchanged. See Managing data retention (page 196).
•  Antivirus support. This feature is used with supported Antivirus software, allowing you to scan
   files on a StoreAll file system. See Configuring Antivirus support (page 226).
•  StoreAll software snapshots. This feature allows you to capture a point-in-time copy of a file
   system or directory for online backup purposes and to simplify recovery of files from accidental
   deletion. Users can access the file system or directory as it appeared at the instant of the
   snapshot. See Creating StoreAll software snapshots (page 239).
•  Block snapshots. This feature uses the array capabilities to capture a point-in-time copy of a
   file system for online backup purposes and to simplify recovery of files from accidental deletion.
   The snapshot replicates all file system entities at the time of capture and is managed exactly
   like any other file system. See Creating block snapshots (page 249).
•  Data tiering. This feature allows you to set a preferred tier where newly created files will be
   stored. You can then create a tiering policy to move files from initial storage, based on file
   attributes such as modification time, access time, file size, or file type. See Using data tiering
   (page 261).
•  File allocation. This feature allocates new files and directories to segments according to the
   allocation policy and segment preferences that are in effect for a client. An allocation policy
   is an algorithm that determines the segments that are selected when clients write to a file
   system. See Using file allocation (page 279).
You can also use StoreAll clients to access file systems. Typically, these clients are installed during
the initial system setup. See the HP StoreAll Storage Installation Guide for more information.
NOTE: The 12th column (FFREE) shows the total available inode count per segment: 66 million
per segment for the original segments, and 1 billion per segment for the newer 64-bit segments.
This segment mix and inode count does not negatively affect the operation of your file system or
any applications.
For details about the prompts for each step of the wizard, see the GUI online help.
On the Select Storage dialog box, select the storage that will be used for the file system.
Configure Options dialog box. Enter a name for the file system, and specify the appropriate
configuration options.
WORM/Data Retention dialog box. If data retention will be used on the file system, enable it and
set the retention policy. See Managing data retention (page 196) for more information.
•  Default retention period. This period determines whether you can manage WORM
   (non-retained) files as well as WORM-retained files. (WORM (non-retained) files can be deleted
   at any time; WORM-retained files can be deleted only after the file's retention period has
   expired.)
   ◦  To manage only WORM-retained files, set the default retention period to a non-zero value.
      WORM-retained files then use this period by default; however, you can assign a different
      retention period if desired.
   ◦  To manage both WORM (non-retained) and WORM-retained files, uncheck Set Default Retention
      Period. The default retention period is then set to 0 seconds. When you make a WORM file
      retained, you will need to assign a retention period to the file.
•  Autocommit period. When the autocommit period is set, files become WORM or
   WORM-retained if they are not changed during the period. (If the default retention period is
   set to zero, the files become WORM. If the default retention period is set to a value greater
   than zero, the files become WORM-retained.) To use this feature, check Set Auto-Commit
   Period and specify the time period. The minimum value for the autocommit period is five
   minutes, and the maximum value is one year. If you plan to keep normal files on the file system,
   do not set the autocommit period.
•  Data validation. Select this option to schedule periodic scans on the file system. Use the default
   schedule, or click Modify to open the Data Validation Scan Schedule dialog box and configure
   your own schedule.
•  Report Data Generation. Select this option if you want to create data retention reports. Use
   the default schedule, or click Modify to open the Report Data Generation Schedule dialog
   box and configure your own schedule.
•  Express Query. Check this option to enable StoreAll Express Query on the file system. Express
   Query is a database used to record metadata state changes occurring on the file system.
Auditing Options dialog box. If you enabled Express Query on the WORM/Data Retention dialog
box, you can also enable auditing and select the events that you want to log.
Default File Shares dialog box. Use this dialog box to create an NFS export and/or an SMB share
at the root of the file system. The default settings are used. See Using NFS (page 55) and Using
SMB (page 76) for more information.
Review the Summary to ensure that the file system is configured properly. If necessary, you can
return to a dialog box and make any corrections.
The Data Retention tab allows you to change the data retention configuration. The file system must
be unmounted. See Configuring data retention on existing file systems (page 201) for more
information.
NOTE: Data retention cannot be enabled on a file system created on StoreAll software 5.6 or
earlier versions until the file system is upgraded.
The Allocation, Segment Preference, and Host Allocation tabs are used to modify file allocation
policies and to specify segment preferences for file serving nodes and StoreAll clients. See Using
file allocation (page 279) for more information.
Create a file system with the specified segments (segments are logical volumes):
ibrix_fs -c -f FSNAME -s LVLIST [-t TIERNAME] [-a] [-q] [-o OPTION1=VALUE1,OPTION2=VALUE2,...] [-F FMIN:FMAX:FTOTAL] [-D DMIN:DMAX:DTOTAL]
Create a file system and assign specific segments to specific file serving nodes:
ibrix_fs -c -f FSNAME -S LV1:HOSTNAME1,LV2:HOSTNAME2,... [-a] [-q] [-o OPTION1=VALUE1,OPTION2=VALUE2,...] [-t TIERNAME] [-F FMIN:FMAX:FTOTAL] [-D DMIN:DMAX:DTOTAL]
In the commands, the -t option specifies a tier. TIERNAME can be any alphanumeric, case-sensitive
text string. Tier assignment is not affected by other options that can be set with the ibrix_fs
command.
NOTE: A tier is created whenever a segment is assigned to it. Be careful to spell the name of the
tier correctly when you add segments to an existing tier. If you make an error in the name, a new
tier is created with the incorrect tier name, and no error is recognized.
Option           Syntax
Data retention   -o "retenMode=<mode>,retenDefPeriod=<period>,retenMinPeriod=<period>,retenMaxPeriod=<period>,retenAutoCommitPeriod=<period>"
Express Query    -T
Auditing         -oa OPTION1=VALUE1[,OPTION2=VALUE2,...]
The following example enables data retention, Express Query, and auditing, with all events being
audited:
ibrix_fs -o
"retenMode=Enterprise,retenDefPeriod=5m,retenMinPeriod=2s,retenMaxPeriod=30y,retenAutoCommitPeriod=1d"
-T -oa audit_mode=on,all=on -c -f ifs1 -s ilv_[1-4] -a
To mount or remount a file system, select it on the Filesystems panel and click Mount. You can
select several mount options on the Mount Filesystem dialog box. To remount the file system, click
Remount.
IMPORTANT:
Keep in mind:
•  Mount options do not persist unless they are set at the mountpoint. Mount options that are
   not set at the mountpoint are reset to match the mount options on the mountpoint when the
   file system is rebooted or remounted.
•  The ibrix_fs -i and ibrix_mountpoint -l commands display only the mount options
   for the mount point.
•  The mount command displays the noatime option. Ignore the noatime option; it is no longer
   used. If you set the atime option, the atime option will display instead of the noatime option
   when you run the mount command.
•  nodiratime: Do not update the directory inode access time when the directory is accessed.
•  nodquotstatfs: Disable file system reporting based on directory tree quota limits.
•  path: For StoreAll clients only, mount on the specified subdirectory path of the file system
   instead of the root.
•  remount: Remounts a file system without taking it offline. Use this option to change the current
   mount options on a file system.
You can also view mountpoint information for a particular server. Select that server on the Servers
panel, and select Mountpoints from the lower Navigator. To delete a mountpoint, select that
mountpoint and click Delete.
CLI procedures
The CLI commands are executed immediately on file serving nodes. For StoreAll clients, the command
intention is stored in the active Fusion Manager. When StoreAll software services start on a client,
the client queries the active Fusion Manager for any commands. If the services are already running,
you can force the client to query the Fusion Manager by executing either ibrix_client or
ibrix_lwmount -a on the client, or by rebooting the client.
If you have configured hostgroups for your StoreAll clients, you can apply a command to a specific
hostgroup. For information about creating hostgroups, see the administration guide for your system.
Creating mountpoints
Mountpoints must exist before a file system can be mounted. To create a mountpoint on file serving
nodes and StoreAll clients, enter the following command:
ibrix_mountpoint -c [-h HOSTLIST] -m MOUNTPOINT
For information about mountpoint options, see the "ibrix_mountpoint" section in the HP StoreAll
CLI Reference Guide.
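For example, a hypothetical invocation (host and mountpoint names are illustrative, and the
HOSTLIST is assumed to be comma-separated) that creates the mountpoint /mnt/ifs1 on two file
serving nodes:
ibrix_mountpoint -c -h node1,node2 -m /mnt/ifs1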
Deleting mountpoints
Before deleting mountpoints, verify that no file systems are mounted on them. To delete a mountpoint
from file serving nodes and StoreAll clients, use the following command:
ibrix_mountpoint -d [-h HOSTLIST] -m MOUNTPOINT
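Continuing the illustrative example above, the following command deletes the mountpoint
/mnt/ifs1 from the same two nodes:
ibrix_mountpoint -d -h node1,node2 -m /mnt/ifs1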
NOTE: If you do not include the -o parameter, the default access option for the mounted file
system is Read Write.
To unmount a file system from a specific mountpoint on a file serving node, StoreAll client, or
hostgroup:
ibrix_umount -m MOUNTPOINT
To unmount a file system locally, use one of the following commands on the StoreAll Linux client.
The first command detaches the specified file system from the client. The second command detaches
the file system that is mounted on the specified mountpoint.
ibrix_lwumount -f [fmname:]FSNAME
ibrix_lwumount -m MOUNTPOINT
To remove a client access entry, select the affected file system on the GUI, and then select Client
Exports from the lower Navigator. Select the access entry from the Client Exports display, and click
Delete.
On the CLI, use the ibrix_exportfs command to create an access entry:
ibrix_exportfs -c -f FSNAME -p CLIENT:/PATHNAME,CLIENT2:/PATHNAME,...
To see all access entries that have been created, use the following command:
ibrix_exportfs -c -l
To mount a restricted file system or a subdirectory of the restricted file system on a StoreAll client
using the CLI, specify the exported path as the option for the ibrix_lwmount command:
ibrix_lwmount -f FSNAME -m MOUNTPOINT -o mountpath=/PATHNAME
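For example, assuming an access entry was created for the path /usr/src on file system ifs1 (all
names illustrative), the client mount command would be:
ibrix_lwmount -f ifs1 -m /mnt/ifs1 -o mountpath=/usr/src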
To disable Export Control, execute the ibrix_fs command with the -C and -D options:
ibrix_fs -C -D -f FSNAME
To mount a file system that has Export Control enabled, include the ibrix_mount -o {RW|RO}
option to specify that all clients have either RO or RW access to the file system. The default is RO.
In addition, when specifying a hostgroup, the root user can be limited to RO access by adding
the root_ro parameter.
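For example, a hypothetical mount that gives all clients read-only access to file system ifs1 (file
system and mountpoint names are illustrative):
ibrix_mount -f ifs1 -m /mnt/ifs1 -o RO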
3 Configuring quotas
Quotas can be assigned to individual users or groups, or to a directory tree. Individual quotas
limit the amount of storage or the number of files that a user or group can use in a file system.
Directory tree quotas limit the amount of storage and the number of files that can be created on a
file system located at a specific directory tree. Note the following:
•  You can assign quotas to a user, group, or directory on the GUI or from the CLI. You can also
   import quota information from a file.
•  If a user has a user quota and a group quota for the same file system, the first quota reached
   takes precedence.
•  Nested directory quotas are not supported. You cannot configure quotas on a subdirectory
   differently than the parent directory.
To change the quotas configuration, click Modify on the Quota Summary panel.
On the CLI, run the following command to enable quotas on an existing file system:
ibrix_fs -q -E -f FSNAME
For the purpose of setting quotas, no UID or GID can exceed 2,147,483,647.
The User Quotas dialog box is used to create, modify, or delete quotas for users. To add a user
quota, enter the required information and click Add. Users having quotas are listed in the table at
the bottom of the dialog box. To modify quotas for a user, check the box preceding that user. You
can then adjust the quotas as needed. To delete quotas for a user, check the box and click Delete.
The Group Quotas dialog box is used to create, modify, or delete quotas for groups. To add a
group quota, enter the required information and click Add. The new quota applies to all users in
the group. Groups having quotas are listed in the table at the bottom of the dialog box. To modify
quotas for a group, check the box preceding that group. You can then adjust the quotas as needed.
To delete quotas for a group, check the box and click Delete.
The Directory Quotas dialog box is used to create, modify, or delete quotas for directories. To add
a directory quota, enter the required information and click Add. The Name (Alias) is a unique
identifier for the quota, and cannot include commas. The new quota applies to all users and groups
storing data in the directory. Directories having quotas are listed in the table at the bottom of the
dialog box. To modify quotas for a directory, check the box preceding that directory. You can
then adjust the quotas as needed. To delete quotas for a directory, check the box and click Delete.
From the CLI, use the following command to import quotas from a file, where PATH is the path to
the quotas file:
ibrix_edquota -t -p PATH -f FSNAME
See Format of the quotas file (page 33) for information about the format of the quotas file.
From the CLI, use the following command to export the existing quotas information to a file, where
PATH is the pathname of the quotas file:
ibrix_edquota -e -p PATH -f FSNAME
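For example, a hypothetical round trip that backs up the quota configuration for file system ifs1
and later restores it (the path is illustrative):
ibrix_edquota -e -p /tmp/ifs1_quotas -f ifs1
ibrix_edquota -t -p /tmp/ifs1_quotas -f ifs1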
{id}    The UID for a user quota or the GID for a group quota.
{name}  A user name, group name, or directory tree identifier.
{path}  The full path to the directory tree. The path must already exist.
NOTE: When a quotas file is imported, the quotas are stored in a different, internal format.
When a quotas file is exported, it contains lines using the internal format. However, when adding
entries, you must use the A, B, or C format.
The following is an example of the syntax for a file to import a directory tree quota (2048=2 MB):
C,2,2048,1024,0,0,"ba","/fs1/a/aa"
C,2,2048,1024,0,0,"bb","/fs1/a/ab"
C,2,2048,1024,0,0,"bc","/fs1/a/ac"
Run an online quota check in FILESYSTEM_SCAN mode in situations such as the following:
•  You turned quotas off for a user, the user continued to store data in a file system, and
   you now want to turn quotas back on for this user.
•  You are setting up quotas for the first time for a user who has previously stored data in
   a file system.
•  You moved a subdirectory into another parent directory that is outside of the directory
   having the directory tree quota.
The other modes are used as follows:
•  DTREE_CREATE mode. After setting quotas on a directory tree, use this mode to take into
   account the data used under the directory tree.
•  DTREE_DELETE mode. After deleting a directory tree quota, use this mode to unset quota IDs
   on all files and folders in that directory.
CAUTION: When ibrix_onlinequotacheck is started in DTREE_DELETE mode, it removes
quotas for the specified directory. Be sure not to use this mode on directories that should retain
quota information.
To run an online quota check from the GUI, select the file system and then select Online quota
check from the lower Navigator.
On the Task Summary panel, select Start to open the Start Online quota check dialog box and
select the appropriate mode.
The Task Summary panel displays the progress of the scan. If necessary, select Stop to stop the
scan.
To run an online quota check in FILESYSTEM_SCAN mode from the CLI, use the following command:
ibrix_onlinequotacheck -s -S -f FSNAME
To run an online quota check in DTREE_CREATE mode, use this command:
ibrix_onlinequotacheck -s -c -f FSNAME -p PATH
To run an online quota check in DTREE_DELETE mode, use this command:
ibrix_onlinequotacheck -s -d -f FSNAME -p PATH
The command must be run from a file serving node that has the file system mounted.
Deleting quotas
To delete quotas from the GUI, select the quota from the appropriate Quota Usage Limits panel
and then click Delete. To delete quotas from the CLI, use the following commands.
To delete quotas for a user, use the following command:
ibrix_edquota -D -u UID [-f FSNAME]
To delete the entry and quota limits for a directory tree quota, use the following command:
ibrix_edquota -D -d NAME -f FSNAME
The -d NAME option specifies the name of the directory tree quota.
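For example, assuming UID 1001 and a directory tree quota named projects (both illustrative):
ibrix_edquota -D -u 1001 -f ifs1
ibrix_edquota -D -d projects -f ifs1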
Troubleshooting quotas
Recreated directory does not appear in directory tree quota
If you create a directory tree quota on a specific directory and delete the directory (for example,
with rmdir/rm -rf) and then recreate it on the same path, the directory does not count as part
of the directory tree, even though the path is the same. Consequently, the
ibrix_onlinequotacheck command does not report on the directory.
Moving directories
After moving a directory into or out of a directory containing quotas, run the
ibrix_onlinequotacheck command as follows:
•  After moving a directory from a directory tree with quotas (the source) to a directory without
   quotas (the destination), take these steps:
   1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the source directory tree
      to remove the usage information for the moved directory.
   2. Run ibrix_onlinequotacheck in DTREE_DELETE mode on the directory that was
      moved to delete residual quota information.
•  After moving a directory from a directory without quotas (the source) to a directory tree with
   quotas (the destination), take this step:
   1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the destination directory
      tree to add the usage for the moved directory.
•  After moving a directory from one directory tree with quotas (the source) to another directory
   tree with quotas (the destination), take these steps:
   1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the source directory tree
      to remove the usage information for the moved directory.
   2. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the destination directory
      tree to add the usage for the moved directory.
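For example, for the second scenario above, if a directory is moved into the directory tree
/ifs1/projects that has a quota (file system and path names are illustrative), add its usage with:
ibrix_onlinequotacheck -s -c -f ifs1 -p /ifs1/projects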
•  If segments are approaching 85% full, either expand the file system with new segments or
   clean up the file system.
•  If only a few segments are between 85% and 90% full and other segments are much lower, run
   a rebalance task. However, if those few segments are at 90% or higher, it is best to adjust
   the file system allocation policy to exclude the full segments from being used. Then initiate a
   rebalance task to balance the full segments out onto other segments with more available space.
   When the rebalance task is complete and all segments are below the 85% threshold, you can
   reapply the original file system allocation policy.
The GUI displays the space used in each segment. Select the file system, and then select Segments
from the lower Navigator.
Field          Description
PV_NAME        Physical volume name. Regular physical volume names begin with the letter d. The names of physical
               volumes that are part of a mirror device begin with the letter m. Both are numbered sequentially.
SIZE (MB)
VG_NAME
LUN_GROUP
LV_NAME
FILESYSTEM
SEGNUM
USED%
SEGOWNER
DEVICE ON
RAID type
RAID host
RAID device
Network host
Network port
The VG_FREE field indicates the amount of group space that is not allocated to any logical volume.
The VG_USED field reports the percentage of available space that is allocated to a logical volume.
To display detailed information about volume groups, use the ibrix_vg -i command. The -g
VGLIST option restricts the output to the specified volume groups.
ibrix_vg -i [-g VGLIST]
The following table lists the output fields for ibrix_vg -i.
Field      Description
VG_NAME
SIZE(MB)
FREE(MB)
USED%
FS_NAME
PV_NAME
SIZE (MB)  Size, in MB, of the physical volume used to create this volume group.
LV_NAME
LV_SIZE    Size, in MB, of each logical volume created from this volume group.
GEN        Number of times the structure of the file system has changed (for example, new segments
           were added).
SEGNUM
HOSTNAME
STATE      Operational state of the file serving node. See the administration guide for your system for
           a list of the states.
Field     Description
LV_NAME
LV_SIZE
FS_NAME
SEG_NUM
VG_NAME
OPTIONS   Linux lvcreate options that have been set on the volume group.
Field          Description
FS_NAME
STATE
CAPACITY (GB)
USED%
Files
FilesUsed%
GEN            Number of times the structure of the file system has changed (for example, new segments
               were added).
NUM_SEGS
To view detailed information about file systems, use the ibrix_fs -i command. To view
information for all file systems, omit the -f FSLIST argument.
ibrix_fs -i [-f FSLIST]
The following table lists the file system output fields reported by ibrix_fs -i.
Field                          Description
Total Segments                 Number of segments.
STATE
Mirrored?
Compatible?                    Yes indicates that the file system is 32-bit compatible; the maximum number of segments
                               (maxsegs) allowed in the file system is also specified. No indicates a 64-bit file system.
Generation                     Number of times the structure of the file system has changed (for example, new segments
                               were added).
FS_ID
FS_NUM
QUOTA_ENABLED
RETENTION
DEFAULT_BLOCKSIZE
CAPACITY
FREE
AVAIL
USED PERCENT
FILES
FFREE
Prealloc
Readahead
NFS Readahead                  Number of KB that StoreAll software pre-fetches under NFS; default: 256 KB.
Default policy                 Allocation policy assigned on this file system. Defined policies are: ROUNDROBIN,
                               STICKY, DIRECTORY, LOCAL, RANDOM, and NONE. See File allocation policies
                               (page 279) for information on these policies.
                               The first segment to which an allocation policy is applied in a file system. If a segment
                               is not specified, allocation starts on the segment with the most storage space available.
File replicas                  NA.
Dir replicas                   NA.
Mount Options
Root Segment Replica(s) Hint   Possible segment numbers for root segment replicas. This value is used internally.
Snap FileSystem Policy
The following table lists the per-segment output fields reported by ibrix_fs -i.
Field           Description
SEGMENT         Number of segments.
OWNER
LV_NAME
STATE
BLOCK_SIZE
CAPACITY (GB)
FREE (GB)
AVAIL (GB)
FILES
FFREE
USED%
BACKUP
TYPE            Segment type. MIXED means the segment can contain both files and directories.
TIER
LAST_REPORTED
HOST_NAME
MOUNTPOINT      Host mountpoint.
PERMISSION
Root_RO         Specifies whether the root user is limited to read-only access, regardless of the access setting.
Lost+found directory
When browsing the contents of StoreAll software file systems, you will see a directory named
lost+found. This directory is required for file system integrity and should not be deleted. The
lost+found directory exists only at the top-level directory of a file system, which is also the
mountpoint. Additionally, there are several directories that you can see at the top level (mount
point) of a file system that are for internal use only and should not be deleted or edited. They are:
•  lost+found
•  .archiving
•  .audit
•  .webdav
There are a few exceptions in the .archiving directory. Some files in certain subdirectories of
.archiving are created for user consumption (they are described in various places in this user
guide; examples are validation summary outputs such as 1-0.sum and audit log reports). Those
specific files can be deleted if desired, but other files should not be deleted.
Field          Description
Name
CAPACITY
FREE
AVAIL
USED PERCENT
FILES
FFREE
On the CLI, use the ibrix_fs command to extend a file system. Segments are added to the file
serving nodes in a round-robin manner. If tiering rules are defined for the file system, the -t option
is required. Avoid expanding a file system while a tiering job is running. The expansion takes
priority and the tiering job is terminated.
Extend a file system with the logical volumes (segments) specified in LVLIST:
ibrix_fs -e -f FSNAME -s LVLIST [-t TIERNAME]
Extend a file system with segments created from the physical volumes in PVLIST:
ibrix_fs -e -f FSNAME -p PVLIST [-t TIERNAME]
Extend a file system with specific logical volumes on specific file serving nodes:
ibrix_fs -e -f FSNAME -S LV1:HOSTNAME1,LV2:HOSTNAME2...
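For example, a hypothetical extension that adds two segments to tier TIER2 of file system ifs1 (all
names illustrative):
ibrix_fs -e -f ifs1 -s ilv_5,ilv_6 -t TIER2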
To move files out of certain segments and place them in certain destinations, specify both
source and destination segments.
Click Rebalance/Evacuate on the Segments panel to open the Segment Rebalance and Evacuation
Wizard. The wizard can rebalance all files in the selected tier or in the file system, or you can
select the segments for the operation. Choose the appropriate rebalance option on the Select Mode
dialog box.
The Rebalance All dialog box allows you to rebalance all segments in the file system or in the
selected tier.
The Rebalance Advanced dialog box allows you to select the source and destination segments for
the rebalance operation.
For example, to rebalance segments 2 and 3 only and to specify them by segment name:
ibrix_rebalance -r -f ifs1 -s 2,3
To rebalance segments 1 and 2 only and to specify them by their logical volume names:
ibrix_rebalance -r -f ifs1 -S ilv1,ilv2
For example, to rebalance segments 3 and 4 only and to specify them by segment name:
ibrix_rebalance -r -f ifs1 -d 3,4
To rebalance segments 3 and 4 only and to specify them by their logical volume names:
ibrix_rebalance -r -f ifs1 -D ilv3,ilv4
The first command reports summary information. The second command lists jobs by task ID and
file system and indicates whether the job is running or stopped. Jobs that are in the analysis
(Coordinator) phase are listed separately from those in the implementation (Worker) phase.
If data retention is enabled on the file system, include the -R option in the command. For example:
ibrix_fs -d -R -f ifs2
•  A segment cannot be deleted until the file system to which it belongs is deleted.
•  A volume group cannot be deleted until all segments that were created on it are deleted.
•  A physical volume cannot be deleted until all volume groups created on it are deleted.
If you delete physical volumes but do not remove the physical storage from the network, the volumes
might be rediscovered when you next perform a discovery scan on the cluster.
To delete segments:
ibrix_lv -d -s LVLIST
•  Phase 0 checks host connectivity and the consistency of segment byte blocks and repairs them
   in corrective mode.
•  Phase 1 checks segments and repairs them in corrective mode. Results are stored locally.
•  Phase 2 checks the file system and repairs it in corrective mode. Results are stored locally.
•  Phase 3 moves files from lost+found on each segment to the global lost+found directory
   on the root segment of the file system.
If a file system shows evidence of inconsistencies, contact HP Support. A representative will ask
you to run ibrix_fsck in analytical mode and, based on the output, will recommend a course
of action and assist in running the command in corrective mode. HP strongly recommends that you
use corrective mode only with the direct guidance of HP Support. Corrective mode is complex and
difficult to run safely; using it improperly can damage both data and the file system. In contrast,
analytical mode is completely safe.
NOTE: During an ibrix_fsck run, an INFSCK flag is set on the file system to protect it. If an
error occurs during the job, you must explicitly clear the INFSCK flag (see Clearing the INFSCK
flag on a file system (page 49)), or you will be unable to mount the file system.
Unmount the file system for phases 0 and 1 and mount the file system for phases 2 and 3.
NOTE: If phase 1 is run in analytic mode on a mounted file system, false errors can be reported.
Run phase 2:
ibrix_fsck -p 2 -f FSNAME [-s LVNAME] [-c] [-o "options"]
The command can be run on the specified file system or optionally only on segment LVNAME. Use
-o to specify any options.
Run phase 3:
ibrix_fsck -p 3 -f FSNAME [-c]
1. Disable the Express Query and auditing feature for the file system, including the removal of
   any StoreAll REST API shares. Disable the auditing feature before you disable the Express
   Query feature.
   a. To disable auditing, enter the following command:
      ibrix_fs -A [-f FSNAME] -oa audit_mode=off
   b. Remove all StoreAll REST API shares created in the file system by entering the following
      command:
      ibrix_httpshare -d -f <fs_name>
   c. To disable the Express Query settings on a file system, enter the following command:
      ibrix_fs -T -D -f FSNAME
2. To re-enable the Express Query settings on a file system, enter the following command:
   ibrix_fs -T -E -f FSNAME
3. To recreate your REST API HTTP shares, enter the ibrix_httpshare -a command with the
   appropriate parameters. See Using HTTP (page 114).
4. Express Query re-synchronizes the file system and the database by using the restored database
   information. This process might take some time.
5. Wait for the metadata resync process to finish. Enter the following command to monitor the
   resync process for a file system:
   ibrix_archiving -l
   The status should be at OK for the file system before you proceed. Refer to the ibrix_archiving
   section in the HP StoreAll Storage CLI Reference Guide for information about the other states.
6. Import your previously exported custom metadata and audit logs according to Importing
   metadata to a file system (page 220).
•  The file system is mounted and functioning on the file serving nodes.
•  The mountpoint exists on the StoreAll client. If not, create the mountpoint locally on the client.
•  Software management services have been started on the StoreAll client (see Starting and
   stopping processes in the administrator guide for your system).
•  Mark the evacuated segment as bad (retired), using the following command. The file system
   state changes to okay and the file system can now be mounted. However, the operation
   marking the segment as bad cannot be reversed.
   ibrix_fs -B -f FSNAME {-n RETIRED_SEGNUMLIST | -s RETIRED_LVLIST}
•  Keep the evacuated segment in the file system. Take one of the following steps to enable
   mounting the file system:
   ◦  Use the force option (-X) when mounting the file system:
      ibrix_mount -f myFilesystem -m /myMountpoint -X
   ◦  Clear the unavailable segment flag on the file system with the ibrix_fsck command
      and then mount the file system normally:
      ibrix_fsck -f FSNAME -C -s LVNAME_OF_EVACUATED_SEG
SegmentNotAvailable is reported
When IAS heartbeats to segments (a disk heartbeat every 15 seconds for each segment) or writes
to a segment do not succeed, the segment status may change to SegmentNotAvailable on the
Management Console and an alert message might be generated. If there is no underlying
hardware storage event related to the affected segments and no storage firmware update failed
while the file system was mounted, complete the following steps to resolve the issue:
NOTE: If a storage hardware event was generated and the reason for SegmentNotAvailable
is a file system journal abort on a write error or a storage controller failure resulting in the
segment going unavailable, HP recommends that you contact HP Support for analysis of the segment
health and to run the ibrix_fsck command to validate data integrity. If you clear the unavailable
segment status without information about why the segment became unavailable, be aware that
your data could be at further risk of damage or corruption.
1. Identify the file serving node that owns the segment. This information is reported on the
   Filesystem Segments panel on the Management Console.
2. Run phase 0 and phase 1 of the ibrix_fsck command to verify the issue with the segment.
   You can run the command on the file system or specify the segment name using the -s LVNAME
   parameter:
   ibrix_fsck -p 0 -f FSNAME [-s LVNAME] [-c]
   ibrix_fsck -p 1 -f FSNAME [-s LVNAME] [-c]
3. If you have set Fusion Manager to fail over when a segment becomes unavailable, failover
   occurs automatically. For more information, see the ibrix_fm_tune command in the HP
   StoreAll Command Line Reference Guide. You can manually fail over the file serving node;
   see the administration guide for your system for more information about this procedure.
4. If you have set Fusion Manager to make the segment available after failover, the segment
   automatically becomes available after failover. For more information, see the ibrix_fm_tune
   command in the HP StoreAll Command Line Reference Guide. To manually make the segment
   available:
   a. Enter the following command to clear the in_fsck flag:
      ibrix_fsck -f FSNAME -C
   b. Enter the following command to clear the unavailable flag on the specified segment and
      file system:
      ibrix_fsck -f FSNAME -C -s LVNAME
SegmentRejected is reported
This alert is generated by a client call for a segment that is no longer accessible by the segment
owner or file serving node specified in the client's segment map. The alert is logged to the
StoreAll.log and messages files. It is usually an indication of an out-of-date or stale segment
map for the affected file system and is caused by a network condition. Other possible causes are
rebooting the node, unmounting the file system on the node, segment migrations, and, in a failover
scenario, a stale StoreAll segment map, an unresponsive kernel, or a network RPC condition.
To troubleshoot this alert, check network connectivity among the nodes, ensuring that the network
is optimal and any recent network conditions have been resolved. From the file system perspective,
verify segment maps by comparing the file system generation numbers and the ownership for those
segments being rejected by the clients.
Use the following commands to compare the file system generation number on the local file serving
nodes and the clients logging the error.
/usr/local/ibrix/bin/rtool enumseg <FSNAME> <SEGNUMBER>
For example:
rtool enumseg ibfs1 3
segnum=3 of 4 ----------
fsid ........................... 7b3ea891-5518-4a5e-9b08-daf9f9f4c027
fsname ......................... ibfs1
device_name .................... /dev/ivg3/ilv3
host_id ........................ 1e9e3a6e-74e4-4509-a843-c0abb6fec3a6
host_name ...................... ib50-87 <-- Verify owner of segment
ref_counter .................... 1038
state_flags .................... SEGMENT_LOCAL SEGMENT_PREFERED SEGMENT_DHB SEGMENT_ORPHAN_LIST_CREATED (0x00100061)
write_WM ....................... 99129 4K-blocks (387 Mbytes)
create_WM ...................... 793033 4K-blocks (3097 Mbytes)
spillover_WM ................... 892162 4K-blocks (3485 Mbytes)
generation ..................... 26
quota .......................... usr,grp,dir
f_blocks ....................... 0011895510 4K-blocks (==0047582040 1K-blocks, 46466 M)
f_bfree ........................ 0011785098 4K-blocks (==0047140392 1K-blocks, 46035 M)
f_bused ........................ 0000110412 4K-blocks (==0000441648 1K-blocks, 431 M)
f_bavail ....................... 0011753237 4K-blocks (==0047012948 1K-blocks, 45911 M)
f_files ........................ 6553600
f_ffree ........................ 6552536
Use the output to determine whether the FS generation number is in sync and whether the file
serving nodes agree on the ownership of the rejected segments. In the rtool enumseg output,
check the state_flags field for SEGMENT_IN_MIGRATION, which indicates that the segment
is stuck in migration because of a failover.
Typically, if the segment has a healthy state flag on the file serving node that owns the segment
and all file serving nodes agree on the owner of the segment, this is not a file system or file serving
node issue. If a state flag is stale or indicates that a segment is in migration, call HP Support for
a recovery procedure.
Otherwise, the alert indicates a file system generation mismatch. Take the following steps to resolve
this situation:
1. From the active Fusion Manager, run the following command to propagate a new file system
segment map throughout the cluster. This step takes a few minutes.
ibrix_dbck -I -f <FSNAME>
To work around the problem, recreate the segment on the failing LUN. To identify the LUN associated
with the failure, run a command such as the following on the first server in the system:
# ibrix_pv -l -h glory2
PV_NAME  SIZE(MB)  VG_NAME  DEVICE           RAIDTYPE  RAIDHOST  RAIDDEVICE
-------  --------  -------  ---------------  --------  --------  ----------
d1       131070    vg1_1    /dev/mxso/dev4a
d2       131070    vg1_2    /dev/mxso/dev5a
d3       131070    vg1_3    /dev/mxso/dev6a
d5       23551     vg1_5    /dev/mxso/dev8a
d6       131070    vg1_4    /dev/mxso/dev7a
The Device column identifies the LUN number. In this example, the volume group vg1_4 is created
from LUN 7. Recreate the segment and then run the file system creation command again.
5 Using NFS
To allow NFS clients to access a StoreAll file system, the file system must be exported. You can
export a file system using the GUI or CLI. By default, StoreAll file systems and directories follow
POSIX semantics and file names are case-sensitive for Linux/NFS users. If you prefer to use Windows
semantics for Linux/NFS users, you can make a file system or subdirectory case-insensitive.
NOTE: The latest NFS release supported by the current version of the StoreAll software is NFS
version 3.
Use the Settings window to specify the clients allowed to access the share. Also select the permission
and privilege levels for the clients, and specify whether the export should be available from a
backup server.
The Advanced Settings window allows you to set NFS options on the share.
On the Host Servers window, select the servers that will host the NFS share. By default, the share
is hosted by all servers that have mounted the file system.
The Summary window shows the configuration of the share. You can go back and revise the
configuration if necessary. When you click Finish, the export is created and appears on the File
Shares panel.
The ibrix_exportfs command options are:
-f FSNAME
The file system to export.
-h HOSTNAME
The file serving node from which the file system is exported.
-p CLIENT1:PATHNAME1,CLIENT2:PATHNAME2,...
The clients that will access the file system can be a single file serving node, file
serving nodes represented by a wildcard, or the world (:/PATHNAME). Note that
world access omits the client specification but not the colon (for example, :/usr/
src).
-o "OPTIONS"
The default Linux exportfs mount options are used unless specific options are
provided. The standard NFS export options are supported. Options must be enclosed
in double quotation marks (for example, -o "ro"). Do not enter an FSID= or
sync option; they are provided automatically.
-b
By default, the file system is also exported to the standby for the file serving node.
This option excludes the standby from the export.
For example, to provide NFS clients *.hp.com with read-only access to file system ifs1 at the
directory /usr/src on file serving node s1.hp.com:
ibrix_exportfs -f ifs1 -h s1.hp.com -p *.hp.com:/usr/src -o "ro"
To provide world read-only access to file system ifs1 located at /usr/src on file serving node
s1.hp.com:
ibrix_exportfs -f ifs1 -h s1.hp.com -p :/usr/src -o "ro"
On the GUI, select the file system, select NFS Exports from the lower Navigator, and then
select Unexport.
The file system or directory must be created under the StoreAll File Serving Software 6.0 or
later release.
To set case insensitivity from the CLI, use the following command:
ibrix_caseinsensitive -s -f FSNAME -c [ON|OFF] -p PATH
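For example, the following command, using the file system and directory names from the sample log later in this section (the names are illustrative), makes that directory tree case insensitive:
ibrix_caseinsensitive -s -f fs_test1 -c ON -p /fs_test1/samename-T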
Clearing case insensitivity (setting to case sensitive) for all users (NFS/Linux/Windows)
When you set case insensitivity to OFF for a directory tree, the directory and all of its subdirectories,
recursively, are again case sensitive, restoring the POSIX semantics for Linux users.
Log files
A new task is created when you change case insensitivity or query its status recursively. A log file
is created for each task and an ID is assigned to the task. The log file is placed in the directory
/usr/local/ibrix/log/case_insensitive on the server specified as the coordinating
server for the task. Check that server for the log file.
NOTE: To verify the coordinating server, select File System > Inactive Tasks. Then select the task
ID from the display and select Details.
The log file names have the format IDtask.log, such as ID26.log.
The following sample log file is for a query reporting case insensitivity:
0:0:26275:Reporting Case Insensitive status for the following directories
1:0:/fs_test1/samename-T: TRUE
2:0:/fs_test1/samename-T/samename: TRUE
2:0:DONE
The first line of the output contains the PID for the process and reports the action taken. The first
column specifies the number of directories visited. The second column specifies the number of errors
found. The third column reports either the results of the query or the directories where case
insensitivity was turned on or off.
For example:
# ibrix_task -l
TASK ID      TYPE     FILE SYSTEM  SUBMITTED BY  TASK STATUS  IS COMPLETED?  EXIT STATUS  STARTED AT  ENDED AT
-----------  -------  -----------  ------------  -----------  -------------  -----------  ----------  --------
caseins_237  caseins  fs_test1                   STARTING     No                          11:31:38
To terminate a task, run the following command and specify the task ID:
# ibrix_task -k -n <task ID>
For example:
# ibrix_task -k -n caseins_237
The following operations create new directories on the destination:
tar/untar
compress/uncompress
cp -R
rsync
Remote replication
xcopy
robocopy
The case-insensitive setting of the source directories is not retained on the destination directories.
Instead, the setting for the destination file system is applied. However, if you use a command such
as the Linux mv command, a Windows drag and drop operation, or a Mac uncompress operation,
a new directory is not created, and the affected directory retains its original case-insensitive setting.
Active Directory with LDAP ID mapping as a secondary lookup source (supported for SMB)
Local Users and Groups (supported for SMB, FTP, and HTTP)
Local Users and Groups can be used with Active Directory or LDAP.
NOTE: You can configure authentication from the GUI or CLI. When you configure authentication with
the GUI, the selected authentication services are configured on all servers. The CLI commands
allow you to configure authentication differently on different servers.
LDAP Configuration
The following example shows a user as defined in Active Directory and the corresponding entries
in LDAP:
Active Directory user:
user: user1
primary group: Domain Users
UNIX uid: not specified
UNIX gid: not specified
LDAP user entry:
uid: user1
uidNumber: 1010
gidNumber: 1001 (group1)
LDAP group entry:
cn: Domain Users
gidNumber: 1111
A direct LDAP lookup of user1 would assign the primary group group1 (1001).
LDAP ID mapping uses AD as the primary source for identifying the primary group and all
supplemental groups. If AD does not specify a UNIX GID for a user, LDAP ID mapping looks up
the GID for the primary group assigned in AD. In the example, the primary group assigned in AD
is Domain Users, and LDAP ID mapping looks up the GID of that group in LDAP. The lookup
operation returns:
user: user1
primary group: Domain Users (1111)
AD does not force the supplied primary group to match the supplied UNIX GID.
The supplemental groups assigned in AD do not need to match the members assigned in LDAP.
LDAP ID mapping uses the members list assigned in AD and ignores the members list configured
in LDAP.
IMPORTANT: If the user's primary group in AD is not resolved to a GID number from either Active
Directory or LDAP, the user will be denied access to StoreAll.
Configure LDAP authentication on all the cluster nodes by using Fusion Manager.
Update the appropriate configuration template with information specific to the OpenLDAP
server being configured.
customized-schema-template.conf
samba-schema-template.conf
posix-schema-template.conf
Pick the schema your server supports. If your server supports both Posix and Samba schemas, pick
the schema most appropriate for your environment. Choose any one of the three supported schema
templates to proceed.
Make a copy of the template corresponding to the schema your LDAP server supports, and update
the copy with your configuration information.
Customized template. If the OpenLDAP server has a customized or special schema, you must
provide information that maps the standard schema attribute and class names to the new names
used on the OpenLDAP server. This situation is not common. Use this
template only if your OpenLDAP server has overridden the standardized Posix or Samba schema
with customized extensions. Provide values (equivalent names) for all virtual attributes in the
configuration. For example:
mandatory; virtual; uid; your-schema-equivalent-of-uid
optional; virtual; homeDirectory; your-schema-equivalent-of-homeDirectory
Samba template. Enter the required attributes for Samba/POSIX templates. You can use the default
values specified in the Map (mandatory) variables and Map (Optional) variables sections of
the template.
POSIX template. Enter the required attributes for Samba/POSIX templates. Also remove or comment
out the following virtual attributes:
# mandatory; virtual; SID;sambaSID
# mandatory; virtual; PrimaryGroupSID;sambaPrimaryGroupSID
# mandatory; virtual; sambaGroupMapping;sambaGroupMapping
The configuration template defines the following values: VERSION; LDAPServerHost (an IP address
string); LdapConfigurationOU; LdapWriteDN (a DN name string); LDAPWritePassword; schematype
(Samba, posix, or user defined; the supported schema for the OpenLDAP server); and schema.
Click Authentication Wizard to start the wizard. On the Configure Options page, select the
authentication service to be applied to the servers in the cluster.
NOTE: CIFS in the GUI has not been rebranded to SMB yet. CIFS is just a different name for
SMB.
The wizard displays the configuration pages corresponding to the option you selected.
Active Directory
Enter your domain name, the Auth Proxy username (an AD domain user with privileges to join the
specified domain; typically a Domain Administrator), and the password for that user. These
credentials are used only to join the domain and do not persist on the cluster nodes.
NOTE: When you successfully configure Active Directory authentication, the machine is part of
the domain until you remove it from the domain, either with the ibrix_auth -n command or
with the Management Console. Because Active Directory authentication is a one-time event, it is
not necessary to update authentication if you change the proxy user information.
IMPORTANT: See Linux static user mapping with Active Directory (page 93) for information
about enabling Linux Static User Mapping. You can return to the wizard to modify settings if it is
not enabled at the first pass and is later required.
Linux static user mapping is optional. If you do not want to enable Linux Static User Mapping, leave
it set to the default value of None.
If you want to enable Linux Static User Mapping using Active Directory-based ID mapping, set it
to Enabled with Active Directory.
If you want to use LDAP ID mapping as a secondary lookup for Active Directory, select Enabled
with LDAP ID Mapping and AD. When you click Next, the LDAP ID Mapping dialog box appears.
LDAP ID mapping
If LDAP ID mapping is enabled and the system cannot locate a UID/GID in Active Directory, it
searches for the UID/GID in LDAP. On the LDAP ID Mapping dialog box, specify the appropriate
search parameters.
Port
Enter the LDAP server port (TCP port 389 for unencrypted or TLS encrypted; 636 for SSL encrypted).
Base of Search
Enter the LDAP base for searches. This is normally the root suffix of the directory, but you can
provide a base lower down the tree for business rules enforcement, ACLs, or performance reasons.
For example, ou=people,dc=entx,dc=net.
Bind DN
Enter the LDAP user account used to authenticate to the LDAP server to read data. This account
must have privileges to read the entire directory. Write credentials are not required. For example,
cn=hp9000-readonly-user,dc=entx,dc=net.
Password
Enter the password for the Bind DN account.
Max Entries
Enter the maximum number of entries to return from the search (the default is 10). Enter 0 (zero)
for no limit.
Max Wait Time
Enter the local maximum search time-out value in seconds. This value determines how long the
client will wait for search results.
LDAP Scope
Select the scope of the search: base, one level, or subtree.
Namesearch Case Sensitivity
If LDAP searches should be case sensitive, check this box.
LDAP
To configure LDAP as the primary authentication mechanism for SMB shares, enter the server name
or IP address of the LDAP server host and the password for the LDAP user account.
NOTE: Enter the LDAP user account used to authenticate to the LDAP server to read data, such as
cn=hp9000-readonly-user,dc=entx,dc=net. This account must have privileges to read the
entire directory. Write credentials are not required.
Write OU
Enter the OU (organizational unit) on the LDAP server to which configuration entries can be written.
This OU must be pre-provisioned on the remote LDAP server. The previous schema configuration
step would have seeded this OU with values that will now be read. The LDAPBindDN credentials
must be able to read (but not write) from the LDAPWriteOU. For example,
ou=9000Config,ou=configuration,dc=entx,dc=net.
Base of Search
This is normally the root suffix of the directory, but you can provide a base lower down the tree for
business rules enforcement, ACLs, or performance reasons. For example,
ou=people,dc=entx,dc=net.
NetBIOS Name
Enter any string that identifies the StoreAll host, such as StoreAll.
If your LDAP configuration requires a certificate for secure access, click Edit to open the LDAP
dialog box. You can enter a TLS or SSL certificate. When no certificate is used, the Enable SSL
field shows Neither TLS nor SSL.
NOTE: If LDAP is the primary authentication service, Windows clients such as Explorer or MMC
plug-ins cannot be used to add new users.
Local Groups
Specify local groups allowed to access shares. On the Local Groups page, enter the group name
and, optionally, the GID and RID. If you do not assign a GID and RID, they are generated
automatically. Click Add to add the group to the list of local groups. Repeat this process to add
other local groups.
When naming local groups, you should be aware of the following:
Group names must be unique. The new name cannot already be used by another user or
group.
NOTE: If Local Users and Groups is the primary authentication service, Windows clients such as
Explorer or MMC plug-ins cannot be used to add new users.
Local Users
Specify local users allowed to access shares. On the Local Users page, enter a user name and
password. Click Add to add the user to the Local Users list.
When naming local users, you should be aware of the following:
User names must be unique. The new name cannot already be used by another user or group.
To provide account information for the user, click Advanced. The default home directory is
/home/<username> and the default shell program is /bin/false.
NOTE: If Local Users and Groups is the primary authentication service, Windows clients such as
Explorer or MMC plug-ins cannot be used to add new users.
To add an Active Directory or LDAP share administrator, enter the administrator name (such as
domain\user1 or domain\group1) and click Add to add the administrator to the Windows
Share Administrators list.
To add an existing Local User as a share administrator, select the user and click Add.
Summary
The Summary page shows the authentication configuration. You can go back and revise the
configuration if necessary. When you click Finish, authentication is configured, and the details
appear on the File Sharing Authentication panel.
You cannot change the UID or RID for a Local User account. If it is necessary to change a UID or
RID, first delete the account and then recreate it with the new UID or RID. The Local Users and
Local Groups panels allow you to delete the selected user or group.
RFC2307 defines extensions to the Active Directory schema to store UNIX attributes for users and
groups. These extensions are present in all versions of Windows since Windows Server 2003 R2. Enabling RFC2307
support enables Linux static user mapping with Active Directory. To enable RFC2307 support, use
the following command:
ibrix_cifsconfig -t [-S SETTINGLIST] [-h HOSTLIST]
Enable RFC2307 in the SETTINGLIST as follows:
rfc2307_support=rfc2307
For example:
ibrix_cifsconfig -t -S "rfc2307_support=rfc2307"
To disable RFC2307, set rfc2307_support to unprovisioned. For example:
ibrix_cifsconfig -t -S "rfc2307_support=unprovisioned"
IMPORTANT: After making configuration changes with the ibrix_cifsconfig -t -S
command, use the following command to restart the SMB services on all nodes affected by the
change.
ibrix_server -s -t cifs -c restart [-h SERVERLIST]
Clients will experience a temporary interruption in service during the restart.
Configuring LDAP
Use the ibrix_ldapconfig command to configure LDAP as the primary authentication service
for SMB shares.
IMPORTANT: Before using ibrix_ldapconfig to configure LDAP on the cluster nodes, you
must configure the remote LDAP server. For more information, see Configuring LDAP for StoreAll
software (page 62).
IMPORTANT: Linux Static User mapping is not supported if LDAP is configured as the primary
authentication service.
Add an LDAP configuration and enable LDAP:
ibrix_ldapconfig -a -h LDAPSERVERHOST [-P LDAPSERVERPORT] -b LDAPBINDDN
-p LDAPBINDDNPASSWORD -w LDAPWRITEOU -B LDAPBASEOFSEARCH -n NETBIOS -E
ENABLESSL [-f CERTFILEPATH] [-c CERTFILECONTENTS]
The options are:
-h LDAPSERVERHOST
The server name or IP address of the LDAP server host.
-P LDAPSERVERPORT
The LDAP server port.
-b LDAPBINDDN
The LDAP user account used to authenticate to the LDAP server to read data.
-p LDAPBINDDNPASSWORD
The password for the Bind DN account.
-w LDAPWRITEOU
The OU on the LDAP server to which configuration entries can be written.
-B LDAPBASEOFSEARCH
The LDAP base for searches.
-n NETBIOS
A string that identifies the StoreAll host.
-E ENABLESSL
The type of certificate required. Enter 0 for no certificate, 1 for TLS, or 2 for SSL.
-f CERTFILEPATH
The path to the certificate file.
-c CERTFILECONTENTS
The contents of the certificate file. Copy the contents and paste them between quotes.
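As a sketch only, an invocation might look like the following. The server IP address and password are placeholders, and the DN, write OU, and NetBIOS values reuse the examples shown earlier in this chapter:
ibrix_ldapconfig -a -h 192.168.11.10 -P 389 -b cn=hp9000-readonly-user,dc=entx,dc=net -p <password> -w ou=9000Config,ou=configuration,dc=entx,dc=net -B dc=entx,dc=net -n StoreAll -E 0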
This command automatically enables LDAP RFC 2307 ID Mapping. The options are:
-h LDAPSERVERHOST
The server name or IP address of the LDAP server host.
-B LDAPBASEOFSEARCH
The LDAP base for searches.
-P LDAPSERVERPORT
The LDAP server port.
-b LDAPBINDDN
The LDAP bind Distinguished Name (the default is anonymous). For example:
cn=hp9000-readonly-user,dc=entx,dc=net.
-p LDAPBINDDNPASSWORD
The password for the Bind DN account.
-m MAXWAITTIME
The maximum search time-out value in seconds.
-M MAXENTRIES
The maximum number of entries to return from the search.
-n
-s
Search the LDAP scope base (search the base-level entry only).
-o
LDAP scope one (search all entries in the first level below the base entry, excluding
the base entry).
-u
LDAP scope sub (search the base-level entries and all entries below the base level).
[-h HOSTLIST]
Be sure to create a local user account for each user that will be accessing SMB, FTP, or HTTP
shares, and create at least one local group account for the users. The account information is stored
internally in the cluster.
Configure Active Directory authentication:
ibrix_auth -n DOMAIN_NAME -A AUTH_PROXY_USER_NAME@domain_name [-P
AUTH_PROXY_PASSWORD] [-S SETTINGLIST] [-h HOSTLIST]
In the command, DOMAIN_NAME is your Active Directory domain.
AUTH_PROXY_USER_NAME@domain_name is the name and domain for an AD domain user
(typically a Domain Administrator) having privileges to join the specified domain and
AUTH_PROXY_PASSWORD is the password for that account.
To configure Active Directory authentication on specific nodes, specify those nodes in HOSTLIST.
For the -S option, enter the settings as settingname=value. Use commas to separate the
settings, and enclose the list in quotation marks. If there are multiple values for a setting, enclose
the values in square brackets. The users you specify must already exist. For example:
ibrix_auth -t -S 'share admins=[domain\user1, domain\user2,
domain\user3]'
To remove a setting, enter settingname=.
All servers, or only the servers specified in HOSTLIST, will be joined to the specified Active
Directory domain.
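For example, to join a hypothetical domain (the domain name and account are illustrative, and the password is a placeholder):
ibrix_auth -n mydomain.com -A [email protected] -P <password>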
7 Using SMB
The SMB server implementation allows you to create file shares for data stored on the cluster. The
SMB server provides a true Windows experience for Windows clients. A user accessing a file
share on a StoreAll system will see the same behavior as on a Windows server.
IMPORTANT: SMB and StoreAll Windows clients cannot be used together because of incompatible
AD user to UID mapping. You can use either SMB or StoreAll Windows clients, but not both at the
same time.
IMPORTANT: Before configuring SMB, select an authentication method. See Configuring
authentication for SMB, FTP, and HTTP (page 61) for more information.
To verify that a file serving node can resolve SRV records for your AD domain, run the Linux dig
command. (In the following example, the Active Directory domain name is mydomain.com.)
% dig SRV _ldap._tcp.mydomain.com
In the output, verify that the ANSWER SECTION contains a line with the name of a domain controller
in the Active Directory domain. Following is some sample output:
; <<>> DiG 9.3.4-P1 <<>> SRV _ldap._tcp.mydomain.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56968
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2

;; QUESTION SECTION:
;_ldap._tcp.mydomain.com.        IN      SRV

;; ANSWER SECTION:
_ldap._tcp.mydomain.com. 600     IN      SRV     adctrlr.mydomain.com.

;; ADDITIONAL SECTION:
adctrlr.mydomain.com.    3600    IN      A       192.168.11.11
NOTE: CIFS in the GUI has not been rebranded to SMB yet. CIFS is just a different name for SMB.
Use the SMB panel on the GUI to start, stop, or restart the SMB service on a particular server, or
to view SMB activity statistics for the server. Select Servers from the Navigator and then select the
appropriate server. Select CIFS in the lower Navigator to display the CIFS panel, which shows
SMB activity statistics on the server. You can start, stop, or restart the SMB service by clicking the
appropriate button.
NOTE: Click CIFS Settings to configure SMB signing on this server. See Configuring SMB signing
(page 83) for more information.
To start, stop, or restart the SMB service from the CLI, use the following command:
ibrix_server -s -t cifs -c {start|stop|restart}
The SMB monitoring daemon monitors the health of the following services:
lwreg
dcerpc
eventlog
lsass
lwio
netlogin
srvsvc
If the monitor finds that a service is not running, it attempts to restart the service. If the service
cannot be restarted, that particular service is not monitored.
The command can be used for the following tasks.
Start the SMB monitoring daemon and enable monitoring:
ibrix_cifsmonitor -m [-h HOSTLIST]
Display the health status of the SMB services:
ibrix_cifsmonitor -l
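For example, to enable monitoring on two hypothetical nodes and then display the health status:
ibrix_cifsmonitor -m -h node1,node2
ibrix_cifsmonitor -l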
Condition
Description
Up
All monitored SMB services are running.
Degraded
The lwio service is running but one or more of the other services are down.
Down
The lwio service is down and one or more of the other services are down.
Not Monitored
Monitoring is disabled.
N/A
The active Fusion Manager could not communicate with other file serving nodes in the cluster.
SMB shares
Windows clients access file systems through SMB shares. You can use the StoreAll GUI or CLI to
manage shares, or you can use the Microsoft Management Console interface. The SMB service
must be running when you add shares.
When working with SMB shares, you should be aware of the following:
The permissions on the directory exporting an SMB share govern the access rights that are
given to the Everyone user as well as to the owner and group of the share. Consequently, the
Everyone user may have more access rights than necessary. The administrator should set ACLs
on the SMB share to ensure that users have only the appropriate access rights. Alternatively,
permissions can be set more restrictively on the directory exporting the SMB share.
When the cluster and Windows clients are not joined in a domain, local users are not visible
when you attempt to add ACLs on files and folders in an SMB share.
A directory tree on an SMB share cannot be copied if there are more than 50 ACLs on the
share. Also, because of technical constraints in the SMB service, you cannot create subfolders
in a directory on an SMB share having more than 50 ACLs.
When configuring an SMB share, you can specify IP addresses or ranges that should be
allowed or denied access to the share. However, if your network includes packet filters, a
NAT gateway, or routers, this feature cannot be used because the client IP addresses are
modified while in transit.
You can use an SMB share as a DFS target. However, the SMB share does not support DFS
load balancing or DFS replication.
With the release of version 6.2, SMB shares support Large MTU, which provides a 1 MB
buffer for reads and writes. On the client, you must enable Large MTU in the registry to enable
support for Large MTU on the SMB server.
SMB shares support alternate data streams. Files containing alternate data streams of type
'$DATA' can be written to SMB shares by SMB clients. The files are stored on the StoreAll
file system in a special format and should only be handled by SMB clients.
IMPORTANT: If files are handled over a different protocol or directly on the StoreAll server via
PowerShell, the alternate data streams could be lost. If you rename the master file table while
archiving and auto commit is enabled, the alternate data streams associated with the Master File
Table are missing after the rename.
HP-SMB supports the following subset of Windows LSASS Local Authentication Provider
Privileges:
SE_BACKUP_PRIVILEGE
SE_MACHINE_ACCOUNT_PRIVILEGE
SE_MACHINE_VOLUME_PRIVILEGE
SE_RESTORE_PRIVILEGE
SE_TAKE_OWNERSHIP_PRIVILEGE
See the Microsoft documentation for more information about these privileges.
Do not include any of the following special characters in a share name. If the name contains
any of these special characters, the share might not be set up properly on all nodes in the
cluster.
' & ( [ { $ ` , / \
Do not include any of the following special characters in the share description. If a description
contains any of these special characters, the description might not propagate correctly to all
nodes in the cluster.
* % + & `
On the Permissions page, specify permissions for users and groups allowed to access the share.
Click Add to open the New User/Group Permission Entry dialog box, where you can configure
permissions for a specific user or group. The completed entries appear in the User/Group Entries
list on the Permissions page.
On the Client Filtering page, specify IP addresses or ranges that should be allowed or denied
access to the share.
NOTE: This feature cannot be used if your network includes packet filters, a NAT gateway, or
routers.
Click Add to open the New Client IP Address Entry dialog box, where you can allow or deny
access to a specific IP address or a range of addresses. Enter a single IP address, or include a
bitmask to specify entire subnets of IP addresses, such as 10.10.3.2/25. The valid range for the
bitmask is 1-32. The completed entry appears on the Client IP Filters list on the Client Filtering
page.
On the Advanced Settings page, enable or disable Access Based Enumeration and specify the
default create mode for files and directories created in the share. The Access Based Enumeration
option allows users to see only the files and folders to which they have access on the file share.
On the Host Servers page, select the servers that will host the share.
Configuring SMB signing
Use the Required check box to specify whether SMB signing (with either SMB1 or SMB2) is
required.
The Disabled check box applies only to SMB1. Use this check box to enable or disable SMB
signing with SMB1.
The File Share Settings dialog box does not display whether SMB signing is currently enabled
or disabled. Use the following command to view the current setting for SMB signing:
ibrix_cifsconfig -i
SMB signing must not be required to support connections from Mac OS X 10.5 and 10.6 clients.
It is possible to configure SMB signing differently on individual servers. Backup SMB servers
should have the same settings to ensure that clients can connect after a failover.
The SMB signing settings specified here are not affected by Windows domain group policy
settings when joined to a Windows domain.
Use commas to separate the settings, and enclose the list in quotation marks. For example, the
following command sets SMB signing to enabled and required:
ibrix_cifsconfig -t -S "smb signing enabled=1,smb signing required=1"
To disable SMB signing, enter settingname= with no value. For example:
ibrix_cifsconfig -t -S "smb signing enabled=,smb signing required="
IMPORTANT: After making configuration changes with the ibrix_cifsconfig -t -S
command, use the following command to restart the SMB services on all nodes affected by the
change.
ibrix_server -s -t cifs -c restart [-h SERVERLIST]
Clients will experience a temporary interruption in service during the restart.
On the CIFS Shares panel, click Add or Modify to open the File Shares wizard, where you can
create a new share or modify the selected share. Click Delete to remove the selected share. Click
CIFS Settings to configure global file share settings; see Configuring SMB signing (page 83)
for more information.
You can also view SMB shares for a specific file system. Select that file system on the GUI, and
then select CIFS Shares from the lower Navigator.
NOTE: You cannot create an SMB share with a name containing an exclamation point (!) or a
number sign (#) or both.
Use the -A ALLOWCLIENTIPSLIST or -E DENYCLIENTIPSLIST options to list client IP addresses
allowed or denied access to the share. Use commas to separate the IP addresses, and enclose the
list in quotes. You can include an optional bitmask to specify entire subnets of IP addresses (for
example, ibrix_cifs -A "192.186.0.1,102.186.0.2/16"). The default is "", which
allows all IP addresses when it is used with the -A option or it denies all IP addresses when it is
used with the -E option.
The -F FILEMODE and -M DIRMODE options specify the default mode for newly created files or
directories, in the same manner as the Linux chmod command. The range of values is 0000-0777.
The default is 0700.
To see the valid settings for the -S option, use the following command:
ibrix_cifs -L
View share information:
ibrix_cifs -i [-h HOSTLIST]
Modify a share:
ibrix_cifs -m -s SHARENAME [-D SHAREDESCRIPTION] [-S SETTINGLIST] [-A
ALLOWCLIENTIPSLIST] [-E DENYCLIENTIPSLIST] [-F FILEMODE] [-M DIRMODE]
[-h HOSTLIST]
Delete a share:
ibrix_cifs -d -s SHARENAME [-h HOSTLIST]
The following permission levels can be assigned:
fullcontrol
change
read
For example, the following command gives everyone read permission on share1:
ibrix_cifsperms -a -s share1 -u Everyone -t allow -p read
Modify share-level permissions for a user or group:
ibrix_cifsperms -m -s SHARENAME -u USERNAME -t TYPE -p PERMISSION [-h
HOSTLIST]
Delete share-level permissions for a user or group:
ibrix_cifsperms -d -s SHARENAME [-u USERNAME] [-t TYPE] [-h HOSTLIST]
Display share-level permissions:
ibrix_cifsperms -i -s SHARENAME [-t TYPE] [-h HOSTLIST]
Gid: 1060635137
SID: S-1-5-21-3681183244-3700010909-334885885-513
You can find the GID assigned by the StoreAll CIFS server for any Active Directory group. The
following command looks up the GID for Domain Admins:
[root@ibrix01a ~]# /opt/likewise/bin/lw-find-group-by-name IB\\Domain\ Admins
The command displays the following output:
Group info (Level 0):
====================
Name: IB\domain^admins
Gid: 1060635136
SID: S-1-5-21-3681183244-3700010909-334885885-512
NOTE: Backslashes have been used to escape special characters in the group name.
The SMB server's file and folder create modes control the Linux permissions on files and directories
created over CIFS. These default to 0700 (read/write/execute to the owner).
The create modes can be managed with the ibrix_cifs command and with the GUI wizard
(in Advanced Settings) when creating or modifying shares. The following example changes
a share's default create mode for files:
[root@ibrix01a ~]# ibrix_cifs -m -s cifs1 -F 0770
Command succeeded!
In this command:
The -F option specifies that we are changing the mask for files; use -M to change the directory
mask.
The share path must include the StoreAll file system name. For example, if the file system is
named data1, you could specify C:\data1\folder1.
NOTE: The permissions on the shared directory will be set to 777. It is not possible to change
the permissions on the share.
Do not include any of the following special characters in a share name. If the name contains
any of these special characters, the share might not be set up properly on all nodes in the
cluster.
' & ( [ { $ ` , / \
Do not include any of the following special characters in the share description. If a description
contains any of these special characters, the description might not propagate correctly to all
nodes in the cluster.
* % + & `
The management console GUI or CLI cannot be used to alter the permissions for shares created
or managed with Windows Share Management. The permissions for these shares are marked
as externally managed on the GUI and CLI.
Open the MMC with the Shared Folders snap-in that you created earlier. On the Select Computer
dialog box, enter the IP address of a server that will host the share.
The Computer Management window shows the shares currently available from the server.
To add a new share, select Shares > New Share and run the Create A Shared Folder Wizard. On
the Folder Path panel, enter the path to the share, being sure to include the file system name.
When you complete the wizard, the new share appears on the Computer Management window.
By default, a share is made available from all of the file serving nodes in a cluster.
An SMB client will map the share from only one file serving node. All network traffic between
the client and the cluster will go via that node.
A share should always be mapped using the User Virtual Interface (the User VIF) of a file
serving node, as that interface will be migrated to the node's HA partner in the event of the
first node failing.
A share should never be mapped using the Admin IP address of a node, as that interface
cannot migrate to the node's HA partner.
A share should never be mapped using the StoreAll Virtual Management Interface.
Where many clients will be mapping shares, the most common method of directing mapping requests
to file serving nodes is to set up a round-robin DNS entry for all of the cluster's User VIFs.
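As a sketch, the round-robin entry might look like the following BIND-style zone records; the host name and addresses are illustrative:
; Both User VIFs answer for the same share host name
smb.mydomain.com.    IN    A    192.168.11.21
smb.mydomain.com.    IN    A    192.168.11.22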
NOTE: Mapping UID 0 and GID 0 to any AD user or group is not compatible with SMB static
mapping.
Add the uidNumber and gidNumber attributes to the partial-attribute-set of the AD global
catalog.
You can perform these procedures from any domain controller. However, the account used to add
attributes to the partial-attribute-set must be a member of the Schema Admins group.
1. Click Start, click Run, type mmc, and then click OK.
2. On the MMC Console menu, click Add/Remove Snap-in.
3. Click Add, and then click Active Directory Schema.
4. Click Add, click Close, and then click OK.
The next dialog box shows the properties for the gidNumber attribute.
The following article provides more information about modifying attributes in the Active Directory
global catalog:
http://support.microsoft.com/kb/248717
Assigning attributes
To set POSIX attributes for users and groups, start the Active Directory Users and Computers GUI
on the Domain Controller. Open the Administrator Properties dialog box, and go to the UNIX
Attributes tab. For users, you can set the UID, login shell, home directory, and primary group. For
groups, set the GID.
Synchronizing Active Directory 2008 with the NTP server used by the cluster
It is important to synchronize Active Directory with the NTP server used by the StoreAll cluster. Run
the following commands on the PDC:
net stop w32time
w32tm /config /syncfromflags:manual /manualpeerlist:"<NTP server>"
w32tm /config /reliable:yes
net start w32time
4. Create the new shares on the cluster storage and assign each share the appropriate path. For
example, assign srv1-DATA to /srv1/data, and assign srv2-DATA to /srv2/data.
Because SRV3 originally pointed to the same share as SRV1, assign the share srv3-DATA
the same path as srv1-DATA, but set the permissions differently.
5. Optionally, create a share having the original share name, DATA in our example. Assign a
path such as /ERROR/DATA and place a file in it named SHARE_MAP_FAILED. Doing this
ensures that if a user configuration error occurs or the map fails, clients will not gain access
to the wrong shares. The file name notifies the user that their access has failed.
When this configuration is in place, a client request to access share \\srv1\data will be translated
to share srv1-DATA at /srv1/data on the file system. Client requests for \\srv3\data will
also be translated to /srv1/data, but the clients will have different permissions. The client requests
for \\srv2\data will be translated to share srv2-DATA at /srv2/data.
Client utilities such as net use will report the requested share name, not the new share name.
Share names are case insensitive, and must be unique with respect to case.
The oldShareName and newShareName do not need to exist when creating the file; however,
they must exist for a connection to be established to the share.
If a client specifies a share name that is not in the file, the share name will not be translated.
Care should be used when assigning share names longer than 12 characters. Some clients
impose a limit of 12 characters for a share name.
Verify that the IP addresses specified in the file are legal and that Vhost names can be resolved
to an IP address. IP addresses must be in IPv4 format, which limits the addresses to 15 characters.
IMPORTANT: When you update the vhostmap file, the changes take effect a few minutes after
the map is saved. If a client attempts a connection before the changes are in effect, the previous
map settings will be used. To avoid any delays, make your changes to the file when the SMB
service is down.
After creating or updating the vhostmap file, copy the file manually to the other servers in the
cluster.
SMB clients
SMB clients access shares on the StoreAll software cluster in the same way they access shares on
a Windows server.
Zero-length byte-range locks acquired on one file serving node are not observed on other file
serving nodes.
Byte-range locks acquired on one file serving node are not enforced as mandatory on other
file serving nodes.
If a shared byte-range lock is acquired on a file opened with write-only access on one file
serving node, that byte-range lock will not be observed on other file serving nodes. ("Write-only
access" means the file was opened with GENERIC_WRITE but not GENERIC_READ access.)
If an exclusive byte-range lock is acquired on a file opened with read-only access on one file
serving node, that byte-range lock will not be observed on other file serving nodes. ("Read-only
access" means the file was opened with GENERIC_READ but not GENERIC_WRITE access.)
Restore operations
If a file has been deleted from a directory that has Previous Versions, the user can recover a previous
version of the file by performing a Restore of the parent directory. However, the Properties of the
restored file will no longer list those Previous Versions. This condition is due to the StoreAll snapshot
infrastructure; after a file is deleted, a new file in the same location is a new inode and will not
have snapshots until a new snapshot is subsequently created. However, all pre-existing previous
versions of the file continue to be available from the Previous Versions of the parent directory.
For example, folder Fold1 contains files f1 and f2. There are two snapshots of the folder at
timestamps T1 and T2, and the Properties of Fold1 show Previous Versions T1 and T2. The
Properties of files f1 and f2 also show Previous Versions T1 and T2 as long as these files have
never been deleted.
If the file f1 is now deleted, you can restore its latest saved version from Previous Version T2 on
Fold1.
From that point on, the Previous Versions of \Fold1\f1 no longer show timestamps T1 and T2.
However, the Previous Versions of \Fold1 continue to show T1 and T2, and the T1 and T2 versions
of file f1 continue to be available from the folder.
After the user skips the file or folder, the restore operation may or may not continue depending on
the Windows client being used. For Windows Vista, the restore operation continues by skipping
the folder or file. For other Windows clients (Windows 2003, XP, 2008), the operation stops
abruptly or gives an error message. Testing has shown that Windows Vista is an ideal client for
SMB shadow copy support. StoreAll software does not have any control over the behavior of other
clients.
NOTE: HP recommends that the share root is not at the same level as the file system root, and is
instead a subdirectory of the file system root. This configuration reduces access and other
permissions-related issues, as there are many system files (such as lost+found, quota subsystem
files, and so on) at the root of the file system.
After the failover is complete, the user must skip the file that could not be accessed. The restore
operation then proceeds. The file will not be restored and can be manually copied later, or the
user can cancel the restore operation and then restart it.
access to and security for Windows clients. The SMB server maintains the ACLs as requested by
the Windows clients, and emulates the inheritance of ACLs identically to the way Windows servers
maintain inheritance. This creates a true Windows experience around accessing files from a
Windows client.
This mechanism works well for pure Linux environments, but (like the SMB server) Linux applications
do not understand any permissions mechanisms other than their own. Note that a Linux application
can also use POSIX ACLs to control access to a file; POSIX ACLs are honored by the SMB server,
but will not be inherited or propagated. The SMB server also does not map POSIX ACLs to be
compatible with Windows ACLs on a file.
These permission mechanisms have some ramifications for setting up shares, and for cross-protocol
access to files on a StoreAll system. The details of these ramifications follow.
Changing the way SMB inherits permissions on files accessed from Linux applications
To prevent the SMB server from modifying file permissions on directory trees that a user wants to
access from Linux applications (so keeping permissions other than 700 on a file in the directory
tree), a user can set the setgid bit in the Linux permissions mask on the directory tree. When the
setgid bit is set, the SMB server honors that bit, and any new files in the directory inherit the
parent directory permission bits and group that created the directory. This maintains group access
for new files created in that directory tree until setgid is turned off in the tree. That is, Linux-style
permissions semantics are kept on the files in that tree, allowing SMB users to modify files in the
directory while NFS users maintain their access through their normal group permissions.
For example, if a user wants all files in a particular tree to be accessible by a set of Linux users
(say, through NFS), the user should set the setgid bit (through local Linux mechanisms) on the
top level directory for a share (in addition to setting the desired group permissions, for example
770). Once that is done, new files in the directory will be accessible to the group that creates the
directory and the permission bits on files in that directory tree will not be modified by the SMB
server. Files that existed in the directory before the setgid bit was set are not affected by the
change in the containing directory; the user must manually set the group and permissions on files
that already existed in the directory tree.
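The following is a minimal sketch using standard Linux commands; the directory path and group name are illustrative:
# Set group ownership, group permissions, and the setgid bit on the top-level directory
chgrp nfsusers /fs1/projects
chmod 2770 /fs1/projects
# Files that existed before the change must be fixed manually
chgrp -R nfsusers /fs1/projects
chmod -R g+rw /fs1/projects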
This capability can be used to facilitate cross-protocol sharing of files. Note that this does not affect
the permissions inheritance and settings on the SMB client side. Using this mechanism, a Windows
user can set the files to be inaccessible to the SMB users of the directory tree while opening them
up to the Linux users of the directory tree.
Troubleshooting SMB
Changes to user permissions do not take effect immediately
The SMB implementation maintains an authentication cache that is set to four hours. If a user is
authenticated to a share, and the user's permissions are then changed, the old permissions will
remain in effect until the cache expires, at four hours after the authentication. The next time the
user is encountered, the new, correct value will be read and written to the cache for the next four
hours.
This is not a common occurrence. However, to avoid the situation, use the following guidelines
when changing user permissions:
After a user is authenticated to a share, wait four hours before modifying the user's permissions.
Conversely, it is safe to modify the permissions of a user who has not been authenticated in
the previous four hours.
Power down the file serving node before failing it over, and do failback operations only during
off hours.
The following xcopy and robocopy options are recommended for copying files from a client to
a highly available SMB server:
xcopy: include the option /C; in general, /S /I /Y /C are good baseline options.
robocopy: include the option /ZB; in general, /S /E /COPYALL /ZB are good baseline
options.
8 Using FTP
The FTP feature allows you to create FTP file shares for data stored on the cluster. Clients access
the FTP shares using standard FTP and FTPS protocol services.
IMPORTANT: Before configuring FTP, select an authentication method (either Local Users or Active
Directory). See Configuring authentication for SMB, FTP, and HTTP (page 61) for more information.
An FTP configuration consists of one or more configuration profiles and one or more FTP shares.
A configuration profile defines global FTP parameters and specifies the file serving nodes on which
the parameters are applied. The vsftpd service starts on these nodes when the cluster services
start. Only one configuration profile can be in effect on a particular node.
An FTP share defines parameters such as access permissions and lists the file system to be accessed
through the share. Each share is associated with a specific configuration profile. The share
parameters are added to the profile's global parameters on the file serving nodes specified in the
configuration profile.
You can create multiple shares having the same physical path, but with different sets of properties,
and then assign users to the appropriate share. Be sure to use a different IP address or port for
each share.
You can configure and manage FTP from the GUI or CLI.
If an SSL certificate will be required for FTPS access, add the SSL certificate to the cluster
before creating the shares. See Managing SSL certificates (page 174) for information about
creating certificates in the format required by StoreAll software and then adding them to the
cluster.
When configuring a share on a file system, the file system must be mounted.
If the directory path to the share includes a subdirectory, be sure to create the subdirectory
on the file system and assign read/write/execute permissions to it. (StoreAll software does
not create the subdirectory if it does not exist, and instead adds a /pub/ directory to the
share path.)
For High Availability, when specifying IP addresses for accessing a share, use IP addresses
for VIFs having VIF backups. See the administrator guide for your system for information about
creating VIFs.
Configuring FTP
On the GUI, select File Shares from the Navigator to open the File Shares panel, and then click
Add to start the Add New File Share Wizard.
On the File Share page, select FTP as the File Sharing Protocol. Select the file system, which must
be mounted, and enter the default directory path for the share. If the directory path includes a
subdirectory, be sure to create the subdirectory on the file system and assign read/write/execute
permissions to it.
NOTE: StoreAll software does not create the subdirectory if it does not exist, and for anonymous
shares only, adds a /pub/ directory to the share path instead. All files uploaded through the
anonymous user will then be placed in that directory. The /pub/ directory is not created for a
non-anonymous share.
On the Config Profile page, select an existing configuration profile or create a new profile,
specifying a name and defining the appropriate parameters.
On the Host Servers page, select the servers that will host the configuration profile.
On the Settings page, configure the FTP parameters that apply to the share. The parameters are
added to the file serving nodes hosting the configuration profile. Also enter the IP addresses and
ports that clients will use to access the share. For High Availability, specify the IP address of a VIF
having a VIF backup.
NOTE: If you need to allow NAT connections to the share, use the Modify FTP Share dialog box
after the share is created.
On the Users page, specify the users to be given access to the share. If no users are specified on
this page, then any user who can be authenticated according to your StoreAll authentication settings
for the cluster can access the share as read-write. Users must also have access permissions at the
file system level to read or write. If any users are specified on this page, only those users may
access the share and all other users are denied regardless of their file system permissions.
IMPORTANT: Ensure that all users who are given read or write access to shares have sufficient
access permissions at the file system level for the directories exposed as shares.
To define permissions for a user, click Add to open the Add User to Share dialog box.
Use the buttons on the panels to modify or delete the selected configuration profile or share. You
can also add another FTP share to the selected configuration profile. Use the Modify FTP Share
dialog box if you need to allow NAT connections on the share.
Configuring FTP
To configure FTP, first add a configuration profile, and then add an FTP share:
Add a configuration profile:
ibrix_ftpconfig -a PROFILENAME [-h HOSTLIST] [-S SETTINGLIST]
For the -S option, use a comma to separate the settings, and enclose the settings in quotation
marks, such as "passive_enable=TRUE,maxclients=200". To see a list of available settings
for the profile, use the following command:
ibrix_ftpconfig -L
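For example, to create a hypothetical profile on two nodes with the settings shown above (the profile and node names are illustrative):
ibrix_ftpconfig -a profile1 -h node1,node2 -S "passive_enable=TRUE,maxclients=200"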
Accessing shares
Clients can access an FTP share by specifying a URL in their Web browser, such as Internet Explorer.
In the following URLs, IP_address:port is the IP (or virtual IP) and port configured for the share.
For a share configured with the anonymous parameter set to true, use the following URL:
ftp://IP_address:port/
For a share configured with a userlist and having the anonymous parameter set to false,
use the following URL:
ftp://<ADDomain\username>@IP_address:port/
NOTE: When a file is uploaded into an FTP share, the file is owned by the user who uploaded
the file to the share.
If a user uploads a file to an FTP share and specifies a subdirectory that does not already exist,
the subdirectory will not be created automatically. Instead, the user must explicitly use the mkdir
ftp command to create the subdirectory. The permissions on the new directory are set to 755. If
the anonymous user created the directory, it is owned by ftp:ftp. If a non-anonymous user
created the directory, the directory is owned by user:group.
You can also use curl commands to access an FTP share. (The default SSL port is 990.)
You can access the shares as follows:
As an anonymous share. See FTP and FTPS commands for anonymous shares (page 111).
As a non-anonymous share. See FTP and FTPS commands for non-anonymous shares
(page 112).
From any Fusion Manager that has FTP clients. See FTP and FTPS commands for Fusion
Manager (page 113).
Tables 2 through 8 list the curl commands for uploading and downloading files over FTP and
FTPS for anonymous shares, local users, and domain users; most of the commands are not
recoverable here. The surviving download commands are:
curl ftp://IP_address/pub/server.pem -o <path to download>\<filename>
curl ftp://IP_address/pub/server.pem -o <path to download>\<filename> -u ftp:ftp
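As a sketch of the upload direction (not from the original tables), a standard curl upload to an anonymous share would look like the following; the file name is a placeholder:
curl -T myfile.txt ftp://IP_address:port/pub/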
9 Using HTTP
The HTTP feature allows you to create HTTP file shares for data stored on the cluster. Clients access
the HTTP shares using standard HTTP and HTTPS protocol services.
IMPORTANT: Before configuring HTTP, select an authentication method (either Local Users or
Active Directory). See Configuring authentication for SMB, FTP, and HTTP (page 61) for more
information.
The HTTP configuration consists of a configuration profile, a virtual host, and an HTTP share. A
profile defines global HTTP parameters that apply to all shares associated with the profile. The
virtual host identifies the IP addresses and ports that clients will use to access shares associated
with the profile. A share defines parameters such as access permissions and lists the file system to
be accessed through the share.
HTTP is administered from the GUI or CLI. On the GUI, select HTTP from the File Shares list in the
Navigator. The HTTP Config Profiles panel lists the existing configuration profiles and the virtual
hosts configured on the selected profile.
HTTP-StoreAll REST API shares in file-compatible mode: This type of share provides programmatic
access to user-stored files and their metadata. The metadata is stored in the HP StoreAll Express
Query database in the StoreAll cluster and provides fast query access to metadata without
scanning the file system.
HTTP-StoreAll REST API shares in object mode: This type of share provides concepts similar to
the OpenStack Object Storage API to support programmatic access to user-stored files. Users
create containers within each account to hold objects (files), and the user's string identifier for
the object maps to a hashed path name on the file system.
A client application can digest responses from StoreAll and present results to the user in a readable
format. It can be coded in any language (for example, Java or Python) on any client operating
system, such as Windows or Linux.
Certain tools let you send ad-hoc direct requests and show responses:
Web browsers with add-ons, curl, and others. You must enter request data in the StoreAll
REST API syntax.
Sample Java client application provided by HP to guide customer developers. See Obtaining
the HP StoreAll REST API Sample Client Application (page 116) for information on how to
access the sample Java client.
The original table compares file-compatible mode and object mode shares across the following
characteristics: file upload/download, access by other protocols, Express Query support,
system/custom metadata queries, custom metadata assignment, and WORM/retention management.
(The per-mode entries are not recoverable.)
Do not put file-compatible and object mode shares on the same filesystem.
Avoid putting object mode shares on retention-enabled file systems, unless the auto-commit
feature is needed. The object mode API does not include retention management features.
Managing WORM or retention states must be performed outside the API as described in
Managing data retention (page 196).
You must assign read, write, and execute permissions to the share's directory path and all
parent directories up to the file system mount point to allow accounts to be created by their
owners through the API. For example, if your share's directory path is /objFS1/objStore,
and the file system objFS1 is mounted at /objFS1, both directories must be set to read,
write, and execute permissions.
Do not set the directory path of the file-compatible mode share to a subdirectory of the mount
point. Make the mount point directory be the directory path for the share.
The original pages present the HTTP share setup tasks as a table of numbered steps, indicating
for each step whether it applies to all HTTP share types or only to REST API shares. The
recoverable task is:
Enable Express Query on the mounted file system (file-compatible mode only) if it was not
enabled when the file system was created. See Enabling file systems for data retention (page 198).
IMPORTANT: Do not enable Express Query on object mode shares or on any existing file system
that has any object mode API shares defined.
If an SSL certificate will be required for HTTPS access, add the SSL certificate to the cluster
before creating the shares. See Managing SSL certificates (page 174) for information about
creating certificates in the format required by StoreAll software and then adding them to the
cluster.
When configuring a share on a file system, the file system must be mounted.
If the directory path to the share includes a subdirectory, be sure to create the subdirectory
on the file system and assign read/write/execute permissions to it. (StoreAll software does
not create the subdirectory if it does not exist, and instead adds a /pub/ directory to the
share path.)
Ensure that all users who are given read or write access to HTTP shares have sufficient access
permissions at the file system level for the directories exposed as shares.
For High Availability, when specifying IP addresses for accessing a share, use IP addresses
for VIFs having VIF backups. See the administrator guide for your system for information about
creating VIFs.
3. Enter the share name and directory path. Then, click Next.
4. Configure a new profile on the Config Profile dialog box, specifying a name and the
appropriate parameters for the profile. Select Host servers on the Host Servers page. Click
Next.
5. On the Virtual Host page, enter the vhost name. Select the false option from the Enable
StoreAll REST API menu. Fill in the remaining details of the SSL certificate, domain, and IP
address. Click Next.
6. On the Settings page, enter the URL path, fill in the remaining details, and set the appropriate
parameters for the share. Click Next when done. Note the following:
When specifying the URL Path, do not include http://<IP address> or any variation
of this in the URL path. For example, /reports/ is a valid URL path. The beginning and
ending slashes of the path are optional. For example, /reports/, reports, and
/reports are valid entries and will be stored as /reports/.
Default Permissions: New files uploaded via the HTTP share are given default permissions
in standard UNIX octal notation. The owner user and group receive read-write permissions
(the 77) and everyone else receives read-only permission (the 5). This default permission is
ignored when creating directories, which are set to 0755.
When the WebDAV feature is enabled for a standard HTTP share, the share becomes a
readable and writable medium with locking capability. The primary user can make edits,
while other users can only view the resource in read-only mode. The primary user must
unlock the resource before another user can make changes.
Set the Anonymous field to false only if you want to restrict access to specific users.
If Browseable is set to true, the user can issue HTTP GET requests for any directory
path within the share's directory tree, and that directory's listing of files and
subdirectories will be returned in the HTTP response. An error will be returned if the
user issuing the HTTP request does not have file system permission to navigate down
the path to that directory and read its contents.
If Browseable is set to false, a GET request for a directory path will always return
an error, regardless of the user's permissions.
7. In the Virtual Host summary section, the value of IBRIX REST API is displayed as Disabled.
In the File Share summary section, the value of IBRIX REST API Mode is displayed as Disabled.
8. Click Finish.
When the wizard is complete, users can access the share from a browser. For example, if you
configured the share with the anonymous user, specified 99.226.50.92 as the IP address on the
Create Vhost dialog box, and specified /reports/ as the URL path on the Add HTTP Share
dialog box, users can access the share using the following URL:
http://99.226.50.92/reports/
The users will see the directory listing of the base URL path directory of the share (if the browseable
property of the share is set to true), and can open and save files. For more information about
accessing shares and uploading files, see Accessing standard and file-compatible mode HTTP shares
(page 133).
Do not create file-compatible mode and object mode REST API shares on the same file system.
Use separate file systems for each type of REST API share.
Do not create an object mode REST API share on any file system where Express Query is
enabled. Express Query does not support storing metadata from objects in object mode shares.
If Express Query is enabled on a file system with an object mode API share, metadata from
the object mode files are ingested incorrectly, causing unusable metadata to be added to the
Express Query database. This situation negatively impacts the performance of Express Query
for the correctly ingested files outside the object mode share on the same file system.
1. On the GUI, select File Shares from the Navigator to open the File Shares panel, and then
click Add to start the Add New File Share Wizard.
2. On the File Share page, select HTTP from the File Sharing Protocol menu. Select the file system,
which must be mounted, and enter a share name and the default directory path for the share.
3. Select an existing profile or configure a new profile on the Config Profile dialog box, specifying
a name and the appropriate parameters for the profile.
4. The Host Servers dialog box displays differently depending on whether you selected an existing
profile or are creating a new one. If you selected the option Create a new HTTP Profile, you are
prompted to select the file server nodes on which the HTTP service will be active. Only one
configuration profile can be in effect on a particular server.
5. If you selected an existing profile on the Config Profile dialog box, you are shown the hosts
defined for that profile, as shown in the following figure.
6. The Virtual Host dialog box displays differently depending on whether you selected an existing
profile or are creating a new one. If you are creating a new profile, the Virtual Host dialog box
prompts you to enter additional information, as shown in the following figure. Enter a name for
the virtual host. If an HTTP-StoreAll REST API share is to be created, select true from the Enable
StoreAll REST API menu. If a standard HTTP share is to be created, select false from the Enable
StoreAll REST API menu. Specify an SSL certificate and domain name if used. Also add one or
more IP address:port pairs for the virtual host. For High Availability, specify a VIF having a VIF
backup.
7. If you selected an existing profile, the Virtual Host page prompts you to select a pre-existing
Vhost or create an HTTP Vhost.
8. If you already have Vhosts defined, you can select an existing Vhost.
9. On the Settings page, set the appropriate parameters for the share. Note the following:
When specifying the URL Path, do not include http://<IP address> or any variation
of this in the URL path. For example, /reports/ is a valid URL path. The beginning and
ending slashes of the path are optional. For example, /reports/, reports, and
/reports are valid entries and will be stored as /reports/. For REST API shares in
File-Compatible mode, do not define a URL path of more than one directory level, such
as reports/sales; however, your single-directory URL path can correspond to any
arbitrarily deep directory path on the StoreAll file system.
The StoreAll REST API Mode field on the Settings page is displayed only when Enable
StoreAll REST API is set to true on the Virtual Host page (that is, when an HTTP-StoreAll
REST API share is to be created). The StoreAll REST API Mode can be set to File
Compatible or Object from the drop-down list, and it defines which mode's syntax
will be accepted by this API share. For example, if object mode is selected, HTTP
requests using the File-Compatible mode syntax will not be understood and will most likely
return an error.
Default Permissions: For File-compatible shares, new files uploaded via the HTTP share
will be given these permissions on the file system. The value is in standard UNIX octal
notation, the default giving read-write permission to the owning user and group (the 77)
and read-only permission to everyone else (the 5). This default permission is ignored
when creating directories, which will always be set to 0755.
For object mode shares, this setting is ignored. Containers (directories on the file system)
are always created with permissions 0700, and access to a container's objects by other
users is controlled at the container level instead (see Set Container Permission (page 150)).
Permissions cannot be assigned to objects individually.
The Enable WebDAV option is greyed out for HTTP-StoreAll REST API shares and is shown
as false, because WebDAV is always disabled on StoreAll REST API shares.
Set the Anonymous field to false only if you want to restrict access to specific users. The
Anonymous field must be set to false when an HTTP-StoreAll REST API share in object
mode is to be created.
If Browseable is set to true, the user can issue HTTP GET requests for any directory
path within the share's directory tree, and that directory's listing of files and
subdirectories will be returned in the HTTP response. An error will be returned
if the user issuing the HTTP request does not have file system permission to
navigate down the path to that directory and read its contents.
If Browseable is set to false, a GET request for a directory path will always return
an error, regardless of the user's permissions.
10. On the Users page, specify the users to be given access to the share. If no users are specified
on this page, then any user who can be authenticated according to your StoreAll authentication
settings for the cluster can access the share as read-write. Users must also have access
permissions at the file system level to read or write. If any users are specified on this page,
only those users may access the share and all other users are denied regardless of their file
system permissions.
IMPORTANT: Ensure that all users who are given read or write access to shares have sufficient
access permissions at the file system level for the directories exposed as shares.
11. To allow specific users read access, write access, or both, click Add. On the Add Users to
Share dialog box, assign the appropriate permissions to the user. When you complete the
dialog, the user is added to the list on the Users page.
The Summary panel presents an overview of the HTTP configuration. You can go back and modify
any part of the configuration if necessary.
When the wizard is complete, users can access the API HTTP share from a client. See HTTP-REST
API object mode shares (page 138) and HTTP-REST API file-compatible mode shares (page 152)
for details.
Use the buttons on the panels to modify or delete the selected configuration profile or virtual host.
To view HTTP shares on the GUI, select the appropriate profile on the HTTP Config Profiles top
panel, and then select the appropriate virtual host from the lower navigator tree. The Shares bottom
panel shows the shares configured on that virtual host. Click Add Share to add another share to
the virtual host. For example, you could create multiple shares having the same physical path, but
with different sets of properties, and then assign users to the appropriate share. You can also have
any number of REST API and regular HTTP shares attached to the same Vhost.
Tuning the socket read block size and file write block size
By default, the socket read block size and file write block size used by Apache are set to 8192
bytes. If necessary, you can adjust the values with the ibrix_httpconfig command. The values
must be between 8 KB and 2 GB.
ibrix_httpconfig -a profile1 -h node1,node2 -S
"wblocksize=<value>,rblocksize=<value>"
You can also set the values on the Modify HTTP Profile dialog box:
Task: Add a configuration profile.
Command:
ibrix_httpconfig -a PROFILENAME -h HOSTLIST [-S SETTINGLIST]
For the -S option, use a comma to separate the settings, and enclose the settings in quotation
marks, such as "keepalive=true,maxclients=200,...". To see a list of available
settings, use ibrix_httpconfig -L.
Task: Add a virtual host.
Command:
ibrix_httpvhost -a VHOSTNAME -c PROFILENAME -I IP-Address:port [-S
SETTINGLIST]
To create a virtual host with the REST API enabled, use the ibrixrestapi setting, for example:
ibrix_httpvhost -a VHOSTNAME -c PROFILENAME -I IP-Address:port -S
ibrixrestapi=true
For the -S option, use a comma to separate the settings, and enclose the settings in quotation
marks, such as "sslcert=name,...". To see a list of the allowable settings for the vhost,
use ibrix_httpvhost -L.
For the -I option, use a semicolon to separate the IP-address:port settings and enclose
the settings in quotation marks, such as "ip1:port1;ip2:port2;...". For example:
ibrix_httpvhost -a vhost1 -c myprofile -I "99.1.26.1:80;99.1.26.1:81"
Use the IP address of the User Virtual Interface (the User VIF) of a file serving node, as that
interface will be migrated to the node's HA partner if the node fails. Use
the ibrix_nic -l command to find the VIF. The VIF is the IP address of type cluster in an
up, Link Up state, and it is not an Active FM.
Task: Add an HTTP share.
Keep in mind the following when creating StoreAll REST API shares in file-compatible or object mode:
Do not create file-compatible mode and object mode REST API shares on the same file system.
Use separate file systems for each type of REST API share.
Do not create an object mode REST API share on any file system where Express Query is
enabled. Express Query does not support storing metadata from objects in object mode shares.
If Express Query is enabled on a file system with an object mode API share, metadata from
the object mode files are ingested incorrectly, causing unusable metadata to be added to the
Express Query database. This situation negatively impacts the performance of Express Query
for the correctly ingested files outside the object mode share on the same file system.
HTTP StoreAll REST API share in object mode (see notes 1 and 2):
ibrix_httpshare -a SHARENAME -c PROFILENAME -t VHOSTNAME
-f FSNAME -p dirpath -P urlpath [-u USERLIST] -S
"ibrixRestApiMode=object,anonymous=false"
The anonymous setting must be set to false. If you do not provide the
anonymous setting ("ibrixRestApiMode=object"), the anonymous value
is false by default.
1. The userlist parameter is optional, and it is not necessarily needed for the StoreAll REST API.
All the other listed arguments are required for the StoreAll REST API.
2. Additional steps are required to take full advantage of the object mode and its use of containers.
See Tutorial for using the HTTP StoreAll REST API object mode (page 139).
For the -S option, use a comma to separate the settings, and enclose the settings in quotation
marks, such as "davmethods=true,browseable=true,readonly=true".
For example, to create a new HTTP share and enable the WebDAV property on that share:
# ibrix_httpshare -a share3 -c cprofile1 -t dav1vhost1 -f ifs1 -p
/ifs1/dir1 -P url3 -S "davmethods=true"
To see all of the valid settings for an HTTP share, use the following command:
ibrix_httpshare -L
If the pathname ends with a filename, the browser either opens the file or prompts the user to open
or save the file, depending on the browser settings.
You can also use curl commands to access an HTTP share.
NOTE: When a file is uploaded into an HTTP share, the file is owned by the user who uploaded
the file to the share.
If a user uploads a file to an HTTP share and specifies a subdirectory that does not already exist,
the subdirectory will be created. For example, you could have a share mapped to the directory
/ifs/http/ and using the URL path named http_url, a user could upload a file into the share:
curl -T file http://<ip>:<port>/http_url/new_dir/file
If the directory new_dir does not exist under /ifs/http, the http service automatically creates
the directory /ifs/http/new_dir/ and sets the permissions to 755. If the anonymous user
performed the upload, the new_dir directory is owned by daemon:daemon. If a non-anonymous
user performed the upload, the new_dir directory is owned by user:group.
For anonymous users:
For Active Directory users (specify the user as in this example: mycompany.com\\User1):
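A sketch of the corresponding commands, assuming the share from the previous example (the
password is a placeholder); no credentials are passed in the anonymous case, and the
domain-qualified user is passed with -u in the Active Directory case:
curl -T file http://<ip>:<port>/http_url/new_dir/file
curl -T file http://<ip>:<port>/http_url/new_dir/file -u "mycompany.com\User1:password"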
For more information on operations that can be performed for HTTP-StoreAll REST API share in
file-compatible mode, see HTTP-REST API file-compatible mode shares (page 152).
Create an SSL certificate. When using basic authentication to access WebDAV-enabled HTTP
shares, SSL-based access is mandatory.
Verify that the hostname in the certificate matches the Vhost name.
When creating a certificate, the hostname should match the Vhost name or the domain name
issued when mapping a network drive or opening the file directly using the URL such as https://
storage.hp.com/share/foo.docx.
Ensure that the WebDAV URL includes the port number associated with the Vhost.
Use the correct URL path when mapping WebDAV shares on Windows 2003.
When mapping WebDAV shares on Windows 2003, the URL should not end with a trailing
slash (/). For example, http://storage.hp.com/share can be mapped, but
http://storage.hp.com/ cannot be mapped. Also, you cannot map https:// URLs because of
limitations with Windows 2003.
NOTE: Symbolic links are not implemented in the current WebDAV implementation (Apache's
mod_dav module).
NOTE: After mapping a network drive of a WebDAV share on Windows, Windows Explorer
reports an incorrect folder size or available free space on the WebDAV share.
Troubleshooting HTTP
After upgrading the StoreAll software, the HTTP WebDAV share might be
inaccessible or display a permission error when trying to write to a share
During the StoreAll software upgrade, the active connection to the WebDAV share might be lost
and cause share access issues. The share will be inaccessible while node failover is occurring. If
you still experience share access issues after the upgrade, remount the WebDAV share on the
Windows client machine:
net use * http://192.168.1.1/smita/
In this instance, the HTTP WebDAV share is 192.168.1.1/smita.
disconnected and re-mapped through Windows Explorer. The files are accessible on the file serving
node and through BitKinex.
HTTP WebDAV share fails when downloading a large file from a mapped
network drive
When downloading or copying a file greater than 800 MB in Windows Explorer, the HTTP
WebDAV share fails. Use the following workaround to resolve this condition:
1. In Windows, select Start > Run and type regedit to open the Windows registry editor.
2. Navigate to:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
3. Change the registry parameter values to allow for the increased file size.
a. Set the value of FileAttributesLimitInBytes to 1000000 in decimal.
b. Set the value of FileSizeLimitInBytes to 2147483648 in decimal, which equals
2 GB.
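The same change can be scripted; the following is a sketch using the Windows reg utility rather
than the registry editor (verify the values against your environment before applying):
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v FileAttributesLimitInBytes /t REG_DWORD /d 1000000 /f
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v FileSizeLimitInBytes /t REG_DWORD /d 2147483648 /f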
Account
The user is represented by an account. Accounts are implemented as existing LDAP, AD, or
local user accounts. Accounts can contain up to 10,000 containers.
Container
Label
Object
A file uploaded by a user into a container within a user's account. Users can store an unlimited
number of objects in a container.
Object ID
A string unique to this object within a container, identifying the object. It does not have to
relate to the original file being uploaded. It can be a path and file name, such as:
/dir1/file1.txt
But it does not have to be a file system construct. It can be any string, such as:
Monthly_report-Jan2013
The file is stored on the file system under a file name defined by StoreAll, in the
given container's directory, based on a hash code of the object ID string. The file location in
the file system is not based on any paths in the object ID string.
Tutorial for using the HTTP StoreAll REST API object mode
This section walks you through using the major components of object mode. You will be shown
how to:
Create a container.
How to obtain:
To obtain the name of the virtual host for the HTTP share, enter the following command:
ibrix_httpshare -l
To obtain the IP address of the virtual host, enter the following command:
ibrix_httpvhost -l -v 1
To find the directory path and URL path of the HTTP share, enter the following command:
ibrix_httpshare -l -f <file_server_name>
The directory path of the HTTP share is under the Path column, and the URL Path
is under the URL Path column.
1. Create a container.
When you first create a container, the account directory, named as the numeric user ID of the
user creating the container, is automatically created as a subdirectory of the root of the HTTP
share.
See Terminology for StoreAll REST API object mode (page 138) for a list of requirements for
creating the container name.
The curl format for this command is the following:
curl -X PUT http://<IP_address:port>/<urlpath>/<account_name>/<container_name> -u <username>:<password>
NOTE:
The <IP_address:port> is the IP address and port of the virtual host for the HTTP
share.
The <account_name> is the name of the account under which you want to create the
container, for example, jsmith.
The <username> is the user name of the account creating the container. Only the account
owner can create a container, so <account_name> and <username> must be the
same.
The account and user name refer to either a StoreAll local user, an Active Directory user, or an
LDAP user. The user must be one that can authenticate to use the HTTP share. Use the
ibrix_localusers command to create a local user. See the HP StoreAll Storage CLI
Reference Guide for more information.
For example, for a local user:
curl -X PUT http://192.168.2.2/obj/localuser1/
container-a -u localuser1:mypassword
HTTP version of the command
PUT /<urlpath>/<account_name>/<container_name> HTTP/1.1
CURL version of the command for Active Directory users
You can use any of the following formats for Active Directory users:
NOTE:
You can provide the <domain_name> and <account_name> three different ways in
the curl command:
In the first format, double backslashes are used to preserve (escape) the backslash
separator between username and domain name:
curl -X PUT http://<IP_address:port>/<urlpath>/<domain_name>
\\<account_name>/<container_name> -u <domain_name>\\
<username>:<password>
In the second format, double quotes are used to preserve the backslash:
curl -X PUT "http://<IP_address:port>/<urlpath>/<domain_name>
\<account_name>/<container_name>" -u "<domain_name>\
<username>:<password>"
In the third format, the %5C URL encoding of the backslash is used in the URL, but it
cannot be used in the -u user parameter:
curl -X PUT http://<IP_address:port>/<urlpath>/<domain_name>
%5C<account_name>/<container_name> -u <domain_name>\\
<username>:<password>
As shown in the following example:
curl -X PUT http://192.168.48.204/obj/administrator/
activedomaincontainer -u [email protected]:mypassword
HTTP version of the command:
PUT /<urlpath>/<domain_name>%5C<account_name>/<container_name> HTTP/1.1
The %5C is the URL encoding for a backslash.
2.
3.
To create an empty object in your new container, enter the following command:
4. View the list of objects in the container. See Viewing the contents of a container (page 144).
5.
qa1\jsmith is the user making the HTTP request, who has been granted read access
to objects in the administrator's "container-a" container.
Double backslashes were used in this example instead of quotes to preserve the backslash.
HTTP version of the command
GET /<urlpath>/<account_name>/<container_name> HTTP/1.1
curl http://<IP_address>:<port>/<urlpath>/
<account_name> -u <username>:<password>
For example:
curl http://192.168.2.2/obj/qa1\\
administrator -u qa1\\
administrator:mypassword
The list of all containers created by the user associated with the given account is returned in
JSON format. For example:
[
   {
      "name":"container-a",
      "attributes" : {
         "system::size" : 4096,
         "system::ownerUserId" : "administrator",
         "system::permissions" : 700
      }
   },
   {
      "name":"container-b",
      "attributes" : {
         "system::size" : 4096,
         "system::ownerUserId" : "administrator",
         "system::permissions" : 775
      }
   }
]
The system::size refers to the number of bytes used by the directory inode representing the
container on the StoreAll server (initially 4096 for any new directory), not the number of objects
in the container. In this example, the permissions for container-a are the default 700, but the
permissions for container-b have been changed by qa1\\administrator to 775.
curl http://<IP_address:port>/<urlpath>/
<account_name>/<container name> -u <user_name>:<password>
For example:
curl http://192.168.2.2/obj/qa1\\
administrator/container-a -u qa1\\
administrator:mypassword
The list of all objects in the container is returned in JSON format.
NOTE: Although the system::permissions are shown as 666, access to all objects is subject only
to the container permissions assigned by the account owning the container. The default permissions
for a container are 700, allowing object access only to the account owner. Permissions cannot be
assigned to individual objects via the REST API.
The HTTP StoreAll REST API object mode saves files on the file system under hashed names that
are generated when objects or files are uploaded, not under the actual names specified by the
user at upload time.
In the steps below, assume your user name is jsmith, and that you know the location of the hash
name for which you want to find the corresponding object ID.
To obtain the corresponding object ID string from a hash name:
1. To find the directory path of the HTTP share, enter the following command:
ibrix_httpshare -l -f ibrixFS
The directory path of the HTTP share is under the Path column, and the URL Path is under the
URL Path column.
2. Go to the directory path of the HTTP share by entering the following command:
[root@bv07-07 3ca]# cd /ibrixFS/objectStore/
In this example, /ibrixFS/objectStore/ is the directory path of the HTTP object
mode API share defined for the ibrixFS file system.
3. List the account directories; for example:
objectapi_group     4096 Dec  7 15:43 2003
ENAS\domain^users   4096 Dec  7 14:54 367002807
In this example, 2003 is the owner user ID of the user jsmith and 367002807 is the Active
Directory ID of another user (user1) in the ENAS domain. To find the user ID of a local user,
enter the command:
ibrix_localusers -l -u <username>
To find the user ID of an AD or LDAP user, contact the AD or LDAP servers.
4.
5.
6. Go to the container that holds the hash name of the object whose corresponding file name
you want to find:
[root@bv07-07 2003]# cd newcontainer
8.
9.
The first time a user creates a container, a directory with the numeric user ID of the user representing
that account, is created to hold the container. The container directory within this account directory
is the container name provided by the user in the container creation request. Subsequent containers
created by that user are also stored under the same account directory. Each container contains
the first level directory and then the second level directory containing the SHA-1 hash code for the
file object. The following diagram shows the directory layout for objects created in object mode
API shares.
All file objects have first and second level directories, regardless of any directory paths that might
be present in the user's object ID string. For example, assume you upload a file to
<container_name>/mydirectory/subdirectory1/subdirectory2/subdirectory3.
If you traverse the directory structure, the hash file would appear in its second-level directory.
All HTTP commands always contain the following path:
<URL of the file server>/<URL Path of the HTTP
share>/<account_name>/<container_name>
NOTE: The hash name is based on the object ID string, not on the content. If you have two
different objects with the same ID string, they will be in the same hash directory within the container.
If you upload a file to a container with the same object ID string as an existing object, it replaces
the existing object.
To find the hash name corresponding to the object ID string of an object stored via the object mode
API:
1.
2.
To determine the names of the two directory levels within the container where this object is
stored, calculate the hex value of the 10 least significant bits of the returned hash code, then
the next 10 least significant bits. In the above example, these values are 33a and 2ca, in
lowercase letters. The first hexadecimal value is the second-level directory name. The second
hexadecimal value is the first-level directory name. So, the above file will be located at the
path:
<Path of object mode REST API share>/<user ID>/<container
name>/2ca/33a
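As a sketch only, assuming the hash code is the SHA-1 of the object ID string (the object ID shown
is illustrative), the two directory levels can be derived on a Linux client as follows:
hash=$(printf '%s' 'mydirectory/subdirectory1/file1.txt' | sha1sum | awk '{print $1}')
low=$(( 16#${hash: -5} ))                                # last 20 bits of the hash code
second_level=$(printf '%03x' $(( low & 0x3FF )))         # 10 least significant bits
first_level=$(printf '%03x' $(( (low >> 10) & 0x3FF )))  # next 10 bits
echo "$first_level/$second_level"                        # directory levels within the container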
List Containers
Type of Request: Account Services
Description: Returns the list of containers for a user account.
HTTP command:
GET /<urlpath>/<account_name> HTTP/1.1
CURL command (Enter on one line):
curl http://<IP_address:port>/<urlpath>/
<user_id> -u <user_name>:<password>
Create Container
Type of Request: Container services
Description: Creates a container. See Terminology for StoreAll REST API object mode (page 138)
for information regarding the naming requirements.
HTTP command:
PUT /<urlpath>/<account_name>/<container_name>
HTTP/1.1
You can provide the <domain_name> and <account_name> three different ways in the
curl command:
In the first format, double backslashes are used to preserve (escape) the backslash
separator between username and domain name:
curl -X PUT http://<IP_address:port>/<urlpath>/<domain_name>
\\<account_name>/<container_name> -u <domain_name>
\\<username>:<password>
As shown in the following example:
curl -X PUT http://192.168.2.2/obj/qa1\\administrator/
activedomaincontainer -u qa1\\administrator
In the second format, double quotes are used to preserve the backslash:
curl -X PUT "http://<IP_address:port>/<urlpath>/<domain_name>
\<account_name>/<container_name>" -u "<domain_name>
\<username>:<password>"
In the third format, the %5C URL encoding of the backslash is used in the URL, but it cannot
be used in the -u user parameter:
curl -X PUT http://<IP_address:port>/<urlpath>/<domain_name>
%5C<account_name>/<container_name> -u <domain_name>
\\<username>:<password>
As shown in the following example:
curl -X PUT http://192.168.48.204/obj/administrator
/activedomaincontainer -u [email protected]
:mypassword
List Objects
Type of Request: Container services
Description: Lists the objects in a container.
HTTP command:
GET /<urlpath>/<account_name>/<container_name> HTTP/1.1
CURL command (Enter on one line):
curl http://<IP_address:port>/<urlpath>/<account_name>/
<container name> -u <user_name>:<password>
The list of all objects in the container will be returned in JSON format.
Delete Container
Type of Request: Container services
Description: Deletes the container.
HTTP command:
DELETE /<urlpath>/<account_name>/<container_name> HTTP/1.1
CURL command (Enter on one line):
curl -X DELETE http://<IP_address:port>/<urlpath>/<user_id>/
<container_name> -u <username>:<password>
Create/Update Object
Type of Request: Object Requests
Description: Uploads an object into a container.
HTTP command:
PUT /<urlpath>/<account_name>/<container_name>/<object_id> HTTP/1.1
CURL command (Enter on one line):
curl -T <local_pathname> http://<IP_address:port>/
<urlpath>/<account_name>/<container_name>/
<object_id> -u <username>:<password>
The user_id is the ID of the authenticated user when the user is putting an object into its
own account.
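For example, the following sketch uploads a local file as the object Monthly_report-Jan2013 into
container-a under the jsmith account (the IP address, local path, and password are placeholders):
curl -T /tmp/report.pdf http://192.168.2.2/obj/jsmith/container-a/Monthly_report-Jan2013 -u jsmith:mypassword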
Retrieve Object
Type of Request: Object Requests
Description: Downloads an object from a container.
HTTP command:
GET /<urlpath>/<account_name>/<container_name>/<object_id> HTTP/1.1
CURL command (Enter on one line):
curl -o <local_pathname> http://<IP_address:port>/<urlpath>/
<account_name>/<container_name>/<object_id> -u <username>:<password>
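For example, the following sketch retrieves the same illustrative object from the Create/Update
Object example and saves it locally (the values are placeholders):
curl -o /tmp/report.pdf http://192.168.2.2/obj/jsmith/container-a/Monthly_report-Jan2013 -u jsmith:mypassword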
Delete Object
Type of Request: Object Requests
Description: Deletes an object.
HTTP command:
DELETE /<urlpath>/<account_name>/<container_name>/<object_name> HTTP/1.1
CURL command (Enter on one line):
curl -X DELETE http://<IP_address:port>/<urlpath>/<user_id>/
<container_name>/<objectname> -u <username>:<password>
Component overview
The StoreAll REST API for the file-compatible mode has a number of components, such as custom
metadata assignments, metadata queries, and retention properties assignments.
Metadata queries
You can issue StoreAll REST API commands that query the pathname and custom and system
metadata attributes for a set of files and directories. Queries can be augmented with a search
criterion for a certain system or custom attribute; only files and directories that match the criterion
are included in the results. The query can specify a single file or a directory. If identifying a directory,
the user can query all files in that directory only, or all files in all subdirectories of that directory
recursively.
HTTP command:
PUT /<urlpath>[/<pathname>]?[version=1]assign=<attribute1>='<value1>'
[,<attribute2>='<value2>'] HTTP/1.1
curl command
Enter the following command on one line:
curl -g -X PUT
"http[s]://<IP_address>:<port>/<urlpath>[/<pathname>]?[version=1]
assign=<attribute1>='<value1>'[,<attribute2>='<value2>']"
When using this syntax, note the following:
Optional parameters are shown in square brackets [ and ]. Everything enclosed in the brackets
can be omitted from the request. Do not include the square brackets in the request. For example,
the API supports either http or https for all requests, hence the http[s] nomenclature.
Parameters are shown in angle brackets < and >. Replace the parameter with the actual value,
without the angle brackets.
Other characters shown in the syntax (such as =, ?, &, and /) must also be entered as-is in
the request and sometimes must be URL-encoded.
All parameters before the ? (such as pathname) should be entered as strings without any
surrounding quotes in standard URL format.
All parameters after the ? (the query string in HTTP parlance) are either commands, attribute
names, or literals:
Attribute names must be 80 characters or less. The first character must be alphabetic (a-z
or A-Z), followed by a sequence of alphanumeric characters or underscores. No other
characters are allowed. Colon characters (:) are allowed in system attribute names. All
attribute names are case-sensitive.
Literal strings must be enclosed in single quotes. Non-escaped UTF-8 characters are
allowed. Literals are case-sensitive. Any single quotes that are part of the string must be
escaped with a second single quote (no double quotes). For example:
'Dave''s book'
Literal numeric values must not be enclosed by quotes, and are always in decimal (0-9).
All HTTP query responses generated by the API code follow the JSON standard. No XML
response format is provided at this time.
HTTP request messages have a practical limit of about 2000 bytes, and it can be less if certain
proxy servers are traversed in the network path.
URL encoding
HTTP query strings are URL-decoded by the API code. API clients must encode special characters,
such as greater-than character (>), by replacing them with their hexadecimal equivalent values as
shown by the examples in this section.
The API's URL decoder interprets certain special characters properly without being URL encoded.
Before the question mark character (?) in any HTTP request URL, the following characters are safe
and do not need to be URL encoded:
/ : - _ . ~ @ #
After the question mark character (?), the following characters are safe and do not need to be URL
encoded:
= & #
All other characters must be URL encoded as their hexadecimal value as described in the ISO-8859-1
(ISO-Latin) standard. For example, the plus character (+) must be encoded as %2B, and the greater
than character (>) must be encoded as %3E.
Spaces can be encoded as either %20 or as the plus character (+), such as "my%20file.txt"
or "my+file.txt" for the file "my file.txt". The plus character (+) is converted to a space
when the URL is decoded by the API code. To include a plus character (+) in the URL, encode it
as %2B, such as "A%2B" instead of "A+".
If you are using a tool such as curl to send the HTTP request, the tool might URL-encode certain
characters automatically, although you might have to enclose at least part of the URL in quotes for
it to do so. The exact behavior depends on the tool.
In the curl examples shown in this section, the entire URL is enclosed in double quotes so that the
non-encoded characters can be shown for readability. The curl tool URL-encodes all required
characters within the double quotes correctly. If you are using a different tool or constructing the
URL programmatically or manually, ensure that the right characters are URL-encoded before sending
it over HTTP to the API.
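For example, the following sketch (using the example share from this chapter) sends a size query
with the greater-than operator pre-encoded as %3E, so no special shell quoting is needed:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=system::size&query=system::size%3E2048"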
Pathname parameters
The pathname parameter provided in HTTP requests throughout the syntax must be specified as a
relative path from the <urlpath>, including the file name.
However, the system metadata attribute system::path available for metadata queries must be
specified as a path relative to the mount point of the StoreAll file system. Paths are stored in the
metadata database by this technique.
error. Any request with the version field and a value less than or equal to the current version, is
handled correctly by the new API version unless the capability has been removed or is beyond the
support lifetime of the product.
HTTP syntax
The HTTP request line format is:
PUT /<urlpath>/<pathname> HTTP/1.1
The file's contents are supplied as the HTTP message body.
The equivalent curl command format is:
curl -T <local_pathname>
http[s]://<IP_address>:<port>/<urlpath>/<pathname>
NOTE: If the urlpath does not exist, an HTTP 405 error is returned with the message (Method Not
Allowed).
See Using HTTP (page 114) for information about the IP address, port, and URL path.
Parameter
Description
local_pathname
The pathname of the file, stored on the clients system, to be uploaded to the HTTP share.
pathname
The pathname to be assigned to the new file being created on the HTTP share, if the file
does not yet exist. If the file does exist, the file will be overwritten. The pathname should
be specified as a relative path from the <urlpath>, including the file's name.
Example
curl -T temp/a1.jpg https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg
This example uploads the file a1.jpg, stored on the client's machine in the temp subdirectory of
the user's current directory, to the HTTP share named ibrix_share1.
The share is accessed by the IP address 99.226.50.92. Because it is accessed using the standard
HTTPS port (443), the port number is not needed in the URL.
The file is created as filename xyz.jpg in the subdirectory lab/images on the share. If the file
already exists at that path in the share, its contents are overwritten by the contents of a1.jpg,
provided that StoreAll permissions and retention settings on that file and directory allow it. If the
overwriting is denied, an HTTP error is returned.
If the local file does not exist, the response behavior depends on the client tool. In the case of
curl, it returns an error message, such as the following:
curl: can't open '/temp/a1.jpg'
Download a file
This command transfers the contents of a file to the client from the HTTP share. Download capability
already exists in the StoreAll HTTP shares feature, and it is documented here for completeness. If
the file does not exist, a 404 Not Found HTTP error is returned, in addition to HTML output such
as the following:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"><html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /api/myfile.txt was not found on this server.</p>
</body></html>
If using curl, the HTML output is saved to the specified local file as if it were the contents of the
file.
The HTTP command is sent in the form of an HTTP GET request.
HTTP syntax
The HTTP request line format is:
GET /<urlpath>/<pathname> HTTP/1.1
The equivalent curl command format is:
curl -o <local_pathname>
http[s]://<IP_address>:<port>/<urlpath>/<pathname>
See Using HTTP (page 114) for information about the IP address, port, and URL path.
Parameter
Description
local_pathname
The pathname of the file to be downloaded from the HTTP share and stored on the client's
system.
pathname
The pathname of the existing file on the HTTP share to download to the client.
Example
curl -o temp/a1.jpg http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg
This example downloads an existing file called xyz.jpg in the lab/images subdirectory of the
ibrix_share1 HTTP share. The file is created with the filename a1.jpg on the client system, in
the subdirectory temp of the user's current directory.
If the file already exists at that path on the client, its contents are overwritten by the contents of
xyz.jpg, provided that the local client's permissions and retention settings on that file and directory
allow it. If the overwriting is denied, a local client system-specific error message is returned.
Delete a file
This command removes a file from the StoreAll file system by using the HTTP share interface. File
deletion is subject to StoreAll permissions on the file and directory, and it is subject to retention
settings on that file system. If file deletion is denied, an HTTP error is returned. If the file does not
exist, a 404 Not Found HTTP error is returned.
HTTP syntax
The HTTP request line format is:
DELETE /<urlpath>/<pathname> HTTP/1.1
The equivalent curl command format is:
curl -X DELETE http[s]://<IP_address>:<port>/<urlpath>/<pathname>
See Using HTTP (page 114) for information about the IP address, port, and URL path.
Parameter
Description
pathname
The pathname of the existing file on the HTTP share to be deleted.
Example
curl -X DELETE http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg
This example deletes the existing file called xyz.jpg in the lab/images subdirectory on the
ibrix_share1 HTTP share.
HTTP syntax
The HTTP request line format is:
PUT command
PUT /<urlpath>[/<pathname>]?[version=1]assign=<attribute1>='<value1>'
[,<attribute2>='<value2>'] HTTP/1.1
curl command
The equivalent curl command format is:
curl -g -X PUT "http[s]://<IP_address>:<port>/<urlpath>[/<pathname>]?
assign=<attribute1>='<value1>'[,<attribute2>='<value2>']"
See Using HTTP (page 114) for information about the IP address, port, and URL path.
If the urlpath does not exist, an HTTP 405 error is returned with the message (Method Not
Allowed).
Parameter
Description
pathname
The name of the existing file/directory on the HTTP share for which custom metadata is
being added or replaced.
Directory pathnames must end in a trailing slash /.
If the pathname parameter is not present, custom metadata is applied to the directory
identified by <urlpath>.
attribute[n]
The attribute name. Up to 15 attributes can be assigned in a single command. The first
character must be alphabetic (a-z or A-Z), followed by a sequence of alphanumeric
characters or underscores. No other characters are allowed. Attribute names must be
80 characters in length or less.
value[n]
The value to associate with this attribute. Currently, only a string value can be assigned,
and the value must be enclosed in single quotes. Future versions of the API may support
numeric or other value types. If the attribute already exists for this file or directory, its
value is replaced with the supplied value. Values must be 80 characters or less in length.
Example
curl -g -X PUT
"https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?assign=physician
='Smith,+John;+8136',scan_pass='17'"
This example assigns two custom metadata attributes to the existing file called xyz.jpg in the
lab/images subdirectory on the ibrix_share1 HTTP share: physician, with the value
Smith, John; 8136 (the + encodes a space), and scan_pass, with the value 17.
If the file exists but any attributes being deleted do not exist, no HTTP error status is returned, and
the non-existent attributes are silently ignored.
The HTTP command is sent in the form of an HTTP DELETE request.
HTTP syntax
The HTTP request line format is the following on one line:
DELETE /<urlpath>[/<pathname>]?[version=1]attributes=<attribute1>
[,<attribute2>] HTTP/1.1
The equivalent curl command format is the following on one line:
curl -g -X DELETE
"http[s]://<IP_address>:<port>/<urlpath>[/<pathname>]?[version=1]
attributes=<attribute1>[,<attribute2>]"
See Using HTTP (page 114) for information about the IP address, port, and URL path.
Parameter
Description
pathname
The name of the existing file/directory on the HTTP share for which custom metadata is
to be deleted.
Directory pathnames must end in a trailing slash /.
attribute[n]
The existing name(s) for the custom metadata attribute(s) to be deleted from the file or
directory custom metadata list.
Example
curl -g -X DELETE "http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?
attributes=physician,scan_pass"
This example deletes two custom metadata attributes from an existing file called xyz.jpg in the
lab/images subdirectory on the ibrix_share1 HTTP share. The first attribute to delete is
physician and the second is scan_pass.
Metadata queries
The API provides a command to query the metadata about a file or directory on a StoreAll HTTP
share. The command defines the file or directory to query, the metadata fields to return, and how
to filter the list of files and directories returned based on metadata criteria. All queries are performed
on the Express Query database, requiring no other file system access or scans.
The HTTP command is sent in the form of an HTTP GET request.
System metadata applies to all files and directories. Each file and directory stored in StoreAll
includes a fixed set of attributes comprising its system metadata. System metadata attributes
are distinguished from custom metadata attributes by the system:: prefix. System metadata
attributes cannot be deleted by the user through the API.
Custom metadata applies only to files and directories where the user assigns them. Custom
metadata names are user-defined, with value strings also defined by the user. Custom metadata
is meaningful to the user, but it is not used by StoreAll. Custom metadata can be added,
replaced, or deleted by the user (see Custom metadata assignment (page 152)).
Name: system::path
Type: string
Writable: no

Name: system::ownerUserId
Type: numeric
Example: 433
Writable: no

Name: system::size
Type: numeric
Example: 1025489
Writable: no

Name: system::ownerGroupId
Type: numeric
Writable: no

Name: system::onDiskAtime
Type: numeric
Example: 334642962.556708192 (see API date formats (page 155))
Writable: no

Name: system::lastChangedTime
Type: numeric

Name: system::lastModifiedTime
Type: numeric

Name: system::retentionExpirationTime
Type: numeric
Writable: yes (see Retention properties assignment (page 153))

Name: system::mode
Type: numeric
Description: The Linux mode/permission bits, a combination of the values shown by the Linux
man 2 stat command. See system::mode (page 165) for more information.
Example: A decimal number, such as 33060 for the octal value 0100444 (regular file, read-only
for owner/group/other).
Writable: no

Name: system::tier
Writable: no

Name: system::retentionState
Type: numeric
Description: The current WORM/retention state of the file, which is a combination of these bit
values:
0x01: WORM
0x02: Retained
0x04: (not used)
0x08: Under legal hold
Example: A decimal number, such as 11 for the bit value 0x0B (under legal hold, and retained,
and WORM).
Writable: partial (see system::worm)

Name: system::worm
Type: numeric
Example: true
Writable: yes, to true only, at most one time (see Retention properties assignment (page 153))

Name: system::deleteTime
Type: numeric

Name: system::lastActivityTime
Type: numeric
Description: The most recent of system::createTime, system::lastModifiedTime,
system::lastChangedTime, and system::deleteTime. This attribute is useful for determining the
last date/time at which a file had any modification activity. It is returned in query results only
if the request explicitly includes system::lastActivityTime as an attribute to be returned.

Name: system::lastActivityReason
Type: numeric
Description: A combination of these bit values, identifying which timestamps the last activity
changed:
0x1: system::createTime
0x2: system::lastModifiedTime
0x4: system::lastChangedTime
0x8: system::deleteTime
0x10: custom metadata assignment time (not queryable as a system:: attribute)
This attribute is returned in query results only if the request explicitly includes
system::lastActivityReason as an attribute to be returned.
Example: A decimal number, such as 6, signifying that the last activity on this file was a content
modification, which changes both lastModifiedTime (0x2) and lastChangedTime (0x4).
Writable: no
system::onDiskAtime
The atime inode field in StoreAll can be accessed as the system::onDiskAtime attribute from
the API. This field represents different concepts in the lifetime of a WORM/retained file, and it
often represents a concept other than the time of the file's last access, which is why the field was
named onDiskAtime rather than (for example) lastAccessedTime. (See Retention properties
assignment (page 153) for a description of this life cycle.)
Before a file is retained, whether in WORM state or not, atime represents the last accessed time,
as long as the file system is mounted with the non-default atime option. If the file system is
mounted with the default noatime option, atime is the file's creation time, and never changes
unless the file is retained (see the second bullet). See Creating and mounting file systems
(page 14) for more information about mount options.
While a file is in the retained state, atime represents the retention expiration time.
After retention expires, atime represents the time at which the file was first retained (even if
the file has been retained and expired more than once), and it never changes again, unless
the file is re-retained (see the second bullet).
If you have enabled the auditing of file read events, then reads are logged in the audit logs.
However, file reads do not update system::onDiskAtime even if the file reads are being
audited. All other file accesses modify the system::onDiskAtime with the current value of
atime. Therefore, before the file is retained (first bullet), if the file system is mounted with the
atime option, system::onDiskAtime represents the last accessed time before the last file
modification, not necessarily the current atime or the last accessed time. To list all read accesses
to a file, use the ibrix_audit_reports command as described in the CLI Reference Guide.
system::mode
The following system::mode bits are defined (in octal):
0140000  socket
0120000  symbolic link
0100000  regular file
0060000  block device
0040000  directory
0020000  character device
0010000  FIFO
0004000  set UID bit
0002000  set-group-ID bit
0001000  sticky bit
0000400  owner has read permission
0000200  owner has write permission
0000100  owner has execute permission
0000040  group has read permission
0000020  group has write permission
0000010  group has execute permission
0000004  others have read permission
0000002  others have write permission
0000001  others have execute permission
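For example, to interpret the decimal system::mode value 33060 from the earlier example, convert
it to octal on a Linux client and compare the result with the bits above:
printf '%o\n' 33060   # prints 100444: regular file (0100000) with read permission for owner, group, and others (0444)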
Wildcards
The StoreAll REST API provides three wildcards:
*
Returns all system and custom metadata attributes.
system::*
Returns all system metadata attributes.
custom::*
Returns all custom metadata attributes.
For wildcards that return system metadata attributes, the results will not include attributes that
describe deleted files (system::deleteTime, system::lastActivityTime, and
system::lastActivityReason).
Pagination
The StoreAll REST API provides a way for users to specify a portion of the total list of records (files
and directories) to return in the JSON query results.
Parameter
Description
skip
The number of records to skip before returning results. The value is zero-based.
top
The maximum number of records to return from the result set.
The skip and top parameters can be combined. For example, supplying both skip=100 and
top=2000 returns records 101 through 2100. By combining these two parameters, the user can
absorb a large result set in chunks, for example, records 1-2000, 2001-4000, and so on.
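As a sketch using the example share from this chapter, the following request returns records 101
through 2100 of the full result set:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=system::size&recurse&skip=100&top=2000"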
The following limitations apply:
Every query will be executed in full, even if only a subset of results is returned. For some
queries, this may place a substantive load on the system. Keeping top values as large as
possible will limit this load.
Because a query is executed for every request, there may be inconsistencies in query results
if files are created or deleted between API requests.
By default, if the skip parameter is not supplied, the results will not skip any records. Similarly,
if the top parameter is not supplied, the results will contain all records.
HTTP syntax
The HTTP request line format is the following on one line:
GET /<urlpath>[/[<pathname>]]?[version=1][attributes=<attr1>[,<attr2>,]]
[&query=<query_attr><operator><query_value>][&recurse][&skip=<skip_records>]
[&top=<max_records>][&ordered] HTTP/1.1
The equivalent curl command format is the following on one line:
curl -g "http[s]://<IP_address>:<port>/<urlpath>[/[<pathname>]]?
[version=1][attributes=<attr1>[,<attr2>,]]&query=<query_attr><operator><query_value>
[&recurse][&skip=<skip_records>][&top=<max_records>][&ordered]"
See Using HTTP (page 114) for information about the IP address, port, and URL path. If the
urlpath or pathname does not exist, a JSON output of no results is returned (see the JSON
response format (page 168)), and the HTTP status code 200 (OK) is returned rather than an HTTP
error such as 404 (Not Found).
Parameter
Description
pathname
The name of the existing file or directory on the HTTP share, if querying metadata of a
single file/directory. If not present, the query applies to the <urlpath>. Furthermore:
Directory pathnames must end in a trailing slash /.
If the &recurse identifier is supplied for a directory, the query applies to the entire
directory tree: the directory itself, all files in that directory, and all subdirectories
recursively.
If the &recurse identifier is not supplied and the pathname is for a directory, the
query operates only on the given directory and the files in that directory, but not on
subdirectories.
If the pathname is for a file, the query applies only to the file.
attr[n]
A system or custom metadata attribute to be returned in the query results.
query_attr
A system and/or custom metadata attribute to be compared against the value as the
query criterion. Only one attribute can be listed per command.
operator
The query operation to perform against the query_attr and value, one of:
= (equals exactly)
!= (does not equal)
< (less than)
<= (less than or equal to)
> (greater than)
>= (greater than or equal to)
Only for custom attributes and string-valued system attributes (for example,
system::path, system::tier):
~ (regular expression match)
!~ (does not match regular expression)
query_value
The value to compare against the query_attr using the operator. The value is either
a numeric or string literal. See General topics regarding HTTP syntax (page 153) for
details about literals.
recurse
If the recurse attribute is present, the query searches through the given directory and
all of its subdirectories. If the recurse attribute is not present, the query operates only
on the given file, directory, or directory of files (but not subdirectories). See pathname
earlier in this table for details.
skip_records
If this attribute is present, it defines the number of records to skip before returning any
results. The value is zero-based. See HTTP syntax (page 166).
max_records
If this attribute is present, it defines the maximum number of total records to return from
the result set. See HTTP syntax (page 166).
ordered
If this attribute is present, the list of files and attributes returned is sorted lexicographically
by file name. The use of ordered on large results sets might affect the performance of
the query. Without ordered, files might occur in any order in the result set.
Regular expressions
The arguments to the regular expression operators (~ and !~) are POSIX regular expressions, as
described in POSIX 1003.1-2008 at http://pubs.opengroup.org/onlinepubs/9699919799/,
section 9, Regular Expressions.
If no files or directories meet the criteria of the query (an empty result set), or if the urlpath or
pathname does not exist, then a JSON output of no results is returned, consisting of just an open
and close bracket on two separate lines:
[
]
Example queries
Get selected metadata for a given file
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?
attributes=system::size,physician"
This example queries only the file called xyz.jpg in the lab/images subdirectory on the
ibrix_share1 HTTP share. A JSON document is returned containing the system size value and
the custom metadata value for the physician attribute, for this file only.
issued queries to receive the first 2000 results. The client usually issues further queries until no more
results are returned.
Get selected metadata for all files in a given directory tree that match a system metadata
query
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=
system::size,physician&query=system::size>2048&recurse"
This example queries all files larger than 2 KB in the lab/images subdirectory of the
ibrix_share1 HTTP share, as well as all files in all subdirectories, recursively walking the
directory tree. A JSON document is returned containing the system size value and the custom
metadata value for the physician attribute, for all files and subdirectories larger than 2 KB in the
lab/images directory tree, as well as for the lab/images directory itself (if larger than 2 KB).
Get selected metadata for all files in a given directory tree that match a custom metadata query
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=
system::size,physician&query=department!='billing'&recurse"
This example queries all files that have a custom metadata attribute of department with a value
other than billing, in the lab/images subdirectory of the ibrix_share1 HTTP share,
in addition to the files in all subdirectories, recursively walking the directory tree. A JSON document
is returned containing the system size value and the custom metadata value for the physician
attribute, for all files and subdirectories not in the billing department in the lab/images
directory tree. Files without a department attribute are not included in the results.
Get all metadata for all files in a given directory tree that match a custom metadata query
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=
*&query=physician~'^S.*'&recurse"
This example queries all files that have a custom metadata attribute of physician with a value
that starts with S in the lab/images subdirectory of the ibrix_share1 HTTP share, in addition
to the files in all subdirectories, recursively walking the directory tree. A JSON document is returned
containing all attribute values, for all files and subdirectories in the lab/images directory tree
that match the custom metadata criterion.
Get all custom metadata for all files in a given directory tree that match a custom metadata query
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=
custom::*&query=physician~'^S.*'&recurse"
This example queries all files that have a custom metadata attribute of physician with a value
that starts with S in the lab/images subdirectory of the ibrix_share1 HTTP share, in addition
to the files in all subdirectories, recursively walking the directory tree. A JSON document is returned
containing all custom metadata attribute values, for all files and subdirectories in the lab/images
directory tree that match the custom metadata criterion.
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?query=system::path~'.*\.(gif|jpg)$'"
This example returns a JSON document that contains all files in the lab/images directory that
end in .gif or .jpg.
The following is one command line:
curl -g "http://99.226.50.92/ibrix_share1/lab/images/?attributes=system::createTime,system::lastChangedTime,system::lastModifiedTime,system::deleteTime&query=system::lastActivityTime>1334642962"
This example returns a JSON document that contains all files in the lab/images directory that have
experienced activity since April 17, 2012, 06:09:22 UTC/GMT. For live files, the following
attributes are returned: system::createTime, system::lastChangedTime, and
system::lastModifiedTime. For deleted files, system::deleteTime is returned.
HTTP syntax
The commands provided in this section should be entered on one line.
The HTTP request line format is the following on one line:
PUT
/<urlpath>/<pathname>?assign=[system::retentionExpirationTime=<retentionExpirationTime>]
[,system::worm='true'] HTTP/1.1
The equivalent curl command format is the following on one line:
curl -g -X PUT "http[s]://<IP_address>:<port>/<urlpath>/<pathname>?assign=[system::retentionExpirationTime=<retentionExpirationTime>][,system::worm='true']"
Either system::retentionExpirationTime or system::worm, or both, can be specified.
Parameter
Description
pathname
The name of an existing file on the HTTP share. The retention properties of
this file will be changed.
system::retentionExpirationTime If present, defines the date/time at which the file should expire from the
retained state. After that time, the file will still be WORM (immutable) forever,
but the file can be deleted. The date/time must be formatted according to
API date formats (page 155).
If the file is not currently in the retained state, the date/time is stored as the
file's atime, but retention rules are only applied to the file if
system::worm=true in this command or a later command.
If the file is already retained, the date/time is changed to
system::retentionExpirationTime, unless
system::retentionExpirationTime is earlier than the file's existing
retention expiration date/time and the file system's retention mode is set to
enterprise. In this case, an error is returned and the date/time is not
changed. The retention period can be shortened only in relaxed mode, not
in enterprise mode.
If not present, and system::worm is present, the default retention period
is applied to the file, if a default is defined for this file system. If no default
is applied, then the file becomes WORM (immutable) but not retained (so it
can still be deleted).
system::worm
This attribute sets the state of the file to WORM. If present, the value must be
the literal string true; no other value is accepted. At the same time, if the
atime (retention expiration date/time) is in the future, or if the file system's
default retention period is nonzero, it sets the retention expiration date/time
either to the atime (if it is in the future) or the default retention period.
A file's state can be changed to WORM only once. A file in WORM or
retained state cannot be reverted to non-WORM, and cannot be un-retained
through the StoreAll REST API. See the ibrix_reten_adm command or
the equivalent Management Console actions for administrative override
methods to un-retain a file.
As part of processing this command, the file may also be set to the retained state. This will occur
if the atime has already been set into the future, or if the file system's default retention period is
non-zero. The retention expiration time will be set to the atime (if in the future) or the default.
Example: Set a file to WORM and retained with a retention expiration date/time
curl -g -X PUT
"https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?assign=system::retentionExpirationTime=
1376356584,system::worm='true'"
In this example, the file state is changed to WORM and retained. The retention expiration date/time
is set to 13 Aug 2013 01:16:24. The file system default retention period is ignored.
Example: Set/change the retention expiration date/time without a WORM state transition
curl -g -X PUT
"https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?assign=system::retentionExpiration
Time=1376356584"
In this example, a file's retention expiration date/time is assigned, but no state transition to WORM
is performed.
If the file is not already retained, the atime is assigned this value, and it remains un-retained. But
the value will take effect if the file is ever transitioned to WORM in the future, either manually or
by autocommit. If the file is already retained, the retention expiration date/time will be changed
to this new value. If retention settings prohibit this, an error is returned.
access_log
error_log
The logs are in the following directory, on the Active FM server node of the cluster:
/usr/local/ibrix/httpd/debug/logs
By default, there is no activity written to the access_log file. To enable the HTTP Server to write
entries to the file for every HTTP access from a client, uncomment this line in the file /usr/local/
ibrix/httpd/conf/httpd.conf:
# CustomLog "debug/logs/access_log" common
Be aware that this log file can grow quickly from client HTTP accesses. Manage the size of this file
so that it does not fill up the local root file system. Enable it only when needed to diagnose HTTP
traffic.
Status code
Description
200 (OK)
The request completed successfully.
204 (No Content)
If no errors are encountered and there is no content to be returned that fills the StoreAll REST API
query conditions/restrictions, the status code 204 is returned in the message header.
400 (Bad Request)
If the URL parser in the StoreAll REST API detects an error in the URL it receives, it returns a 400
error. See the access and error logs for details.
404 (Not Found)
If the path and filename in the URL does not exist and the request is not a PUT (upload) of a
new file, the StoreAll REST API returns a 404 error.
500 (Internal Server Error)
If the StoreAll REST API encounters an error other than those described previously, it returns a
500 error. See the access and error logs for details.
The certificate contents (the .crt file) and the private key (the .key file) must be concatenated
into a single file.
The concatenated certificate file must include the headers and footers from the .crt and
.key files.
Before creating a real certificate, you can create a self-signed SSL certificate and test access with
it. Complete the following steps to create a test certificate that meets the requirements for use in a
StoreAll cluster:
1. Generate a private key file (server.key), creating a passphrase when prompted.
2.
Remove the passphrase from the private key file (server.key). When you are prompted for
a passphrase, enter the passphrase you specified in step 1.
cp server.key server.key.org
openssl rsa -in server.key.org -out server.key
rm -f server.key.org
3. Generate a certificate signing request (server.csr) from the private key.
4. Generate a self-signed certificate (server.crt) from the signing request.
5. Concatenate the certificate and private key files, including their headers and footers, into
a single file (for example, server.pem).
When adding a certificate to the cluster, use the concatenated file (server.pem in our example)
as the input for the GUI or CLI.
The following example shows a valid PEM encoded certificate that includes the certificate contents,
the private key, and the headers and footers:
-----BEGIN CERTIFICATE-----
MIICUTCCAboCCQCIHW1FwFn2ADANBgkqhkiG9w0BAQUFADBtMQswCQYDVQQGEwJV
UzESMBAGA1UECBMJQmVya3NoaXJlMRAwDgYDVQQHEwdOZXdidXJ5MQwwCgYDVQQK
EwNhYmMxDDAKBgNVBAMTA2FiYzEcMBoGCSqGSIb3DQEJARYNYWRtaW5AYWJjLmNv
bTAeFw0xMDEyMTEwNDQ0MDdaFw0xMTEyMTEwNDQ0MDdaMG0xCzAJBgNVBAYTAlVT
MRIwEAYDVQQIEwlCZXJrc2hpcmUxEDAOBgNVBAcTB05ld2J1cnkxDDAKBgNVBAoT
A2FiYzEMMAoGA1UEAxMDYWJjMRwwGgYJKoZIhvcNAQkBFg1hZG1pbkBhYmMuY29t
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDdrjHH/W93X7afTIUOrllCHw21
u31tinMDBZzi+R18r9SZ/muuyvG4kJCbOoQnohuir/s4aAEULAOnf4mvqLfZlkBe
25HgT+ImshLzyHqPImuxTEXvjG5H1sEDLNuQkHvl8hF9Wxao1tv4eL8TL5KqK1W6
8juMVAw2cFDHxji2GQIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAKvYJK8RXKMObCKk
ae6oJ36FEkdl/ACHCw0Nxk/VMR4dv9lIk8Dv8sdYUUqHkNAME2yOaRI190c5bWSa
MjhSjOOqUmmgmeDYlAu+ps3/1Fte5yl4ZV8VCu7bHCWx2OSy46Po03MMOu99JXrB
/GCKE8fO8Fhyq/7LjFDR5GeghmSw
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIICXgIBAAKBgQDdrjHH/W93X7afTIUOrllCHw21u31tinMDBZzi+R18r9SZ/muu
yvG4kJCbOoQnohuir/s4aAEULAOnf4mvqLfZlkBe25HgT+ImshLzyHqPImuxTEXv
jG5H1sEDLNuQkHvl8hF9Wxao1tv4eL8TL5KqK1W68juMVAw2cFDHxji2GQIDAQAB
AoGBAMXPWryKeZyb2+np7hFbompOK32vAA1vLZHUwFoI0Tch7yQ60vv2PBvlZCQf
4y06ik5xmkqLA+tsGxarx8DnXKUy0PHJ3hu6mTocIJdqqN0n+KO4tG2dvDPdSE7l
phX2sY9MVt4X/QN3eNb/F3cHjnM9BYEr0BY3mTkKXz61jzABAkEA+M3PProYwvS6
P8m4DenZh6ehsu4u/ycjmW/ujdp/PcRd5HBAWJasTXTezF5msugHnnNBe8F1i1q4
9PfL0C+kuQJBAOQXjrmPZxDc8YA/V45MUKv4eHHN0E03p84budtblHQ70BCLaO41
n267t3DrZfW+VtsVDVBMja4UhoBasgv3rGECQQCILDR6k2YMBd+OG/xleRD6ww+o
G96S/bvpNa7t6qFrj/cHmTxOgCDLv+RVHHG/B2lsGo7Dig2oeL30LU9aoUjZAkBV
KSqDw7PyitusS3oQShQQsTufGf385pvDi3yQFxhNcYuUschisCivumyaP3mZEBDz
yV9oLLz1UvqI79PsPfPhAkEAxSqebd1Ymqr2wi0RnKTmHfDCb3yWLPi57kc+lgrK
LUlxawhTzDwzTWJ9m4gQqRlAaXoIElfk6ITwW0g9Th5Ouw==
-----END RSA PRIVATE KEY-----
NOTE: When you are ready to create a real SSL certificate, consult the following site for a
description of the procedure:
http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html#selfcer
The certificate is saved on all file serving nodes in the directory /usr/local/ibrix/pki.
To add a certificate from the CLI, use the following command.
ibrix_certificate -a -c CERTNAME -p CERTPATH
For example:
# ibrix_certificate -a -c mycert -p server.pem
Run the command from the active Fusion Manager. To add a certificate for a different node, copy
that certificate to the active Fusion Manager and then add it to the cluster. For example, if node
ib87 is hosting the active Fusion Manager and you have generated a certificate for node ib86,
copy the certificate to ib87:
scp server.pem ib87:/tmp
Then, on node ib87, add the certificate to the cluster:
ibrix_certificate -a -c cert86 -p /tmp/server.pem
Exporting a certificate
If necessary, you can display a certificate and then copy and save the contents for future use. This
step is called exporting. Select the certificate on the Certificates panel and click Export.
Deleting a certificate
To delete a certificate from the GUI, select the certificate on the Certificates panel, click Delete,
and confirm the operation.
To delete a certificate from the CLI, use this command:
ibrix_certificate -d -c CERTNAME
Overview
The CRR service provides a method to replicate changes in a source file system on one cluster to
a target file system on either the same cluster (intra-cluster replication) or a second cluster (inter-cluster
replication). Both files and directories are replicated with remote replication, and no special
configuration of segments is needed. A remote replication task includes the initial synchronization
of the source and target file systems.
When selecting file systems for remote replication, you should be aware of the following:
Remote replication is a one-way process. Bidirectional replication of a single file system is not
supported.
The mountpoint of the source file system can be different from the mountpoint on the target
file system.
The directory path /mnt/ibrix is reserved for use by CRR for internal operations. Do not
use the /mnt/ibrix path for mounting any file systems, including ibrix. The CRR feature
does not work properly if /mnt/ibrix is occupied by another file system mount.
Cluster expansion (adding a new server) is allowed as usual on both the source and target.
Source or target file systems can be rebalanced while a remote replication job is in progress.
File system policies (ibrix_fs_tune) can be set on both the source and target without any
restrictions.
The Fusion Manager initializes remote replication. However, each file serving node runs its own
replication and synchronization processes, independent of and in parallel with other file serving
nodes. The individual daemons running on the file serving nodes perform the actual file system
replication.
The source-side Fusion Manager monitors the replication and reports errors, failures, and so on.
Run-once replication. This method replicates a single directory sub-tree or an entire file system from
the source file system to the target file system. Run-once is a single-pass replication of all files and
subdirectories within the specified directory or file system. All changes that have occurred since
the last replication task are replicated from the source file system to the target file system. File
systems specified as the replication source or target must exist. If a directory is specified as the
replication source, the directory must exist on the source cluster under the specified source file
system.
NOTE: Run-once can also be used to replicate a single software snapshot. This must be done
from the GUI.
You can replicate to a remote cluster (an intercluster replication) or the same cluster (an intracluster
replication).
Continuous: asynchronously replicates the initial state of a file system and any changes to it.
Snapshots cannot be replicated.
Run-once: replicates the current state of a file system, folder, or file system snapshot.
The examples in the configuration rules use three StoreAll clusters: C1, C2, and C3:
C1 has two file systems, c1ifs1 and c1ifs2, mounted as /c1ifs1 and /c1ifs2.
C2 has two file systems, c2ifs1 and c2ifs2, mounted as /c2ifs1 and /c2ifs2.
C3 has two file systems, c3ifs1 and c3ifs2, mounted as /c3ifs1 and /c3ifs2.
Remote replication is not supported between 6.1.x and 6.2 clusters in either direction if Express
Query is enabled on the 6.2 cluster.
Only one continuous Remote Replication task can run per file system. It must replicate from
the root of the file system; you cannot continuously replicate a subdirectory of a file system.
A continuous Remote Replication task can replicate to only one target cluster.
Replication targets are directories in a StoreAll file system. The following rules apply to targets:
Targets must be explicitly exported using CRR commands to make them available to CRR
replication tasks.
A subdirectory created beneath a CRR export can be used as a target by a replication task
without being explicitly exported in a separate operation. For example, if the exported target
is /c3ifs1/target1, you can replicate to folder /c3ifs1/target1/subtarget1 if the
folder already exists.
A cluster can be a target for one replication task at the same time that it is replicating data to
another cluster. For example, C1 can replicate /c1ifs1 to C2:/c2ifs1/target1 and C2
can replicate /c2ifs2 to C1:/c1ifs2/target2, with both replications occurring at the
same time.
A cluster can be a target for multiple replication tasks. For example, C1 can replicate /c1ifs1
to C3:/c3ifs1/target1 and C2 can replicate /c2ifs1 to C3:/c3ifs1/target2, with
both replications occurring at the same time.
NOTE: If a different file system is used for the target, the linkage can go back to the original
cluster.
For information about configuring intercluster replications, see Configuring the target export for
replication to a remote cluster (page 180).
The same cluster and a different file system. Configure either continuous or run-once replication.
You will need to specify a target file system and optionally a target directory (the default is
the root of the file system or the mount point).
The same cluster and the same file system. Configure run-once replication. You will need to
specify a file system, a source directory, and a target directory. Be sure to specify two different,
non-overlapping subdirectories as the source and target. For example, the following replication
is not allowed:
From <fs_root>dir1 to <fs_root>dir1/dir2
However, the following replication is allowed:
From <fs_root>dir1 to <fs_root>dir3/dir4
Planning Considerations
When planning for StoreAll Continuous Replication, consider the following:
All changes on the source are replicated to the target, including creation or deletion of files
and directories, whether planned or accidental. File system snapshots can be used on the
source and target clusters to protect against accidental file deletion.
If you only change the attributes of a file or directory, only the attribute changes are replicated.
This includes changes to extended attributes.
If you make any updates to the data blocks of a previously replicated file, the entire file is
replicated again, not just the changed blocks in the file.
Because of the way StoreAll replication works, it is important to understand how applications using
the system will modify files, to avoid unexpectedly large amounts of data being replicated.
Applications typically behave in one of the following ways:
The application rarely changes files, so most files are replicated only once.
The application completely replaces the old file when saving changes. Some applications
create a local temporary copy of a file in memory or on disk while you are working on it. The
application then overwrites the old version with the new version when saving changes. Because
the whole file is new, it is a candidate for replication after updates, regardless of the replication
technology used.
The application updates ranges of blocks in the file or appends data to the file. This will cause
a file to be replicated regardless of how much or how little data was changed.
Register source and destination clusters. The source and target clusters of a remote replication
configuration must be registered with each other before remote replication tasks can be
created.
Create a target export. This step identifies the target file system and directory for replication
and associates it with the source cluster. Before replication can take place, you must create
a mapping between the source cluster and the target export that receives the replicated data.
This mapping ensures that only the specified source cluster can write to the target export.
Identify server assignments to use for remote replication. Select the servers and corresponding
NICs to handle replication requests, or use the default assignments. The default server
assignment is to use all servers that have the file system mounted.
NOTE: Do not add or change files on the target system outside of a replication operation. Doing
this can prevent replication from working properly.
Table 17 Configuring the target export for replication to a remote cluster
To run the steps...    See...
In the GUI             GUI procedure
From the CLI           CLI procedure
This procedure must be run from the target cluster, and is not required or applicable for intracluster
replication.
Select the file system on the GUI, and then select Remote Replication Exports from the lower
Navigator. On the Remote Replication Exports bottom panel, select Add. The Create Remote
Replication Export dialog box allows you to specify the target export for the replication. The mount
point of the file system is displayed as the default export path. You can add a directory to the
target export.
The Server Assignments section allows you to specify server assignments for the export. Check the
box adjacent to Server to use the default assignments. If you choose to assign particular servers
to handle replication requests, select those servers and then select the appropriate NICs.
If the remote cluster does not appear in the selection list for Export To (Cluster), you will need to
register the cluster. Select New to open the Add Remote Cluster dialog box and then enter the
requested information.
If the remote cluster is running an earlier version of StoreAll software, you will be asked to enter
the clustername for the remote cluster. This name appears on the Cluster Configuration page on
the GUI for the remote cluster.
The Remote Replication Exports panel lists the replication exports you created for the file system.
Expand Remote Replication Exports in the lower Navigator and select the export to see the
configured server assignments for the export. You can modify or remove the server assignments
and the export itself.
CLI procedure
Use the following commands to configure the target file system for remote replication:
1. Register the source and target clusters with each other using the ibrix_cluster -r
command if needed. To list the known remote clusters, run ibrix_cluster -l on the source
cluster.
2. Create the export on the target cluster. Identify the target export and associate it with the
source cluster using the ibrix_crr_export command.
3. Identify server assignments for the replication export using the ibrix_crr_nic command.
The default assignment is to use all servers that have the file system mounted.
FSNAME is the target file system to be exported. The -p option exports a directory located under
the root of the specified file system (the default is the root of the file system). The -C option specifies
the source cluster containing the file system to be replicated.
Include the -P option if you do not want this command to set the server assignments. You will then
need to identify the server assignments manually with ibrix_crr_nic, as described in the next
section.
To list the current remote replication exports, use the following command on the target cluster:
ibrix_crr_export -l
To unexport a file system for remote replication, use the following command:
ibrix_crr_export -U -f TARGET_FSNAME [-p DIRECTORY]
Specify servers by their host name or IP address (use commas to separate the names or IP
addresses). A host is any server on the target cluster that has the target file system mounted.
Specify the network using the StoreAll software network name (NIC). Enter a valid user NIC
or the cluster NIC. The NIC assignment is optional. If it is not specified, the host name (or IP)
is used to determine the network.
A previous server assignment for the same export must not exist, or must be removed before
a new assignment is created.
The listed servers receive remote replication data over the specified NIC. To increase capacity,
you can expand the number of preferred servers by executing this command again with another
list of servers.
You can also use the ibrix_crr_nic command for the following tasks:
View server assignments for remote replication. The output lists the target exports and associated
server assignments on this cluster. The assigned servers and NIC are listed with a corresponding
ID number that can be used in commands to remove assignments.
ibrix_crr_nic -l
You can use CRR health reports to check the status of CRR activities on the source and target cluster.
To see a list of health reports for active replication tasks, click List Report on the Remote Replication
Tasks panel.
Select a report from the CRR Health Reports dialog box and click OK to see details about that
replication task.
If the health check finds an issue in the CRR operation, it generates a critical event.
Reports are generated on the source cluster. If the target cluster is running a version of StoreAll
software earlier than 6.2, only the network connectivity check is performed.
It takes approximately two minutes to generate a CRR health report. Reports are updated every
10 minutes. Only the last five CRR health reports are preserved.
On the CLI, use the following commands to view reports:
List reports:
ibrix_crrhealth -l
Show details for a report:
ibrix_crrhealth -i -n REPORTNAME
To see other reports for a specific task, expand Active Tasks > Remote Replication and then select
the task (crr-25 in the following example). Select Overall Status to see a status summary.
Select Server Tasks to display the state of the task and other information for the servers where the
task is running.
If you are replicating a snapshot, click Use a snapshot and then select the appropriate Snap Tree
and snapshot.
For replications to the same cluster and different file system, the Target Settings dialog box asks
for the target file system. Optionally, you can also specify a target directory in the file system.
For replications to the same cluster and file system, the Target Settings dialog box asks only for
the target directory. This field is required.
Use the -s option to start a continuous remote replication task. The applicable options are:
-f SRC_FSNAME
The source file system to be replicated.
-C TGT_CLUSTERNAME
The name of the target cluster.
-F TGT_FSNAME
The name of the target file system (the default is the same as the source file system).
-X TGTEXPORT
The remote replication target (exported directory). The default is the root of the file
system.
NOTE:
This option is used only for replication to a remote cluster. The file system
specified with -F and the directory specified with -X must both be exported from the
target cluster (target export).
-P TGTDIR
A directory under the remote replication target export (optional). This directory must
exist on the target, but does not need to be exported.
-R
-e
Use the -e option to provide a comma-separated list of file and directory exclude
patterns, which should be excluded during replication. Enter up to 16 patterns per
task. Enclose the list of patterns in double quotes, and use valid pattern syntax.
To exclude a directory and all its contents, use the following pattern: dir_name/***
To exclude any file starting with a particular pattern, use pattern*. To exclude any
file ending with a particular pattern, use *pattern. For example, to exclude text
files, use *.txt.
Omit the -o option to start a continuous replication task. A continuous replication task does an
initial full synchronization and then continues to replicate any new changes made on the source.
Continuous replication tasks continue to run until you stop them manually. Use the -o option for
run-once tasks. This option synchronizes single directories or entire file systems on the source and
target in a single pass. If you do not specify a source directory with the -S option, the replication
starts at the root of the file system. The run-once job terminates after the replication is complete;
however, the job can be stopped manually, if necessary.
Use -P to specify an optional target directory under the target export. For example, you could
configure the following replication, which does not include the optional target directory:
The -F option specifies the name of the target file system (the default is the same as the source file
system). The -P option specifies the target directory under the target file system (the default is the
root of the file system).
Use the -o option to start a run-once task. The -S option specifies a directory under the source
file system to synchronize with the target directory.
To see more detailed information, run ibrix_crr with the -i option. The display shows the status
of tasks on each node, as well as task summary statistics (number of files in the queue, number of
files processed). The query also indicates whether scanning is in progress on a given server and
lists any error conditions.
ibrix_crr -i [-f SRC_FSNAME] [-h HOSTNAME] [-C SRC_CLUSTERNAME]
The following command prints detailed information about replication tasks matching the specified
task IDs. Use the -h option to limit the output to the specified server.
ibrix_crr -i -n TASKIDS [-h HOSTNAME] [-C SRC_CLUSTERNAME]
The source and target file systems must use the same data retention mode (Enterprise or
Relaxed).
The default, maximum, and minimum retention periods must be the same on the source and
target file systems.
A clock synchronization tool such as ntpd must be used on the source and target clusters. If
the clock times are not in sync, file retention periods might not be handled correctly.
Multiple hard links on retained files on the replication source are not replicated. Only the first
hard link encountered by remote replication is replicated, and any additional hard links are
not replicated. (The retainability attributes on the file on the target prevent the creation of any
additional hard links). For this reason, HP strongly recommends that you do not create hard
links on files that will be retained if you wish to replicate them.
For continuous remote replication, if a file is replicated as retained, but later its retainability
is removed on the source file system (using the ibrix_reten_adm -c command or the File
Administration panel on the Management Console), the new file's attributes and any additional
changes to that file will fail to replicate. This is because of the retainability attributes that the
file already has on the target, which cause the file system on the target to prevent remote
replication from changing it. If necessary, use data retention management commands on the
corresponding file on the target to make the same changes.
When a legal hold is applied to a file (using the ibrix_reten_adm -h command or the
File Administration panel on the Management Console), the legal hold is not replicated on
the target. If the file on the target should have a legal hold, you will also need to set the legal
hold on that file. Likewise, you will need to release legal hold on the source and target file
separately.
If a file has been replicated to a target and you then change the file's retention expiration
time with the ibrix_reten_adm -e command or the File Administration panel on the
Management Console, the new expiration time is not replicated to the target. If necessary,
also change the file's retention expiration time on the target.
3. When the Run-Once replication is complete, restore shares to their original configuration on
the local site, and verify that clients can access the shares.
4. Redirect write traffic to the local site.
%L represents the string "-> SYMLINK", " => HARDLINK", or "" (where SYMLINK or HARDLINK
is a filename).
. means the item is not being updated (though it might have attributes that are being
modified).
d is for a directory.
D is for a device.
S is for a special file (for example, named sockets and FIFOs (first in, first out)).
The other letters in the %i string are the actual letters that are output if the associated
attribute for the file is being updated, or a dot (.) for no change. Three exceptions to this
are the following:
A newly created item replaces each letter with a plus sign (+).
An unknown attribute replaces each letter with a question mark (?). This situation
can happen when talking to an older version of ibrcfrworker.
c means the checksum of the file is different and will be updated by the file transfer
(requires --checksum and is not used in StoreAll version 6.0 or later).
s means the size of the file is different and will be updated by the file transfer.
t means the modification time is different and is being updated to the sender's value
(requires --times). An alternate value of T means that the time will be set to the transfer
time (without --times).
p means the permissions are different and are being updated to the sender's value
(requires --permissions).
o means the owner is different and is being updated to the sender's value (requires
--owner and super-user privileges).
g means the group is different and is being updated to the sender's value (requires
--group and the authority to set the group).
u means the atime is different and is being updated to the sender's value.
a means the CIFS ACL is different and is being updated to the sender's value.
x means the POSIX extended attributes are different and are being updated to the
sender's values.
Another possible output for %i occurs when deleting files: the %i string is "*deleting"
for each item that is being removed.
15:50:40),<jobid=5>,<4449>,.d..t......,./
15:50:40),<jobid=5>,<4449>,cd+++++++++,new_dir/
15:50:40),<jobid=5>,<4449>,<f+++++++++,new_dir/foo1.txt
15:50:40),<jobid=5>,<4449>,<f+++++++++,new_dir/foo10.txt
15:50:40),<jobid=5>,<4449>,<f+++++++++,new_dir/foo100.txt
15:50:40),<jobid=5>,<4449>,<f+++++++++,new_dir/foo11.txt
15:50:40),<jobid=5>,<4449>,<f+++++++++,new_dir/foo12.txt
You can ignore these error messages. The file was replicated successfully.
Overview
This section provides overview information for data retention and data validation scans.
Data retention
Data retention must be enabled on a file system. When you enable data retention, you can specify
a retention profile that includes minimum, maximum, and default retention periods that specify how
long a file must be retained. A file in a retention-enabled file system is in one of the following states:
Normal. The file is created read-only or read-write, and can be modified or deleted at any
time. A checksum is not calculated for normal files and they are not managed by data retention.
Write-Once Read-Many (WORM). The file cannot be modified, but can be deleted at any
time. A checksum is calculated for WORM files and they can be managed by data retention.
WORM-retained. The file cannot be modified, and cannot be deleted until its retention period
has expired.
NOTE: You can apply a legal hold to a WORM or WORM-retained file. The file then cannot be
deleted until the hold is released, even if the retention period has expired.
For WORM and WORM-retained files, the file's contents and certain file attributes cannot be
modified.
Also, no new hard links can be made to the file and the extended attributes cannot be added,
modified, or removed.
The following restrictions apply to directories in a file system enabled for data retention:
A directory cannot be moved or renamed unless it is empty (even if it contains only normal
files).
You can delete directories containing only WORM and normal files, but you cannot delete
directories containing retained files.
Default retention period. If a specific retention period is not applied to a file, the file will be retained
for the default retention period. The setting for this period determines whether you can manage
WORM (non-retained) files as well as WORM-retained files:
To manage both WORM (non-retained) files and WORM-retained files, set the default retention
period to zero. To make a file WORM-retained, you will need to set the atime to a date in
the future.
To manage only WORM-retained files, set the default retention period to a non-zero value.
Minimum and maximum retention periods. Retained files cannot be deleted until their retention
period expires, regardless of the file system retention policy. You can set a specific retention period
for a file; however, it must be within the minimum and maximum retention periods associated with
the file system. If you set a time that is less than the minimum retention period, the expiration time
of the period will be adjusted to match the minimum retention period. Similarly, if the new retention
period exceeds the maximum retention period, the expiration time will be adjusted to match the
maximum retention period. If you do not set a retention period for a file, the default retention period
is used. If that default is zero, the file will not be retained.
Autocommit period. Files that are not changed during this period automatically become WORM
or WORM-retained when the period expires. (If the default retention period is set to zero, the files
become WORM. If the default retention period is set to a value greater than zero, the files become
WORM-retained.) The autocommit period is optional and should not be set if you want to keep
normal files in the file system.
IMPORTANT: For a file to become WORM, its ctime and mtime must be older than the
autocommit period for the file system. On Linux, ctime means any change to the file, either its
contents or any metadata such as owner, mode, times, and so on. The mtime is the last modified
time of the file's contents.
Retention mode. Controls how the expiration time for the retention period can be adjusted:
Enterprise mode. The expiration date of the retention period can be extended to a later date.
Relaxed mode. The expiration date of the retention period can be moved earlier (shortening
the period) or extended to a later date.
The autocommit and default retention periods determine the steps you will need to take to make a
file WORM or WORM-retained. See Setting a normal file to WORM or WORM-retained (page 202)
for more information.
Data validation scans protect against degradation of on-disk data over time, which can change
the stored bit values even if no accesses to the data are performed.
A data validation scan computes hash sum values for the WORM, WORM-retained, and
WORM-hold files in the scanned file system or subdirectory and compares them with the values
originally computed for the files. If the scan identifies changes in the values for a particular file,
an alert is generated on the Management Console. You can then replace the bad file with an
unchanged copy from an earlier backup or from a remote replication.
NOTE:
The time required for a data scan depends on the number of files in the file system or subdirectory.
If there are a large number of files, the scan could take up to a few weeks to verify all content on
storage. A scheduled scan will quit immediately if it detects that a scan of the same file system is
already running.
You can schedule periodic data validation scans, and you can also run on-demand scans.
The default retention period determines whether you can manage WORM (non-retained) files as
well as WORM-retained files. To manage only WORM-retained files, set the default retention
period to a non-zero value. WORM-retained files then use this period by default; however, you
can assign a different retention period if desired.
To manage both WORM (non-retained) and WORM-retained files, uncheck Set Default Retention
Period, which sets the default retention period to 0 seconds. When you make a WORM file retained,
you will need to assign a retention period to the file.
The Set Auto-Commit Period option specifies that files will become WORM or WORM-retained if
they are not changed during the specified period. (If the default retention period is set to zero, the
files become WORM. If the default retention period is set to a value greater than zero, the files
become WORM-retained.) To use this feature, check Set Auto-Commit Period and specify the time
period. The minimum value for the autocommit period is five minutes, and the maximum value is
one year. If you plan to keep normal files on the file system, do not set the autocommit period.
Enable Data Validation. Check this option to schedule periodic scans on the file system. Use the
default schedule, or select Modify to open the Data Validation Scan Schedule dialog box and
configure your own schedule.
Enable Report Data Generation. Check this option to generate data retention reports. Use the
default schedule, or select Modify to open the Report Data Generation Schedule dialog box and
configure your own schedule.
Enable Express Query. Check this option to enable Express Query on the file system. See Express
Query (page 217), for details.
The retenMode option is required and is either enterprise or relaxed. You can specify any,
all, or none of the period options. retenDefPeriod is the default retention period,
retenMinPeriod is the minimum retention period, and retenMaxPeriod is the maximum
retention period.
The retenAutoCommitPeriod option specifies that files will become WORM or WORM-retained
if they are not changed during the specified period. (If the default retention period is set to zero,
the files become WORM. If the default retention period is set to a value greater than zero, the files
become WORM-retained.) The minimum value for the autocommit period is five minutes, and the
maximum value is one year. If you plan to keep normal files on the file system, do not set the
autocommit period.
When using a period option, enter a decimal number, optionally followed by one of these
characters:
s (seconds)
m (minutes)
h (hours)
d (days)
w (weeks)
M (months)
y (years)
If you do not include a character specifier, the decimal number is interpreted as seconds.
The following example creates a file system with Enterprise mode retention, with a default retention
period of 1 month, a minimum retention period of 3 days, a maximum retention period of 5 years,
and an autocommit period of 1 hour:
ibrix_fs -o "retenMode=Enterprise,retenDefPeriod=1M,retenMinPeriod=3d,
retenMaxPeriod=5y,retenAutoCommitPeriod=1h" -c -f ifs1 -s ilv_[1-4] -a
To enable data retention on an existing file system using the CLI, run this command:
ibrix_fs -W -f FSNAME -o "retenMode=<mode>,retenDefPeriod=<period>,retenMinPeriod=<period>,
retenMaxPeriod=<period>"
To enable data retention on an existing file system, created with StoreAll version 6.0 or earlier,
follow the steps to upgrade the file system as described in the HP StoreAll 9300/9320 Storage
Administrator Guide in the section, Upgrading the StoreAll software to the 6.2 release. Then,
configure the file system retention profile as described in the steps provided earlier in this section.
To view the retention profile from the CLI, use the ibrix_fs -i command, as in the following
example:
ibrix_fs -i -f ifs1
FileSystem: ifs1
=========================
{ }
RETENTION
Enterprise [default=15d,minimum=1d,maximum=5y]
Autocommit period is set and the default retention period is zero seconds:
Files remaining unchanged during the autocommit period automatically become WORM but
are not retained and can be deleted. To make a WORM file retained, set the atime to a time
in the future, either before or after the file becomes WORM.
Autocommit period is not set and the default retention period is zero seconds:
To make a normal file WORM, run a command to set the file to read-only. If the file was
created as read-only, the act of setting it to read-only will cause the file to become WORM,
even though the permissions will not change.
To make a WORM file retained, set the atime to a time in the future, either before or after the
file becomes WORM.
Autocommit period is not set and the default retention period is non-zero:
To make a normal file WORM-retained, run a command to set the file to read-only. If the file
was created as read-only, the act of setting it to read-only will cause the file to become WORM,
even though the permissions will not change. By default, the file uses the default retention
period.
To assign a different retention period to the WORM-retained file, set the atime to a time in
the future.
NOTE: If you are not using autocommit, files must explicitly be made read-only to make them
WORM or WORM-retained. Typically, you can configure your application to do this.
NOTE: For SMB users setting the access time manually for a file, the maximum retention period
is 100 years from the date the file was retained. For NFS users setting the access time manually
for a file, the retention expiration date must be before February 5, 2106.
The access time has the following effect on the retention period:
If the access time is set to a future date, the retention period of the file is set so that retention
expires at that date.
If the access time is not set, the file inherits the default retention period for the file system.
Retention expires at that period in the future, starting from the time the file is set read-only.
If the access time is not set and the default retention period is zero, the file will become WORM
but not retained, and can be deleted.
You can change the retention period if necessary; see Changing a retention period (page 206).
File administration
To administer files from the Management Console, select File Administration on the WORM/Data
Retention panel. Select the action you want to perform on the WORM/Data Retention File
Administration dialog box.
Each entry can be a fully-qualified path, such as /myfs1/here/a.txt. An entry can also
be relative to the file system mount point. For example, if myfs1 is mounted at /myfs1, the
path here/a.txt is a valid entry.
A relative path cannot begin with a slash (/). Relative paths are always relative to the mount
point; they cannot be relative to the user's current directory, unlike other UNIX commands.
A directory cannot be specified in a path list. Directories themselves have no retention settings,
and the command returns an error message if a directory is entered.
To apply an action to all files in a directory, you must specify the paths to the files. You can use
wildcards in the pathnames, such as /my/path/*,/my/path/.??*. The command does not
apply the action recursively; you must list subdirectories explicitly.
To apply a command to all files in all subdirectories of the tree, you can wrap the
ibrix_reten_adm command in a find script (or other similar script) that calls the command
for every directory in the tree. For example, the following command sets a legal hold on all files
in the specified directory, except for dot-hidden files such as .bashrc:
find /ibrixFS/mydir -type d -exec ibrix_reten_adm -h -f ibrixFS -P {}/*
\;
The following script includes files beginning with a dot, such as .a or .bashrc. (This includes
files uploaded to the file system, not file system files such as the .archiving tree.)
When you click OK, the Start a new Validation Scan dialog box appears. Change the path to be
scanned if necessary.
Go to the Schedule tab to specify when you want to run the scan.
When you click OK, the Start a new Validation Scan dialog box appears. Change the path to be
scanned if necessary and click OK.
To start an on-demand validation scan from the CLI, use the following command:
ibrix_datavalidation -s -f FSNAME [-d PATH]
The showvms command displays the hash sums stored for the file. For example:
# /usr/local/ibrix/sbin/showvms rhnplugin.py
VMSQuery returned 0
Path hash: f4b82f4da9026ba4aa030288185344db46ffda7b
Meta hash: 80f68a53bb4a49d0ca19af1dec18e2ff0cf965da
Data hash: d64492d19786dddf50b5a7c3bebd3fc8930fc493
last attempt: Wed Dec 31 17:00:00 1969
last success: Wed Dec 31 17:00:00 1969
changed: 0
In this example, the hash sums match and there are no inconsistencies. The 1969 dates appearing
in the showvms output mean that the file has not yet been validated.
Checksum corruption:
If the checksums of the <filesystem/file> and <fileFromBackup> are identical, the
.archiving directory may have been corrupted (a checksum corruption). If this is the case, you
must restore the checksums:
If only a few files are inconsistent and you want to postpone restoring the checksums, you can
back up the files with a checksum inconsistency, delete those files from the file system, and
restore the backed up files to the file system.
If many checksums are corrupted, there may be a hardware failure. In that case, the checksums
must be restored for the file system.
File corruption
If the checksums of the <filesystem/file> and <fileFromBackup> are not identical, there
is data (content) corruption.
To replace an inconsistent file, follow these steps:
1. Obtain a good version of the file from a backup or a remote replication.
2. If the file is retained, remove the retention period for the file, using the Management Console
or the ibrix_reten_adm -c command.
3. Delete the file administratively using the Management Console or the ibrix_reten_adm
-d command.
4. Copy/restore the good version of the file to the data-retained file system or directory. If you
recover the file using an NDMP backup application, the proper retention expiration period is
applied from the backup copy of the file. If you copy the file another way, you will need to
set the atime and read-only status.
The utilization report summarizes how storage is utilized between retention states and free space.
The next example shows the first page of a utilization report broken out by tiers. The results for
each tier appear on a separate page. The total size scales automatically, and is reported as MB,
GB, or TB, depending on the size of the file system or tier.
A data validation report shows when files were last validated and reports any mismatches. A
mismatch can be either content or metadata. The Number of Files scales automatically and is
reported as individual files, thousands of files, or millions of files.
If an error occurs during report generation, a message appears in red text on the report. Simply
run the report again.
retention
retention_by_tier
validation
validation_by_tier
utilization
utilization_by_tier
You cannot make any new hard links to the file. Doing so would increment the link-count
metadata in the file's inode, which is not allowed under WORM rules.
You can delete hard links (the original file system entry or a hard-link entry) without deleting
the other file system entries or the file itself. WORM rules allow the link count to be
decremented.
The source and target file systems must use the same retention mode (Enterprise or Relaxed).
The default, maximum, and minimum retention periods must be the same on the source and
target file systems.
A clock synchronization tool such as ntpd must be used on the source and target clusters. If
the clock times are not in sync, file retention periods might not be handled correctly.
Multiple hard links on retained files on the replication source are not replicated. Only the first
hard link encountered by remote replication is replicated, and any additional hard links are
not replicated. (The retainability attributes on the file on the target prevent the creation of any
additional hard links). For this reason, HP strongly recommends that you do not create hard
links on retained files.
For continuous remote replication, if a file is replicated as retained, but later its retainability
is removed on the source file system (using data retention management commands), the new
file's attributes and any additional changes to that file will fail to replicate. This is because of
the retainability attributes that the file already has on the target, which will cause the file system
on the target to prevent remote replication from changing it.
When a legal hold is applied to a file, the legal hold is not replicated on the target. If the file
on the target should have a legal hold, you will also need to set the legal hold on that file.
If a file has been replicated to a target and you then change the file's retention expiration
time with the ibrix_reten_adm -e command, the new expiration time is not replicated to
the target. If necessary, also change the file's retention expiration time on the target.
To avoid these errors, set the auto-commit period to a higher value (the minimum value is five
minutes and the maximum value is one year).
15 Express Query
Express Query provides a per-file system database of system and custom metadata, and audit
histories of system and file activity. When Express Query is enabled on the file system, you can
manage the metadata service, configure auditing, create reports from the audit history, assign
custom metadata and certain system metadata to files and directories, and query for selected
metadata from files.
NOTE: Express Query can be enabled only on file systems that have data retention enabled.
Brief Overview
Metadata Service
The processes that manage the set of per-file system metadata databases, including auditing of
file changes and accessing the database in response to REST API requests. See Managing the
metadata service (page 217).
Auditing
StoreAll REST API in File Compatibility mode
The StoreAll REST API share in file-compatible mode provides programmatic access to user-stored
files and their metadata. The metadata is stored in the HP StoreAll Express Query database in
the StoreAll cluster and provides fast query access to metadata without scanning the file system.
See HTTP-REST API file-compatible mode shares (page 152).
IMPORTANT: Express Query cannot be used with StoreAll REST API shares in object mode.
Restore the backup to a new StoreAll file system. See Restoring a backup to a new StoreAll
file system (page 218).
or
Restore the backup to an existing file system that has Express Query enabled. See Restore
to an existing file system that has Express Query enabled (page 219).
(Optional) Enable auditing, using the ibrix_fs -A [-f FSNAME] -oa ... CLI command
or in the Management Console.
(Optional) Create REST API shares. See Using HTTP (page 114).
Express Query re-synchronizes the file system and database by using the restored database.
This process might take some time.
Wait for the metadata resync process to finish. Enter the following command to monitor the
resync process for a file system:
ibrix_archiving -l
The status should be at OK for the file system before you proceed. Refer to the
ibrix_archiving section in the HP StoreAll Storage CLI Reference Guide for information
about the other states.
1. Remove all StoreAll REST API shares created in the file system by entering the following
command:
ibrix_httpshare -d -f <fs_name>
Then disable the Express Query settings on the file system by entering the following command:
ibrix_fs -T -D -f FSNAME
2. Delete the previously existing archive journal files that the file system creates for Express Query
to ingest:
rm -Rf <mountpoint>/.audit/*
3. Restore the backed up file system to this file system, overwriting existing files.
4. Re-enable Express Query on the file system, either in the Management Console or by the CLI
command:
ibrix_fs -T -E -f <FSNAME>
5. (Optional) Enable auditing, using the ibrix_fs -A [-f FSNAME] -oa CLI command
or in the Management Console.
6. (Optional) Create REST API shares. See Using HTTP (page 114).
7. Express Query re-synchronizes the file system and database by using the restored database
information. This process might take some time.
8. Wait for the metadata resync process to finish. Enter the following command to monitor the
resync process for a file system:
ibrix_archiving -l
The status should be at OK for the file system before you proceed. See the ibrix_archiving
section in the HP StoreAll Storage CLI Reference Guide for information about the other states.
Option
Description
--dbconfig
The metadata configuration file. Use only this path and file name:
/usr/local/Metabox/scripts/startup.xml
--database <dbname>
The name of the database to export.
--outputfile <fname>
The name of the output file.
--user ibrix
The username for accessing the database. Use only the ibrix
username.
Option
Description
-f <FSname>
The name of the file system to import into.
-n <Fname>
The name of the file to import.
-t <TYPE>
The type of metadata to import (custom or audit).
The following command imports custom metadata exported by the MDExport script:
MDimport -f newIbrixFs -t custom -n /home/mydir/save.csv
The next command imports audit metadata exported by the ibrix_audit_reports command:
MDimport -f target -t audit -n simple_report_for_source_at_1341513594723.csv
The ibrix_audit_reports command automatically generates the file name
simple_report_for_source_at_1341513594723.csv.
Managing auditing
Auditing lets you:
Find out which events you have already captured in the Express Query database, and control
what is captured for file changes in the Express Query database. See Audit log
(page 222) for more information.
Gather information from audit reports as to what is in the Express Query database. See Audit
log reports (page 223) for more information.
Audit log
The audit log provides a detailed history of activity for specific file system events. The Audit Log
panel shows the current audit configuration.
To change the configuration, click Modify on the Audit Log panel. On the Modify Audit Settings
dialog box, you can change the expiration policies and schedule, and you can change the events
that are audited. The default Audit Logs Expiration Policy is 45 days. If you need to keep audit
history for a longer period of time, increase the time period. Enable and disable event types and
groups using the checkboxes and the arrows to move events between the Disabled and Enabled
lists. If an event is not selected for auditing, it cannot be included in an audit report. By default, all events are enabled. If files are accessed frequently, disable the File Read event to significantly improve system performance and reduce the audit log size. Monitor the space used by the <mount point>/.archiving/.database tree, which includes both current metadata and audit log history. To reduce space usage, reduce the number of event types enabled for auditing and/or shorten the Audit Logs Expiration Policy.
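For example, you can check that space with standard Linux tools, assuming the file system is mounted at /ifs1 (an illustrative mountpoint):
# Report the total space used by current metadata and audit log history
du -sh /ifs1/.archiving/.database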
The audit reports are in CSV (comma-separated) format and are placed in the following directory:
<file_system_mountpoint>/.archiving/reports
The file names have this format:
<report_type>_report_for_<FS_name>_at_<numeric_timestamp>.csv
For example:
file_report_for_ibrixFS_at_1343771410270.csv
simple_report_for_ibrixFS_at_1343772788085.csv
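For example, to list the most recently generated reports for a file system mounted at /ifs1 (an illustrative mountpoint):
# Newest reports first
ls -lt /ifs1/.archiving/reports | head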
Following are definitions for the less obvious fields in an audit report.
Field           Description
seqno           The sequence number of this event, increased incrementally for each event processed per node in the StoreAll cluster
eshost          The ID of the node in the StoreAll cluster that recorded this event (a hex string)
eventsuccess    Whether the audited operation succeeded
eventerrorcode  The error code returned by the audited operation
description     Currently unused
reserved1/2/3   Currently unused
POID_lo32/hi64  The Permanent Object ID that uniquely identifies the file within the StoreAll cluster (a 96-bit integer split in two parts)
*time[n]sec     The seconds and nanoseconds of that time, in UNIX epoch time, which is the number of seconds since the start of Jan 1, 1970 in UTC
mode            The Linux mode/permission bits (a combination of the values shown by the Linux man 2 stat command)
*hash, content*, meta*  Currently unused
To generate reports from the command line, use the ibrix_audit_reports command:
ibrix_audit_reports -t SORT_ORDER -f FILESYSTEM [-p PATH] [-b BEGIN_DATE]
[-e END_DATE] [-o class1[,class2,...]]
See the HP StoreAll Storage CLI Reference Guide for more information about this command,
including the events that can be specified for the report.
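For example, a command of the following form generates a report for a directory over a one-month window. The sort order and date format shown here are illustrative; see the CLI Reference Guide for the exact accepted values:
ibrix_audit_reports -t time -f ifs1 -p /ifs1/data -b 2013-01-01 -e 2013-01-31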
Audit log reports expiration policy: whether reports should be deleted after a specific number
of days, weeks, months, or years, or should never be deleted.
Audit log reports expiration schedule: the time each day at which expired audit reports are
deleted.
You can also set these options for one or more file systems using the ibrix_audit_reports command.
Be sure to monitor the space used by audit reports, especially if you are retaining them for a long
period of time.
On the CLI, use the ibrix_avconfig command to configure Antivirus support. Use the ibrix_av
command to update Antivirus definitions or view statistics.
To remove an external virus scan engine from the configuration, select that system on the Virus
Scan Engines panel and click Delete.
To add an external virus scan engine from the CLI, use the following command:
ibrix_avconfig -a -S -I IPADDR -p PORTNUM
The port number specified here must match the ICAP port number configured on the virus scan
engines.
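For example, to register a scan engine at 192.168.100.10 listening on the standard ICAP port (the IP address is illustrative; 1344 is the ICAP default):
ibrix_avconfig -a -S -I 192.168.100.10 -p 1344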
Use the following command to remove an external virus scan engine:
ibrix_avconfig -r -S -I IPADDR
NOTE: All virus scan engines should have the same virus definitions. Inconsistencies in virus
definitions can cause files to be rescanned.
Be sure to coordinate the schedules for updates to virus definitions on the virus scan engines and
updates of virus definitions on the cluster nodes.
On the CLI, use the following commands:
Schedule cluster-wide updates of virus definitions:
ibrix_av -t [-S CRON_EXPRESSION]
The CRON_EXPRESSION specifies the time for the virus definition update. For example, the
expression "0 0 12 * * ?" executes this command at noon every day.
View the current schedule:
ibrix_av -l -T
Allow (Default). All operations triggering scans are allowed to run to completion.
Deny. All operations triggering scans are blocked and returned with an error. This policy ensures that an infected file is not returned when Antivirus is not available.
Antivirus can become unavailable in situations such as the following:
The cluster nodes cannot communicate with the virus scan engines because of network issues.
The number of incoming scan requests exceeds the threads available on the cluster nodes to process the requests.
The Antivirus Settings panel shows the current setting for this policy. To toggle the policy, click
Configure AV Policy.
NOTE: If you configure the protocol-specific policy to CLOSE (Scan on close), older written files are not scanned automatically whenever the virus scan engine is updated with newer virus definitions. Also, there is a 35-second delay after a file closes (to flush all the data) before the file is subject to scanning; any read or open of the file during this time does not trigger a scan.
Because virus detection and updates of virus definition files always lag the discovery of new viruses, HP highly recommends configuring AV to use the Both option, which re-scans older written files on open whenever new virus definitions are provided, thereby ensuring protection against virus infections. Use Scan on Close only as an optimization to take the virus scan penalty at close time instead of at open time.
To set the policy:
1. Select Protocol Scan Settings from the lower Navigator tree. The AV Protocol Settings panel then displays the current setting.
2. To set or change the setting, click Modify on the panel and then select the appropriate setting from the Action dialog box.
Defining exclusions
Exclusions specify files to be skipped during Antivirus scans. Excluding files can improve
performance, as files meeting the exclusion criteria are not scanned. You can exclude files based
on their file extension or size.
By default, when exclusions are set on a particular directory, all of its child directories inherit those exclusions. You can override those exclusions for a child directory by explicitly setting exclusions on the child directory or by using the No rule option to stop exclusion inheritance at the child directory.
IMPORTANT: The exclusion by file extension feature is not supported for file objects stored under an HTTP StoreAll REST API share created in object mode. Even if the share is created under the file system on which you created the exclusion, the exclusion does not apply to the file objects present under that share in object mode. This situation occurs because the HTTP StoreAll REST API object mode references file objects with hash names.
To configure exclusions by using the Management Console:
1. Select an appropriate AV-enabled file system from the list.
2. Select the directory to which the exclusion applies, and then select the type of rule:
Inherited Rule/Remove Rule. Use this option to reset or remove exclusions that were explicitly set on the child directory. The child directory will then inherit exclusions from its parent directory. Also use this option to remove exclusions on the top-most level directory where exclusion rules have been set.
No rule. Use this option to remove or stop exclusions at the child directory. The child directory will no longer inherit the exclusions from its parent directory.
Custom rule. Use this option to exclude files having specific file extensions or exceeding a specific size. If you specify multiple file extensions, use commas to separate the extensions. To exclude all types of files from scans, enter an asterisk (*) in the file extension field. You can specify either file extensions or a file size (or both).
On the CLI, use the following options to specify exclusions with the ibrix_avconfig command:
-x FILE_EXTENSION Excludes all files having the specified extension, such as .jpg. If
you specify multiple extensions, use commas to separate the extensions.
-s FILE_SIZE Excludes all files larger than the specified size (in MB).
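For example, -x jpg,gif,tmp skips files with any of those extensions, and -s 100 skips files larger than 100 MB; the two options can be combined in a single ibrix_avconfig invocation. See the HP StoreAll Storage CLI Reference Guide for the complete exclusion syntax, including how to apply the exclusion to a specific directory.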
Run Antivirus scans when the system is not being heavily used.
Configure your Antivirus scans so that a single scan is not assigned a huge number of files in a subtree.
Do not run Antivirus scans on many file systems at the same time, because the AV daemon has limited resources.
An Antivirus scan task lets you specify a file system or directory path under which all files are subjected to Antivirus scans. This differs from on-access scanning, where a scan is triggered when an application accesses the file, typically during an open or read operation. On-access scanning is done automatically by the kernel. The AV feature lets you run or schedule periodic Antivirus scans on an entire file system or directory at any time.
Antivirus scans are independent of on-access scanning, and they can be run in parallel.
Antivirus scans are similar to on-access scans in that they continue to honor the exclusion rules you have defined.
When you set up an Antivirus scan, you are asked to enter a scan duration. The maximum scan duration is 168 hours (7 days). If a duration is not provided for the scan, all files in the given path are scanned without any timeout.
You can use the scheduling option to run Antivirus scans on multiple directories serially. For example, assume you have five directories on which you want to run Antivirus scans in a particular priority order. You could schedule an Antivirus scan to run on the first directory at a set time with a maximum duration of 2 hours (the value in the Duration of Scans text box). Then, schedule the scan task on the next directory to start 2 hours and 15 minutes later (the extra 15 minutes allows the previous Antivirus scan a few minutes for cleanup). Repeat these steps for the next three directories.
Management Console
To start a scan or schedule periodic scans on the Management Console:
1. Select the file system to be scanned from the Filesystems panel.
2. Select Active Tasks > Antivirus Scan from the lower Navigator panel.
3. Click Start on the Antivirus Task Summary panel. You can also click New on the Active Tasks panel and then select Antivirus Scan as the task type on the Starting a New Task dialog box.
4. Complete the Scan Settings tab on the New Antivirus Scan Task dialog box. Specify the directory path to be scanned and the maximum number of hours (optional) that the scan should run. At the end of that time, the scan is stopped and becomes an inactive task. You can view the scan statistics of an inactive task in the Inactive Tasks panel.
5. Antivirus scans can be scheduled or started immediately. If you click OK on the Scan tab without populating the Schedule tab, the scan starts immediately.
6. On the Schedule tab, click Schedule this task and then select the frequency (once, daily, weekly, monthly) and specify when the scan should run.
CLI
On the CLI, use the following command to start an Antivirus scan:
ibrix_avscan -s -f FSNAME -p PATH [-d DURATION]
The scan runs immediately.
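For example, the following command scans the directory /ifs1/data on file system ifs1 for a maximum of 4 hours (illustrative values, assuming the duration is specified in hours):
ibrix_avscan -s -f ifs1 -p /ifs1/data -d 4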
To monitor the scan, select the file system and then select Active Tasks > Antivirus Scan from the lower Navigator. The Antivirus Task Summary panel then shows current information for the scan.
For more information about an inactive task, select the task and click Details on the Inactive Tasks
panel. Inactive tasks cannot be restarted but can be deleted.
Inodes scanned indicates the files that were scanned by the Antivirus scan. Inodes might be marked as skipped when an Antivirus scan task runs on a file system in which AV becomes unavailable or a snap is taken.
Limitation 1: When a snap is taken, the quarantine utility cannot locate the snap file, because the link was formed with the new filename assigned after the snap was taken.
Limitation 2: When a snap is taken, the quarantine utility cannot track the original file, because the link was not created with its name. That file cannot be listed, reset, moved, or deleted by the quarantine utility.
Limitation 3: When a snap is taken, the quarantine utility displays both the snap name (which still has the original name) and the new filename, although they are the same file.
To enable a directory tree for snapshots, click Add on the Snap Trees panel.
You can create a snapshot directory tree for an entire file system or a directory in that file system.
When entering the directory path, do not specify a directory that is a parent or child of another
snapshot directory tree. For example, if directory /dir1/dir2 is a snapshot directory tree, you
cannot create another snapshot directory tree at /dir1 or /dir1/dir2/dir3.
IMPORTANT: The snapshot schedule can include any combination of hourly, daily, weekly, and monthly snapshots. Also specify the number of snapshots to retain on the system; when that number is reached, the oldest snapshot is deleted.
All weekly and monthly snapshots are taken at the same time of day. The default time is 9 pm. To change the time, click the time shown on the dialog box, and then select a new time on the Modify Weekly/Monthly Snapshot Creation Time dialog box.
To enable a directory tree for snapshots using the CLI, run the following command:
ibrix_snap -m -f FSNAME -P SNAPTREEPATH
SNAPTREEPATH is the full directory pathname, starting at the root of the file system. For example:
ibrix_snap -m -f ifs1 -P /ifs1/dir1/dir2
IMPORTANT: A snapshot reclamation task is required for each file system containing snap trees
that have scheduled snapshots. If a snapshot reclamation task does not already exist, you will need
to configure the task. See Reclaiming file system space previously used for snapshots (page 245).
The following CLI commands display information about snapshots and snapshot directory trees:
List all snapshots, or only the snapshots on a specific file system or snapshot directory tree:
ibrix_snap -l -s [-f FSNAME [-P SnapTreePath]]
List all snapshot directory trees, or only the snapshot directory trees on a specific file system:
ibrix_snap -l [-f FSNAME]
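For example, to list the snapshots for the snap tree created earlier:
ibrix_snap -l -s -f ifs1 -P /ifs1/dir1/dir2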
The ls and du commands report the size of a file depending on the version you are viewing. If you are looking at a snapshot, the commands report the size of the file when it was snapped. If you are looking at the current version, the commands report the current size.
The df command reports the total space used in the file system by files and snapshots.
The following example lists snapshots created on an hourly schedule for snap tree /ibfs1/users.
Using ISO 8601 naming ensures that the snapshot directories are listed in order according to the
time they were taken.
[root@9000n1 ~]# cd /ibfs1/users/.snapshot/
[root@9000n1 .snapshot]# ls
2011-06-01T110000_hourly 2011-06-01T190000_hourly
2011-06-01T120000_hourly 2011-06-01T200000_hourly
2011-06-01T130000_hourly 2011-06-01T210000_hourly
2011-06-01T140000_hourly 2011-06-01T220000_hourly
2011-06-01T150000_hourly 2011-06-01T230000_hourly
2011-06-01T160000_hourly 2011-06-02T000000_hourly
2011-06-01T170000_hourly 2011-06-02T010000_hourly
2011-06-01T180000_hourly 2011-06-02T020000_hourly
2011-06-02T030000_hourly
2011-06-02T040000_hourly
2011-06-02T050000_hourly
2011-06-02T060000_hourly
2011-06-02T070000_hourly
2011-06-02T080000_hourly
2011-06-02T090000_hourly
Users having access to the root of the snapshot directory tree (in this example, /ibfs1/users/)
can navigate the /ibfs1/users/.snapshot directory, view snapshots, and copy all or part
of a snapshot. If necessary, users can copy a snapshot and overlay the present copy to achieve
manual rollback.
NOTE:
Access to .snapshot directories is limited to administrators and NFS and SMB users.
Deleting snapshots
Scheduled snapshots are deleted automatically according to the retention schedule specified for the snapshot tree; however, you can delete a snapshot manually if necessary. You also need to delete on-demand snapshots manually. Deleting a snapshot does not free the file system space that was used by the snapshot; you will need to reclaim the space.
IMPORTANT:
Reclaim the file system space used by the snapshots (use ibrix_snapreclamation).
Select New on the Task Summary panel to open the New Snapshot Space Reclamation Task dialog
box.
Maximum Space Reclaimed. The reclamation task recovers all snapped space eligible for
recovery. It takes longer and uses more system resources than Maximum Speed. This is the
default.
Maximum Speed of Task. The reclamation task reclaims only the most easily recoverable
snapped space. This strategy reduces the amount of runtime required by the reclamation task,
but leaves some space potentially unrecovered (that space is still eligible for later reclamation).
You cannot create a schedule for this type of reclamation task.
If you are using the Maximum Space Reclaimed strategy, you can schedule the task to run
periodically. On the Schedule tab, click Schedule this task and select the frequency and time to
run the task.
To stop a running reclamation task, click Stop on the Task Summary panel.
Managing reclamation tasks from the CLI
To start a reclamation task from the CLI, use the following command:
ibrix_snapreclamation -r -f FSNAME [-s {maxspeed | maxspace}] [-v]
The reclamation task runs immediately; you cannot create a recurring schedule for it.
To stop a reclamation task, use the following command:
ibrix_snapreclamation -k -t TASKID [-F]
The following command shows summary status information for all reclamation tasks or only the tasks on the specified file systems:
ibrix_snapreclamation -l [-f FSLIST]
The following command provides detailed status information:
ibrix_snapreclamation -i [-f FSLIST]
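For example, a typical sequence starts a fast reclamation pass and then checks its status (ifs1 is an illustrative file system name):
ibrix_snapreclamation -r -f ifs1 -s maxspeed
ibrix_snapreclamation -l -f ifs1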
Backing up snapshots
Snapshots are stored in a .snapshot directory under the directory tree. For example:
# ls -alR /fs2/dir.tst
/fs2/dir.tst:
drwxr-xr-x 4 root root 4096 Feb 8 09:11 dir.dir
-rwxr-xr-x 1 root root 99999999 Jan 31 09:33 file.0
-rwxr-xr-x 1 root root 99999999 Jan 31 09:33 file.1
drwxr-xr-x 2 root root 4096 Apr 6 15:55 .snapshot
/fs2/dir.tst/.snapshot:
lrwxrwxrwx 1 root root 15 Apr 6 15:39 2011-04-06T15:39:57_ -> ../.@1302118797
lrwxrwxrwx 1 root root 15 Apr 6 15:55 2011-04-06T15:55:07_tst1 -> ../.@1302119707
/fs2/dir.tst/dir.dir:
-rwxr-xr-x 1 root root 99999999 Jan 31 09:34 file.1
NOTE: The links beginning with .@ are used internally by the snapshot software and cannot be
accessed.
To back up the snapshots, use the procedure corresponding to your backup method.
HP 9320 Storage: supported on the HP P2000 G3 MSA Array System or HP 2000 Modular Smart Array G2 provided with the platform.
HP 9300 Storage Gateway: supported on the HP P2000 G3 MSA Array System; HP 2000 Modular Smart Array G2; HP P4000 G2 models; HP 3PAR F200, F400, T400, and T800 Storage Systems (OS version 2.3.1 (MU3)); and Dell EqualLogic storage arrays (no arrays are provided with the 9300 system).
The block snapshot feature uses the copy-on-write method to preserve the snapshot regardless of
changes to the origin file system. Initially, the snapshot points to all blocks that the origin file system
is using (B in the following diagram). When a block in the origin file system is overwritten with
additions, edits, or deletions, the original block (prior to changes) is copied to the snapshot store,
and the snapshot points to the copied block (C in the following diagram). The snapshot continues
to point to the origin file system contents from the point in time that the snapshot was executed.
To create a block snapshot, first provision or register the snapshot store. You can then create a
snapshot from type-specific storage resources. The snapshot is active from the moment it is created.
You can take snapshots via the StoreAll software block snapshot scheduler or manually, whenever
necessary. Each snapshot maintains its origin file system contents until deleted from the system.
Snapshots can be made visible to users, allowing them to access and restore files (based on
permissions) from the available snapshots.
NOTE: By default, snapshots are read only. HP recommends that you do not allow writes to any
snapshots.
Setting up snapshots
This section describes how to configure the cluster to take snapshots.
To remove the registration information from the configuration database, use the following command.
The partition will then no longer be recognized as a repository for snapshots.
ibrix_vs -d -n STORAGENAME
To see detailed information for named snapshot partitions on either a specific array or all arrays,
use the following command:
ibrix_vs -i [-n STORAGENAME]
A snapshot scheme specifies the number of snapshots to keep and the number of snapshots to
mount. You can create a snapshot scheme from either the Management Console or the CLI.
The type of storage array determines the maximum number of snapshots you can keep and mount
per file system.
Array                           Maximum snapshots per file system
P2000 G3 MSA System/MSA2000 G2  32
P4000 G2                        32
EqualLogic array                8
For the P2000 G3 MSA System/MSA2000, the storage array itself also limits the total number of
snapshots that can be stored. Arrays count the number of LUNs involved in each snapshot. For
example, if a file system has four LUNs, taking two snapshots of the file system increases the total
snapshot LUN count by eight. If a new snapshot will cause the snapshot LUN count limit to be
exceeded, an error will be reported, even though the file system limits may not be reached. The
snapshot LUN count limit on P2000 G3 MSA System/MSA2000 arrays is 255.
The 3PAR storage system allows you to make a maximum of 500 virtual copies of a base volume.
Up to 256 virtual copies can be read/write copies.
Under Snapshot Configuration, select New to create a new snapshot scheme. The Create Snapshot
Scheme dialog box appears.
On the General tab, enter a name for the strategy and then specify the number of snapshots to
keep and mount on a daily, weekly, and monthly basis. Keep in mind the maximums allowed for
your array type.
Daily means that one snapshot is kept per day for the specified number of days. For example, if
you enter 6 as the daily count, the snapshot feature keeps 1 snapshot per day through the 6th day.
On the 7th day, the oldest snapshot is deleted. Similarly, Weekly specifies the number of weeks
that snapshots are retained, and Monthly specifies the number of months that snapshots are retained.
On the Advanced tab, you can create templates for naming the snapshots and mountpoints. This
step is optional.
For either template, enter one or more of the following variables. The variables must be enclosed
in braces ({ }) and separated by underscores (_). The template can also include text strings. When
a snapshot is created using the templates, the variables are replaced with the following values.
Variable   Value
fsname     The file system name
shortdate  yyyy_mm_dd
fulldate   yyyy_mm_dd_HHmmz + GMT
When you have completed the scheme, it appears in the list of snapshot schemes on the Create
Snapshot dialog box. To create a snapshot schedule using this scheme, select it on the Create
Snapshot dialog box and go to the Schedule tab. Click Schedule this task, set the frequency of the
snapshots, and schedule when they should occur. You can also set start and end dates for the
schedule. When you click OK, the snapshot scheduler will begin taking snapshots according to
the specified snapshot strategy and schedule.
-k KEEP
The number of snapshots to keep per file system. For the P2000 G3 MSA System/MSA2000 G2 array, the maximum is 32 snapshots per file system. For P4000 G2 storage systems, the maximum is 32 snapshots per file system. For Dell EqualLogic arrays, the maximum is eight snapshots per file system.
Enter the number of days, weeks, and months to retain snapshots. The numbers must be separated
by commas, such as -k 2,7,28.
NOTE: One snapshot is kept per day for the specified number of days. For example, if you
enter 6 as the daily count, the snapshot feature keeps 1 snapshot per day through the 6th day.
On the 7th day, the oldest snapshot is deleted. Similarly, the weekly count specifies the number
of weeks that snapshots are retained, and the monthly count specifies the number of months that
snapshots are retained.
-m MOUNT
The number of snapshots to mount per file system. The maximum number of snapshots is 7 per
file system.
Enter the number of snapshots to mount per day, week, and month. The numbers must be
separated by commas, such as -m 2,2,3. The sum of the numbers must be less than or equal
to 7.
-N NAMESPEC
Snapshot name template. The template specifies a scheme for creating unique names for the
snapshots. Use the variables listed below for the template.
-M MOUNTSPEC
Snapshot mountpoint template. The template specifies a scheme for creating unique mountpoints
for the snapshots. Use the variables listed below for the template.
Variable   Value
fsname     The file system name
shortdate  yyyy_mm_dd
fulldate   yyyy_mm_dd_HHmmz + GMT
You can specify one or more of these variables, enclosed in braces ({ }) and separated by
underscores (_). The template can also include text strings. Two sample templates follow. When a
snapshot is created using one of these templates, the variables will be replaced with the values
listed above.
{fsname}_snap_{fulldate}
snap_{shortdate}_{fsname}
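For example, for a file system named ifs1 and a snapshot taken on January 17, 2012, the template snap_{shortdate}_{fsname} would produce the name snap_2012_01_17_ifs1; the fulldate variable additionally appends the time of day and the GMT offset.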
To see details about a specific automated snapshot scheme, use the following command:
ibrix_vs_snap_strategy -i -n NAME
For example, to create a snapshot named ifs1_snap for file system ifs1:
ibrix_vs_snap -c -n ifs1_snap -f ifs1
For example, to create and mount a snapshot named ifs1_snap for file system ifs1:
ibrix_vs_snap -c -M -n ifs1_snap -f ifs1
For example, to clean up database records for a failed snapshot named ifs1_snap:
ibrix_vs_snap -r -f ifs1_snap
On the Management Console, select the snapshot on the Block Snapshots panel and click Cleanup.
Deleting snapshots
Delete snapshots to free up resources when the snapshot is no longer needed or to create a new
snapshot when you have already created the maximum allowed for your storage system.
On the Management Console, select the snapshot on the Block Snapshots panel and click Delete.
On the CLI, use the following command:
ibrix_vs_snap -d -f SNAPFSLIST
The following is sample output from ibrix_vs_snap -l:
NUM_SEGS MOUNTED? GEN TYPE CREATETIME
-------- -------- --- ---- ---------------------------
3        No       6   msa  Wed Oct 7 15:09:50 EDT 2009
The following table lists the output fields for ibrix_vs_snap -l.
Field       Description
NAME        Snapshot name.
NUM_SEGS    Number of segments in the snapshot file system.
MOUNTED?    Whether the snapshot file system is mounted.
GEN         Number of times the snapshot configuration has been changed in the configuration database.
TYPE        Type of storage array hosting the snapshot (for example, msa).
CREATETIME  Creation timestamp.
To list information about snapshots of specific file systems, use the following command:
ibrix_vs_snap -i [-f SNAPFSLIST]
The ibrix_vs_snap -i command lists the same information as ibrix_fs -i, plus information
fields specific to snapshots. Include the -f SNAPFSLIST argument to restrict the output to specific
snapshot file systems.
The following example shows only the snapshot-specific fields. To view an example of the common
fields, see Viewing file system information (page 39).
SEGMENT OWNER    LV_NAME               STATE           BLOCK_SIZE CAPACITY(GB) FREE(GB) AVAIL(GB) FILES FFREE USED% BACKUP TYPE  TIER LAST_REPORTED
------- -------- --------------------- --------------- ---------- ------------ -------- --------- ----- ----- ----- ------ ----- ---- -------------------------
1       ib50-243 ilv11_msa_snap9__snap OK, SnapUsed=4% 4,096      0.00         0.00     0.00      0     0     0            MIXED      7 Hrs 56 Mins 46 Secs ago
2       ib50-243 ilv12_msa_snap9__snap OK, SnapUsed=6% 4,096      0.00         0.00     0.00      0     0     0            MIXED      7 Hrs 56 Mins 46 Secs ago
3       ib50-243 ilv13_msa_snap9__snap OK, SnapUsed=6% 4,096      0.00         0.00     0.00      0     0     0            MIXED      7 Hrs 56 Mins 46 Secs ago
4       ib50-243 ilv14_msa_snap9__snap OK, SnapUsed=8% 4,096      0.00         0.00     0.00      0     0     0            MIXED      7 Hrs 56 Mins 46 Secs ago
5       ib50-243 ilv15_msa_snap9__snap OK, SnapUsed=6% 4,096      0.00         0.00     0.00      0     0     0            MIXED      7 Hrs 56 Mins 46 Secs ago
6       ib50-243 ilv16_msa_snap9__snap OK, SnapUsed=5% 4,096      0.00         0.00     0.00      0     0     0            MIXED      7 Hrs 56 Mins 46 Secs ago
NOTE: For P4000 G2 storage systems, the state is reported as OK, but the SnapUsed field
always reports 0%.
The following table lists the output fields for ibrix_vs_snap -i.
Field          Description
SEGMENT        Segment number.
OWNER          The file serving node that owns the segment.
LV_NAME        Logical volume.
STATE          Segment state, including the percentage of snapshot space used (SnapUsed).
BLOCK_SIZE     Block size, in bytes.
CAPACITY (GB)  Total capacity of the segment.
FREE (GB)      Free space in the segment.
AVAIL (GB)     Space available to users.
FILES          Number of files in the segment.
FFREE          Number of free file inodes.
USED%          Percentage of the segment's capacity that is used.
BACKUP         Backup (standby) server for the segment, if any.
TYPE           Segment type. Mixed means the segment can contain both directories and files.
TIER           The tier to which the segment is assigned, if any.
Last Reported  Time since the segment last reported its status.
A block snapshot file system is mounted at two locations:
/<snapshot_name>
/<original_file_system>/.<snapshot_name>
For example, if you take a snapshot of the fs1 file system and name the snapshot fs1_snap1, it is mounted at /fs1_snap1 and at /fs1/.fs1_snap1.
The StoreAll clients must mount the snapshot file system (/<snapshot_name>) to access the
contents of the snapshot.
NFS and SMB clients can access the contents of the snapshot through the original file system (such
as /fs1/.fs1_snap1) or they can mount the snapshot file system (in this example, /fs1_snap1).
The following window shows an NFS client browsing the snapshot file system .fs1_snap2 in the
fs1_nfs file system.
The next window shows an SMB client accessing the snapshot file system .fs1_snap1. The
original file system is mapped to drive X.
If the cluster has been rebuilt, use the MSA GUI or CLI to check for old snapshots that were
not deleted before the cluster was rebuilt. The CLI command is show snapshots.
Migrate all files that have not been modified for 30 minutes from Tier1 to Tier2. (This rule is
not valid for production, but is a good rule for testing.)
You cannot modify the tiering configuration for a file system while an active migration task is
running.
You cannot move segments between tiers, assign them to new tiers, or unassign them from
tiers while an active migration task is running or while any rules exist that apply to the segments.
For a new tier, on the Manage Tier dialog box, choose Create New Tier, enter a name for the tier,
and select one or more segments to be included in the tier. To modify an existing tier, choose Use
Existing Tier, select the tier, and make any changes to the segments included in the tier.
Segments not currently included in a tier are specified as Unassigned. If you select a segment that
is already mapped to a tier, the segment will be unassigned from that tier and reassigned to the
tier you specified. If you remove a segment from a tier, that segment becomes unassigned.
You can work on only one tier at a time. However, when you click Next, you will be asked if you
want to manage more tiers. If you answer Yes, the Manage Tier dialog box will be refreshed and
you can work on another tier.
All new files are written to the primary tier. On the Primary Tier dialog box, select the tier that
should receive these files. You can also select cluster servers and any StoreAll clients whose I/O
operations should be redirected to the primary tier.
The tiering policy consists of rules that specify the data to be migrated from one tier to another.
The parameters and directives used in the migration rules include actions based on file access
patterns (such as access and modification times), file size, and file type. Rules can be constrained
to operate on files owned by specific users and groups and to specific paths. Logical operators
can be used to combine directives.
The Tiering Policy dialog box displays the existing tiering policy for the file system.
To add a new tiering policy, click New. On the New Data Tiering Policy dialog box, select the
source and destination tiers. Initially RuleSet1 is empty. Select a rule name, and the other fields
will appear according to the rule you selected.
NOTE: LDAP and AD users cannot be selected from the menu under RuleSet. If you want to
include users in a ruleset, you can select only local users.
Click + to specify the and/or operators and another rule. Click New to open another ruleset. The
following example shows two new rulesets.
To delete a ruleset, check the box in the rule set and click Delete.
The Tiering Schedule dialog box lists all executed and running migration tasks. Click New to add
a new schedule, click Edit to reschedule the selected task, or click Delete to delete the selected
schedules.
Use the Enabled and Disabled buttons to enable or disable the selected schedule. When a schedule
is enabled, it is put in a runnable state. When a schedule is disabled, it is put in a paused state.
To run a migration task now, select the task and click Run Now.
When you click New to create a new schedule, the default frequency for migration tasks is
displayed. For an existing schedule, the current frequency is displayed. To change the frequency,
click Modify.
On the Data Tiering Schedule Wizard dialog box, select a time to run the migration task.
You can assign, reassign, or unassign segments from tiers using the Data Tiering Wizard. The
Management Console also provides additional options to perform these tasks.
Assign or reassign a segment: On the Segments panel, select the segments you are assigning and
click Assign to Tier. On the Assign to Tier dialog box, specify whether you are assigning the
segment to an existing tier or a new tier and specify the tier.
When you click OK, the segment is assigned to the tier and the information on the Segments panel
is updated.
Unassign a segment from a tier: Select the file system from the Filesystems panel and expand Segments in the lower Navigator to list the tiers in the file system. Select the tier containing the segment. On the Tier Segments panel, select the segment and click Unassign.
The Data Tiering Rules panel lists the existing rules for the file system. You can also create a new
rule from this panel; however, it is simpler to use the Data Tiering Wizard to create rules. To create
a rule from the Data Tiering Rules panel, click Create. On the Create Data Tiering Rule dialog box,
select the source and destination tier and then define a rule. The rule can move files between any
two tiers.
When you click OK, the rule is checked for correct syntax. If the syntax is correct, the rule is saved
and appears on the Data Tiering Rules panel. The following example shows the three rules created
for the example.
You can delete rules if necessary. Select the rule on the Data Tiering Rules panel and click Delete.
The next example migrates all mpeg4 files in the subtree. A logical and operator combines the
rules:
path=testdata4 and name="*mpeg4"
The next example narrows the scope of the rule to files owned by users in a specific group. Note
the use of parentheses.
gname=users and (path=testdata4 and name="*mpeg4")
For more examples and detailed information about creating rules, see Writing tiering rules
(page 275).
If necessary, click Stop to stop the data tiering task. There is no pause/resume function. When the
task is complete, it appears on the Management Console under Inactive Tasks for the file system.
You can check the exit status there.
SEGMENT OWNER    LV_NAME STATE BLOCK_SIZE CAPACITY(GB)
------- -------- ------- ----- ---------- ------------
1       ibrix01b ilv1    OK    4,096      3,811.11
2       ibrix01a ilv2    OK    4,096      3,035.67
3       ibrix01b ilv3    OK    4,096      3,811.11
4       ibrix01a ilv4    OK    4,096      3,035.67
. . .
Use the following command to assign segments to a tier. The tier is created if it does not already
exist.
ibrix_tier -a -f FSNAME -t TIERNAME -S SEGLIST
For example, the following command creates Tier 1 and assigns segments 1 and 2 to it:
[root@ibrix01a ~]# ibrix_tier -a -f ifs1 -t Tier1 -S 1,2
Assigned segment: 1 (ilv1) to tier Tier1
Assigned segment: 2 (ilv2) to tier Tier1
Command succeeded!
NOTE: Be sure to spell the name of the tier correctly when you add segments to an existing tier.
If you spell the name incorrectly, a new tier is created with the incorrect tier name, and no error
is recognized.
Tier
-----
Tier2
Tier2
The following rule migrates all files that have not been modified for 30 minutes from Tier1 to Tier2:
[root@ibrix01a ~]# ibrix_migrator -A -f ifs1 -r 'mtime older than 30 minutes' -S Tier1 -D Tier2
Rule: mtime<now - 0-0-0-0:30:0
Command succeeded!
Rule                         Source Tier  Destination Tier
---------------------------  -----------  ----------------
mtime older than 30 minutes  Tier1        Tier2
name = "*.mpeg4"             Tier1        Tier2
size > 4M                    Tier1        Tier2
To list the active migration task for a file system, use the ibrix_migrator -i option. For example:
[root@ibrix01a ~]# ibrix_migrator -i -f ifs1
Operation: Migrator_163
=======================
Task Summary
============
Task Id                  : Migrator_163
Type                     : Migrator
File System              : ifs1
Submitted From           : root from Local Host
Run State                : STARTING
Active?                  : Yes
EXIT STATUS              :
Started At               : Jan 17, 2012 10:32:55
Coordinator Server       : ibrix01b
Errors/Warnings          :
Dentries scanned         : 0
Number of Inodes moved   : 0
Number of Inodes skipped : 0
Avg size (kb)            : 0
Avg Mb Per Sec           : 0
Number of errors         : 0
To view summary information after the task has completed, run the command again and include the -n option, which specifies the task ID. (The task ID appears in the output from ibrix_migrator -i.)
[root@ibrix01a testdata1]# ibrix_task -i -n Migrator_163
Operation: Migrator_163
=======================
Task Summary
============
Task Id                  : Migrator_163
Type                     : Migrator
File System              : ifs1
Submitted From           : root from Local Host
Run State                : STOPPED
Active?                  : No
EXIT STATUS              : OK
Started At               : Jan 17, 2012 10:32:55
Coordinator Server       : ibrix01b
Errors/Warnings          :
Dentries scanned         : 1025
Number of Inodes moved   : 1002
Number of Inodes skipped : 1
Avg size (kb)            : 525
Avg Mb Per Sec           : 16
Number of errors         : 0
You cannot modify the tiering configuration for a file system while an active migration task is
running.
You cannot move segments between tiers, assign them to new tiers, or unassign them from
tiers while an active migration task is running or while any rules exist that apply to the segments.
Deleting a tier
Before deleting a tier, make sure that no migration task is running and that no tiering rules reference the tier.
To unassign all segments and delete the tier, use the following command:
ibrix_tier -d -f FSNAME -t TIERNAME
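For example, to delete the tier Tier2 from file system ifs1 (using the names from the earlier examples):
ibrix_tier -d -f ifs1 -t Tier2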
Rule attributes
Each rule identifies file attributes to be matched. It also specifies the source tier to scan and the destination tier where files that meet the rule's criteria will be moved and stored.
Note the following:
All rules are executed when the tiering policy is applied during execution of the
ibrix_migrator command.
It is important that different rules do not target the same files, especially if different destination
tiers are specified. If tiering rules are ambiguous, the final destination for a file is not
predictable. See Ambiguous rules (page 277), for more information.
The following are examples of attributes that can be specified in rules. All attributes are listed in
Rule keywords (page 276). You can use AND and OR operators to create combinations.
Access time
Modification time
Time: Enter as three pairs of digits separated by colons, using a 24-hour clock. The format is hh:mm:ss (for example, 15:30:00).
Date: Enter as yyyy-mm-dd [hh:mm:ss], where the time of day is optional (for example, 2008-06-04 or 2008-06-04 15:30:00). Note the space separating the date and time.
When specifying an absolute date and/or time, the rule must use a compare type operator (< |
<= | = | != | > | >=). For example:
ibrix_migrator -A -f ifs2 -r "atime > '2010-09-23' " -S TIER1 -D TIER2
Use the following qualifiers for relative times and dates:
Relative time: Enter in rules as year or years, month or months, week or weeks, day or
days, hour or hours.
Relative date: Use older than or younger than. The rules engine uses the time the ibrix_migrator command starts execution as the start time for the rule. It then computes the required time for the rule based on this start time. For example, ctime older than 4 weeks refers to the time period more than 4 weeks before the start time.
Rule keywords
The following keywords can be used in rules.
Keyword      Description
atime        The time the file was last accessed.
ctime        The time the file's inode was last changed.
mtime        The time the file's contents were last modified.
gid          An integer corresponding to a group ID.
gname        A string corresponding to a group name. Enclose the name string in double quotes.
uid          An integer corresponding to a user ID.
uname        A string corresponding to a user name, where the user is the owner of the file. Enclose the name string in double quotes.
type         File system entity the rule operates on. Currently, only the file entity is supported.
size         In size-based rules, the threshold value for determining migration. The value is an integer specified in K (KB), M (MB), G (GB), or T (TB). Do not separate the value from its unit (for example, 24K).
name         Regular expression. A typical use of a regular expression is to match file names. Enclose a regular expression in double quotes. The * wildcard is valid (for example, name = "*.mpg"). A name cannot contain a / character; you cannot specify a path, only a filename.
path         Path name that allows these wildcards: *, ?, /. For example, if the mountpoint for the file system is /mnt, path=ibfs1/mydir/* matches the entire directory subtree under /mnt/ibfs1/mydir. (A path cannot start with a /.)
strict_path  Path name that rigidly conforms to UNIX shell file name expansion behavior. For example, strict_path=/mnt/ibfs1/mydir/* matches only the files that are explicitly in the mydir directory, but does not match any files in subdirectories of mydir.
Use the following command to write a rule. The rule portion of the command must be enclosed in
single quotes.
ibrix_migrator -A -f FSNAME -r 'RULE' -S SOURCE_TIER -D DEST_TIER
Examples:
The rule in the following example is based on the file's last modification time, using a relative time period. All files whose last modification date is more than one month in the past are moved.
# ibrix_migrator -A -f ifs2 -r 'mtime older than 1 month' -S T1 -D T2
In the next example, the rule is modified to limit the files being migrated to two types of graphic
files. The or expression is enclosed in parentheses, and the * wildcard is used to match filename
patterns.
# ibrix_migrator -A -f ifs2 -r 'mtime older than 1 month and
( name = "*.jpg" or name = "*.gif" )' -S T1 -D T2
In the next example, three conditions are imposed on the migration. Note that there is no space
between the integer and unit that define the size threshold (10M):
# ibrix_migrator -A -f ifs2 -r 'ctime older than 1 month and type = file
and size >= 10M' -S T1 -D T2
The following example uses the path keyword. It moves files greater than or equal to 5M that are
under the directory /ifs2/tiering_test from TIER1 to TIER2:
ibrix_migrator -A -f ifs2 -r "path = tiering_test and size >= 5M" -S
TIER1 -D TIER2
Rules can be group- or user-based as well as time- or date-based. In the following example, files associated with two users are migrated to T2 with no consideration of time. The names are quoted strings.
# ibrix_migrator -A -f ifs2 -r 'type = file and ( uname = "ibrixuser"
or uname = "nobody" )' -S T1 -D T2
Conditions can be combined with and and or to create very precise tiering rules, as shown in the
following example.
# ibrix_migrator -A -f ifs2 -r ' (ctime older than 3 weeks and ctime younger
than 4 weeks) and type = file and ( name = "*.jpg" or name = "*.gif" )
and (size >= 10M and size <= 25M)' -S T1 -D T2
Ambiguous rules
It is possible to write a set of ambiguous rules, where different rules could be used to move a file
to conflicting destinations. For example, if a file can be matched by two separate rules, there is
no guarantee which rule will be applied in a tiering job.
Ambiguous rules can cause a file to be moved to a specific tier and then potentially moved back.
Examples of two such situations follow.
Example 1:
In the following example, if a .jpg file older than one month exists in tier 1, then the first rule
moves it to tier 2. However, once it is in tier 2, it is matched by the second rule, which then moves
the file back to tier 1.
# ibrix_migrator -A -f ifs2 -r ' mtime older than 1 month ' -S T1 -D T2
# ibrix_migrator -A -f ifs2 -r ' name = "*.jpg" ' -S T2 -D T1
There is no guarantee as to the order in which the two rules will be executed; therefore, the final
destination is ambiguous because multiple rules can apply to the same file.
Example 2:
Rules can cause data movement in both directions, which can lead to issues. In the following
example, the rules specify that all .doc files in tier 1 to be moved to tier 2 and all .jpg files in
tier 2 be moved to tier 1. However, this might not succeed, depending on how full the tiers are.
For example, if tier 1 is filled with .doc files to 70% capacity and tier2 is filled with .jpg files to
80% capacity, then tiering might terminate before it is able to fully "swap" the contents of tier 1
and tier 2. The files are processed in no particular order; therefore, it is possible that more .doc
files will be encountered at the beginning of the job, causing space on tier 2 to be consumed faster
than on tier 1. Once a destination tier is full, obviously no further movement in that direction is
possible.
The rules in these two examples are ambiguous because they give rise to possible conflicting file movement. It is the user's responsibility to write unambiguous rules for the data tiering policy for their file systems.
Overview
StoreAll software allocates new files and directories to segments according to the allocation policy
and segment preferences that are in effect for a client. An allocation policy is an algorithm that
determines the segments that are selected when clients write to a file system.
Preferred segments. The segments where a file serving node or the StoreAll client creates all
new files and directories.
Allocation policy. The policy that a file serving node or the StoreAll client uses to choose
segments from its pool of preferred segments to create new files and directories.
The segment preferences and allocation policy are set locally for the StoreAll client. For NFS, CIFS,
HTTP, and FTP clients (collectively referred to as NAS clients), the allocation policy and segment
preferences must be set on the file serving nodes from which the NAS clients access shares.
Segment preferences and allocation policies can be set and changed at any time, including when
the target file system is mounted and in use.
IMPORTANT: It is possible to set separate allocation policies for files and directories. However,
this feature is deprecated and should not be used unless you are directed to do so by HP support.
NOTE: StoreAll clients access segments directly through the owning file serving node and do not honor the file allocation policy set on file serving nodes.
IMPORTANT: Changing segment preferences and allocation policy will alter file system storage behavior.
The following tables list standard and deprecated preference settings and allocation policies.
Name       Description
ALL        Prefer all available segments. This is the default segment preference.
LOCAL      Prefer the segments owned by the file serving node.
RANDOM     Choose a segment at random from the pool of preferred segments. This is the default allocation policy.
Name       Description
AUTOMATIC  Lets the StoreAll software select the allocation policy. Should be used only on the advice of HP support.
DIRECTORY
STICKY
A StoreAll client or StoreAll file serving node (referred to as the host) uses the following precedence
rules to evaluate the file allocation settings that are in effect:
The host uses the default allocation policies and segment preferences: The RANDOM policy
is applied, and a segment is chosen from among ALL the available segments.
The host uses a non-default allocation policy (such as ROUNDROBIN) and the default segment
preference: Only the file or directory allocation policy is applied, and a segment is chosen
from among ALL available segments.
The host uses a non-default segment preference and a non-default allocation policy (such as
LOCAL/ROUNDROBIN): A segment is chosen according to the following rules:
From the pool of preferred segments, select a segment according to the allocation policy
set for the host, and store the file in that segment if there is room. If all segments in the
pool are full, proceed to the next rule.
Use the AUTOMATIC allocation policy to choose a segment with enough storage room
from among the available segments, and store the file.
To perform a task for NAS clients (NFS, CIFS, FTP, HTTP), specify file serving nodes for the
-h HOSTLIST argument.
To perform a task for StoreAll clients, specify individual clients for -h HOSTLIST or specify
a hostgroup for -g GROUPLIST. Hostgroups are a convenient way to configure file allocation
settings for a set of StoreAll clients. To configure file allocation settings for all StoreAll clients,
specify the clients hostgroup.
The following example sets the ROUNDROBIN policy for files only on the file system ifs1 on file
serving node s1.hp.com, starting at segment ilv1:
ibrix_fs_tune -f ifs1 -h s1.hp.com -p ROUNDROBIN -s ilv1
The following example sets the ROUNDROBIN directory allocation policy on the file system ifs1
for file serving node s1.hp.com, starting at segment ilv1:
ibrix_fs_tune -f ifs1 -h s1.hp.com -p ROUNDROBIN -R
You can prefer a pool of segments for the clients, or prefer a single segment for files created by a specific user or group on the clients. Both methods can be in effect at the same time. For example, you can prefer a segment for a user and then prefer a pool of segments for the clients on which the user will be working.
On the Management Console, open the Modify Filesystem Properties dialog box and select the
Segment Preferences tab.
Use the following command and the LOCAL keyword to create a pool of all segments on file serving
nodes. Use the ALL keyword to restore the default segment preferences.
NOTE: Preallocation, Readahead, and NFS readahead are set to the recommended values
during the installation process. Contact HP Support for guidance if you want to change these
values.
On the Management Console, open the Modify Filesystem Properties dialog box and select the
Allocation tab.
FSNAME  POLICY  SEGBITS  SWM
ifs1    RANDOM  DEFAULT  DEFAULT
Related information
HP websites
For additional information, see the following HP websites:
http://www.hp.com
http://www.hp.com/go/StoreAll
http://www.hp.com/go/storage
http://www.hp.com/support/manuals
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website:
http://www.hp.com/go/e-updates
After registering, you will receive e-mail notification of product enhancements, new driver versions,
firmware updates, and other product resources.
22 Documentation feedback
HP is committed to providing documentation that meets your needs. To help us improve the
documentation, send any errors, suggestions, or comments to Documentation Feedback
([email protected]). Include the document title and part number, version number, or the URL
when submitting your feedback.
Glossary
ACE
Access control entry.
ACL
Access control list.
ADS
Active Directory Service.
ALB
Advanced load balancing.
BMC
Baseboard management controller.
CIFS
Common Internet File System. The protocol used in Windows environments for shared folders.
CLI
Command-line interface. An interface comprised of various commands which are used to control
operating system responses.
CSR
Certificate signing request.
DAS
Direct attach storage. A dedicated storage device that connects directly to one or more servers.
DNS
Domain Name System.
FTP
File Transfer Protocol.
GSI
HA
High availability.
HBA
Host bus adapter.
HCA
Host channel adapter.
HDD
Hard disk drive.
IAD
StoreAll administrative daemon.
iLO
Integrated Lights-Out.
IML
Integrated Management Log.
IOPS
I/O operations per second.
IPMI
Intelligent Platform Management Interface.
JBOD
Just a bunch of disks.
KVM
Keyboard, video, and mouse.
LUN
Logical unit number. A LUN results from mapping a logical unit number, port ID, and LDEV ID to
a RAID group. The size of the LUN is determined by the emulation mode of the LDEV and the
number of LDEVs associated with the LUN.
MTU
Maximum transmission unit.
NAS
Network attached storage.
NFS
Network file system. The protocol used in most UNIX environments to share folders or mounts.
NIC
Network interface card. A device that handles communication between a device and other devices
on a network.
NTP
Network Time Protocol. A protocol that enables the storage systems time and date to be obtained
from a network-attached server, keeping multiple hosts and storage devices synchronized.
OA
Onboard Administrator.
OFED
OpenFabrics Enterprise Distribution.
OSD
On-screen display.
OU
Organizational unit.
RO
Read-only access.
RPC
Remote procedure call.
RW
Read-write access.
SAN
Storage area network. A network of storage devices available to one or more servers.
SAS
Serial attached SCSI.
SELinux
Security-Enhanced Linux.
SFU
Microsoft Windows Services for UNIX.
SID
Security identifier.
SMB
Server Message Block. The protocol used in Windows environments for shared folders.
SNMP
Simple Network Management Protocol.
TCP/IP
Transmission Control Protocol/Internet Protocol.
UDP
User Datagram Protocol.
UID
Unit identification.
VACM
SNMP View-based Access Control Model.
VC
HP Virtual Connect.
VIF
Virtual interface.
WINS
Windows Internet Name Service.
WWN
World wide name.
WWNN
World wide node name. A globally unique 64-bit identifier assigned to each Fibre Channel node
process.
WWPN
World wide port name. A unique 64-bit address used in a FC storage network to identify each
device in a FC network.
Index
Symbols
/etc/likewise/vhostmap file, 97
A
Active Directory
configure, 65
configure from CLI, 72
Linux static user mapping, 93
synchronize with NTP server, 96
use with LDAP ID mapping, 61
Antivirus
configure, 226
enable or disable, 228
file exclusions, 230
protocol scan settings, 230
scans, start or schedule, 233
scans, status, 235
statistics, 237
unavailable policy, 229
virus definitions, 228
virus scan engine, 226
add, 227
remove, 228
audit log, 222
authentication
Active Directory, 61
configure from CLI, 72
configure from GUI, 63
Local Users, 61
Authentication Wizard, 63
automated block snapshots
create from CLI, 255
create on GUI, 251
delete snapshot scheme, 256
modify snapshot scheme, 255
snapshot scheme, 250
view snapshot scheme from CLI, 256
B
backups
snapshots, software, 248
C
case-insensitive filenames, 58
CIFS
locking behavior, 98
commands
object mode, 148
contacting HP, 286
D
data retention
audit log, 222
audit log reports, 223
autocommit period, 197
documentation
providing feedback on, 287
E
Export Control, enable, 20, 27
Express Query
export metadata, 219
HTTP-StoreAll REST API shares, 118
import metadata, 220
save audit metadata, 220
F
file allocation
allocation policies, 279
evaluation of allocation settings, 280
list policies, 285
segment preferences, 279
set file and directory policies, 281
set segment preferences, 282
tune policy settings, 284
file serving nodes
delete, 48
segment management, 11
unmount a file system, 25
view SMB shares, 84
file systems
32-bit compatibility mode, 25
64-bit mode, 14
allocation policy, 11
case-insensitive filenames, 58
check and repair, 48
components of, 12
create
from CLI, 21
New Filesystem Wizard, 14
options for, 14
data retention and validation, 196
delete, 47
disk space information, 42
enable or disable Export Control, 20
Export Control, enable, 20, 27
extend, 42
file limit, 22
file space reserved, 25
lost+found directory, 41
mount, 22, 25
mounting, 25
mountpoints, 22
operating principles, 10
performance, 37
quotas, 28
remote replication, 178
segments
defined, 10
rebalance, 43
snapshots, block, 249
snapshots, software, 239
troubleshooting, 49
unmount, 25
H
help
obtaining, 286
HP
technical support, 286
HP websites, 286
HTTP
Apache tunables, 130
authentication , 61
configuration, 114
configuration best practices, 117
configure from the CLI, 131
configure on the GUI, 118
shares, access, 133
start or stop, 133
troubleshooting, 136
WebDAV shares, 135
HTTP shares
creating, 116
share types, 114
HTTP-StoreAll REST API shares, 118
L
LDAP authentication
configuration template, 62
configure, 67
configure from CLI, 72
remote LDAP server, configure, 62
requirements, 62
LDAP ID mapping
configure, 66
configure from CLI, 73
use with Active Directory, 61
Linux static user mapping, 93
Linux StoreAll clients
disk space information, 42
Local Groups authentication, 68
configure from CLI, 74
Local Users authentication, 69
configure from CLI, 74
logical volumes
view information, 39
logs
ibrcfrworker log file, 194
lost+found directory, 41
M
mapping
SMB shares, 92
Microsoft Management Console
manage SMB shares, 88
migration, files, 270
mounting, file system, 22, 25
mountpoints
create from CLI, 24
delete, 24
view, 22, 25
N
New Filesystem Wizard, 14
NFS
case-insensitive filenames, 58
configure NFS server threads, 55
export file systems, 55
support, 55
unexport file systems, 58
O
object mode
commands, 148
data retention, 117
finding hash name, 146
finding object ID, 145
terminology, 138
tutorial, 139
uses, 138
viewing container contents, 144
viewing containers, 143
obtaining
sample client application, 116
P
physical volumes
delete, 47
view information, 38
Q
quotas
configure email notifications, 35
delete, 35
enable, 28
export to file, 33
import from file, 32
online quota check, 34
operation of, 28
quotas file format, 33
SMB shares, 98
troubleshoot, 36
user and group, 29
R
rebalancing segments, 43
stop tasks, 47
track progress, 46
view task status, 47
S
SegmentNotAvailable alert, 51
SegmentRejected alert, 52
segments
defined, 10
delete, 47
rebalance, 43
stop tasks, 47
track progress, 46
view task status, 47
SMB
Active Directory domain, configure, 93
activity statistics per node, 76
authentication, 61
configure nodes, 76
Linux permissions, 87
Linux static user mapping, 93
monitor SMB services, 77
permissions management, 100
RFC2037 support, 93
shadow copy, 98
share administrators, 71
SMB server consolidation, 96
SMB signing, 84
start or stop SMB service, 76
troubleshooting, 102
SMB server consolidation, 96
SMB shares
add with MMC, 90
configure with GUI, 79
delete with MMC, 92
manage with CLI, 85
manage with MMC, 88
mapping shares, 92
quotas information, 98
SMB signing, 83
view share information, 84
SMB signing, 84
snapshots, block
access snapshot file systems, 258
automated, 250
create from CLI, 255
create on GUI, 251
delete snapshot scheme, 256
modify snapshot scheme, 255
view snapshot scheme from CLI, 256
clear invalid snapshot, 256
create, 256
defined, 249
delete, 257
discover LUNs, 250
list storage allocation, 250
mount, 256
register the snapshot partition, 250
set up the snapshot partition, 250
troubleshooting, 260
view information about, 257
snapshots, software
access, 242
backup, 248
defined, 239
delete, 244
on-demand snapshots, 241
reclaim file system space, 245
replicate, 188
restore files, 244
schedule, 240
snap trees
configure, 239
move files, 248
remove snapshot authorization, 247
schedule snapshots, 240
space usage, 242
view on GUI, 241
SSL certificates
add to cluster, 176
create, 174
delete, 177
export, 177
StoreAll clients
delete, 48
limit file access, 26
locally mount a file system, 26
locally unmount file system, 26
StoreAll REST API
best practices, 115
client application, 116
creating shares, 116
data retention, 117
features, 115
object mode, 138
share types, 114
uses, 115
Subscriber's Choice, HP, 286
T
technical support
HP, 286
service locator website, 286
tiering, data
assign segments, 267
configure, 261
migration task, 270, 271
primary tier, 272
tiering policy, 268
tiering rules, 275
U
unmounting, file systems, 25
V
validation scans, 197
validation, data
compare hash sums, 211
W
websites
HP Subscriber's Choice for Business, 286