StorNext NAS
6-68456-02 Rev A
StorNext NAS Command Line Interface Guide, 6-68456-02 Rev A, September 2016, Product of USA.
Quantum Corporation provides this publication “as is” without warranty of any kind, either express or implied, including
but not limited to the implied warranties of merchantability or fitness for a particular purpose. Quantum Corporation
may revise this publication from time to time without notice.
COPYRIGHT STATEMENT
© 2016 Quantum Corporation. All rights reserved.
Your right to copy this manual is limited by copyright law. Making copies or adaptations without prior written
authorization of Quantum Corporation is prohibited by law and constitutes a punishable violation of the law.
TRADEMARK STATEMENT
Artico, Be Certain (and the Q brackets design), DLT, DXi, DXi Accent, DXi V1000, DXi V2000, DXi V4000, GoVault,
Lattus, NDX, the Q logo, the Q Quantum logo, Q-Cloud, Quantum (and the Q brackets design), the Quantum logo,
Quantum Be Certain (and the Q brackets design), Quantum Vision, Scalar, StorageCare, StorNext,
SuperLoader, Symform, the Symform logo (and design), vmPRO, and Xcellis are either registered trademarks or
trademarks of Quantum Corporation and its affiliates in the United States and/or other countries. All other trademarks
are the property of their respective owners.
Products mentioned herein are for identification purposes only and may be registered trademarks or trademarks of their
respective companies. All other brand names or trademarks are the property of their respective owners.
Quantum specifications are subject to change.
Preface vi
This manual introduces the Quantum StorNext NAS and contains the following chapters:
l Get Started with NAS on page 1
l NAS Upgrades on page 15
l NAS User Authentication on page 36
l NAS Clusters on page 48
l NAS Share Management on page 76
l NAS System Management on page 93
l Troubleshooting on page 107
Audience
This manual is written for StorNext NAS operators, system administrators, and field service engineers.
Note: It is useful for the audience to have a basic understanding of UNIX® and backup/recovery
systems.
Notational Conventions
This manual uses the following conventions:
Convention: For UNIX and Linux commands, the command prompt is implied.
Example: ./DARTinstall is the same as # ./DARTinstall
Convention: File and directory names, menu commands, button names, and window names are shown in bold font.
Example: /data/upload
Related Documents
The following Quantum documents are also available for StorNext NAS:
6-68362, StorNext NAS Release Notes: Presents updates, resolved issues, and known issues for the
associated release.
Contacts
For information about contacting Quantum, including Quantum office locations, go to:
http://www.quantum.com/aboutus/contactus/index.aspx
Comments
To provide comments or feedback about this document, or about other Quantum technical publications,
send e-mail to:
[email protected]
l eSupport - Submit online service requests, update contact information, add attachments, and receive
status updates via email. Online Service accounts are free from Quantum and can also be used to access
Quantum's Knowledge Base, a comprehensive repository of product support information. Get
started at:
https://onlineservice.quantum.com
For further assistance, or if training is desired, contact the Quantum Customer Support Center:
Region Support Contact
StorageCare Guardian
StorageCare Guardian securely links Quantum hardware and the diagnostic data from the surrounding
storage ecosystem to Quantum's Global Services Team for faster, more precise root cause diagnosis.
StorageCare Guardian is simple to set up through the internet and provides secure, two-way
communications with Quantum’s Secure Service Center. Learn more at:
http://www.quantum.com/ServiceandSupport/Services/GuardianInformation/Index.aspx
Multi-Protocol
StorNext NAS supports both SMB and NFS file sharing.
l SMB support is for SMB 1 (CIFS) through SMB 3.
l NFS support is for NFSv3 through NFSv4.
Flexible User Authentication
StorNext NAS supports several methods of user authentication, making it easy to implement the software
within a networked environment.
See NAS User Authentication on page 36.
NAS Failover
For both SMB and NFS shares, StorNext NAS failover automatically transfers NAS management services
from the active master node to another node in a NAS cluster in the event that the active master node
becomes unavailable. Through this feature, continuous access to NAS shares is possible because NAS
management services can be run on any node within the NAS cluster.
See NAS Failover on page 63.
G300 Load-Balancing
Load-balancing allows you to group several G300 StorNext NAS Gateways together as a NAS cluster.
Users connect to the master node within the NAS cluster, which then equally distributes connections to the
other nodes within the cluster.
In addition, load-balancing ensures that if one of the StorNext NAS Systems in the NAS cluster goes offline,
its current connections will be rerouted / reconnected to another StorNext NAS System in the NAS cluster.
See Supported NAS Cluster Configurations on page 50.
Licensing
Beginning with StorNext 5.3.0, StorNext NAS is a licensed feature for Artico, Xcellis Workflow Director,
G300 Gateways, and M-Series Metadata Controllers.
l For Artico, the appliances are shipped with StorNext NAS licenses pre-installed.
l For Xcellis Workflow Director, G300 Gateways, and M-Series Metadata Controllers, you must purchase
add-on StorNext NAS licenses, and then install these licenses on each node running the StorNext NAS
software.
l You can install StorNext NAS licenses using the StorNext GUI's licensing feature. See the License NAS
in the StorNext GUI topic of the StorNext Connect Documentation Center.
Guide Terminology
This guide uses the following terms to generally refer to Quantum hardware, unless a specific product is
being discussed.
Term Definition
Migration Assistance
If it becomes necessary to migrate a StorNext NAS environment from a metadata controller (MDC)
server to a G300 Gateway environment, you will need to contact Quantum Professional Services for
assistance.
StorNext NAS currently supports the following software configurations on Quantum appliances.
Note: If a specific software configuration is not listed, it is not a supported configuration for StorNext
NAS.
StorNext 5.3.2.1, with StorNext Connect NAS Apps version 3 or later (not bundled with StorNext 5.3.2.1)
and StorNext NAS 1.2.4, supports:
l Upgrades via the YUM software repository
l Connect NAS Apps version 3
l NAS failover for SMB shares
StorNext 5.3.1, with StorNext Connect NAS Apps version 3 or later (bundled with StorNext 5.3.1) and
StorNext NAS 1.2.1 or later, supports:
l Connect NAS Apps version 3
l NAS failover for SMB shares
Note: If a specific software configuration is not listed, it is not a supported configuration for StorNext
NAS.
StorNext 5.3.2.1 or later, with StorNext Connect NAS Apps version 4 (bundled with StorNext 5.4) and
StorNext NAS 1.3.0, supports:
l NAS failover for NFS shares
l Upgrades via the YUM software repository
l Connect NAS Apps version 3
l NAS failover for SMB shares
Parameter Components
Parameters are made up of the following components.
Object
An object provides the item on which to perform the action, such as share or nascluster.
Commands (cmd)
A command provides an action to be performed, such as add, delete, or enable.
Options
Options are additional objects to which to apply the command, such as node. Not all commands have
options.
Values
Values further define objects and options, such as providing the IP address for the node option. Values
immediately follow the command or the option. Not all commands have values.
Syntax Conventions
The syntax for CLI commands uses one of the following formats:
object cmd
object cmd option
object cmd <value>...
object cmd option <value>...
object cmd <value> option...
Syntax Characters
The following syntax characters are used in CLI commands.
Character Description
< > Angle brackets surrounding a value indicate that you need to replace it with an appropriate
value.
Example
nascluster enable node <ip_addr>
The above command has the following components:
l Object (nascluster)
l Command (enable)
l Option (node)
l Value requiring an appropriate replacement (<ip_addr>)
Enter the command as follows, where 10.1.1.1 is the IP address of the node to enable for
the NAS cluster:
nascluster enable node 10.1.1.1
[ ] Square brackets surrounding options or values indicate that it is not mandatory to enter an option
or value. If you do not specify an option or value, the CLI provides a default replacement.
Example
share change <share_type> <share_name> [option, option, …]
The above command has the following components:
l Object (share)
l Command (change)
l Values requiring an appropriate replacement (<share_type> <share_name>)
l Options that you can use to define additional attributes ([option, option, …])
Enter the command as follows, where write list = james doris is the attribute to
change within the share.
share change smb myshare write list = james doris
| The pipe character indicates that you need to specify only one of the possible options or values.
Read this character as "OR".
( ) Parentheses surrounding options indicate that you must specify one or more of the displayed
options.
Example
share add (nfs|smb) <share_name> <share_path>
In the above command, you must specify whether the share being added is an SMB or NFS
share.
{ } Curly brackets are used to group options or values for readability purposes. Do not use these
characters in the actual command.
, A comma indicates that you can specify multiple options. Place a comma after each option to
separate it from the next.
Note: StorNext NAS 1.3.0 does not support the use of commas (,) to separate the
nfshosts share option. If you are defining multiple NFS hosts for a share, separate each
host with a space. See CLI Syntax Conventions on page 6 for an example.
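The following commands illustrate both conventions. They are assumptions based on the option = value pattern shown above; the share names, options, and hosts are hypothetical:
Example
share change smb myshare guest ok = yes, read only = no
share change nfs myshare nfshosts = 10.10.20.1 10.10.20.2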
You will use a console command line interface (CLI) to manage StorNext NAS. Access the console
command line from a Secure Shell (SSH) client in one of the following ways:
l Log in as the stornext user below and assume the sysadmin user role.
l Log in as the sysadmin user on the next page directly if the sysadmin user account has been enabled.
CLI Example
ssh [email protected]
sysadmin@quantum's password:
Last login: Tue Jan 27 15:36:00 2015 from eng.acme.com
Welcome to Quantum G300 SN-NAS Console
----------------------------------------
*** Type 'help' for a list of commands.
G300:gateway>
sysadmin Password
StorNext NAS is installed with a special administrator user account, sysadmin. The password for the
sysadmin user account is randomly generated during installation, and you must change it before
performing any administrative steps.
Change the sysadmin Password
1. Log in to the console command line.
2. At the prompt, enter the following:
auth change local password sysadmin
3. At the prompt, enter the new password.
4. At the prompt, re-enter the new password for verification.
CLI Example
> auth change local password sysadmin
Please enter the new password
Re-enter the password
Applying local configuration settings ...
Modified password for user sysadmin
Authentication Commands
Click the associated link to view information for the following authentication commands.
For an overview of authentication commands, see NAS User Authentication on page 36. To troubleshoot
issues with authentication, see NAS Authentication Issues and FAQs on page 110.
Command Topic
Log Commands
Click the associated link to view information for the following log commands.
Command Topic
Command Topic
nascluster set virtual ipaddr Set the VIP for a NAS Cluster on page 71
nascluster set master offload Offload SMB Connections from the Master Node
nascluster set nfs-ha Configure NAS Failover for NAS NFS Clusters
NTP Commands
Click the associated link to view information for the following NTP commands.
Command Topic
Command Topic
Command Topic
share enable support|upgrade Enable or Disable the Support and Upgrade Shares on page 91
share disable support|upgrade Enable or Disable the Support and Upgrade Shares on page 91
Command Topic
System Commands
Click the associated link to view information for the following system commands.
Command Topic
system show version View Your Current NAS Software Version on page 96
NAS Upgrades 15
Run the NAS Repo Upgrade 18
Upgrade to NAS 1.2.5 19
Upgrade to NAS 1.3.0 23
Upgrade to NAS 1.3.x 29
Check NAS Upgrade Availability 35
NAS Upgrades
StorNext Connect Manage NAS App vs. StorNext NAS
If you are configuring StorNext NAS for Xcellis Workflow Director, M44x, or M66x, you can use the
StorNext Connect Manage NAS application to perform supported tasks. If — after using the Manage
NAS app — you use the console command line interface to issue StorNext NAS commands, make sure
to (re)import the NAS configuration into StorNext Connect.
See the Manage NAS section of the StorNext Connect Documentation Center.
The StorNext NAS software upgrade process varies depending on the version of StorNext NAS you are
currently running. Use the What Version Do I Need To Upgrade To? section on the next page to select the
right upgrade process.
Additional Information
Console Command Line
Keep in mind that you will perform all StorNext NAS configuration tasks from the console command line.
To log in to the console command line for StorNext NAS, see the StorNext NAS Documentation Center.
Upgrade Time
Each node generally takes less than 5 minutes to complete the upgrade, and in many cases, each node
will complete the upgrade between 30 seconds and 1 minute. Because the upgrade process could briefly
interrupt access, we recommend performing an upgrade when minimal disruption to file access will
occur.
Local and Remote Upgrades
After upgrading to StorNext NAS 1.2.3 or 1.2.4, you have the option of upgrading from one of the
following locations:
l Quantum's remote upgrade repository, accessed over the Internet. For more information, see Run
the NAS Repo Upgrade on page 18.
l The local StorNext NAS /var/upgrade directory, performed offline.
/var/upgrade Directory
To perform a local upgrade of StorNext NAS, you need to transfer the upgrade files from your local
workstation into the /var/upgrade directory on the StorNext NAS server. Make sure you can
answer the following before performing this transfer.
Does the /var/upgrade directory exist?
Make sure the /var/upgrade directory exists on your StorNext NAS server. If it does not, you
need to create the directory.
Steps
a. Log in to your console command line as the root user.
b. At the prompt, enter cd /var to navigate to the /var directory.
c. At the prompt, enter mkdir -m 664 /var/upgrade to create the /var/upgrade directory.
Are the upgrade files unpacked?
If the upgrade files are bundled into a tar.gz package, unpack the files before transferring them
to the StorNext NAS server. If you are using Windows OS, you may need to download an
application to unpack the files.
What transfer method am I using?
Transfer the upgrade files into the /var/upgrade directory using the same method that you
would use to transfer other files into your StorNext file system, such as:
o Copy the files straight from the local file system to the /var/upgrade directory
o Transport the files with Secure Copy or File Transfer Protocol
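For example, a minimal Secure Copy sketch; the bundle name, unpacked file names, and server address are assumptions, so substitute the values for your environment:
Example
# On your local workstation, unpack the tar.gz bundle first
tar -xzvf quantum-snfs-nas-1.3.0-5687.el7.centos.x86_64.tar.gz
# Copy the unpacked upgrade files into /var/upgrade on the StorNext NAS server
scp *.rpm [email protected]:/var/upgrade/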
1.2.0, 1.2.1, or 1.2.2* N/A Run the NAS Repo Upgrade on the next page AND Upgrade to NAS 1.2.5 on
page 19
* The StorNext NAS upgrade process requires that you upgrade to StorNext NAS 1.2.5 before you can upgrade to
StorNext NAS 1.3.0.
** We recommend upgrading to StorNext NAS 1.3.0 before upgrading to StorNext NAS 1.3.x.
*** You must be running StorNext NAS 1.3.0 to use the automatic NAS cluster upgrade feature.
How do I find out my current StorNext NAS version?
To discover the version of StorNext NAS software that you are currently running, see View Your Current NAS
Software Version on page 96.
Why Do I Need to Perform this Step?
After running the NAS Repo Upgrade RPM, you have the option of upgrading from Quantum's remote
upgrade repository accessed over the Internet. With this feature, you will no longer need to manually
download the latest StorNext NAS RPMs. Instead, your StorNext NAS software will be able to directly
pull the latest version at your command.
You can still upgrade your StorNext NAS software locally (without direct-Internet access). Going
forward, however, all upgrade packages are signed, meaning that the public key — imported to your
system from the NAS Repo Upgrade RPM — is required to install the software upgrade. You need to
import this public key regardless of whether the upgrade packages are installed directly from Quantum's
remote upgrade repository over the Internet or offline from a local /var/upgrade directory.
The NAS Repo Upgrade RPM does the following:
l Properly configures your StorNext NAS software to access Quantum's remote upgrade repository
(over the Internet) in which the latest StorNext NAS RPM is stored.
l Imports a public key required to install the latest StorNext NAS RPM.
CentOS6 (supported on G300s, Artico, and StorNext M-Series MDCs)
quantum-snfs-nas-repo-upgrade-1.2.3-5181.el6.x86_64.rpm
CentOS7 (supported on the Xcellis Workflow Director)
quantum-snfs-nas-repo-upgrade-1.2.3-5181.el7.centos.x86_64.rpm
2. Log in to the console command line as the NAS sysadmin user or issue the su - sysadmin
command to access the NAS controller.
3. Run the following command to point your StorNext NAS software to Quantum's remote upgrade
repository and to import the public key required to install the latest StorNext NAS RPM:
system upgrade local
4. Proceed to Upgrade to NAS 1.2.5 below.
Additional Information
l You need to perform all upgrades on each StorNext NAS node.
l Keep in mind that you will perform all StorNext NAS configuration tasks from the console command
line. To log in to the console command line for StorNext NAS, see Access the Console Command
Line on page 9.
l To access the version of StorNext NAS software that you are currently running, see View Your
Current NAS Software Version on page 96.
l Each node generally takes less than 5 minutes to complete the upgrade, and in many cases, each
node will complete the upgrade between 30 seconds and 1 minute. Because the upgrade process
could briefly interrupt access, we recommend performing an upgrade when minimal disruption to file
access will occur.
Additional Information
l Keep in mind that you will perform all StorNext NAS configuration tasks from the console command
line. To log in to the console command line for StorNext NAS, see Access the Console Command
Line on page 9.
l Each node generally takes less than 5 minutes to complete the upgrade, and in many cases, each
node will complete the upgrade between 30 seconds and 1 minute. Because the upgrade process
could briefly interrupt access, we recommend performing an upgrade when minimal disruption to file
access will occur.
View the StorNext NAS Software Version
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. At the prompt, enter the following:
system show version
CLI Example
> system show version
I am currently running version...
StorNext NAS version: 1.2.3 or 1.2.4
Upgrade Task(s): Upgrade to StorNext NAS 1.2.5 on the next page
Prerequisites
l You must be on StorNext 5.3.1 or later and StorNext NAS 1.2.0 or later to upgrade to StorNext NAS
1.2.5.
l Unless you are already running versions 1.2.3 or 1.2.4 of StorNext NAS, you need to run the
NAS Repo upgrade. See Run the NAS Repo Upgrade on page 18.
l You need to perform all upgrades on each StorNext NAS node.
l You need either to be logged in as the NAS sysadmin user or issue the su - sysadmin command to
complete the system upgrade.
Upgrade a StorNext NAS Node Online from Quantum's Remote Upgrade Repository
1. From the console command line, run the following command to upgrade your StorNext NAS software
to version 1.2.5:
system upgrade
The console returns a list of all RPM packages that will be upgraded.
2. At the prompt, enter yes to complete the upgrade.
CLI Example
> system upgrade
The following Packages will be upgraded:
------------------------------------------------------------
quantum-snfs-nas.x86_64 1.2.5-5580.el7.centos snfs-nas
------------------------------------------------------------
*** During system upgrade, filesystem access will be interrupted.
Are you sure (yes/No)? yes
Upgrade a StorNext NAS Node Offline from the Local /var/upgrade Directory
1. Download and unpack the applicable tar.gz file to the /var/upgrade directory. See /var/upgrade
Directory on page 16.
CentOS6 (supported on G300s, Artico, and StorNext M-Series MDCs)
quantum-snfs-nas-1.2.5-5580.el6.x86_64.tar.gz
CentOS7 (supported on Xcellis Workflow Director)
quantum-snfs-nas-1.2.5-5580.el7.centos.x86_64.tar.gz
2. From the console command line, run the following command to upgrade your StorNext NAS software:
system upgrade local
CLI Example
> system upgrade local
The following Packages will be upgraded:
------------------------------------------------------------
/var/upgrade/quantum-snfs-nas.x86_64 1.2.5-5580.el7.centos snfs-nas
------------------------------------------------------------
*** During system upgrade, filesystem access will be interrupted.
Are you sure (yes/No)? yes
After you have upgraded to StorNext NAS 1.2.5, you can perform the StorNext NAS 1.3.0 upgrade.
Prerequisites
l You must be on StorNext NAS 1.2.5 or later and StorNext 5.3.1 or later to upgrade to StorNext
NAS 1.3.0.
l You need either to be logged in as the NAS sysadmin user or issue the su - sysadmin command to
complete the system upgrade.
l If you are upgrading StorNext NAS offline, you will need to download the upgrade bundle. Contact
Quantum Service and Support (http://www.quantum.com/serviceandsupport/index.aspx).
Additional Upgrade Information
l When you upgrade to StorNext NAS 1.3.0, your Samba version will automatically update to version
4.2.
l Keep in mind that you will perform all StorNext NAS configuration tasks from the console command
line. To log in to the console command line for StorNext NAS, see Access the Console Command
Line on page 9.
l Each node generally takes less than 5 minutes to complete the upgrade, and in many cases, each
node will complete the upgrade between 30 seconds and 1 minute. Because the upgrade process
could briefly interrupt access, we recommend performing an upgrade when minimal disruption to file
access will occur.
View the StorNext NAS Software Version
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. At the prompt, enter the following:
system show version
CLI Example
> system show version
IMPORTANT
You must be running version 1.2.5 of StorNext NAS before upgrading to StorNext NAS 1.3.0.
You can upgrade a single StorNext NAS node, either directly from Quantum's remote upgrade repository
accessed over the Internet or from your local /var/upgrade directory in which you have downloaded the
applicable upgrade bundle.
Upgrade a Single Node Online from Quantum's Remote Upgrade Repository
1. From the console command line, run the following command to upgrade your StorNext NAS software
to the latest version:
system upgrade
The console returns a list of all RPM packages that will be upgraded.
2. At the prompt, enter yes to complete the upgrade.
CLI Example
> system upgrade
The following Packages will be upgraded:
------------------------------------------------------------
quantum-snfs-nas.x86_64 1.3.0-5687.el7.centos snfs-nas
sernet-samba.x86_64 99:4.2.12-20.el7 snfs-nas
sernet-samba-client.x86_64 99:4.2.12-20.el7 snfs-nas
sernet-samba-common.x86_64 99:4.2.12-20.el7 snfs-nas
sernet-samba-libs.x86_64 99:4.2.12-20.el7 snfs-nas
sernet-samba-libsmbclient0.x86_64 99:4.2.12-20.el7 snfs-nas
sernet-samba-winbind.x86_64 99:4.2.12-20.el7 snfs-nas
sernet-samba-ctdb.x86_64 99:4.2.12-20.el7 snfs-nas
------------------------------------------------------------
*** During system upgrade, filesystem access will be interrupted.
Are you sure (yes/No)? yes
Upgrade a Single Node Offline from the Local /var/upgrade Directory
1. Unpack and move the applicable tar.gz file to the /var/upgrade directory. See /var/upgrade Directory
on page 16.
CentOS6 (supported on G300s, Artico, and StorNext M-Series MDCs)
quantum-snfs-nas-1.3.0-5687.el6.x86_64.tar.gz
CentOS7 (supported on Xcellis Workflow Director)
quantum-snfs-nas-1.3.0-5687.el7.centos.x86_64.tar.gz
2. From the console command line, run the following command to upgrade your StorNext NAS software:
system upgrade local
3. At the prompt, enter yes to complete the upgrade.
CLI Example
> system upgrade local
The following Packages will be upgraded:
------------------------------------------------------------
/var/upgrade/quantum-snfs-nas.x86_64 1.3.0-5687.el7.centos snfs-nas
/var/upgrade/sernet-samba.x86_64 99:4.2.12-20.el7 snfs-nas
/var/upgrade/sernet-samba-client.x86_64 99:4.2.12-20.el7 snfs-nas
/var/upgrade/sernet-samba-common.x86_64 99:4.2.12-20.el7 snfs-nas
/var/upgrade/sernet-samba-libs.x86_64 99:4.2.12-20.el7 snfs-nas
/var/upgrade/sernet-samba-libsmbclient0.x86_64 99:4.2.12-20.el7 snfs-nas
/var/upgrade/sernet-samba-winbind.x86_64 99:4.2.12-20.el7 snfs-nas
/var/upgrade/sernet-samba-ctdb.x86_64 99:4.2.12-20.el7 snfs-nas
------------------------------------------------------------
*** During system upgrade, filesystem access will be interrupted.
Are you sure (yes/No)? yes
IMPORTANT
You must be running version 1.2.5 of StorNext NAS before upgrading to StorNext NAS 1.3.0.
You can upgrade NAS clusters, either directly from Quantum's remote upgrade repository accessed over
the Internet or from your local /var/upgrade directory in which you have downloaded the applicable
upgrade bundle.
Additional Considerations
l You must upgrade the master node before upgrading the non-master nodes.
l You can upgrade non-master nodes concurrently.
Upgrade a NAS Cluster Online from Quantum's Remote Upgrade Repository
1. From the master node, access the console command line. See Access the Console Command Line on
page 9.
2. At the prompt, enter the following command:
system upgrade
Results from the issued command
l All non-master nodes automatically leave the cluster.
l The master node is upgraded to 1.3.0.
l The NAS controller and supporting NAS processes are restarted on the master node.
Results from the issued command
l Each node will be upgraded to 1.3.0.
l Each node will automatically rejoin the cluster.
Additional Information
It can take several minutes for all nodes to rejoin the NAS cluster after the upgrade completes.
To check whether the NAS cluster has been rebuilt, issue the nascluster show command from
the master node:
l If each node's status is Joined, then the NAS cluster has been rebuilt.
l If any node's status is Not Ready, then the NAS cluster is still being rebuilt. Wait another
minute, and reissue the nascluster show command.
Upgrade a NAS Cluster Offline from the Local /var/upgrade Directory
1. Unpack and move the applicable tar.gz file to each node's /var/upgrade directory. See /var/upgrade
Directory on page 16.
CentOS6 (supported on G300s, Artico, and StorNext M-Series MDCs)
quantum-snfs-nas-1.3.0-5687.el6.x86_64.tar.gz
CentOS7 (supported on Xcellis Workflow Director)
quantum-snfs-nas-1.3.0-5687.el7.centos.x86_64.tar.gz
2. From the master node, access the console command line. See Access the Console Command Line on
page 9.
Results from the issued command
l All non-master nodes automatically leave the cluster.
l The master node is upgraded to 1.3.0.
l The NAS controller and supporting NAS processes are restarted on the master node.
Results from the issued command
l Each node will be upgraded to 1.3.0.
l Each node will automatically rejoin the cluster.
Additional Information
It can take several minutes for all nodes to rejoin the NAS cluster after the upgrade completes.
To check whether the NAS cluster has been rebuilt, issue the nascluster show command from
the master node:
l If each node's status is Joined, then the NAS cluster has been rebuilt.
l If any node's status is Not Ready, then the NAS cluster is still being rebuilt. Wait another
minute, and reissue the nascluster show command.
Prerequisites
l You must be on StorNext NAS 1.2.5 or later and StorNext 5.3.1 or later to upgrade to StorNext
NAS 1.3.0.
l You need either to be logged in as the NAS sysadmin user or issue the su - sysadmin command to
complete the system upgrade.
l If you are upgrading StorNext NAS offline, you will need to download the upgrade bundle. Contact
Quantum Service and Support (http://www.quantum.com/serviceandsupport/index.aspx).
Additional Upgrade Information
l Keep in mind that you will perform all StorNext NAS configuration tasks from the console command
line. To log in to the console command line for StorNext NAS, see Access the Console Command
Line on page 9.
l Each node generally takes less than 5 minutes to complete the upgrade, and in many cases, each
node will complete the upgrade between 30 seconds and 1 minute. Because the upgrade process
could briefly interrupt access, we recommend performing an upgrade when minimal disruption to file
access will occur.
View the StorNext NAS Software Version
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. At the prompt, enter the following:
system show version
CLI Example
> system show version
* We recommend upgrading to StorNext NAS 1.3.0 before upgrading to StorNext NAS 1.3.x.
** You must be running StorNext NAS 1.3.0 to use the nascluster upgrade command.
Upgrade a Single Node Online from Quantum's Remote Upgrade Repository
1. From the console command line, run the following command to upgrade your StorNext NAS software
to the latest version:
system upgrade
The console returns a list of all RPM packages that will be upgraded.
2. At the prompt, enter yes to complete the upgrade.
CLI Example
> system upgrade
The following Packages will be upgraded:
------------------------------------------------------------
<packages_available_for_upgrade>
------------------------------------------------------------
*** During system upgrade, filesystem access will be interrupted.
Are you sure (yes/No)? yes
Upgrade a Single Node Offline from the Local /var/upgrade Directory
1. Unpack and move the applicable tar.gz file to the /var/upgrade directory. See Move Upgrade Files to
the /var/upgrade Directory.
2. From the console command line, run the following command to upgrade your StorNext NAS software:
system upgrade local
3. At the prompt, enter yes to complete the upgrade.
CLI Example
> system upgrade local
The following Packages will be upgraded:
------------------------------------------------------------
<packages_available_for_upgrade>
------------------------------------------------------------
*** During system upgrade, filesystem access will be interrupted.
Are you sure (yes/No)? yes
Prerequisites
l All nodes must be enabled and joined to the NAS cluster. See NAS Clusters on page 48.
l All nodes within the cluster must be at the same StorNext NAS version.
Cluster Upgrade Workflow
a. You must issue the nascluster upgrade command from the NAS cluster's master node.
b. The master node will then broadcast the command to all non-master nodes within the cluster.
c. After all non-master nodes have upgraded, the master node will upgrade itself to complete the
process.
Upgrade an Entire StorNext NAS Cluster Online from Quantum's Remote Upgrade Repository
1. From the console command line of the master node, run the following command to upgrade your
StorNext NAS cluster to the latest version of StorNext NAS:
nascluster upgrade
The console returns a list of all RPM packages that will be upgraded.
2. At the prompt, enter yes to complete the upgrade.
CLI Example
> nascluster upgrade
The following Packages will be upgraded:
------------------------------------------------------------
<packages_available_for_upgrade>
------------------------------------------------------------
*** During nascluster upgrade, filesystem access will be interrupted.
Are you sure (yes/No)? yes
Upgrade an Entire StorNext NAS Cluster Offline from the Local /var/upgrade Directory
1. Unpack and move the applicable tar.gz file to the /var/upgrade directory. See /var/upgrade Directory
on page 16.
2. From the console command line of the master node, run the following command to upgrade your
StorNext NAS software:
nascluster upgrade local
3. At the prompt, enter yes to complete the upgrade.
CLI Example
> nascluster upgrade local
The following Packages will be upgraded:
------------------------------------------------------------
<packages_available_for_upgrade>
------------------------------------------------------------
*** During nascluster upgrade, filesystem access will be interrupted.
Are you sure (yes/No)? yes
If an upgrade is available, you will see:
If an upgrade is not available, you will see:
You must configure your StorNext NAS System to authenticate users who will be accessing NAS shares.
When selecting your authentication service, keep in mind that StorNext NAS can support Access Control
Lists (ACLs) only when the NAS server is bound to an AD server.
Authentication Methods
Do one of the following to authenticate user access to NAS shares:
l Apply AD Authentication To NAS below
l Apply LDAP Authentication to NAS on page 40
l Configure Local NAS Authentication on page 43
Additional Considerations
l StorNext NAS can support Access Control Lists (ACLs) only when the NAS server is bound to an AD
server.
l If your environment consists of a large AD network, we recommend adding the StorNext NAS
System object to the AD server and allowing for replication to complete before configuring AD
authentication for your StorNext NAS System.
[idmap] (Optional) The method used to map UNIX IDs from the AD server for user
accounts.
Valid Entries
l rfc2307 (default)
l rid
l tdb
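For example, to select the TDB ID map explicitly, the value would be appended to the auth config ads command; the placement shown here is an assumption based on the parameter list above:
Example
auth config ads administrator ADS.ACME.COM tdb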
CLI Example
> auth config ads administrator ADS.ACME.COM
Please enter the password for user ADS.ACME.COM\administrator:
Applying ads configuration settings ...
Configured ads directory services authentication
4. Validate your AD configuration by displaying authenticated users. See View User Information on
page 44.
Note: The auth map ads user and auth map ads group commands are supported with AD
authentication only when the TDB idmap is used.
UID
To map the SID of an AD user to a specific UID, enter the following command:
auth map ads user <username> <UID>
The parameters are:
GID
To map the SID of an AD group to a specific GID, enter the following command:
auth map ads group <groupname> <GID>
The parameters are:
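The following hypothetical mappings, with illustrative names and IDs, show both commands in use (assuming the TDB idmap is configured):
Example
auth map ads user jsmith 10001
auth map ads group engineering 20001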
About ID Mapping
An ID map is used to map UNIX IDs from AD user accounts. When configuring your StorNext NAS
System to use AD, you can use the following ID map options.
RFC2307
The RFC2307 ID map option uses the AD UNIX Attributes mechanism. This mechanism guarantees
consistency in user IDs (UIDs) and group IDs (GIDs) when connecting to multiple StorNext NAS
Gateways across your environment.
When the auth config ads command is issued, the StorNext NAS System verifies whether RFC2307
has been configured on the AD server.
l If it has been configured, then the StorNext NAS System uses RFC2307 as the default ID map.
l If it has not been configured, then the auth config ads request returns an error stating such.
To set up the RFC2307 extension for AD, see the following Microsoft instructions:
https://technet.microsoft.com/en-us/library/cc754871(v=ws.11).aspx.
RID
The Relative Identifier (RID) ID map option converts a Security Identifier (SID) to an RID, using an
algorithm that allows all Quantum appliances to see the same UID.
TDB
The Trivial Database (TDB) ID map option tells Samba to generate UIDs and GIDs locally on demand.
Choosing the ID Map
Use the following table to determine the appropriate ID map value, depending on whether you have
multiple StorNext NAS Gateways in your environment and whether the RFC2307 extension has been
configured.
Multiple StorNext NAS Gateways: No; RFC2307 configured: Yes; ID map: RFC2307
Multiple StorNext NAS Gateways: No; RFC2307 configured: No; ID map: RID or TDB
Important
If you change from your current mapping configuration to a new mapping configuration, you will also
need to manually reconcile the mapping to any existing files within StorNext that users accessed
under the original mapping. Otherwise, users may not be able to access these files.
Directory Services
StorNext NAS supports the following directory services.
l OpenLDAP with Samba Schema (LDAPS)
l OpenLDAP with Kerberos (LDAP)
CLI Example
> auth config ldapsam Manager sam.acme.com MYDOMAIN.COM
Please enter the password for user cn=Manager:
Configured ldapsam directory services authentication
CLI Example
> auth config ldap kadmin nod.acme.com ACME.COM OD.ACME.COM
kadmin = Administrator-principal in Kerberos
nod.acme.com = LDAP/Kerberos-server
Kerberos KeyTab
If a Kerberos keytab has already been prepared for your StorNext NAS System, you can configure it as
the LDAP server's authenticated user without needing to specify an administrator user and password.
Configure a Kerberos Keytab as the NAS Authenticated User
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. Copy the keytab to the /var/upgrade directory on your StorNext NAS System.
3. At the prompt, enter the following command to import the keytab into the StorNext NAS System:
auth import keytab
CLI Example
> auth import keytab
Imported keytab /var/upgrade/krb5.keytab
4. After the keytab has been imported, enter the following command to specify the keytab as the
authenticated user for the LDAP server:
auth config ldap keytab <ip_addr|host> <ldap_domain> <kerberos_realm>
The parameters are:
CLI Example
> auth config ldap keytab nod.acme.com ACME.COM OD.ACME.COM
Configured ldap directory services authentication
CLI Example
> auth config local
Applying local configuration settings ...
Successfully configured local authentication
<username> User for whom to allow access to NAS shares on your StorNext NAS System.
[<UID> <GID>] (Optional) Specify a UID and GID for the newly created user.
CLI Example
> auth add local user joe
Please enter a password for the new user
Re-enter the password
Added user joe
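A hedged example of creating a local user with an explicit UID and GID, based on the optional parameters described above; the values are illustrative only:
Example
auth add local user anna 1005 1000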
CLI Example
> auth delete local user joe
Are you sure you want to delete the user joe (Yes/no)? yes
Deleted user joe
CLI Example
> auth change local password joe
Please enter the new password
Re-enter the password
Modified password for user joe
CLI Example
> auth show user mary
uid=1001(mary) gid=1000(ldapusers)
CLI Example
> auth show local users
4 local users:
1: uid=497(sysadmin) gid=0(root)
2: uid=1002(michael) gid=1000(ldapusers)
3: uid=2001(fred) gid=1000(ldapusers)
4: uid=1001(mary) gid=1000(ldapusers)
CLI Example from a StorNext NAS System incorrectly configured to use ADS
> auth show config
Status: Invalid - details: net ads join failed :Failed to join domain: failed
to set machine kerberos encryption types: Insufficient access
Type: ads
Domain: ADS.ACME.COM
Url: ldap://ADS.ACME.COM:389
DC: dc=ADS,dc=ACME,dc=COM
CN: administrator,dc=ADS,dc=ACME,dc=COM
IDMap: rfc2307
CLI Example from a StorNext NAS System correctly configured to use LDAPS
> auth show config
Status: OK
Type: ldapsam
Domain: MYDOMAIN.COM
Url: ldaps://sam.acme.com:636
DC: dc=NASTEST,dc=COM
CN: cn=Manager,dc=NASTEST,dc=COM
CLI Example from a StorNext NAS System correctly configured to use LDAP
> auth show config
Status: OK
Type: ldap
Domain: ACME.COM
Url: ldaps://nod.acme.com:636
DC: dc=ACME,dc=COM
CN: kadmin,dc=ACME,dc=COM
Realm: OD.ACME.COM
CLI Example
> auth reset config
Applying local configuration settings ...
Successfully reset configuration for local authentication
NAS Clusters 48
Supported NAS Cluster Configurations 50
NAS Cluster Command Scenarios 58
NAS Failover 63
Enable NAS Clusters 67
Join NAS Clusters 69
Set the VIP for a NAS Cluster 71
Remove Nodes from NAS Clusters 72
View NAS Cluster Information 74
NAS Clusters
StorNext Connect Manage NAS App vs. StorNext NAS
If you are configuring StorNext NAS for Xcellis Workflow Director, M44x, or M66x, you can use the
StorNext Connect Manage NAS application to perform supported tasks. If — after using the Manage
NAS app — you use the console command line interface to issue StorNext NAS commands, make sure
to (re)import the NAS configuration into StorNext Connect.
See the Manage NAS section of the StorNext Connect Documentation Center.
A StorNext NAS cluster provides users the ability to access NAS shares located on any of the NAS cluster's
nodes.
Through NAS clusters, you can also take advantage of additional features built into StorNext NAS, such as
NAS failover to ensure that users can always access NAS shares, or G300 load-balancing to maintain a
desirable level of network-response time.
Configuration Requirements
Make sure that you meet the following requirements when configuring StorNext NAS clusters.
Basic Requirements
To configure a basic StorNext NAS cluster, you must do the following:
l Ensure that at least two nodes are licensed for and running StorNext NAS.
l Designate the NAS cluster's preferred master node, to which the NAS clients connect to access
NAS shares.
l Assign each of the NAS cluster's nodes an IP address that is within the same network and subnet
used by the NAS clients — this network is typically the LAN client network.
l Enable a hosted StorNext file system to maintain configuration information about the NAS cluster.
Each node within the NAS cluster must be able to access this file system through the master node.
NAS Failover Requirements
To configure a StorNext NAS cluster for NAS failover, you must do the following:
l Ensure that each node within the NAS cluster is licensed for and running StorNext NAS.
l Set a NAS VIP for the NAS cluster. The NAS VIP must be set within the same network and subnet
used by the NAS cluster and NAS clients — this network is typically the LAN client network.
l For the Xcellis Workflow Director, configure both nodes of the NAS cluster and the NAS VIP on the
10 Gb network port, which is typically the LAN client network. If you use another network port, the
NAS cluster will fail to configure correctly.
l Enable global file locking within the StorNext file system before setting the NAS VIP for the
NAS cluster. See the snfs_config MAN page for more information on the fileLocks parameter.
For additional information, see NAS Failover on page 63.
Workflow
The following steps provide a typical workflow to use both in constructing and deconstructing
NAS clusters.
Construction Steps
Deconstruction Steps
Resource Topics
Review the following topics to access additional information about StorNext NAS features, supported
configurations, and configuration scenarios:
l Supported NAS Cluster Configurations below
l NAS Failover on page 63
l NAS Cluster Command Scenarios on page 58
Topics
Important
To enable NAS failover within these NAS clusters, a single NAS VIP must be assigned to the cluster.
Additional requirements apply to NAS NFS clusters. See NAS Failover on page 63.
NAS Failover Components
For StorNext NAS clusters configured on Xcellis, Artico, or M-Series MDC systems, NAS failover
functions as follows:
l NAS Failover is automatic.
l StorNext NAS software runs on both nodes, supporting an active/passive NAS failover configuration
if a NAS VIP has been configured for the cluster. For NAS NFS clusters, additional requirements
apply.
l StorNext NAS services are active on one node at a time, with the master node (Node 2) being the
preferred active node.
l Failback to the preferred master node is not automatic. See Transfer NAS Services to Another Node
on page 66.
NAS Failover Workflow
1. If the active master node becomes unavailable, StorNext NAS automatically fails NAS management
services over to the passive node within the NAS cluster.
2. When the preferred master node becomes available, failback is not automatic. You must manually
issue a release command to return NAS management services to the original master node. See
Transfer NAS Services to Another Node on page 66.
Important
For Xcellis Workflow Director, StorNext NAS should be configured on the 10 Gb network port. In
addition, the IP addresses reserved for both nodes of the NAS cluster and the NAS VIP must be
configured on the 10 Gb network port, which is typically the LAN client network. If you use another
network port, the NAS cluster will fail to configure correctly.
NAS Failover Workflow
1. If the active master node becomes unavailable, StorNext NAS automatically fails NAS management
services over to the passive node within the NAS cluster.
2. When the preferred master node becomes available, failback is not automatic. You must manually
issue a release command to return NAS management services to the original master node. See
Transfer NAS Services to Another Node on page 66.
Additional Information
l You must purchase the StorNext NAS license separately for each Xcellis or M-Series MDC system.
l A StorNext NAS license is included and pre-installed on every Artico at the factory.
l For help with configuring NAS clusters, see NAS Clusters on page 48.
l For an example scenario in configuring an MDC NAS cluster with NAS failover enabled, see NAS
Cluster Command Scenarios on page 58.
Important
The Load-Balancing feature is supported only on a NAS cluster of G300 systems, and only for SMB
shares.
StorNext NAS load-balancing allows you to group up to 8 G300s together as a NAS cluster. Users connect
to the preferred master node within the cluster, which then distributes connections to the other nodes within
the cluster.
Additional Considerations
l Before creating a NAS cluster of G300s, you need to determine which G300s will be part of the NAS
cluster. Any G300 running StorNext NAS software can be included as a node in the NAS cluster.
Note the IP address for each G300 to be included in the NAS cluster, as you will need them to assign
the G300 node to the NAS cluster.
l IP addresses must be ‘static’ for the G300s. Assigning IP addresses via DHCP may cause
unpredictable behavior.
Automatic Load-Balancing
Load-balancing ensures that connections are distributed to nodes within the cluster using a "least
number of connections" algorithm. In addition, if one of the nodes in the NAS cluster goes offline, its
current connections will be rerouted / reconnected to another node in the NAS cluster.
You can also configure a load-balanced NAS cluster for NAS failover. This configuration ensures that a
master node is always available to act as the load-balancer to distribute connections appropriately.
Important
To enable NAS failover within these NAS clusters, a single NAS VIP must be assigned to the cluster.
Additional requirements apply to NAS NFS clusters. See NAS Failover on page 63.
G300 Multi-Node Load-Balancing NAS Cluster with Failover
Scenario A: Load-Balancing without NAS Failover Workflow
l If a non-master node becomes unavailable, the master node redistributes all connections from the failed
node to other nodes within the cluster.
Scenario B: Load Balancing with NAS Failover Workflow
l If the active master node becomes unavailable, StorNext NAS automatically fails NAS management
services, including load-balancing operations, over to the next available node within the NAS cluster. The
next available node can be either the original preferred master node or another node within the NAS
cluster.
l When the preferred master node becomes available, failback is not automatic. You must manually issue
a release command to return NAS management services to the original master node. See Transfer NAS
Services to Another Node on page 66.
Additional Information
l You must purchase the StorNext NAS license separately for each G300.
l For help with configuring NAS clusters, see NAS Clusters on page 48.
l For an example scenario in configuring a G300 load-balancing NAS cluster, see NAS Cluster
Command Scenarios on page 58.
Important
A G300 Load-Balancing NAS cluster cannot export shares from an Artico system.
Configuration Components
For this type of configuration, the following applies:
l The NAS cluster consists of only the G300 nodes.
l The Xcellis Workflow Director or M-Series MDC systems host the StorNext file system.
l StorNext NAS software is enabled on both nodes of the Xcellis Workflow Director or M-Series MDC
system.
l The master G300 node communicates with the Xcellis Workflow Director or M-Series MDC system to
export shares for users.
l NAS failover is not available between the G300 and Xcellis Workflow Director or M-Series MDC
systems.
l Load-balancing is not available between the G300 and Xcellis Workflow Director or M-Series MDC
systems.
G300 Load-Balancing NAS Cluster with Xcellis Workflow Director
NAS Export Workflow
l The active master node of the G300 NAS Cluster exports shares from the StorNext file system located on
the Xcellis Workflow Director nodes.
l Xcellis Workflow Director nodes are not available for NAS failover or load-balancing operations.
Additional Information
l You must purchase the StorNext NAS license separately for each G300, and each Xcellis Workflow
Director or M-Series MDC.
l For help with configuring NAS clusters, see NAS Clusters on page 48.
l For an example scenario in configuring a G300 load-balancing NAS cluster, see NAS Cluster
Command Scenarios on the next page.
Prerequisites and Scenario Assumptions
Review the following information to better understand this scenario.
Prerequisites
l Enable global file locking within the StorNext file system before setting the NAS VIP for the
NAS cluster. See the snfs_config MAN page for more information on the fileLocks parameter.
l Set a NAS VIP to enable load-balancing and failover for the NAS cluster. The NAS VIP must be set
within the same network and subnet used by the NAS cluster and NAS clients — this network is
typically the LAN client network. See Set the VIP for a NAS Cluster on page 71.
l Obtain and install valid StorNext NAS licenses for each G300 to include in the NAS cluster.
l Configure each G300 within the NAS cluster to access the StorNext file system.
l Configure all G300s within the NAS cluster with the same NTP configuration. See Configure NTP on
page 102.
Assumptions
l StorNext file system: /stornext/snfs1.
l StorNext NAS Gateways within the NAS cluster:
o gw01: 10.20.4.35
o gw02: 10.20.4.36
o gw03: 10.20.4.37
l NAS VIP: 10.30.5.200
l DNS name for the NAS VIP: eng-nas-cluster.acme.com
l Master node: gw01
Steps
1. Access the StorNext NAS console command line from gw01.
2. Issue the following command to enable the master node for the NAS cluster:
nascluster enable master 10.20.4.35 /stornext/snfs1
3. Issue the following commands to enable gw02 and gw03 as nodes for the NAS cluster:
nascluster enable node 10.20.4.36
nascluster enable node 10.20.4.37
4. Issue the following command to set the NAS VIP for the NAS cluster:
nascluster set virtual ipaddr 10.30.5.200
5. Issue the following command to add gw01 to the NAS cluster:
nascluster join /stornext/snfs1
6. Issue the following commands to add gw02 and gw03 to the NAS cluster:
nascluster join /stornext/snfs1 10.20.4.36
nascluster join /stornext/snfs1 10.20.4.37
Result and Next Steps
The 3 G300 StorNext NAS Gateways are part of a NAS cluster that has been configured for
load-balancing and failover.
Proceed with adding shares. After this NAS cluster has been configured, users can
access the shares using eng-nas-cluster.acme.com, which is the DNS name associated with the
NAS VIP.
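For example, a Linux client could then mount an SMB share through the NAS VIP's DNS name; the share name, mount point, and user below are hypothetical:
Example
mount -t cifs //eng-nas-cluster.acme.com/myshare /mnt/myshare -o username=joe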
Prerequisites and Scenario Assumptions
Review the following information to better understand this scenario.
Prerequisites
l Enable global file locking within the StorNext file system before setting the NAS VIP for the
NAS cluster. See the snfs_config MAN page for more information on the fileLocks parameter.
l Set a NAS VIP to enable NAS failover for the NAS cluster. The NAS VIP must be set within the same
network and subnet used by the NAS cluster and NAS clients — this network is typically the LAN
client network. See Set the VIP for a NAS Cluster on page 71.
l For the Xcellis Workflow Director, configure both nodes of the NAS cluster and the NAS VIP on the
10 Gb network port, which is typically the LAN client network. If you use another network port, the
NAS cluster will fail to configure correctly.
Assumptions
l Node 1 IP address: 10.60.4.35
l Node 2 IP address: 10.60.4.36
l NAS VIP: 10.60.4.200
l DNS name for the NAS VIP: archive-cluster.acme.com
l Master node: Node 2
Steps
1. Access the StorNext NAS console command line from Node 2.
2. Issue the following command to enable the master node for the NAS cluster:
nascluster enable master 10.60.4.36 /stornext/snfs1
3. Issue the following command to enable Node 1 for the NAS cluster:
nascluster enable node 10.60.4.35
4. Issue the following command to set the NAS VIP for the NAS cluster:
nascluster set virtual ipaddr 10.60.4.200
5. Issue the following command to join the master node to the NAS cluster:
nascluster join /stornext/snfs1
6. Issue the following command to join Node 1 to the NAS cluster:
nascluster join /stornext/snfs1 10.60.4.35
Result and Next Steps
The NAS cluster is configured for NAS failover.
l Proceed with adding shares. After this NAS cluster has been configured, users can access the
shares using archive-cluster.acme.com, which is the DNS name associated with the NAS VIP.
l Log into the master node to configure the NAS cluster for an authentication scheme.
Keep in mind that you do not need to configure the other node in the NAS cluster. Instead, this
configuration setting is automatically synchronized to it. As you add, modify, or delete shares,
these changes are also synchronized to the other node in the NAS cluster. Remember that all
share changes must be issued from the master node.
In addition, due to the NAS failover capabilities of a NAS cluster, the master can change. We
recommend connecting to the NAS VIP to ensure that you are always connected to the active
master node.
The following scenario outlines how to create an Xcellis StorNext NAS NFS cluster with NAS failover
enabled.
Note: For NFS clusters, NAS failover is supported only on the Xcellis Workflow Director (CentOS7).
Prerequisites and Scenario Assumptions
Review the following information to better understand this scenario.
Prerequisites
l Enable global file locking within the StorNext file system before setting the NAS VIP for the
NAS cluster. See the snfs_config MAN page for more information on the fileLocks parameter.
l Set a NAS VIP to enable NAS failover for the NAS cluster. See Set the VIP for a NAS Cluster on
page 71.
l Configure both nodes of the NAS cluster and the NAS VIP on the 10 Gb network port. If you use
another network port, the NAS cluster will fail to configure correctly.
l Make sure that all clients are running NFSv4 to support the NFSv4.1 lock recovery feature.
l Make sure that all NAS cluster nodes are running CentOS7 to support the NFSv4.1 lock recovery
feature.
Assumptions
l Node 1 IP address: 10.60.4.35
l Node 2 IP address: 10.60.4.36
l NAS VIP: 10.60.4.200
l DNS name for the NAS VIP: archive-cluster.acme.com
l Master node: Node 2
Steps
1. Access the StorNext NAS console command line from Node 2.
2. Issue the following command to enable the master node for the NAS cluster:
nascluster enable master 10.60.4.36 /stornext/snfs1
3. Issue the following command to enable Node 1 for the NAS cluster:
nascluster enable node 10.60.4.35
4. Issue the following command to set the NAS VIP for the NAS cluster:
nascluster set virtual ipaddr 10.60.4.200
5. Issue the following command to join the master node to the NAS cluster:
nascluster join /stornext/snfs1
Result and Next Steps
The NAS cluster is configured for NAS failover.
l Proceed with adding shares. After this NAS cluster has been configured, users can access the
shares using archive-cluster.acme.com, which is the DNS name associated with the NAS VIP.
l Log into the master node to configure the NAS cluster for an authentication scheme.
Keep in mind that you do not need to configure the other node in the NAS cluster. Instead, this
configuration setting is automatically synchronized to it. As you add, modify, or delete shares,
these changes are also synchronized to the other node in the NAS cluster. Remember that all
share changes must be issued from the master node.
In addition, due to the NAS failover capabilities of a NAS cluster, the master can change. We
recommend connecting to the NAS VIP to ensure that you are always connected to the active
master node.
Scenario Assumptions
l System: Artico
l Node 2 IP address: 11.11.11.118
l Node 1 IP address: 11.11.11.116
l NAS VIP: 11.11.11.119
l Master node: Node 2
Steps: Create an Artico NAS Cluster
1. Access the StorNext NAS console command line from Node 2.
2. Issue the following command to enable Node 2 as the master:
nascluster enable master 11.11.11.118 /stornext/artico
3. Access the StorNext NAS console command line from Node 1.
4. Issue the following command to enable Node 1 for the NAS cluster:
nascluster enable node 11.11.11.116
5. From Node 2, issue the following command to set the NAS VIP:
nascluster set virtual ipaddr 11.11.11.119
6. Issue the following command to join Node 2 to the NAS cluster:
nascluster join /stornext/artico
7. Issue the following command to verify that Node 2 has been added to the NAS cluster:
nascluster show
8. Issue the following command to join Node 1 to the NAS cluster:
nascluster join /stornext/artico 11.11.11.116
9. Issue the following command to verify that Node 1 has been added to the NAS cluster:
nascluster show
Steps: Remove an Artico NAS Cluster
1. Access the StorNext NAS console command line from Node 2.
2. Issue the following command to remove Node 1 from the NAS cluster:
nascluster leave 11.11.11.116
3. Issue the following command to disable Node 1:
nascluster disable node 11.11.11.116
4. Issue the following command to remove Node 2 from the NAS cluster:
nascluster leave
5. Issue the following command to disable Node 2:
nascluster disable node 11.11.11.118
6. Issue the following command from both Nodes 1 and 2 to verify the NAS cluster has been removed:
nascluster show
NAS Failover
In the event that the active master node becomes unavailable, StorNext NAS failover automatically
transfers NAS management services from the active master node to another node in the NAS cluster.
Through this feature, clients have continuous access to NAS shares because NAS management services
can be run on any node within the NAS cluster.
StorNext NAS failover is supported with both SMB and NFS shares. For NFS clusters, NAS failover is
supported only on the Xcellis Workflow Director (CentOS7).
Note: For environments exporting NFS shares on G300s or MDCs that are not an Xcellis Workflow
Director (CentOS7), clients connect to the shares through the master StorNext NAS System static IP
address.
Enable Global File Locking
Enable global file locking within the StorNext file system before setting the NAS VIP for the NAS cluster. See
the snfs_config MAN page for more information on the fileLocks parameter.
Set a NAS VIP
StorNext NAS failover is automatically configured for NAS SMB clusters when you set a NAS virtual IP
(VIP) address for the NAS cluster. The NAS VIP is also required to configure NAS NFS clusters for NAS
failover, along with additional manual steps (see Manually Enable NAS NFS Clusters below).
When you set a NAS VIP for the NAS cluster, make sure to set it within the same network and subnet used
by the NAS cluster and NAS clients — this network is typically the LAN client network.
For the Xcellis Workflow Director, configure both nodes of the NAS cluster and the NAS VIP on the 10 Gb
network port, which is typically the LAN client network. If you use another network port, the NAS cluster will
fail to configure correctly.
See Set the VIP for a NAS Cluster on page 71.
Additional Information
l In the event that the active master node becomes unavailable, the NAS VIP allows for NAS failover
by transferring to the next available node in the NAS cluster, enabling it as the new active master
node.
l To ensure continuous access to NAS shares accessed through the NAS cluster, users should always
connect to the NAS cluster through the NAS VIP or virtual hostname assigned to the NAS VIP. In
addition, to ensure that you are always connected to the master node for CLI purposes, you should
connect through the NAS VIP.
Manually Enable NAS NFS Clusters
In addition to setting a NAS VIP, you must also manually enable StorNext NAS failover for NAS
NFS clusters through the console command line.
See Configure NAS Failover for NAS NFS Clusters.
Additional Information
l When you configure NFS shares in a StorNext NAS cluster for NAS failover, the master node
manages StorNext NAS services. In the event of NAS failover, these services along with the
NAS VIP are transferred to the new active master node.
l NAS failover for NFS clusters is supported only on the Xcellis Workflow Director (CentOS7).
Failover Pathways
When you configure NAS clusters, the first node added to the cluster becomes the preferred master node. If
the preferred master node fails, its duties fail over to the next available node in the NAS cluster. If this new
active master node fails, StorNext NAS does one of the following depending on the NAS cluster:
NAS cluster with 2 Xcellis Workflow Director, Artico, or MDC nodes
In an Xcellis Workflow Director, Artico, or MDC NAS cluster, StorNext NAS uses an active/standby failover
arrangement, in which the master node actively runs NAS services and the other node is on standby ready
to take over NAS services, as needed.
See Xcellis, Artico, and M-Series MDC NAS Clusters on page 51.
If you want NAS management services to be returned to the original master node, you must manually issue
a release command. See Transfer NAS Services to Another Node on the next page.
NAS cluster with up to 8 G300 nodes
In a G300 NAS cluster, StorNext NAS transfers services to the next available node. Keep in mind that
StorNext NAS does not automatically fail services back to the original master node.
See G300 NAS Clusters on page 54.
If you want NAS management services to be returned to the original master node, you must manually issue
a release command. See Transfer NAS Services to Another Node on the next page.
Failover Behavior
When a failover occurs, it will affect the StorNext NAS console command line, clients, and clusters as
follows.
Command Line Console Behavior
If a failover occurs, it could terminate your command line session. Log into your command line console
again.
Client Behavior
Failover notification is OS or client dependent. If a node in a NAS cluster fails, users connected to the node
might experience a momentary interruption to services. This interruption can range from a pause
communicating with the remote share to a user needing to reenter authentication credentials to access data
residing on the NAS share.
Cluster Behavior
l If the master node fails, the NAS management services and NAS VIP are transferred to another node in
the NAS cluster. Users may experience a more noticeable interruption in service. In some cases, users
may be required to reenter authentication credentials to access a NAS share.
l For a NAS cluster of G300s, any user connections being serviced by a failed non-master node are
redistributed to other nodes within the NAS cluster. In most cases, this transfer is completely transparent
to users accessing the share within the NAS cluster.
l For a NAS cluster of NFS shares, all NAS services along with the NAS VIP are transferred during a
NAS failover to the new active master node. The NAS failover service follows the appropriate sequence
to ensure that the NFSv4.1 lock recovery feature succeeds.
Duplicate Request Cache
When NAS failover occurs in a NAS cluster of NFS shares, the duplicate request cache is not preserved
and cannot be recovered by the new active master node, because the cache is stored only in memory.
Important
For NAS clusters of G300s that are configured without a VIP, this command will not return services
to the preferred master.
Configuration Requirements
Make sure that you meet the following requirements when configuring StorNext NAS clusters.
Basic Requirements
To configure a basic StorNext NAS cluster, you must do the following:
l Ensure that at least two nodes are licensed for and running StorNext NAS.
l Designate the NAS cluster's preferred master node, to which the NAS clients connect to access
NAS shares.
l Assign each of the NAS cluster's nodes an IP address that is within the same network and subnet
used by the NAS clients — this network is typically the LAN client network.
l Enable a hosted StorNext file system to maintain configuration information about the NAS cluster.
Each node within the NAS cluster must be able to access this file system through the master node.
NAS Failover Requirements
To configure a StorNext NAS cluster for NAS failover, you must do the following:
l Ensure that each node within the NAS cluster is licensed for and running StorNext NAS.
l Set a NAS VIP for the NAS cluster. The NAS VIP must be set within the same network and subnet
used by the NAS cluster and NAS clients — this network is typically the LAN client network.
l For the Xcellis Workflow Director, configure both nodes of the NAS cluster and the NAS VIP on the
10 Gb network port, which is typically the LAN client network. If you use another network port, the
NAS cluster will fail to configure correctly.
l Enable global file locking within the StorNext file system before setting the NAS VIP for the
NAS cluster. See the snfs_config MAN page for more information on the fileLocks parameter.
For additional information, see NAS Failover on page 63.
CLI Example
> nascluster enable master 10.65.188.89 /stornext/snfs1
Verifying NAS cluster configuration for 10.65.188.89 ...
NAS cluster enable master node 10.65.188.89 starting...
Updating system NAS cluster configuration ...
Check for master takeover ...
Publish master configuration ...
Setting master local auth config ...
Applying local configuration settings ...
Master node successfully enabled for NAS cluster using 10.65.188.89
CLI Example
> nascluster enable node 10.65.188.91
Verifying NAS cluster configuration for 10.65.188.91 ...
Node 10.65.188.91 successfully enabled for NAS cluster
Considerations
Review the following considerations before joining nodes to NAS clusters.
Sequence for Issuing Commands
l You must first enable nodes for NAS clustering before you can join them in NAS clusters. See Enable
NAS Clusters on page 67.
l When configuring a NAS cluster for NAS failover, you must also set a NAS VIP for the cluster before
joining nodes to the cluster. Additional requirements apply to NAS NFS clusters. See NAS Failover
on page 63.
Cluster Synchronization
When joining a node to a NAS cluster, the node is synchronized with the master node. This
synchronization ensures that all nodes in the cluster are using the same authentication scheme, and that
they all have the same shares configured.
Keep in mind that the StorNext NAS software synchronizes authentication and share configuration
information between all nodes in a NAS cluster. Changes made to a single StorNext NAS System will
not be synchronized between nodes unless you use the auth config or share commands. After you
have created a NAS cluster, you can only execute the auth config or share commands from the master
node.
With NAS failover configured, if you connect to the NAS cluster through the NAS VIP, you will ensure
that you are always connected to the master node.
Requirements
Before setting a NAS VIP, review the following information.
Important
The VIP set for the NAS cluster is NOT the same as the VIP used for the StorNext MDC network.
Make sure the NAS VIP host name used for the NAS cluster is NOT the same host name used for the
HA pair.
l For the Xcellis Workflow Director, configure both nodes of the NAS cluster and the NAS VIP on the
10 Gb network port, which is typically the LAN client network. If you use another network port, the
NAS cluster will fail to configure correctly.
l The NAS VIP must be set within the same network and subnet used by the NAS cluster and NAS
clients — this network is typically the LAN client network.
l Enable global file locking within the StorNext file system before setting the NAS VIP for the
NAS cluster. See the snfs_config MAN page for more information on the fileLocks parameter.
l If you have configured Active Directory (AD) to authenticate users accessing your NAS cluster, you
must add your NAS VIP to the same DNS as your AD server. Otherwise, users authenticated through
AD will not be able to access the NAS shares through the NAS cluster.
Additional Information
l If you need to change a NAS VIP, re-issue the nascluster set virtual ipaddr <ip_addr>
command from the master node.
l For a workflow of configuring NAS clusters with a NAS VIP, see NAS Cluster Command
Scenarios on page 58.
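For example, to move a NAS cluster to a new VIP (the address below is only a placeholder), connect to the master node and re-issue the command:
> nascluster set virtual ipaddr 10.60.4.201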
Additional Information
After you have removed and disabled a node from its NAS cluster, the node will still display in the NAS
cluster node list. To remove the disabled node from this list, you must completely rebuild the NAS cluster
by doing the following:
a. Remove each node from the NAS cluster. See Remove a Node from a NAS Cluster below.
b. Disable each node. See Disable a Node below.
c. Re-enable each node, excluding the node that you want removed from the NAS cluster node list.
See Enable NAS Clusters on page 67.
d. Re-join each node to the NAS cluster, again excluding the node that you want removed from the
NAS cluster node list. See Join NAS Clusters on page 69.
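A sketch of the full rebuild sequence for a hypothetical three-node cluster on /stornext/snfs1, with 10.1.1.3 as the preferred master, 10.1.1.2 as a node to keep, and 10.1.1.1 as the node to remove from the node list. All addresses are placeholders, and commands are issued from the master node's console unless noted otherwise:
> nascluster leave 10.1.1.1
> nascluster disable node 10.1.1.1
> nascluster leave 10.1.1.2
> nascluster disable node 10.1.1.2
> nascluster leave
> nascluster disable node 10.1.1.3
> nascluster enable master 10.1.1.3 /stornext/snfs1
From the console of the node you are keeping (10.1.1.2):
> nascluster enable node 10.1.1.2
Back on the master node (if the cluster uses a NAS VIP, re-set it now with nascluster set virtual ipaddr before joining):
> nascluster join /stornext/snfs1
> nascluster join /stornext/snfs1 10.1.1.2
> nascluster show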
CLI Example
> nascluster leave 10.1.1.1
NAS cluster leave applying settings ...
Updating system NAS cluster configuration ...
Resetting local auth config settings ...
Applying local configuration settings ...
Successfully left NAS cluster
Disable a Node
1. Log in to the console command line from the master node. See Access the Console Command Line on
page 9.
CLI Example Output from Master Node: SMB NAS Cluster of G300 Appliances
Configured for NAS Failover
> nascluster show
NAS Cluster IP: 10.65.188.89/eth0, Master: Yes, SNFS Root: /stornext/snfs1,
Joined: Yes
Load balancing: leastconn
Master IP: 10.65.188.89
VIP: 10.65.166.179 active
Nodes: 3
1: 10.65.188.89 (Joined)
2: 10.65.188.91 (Disabled)
3: 10.65.188.96 (Not-Ready)
Node States
The following table presents the different states of nodes.
State Description
Not-Ready A node has been enabled for the NAS cluster, but has not been joined to the cluster.
Disabled For master nodes, the node has not been joined to the cluster.
For non-master nodes, the node has been removed from the cluster.
Share Management 76
Create Shares 77
Modify Shares 79
Export and Import Share Information 82
Share Options 85
Delete Shares 88
View Shares 89
View Active Sessions 90
Enable or Disable the Support and Upgrade Shares 91
Share Management
StorNext Connect Manage NAS App vs. StorNext NAS
If you are configuring StorNext NAS for Xcellis Workflow Director, M44x, or M66x, you can use the
StorNext Connect Manage NAS application to perform supported tasks. If — after using the Manage
NAS app — you use the console command line interface to issue StorNext NAS commands, make sure
to (re)import the NAS configuration into StorNext Connect.
See the Manage NAS section of the StorNext Connect Documentation Center.
This section presents information about adding and managing NAS shares on your StorNext NAS
System.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
System. Any edits made directly to either of these files will be lost when the StorNext NAS System is
restarted, or when changes are made using any of the share commands.
Topics
l Create Shares below
l Modify Shares on page 79
l Export and Import Share Information on page 82
l Share Options on page 85
l Delete Shares on page 88
l View Shares on page 89
l View Active Sessions on page 90
l Enable or Disable the Support and Upgrade Shares on page 91
Create Shares
Prior to StorNext NAS 1.2.0, an administrator would have to create the directories being shared before
issuing the share add command. Now you can issue the share create command to simultaneously
create a directory and its share.
After issuing the share create command to create your first share, you can use the share add command
to continue to add shares to the directory.
Default Ownership Settings
In addition to creating a directory to be shared, the share create command assigns default ownership
settings. If you have configured your StorNext NAS System to use ADS, the share directories will have
the UID and GID of the AD administrator user. For all other authentication schemes, directories created
with share create are owned by the sysadmin user.
Important
If you use AD authentication with the RFC2307 idmap option, you cannot use the share create
command. See NAS Share Issues and FAQs on page 115.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
System. Any edits made directly to either of these files will be lost when the StorNext NAS System is
restarted, or when changes are made using any of the share commands.
Example
guest ok = yes
users = john sue mary
In the above example, the first option contains 14 characters and the
second option contains 21 characters. Both options are well within the
1024-character limit.
See Share Options on page 85.
[nfshosts = host1, hostN] (NFS shares only) Host(s) allowed access to an NFS share. If no host(s)
is provided, any host may access the NFS share.
When entering options, use the following conventions:
l You must enter nfshosts =.
l List hosts in one of the following ways:
o Host name or IP address
o Wildcards
o IP networks or netgroups
CLI Example
> share create smb myshare /stornext/snfs1/myshare
Share myshare successfully created
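A companion sketch for an NFS share (the share name, path, and hosts are placeholders) that makes the share read-only and limits access to one host and one wildcard domain, with hosts separated by spaces per the conventions above:
> share create nfs mynfsshare /stornext/snfs1/mynfsshare ro,sync nfshosts = eng1.acme.com *.lab.acme.com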
Add Shares
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. At the prompt, enter the following:
share add <share_type> <share_name> <share_path> [option, option, …] [nfshosts =
host1, hostN]
See Create a Share and its Directory for parameter descriptions.
3. Repeat steps 1 and 2 for each share to add to the StorNext NAS System.
CLI Example
> share add smb mysmbshare /stornext/snfs1/mysmbshare
Modify Shares
After adding SMB or NFS shares to your StorNext NAS System, you can modify the share settings. The
same settings accepted by the share add command can be modified with the share change command.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
System. Any edits made directly to either of these files will be lost when the StorNext NAS System is
restarted, or when changes are made using any of the share commands.
Modify Shares
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. At the prompt, enter the following:
share change <share_type> (<share_name> | global) <option> [option, …] [nfshosts
= host1, hostN]
The parameters are:
(<share_name> | global) The name assigned to a specific share. The input is limited to 64
characters.
If you are modifying options in the global section of the smb.conf
file, see Modify Global SMB Options on the next page
Example
guest ok = yes
users = john sue mary
In the above example, the first option contains 14 characters and the
second option contains 21 characters. Both options are well within
the 1024-character limit.
See Share Options on page 85.
[nfshosts = host1, hostN] (NFS shares only) Host(s) allowed access to an NFS share. If no
host(s) is provided, any host may access the NFS share.
When entering options, use the following conventions:
l You must enter nfshosts =.
l List hosts in one of the following ways:
o Host name or IP address
o Wildcards
o IP networks or netgroups
3. Repeat steps 1 and 2 for each share to modify on the StorNext NAS System.
CLI Example Command: Change the create mask and write list options of an SMB
share
> share change smb myshare create mask = 600, write list = @smb-rw
CLI Example Command: Change an NFS share to read only and restrict root
privileges to remote root users
> share change nfs mynfsshare ro,root_squash
Consideration
To append additional options to the share, you need to enter the share's current options along with
the new options.
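For example, assuming myshare was previously configured with create mask = 600 and write list = @smb-rw (as in the example above) and you now want to add admin users, list the current options with share show and then re-enter them along with the new option:
> share show myshare
> share change smb myshare create mask = 600, write list = @smb-rw, admin users = sysadmin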
If you need to add options to the global section of the smb.conf file, use the share change command
with the reserved share name global. In addition to any valid SMB option that can be applied to a share,
the following options can be added to the global section:
l interfaces
l log level
l map to guest
l max smbd processes
l socket options
CLI Example Command: Place the valid users option in the global section
> share change global valid users = @smb-ro
For a list of valid SMB share options, see Share Options on page 85.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
System. Any edits made directly to either of these files will be lost when the StorNext NAS System is
restarted, or when changes are made using any of the share commands.
[nfs | smb] (Optional) The type of shares — NFS or SMB — to export. If you do not
provide a share type, all shares will be exported, regardless of type.
[<filename>] (Optional) The file to which the configuration is written. If you do not
provide a file name, the configuration is written to /var/shares.config.
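A sketch of exporting only the NFS share configuration to the default file, assuming the export command is share export config (the counterpart of the share import config command shown below); confirm the exact command name on your system. The exported file uses the share-section format shown in the fragment that follows:
> share export config nfs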
ro
nfs_clients = *
[myshare042]
type = nfs
path = /stornext/snfs1/share042
anongid=150
anonuid=150
ro
nfs_clients = *
[nfs | smb] (Optional) The type of shares — NFS or SMB — to import. If you do not
provide a share type, all shares will be imported, regardless of type.
[<filename>] (Optional) The file from which the configuration is imported. If you do not
provide a file name, the configuration is read from /var/shares.config.
CLI Example
> share import config smb
Note: In the above example, only SMB shares are being imported. The Skipping import for
share... lines are referring to NFS shares that are not being imported.
Share Options
You can include SMB and NFS share options with the share add, share change, and share create
commands.
CLI Example Command: Specify a write list and admin users for myshare
> share add smb myshare /stornext/snfs1/myshare write list=james doris, admin
users = sysadmin
Option Conventions
When entering options, use the following conventions:
l Separate multiple options by a comma.
l For options that can have multiple values, separate the values by a space.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
System. Any edits made directly to either of these files will be lost when the StorNext NAS System is
restarted, or when changes are made using any of the share commands.
See the smb.conf(5) MAN page for descriptions of the accepted options, along with how best to use them
for your environment.
Default Options
If you do not provide options for SMB shares, default values are set to the following:
l writable = yes
l public = no
Default Options
If you do not provide ro/rw or async/sync options for NFS shares, default values are set to the
following:
l read write (rw)
l sync
Limiting NFS Share Access with Options
When you add an NFS share or change its options, StorNext NAS allows you to limit access to the NFS
share by hostname, IP Network, or netgroup. The following scenarios present options to limit NFS share
access.
Important
StorNext NAS 1.3.0 does not support the use of commas (,) to separate hosts in the nfshosts share option. If
you are defining multiple NFS hosts for a share, separate each host with a space.
See Scenario 3: Limit access to the myhost.acme.com, myhost2.acme.com, and 10.20.30.123 hosts
below for an example.
Note: For the following scenarios, the same options are specified for each host.
Scenario 1: Export myshare with the default options of rw and sync without
restricting access to any hosts
Command
> share add nfs myshare /stornext/snfs/myshare
Scenario 2: Add an NFS share as read only (ro) and secure for the eng.acme.com
host
Command
> share add nfs myshare /stornext/snfs/myshare ro,secure nfshosts = eng.acme.com
Scenario 3: Limit access to the myhost.acme.com, myhost2.acme.com, and 10.20.30.123 hosts
Command
> share add nfs myshare /stornext/snfs/myshare nfshosts = myhost.acme.com myhost2.acme.com
10.20.30.123
Delete Shares
You can remove SMB and NFS shares from the StorNext NAS configuration, and in turn, prohibit the
shares from being exported from the StorNext NAS System. Removing SMB and NFS shares from the
StorNext NAS configuration does not remove the share directory or any of its files.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
System. Any edits made directly to either of these files will be lost when the StorNext NAS System is
restarted, or when changes are made using any of the share commands.
Delete a Share
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. At the prompt, enter the following:
share delete <share_type> <share_name> [force]
The parameters are:
[force] If this option is not included, then you must confirm deletion of the share.
If this option is included, then the specified share is deleted without
confirmation.
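For example, to delete a hypothetical SMB share named myshare without being prompted for confirmation:
> share delete smb myshare force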
View Shares
You can view a list of shares being managed by the StorNext NAS System. The list includes the number of
shares, share name, share type, path, options, and host(s) associated with the share. You can also show
information for a defined group of shares.
Caution: StorNext NAS manages the smb.conf and /etc/exports files on your StorNext NAS
System. Any edits made directly to either of these files will be lost when the StorNext NAS System is
restarted, or when changes are made using any of the share commands.
Show Shares
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. At the prompt, enter the following:
share show [<share_name>] [<share_type>] [paged]
The parameters are:
<share_name> (Optional) The name of the share(s) for which to list information.
<share_type> (Optional) The type of share, either nfs or smb, for which to list
information.
CLI Example
> share show
4 shares:
1: nfs100 | nfs | /stornext/snfs/share100 | ro,sync |
myhost1.acme.com,10.20.30.37
2: myshare2 | nfs | /stornext/snfs/share2 | rw,sync |
myhost2.acme.com,10.20.30.35
3: myshare1 | nfs | /stornext/snfs/share1 | ro,wdelay,root_squash |
*.acme.com
4: myx | nfs | /stornext/snfs/myx | ro,async | *
CLI Example
> system show smb
1 connections:
PID | User | Group | Machine
---------------------------------------------
1: 1:12013 | mtester | marcom-print-group | 10.65.167.5
1 services:
Service | PID | Machine | Connected at
---------------------------------------------------------------------------
1: smb1 | 1:12013 | 10.65.167.5 | Wed Apr 13 13:15:18 2016
CLI Example
> system show nfs
2 remote clients:
Client | Mounted directory
---------------------------------------------
10.30.x.xxx | /stornext/snfs1/myshare1
10.30.xxx.xx | /stornext/snfs1/myshare1
1 remote client:
NFSv2/v3 Client | Mounted directory
-------------------------------------------------------------------------
10.30.x.xxx | /stornext/snfs1/share01
Note: Only the support and upgrade shares can be enabled or disabled using the following
commands. For all user-defined shares, use the share add or share delete command.
System Management 93
View System Status 94
View Your Current NAS Software Version 96
Disable or Enable SMB or NFS Services 97
Manage NFS Client Versions 97
Restart StorNext NAS 98
Manage Support Logs 99
View System Logs 100
Files Managed by StorNext NAS 101
Configure NTP 102
Backup and Restore 103
System Management
StorNext Connect Manage NAS App vs. StorNext NAS
If you are configuring StorNext NAS for Xcellis Workflow Director, M44x, or M66x, you can use the
StorNext Connect Manage NAS application to perform supported tasks. If — after using the Manage
NAS app — you use the console command line interface to issue StorNext NAS commands, make sure
to (re)import the NAS configuration into StorNext Connect.
See the Manage NAS section of the StorNext Connect Documentation Center.
This section provides the following help in managing your StorNext NAS system.
Topics
l View System Status below
l View Your Current NAS Software Version on page 96
l Disable or Enable SMB or NFS Services on page 97
l Manage NFS Client Versions on page 97
l Restart StorNext NAS on page 98
l Manage Support Logs on page 99
l View System Logs on page 100
l Files Managed by StorNext NAS on page 101
l Configure NTP on page 102
System Features
The system status command reports on the following features.
Feature Description
ctdb The state of the clustered trivial database (ctdb). This process manages the lock
state for the SMB cluster.
l If the Samba server daemon (smbd) is disabled or is not running within a NAS
cluster, then the ctdb state is disabled.
l If smbd is running within a NAS cluster, then the ctdb state is running.
haproxy The state of the haproxy process. This process is only applicable to a NAS cluster
of G300s — it manages the load balancing between the cluster's nodes.
l If the system is not part of a NAS cluster or if the node being checked is not the
master node, then the haproxy state is disabled.
l If the system is part of a NAS cluster and the node being checked is the master
node, then the haproxy state is running.
keepalived The state of the keepalived process. This process manages the VIP movement
from one node to another during NAS failover.
l If the system is not part of a NAS cluster, then the keepalived state is disabled.
l If the system is part of a NAS cluster, then the keepalived state is running.
[all] Enter all to view the status of all system services. Otherwise, only the
status of the StorNext NAS controller is returned.
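For example, to report the status of all system services, including the features described above:
> system status all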
CLI Example
> system show version
Disable NFSv4
1. Log in to the console command line from the node on which to disable services. See Access the Console Command Line on page 9.
Enable NFSv4
1. Log in to the console command line from the node on which to enable services. See Access the
Console Command Line on page 9.
2. At the prompt, enter the following:
system set nfsv4 yes
CLI Example
> system restart services
Restarting snnas_controller
CLI Example
> system restart services all
Stopping all services ...
keepalived
nfs
winbind
smbd
ctdb
Starting all services ...
ctdb
smbd
winbind
nfs
keepalived
Restarting controller ...
Restart of services successful
Note: Depending on your configuration, different services may be restarted while they are
running.
CLI Example
> supportbundle create
Gathering support logs...
Finished support package creation /var/snnas_auto_support.sh.tar.bz2 (155 KB)
Done.
CLI Example
> supportbundle send [email protected]
Gathering support logs...
Finished support package creation /var/snnas_auto_support.sh.tar.bz2 (155 KB)
Emailing support package...
Support package sent successfully.
Note: The supportbundle send command gathers the same logs and files as the
supportbundle create command.
CLI Example
> log view snnas_controller
This example allows you to view the snnas_controller log file.
CLI Example
> log watch snnas_controller
This example allows you to watch the snnas_controller log file.
Important
If you change any of the following files manually, modifications or edits to these files may be lost when
you restart the StorNext NAS System.
/etc/default/nfs
/etc/exports
/etc/krb5.conf
/etc/nslcd.conf
/etc/nsswitch.conf
/etc/ntp.conf
/etc/openldap/ldap.conf
/etc/pam.d/login
/etc/pam.d/password-auth
/etc/samba/smb.conf
/etc/ssh/sshd_config
/etc/sssd/sssd.conf
/etc/sysconfig/ctdb
/etc/sysconfig/nfs
Configure NTP
You can configure your StorNext NAS System to use a Network Time Protocol (NTP) server to control its
internal clock.
Action Command
Synchronize the StorNext NAS System with the NTP server ntp sync
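For example, to synchronize the system clock with the configured NTP server:
> ntp sync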
CLI Examples
Use the system backup and system restore commands to protect the configuration information
controlled by and stored in the StorNext NAS System.
System Backup
When the first NAS share is added to your system, a small StorNext NAS configuration file is placed in the
root directory of the StorNext file system. The system backup command protects this configuration file.
If you have a managed file system, your configuration file is stored in the .ADIC_INTERNAL_
BACKUP/snnas directory of your StorNext file system. Placing backup configuration files on a managed
file system ensures that redundant copies of the configuration are protected.
By default, the system backup command automatically runs once a day. You can manually back up the
StorNext NAS configuration file at any time, and we recommend running a manual backup after the file has
been modified. For detailed steps, see Back Up Your System below.
System Restore
If you need to restore the StorNext NAS configuration file from a previous backup, run the system
restore command. The most recent backup file will be in the /var directory. If you do not want to restore
from the most recent configuration file, make sure that the file from which to restore system configuration is
in the /var directory. For detailed steps, see Restore Your System on the next page.
CLI Example
> system backup
Creating configuration backup /var/snnas-db-package.tar.bz2.enc (79 KB)
Finished configuration backup /var/snnas-db-package.tar.bz2.enc (79 KB)
Saved configuration backup package /var/snnas-db-package.tar.bz2.enc to
/stornext/snfs1/.StorNext/.snnas/snnas-db-
package.tar.bz2.enc.ceb028ca81a211e587c7ecf4bbdc1708
Backup of configuration successful
Turn Off the NO_STORE Flag
1. Log in to the console command line from the MDC node. See Access the Console Command Line
on page 9.
2. At the prompt, enter the following:
/usr/adic/TSM/util/dm_util -d no_store <managed_file_system>/.ADIC_INTERNAL_
BACKUP/snnas
Important
Always execute the dm_util command. Otherwise, the StorNext NAS configuration file for your
StorNext NAS System may not be copied to tiers managed by File Manager.
CLI Example
> system restore
Are you sure you want to restore the configuration from backup (Yes/no)? Yes
Restore of configuration in-progress ...
Restore of configuration successful
Restarting services ...
Stopping all services . . .
keepalived
haproxy
winbind
smbd
ctdb
console
snnas_controller
Starting all services . . .
snnas_controller
console
ctdb
smbd
winbind
haproxy
keepalived
Topics
Add the log level option
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. At the prompt, enter the following:
share change global log level = 3
3. When you are finished troubleshooting the issue, return the logging level back to the default by issuing
the following command:
share change global log level = 0
To resolve this issue, you will need to verify that the Samba server is connected appropriately. If the Samba
server is not bound to the correct interface or IP address, you will need to reset its connection.
Verify Samba is connected to the correct interface
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. At the prompt, enter the following:
net show smb interfaces
3. Verify that Samba is connected to the IP address configured for your StorNext NAS network.
CLI Example
> net show smb interfaces
SMB bind interfaces: lo 10.65.190.178
Reset the Samba server's connection
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. At the prompt, enter the following:
net set smb interfaces <interface | ip_addr>
The parameter is:
<interface | ip_addr> The interface, such as eth0, or the IP address to which to reset the
Samba server's connection.
You can enter wildcards (*) with the interface name, such as eth*.
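For example, to rebind Samba to a specific address (the address below is a placeholder) or to all eth interfaces using a wildcard:
> net set smb interfaces 10.65.190.178
> net set smb interfaces eth*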
StorNext NAS processes are configured to restart automatically if they are inadvertently interrupted.
If you need to stop or restart processes, you can use the following commands:
l initctl on CentOS6
l systemctl on CentOS7
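For example, to restart the NAS controller process (snnas_controller, as referenced elsewhere in this guide), pick the command that matches your OS; this is a sketch that assumes the service name is the same on both platforms:
On CentOS6:
# initctl restart snnas_controller
On CentOS7:
# systemctl restart snnas_controller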
Why Can't I Mount NFS Shares After I Restart StorNext NAS Services?
When you issue the system restart services all command on the master node of an
NFS NAS cluster with NAS failover configured, a NAS failover occurs. If NFS clients have shares mounted
when the NAS failover occurs, the NFS services that were transferred to the new master node may not
allow I/O from the clients. In turn, the clients are unable to mount NFS shares.
To resolve this issue, you must manually restart NFS services.
Resolution
1. Log in to the console command line from the master node.
2. Issue the following commands to restart NFS services on the new master node:
systemctl restart nfs-config
systemctl restart nfs-lock
systemctl restart nfs-server
3. Verify that the NFS services have been restarted by issuing the following command:
exportfs
Resolution
1. Verify that you have entered the user name correctly.
2. Verify the user is an administrator or has administrative privileges. See Apply AD Authentication To
NAS on page 37 and View User Information on page 44.
Resolution
1. Verify that you have entered a valid password for the administrator user.
2. Retry the command. See Apply LDAP Authentication to NAS on page 40.
Resolution
Issue the auth config ads command, specifying RID as the ID Map. See About ID Mapping on page 39.
Why Am I Receiving a user 'administrator' not found Alert When I Issue the
share create Command?
When you issue the share create command, StorNext NAS creates a directory and assigns ownership of
the directory to the administrator user. However, if the administrator user has a UID of 0, the SMB server
denies root access because an administrator account with a UID of 0 is an invalid account.
You will receive the user 'administrator' not found (E-5060) alert if all 3 of the following settings
have been configured and you issue the share create command:
l You have configured your StorNext NAS System to authenticate user access to NAS shares with
Microsoft AD.
l You specified an ID map of RFC2307.
l The administrator user for your Microsoft AD server has a UID of 0 (zero).
Resolution
Issue the auth config ads command, specifying TDB or RID as the ID Map. See About ID Mapping on
page 39.
Verify that your StorNext NAS System is connected to your AD server
1. Log in to the console command line as the root user. See Access the Console Command Line on
page 9.
2. At the rootsh shell prompt, enter the following:
net ads testjoin
3. Do one of the following, depending on the command's return:
StorNext NAS System is connected to your AD server
Proceed to the next task: Verify that your AD server is connected to winbind below.
StorNext NAS System is not connected to your AD server
Re-authenticate your connection to the AD server by issuing the auth config ads command. See
Apply AD Authentication To NAS on page 37.
Verify that your AD server is connected to winbind
1. Log in to the console command line as the root user. See Access the Console Command Line on
page 9.
2. At the rootsh shell prompt, enter the following:
wbinfo -P
AD server is connected to winbind
Re-authenticate your connection to the AD server by issuing the auth config ads command. See
Apply AD Authentication To NAS on page 37.
AD server is not connected to winbind
Verify that winbind is running, and if it is not, restart winbind services by issuing the following command
from the StorNext NAS command line:
system restart services winbind
Verify Authentication
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. Verify that you can display authenticated users. See View User Information on page 44.
3. At the root shell prompt, enter the following command:
wbinfo -P
Result
l If this command succeeds and you can display authenticated users, then you can disregard the
authentication configuration errors.
l If this command does not succeed or you cannot display authenticated users, then you need to
reconfigure NAS user authentication. See NAS User Authentication on page 36.
Resolution
To change a directory's UNIX permissions, do one of the following:
l Access the directory from the StorNext NAS System, and then change the directory's UNIX permissions.
l Mount the directory from a Linux client, and then change the directory's UNIX permissions.
Resolution
Perform the following steps to create files and change permissions, and ensure that the OS X Samba client
remains active and operational.
1. Make sure that the Mac Samba client's user ID matches the StorNext NAS System's user ID.
2. Use Active Directory to authenticate access to shares.
Server Timeout
You could be experiencing issues with the StorNext NAS server timing out and not being able to process the
command. If you receive the following error, rerun the command:
Resolving IP Addresses into Hostnames
If you receive an [Errno 1] Unknown host (E-7014) error, your NAS cluster may not be performing IP
address-to-hostname resolution. To resolve this issue, you must configure your StorNext NAS server to
resolve IP addresses into hostnames.
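One common approach, assuming you are not adding the records to DNS, is to add an entry for each client and NAS cluster node to /etc/hosts on the StorNext NAS server; the address and host names below are placeholders:
10.65.167.5 client1.acme.com client1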
Use the share add Command
You can add shares to the directory by issuing the share add command. Keep in mind that you will need to
create the share directory before issuing this command.
See Create Shares on page 77.
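For example, assuming the directory does not yet exist (the path and share name are placeholders), create it from the root shell and then add the share from the console command line:
# mkdir -p /stornext/snfs1/projects
> share add smb projects /stornext/snfs1/projects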
Reconfigure AD Authentication with the RID or TDB ID Map Option
You can re-issue the auth config ads command with one of the following ID map options:
RID
The Relative Identifier (RID) ID map option converts a Security Identifier (SID) to an RID, using an algorithm
that allows all Quantum appliances to see the same UID.
TDB
The Trivial Database (TDB) ID map option tells Samba to generate UIDs and GIDs locally on demand.
Are There Any Issues With Sharing Volumes Under Storage Manager
Control?
If you share volumes that are under Storage Manager Control, you could experience the following issues.
Offline File Retrieval Hangs for Apple SMB Clients
Apple SMB clients do not recognize offline (truncated) files. If you use Finder along with Show icon
preview to locate a file, the process will attempt to read each file — including offline files — in the folder to
generate a preview. This process causes the SMB connection to hang while trying to retrieve offline files.
To avoid this issue, we recommend disabling Show icon preview. We also recommend against using
Finder so that a file retrieval is not triggered.
Offline File Retrieval Hangs for Apple Double Files
Apple SMB clients save file attributes in the Apple Double File. Every time that Finder is used or that the ls
-l command is issued, the Apple SMB client reads the saved file attributes. Depending on the Archive
destination, the file retrieves can cause the SMB connection to hang if offline (truncated) files are included in
the Apple Double File.
To avoid this issue, ensure that the Apple Double File is excluded from the truncation policy. Perform this
exclusion from the primary MDC by populating the /usr/adic/TSM/config/excludes.truncate file with
the following line:
BEGINS:._
Note: If the /usr/adic/TSM/config/excludes.truncate file does not exist, you need to create it
on the primary MDC.
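A sketch of creating the file with the required pattern from the primary MDC's root shell; the append (>>) form also works if the file already exists and you simply need to add the line:
# echo "BEGINS:._" >> /usr/adic/TSM/config/excludes.truncate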
Offline File Retrieval for NFS Clients
If an NFS client tries to access an offline file under Storage Manager Control, the process could hang until
the file becomes available.
To avoid this issue, use the dmnfsthreads=<value> mount option.
For more information, see StorNext 5 Man Pages Reference Guide.
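A sketch of one way the option is commonly applied, assuming it belongs in the StorNext (cvfs) /etc/fstab entry on the NAS system (in the same style as the tuning example later in this guide) and assuming a value of 4; confirm the correct placement and value in the StorNext 5 Man Pages Reference Guide:
snfs1 /stornext/snfs1 cvfs rw,dmnfsthreads=4 0 0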
Scenario 1
NFS clients cannot distinguish between two NFS shares that are exported from the same underlying
StorNext file system.
By default, the Linux NFS server uses the underlying StorNext file system's UUID to identify an exported
share. However, shares exported from the same StorNext file system have the same UUID, and therefore,
NFS clients cannot differentiate between them. Without being able to differentiate between shares, the
NFS client cannot mount a share.
To resolve this issue, you can assign a unique file system ID to each share.
Resolution
Issue the following command to assign a file system ID to each NFS share exported from the StorNext file
system:
share change nfs <share_name> fsid=<unique_file_system_ID>
See Modify Shares on page 79.
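For example, assuming the two hypothetical shares shown earlier in this guide, re-enter each share's existing options along with a distinct fsid value:
> share change nfs myshare1 ro,wdelay,root_squash,fsid=1
> share change nfs myshare2 rw,sync,fsid=2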
Scenario 2
If you attempt to mount an NFS share on an OS X client, you may receive an Operation is not permitted …
error message. This error can occur when OS X clients cannot mount NFS shares that are exported with
secure options.
Resolution
Perform one of the following steps to resolve this issue:
l Include the resvport option when mounting the NFS share.
l Include the insecure option when adding or changing options for an NFS share
CLI Example
> share add nfs myshare /stornext/snfs/myshare ro,insecure
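Alternatively, a sketch of the client-side workaround from the OS X Terminal, assuming hypothetical server and mount-point names; the mount point must exist before you mount the share:
sudo mkdir -p /Volumes/myshare
sudo mount -t nfs -o resvport server.acme.com:/stornext/snfs/myshare /Volumes/myshare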
Why Shouldn't I Share a Directory Over Both SMB and NFS Protocols?
If you attempt to share a directory over both SMB and NFS protocols, you risk file corruption, stale file locks,
and other file access issues in the share.
Resolution
Use standard file browsers, such as Windows Explorer or the Windows cmd prompt interface, to move files.
Do not move files from mounted shares.
Resolution
If you must enforce StorNext directory quotas, we recommend using the StorNext Windows client to write
data to the StorNext file system. This resolution ensures that users can access SMB shares in a read-only
mode to avoid content being rewritten and quotas being exceeded.
Resolution
1. Do one of the following to unlock the file:
l Deselect the file you are trying to save by selecting another file.
l Switch Finder to another mode, other than view mode.
l Close the Finder window.
2. Save your file.
If these steps do not resolve the issue, ask your system administrator to issue the system show smb
detail command on the StorNext NAS System. They will then need to submit a support ticket to
Quantum. See Manage Support Logs on page 99.
Verify whether the NAS controller has been disabled by the /etc/init/snnas_controller.override
file.
1. Log in to the console command line. See Access the Console Command Line on page 9.
2. At the prompt, enter the following:
nascluster show
If the following error is returned, the NAS controller has been disabled:
Controller not running error: request to POST failed
3. At the root shell prompt, enter the following command to verify that the NAS controller has been
disabled by the /etc/init/snnas_controller.override file:
# ls /etc/init/snnas_controller.override
If the /etc/init/snnas_controller.override file is returned, delete it. See Resolution below.
Resolution
1. Log in to the console command line as the root user. See Access the Console Command Line on
page 9.
2. At the prompt, enter the following to delete the /etc/init/snnas_controller.override file:
# rm /etc/init/snnas_controller.override
3. At the prompt, enter the following to restart the NAS controller:
# initctl start snnas_controller
cachebufsize
The cache buffer size is the amount of data that will be processed as a single block from the StorNext file
system.
l Recommended Setting: 256 KB
l Default Setting: 64 KB
Keep in mind that when you set the cache buffer size value, the StorNext file system will read/write the entire
buffer size. So even if you are modifying a file that is only 4 KB, the system will read/write the entire 256 KB
data block.
Important
When you increase the cache buffer size, you must also increase the buffer cache cap. See
buffercachecap below.
buffercachecap
The buffer cache cap is the total amount of memory reserved for caching data. The reserved cache memory
is shared by all mount points with the same cache size.
l Recommended Setting (dependent upon the amount of available memory): 4096 MB or 8192 MB
l Default Setting: 256 MB
Avoid reserving too much cache memory. Otherwise, there may not be enough physical memory available
to ensure that the FSM process doesn’t get swapped out.
dircachesize
The directory cache size sets the size of the directory information cache on the StorNext NAS
System. By increasing this value, the StorNext NAS System is able to keep more directory
structure data in memory, dramatically improving the speed of readdir operations by reducing metadata
network message traffic between it and FSM.
l Recommended Setting: 32 MB
l Default Setting: 10 MB
buffercache_iods
The buffer cache I/O daemons setting defines the number of background daemons used for performing
buffer cache I/O.
l Recommended Setting: 16
l Default Setting: 8
Example Command
snfs1 /stornext/snfs1 cvfs
rw,cachebufsize=256k,buffercachecap=4096,dircachesize=32m,buffercache_iods=16 0 0
For additional information, see the StorNext 5 Tuning Guide and the StorNext 5 Man Pages Reference
Guide.
Steps
1. From the Mac client, access the /etc/nsmb.conf file.
2. Populate this file with the following lines:
[default]
signing_required=no
3. Unmount and remount the SMB share.
SMB file transfer times should return to normal.