
IBM Spectrum Scale 5.1.0 Protocols Quick Overview (Nov 6, 2020)

This quick overview is organized into five sections:

Before Starting. Always start here to understand: common prerequisites; basic Install Toolkit operation; requirements when an existing cluster exists, both with and without an ESS.

Cluster Installation. Start here if you would like to: create a new cluster from scratch; add and install new GPFS nodes (client, NSD, GUI) to an existing cluster; create a file system on existing NSDs; create new NSDs on an existing cluster.

Protocol & File System Deployment. Start here if you already have a cluster and would like to add or enable protocols on existing cluster nodes.

Configuration. Start here if you already have a cluster with protocols enabled and would like to: check cluster state and health with basic logging/debugging; configure a basic SMB or NFS export or Object; configure File or Object protocol authentication; configure and enable File Audit Logging / Watch Folders.

Upgrade & Cluster additions. Start here to gain a basic understanding of: upgrade guidance; how to add nodes, NSDs, file systems, and protocols to an existing cluster.
Before Starting

1. How does the Install Toolkit work?
IBM Spectrum Scale Install Toolkit operation can be summarized by 4 phases:
1) User input via 'spectrumscale' commands
2) A 'spectrumscale install' phase
3) A 'spectrumscale deploy' phase
4) A 'spectrumscale upgrade' phase
Each phase can be run again at later points in time to introduce new nodes, protocols, authentication, NSDs, file systems, or updates.
All user input via 'spectrumscale' commands is recorded into a clusterdefinition.txt file in /usr/lpp/mmfs/5.1.0.x/installer/configuration/
Each phase acts upon all nodes entered into the cluster definition file. For example, if you only want to deploy protocols in a cluster containing a mix of unsupported and supported OSs, input only the supported protocol nodes and leave all other nodes out of the cluster definition.

2. Hardware / performance sizing
Work with your IBM account team or Business Partner for suggestions on the best configuration to fit your environment. In addition, make sure to review the protocol sizing guide.

3. OS levels and CPU architecture
The Install Toolkit supports the following OSs:
x86: RHEL 7.x / 8.x, SLES 15, Ubuntu 20
ppc64 LE: RHEL 7.x / 8.x
s390x: RHEL 7.x / 8.x, SLES 15, Ubuntu 20
All cluster nodes the Install Toolkit acts upon must be of the same CPU architecture and endianness. All protocol nodes must be of the same OS, architecture, and endianness.

4. Repositories
A base repository must be set up on every node. For RHEL 8, also set up the AppStream repo.
RHEL check: yum repolist, dnf repolist
SLES check: zypper repos
Ubuntu check: apt edit-sources

5. Firewall & networking & SSH
All nodes must be networked together and ping-able via IP, FQDN, and hostname. Reverse DNS lookup must be in place. If /etc/hosts is used for name resolution, the ordering within it must be: IP FQDN hostname.
Promptless ssh must be set up between all nodes and themselves using IP, FQDN, and hostname.
Firewalls should be turned off on all nodes; otherwise, specific ports must be opened, both internally for GPFS and the installer, and externally for the protocols. See the IBM Knowledge Center for more details before proceeding.
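Before running any spectrumscale commands, a quick pre-flight loop can catch name-resolution and ssh problems early. A minimal sketch, assuming hypothetical names and addresses (10.11.10.11, cluster-node1.example.com); run it from the installer node and repeat the target list for every node in the cluster:

# /etc/hosts entries, if used, must be ordered: IP FQDN hostname, e.g.
#   10.11.10.11   cluster-node1.example.com   cluster-node1
# Check ping and promptless ssh by IP, FQDN, and short hostname:
for target in 10.11.10.11 cluster-node1.example.com cluster-node1; do
    ping -c1 -W2 "$target" >/dev/null || echo "FAIL ping: $target"
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$target" true || echo "FAIL ssh: $target"
done

BatchMode=yes makes ssh fail instead of prompting for a password, which mirrors what the toolkit requires.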
6. Time sync among nodes is required
A consistent time must be established on all nodes of the cluster. NTP can be automatically configured during install. See step 9 of the installation stage.

7. Clean up prior SMB, NFS, Object
Prior implementations of SMB, NFS, and Object must be completely removed before proceeding with a new protocol deployment. Refer to the cleanup guide within the IBM Knowledge Center.

8. If a GPFS cluster pre-exists
Proceed to the Protocol Deployment section as long as you have:
a) file system(s) created and mounted ahead of time, with nfs4 ACLs in place
b) promptless ssh access among all nodes
c) firewall ports open
d) CCR enabled
e) mmchconfig release=LATEST set
f) installed GPFS rpms matching the exact build dates of those included within the protocols package

9. If an ESS is part of the cluster
Proceed to the Cluster Installation section to use the Install Toolkit to install GPFS and add new nodes to the existing ESS cluster. Proceed to the Protocol Deployment section to deploy protocols.
a) CCR must be enabled
b) EMS node(s) must be in the ems nodeclass. IO nodes must be in their own nodeclass: gss or gss_ppc64
c) GPFS on the ESS nodes must be at minimum 5.0.5.x
d) All quorum and quorum-manager nodes are recommended to be at the latest levels possible
e) A CES shared root file system has been created and mounted on the EMS

10. Protocols in a stretch cluster
Refer to the stretch cluster use case within the Knowledge Center.

11. Extract the Spectrum Scale package
With 5.1.0.0, there is no longer a protocols-specific package. Any Standard, Advanced, or Data Management package is now sufficient for protocol deployment. Extracting the package will present a license agreement.
./Spectrum_Scale_Data_Management-5.1.0.x-<arch>-Linux-install

12. Explore the spectrumscale help
From /usr/lpp/mmfs/5.1.0.x/installer, use the -h flag:
./spectrumscale -h
./spectrumscale setup -h
./spectrumscale node add -h
./spectrumscale config -h
./spectrumscale config protocols -h

13. FAQ and Quick Reference
Refer to the Knowledge Center Quick Reference and the Spectrum Scale FAQ.

Cluster Installation

1. Set up the node that will start the installation
Pick an IP existing on this node which is accessible to/from all nodes via promptless ssh:
./spectrumscale setup -s IP
Setup in an ESS environment: if the spectrumscale command is being run on a node in a cluster with an ESS, make sure to switch to ESS mode (see the ESS examples in the Examples section below):
./spectrumscale setup -s IP -st ess

2. Populate the cluster
If a cluster pre-exists, the Install Toolkit can automatically traverse the existing cluster and populate its clusterdefinition.txt file with current cluster configuration details. Point it at a node within the cluster with promptless ssh access to all other cluster nodes:
./spectrumscale config populate -N hostname
If in ESS mode, point config populate to the EMS:
./spectrumscale config populate -N ems1
*Note the limitations of the config populate command.

3. Add NSD server nodes (non-ESS nodes)
Adding NSD nodes is necessary if you would like the install toolkit to configure new NSDs and file systems.
./spectrumscale node add hostname -n

4. Add NSDs (non-ESS devices)
NSDs can be added as non-shared disks seen by a primary NSD server, or as shared disks seen by a primary and multiple secondary NSD servers.
In this example we add 4 /dev/dm disks seen by both primary and secondary NSD servers:
./spectrumscale nsd add -p primary_nsdnode_hostname -s secondary_nsdnode_hostname /dev/dm-1 /dev/dm-2 /dev/dm-3 /dev/dm-4

5. Define file systems (non-ESS FSs)
File systems are defined by assigning a file system name to one or more NSDs. File systems are defined, but not created, until this install is followed by a deploy.
In this example we assign all 4 NSDs to the fs1 file system:
./spectrumscale nsd list
./spectrumscale filesystem list
./spectrumscale nsd modify nsd1 -fs fs1
./spectrumscale nsd modify nsd2 -fs fs1
./spectrumscale nsd modify nsd3 -fs fs1
./spectrumscale nsd modify nsd4 -fs fs1
If desired, multiple file systems can be assigned at this point. See the IBM Knowledge Center for details on "spectrumscale nsd modify". We recommend a separate file system for shared root to be used with protocols.
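When many NSDs need the same file system assignment, a short shell loop avoids repetitive typing. A minimal sketch, assuming the auto-generated names nsd1 through nsd4 shown by './spectrumscale nsd list':

# Assign toolkit-defined NSDs nsd1..nsd4 to the fs1 file system in one pass
cd /usr/lpp/mmfs/5.1.0.x/installer
for nsd in nsd1 nsd2 nsd3 nsd4; do
    ./spectrumscale nsd modify "$nsd" -fs fs1
done
./spectrumscale filesystem list    # confirm fs1 now lists all four NSDs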
6. Add GPFS client nodes
./spectrumscale node add hostname
The installer will assign quorum and manager nodes by default. Refer to the IBM Knowledge Center if a specific configuration is desired.

7. Add Spectrum Scale GUI nodes
./spectrumscale node add hostname -g -a
The management GUI will automatically start after installation and allow for further cluster configuration and monitoring.

8. Configure performance monitoring
Configure performance monitoring consistently across nodes:
./spectrumscale config perfmon -r on

9. Configure network time protocol (NTP)
NTP can be automatically configured and started on all nodes, provided the NTP package has been pre-installed on all nodes:
./spectrumscale config ntp -e on -s ntp_server1,ntp_server2,ntp_server3,...

10. Configure callhome
Starting with 5.0.0.0, callhome is enabled by default within the Install Toolkit. Review the callhome settings and configure the mandatory options:
./spectrumscale callhome config -h
Alternatively, disable callhome:
./spectrumscale callhome disable

11. Name your cluster
./spectrumscale config gpfs -c my_cluster_name

12. Review your config
./spectrumscale node list
./spectrumscale nsd list
./spectrumscale filesystem list
./spectrumscale config gpfs --list
./spectrumscale install --precheck

13. Start the installation
./spectrumscale install

Upon completion you will have an active GPFS cluster with available NSDs, performance monitoring, time sync, callhome, and a GUI. File systems will be fully created and protocols installed in the next stage: deployment.

Install can be re-run in the future to:
- add GUI nodes
- add NSD server nodes
- add GPFS client nodes
- add NSDs
- enable and configure or update callhome settings

Protocol & File System Deployment

1. Set up the node that will start the installation
Setup is necessary unless spectrumscale setup has previously been run on this node for a past GPFS installation or protocol deployment. Pick an IP existing on this node which is accessible to/from all nodes via promptless ssh:
./spectrumscale setup -s IP
Setup in an ESS environment: if the spectrumscale command is being run on a node in a cluster with an ESS, make sure to switch to ESS mode (see the ESS examples in the Examples section below):
./spectrumscale setup -s IP -st ess

2. Populate the cluster
Optionally, the Install Toolkit can automatically traverse the existing cluster and populate its clusterdefinition.txt file with current cluster details. Point it at a node within the cluster with promptless ssh access to all other cluster nodes:
./spectrumscale config populate -N hostname
If in ESS mode, point config populate to the EMS:
./spectrumscale config populate -N ems1
*Note the limitations of the config populate command.

3. Add protocol nodes
./spectrumscale node add hostname -p
(repeat for each protocol node)

4. Assign protocol IPs (CES-IPs)
Add a comma-separated list of IPs to be used specifically for cluster export services such as NFS, SMB, and Object. Reverse DNS lookup must be in place for all IPs. CES-IPs must be unique and different from cluster node IPs.
./spectrumscale config protocols -e EXPORT_IP_POOL
*All protocol nodes must see the same CES-IP network(s). If CES-Groups are to be used, apply them after the deployment is successful.
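Reverse DNS for the CES-IP pool can be spot-checked before the deploy. A minimal sketch, assuming a hypothetical pool of five CES-IPs:

# Each CES-IP must resolve in reverse DNS; getent performs a reverse lookup for IPs
for ip in 172.31.1.10 172.31.1.11 172.31.1.12 172.31.1.13 172.31.1.14; do
    getent hosts "$ip" || echo "WARNING: no reverse DNS entry for $ip"
done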
5. Verify file system mount points are as expected
./spectrumscale filesystem list
*Skip this step if you set up file systems / NSDs manually and not through the install toolkit.

6. Configure protocols to point to a shared root file system location
A ces directory will be automatically created at the root of the specified file system mount point. This is used for protocol admin/config and needs >=4GB free. Upon completion of protocol deployment, the GPFS configuration will point to this as cesSharedRoot. It is recommended that cesSharedRoot be a separate file system.
./spectrumscale config protocols -f fs1 -m /ibm/fs1
*If you set up file systems / NSDs manually, perform a manual check of <mmlsnsd> and <mmlsfs all -L> to make sure all NSDs and file systems required by the deploy are active and mounted before continuing.

7. Enable the desired file protocols
./spectrumscale enable nfs
./spectrumscale enable smb

8. Enable the Object protocol if desired
./spectrumscale enable object
Configure an admin user, password, and database password to be used for Object operations:
./spectrumscale config object -au admin -ap -dp
Configure the Object endpoint using a single hostname with a round robin DNS entry mapping to all CES-IPs:
./spectrumscale config object -e hostname
Specify a file system and fileset name where your Object data will go:
./spectrumscale config object -f fs1 -m /ibm/fs1
./spectrumscale config object -o Object_Fileset
*The Object fileset must not pre-exist. If an existing fileset is detected at the same location, deployment will fail so that existing data is preserved.

9. Set up authentication
Authentication must be set up prior to using any protocols. If you are unsure of the appropriate authentication config, you may skip this step and revisit it by re-running the deployment at a later time, or manually using the mmuserauth commands. Refer to the IBM Knowledge Center for the many supported authentication configurations.
Install Toolkit AD example for File and/or Object:
./spectrumscale auth file ad
./spectrumscale auth object ad

10. Configure callhome
Starting with 5.0.0.0, callhome is enabled by default within the Install Toolkit. Review the callhome settings and configure the mandatory options:
./spectrumscale callhome config -h
Alternatively, disable callhome:
./spectrumscale callhome disable

11. Review your config
./spectrumscale node list
./spectrumscale deploy --precheck

12. Start the deployment
./spectrumscale deploy

Upon completion you will have protocol nodes with active cluster export services and IPs. File systems will have been created, and authentication will be configured and ready to use. Performance monitoring tools will also be usable at this time.

Deploy can be re-run in the future to:
- enable additional protocols
- enable authentication for file or Object
- create additional file systems (run install first to add more NSDs)
- add additional protocol nodes (run install first to add more nodes)
- enable and configure or update callhome settings

Configuration

Path to binaries
Add the following PATH variable to your shell profile to allow convenient access to the GPFS 'mm' commands:
export PATH=$PATH:/usr/lpp/mmfs/bin

Basic GPFS health
mmgetstate -aL
mmlscluster
mmlscluster --ces
mmnetverify

CES service and IP check
mmces address list
mmces service list -a
mmhealth cluster show
mmhealth node show -N all -v
mmhealth node show <component> -v
mmces events list -a

Authentication
mmuserauth service list
mmuserauth service check

Callhome
mmcallhome info list
mmcallhome group list
mmcallhome status list

File protocols (NFS & SMB)
Verify all file systems to be used with protocols have nfs4 ACLs and locking in effect. Protocols will not work correctly without this setting in place. Check with:
mmlsfs all -D -k

Example NFS export creation:
mkdir /ibm/fs1/nfs_export1
mmnfs export add /ibm/fs1/nfs_export1 -c "*(Access_Type=RW,Squash=no_root_squash,SecType=sys,Protocols=3:4)"
mmnfs export list

Example SMB export creation:
mkdir /ibm/fs1/smb_export1
chown "DOMAIN\USER" /ibm/fs1/smb_export1
mmsmb export add smb_export1 /ibm/fs1/smb_export1 --option "browseable=yes"
mmsmb export list
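A quick client-side test of the NFS export created above. A minimal sketch, assuming a hypothetical CES-IP of 172.31.1.10 and a separate NFS client host:

# On an NFS client: mount the export through a CES-IP and do a small write test
mkdir -p /mnt/nfs_test
mount -t nfs -o vers=4 172.31.1.10:/ibm/fs1/nfs_export1 /mnt/nfs_test
touch /mnt/nfs_test/testfile && ls -l /mnt/nfs_test
umount /mnt/nfs_test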
Object protocol
Verify the Object protocol by listing users and uploading an object to a container:
source $HOME/openrc
openstack user list
openstack project list
swift stat
date > test_object1.txt
swift upload test_container test_object1.txt
swift list test_container

Performance monitoring
systemctl status pmsensors
systemctl status pmcollector
mmperfmon config show
mmperfmon query -h

File Audit Logging and/or Watch Folder
File audit logging (FAL) and Watch Folder (WF) functionality is available with the Advanced and Data Management Editions of Spectrum Scale.
a) FAL: enable and configure using the Install Toolkit as follows:
./spectrumscale fileauditlogging enable
./spectrumscale filesystem modify --fileauditloggingenable gpfs1
./spectrumscale fileauditlogging list
./spectrumscale filesystem modify --logfileset <LOGFILESET> --retention <days> gpfs1
*If fewer than 3 protocol nodes, specify exact broker nodes:
./spectrumscale node add hostname -b
b) WF: enable and configure:
./spectrumscale watchfolder enable
c) Install the File Audit Logging / WF rpms on all nodes:
./spectrumscale install --precheck
./spectrumscale install
*gpfs.adv.* or gpfs.dm.* rpms must be installed on all nodes.
d) Deploy the File Audit Logging / WF configuration:
./spectrumscale deploy --precheck
./spectrumscale deploy
*WF: once deployed, use the mmwatch command to start a watch on a file system or fileset.
e) Check the status:
mmhealth node show FILEAUDITLOG -v
mmhealth node show MSGQUEUE -v
mmaudit all list
mmmsgqueue status
mmaudit all consumeStatus -N <node list>
mmwatch all list

Logging & debugging
Installation / deployment logs:
/usr/lpp/mmfs/5.1.0.x/installer/logs
Verbose logging for all spectrumscale commands by adding '-v' immediately after ./spectrumscale:
/usr/lpp/mmfs/5.1.0.x/installer/spectrumscale -v <cmd>
GPFS default log location:
/var/adm/ras/
Enabling the Linux syslog or journal is recommended.

Data capture for support
System-wide data capture:
/usr/lpp/mmfs/bin/gpfs.snap
Installation/deploy/upgrade specific:
/usr/lpp/mmfs/5.1.0.x/installer/installer.snap.py

Upgrade & Cluster additions

Upgrading 4.2.3.x to 5.1.0.x
A direct path from 4.2.3.x to 5.1.0.x is not possible unless all nodes of the cluster are offline (see the offline section below). However, it is possible to upgrade first from 4.2.3.x to 5.0.5.x, and second from 5.0.5.x to 5.1.0.x, while the cluster is online.

Upgrading 5.0.x.x to 5.1.0.x
a) Extract the 5.1.0.x Spectrum Scale PTF package:
./Spectrum_Scale_Data_Management-5.1.0.x-Linux
b) Set up and configure the Install Toolkit:
./spectrumscale setup -s <IP of installer node>
./spectrumscale config populate -N <any cluster node>
**If config populate is incompatible with your cluster config, you will have to manually add the nodes and config to the Install Toolkit OR copy the last used clusterdefinition.txt file to the new 5.1.0.x Install Toolkit:**
cp -p /usr/lpp/mmfs/<5.0.5.x.your_last_level>/installer/configuration/clusterdefinition.txt /usr/lpp/mmfs/5.1.0.x/installer/configuration/
./spectrumscale node list
./spectrumscale nsd list
./spectrumscale filesystem list
./spectrumscale config gpfs
./spectrumscale config protocols
./spectrumscale upgrade precheck
./spectrumscale upgrade run

Upgrading 5.1.0.x to future PTFs
Follow the same procedure as indicated above.

Upgrade compatibility with LTFS-EE
a) ltfsee stop (on all LTFSEE nodes)
b) umount /ltfs (on all LTFSEE nodes)
c) dsmmigfs disablefailover (on all LTFSEE nodes)
d) dsmmigfs stop (on all LTFSEE nodes)
e) systemctl stop hsm.service (on all LTFSEE nodes)
f) Upgrade using the Install Toolkit
g) Upgrade LTFS-EE if desired
h) Reverse steps e through a and restart/enable

Upgrade compatibility with TCT
a) Stop TCT on all nodes prior to the upgrade:
mmcloudgateway service stop -N Node | Nodeclass
b) Upgrade using the Install Toolkit
c) Upgrade the TCT rpm(s) manually, then restart TCT

Offline upgrade using the Install Toolkit
The Install Toolkit supports offline upgrade of all nodes in the cluster or a subset of nodes in the cluster. This is useful for 4.2.3.x -> 5.1.0.x upgrades. It is also useful when nodes are unhealthy and cannot be brought into a healthy/active state for upgrade. See the Knowledge Center for limitations.
a) Check the upgrade configuration:
./spectrumscale upgrade config list
b) Add nodes that are already shut down:
./spectrumscale upgrade config offline -N <node1,node2>
./spectrumscale upgrade config list
c) Start the upgrade:
./spectrumscale upgrade precheck
./spectrumscale upgrade run

Upgrading subsets of nodes (excluding nodes)
The Install Toolkit supports excluding groups of nodes from the upgrade. This allows for staging cluster upgrades across multiple windows, for example, upgrading only NSD nodes and then, at a later time, upgrading only protocol nodes. It is also useful if specific nodes are down and unreachable. See the Knowledge Center for limitations.
a) Check the upgrade configuration:
./spectrumscale upgrade config list
b) Add nodes that are NOT to be upgraded:
./spectrumscale upgrade config exclude -N <node1,node2>
./spectrumscale upgrade config list
c) Start the upgrade:
./spectrumscale upgrade precheck
./spectrumscale upgrade run
d) Prepare to upgrade the previously excluded nodes:
./spectrumscale upgrade config list
./spectrumscale upgrade config exclude --clear
./spectrumscale upgrade config exclude -N <already_upgraded_nodes>
e) Start the upgrade:
./spectrumscale upgrade precheck
./spectrumscale upgrade run

Resume of a failed upgrade
If an Install Toolkit upgrade fails, it is possible to correct the failure and resume the upgrade without needing to recover all nodes/services. Resume with:
./spectrumscale upgrade run

Handling Linux kernel updates
The GPFS portability layer must be rebuilt on every node that undergoes a Linux kernel update. Apply the kernel, reboot, and rebuild the GPFS portability layer on each node prior to starting GPFS: /usr/lpp/mmfs/bin/mmbuildgpl. Alternatively, set mmchconfig autoBuildGPL=yes and run mmstartup.
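Putting those kernel-update steps together for one node. A minimal sketch, assuming a RHEL node on which GPFS can be safely stopped:

# Stop GPFS on this node, apply the kernel, and reboot
/usr/lpp/mmfs/bin/mmshutdown
yum update kernel kernel-devel
reboot
# After the reboot: rebuild the portability layer, then start GPFS
/usr/lpp/mmfs/bin/mmbuildgpl
/usr/lpp/mmfs/bin/mmstartup
/usr/lpp/mmfs/bin/mmgetstate    # the node should report 'active'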
Adding to the installation
The procedures below can be combined to reduce the number of installs and deploys necessary (see the combined sketch at the end of this section).

To add a node:
a) Choose one or more node types to add:
Client node: ./spectrumscale node add hostname
NSD node: ./spectrumscale node add hostname -n
Protocol node: ./spectrumscale node add hostname -p
GUI node: ./spectrumscale node add hostname -g -a
(repeat for as many nodes as you would like to add)
b) Install GPFS on the new node(s):
./spectrumscale install -pr
./spectrumscale install
c) If a protocol node is being added, also run deploy:
./spectrumscale deploy -pr
./spectrumscale deploy

To add an NSD:
a) Verify the NSD server connecting this new disk exists within the cluster.
b) Add the NSD(s) to the install toolkit:
./spectrumscale nsd add -h
(repeat for as many NSDs as you would like to add)
c) Run an install:
./spectrumscale install -pr
./spectrumscale install

To add a file system:
a) Verify free NSDs exist and are known to the install toolkit.
b) Define the file system:
./spectrumscale nsd list
./spectrumscale nsd modify nsdX -fs file_system_name
c) Deploy the new file system:
./spectrumscale deploy -pr
./spectrumscale deploy

To enable another protocol:
See the Protocol & File System Deployment section and proceed with steps 7, 8, 9, 10, and 11. Note that some protocols necessitate removal of the authentication configuration prior to enablement.
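As noted above, additions can be combined so that a single install and a single deploy cover several changes at once. A minimal sketch with hypothetical hostnames, device, and NSD name:

# Queue one new protocol node plus one new NSD and file system change,
# then run install and deploy once each
cd /usr/lpp/mmfs/5.1.0.x/installer
./spectrumscale node add proto-node9 -p
./spectrumscale nsd add -p nsd-server5 /dev/dm-9    # non-shared disk, primary server only
./spectrumscale nsd modify nsd9 -fs fs2             # 'nsd9' is the hypothetical auto-generated name
./spectrumscale install --precheck && ./spectrumscale install
./spectrumscale deploy --precheck && ./spectrumscale deploy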

**URL links are subject to change**


Examples
Example of readying Red Hat Linux nodes for Spectrum Scale installation and deployment of protocols

Configure promptless SSH (promptless ssh is required)
# ssh-keygen (if using RHEL 8.x, make sure to run ssh-keygen -m PEM or else the install toolkit will have issues with node logins)
# ssh-copy-id <FQDN of node>
# ssh-copy-id <IP of node>
# ssh-copy-id <non-FQDN hostname of node>
Repeat on all nodes, to all nodes, including the current node.

Turn off firewalls (the alternative is to open the ports specific to each Spectrum Scale functionality)
# systemctl stop firewalld
# systemctl disable firewalld
Repeat on all nodes.

How to check if a yum repository is configured correctly
# yum repolist
This should return no errors. It must also show an RHEL7.x base repository. Other repository possibilities include a satellite site, a custom yum repository, an RHELx.x DVD iso, or an RHELx.x physical DVD.

Use the included local-repo tool to spin up a repository for a base OS DVD (this tool works on RHEL, Ubuntu, SLES)
# cd /usr/lpp/mmfs/5.1.0.x/tools/repo
# cat readme_local-repo | more
# ./local-repo --mount default --iso /root/RHEL7.9.iso

What if I don't want to use the Install Toolkit: how do I get a repository for all the Spectrum Scale rpms?
# cd /usr/lpp/mmfs/5.1.0.x/tools/repo
# ./local-repo --repo
# yum repolist

Pre-install prerequisite rpms to make installation and deployment easier
# yum install kernel-devel cpp gcc gcc-c++ glibc sssd ypbind openldap-clients krb5-workstation

Turn off SELinux (or set it to permissive mode)
# sestatus
# vi /etc/selinux/config
Change SELINUX=xxxxxx to SELINUX=disabled, save, and reboot.
Repeat on all nodes.

Set up a default path to Spectrum Scale commands (not required)
# vi /root/.bash_profile
Add this line:
export PATH=$PATH:/usr/lpp/mmfs/bin
Save and exit, then log out and back in for the change to take effect.
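The readying steps above can be lightly automated. A minimal sketch, assuming a hypothetical node list and that root ssh is in use:

# Push the ssh key to every node, then disable SELinux non-interactively
NODES="cluster-node1 cluster-node2 cluster-node3 cluster-node4 cluster-node5 cluster-node6"
for n in $NODES; do
    ssh-copy-id "root@$n"
    ssh "root@$n" "sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config"
done
# Each node still needs a reboot for the SELinux change to take effect

Running ssh-copy-id against the FQDN, IP, and short hostname of each node, as in the manual steps, also seeds known_hosts for every name form; keep that repetition if you want zero prompts later.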

Example of a new Spectrum Scale cluster installation followed by a protocol deployment

Install Toolkit commands for installation:
- Toolkit is running from cluster-node1 with an internal cluster network IP of 10.11.10.11, which all nodes can reach

cd /usr/lpp/mmfs/5.1.0.x/installer/
./spectrumscale setup -s 10.11.10.11
./spectrumscale node add cluster-node1 -a -g
./spectrumscale node add cluster-node2 -a -g
./spectrumscale node add cluster-node3
./spectrumscale node add cluster-node4
./spectrumscale node add cluster-node5 -n
./spectrumscale node add cluster-node6 -n
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs cesSharedRoot -fg 1 "/dev/sdb"
./spectrumscale nsd add -p node6.tuc.stglabs.ibm.com -s node5.tuc.stglabs.ibm.com -u dataAndMetadata -fs cesSharedRoot -fg 2 "/dev/sdc"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs ObjectFS -fg 1 "/dev/sdd"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs ObjectFS -fg 1 "/dev/sde"
./spectrumscale nsd add -p node6.tuc.stglabs.ibm.com -s node5.tuc.stglabs.ibm.com -u dataAndMetadata -fs ObjectFS -fg 2 "/dev/sdf"
./spectrumscale nsd add -p node6.tuc.stglabs.ibm.com -s node5.tuc.stglabs.ibm.com -u dataAndMetadata -fs ObjectFS -fg 2 "/dev/sdg"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs fs1 -fg 1 "/dev/sdh"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs fs1 -fg 1 "/dev/sdi"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs fs1 -fg 2 "/dev/sdj"
./spectrumscale nsd add -p node5.tuc.stglabs.ibm.com -s node6.tuc.stglabs.ibm.com -u dataAndMetadata -fs fs1 -fg 2 "/dev/sdk"
./spectrumscale config perfmon -r on
./spectrumscale config ntp -e on -s ntp_server1,ntp_server2,ntp_server3
./spectrumscale callhome enable <- If you prefer not to enable callhome, change the enable to a disable
./spectrumscale callhome config -n COMPANY_NAME -i COMPANY_ID -cn MY_COUNTRY_CODE -e MY_EMAIL_ADDRESS
./spectrumscale config gpfs -c mycluster
./spectrumscale node list
./spectrumscale install --precheck
./spectrumscale install

Install Outcome: a 6-node Spectrum Scale cluster with active NSDs
- 2 GUI nodes
- 2 NSD nodes
- 2 client nodes
- 10 NSDs
- performance monitoring configured
- callhome configured
**3 file systems defined, each with 2 failure groups. File systems will not be created until a deployment**

Install Toolkit commands for protocol deployment (assumes the cluster created from the above configuration):
- Toolkit is running from the same node that performed the install above, cluster-node1

./spectrumscale node add cluster-node3 -p
./spectrumscale node add cluster-node4 -p
./spectrumscale config protocols -e 172.31.1.10,172.31.1.11,172.31.1.12,172.31.1.13,172.31.1.14
./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot
./spectrumscale enable nfs
./spectrumscale enable smb
./spectrumscale enable object
./spectrumscale config object -e mycluster-ces
./spectrumscale config object -o Object_Fileset
./spectrumscale config object -f ObjectFS -m /ibm/ObjectFS
./spectrumscale config object -au admin -ap -dp
./spectrumscale node list
./spectrumscale deploy --precheck
./spectrumscale deploy

Deploy Outcome:
- 2 protocol nodes
- Active SMB and NFS file protocols
- Active Object protocol
- cesSharedRoot file system created and used for protocol configuration and state data
- ObjectFS file system created with an Object_Fileset created within
- fs1 file system created and ready

Next Steps:
- Configure authentication with mmuserauth, or by configuring authentication with the Install Toolkit and re-running the deployment
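A short post-deploy sanity check, using the same commands listed in the Configuration section:

# Run from any protocol node after the deploy completes
mmlscluster --ces          # cluster-node3 and cluster-node4 should show as CES nodes
mmces service list -a      # NFS, SMB, and OBJ should be running
mmces address list         # the CES-IPs should be distributed across the protocol nodes
mmhealth cluster show      # overall cluster health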
Example of adding protocol nodes to an ESS

Starting point
- If you have a 5148-22L protocol node, stop following these directions and refer to the ESS 5.3.6 (or higher) Quick Deployment Guide.
- The cluster containing the ESS is active and online.
- RHEL7.x/8.x, SLES15, or Ubuntu 20.04 is installed on all nodes that are going to serve as protocol nodes.
- An RHEL7.x/8.x, SLES15, or Ubuntu 20.04 base repository is set up on the nodes that are going to serve as protocol nodes.
- The nodes that will serve as protocol nodes have connectivity to the GPFS cluster network.
- Create a cesSharedRoot from the EMS: gssgenvdisks --create-vdisk --create-nsds --create-filesystem --contact-node gssio1-hs --crcesfs
- Mount the CES shared root file system on the EMS node and set it to automount. When done with this full procedure, make sure the protocol nodes are set to automount the CES shared root file system as well.
- Use the ESS GUI or CLI to create additional file systems for protocols if desired. Configure each file system for nfsv4 ACLs.
- Pick a protocol node to run the Install Toolkit from.
- The Install Toolkit is contained within these packages: Spectrum Scale Protocols Standard, Advanced, or Data Management Edition.
- Download and extract one of the Spectrum Scale Protocols packages to the protocol node that will run the Install Toolkit.
- Once extracted, the Install Toolkit is located in the /usr/lpp/mmfs/5.1.0.x/installer directory.

Inputting the configuration into the Install Toolkit with the commands detailed below involves pointing the Install Toolkit at the EMS node, telling the Install Toolkit about the mount points and paths to the CES shared root and, optionally, the Object file systems, and designating the protocol nodes and protocol config to be installed/deployed.

Install Toolkit commands:
./spectrumscale setup -s 10.11.10.11 -st ess <- internal GPFS network IP on the current installer node that can see all protocol nodes
./spectrumscale config populate -N ems-node <- OPTIONAL. Have the Install Toolkit traverse the existing cluster and auto-populate its config.
./spectrumscale node list <- OPTIONAL. Check the node configuration discovered by config populate.
./spectrumscale node add ems-node -a -e <- designate the EMS node for the Install Toolkit to use for coordination of the install/deploy
./spectrumscale node add cluster-node1 -p
./spectrumscale node add cluster-node2 -p
./spectrumscale node add cluster-node3 -p
./spectrumscale node add cluster-node4 -p
./spectrumscale config protocols -e 172.31.1.10,172.31.1.11,172.31.1.12,172.31.1.13,172.31.1.14
./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot
./spectrumscale enable nfs
./spectrumscale enable smb
./spectrumscale enable object
./spectrumscale config object -e mycluster-ces
./spectrumscale config object -o Object_Fileset
./spectrumscale config object -f ObjectFS -m /ibm/ObjectFS
./spectrumscale config object -au admin -ap -dp
./spectrumscale node list <- It is normal for ESS IO nodes to not be listed in the Install Toolkit. Do not add them.
./spectrumscale install --precheck
./spectrumscale install <- The install will install GPFS on the new protocol nodes and add them to the existing ESS cluster
./spectrumscale deploy --precheck <- Make sure CES shared root is mounted on all protocol nodes before continuing
./spectrumscale deploy <- The deploy will install and configure protocols on the new protocol nodes

Install Outcome:
- EMS node used as an admin node by the Install Toolkit to coordinate the installation
- 4 new nodes installed with GPFS and added to the existing ESS cluster
- Performance sensors automatically installed on the 4 new nodes and pointed back to the existing collector / GUI on the EMS node
- ESS I/O nodes and NSDs/vdisks left untouched by the Install Toolkit

Deploy Outcome:
- CES Protocol stack added to 4 nodes, now designated as protocol nodes with server licenses
- 4 CES-IPs distributed among the protocol nodes
- Protocol configuration and state data will use the cesSharedRoot file system, which was pre-created on the ESS
- Object protocol will use the ObjectFS file system, which was pre-created on the ESS
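The deploy precheck above expects CES shared root to be mounted on all protocol nodes. A quick way to confirm, assuming the file system is named cesSharedRoot:

# List the nodes that currently have cesSharedRoot mounted
mmlsmount cesSharedRoot -L
# Mount it cluster-wide if any protocol node is missing from the list
mmmount cesSharedRoot -a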
Example of adding protocols to an existing cluster

Pre-req configuration
- Decide on a file system to use for cesSharedRoot (>=4GB). Preferably a standalone file system solely for this purpose.
- Take note of the file system name and mount point. Verify the file system is mounted on all protocol nodes.
- Decide which nodes will be the protocol nodes.
- Set aside CES-IPs that are unused in the current cluster and network. Do not attempt to assign the CES-IPs to any adapters.
- Verify each protocol node has a pre-established network route and IP not only on the GPFS cluster network, but also on the same network the CES-IPs will belong to. When protocols are deployed, the CES-IPs will be aliased to the active network device matching their subnet. The CES-IPs must be free to move among nodes during failover cases.
- Decide which protocols to enable. The protocol deployment will install all protocols but will enable only the ones you choose.
- Add the new to-be protocol nodes to the existing cluster using mmaddnode (or use the Install Toolkit). In this example, we will add the protocol functionality to nodes already within the cluster.

Install Toolkit commands (the toolkit is running on a node that will become a protocol node):
./spectrumscale setup -s 10.11.10.15 <- internal GPFS network IP on the current installer node that can see all protocol nodes
./spectrumscale config populate -N cluster-node5 <- pick a node in the cluster for the toolkit to use for automatic configuration
./spectrumscale node add cluster-node5 -a -p
./spectrumscale node add cluster-node6 -p
./spectrumscale node add cluster-node7 -p
./spectrumscale node add cluster-node8 -p
./spectrumscale config protocols -e 172.31.1.10,172.31.1.11,172.31.1.12,172.31.1.13,172.31.1.14
./spectrumscale config protocols -f cesSharedRoot -m /ibm/cesSharedRoot
./spectrumscale enable nfs
./spectrumscale enable smb
./spectrumscale enable object
./spectrumscale config object -e mycluster-ces
./spectrumscale config object -o Object_Fileset
./spectrumscale config object -f ObjectFS -m /ibm/ObjectFS
./spectrumscale config object -au admin -ap -dp
./spectrumscale callhome enable <- If you prefer not to enable callhome, change the enable to a disable
./spectrumscale callhome config -n COMPANY_NAME -i COMPANY_ID -cn MY_COUNTRY_CODE -e MY_EMAIL_ADDRESS
./spectrumscale node list
./spectrumscale deploy --precheck
./spectrumscale deploy

Deploy Outcome:
- CES Protocol stack added to 4 nodes, now designated as protocol nodes with server licenses
- 4 CES-IPs distributed among the protocol nodes
- Protocol configuration and state data will use the cesSharedRoot file system
- Object protocol will use the ObjectFS file system
- Callhome will be configured
Example of upgrading protocol nodes / other nodes in the same cluster as an ESS

Pre-upgrade planning:
- Refer to the Knowledge Center for supported upgrade paths of Spectrum Scale nodes.
- If you have a 5148-22L protocol node attached to an ESS, refer to the ESS 5.3.6 (or higher) Quick Deployment Guide.
- Consider whether the OS, FW, or drivers on the protocol node(s) should be upgraded, and plan this either before or after the install toolkit upgrade.
- SMB: requires quiescing all I/O for the duration of the upgrade. Due to the SMB clustering functionality, differing SMB levels cannot co-exist within a cluster at the same time. This requires a full outage of SMB during the upgrade.
- NFS: recommended to quiesce all I/O for the duration of the upgrade. NFS experiences I/O pauses, and depending upon the client, mounts may disconnect during the upgrade.
- Object: recommended to quiesce all I/O for the duration of the upgrade. Object service will be down or interrupted at multiple times during the upgrade process. Clients may experience errors or be unable to connect during this time; they should retry as appropriate.
- Performance monitoring: collector(s) may experience small durations in which no performance data is logged as the nodes upgrade.

Install Toolkit commands for Scale 5.0.0.0 or higher:
./spectrumscale setup -s 10.11.10.11 -st ess <- internal GPFS network IP on the current installer node that can see all protocol nodes
./spectrumscale config populate -N ems1 <- Always point config populate to the EMS node when an ESS is in the same cluster
** If config populate is incompatible with your configuration, add the nodes and CES configuration to the install toolkit manually **
./spectrumscale node list <- This is the list of nodes the Install Toolkit will upgrade. Remove any non-CES nodes you would rather do manually
./spectrumscale upgrade precheck
./spectrumscale upgrade run

Example of upgrading protocol nodes / other nodes (not in an ESS)

Pre-upgrade planning:
- Refer to the Knowledge Center for supported upgrade paths of Spectrum Scale nodes.
- Consider whether the OS, FW, or drivers on the protocol node(s) should be upgraded, and plan this either before or after the install toolkit upgrade.
- SMB: requires quiescing all I/O for the duration of the upgrade. Due to the SMB clustering functionality, differing SMB levels cannot co-exist within a cluster at the same time. This requires a full outage of SMB during the upgrade.
- NFS: recommended to quiesce all I/O for the duration of the upgrade. NFS experiences I/O pauses, and depending upon the client, mounts may disconnect during the upgrade.
- Object: recommended to quiesce all I/O for the duration of the upgrade. Object service will be down or interrupted at multiple times during the upgrade process. Clients may experience errors or be unable to connect during this time; they should retry as appropriate.
- Performance monitoring: collector(s) may experience small durations in which no performance data is logged as the nodes upgrade.

Install Toolkit commands:
./spectrumscale setup -s 10.11.10.11 -st ss <- internal GPFS network IP on the current installer node that can see all protocol nodes
./spectrumscale config populate -N <hostname_of_any_node_in_cluster>
** If config populate is incompatible with your configuration, add the nodes and CES configuration to the install toolkit manually **
./spectrumscale node list <- This is the list of nodes the Install Toolkit will upgrade. Remove any non-CES nodes you would rather do manually
./spectrumscale upgrade precheck
./spectrumscale upgrade run
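After either upgrade flow completes, the code level can be verified per node before finalizing. A minimal sketch using standard GPFS commands; run the finalization only after every node is confirmed:

# Verify the installed level and the cluster's functionality level
mmdiag --version
mmlsconfig minReleaseLevel
# After ALL nodes are upgraded and verified, raise the cluster to the new level
mmchconfig release=LATEST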
