IBM - IBM Spectrum Scale Container Storage Interface Driver Guide Version 2.9 (2023)
IBM
SC28-3113-17
Note
Before using this information and the product it supports, read the information in “Notices” on page
69.
This edition applies to Version 5 release 1 modification 7 of the following products, and to all subsequent releases and
modifications until otherwise indicated in new editions:
• IBM Spectrum Scale Data Management Edition ordered through Passport Advantage® (product number 5737-F34)
• IBM Spectrum Scale Data Access Edition ordered through Passport Advantage (product number 5737-I39)
• IBM Spectrum Scale Erasure Code Edition ordered through Passport Advantage (product number 5737-J34)
• IBM Spectrum Scale Data Management Edition ordered through AAS (product numbers 5641-DM1, DM3, DM5)
• IBM Spectrum Scale Data Access Edition ordered through AAS (product numbers 5641-DA1, DA3, DA5)
• IBM Spectrum Scale Data Management Edition for IBM® ESS (product number 5765-DME)
• IBM Spectrum Scale Data Access Edition for IBM ESS (product number 5765-DAE)
• IBM Spectrum Scale Backup ordered through Passport Advantage® (product number 5900-AXJ)
• IBM Spectrum Scale Backup ordered through AAS (product numbers 5641-BU1, BU3, BU5)
• IBM Spectrum Scale Backup for IBM® Storage Scale System (product number 5765-BU1)
Significant changes or additions to the text and illustrations are indicated by a vertical line (|) to the left of the change.
IBM welcomes your comments; see the topic “How to send your comments” on page xxx. When you send information
to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without
incurring any obligation to you.
© Copyright International Business Machines Corporation 2015, 2023.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents
Tables................................................................................................................... v
Figures................................................................................................................ vii
About this information.......................................................................................... ix
Prerequisite and related information...................................................................................................... xxix
Conventions used in this information......................................................................................................xxix
How to send your comments....................................................................................................................xxx
Chapter 2. Introduction......................................................................................... 3
Chapter 3. Planning............................................................................................... 5
Hardware and software requirements........................................................................................................ 5
Deployment considerations.......................................................................................................................10
Roles and personas for IBM Spectrum Scale Container Storage Interface driver ..................................12
Chapter 4. Installation.........................................................................................13
Performing pre-installation tasks..............................................................................................................13
Installing IBM Spectrum Scale Container Storage Interface driver using CLIs....................................... 15
Offline Installation and Upgrade............................................................................................................... 17
Chapter 5. Upgrading...........................................................................................19
IBM Spectrum Scale CSI 2.8.0 to 2.9.0.................................................................................................... 19
Chapter 6. Configurations.................................................................................... 21
IBM Spectrum Scale Container Storage Interface driver configurations.................................................21
Secrets.................................................................................................................................................. 21
Certificates........................................................................................................................................... 21
Operator................................................................................................................................................21
Changing the configuration after deployment.......................................................................................... 31
Advanced configuration............................................................................................................................. 31
Creating a persistent volume (PV)....................................................................................................... 44
Creating a PersistentVolumeClaim (PVC)............................................................................................ 45
Tiering Support.......................................................................................................................................... 46
Compression Support................................................................................................................................ 46
Chapter 8. Managing IBM Spectrum Scale when used with IBM Spectrum Scale
Container Storage Interface driver................................................................... 49
Adding a new node to the Kubernetes or Red Hat OpenShift cluster...................................................... 49
Unmounting IBM Spectrum Scale file system.......................................................................................... 49
Shutting down IBM Spectrum Scale..........................................................................................................49
IBM Spectrum Scale monitoring considerations...................................................................................... 50
Upgrading IBM Spectrum Scale on IBM Spectrum Scale Container Storage Interface driver nodes..... 50
On the worker nodes............................................................................................................................ 50
On the nodes running CSI sidecars (Provisioner, Attacher, Snapshotter, Resizer etc) ......................51
Chapter 9. Cleanup.............................................................................................. 55
Cleaning up IBM Spectrum Scale Container Storage Interface driver and Operator by using CLIs........55
Notices................................................................................................................69
Trademarks................................................................................................................................................ 70
Terms and conditions for product documentation................................................................................... 70
Glossary..............................................................................................................73
Index.................................................................................................................. 81
Tables
2. Conventions...............................................................................................................................................xxix
3. CSI Features, OCP, Kubernetes and IBM Spectrum Scale Compatibility Matrix......................................... 5
5. Image Links for IBM Spectrum Scale Container Storage Interface driver 2.9.0.......................................11
6. IBM-Spectrum-Scale-CSI-operator role.................................................................................................... 12
9. Parameter description.................................................................................................................................27
Figures
2. Operator configuration................................................................................................................................ 22
3. Deployment of two IBM Spectrum Scale clusters with remote-mounted file systems............................ 25
About this information
This edition applies to IBM Spectrum Scale version 5.1.7 for AIX®, Linux®, and Windows.
IBM Spectrum Scale is a file management infrastructure, based on IBM General Parallel File System
(GPFS) technology, which provides unmatched performance and reliability with scalable access to critical
file data.
To find out which version of IBM Spectrum Scale is running on a particular AIX node, enter:
lslpp -l gpfs\*
To find out which version of IBM Spectrum Scale is running on a particular Linux node, enter:
rpm -qa | grep gpfs (for SLES and Red Hat Enterprise Linux)
To find out which version of IBM Spectrum Scale is running on a particular Windows node, open Programs
and Features in the control panel. The IBM Spectrum Scale installed program name includes the version
number.
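On Linux, the rpm query returns package names with the version embedded. The following sketch pulls the version field out of such a name; it runs on a sample package string, since the gpfs packages may not be installed on the machine where you try it:

```shell
# Sample package name as printed by 'rpm -qa | grep gpfs' (illustrative value):
pkg="gpfs.base-5.1.7-0.x86_64"
# Strip the package prefix and the architecture suffix, leaving version-release:
version=$(echo "$pkg" | sed -E 's/^gpfs\.[a-z]+-([0-9.]+-[0-9]+)\..*$/\1/')
echo "$version"   # prints 5.1.7-0
```

The same pattern works on any of the gpfs.* package names that the query returns.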
Which IBM Spectrum Scale information unit provides the information you need?
The IBM Spectrum Scale library consists of the information units listed in Table 1 on page x.
To use these information units effectively, you must be familiar with IBM Spectrum Scale and the AIX,
Linux, or Windows operating system, or all of them, depending on which operating systems are in use at
your installation. Where necessary, these information units provide some background information relating
to AIX, Linux, or Windows. However, more commonly they refer to the appropriate operating system
documentation.
Note: Throughout this documentation, the term "Linux" refers to all supported distributions of Linux,
unless otherwise specified.
x IBM Spectrum Scale: Container Storage Interface Driver Guide Version 2.9
Table 1. IBM Spectrum Scale library information units (continued)
Information unit Type of information Intended users
IBM Spectrum Scale: Concepts, Planning, and Installation Guide
Planning
• Planning for GPFS
• Planning for protocols
• Planning for Cloud services
• Planning for IBM Spectrum Scale
on Public Clouds
• Planning for AFM
• Planning for AFM DR
• Planning for AFM to cloud object
storage
• Planning for performance
monitoring tool
Table 1. IBM Spectrum Scale library information units (continued)
Information unit Type of information Intended users
IBM Spectrum Scale: Concepts, Planning, and Installation Guide (intended users: system administrators, analysts, installers, planners, and programmers of IBM Spectrum Scale clusters who are very experienced with the operating systems on which each IBM Spectrum Scale cluster is based)
• Upgrading IBM Spectrum® Scale non-protocol Linux nodes
• Upgrading IBM Spectrum Scale protocol nodes
• Upgrading GPUDirect Storage
• Upgrading AFM and AFM DR
• Upgrading object packages
• Upgrading SMB packages
• Upgrading NFS packages
• Upgrading call home
• Manually upgrading the
performance monitoring tool
• Manually upgrading pmswift
• Manually upgrading the IBM
Spectrum Scale management GUI
• Upgrading Cloud services
• Upgrading to IBM Cloud Object
Storage software level 3.7.2 and
above
• Upgrade paths and commands for
file audit logging and clustered
watch folder
• Upgrading IBM Spectrum Scale
components with the installation
toolkit
• Protocol authentication
configuration changes during
upgrade
• Changing the IBM Spectrum Scale
product edition
• Completing the upgrade to a new
level of IBM Spectrum Scale
• Reverting to the previous level of
IBM Spectrum Scale
Table 1. IBM Spectrum Scale library information units (continued)
Information unit Type of information Intended users
IBM Spectrum Scale: Administration Guide (intended users: system administrators or programmers of IBM Spectrum Scale systems)
This guide provides the following information:
Configuring
• Configuring the GPFS cluster
• Configuring GPUDirect Storage for
IBM Spectrum Scale
• Configuring the CES and protocol
configuration
• Configuring and tuning your
system for GPFS
• Parameters for performance
tuning and optimization
• Ensuring high availability of the
GUI service
• Configuring and tuning your
system for Cloud services
• Configuring IBM Power Systems
for IBM Spectrum Scale
• Configuring file audit logging
• Configuring clustered watch
folder
• Configuring Active File
Management
• Configuring AFM-based DR
• Configuring AFM to cloud object
storage
• Tuning for Kernel NFS backend on
AFM and AFM DR
• Configuring call home
• Integrating IBM Spectrum Scale
Cinder driver with Red Hat
OpenStack Platform 16.1
• Configuring Multi-Rail over TCP
(MROT)
Table 1. IBM Spectrum Scale library information units (continued)
Information unit Type of information Intended users
IBM Spectrum Scale: Administration Guide (intended users: system administrators or programmers of IBM Spectrum Scale systems)
• Managing protocol services
• Managing protocol user authentication
• Managing protocol data exports
• Managing object storage
• Managing GPFS quotas
• Managing GUI users
• Managing GPFS access control
lists
• Native NFS and GPFS
• Accessing a remote GPFS file
system
• Information lifecycle
management for IBM Spectrum
Scale
• Creating and maintaining
snapshots of file systems
• Creating and managing file clones
• Scale Out Backup and Restore
(SOBAR)
• Data Mirroring and Replication
• Implementing a clustered NFS
environment on Linux
• Implementing Cluster Export
Services
• Identity management on
Windows / RFC 2307 Attributes
• Protocols cluster disaster
recovery
• File Placement Optimizer
• Encryption
• Managing certificates to secure
communications between GUI
web server and web browsers
• Securing protocol data
• Cloud services: Transparent cloud
tiering and Cloud data sharing
• Managing file audit logging
• RDMA tuning
• Configuring Mellanox Memory
Translation Table (MTT) for GPFS
RDMA VERBS Operation
• Administering AFM
• Administering AFM DR
Table 1. IBM Spectrum Scale library information units (continued)
Information unit Type of information Intended users
IBM Spectrum Scale: Problem Determination Guide (intended users: system administrators of GPFS systems who are experienced with the subsystems used to manage disks and who are familiar with the concepts presented in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide)
This guide provides the following information:
Monitoring
• Monitoring system health by using IBM Spectrum Scale GUI
• Monitoring system health by using the mmhealth command
• Performance monitoring
• Monitoring GPUDirect storage
• Monitoring events through
callbacks
• Monitoring capacity through GUI
• Monitoring AFM and AFM DR
• Monitoring AFM to cloud object
storage
• GPFS SNMP support
• Monitoring the IBM Spectrum
Scale system by using call home
• Monitoring remote cluster through
GUI
• Monitoring file audit logging
• Monitoring clustered watch folder
• Monitoring local read-only cache
Troubleshooting
• Best practices for troubleshooting
• Understanding the system
limitations
• Collecting details of the issues
• Managing deadlocks
• Installation and configuration
issues
• Upgrade issues
• CCR issues
• Network issues
• File system issues
• Disk issues
• GPUDirect Storage
troubleshooting
• Security issues
• Protocol issues
• Disaster recovery issues
• Performance issues
Table 1. IBM Spectrum Scale library information units (continued)
Information unit Type of information Intended users
IBM Spectrum Scale: Command and Programming Reference Guide (intended users: system administrators of IBM Spectrum Scale systems, and application programmers who are experienced with IBM Spectrum Scale systems and familiar with the terminology and concepts in the XDSM standard)
This guide provides the following information:
Command reference
• cloudkit command
• gpfs.snap command
• mmaddcallback command
• mmadddisk command
• mmaddnode command
• mmadquery command
• mmafmconfig command
• mmafmcosaccess command
• mmafmcosconfig command
• mmafmcosctl command
• mmafmcoskeys command
• mmafmctl command
• mmafmlocal command
• mmapplypolicy command
• mmaudit command
• mmauth command
• mmbackup command
• mmbackupconfig command
• mmbuildgpl command
• mmcachectl command
• mmcallhome command
• mmces command
• mmchattr command
• mmchcluster command
• mmchconfig command
• mmchdisk command
• mmcheckquota command
• mmchfileset command
• mmchfs command
• mmchlicense command
• mmchmgr command
• mmchnode command
• mmchnodeclass command
• mmchnsd command
• mmchpolicy command
• mmchpool command
• mmchqos command
• mmclidecode command
• mmcrnsd command
• mmcrsnapshot command
• mmdefedquota command
• mmdefquotaoff command
• mmdefquotaon command
• mmdefragfs command
• mmdelacl command
• mmdelcallback command
• mmdeldisk command
• mmdelfileset command
• mmdelfs command
• mmdelnode command
• mmdelnodeclass command
• mmdelnsd command
• mmdelsnapshot command
• mmdf command
• mmdiag command
• mmdsh command
• mmeditacl command
• mmedquota command
• mmexportfs command
• mmfsck command
• mmfsckx command
• mmfsctl command
• mmgetacl command
• mmgetstate command
• mmhadoopctl command
• mmhdfs command
• mmhealth command
• mmimgbackup command
• mmimgrestore command
• mmimportfs command
• mmkeyserv command
Table 1. IBM Spectrum Scale library information units (continued)
Information unit Type of information Intended users
IBM Spectrum Scale: Command and Programming Reference Guide (intended users: system administrators of IBM Spectrum Scale systems, and application programmers who are experienced with IBM Spectrum Scale systems and familiar with the terminology and concepts in the XDSM standard)
• mmlinkfileset command
• mmlsattr command
• mmlscallback command
• mmlscluster command
• mmlsconfig command
• mmlsdisk command
• mmlsfileset command
• mmlsfs command
• mmlslicense command
• mmlsmgr command
• mmlsmount command
• mmlsnodeclass command
• mmlsnsd command
• mmlspolicy command
• mmlspool command
• mmlsqos command
• mmlsquota command
• mmlssnapshot command
• mmmigratefs command
• mmmount command
• mmnetverify command
• mmnfs command
• mmnsddiscover command
• mmobj command
• mmperfmon command
• mmpmon command
• mmprotocoltrace command
• mmpsnap command
• mmputacl command
• mmqos command
• mmquotaoff command
• mmquotaon command
• mmreclaimspace command
• mmremotecluster command
• mmremotefs command
• mmrepquota command
• mmrestoreconfig command
• mmrestorefs command
• mmrestrictedctl command
• mmrestripefile command
• mmsnapdir command
• mmstartup command
• mmstartpolicy command
• mmtracectl command
• mmumount command
• mmunlinkfileset command
• mmuserauth command
• mmwatch command
• mmwinservctl command
• mmxcp command
• spectrumscale command
Programming reference
• IBM Spectrum Scale Data
Management API for GPFS
information
• GPFS programming interfaces
• GPFS user exits
• IBM Spectrum Scale management
API endpoints
• Considerations for GPFS
applications
Table 1. IBM Spectrum Scale library information units (continued)
Information unit Type of information Intended users
IBM Spectrum Scale: Big Data and Analytics Guide (intended users: system administrators of IBM Spectrum Scale systems, and application programmers who are experienced with IBM Spectrum Scale systems and familiar with the terminology and concepts in the XDSM standard)
This guide provides the following information:
Summary of changes
Big data and analytics support
Hadoop Scale Storage Architecture
• Elastic Storage Server
• Erasure Code Edition
• Share Storage (SAN-based
storage)
• File Placement Optimizer (FPO)
• Deployment model
• Additional supported storage
features
IBM Spectrum Scale support for
Hadoop
• HDFS transparency overview
• Supported IBM Spectrum Scale
storage modes
• Hadoop cluster planning
• CES HDFS
• Non-CES HDFS
• Security
• Advanced features
• Hadoop distribution support
• Limitations and differences from
native HDFS
• Problem determination
IBM Spectrum Scale Hadoop
performance tuning guide
• Overview
• Performance overview
• Hadoop Performance Planning
over IBM Spectrum Scale
• Performance guide
IBM Spectrum Scale: Big Data and Analytics Guide (intended users: system administrators of IBM Spectrum Scale systems, and application programmers who are experienced with IBM Spectrum Scale systems and familiar with the terminology and concepts in the XDSM standard)
Cloudera HDP 3.X
• Planning
• Installation
• Upgrading and uninstallation
• Configuration
• Administration
• Limitations
• Problem determination
Open Source Apache Hadoop
• Open Source Apache Hadoop
without CES HDFS
• Open Source Apache Hadoop with
CES HDFS
Table 1. IBM Spectrum Scale library information units (continued)
Information unit Type of information Intended users
IBM Spectrum Scale Erasure Code Edition Guide (intended users: system administrators of IBM Spectrum Scale systems, and application programmers who are experienced with IBM Spectrum Scale systems and familiar with the terminology and concepts in the XDSM standard)
IBM Spectrum Scale Erasure Code Edition
• Summary of changes
• Introduction to IBM Spectrum Scale Erasure Code Edition
• Planning for IBM Spectrum Scale Erasure Code Edition
• Installing IBM Spectrum Scale
Erasure Code Edition
• Uninstalling IBM Spectrum Scale
Erasure Code Edition
• Creating an IBM Spectrum Scale
Erasure Code Edition storage
environment
• Using IBM Spectrum Scale
Erasure Code Edition for data
mirroring and replication
• Upgrading IBM Spectrum Scale
Erasure Code Edition
• Incorporating IBM Spectrum
Scale Erasure Code Edition in
an Elastic Storage Server (ESS)
cluster
• Incorporating IBM Elastic Storage
Server (ESS) building block in an
IBM Spectrum Scale Erasure Code
Edition cluster
• Administering IBM Spectrum
Scale Erasure Code Edition
• Troubleshooting
• IBM Spectrum Scale RAID
Administration
IBM Spectrum Scale Data Access Service (intended users: system administrators of IBM Spectrum Scale systems, and application programmers who are experienced with IBM Spectrum Scale systems and familiar with the terminology and concepts in the XDSM standard)
This guide provides the following information:
• Release notes
• Product overview
• Planning
• Installing
• Upgrading
• Administering
• Monitoring
• Troubleshooting
• Command reference (mmdas
command)
• Programming reference (REST
APIs)
Table 1. IBM Spectrum Scale library information units (continued)
Information unit Type of information Intended users
IBM Spectrum Scale Container Storage Interface Driver Guide (intended users: system administrators of IBM Spectrum Scale systems, and application programmers who are experienced with IBM Spectrum Scale systems and familiar with the terminology and concepts in the XDSM standard)
This guide provides the following information:
• Summary of changes
• Introduction
• Planning
• Installation
• Upgrading
• Configurations
• Using IBM Spectrum Scale
Container Storage Interface Driver
• Managing IBM Spectrum Scale
when used with IBM Spectrum
Scale Container Storage Interface
driver
• Cleanup
• Limitations
• Troubleshooting
Table 2. Conventions
Convention Usage
bold Bold words or characters represent system elements that you must use literally,
such as commands, flags, values, and selected menu options.
Depending on the context, bold typeface sometimes represents path names,
directories, or file names.
bold underlined Bold underlined keywords are defaults. These take effect if you do not specify a different keyword.
italic Italic words or characters represent variable values that you must supply.
Italics are also used for information unit titles, for the first use of a glossary term,
and for general emphasis in text.
<key> Angle brackets (less-than and greater-than) enclose the name of a key on the
keyboard. For example, <Enter> refers to the key on your terminal or workstation
that is labeled with the word Enter.
\ In command examples, a backslash indicates that the command or coding example continues on the next line.
{item} Braces enclose a list from which you must choose an item in format and syntax
descriptions.
[item] Brackets enclose optional items in format and syntax descriptions.
<Ctrl-x> The notation <Ctrl-x> indicates a control character sequence. For example,
<Ctrl-c> means that you hold down the control key while pressing <c>.
item... Ellipses indicate that you can repeat the preceding item one or more times.
| In synopsis statements, vertical lines separate a list of choices. In other words, a
vertical line means Or.
In the left margin of the document, vertical lines indicate technical changes to the
information.
Note: For CLI options that accept a list of option values, delimit the values with a comma and no space between them. For example, to display the state of three nodes, use mmgetstate -N NodeA,NodeB,NodeC. Exceptions to this syntax are noted within the individual command descriptions.
Chapter 1. Summary of changes
Summary of changes for IBM Spectrum Scale Container Storage Interface driver.
The following enhancements are made in this release:
• IBM Spectrum Scale Container Storage Interface driver plug-in prerequisite operations moved from CSI
driver to operator
• Support of log levels for CSI driver
• Kubernetes CSI sidecar containers upgrade
• Support for Kubernetes 1.26
• Security fixes
Chapter 2. Introduction
IBM Spectrum Scale is a clustered file system that provides concurrent access to a single file system or
set of file systems from multiple nodes. The nodes can be SAN-attached, network attached, a mixture
of SAN-attached and network attached, or in a shared nothing cluster configuration. This enables high-
performance access to this common set of data to support a scale-out solution or to provide a high
availability platform. For more information on IBM Spectrum Scale features, see the Product overview
section in the IBM Spectrum Scale: Concepts, Planning, and Installation Guide.
Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems
to containerized workloads on container orchestration systems like Kubernetes. The specification that the IBM Spectrum Scale Container Storage Interface driver implements is defined in the Container Storage Interface project repository on GitHub.
IBM Spectrum Scale Container Storage Interface driver allows IBM Spectrum Scale to be used as persistent storage for stateful applications running in Kubernetes clusters. Through the IBM Spectrum Scale Container Storage Interface driver, Kubernetes persistent volumes (PVs) can be provisioned from IBM Spectrum Scale, so containers can be used with stateful microservices such as database applications (MongoDB, PostgreSQL, and so on).
IBM implements the CSI specification of storage plug-in in the following manner:
The external-provisioner, external-snapshotter, external-attacher, and external-resizer Sidecar Containers
run as Kubernetes Deployments, which might be scheduled on separate infrastructure hosts for resiliency.
• The external-provisioner watches for create and delete volume API calls.
• The external-snapshotter watches for create and delete volume snapshots calls.
• The external-attacher watches for mount and unmount API calls.
• The external-resizer watches for volume expansion calls.
Features covered
The following features are available with IBM Spectrum Scale Container Storage Interface driver:
• Static provisioning: Ability to use existing directories and filesets as persistent volumes.
• Lightweight dynamic provisioning: Ability to create directory-based volumes dynamically.
• Fileset-based dynamic provisioning: Ability to create fileset-based volumes dynamically.
• Multiple file systems support: Ability to create volume across multiple file systems.
• Remote mount support: Ability to create volume on a remotely mounted file system.
• Operator support for easier deployment, upgrade, and cleanup.
• Supported volume access modes: RWX (ReadWriteMany) and RWO (ReadWriteOnce)
• Snapshot feature support: Ability to create a volume snapshot and to restore a snapshot into a new
volume.
• Volume Expansion support: Ability to expand a dynamically provisioned volume.
• Volume Cloning support: Ability to create a clone of an existing volume.
• Compression support: Ability to enable compression for dynamically provisioned volumes.
• Tiering support: Ability to enable tiering for dynamically provisioned volumes.
• Consistency Group support: Ability to group volumes for application groups.
• fsGroup support for RWO volumes: When used, Kubernetes recursively changes the ownership and
permission of volumes content to match the fsGroup specified in a pod’s securityContext.
• Volume stat support for fileset-based volumes: Ability to show available and used capacity of fileset-based volumes.
• Support for the multiple GUIs configuration (GUI HA) for a CSI driver on Vanilla Kubernetes: Ability to configure multiple GUIs when the GUI is installed on multiple nodes of a storage cluster.
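To make the dynamic-provisioning features above concrete, a fileset-based StorageClass and a PVC that uses it might look like the following sketch. The provisioner name matches the driver, but the file system name, cluster ID, and object names are placeholders to adapt to your environment; verify the parameter names against the storageClass documentation for your driver version:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibm-spectrum-scale-fileset      # placeholder name
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: "gpfs0"                 # file system for the fileset (placeholder)
  clusterId: "215057217487177715"       # primary cluster ID (placeholder)
  filesetType: "independent"            # fileset-based dynamic provisioning
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scale-fset-pvc                  # placeholder name
spec:
  accessModes:
    - ReadWriteMany                     # RWX, one of the supported access modes
  resources:
    requests:
      storage: 1Gi
  storageClassName: ibm-spectrum-scale-fileset
```

A pod then mounts the volume by referencing scale-fset-pvc in its volumes section, as with any other PVC.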
Chapter 3. Planning
Describes the planning information for using IBM Spectrum Scale Container Storage Interface driver.
Note: If you are using a Red Hat® OpenShift® cluster, replace "kubectl" with "oc" in all commands.
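For scripts that must run on both platforms, the kubectl/oc substitution can be made once up front; this is an illustrative sketch, and the namespace shown is a placeholder rather than a name defined by this guide:

```shell
# Select the CLI: use oc when it is available (OpenShift), kubectl otherwise.
CLI=kubectl
if command -v oc >/dev/null 2>&1; then
  CLI=oc
fi
# The chosen CLI is then used exactly as kubectl is used in the examples:
echo "$CLI get pods -n <csi-namespace>"
```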
Table 3. CSI Features, OCP, Kubernetes and IBM Spectrum Scale Compatibility Matrix
Columns: IBM Spectrum Scale CSI feature or parameter; IBM Spectrum Scale CSI level; OCP level; Kubernetes level; IBM Spectrum Scale level; IBM Spectrum Scale file system version.
• Volume Snapshot: CSI 2.2.0+; OCP 4.7+; Kubernetes 1.20+; IBM Spectrum Scale 5.1.1.0+; file system version N/A
• Permissions parameter in storageClass: CSI 2.3.0+; OCP N/A; Kubernetes N/A; IBM Spectrum Scale 5.1.1.2+ (recommended: 5.1.2.1 or later); file system version N/A
Table 4. Hardware requirements of IBM Spectrum Scale Container Storage Interface

Driver pod (ibm-spectrum-scale-csi-driver-xxxxx)
Deployed on: all worker nodes with the scale=true label
Description: The driver pod allows IBM Spectrum Scale to be used as persistent storage for stateful applications running in Kubernetes clusters.
• Container ibm-spectrum-scale-csi: CPU request 20mCPU, CPU limit 600mCPU; memory request 20MiB, memory limit 600Mi; ephemeral storage request 1GiB, limit 10GiB
• Container driver-registrar: CPU request 20mCPU, CPU limit 300mCPU; memory request 20MiB, memory limit 300Mi; ephemeral storage request 1GiB, limit 5GiB
• Container liveness-probe: CPU request 20mCPU, CPU limit 300mCPU; memory request 20MiB, memory limit 300Mi; ephemeral storage request 1GiB, limit 5GiB

Operator pod (ibm-spectrum-scale-csi-operator-xxxxxxxxxx-xxxxx)
Deployed on: a single worker node
Description: The controller runtime that manages CSI custom resources.
• Container operator: CPU request 50mCPU, CPU limit 600mCPU; memory request 50MiB, memory limit 600Mi; ephemeral storage request 1GiB, limit 5GiB
Chapter 3. Planning 7
Table 4. Hardware requirements of IBM Spectrum Scale Container Storage Interface (continued)

Attacher sidecar pod (ibm-spectrum-scale-csi-attacher-xxxxxxxxxx-xxxxx)
Deployed on: two worker nodes with the scale=true label
Description: The attacher sidecar pod runs along with the main CSI driver container and is responsible for attach and detach of persistent volumes.
• Container ibm-spectrum-scale-csi-attacher: CPU request 20mCPU, CPU limit 300mCPU; memory request 20MiB, memory limit 300Mi; ephemeral storage request 1GiB, limit 5GiB

Provisioner sidecar pod (ibm-spectrum-scale-csi-provisioner-xxxxxxxxxx-xxxxx)
Deployed on: a single worker node with the scale=true label
Description: The provisioner sidecar pod runs along with the main CSI driver container and is responsible for creation, deletion, or cloning of persistent volumes.
• Container ibm-spectrum-scale-csi-provisioner: CPU request 20mCPU, CPU limit 300mCPU; memory request 20MiB, memory limit 300Mi; ephemeral storage request 1GiB, limit 5GiB
8 IBM Spectrum Scale: Container Storage Interface Driver Guide Version 2.9
Table 4. Hardware requirements of IBM Spectrum Scale Container Storage Interface (continued)
Pods Where Contain CPU CPU Memory Memory Epheme Epheme Descript
deploye er name request limit request limit ral ral ion
d storage storage
request limit
Snapsho Single ibm- 20mCPU 300mCP 20MiB 300Mi 1GiB 5GiB Snapsho
tter worker spectru U tter
sidecar node m-scale- Sidecar
(ibm- with csi- is the
spectru scale=tr snapsho pod
m-scale- ue label tter which
csi- runs
snapsho along
tter- with the
xxxxxxx main
xxx- CSI
xxxxx) driver
containe
r
responsi
ble for
creation
or
deletion
of
Persiste
nt
Volume
Snapsho
ts.
Resizer Single ibm- 20mCPU 300mCP 20MiB 300Mi 1GiB 5GiB Resizer
sidecar worker spectru U Sidecar
(ibm- node m-scale- is the
spectru with csi- pod
m-scale- scale=tr resizer which
csi- ue label runs
resizer- along
xxxxxxx with the
xxx- main
xxxxx) CSI
driver
containe
r
responsi
ble for
Expansi
on of
Persiste
nt
Volume.
Note: For more information on resource requests and limits, see Kubernetes resource management in
Kubernetes documentation.
Deployment considerations
Ensure that the following steps are completed before you deploy IBM Spectrum Scale Container Storage
Interface driver in your cluster.
• Worker nodes selection: By default, Kubernetes or Red Hat OpenShift schedules the IBM Spectrum
Scale Container Storage Interface driver pods on all worker nodes. It is essential to have IBM Spectrum
Scale client installed on all these nodes. If you want to schedule the IBM Spectrum Scale Container
Storage Interface driver pods only on selected worker nodes, you must label the selected nodes and
use this label in node selector. For more information, see “Using the node selector” on page 29.
• Node selection for sidecar pods: The CSI sidecar pods can be scheduled on specific nodes by adding
labels to nodes, and adding the nodeSelectors for the sidecar pods. For more information, see Using
the node selector. IBM Spectrum Scale Container Storage Interface driver pod must also be scheduled
on the nodes that run sidecar pods. On Red Hat OpenShift, if the infrastructure nodes are worker
nodes, schedule the sidecar pods to run on the infrastructure nodes.
• Local file system: If you plan to use a local file system for PVC provisioning, ensure that the IBM
Spectrum Scale GUI is initialized and running on your IBM Spectrum Scale cluster.
• Remote cluster setup: If you plan to use a remotely mounted file system for PVC provisioning, ensure
that the following setup is completed:
– The IBM Spectrum Scale GUI is initialized and running on both clusters (owning cluster and accessing
cluster)
– Remote cluster details are added to the Operator configuration. For more information, see “Remote
cluster support” on page 25.
• SELinux considerations: Different Kubernetes distributions handle the SELinux enforcing mode
differently. There might be differences in terms of SELinux context that is set on files, relabeling of
volumes and the process context of containers. As a prerequisite, appropriate SELinux rules must
be set up to allow IBM Spectrum Scale Container Storage Interface driver containers to access the
required resources on host. For example, “container_t” context needs to have access to csi.sock and
the IBM Spectrum Scale file system, or the files that need access from containers need to have the
“container_file_t” context set. Refer to audit logs for any SELinux failures and set up appropriate
rules as required.
Note: If you are running IBM Spectrum Scale Container Storage Interface driver with IBM Spectrum
Scale Container Native, refer to the SELinux limitations.
• Node names: At times, it is possible that IBM Spectrum Scale cluster and Kubernetes or Red Hat
OpenShift cluster are configured with different node names for the same host.
1. Issue the following command to check the node names used by Kubernetes. Ignore the node names
where IBM Spectrum Scale is not expected to run, for example, the master nodes.
2. Check the node name used by IBM Spectrum Scale by issuing the following curl command against
the IBM Spectrum Scale GUI host of primary cluster.
Note:
– The preceding command lists the node names where the file system is mounted in the
nodesMountedReadWrite field. It might return a long list of nodes where the specified file system
is mounted; consider only the node names of the Kubernetes nodes.
– If the node names listed in step 1 are not present as is (exact string) in the node names listed in step
2, then configure node mapping in the Operator configuration. For more information, see “Kubernetes
to IBM Spectrum Scale node mapping” on page 30.
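The two checks above can be sketched as follows. The REST endpoint and query parameter shown are assumptions based on the nodesMountedReadWrite field mentioned in the note; adjust the GUI host, credentials, and file system name for your environment:

```shell
# Step 1: list the Kubernetes node names (ignore master nodes)
kubectl get nodes

# Step 2 (hedged sketch): query the primary cluster GUI for the nodes that
# have the file system mounted; look for the nodesMountedReadWrite field
curl -s -k -u 'csi-user:<password>' \
  "https://<GUI host>:443/scalemgmt/v2/filesystems/gpfs0"
```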
• Internet connectivity: If your worker nodes do not have internet connectivity and access to the quay.io
registry, you need to manually download the following images and upload them to your local image registry.
Table 5. Image Links for IBM Spectrum Scale Container Storage Interface driver 2.9.0

| Name | Version | Image |
|---|---|---|
| ibm-spectrum-scale-csi-operator | 2.9.0 | quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-operator@sha256:da7ada19c06b20edc9b3c8067a8380f6879899022dda8a5c1cbed7c15b2a381d |
| ibm-spectrum-scale-csi-driver | 2.9.0 | quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:573b3b2d349359d7871d53060a0fc7df6e03de2e2900d1be46b4146ab1972fb7 |
| csi-node-driver-registrar | 2.7.0 | registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:4a4cae5118c4404e35d66059346b7fa0835d7e6319ff45ed73f4bba335cf5183 |
| livenessprobe | 2.9.0 | registry.k8s.io/sig-storage/livenessprobe@sha256:2b10b24dafdc3ba94a03fc94d9df9941ca9d6a9207b927f5dfd21d59fbe05ba0 |
| csi-attacher | 4.1.0 | registry.k8s.io/sig-storage/csi-attacher@sha256:08721106b949e4f5c7ba34b059e17300d73c8e9495201954edc90eeb3e6d8461 |
| csi-provisioner | 3.4.0 | registry.k8s.io/sig-storage/csi-provisioner@sha256:e468dddcd275163a042ab297b2d8c2aca50d5e148d2d22f3b6ba119e2f31fa79 |
| csi-snapshotter | 6.2.0 | registry.k8s.io/sig-storage/csi-snapshotter@sha256:0d8d81948af4897bd07b86046424f022f79634ee0315e9f1d4cdb5c1c8d51c90 |
| csi-resizer | 1.7.0 | registry.k8s.io/sig-storage/csi-resizer@sha256:3a7bdf5d105783d05d0962fa06ca53032b01694556e633f27366201c2881e01d |
• Parallel volume clones and volume restore: To increase the limit of parallel copy jobs for volume
clones and volume restore, use the following command:
The default limit is 10 and the maximum limit is 100.
• IBM Spectrum Scale services: Ensure that GUI nodes, protocol nodes, and NSD nodes are not part of the
Kubernetes cluster.
• IBM Spectrum Scale Container Storage Interface driver version support: Rollback to older versions
of IBM Spectrum Scale Container Storage Interface driver is not supported.
Personas
A Kubernetes or Red Hat OpenShift cluster administrator is required to deploy the IBM Spectrum Scale
Container Storage Interface driver cluster.
Operator permissions
The IBM Spectrum Scale Container Storage Interface driver operator is a namespace scoped operator.
The operator watches the namespace that it is deployed into. As part of the operator
installation process, the user deploys various role-based access control (RBAC) related YAML files. These
RBAC YAML files control the operator's access to the resources within the namespace it is watching.
While the operator is running with a namespace scope, it requires access to the cluster level resources
to deploy successfully. Access to the cluster-level resources is handled through a cluster role that is
deployed during the previously mentioned deployment of RBAC YAML files. The role and cluster role are
bound to the custom ibm-spectrum-scale-csi-operator ServiceAccount, which the operator uses
to create the IBM Spectrum Scale Container Storage Interface driver cluster.
Chapter 4. Installation
Install or clean up the IBM Spectrum Scale Container Storage Interface driver. CSI Operators are used for
performing these activities.
Note: To install IBM Spectrum Scale Container Storage Interface driver with CNSA, see IBM Spectrum
Scale Container Native documentation.
• Ensure that the IBM Spectrum Scale GUI is initialized and running. If the GUI is not initialized, issue
the following command on the GUI node:

/usr/lpp/mmfs/gui/cli/initgui
• Create an IBM Spectrum Scale user group "CsiAdmin" if it does not exist. Issue the following command
on the GUI node to create the CsiAdmin group:
• Create an IBM Spectrum Scale user in the "CsiAdmin" group. This user must be used in the IBM Spectrum
Scale Container Storage Interface driver configuration. Issue this command on the GUI node to create
the user:
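The group and user creation commands are not shown above; a minimal sketch using the GUI CLI (the exact flags and the user name csi-user are assumptions, verify them against your IBM Spectrum Scale level):

```shell
# Create the CsiAdmin user group with the csiadmin role (assumed flags)
/usr/lpp/mmfs/gui/cli/mkusergrp CsiAdmin --role csiadmin

# Create a GUI user in the CsiAdmin group for the CSI driver (assumed flags)
/usr/lpp/mmfs/gui/cli/mkuser csi-user -p <password> -g CsiAdmin
```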
• Issue the following command from the Kubernetes node to ensure that the GUI server is running and
can communicate with the Kubernetes nodes:
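A hedged sketch of such a connectivity check, assuming the GUI listens on port 443 and the csi-user account created earlier:

```shell
curl --insecure -u 'csi-user:<password>' -X GET https://<GUI host>:443/scalemgmt/v2/cluster
```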
• Ensure that the enforceFilesetQuotaOnRoot parameter is set to "yes" by issuing the following
command:

/usr/lpp/mmfs/bin/mmchconfig enforceFilesetQuotaOnRoot=yes
• For Red Hat OpenShift, ensure that the controlSetxattrImmutableSELinux parameter is set to
"yes" by issuing the following command:
/usr/lpp/mmfs/bin/mmchconfig controlSetxattrImmutableSELinux=yes
• To display the correct volume size in the container, enable filesetdf of the file system by issuing the
following command:
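The command itself is not shown above; a sketch using mmchfs, assuming gpfs0 as the file system name (verify the option against your mmchfs level):

```shell
/usr/lpp/mmfs/bin/mmchfs gpfs0 --filesetdf
```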
• Starting with the IBM Spectrum Scale 5.1.4 release, inode expansion can happen automatically.
Once this setting is enabled, the inode-limit setting specified on the fileset is ignored. For more
information, see the mmchfs command.
To enable auto inode expansion, issue the following command:
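A hedged sketch, assuming the --auto-inode-limit option of mmchfs and gpfs0 as the file system name; confirm the option name in the mmchfs documentation for your release:

```shell
/usr/lpp/mmfs/bin/mmchfs gpfs0 --auto-inode-limit
```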
• Mount the file system that is used for IBM Spectrum Scale Container Storage Interface driver on the
same mount point on worker nodes.
• Issue the following command to label the Kubernetes worker nodes where IBM Spectrum Scale client is
installed and where IBM Spectrum Scale Container Storage Interface driver runs:
For more information, see “Using the node selector” on page 29.
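A minimal labeling sketch, using the scale=true label that the node selectors in this guide assume (the node name is hypothetical):

```shell
kubectl label node <worker-node> scale=true --overwrite=true
```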
• For a Vanilla Kubernetes cluster, perform the following steps for the snapshot functions to work:
Note: You need to perform the following steps only if the snapshot controller is not available. These
steps are not required for OCP 4.7 and later clusters.
1. Install the external snapshotter CRDs:
oc new-project ibm-spectrum-scale-csi-driver
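The CRD installation commands for step 1 are not shown above; a hedged sketch that applies the CRD manifests from the kubernetes-csi external-snapshotter repository (the v6.2.0 tag is an assumption, chosen to match the csi-snapshotter image version listed earlier):

```shell
# Base URL of the external-snapshotter release (tag is an assumption)
SNAP=https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.2.0

# Apply the three snapshot CRDs
kubectl apply -f $SNAP/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f $SNAP/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f $SNAP/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
```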
2. Issue the following command to download the operator manifest for CSI 2.9.0:
curl -O https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi/v2.9.0/generated/
installer/ibm-spectrum-scale-csi-operator.yaml
If you are using OCP cluster with RHEL nodes, issue the following command:
curl -O https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi/v2.9.0/generated/
installer/ibm-spectrum-scale-csi-operator-ocp-rhel.yaml
Note: See Offline Installation and Upgrade for additional steps required for offline upgrade.
3. Issue the following command to apply the operator manifest to deploy the operator.
For OCP cluster with RHEL nodes, issue the following command:
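The apply commands are not shown above; a minimal sketch, assuming the manifests were downloaded to the current directory in step 2:

```shell
kubectl apply -f ibm-spectrum-scale-csi-operator.yaml

# For an OCP cluster with RHEL nodes:
oc apply -f ibm-spectrum-scale-csi-operator-ocp-rhel.yaml
```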
4. Verify that the Operator is deployed, and the Operator pod is in running state.
5. Verify that the IBM Spectrum Scale Container Storage Interface driver is installed, the Operator and
driver resources are ready, and the pods are in the running state.
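A hedged verification sketch, assuming the default ibm-spectrum-scale-csi-driver namespace; the deployment listing that follows is the kind of output such a command returns:

```shell
kubectl get deployments -n ibm-spectrum-scale-csi-driver
```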
deployment.apps/ibm-spectrum-scale-csi-operator 1/1 1 0 7d
deployment.apps/ibm-spectrum-scale-csi-provisioner 1/1 1 1 5d22h
deployment.apps/ibm-spectrum-scale-csi-resizer 1/1 1 1 5d22h
deployment.apps/ibm-spectrum-scale-csi-snapshotter 1/1 1 1 5d22h
For more information, see IBM Spectrum Scale Container Storage Interface (CSI) project in the IBM
GitHub repository.
curl -O https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi/v2.9.0/generated/
installer/ibm-spectrum-scale-csi-operator.yaml
If you are using OCP cluster with RHEL nodes, issue the following command:
curl -O https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi/v2.9.0/generated/
installer/ibm-spectrum-scale-csi-operator-ocp-rhel.yaml
curl -O https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi/v2.9.0/operator/config/
samples/csiscaleoperators.csi.ibm.com_cr.yaml
containers:
  - name: operator
    image: <CSI Operator Image>
    env:
      - name: CSI_DRIVER_IMAGE
        value: <CSI Driver Image>
      - name: CSI_SNAPSHOTTER_IMAGE
        value: <CSI Snapshotter Image>
      - name: CSI_ATTACHER_IMAGE
        value: <CSI Attacher Image>
      - name: CSI_PROVISIONER_IMAGE
        value: <CSI Provisioner Image>
      - name: CSI_LIVENESSPROBE_IMAGE
        value: <CSI Livenessprobe Image>
      - name: CSI_NODE_REGISTRAR_IMAGE
        value: <CSI node registrar Image>
      - name: CSI_RESIZER_IMAGE
        value: <CSI Resizer Image>
Chapter 5. Upgrading
You can upgrade the IBM Spectrum Scale Container Storage Interface driver to use the enhanced
features.
curl -O https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi/v2.9.0/generated/
installer/ibm-spectrum-scale-csi-operator.yaml
The preceding step upgrades the Operator and the IBM Spectrum Scale Container Storage Interface
driver. The Operator and the pods are restarted with the upgraded images.
3. Verify that the pods are back in the running state by issuing the following command:
4. Verify that the Operator and the pods are using the upgraded images by issuing the following
command:
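A hedged sketch of both verification steps, assuming the default namespace; the Image: lines shown after the note are the kind of output the second command produces:

```shell
# Verify that the pods are back in the running state
kubectl get pods -n ibm-spectrum-scale-csi-driver

# Verify the images in use
kubectl describe pods -n ibm-spectrum-scale-csi-driver | grep 'Image:'
```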
Note: If the CustomResource is updated to use custom image names, then the CustomResource
must be updated to use the new version of the images. It is not recommended to use custom image
names in the CustomResource.
Image: quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-operator@sha256:355a4bfc89a96b81664ec915b63ed02d5a35d49a9c8386d9c09567f33765004e
Image: quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver@sha256:86e0138bec8189eefb1eb6cc90885e930a333444c1077ca75df33266efc83f86
Chapter 6. Configurations
You can configure the IBM Spectrum Scale Container Storage Interface driver at your site.
Secrets
A Secret is needed to store the credentials that are used to connect to the IBM Spectrum Scale REST
API server. The GUI user must have the csiadmin role.
Perform the following steps:
1. Create a secret in the CSI namespace by issuing the following command:
2. Apply the CSI product label to the secret by issuing the following command:
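A hedged sketch of both steps, reusing the guisecret name from the sample custom resource later in this chapter; the product=ibm-spectrum-scale-csi label key and value are assumptions:

```shell
# Step 1: create the secret with the GUI credentials
kubectl create secret generic guisecret \
  --from-literal=username=<csi-user> \
  --from-literal=password=<password> \
  -n ibm-spectrum-scale-csi-driver

# Step 2: apply the CSI product label (assumed label)
kubectl label secret guisecret product=ibm-spectrum-scale-csi \
  -n ibm-spectrum-scale-csi-driver
```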
Certificates
For secure SSL mode, a CA certificate must be specified. This certificate is used in SSL communication
with the IBM Spectrum Scale GUI server. The certificate must be created as a ConfigMap. There must be
as many ConfigMaps as the number of clusters with secure SSL enabled.
ConfigMap command syntax
For example,
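A hedged sketch, using a hypothetical ConfigMap name cacertcluster1; the --from-file key matches the ConfigMap name, as the note that follows requires:

```shell
kubectl create configmap cacertcluster1 \
  --from-file=cacertcluster1=/path/to/cluster1-ca.crt \
  -n ibm-spectrum-scale-csi-driver
```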
Note:
• The ConfigMap name and the --from-file key must match, and this value must be used as the
"cacert" value in the Operator configuration.
• Specifying a different CA-signed certificate for each GUI host is not supported when the GUI high
availability feature is configured in the CSI driver custom resource.
Operator
You can define the configuration parameters that are needed for creating a CSIScaleOperator custom
resource that is used to configure the IBM Spectrum Scale Container Storage Interface driver.
For more information, see a sample CSIScaleOperator custom resource configuration YAML file.
• Primary cluster: The IBM Spectrum Scale cluster where some or all of the client nodes are also worker
nodes of the Red Hat OpenShift or Kubernetes cluster. The aim of running IBM Spectrum Scale on worker
nodes is to provide persistent storage from IBM Spectrum Scale to the applications running on
Kubernetes or Red Hat OpenShift.
• Primary file system: One of the existing IBM Spectrum Scale file systems from the primary cluster
must be designated as the primary file system. One fileset from this file system is used by the IBM
Spectrum Scale Container Storage Interface driver internally to store the volume references. This fileset
is referred to as primary fileset. For proper functioning of IBM Spectrum Scale Container Storage
Interface driver, the primary file system must be mounted on all worker nodes all the time.
The CSIScaleOperator custom resource for a sample deployment is shown in the following example.
There are two file systems, gpfs0 and gpfs1. For this deployment, gpfs0 is chosen as the primaryFs.
csiscaleoperators.csi.ibm.com_cr.yaml file
---
apiVersion: csi.ibm.com/v1
kind: "CSIScaleOperator"
metadata:
  name: "ibm-spectrum-scale-csi"
  namespace: "ibm-spectrum-scale-csi-driver"
  labels:
    app.kubernetes.io/name: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/instance: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/managed-by: ibm-spectrum-scale-csi-operator
    release: ibm-spectrum-scale-csi-operator
status: {}
spec:
  clusters:
    - id: "<cluster id of IBM Spectrum Scale running on node1,node2,node3>"
      secrets: "guisecret"
      secureSslMode: false
      primary:
        primaryFs: "gpfs0"
      restApi:
        - guiHost: "<FQDN/IP of GUI Node 1>"
        #- guiHost: "<FQDN/IP of GUI Node 2>" #Optional - Multiple GUI nodes can be specified if the storage cluster has GUI installed on multiple nodes.
  attacherNodeSelector:
    - key: "scale"
      value: "true"
  provisionerNodeSelector:
    - key: "scale"
      value: "true"
  pluginNodeSelector:
    - key: "scale"
      value: "true"
  snapshotterNodeSelector:
    - key: "scale"
      value: "true"
  resizerNodeSelector:
    - key: "scale"
      value: "true"
---
Table 7. CSIScaleOperator configuration parameter description (continued)

¹ Do not update the CR to create an imagePullSecret array with the ibm-spectrum-scale-csi-registrykey
name because the ibm-spectrum-scale-csi-registrykey name is used as a default secret internally.
For deployment involving two or more IBM Spectrum Scale clusters, see “Remote cluster support” on
page 25.
Status
To check the status of the CSIScaleOperator resource, issue the following command:
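A hedged sketch, assuming the resource and namespace names used elsewhere in this guide:

```shell
kubectl get csiscaleoperator ibm-spectrum-scale-csi \
  -n ibm-spectrum-scale-csi-driver -o jsonpath='{.status}'
```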
Output:
{
  "conditions": [
    {
      "lastTransitionTime": "2023-02-22T10:36:47Z",
      "message": "The CSI driver resources have been created/updated successfully.",
      "reason": "CSIConfigured",
      "status": "True",
      "type": "Success"
    }
  ],
  "versions": [
    {
      "name": "ibm-spectrum-scale-csi",
      "version": "2.9.0"
    }
  ]
}
Note:
• If status.condition.status is False, look at status.condition.reason and
status.condition.message to identify the cause of the error. For more information, refer to the
operator logs.
• From IBM Spectrum Scale Container Storage Interface driver 2.9.0 onward, after an instance of the
CSIScaleOperator custom resource is created, changes to the primary stanza of the instance are not
allowed, because changing the primary stanza causes loss of access to version 1 volumes. If you do not
need access to these volumes, you can delete the CSIScaleOperator instance and create a new instance
with the required primary stanza.
• If you change any value in a cluster stanza of a CSIScaleOperator instance, the changed value is
passed from the operator to the driver only if it is valid. For an invalid value, a corresponding error
message is shown in the status of the CSIScaleOperator instance.
Figure 3. Deployment of two IBM Spectrum Scale clusters with remote-mounted file systems
The primary cluster is the IBM Spectrum Scale cluster where Red Hat OpenShift or Kubernetes worker
nodes coexist with IBM Spectrum Scale client nodes. In this example deployment, cluster A is designated
as the primary cluster.
Cluster O is another IBM Spectrum Scale cluster that has two file systems, gpfs and fs1. The file
system gpfs is mounted on Cluster A as file system mygpfs, while file system fs1 is not exposed to
Cluster A.
For each IBM Spectrum Scale cluster, cluster entry must be added under the clusters section of the
custom resource.
One IBM Spectrum Scale cluster must be the primary cluster for the IBM Spectrum Scale Container
Storage Interface driver deployment. The primary cluster is marked by adding the primary section in the
respective cluster entry. In the example deployment described in the figure, Cluster A is the primary
cluster.
An example of an entry for a primary cluster is shown:

- id: "<cluster id of IBM Spectrum Scale Cluster which is Primary cluster>"
  primary:
    primaryFs: "<name of primary filesystem>"
  restApi:
    - guiHost: "<FQDN/IP of GUI Node 1>"
    #- guiHost: "<FQDN/IP of GUI Node 2>" #Optional
  secrets: "<secret name for GUI of Primary Spectrum Scale cluster>"
  secureSslMode: false
In the deployment example, there are two IBM Spectrum Scale clusters, hence two entries of clusters are
added, one for the primary cluster (Cluster A) and another one for cluster O (Owning cluster).
The custom resource configuration slightly changes based on whether the primary file system is locally
owned (gpfs0 in the example deployment) or remotely mounted (mygpfs in the example deployment).
The changes are in the primary section of the primary cluster entry.
The custom resource for the example deployment when primaryFs is a locally owned file system (gpfs0)
appears as shown:
---
apiVersion: csi.ibm.com/v1
kind: "CSIScaleOperator"
metadata:
  name: "ibm-spectrum-scale-csi"
  namespace: "ibm-spectrum-scale-csi-driver"
  labels:
    app.kubernetes.io/name: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/instance: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/managed-by: ibm-spectrum-scale-csi-operator
    release: ibm-spectrum-scale-csi-operator
status: {}
spec:
  clusters:
    - id: "<cluster id of IBM Spectrum Scale Cluster A>"
      secrets: "guisecret"
      secureSslMode: false
      primary:
        primaryFs: "gpfs0"
      restApi:
        - guiHost: "<FQDN/IP of GUI Node 1>"
        #- guiHost: "<FQDN/IP of GUI Node 2>" #Optional - Multiple GUI nodes can be specified if the storage cluster has GUI installed on multiple nodes.
The custom resource for the example deployment when primaryFs is a remotely mounted file system
(mygpfs) appears as shown:
---
apiVersion: csi.ibm.com/v1
kind: "CSIScaleOperator"
metadata:
  name: "ibm-spectrum-scale-csi"
  namespace: "ibm-spectrum-scale-csi-driver"
  labels:
    app.kubernetes.io/name: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/instance: ibm-spectrum-scale-csi-operator
    app.kubernetes.io/managed-by: ibm-spectrum-scale-csi-operator
    release: ibm-spectrum-scale-csi-operator
status: {}
spec:
  clusters:
    - id: "<cluster id of IBM Spectrum Scale Cluster A>"
      secrets: "guisecret"
      secureSslMode: false
      primary:
        primaryFs: "mygpfs"
        remoteCluster: "<cluster id of IBM Spectrum Scale Cluster O (Owning cluster)>"
      restApi:
        - guiHost: "<FQDN/IP of GUI Node 1>"
    - id: "<cluster id of IBM Spectrum Scale Cluster O (owning cluster)>"
      secrets: "remoteguisecret"
      secureSslMode: false
      restApi:
        - guiHost: "<FQDN/IP of GUI Node A>" # Multiple GUIs can be provided here also, similar to the primary cluster.
  attacherNodeSelector:
    - key: "scale"
      value: "true"
  provisionerNodeSelector:
    - key: "scale"
      value: "true"
  pluginNodeSelector:
    - key: "scale"
      value: "true"
  snapshotterNodeSelector:
    - key: "scale"
      value: "true"
  resizerNodeSelector:
    - key: "scale"
      value: "true"
---
Table 9. Parameter description (continued)

| Parameter name | Status | Parameter description |
|---|---|---|
| cacert | Mandatory if secureSslMode is true | Name of the pre-created CA certificate ConfigMap that is used to connect to the GUI server running on the guiHost. For more information, see "Certificates" on page 21. |
| secrets | Mandatory | Name of the pre-created Secret containing the username and password to connect to the GUI running on the guiHost for the cluster specified against the id parameter. For more information, see "Secrets" on page 21. |
| guiHost | Mandatory | FQDN or IP address of the GUI node of the IBM Spectrum Scale cluster that is specified against the id parameter. Optionally, multiple GUI hosts can be specified for a storage cluster with multiple GUIs. In case of multiple guiHosts, use the same port number for all the GUIs on a storage cluster. |
Note:
• Owning cluster might have more than one file system and not all file systems need to be remotely
mounted on the accessing cluster.
• There can be more than one owning cluster that exposes their file systems to the accessing cluster.
• The accessing cluster or primary cluster can be a compute-only cluster without any file system of its own.
• Secrets contain the credentials to connect to the GUI for a specified cluster. For each cluster in
the custom resource, there should be a pre-created secret before Operator deployment. For more
information, see “Secrets” on page 21. Same secret cannot be used for multiple clusters even if the
credentials are same.
• Custom resource also contains other parameters that are optional, so those parameters should be
added as per your requirement.
Using the node selector
By default, the IBM Spectrum Scale Container Storage Interface driver is deployed on all worker nodes.
The node selector controls the Kubernetes worker nodes on which the IBM Spectrum Scale Container
Storage Interface driver must run. It helps in cases where new worker nodes are added to the Kubernetes
cluster but do not have IBM Spectrum Scale installed. It also helps ensure that the sidecar pods run on
the desired nodes.
To configure node selector, perform the following steps:
1. Label the Kubernetes worker nodes where sidecar pods should run, as shown in the following
example:
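A hedged labeling sketch, matching the commented infranode labels in the node selector sample that follows (the node names are hypothetical):

```shell
kubectl label node infranode1 infranode=1 --overwrite=true
kubectl label node infranode2 infranode=2 --overwrite=true
```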
Note:
• Use specific labels, like the ones for the attacher and provisioner sidecar pods, only if these
sidecar pods must run on very specific nodes. Otherwise, use a single label like
scale=true for running the sidecar pods and the IBM Spectrum Scale Container Storage Interface driver
DaemonSet.
• Nodes marked for running sidecar pods must be a subset of the nodes marked with the scale=true
label.
2. Label the Kubernetes worker nodes where IBM Spectrum Scale Container Storage Interface driver
must run, as shown:
attacherNodeSelector:
  - key: "scale"
    value: "true"
#  - key: "infranode"   # Only if there is a requirement of running Attacher on a specific node
#    value: "2"
provisionerNodeSelector:
  - key: "scale"
    value: "true"
#  - key: "infranode"   # Only if there is a requirement of running Provisioner on a specific node
#    value: "1"
snapshotterNodeSelector:
  - key: "scale"
    value: "true"
#  - key: "infranode"   # Only if there is a requirement of running Snapshotter on a specific node
#    value: "2"
pluginNodeSelector:
  - key: "scale"
    value: "true"
resizerNodeSelector:
  - key: "scale"
    value: "true"
#  - key: "infranode"   # Only if there is a requirement of running Resizer on a specific node
#    value: "2"
Note: If you choose to run IBM Spectrum Scale Container Storage Interface driver on selective nodes
using the nodeSelector, then make sure that the pod using IBM Spectrum Scale Container Storage
Interface driver PVC is scheduled on the nodes where IBM Spectrum Scale Container Storage Interface
driver is running.
Kubernetes to IBM Spectrum Scale node mapping
In some environments, Kubernetes node names might be different from the IBM Spectrum Scale node
names. This mismatch results in failures while mounting volumes into pods. To address this condition,
Kubernetes node to IBM Spectrum Scale node mapping must be configured during the Operator
configuration.
To configure this, add "nodeMapping" section under "spec" in the
csiscaleoperators.csi.ibm.com_cr.yaml, as shown in the following example:
nodeMapping:
  - k8sNode: "kubernetesNode1"
    spectrumscaleNode: "scaleNode1"
  - k8sNode: "kubernetesNode2"
    spectrumscaleNode: "scaleNode2"
If the Kubernetes node name starts with a number, then add the node mapping for such nodes in the
format "K8sNodePrefix_<Kubernetes node name>". For example, if the Kubernetes node name is
198.51.100.10, then use the following node mapping:
- k8sNode: "K8sNodePrefix_198.51.100.10"
  spectrumscaleNode: "spectrumscalenode11"
Note:
• Kubernetes node name is listed by issuing the kubectl get nodes command.
• You can list the IBM Spectrum Scale node name by issuing the following command. Look for the field
nodesMountedReadWrite.
• All entries for nodes that differ in name must be added. Consider only nodes where IBM Spectrum Scale
CSI is expected to run along with IBM Spectrum Scale.
Tolerations
Tolerations are applied to pods, and allow (but do not require) the pods to be scheduled on nodes with
matching taints.
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. For
more information, see Taints and Tolerations in the Kubernetes documentation.
To allow the IBM Spectrum Scale Container Storage Interface driver pods to be scheduled on nodes with
taints, configure the CSIScaleOperator custom resource csiscaleoperators.csi.ibm.com_cr.yaml
under "spec" section as shown in the following example:
tolerations:
  - key: "key1"              # Node taint key name. Mandatory
    operator: "Equal"        # Valid values are "Exists" and "Equal". Mandatory
    value: "value1"          # Required if operator is "Equal"
    effect: "NoExecute"      # Valid values are "NoSchedule", "PreferNoSchedule", and "NoExecute". An empty effect matches all effects with the given key. Mandatory
    tolerationSeconds: 3600  # Used only when effect is "NoExecute". Determines how long the pod stays bound to the node after the taint is added.
Changing the configuration after deployment
IBM Spectrum Scale Container Storage Interface driver configuration can be changed after the driver is
deployed. Any change in the configuration post deployment reinitializes IBM Spectrum Scale Container
Storage Interface driver.
Updating a Secret
The IBM Spectrum Scale Container Storage Interface driver uses secrets to store the REST API
credentials. If the password is expired or you want to change the password, you must update the secret
in Kubernetes.
To update the secret and have the operator apply it, do the following steps:
1. Delete the old secret.
2. Change the password of the GUI user on the IBM Spectrum Scale cluster.
3. Create a secret with new credentials and apply the required labels.
It is mandatory to label the secret by issuing the kubectl label command to trigger reconciliation.
The process can then be monitored in the Operator logs.
Additionally, if the Operator's custom resource was deployed before the secrets were created, the
preceding process can be used to start the operator without deleting the custom resource.
Cluster Details
To change cluster details such as guiHost, remote cluster information or node mapping, edit the
CSIScaleOperator by using the following command.
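The edit command is not shown above; a minimal sketch, assuming the resource and namespace names used elsewhere in this guide:

```shell
kubectl edit csiscaleoperator ibm-spectrum-scale-csi -n ibm-spectrum-scale-csi-driver
```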
When this command is issued, a vi editor opens up, which contains a temporary YAML file with the
contents for CSIScaleOperator object. You must update the cluster details, save the file, and exit. The
Operator restarts the IBM Spectrum Scale Container Storage Interface driver with the new configuration.
Note: A change to the following parameter does not take effect when you edit the CSIScaleOperator.
To make the change effective, you must delete the custom resource, update the parameter in the
custom resource, and re-create the custom resource.
• cacert
Advanced configuration
The IBM Spectrum Scale Container Storage Interface driver supports some advanced configurations
that can be applied by using an optional ConfigMap. Any change in the ConfigMap after the driver is
deployed reinitializes the IBM Spectrum Scale Container Storage Interface driver.
1. Create an optional ConfigMap.
An optional ConfigMap is used to manage advanced features of IBM Spectrum Scale Container Storage
Interface driver. Use the following YAML file to manage the features.
kind: ConfigMap
apiVersion: v1
metadata:
name: ibm-spectrum-scale-csi-config
namespace: ibm-spectrum-scale-csi-driver
data:
VAR_DRIVER_LOGLEVEL: INFO
a. Update the value to the corresponding field and save the file to update the modified value in IBM
Spectrum Scale Container Storage Interface driver.
b. To use the default configuration, delete the applied ConfigMap by using the following command.
4. Update the logger level of the IBM Spectrum Scale Container Storage Interface driver by changing
the VAR_DRIVER_LOGLEVEL parameter of the optional ConfigMap. The driver supports six logger
levels: TRACE, DEBUG, INFO, WARNING, ERROR, and FATAL. The default logger level of the IBM
Spectrum Scale Container Storage Interface driver is INFO.
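Applying and removing the optional ConfigMap may look like the following sketch; the file name csi-config.yaml is an assumption:

```shell
# Apply (or update) the optional ConfigMap from the YAML file shown above.
kubectl apply -f csi-config.yaml

# Revert to the default configuration by deleting the applied ConfigMap.
kubectl delete configmap ibm-spectrum-scale-csi-config -n ibm-spectrum-scale-csi-driver
```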
Chapter 7. Using IBM Spectrum Scale Container Storage Interface driver
You can create storage volumes such as PVCs and PVs to suit your requirements.
Storage class
Storage class is used for creating lightweight volumes, fileset-based volumes, and consistency group
volumes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ibm-spectrum-scale-csi-lt
provisioner: spectrumscale.csi.ibm.com
parameters:
volBackendFs: "gpfs0"
volDirBasePath: "pvfileset/lwdir"
reclaimPolicy: Delete
The following fields come under the parameters section of the storageClass. The parameters section is
mandatory for the IBM Spectrum Scale CSI driver storageClass:
Note:
• Because a lightweight volume does not enforce quota, it can grow beyond its defined size, which may
result in consuming the whole file system. To avoid this, you must manually create or use an existing
fileset with a quota set for the volDirBasePath.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ibm-spectrum-scale-csi-fileset
provisioner: spectrumscale.csi.ibm.com
parameters:
volBackendFs: gpfs0
uid: "1000"
gid: "1000"
reclaimPolicy: Delete
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ibm-spectrum-scale-csi-fileset-dependent
provisioner: spectrumscale.csi.ibm.com
parameters:
volBackendFs: "gpfs0"
uid: "1000"
gid: "1000"
filesetType: "dependent"
parentFileset: "independent-fileset-fset1"
reclaimPolicy: Delete
The following fields are available for the "parameters:" section in a storageClass manifest. The
"parameters:" section is a mandatory section for the IBM Spectrum Scale Container Storage Interface
driver storageClass:
clusterId (Optional) The cluster ID of the owning cluster for a remote
file system, or the cluster ID of the primary cluster
for a local file system.
Field name Description
uid (Optional) The user name of the fileset owner. The uid/gid
must exist on the IBM Spectrum Scale GUI node of
the accessing and owning clusters. Default value is
"root".
gid (Optional) The group name of the fileset. The gid/group name
must exist on the IBM Spectrum Scale GUI node of
the accessing and owning clusters. Default value is
"root".
filesetType Valid options are "independent", "dependent". The
default value is "independent".
parentFileset Parent fileset name. Valid with
filesetType=dependent. Default value is "root".
inodeLimit (Optional) Inode limit for fileset. Valid with
filesetType=independent. If not specified,
inodeLimit is calculated using this formula:
volume size/block size of the file system.
Note: With IBM Spectrum Scale 5.1.4 and later,
auto inode expansion can be used instead of
inodeLimit. For more details, see the mmchfs
command.
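The default inodeLimit formula described above (volume size divided by the file system block size) can be sketched as follows; the 1 Gi volume size and 4 MiB block size are illustrative assumptions, not defaults mandated by the driver:

```shell
# Sketch of the default inodeLimit calculation:
#   inodeLimit = volume size / file system block size
volume_size=$((1 * 1024 * 1024 * 1024))   # 1 Gi requested volume size, in bytes
block_size=$((4 * 1024 * 1024))           # 4 MiB file system block size, in bytes
inode_limit=$((volume_size / block_size))
echo "$inode_limit"
```

For a 1 Gi volume on a file system with a 4 MiB block size, this yields an inode limit of 256.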
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ibm-spectrum-scale-csi-consistency-group
provisioner: spectrumscale.csi.ibm.com
parameters:
version: "2"
volBackendFs: gpfs0
reclaimPolicy: Delete
allowVolumeExpansion: true
The following fields come under the parameters section of the storageClass. The parameters section is
mandatory for the IBM Spectrum Scale CSI driver storageClass:
volBackendFs (Mandatory) Name of the file system under which the fileset
must be created. File system name is the name of
the remotely mounted file system on the primary
cluster.
clusterId (Optional) The cluster ID of the owning cluster for a remote
file system, or the cluster ID of the primary cluster
for a local file system.
uid (Optional) The user name of the fileset owner. The uid/gid
must exist on the IBM Spectrum Scale GUI node of
the accessing and owning clusters. Default value is
"root".
gid (Optional) The group name of the fileset. The gid/group name
must exist on the IBM Spectrum Scale GUI node of
the accessing and owning clusters. Default value is
"root".
inodeLimit (Optional) Inode limit for consistency group. If not specified,
inodeLimit is set to 1M.
Note: With IBM Spectrum Scale 5.1.4 and later,
auto inode expansion can be used instead of
inodeLimit. For more details, see the mmchfs
command.
Note: To use the consistency group feature, you must use IBM Spectrum Scale 5.1.3.0 or later.
Dynamic provisioning
Administrators use dynamic volume provisioning to create storage volumes on-demand.
Do the following steps:
1. Create a traditional storageClass or consistency group based storageClass. For more information, see
“Storage class” on page 33.
2. Apply the following configuration:
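The configuration referred to in step 2 is the storageClass manifest created in step 1; applying it may look like the following sketch (the file name storageclass.yaml is an assumption):

```shell
kubectl apply -f storageclass.yaml
```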
3. Create a persistent volume claim (PVC) using this storageclass, as shown in the following example:
# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: scale-fset-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: [name_of_your_storageclass]
Modify the PVC name, storage, and storageClassName values according to your requirement.
4. Create a PVC by issuing the following command:
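The PVC creation command for step 4 may look like the following sketch, using the manifest file from the previous step:

```shell
kubectl apply -f pvc.yaml
kubectl get pvc scale-fset-pvc   # wait for STATUS to become Bound
```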
Creating pods
To configure a pod, do the following steps:
1. Create a manifest file (pod.yaml) with pod definition referencing the persistent volume claim (PVC).
The following is an example of a pod definition for creating an nginx container by using a previously
created PVC:
# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: csi-scale-staticdemo-pod
labels:
app: nginx
spec:
containers:
- name: web-server
image: nginx
volumeMounts:
- name: mypvc
mountPath: /usr/share/nginx/html/scale
ports:
- containerPort: 80
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: [pvc name]
readOnly: false
Note: claimName is the name of the PVC to be used by the pod for persistent storage. The readOnly flag
can be set to true, in which case the pod mounts the PVC in read-only mode.
2. Issue the following command to create the pod:
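The pod creation command for step 2 may look like the following sketch, using the manifest file created above:

```shell
kubectl apply -f pod.yaml
kubectl get pod csi-scale-staticdemo-pod   # wait for STATUS to become Running
```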
For more information on pods, see Configure a Pod to Use a PersistentVolume for Storage in the
Kubernetes documentation.
seLinuxContext:
type: MustRunAs
metadata:
annotations:
openshift.io/scc: <scc_name>
securityContext:
seLinuxOptions:
level: <SELinux level label>
spec:
securityContext:
fsGroup: 5000
containers:
When fsGroup is specified in the Pod spec, the specified group with ID 5000 is associated with all
containers in the pod. When fsGroup is specified, Kubernetes recursively changes the ownership of the
volume contents to the group specified in fsGroup. Also, the owning GID of the volume is that of fsGroup,
and the setgid bit is set so that new files created in the volume are owned by the owning GID.
For more information, see Kubernetes documentation.
Volume Snapshot
The Volume Snapshot feature is used to take a point-in-time snapshot of the IBM Spectrum Scale
Container Storage Interface driver volume. The volume snapshot feature also provides the capability of
creating a new IBM Spectrum Scale Container Storage Interface driver volume from the existing snapshot.
Create a VolumeSnapshot
VolumeSnapshot creates a point-in-time snapshot of the independent fileset based IBM Spectrum Scale
Container Storage Interface driver volume on the IBM Spectrum Scale storage system.
Create a VolumeSnapshotClass
VolumeSnapshotClass is like a StorageClass that defines driver-specific attributes for the snapshot
to be created.
A sample VolumeSnapshotClass is created as shown in the following example:
# cat volumesnapshotclass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
name: ibm-spectrum-scale-snapshot-class-consistency-group
driver: spectrumscale.csi.ibm.com
deletionPolicy: Delete
#parameters:
# snapWindow: "30"
The snapWindow parameter is valid only for consistency groups. It indicates how long a snapshot of a
consistency group stays valid after it is taken. Specify the snapWindow value in minutes. The default
value is "30" minutes.
Note:
• snapWindow must not be less than 30 minutes while taking snapshots of multiple volumes.
• If there are multiple requests, only one snapshot of the consistency group is taken within the
specified snapWindow.
• The snapWindow time starts when a snapshot of any volume that belongs to the consistency group is
taken, either for the first time or at any time after the previous snapWindow has passed.
• To take a snapshot of a consistency group, create snapshot requests for all volumes that belong to the
consistency group within a short span of time.
Create a VolumeSnapshot
VolumeSnapshot is a copy of a volume content on a storage system.
Specify the source volume to be used for creating the snapshot, as shown in the following sample
manifest. The source persistent volume claim (PVC) must be in the same namespace in which the
snapshot is being created. Snapshots can be created only from independent fileset-based PVCs or from
consistency group-based PVCs.
A sample VolumeSnapshot is created as shown in the following example:
# cat volumesnapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: ibm-spectrum-scale-snapshot
spec:
volumeSnapshotClassName: ibm-spectrum-scale-snapshot-class
source:
persistentVolumeClaimName: ibm-spectrum-scale-pvc
Note:
• The volume size of the source PVC is used as the restore size of the snapshot. Any volume that is
created from this snapshot must be of the same or larger capacity.
• Volume Snapshot is supported only for the independent fileset based PVCs.
# cat pvcfromsnapshot.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ibm-spectrum-scale-pvc-from-snap
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: ibm-spectrum-scale-storageclass
dataSource:
name: ibm-spectrum-scale-snapshot
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
Restriction:
• The snapshot and the new volume must be from file systems that belong to the same cluster.
• Restoring a snapshot to a lightweight PVC of a remotely mounted file system is not supported.
Volume Cloning
Clone your volume to duplicate an existing persistent volume at a particular point-in-time.
To create a clone, specify the existing PVC in the dataSource field of the PersistentVolumeClaim
object. The dataSource field accepts the name of an existing PersistentVolumeClaim in the same
namespace.
Example of PersistentVolumeClaim for cloning:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: clone-of-scale-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: [name_of_your_storageclass]
resources:
requests:
storage: 5Gi
dataSource:
kind: PersistentVolumeClaim
name: scale-pvc
Note:
• The storage capacity of the cloned volume must be the same or larger than the capacity of the source
volume.
• The destination persistent volume claim (PVC) must exist in the same namespace as the source PVC.
• The source PVC must be bound and available but must not be in use.
Restriction:
• Volume cloning is supported only for dynamically provisioned volumes (with or without consistency
group).
• Volume cloning of lightweight volumes on remote file systems is not supported.
• Volume cloning across file systems from different IBM Spectrum Scale clusters is not supported.
• Volume cloning between version 1 and version 2 storageClasses is not supported.
• Volume cloning between lightweight volumes and fileset-based volumes is not supported.
Volume Expansion
Expand the capacity of dynamically provisioned volumes to meet the growing storage needs of your
environment.
To enable the expansion of a volume, you must set the allowVolumeExpansion parameter to true in
the StorageClass.
Example of StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ibm-spectrum-scale-csi-fileset-expansion
provisioner: spectrumscale.csi.ibm.com
parameters:
volBackendFs: gpfs0
clusterId: "17797813605352210071"
reclaimPolicy: Delete
allowVolumeExpansion: true
To expand volumes where volume expansion is enabled, edit the size of the PVC.
The following example illustrates how to use the patch command to expand a volume to 2Gi:
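The patch command referred to above may look like the following sketch; the PVC name is a placeholder:

```shell
kubectl patch pvc <pvc_name> \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
```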
If the volume was created by using IBM Spectrum Scale Container Storage Interface driver 2.3.0 or
earlier, allowVolumeExpansion is unset in the StorageClass. You must patch the storage class as given
below before PVC expansion.
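Patching the storage class to enable expansion may look like the following sketch; the storage class name is a placeholder:

```shell
kubectl patch storageclass <storageclass_name> -p '{"allowVolumeExpansion":true}'
```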
Note: Volume expansion is supported only for dynamically provisioned volumes (with or without
consistency group).
Restriction: Volume shrinking is not supported.
Static provisioning
In static provisioning, an administrator creates a number of persistent volumes (PVs), which include
information about the storage that is available to each user in the cluster.
To use the existing volume on the storage system, do the following steps:
1. Create a persistent volume using the PV manifest file. For more information, see “Creating a persistent
volume (PV)” on page 44.
2. Create a persistent volume claim (PVC) using the PVC manifest file. For more information, see
“Creating a PersistentVolumeClaim (PVC)” on page 45.
Note: Static provisioning is not supported for consistency group feature.
Related concepts
“Creating pods” on page 39
generate_static_provisioning_yamls.sh
You can issue the following command to download the script for CSI 2.9.0:
curl -O https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi/v2.9.0/tools/
generate_static_provisioning_yamls.sh
Note: This script must be run on the IBM Spectrum Scale cluster node.
Usage of the script is as follows:
Usage: ./generate_static_provisioning_yamls.sh
-f|--filesystem <Name of Volume's Source Filesystem>
-l|--path <full Path of Volume in Primary Filesystem>
-F|--fileset <name of source fileset>
-s|--size <size in GB>
-u|--username <Username of spectrum scale GUI user account>
-p|--password <Password of spectrum scale GUI user account>
-r|--guihost <HostName(or route) used to access IBM Spectrum Scale GUI service
running on Primary Cluster>
[-P|--pvname <name for pv>]
[-c|--storageclass <StorageClass for pv>]
[-a|--accessmode <AccessMode for pv>]
[-h|--help]
Note: --path and --fileset options are mutually exclusive. At least one of the options must be
specified.
Example 1: Directory based static volume
The following example illustrates how to create a volume from a directory /mnt/fs1/staticpv
within the file system 'fs1'.
Note: The path specified for the --path option must be a valid GPFS path in the primary file system.
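An invocation of the script for this example may look like the following sketch; the size, user, and host values are placeholders and the PV name is an assumption:

```shell
./generate_static_provisioning_yamls.sh --filesystem fs1 --path /mnt/fs1/staticpv \
  --size 10 --username <gui_user> --password <gui_password> --guihost <gui_host> \
  --pvname static-scale-static-pv
```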
https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi/v2.9.0/driver/examples/version1/
volume/staticprovisioning/static_pv.yaml
2. Configure the persistent volume (PV) manifest file with a volumeHandle as described in this example.
# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: static-scale-static-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
csi:
driver: spectrumscale.csi.ibm.com
volumeHandle: 0;2;7171748422707577770;13280B0A:61F4048E;;fset2;/ibm/fs1/fset2
# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: scale-static-pvc
spec:
This PVC is bound to an available PV with storage equal to or greater than what is specified in the
pvc.yaml file.
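A complete static PVC manifest may look like the following sketch. The empty storageClassName and the volumeName binding to the PV created earlier are the key points; the capacity shown is an assumption:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scale-static-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  volumeName: static-scale-static-pv
```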
Tiering Support
Manage the location of newly created files in a specific storage pool with tiering support.
When the storage class is assigned to a specific "tier", files created in volumes belonging to that storage
class will be placed in the assigned "tier". For more information about storage pools in IBM Spectrum
Scale, visit the IBM Spectrum Scale Documentation.
A sample storage class with tiering is created by applying a config like the following:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ibm-spectrum-scale-tiering
parameters:
version: "2"
volBackendFs: fs0
tier: "storagePoolName"
provisioner: spectrumscale.csi.ibm.com
reclaimPolicy: Delete
The tier field is optional and can be set to one of the storage pools that are defined in the IBM
Spectrum Scale file system specified by the volBackendFs field. If the storage pool that is specified in
the tier field does not exist in that file system, the request fails.
Requirements: IBM Spectrum Scale 5.1.3 or later, and file system version 27.00 or later
Compression Support
Provides support for compressing files, typically on some schedule with cronjobs.
The files are not compressed until a cron job runs. As files are created or modified, they remain
uncompressed until the job runs again. When the storage class is created with compression: true,
the fileset created in the IBM Spectrum Scale file system has a suffix appended to its name, such as
-COMPRESSZcsi. The compression field is optional and defaults to "false", meaning that no files are
compressed within that PV.
A sample storage class with compression enabled:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ibm-spectrum-scale-compression
parameters:
version: "2"
volBackendFs: fs0
compression: "true" # Default: false
provisioner: spectrumscale.csi.ibm.com
reclaimPolicy: Delete
1. On the owning cluster of the file system, create a policy file that compresses files in filesets whose
names contain the COMPRESSZcsi suffix:
$ echo "RULE 'FSETCOMPRESSION' MIGRATE COMPRESS('z') WHERE FILESET_NAME LIKE
'%COMPRESSZcsi%'" > zcompress.policy
$ chmod 400 zcompress.policy
2. Create a crontab entry on the owning cluster of the file system to run the compression policy.
$ crontab -e
# insert line
0 0 * * * /usr/lpp/mmfs/bin/mmapplypolicy <filesystemName> -P zcompress.policy -I yes
# save and quit
Note: Stop all pods manually before running the mmshutdown command. Otherwise, a worker node
might crash. If a crash occurs, recover the node, and then manually stop all pods before resuming any
prior shutdown.
lsof <filesystem>
lsmod | grep mm
c. If there is any mm* present, then unmount and shut down file systems on the worker node. You can
use the following options to view the details of the mounted file systems:
• Check the GPFS status:
/usr/lpp/mmfs/bin/mmgetstate
/usr/lpp/mmfs/bin/mmlsmount all
• List disk space usage to see what file systems are mounted by issuing the df command:
df
/usr/lpp/mmfs/bin/mmunmount all
/usr/lpp/mmfs/bin/mmshutdown
lsmod | grep mm
Note:
• All file systems must be unmounted, and GPFS must be shut down. Continue to step 5 to proceed
with IBM Spectrum Scale upgrade.
• If there are any file systems or mounted kernel modules (mm*) present, then you require a reboot
of the worker node to clean up the state. Ensure autoload is set to off for the node before
rebooting.
h. Set autoload to off.
reboot
5. Upgrade IBM Spectrum Scale by using the toolkit, set the worker node as an offline node, and exclude
the other nodes.
6. After the upgrade is completed, do the following steps:
a. Log on to the worker node and ensure that autoload is set back to on.
/usr/lpp/mmfs/bin/mmstartup
1. Move or stop all pods that use volumes that are managed by the IBM Spectrum Scale Container
Storage Interface driver.
2. Drain the nodes so that sidecar pods move to other nodes.
Chapter 8. Managing IBM Spectrum Scale when used with IBM Spectrum Scale Container Storage Interface
driver 51
kubectl label node <nodename> scale-
lsof <filesystem>
lsmod | grep mm
c. If there is any mm* present, then unmount and shut down file systems on the node.
• Ensure that GPFS is active on the node.
/usr/lpp/mmfs/bin/mmgetstate
/usr/lpp/mmfs/bin/mmlsmount all
• List disk space usage and, more importantly, see which file systems are mounted by issuing the df
command:
df
/usr/lpp/mmfs/bin/mmunmount all
/usr/lpp/mmfs/bin/mmshutdown
lsof <filesystem>
lsmod | grep mm
Note:
• If all file systems are unmounted and GPFS is shut down, continue to step 5 to proceed with IBM
Spectrum Scale upgrade.
• If there are any file systems or mounted kernel modules (mm*) present, then do a reboot of
the worker node to clean up the state. Ensure that autoload is set to off for the node before
rebooting.
h. Disable autoload by issuing the following command:
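The autoload command referred to above may look like the following sketch; the node name is a placeholder:

```shell
/usr/lpp/mmfs/bin/mmchconfig autoload=no -N <worker_node>
```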
reboot
5. Upgrade IBM Spectrum Scale by using the IBM Spectrum Scale installation toolkit, set the worker
node as an offline node, and exclude the other nodes.
6. After the upgrade is completed, do the following steps:
a. Log on to the worker node and issue the following command to enable autoload:
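The autoload command referred to above may look like the following sketch; the node name is a placeholder:

```shell
/usr/lpp/mmfs/bin/mmchconfig autoload=yes -N <worker_node>
```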
b. Log on to the worker node and start GPFS.
/usr/lpp/mmfs/bin/mmstartup
d. Relabel the node with IBM Spectrum Scale Container Storage Interface driver.
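The relabel command may look like the following sketch. The label key and value (scale=true) mirror the unlabel command used earlier in this procedure (scale-), but are otherwise an assumption; use the label configured for your deployment:

```shell
kubectl label node <nodename> scale=true --overwrite=true
```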
Chapter 9. Cleanup
Note: Ensure that the custom resource is properly deleted before you proceed with Operator deletion.
This operation takes a few minutes to delete all the resources created by the Operator.
2. To uninstall Operator and clean up all resources, issue the following commands:
If you are using an OCP cluster with RHEL nodes, issue the following command:
Note: Delete the secrets for GUI credentials and configmap for CA certificates (if any) under the
ibm-spectrum-scale-csi-driver namespace.
3. Delete the ibm-spectrum-scale-csi-driver namespace:
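The namespace deletion command for step 3 may look like the following sketch:

```shell
kubectl delete namespace ibm-spectrum-scale-csi-driver
```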
4. Remove all the IBM Spectrum Scale Container Storage Interface driver container images from the
Kubernetes or OCP worker nodes.
Note: If you use a different container engine than Docker, replace the Docker commands with the
commands of the container engine that you use.
5. To delete PVC data, unlink and delete the primary fileset that is defined in the
csiscaleoperators.csi.ibm.com_cr.yaml file from your IBM Spectrum Scale cluster, by issuing
the following commands:
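The unlink and delete commands for step 5 may look like the following sketch; the file system and fileset names are placeholders, and must match the filesystem and primary fileset values configured in csiscaleoperators.csi.ibm.com_cr.yaml:

```shell
# Unlink the primary fileset from the file system, then delete it (destroys PVC data).
/usr/lpp/mmfs/bin/mmunlinkfileset <filesystem> <primary_fileset>
/usr/lpp/mmfs/bin/mmdelfileset <filesystem> <primary_fileset> -f
```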
Note: The commands in "step 5" completely delete the PVC data, and any PVCs that were created
before are no longer usable even if the IBM Spectrum Scale Container Storage Interface driver is
reinstalled.
Chapter 11. Troubleshooting
For any issue with the IBM Spectrum Scale Container Storage Interface driver functions, you must
obtain the logs, which can be done by running the spectrum-scale-driver-snap.sh tool. These logs,
along with the output of the gpfs.snap command, can be used for debugging the issue. Additionally,
detailed output of Kubernetes resources such as pvc, snapshot, and pod for which failures are seen is
needed. The output can be generated by issuing a command in the format kubectl get <resource>
<resource name> -o yaml.
curl -O https://raw.githubusercontent.com/IBM/ibm-spectrum-scale-csi/v2.9.0/tools/spectrum-
scale-driver-snap.sh
-n: Debug data for CSI resources under this namespace will be collected. If not specified,
default namespace is used. The tool returns error if CSI is not running under the given
namespace.
-o: Output directory where debug data will be stored. If not specified, the debug data is
stored in current directory.
-h: Prints the usage
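An invocation of the tool may look like the following sketch; the namespace and output directory are assumptions:

```shell
./spectrum-scale-driver-snap.sh -n ibm-spectrum-scale-csi-driver -o /tmp/csi-snap
```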
The resultant folder contains the following files with debug information:
• ibm-spectrum-scale-csi-attacher.log
• ibm-spectrum-scale-csi-attacher
• ibm-spectrum-scale-csi-provisioner.log
• ibm-spectrum-scale-csi-provisioner
• ibm-spectrum-scale-csi-resizer.log
• ibm-spectrum-scale-csi-resizer
• ibm-spectrum-scale-csi-snapshotter.log
• ibm-spectrum-scale-csi-snapshotter
• ibm-spectrum-scale-csi-describe-CSIScaleOperator
• ibm-spectrum-scale-csi-attacher-0.log
• ibm-spectrum-scale-csi-attacher-0-previous.log
• ibm-spectrum-scale-csi-attacher-1.log
• ibm-spectrum-scale-csi-attacher-1-previous.log
• ibm-spectrum-scale-csi-operator-xxxxxxxxxxxx-xxxxx.log
• ibm-spectrum-scale-csi-operator-xxxxxxxxxxxx-xxxxx-previous.log
Issue: IBM Spectrum Scale Container Storage Interface driver pod and
sidecar pods do not come up during deployment
# kubectl get pod -n ibm-spectrum-scale-csi-driver
How to troubleshoot?
The Operator validates the configuration parameters for the CSIScaleOperator. If validation fails, the
driver and sidecar pods do not come up. Check the status in the CSIScaleOperator custom resource, as
shown in the following example, where you can see the reason for the failure.
For example, the above status indicates that the secret 'guisecret1' used to connect to the GUI server
does not exist. To fix this issue, make sure that the secret used in the CSIScaleOperator custom
resource exists in the namespace ibm-spectrum-scale-csi-driver. To get more details about the failure,
check the Operator logs.
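Retrieving the CSIScaleOperator status and the Operator logs may look like the following sketch; the custom resource name and the Operator deployment name are assumptions based on a default deployment:

```shell
# Inspect the status: section of the custom resource for validation failures.
kubectl get csiscaleoperator ibm-spectrum-scale-csi \
  -n ibm-spectrum-scale-csi-driver -o yaml

# Check the Operator logs for more details about the failure.
kubectl logs deployment/ibm-spectrum-scale-csi-operator -n ibm-spectrum-scale-csi-driver
```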
How to troubleshoot?
Look for the PVC description. It should highlight any error prohibiting the volume creation, as shown in the
following example:
Issue: The PVC remains in the "Pending" state while cloning or restoring multiple
volumes
# kubectl get pvc scale-fset-clone-pvc
NAME                   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS                     AGE
scale-fset-clone-pvc   Pending                                      ibm-spectrum-scale-csi-fileset   45s
How to troubleshoot?
1. Get the PVC "uid" by issuing the command as shown in the following example.
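The command referred to in step 1 may look like the following sketch, using the PVC name from the example above:

```shell
kubectl get pvc scale-fset-clone-pvc -o jsonpath='{.metadata.uid}'
```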
2. Look for "uid" in the IBM Spectrum Scale Container Storage Interface driver container logs in the IBM
Spectrum Scale Container Storage Interface driver pod as shown in the following example, where you
can see the root cause of the failure.
The error log shows that the volume creation failed with Invalid jobId error from GUI.
3. Delete the PVC that is in a pending state and create a new PVC to resolve the Invalid jobId error.
Note: Complete the additional cleanup steps on the IBM Spectrum Scale side along with deleting the
"Pending" PVC. Cleanup includes deleting the fileset with the name pvc-<pvcuid> and deleting the
softlink with the same name present in the primary fileset.
4. From the storage cluster, delete the fileset with the name pvc-<UID> by issuing the following
commands:
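The cleanup commands for the pvc-<UID> fileset may look like the following sketch; the file system name is a placeholder:

```shell
# Unlink the fileset that backs the failed PVC, then delete it.
/usr/lpp/mmfs/bin/mmunlinkfileset <filesystem> pvc-<UID>
/usr/lpp/mmfs/bin/mmdelfileset <filesystem> pvc-<UID> -f
```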
Debugging pod mounting issues
This section discusses the troubleshooting of issues related to pod mounting.
Issue
Application pod fails to start and does not go in the Running state.
How to troubleshoot?
Look for pod description for the root cause of failure.
Root cause
If the above error is seen despite the file system being mounted on a given node, then the root cause is
that IBM Spectrum Scale node names and Kubernetes node names are different, and node mapping is not
configured. For more information, see “Kubernetes to IBM Spectrum Scale node mapping” on page 30.
Symptoms
• PVC and Snapshot creation and deletion operation fails.
• Attach and Detach operation of PVC to Pod fails.
• IBM Spectrum Scale Container Storage Interface driver pod fails repeatedly during the GUI start.
Causes
• IBM Spectrum Scale GUI does not function as expected.
• GUI user credentials that are needed for IBM Spectrum Scale Container Storage Interface driver setup
are no longer valid.
• GUI SSL CA certificate is expired.
If the GUI user credentials are reset or expired, you can see related error messages during various
volume-specific operations. The following example shows a pending status for a Create PVC operation
due to the issues described in the messages.
2. To resolve GUI access issue, reset the IBM Spectrum Scale GUI user credential and update the secret
for IBM Spectrum Scale Container Storage Interface driver. For more information about updating a
secret, see “Changing the configuration after deployment” on page 31.
Note: The PVC creation issue mentioned above can be resolved by fixing the secret as described in step
2.
For more information about how to monitor, administer, and troubleshoot GUI-related issues, see
monitoring, administration, and troubleshooting sections in the specific versions of IBM Spectrum Scale
documentation.
Appendix A. Installing IBM Spectrum Scale CSI on a
Kubernetes cluster with RHEL 8 nodes
Note: No official Kubernetes community or Red Hat support documentation is available for
Kubernetes installation on RHEL 8. Although much unofficial documentation is available on the
Internet, the Kubernetes configurations in these documents are not validated with the IBM Spectrum
Scale CSI driver. Ideally, if Kubernetes works properly, CSI is also expected to work. If CSI does not
work because of Kubernetes configuration issues, these issues can be fixed with limited support. The
following configuration is used on RHEL 8 nodes.
1. Enable IP forwarding on the nodes on which you are setting up a Kubernetes cluster, if it is not
enabled.
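Enabling IP forwarding may look like the following sketch; the drop-in file name is an assumption:

```shell
# Enable IPv4 forwarding persistently and load the setting now.
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-kubernetes.conf
sysctl -p /etc/sysctl.d/99-kubernetes.conf
```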
Accessibility features
The following list includes the major accessibility features in IBM Spectrum Scale:
• Keyboard-only operation
• Interfaces that are commonly used by screen readers
• Keys that are discernible by touch but do not activate just by touching them
• Industry-standard devices for ports and connectors
• The attachment of alternative input and output devices
IBM Documentation, and its related publications, are accessibility-enabled.
Keyboard navigation
This product uses standard Microsoft Windows navigation keys.
IBM Director of Licensing IBM Corporation North Castle Drive, MD-NC119 Armonk, NY 10504-1785 US
For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual
Property Department in your country or send inquiries, in writing, to:
Intellectual Property Licensing Legal and Intellectual Property Law IBM Japan Ltd. 19-21, Nihonbashi-
Hakozakicho, Chuo-ku Tokyo 103-8510, Japan
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in
certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:
IBM Director of Licensing IBM Corporation North Castle Drive, MD-NC119 Armonk, NY 10504-1785 US
Such information may be available, subject to appropriate terms and conditions, including in some cases,
payment of a fee.
The licensed program described in this document and all licensed material available for it are provided by
IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any
equivalent agreement between us.
The performance data discussed herein is presented as derived under specific operating conditions.
Actual results may vary.
Information concerning non-IBM products was obtained from the suppliers of those products, their
published announcements or other publicly available sources. IBM has not tested those products and
cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of
those products.
Each copy or any portion of these sample programs or any derivative work must include
a copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. © Copyright IBM Corp. (enter the year or years).
If you are viewing this information softcopy, the photographs and color illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at
"Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.
Intel is a trademark of Intel Corporation or its subsidiaries in the United States and other countries.
Java™ and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or
its affiliates.
The registered trademark Linux is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or
both.
Red Hat, OpenShift, and Ansible® are trademarks or registered trademarks of Red Hat, Inc. or its
subsidiaries in the United States and other countries.
UNIX is a registered trademark of the Open Group in the United States and other countries.
70 Notices
IBM Privacy Policy
At IBM we recognize the importance of protecting your personal information and are committed to
processing it responsibly and in compliance with applicable data protection laws in all countries in which
IBM operates.
Visit the IBM Privacy Policy for additional information on this topic at https://www.ibm.com/privacy/details/us/en/.
Applicability
These terms and conditions are in addition to any terms of use for the IBM website.
Personal use
You can reproduce these publications for your personal, noncommercial use provided that all proprietary
notices are preserved. You cannot distribute, display, or make derivative works of these publications, or
any portion thereof, without the express consent of IBM.
Commercial use
You can reproduce, distribute, and display these publications solely within your enterprise provided
that all proprietary notices are preserved. You cannot make derivative works of these publications, or
reproduce, distribute, or display these publications or any portion thereof outside your enterprise, without
the express consent of IBM.
Rights
Except as expressly granted in this permission, no other permissions, licenses, or rights are granted,
either express or implied, to the Publications or any information, data, software or other intellectual
property contained therein.
IBM reserves the right to withdraw the permissions that are granted herein whenever, in its discretion, the
use of the publications is detrimental to its interest or as determined by IBM, the above instructions are
not being properly followed.
You cannot download, export, or reexport this information except in full compliance with all applicable
laws and regulations, including all United States export laws and regulations.
IBM MAKES NO GUARANTEE ABOUT THE CONTENT OF THESE PUBLICATIONS. THE PUBLICATIONS
ARE PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED,
INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT,
AND FITNESS FOR A PARTICULAR PURPOSE.
Glossary
This glossary provides terms and definitions for IBM Spectrum Scale.
The following cross-references are used in this glossary:
• See refers you from a nonpreferred term to the preferred term or from an abbreviation to the spelled-out form.
• See also refers you to a related or contrasting term.
For other terms and definitions, see the IBM Terminology website (www.ibm.com/software/globalization/terminology).
B
block utilization
The measurement of the percentage of used subblocks per allocated blocks.
C
cluster
A loosely coupled collection of independent systems (nodes) organized into a network for the purpose
of sharing resources and communicating with each other. See also GPFS cluster.
cluster configuration data
The configuration data that is stored on the cluster configuration servers.
Cluster Export Services (CES) nodes
A subset of nodes configured within a cluster to provide a solution for exporting GPFS file systems by
using the Network File System (NFS), Server Message Block (SMB), and Object protocols.
cluster manager
The node that monitors node status using disk leases, detects failures, drives recovery, and selects
file system managers. The cluster manager must be a quorum node. The selection of the cluster
manager node favors the quorum-manager node with the lowest node number among the nodes that
are operating at that particular time.
Note: The cluster manager role is not moved to another node when a node with a lower node number
becomes active.
clustered watch folder
Provides a scalable and fault-tolerant method for monitoring file system activity within an IBM Spectrum
Scale file system. A clustered watch folder can watch file system activity on a fileset, inode space, or an
entire file system. Events are streamed to an external Kafka sink cluster in an easy-to-parse JSON
format. For more information, see the mmwatch command in the IBM Spectrum Scale: Command and
Programming Reference Guide.
control data structures
Data structures needed to manage file data and metadata cached in memory. Control data structures
include hash tables and link pointers for finding cached data; lock states and tokens to implement
distributed locking; and various flags and sequence numbers to keep track of updates to the cached
data.
D
Data Management Application Program Interface (DMAPI)
The interface defined by the Open Group's XDSM standard as described in the publication
System Management: Data Storage Management (XDSM) API Common Application Environment (CAE)
Specification C429, The Open Group ISBN 1-85912-190-X.
E
ECKD
See extended count key data (ECKD).
ECKD device
See extended count key data device (ECKD device).
encryption key
A mathematical value that allows components to verify that they are in communication with the
expected server. Encryption keys are based on a public or private key pair that is created during the
installation process. See also file encryption key, master encryption key.
extended count key data (ECKD)
An extension of the count-key-data (CKD) architecture. It includes additional commands that can be
used to improve performance.
extended count key data device (ECKD device)
A disk storage device that has a data transfer rate faster than some processors can utilize and that is
connected to the processor through use of a speed matching buffer. A specialized channel program is
needed to communicate with such a device. See also fixed-block architecture disk device.
F
failback
Cluster recovery from failover following repair. See also failover.
failover
(1) The assumption of file system duties by another node when a node fails. (2) The process of
transferring all control of the ESS to a single cluster in the ESS when the other clusters in the ESS fail.
See also cluster. (3) The routing of all transactions to a second controller when the first controller fails.
See also cluster.
failure group
A collection of disks that share common access paths or adapter connections, and could all become
unavailable through a single hardware failure.
FEK
See file encryption key.
fileset
A hierarchical grouping of files managed as a unit for balancing workload across a cluster. See also
dependent fileset, independent fileset.
fileset snapshot
A snapshot of an independent fileset plus all dependent filesets.
file audit logging
Provides the ability to monitor user activity of IBM Spectrum Scale file systems and store events
related to the user activity in a security-enhanced fileset. Events are stored in an easy-to-parse JSON
format. For more information, see the mmaudit command in the IBM Spectrum Scale: Command and
Programming Reference Guide.
file clone
A writable snapshot of an individual file.
file encryption key (FEK)
A key used to encrypt sectors of an individual file. See also encryption key.
file-management policy
A set of rules defined in a policy file that GPFS uses to manage file migration and file deletion. See
also policy.
file-placement policy
A set of rules defined in a policy file that GPFS uses to manage the initial placement of a newly created
file. See also policy.
file system descriptor
A data structure containing key information about a file system. This information includes the disks
assigned to the file system (stripe group), the current state of the file system, and pointers to key files
such as quota files and log files.
file system descriptor quorum
The number of disks needed in order to write the file system descriptor correctly.
file system manager
The provider of services for all the nodes using a single file system. A file system manager processes
changes to the state or description of the file system, controls the regions of disks that are allocated
to each node, and controls token management and quota management.
fixed-block architecture disk device (FBA disk device)
A disk device that stores data in blocks of fixed size. These blocks are addressed by block number
relative to the beginning of the file. See also extended count key data device.
fragment
The space allocated for an amount of data too small to require a full block. A fragment consists of one
or more subblocks.
G
GPUDirect Storage
IBM Spectrum Scale's support for NVIDIA's GPUDirect Storage (GDS) enables a direct path between
GPU memory and storage. File system storage is directly connected to the GPU buffers to reduce
latency and load on the CPU. Data is read directly from an NSD server's pagepool and sent to the GPU
buffers of the IBM Spectrum Scale clients by using RDMA.
global snapshot
A snapshot of an entire GPFS file system.
GPFS cluster
A cluster of nodes defined as being available for use by GPFS file systems.
GPFS portability layer
The interface module that each installation must build for its specific hardware platform and Linux
distribution.
GPFS recovery log
A file that contains a record of metadata activity and exists for each node of a cluster. In the event of
a node failure, the recovery log for the failed node is replayed, restoring the file system to a consistent
state and allowing other nodes to continue working.
I
ill-placed file
A file assigned to one storage pool but having some or all of its data in a different storage pool.
ill-replicated file
A file with contents that are not correctly replicated according to the desired setting for that file. This
situation occurs in the interval between a change in the file's replication settings or suspending one of
its disks, and the restripe of the file.
independent fileset
A fileset that has its own inode space.
indirect block
A block containing pointers to other blocks.
inode
The internal structure that describes the individual files in the file system. There is one inode for each
file.
inode space
A collection of inode number ranges reserved for an independent fileset, which enables more efficient
per-fileset functions.
ISKLM
IBM Security Key Lifecycle Manager. For GPFS encryption, the ISKLM is used as an RKM server to
store MEKs.
J
journaled file system (JFS)
A technology designed for high-throughput server environments, which are important for running
intranet and other high-performance e-business file servers.
junction
A special directory entry that connects a name in a directory of one fileset to the root directory of
another fileset.
K
kernel
The part of an operating system that contains programs for such tasks as input/output, management
and control of hardware, and the scheduling of user tasks.
M
master encryption key (MEK)
A key used to encrypt other keys. See also encryption key.
MEK
See master encryption key.
metadata
Data structures that contain information that is needed to access file data. Metadata includes inodes,
indirect blocks, and directories. Metadata is not accessible to user applications.
metanode
The one node per open file that is responsible for maintaining file metadata integrity. In most cases,
the node that has had the file open for the longest period of continuous time is the metanode.
mirroring
The process of writing the same data to multiple disks at the same time. The mirroring of data
protects it against data loss within the database or within the recovery log.
Microsoft Management Console (MMC)
A Windows tool that can be used to do basic configuration tasks on an SMB server. These tasks
include administrative tasks such as listing or closing the connected users and open files, and creating
and manipulating SMB shares.
multi-tailed
A disk connected to multiple nodes.
N
namespace
Space reserved by a file system to contain the names of its objects.
Network File System (NFS)
A protocol, developed by Sun Microsystems, Incorporated, that allows any host in a network to gain
access to another host or netgroup and their file directories.
Network Shared Disk (NSD)
A component for cluster-wide disk naming and access.
NSD volume ID
A unique 16-digit hex number that is used to identify and access all NSDs.
node
An individual operating-system image within a cluster. Depending on the way in which the computer
system is partitioned, it may contain one or more nodes.
node descriptor
A definition that indicates how GPFS uses a node. Possible functions include: manager node, client
node, quorum node, and nonquorum node.
node number
A number that is generated and maintained by GPFS as the cluster is created, and as nodes are added
to or deleted from the cluster.
node quorum
The minimum number of nodes that must be running in order for the daemon to start.
node quorum with tiebreaker disks
A form of quorum that allows GPFS to run with as little as one quorum node available, as long as there
is access to a majority of the quorum disks.
non-quorum node
A node in a cluster that is not counted for the purposes of quorum determination.
Non-Volatile Memory Express (NVMe)
An interface specification that allows host software to communicate with non-volatile memory
storage media.
P
policy
A list of file-placement, service-class, and encryption rules that define characteristics and placement
of files. Several policies can be defined within the configuration, but only one policy set is active at one
time.
policy rule
A programming statement within a policy that defines a specific action to be performed.
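As an illustrative sketch (not taken from this guide), policy rules are written in an SQL-like policy language. Assuming hypothetical storage pools named data and system, a minimal pair of file-placement rules might look like this:

```
/* Place newly created .log files in the 'data' pool. */
RULE 'logs' SET POOL 'data' WHERE LOWER(NAME) LIKE '%.log'
/* Fall back to the 'system' pool for everything else. */
RULE 'default' SET POOL 'system'
```

A policy file of this kind would typically be installed or validated with the mmchpolicy command; see the IBM Spectrum Scale: Command and Programming Reference Guide for the authoritative rule syntax.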
pool
A group of resources with similar characteristics and attributes.
portability
The ability of a programming language to compile successfully on different operating systems without
requiring changes to the source code.
primary GPFS cluster configuration server
In a GPFS cluster, the node chosen to maintain the GPFS cluster configuration data.
private IP address
An IP address used to communicate on a private network.
public IP address
An IP address used to communicate on a public network.
Q
quorum node
A node in the cluster that is counted to determine whether a quorum exists.
quota
The amount of disk space and number of inodes assigned as upper limits for a specified user, group of
users, or fileset.
quota management
The allocation of disk blocks to the other nodes writing to the file system, and comparison of the
allocated space to quota limits at regular intervals.
R
Redundant Array of Independent Disks (RAID)
A collection of two or more disk physical drives that present to the host an image of one or more
logical disk drives. In the event of a single physical device failure, the data can be read or regenerated
from the other disk drives in the array due to data redundancy.
recovery
The process of restoring access to file system data when a failure has occurred. Recovery can involve
reconstructing data or providing alternative routing through a different server.
remote key management server (RKM server)
A server that is used to store master encryption keys.
replication
The process of maintaining a defined set of data in more than one location. Replication consists of
copying designated changes for one location (a source) to another (a target) and synchronizing the
data in both locations.
RKM server
See remote key management server.
rule
A list of conditions and actions that are triggered when certain conditions are met. Conditions include
attributes about an object (file name, type or extension, dates, owner, and groups), the requesting
client, and the container name associated with the object.
S
SAN-attached
Disks that are physically attached to all nodes in the cluster using Serial Storage Architecture (SSA)
connections or using Fibre Channel switches.
Scale Out Backup and Restore (SOBAR)
A specialized mechanism for data protection against disaster only for GPFS file systems that are
managed by IBM Spectrum Protect for Space Management.
secondary GPFS cluster configuration server
In a GPFS cluster, the node chosen to maintain the GPFS cluster configuration data in the event that
the primary GPFS cluster configuration server fails or becomes unavailable.
Secure Hash Algorithm digest (SHA digest)
A character string used to identify a GPFS security key.
session failure
The loss of all resources of a data management session due to the failure of the daemon on the
session node.
session node
The node on which a data management session was created.
Small Computer System Interface (SCSI)
An ANSI-standard electronic interface that allows personal computers to communicate with
peripheral hardware, such as disk drives, tape drives, CD-ROM drives, printers, and scanners faster
and more flexibly than previous interfaces.
snapshot
An exact copy of changed data in the active files and directories of a file system or fileset at a single
point in time. See also fileset snapshot, global snapshot.
source node
The node on which a data management event is generated.
stand-alone client
The node in a one-node cluster.
storage area network (SAN)
A dedicated storage network tailored to a specific environment, combining servers, storage products,
networking products, software, and services.
storage pool
A grouping of storage space consisting of volumes, logical unit numbers (LUNs), or addresses that
share a common set of administrative characteristics.
stripe group
The set of disks comprising the storage assigned to a file system.
striping
A storage process in which information is split into blocks (a fixed amount of data) and the blocks are
written to (or read from) a series of disks in parallel.
subblock
The smallest unit of data accessible in an I/O operation, equal to one thirty-second of a data block.
system storage pool
A storage pool containing file system control structures, reserved files, directories, symbolic links,
special devices, as well as the metadata associated with regular files, including indirect blocks and
extended attributes. The system storage pool can also contain user data.
T
token management
A system for controlling file access in which each application performing a read or write operation
is granted some form of access to a specific block of file data. Token management provides data
consistency and controls conflicts. Token management has two components: the token management
server, and the token management function.
token management function
A component of token management that requests tokens from the token management server. The
token management function is located on each cluster node.
token management server
A component of token management that controls tokens relating to the operation of the file system.
The token management server is located at the file system manager node.
transparent cloud tiering (TCT)
A separately installable add-on feature of IBM Spectrum Scale that provides a native cloud storage
tier. It allows data center administrators to free up on-premise storage capacity, by moving out cooler
data to the cloud storage, thereby reducing capital and operational expenditures.
twin-tailed
A disk connected to two nodes.
U
user storage pool
A storage pool containing the blocks of data that make up user files.
V
VFS
See virtual file system.
virtual file system (VFS)
A remote file system that has been mounted so that it is accessible to the local user.
virtual node (vnode)
The structure that contains information about a file system object in a virtual file system (VFS).
W
watch folder API
Provides a programming interface where a custom C program can be written that incorporates the
ability to monitor inode spaces, filesets, or directories for specific user activity-related events within
IBM Spectrum Scale file systems. For more information, see the sample program tswf, provided in the
/usr/lpp/mmfs/samples/util directory on IBM Spectrum Scale nodes, which can be modified
according to the user's needs.
Index
H
hardware requirements
  IBM Spectrum Scale Container Storage Interface driver 5
I
P
Persistent Volume Claim 40
T
Troubleshooting 59
IBM®
SC28-3113-17