Installing and Configuring For Linux Express Guide
Before using this information and the product it supports, be sure to read and
understand the safety information and the safety instructions, which are available at:
http://thinksystem.lenovofiles.com/help/topic/safety_documentation/pdf_files.html
In addition, be sure that you are familiar with the terms and conditions of the Lenovo warranty
for your server, which can be found at:
http://datacentersupport.lenovo.com/warrantylookup
• iSCSI
• SAS
Component Assumptions
Hardware
• You have used the Installation and Setup Instructions included with
the controller shelves to install the hardware.
• You have connected cables between the optional drive shelves and
the array controllers.
Host
• You have made a connection between the storage array and the
data host.
• You are not configuring the data (I/O attached) host to boot from
SAN.
• If you are using NVMe over Fabrics, you have installed the latest
compatible Linux version as listed under the Lenovo Storage
Interoperation Center.
Storage management station
• You are using a 1 Gbps or faster management network.
• You are using a separate station for management rather than the
data (I/O attached) host.
IP addressing
• You have installed and configured a DHCP server.
Storage provisioning
• You will not use shared volumes.
Protocol: FC
• You have made all host-side FC connections and activated switch
zoning.
Protocol: iSCSI
• You are using Ethernet switches capable of transporting iSCSI
traffic.
Protocol: SAS
• You are using Lenovo-supported SAS HBAs.
• You are using SAS HBA driver versions as listed on Lenovo Storage
Interoperation Center (LSIC).
Protocol: NVMe over RoCE
• You have received the 100G host interface cards in a DE6000H or DE6000F storage system pre-configured with the NVMe over RoCE protocol.
Protocol: NVMe over Fibre Channel
• You have received the 32G host interface cards in a DE6000H or DE6000F storage system pre-configured with the NVMe over Fibre Channel protocol, or the controllers were ordered with standard FC ports and need to be converted to NVMe-oF.
Understanding the workflow
This workflow guides you through the express method for configuring your storage array
and ThinkSystem System Manager to make storage available to a host.
Verifying the configuration is supported
To ensure reliable operation, you create an implementation plan and then verify that the entire
configuration is supported.
In this file, you may search for the product family that applies, as well as other criteria
for the configuration such as Operating System, ThinkSystem SAN OS, and Host
Multipath driver.
3. As necessary, make the updates for your operating system and protocol as listed in the
table.
Configuring management port IP addresses
In this express method for configuring communications between the management station and the
storage array, you use Dynamic Host Configuration Protocol (DHCP) to provide IP addresses. Each
controller has two storage management ports, and each management port will be assigned an IP
address.
You have installed and configured a DHCP server on the same subnet as the storage management
ports.
The following instructions refer to a storage array with two controllers (a duplex configuration).
1. If you have not already done so, connect an Ethernet cable to the management station
and to management port 1 on each controller (A and B).
Note: Do not use management port 2 on either controller. Port 2 is reserved for use by
Lenovo technical personnel.
Important: If you disconnect and reconnect the Ethernet cable, or if the storage array
is power-cycled, DHCP assigns IP addresses again. This process occurs until static IP
addresses are configured. It is recommended that you avoid disconnecting the cable or
power-cycling the array.
If the storage array cannot get DHCP-assigned IP addresses within 30 seconds, the
following default IP addresses are set:
2. Locate the MAC address label on the back of each controller, and then provide your
network administrator with the MAC address for port 1 of each controller.
Your network administrator needs the MAC addresses to determine the IP address for
each controller. You will need the IP addresses to connect to your storage system
through your browser.
Access ThinkSystem System Manager and use the Setup
Wizard
You use the Setup wizard in ThinkSystem System Manager to configure your storage array.
• You have ensured that the device from which you will access ThinkSystem System
Manager contains one of the following browsers:
Browser Minimum version
Google Chrome 47
Microsoft Internet Explorer 11
Microsoft Edge EdgeHTML 12
Mozilla Firefox 31
Safari 9
If you are an iSCSI user, make sure you have closed the Setup wizard while configuring iSCSI.
The wizard automatically relaunches when you open System Manager or refresh your browser
and at least one of the following conditions is met:
If the Setup wizard does not automatically appear, contact technical support.
The first time ThinkSystem System Manager is opened on an array that has not been
configured, the Set Administrator Password prompt appears. Role-based access
management configures four local roles: admin, support, security, and monitor. The
latter three roles have random passwords that cannot be guessed. After you set a
password for the admin role you can change all of the passwords using
the admin credentials. See ThinkSystem System Manager online help for more
information on the four local user roles.
2. Enter the System Manager password for the admin role in the Set Administrator
Password and Confirm Password fields, and then select the Set Password button.
When you open System Manager and no pools, volume groups, workloads, or notifications have been configured, the Setup wizard launches.
• Verify hosts and operating systems – Verify the host and operating
system types that the storage array can access.
• Accept pools – Accept the recommended pool configuration for the
express installation method. A pool is a logical group of drives.
For more information, see the online help for ThinkSystem System Manager.
Install ThinkSystem Host Utilities
Storage Manager (Host Utilities) can only be installed on host servers.
1. Download the ThinkSystem Host Utilities package from DE Series Product Support Site.
Performing FC-specific tasks
For the Fibre Channel protocol, you configure the switches and determine the host port identifiers.
• Most HBA vendors offer an HBA utility. You will need the correct version of the HBA utility for your host operating system and CPU. Examples of FC HBA utilities include:
• Host I/O ports might automatically register if the host context agent is installed.
1. Download the appropriate utility from your HBA vendor's web site.
Configuring (zoning) the Fibre Channel (FC) switches enables the hosts to connect to the storage
array and limits the number of paths. You zone the switches using the management interface for the
switches.
• You must have used your HBA utility to discover the WWPN of each host initiator port
and of each controller target port connected to the switch.
For details about zoning your switches, see the switch vendor's documentation.
You must zone by WWPN, not by physical port. Each initiator port must be in a separate zone with
all of its corresponding target ports.
1. Log in to the FC switch administration program, and then select the zoning configuration
option.
2. Create a new zone that includes the first host initiator port and that also includes all of
the target ports that connect to the same FC switch as the initiator.
3. Create additional zones for each FC host initiator port in the switch.
4. Save the zones, and then activate the new zoning configuration.
Configure the multipath software
Multipath software provides a redundant path to the storage array in case one of the physical paths
is disrupted. The multipath software presents the operating system with a single virtual device that
represents the active physical paths to the storage. The multipath software also manages the failover
process that updates the virtual device. You use the device mapper multipath (DM-MP) tool for Linux
installations.
• For Red Hat (RHEL) hosts, verify the packages are installed by running rpm -q device-mapper-multipath.
• For SLES hosts, verify the packages are installed by running rpm -q multipath-tools.
By default, DM-MP is disabled in RHEL and SLES. Complete the following steps to enable DM-MP
components on the host.
If you have not already installed the operating system, use the media supplied by your operating
system vendor.
2. Use the default multipath settings by leaving the multipath.conf file blank.
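Before rebuilding the boot image, the multipathd daemon must be running; a minimal sketch, assuming a systemd-based host (RHEL 7 or later, SLES 12 or later):
# systemctl start multipathd
# systemctl enable multipathd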
7. Rebuild the initramfs image or the initrd image under /boot directory:
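For RHEL 6.x, 7.x, and 8.x systems, the rebuild command (shown again in the SAS section of this guide) is:
# dracut --force --add multipath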
8. Make sure that the newly created /boot/initramfs-* image or /boot/initrd-* image is selected in the boot configuration file. For example, for grub it is /boot/grub/menu.lst and for grub2 it is /boot/grub2/menu.cfg.
9. Use the "Create host manually" procedure in the online help to check whether the hosts
are defined.
Verify that each host type is either Linux DM-MP (Kernel 3.10 or later) if you enable the
Automatic Load Balancing feature, or Linux DM-MP (Kernel 3.9 or earlier) if you disable
the Automatic Load Balancing feature. If necessary, change the selected host type to
the appropriate setting.
The multipath.conf file is the configuration file for the multipath daemon, multipathd. The
multipath.conf file overrides the built-in configuration table for multipathd. Any line in the file whose
first non-white-space character is # is considered a comment line. Empty lines are ignored.
Note: For ThinkSystem operating system 8.50 and newer, Lenovo recommends using the default
settings as provided.
The host must have discovered the LUN. See the common storage array tasks in the chapters Creating a workload, Create volumes, Defining a host in ThinkSystem System Manager, and Mapping a volume to a host.
In the /dev/mapper folder, you have run the ls command to see the available disks.
You can initialize the disk as a basic disk with a GUID partition table (GPT) or Master boot record
(MBR).
Format the LUN with a file system such as ext4. Some applications do not require this step.
1. Retrieve the SCSI ID of the mapped disk by issuing the multipath -ll command. The SCSI ID is a 33-character string of hexadecimal digits, beginning with the number 3. If user-friendly names are enabled, Device Mapper reports disks as mpath instead of by a SCSI ID.
# multipath -ll
mpathd (360080e5000321bb8000092b1535f887a) dm-2 LENOVO,DE_Series
size=1.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 16:0:4:4 sde 69:144 active ready running
| `- 15:0:5:4 sdf 65:176 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 16:0:5:4 sdg 70:80 active ready running
`- 15:0:1:4 sdh 66:0 active ready running
2. Create a new partition according to the method appropriate for your Linux OS release.
Typically, characters identifying the partition of a disk are appended to the SCSI ID (the
number 1 or p3 for instance).
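The partitioning command itself appears in the iSCSI section of this guide; the same invocation applies here:
# parted -a optimal -s -- /dev/mapper/360080e5000321bb8000092b1535f887a mklabel gpt mkpart primary ext4 0% 100%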
# mkfs.ext4 /dev/mapper/360080e5000321bb8000092b1535f887a1
# mkdir /mnt/ext4
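Mounting the new file system typically follows; a minimal sketch, assuming the partition device and mount point shown above:
# mount /dev/mapper/360080e5000321bb8000092b1535f887a1 /mnt/ext4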
You must have initialized the volume and formatted it with a file system.
1. On the host, copy one or more files to the mount point of the disk.
3. Run the diff command to compare the copied files to the originals.
FC worksheet
You can use this worksheet to record FC storage configuration information. You
need this information to perform provisioning tasks.
The illustration shows a host connected to a DE Series storage array in two zones. One
zone is indicated by the blue line; the other zone is indicated by the red line. Any single port
has two paths to the storage (one to each controller).
Host identifiers
Callout No. Host (initiator) port connections WWPN
1 Host not applicable
2 Host port 0 to FC switch zone 0
7 Host port 1 to FC switch zone 1
Target identifiers
Mapping host
Performing iSCSI-specific tasks
For the iSCSI protocol, you configure the switches and configure networking on the array side
and the host side. Then you verify the IP network connections.
• You have two separate networks for high availability. Make sure that you isolate your
iSCSI traffic to separate network segments.
• You have enabled send and receive hardware flow control end to end.
Note: Port channels/LACP is not supported on the controller's switch ports. Host-side LACP is not
recommended; multipathing provides the same, and in some cases better, benefits.
Consult your network administrator for tips on selecting the best configuration for your environment.
An effective strategy for configuring the iSCSI network with basic redundancy is to connect each host
port and one port from each controller to separate switches and partition each set of host and
controller ports on separate network segments using VLANs.
You must enable send and receive hardware flow control end to end. You must disable priority flow
control.
If you are using jumbo frames within the IP SAN for performance reasons, make sure to configure
the array, switches, and hosts to use jumbo frames. Consult your operating system and switch
documentation for information on how to enable jumbo frames on the hosts and on the switches. To
enable jumbo frames on the array, complete the steps in Configuring array-side networking—iSCSI.
Note: Many network switches must be configured above 9,000 bytes for IP overhead. Consult your
switch documentation for more information.
Configuring array-side networking - iSCSI
You use the ThinkSystem System Manager GUI to configure iSCSI networking on the array side.
• You must know the IP address or domain name for one of the storage array controllers.
• You or your system administrator must have set up a password for the System
Manager GUI, or you must have configured Role-Based Access Control (RBAC) or
LDAP and a directory service for the appropriate security access to the storage array.
See the ThinkSystem System Manager online help for more information about Access
Management.
This task describes how to access the iSCSI port configuration from the Hardware page. You can
also access the configuration from System > Settings > Configure iSCSI ports.
The first time ThinkSystem System Manager is opened on an array that has not been
configured, the Set Administrator Password prompt appears. Role-based access
management configures four local roles: admin, support, security, and monitor. The
latter three roles have random passwords that cannot be guessed. After you set a
password for the admin role you can change all of the passwords using
the admin credentials. See ThinkSystem System Manager online help for more
information on the four local user roles.
2. Enter the System Manager password for the admin role in the Set Administrator
Password and Confirm Password fields, and then select the Set Password button.
When you open System Manager and no pools, volume groups, workloads, or notifications have been configured, the Setup wizard launches.
You will use the wizard later to complete additional setup tasks.
4. Select Hardware.
6. Click the controller with the iSCSI ports you want to configure.
8. In the drop-down list, select the port you want to configure, and then click Next.
To see all port settings, click the Show more port settings link on the right of the
dialog box.
MTU size (Available by clicking Show more port settings.)
If necessary, enter a new size in bytes for the Maximum Transmission Unit (MTU). The default MTU size is 1500 bytes per frame. You must enter a value between 1500 and 9000.
Enable ICMP PING responses
Select this option to enable the Internet Control Message Protocol (ICMP). The operating systems of networked computers use this protocol to send messages. These ICMP messages determine whether a host is reachable and how long it takes to get packets to and from that host.
If you selected Enable IPv4, a dialog box opens for selecting IPv4 settings after you
click Next. If you selected Enable IPv6, a dialog box opens for selecting IPv6 settings
after you click Next. If you selected both options, the dialog box for IPv4 settings opens
first, and then after you click Next, the dialog box for IPv6 settings opens.
10. Configure the IPv4 and/or IPv6 settings, either automatically or manually. To see all
port settings, click the Show more settings link on the right of the dialog box.
Port setting Description
Enable VLAN support (Available by clicking Show more settings.)
Select this option to enable VLAN support. A VLAN is a logical network that behaves as if it is physically separate from other physical and virtual local area networks (LANs) supported by the same switches, the same routers, or both.
In most cases, you can use the inbox software-initiator for iSCSI CNA/NIC. You do not need to
download the latest driver, firmware, and BIOS. Refer to the Interoperability Matrix document to
determine code requirements.
• You have fully configured the switches that will be used to carry iSCSI storage traffic.
• You must have enabled send and receive hardware flow control end to end and
disabled priority flow control.
These instructions assume that two NIC ports will be used for iSCSI traffic.
In the /etc/iscsi/iscsid.conf file, set:
node.session.nr_sessions = 1
node.session.timeo.replacement_timeout=20
3. Make sure the iscsid and (open-)iscsi services are running and enabled for boot:
Red Hat Enterprise Linux 7 and 8 (RHEL 7 and RHEL 8)
SUSE Linux Enterprise Server 12 and 15 (SLES 12 and SLES 15)
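The service commands vary by distribution; a minimal sketch, assuming a systemd-based host:
# systemctl start iscsid
# systemctl enable iscsid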
4. Get the host IQN initiator name, which will be used to configure the host to an array.
# cat /etc/iscsi/initiatorname.iscsi
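The output resembles the following; the IQN suffix shown here is a placeholder:
InitiatorName=iqn.1994-05.com.redhat:example-host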
Edit:
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
Add:
IPADDR=192.168.xxx.xxx
NETMASK=255.255.255.0
Note: Be sure to set the address for both iSCSI initiator ports.
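Taken together, a completed interface file might look like the following sketch; the interface name is a placeholder, and on RHEL these files live under /etc/sysconfig/network-scripts/:
# cat /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.xxx.xxx
NETMASK=255.255.255.0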
3. Restart network services.
4. Make sure the Linux server can ping all of the iSCSI target ports.
1. On the host, run one of the following commands, depending on whether jumbo frames are
enabled:
• If jumbo frames are enabled, run the ping command with a payload size of 8,972 bytes. The IP and ICMP combined headers are 28 bytes, which when added to the payload, equals 9,000 bytes. The -s switch sets the packet size. The -d switch sets the debug option. These options allow jumbo frames of 9,000 bytes to be successfully transmitted between the iSCSI initiator and the target.
ping -I <hostIP> -s 8972 -d <targetIP>
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds:
• For Red Hat (RHEL) hosts, verify the packages are installed by running rpm -q device-mapper-multipath.
• For SLES hosts, verify the packages are installed by running rpm -q multipath-tools.
By default, DM-MP is disabled in RHEL and SLES. Complete the following steps to enable DM-MP
components on the host.
If you have not already installed the operating system, use the media supplied by your operating
system vendor.
2. Use the default multipath settings by leaving the multipath.conf file blank.
7. Rebuild the initramfs image or the initrd image under /boot directory:
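For RHEL 6.x, 7.x, and 8.x systems, the rebuild command (shown again in the SAS section of this guide) is:
# dracut --force --add multipath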
8. Make sure that the newly created /boot/initramfs-* image or /boot/initrd-* image is selected in the boot configuration file. For example, for grub it is /boot/grub/menu.lst and for grub2 it is /boot/grub2/menu.cfg.
9. Use the "Create host manually" procedure in the online help to check whether the hosts
are defined. Verify that each host type is either Linux DM-MP (Kernel 3.10 or later) if
you enable the Automatic Load Balancing feature, or Linux DM-MP (Kernel 3.9 or
earlier) if you disable the Automatic Load Balancing feature. If necessary, change the
selected host type to the appropriate setting.
The multipath.conf file is the configuration file for the multipath daemon, multipathd. The
multipath.conf file overrides the built-in configuration table for multipathd. Any line in the file whose
first non-white-space character is # is considered a comment line. Empty lines are ignored.
Note: For ThinkSystem operating system 8.50 and newer, Lenovo recommends using the default
settings as provided.
The host must have discovered the LUN. See the common storage array tasks in the chapters Creating a workload, Create volumes, Defining a host in ThinkSystem System Manager, and Mapping a volume to a host.
In the /dev/mapper folder, you have run the ls command to see the available disks.
You can initialize the disk as a basic disk with a GUID partition table (GPT) or Master boot record
(MBR).
Format the LUN with a file system such as ext4. Some applications do not require this step.
1. Retrieve the SCSI ID of the mapped disk by issuing the multipath -ll command. The SCSI ID is a 33-character string of hexadecimal digits, beginning with the number 3. If user-friendly names are enabled, Device Mapper reports disks as mpath instead of by a SCSI ID.
# multipath -ll
mpathd (360080e5000321bb8000092b1535f887a) dm-2 LENOVO,DE_Series
size=1.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 16:0:4:4 sde 69:144 active ready running
| `- 15:0:5:4 sdf 65:176 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 16:0:5:4 sdg 70:80 active ready running
`- 15:0:1:4 sdh 66:0 active ready running
2. Create a new partition according to the method appropriate for your Linux OS release.
Typically, characters identifying the partition of a disk are appended to the SCSI ID (the
number 1 or p3 for instance).
# parted -a optimal -s -- /dev/mapper/360080e5000321bb8000092b1535f887a mklabel gpt mkpart
primary ext4 0% 100%
3. Create a file system on the partition. The method for creating a file system varies depending on the file system chosen.
# mkfs.ext4 /dev/mapper/360080e5000321bb8000092b1535f887a1
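Creating a mount point and mounting the file system typically follow; a minimal sketch using the partition device shown above:
# mkdir /mnt/ext4
# mount /dev/mapper/360080e5000321bb8000092b1535f887a1 /mnt/ext4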
Before you begin
You must have initialized the volume and formatted it with a file system.
4. On the host, copy one or more files to the mount point of the disk.
6. Run the diff command to compare the copied files to the originals.
iSCSI worksheet
You can use this worksheet to record iSCSI storage configuration information. You need this
information to perform provisioning tasks.
Recommended configuration
Recommended configurations consist of two initiator ports and four target ports with one or more
VLANs.
Target IQN
Callout No. Target port connection IQN
2 Target port
Mappings host name
Callout No. Host information Name and type
1 Mappings host name
Host OS type
Performing SAS-specific tasks
Determining SAS host identifiers
For the SAS protocol, you find the SAS addresses using the HBA utility, then use the HBA BIOS to
make the appropriate configuration settings.
• Most HBA vendors offer an HBA utility. Depending on your host operating system and
CPU, use either the LSI-sas2flash(6G) or sas3flash(12G) utility.
• Host I/O ports might automatically register if the host context agent is installed.
1. Download the LSI-sas2flash(6G) or sas3flash(12G) utility from your HBA vendor's web
site.
3. Use the HBA BIOS to select the appropriate settings for your configuration.
• For Red Hat (RHEL) hosts, verify the packages are installed by running rpm -q device-mapper-multipath.
• For SLES hosts, verify the packages are installed by running rpm -q multipath-tools.
By default, DM-MP is disabled in RHEL and SLES. Complete the following steps to enable DM-MP
components on the host.
If you have not already installed the operating system, use the media supplied by your operating
system vendor.
2. Use the default multipath settings by leaving the multipath.conf file blank.
6. Do one of the following to enable the multipathd daemon on boot.
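The exact alternatives depend on the init system; a sketch for common releases, assumed rather than spelled out in this guide:
On systemd-based RHEL 7.x/8.x and SLES hosts:
# systemctl enable multipathd
On RHEL 6.x hosts:
# chkconfig multipathd on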
7. Rebuild the initramfs image or the initrd image under /boot directory:
RHEL 6.x, 7.x and 8.x systems: dracut --force --add multipath
8. Make sure that the newly created /boot/initramfs-* image or /boot/initrd-* image is selected in the boot configuration file. For example, for grub it is /boot/grub/menu.lst and for grub2 it is /boot/grub2/menu.cfg.
9. Use the "Create host manually" procedure in the online help to check whether the hosts
are defined. Verify that each host type is either Linux DM-MP (Kernel 3.10 or later) if
you enable the Automatic Load Balancing feature, or Linux DM-MP (Kernel 3.9 or
earlier) if you disable the Automatic Load Balancing feature. If necessary, change the
selected host type to the appropriate setting.
The multipath.conf file is the configuration file for the multipath daemon, multipathd. The
multipath.conf file overrides the built-in configuration table for multipathd. Any line in the file whose
first non-white-space character is # is considered a comment line. Empty lines are ignored.
Note: For ThinkSystem operating system 8.50 and newer, Lenovo recommends using the default
settings as provided.
The host must have discovered the LUN. See the common storage array tasks in the chapters Creating a workload, Create volumes, Defining a host in ThinkSystem System Manager, and Mapping a volume to a host.
In the /dev/mapper folder, you have run the ls command to see the available disks.
You can initialize the disk as a basic disk with a GUID partition table (GPT) or Master boot record
(MBR).
Format the LUN with a file system such as ext4. Some applications do not require this step.
1. Retrieve the SCSI ID of the mapped disk by issuing the multipath -ll command. The SCSI ID is a 33-character string of hexadecimal digits, beginning with the number 3. If user-friendly names are enabled, Device Mapper reports disks as mpath instead of by a SCSI ID.
# multipath -ll
mpathd (360080e5000321bb8000092b1535f887a) dm-2 LENOVO,DE_Series
size=1.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 16:0:4:4 sde 69:144 active ready running
| `- 15:0:5:4 sdf 65:176 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 16:0:5:4 sdg 70:80 active ready running
`- 15:0:1:4 sdh 66:0 active ready running
2. Create a new partition according to the method appropriate for your Linux OS release.
Typically, characters identifying the partition of a disk are appended to the SCSI ID (the
number 1 or p3 for instance).
# parted -a optimal -s -- /dev/mapper/360080e5000321bb8000092b1535f887a mklabel gpt mkpart
primary ext4 0% 100%
3. Create a file system on the partition. The method for creating a file system varies depending on the file system chosen.
# mkfs.ext4 /dev/mapper/360080e5000321bb8000092b1535f887a1
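Creating a mount point and mounting the file system typically follow; a minimal sketch using the partition device shown above:
# mkdir /mnt/ext4
# mount /dev/mapper/360080e5000321bb8000092b1535f887a1 /mnt/ext4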
You must have initialized the volume and formatted it with a file system.
7. On the host, copy one or more files to the mount point of the disk.
9. Run the diff command to compare the copied files to the originals.
SAS worksheet
You can use this worksheet to record SAS storage configuration information. You need this
information to perform provisioning tasks.
Host Identifiers
Callout No. Host (initiator) port connections SAS address
1 Host not applicable
2 Host (initiator) port 1 connected to Controller A, port 1
3 Host (initiator) port 1 connected to Controller B, port 1
4 Host (initiator) port 2 connected to Controller A, port 1
5 Host (initiator) port 2 connected to Controller B, port 1
Target Identifiers
Mappings Host
Mappings Host Name
Host OS Type
Performing NVMe over RoCE-specific tasks
You can use NVMe with the RDMA over Converged Ethernet (RoCE) network protocol.
Controller restrictions
• NVMe over RoCE can be configured for the DE6000H or DE6000F 64GB controllers. The controllers must have 100Gb host ports.
Switch restrictions
Attention: RISK OF DATA LOSS. You must enable Priority Flow Control or Global Pause Control
on the switch to eliminate the risk of data loss in an NVMe over RoCE environment.
• For a list of supported host channel adapters see the Lenovo Storage Interoperation
Center (LSIC).
• In-band CLI management via 11.50.3 SMcli is not supported in NVMe-oF modes.
Attention: RISK OF DATA LOSS. You must enable Priority Flow Control or Global Pause Control
on the switch to eliminate the risk of data loss in an NVMe over RoCE environment.
Enable Ethernet pause frame flow control end to end as the best practice configuration.
Consult your network administrator for tips on selecting the best configuration for your environment.
1. Install the rdma and nvme-cli packages:
# zypper install rdma-core
# zypper install nvme-cli
2. Set up IPv4 addresses on the Ethernet ports used to connect NVMe over RoCE. For each network interface, create a configuration script that contains the different variables for that interface.
The variables used in this step are based on server hardware and the network
environment. The variables include the IPADDR and GATEWAY. These are example
instructions for the latest SUSE Linux Enterprise Server 12 service pack:
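A minimal sketch of such an interface file; the interface name and address are placeholders, and on SLES these files live under /etc/sysconfig/network/:
# cat /etc/sysconfig/network/ifcfg-eth4
BOOTPROTO='static'
IPADDR='192.168.1.87/24'
STARTMODE='auto'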
Create the following file under /etc/modules-load.d/ to load the nvme-rdma kernel
module and make sure the kernel module will always be on, even after a reboot:
# cat /etc/modules-load.d/nvme-rdma.conf
nvme-rdma
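To make the module available immediately without a reboot, it can also be loaded by hand:
# modprobe nvme-rdma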
Attention: RISK OF DATA LOSS. You must enable Priority Flow Control or Global Pause Control
on the switch to eliminate the risk of data loss in an NVMe over RoCE environment.
• Your controller must include an NVMe over RoCE host port; otherwise, the NVMe over
RoCE settings are not available in ThinkSystem System Manager.
You can access the NVMe over RoCE configuration from the Hardware page or from Settings >
System. This task describes how to configure the ports from the Hardware page.
Note: The NVMe over RoCE settings and functions appear only if your storage array's controller
includes an NVMe over RoCE port.
1. Select Hardware.
2. Click the controller with the NVMe over RoCE port you want to configure.
The controller's context menu appears.
4. In the drop-down list, select the port you want to configure, and then click Next.
5. Select the port configuration settings you want to use, and then click Next. To see all
port settings, click the Show more port settings link on the right of the dialog box
Note: The configured NVMe over RoCE port speed should match the
speed capability of the SFP on the selected port. All ports must be set
to the same speed.
Enable IPv4 and/or Enable IPv6 Select one or both options to enable support for IPv4 and IPv6
networks.
MTU size (Available by clicking Show more port settings.)
If necessary, enter a new size in bytes for the maximum transmission unit (MTU). The default MTU size is 1500 bytes per frame. You must enter a value between 1500 and 4200.
If you selected Enable IPv4, a dialog box opens for selecting IPv4 settings after you
click Next. If you selected Enable IPv6, a dialog box opens for selecting IPv6 settings
after you click Next. If you selected both options, the dialog box for IPv4 settings opens
first, and then after you click Next, the dialog box for IPv6 settings opens.
6. Configure the IPv4 and/or IPv6 settings, either automatically or manually. To see all
port settings, click the Show more settings link on the right of the dialog box
Automatically obtain configuration from DHCP server
Select this option to obtain the configuration automatically.
Manually specify static configuration
Select this option, and then enter a static address in the fields. For IPv4, include the network subnet mask and gateway. For IPv6, include the routable IP addresses and router IP address.
Note: If there is only one routable IP address, set the remaining address to 0:0:0:0:0:0:0:0.
7. Click Finish.
1. Discover available subsystems on the NVMe-oF target for all paths using the following
command:
nvme discover -t rdma -a target_ip_address
Note: The nvme discover command discovers all controller ports in the subsystem, regardless of host access.
Connect to the discovered subsystem on the first path using the command:
nvme connect -t rdma -n discovered_sub_nqn -a target_ip_address -Q queue_depth_setting -l controller_loss_timeout_period
Important: Connections are not established for any discovered port inaccessible by the host.
Important: If you specify a port number using this command, the connection fails.
The default port is the only port set up for connections.
Important: The recommended queue depth setting is 1024. Override the default
setting of 128 with 1024 using the -Q 1024 command line option, as shown in the
following example.
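A sketch of a complete connect invocation; the subsystem NQN, target address, and controller loss timeout value shown here are placeholders:
# nvme connect -t rdma -n nqn.1992-08.com.netapp:5700.600a098000af41120000000058ed54be -a 192.168.130.101 -Q 1024 -l 3600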
[Unit]
Description=Connect NVMe-oF subsystems automatically during boot
ConditionPathExists=/etc/nvme/discovery.conf
After=network.target
Before=remote-fs-pre.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/nvme connect-all

[Install]
WantedBy=default.target
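Assuming the unit above is saved as /etc/systemd/system/nvme-connect-all.service (the file name is a placeholder), enable it for boot with:
# systemctl enable nvme-connect-all.service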
• For SLES 12 SP5 and later hosts, verify the packages are installed by running rpm -q
multipath-tools
By default, DM-MP is disabled in RHEL and SLES. Complete the following steps to enable DM-MP
components on the host.
1. Add the NVMe DE Series device entry to the devices section of the
/etc/multipath.conf file, as shown in the following example:
devices {
    device {
        vendor "NVME"
        product "NetApp E-Series"
        path_grouping_policy group_by_prio
        failback immediate
        no_path_retry 30
    }
}
2. Configure multipathd to start at system boot
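A minimal sketch, assuming a systemd-based host:
# systemctl enable multipathd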
You can configure the I/O directed to the device target based on your Linux version.
The host must have discovered the namespace. See the common storage array tasks in the chapters Creating a workload, Create volumes, Defining a host in ThinkSystem System Manager, and Mapping a volume to a host.
For SLES 12, I/O is directed to virtual device targets by the Linux host. DM-MP manages the
physical paths underlying these virtual targets. Make sure you are running I/O only to the virtual
devices created by DM-MP and not to the physical device paths. If you are running I/O to the
physical paths, DM-MP cannot manage a failover event and the I/O fails.
You can access these block devices through the dm device or the symlink in /dev/mapper, for
example:
/dev/dm-1
/dev/mapper/eui.00001bc7593b7f5f00a0980000af4462
Example
The following example output from the nvme list command shows the host node name and its
correlation with the namespace ID.
Column Description
Node The node name includes two parts:
• The notation nvme1 represents controller A and nvme2 represents controller B.
• The notation n1, n2, and so on represent the namespace identifier from the host perspective. These identifiers are repeated in the table, once for controller A and once for controller B.
Namespace The Namespace column lists the namespace ID (NSID), which is the identifier from the storage array perspective.
In the following multipath -ll output, the optimized paths are shown with a prio value of 50, while the non-optimized paths are shown with a prio value of 10.
The Linux operating system routes I/O to the path group that is shown as status=active, while the
path groups listed as status=enabled are available for failover.
eui.00001bc7593b7f5f00a0980000af4462 dm-0 NVME,Lenovo DE-Series
size=15G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- #:#:#:# nvme1n1 259:5 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- #:#:#:# nvme2n1 259:9 active ready running
policy='service-time 0' prio=50 status=active
This line and the following line show that nvme1n1, which is the namespace with an NSID of 10, is optimized on the path with a prio value of 50 and a status value of active.
policy='service-time 0' prio=10 status=enabled
Note that this path refers to nvme2, so the I/O on this path is directed to controller B.
Create filesystems
You create a file system on the namespace or native nvme device and mount the filesystem.
1. Run the multipath -ll command to get a list of /dev/mapper/dm devices:
# multipath -ll
2. The result of this command shows two devices, dm-19 and dm-16.
3. Create a file system on the partition for each /dev/mapper/dm device. The method for creating a file system varies depending on the file system chosen. In this example, we are creating an ext4 file system.
# mkdir /mnt/ext4
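Putting the step together with the mount point created above, a sketch for one of those devices; the device name dm-19 is taken from the example:
# mkfs.ext4 /dev/dm-19
# mount /dev/dm-19 /mnt/ext4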
1. On the host, copy one or more files to the mount point of the disk.
3. Run the diff command to compare the copied files to the originals.
In a direct connect topology, one or more hosts are directly connected to the subsystem. In the
ThinkSystem SAN OS 11.60.2 release, we support a single connection from each host to a
subsystem controller, as shown below. In this configuration, one HCA (host channel adapter) port
from each host should be on the same subnet as the DE Series controller port it is connected to, but
on a different subnet from the other HCA port.
An example configuration that satisfies the requirements consists of four network subnets as follows:
In a fabric topology, one or more switches are used. For a list of supported switches, go to
the Lenovo Storage Interoperation Center (LSIC) and look for proper configuration.
Host port connections Initiator NQN
Host (initiator) 1 Host
Host (initiator) 2
Mappings Host
Mappings Host Name
Host OS Type
Performing NVMe over Fibre Channel tasks
You can use NVMe with the Fibre Channel protocol.
Controller restrictions
• NVMe over Fibre Channel can be configured for the DE6000H or DE6000F 64GB controllers. The controllers must have 32Gb FC host ports.
Switch restrictions
Attention: RISK OF DATA LOSS. You must enable Priority Flow Control or Global Pause Control
on the switch to eliminate the risk of data loss in an NVMe over RoCE environment.
• The host must be running SUSE Linux Enterprise Server 12 SP5 or later. See the Lenovo Storage Interoperation Center (LSIC) for a complete list of requirements.
• For a list of supported host channel adapters see the Lenovo Storage Interoperation
Center (LSIC).
• In-band CLI management via 11.50.3 SMcli is not supported in NVMe-oF modes.
• You must have used your HBA utility to discover the WWPN of each host initiator port
and of each controller target port connected to the switch.
For details about zoning your switches, see the switch vendor's documentation.
You must zone by WWPN, not by physical port. Each initiator port must be in a separate zone with
all of its corresponding target ports.
1. Log in to the FC switch administration program, and then select the zoning configuration
option.
2. Create a new zone that includes the first host initiator port and that also includes all of
the target ports that connect to the same FC switch as the initiator.
3. Create additional zones for each FC host initiator port in the switch.
4. Save the zones, and then activate the new zoning configuration.
These are the instructions for SUSE Linux Enterprise Server 15 SP1 and 32Gb FC HBAs.
# cat /etc/modprobe.d/lpfc.conf
options lpfc lpfc_enable_fc4_type=3
4. Re-build the initrd to get the Emulex change and the boot parameter change.
# dracut --force
# reboot
The host is rebooted and the NVMe/FC initiator is enabled on the host.
Note: After completing the host-side setup, configuration of the NVMe over Fibre Channel ports occurs automatically.
The host must have discovered the namespace. See the common storage array tasks in the chapters Creating a workload, Create volumes, Defining a host in ThinkSystem System Manager, and Mapping a volume to a host.
The SMdevices tool, part of the nvme-cli package, allows you to view the volumes currently visible
on the host. This tool is an alternative to the nvme list command.
1. To view information about each NVMe path to a DE Series volume, use the nvme netapp
smdevices [-o <format>] command. The output <format> can be normal (the default if -o
is not used), column, or json.
/dev/nvme1n2, Array Name ICTM0706SYS04, Volume Name NVMe3, NSID 2, Volume ID
000015c05903e24000a0980000af4462, Controller A, Access State unknown, 2.15GB
/dev/nvme1n3, Array Name ICTM0706SYS04, Volume Name NVMe4, NSID 4, Volume ID
00001bb0593a46f400a0980000af4462, Controller A, Access State unknown, 2.15GB
/dev/nvme1n4, Array Name ICTM0706SYS04, Volume Name NVMe6, NSID 6, Volume ID
00001696593b424b00a0980000af4112, Controller A, Access State unknown, 2.15GB
/dev/nvme2n1, Array Name ICTM0706SYS04, Volume Name NVMe2, NSID 1, Volume ID
000015bd5903df4a00a0980000af4462, Controller B, Access State unknown, 2.15GB
/dev/nvme2n2, Array Name ICTM0706SYS04, Volume Name NVMe3, NSID 2, Volume ID
000015c05903e24000a0980000af4462, Controller B, Access State unknown, 2.15GB
/dev/nvme2n3, Array Name ICTM0706SYS04, Volume Name NVMe4, NSID 4, Volume ID
00001bb0593a46f400a0980000af4462, Controller B, Access State unknown, 2.15GB
/dev/nvme2n4, Array Name ICTM0706SYS04, Volume Name NVMe6, NSID 6, Volume ID
00001696593b424b00a0980000af4112, Controller B, Access State unknown, 2.15GB
For SLES 15 SP1, I/O is directed to the physical NVMe device targets by the Linux host. A native
NVMe multipathing solution manages the physical paths underlying the single apparent physical
device displayed by the host.
Note: It is best practice to use the links in /dev/disk/by-id/ rather than /dev/nvme0n1, for example:
# ls /dev/disk/by-id/ -l
lrwxrwxrwx 1 root root 13 Oct 18 15:14 nvme-eui.0000320f5cad32cf00a0980000af4112 -> ../../nvme0n1
Run I/O to the physical nvme device path. There should only be one of these devices present for
each namespace using the following format:
/dev/nvme[subsys#]n[id#]
All paths are virtualized using the native multipathing solution underneath this device. You can view
your paths by running:
# nvme list-subsys
nvme-subsys0 - NQN=nqn.1992-08.com.netapp:5700.600a098000d709d6000000005e27796e
\
+- nvme0 fc traddr=nn-0x200200a098d709d6:pn-0x204200a098d709d6 host_traddr=nn-0x200000109b211680:pn-0x100000109b211680 live
+- nvme1 fc traddr=nn-0x200200a098d709d6:pn-0x204300a098d709d6 host_traddr=nn-0x200000109b21167f:pn-0x100000109b21167f live
If you specify a namespace device when using the nvme list-subsys command, it provides additional
information about the paths to that namespace:
There are also hooks into the multipath commands to allow you to view your path information for
native failover through them as well:
# multipath -ll
eui.000007e15e903fac00a0980000d663f2 [nvme]:nvme0n1 NVMe,Lenovo DE-Series,98620002
size=207618048 features='n/a' hwhandler='ANA' wp=rw
|-+- policy='n/a' prio=n/a status=n/a
| `- 0:10:1 nvme0c10n1 0:0 n/a n/a live
`-+- policy='n/a' prio=n/a status=n/a
  `- 0:32778:1 nvme0c32778n1 0:0 n/a n/a live
Create filesystems
Create filesystems (SLES 15)
For SLES 15 SP1, you create a filesystem on the native nvme device and mount the filesystem.
1. Run the multipath -ll command to get a list of /dev/nvme devices. The result of this command shows device nvme0n1:
# multipath -ll
eui.000007e15e903fac00a0980000d663f2 [nvme]:nvme0n1 NVMe,Lenovo DE-Series,98620002
size=207618048 features='n/a' hwhandler='ANA' wp=rw
|-+- policy='n/a' prio=n/a status=n/a
| `- 0:10:1 nvme0c10n1 0:0 n/a n/a live
`-+- policy='n/a' prio=n/a status=n/a
  `- 0:32778:1 nvme0c32778n1 0:0 n/a n/a live
2. Create a file system on the partition for each /dev/nvme0n# device. The method for
creating a file system varies depending on the file system chosen. This example shows
creating an ext4 file system.
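A sketch of this step using the by-id link from the example above; the mount point is a placeholder:
# mkfs.ext4 /dev/disk/by-id/nvme-eui.000007e15e903fac00a0980000d663f2
# mkdir /mnt/ext4
# mount /dev/disk/by-id/nvme-eui.000007e15e903fac00a0980000d663f2 /mnt/ext4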
In a direct connect topology, one or more hosts are directly connected to the controller.
• Host 1 HBA Port 1 and Controller A Host port 1
In a fabric topology, one or more switches are used. For a list of supported switches, go to
the Lenovo Storage Interoperation Center (LSIC) and look for proper configuration.
Host port connections Software NQN
Host (initiator) 1 Host
Host (initiator) 2
Mappings Host
Mappings Host Name
Host OS Type
Creating a workload
You create storage by first creating a workload for a specific application type. Next, you add storage
capacity to the workload by creating volumes with similar underlying volume characteristics.
Create workloads
You can create workloads for any type of application.
A workload is a storage object that supports an application. You can define one or more workloads,
or instances, per application. For some applications, the system configures the workload to contain
volumes with similar underlying volume characteristics. These volume characteristics are optimized
based on the type of application the workload supports.
• When using other application types, you manually specify the volume configuration using the Add/Edit Volumes dialog box.
3. Use the drop-down list to select the type of application that you want to create the workload for, and then type a workload name.
4. Click Create.
You are ready to add storage capacity to the workload you created. Use the Create Volume option
to create one or more volumes for an application, and to allocate specific amounts of capacity to
each volume.
Create volumes
You create volumes to add storage capacity to an application-specific workload, and to make the
created volumes visible to a specific host or host cluster. In addition, the volume creation sequence
provides options to allocate specific amounts of capacity to each volume you want to create.
Most application types default to a user-defined volume configuration. Some application types have a
smart configuration applied at volume creation. For example, if you are creating volumes for a Microsoft Exchange application, you are asked how many mailboxes you need, what your average
mailbox capacity requirements are, and how many copies of the database you want. System
Manager uses this information to create an optimal volume configuration for you, which can be edited
as needed.
Note: If you want to mirror a volume, first create the volumes that you want to mirror, and then use
the Storage > Volumes > Copy Services > Mirror a volume asynchronously option.
• Before creating a DA-enabled volume, the host connection you are planning to use
must support DA. If any of the host connections on the controllers in your storage array
do not support DA, the associated hosts cannot access data on DA-enabled volumes.
ThinkSystem DE Series storage only supports DA between the controller and the
drives.
Keep these guidelines in mind when you assign volumes:
• A host's operating system can have specific limits on how many volumes the host can
access. Keep this limitation in mind when you create volumes for use by a particular
host.
• You can define one assignment for each volume in the storage array.
• The same logical unit number (LUN) cannot be used twice by a host or a host cluster to
access a volume. You must use a unique LUN.
• If you want to speed the process for creating volumes, you can skip the host
assignment step so that newly created volumes are initialized offline.
Note: Assigning a volume to a host will fail if you try to assign a volume to a host cluster that conflicts with an established assignment for a host in the host cluster.
3. From the drop-down list, select a specific host or host cluster to which you want to
assign volumes, or choose to assign the host or host cluster at a later time.
4. To continue the volume creation sequence for the selected host or host cluster,
click Next, and go to Step 2: Select a workload for a volume.
• When you are creating volumes using an application-specific workload, the system may
recommend an optimized volume configuration to minimize contention between
application workload I/O and other traffic from your application instance. You can
review the recommended volume configuration and edit, add, or delete the system-
recommended volumes and characteristics using the Add/Edit Volumes dialog box.
• When you are creating volumes using "Other" applications (or applications without
specific volume creation support), you manually specify the volume configuration using
the Add/Edit Volumes dialog box.
• Select the Create a new workload option to define a new workload for a
supported application or for "Other" applications.
▪ From the drop-down list, select the name of the application
you want to create the new workload for.
2. Click Next.
3. If your workload is associated with a supported application type, enter the information
requested; otherwise, go to Step 3: Add or edit volumes.
• The maximum number of volumes allowed in a pool depends on the storage system
model:
• To create a Data Assurance (DA)-enabled volume, the host connection you are
planning to use must support DA.
If you want to create a DA-enabled volume, select a pool or volume group that is DA
capable (look for Yes next to "DA" in the pool and volume group candidates table).
DA capabilities are presented at the pool and volume group level in System Manager.
DA protection checks for and corrects errors that might occur as data is transferred
through the controllers down to the drives. Selecting a DA-capable pool or volume
group for the new volume ensures that any errors are detected and corrected.
If any of the host connections on the controllers in your storage array do not support
DA, the associated hosts cannot access data on DA-enabled volumes. ThinkSystem
DE Series only supports DA between the controller and the drives.
• To create a secure-enabled volume, a security key must be created for the storage
array.
If you want to create a secure-enabled volume, select a pool or volume group that is
secure capable (look for Yes next to "Secure-capable" in the pool and volume group
candidates table).
Drive security capabilities are presented at the pool and volume group level in System
Manager. Secure-capable drives prevent unauthorized access to the data on a drive
that is physically removed from the storage array. A secure-enabled drive encrypts data
during writes and decrypts data during reads using a unique encryption key.
You create volumes from pools or volume groups. The Add/Edit Volumes dialog box shows all
eligible pools and volume groups on the storage array. For each eligible pool and volume group, the
number of drives available and the total free capacity appears.
For some application-specific workloads, each eligible pool or volume group shows the proposed
capacity based on the suggested volume configuration and shows the remaining free capacity in
GiB. For other workloads, the proposed capacity appears as you add volumes to a pool or volume
group and specify the reported capacity.
1. Choose one of these actions based on whether you selected Other or an application-
specific workload:
• Other – Click Add new volume in each pool or volume group that you
want to use to create one or more volumes.
Table 1. Field Details
Field Description
Reported Capacity
Define the capacity of the new volume and the capacity units to use (MiB, GiB, or TiB). For Thick volumes, the minimum capacity is 1 MiB, and the maximum capacity is determined by the number and capacity of the drives in the pool or volume group.
Segment Size
Shows the setting for segment sizing, which only appears for volumes in a volume group. You can change the segment size to optimize performance.
Allowed segment size transitions – System
Manager determines the segment size transitions that are
allowed. Segment sizes that are inappropriate transitions from
the current segment size are unavailable on the drop-down
list. Allowed transitions usually are double or half of the current
segment size. For example, if the current volume segment size
is 32 KiB, a new volume segment size of either 16 KiB or 64
KiB is allowed.
Data Assurance
Shows the setting for Data Assurance (DA). DA checks for and corrects errors that might occur when data is moved between the controllers and drives on a storage array.
Field Details
Field Description
Reported Capacity
Define the capacity of the new volume and the capacity units to use (MiB, GiB, or TiB). For Thick volumes, the minimum capacity is 1 MiB, and the maximum capacity is determined by the number and capacity of the drives in the pool or volume group.
Volume Type
Volume type indicates the type of volume that was created for an application-specific workload.
Segment Size
Shows the setting for segment sizing, which only appears for volumes in a volume group. You can change the segment size to optimize performance.
SSD Cache-enabled volumes – You can specify a 4-KiB
segment size for SSD Cache-enabled volumes. Make sure
you select the 4-KiB segment size only for SSD Cache-
enabled volumes that handle small-block I/O operations (for
example, 16 KiB I/O block sizes or smaller). Performance
might be impacted if you select 4 KiB as the segment size for
SSD Cache-enabled volumes that handle large block
sequential operations.
2. To continue the volume creation sequence for the selected application, click Next, and
go to Step 4: Review volume configuration.
2. When you are satisfied with your volume configuration, click Finish.
System Manager creates the new volumes in the selected pools and volume groups, and then
displays the new volumes in the All Volumes table.
• Perform any operating system modifications necessary on the application host so that
the applications can use the volume.
The hot_add utility and the SMdevices utility are included as part of
the SMutils package. The SMutils package is a collection of utilities to verify what the
host sees from the storage array. It is included as part of the Storage Manager software
installation.
Defining a host in ThinkSystem System Manager
You can create a host automatically or manually. To make it easier to give multiple hosts access to
the same volumes, you can also create a host cluster.
The Host Context Agent (HCA) is installed and running on every host connected to the storage array.
Hosts with the HCA installed and connected to the storage array are created automatically. To install
the HCA, install ThinkSystem Storage Manager on the host and select the Host option. The HCA is
not available on all supported operating systems. If it is not available, you must create the host
manually.
2. Verify that the information provided by the HCA is correct (name, host type, host port
identifiers).
If you need to change any of the information, select the host, and then click View/Edit Settings.
After a host is created automatically, the system displays the following items in the Hosts tile table:
• The host name derived from the system name of the host.
• The host identifier ports that are associated with the host.
When you create a host manually, keep the following guidelines in mind:
• You must define the host identifier ports that are associated with the host.
• Make sure that you provide the same name as the host's assigned system name.
• This operation does not succeed if the name you choose is already in use.
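To define a host manually, you need its host port identifiers. As a sketch, on a Linux host they can typically be read as follows (the iSCSI path assumes the open-iscsi initiator; FC uses the standard sysfs layout):

   # iSCSI: show the initiator IQN used as the host port identifier
   cat /etc/iscsi/initiatorname.iscsi
   # FC: show the WWPNs of the host's FC ports
   cat /sys/class/fc_host/host*/port_name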
CHAP – (Optional) If you selected or manually entered a host port with an iSCSI initiator IQN, and if you want to require a host that tries to access the storage array to authenticate using Challenge Handshake Authentication Protocol (CHAP), select the CHAP initiator checkbox. For each iSCSI host port you selected or manually entered, do the following:
• Enter the same CHAP secret that was set on each iSCSI host initiator for CHAP authentication. If you are using mutual CHAP authentication (two-way authentication that enables a host to validate itself to the storage array and for a storage array to validate itself to the host), you also must set the CHAP secret for the storage array at initial setup or by changing settings.
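On a Linux host that uses the open-iscsi initiator, the host-side CHAP secret is typically configured in /etc/iscsi/iscsid.conf. The following excerpt is a sketch with placeholder names and secrets; substitute the values that match what you set in System Manager:

   # /etc/iscsi/iscsid.conf -- excerpt (placeholder values)
   node.session.auth.authmethod = CHAP
   node.session.auth.username = iqn.1994-05.com.redhat:examplehost
   node.session.auth.password = exampleChapSecret12
   # For mutual (two-way) CHAP, also set the credentials the storage
   # array uses to authenticate itself to the host:
   node.session.auth.username_in = exampleArrayName
   node.session.auth.password_in = exampleArraySecret12

Restart the iscsid service (for example, systemctl restart iscsid) so that new sessions pick up the changes.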
4. Click Create.
After the host is successfully created, the system creates a default name for each host port
configured for the host (user label).
The default alias is <Hostname_PortNumber>. For example, the default alias for the first port created for host IPT is IPT_1.
• This operation does not start unless there are two or more hosts available to create the
cluster.
• This operation does not succeed if the name you choose is already in use.
4. Click Create.
If the selected hosts are attached to interface types that have different Data Assurance
(DA) capabilities, a dialog appears with the message that DA will be unavailable on the
host cluster. This unavailability prevents DA-enabled volumes from being added to the
host cluster. Select Yes to continue or No to cancel.
DA increases data integrity across the entire storage system. DA enables the storage
array to check for errors that might occur when data is moved between the controllers
and the drives. Using DA for the new volume ensures that any errors are detected.
The new host cluster appears in the table with the assigned hosts in the rows beneath.
Mapping a volume to a host
For a host or host cluster to send I/O to a volume, you must assign the volume to the host or host
cluster.
You can select a host or host cluster when you create a volume or you can assign a volume to a host
or host cluster later. A host cluster is a group of hosts. You create a host cluster to make it easy to
assign the same volumes to multiple hosts.
Assigning volumes to hosts is flexible, allowing you to meet your particular storage needs.
• Stand-alone host, not part of a host cluster – You can assign a volume to an
individual host. The volume can be accessed only by the one host.
• Host cluster – You can assign a volume to a host cluster. The volume can be
accessed by all the hosts in the host cluster.
• Host within a host cluster – You can assign a volume to an individual host that is part
of a host cluster. Even though the host is part of a host cluster, the volume can be
accessed only by the individual host and not by any other hosts in the host cluster.
When volumes are created, logical unit numbers (LUNs) are assigned automatically. The LUN
serves as the "address" between the host and the controller during I/O operations. You can change
LUNs after the volume is created.
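On the Linux host, the assigned LUN appears as the last field of the [host:channel:target:lun] tuple that tools such as lsscsi report. A sketch (device names, vendor strings, and numbers are illustrative):

   # List SCSI devices; the fourth field of [H:C:T:L] is the LUN
   lsscsi
   # Example output (illustrative):
   # [3:0:0:1]  disk  LENOVO  DE-Series  ...  /dev/sdb   <- LUN 1
   # [3:0:0:2]  disk  LENOVO  DE-Series  ...  /dev/sdc   <- LUN 2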
Discovering, Configuring, and Verifying storage on the
host
Volumes on your storage system appear as disk LUNs to the Linux host when you use FC, iSCSI, or SAS, and as NVMe namespaces when you use NVMe over RoCE or NVMe over Fibre Channel. When you add new volumes, you must manually rescan the associated LUNs or namespaces to discover them; the host does not automatically discover new storage space. The discovery procedures are protocol-specific; see the related chapters in the previous sections.
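As a sketch, the following commands are commonly used to trigger such a rescan; the exact procedure for your protocol is covered in the earlier chapters, and host numbers vary by system:

   # FC, iSCSI, or SAS: rescan all SCSI hosts for new LUNs (sg3_utils)
   rescan-scsi-bus.sh
   # Or scan a single HBA through sysfs (replace host0 as appropriate)
   echo "- - -" > /sys/class/scsi_host/host0/scan
   # NVMe over RoCE / NVMe over Fibre Channel: list visible namespaces (nvme-cli)
   nvme list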
Where to find additional information
Use the resources listed here if you need additional information.
Contacting Support
You can contact Support to obtain help for your issue.
You can receive hardware service through a Lenovo Authorized Service Provider. To locate a service provider authorized by Lenovo to provide warranty service, go to https://datacentersupport.lenovo.com/serviceprovider and use the filter to search by country. For Lenovo support telephone numbers in your region, see https://datacentersupport.lenovo.com/supportphonelist.
Notices
Lenovo may not offer the products, services, or features discussed in this document in all countries.
Consult your local Lenovo representative for information on the products and services currently
available in your area.
Any reference to a Lenovo product, program, or service is not intended to state or imply that only
that Lenovo product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any Lenovo intellectual property right may be used
instead. However, it is the user's responsibility to evaluate and verify the operation of any other
product, program, or service.
Lenovo may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document is not an offer and does not provide a license under
any patents or patent applications. You can send inquiries in writing to the following:
Lenovo (United States), Inc.
8001 Development Drive
Morrisville, NC 27560
U.S.A.
Attention: Lenovo Director of Licensing
LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are
periodically made to the information herein; these changes will be incorporated in new editions of
the publication. Lenovo may make improvements and/or changes in the product(s) and/or the
program(s) described in this publication at any time without notice.
The products described in this document are not intended for use in implantation or other life support
applications where malfunction may result in injury or death to persons. The information contained
in this document does not affect or change Lenovo product specifications or warranties. Nothing in
this document shall operate as an express or implied license or indemnity under the intellectual
property rights of Lenovo or third parties. All information contained in this document was obtained
in specific environments and is presented as an illustration. The result obtained in other operating
environments may vary.
Lenovo may use or distribute any of the information you supply in any way it believes appropriate
without incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and
do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites
are not part of the materials for this Lenovo product, and use of those Web sites is at your own risk.
Any performance data contained herein was determined in a controlled environment. Therefore,
the results obtained in other operating environments may vary significantly. Some measurements
may have been made on development-level systems and there is no guarantee that these
measurements will be the same on generally available systems. Furthermore, some measurements
may have been estimated through extrapolation. Actual results may vary. Users of this document
should verify the applicable data for their specific environment.
Trademarks
LENOVO, LENOVO logo, and THINKSYSTEM are trademarks of Lenovo. All other
trademarks are the property of their respective owners. © 2021 Lenovo