
Linux

Generated on: 13 January 2025

Contents
Linux
Installing and Configuring
    Configuring Linux Host for iSCSI with FlashArray
        Linux Host Configuration
        Prepare the FlashArray with the Host, Volume, and Host IQN
        Mount Volume and Provision Filesystem
        Create Additional Interfaces (Optional)
        Helpful Links
Linux Reference
    Linux Recommended Settings
        Queue Settings
        Regarding Large I/O Size Requests and Buffer Exhaustion
        Maximum IO Size Settings
            Verify the Current Setting
            Changing the Maximum Value
        Recommended DM-Multipath Settings
        Space Reclamation
        ActiveCluster
        ActiveDR
        SCSI Unit Attentions
            Boot from SAN: Rebuilding the Initial Ramdisk
    RapidFile Toolkit v2.1 for FlashBlade
        License
        Support
        Contact Us
Third Party Documentation
    Third Party Documentation

Linux
This Linux documentation provides detailed guidance on managing and configuring Linux systems. It
includes instructions for gathering logs, installing and configuring multipath and iSCSI, and ensuring
persistent connections. The document covers various setup procedures, best practices, and
troubleshooting steps, making it a valuable resource for system administrators working with Linux
environments and Pure Storage solutions.


Gathering Logs for Troubleshooting

The following are some quick references on how you can gather logs for troubleshooting on Linux
systems:

• OEL (Oracle Enterprise Linux)

• RHEL (Red Hat Enterprise Linux)

Installing and Configuring

Configuring Linux Host for iSCSI with FlashArray


This document covers configuration and best practices for setting up iSCSI on a Linux host with FlashArray. The examples use Red Hat Enterprise Linux 6, but this procedure has also been tested on Ubuntu and works on SUSE/SLES systems as well. The following steps use commands with example IP addresses and IQNs; when running the commands, replace them with the values from your own environment.

Linux Host Configuration

1. Make sure that you are following Linux Recommended Settings before proceeding.

Note: If multiple interfaces exist on the same subnet in RHEL, your iSCSI initiator may fail to connect to the Pure Storage target. In this case, you need to set the sysctl net.ipv4.conf.all.arp_ignore to 1 to force each interface to answer ARP requests only for its own addresses. Please see the RHEL KB for issue details and resolution steps (requires Red Hat login).
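A minimal sketch of applying and persisting this setting (assuming /etc/sysctl.d is available; the file name 99-iscsi-arp.conf is illustrative):

# sysctl -w net.ipv4.conf.all.arp_ignore=1
# echo 'net.ipv4.conf.all.arp_ignore = 1' > /etc/sysctl.d/99-iscsi-arp.conf
# sysctl -p /etc/sysctl.d/99-iscsi-arp.conf

The first command applies the setting immediately; the file makes it persist across reboots.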

2. Install the iscsi-initiator-utils package as root user:

$ sudo su
# yum install iscsi-initiator-utils

3. Start the iscsi service and enable it to start when the system boots:

For RHEL6:

# service iscsi start


# chkconfig iscsi on

For RHEL7:

# systemctl start iscsid.socket


# systemctl enable iscsi

iscsid.socket starts iscsid.service on demand if it is stopped. At this stage, the status of the iscsi service (service iscsi status) might be shown as active or started; the service fully starts after the discovery command is run.

4. Before setting up DM Multipath on your system, ensure that your system has been updated and
includes the device-mapper-multipath package:

# yum install device-mapper-multipath device-mapper-multipath-libs

5. Enable default multipath configuration file and start the multipath daemon:

# mpathconf --enable --with_multipathd y

6. Edit the multipath.conf file with the Pure Storage recommended multipath configuration (see Recommended DM-Multipath Settings in the Linux Reference section below):

# vi /etc/multipath.conf

See the RHEL documentation for /etc/multipath.conf attribute descriptions.

7. Restart the multipathd service for the multipath.conf changes to take effect.

# service multipathd restart

Prepare the FlashArray with the Host, Volume, and Host IQN

1. On the Linux host, collect the IQN:

# cat /etc/iscsi/initiatorname.iscsi
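Hypothetical example output (your IQN will differ):

InitiatorName=iqn.1994-05.com.redhat:a13bc823f6c6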

2. On FlashArray, create a host:

purehost create <Linux hostname>

where

<Linux hostname> is the desired hostname.

3. Configure FlashArray host with IQN:

purehost setattr --addiqnlist <IQN number> <Linux hostname>

where

<IQN number> is the initiator IQN number gathered in step 1.

<Linux hostname> is the hostname created in step 2.

4. On the FlashArray, create a volume:

purevol create <volume name> --size <size>

where

<volume name> is the desired volume name.

<size> is the desired volume size (GB or TB suffix).

5. Connect the host to volume:

purevol connect <volume name> --host <host name>

where

<volume name> is the name of the volume.

<host name> is the name of the host.

6. On the FlashArray, collect iSCSI interface IPs:

pureport list

7. On Linux Host, discover the target iSCSI portals:

# iscsiadm -m discovery -t st -p <FlashArray iSCSI IP>:3260

where

<FlashArray iSCSI IP> is an iSCSI interface IP address collected in step 6.
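Hypothetical example output, in the form <portal IP>:<port>,<target portal group tag> <target IQN>:

10.124.3.159:3260,1 iqn.2010-06.com.purestorage:flasharray.38e69528198fee76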

8. From your Linux Host, log in to the FlashArray iSCSI target portals on both controllers:

# iscsiadm -m node -p <FlashArray iSCSI IP CT0> --login
# iscsiadm -m node -p <FlashArray iSCSI IP CT1> --login

where

<FlashArray iSCSI IP CT0> is the iSCSI interface IP address of controller 0 collected in step 6.

<FlashArray iSCSI IP CT1> is the iSCSI interface IP address of controller 1 collected in step 6.

9. Add automatic iSCSI login on boot:

# iscsiadm -m node -L automatic

10. Confirm the FlashArray volume has multiple paths with multipath -ll. A multipathed volume is represented by a device-mapped ID (the 3624a9370... WWID lines in the example below):

# multipath -ll
3624a93702b60622e2b014a2200011011 dm-1 PURE ,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 2:0:0:2 sdb 8:16 active ready running
| |- 3:0:0:2 sdf 8:80 active ready running
| |- 4:0:0:2 sdl 8:176 active ready running
| `- 5:0:0:2 sdk 8:160 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 6:0:0:2 sdd 8:48 active ready running
|- 7:0:0:2 sdh 8:112 active ready running
|- 8:0:0:2 sdp 8:240 active ready running
`- 9:0:0:2 sdo 8:224 active ready running
3624a93702b60622e2b014a2200011010 dm-0 PURE ,FlashArray
size=2.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| |- 2:0:0:1 sda 8:0 active ready running
| |- 3:0:0:1 sde 8:64 active ready running
| |- 4:0:0:1 sdj 8:144 active ready running
| `- 5:0:0:1 sdi 8:128 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
|- 6:0:0:1 sdc 8:32 active ready running
|- 7:0:0:1 sdg 8:96 active ready running

|- 8:0:0:1 sdn 8:208 active ready running
`- 9:0:0:1 sdm 8:192 active ready running

Mount Volume and Provision Filesystem

1. Create a mount point on the Linux host.

# mkdir /mnt/store0

2. Provision filesystem on the PURE dm device using the device-mapped ID.

# mkfs.ext4 /dev/mapper/<device-mapped ID>

where

<device-mapped ID> is the device-mapped ID from step 10.

To enable automatic unmap for our thin-provisioning array, mount the filesystem with the discard option:

# mount -o discard /dev/mapper/<device-mapped ID> <mount point>

Note:

This causes RHEL 6.x to issue the UNMAP command, which in turn causes space to be released back to the array for any deletions in that ext4 file system. This only works on physical RDM datastores; discard will not work on a disk mapped virtually via ESX.

3. Mount PURE dm device to mount point:

# mount /dev/mapper/<device-mapped ID> <mount point>

where

<device-mapped ID> is the device-mapped ID collected from step 10.

<mount point> is the mount point created in step 1.

or, if you have added an /etc/fstab entry:

# mount -a

or, if you require the partition to be mounted read-only:

# mount -o ro /dev/mapper/<device-mapped ID> <mount point>

Verify the partition is mounted (this will also list the options for the mounted partition, e.g. "/dev/sdb5 on /data type ext4 (rw,_netdev)"):

# mount

Confirm that the /mnt/store0 mount point is connected to the partition:

# df -h /mnt/store0

Note:

To make iSCSI device mount persistent across reboots, you will need to add an entry in /etc/fstab
following RHEL KB.
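A hedged sketch of such an /etc/fstab entry (the WWID is the device-mapped ID from step 10; _netdev defers mounting until the network and iSCSI sessions are up, and discard enables automatic space reclamation):

/dev/mapper/3624a93702b60622e2b014a2200011011  /mnt/store0  ext4  _netdev,discard  0  0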

Create Additional Interfaces (Optional)

The Open-iSCSI initiator utility (iscsiadm) provides a feature to create multiple interfaces:

# iscsiadm -m iface -I <iface name> -o new

Running the node command without the login option (-l) displays information about the iSCSI target:

# iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.38e69528198fee76 -p 10.124.3.159
# iscsiadm -m node -T iqn.2010-06.com.purestorage:flasharray.38e69528198fee76 -p 10.124.3.158

Now update the newly created interface with a unique initiator name:

# iscsiadm -m iface -I <iface name> -o update -n iface.initiatorname -v <initiator name>

Rediscover paths from the new interface:

# iscsiadm -m discovery -t st -p 10.124.3.159:3260

Log in to the target IP with this newly created interface:

# iscsiadm -m node -p <FlashArray iSCSI IP CT0> --login

To verify the existing iSCSI sessions:

# iscsiadm -m session

You can use -P 0|1|2 for more verbose session output, such as initiator-to-target IP mapping, session timeouts, etc.
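When creating additional interfaces, it is common to bind each one to a physical NIC. A hedged sketch (the interface name iscsi-eth1 and NIC eth1 are illustrative):

# iscsiadm -m iface -I iscsi-eth1 -o new
# iscsiadm -m iface -I iscsi-eth1 -o update -n iface.net_ifacename -v eth1
# iscsiadm -m discovery -t st -p 10.124.3.159:3260

The first two commands create the interface and bind it to the NIC; rediscovery then creates paths through it.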

Helpful Links

• iscsiadm does not allow the discovery of the same LUN with a second NIC.

• Why does Red Hat Enterprise Linux 6 and above invalidate/discard packets when the route for
outbound traffic differs from the route of incoming traffic?

Linux Reference

Linux Recommended Settings


To ensure the best performance with the Pure Storage FlashArray, please use this guide for the
configuration and implementation of Linux hosts in your environment. These recommendations apply to
the versions of Linux that we have certified as per our FlashArray Compatibility Matrix.

Important: Due to a change in path priority detection in versions 0.6.2 through 0.9.7 of multipath-tools (or device-mapper-multipath), customers not upgrading to at least 6.5.5 or 6.6.4 must add the statement detect_prio "no" to their multipath.conf. Otherwise, the default configuration will try to override the 'alua' prioritizer and replace it with 'sysfs', despite 'alua' being specified in multipath.conf. Failure to do so will leave half of the paths in the ALUA state Active/Non-optimized after the upgrade is completed.

Queue Settings

We recommend two changes to the queue settings. The first selects the 'noop' I/O scheduler, which has
been shown to get better performance with lower CPU overhead than the default schedulers (usually
'deadline' or 'cfq'). The second change eliminates the collection of entropy for the kernel random number
generator, which has high CPU overhead when enabled for devices supporting high IOPS.

Manually Changing Queue Settings

Not required unless LUNs are already in use with wrong settings.

These settings can be safely changed on a running system, by locating the Pure LUNs:

grep PURE /sys/block/sd*/device/vendor

And writing the desired values into sysfs files:

echo noop > /sys/block/sdx/queue/scheduler

An example for loop is shown here to quickly set all Pure LUNs to the desired 'noop' elevator:

for disk in $(lsscsi | grep PURE | awk '{print $6}'); do
    echo noop > /sys/block/${disk##/dev/}/queue/scheduler
done

All changes in this section take effect immediately, without rebooting, on RHEL 5 and higher. RHEL 4 releases require a reboot. These changes will not persist across reboots unless they are added to the udev rules.

Notice that the active scheduler is shown in brackets, [noop] here designating it as the selected scheduler:

[robm@robm-rhel7 ~]$ cat /sys/block/sdb/queue/scheduler
[noop] deadline cfq

Applying Queue Settings with udev

Once the I/O scheduler elevator has been set to 'noop', it is often desirable to keep the setting persistent across reboots.

Step 1: Create the Rules File

Create a new file in the following location (for each respective OS). The Linux OS will use the udev rules to
set the elevators after each reboot.

RHEL

/etc/udev/rules.d/99-pure-storage.rules

Ubuntu

/lib/udev/rules.d/99-pure-storage.rules

Step 2: Add the Following Entries to the Rules File (Version Dependent)

The following entries automatically set the elevator to 'noop' each time the system is rebooted. Create a
file that has the following entries, ensuring each entry exists on one line with no carriage returns:

Note that in RHEL 8.x ‘noop’ no longer exists and has been replaced by ‘none’.

RHEL 8.x and SuSE 15.2 and higher

# Recommended settings for Pure Storage FlashArray.

# Use none scheduler for high-performance solid-state storage for SCSI devices
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/scheduler}="none"

# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"
ACTION=="add|change", KERNEL=="dm-[0-9]*", SUBSYSTEM=="block", ENV{DM_NAME}=="3624a937*", ATTR{queue/rq_affinity}="2"

# Set the HBA timeout to 60 seconds
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{device/timeout}="60"

RHEL 6.x, 7.x

# Recommended settings for Pure Storage FlashArray.

# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"

# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"

# Set the HBA timeout to 60 seconds
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{model}=="FlashArray      ", RUN+="/bin/sh -c 'echo 60 > /sys/$DEVPATH/device/timeout'"

Note:

Please note that 6 spaces are needed after "FlashArray" under "Set the HBA timeout to 60
seconds" above for the rule to take effect.

RHEL 5.x

# Recommended settings for Pure Storage FlashArray.

# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", KERNEL=="sd*[!0-9]|", SYSFS{vendor}=="PURE*", RUN+="/bin/sh -c 'echo noop > /sys/$devpath/queue/scheduler'"

Note:

It is expected behavior that you only see the settings take effect on the sd* devices. The dm-* devices will not reflect the change directly but will inherit it from the sd* devices that make up their paths.
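To confirm that a rule matched a given device, one quick check (sdb is an assumed device name):

# udevadm info --query=property --name=/dev/sdb | grep ID_VENDOR
# cat /sys/block/sdb/queue/scheduler

ID_VENDOR should report PURE, and the scheduler output should show the configured elevator in brackets.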

Regarding Large I/O Size Requests and Buffer Exhaustion

See KB FlashArray: Large I/O Size Requests and I/O Buffers

Maximum IO Size Settings

The maximum allowed size of an I/O request, in kilobytes, is determined by the max_sectors_kb setting in sysfs. This restricts the largest I/O size that the OS will issue to a block device. The Pure Storage FlashArray can handle a maximum of 4 MB writes. Therefore, we need to make sure that the maximum allowed I/O size matches our expectations. You can check your current settings to determine what the I/O size is, and as long as it does not exceed 4096, you should be fine.

Note:

In some cases, the Maximum IO Size Setting is not honored, and the host generates writes over the 4 MB maximum. If you see the following errors, the I/O size might be the problem:

end_request: critical target error, dev dm-14, sector 158686242
Buffer I/O error on device dm-15, logical block 19835776
lost page write due to I/O error on dm-15

Though the Pure Storage FlashArray is designed to service IO with consistently low latency, there are error conditions that can cause much longer latencies, and it is therefore important to ensure dependent servers and applications are tuned appropriately to ride out these error conditions without issue. By design, given the worst-case recoverable error condition, the FlashArray will take up to 60 seconds to service an individual IO.

To accommodate this, set the SCSI device timeout to 60 seconds. For versions below RHEL 6, you can add the following command(s) to rc.local:

echo 60 > /sys/block/<Dev_name>/device/timeout

The default timeout for normal file system commands is 60 seconds when udev is being used. If udev is
not in use, the default timeout is 30 seconds. If you are running RHEL 6+, and want to ensure the rules
persist, then use the udev method.
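A hedged sketch of applying the timeout to all Pure LUNs from rc.local, mirroring the scheduler loop earlier in this guide (assumes lsscsi is installed):

for disk in $(lsscsi | grep PURE | awk '{print $6}'); do
    echo 60 > /sys/block/${disk##/dev/}/device/timeout
done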

Verify the Current Setting

1. To check all of the block devices and see whether any of them exceed 4096, use the following grep:

[robm@robm-rhel7 ~]$ grep [0-9] /sys/block/sd*/queue/max_sectors_kb


/sys/block/sda/queue/max_sectors_kb:512
/sys/block/sdb/queue/max_sectors_kb:4096
/sys/block/sdc/queue/max_sectors_kb:4096
/sys/block/sdd/queue/max_sectors_kb:4096
/sys/block/sde/queue/max_sectors_kb:4096
/sys/block/sdf/queue/max_sectors_kb:4096
/sys/block/sdg/queue/max_sectors_kb:4096
/sys/block/sdh/queue/max_sectors_kb:4096
/sys/block/sdi/queue/max_sectors_kb:4096

2. Next, run the following command, find any block device sd* that is set incorrectly, and verify that it is on a FlashArray volume.

[root@robm-rhel7 robm]# multipath -ll


Feb 19 13:13:34 | /etc/multipath.conf line 97, duplicate keyword: defaults
3624a93702bcee22e6c784a32000b9b28 dm-9 PURE ,FlashArray
size=600G features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=50 status=active
|- 6:0:10:3 sdj 8:144 active ready running
|- 6:0:13:3 sdk 8:160 active ready running
|- 7:0:8:3 sdl 8:176 active ready running
`- 7:0:9:3 sdm 8:192 active ready running
3624a93702bcee22e6c784a32000b99d3 dm-3 PURE ,FlashArray
size=350G features='0' hwhandler='0' wp=rw

`-+- policy='queue-length 0' prio=50 status=active
|- 6:0:10:1 sdf 8:80 active ready running
|- 6:0:13:1 sdh 8:112 active ready running
|- 7:0:8:1 sdb 8:16 active ready running
`- 7:0:9:1 sdd 8:48 active ready running

If the value is ≤ 4096, then no action is necessary. However, if this value is > 4096, we
recommend that you change the max to 4096.

Changing the Maximum Value

Reboot Persistent

We recommend that you add the value to your UDEV rules file (99-pure-storage.rules) created above. This
ensures that the setting persists through a reboot. To change that value please do the following:

1. Add this line to your 99-pure-storage.rules file:

ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/max_sectors_kb}="4096"

You can use this command to add it:

echo 'ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/max_sectors_kb}="4096"' >> /etc/udev/rules.d/99-pure-storage.rules

Note:
The location of your rules file may be different depending on your OS version, so please
double check the command before running it.

2. Reboot the host.

3. Check the value again.


Immediate Change but Won't Persist Through Reboot

Note:
This command should only be run if you are sure there are no running services
depending on that volume, otherwise you can risk an application crash.

If you need to make the change immediately, but cannot wait for a maintenance window to
reboot, you can also change the setting with the following command:

echo # > /sys/block/sdz/queue/max_sectors_kb

Substitute # with a number equal to or less than 4096.

Recommended DM-Multipath Settings

Please keep in mind that the running multipath configuration, including built-in defaults, is displayed by multipathd show config (or multipath -t). This is where you verify the dm-multipath config; if you only read the default /etc/multipath.conf file (created when the "mpathconf --enable --with_multipathd y" command was run) in RHEL 7.3+, you will not see what the host is running by default.

Sample multipath.conf

The following multipath.conf file has been tested with recent versions of RHEL 8. It provides settings
for volumes on FlashArray exposed via either SCSI or NVMe. Prior to use, verify the configuration with
multipath -t. Some settings may be incompatible with older distributions; we list some known
incompatibilities and workarounds below.

defaults {
    polling_interval 10
}

devices {
    device {
        vendor "NVME"
        product "Pure Storage FlashArray"
        path_selector "queue-length 0"
        path_grouping_policy group_by_prio
        prio ana
        failback immediate
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 60
    }
    device {
        vendor "PURE"
        product "FlashArray"
        path_selector "service-time 0"
        hardware_handler "1 alua"
        path_grouping_policy group_by_prio
        prio alua
        failback immediate
        path_checker tur
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 600
    }
}
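After editing the file and restarting multipathd, one way to confirm the settings were merged into the running configuration (assuming multipathd is running):

# multipathd show config | grep -A 15 '"PURE"'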

Setting Compatibility Notes

• Path selectors: as listed in the sample above, Pure recommends the use of queue-length 0
with NVMe and service-time 0 with SCSI, which improve performance in situations where
paths have differing latencies by biasing I/Os towards paths that are servicing I/O more quickly.
Older kernels (before RHEL 6.2/before SUSE 12) may not support these path selectors and should
specify path_selector "round-robin 0" instead.

• Path prioritizers (ALUA for SCSI, and ANA for NVMe) and failback immediate must be
enabled on hosts connected to arrays configured in an ActiveCluster.
◦ The ANA path prioritizer for NVMe is a relatively recent feature (RHEL 8), and older
distributions may not support it. In non-ActiveCluster configurations, it can be safely
disabled by removing the line prio ana and replacing path_grouping_policy
group_by_prio with path_grouping_policy multibus.

• Please note that fast_io_fail_tmo and dev_loss_tmo do not apply to iSCSI.

• Please note that the above settings can differ based on use case. For example, if the RHEL OpenStack Cinder driver is configured, the settings can differ; so before making recommendations, ask the customer whether they have anything specific configured or are running a standard Linux host.

• If multipath nodes are not showing up on the host after a rescan, you may need to add find_multipaths yes to the defaults section above. This is the case for some hosts that boot off a local non-multipath disk.

• As per https://access.redhat.com/solutions/3234761, RHV-H multipath configuration must include user_friendly_names no.

• As per https://access.redhat.com/site/solutions/110553, running DM-multipath along with EMC PowerPath is not a supported configuration and may result in kernel panics on the host.

• Consult man 5 multipath.conf and/or the RHEL Documentation before making modifications
to the configuration.

Verifying DM-Multipathd Configuration

After creating and connecting some volumes on the FlashArray to the host, run multipath -ll to check
the configuration. The below output was obtained by creating two volumes and connecting the first to the
host via NVMe, and the second through SCSI.

[root@init116-13 ~]# multipath -ll


eui.00292fd80c2afd4724a9373400011425 dm-4 NVME,Pure Storage FlashArray
size=2.0T features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=50 status=active
|- 2:2:1:70693 nvme2n1 259:0 active ready running
|- 3:1:1:70693 nvme3n1 259:1 active ready running
|- 6:0:1:70693 nvme6n1 259:2 active ready running
`- 4:3:1:70693 nvme4n1 259:3 active ready running
3624a9370292fd80c2afd473400011424 dm-3 PURE,FlashArray
size=1.0T features='0' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
|- 6:0:2:1 sdd 8:48 active ready running
|- 6:0:3:1 sdf 8:80 active ready running
|- 7:0:3:1 sdl 8:176 active ready running
`- 7:0:2:1 sdj 8:144 active ready running

Note the policy='queue-length 0' and policy='service-time 0' which indicate the active path
selection policies. These should match the path selection policy settings from the configuration file.

To check if path prioritizers are working correctly in an ActiveCluster environment, create a stretched
volume and set a preferred array for the host as described in ActiveCluster: Optimizing Host Performance
with Array Preferences. The output of multipath -ll should then look similar to the following example.

# multipath -ll
3624a9370292fd80c2afd473400011426 dm-2 PURE,FlashArray
size=3.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 6:0:2:2 sde 8:64 active ready running
| |- 6:0:3:2 sdg 8:96 active ready running
| |- 7:0:2:2 sdk 8:160 active ready running
| `- 7:0:3:2 sdm 8:192 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 6:0:0:1 sdb 8:16 active ready running

|- 6:0:1:1 sdc 8:32 active ready running
|- 7:0:0:1 sdh 8:112 active ready running
`- 7:0:1:1 sdi 8:128 active ready running

Notice the two distinct groups of paths. The paths to the preferred array (SCSI target numbers 2 and 3)
have priority 50, while the paths to the non-preferred array (SCSI target numbers 0 and 1) have priority 10.

Excluding Third-Party Vendor LUNs from DM-Multipathd

Note:

There are no certification requirements for storage hardware systems with Oracle Linux KVM.
Oracle Linux KVM uses kernel interfaces to communicate with storage hardware systems, and
does not depend on an application programming interface (API).

When systems have co-existing multipathing software, it is a good practice to exclude control from one
multipathing software in order to allow control by another multipathing software.

The following is an example of using DM-Multipathd to blacklist LUNs from a third-party vendor. The syntax blocks DM-Multipathd from controlling the LUNs that are "blacklisted".

The following can be added to the 'blacklist' section of the multipath.conf file.

blacklist {
    device {
        vendor "XYZ.*"
        product ".*"
    }
    device {
        vendor "ABC.*"
        product ".*"
    }
}

Space Reclamation

You will want to make sure that space reclamation is configured on your Linux Host so that you do not run
out of space. For more information please see this KB: SCSI UNMAP.
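A hedged sketch of reclaiming space manually, or on a schedule where the distribution ships the util-linux fstrim timer (the mount point is from the example earlier in this guide):

# fstrim -v /mnt/store0
# systemctl enable --now fstrim.timer

fstrim issues UNMAP for free blocks on the mounted filesystem; the timer automates this periodically.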

ActiveCluster

Conditional recommendations for Linux servers:

• If ActiveCluster does not influence multipath.conf on Linux (because the Linux hosts are not configured in ActiveCluster), keep the recommended multipath configuration for your RHEL version up to date.

• If ActiveCluster does influence multipath.conf on Linux, use the extra information and examples for Linux with ActiveCluster, which are separate from the existing recommended multipath configuration per RHEL version.

Additional multipath settings are required for ActiveCluster. Please see Linux.

ActiveDR

SCSI Unit Attentions

The Linux kernel has been enhanced to enable userspace to respond to certain SCSI Unit Attention conditions received from SCSI devices via the udev event mechanism. The FlashArray, on version 5.0 and later, supports the following SCSI Unit Attentions:

Description                       ASC    ASCQ
CAPACITY DATA HAS CHANGED         0x2A   0x09
ASYMMETRIC ACCESS STATE CHANGED   0x2A   0x06
REPORTED LUNS DATA HAS CHANGED    0x3F   0x0E

With these SCSI Unit Attentions, it is possible to have the Linux initiator auto-rescan on these storage configuration changes. The requirement for auto-rescan support in RHEL/CentOS is the libstoragemgmt-udev package. Installing this package installs a udev rule, 90-scsi-ua.rules. Uncomment the supported Unit Attentions and reload the udev service to pick up the new rules:

[root@host ~]# cat 90-scsi-ua.rules
#ACTION=="change", SUBSYSTEM=="scsi", ENV{SDEV_UA}=="INQUIRY_DATA_HAS_CHANGED", TEST=="rescan", ATTR{rescan}="x"
ACTION=="change", SUBSYSTEM=="scsi", ENV{SDEV_UA}=="CAPACITY_DATA_HAS_CHANGED", TEST=="rescan", ATTR{rescan}="x"
#ACTION=="change", SUBSYSTEM=="scsi", ENV{SDEV_UA}=="THIN_PROVISIONING_SOFT_THRESHOLD_REACHED", TEST=="rescan", ATTR{rescan}="x"
#ACTION=="change", SUBSYSTEM=="scsi", ENV{SDEV_UA}=="MODE_PARAMETERS_CHANGED", TEST=="rescan", ATTR{rescan}="x"
ACTION=="change", SUBSYSTEM=="scsi", ENV{SDEV_UA}=="REPORTED_LUNS_DATA_HAS_CHANGED", RUN+="scan-scsi-target $env{DEVPATH}"

Note:

The following udevadm command causes all of the rules in the rules.d directory to be triggered immediately. Take extreme caution when running this command, because it may crash the host or have other unintended consequences. We recommend rebooting during a change control window if at all possible.

[root@host ~]# udevadm control --reload-rules && udevadm trigger

Boot from SAN: Rebuilding the Initial Ramdisk

If you are using a LUN to boot from SAN, you need to ensure the changes in your configuration files are
applied upon rebooting. This is done by rebuilding the initial ramdisk (initrd or initramfs) to include the
proper kernel modules, files and configuration directives after the configuration changes have been made.
As the procedure varies slightly depending on the host, we recommend that you refer to your vendor's documentation for the proper procedure. To check whether the required device-mapper components are present in the current initial ramdisk:

lsinitrd /boot/initramfs-$(uname -r).img | grep dm

An example file that may be missing that could result in failure to boot:

...(kernel build)/kernel/drivers/md/dm-round-robin.ko

When rebuilding the initial ramdisk, you need to confirm that the necessary dependencies are in place before rebooting the host to avoid any errors during boot. Refer to your vendor's documentation for specific commands to confirm this information.
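On RHEL-family systems the rebuild itself is typically done with dracut; a minimal sketch for the running kernel (the backup file name is illustrative):

# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
# dracut -f /boot/initramfs-$(uname -r).img $(uname -r)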

RapidFile Toolkit v2.1 for FlashBlade


RapidFile Toolkit is a set of supercharged tools for efficiently managing millions of files using familiar Linux
command line interfaces. RapidFile Toolkit is designed from the ground up to take advantage of Pure
Storage FlashBlade's massively parallel, scale-out architecture, while also supporting standard Linux file
systems. RapidFile Toolkit can serve as a high performance, drop-in replacement for Linux commands in
many common scenarios, which can increase employee efficiency, application performance, and business
productivity. RapidFile Toolkit is available to all Pure Storage customers.

Linux command    RapidFile Toolkit v2.1    Description
ls               pls                       Lists files & directories
find             pfind                     Finds matching files
du               pdu                       Summarizes file space usage
rm               prm                       Removes files & directories
chown            pchown                    Changes file ownership
chmod            pchmod                    Changes file permissions
cp               pcopy                     Copies files & directories
tar              ptar                      Creates & extracts compressed archives
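Because the toolkit is positioned as a drop-in replacement, usage mirrors the familiar commands. A hypothetical example (paths are illustrative):

# pls -l /mnt/flashblade/data
# pfind /mnt/flashblade/data -name '*.log'
# pdu -s /mnt/flashblade/data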

License

Distribution and use of the RapidFile Toolkit is governed by the Pure Storage EULA for Plugin / Adaptor /
Provider / SDK / Management Pack, which is available at Legal Terms, Programs, and Product Information.

Licenses for open source software used by RapidFile Toolkit can be found in the provided .tgz file. The
provided rpm and deb installers install license files to /usr/local/share/doc/rapidfile-toolkit/licenses.

Support

If you are a registered Pure Storage user and need assistance with RapidFile Toolkit, please contact Pure
Technical Services at Contact Us.

Contact Us

If you would like to submit product feedback on RapidFile Toolkit, please submit a comment via the Leave
Feedback link below.

Third Party Documentation

Third Party Documentation


Oracle® Linux 6 Administrator's Solutions Guide — Provides information about the advanced features of
Oracle Linux and, in particular, the Unbreakable Enterprise Kernel (UEK).

Red Hat Product Documentation —

SUSE Product Documentation — On this page, find technical documentation, such as quick starts, guides,
manuals, and best practices for all SUSE products and solutions.

Debian 7.5.0 (With Linux Kernel 3.2) — The Debian Project is making every effort to provide all of its users
with proper documentation in an easily accessible form.

Ubuntu 16.04 —

© 2015-2025 Pure Storage® (“Pure”). Portworx® and associated trademarks can be found here, and Pure's virtual patent marking program can be found here. Third party names may be trademarks of their respective owners. The Pure Storage products and programs described in this documentation are distributed under a license agreement restricting the use, copying, distribution, and decompilation/reverse engineering of the products. No part of this documentation may be reproduced in any form by any means without prior written authorization from Pure Storage, Inc. and its licensors, if any. Pure Storage may make improvements and/or changes in the Pure Storage products and/or the programs described in this documentation at any time without notice. This documentation is provided "as is" and all express or implied conditions, representations and warranties, including any implied warranty of merchantability, fitness for a particular purpose, or non-infringement, are disclaimed, except to the extent that such disclaimers are held to be legally invalid. Pure shall not be liable for incidental or consequential damages in connection with the furnishing, performance, or use of this documentation. The information contained in this documentation is subject to change without notice.
