Linux Recommended Settings

Applies to: FlashArray

To ensure the best performance with the Pure Storage FlashArrays, please use this guide for configuration and
implementation of Linux hosts in your environment. These recommendations apply to the versions of Linux that we
have certified as per our Compatibility Matrix.

Boot from SAN Considerations


If you are using a LUN to boot from SAN, you need to ensure that changes to your configuration files are applied upon
rebooting. This is done by rebuilding the initial ramdisk (initrd or initramfs) to include the proper kernel modules, files,
and configuration directives after the configuration changes have been made. As the procedure varies slightly depending
on the host, we recommend that you refer to your vendor's documentation for the proper procedure.
When rebuilding the initial ramdisk, you need to confirm that the necessary dependencies are in
place before rebooting the host to avoid any errors during boot. Refer to your vendor's
documentation for specific commands to confirm this information.

An example command from Oracle Linux to check the initramfs:

lsinitrd /boot/initramfs-$(uname -r).img | grep dm

An example file that may be missing that could result in failure to boot:

...(kernel build)/kernel/drivers/md/dm-round-robin.ko
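The presence check can be scripted. The sketch below is a hypothetical helper (not from Pure's documentation) that reads an lsinitrd file listing on stdin and reports any expected device-mapper multipath modules that are missing; the module names checked are typical examples, not an exhaustive list for your environment.

```shell
# Hypothetical helper: read an initramfs file listing on stdin and report
# any expected device-mapper multipath modules that are missing.
check_initrd_modules() {
    listing=$(cat)    # capture the listing once so every module can be checked
    status=0
    for mod in dm-multipath dm-round-robin; do
        case "$listing" in
            *"$mod"*) ;;                          # module present
            *) echo "missing: $mod"; status=1 ;;  # module absent
        esac
    done
    return $status
}
```

Usage: lsinitrd /boot/initramfs-$(uname -r).img | check_initrd_modules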

HBA I/O Timeout Settings


Though the Pure Storage FlashArray is designed to service I/O with consistently low latency, there are error conditions
that can cause much longer latencies, and it is therefore important to ensure dependent servers and applications are
tuned appropriately to ride out these error conditions without issue. By design, in the worst-case recoverable error
condition, the FlashArray will take up to 60 seconds to service an individual I/O, so the SCSI device timeout should be
set to at least 60 seconds.

You can check current timeout settings using the following command as root:

find /sys/class/scsi_generic/*/device/timeout -exec grep -H . '{}' \;

©2020 Copyright Pure Storage. All rights reserved.
For versions below RHEL 6, you can add the following command(s) into rc.local:

echo 60 > /sys/block/<Dev_name>/device/timeout

The default timeout for normal file system commands is 60 seconds when udev is being used. If udev is not
in use, the default timeout is 30 seconds. If you are running RHEL 6+ and want to ensure the rules persist,
then use the udev method documented below.
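To apply the 60-second timeout to every Pure LUN on a running system, a loop such as the following sketch can be used. The helper function and its sysfs-root parameter are illustrative (the parameter exists so the logic can be exercised against a test directory); run it as root against the real /sys.

```shell
# Illustrative helper: set a 60-second SCSI timeout on every device whose
# sysfs vendor file reports PURE. Non-Pure devices are left untouched.
set_pure_timeouts() {
    sysroot="${1:-/sys}"
    for vendor_file in "$sysroot"/block/sd*/device/vendor; do
        [ -e "$vendor_file" ] || continue          # glob matched nothing
        if grep -q PURE "$vendor_file"; then
            echo 60 > "$(dirname "$vendor_file")/timeout"
        fi
    done
}
```

Run as root with no argument to operate on /sys; remember this does not persist across reboots without the udev rule below.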

Queue Settings
We recommend two changes to the queue settings. The first selects the 'noop' I/O scheduler, which has been shown to
deliver better performance with lower CPU overhead than the default schedulers (usually 'deadline' or 'cfq'). The second
disables the collection of entropy for the kernel random number generator, which has high CPU overhead when
enabled for devices supporting high IOPS.

Manually Changing Queue Settings


(not required unless LUNs are already in use with wrong settings)

These settings can be safely changed on a running system, by locating the Pure LUNs:

grep PURE /sys/block/sd*/device/vendor

And writing the desired values into sysfs files:

echo noop > /sys/block/sdx/queue/scheduler

An example for-loop to quickly set all Pure LUNs to the desired 'noop' elevator:

for disk in $(lsscsi | grep PURE | awk '{print $6}'); do
    echo noop > /sys/block/${disk##/dev/}/queue/scheduler
done

All changes in this section take effect immediately, without rebooting, for RHEL 5 and 6. RHEL 4 releases require a
reboot.
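After making the change, the active scheduler can be confirmed: it appears in square brackets in the scheduler file. Below is a hypothetical helper (not from Pure's documentation) that prints the scheduler line for Pure devices only; the sysfs-root parameter exists purely so the logic can be tested against a scratch directory.

```shell
# Illustrative helper: print the scheduler line for each Pure device; the
# scheduler shown in square brackets is the one currently in effect.
show_pure_schedulers() {
    sysroot="${1:-/sys}"
    for dev in "$sysroot"/block/sd*; do
        [ -e "$dev/device/vendor" ] || continue
        grep -q PURE "$dev/device/vendor" || continue
        echo "$(basename "$dev"): $(cat "$dev/queue/scheduler")"
    done
}
```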

Applying Queue Settings with udev


Once the I/O scheduler elevator has been set to 'noop', it is often desirable to make the setting persistent across reboots.

Step 1: Create the Rules File


Create a new file in the following location (for each respective OS). The Linux OS will use the udev rules to set the
elevators after each reboot.

RHEL:

/etc/udev/rules.d/99-pure-storage.rules

Ubuntu:

/lib/udev/rules.d/99-pure-storage.rules

Step 2: Add the Following Entries to the Rules File (Version Dependent)
The following entries automatically set the elevator to 'noop' each time the system is rebooted. Create a file that has the
following entries, ensuring each entry exists on one line with no carriage returns:

For RHEL 6.x, 7.x and SuSE

# Recommended settings for Pure Storage FlashArray.

# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"

# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"

# Set the HBA timeout to 60 seconds
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{model}=="FlashArray      ", RUN+="/bin/sh -c 'echo 60 > /sys/$DEVPATH/device/timeout'"

 Please note that 6 spaces are needed after "FlashArray" in the "Set the HBA timeout to 60 seconds" rule above
for the rule to take effect.

For RHEL 5.x

# Recommended settings for Pure Storage FlashArray.

# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", KERNEL=="sd*[!0-9]", SYSFS{vendor}=="PURE*", RUN+="/bin/sh -c 'echo noop > /sys/$devpath/queue/scheduler'"

 It is expected behavior that you only see the settings take effect for the sd* devices. The dm-* devices will not
reflect the change directly but will inherit it from the sd* devices that make up their paths.

Maximum IO Size Settings


The maximum allowed size of an I/O request in kilobytes is determined by the max_sectors_kb setting in sysfs, which
restricts the largest I/O size that the OS will issue to a block device. The Pure Storage FlashArray can handle a
maximum of 4 MB writes, so the maximum allowed I/O size must not exceed that limit. You can check your current
settings to determine the I/O size; as long as it does not exceed 4096, you should be fine.

 In some cases the maximum I/O size setting is not honored and the host generates writes over the 4 MB
maximum. If you see errors like the following, the I/O size might be the problem:

end_request: critical target error, dev dm-14, sector 158686242
Buffer I/O error on device dm-15, logical block 19835776
lost page write due to I/O error on dm-15

Verify the Current Setting


If the value is ≤ 4096, then no action is necessary. However, if this value is > 4096, we recommend that you change the
max to 4096.
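A sketch of such a check is shown below. The helper is hypothetical (not from Pure's documentation), and its sysfs-root parameter exists only so the logic can be exercised against a test directory; run it with no argument to inspect the real /sys.

```shell
# Illustrative helper: report every Pure device whose max_sectors_kb
# exceeds the FlashArray's 4096 KB limit. No output means no action needed.
check_pure_max_sectors() {
    sysroot="${1:-/sys}"
    for dev in "$sysroot"/block/sd*; do
        [ -e "$dev/device/vendor" ] || continue
        grep -q PURE "$dev/device/vendor" || continue
        val=$(cat "$dev/queue/max_sectors_kb")
        if [ "$val" -gt 4096 ]; then
            echo "$(basename "$dev"): $val exceeds 4096"
        fi
    done
}
```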

Changing the Maximum Value

Reboot Persistent

We recommend that you add the value to your UDEV rules file (99-pure-storage.rules) created above. This ensures that
the setting persists through a reboot. To change that value please do the following:

1. Change the "max_sectors_kb" value by adding it to the UDEV rules (reboot persistent):

echo 'ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/max_sectors_kb}="4096"' >> /etc/udev/rules.d/99-pure-storage.rules

 NOTE: The location of your rules file may be different depending on your OS version, so please double
check the command before running it.

2. Reboot the host.
3. Check the value again.

Immediate Change but Won't Persist Through Reboot

 This command should only be run if you are sure there are no running services depending on that volume;
otherwise you risk an application crash.

If you need to make the change immediately, but cannot wait for a maintenance window to reboot, you can also change
the setting with the following command:

echo %VALUE% > /sys/block/sdz/queue/max_sectors_kb

%VALUE% should be ≤ 4096
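To apply the change to all Pure devices at once, rather than one device at a time, a loop like the following sketch can be used. The helper is illustrative (not from Pure's documentation); its sysfs-root parameter exists for testability, and it should be run as root against the real /sys. The same caution about running services applies.

```shell
# Illustrative helper: cap max_sectors_kb at 4096 on every Pure device.
set_pure_max_sectors() {
    sysroot="${1:-/sys}"
    for dev in "$sysroot"/block/sd*; do
        [ -e "$dev/device/vendor" ] || continue
        grep -q PURE "$dev/device/vendor" || continue
        echo 4096 > "$dev/queue/max_sectors_kb"
    done
}
```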

Recommended DM-Multipathd Settings

 ActiveCluster: Additional multipath settings are required for ActiveCluster. Please see ActiveCluster
Requirements and Best Practices.

The multipath policy defines how the host distributes I/Os across the available paths to the storage.
The round-robin (RR) policy distributes I/Os evenly across all Active/Optimized paths. A newer
MPIO policy, queue-length, is similar to round-robin in that I/Os are distributed across all available
Active/Optimized paths; however, it provides an additional benefit: the queue-length path
selector biases I/Os toward paths that are servicing I/O more quickly (paths with shorter queues). In the
event that one path becomes intermittently disruptive or experiences higher latency, queue-length
will avoid that path, reducing the effect of the problem path.

The following are recommended entries to existing multipath.conf files (/etc/multipath.conf) for Linux OSes. Add the
following to the existing section for controlling Pure devices.

 Please note that fast_io_fail_tmo and dev_loss_tmo do not apply to iSCSI.

RHEL 7.3+

No manual changes are required. The RHEL OS should configure this file automatically provided that the dm-multipath
version is device-mapper-multipath-0.4.9-99.el7.x86_64 or later. See the RHEL KB: https://access.redhat.com/
solutions/2772111. The dm-multipath configuration shown below for PURE is the default with the device-mapper version
included in RHEL / Oracle Linux 7.3+.

device {
vendor "PURE"
product "FlashArray"
path_grouping_policy "multibus"
path_selector "queue-length 0"
path_checker "tur"
features "0"
hardware_handler "0"
prio "const"
failback immediate
fast_io_fail_tmo 10
dev_loss_tmo 60
user_friendly_names no
}

Included in RHEL 7.3+ is device-mapper-multipath-0.4.9-99


Support added for PURE FlashArray - With this release, multipath has added built-in configuration
support for the PURE FlashArray (BZ#1300415)

Supporting Info:
• RHEL KB - Standard dm-multipath configuration added for Pure Storage
• Bug 1300415 - Add PURE to multipath-tools on RHEL
• RHEL 7.3 Release Notes - Support added for PURE FlashArray

RHEL 6.2+, SLES 12, and supporting kernels

defaults {
polling_interval 10
find_multipaths yes
}
devices {
device {
vendor "PURE"
path_selector "queue-length 0"
path_grouping_policy group_by_prio
path_checker tur
fast_io_fail_tmo 10
dev_loss_tmo 60
no_path_retry 0

hardware_handler "1 alua"
prio alua
failback immediate
}
}

RHEL 5.7+ - 6.1 and supporting kernels

defaults {
polling_interval 10
}

devices {
device {
vendor "PURE"
path_selector "round-robin 0"
path_grouping_policy multibus
rr_min_io 1
path_checker tur
fast_io_fail_tmo 10
dev_loss_tmo 60
no_path_retry 0
}
}

RHEL 5.6 and below, and supporting kernels

defaults {
polling_interval 10
}

devices {

device {
vendor "PURE"
path_selector "round-robin 0"
path_grouping_policy multibus
rr_min_io 1
path_checker tur
no_path_retry 0
}
}

Oracle VM Server

device {
vendor "PURE"
product "FlashArray"
path_selector "queue-length 0"
path_grouping_policy group_by_prio

path_checker tur
fast_io_fail_tmo 10
dev_loss_tmo 60
no_path_retry 0
hardware_handler "1 alua"
prio alua
failback immediate
user_friendly_names no
}

More information on multipath settings can be found here: RHEL Documentation

Verifying DM-Multipathd Configuration


You can check the setup by looking at "multipath -ll".

6.2+ (queue-length)
# multipath -ll

Correct Configuration:
mpathe (3624a93709d5c252c73214d5c00011014) dm-2 PURE,FlashArray
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
|- 1:0:0:4 sdd 8:48 active ready running
|- 1:0:1:4 sdp 8:240 active ready running
|- 1:0:2:4 sdab 65:176 active ready running
|- 1:0:3:4 sdan 66:112 active ready running
|- 2:0:0:4 sdaz 67:48 active ready running
|- 2:0:1:4 sdbl 67:240 active ready running
|- 2:0:2:4 sdbx 68:176 active ready running
`- 2:0:3:4 sdcj 69:112 active ready running
...

Incorrect Configuration (check for unnecessary spaces in multipath.conf):

3624a9370f35b420ae1982ae200012080 dm-0 PURE,FlashArray


size=500G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| `- 2:0:0:3 sdc 8:32 active undef running
|-+- policy='round-robin 0' prio=0 status=enabled
| `- 3:0:0:3 sdg 8:96 active undef running
|-+- policy='round-robin 0' prio=0 status=enabled
| `- 1:0:0:3 sdaa 65:160 active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
`- 0:0:0:3 sdak 66:64 active undef running
...

Below 6.2 (Round Robin)
# multipath -ll
...
Correct Configuration:
3624a9370f35b420ae1982ae200012080 dm-0 PURE,FlashArray
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 2:0:0:3 sdc 8:32 active undef running
|- 3:0:0:3 sdg 8:96 active undef running
|- 1:0:0:3 sdaa 65:160 active undef running
`- 0:0:0:3 sdak 66:64 active undef running

...
Incorrect Configuration (check for unnecessary spaces in multipath.conf):

3624a9370f35b420ae1982ae200012080 dm-0 PURE,FlashArray


size=500G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| `- 2:0:0:3 sdc 8:32 active undef running
|-+- policy='round-robin 0' prio=0 status=enabled
| `- 3:0:0:3 sdg 8:96 active undef running
|-+- policy='round-robin 0' prio=0 status=enabled
| `- 1:0:0:3 sdaa 65:160 active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
`- 0:0:0:3 sdak 66:64 active undef running
...
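The difference between the two outputs above can also be checked mechanically. The sketch below is a hypothetical helper (not a Pure-provided tool) that reads `multipath -ll` output on stdin and prints any PURE map whose paths are split into more than one path group:

```shell
# Illustrative helper: flag multipath maps whose paths are split across
# multiple path groups (each group starts a line containing "policy=").
find_split_path_groups() {
    awk '
        /dm-[0-9]+ PURE/ {                 # start of a new map
            if (map != "" && groups > 1) print map
            map = $1; groups = 0
        }
        /policy=/ { groups++ }             # one line per path group
        END { if (map != "" && groups > 1) print map }
    '
}
```

Usage: multipath -ll | find_split_path_groups — no output means every Pure map keeps all its paths in a single group, as in the correct configurations shown above.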

Excluding Third-Party vendor LUNs from DM-Multipathd


When systems have co-existing multipathing software, it is good practice to exclude devices from one multipathing
package so that they can be controlled by the other.

The following is an example of using DM-Multipathd to blacklist LUNs from a third-party vendor. The syntax prevents
DM-Multipathd from controlling the LUNs that are "blacklisted".

The following can be added to the 'blacklist' section of the multipath.conf file.

blacklist {
device {
vendor "XYZ.*"
product ".*"
}

device {
vendor "ABC.*"
product ".*"
}
}

device-mapper-multipath and EMC PowerPath
Please note that having both device-mapper-multipath and EMC PowerPath on the same system may result in kernel
panics. Refer to Red Hat's article: https://access.redhat.com/site/solutions/110553

Space Reclamation
You will want to make sure that space reclamation is configured on your Linux Host so that you do not run out of space.
For more information please see this KB: Reclaiming Space on Linux

ActiveCluster
Additional multipath settings are required for ActiveCluster. Please see ActiveCluster Requirements and Best Practices.

SCSI Unit Attentions


The Linux kernel has been enhanced to enable userspace to respond, via the udev event mechanism, to certain SCSI
Unit Attention conditions received from SCSI devices. The FlashArray running version 5.0 and later supports the
following SCSI Unit Attentions:

Description                        ASC    ASCQ
CAPACITY DATA HAS CHANGED          0x2A   0x09
ASYMMETRIC ACCESS STATE CHANGED    0x2A   0x06
REPORTED LUNS DATA HAS CHANGED     0x3F   0x0E

With these SCSI Unit Attentions, the Linux initiator can auto-rescan on these storage configuration
changes. Auto-rescan support in RHEL/CentOS requires the libstoragemgmt-udev package. Installing
this package installs a udev rule, 90-scsi-ua.rules. Uncomment the supported Unit Attentions and reload the udev
service to pick up the new rules:

[root@host ~]# cat 90-scsi-ua.rules
#ACTION=="change", SUBSYSTEM=="scsi", ENV{SDEV_UA}=="INQUIRY_DATA_HAS_CHANGED", TEST=="rescan", ATTR{rescan}="x"
ACTION=="change", SUBSYSTEM=="scsi", ENV{SDEV_UA}=="CAPACITY_DATA_HAS_CHANGED", TEST=="rescan", ATTR{rescan}="x"
#ACTION=="change", SUBSYSTEM=="scsi", ENV{SDEV_UA}=="THIN_PROVISIONING_SOFT_THRESHOLD_REACHED", TEST=="rescan", ATTR{rescan}="x"
#ACTION=="change", SUBSYSTEM=="scsi", ENV{SDEV_UA}=="MODE_PARAMETERS_CHANGED", TEST=="rescan", ATTR{rescan}="x"
ACTION=="change", SUBSYSTEM=="scsi", ENV{SDEV_UA}=="REPORTED_LUNS_DATA_HAS_CHANGED", RUN+="scan-scsi-target $env{DEVPATH}"

[root@host ~]# udevadm control --reload-rules && udevadm trigger
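Uncommenting the desired rules can also be scripted. The sketch below is a hypothetical helper (the rules-file path varies by distribution, so it is passed in): it strips the leading # from the rule matching a given SDEV_UA value, after which the udev rules can be reloaded as shown above.

```shell
# Illustrative helper: enable (uncomment) the udev rule for one SCSI Unit
# Attention in a rules file, editing the file in place via a temp copy.
enable_ua_rule() {
    ua="$1"
    rules_file="$2"
    tmp="${rules_file}.tmp"
    sed "s/^#\\(ACTION==\"change\".*${ua}\\)/\\1/" "$rules_file" > "$tmp" \
        && mv "$tmp" "$rules_file"
}
```

Example: enable_ua_rule MODE_PARAMETERS_CHANGED /usr/lib/udev/rules.d/90-scsi-ua.rules (path is an assumption; confirm where your distribution installs the file).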
