SUSE Support

Storage Performance Appears To Degrade After Upgrading To Later Service Packs

This document (7023896) is provided subject to the disclaimer at the end of this document.


Environment

SUSE Linux Enterprise Server 15 (SLES 15)
SUSE Linux Enterprise Server 15 Service Pack 1 (SLES 15 SP1)
SUSE Linux Enterprise Server 15 Service Pack 2 (SLES 15 SP2)
SUSE Linux Enterprise Server 12 Service Pack 2 (SLES 12 SP2)
SUSE Linux Enterprise Server 12 Service Pack 3 (SLES 12 SP3)
SUSE Linux Enterprise Server 12 Service Pack 4 (SLES 12 SP4)


Situation

The customer conducted in-place upgrade testing from SUSE Linux Enterprise Server 12 Service Pack 1 to later service packs.
With all later service packs, IO performance appeared to be seriously degraded.
In the customer's case, these were Cisco UCS servers running SAP HANA on SUSE Linux Enterprise Server with EMC block storage.


Resolution

Set the max_sectors_kb value for all affected IO devices to an optimal value, derived by conducting performance testing.
This can be done for a single device via:
     echo <value> > /sys/block/<device>/queue/max_sectors_kb
e.g.:
     echo 1280 > /sys/block/sdg/queue/max_sectors_kb
     echo 1280 > /sys/block/dm-12/queue/max_sectors_kb
Note: where multipath is concerned, all available device paths to the target need to have the max_sectors_kb value set.
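The per-device commands above can be scripted for all paths at once. A minimal sketch, assuming the example value of 1280 from this document; the SYSROOT override and the set_max_sectors helper are illustrative names for testability, not part of any SUSE tooling:

```shell
#!/bin/sh
# Example value taken from this document; derive your own via performance testing.
VALUE=1280
# SYSROOT defaults to /sys; it is overridable purely for dry runs/testing.
SYSROOT="${SYSROOT:-/sys}"

set_max_sectors() {
    # Write max_sectors_kb for every sd* path and every dm-* device.
    # With multipath, every child path must be set, hence the sd* glob.
    for q in "$SYSROOT"/block/sd*/queue/max_sectors_kb \
             "$SYSROOT"/block/dm-*/queue/max_sectors_kb; do
        [ -e "$q" ] || continue
        echo "$VALUE" > "$q"
    done
}
```

Values set this way are not persistent across reboots or device re-adds; the udev rule described under Additional Information handles persistence.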


Cause

After SUSE Linux Enterprise Server 12 Service Pack 1, a change was made in the kernel to make use of the optimal IO size value reported by the storage hardware. In the majority of cases, this produces higher performance.
However, in some cases the storage subsystem reports an optimal IO size that does not actually produce the best performance for that hardware.
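The hardware-reported value can be inspected under sysfs: queue/optimal_io_size is in bytes (0 means "not reported"), while queue/max_sectors_kb is in KiB. A small sketch; opt_bytes_to_kb is a hypothetical helper for comparing the two, not an existing utility:

```shell
#!/bin/sh
# Hypothetical helper: convert the byte value from queue/optimal_io_size
# into the KiB units used by queue/max_sectors_kb.
opt_bytes_to_kb() {
    echo $(( $1 / 1024 ))
}

# Usage (device name is an example):
#   opt=$(cat /sys/block/sdg/queue/optimal_io_size)
#   echo "hardware-reported optimal IO: $(opt_bytes_to_kb "$opt") KiB"
```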

Additional Information

Hardware known to report an incorrect optimal IO value includes the EMC VNX5300 and the HPE P420 and P408 Storage Array controllers.
The HPE P420 uses the hpsa driver, which was patched for this issue in driver versions later than 3.4.16-148; the fix is included in SLES 12 SP2 since kernel 4.4.74-92.29.
The HPE P408 uses the smartpqi driver; the fix is included in SLES 12 SP2 since kernel 4.4.74-92.29.
NOTE: With the smartpqi driver, a number of commands could (if executed) reset max_sectors_kb where it had been 'manually' set; this issue has been patched in later kernels:

     SUSE Linux Enterprise Server 12-SP2:   2017-12-21 - The Linux Kernel 2141 -  kernel-default-4.4.103-92.53.1
     SUSE Linux Enterprise Server 12-SP3:   2017-08-29 - The Linux Kernel 1404 -  kernel-default-4.4.82-6.3.1

Regardless of whether a patched kernel is used, the udev rule suggested below prevents any max_sectors_kb reset from being permanent (the value is automatically reverted to the 'manually' set value).
To facilitate setting max_sectors_kb for all necessary devices on a specific host, a variant of the following can be used to help automate the process:
     Create a udev rule:
cat >/etc/udev/rules.d/50-max-sectors.rules <<'EOF'
# Limit IO size for storage attached to fibre controller
ACTION!="add", GOTO="max_sectors_end"
SUBSYSTEM!="block", GOTO="max_sectors_end"
KERNEL!="sd*[!0-9]", GOTO="max_sectors_end"
# This is necessary because the "vendor" attribute is present on multiple levels:
# ATTRS{vendor} returns the PCI vendor attribute, not the SCSI one
SUBSYSTEMS=="scsi", ENV{.vnd}="$attr{vendor}", ENV{.mdl}="$attr{model}"
DRIVERS=="<fibre_card_driver>", ENV{.vnd}=="DGC*", ENV{.mdl}=="RAID*", ATTR{queue/max_sectors_kb}="<optimal_max_IO_value>"
LABEL="max_sectors_end"
EOF
This can be tested by running:

     udevadm trigger -c add /dev/sd$X
     cat /sys/block/sd$X/queue/max_sectors_kb

If block devices are used as child devices for multipath dm-X devices, the value change can be synced to the dm devices with the following command:

multipathd -k'reconfigure all'

The multipath rules for setting up stacked devices should ensure that the dm-multipath devices also receive the 'manually' set max_sectors_kb value.
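After reconfiguring multipathd, the dm devices can be checked to confirm they picked up the intended value. A sketch; SYSROOT and show_dm_values are illustrative names for testability, not part of any SUSE tooling:

```shell
#!/bin/sh
# SYSROOT defaults to /sys; it is overridable purely for dry runs/testing.
SYSROOT="${SYSROOT:-/sys}"

show_dm_values() {
    # Print max_sectors_kb for every dm-* device so the multipath
    # reconfigure can be verified against the intended value.
    for q in "$SYSROOT"/block/dm-*/queue/max_sectors_kb; do
        [ -e "$q" ] || continue
        printf '%s %s\n' "${q#*/block/}" "$(cat "$q")"
    done
}
```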

The line that would need to be changed (depending on the fibre card driver and on what the optimal IO size should be according to performance testing) is:
DRIVERS=="<fibre_card_driver>", ENV{.vnd}=="DGC*", ENV{.mdl}=="RAID*", ATTR{queue/max_sectors_kb}="<optimal_IO_value>"

e.g.  DRIVERS=="fnic", ENV{.vnd}=="DGC*", ENV{.mdl}=="RAID*", ATTR{queue/max_sectors_kb}="1280"
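To find the driver, vendor, and model strings for a given device, the attribute chain can be queried with `udevadm info -a /dev/sdX`. A sketch; the match_keys filter is a hypothetical convenience, not a udev tool:

```shell
#!/bin/sh
# Hypothetical filter: pull the DRIVERS, vendor, and model lines out of
# `udevadm info -a` output to find values for the udev rule's match keys.
match_keys() {
    grep -E 'DRIVERS==|ATTRS?\{vendor\}|ATTRS?\{model\}'
}

# Usage (device name is an example):
#   udevadm info -a /dev/sdg | match_keys
```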
Note that this does not 'solve' the problem of storage wrongly reporting its optimal IO size; it just helps to automate the setting when it is necessary to change the max_sectors_kb value because of this issue.


This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7023896
  • Creation Date: 24-May-2019
  • Modified Date: 07-Jun-2023
    • SUSE Linux Enterprise Server
    • SUSE Linux Enterprise Server for SAP Applications
