
SAP HANA Implementation and Best Practices on FlashArray

Pure Storage FlashArray is implemented for SAP HANA through SAP HANA Tailored Data Center Integration (TDI). SAP HANA TDI provides customers with flexibility regarding the hardware components required to run SAP HANA.

For more information on SAP HANA TDI, please review this document from SAP.

Hardware requirements

The hardware requirements for SAP HANA can be found in SAP Note 2399995.

Operating system requirements

SAP HANA can be deployed on Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES). For general information on how to configure each operating system, please review the corresponding SAP Notes.

Further information on supported operating systems and revisions for SAP HANA can be found in SAP Note 2235581.

SAP HANA Certified Enterprise Storage

The current number of nodes certified for use with each FlashArray model can be found here.

The following connectivity is certified for use in production environments:

  • Fibre Channel

The following connectivity can be used for development and testing:

  • iSCSI
  • NVMe over fabrics using RDMA over converged Ethernet (RoCE).

HBA I/O Timeout Settings

Though the Pure Storage FlashArray is designed to service IO with consistently low latency, there are error conditions that can cause much longer latencies, so it is important to ensure dependent servers and applications are tuned appropriately to ride out these error conditions without issue. By design, in the worst-case recoverable error condition, the FlashArray will take up to 60 seconds to service an individual IO. The HBA I/O timeout should therefore be set to 60 seconds, which can be done with the commands below.

You can check the current timeout settings using the following command as root:

find /sys/class/scsi_generic/*/device/timeout -exec grep -H . '{}' \;

For versions below RHEL 6, you can add the following command(s) to rc.local:

echo 60 > /sys/block/<Dev_name>/device/timeout

Note that the default timeout for normal file system commands is 60 seconds when udev is in use. If udev is not in use, the default timeout is 30 seconds. If you are running RHEL 6 or later and want to ensure the rules persist, use the udev method documented below.
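
As an example, a minimal rc.local snippet that applies the 60-second timeout to every Pure LUN might look like the following (this is a sketch which assumes the devices report the PURE vendor string in sysfs; adjust for your environment):

# Set a 60-second IO timeout on every sd device whose vendor is PURE.
for vendor_file in /sys/block/sd*/device/vendor; do
    if grep -q PURE "$vendor_file"; then
        echo 60 > "$(dirname "$vendor_file")/timeout"
    fi
done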

Queue Settings

We recommend two changes to the queue settings. The first selects the 'noop' I/O scheduler, which has been shown to deliver better performance with lower CPU overhead than the default schedulers (usually 'deadline' or 'cfq'). The second eliminates the collection of entropy for the kernel random number generator, which has high CPU overhead when enabled for devices supporting high IOPS.

Manually Changing Queue Settings 

(not required unless LUNs are already in use with wrong settings)

These settings can be safely changed on a running system by locating the Pure LUNs:

grep PURE /sys/block/sd*/device/vendor

And writing the desired values into sysfs files:

echo noop > /sys/block/sdx/queue/scheduler

An example for-loop to quickly set all Pure LUNs to the desired 'noop' elevator is shown below.

# Set the 'noop' scheduler on every Pure LUN reported by lsscsi.
for disk in $(lsscsi | grep PURE | awk '{print $6}'); do
    echo noop > /sys/block/${disk##/dev/}/queue/scheduler
done
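
The other recommended queue settings can be applied manually in the same way. The following is a sketch using the same lsscsi-based device discovery as above; the values match the udev rules shown in the next section:

for disk in $(lsscsi | grep PURE | awk '{print $6}'); do
    # Disable entropy collection and steer completions back to the submitting CPU.
    echo 0 > /sys/block/${disk##/dev/}/queue/add_random
    echo 2 > /sys/block/${disk##/dev/}/queue/rq_affinity
done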

All changes in this section take effect immediately, without rebooting, on RHEL 5 and 6. RHEL 4 releases will require a reboot.

Applying Queue Settings with udev

Once the I/O scheduler elevator has been set to 'noop', it is often desirable to keep the setting persistent across reboots.

Step 1: Create the Rules File

Create a new file in the following location (for each respective OS). The Linux OS will use the udev rules to set the elevators after each reboot.

RHEL and SLES:
/etc/udev/rules.d/99-pure-storage.rules

 

Step 2: Add the Following Entries to the Rules File  (Version Dependent)

The following entries automatically set the elevator to 'noop' each time the system is rebooted. Create a file with the following entries, ensuring each entry is on a single line with no carriage returns:

For RHEL 6.x, 7.x and SuSE
# Recommended settings for Pure Storage FlashArray.

# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"

# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"

# Set the HBA timeout to 60 seconds
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{model}=="FlashArray      ", RUN+="/bin/sh -c 'echo 60 > /sys/$DEVPATH/device/timeout'"

Please note that 6 spaces are needed after "FlashArray" under "Set the HBA timeout to 60 seconds" above for the rule to take effect.

For RHEL 5.x
# Recommended settings for Pure Storage FlashArray.
 
# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", KERNEL=="sd*[!0-9]", SYSFS{vendor}=="PURE*", RUN+="/bin/sh -c 'echo noop > /sys/$devpath/queue/scheduler'"

It is expected behavior that you only see the settings take effect for the sd* devices. The dm-* devices will not reflect the change directly but will inherit it from the sd* devices that make up their paths.
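
To apply newly added rules without a reboot, udev can usually re-read the rules file and re-trigger block device events (exact behavior may vary with the udev version in use):

# Reload the udev rules and re-trigger events for block devices.
udevadm control --reload-rules
udevadm trigger --type=devices --subsystem-match=block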

Maximum IO Size Settings

The maximum allowed size of an I/O request in kilobytes is determined by the max_sectors_kb setting in sysfs. This restricts the largest IO size that the OS will issue to a block device. The Pure Storage FlashArray can handle a maximum of 4 MB writes. Therefore, we need to make sure that the maximum allowed IO size matches our expectations. You can check your current settings to determine the IO size; as long as it does not exceed 4096, you should be fine.

Verify the Current Setting

  1. Check which block device you are using with the Pure Storage array.
    If you know which device you're looking at already (dm-#)
    [root@host ~]# multipath -ll | grep -A 7 -B 0 "dm-6"
    3624a9370ffa9a01386b3410600011036 dm-6 PURE,FlashArray
    size=35G features='0' hwhandler='0' wp=rw
    `-+- policy='queue-length 0' prio=1 status=active
      |- 1:0:0:9 sdf 8:80   active ready running
      |- 0:0:1:9 sdx 65:112 active ready running
      |- 1:0:1:9 sdl 8:176  active ready running
      `- 0:0:0:9 sdr 65:16  active ready running
    

    OR

    If we want to know all PURE volumes presented to the host

    multipath -ll | grep -i PURE
    
  2. Check the "max_sectors_kb" value on your Linux host (regardless of the kernel version or Linux distribution). You will need to know which device to check.
    $ cat /sys/block/sda/queue/max_sectors_kb
    512
    

If the value is ≤ 4096, then no action is necessary. However, if this value is > 4096, we recommend that you change the maximum to 4096.
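
To check every Pure-backed sd device at once, a small sketch along these lines can be used (again assuming the PURE vendor string, as in the earlier examples):

# Print max_sectors_kb for every sd device whose vendor is PURE.
for dev in /sys/block/sd*; do
    if grep -q PURE "$dev/device/vendor" 2>/dev/null; then
        echo "${dev##*/}: $(cat $dev/queue/max_sectors_kb)"
    fi
done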

Changing the Maximum Value 

Reboot Persistent

We recommend that you add the value to the udev rules file (99-pure-storage.rules) created above. This will ensure that the setting persists through a reboot. To change the value, do the following:

  1. Change the "max_sectors_kb" value by adding it to the udev rules file (reboot persistent):
    echo 'ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/max_sectors_kb}="4096"' >> /etc/udev/rules.d/99-pure-storage.rules

     NOTE: The location of your rules file may be different depending on your OS version, so please double check the command before running it. 

  2. Reboot the host. 
  3. Check the value again.
Immediate Change but Won't Persist Through Reboot

This command should only be run if you are sure there are no running services depending on the volume; otherwise, you risk an application crash.

If you need to make the change immediately, but cannot wait for a maintenance window to reboot, you can also change the setting with the following command: 

echo %VALUE% > /sys/block/sdz/queue/max_sectors_kb

%VALUE% should be ≤ 4096
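
Because each multipath device is made up of several sd paths, the value generally needs to be written to every path. The following is a sketch using the same lsscsi-based discovery as in the queue settings section; run it only when you are sure the volumes are quiet:

# Set max_sectors_kb to 4096 on every Pure sd device.
for disk in $(lsscsi | grep PURE | awk '{print $6}'); do
    echo 4096 > /sys/block/${disk##/dev/}/queue/max_sectors_kb
done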

Recommended DM-Multipathd Settings

ActiveCluster: Additional multipath settings are required for ActiveCluster. Please see ActiveCluster Requirements and Best Practices.

The Multipath Policy defines how the host distributes IOs across the available paths to the storage. The Round Robin (RR) policy distributes IOs evenly across all Active/Optimized paths. A newer MPIO policy, queue-length, is similar to round robin in that IOs are distributed across all available Active/Optimized paths; however, it provides some additional benefits. The queue-length path selector biases IOs towards paths that are servicing IO more quickly (paths with shorter queues). In the event that one path becomes intermittently disruptive or experiences higher latency, queue-length will prevent the utilization of that path, reducing the effect of the problem path.

The following are recommended entries for existing multipath.conf files (/etc/multipath.conf) on Linux operating systems. Add them to the existing section controlling Pure devices.

SCSI

Please note that fast_io_fail_tmo and dev_loss_tmo do not apply to iSCSI.

Scale Up Configurations

SUSE 12+, SUSE 15+
  device {
          vendor "PURE"
          product "FlashArray"
          path_grouping_policy "multibus"
          path_selector "queue-length 0"
          path_checker "tur"
          features "0"
          hardware_handler "0"
          prio "const"
          failback immediate
          fast_io_fail_tmo 10
          dev_loss_tmo 60
          user_friendly_names no
          no_path_retry 0
    }
RHEL 7.3+
No manual changes required. The dm-multipath config shown below for PURE is default with the device-mapper version included in RHEL / Oracle Linux 7.3+
  device {
        vendor "PURE"
        product "FlashArray"
        path_grouping_policy "multibus"
        path_selector "queue-length 0"
        path_checker "tur"
        features "0"
        hardware_handler "0"
        prio "const"
        failback immediate
        fast_io_fail_tmo 10
        dev_loss_tmo 60
        user_friendly_names no
    }

Included in RHEL 7.3+ is device-mapper-multipath-0.4.9-99
Support added for PURE FlashArray - With this release, multipath has added built-in configuration support for the PURE FlashArray (BZ#1300415)

Supporting Info:
RHEL 6.2+ and supporting kernels
defaults {
   polling_interval      10
   find_multipaths       yes
}
devices {
   device {
       vendor                "PURE"
       path_selector         "queue-length 0"
       path_grouping_policy  group_by_prio
       path_checker          tur
       fast_io_fail_tmo      10
       dev_loss_tmo          60
       no_path_retry         0
       hardware_handler      "1 alua"
       prio                  alua
       failback              immediate
   }
}
RHEL 5.7+ - 6.1 and supporting kernels
defaults {
    polling_interval      10
}
 
devices {
    device {
        vendor                "PURE"
        path_selector         "round-robin 0"
        path_grouping_policy  multibus
        rr_min_io             1
        path_checker          tur
        fast_io_fail_tmo      10
        dev_loss_tmo          60
        no_path_retry         0
    }
}
RHEL 5.6 and below, and supporting kernels
defaults {
    polling_interval      10
}

devices {
    device {
        vendor                "PURE"
        path_selector         "round-robin 0"
        path_grouping_policy  multibus
        rr_min_io             1
        path_checker          tur
        no_path_retry         0
    }
}

More information on multipath settings can be found here: RHEL Documentation

Scale Out Configurations

SUSE 12+, SUSE 15+
  device {
          vendor "PURE"
          product "FlashArray"
          path_grouping_policy "multibus"
          path_selector "queue-length 0"
          path_checker "tur"
          features "0"
          hardware_handler "0"
          prio "const"
          failback immediate
          fast_io_fail_tmo 10
          dev_loss_tmo 60
          user_friendly_names no
          no_path_retry 0
    }
RHEL 7.3+
No manual changes required. The dm-multipath config shown below for PURE is default with the device-mapper version included in RHEL / Oracle Linux 7.3+
  device {
        vendor "PURE"
        product "FlashArray"
        path_grouping_policy "multibus"
        path_selector "queue-length 0"
        path_checker "tur"
        features "0"
        hardware_handler "0"
        prio "const"
        failback immediate
        fast_io_fail_tmo 10
        dev_loss_tmo 60
        user_friendly_names no
    }

Included in RHEL 7.3+ is device-mapper-multipath-0.4.9-99
Support added for PURE FlashArray - With this release, multipath has added built-in configuration support for the PURE FlashArray (BZ#1300415)

Supporting Info:
RHEL 6.2+ and supporting kernels
defaults {
   polling_interval      10
   find_multipaths       yes
}
devices {
   device {
       vendor                "PURE"
       path_selector         "queue-length 0"
       path_grouping_policy  group_by_prio
       path_checker          tur
       fast_io_fail_tmo      10
       dev_loss_tmo          60
       no_path_retry         0
       hardware_handler      "1 alua"
       prio                  alua
       failback              immediate
   }
}

NVMeoF

Currently, only scale-up configurations can be used for test and development with SAP HANA using NVMeoF.

Scale Up Configurations

Native NVMe multipathing must be disabled in SUSE to ensure that DM-Multipathd can be used. This is the recommended method of multipathing with NVMeoF.

To disable native NVMe multipathing, add "nvme-core.multipath=N" as a boot parameter.
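
As a sketch, on a GRUB2-based system the parameter can be added as follows (file locations and the grub command vary by distribution and boot mode, so adjust as needed):

# Append nvme-core.multipath=N to the kernel command line in /etc/default/grub,
# then regenerate the GRUB configuration and reboot for it to take effect.
sed -i 's/^GRUB_CMDLINE_LINUX="/&nvme-core.multipath=N /' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

# After the reboot, native NVMe multipathing should report as disabled (N):
cat /sys/module/nvme_core/parameters/multipath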

RHEL 7.3+, SUSE 12+, SUSE 15+
No manual changes required. The dm-multipath config shown below for PURE is default with the device-mapper version included in RHEL / Oracle Linux 7.3+
  device {
        vendor "PURE"
        product "Pure Storage FlashArray"
        path_grouping_policy "multibus"
        path_selector "queue-length 0"
        features "0"
        fast_io_fail_tmo 10
        dev_loss_tmo 60
        user_friendly_names no
        polling_interval 10
    }
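
After adding or changing any of the multipath.conf entries above, the multipath daemon needs to re-read its configuration before the settings take effect. A sketch for systemd-based releases follows; older releases use 'service multipathd reload' or the interactive 'multipathd -k' shell instead:

# Reload dm-multipathd so it picks up the new /etc/multipath.conf settings.
systemctl reload multipathd

# Confirm the Pure devices now report the expected path selector and path state.
multipath -ll | grep -A 3 PURE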

Shared filesystem for scale out SAP HANA deployments

SAP HANA distributed (scale-out) deployments require a shared filesystem exported to each node in the landscape for the installation to succeed.

Using Windows File Services (WFS), a part of Purity Run, an NFS share can be exported to each SAP HANA node without the need for any additional hardware. Further information on Purity Run and WFS can be found here.

Setting up an NFS Share for SAP HANA

In order for an NFS share exported from a Windows Server to function correctly with SAP HANA installations, the correct permissions need to be in place. These permissions link an Active Directory user with full access to a directory in Windows to both a group identifier (GID) and a user identifier (UID).

There is no need to provide LDAP authentication capabilities in Red Hat Enterprise Linux or SUSE Enterprise Linux.

Create a group in Active Directory

Typically, the user group created during the installation of an SAP HANA instance is called "sapsys", with a default GID of 79. The GID of the group can be changed, but it is important to know before installation what this GID will be.
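
If SAP HANA is already installed on a system, the GID and UID that need to be mapped can be checked there first. A small sketch follows; the <sid>adm user name shown is illustrative and depends on the SAP system ID:

# Show the GID of the sapsys group and the UID of the <sid>adm user.
getent group sapsys
id hdbadm    # replace 'hdbadm' with the actual <sid>adm user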

Connect to the domain controller and open the Active Directory Users and Computers management console.

Right-click on "Users" in the Domain tree and select "New" and then "Group".

clipboard_e8e004903bcdb8a64f6822562d112c33a.png

In the dialog which appears, give the group a name and ensure the Group scope is set to "Global" and the Group type is set to "Security".

clipboard_e196740acd267244b6a72e02b1312d777.png

Once the group has been created, a user needs to be created for the instance being installed on that system.

Right-click on "Users" in the Domain tree and select "New" and then "User".

clipboard_ed6f2773d506f18070c8542d8782f0711.png

In the dialog which appears, give the user a name and a username.

clipboard_e97edddc1c14c558c8a05cea216adfaad.png

Give the user a password and set the password to never expire.

clipboard_e6e46f4023047531dcc14c20d73d66163.png

Do not add the newly created user to the sapsys group. This will be done automatically during the share creation process later.

Set up NFS in File and Storage Services

The NFS service in Windows File Services needs to be set up so that it can map credentials in the domain to an NFS GID and UID.

On the Windows File Services instance, open Server Manager.

clipboard_ea475fb622eb826adaa8b1d42bb7fdd64.png

Navigate to "File and Storage Services" and right-click on the file server that will be (or is) hosting the NFS share for SAP HANA. When the dialog appears, select "NFS Settings".

clipboard_e583d6c1a6f501cb3aba878407b603c09.png

In the WFS NFS dialog, set the relevant protocol versions (versions 3 and 4.1 are recommended), set the NLM grace period to 45 seconds, the lease period to 90 seconds, and the NFS 4.1 grace period to 180 seconds.

clipboard_eee44acbd0998e5b3132a8db6404f941a.png

In Identity mapping, set the identity mapping source. In this example, Active Directory Domain Services is being used.

clipboard_ecf2146f736c1b23d3ca05468cb3a8c0c.png

Return to File and Storage Services, right-click on the file server that will be (or is) hosting the NFS share for SAP HANA, and select "NFS Identity Mapping".

clipboard_e2416af4aaf0bb6a84eeb3994b84ad5e8.png

In the WFS NFS Identity Mapping dialog, select the "New..." button for Mapped groups.

clipboard_ef7036a248c487ccc0eeffba413a3b8e0.png

Browse for the sapsys group created in step 1.

clipboard_e3e864c49fb3138d9a00655d1544e281f.png

Give the group the same GID as expected to be used in the installation of SAP HANA.

clipboard_e8566f8df48df333a94d8a6c78ce434e3.png

Return to the WFS NFS Identity Mapping dialog and select the "New..." button for Mapped users.

Browse for the user created in Step 1.

clipboard_e8983e2fd8161ef89780f1be979511a25.png

Give the user the same GID as set for the group, along with the expected UID of the user for the SAP HANA installation.

clipboard_ed9fff3418fef531b341f3b0e56e12592.png

Create NFS share and set the correct permissions.

A single volume and drive should be presented for the NFS share. The use of drive letters is recommended, but mount points are also possible.

In File and Storage Services, navigate to the Shares menu, then right-click and select "Create Share".

In the New Share Wizard, select "NFS Share - Quick".

clipboard_e9fbb891bba81198d7cbf10be3c0cf8fe.png

For the share location, select a volume which has been specifically set aside for the NFS share. In this instance, E: has been set aside as a drive letter for NFS shares.

clipboard_e12f609d325158b82f03cec4aef593e6c.png

Give the share a name and check that the local path and remote path to the share are correct.

clipboard_ee88eee6fc12f9b1a6fc63d9191759e4c.png

In Authentication, ensure "No server authentication (AUTH_SYS)", "Enable unmapped user access", and "Allow unmapped user access by UID/GID" are checked.

clipboard_ed15e4cfe0039de49e5d1bc3d914f93ad.png

In the Share Permissions dialog select "Add...".

clipboard_e42b2cf25c40de84bc0085627ea35c410.png

Set the relevant permissions for the share.

The example below does not show the exact permissions required; these will vary by use case. The only settings required, as shown below, are the Language encoding and Share permissions.

clipboard_ed53d187d3e4748e30f767700b56db30f.png

Check the permissions are correct.

clipboard_e0a6d096c1e704d9a4380633d4b30b69d.png

In the Permissions dialog (this is where the permissions for the directory in Windows Server are set), select "Customize permissions".

clipboard_ea8d4f54e2161de4bc85be656ce634521.png

In the dialog which appears, add a new permission entry by selecting "Add".

clipboard_e54a3bd6d8da5bc0998c260bc719393e1.png

In the further dialog which appears, select "Select a principal".

clipboard_e38490048991f6745cd8892fa25d35d11.png

Use the sapsys group as the principal; all users which are members of the group will inherit its permissions.

clipboard_e1c96e939da33a4abe4414310c84b647b.png

Set the basic permissions for Full Control.

clipboard_e60428b44d8ce21b243b44dfeb3cf9247.png

Also add "Everyone" as a permission with Full control.

clipboard_e94eae7b34b4a12f4562bf0c3e5500cb3.png

Review all of the permissions within the dialog.

clipboard_ed7d79d3d1f44a6f772fc3fb7a8f02bdc.png

clipboard_ef605ecd49037ec172d68d33a43513d55.png

clipboard_e9ce151e451b2992ef66a103306440026.png

clipboard_e8ac77411f166710be032c17cdddc5196.png
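
Once the share has been created, it can be mounted on each SAP HANA node. The following is a minimal sketch, assuming a hypothetical file server named wfs-server exporting a share called hanashared; adjust the server, share, and mount options for your environment:

# Create the mount point and mount the exported share on every HANA node.
mkdir -p /hana/shared
mount -t nfs -o vers=4.1,hard,timeo=600 wfs-server:/hanashared /hana/shared

# Optional /etc/fstab entry to make the mount persistent:
# wfs-server:/hanashared  /hana/shared  nfs  vers=4.1,hard,timeo=600  0  0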

 

Another possible architecture for the shared SAP HANA directory is to build an on-premises NFS service using any Linux distribution, with the exported NFS mount based on storage hosted on FlashArray. For high availability of the NFS service, a cluster can be created. Red Hat details how to do this here.
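
A minimal sketch of such an export on a Linux NFS server, assuming a FlashArray-backed filesystem mounted at /export/hana_shared and HANA nodes in a hypothetical 192.168.10.0/24 network:

# Export the FlashArray-backed filesystem to the HANA nodes
# (mount point, subnet, and options are illustrative only).
echo '/export/hana_shared 192.168.10.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra            # apply the new export
showmount -e localhost  # confirm the export is visible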

FlashBlade can also be used to export an NFS mountpoint to each SAP HANA node.