
SAP HANA Implementation and Best Practices



Tailored Datacenter Integration (TDI) is the SAP HANA deployment option for which FlashArray (//X, //XL) is certified. SAP HANA TDI provides customers with additional flexibility to combine the best storage, compute, and networking components for their landscape.

For more information on SAP HANA TDI please review this document from SAP.

Hardware Requirements

The hardware requirements for SAP HANA can be found in SAP Note 2399995.

Operating System Requirements

SAP HANA can be deployed on Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES). For general information on how to configure each operating system, please review the SAP Notes below.

Further information on supported operating systems and revisions for SAP HANA can be found in SAP Note 2235581.

SAP HANA Certified Enterprise Storage

The current number of nodes certified for use with each FlashArray model can be found here.

The following connectivity is certified for use in production environments:

  • Fibre Channel (FCP)
  • File (NFS-Network File System)

Recommended Configuration for SAP HANA on FlashArray

File Services - NFS

For FlashArray File, the Requirements and Best Practices page details the requirements for using FlashArray file services.

The following configurations need to be performed on the relevant FlashArray to ensure a successful SAP HANA installation:

Networking

Ensure that at least one virtual interface (vif) for the file service is configured and enabled, and that it is reachable from each of the SAP HANA nodes.

The networking configuration can be seen under Settings -> Network in the FlashArray graphical user interface.


Multiple virtual interfaces can be created on the same ports.
When using multiple SAP HANA nodes on a single array, it is recommended to create multiple virtual interfaces and have each node connect to a separate virtual interface address. This results in improved load balancing over the different ports to the array.

When using 10Gb or 25Gb ports for the file services virtual interface with many SAP HANA nodes expected to share the same FlashArray, it is recommended to make more ports available on each controller, each with its own virtual interface.

The following table can be used as guidance for the requirements to meet the per-node KPIs:

SAP HANA Nodes   FlashArray ports and speed (per controller)   File service virtual interfaces required
8                4 x 10Gbps                                    4
8                2 x 25Gbps                                    2
8                1 x 100Gbps                                   1
16               8 x 10Gbps                                    8
16               4 x 25Gbps                                    4
16               2 x 100Gbps                                   2

For every additional 8 nodes, the requirements double.
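As an illustration of spreading nodes across virtual interfaces, the hypothetical /etc/fstab fragments below show two SAP HANA nodes mounting their persistence through two different virtual interface addresses on the same array; the addresses, export names, and mount points are placeholders only.

# Node 1 - /etc/fstab entry using the first file service virtual interface:
10.21.220.129:/HANA-data-01   /hana/data/<SID>/mnt00001   nfs   rsize=1048576,wsize=1048576,nconnect=8,hard,mountproto=tcp   0 0

# Node 2 - /etc/fstab entry using the second file service virtual interface:
10.21.220.130:/HANA-data-02   /hana/data/<SID>/mnt00002   nfs   rsize=1048576,wsize=1048576,nconnect=8,hard,mountproto=tcp   0 0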

File systems, directories and policies 

  • One file system should be created per instance.
  • A single /hana/shared directory should be created within that file system. Do not use the default (root) directory.
  • A /hana/data and /hana/log directory should be created per node in the instance.
    • The below example shows how a multi-host deployment would be configured.
    • Example: for a 3+1 scale out configuration, the following managed directories need to be created within the same file system:
      • 3 directories for HANA-Data 
      • 3 directories for HANA-Log
      • 4 directories for /usr/sap/<SID>
      • 1 directory for HANA-Shared
  • A policy should be created per instance and then attached to each directory. 
    • Rules should include the following:
      • Clients - only the SAP HANA nodes should be added as clients
      • Access - no-root-squash
      • Permission - rw
      • Version - NFSv3 or NFSv4, depending on requirements
    • Details should include the following:

The below image is an example of a policy configuration for a scale up instance:

[Image: example export policy configuration for a scale up instance]

(Host) Operating System Configuration

SUSE Linux Enterprise 

To optimize the deployment of SAP HANA on SUSE Linux Enterprise 12 or 15, kernel parameters need to be set manually when not using saptune (see SAP Note 3024356). This can be done by creating a file named 91-Pure-NFS-HANA.conf in the /etc/sysctl.d directory with the following contents:

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1

The parameter sunrpc.tcp_max_slot_table_entries needs to be set in /etc/modprobe.d/sunrpc.conf by adding the following line:

options sunrpc tcp_max_slot_table_entries=128
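The sysctl settings can be applied immediately without a reboot, and the sunrpc module option can be verified once the module is loaded. A minimal check, assuming the file names used above:

# Load all files under /etc/sysctl.d, including 91-Pure-NFS-HANA.conf:
sysctl --system

# After the sunrpc module has been (re)loaded - for example after a reboot or
# once NFS mounts are active - confirm the slot table setting took effect:
cat /sys/module/sunrpc/parameters/tcp_max_slot_table_entries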
Red Hat Enterprise Linux 

To optimize the deployment of SAP HANA on Red Hat Enterprise Linux, kernel parameters need to be set manually (for the use of the RHEL System Roles for SAP, see SAP Note 3024356). This can be done by creating a file named 91-Pure-NFS-HANA.conf in the /etc/sysctl.d directory with the following contents:

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1

The parameter sunrpc.tcp_max_slot_table_entries needs to be set in /etc/modprobe.d/sunrpc.conf by adding the following line:

options sunrpc tcp_max_slot_table_entries=128
Supported Configurations for RHEL and SUSE Enterprise Linux 

The following table sets out the supported configurations for SAP HANA deployments with FlashArray file services:

Deployment Type                                 NFS Version   HA/DR Provider
Single Host, Multi-Host without auto failover   v3, v4        Not required
Multi-host with auto failover                   v3, v4        Server-specific STONITH implementation required
Mount Options 

The following mount options are recommended for use with FlashArray file services - NFS:

<vif>:/<Export-name>   /mountpoint      nfs     rsize=1048576,wsize=1048576,nconnect=8,hard,mountproto=tcp      0  0
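Once the entry has been added to /etc/fstab, the export can be mounted and the negotiated NFS options checked. A short example, assuming the mount point used above:

# Mount everything listed in /etc/fstab and confirm the options negotiated
# with the FlashArray file service (rsize/wsize, nconnect, protocol version):
mount /mountpoint
nfsstat -m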

Fibre Channel Protocol

For FCP, the Linux Recommended Settings page provides the recommended settings to apply for both SLES and RHEL deployments.

If the operating system being used for SAP HANA is SLES for SAP Applications 12 SP4 or later, or RHEL for SAP Applications 8 or later, it is recommended to use the "none" I/O scheduler.

To enable "none" as the I/O scheduler in SLES, follow the steps below:

  1. Edit the /etc/default/grub file and add "scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y" to GRUB_CMDLINE_LINUX_DEFAULT
    1. GRUB_CMDLINE_LINUX_DEFAULT="splash=silent resume=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0-part2 quiet crashkernel=199M,high crashkernel=72M,low scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y"
  2. Run the grub2-mkconfig command to ensure the new bootloader options are applied on the next reboot.
    1. grub2-mkconfig -o /boot/grub2/grub.cfg
  3. Reboot the system.

To enable "none" as the I/O scheduler in RHEL, follow the steps below:

  1. Edit the /etc/default/grub file and add "scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y" to GRUB_CMDLINE_LINUX_DEFAULT
    1. GRUB_CMDLINE_LINUX_DEFAULT="splash=silent resume=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0-part2 quiet crashkernel=199M,high crashkernel=72M,low scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y"
  2. Run the grub2-mkconfig command to ensure the new bootloader options are applied on the next reboot.
    1. grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
  3. Reboot the system.

To check the "none" scheduler is available for each device look at the scheduler property:

cat /sys/block/sda/queue/scheduler
[none] mq-deadline kyber bfq

If using "none" as the I/O scheduler, ensure the /etc/udev/rules.d/99-pureudev configuration file is updated to apply the correct setting for each device:

ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="dm*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="none"

File System and Mount Options

The recommended file system to use with FlashArray Block devices is the XFS file system for both data and log volumes.

The only recommended mount option outside of the defaults is the use of noatime. 

/dev/mapper/<device> /mountpoint xfs noatime 0 0
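As a sketch of how a data or log volume might be prepared, assuming a multipath device name and mount point that are placeholders only:

# Create the XFS file system (defaults are sufficient), create the mount point,
# and mount with noatime as recommended above:
mkfs.xfs /dev/mapper/<device>
mkdir -p /hana/data/<SID>/mnt00001
mount -o noatime /dev/mapper/<device> /hana/data/<SID>/mnt00001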

SAP HANA Scale Up on FlashArray

Scale up deployments focus on a single resource domain in terms of computational power. Scaling the system is achieved by increasing the existing CPU power or adding more memory to a single server. This is the simplest and highest performing type of installation.

Scale up deployments can be converted to Scale Out by adding additional servers to the landscape.

The following volumes, with the recommended sizes, are created and mounted before installation:

Volume         Size                                                   Purpose
Installation   Minimum 1 x RAM                                        Contains run-time binaries, installation scripts and other support scripts. This also contains the SAP HANA configuration files, trace files and profiles.
Backups        (Size of Data + Size of Redo Log) x retention period   Regularly scheduled backups are written to this location.
Data           1 x RAM                                                SAP HANA persists a copy of the in-memory data to this location. This is achieved by writing changed data in the form of savepoints.
Redo Log       1/2 x RAM for systems with <512GB RAM; 512GB or larger for systems with >512GB RAM   Each transaction performed on the database is recorded to this location in the form of a redo log entry.
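The shell sketch below works through the sizing formulas above for a hypothetical scale up host with 2 TB of RAM and a single retained backup generation; the numbers are illustrative only.

# Illustrative sizing calculation for a 2 TB RAM scale up host:
RAM_GB=2048
DATA_GB=$RAM_GB                              # Data = 1 x RAM
LOG_GB=512                                   # RAM > 512 GB, so Redo Log = 512 GB (or larger)
INSTALL_GB=$RAM_GB                           # Installation = minimum 1 x RAM
RETENTION=1                                  # retained backup generations (example value)
BACKUP_GB=$(( (DATA_GB + LOG_GB) * RETENTION ))
echo "data=${DATA_GB}G log=${LOG_GB}G installation=${INSTALL_GB}G backup=${BACKUP_GB}G"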

Block based volumes on FlashArray are provided with a capacity limit at creation. 

File services based directories must have a Quota Policy applied to restrict capacity use. 

Below is an example of a 6 Terabyte capacity limit applied in the graphical user interface to all of the directories for a scale up instance:

[Image: 6 TB quota limit applied to the managed directories of a scale up instance]

SAP HANA Scale Out on FlashArray

Scale out deployments offer superior scale and additional high availability options in comparison to Scale Up deployments. Multiple nodes or servers are combined into a single system.  Scale out systems allow for each host to be given a specific role:

Host Role                  Description
Worker                     A worker host is used for database processing.
Standby                    A standby host is available for failover in a high availability environment.
Extended Storage Worker    Worker host for SAP HANA dynamic tiering.
Extended Storage Standby   Standby host for SAP HANA dynamic tiering.
ets_worker                 Worker host for SAP HANA accelerator for SAP ASE.
ets_standby                Standby host for SAP HANA accelerator for SAP ASE.
streaming                  Host for SAP HANA streaming analytics.
xs_worker                  Host for SAP HANA XS advanced runtime.
xs_standby                 Standby host for SAP HANA XS advanced.

Within this guide, only worker and standby hosts are used. Additional hosts can be added with other roles. 

Each worker host in the scale out landscape requires its own data and log volume. In the event of a worker failing and a standby being present, the failed worker volumes will be attached to the standby and the relevant services started to provide high availability.

The following volumes, with the recommended sizes, are created before installation:

Volume         Size                                                   Purpose
Installation   Minimum 1 x RAM of a worker host x number of hosts (see the section on the shared file system for Scale Out SAP HANA)   Contains run-time binaries, installation scripts and other support scripts. This also contains the SAP HANA configuration files, trace files, and profiles.
Data           1 x RAM                                                SAP HANA persists a copy of the in-memory data to this location. This is achieved by writing changed data in the form of savepoints.
Redo Log       1/2 x RAM for systems with <512GB RAM; 512GB or larger for systems with >512GB RAM   Each transaction performed on the database is recorded to this location in the form of a redo log entry.

Block based volumes on FlashArray are provided with a capacity limit at creation. 

File services based directories must have a Quota Policy applied to restrict capacity use. 

Below is an example of a 3 Terabyte capacity limit applied in the graphical user interface to all of the directories for a scale out instance:

[Image: 3 TB quota limit applied to the managed directories of a scale out instance]

SAP HANA Scale Out installation

When installing or expanding an SAP HANA system to be a scale out (distributed) system, some configuration considerations need to be taken into account.

In the event of a host failure in a distributed system, a standby host will take over the persistence of the failing host. This is done through the use of the SAP HANA Storage Connector API. 

Fibre Channel Protocol

Any volumes connected to multiple hosts in a scale out deployment must be connected using a Host Group.

In order for an SAP HANA scale out system to be installed using FCP, the global.ini file must first be configured:

  1. Identify the WWID of the block storage devices to be used for data and log volumes. This can be found using the commands "multipath -ll", "lsblk", or "udevadm info --query=all --name=<device name> | grep DM_NAME".
  2. Create or add (depending on the scenario) the following lines to the global.ini file (the below assumes a 2+1 configuration, with 2 worker nodes and one standby node):
[persistence]
basepath_datavolumes={data volume path - typically /hana/data/<SID>}
basepath_logvolumes={log volume path - typically /hana/log/<SID>}
use_mountpoints=yes
basepath_shared=yes
[storage]
ha_provider=hdb_ha.fcClient
partition_*_*__prtype=5
partition_1_data__wwid={wwid of data volume}
partition_1_log__wwid={wwid of log volume}
partition_2_data__wwid={wwid of data volume}
partition_2_log__wwid={wwid of log volume}

Note the use of "partition_*_*__prtype=5"; this informs the storage connector API that the reservation type to use for each volume is Write Exclusive.

  3. Ensure that all of the volumes to be used for log and data can be seen, but are not mounted, by each host in the SAP HANA scale out landscape.
  4. To perform the installation from the command line, as the root user pass in the location of the global.ini file (this assumes a new system installation):
./hdblcm --storage_cfg={path to global.ini}
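After installation, one optional way to confirm that the fcClient storage connector has placed SCSI-3 persistent reservations on the persistence volumes is sg_persist from the sg3_utils package; this tooling is an assumption rather than part of the SAP procedure, and the device name is a placeholder.

# List registered reservation keys and the active reservation on a data volume:
sg_persist --in --read-keys /dev/mapper/<data volume>
sg_persist --in --read-reservation /dev/mapper/<data volume>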

For more information on the SAP HANA Storage Connector API, please see SAP Note 1922823.

File Services - NFS 

With a shared file system configuration with NFS, a method to fence the failed node is imperative to prevent multiple hosts accessing the persistence volumes and potentially corrupting data.  Not all versions of NFS provide a proper fencing mechanism.  Starting with version 4, a lease-time based locking mechanism is available, which can be used for I/O fencing.  However, NFS version 3 and older versions do not support locking as required for high availability and therefore, achieve fencing capabilities using the STONITH (“shoot the other node in the head”) method.  Even in NFS version 4 environments, STONITH is commonly used to ensure that locks are always freed and to potentially speed up the failover process.   In our test environment we used STONITH with NFS versions 3 and 4.

The Storage Connector API was used for invoking the STONITH calls.  During failover, the SAP HANA master host calls the STONITH method of the custom Storage Connector with the hostname of the failed host as the input value.  SAP HANA’s behavior with an active custom Storage Connector is as follows:

  1. The master node pings the worker nodes and repeatedly does not receive an answer from one of the nodes within a certain timeout.
  2. The master node decides that the standby node shall take over the failed node’s role and initiates the failover.
  3. The master node calls the custom Storage Connector with the hostname of the failing node as a parameter.  The custom Storage Connector sends a power cycle request to its management entity, which in turn triggers a power cycle command to the failing node.
  4. Only after the custom Storage Connector returns without error is the standby node entitled to acquire the persistence of the failed node and proceed with the failover process.

SAP hardware partners and their storage partners are responsible for developing a corruption-safe failover solution.

In order for an SAP HANA scale out system to be installed using File Services - NFS, the global.ini file must first be configured. The following is an example where the IPMI tool is used with a Cisco UCS server:

1. Define an ha_provider and ha_provider_path. This location contains the STONITH scripts. Ensure the scripts are owned by the <sid>adm user:

host1:/hana/shared/HA # ll
total 5
-rwxrwxr-- 1 ps1adm sapsys 1344 Oct 25 03:20 ucs_ha_class.py
-rwxrwxr-- 1 ps1adm sapsys 3340 Nov  6 02:17 ucs_ipmi_reset.sh

2. Ensure the exports are all mounted. The below is an example of entries in /etc/fstab for a 3+1 configuration:

10.21.220.129:/HANA-data-01 /hana/data/PS1/mnt00001 nfs nfsvers=3,rsize=1048576,wsize=1048576,nconnect=8,hard,mountproto=tcp 0 0
10.21.220.129:/HANA-log-01 /hana/log/PS1/mnt00001 nfs nfsvers=3,rsize=1048576,wsize=1048576,nconnect=8,hard,mountproto=tcp 0 0
10.21.220.129:/HANA-data-02 /hana/data/PS1/mnt00002 nfs nfsvers=3,rsize=1048576,wsize=1048576,nconnect=8,hard,mountproto=tcp 0 0
10.21.220.129:/HANA-log-02 /hana/log/PS1/mnt00002 nfs nfsvers=3,rsize=1048576,wsize=1048576,nconnect=8,hard,mountproto=tcp 0 0
10.21.220.130:/HANA-data-03 /hana/data/PS1/mnt00003 nfs nfsvers=3,rsize=1048576,wsize=1048576,nconnect=8,hard,mountproto=tcp 0 0
10.21.220.130:/HANA-log-03 /hana/log/PS1/mnt00003 nfs nfsvers=3,rsize=1048576,wsize=1048576,nconnect=8,hard,mountproto=tcp 0 0
10.21.220.130:/HANA-data-04 /hana/data/PS1/mnt00004 nfs nfsvers=3,rsize=1048576,wsize=1048576,nconnect=8,hard,mountproto=tcp 0 0
10.21.220.130:/HANA-log-04 /hana/log/PS1/mnt00004 nfs nfsvers=3,rsize=1048576,wsize=1048576,nconnect=8,hard,mountproto=tcp 0 0
10.21.220.129:/HANA-Shared /hana/shared nfs noatime 0 0
10.21.220.129:/HANA-PS1-01 /usr/sap/PS1 nfs noatime 0 0

3. Create or add (depending on the scenario) the following lines to the global.ini file:

[persistence]
basepath_datavolumes = /hana/data/<SID>
basepath_logvolumes = /hana/log/<SID>
[storage]
ha_provider = ucs_ha_class
ha_provider_path = /hana/shared/HA
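The ucs_ha_class.py and ucs_ipmi_reset.sh scripts are specific to the management infrastructure in use; as an illustration only, a fencing script for a Cisco UCS standalone server would ultimately issue an IPMI power cycle along the lines of the following (the address and credentials are placeholders):

# Hypothetical power cycle of the failed node via its management controller:
ipmitool -I lanplus -H <cimc-address> -U <ipmi-user> -P <ipmi-password> chassis power cycle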

Shared file system for Scale Out SAP HANA

SAP HANA distributed (scale out) deployments require a shared filesystem exported to each node in the landscape for the installation to succeed.

Any generic NFS provider supporting NFS v3 and above can be used.

When using Purity 6.4.2 and later, FlashArray File Services can be used to provide a shared filesystem (NFS) for Scale Out SAP HANA deployments.

Set SAP HANA Parameters with hdbparam

SAP Note 2267798 sets out how a customized SAP HANA configuration file can be used during the installation procedure for both scale up and scale out deployments.

With HANA 2.0, the hdbparam tool has been deprecated (see SAP Note 2399079). The parameters can now be found in Configuration -> global.ini -> fileio.

To get the optimal usage of the storage subsystem, set the parameters to the following values:

fileio.num_completion_queues 8
fileio.num_submit_queues 8
fileio.size_kernel_io_queue 512
fileio.max_parallel_io_requests 64
fileio.min_submit_batch_size 16
fileio.max_submit_batch_size 64
fileio.async_write_submit_active on
fileio.async_write_submit_blocks all

fileio.async_read_submit on
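These parameters can also be set from the SQL interface once the system is running; a minimal example using hdbsql for one of the values (the instance number and credentials are placeholders):

# Set one of the fileio parameters at the SYSTEM layer and reconfigure online:
hdbsql -i 00 -u SYSTEM -p <password> \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('fileio','num_completion_queues') = '8' WITH RECONFIGURE"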

Competing Storage Utilization

Pure Storage FlashArray comes with Always-On QoS by default: there are no knobs and nothing to configure. Always-On QoS protects against noisy neighbors by preventing workloads from using more than their fair share of resources on the array and efficiently throttling the offenders. In addition, QoS limits in terms of either bandwidth or IOPS can be applied on a per volume basis to throttle individual workloads and ensure that no other workloads are impacted.

QoS limits can also be applied to a group of volumes, ensuring a consistent experience for all tenants of the array by offering one performance limit setting (MB/s) to configure for the group. This also ensures that tenants receive consistent performance as new tenants are added.

Operating other workloads on an array with SAP HANA installed is possible by using QoS rate limiting and ensuring that the volumes used for SAP HANA have all the IOPS and bandwidth required to complete any operations regardless of the other workloads on the storage system.

If QoS needs to be set on individual volumes, navigate to the Storage view in the FlashArray GUI, select a volume or volume group under the volume heading, and set the QoS rate limit for that volume.
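QoS limits can also be set outside the GUI. As an assumption about the Purity command line syntax (verify against the CLI reference for the Purity release in use), capping a competing volume might look like the following; the volume name and limit are placeholders:

# Assumed Purity CLI syntax: limit a non-HANA volume to 200 MB/s of bandwidth
# so that the SAP HANA volumes retain the throughput they need.
purevol setattr --bandwidth-limit 200M other-workload-vol01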