SAP HANA Implementation and Best Practices

Tailored Data Center Integration (TDI) is the SAP HANA deployment option for which FlashArray//X is certified. SAP HANA TDI gives customers the flexibility to combine the best aspects of storage, compute, and networking for their landscape.

For more information on SAP HANA TDI please review this document from SAP.

Hardware Requirements

The hardware requirements for SAP HANA can be found in SAP Note 2399995.

Operating System Requirements

SAP HANA can be deployed on Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES). For general information on how to configure each operating system, please review the SAP Notes below.

Further information on supported operating systems and revisions for SAP HANA can be found in SAP Note 2235581.

SAP HANA Certified Enterprise Storage

The current number of nodes certified for use with each FlashArray model can be found here.

The following connectivity is certified for use in production environments:

  • Fibre Channel

The following connectivity can be used for development and test scenarios:

  • iSCSI
  • NVMe over Fabrics using RDMA over Converged Ethernet (RoCE)

Recommended Configuration for SAP HANA on FlashArray

Operating System Settings

The Linux Recommended Settings page provides the recommended settings to be applied for both SLES and RHEL deployments.

If the operating system being used for SAP HANA is SLES for SAP Applications 12 SP4 or later, or RHEL for SAP Applications 8 or later, it is recommended to use the "none" I/O scheduler.

To enable "none" as the I/O scheduler in SLES, follow the steps below:

  1. Edit the /etc/default/grub file and add "scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y" to GRUB_CMDLINE_LINUX_DEFAULT
    1. GRUB_CMDLINE_LINUX_DEFAULT="splash=silent resume=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0-part2 quiet crashkernel=199M,high crashkernel=72M,low scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y"
  2. Run the grub2-mkconfig command to ensure the new bootloader options are applied on the next reboot.
    1. grub2-mkconfig -o /boot/grub2/grub.cfg
  3. Reboot the system.

To enable "none" as the I/O scheduler in RHEL, follow the steps below:

  1. Edit the /etc/default/grub file and add "scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y" to GRUB_CMDLINE_LINUX_DEFAULT
    1. GRUB_CMDLINE_LINUX_DEFAULT="splash=silent resume=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0-part2 quiet crashkernel=199M,high crashkernel=72M,low scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y"
  2. Run the grub2-mkconfig command to ensure the new bootloader options are applied on the next reboot.
    1. grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
  3. Reboot the system.
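
After the reboot, one quick check (optional, and not an SAP requirement) to confirm that the multi-queue options were applied is to inspect the kernel command line:

grep use_blk_mq /proc/cmdline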

To check that the "none" scheduler is available and selected for each device, look at the scheduler property:

cat /sys/block/sda/queue/scheduler
[none] mq-deadline kyber bfq

If using "none" as the I/O scheduler, ensure the /etc/udev/rules.d/99-pureudev configuration file is updated to apply the correct setting to each device:

ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="dm*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="none"
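
After updating the udev rules, they can be reloaded and re-triggered without a reboot. The sketch below assumes a SCSI device named sda, which will differ per system:

# Reload the udev rules and re-trigger block device events
udevadm control --reload-rules
udevadm trigger --subsystem-match=block
# The scheduler shown in square brackets is the one currently selected
cat /sys/block/sda/queue/scheduler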

File System Settings

The recommended file system to use with FlashArray Block devices is the XFS file system for both data and log volumes.

When creating and mounting each file system, use the default format and mount options for both the data and log volumes formatted with the XFS filesystem.
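
As an illustrative sketch (the multipath device aliases, SID and mount points below are placeholders and will differ per environment), the data and log file systems could be created and mounted as follows:

# Create XFS file systems with default options
mkfs.xfs /dev/mapper/hana-data
mkfs.xfs /dev/mapper/hana-log

# Mount with default options; example /etc/fstab entries
/dev/mapper/hana-data   /hana/data/HDB   xfs   defaults   0 0
/dev/mapper/hana-log    /hana/log/HDB    xfs   defaults   0 0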

SAP HANA Scale Up on FlashArray

Scale up deployments focus on a single resource domain in terms of computational power. The system is scaled by increasing the CPU power of, or adding more memory to, a single server. This is the simplest and highest performing type of installation.

Scale up deployments can be converted to Scale Out by adding additional servers to the landscape.

The following volumes, with the size recommendations shown, are created and mounted before installation:

  • Installation (size = minimum 1 x RAM): Contains run-time binaries, installation scripts and other support scripts. This also contains the SAP HANA configuration files, trace files and profiles.
  • Backups (size = (size of data + size of redo log) x retention period): Regularly scheduled backups are written to this location.
  • Data (size = 1 x amount of RAM): SAP HANA persists a copy of the in-memory data to this location by writing changed data in the form of savepoints.
  • Redo Log (size = 1/2 x RAM for systems with less than 512 GB of RAM; 512 GB or larger for systems with more than 512 GB of RAM): Each transaction performed on the database is recorded to this location in the form of a redo log entry.
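
For illustration only (the figures below are an assumed example, not a sizing reference), a scale up host with 1.5 TB of RAM and a four-day backup retention period would need approximately:

Data = 1 x 1.5 TB = 1.5 TB
Redo Log = 512 GB (the host has more than 512 GB of RAM)
Installation = minimum 1 x RAM = 1.5 TB
Backups = (1.5 TB + 0.5 TB) x 4 = 8 TB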

SAP HANA Scale Out on FlashArray

Scale out deployments offer superior scale and additional high availability options in comparison to scale up deployments. Multiple nodes, or servers, are combined into a single system. Scale out systems allow each host to be given a specific role:

  • Worker: A worker host is used for database processing.
  • Standby: A standby host is available for failover in a high availability environment.
  • Extended Storage Worker: Worker host for SAP HANA dynamic tiering.
  • Extended Storage Standby: Standby host for SAP HANA dynamic tiering.
  • ets_worker: Worker host for SAP HANA accelerator for SAP ASE.
  • ets_standby: Standby host for SAP HANA accelerator for SAP ASE.
  • streaming: Host for SAP HANA streaming analytics.
  • xs_worker: Host for SAP HANA XS advanced runtime.
  • xs_standby: Standby host for SAP HANA XS advanced.

Within this guide only worker and standby hosts are used, but additional hosts can be added with other roles. 

Each worker host in the scale out landscape requires its own data and log volume. In the event of a worker failing and a standby being present, the failed worker volumes will be attached to the standby and the relevant services started to provide high availability.

The following volumes, with the size recommendations shown, are created before installation:

  • Installation (size = minimum 1 x RAM of a worker host x number of hosts; see the section on Shared Filesystem for Scale Out SAP HANA): Contains run-time binaries, installation scripts and other support scripts. This also contains the SAP HANA configuration files, trace files and profiles.
  • Data (size = 1 x amount of RAM): SAP HANA persists a copy of the in-memory data to this location by writing changed data in the form of savepoints.
  • Redo Log (size = 1/2 x RAM for systems with less than 512 GB of RAM; 512 GB or larger for systems with more than 512 GB of RAM): Each transaction performed on the database is recorded to this location in the form of a redo log entry.
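
For illustration only (an assumed landscape, not a sizing reference), a 2 + 1 scale out system with three hosts of 1 TB RAM each would need approximately:

Installation (shared across all hosts) = 1 x 1 TB x 3 hosts = 3 TB
Data = 1 x 1 TB per worker host, so two 1 TB data volumes
Redo Log = 512 GB per worker host, so two 512 GB log volumes (each host has more than 512 GB of RAM)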

SAP HANA Scale Out installation

When installing or expanding an SAP HANA system to be a scale out (distributed) system some configuration considerations need to be taken into account.

Any volumes connected to multiple hosts in a scale out deployment must be connected using a Host Group.

In the event of a host failure in a distributed system, a standby host will take over the persistence of the failing host. This is done through the use of the SAP HANA Storage Connector API. 

In order for an SAP HANA scale out system to be installed, the global.ini file must first be configured:

1. Identify the WWID of each block storage device to be used for the data and log volumes. This can be found using the commands "multipath -ll", "lsblk" or "udevadm info --query=all --name={device name} | grep DM_NAME".
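
As a minimal illustration (the device and WWID shown are placeholders, not real values), the WWID appears on the first line of each "multipath -ll" entry:

multipath -ll
# Illustrative output line; Pure Storage volume WWIDs begin with 3624a9370
# 3624a9370xxxxxxxxxxxxxxxxxxxxxxxx dm-2 PURE,FlashArray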

2. Create or add (depending on the scenario) the following lines to the global.ini file. The below assumes a 2 + 1 configuration, with two worker nodes and one standby node:

[persistence]
basepath_datavolumes={data volume path - typically /hana/data/<SID>}
basepath_logvolumes={log volume path - typically /hana/log/<SID>}
use_mountpoints=yes
basepath_shared=yes
[storage]
ha_provider=hdb_ha.fcClient
partition_*_*__prtype=5
partition_1_data__wwid={wwid of data volume for worker 1}
partition_1_log__wwid={wwid of log volume for worker 1}
partition_2_data__wwid={wwid of data volume for worker 2}
partition_2_log__wwid={wwid of log volume for worker 2}

Note the use of "partition_*_*__prtype=5": this informs the storage connector API that the SCSI persistent reservation type to use for each volume is Write Exclusive, Registrants Only.
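
Once the system is installed and running, the reservation held on a device can be inspected with sg_persist from the sg3_utils package. This is an optional check and the device path shown is a placeholder:

# Read the current persistent reservation on the data volume of worker 1
sg_persist --in --read-reservation /dev/mapper/hana-data-1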

3. Ensure that all of the volumes to be used for data and log can be seen, but are not mounted, by each host in the SAP HANA scale out landscape.

4. To perform the installation from the command line as the root user, pass in the location of the global.ini file (this assumes a new system installation):

./hdblcm --storage_cfg={path to global.ini}

For more information on the SAP HANA Storage Connector API, please see SAP Note 1922823.

Shared filesystem for Scale Out SAP HANA

SAP HANA distributed (scale out) deployments require a shared filesystem exported to each node in the landscape for the installation to succeed.

Any generic NFS provider supporting NFS v3 and above can be used.

Using Windows File Services (WFS), as a part of Purity Run, an NFS share can be exported to each SAP HANA node without the need for any additional hardware. Further information on Purity Run and WFS can be found here.

Setting up an NFS Share for SAP HANA using WFS

In order for an NFS share exported from a Windows Server to function correctly with SAP HANA installations, the correct permissions need to be in place. These permissions link an Active Directory user, with full access to a directory in Windows, to both a group identifier (GID) and a user identifier (UID).

There is no need to provide LDAP authentication capabilities in Red Hat Enterprise Linux or SUSE Enterprise Linux.

Create a Group in Active Directory

Typically the user group created during the installation of an SAP HANA instance is called "sapsys" with the default GID of 79. The GID of the group can be changed but it is important to know before installation what this GID will be.

In Red Hat Enterprise Linux, it may be required that the "sapadm" user is also given permissions on the NFS share. The observed UID of this user is typically 996. An Active Directory user for sapadm should also be created with the same permissions and workflow demonstrated below for the "sidadm" user.
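
Before creating the mappings, the GID and UIDs to be used can be confirmed on the Linux side with standard tools; "hdbadm" below is only a placeholder for the actual <sid>adm user:

# Show the GID of the sapsys group
getent group sapsys
# Show the UIDs of the <sid>adm and sapadm users
id hdbadm
id sapadm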

Connect to the domain controller and open the Active Directory Users and Computers management console.

Right click on "Users" in the Domain tree and select "New" and then "Group".

In the dialog which appears give the group a name and ensure the Group scope is set to "Global" and Group Type is set to "Security".

Once the group has been created, a user needs to be created for the instance being installed on that system.

Right click on "Users" in the Domain tree and select "New" and then "User".

In the dialog which displays give the user a name and username.

Give the user a password and set the password to never expire.

Do not add the newly created user to the sapsys group. This will be done automatically during the share creation process later.

Setup NFS in File and Storage Services

The NFS service in Windows File Services needs to be set up to be able to map credentials in the domain to an NFS GID and UID.

On the Windows File Services Instance, open server manager.

Navigate to "File and Storage Services" and right click on the file server that will be/is hosting the NFS share for SAP HANA. When the dialog appears select "NFS Settings".

In the WFS NFS dialog, set the relevant protocol versions (version 3 and version 4.1 recommended), set the NLM grace period to 45 seconds, the lease period to 90 seconds and the NFS 4.1 grace period to 180 seconds.

In identity mapping set the identity mapping source. In this example, Active Directory Domain Services are being used.

Return to File and Storage Services, right click on the file server that will be/is hosting the NFS share for SAP HANA and select "NFS Identity Mapping". 

In the WFS NFS Identity Mapping dialog, select the "New..." button for Mapped groups.

Browse for the sapsys group created in step 1.

Give the group the same GID as expected to be used in the installation of SAP HANA.

Return to the WFS NFS Identity Mapping dialog and select the "New..." button for Mapped Users.

Browse for the user created in Step 1.

Give the user the same GID as set out for the group name and the expected UID of the user for the SAP HANA installation.

Create NFS Share and Set the Correct Permissions

A single volume and drive should be presented for the NFS share. The use of drive letters is recommended but mount points are also possible.

In File and Storage Services, navigate to the Shares menu and then right click and select "Create Share".

In the New Share Wizard select "NFS Share - Quick".

For the share location, select a volume that has been specifically set aside for the NFS share. In this instance we have E: set aside as a drive letter for NFS shares.

Give the share a name and check the local path to the share and the remote path to the share are correct.

In authentication, ensure "No server authentication (AUTH_SYS)", "Enable unmapped user access" and "Allow unmapped user access by UID/GID" are checked.

In the Share Permissions dialog select "Add...".

Set the relevant permissions for the share.

The example below does not show the exact permissions required, as these will vary by use case; the only settings required as shown below are the language encoding and share permissions.

Check the permissions are correct.

In the permissions dialog, which is where the permissions for the directory in Windows Server will be set, select "Customize permissions".

In the dialog which appears, add a new Permission entry by selecting "Add".

In the further dialog which appears select "Select a principal".

Use the sapsys group as the principal; all users who are members of the group will inherit its permissions.

Set the basic permissions for Full Control.

Also, add "Everyone" as a permission with Full control.

Review all of the permissions within the dialog.

Other Options for Shared Filesystem

Another possible architecture for the shared SAP HANA directory is to build an on-premises NFS service using any Linux distribution, with the exported NFS mount being based on storage hosted on FlashArray. For high availability of the NFS service, a cluster can be created. Red Hat details how to do this here.

FlashBlade can also be used to export an NFS mountpoint to each SAP HANA node.  
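
Whichever NFS provider is used, each SAP HANA node then mounts the exported share. The sketch below uses placeholder server and path names:

# Create the mount point and mount the share on every node
mkdir -p /hana/shared
mount -t nfs -o vers=4.1 nfs-server:/hana_shared /hana/shared

# Example /etc/fstab entry to make the mount persistent
nfs-server:/hana_shared   /hana/shared   nfs   vers=4.1   0 0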

Set SAP HANA Parameters with hdbparam

SAP Note 2267798 sets out how a customized SAP HANA configuration file can be used to set parameters during the installation procedure for both scale up and scale out deployments.

With SAP HANA 2.0 the hdbparam tool has been deprecated (see SAP Note 2399079). The parameters can now be found in the configuration under global.ini -> fileio.

To get the optimal usage of the storage subsystem, set the parameters to the following values:

fileio.num_completion_queues 8
fileio.num_submit_queues 8
fileio.size_kernel_io_queue 512
fileio.max_parallel_io_requests 64
fileio.min_submit_batch_size 16
fileio.max_submit_batch_size 64
fileio.async_write_submit_active on
fileio.async_write_submit_blocks all
fileio.async_read_submit on
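
In global.ini these parameters sit in the [fileio] section without the "fileio." prefix. A sketch of the resulting section (the layer at which it is applied will depend on the deployment) looks like this:

[fileio]
num_completion_queues=8
num_submit_queues=8
size_kernel_io_queue=512
max_parallel_io_requests=64
min_submit_batch_size=16
max_submit_batch_size=64
async_write_submit_active=on
async_write_submit_blocks=all
async_read_submit=on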

Competing Storage Utilization

Pure Storage FlashArray//X comes with Always-On QoS by default. With no knobs and nothing to configure, Always-On QoS prevents workloads from using more than their fair share of resources on the array by efficiently throttling noisy neighbors. In addition, QoS limits in terms of either bandwidth or IOPS can be applied on a per volume basis to throttle individual workloads and ensure that no other workloads are impacted.

QoS limits can also be applied to a group of volumes ensuring a consistent experience for all tenants of the array by offering one performance limit setting (MB/s) to configure for the group. This also ensures that tenants receive consistent performance as new tenants are added.

Operating other workloads on a FlashArray//X R3 array with SAP HANA installed is possible by using QoS rate limiting and ensuring that the volumes used for SAP HANA have all the IOPS and bandwidth required to complete any operations regardless of the other workloads on the storage system.

To set QoS on an individual volume, navigate to the Storage view in the FlashArray GUI, select a volume or volume group under the volume heading, and set the QoS rate limit for it.
