Pure Technical Services

SAP HANA Implementation and Best Practices

Tailored Data Center Integration (TDI) is the SAP HANA deployment option for which FlashArray//X is certified. SAP HANA TDI gives customers additional flexibility to combine the best storage, compute, and networking options for their landscape.

For more information on SAP HANA TDI please review this document from SAP.

Hardware Requirements

The hardware requirements for SAP HANA can be found in SAP Note 2399995.

Operating System Requirements

SAP HANA can be deployed on Red Hat Enterprise Linux (RHEL) and SUSE Enterprise Linux (SLES). For general information on how to configure each operating system, please review the below SAP Notes.

Further information on supported operating systems and revisions for SAP HANA can be found in SAP Note 2235581.

SAP HANA Certified Enterprise Storage

The current number of nodes certified for use with each FlashArray model can be found here.

The following connectivity is certified for use in production environments:

  • Fibre Channel

The following connectivity can be used for development and test scenarios:

  • iSCSI
  • NVMe over fabrics using RDMA over converged Ethernet (RoCE).

Recommended Configuration for SAP HANA on FlashArray

Operating system settings

The Linux Recommended Settings page provides the recommended settings to be applied for both SLES and RHEL deployments.

File System Settings

The recommended file system to use with FlashArray block devices is XFS, for both data and log volumes.

When creating and mounting each file system, use the default mount options for both the log and data volumes formatted with XFS.
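As a minimal sketch, assuming multipath devices named /dev/mapper/hana_data and /dev/mapper/hana_log (the device names and mount points are illustrative), the file systems can be created and mounted as follows:

```shell
# Format the data and log volumes with XFS (default format options).
mkfs.xfs /dev/mapper/hana_data
mkfs.xfs /dev/mapper/hana_log

# Create the mount points and mount using the default mount options.
mkdir -p /hana/data /hana/log
mount -t xfs /dev/mapper/hana_data /hana/data
mount -t xfs /dev/mapper/hana_log /hana/log
```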

SAP HANA Scale Up on FlashArray

Scale up deployments focus on a single resource domain in terms of computational power. Scaling of the system is achieved by increasing the existing CPU power or adding more memory to a single server. This is the simplest and highest-performing type of installation.

Scale up deployments can be converted to Scale Out by adding additional servers to the landscape.

The following volumes and size recommendations are created and mounted before installation:

  • Installation: size = minimum 1 x RAM. Contains run-time binaries, installation scripts and other support scripts, as well as the SAP HANA configuration files, trace files and profiles.
  • Backups: size = (size of data + size of redo log) x retention period. Regularly scheduled backups are written to this location.
  • Data: size = 1 x RAM. SAP HANA persists a copy of the in-memory data to this location by writing changed data in the form of savepoints.
  • Redo Log: for systems with less than 512 GB of RAM, size = 1/2 x RAM; for systems with 512 GB of RAM or more, size = 512 GB or larger. Each transaction performed on the database is recorded to this location in the form of a redo log entry.
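The sizing rules above can be checked with a small calculation; the sketch below assumes an example host with 1024 GB of RAM:

```shell
ram_gb=1024                  # example: RAM of the SAP HANA host, in GB

data_gb=$ram_gb              # Data volume: 1 x RAM
if [ "$ram_gb" -lt 512 ]; then
    log_gb=$((ram_gb / 2))   # under 512 GB of RAM: Redo Log = 1/2 RAM
else
    log_gb=512               # 512 GB of RAM or more: Redo Log = 512 GB or larger
fi

echo "Data volume:     ${data_gb} GB"
echo "Redo Log volume: ${log_gb} GB"
```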

SAP HANA Scale Out on FlashArray

Scale out deployments offer superior scale and additional high availability options in comparison to Scale Up deployments. Multiple nodes or servers are combined into a single system.  Scale out systems allow for each host to be given a specific role:

  • Worker: a worker host is used for database processing.
  • Standby: a standby host is available for failover in a high availability environment.
  • Extended Storage Worker: worker host for SAP HANA dynamic tiering.
  • Extended Storage Standby: standby host for SAP HANA dynamic tiering.
  • ets_worker: worker host for SAP HANA accelerator for SAP ASE.
  • ets_standby: standby host for SAP HANA accelerator for SAP ASE.
  • streaming: host for SAP HANA streaming analytics.
  • xs_worker: host for the SAP HANA XS advanced runtime.
  • xs_standby: standby host for SAP HANA XS advanced.

Within this guide only worker and standby hosts are used, but additional hosts can be added with other roles. 

Each worker host in the scale out landscape requires its own data and log volume. In the event of a worker host failing while a standby is present, the failed worker's volumes will be attached to the standby and the relevant services started, providing high availability.

The following volumes and size recommendations are created before installation:

  • Installation: size = minimum 1 x RAM of a worker host x number of hosts (see the section on Shared Filesystem for Scale Out SAP HANA). Contains run-time binaries, installation scripts and other support scripts, as well as the SAP HANA configuration files, trace files and profiles.
  • Data: size = 1 x RAM. SAP HANA persists a copy of the in-memory data to this location by writing changed data in the form of savepoints.
  • Redo Log: for systems with less than 512 GB of RAM, size = 1/2 x RAM; for systems with 512 GB of RAM or more, size = 512 GB or larger. Each transaction performed on the database is recorded to this location in the form of a redo log entry.

SAP HANA Scale Out installation

When installing or expanding an SAP HANA system as a scale out (distributed) system, some additional configuration is required.

Any volumes connected to multiple hosts in a Scale Out deployment must be connected using a Host Group.

In the event of a host failure in a distributed system, a standby host will take over the persistence of the failing host. This is done through the use of the SAP HANA Storage Connector API. 

In order for an SAP HANA scale out system to be installed, the global.ini file must first be configured:

1. Identify the WWID of the block storage devices to be used for the data and log volumes. This can be found using the commands "multipath -ll", "lsblk", or "udevadm info --query=all --name={device name} | grep DM_NAME".
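For example, assuming a multipath device named /dev/mapper/hana_data (the device name is illustrative), the WWID can be located as follows:

```shell
# List all multipath devices along with their WWIDs and paths.
multipath -ll

# Show the block device topology, including device-mapper names.
lsblk

# Query udev for a specific device; DM_NAME carries the WWID-based name.
udevadm info --query=all --name=/dev/mapper/hana_data | grep DM_NAME
```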

2. Create or add (depending on the scenario) the following lines to the global.ini file (the below assumes a 2 + 1 configuration, with two worker nodes and one standby node):

[persistence]
basepath_datavolumes={data volume path - typically /hana/data/<SID>}
basepath_logvolumes={log volume path - typically /hana/log/<SID>}

[storage]
ha_provider=hdb_ha.fcClient
partition_*_*__prtype=5
partition_1_data__wwid={wwid of data volume}
partition_1_log__wwid={wwid of log volume}
partition_2_data__wwid={wwid of data volume}
partition_2_log__wwid={wwid of log volume}

Note the use of "partition_*_*__prtype=5": this informs the storage API connector that the reservation type to use for each volume is Write Exclusive.

3. Ensure that all of the volumes to be used for log and data can be seen but not mounted by each host in the SAP HANA scale out landscape.

4. To perform the installation from the command line, as the root user, pass in the location of the global.ini file (this assumes a new system installation):

./hdblcm --storage_cfg={path to global.ini}

For more information on the SAP HANA Storage API connector, please see SAP Note 1922823.

Shared filesystem for Scale Out SAP HANA

SAP HANA distributed (scale out) deployments require a shared filesystem exported to each node in the landscape for the installation to succeed.

Any generic NFS provider supporting NFS v3 and above can be used.
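Once a share has been exported, each SAP HANA node mounts it before installation; a sketch, assuming an export named /hana_shared on a server called nfsserver (both names are illustrative):

```shell
# On each SAP HANA node, create the mount point for the shared filesystem.
mkdir -p /hana/shared

# Mount the exported share (NFS v4.1 shown here; v3 is also supported).
mount -t nfs -o vers=4.1 nfsserver:/hana_shared /hana/shared

# Optional /etc/fstab entry so the share is mounted at boot:
# nfsserver:/hana_shared  /hana/shared  nfs  vers=4.1,defaults  0 0
```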

Using Windows File Services (WFS), as a part of Purity Run, an NFS share can be exported to each SAP HANA node without the need for any additional hardware. Further information on Purity Run and WFS can be found here.

Setting up an NFS Share for SAP HANA using WFS

In order for an NFS share exported from a Windows Server to function correctly with SAP HANA installations, the correct permissions need to be in place. These permissions link an Active Directory user with full access to a directory in Windows to both a group identifier (GID) and user identifier (UID).

There is no need to provide LDAP authentication capabilities in Red Hat Enterprise Linux or SUSE Enterprise Linux.

Create a Group in Active Directory

Typically the user group created during the installation of an SAP HANA instance is called "sapsys" with the default GID of 79. The GID of the group can be changed but it is important to know before installation what this GID will be.

In Red Hat Enterprise Linux, it may be required that the "sapadm" user is also given permissions on the NFS share. The observed UID of this user is typically 996. An Active Directory user for sapadm should also be created with the same permissions and workflow demonstrated below for the "sidadm" user.

Connect to the domain controller and open the Active Directory Users and Computers management console.

Right click on "Users" in the Domain tree and select "New" and then "Group".

In the dialog which appears give the group a name and ensure the Group scope is set to "Global" and Group Type is set to "Security".

Once the group has been created, a user needs to be created for the instance being installed on that system.

Right click on "Users" in the Domain tree and select "New" and then "User".

In the dialog which displays give the user a name and username.

Give the user a password and set the password to never expire.

Do not add the newly created user to the sapsys group. This will be done automatically during the share creation process later.

Setup NFS in File and Storage Services

The NFS service in Windows File Services needs to be set up to be able to map credentials in the domain to an NFS GID and UID.

On the Windows File Services Instance, open server manager.

Navigate to "File and Storage Services" and right click on the file server that will be/is hosting the NFS share for SAP HANA. When the dialog appears select "NFS Settings".

In the WFS NFS dialog set the relevant protocol versions (version 3 and Version 4.1 recommended), set the NLM grace period to 45 seconds, Lease period to 90 seconds and NFS 4.1 grace period to 180 seconds. 

In identity mapping set the identity mapping source. In this example, Active Directory Domain Services are being used.

Return to File and Storage Services, right click on the file server that will be/is hosting the NFS share for SAP HANA and select "NFS Identity Mapping". 

In the WFS NFS Identity Mapping dialog, select the "New..." button for Mapped groups.

Browse for the sapsys group created in step 1.

Give the group the same GID as expected to be used in the installation of SAP HANA.

Return to the WFS NFS Identity Mapping dialog and select the "New..." button for Mapped Users.

Browse for the user created in Step 1.

Give the user the same GID as set out for the group name and the expected UID of the user for the SAP HANA installation.

Create NFS Share and Set the Correct Permissions

A single volume and drive should be presented for the NFS share. The use of drive letters is recommended but mount points are also possible.

In File and Storage Services, navigate to the Shares menu and then right click and select "Create Share".

In the New Share Wizard select "NFS Share - Quick".

For the share location, select a volume that has been specifically set aside for the NFS share. In this instance we have E: set aside as a drive letter for NFS shares.

Give the share a name and check the local path to the share and the remote path to the share are correct.

In authentication, ensure "No Server authentication (AUTH_SYS)", "Enable unmapped user access" and "Allow unmapped user access by UID/GID" are checked.

In the Share Permissions dialog select "Add...".

Set the relevant permissions for the share.

The example below does not show the exact permissions required, as these vary by use case; only the language encoding and share permissions need to match what is shown.

Check the permissions are correct.

In the permissions dialog, where the permissions for the directory in Windows Server are set, select "Customize permissions".

In the dialog which appears, add a new Permission entry by selecting "Add".

In the further dialog which appears select "Select a principal".

Use the sapsys group as the principal; all users who are members of the group will inherit its permissions.

Set the basic permissions for Full Control.

Also, add "Everyone" as a permission with Full control.

Review all of the permissions within the dialog.

Other Options for Shared Filesystem

Another possible architecture for the shared SAP HANA directory is to build an on-premises NFS service using any Linux distribution, with the exported NFS mount being based on storage hosted on FlashArray. For high availability of the NFS service, a cluster can be created. Red Hat documents how to do this here.

FlashBlade can also be used to export an NFS mountpoint to each SAP HANA node.  

Set SAP HANA Parameters with hdbparam

SAP Note 2267798 sets out how a customized SAP HANA configuration file can be used during the installation procedure for both scale up and scale out deployments.

With HANA 2.0 the hdbparam tool has been deprecated (see SAP Note 2399079). The parameters can now be found in Configuration -> global.ini -> fileio.

To get the optimal usage of the storage subsystem, set the parameters to the following values:

fileio.num_completion_queues 8
fileio.num_submit_queues 8
fileio.size_kernel_io_queue 512
fileio.max_parallel_io_requests 64
fileio.min_submit_batch_size 16
fileio.max_submit_batch_size 64
fileio.async_write_submit_active on
fileio.async_write_submit_blocks all
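Since hdbparam is deprecated, these values can also be applied with SQL through hdbsql; a sketch, where the instance number and credentials are placeholders and the statement is repeated for each parameter listed above:

```shell
# Set one of the fileio parameters at the SYSTEM layer of global.ini.
# The instance number (00) and credentials are placeholders.
hdbsql -i 00 -u SYSTEM -p '<password>' \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') \
   SET ('fileio', 'num_completion_queues') = '8' WITH RECONFIGURE"
```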



Competing Storage Utilization

Pure Storage FlashArray//X comes with Always-On QoS by default: with no knobs and nothing to configure, it protects against noisy neighbors by preventing workloads from using more than their fair share of resources on the array. QoS limits, in terms of either bandwidth or IOPS, can also be applied on a per volume basis to throttle individual workloads and ensure that no other workloads are impacted.

QoS limits can also be applied to a group of volumes ensuring a consistent experience for all tenants of the array by offering one performance limit setting (MB/s) to configure for the group. This also ensures that tenants receive consistent performance as new tenants are added.

Operating other workloads on an //XR3 array with SAP HANA installed is possible by using QoS rate limiting and ensuring that the volumes used for SAP HANA have all the IOPS and bandwidth required to complete any operations regardless of the other workloads on the storage system.

If individual volumes need QoS limits set, navigate to the Storage view in the FlashArray GUI, select a volume or volume group under the Volumes heading, and set the QoS rate limit for that volume.