SAP HANA Implementation and Best Practices
FlashArray//X is certified for the SAP HANA Tailored Data Center Integration (TDI) deployment option. SAP HANA TDI provides customers with additional flexibility to combine the best aspects of storage, compute, and networking for their landscape.
For more information on SAP HANA TDI, please review this document from SAP.
Hardware Requirements
The hardware requirements for SAP HANA can be found in SAP Note 2399995.
Operating System Requirements
SAP HANA can be deployed on Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES). For general information on how to configure each operating system, please review the SAP Notes below.
- SAP Note 1944799 - SAP HANA Guidelines for SLES Operating System Installation
- SAP Note 2009879 - SAP HANA Guidelines for Red Hat Enterprise Linux (RHEL) Operating System
Further information on supported operating systems and revisions for SAP HANA can be found in SAP Note 2235581.
SAP HANA Certified Enterprise Storage
The current number of nodes certified for use with each FlashArray model can be found here.
The following connectivity is certified for use in production environments:
- Fibre Channel
The following connectivity can be used for development and test scenarios:
- iSCSI
- NVMe over Fabrics using RDMA over Converged Ethernet (RoCE)
- NVMe over Fabrics using Fibre Channel Protocol (FCP)
- NVMe over Fabrics using Transmission Control Protocol (TCP)
Recommended Configuration for SAP HANA on FlashArray
Operating System Settings
The Pure Storage SAP HANA Toolkit can now be used to configure the recommended best practices for SUSE and RHEL SAP HANA deployments.
The Linux Recommended Settings page provides the recommended settings to be applied for both SLES and RHEL deployments.
If the operating system being used for SAP HANA is SLES for SAP Applications 12 SP4 or later, or RHEL for SAP Applications 8 or later, it is recommended to use the "none" I/O scheduler.
To enable the "none" I/O scheduler in SLES, follow these steps:
- Edit the /etc/default/grub file and add "scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y" to GRUB_CMDLINE_LINUX_DEFAULT.
- GRUB_CMDLINE_LINUX_DEFAULT="splash=silent resume=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0-part2 quiet crashkernel=199M,high crashkernel=72M,low scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y"
- Run the grub2-mkconfig command to ensure the new bootloader options are applied on the next reboot.
- grub2-mkconfig -o /boot/grub2/grub.cfg
- Reboot the system.
To enable the "none" I/O scheduler in RHEL, follow these steps:
- Edit the /etc/default/grub file and add "scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y" to GRUB_CMDLINE_LINUX_DEFAULT.
- GRUB_CMDLINE_LINUX_DEFAULT="splash=silent resume=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0-part2 quiet crashkernel=199M,high crashkernel=72M,low scsi_mod.use_blk_mq=1 dm_mod.use_blk_mq=y"
- Run the grub2-mkconfig command to ensure the new bootloader options are applied on the next reboot.
- grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
- Reboot the system.
To check the "none" scheduler is available for each device look at the scheduler property:
cat /sys/block/sda/queue/scheduler [none] mq-deadline kyber bf
If using "none" as the I/O scheduler, ensure the /etc/udev/rules.d/99-pureudev configuration file is updated to apply the correct setting to each device:
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="none" ACTION=="add|change", KERNEL=="dm*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="none"
File System Settings
The recommended file system to use with FlashArray Block devices is the XFS file system for both data and log volumes.
When creating and mounting each file system, use the default mount options for both the data and log volumes formatted with XFS.
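As an illustrative sketch, the data and log volumes can be formatted and mounted with default XFS options as shown below; the multipath device names and mount points are placeholders and will differ in each environment:
# Format the data and log devices with default XFS options (device names are examples)
mkfs.xfs /dev/mapper/hana_data_01
mkfs.xfs /dev/mapper/hana_log_01
# Create the mount points and mount with default options
mkdir -p /hana/data/<SID> /hana/log/<SID>
mount /dev/mapper/hana_data_01 /hana/data/<SID>
mount /dev/mapper/hana_log_01 /hana/log/<SID>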
SAP HANA Scale Up on FlashArray
Scale up deployments focus on a single resource domain in terms of computational power. The system is scaled by increasing the existing CPU power or adding more memory to a single server. This is the simplest and highest performing type of installation.
Scale up deployments can be converted to Scale Out by adding additional servers to the landscape.
The following volumes, with the recommended sizes, are created and mounted before installation:
Volume | Size | Purpose |
---|---|---|
Installation | Installation size = Minimum 1 x RAM | Contains run-time binaries, installation scripts and other support scripts. This also contains the SAP HANA configuration files, trace files and profiles. |
Backups | Backup size = (Size of Data + Size of Redo Log) x retention period | Regularly scheduled backups are written to this location. |
Data | Data size = 1 x Amount of RAM | SAP HANA persists a copy of the in-memory data to this location. This is achieved by writing changed data in the form of savepoints. |
Redo Log | For systems with <512GB RAM, Redo Log size = 1/2 x RAM. For systems with >512GB RAM, Redo Log size = 512GB or larger. | Each transaction performed on the database is recorded to this location in the form of a redo log entry. |
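As an illustrative sketch, the volumes above can be created on the FlashArray and connected to the SAP HANA host with the Purity CLI; the volume names, sizes (assuming a host with 512GB RAM) and host name below are examples only and should be adjusted to the sizing rules in the table:
# Create the volumes on the FlashArray (names and sizes are examples for a 512GB RAM host)
purevol create --size 512G hana-install
purevol create --size 512G hana-data-01
purevol create --size 256G hana-log-01
purevol create --size 1T hana-backup
# Connect each volume to the SAP HANA host object (host name is an example)
purevol connect --host hana-node-01 hana-install
purevol connect --host hana-node-01 hana-data-01
purevol connect --host hana-node-01 hana-log-01
purevol connect --host hana-node-01 hana-backup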
SAP HANA Scale Out on FlashArray
Scale out deployments offer superior scale and additional high availability options in comparison to Scale Up deployments. Multiple nodes or servers are combined into a single system. Scale out systems allow for each host to be given a specific role:
Host Role | Description |
---|---|
Worker | A worker host is used for database processing. |
Standby | A standby host is available for failover in a high availability environment. |
Extended Storage Worker | Worker host for SAP HANA dynamic tiering. |
Extended Storage Standby | Standby host for SAP HANA dynamic tiering. |
ets_worker | Worker host for SAP HANA accelerator for SAP ASE. |
ets_standby | Standby host for SAP HANA accelerator for SAP ASE. |
streaming | Host for SAP HANA streaming analytics. |
xs_worker | Host for SAP HANA XS advanced runtime. |
xs_standby | Standby host for SAP HANA XS advanced. |
Within this guide, only worker and standby hosts are used. Additional hosts can be added with other roles.
Each worker host in the scale out landscape requires its own data and log volume. In the event of a worker host failing and a standby host being present, the failed worker's volumes are attached to the standby and the relevant services started to provide high availability.
The following volumes, with the recommended sizes, are created before installation:
Volume | Size | Purpose |
---|---|---|
Installation | Installation size = Minimum 1 x RAM of worker host x number of hosts. See the section on Shared Filesystem for Scale Out SAP HANA. | Contains run-time binaries, installation scripts and other support scripts. This also contains the SAP HANA configuration files, trace files, and profiles. |
Data | Data size = 1 x Amount of RAM | SAP HANA persists a copy of the in-memory data to this location. This is achieved by writing changed data in the form of savepoints. |
Redo Log | For systems with <512GB RAM, Redo Log size = 1/2 x RAM. For systems with >512GB RAM, Redo Log size = 512GB or larger. | Each transaction performed on the database is recorded to this location in the form of a redo log entry. |
SAP HANA Scale Out installation
When installing or expanding an SAP HANA system to be a scale out (distributed) system some configuration considerations need to be taken into account.
Any volumes connected to multiple hosts in a scale out deployment must be connected using a Host Group.
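As a minimal sketch using hypothetical host and volume names, a host group can be created and the shared volumes connected to it with the Purity CLI:
# Create a host group containing every host in the scale out landscape (host names are examples)
purehgroup create --hostlist hana-node-01,hana-node-02,hana-node-03 hana-hgroup
# Connect the data and log volumes to the host group so that all hosts can see them (volume names are examples)
purevol connect --hgroup hana-hgroup hana-data-01
purevol connect --hgroup hana-hgroup hana-log-01
purevol connect --hgroup hana-hgroup hana-data-02
purevol connect --hgroup hana-hgroup hana-log-02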
In the event of a host failure in a distributed system, a standby host will take over the persistence of the failing host. This is done through the use of the SAP HANA Storage Connector API.
In order for an SAP HANA scale out system to be installed, the global.ini file must first be configured:
- Identify the WWID of the block storage devices to be used for data and log volumes. This can be found using the commands "multipath -ll", "lsblk", or "udevadm info --query=all --name={device name} | grep DM_NAME" (a short example follows these steps).
- Create or add (depending on the scenario) the following lines to the global.ini file. The example below assumes a 2 + 1 configuration, with two worker nodes and one standby node:
[persistence]
basepath_datavolumes={data volume path - typically /hana/data/<SID>}
basepath_logvolumes={log volume path - typically /hana/log/<SID>}
use_mountpoints=yes
basepath_shared=yes
[storage]
ha_provider=hdb_ha.fcClient
partition_*_*__prtype=5
partition_1_data__wwid={wwid of data volume}
partition_1_log__wwid={wwid of log volume}
partition_2_data__wwid={wwid of data volume}
partition_2_log__wwid={wwid of log volume}
Note the use of "partition_*_*__prtype=5": this informs the Storage Connector API that the reservation type to use for each volume is Write Exclusive.
- Ensure that all of the volumes to be used for log and data can be seen but not mounted by each host in the SAP HANA scale out landscape.
- To perform the installation from the command line as the root user, pass in the location of the directory containing the global.ini file (this assumes a new system installation):
./hdblcm --storage_cfg={path to global.ini}
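The WWID lookup referred to in the first step might look like the following sketch, which assumes multipath devices and an example device name of dm-2:
# List multipath devices and their WWIDs; FlashArray volume WWIDs typically begin with 3624a937
multipath -ll
# Query udev for the device-mapper name (WWID) of a specific device (dm-2 is an example name)
udevadm info --query=all --name=/dev/dm-2 | grep DM_NAME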
For more information on the SAP HANA Storage Connector API, please see SAP Note 1922823.
Shared filesystem for Scale Out SAP HANA
SAP HANA distributed (scale out) deployments require a shared filesystem exported to each node in the landscape for the installation to succeed.
Any generic NFS provider supporting NFS v3 and above can be used.
When using Purity 6.4.2 and later, FlashArray File Services can be used to provide a shared filesystem (NFS) for Scale Out SAP HANA deployments.
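For illustration only, the shared NFS export could be mounted on each node along the following lines; the server address, export path and mount options are assumptions and should be taken from the actual environment:
# Mount the shared NFS export on every node in the landscape (server address and export path are examples)
mkdir -p /hana/shared
mount -t nfs -o vers=4.1,hard,timeo=600 192.0.2.10:/hana-shared /hana/shared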
Set SAP HANA Parameters with hdbparam
SAP Note 2267798 describes how a customized SAP HANA configuration file can be used during the installation procedure for both scale up and scale out deployments.
With SAP HANA 2.0 the hdbparam tool has been deprecated (see SAP Note 2399079). The parameters can now be found in the fileio section of global.ini (Configuration -> global.ini -> fileio).
To get the optimal usage of the storage subsystem, set the parameters to the following values:
Parameter | Value |
---|---|
fileio.num_completion_queues | 8 |
fileio.num_submit_queues | 8 |
fileio.size_kernel_io_queue | 512 |
fileio.max_parallel_io_requests | 64 |
fileio.min_submit_batch_size | 16 |
fileio.max_submit_batch_size | 64 |
fileio.async_write_submit_active | on |
fileio.async_write_submit_blocks | all |
fileio.async_read_submit | on |
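Once applied, these values appear in the fileio section of global.ini along the following lines (a sketch only; the parameters are normally changed through the SAP HANA administration tools rather than by editing the file directly):
[fileio]
num_completion_queues=8
num_submit_queues=8
size_kernel_io_queue=512
max_parallel_io_requests=64
min_submit_batch_size=16
max_submit_batch_size=64
async_write_submit_active=on
async_write_submit_blocks=all
async_read_submit=on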
Competing Storage Utilization
Pure Storage FlashArray//X comes with Always-On QoS by default: there are no knobs and nothing to configure. Always-On QoS protects against noisy neighbors by preventing workloads from using more than their fair share of resources on the array. In addition, QoS limits in terms of either bandwidth or IOPS can be applied on a per volume basis to throttle individual workloads and ensure that no other workloads are impacted.
QoS limits can also be applied to a group of volumes ensuring a consistent experience for all tenants of the array by offering one performance limit setting (MB/s) to configure for the group. This also ensures that tenants receive consistent performance as new tenants are added.
Operating other workloads on an //XR3 array with SAP HANA installed is possible by using QoS rate limiting and ensuring that the volumes used for SAP HANA have all the IOPS and bandwidth required to complete any operations regardless of the other workloads on the storage system.
If QoS needs to be set on individual volumes, navigate to the Storage view in the FlashArray GUI, select a volume or volume group under the Volumes heading, and set the QoS rate limit for that volume.