Pure Technical Services

SAP HANA on VMware vSphere

The SAP HANA Platform can be virtualized on VMware vSphere; this is known as a "vHANA" deployment. Virtualizing SAP HANA allows organizations to leverage an industry-standard virtualization platform offering fault tolerance, cost optimization, and easy provisioning options. 

SAP HANA on vSphere is supported only with the vSphere Standard and vSphere Enterprise Plus editions. 

Further information can be found at the following locations:

Ensure that the FlashArray VMware Best Practices have been followed prior to setting up any virtual machine for the SAP HANA deployment. 

SAP HANA on VMware vSphere with FlashArray

Virtual Machine Best Practice Configuration 

  • When provisioning datastores for SAP HANA, create separate datastores for the data and log volumes. Ensure the volumes backing the datastores are connected to a host group containing each ESXi host that can run the vHANA virtual machine. 

Example: There are 4 SAP HANA virtual machines in a vSphere cluster made up of 8 ESXi hosts. 

8 volumes are created (a data volume and a log volume for each virtual machine) and connected to the host group containing the ESXi hosts. 
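The example above can be expressed as FlashArray CLI calls. This is a minimal sketch: the volume names, sizes, and the host-group name are hypothetical, and the purevol/purehgroup subcommands should be verified against your Purity CLI version before use.

```shell
# Generate the FlashArray CLI calls for the 4 vHANA VMs in the example
# (one data and one log volume each, connected to a shared host group).
# All names and sizes are illustrative only.
gen_vol_cmds() {
    for vm in vHANA01 vHANA02 vHANA03 vHANA04; do
        echo "purevol create --size 1T ${vm}-data"
        echo "purevol create --size 512G ${vm}-log"
        echo "purevol connect --hgroup ESXi-Cluster ${vm}-data"
        echo "purevol connect --hgroup ESXi-Cluster ${vm}-log"
    done
}
gen_vol_cmds
```

Connecting each volume to the host group (rather than to individual hosts) ensures every ESXi host in the cluster sees the same volumes, which vMotion and HA require.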

This process assumes the deployment is using Raw Device Mappings or VMFS. 

For vVol based deployments, all of the virtual disks will be added to the vVol datastore. For more information about vVols, see Virtual Volumes - vVols.


Follow the same sizing guidelines as physical HANA for the log and data volumes, but apply the sizing to the virtual disks (VMDKs) and provision enough VMFS space to accommodate the required capacity.  
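As an illustration of that sizing rule, the sketch below follows SAP's commonly published rule of thumb (data volume roughly 1.2 x memory; log volume 0.5 x memory, capped at 512 GB) — verify the figures against the current SAP HANA storage requirements guidance before provisioning.

```shell
# Rule-of-thumb volume sizing for a vHANA VM. Figures follow SAP's
# published guidance; confirm against the current SAP HANA storage
# requirements document for your HANA version.
RAM_GB=512                          # memory assigned to the virtual machine

DATA_GB=$(( RAM_GB * 12 / 10 ))     # data volume: roughly 1.2 x memory
LOG_GB=$(( RAM_GB / 2 ))            # log volume: 0.5 x memory ...
if [ "$LOG_GB" -gt 512 ]; then LOG_GB=512; fi   # ... capped at 512 GB

echo "data volume: ${DATA_GB}G, log volume: ${LOG_GB}G"
```

For a 512 GB VM this yields roughly a 614 GB data volume and a 256 GB log volume; the VMFS datastores must then be sized to hold those VMDKs with headroom.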

Each volume is then formatted with VMFS and a datastore is created on it. 


  • Use dedicated paravirtualized SCSI (PVSCSI) controllers for the OS, data, and log volumes to separate the disk I/O streams.

Example: For a single virtualized SAP HANA system there is a hard disk for the operating system (Hard Disk 1), a hard disk for the SAP HANA data volume (Hard Disk 2), and a hard disk for the SAP HANA log volume (Hard Disk 3).

Upon initial setup the virtual machine should already have a hard disk and a SCSI controller. 


Add 2 additional SCSI controllers and ensure they are of type "VMware Paravirtual":



Add new hard disks to the virtual machine for the log and data volumes. Set the Location of each hard disk to the relevant datastore, ensure the Virtual Device Node is set to one of the new SCSI controllers, and set the Disk Provisioning to "Thick Provision Eager Zeroed" to avoid the first-write penalty. 

Data Volume: 


Log Volume: 


Operating System Best Practice Configuration 

  • Set the kernel parameter "elevator=noop" for virtual machines. This avoids unnecessary I/O scheduling overhead in the guest, since I/O is already scheduled by the hypervisor and the array.

For SLES see SLES KB 7009616 (applies to all versions of SLES).

For RHEL see DOC 5428.
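As a sketch of how this is typically persisted on a GRUB2-based system (paths are the SLES/RHEL defaults — adjust for your distribution, and note that newer blk-mq kernels expose "none" rather than "noop"):

```shell
# Show the I/O schedulers available to a disk; the active one is in brackets.
cat /sys/block/sdb/queue/scheduler

# Persist elevator=noop: append it to GRUB_CMDLINE_LINUX_DEFAULT in
# /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="... elevator=noop"
# then rebuild the GRUB configuration and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg
```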

  • Format volumes with the xfs filesystem and use UUID identifiers to auto mount at system startup.

To identify device names, use lsblk:

vHANA01:~ # lsblk
sda      8:0    0  960G  0 disk
├─sda1   8:1    0    4M  0 part
├─sda2   8:2    0   60G  0 part /
└─sda3   8:3    0    2G  0 part [SWAP]
sdb      8:16   0    1T  0 disk
sdc      8:32   0  512G  0 disk
sr0     11:0    1 1024M  0 rom 

To format the filesystem use the "mkfs.xfs" command with the device location:  

Do not create a partition; instead, let the filesystem consume the entire disk.

vHANA01:~ # mkfs.xfs /dev/sdb
meta-data=/dev/sdb               isize=512    agcount=4, agsize=67108864 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=268435456, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=131072, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
vHANA01:~ # mkfs.xfs /dev/sdc
meta-data=/dev/sdc               isize=512    agcount=4, agsize=33554432 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=134217728, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=65536, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

To identify the filesystem UUID for each device, use "blkid". 

Mounting the devices at startup by filesystem UUID ensures that if the virtual machine is ever migrated to another datastore or to virtual volumes (vVols), a change in device names does not become an issue.  

/dev/sda1: PARTUUID="1d27383e-c2f6-4b21-80d0-4ac2398fba3a"
/dev/sda2: UUID="999955a2-f616-4801-a843-76a0964d47af" TYPE="xfs" PARTUUID="2d9d1eb6-98c0-47cd-8f0e-bafcd4e87769"
/dev/sda3: UUID="2473f274-f269-4f06-b2d3-d933953d0641" TYPE="swap" PARTUUID="f08e0739-bd75-4509-ad59-eb2e459574d5"
/dev/sdc: UUID="133a1bf7-caa5-4424-bda5-0db3e23e568f" TYPE="xfs"
/dev/sdb: UUID="815ec4e5-9b55-4339-b919-ee0d818d9054" TYPE="xfs"

Now use the UUIDs to add the devices and mount points to the /etc/fstab file:

UUID=815ec4e5-9b55-4339-b919-ee0d818d9054       /hana/data      xfs     defaults        0  0
UUID=133a1bf7-caa5-4424-bda5-0db3e23e568f       /hana/log       xfs     defaults        0  0
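The fstab entries above can also be generated rather than typed by hand. In this minimal sketch the helper function name is hypothetical, and the live blkid lookup assumes the device names from the lsblk output above:

```shell
# Print one /etc/fstab entry for a given filesystem UUID and mount point.
# (Hypothetical helper; the field layout matches the entries shown above.)
format_fstab_entry() {
    printf 'UUID=%s\t%s\txfs\tdefaults\t0 0\n' "$1" "$2"
}

# Static example using the data volume's UUID from the blkid output:
format_fstab_entry 815ec4e5-9b55-4339-b919-ee0d818d9054 /hana/data

# Live usage (as root), looking the UUID up with blkid:
# format_fstab_entry "$(blkid -s UUID -o value /dev/sdb)" /hana/data >> /etc/fstab
```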

Execute "mount -a"; the filesystems should then mount at the correct locations. 

mount -a

Use the "df -h" command to ensure everything is mounted correctly: 

Filesystem                                         Size  Used Avail Use% Mounted on
devtmpfs                                           252G  4.0K  252G   1% /dev
tmpfs                                              380G     0  380G   0% /dev/shm
tmpfs                                              252G   26M  252G   1% /run
tmpfs                                              252G     0  252G   0% /sys/fs/cgroup
/dev/sda2                                           60G  5.2G   55G   9% /
fileserver.puredoes.local:/mnt/nfs/vHANA01_Shared  1.0T  105G  919G  11% /hana/shared
tmpfs                                               51G   24K   51G   1% /run/user/469
tmpfs                                               51G     0   51G   0% /run/user/468
tmpfs                                               51G     0   51G   0% /run/user/0
/dev/sdb                                           1.0T  1.1G 1023G   1% /hana/data
/dev/sdc                                           512G  555M  512G   1% /hana/log