Pure Technical Services

SAP HANA Scale Out on VMware using vVols


SAP HANA can be deployed in a scale-out configuration on VMware vSphere using Virtual Volumes (vVols). The purpose of this guide is to show how this can be accomplished.

Taking virtual machine backups which include the SAP HANA data and log volumes will no longer be possible with this implementation. Instead, use backint, file-based backups, or data snapshots as the data protection mechanism.

Prerequisites

  • Virtual Volumes are implemented following this guide.
  • The required number of nodes are deployed as virtual machines, running SUSE Linux Enterprise Server for SAP Applications or Red Hat Enterprise Linux, configured correctly for an SAP HANA scale-out installation.
    • Ensure device-mapper-multipath is installed, enabled, and started.
  • An NFS server with an exported NFS mount point is available, and the export is mounted in each virtual machine's operating system.
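The prerequisites above can be sanity-checked on each node with a short script. This is a sketch, not part of any product: the service name and the NFS check are assumptions to adjust for your environment.

```shell
#!/bin/sh
# Sketch of a per-node prerequisite check mirroring the list above.
# Adjust service and mount names to your environment.
check() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "OK:      $desc"
  else
    echo "MISSING: $desc"
  fi
}
check "multipath binary installed" command -v multipath
check "multipathd service enabled" systemctl is-enabled multipathd
check "NFS share mounted"          sh -c "mount | grep -q ' type nfs'"
```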

Step 1. Add Two additional VMware Paravirtual SCSI Adapters to each Virtual Machine

Ensure that “SCSI Bus Sharing” is set to “Physical” or “Virtual”. When set to “Virtual”, virtual disks can be shared by virtual machines on the same ESXi host; “Physical” allows the disks to be shared by virtual machines on any host, as long as all hosts are connected to the same datastore/storage provider.


Step 2. Add the Required Virtual Disks to any one of the Virtual Machines to be used in the Scale-Out Deployment

Note that the virtual machines must be powered down. Ensure that the virtual disk properties are set to the following:

  • Sharing: Multi-Writer
  • Disk Mode: Independent - Persistent
  • The first SCSI controller should hold all of the data volumes, and the second controller all of the volumes intended for logging. For a scale-out 3+1 implementation there should be 6 disks in total, accessed by 4 virtual machines.
  • If the VM has used Changed Block Tracking (CBT) before, a configuration option set by CBT will block the use of Multi-Writer disks. To disable CBT, follow this guide.


Step 3. Add the Existing Disks Created in Step 2 to each Subsequent Virtual Machine

Perform step 3 (while the virtual machines are powered off) by selecting “Add New Device” and adding an “Existing Hard Disk”.


The hard disks to be added to each virtual machine in the 3+1 configuration scenario will likely be named “vm_name_from_step_2_number.vmdk”. These disks do not need to be stored with the virtual machine, but it is recommended to nominate a single virtual machine as the “owner” for traceability purposes. When adding an existing disk, ensure that all of the relevant properties are set as shown in step 2.


Step 4. Power On Each Virtual Machine Once All of the Disks Have Been Added

Once the power-on process is complete, the disks will be visible to each system.

Step 5. Set Up DM-Multipath to List the Disks

Device-mapper-multipath (multipathd) will ignore any virtual disks carrying the VMware vendor string. To force the operating system to consider the relevant disks for multipathing, follow these steps:

  1. Use "lsblk" to identify the device names of the volumes which have been added.
lsblk


2. For each device to be used in the SAP HANA installation run:

udevadm info --query=all --name=/dev/<device> | grep ID_SERIAL
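This can be scripted across all candidate devices. In the sketch below, the device names sdb through sdg are placeholders for the names reported by lsblk in the previous step.

```shell
#!/bin/sh
# Print "<device> <ID_SERIAL>" for each candidate device.
# Device names sdb..sdg are placeholders; substitute the names from lsblk.
extract_serial() {
  # Pull the ID_SERIAL value out of `udevadm info` output on stdin.
  sed -n 's/^E: ID_SERIAL=//p'
}
if command -v udevadm >/dev/null 2>&1; then
  for dev in sdb sdc sdd sde sdf sdg; do
    echo "$dev $(udevadm info --query=all --name=/dev/$dev | extract_serial)"
  done
fi
```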


3. Record the value of “ID_SERIAL” for each device, then create the multipath.conf file, ensuring that each device to be used has its value entered as a “wwid” in the multipaths section. Edit /etc/multipath.conf with the following entries:

defaults {
    user_friendly_names no
}
blacklist {
}
multipaths {
  multipath {
    wwid  36000c29dbe4185a0bbdedc9b922747c4
  }
  multipath {
    wwid  36000c2911f4ad967aa58336edcca4445
  }
  multipath {
    wwid  36000c296e9e0bfd3231ec5a7210f7a84
  }
  multipath {
    wwid  36000c29d6460812668557f320071f09d
  }
  multipath {
    wwid  36000c2983398315d62c06fa480b3fa36
  }
  multipath {
    wwid  36000c29bc7582f9acdabaa7e0e6141cd
  }
}
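Writing one stanza per device by hand is error-prone. A small helper (hypothetical, not part of multipath-tools) can generate the multipaths section from the recorded serials:

```shell
#!/bin/sh
# Generate the multipaths { } section of /etc/multipath.conf
# from a list of ID_SERIAL values recorded in the previous step.
emit_multipaths() {
  echo "multipaths {"
  for wwid in "$@"; do
    printf '  multipath {\n    wwid  %s\n  }\n' "$wwid"
  done
  echo "}"
}
# Example with two of the serials recorded above:
emit_multipaths 36000c29dbe4185a0bbdedc9b922747c4 36000c2911f4ad967aa58336edcca4445
```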

4. Start and enable the multipath daemon:

 systemctl enable multipathd && systemctl start multipathd 

 The devices will show in the multipath listing:

multipath -ll


5. Copy the multipath.conf file to each virtual machine to be used in the SAP HANA scale-out installation, then start and enable the multipath daemon on each.
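Distribution can also be scripted. The sketch below assumes passwordless root SSH between the nodes, and the hostnames in the example are placeholders for the other scale-out virtual machines.

```shell
#!/bin/sh
# Copy /etc/multipath.conf to each remaining node and enable multipathd there.
# Assumes passwordless root SSH; hostnames are placeholders.
distribute_multipath_conf() {
  for host in "$@"; do
    scp /etc/multipath.conf "root@$host:/etc/multipath.conf" &&
      ssh "root@$host" 'systemctl enable multipathd && systemctl start multipathd'
  done
}
# Example: distribute_multipath_conf hana02 hana03 hana04
```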

Step 6. Format the Multipath Devices

Format each device with the xfs filesystem.

Do not create a partition on the devices. The file system must fill the entire device. 

mkfs.xfs /dev/mapper/<devicewwid>
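With several devices this is typically looped. The helper below is a sketch; the example wwids in the comment are the ones recorded earlier.

```shell
#!/bin/sh
# Format each multipath device directly (no partition table) with xfs.
format_hana_devices() {
  for wwid in "$@"; do
    mkfs.xfs "/dev/mapper/$wwid"
  done
}
# Example with two of the wwids recorded earlier:
# format_hana_devices 36000c29dbe4185a0bbdedc9b922747c4 36000c29d6460812668557f320071f09d
```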

Step 7. Prepare the global.ini File

Place the wwid values in the global.ini file to be used during installation, using the ha_provider hdb_ha.fcClient with a persistent reservation type of 5.

[communication]
listeninterface=.global
[persistence]
basepath_datavolumes=/hana/data/RH1
basepath_logvolumes=/hana/log/RH1
use_mountpoints=yes
basepath_shared=yes
[storage]
ha_provider=hdb_ha.fcClient
partition_*_*__prType=5
partition_1_data__wwid=36000c29dbe4185a0bbdedc9b922747c4
partition_1_log__wwid=36000c29d6460812668557f320071f09d
partition_2_data__wwid=36000c2911f4ad967aa58336edcca4445
partition_2_log__wwid=36000c2983398315d62c06fa480b3fa36
partition_3_data__wwid=36000c296e9e0bfd3231ec5a7210f7a84
partition_3_log__wwid=36000c29bc7582f9acdabaa7e0e6141cd
[trace]
ha_fcclient=info 
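As a sanity check before installation, the wwids listed in global.ini can be compared against the devices under /dev/mapper. The extract_wwids helper below is hypothetical, not an SAP tool.

```shell
#!/bin/sh
# List every wwid referenced by partition_*__wwid lines in a global.ini,
# so each can be checked for a matching /dev/mapper entry.
extract_wwids() {
  sed -n 's/^partition_.*__wwid=//p' "$1"
}
# Example:
#   for w in $(extract_wwids global.ini); do
#     [ -e "/dev/mapper/$w" ] || echo "missing device: $w"
#   done
```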

 

Step 8. Proceed with the Standard SAP HANA Scale-Out Installation

Execute the hdblcm command with the --storage_cfg parameter, passing the path to the directory containing the pre-configured global.ini file.

./hdblcm --storage_cfg=<path to global.ini folder>

 

Further information on the SAP HANA Storage requirements for TDI can be found here.

Further information on the Multi-Writer flag for VMware virtual machines can be found here.

Further information on SAP HANA on VMware can be found here.