
Implementing vSphere Metro Storage Cluster With ActiveCluster: Configuring vSphere HA



The Table of Contents for this guide can be found here and is helpful for navigating to the rest of this guide.

Configuring vSphere HA

When enabling vSphere HA, it is important to follow VMware best practices and recommendations. Pure Storage recommendations do not differ from the standard VMware requirements, which can be found at the following link:

Pure Storage does not support vSphere versions prior to 5.5 with ActiveCluster.

BEST PRACTICE: Pure Storage recommends (but does not require) a stretched layer 2 network in stretched cluster environments. This allows VMs to be moved or failed over to hosts in a different datacenter without reconfiguring their network information.

To enable automatic failover of virtual machines in the case of a failure, ensure that vSphere HA is turned on. To do this, go to the Hosts & Clusters view and click on the desired cluster in the inventory pane. Then click the Configure tab, select vSphere Availability, and click the Edit button next to vSphere HA is Turned OFF/ON.

vSphereHAHostsAndClusters.png

Another window will pop up. Click the slider to enable vSphere HA and then click OK to save your changes.

vSphereHAEditClusterSettings.png

BEST PRACTICE: Pure Storage recommends enabling vSphere HA on vCenter clusters. 
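For automation, the same change can be scripted. The following is a minimal Python sketch using pyVmomi (the VMware vSphere Python SDK); the vCenter address, credentials, and cluster name ("Stretched-Cluster") are placeholders for illustration only:

    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Lab-only SSL handling; validate certificates in production.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the cluster by name (placeholder name).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Stretched-Cluster")
    view.Destroy()

    # Turn on vSphere HA; unspecified settings keep their current values.
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(enabled=True))
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))

    Disconnect(si)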

Proactive HA is a feature introduced in vSphere 6.5 that integrates with server hardware monitoring services that can detect and inform ESXi of specific component failures, such as fans, memory, or power supplies. This feature has no direct connection with ActiveCluster or storage failures (those are monitored by standard vSphere HA settings), and therefore Pure Storage has no specific recommendation on enabling, disabling, or configuring Proactive HA.

For standard vSphere HA, there are other settings that should be verified and set which are described in the following sub-sections.

Host Failure Response

In the case of a host failure, surviving ESXi hosts take over and restart the affected virtual machines on themselves. A failure could include host power loss, a kernel crash, or a hardware failure.

HostFailureResponse.png

BEST PRACTICE: Pure Storage recommends leaving host monitoring enabled.

Host monitoring is enabled by default when vSphere HA is enabled. Pure Storage recommends leaving this enabled and has no specific recommendations for its advanced settings (see VMware documentation here). 
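If desired, the current state of host monitoring can be verified (and re-enabled) programmatically. A minimal sketch, assuming the cluster object and imports from the earlier pyVmomi example:

    # Read the cluster's current HA configuration.
    das = cluster.configurationEx.dasConfig
    print("vSphere HA enabled:", das.enabled)
    print("Host monitoring:", das.hostMonitoring)   # expected: 'enabled'

    # Re-enable host monitoring if it has been switched off.
    if das.hostMonitoring != "enabled":
        spec = vim.cluster.ConfigSpecEx(
            dasConfig=vim.cluster.DasConfigInfo(hostMonitoring="enabled"))
        WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))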

Datastore with PDL

For environments running ESXi 6.0 or later, the ability to respond to Permanent Device Loss (PDL) was added into vSphere HA.

PDL occurs when a storage volume is no longer accessible to an ESXi host. Communication between the host and the array has not stopped, and SCSI interaction can continue (exchange of SCSI operations and response codes), but that specific volume is no longer available to that host. The array informs ESXi that the volume is no longer accessible using specific SCSI sense codes, and ESXi will then stop sending I/O requests to that volume.

More information on PDL can be found in the following VMware KB.

The default PDL response is for vSphere HA to not restart VMs on other hosts when PDL is encountered. Especially in non-uniform configurations, Pure Storage recommends enabling this setting by choosing Power off and restart VMs.

PDL.png

In uniform configurations, enabling this setting is less important since the volume is presented via two arrays, and a full PDL would require the volume to be removed from both FlashArrays. While accidentally removing a host's access to the volume on both arrays is less likely to occur, it is still possible, and therefore Pure Storage recommends enabling Power off and restart VMs.

BEST PRACTICE: Pure Storage recommends setting PDL response to Power off and restart VMs.
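The PDL response can also be set programmatically. A minimal sketch, assuming the cluster object and imports from the earlier pyVmomi example; "restartAggressive" corresponds to Power off and restart VMs in the UI:

    # Enable VM Component Protection and set the PDL response.
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(
            vmComponentProtecting="enabled",
            defaultVmSettings=vim.cluster.DasVmSettings(
                vmComponentProtectionSettings=vim.cluster.VmComponentProtectionSettings(
                    vmStorageProtectionForPDL="restartAggressive"))))
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))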

Datastore with APD

In many failure scenarios, ESXi cannot determine whether it has lost access to a volume permanently (e.g. the volume was removed) or temporarily (e.g. a network outage). When a failure prevents ESXi from communicating with the underlying array, the array cannot send the appropriate SCSI sense codes and ESXi is unable to determine the type of loss. This state is referred to as All Paths Down (APD).

When such a failure occurs, a 140-second timeout starts, after which ESXi considers the volume to be in APD. Once this occurs, all non-virtual-machine I/O to the storage volume is terminated, but virtual machine I/O is still retried indefinitely. ESXi 6.0 introduced the ability for vSphere HA to respond to this situation.

The APD timeout of 140 seconds is controlled by an advanced ESXi setting called Misc.APDTimeout. Pure Storage recommends leaving this at the default; it should generally only be changed under the guidance of VMware or Pure Storage support. Reducing this value can lead to false positives of APD occurrences.
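To confirm that the timeout is still at its default across the cluster, the setting can be read per host. A minimal sketch, assuming the cluster object from the earlier pyVmomi example:

    # Report Misc.APDTimeout (default 140 seconds) for every host in the cluster.
    for host in cluster.host:
        opts = host.configManager.advancedOption.QueryOptions("Misc.APDTimeout")
        print(f"{host.name}: Misc.APDTimeout = {opts[0].value}")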

APD response options:

  • All Paths Down (APD) Failure Response: by default, vSphere HA does nothing in the face of APD. Pure Storage recommends setting this option to either Power off and restart VMs – Conservative restart policy or Power off and restart VMs – Aggressive restart policy. The conservative option will only power off VMs if it is sure they can be restarted elsewhere. The aggressive option will power them off regardless and make a best-effort attempt to restart them elsewhere. Pure Storage has no specific recommendation between the two; the choice is up to the customer. Pure Storage does recommend choosing one of the two Power off options and not leaving the setting Disabled or set to Issue Events.
  • Response recovery: by default, if an APD situation recovers before a power-off has occurred, ESXi will do nothing. In some cases, it might be preferable to have the VMs restarted after a prolonged APD occurrence as some applications and operating systems do not respond well to a lengthy storage outage. In this case, vSphere HA can react to the temporary loss and subsequent recovery of storage by resetting the affected virtual machines. Pure Storage has no specific recommendations on this setting.

APD.png

In order for VMs to be restarted by vSphere HA in the event of an ActiveCluster failover, the vSphere HA APD response must be set to Power off and restart VMs.
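The APD response can be set programmatically as well. A minimal sketch, assuming the cluster object and imports from the earlier pyVmomi example; "restartConservative" corresponds to the Conservative restart policy (swap in "restartAggressive" for the Aggressive policy), and the delay and recovery values below are examples only:

    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(
            vmComponentProtecting="enabled",
            defaultVmSettings=vim.cluster.DasVmSettings(
                vmComponentProtectionSettings=vim.cluster.VmComponentProtectionSettings(
                    # Power off and restart VMs - Conservative restart policy
                    vmStorageProtectionForAPD="restartConservative",
                    # Delay before affected VMs are powered off (example value).
                    vmTerminateDelayForAPDSec=180,
                    # Response recovery: reset VMs if APD clears after the delay.
                    vmReactionOnAPDCleared="reset"))))
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))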

VM Monitoring

vSphere HA also offers the ability to detect a guest operating system crash when VMware Tools is installed in the guest and to attempt a restart. This has no direct relevance to ActiveCluster, and therefore Pure Storage has no specific recommendations for enabling, disabling, or configuring this feature.

VMMonitoring.png

Heartbeat Datastores

Prior to vSphere 5.0, if a host's management network was down but the host and its VMs were running fine (and even the VM network itself was fine), vSphere HA might shut down the VMs on that host and boot them up elsewhere, even though there was no need. This would cause unnecessary downtime for the affected VMs. It occurred because the other hosts could not detect whether the host had experienced a true failure or simply could not communicate over the network. In order to detect the difference between a host failure and network isolation, VMware introduced Datastore Heartbeating. With datastore heartbeating, each host constantly updates a heartbeat region on a shared datastore. If network communication to a host is lost, the heartbeat region of one or more heartbeat datastores is checked. If the host that lost network communication is still updating its heartbeat region, it is considered to still be “alive” but isolated.

Responding to network isolation is discussed in the next section.

VMware offers three settings for heartbeat datastores:

  • Automatically select datastores accessible from the host.
  • Use datastores only from the specified list.
  • Use datastores only from the specified list and complement automatically if needed.

HeartBeatDatastores.png

Pure Storage does not have strict recommendations on these settings, other than that automatic selection or specify-and-complement are the two preferred options. Use “Use datastores only from the specified list” only if there are datastores that are not viable for heartbeating (low reliability, for instance).
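The heartbeat datastore policy can be set programmatically too. A minimal sketch, assuming the cluster object and imports from the earlier pyVmomi example; the datastore names are placeholders, and "allFeasibleDsWithUserPreference" corresponds to the specify-and-complement option ("allFeasibleDs" is automatic selection):

    # Seed a preferred heartbeat datastore list and let vSphere complement it.
    preferred = [ds for ds in cluster.datastore
                 if ds.name in ("ActiveCluster-DS01", "ActiveCluster-DS02")]
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(
            hBDatastoreCandidatePolicy="allFeasibleDsWithUserPreference",
            heartbeatDatastore=preferred))
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))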

Response for Host Isolation

The default behavior of vSphere HA is to not automatically restart VMs if a host has been isolated. Host isolation means that the host is no longer receiving heartbeats from the other participants in the cluster over the management network. Once this occurs, the ESXi host will ping the configured isolation address (by default, the gateway address). If it cannot reach the gateway, the host considers itself isolated.

HostIsolation.png

Pure Storage maintains no official recommendation for host isolation settings, as this can be very environment specific. Some considerations should be evaluated, however; examples of important ones are below:

  • If isolation response is enabled, and the storage is presented over the TCP/IP network (iSCSI) it is likely that a network partition of the host will also affect iSCSI access. So power-off and restart is likely the desired option since the VM will have lost its storage and a graceful power-off is not possible.
  • If isolation response is enabled, and the storage is presented over Fibre Channel, shut down and restart VMs is likely the preferred option, as Fibre Channel access will likely persist in the event of TCP/IP network loss and a graceful shutdown will be the friendlier option.
  • If the management network and the VM networks are physically the same equipment, it is likely that if one goes down, so do both. In this case, VMs often should be set to be restarted, especially if they need access to the network to run properly.
  • If the VMs either do not need network access to run properly or the management network is physically separate from the VM network, enabling VM restart may not be the best choice, as it will just introduce unnecessary downtime. In this case, it might be better to configure vCenter alerts for host isolation, fix the management network issue, and let the VMs continue to run. For configuring vCenter alerts, refer to VMware documentation.
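As an illustration of the iSCSI case above, the isolation response and an isolation address can be set programmatically. A minimal sketch, assuming the cluster object and imports from the earlier pyVmomi example; the isolation address is a placeholder, and the das.isolationaddress0 advanced option is optional and only shown as an example of adding an isolation address:

    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(
            defaultVmSettings=vim.cluster.DasVmSettings(
                isolationResponse="powerOff"),   # or "shutdown" for a graceful stop
            # NOTE: this list replaces any existing HA advanced options;
            # merge with the current das.option values if others are set.
            option=[vim.option.OptionValue(key="das.isolationaddress0",
                                           value="192.168.1.1")]))
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))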

Virtual Machine Overrides

vSphere HA also provides the ability to configure all of the above settings on a per-VM basis. If certain virtual machines need to be recovered in a special way, or more forcefully, the cluster-wide settings can be overridden for specific virtual machines as needed.

Setting these overrides is optional and is environment specific—Pure Storage has no specific recommendations concerning overrides.

To configure VM overrides, click on the cluster in the vCenter inventory, then the Configure tab then the VM Overrides section in the side panel that appears. Click Add to create an override.

VMOverrides.png

 

AddVMOverrides.png

Many of these settings are described in the previous sub-sections, but there are also some additional per-VM settings (some of which are unique to vSphere 6.5 and later):

  • DRS Automation level—this is enabled if vSphere DRS is turned on and can override the cluster DRS automation level for this VM.
  • VM restart priority—the importance of the virtual machine when being restarted. The higher the setting, the more priority is given to restarting that VM.
  • Start next priority VMs when—dictates what condition vSphere should wait for before considering a restart priority group fully restarted and ready.
  • Additional delay—how long vSphere should wait after the previous priority group completes before starting on the next one.
  • Or after timeout occurs at—specifies how long vSphere should wait for that condition before starting the next priority group anyway.
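Per-VM overrides can also be created programmatically. A minimal sketch, assuming the cluster and content objects and imports from the earlier pyVmomi example; the VM name "critical-db-01" and the chosen settings are placeholders:

    # Find the VM to override (placeholder name).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "critical-db-01")
    view.Destroy()

    # Give this VM a higher restart priority and its own isolation response.
    override = vim.cluster.DasVmConfigSpec(
        operation="add",
        info=vim.cluster.DasVmConfigInfo(
            key=vm,
            dasSettings=vim.cluster.DasVmSettings(
                restartPriority="highest",
                isolationResponse="powerOff")))
    spec = vim.cluster.ConfigSpecEx(dasVmConfigSpec=[override])
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))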

VM and Host Groups and Affinity Rules

When it comes to the logistics of a host failure, there is nothing special about the virtual machine recovery process when vSphere HA is combined with ActiveCluster. vSphere HA finds a host that has access to the storage of the failed virtual machines and restarts them there; that process is no different.

With that being said, it might be desirable to help vSphere HA choose which host (or hosts) to restart recovered VMs on. Since ActiveCluster can present the same storage in two separate datacenters, and vMSC allows hosts in separate datacenters to be in the same cluster, a virtual machine on a host in datacenter A that fails could be restarted on a host in datacenter B, even though there are healthy and available hosts in datacenter A still left in that cluster.

There is nothing intrinsically wrong with that, but due to application relationships it might be preferential to keep certain VMs in the same physical datacenter for performance, network, or even business compliance reasons.

vSphere HA offers a useful tool to make ensuring this simple—VM/host affinity rules.

VM/host affinity rules allow the creation of groups of hosts and groups of VMs, along with rules to control their relationships. In this scenario, there are ten hosts: five in datacenter A and five in datacenter B.

ClusterHostList.png

Click on the cluster object in the inventory view, then the Configure tab, followed by VM/Host Groups in the side panel that appears. Click on the Add button to create a host group.

VM-HostGroupsvCenter.png

In the window that appears, assign the host group a name and choose the type of Host Group. Then click Add to add hosts.

AddHostsPre.png

Select the hosts that are in the same datacenter and click OK.

AddIndividualHosts.png

Confirm the configuration and click OK.

ConfirmA.png

Repeat the process for the hosts in the other datacenter by putting them in their host group, in this case, datacenter B.

ConfirmB.png

The next step is to create a VM group. Create as many VM groups as needed, but before creating them, keep in mind the types of rules that can be associated with VM and host groups:

  • Keep all VMs in a group on the same host—this makes sure that all VMs in a given VM group are all on the same host. This is useful for certain clustering applications or for possible multi-tenant environments where VMs must be grouped in a specific way.
  • Keep all VMs in a group on separate hosts—this makes sure all VMs in a group are never run on the same host. This is useful for clustered applications to survive a host failure (i.e. not-all-eggs-in-the-same-basket) or possibly performance sensitive applications that use too many resources to ever be on the same host.
  • Keep all VMs on the same host group—this is a bit more flexible. This allows for VMs in a group to be required to be on a specific host group, or to never be on a specific host group. It also offers the ability to set preference with “should” rules. Rules can be set that they “should” be on a specific host group or they “should not” be on a specific host group. When a “should” rule is created, vSphere will always use the preferred host group if one or more of those hosts are available. If none are, then and only then will it use non-preferred hosts. If a “must” rule is selected, then in the absence of a preferred host in the host group, the VMs in the VM group will not be recovered automatically.
  • Boot one group of VMs up prior to booting up a second group of VMs—this allows some priority of rebooting. This is useful if certain VMs rely on applications in other VMs. If those applications are not running first, the dependent VMs will either fail to boot, or its applications will fail to start. This is common in the case of database servers, application servers or services like DNS or LDAP.

BEST PRACTICE: When creating host affinity rules, it is generally advisable to use a “should” or “should not” rule over a “must” or “must not” rule as a “must” rule could prevent recovery.

With these in mind, VM groups can be created. In this environment, four of the eight VMs will be put into VM group A and four will be put into VM group B. To do this, click on the Add button in VM/Host Groups as shown earlier for the host groups.

CreateVMGroup.png

Assign the group a name, choose VM group as the type and then click Add to select VMs.

AddVMsToVMGroup.png

Click OK and then OK again to create the VM group. Repeat as necessary for any VM groupings that are needed.

CreateVMGroupConfirmation.png

Note that VMs can be placed into more than one group—though it is recommended to keep VM membership to as few groups as possible for ease of rule management.

VMHostGroupFullList.png

Once VM groups and/or host groups have been created, rules can be specified. To create and assign a rule, click on the VM/Host Rules section under the cluster's Configure tab. Click Add to create a new rule.

VMHostRulesNavigation.png

Give the rule an informative name and choose a rule type, the definitions of which are described earlier in this section.

CreateVMHostRule.png

Depending on the chosen rule type, the options for the rule are different. In the above example, VMs in group A should be put on hosts in group A. Since the “should” rule was chosen, it could be broken in the case that no hosts in group A are available.

VMHostRulesFullList.png

In the above environment, a second rule was created to keep VM group B VMs running on host group B hosts. Since this environment has vSphere DRS enabled and set to automatic, DRS automatically moved the VMs to the proper hosts as soon as the rule was committed.

VMMigrationTask.png
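The same groups and rule can be created programmatically. A minimal sketch, assuming the cluster and content objects and imports from the earlier pyVmomi example; the host and VM naming patterns, group names, and rule name are placeholders, and mandatory=False produces a "should" rule as recommended above:

    # Collect the hosts and VMs for datacenter A (placeholder naming patterns).
    hosts_a = [h for h in cluster.host if h.name.startswith("esxi-a")]
    view = content.viewManager.CreateContainerView(cluster, [vim.VirtualMachine], True)
    vms_a = [v for v in view.view if v.name.startswith("app-a")]
    view.Destroy()

    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[
            vim.cluster.GroupSpec(operation="add",
                                  info=vim.cluster.HostGroup(name="DatacenterA-Hosts",
                                                             host=hosts_a)),
            vim.cluster.GroupSpec(operation="add",
                                  info=vim.cluster.VmGroup(name="DatacenterA-VMs",
                                                           vm=vms_a)),
        ],
        rulesSpec=[
            vim.cluster.RuleSpec(operation="add",
                                 info=vim.cluster.VmHostRuleInfo(
                                     name="VMsA-should-run-on-HostsA",
                                     enabled=True,
                                     mandatory=False,   # "should", not "must"
                                     vmGroupName="DatacenterA-VMs",
                                     affineHostGroupName="DatacenterA-Hosts")),
        ])
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))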

Creating VM and host affinity rules provides the administrator with more direct and proactive control in preparation for an outage and subsequent HA and DRS VM placement response.