
Implementing vSphere Metro Storage Cluster With ActiveCluster: Configuring ActiveCluster


The Table of Contents for this guide can be found here and is helpful for navigating to the rest of this guide.

FlashArray ActiveCluster Configuration

A major benefit of using an ActiveCluster stretched storage solution is how simple it is to set up.

Before moving forward, ensure the environment meets the configuration requirements described in this KB article.

Creating a Synchronous Connection

The first step to enable ActiveCluster is to create a synchronous connection with another FlashArray. It does not matter which FlashArray is used to create the connection—either one is fine.

Log in to the FlashArray Web Interface and click on the Storage section. Click either the plus sign or the vertical ellipsis and choose Connect Array.

ac12.png

The window that pops up requires three pieces of information:

  1. Management address—this is the virtual IP address or FQDN of the remote FlashArray.
  2. Connection type—choose Sync Replication for ActiveCluster.
  3. Connection Key—this is an API token that can be retrieved from the remote FlashArray.

To obtain the connection key, log in to the remote FlashArray Web Interface, click on the Storage section, then click the vertical ellipsis and choose Get Connection Key.

ac13.png

Copy the key to the clipboard using the Copy button.

ac14.png

Go back to the local FlashArray Web Interface and paste in the key.

ac15.png

The replication address field may be left blank; Purity will automatically discover all of the replication port addresses. If the target addresses are behind Network Address Translation (NAT), it is necessary to enter the replication port NAT addresses.

When complete, click Connect.

If everything is valid, the connection will appear in the Connected Arrays panel.

ac16.png

If the connection fails, verify network connectivity and IP information and/or contact Pure Storage Support.
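
For environments that script their configuration, the same connection can be established through the FlashArray REST API. Below is a minimal sketch using the classic purestorage Python SDK; the management addresses, API token, and connection key are placeholders, and the "sync-replication" connection-type string is an assumption to verify against the SDK/REST reference for the Purity version in use.

```python
# Sketch: create the synchronous array connection with the classic
# "purestorage" Python REST client. The management addresses, API token, and
# connection key are placeholders, and the "sync-replication" connection-type
# string is an assumption to verify against the SDK/REST reference for your
# Purity version.
import purestorage

local = purestorage.FlashArray("flasharray-a.example.com",
                               api_token="LOCAL-API-TOKEN")

remote_mgmt_address = "flasharray-b.example.com"
connection_key = "PASTE-CONNECTION-KEY-FROM-REMOTE-ARRAY"

# Equivalent of the Connect Array dialog; replication addresses are discovered
# automatically unless NAT addresses must be supplied.
local.connect_array(remote_mgmt_address, connection_key, ["sync-replication"])

# Confirm the connection, mirroring the Connected Arrays panel.
for conn in local.list_array_connections():
    print(conn)
```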

Creating a Pod

The next step to enable ActiveCluster is to create a consistency group. With ActiveCluster, this is called a “pod”.

A pod is both a consistency group and a namespace—in effect creating a grouping for related objects involved in ActiveCluster replication. One or more pods can be created. Pods are stretched, unstretched, and failed over together.

Therefore, the basic idea is simply to put related volumes in the same pod. If volumes host related applications that should remain in the same datacenter together or require consistency with one another, put them in the same pod. Otherwise, put them in the same pod for simplicity, or in different pods if they have different requirements. See this KB for FlashArray object limits for additional guidance.

To create a pod, log in to the FlashArray Web Interface and click on Storage, then Pods, then click on the plus sign.

ac17.png

Next, enter a name for the pod—the name can contain letters, numbers, or dashes, and must start with a letter or number. Then click Create.

ac18.png

The pod will then appear in the Pods panel.

ac19.png

To further configure the pod, click on the pod name.

ac20.png    

The default configuration for ActiveCluster is to use the Cloud Mediator—no configuration is required other than ensuring that the management network on the FlashArray is redundant (uses two ports per controller) and has IP access to the mediator. Refer to the networking section in this KB for more details.

The mediator in use can be seen in the overview panel under the Mediator heading. If the mediator is listed as “purestorage”, the Cloud Mediator is in use.

For sites that are unable to contact the Cloud Mediator, the ActiveCluster On-Premises Mediator is available.

 

ac21.png

Pod configuration is now complete.
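
Pod creation can also be scripted. Below is a minimal sketch using the classic purestorage Python SDK; the pod name and credentials are placeholders, and method availability depends on the Purity REST version exposed by the array.

```python
# Sketch: create a pod with the classic "purestorage" Python REST client. The
# pod name and credentials are placeholders; method availability depends on
# the Purity REST version exposed by the array.
import purestorage

array = purestorage.FlashArray("flasharray-a.example.com",
                               api_token="LOCAL-API-TOKEN")

# Equivalent of Storage > Pods > "+" in the Web Interface.
array.create_pod("vMSC-pod01")

# List pods to confirm the new pod exists.
for pod in array.list_pods():
    print(pod)
```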

Adding Volumes to a Pod

The next step is to add any pre-existing volumes to the pod. Once a pod has been enabled for replication (stretched), pre-existing volumes cannot be moved into the pod; only new volumes can be created in it.

To add a volume to a pod, go to the Storage screen in the FlashArray Web Interface, click on Volumes, and then click on the name of a volume to be added to the pod. To find the volume quickly, search for its name.

ac22.png

When the volume screen loads, click on the vertical ellipsis in the upper right-hand corner and choose Move.

ac23.png

To choose the pod, click on the Container box and choose the pod name.

ac24.png

Note that as of Purity 5.0.0, the following limitations exist with respect to moving a volume into or out of a pod:

  • Volumes cannot be moved directly between pods. A volume must first be moved out of a pod, then moved into the other pod.
  • A volume in a volume group cannot be added into a pod. It must be removed from the volume group first.
  • A volume cannot be moved out of an already stretched pod. The pod must first be unstretched, then the volume can be moved out.
  • A volume cannot be moved into an already stretched pod. The pod must first be unstretched; then the existing volume can be added and the pod re-stretched.

Some of these restrictions may relax in future Purity versions.

Choose the valid target pod and click Move.

ac25.png

This will move the volume into the pod and rename the volume with a prefix consisting of the pod name followed by two colons. The volume name will then be in the format <podname>::<volumename>.

ac26.png

The pod will list the volume under its Volumes panel.

ac27.png

Creating a New Volume in a Pod

Users can also create new volumes directly in a pod. Click on Storage, then Pods, then choose the pod. Under the Volumes panel, click the plus sign to create a new volume.

ac28.png

In the creation window, enter a valid name and a size and click Create. This can be done whether or not the pod is actively replicating.

ac29.png
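
Because a pod-qualified name places a volume directly into that pod, creating a volume in a pod is also a one-line operation in the REST API. A minimal sketch with the classic purestorage Python SDK follows; the volume name, size, and credentials are placeholders.

```python
# Sketch: create a new volume directly inside the pod by using the
# "<podname>::<volumename>" naming convention with the classic "purestorage"
# Python REST client. The volume name, size, and credentials are placeholders.
import purestorage

array = purestorage.FlashArray("flasharray-a.example.com",
                               api_token="LOCAL-API-TOKEN")

# A pod-qualified name places the new volume in that pod at creation time.
array.create_volume("vMSC-pod01::vmfs-datastore-01", "2T")
```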

Stretching a Pod

The next step is adding a FlashArray target to the pod. This is called “stretching” the pod, because it automatically makes the pod and its content available on the second array.

Please note that once a pod has been “stretched”, pre-existing volumes cannot be added to it until it is “un-stretched”; only new volumes can be created in a stretched pod. Therefore, if it is necessary to add existing volumes to a pod, follow the instructions in the section Adding Volumes to a Pod earlier in this KB, then stretch the pod.

To stretch a pod, add a second array to the pod. Do this by clicking on the plus sign in the Arrays panel. 

ac30.png

Choose a target FlashArray and click Add.

ac31.png

The two FlashArrays will immediately start synchronizing the pod's data.

ac32.png

Active/active storage is not available until the synchronization completes, which is indicated when the resyncing status ends and both arrays show as online.

ac33.png

On the remote FlashArray the pod and volumes will now exist in identical fashion and will be available for provisioning to a host or hosts on either FlashArray.

ac34.png
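
Stretching can likewise be scripted and polled for completion. The sketch below uses the classic purestorage Python SDK; the add_pod()/get_pod() calls and the fields they return are assumptions to verify against the SDK reference for the Purity version in use, and all names and credentials are placeholders.

```python
# Sketch: stretch a pod by adding a second array to it with the classic
# "purestorage" Python REST client. The add_pod()/get_pod() calls and the
# fields they return are assumptions to verify against the SDK reference for
# your Purity version; array names, pod name, and credentials are placeholders.
import time
import purestorage

array = purestorage.FlashArray("flasharray-a.example.com",
                               api_token="LOCAL-API-TOKEN")

# Equivalent of adding the second FlashArray in the pod's Arrays panel.
array.add_pod("vMSC-pod01", "flasharray-b")

# Poll until both arrays report the pod as online (initial resync complete).
while True:
    pod = array.get_pod("vMSC-pod01")
    if all(a.get("status") == "online" for a in pod.get("arrays", [])):
        break
    time.sleep(10)

print("Pod is online on both arrays; active/active access is available.")
```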

Unstretching a Pod

Once ActiveCluster has been enabled, replication can also be terminated by unstretching the pod. This might be done for a variety of reasons, such as to change the pod volume membership, or because replication was only enabled temporarily to migrate the volumes from one array to another.

The act of terminating replication is called unstretching. To unstretch a pod, remove the array which no longer needs to host the volumes. For example, take the below pod:

ac35.png

The pod has two arrays: sn1-x70-b05-33 and sn1-x70-c05-33. Since this pod is online, both arrays offer up the volumes in the pod. If I want the volumes to remain only on sn1-x70-b05-33, I would then remove the other FlashArray, sn1-x70-c05-33.

Before removing a FlashArray from a pod, ensure that the volumes in the pod are disconnected from any host or host group on the FlashArray to be removed from the pod. Purity will not allow a pod to be unstretched if the FlashArray chosen for removal has existing host connections to volumes in the pod.

To remove a FlashArray, choose the appropriate pod and inside of the Arrays panel, click on the trash icon next to the array to be removed.

ac36.png

After confirming that the proper array is selected, click the Remove button to complete the removal.

ac37.png

On the FlashArray that was removed, under the Pods tab, the pod will be listed in the Destroyed Pods panel.

ac38.png

If the unstretch was done in error, go back to the FlashArray Web Interface of the array that remains in the pod and add the other FlashArray back. This will return the pod from the Destroyed Pods status back to active.

The pod can be instantly re-stretched for 24 hours. After 24 hours, the removed FlashArray will permanently remove its references to the pod. Permanent removal can be forced early by selecting the destroyed pod and clicking the trash icon next to it.

ac39.png

Click Eradicate to confirm the removal.

ac40.png
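
For completeness, here is a hedged sketch of the unstretch operation with the classic purestorage Python SDK; the remove_pod() call is an assumption to verify against the SDK reference, and names and credentials are placeholders.

```python
# Sketch: unstretch a pod by removing one array from it with the classic
# "purestorage" Python REST client. The remove_pod() call is an assumption to
# verify against the SDK reference for your Purity version; names and
# credentials are placeholders. Ensure the pod's volumes are disconnected from
# all hosts on the array being removed before running this.
import purestorage

array = purestorage.FlashArray("flasharray-a.example.com",
                               api_token="LOCAL-API-TOKEN")

# Equivalent of clicking the trash icon next to the array in the Arrays panel.
array.remove_pod("vMSC-pod01", "flasharray-b")

# On the removed array, the pod copy remains in Destroyed Pods for 24 hours
# and can be eradicated early from that array if desired.
```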

Configuring ESXi Hosts

Configuration of the ESXi hosts is no different than configuration for non-ActiveCluster FlashArrays, so all of the best practices described in this KB still apply.

With that in mind, there are still a few things worth mentioning in this document.

Host Connectivity

A FlashArray host object is a collection of a host’s initiators that can be “connected” to a volume. This allows those specified initiators (and therefore that host) to access that volume or volumes.

Create a host object on the FlashArray by going to the Storage section and then the Hosts tab.

ac41.png

Hosts

Click on the plus sign in the Hosts panel to create a new host. Assign the host a name that makes sense and click Create.

ac42.png

Click on the newly created host and then in the Host Ports panel, click the vertical ellipsis and choose either Configure WWNs (for Fibre Channel) or Configure IQNs (for iSCSI).

ac43.png

For WWNs, if the initiator is presented on the fabric to the FlashArray (meaning zoning is complete), click on the correct WWN in the left pane to add it to the host, or alternatively click the plus sign and type it in manually. iSCSI IQNs must always be typed in manually.

iSCSI_IQN.png

When all the initiators are added/selected, click Add.

BEST PRACTICE: All Fibre Channel hosts should have at least two initiators for redundancy. ESXi iSCSI usually only has one initiator (IQN) but should have two or more physical NICs in the host that can talk to the FlashArray iSCSI targets.
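
Host objects can also be created programmatically. A minimal sketch with the classic purestorage Python SDK follows; the host name, WWNs, and IQN are placeholders.

```python
# Sketch: create a FlashArray host object and register its initiators with the
# classic "purestorage" Python REST client. The host names, WWNs, and IQN are
# placeholders; use wwnlist for Fibre Channel or iqnlist for iSCSI.
import purestorage

array = purestorage.FlashArray("flasharray-a.example.com",
                               api_token="LOCAL-API-TOKEN")

# Fibre Channel host with two initiators for redundancy.
array.create_host("esxi-01", wwnlist=["52:4A:93:71:12:34:56:01",
                                      "52:4A:93:71:12:34:56:02"])

# For an iSCSI host, register the software iSCSI adapter's IQN instead:
# array.create_host("esxi-02", iqnlist=["iqn.1998-01.com.vmware:esxi-02-1a2b3c4d"])
```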

Verify connectivity by navigating to the Health section and then the Connections tab. Find the newly created host and look at the Paths column. If it lists anything besides Redundant, investigate the reported status. More information on the different connection statuses can be found in this document.

ac45.png

Host Groups

To make it easier to provision storage to a cluster of hosts, it is recommended to put all the FlashArray host objects into a host group.

To create a host group, click on the Storage section followed by the Hosts tab. In the Host Group panel, click the plus sign.

ac46.png

Enter a name for the host group and click Create.

ac47.png

Now click on the host group in the Host Groups panel and then click on the vertical ellipsis in the Member Hosts panel and choose Add.

ac48.png

Select one or more hosts in the following screen to add to the host group.

ac49.png

If the environment is configured for uniform access, all hosts in the ESXi cluster should be configured on both FlashArrays and added to the host group on each. If the configuration is non-uniform, only the hosts that have direct access to a given FlashArray need to be added to that FlashArray and its corresponding host group.
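
Host group creation and volume provisioning can be scripted the same way. A minimal sketch with the classic purestorage Python SDK follows; host, host group, and volume names are placeholders.

```python
# Sketch: create a host group for the ESXi cluster and connect an ActiveCluster
# volume to it with the classic "purestorage" Python REST client. Host, host
# group, and volume names are placeholders; list only the hosts that actually
# have storage connectivity to this array (all of them for uniform access,
# only the local hosts for non-uniform access).
import purestorage

array = purestorage.FlashArray("flasharray-a.example.com",
                               api_token="LOCAL-API-TOKEN")

array.create_hgroup("Uniform", hostlist=["esxi-01", "esxi-02",
                                         "esxi-03", "esxi-04"])

# Provision the pod volume to every host in the group at once.
array.connect_hgroup("Uniform", "vMSC-pod01::vmfs-datastore-01")
```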

Multipathing

Standard ESXi multipathing recommendations apply; they are described in more detail in the VMware Best Practices guide found here.

Recommendations at a high level include the following (a short verification sketch follows the list):

  • In ESXi versions prior to 7.0, use the VMware Round Robin path selection policy for FlashArray storage with the I/O Operations Limit set to 1.
    • In ESXi 6.0 Express Patch 5, ESXi 6.5 Update 1, and later, this is the default setting in ESXi for FlashArray storage, so no manual configuration is required in those releases.
  • In ESXi 7.0 and later, Enhanced Round Robin Load Balancing (the latency-based PSP) is recommended. To make things easier for end users, a new SATP rule has been added that automatically applies this policy to any Pure Storage LUNs presented to the ESXi host.
  • Use multiple HBAs per host for Fibre Channel or multiple NICs per host for iSCSI.
  • Use Port Binding for Software iSCSI when possible.
  • Connect each host to both controllers.
  • Use redundant switches in the storage or network fabric.
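
To confirm the expected path selection policy is in effect, the sketch below uses pyVmomi (the VMware vSphere Python SDK) to report the PSP assigned to each Pure Storage device on every host; the vCenter address and credentials are placeholders.

```python
# Sketch: report the path selection policy assigned to each Pure Storage device
# on every ESXi host, using pyVmomi (the VMware vSphere Python SDK). The
# vCenter address and credentials are placeholders; FlashArray devices should
# report VMW_PSP_RR (on 7.0 and later the round-robin policy uses the
# latency-based sub-policy).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view
for host in hosts:
    storage = host.configManager.storageSystem
    luns = {lun.key: lun for lun in storage.storageDeviceInfo.scsiLun}
    for mp_lun in storage.storageDeviceInfo.multipathInfo.lun:
        lun = luns.get(mp_lun.lun)
        if lun is not None and lun.vendor.strip() == "PURE":
            print(host.name, lun.canonicalName, mp_lun.policy.policy)

Disconnect(si)
```
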
Uniform Configuration

In a uniform configuration, all hosts have access to both FlashArrays and can therefore see paths to each FlashArray for an ActiveCluster-enabled volume.

To start, I will create a new VMFS volume on my FlashArray. The vSphere Web Client plugin can expedite this process, but for the purposes of explanation I will walk through it using the FlashArray Web Interface and the vSphere Web Client.

In this environment, I have an eight-node ESXi cluster—each host is zoned to both FlashArrays.

ac50.png

The vCenter cluster:

Cluster.png

The corresponding host group on the first FlashArray:

HostGroup.png

And on the second FlashArray: 

The first step is to create a volume on my FlashArray in site A and add it to my pod “vMSC-pod01”.

ac53.png

The pod is also not yet stretched to the FlashArray in site B.

PodNotStretched.png

The next step is to add it to my host group on the FlashArray in site A.

ConnectHostGroup.png

ConnectHostGroup1.png

My volume is now in the host group “Uniform” and is in an un-stretched pod on the FlashArray in site A.

ConnectedHostGroup.png

The next step is to rescan the vCenter cluster.

RescanStorage.png
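
The rescan can also be driven through the vSphere API. Below is a minimal pyVmomi sketch that rescans every host in a cluster; the vCenter address, credentials, and cluster name are placeholders.

```python
# Sketch: rescan storage on every host in the vCenter cluster with pyVmomi,
# the API equivalent of the "Rescan Storage" action shown above. The vCenter
# address, credentials, and cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

clusters = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True).view
for cluster in clusters:
    if cluster.name != "vMSC-Cluster":
        continue
    for host in cluster.host:
        storage = host.configManager.storageSystem
        storage.RescanAllHba()  # discover new devices and paths
        storage.RescanVmfs()    # discover new VMFS datastores

Disconnect(si)
```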

Once the rescan is complete, click on one of the ESXi hosts and then go to the Configure tab, then Storage Devices and select the new volume that was provisioned and look at the Paths tab.

PathsCheckUnstretched.png

I currently have 4 paths to the new volume on the FlashArray in site A. All of them are active for I/O.

The next step is to stretch the pod to the FlashArray in site B.

AddArray.png

AddArray2.png

As soon as the FlashArray is added, the pod will start synchronizing; when synchronization is complete, the pod will go fully online and the volume will be available on both FlashArrays.

1314Comb.png

For the hosts to see the volume on the second FlashArray, connect the volume to the proper host or host group on that FlashArray as well.

ac62.png

To see the additional paths, rescan the ESXi cluster.

8.png

Once the rescan completes, the new paths to the volume via the second FlashArray will appear (now eight in total).

8Paths.png

After the host has the appropriate number of paths, configure the volume as a VMFS datastore. There are two options for this:

  1. To automatically create and configure the VMFS datastore on the FlashArray, use Pure's vCenter Plugin by following the directions in this KB.
  2. To create and configure the VMFS datastore manually, follow VMware's steps in this KB.

After the VMFS datastore is configured, if RDMs are required in the environment, follow VMware's steps in this KB to configure them.

ESXi supports up to 32 paths per volume, so do not provision more paths than that. If the per-volume count exceeds 32, paths will be dropped unpredictably, possibly causing uneven access to the arrays.

Preferred Paths

The default behavior is that ESXi will actively use all paths presented to a host—even ones from the remote FlashArray. When replication occurs over extended distances, this is generally not ideal. In situations where the sites are far apart, two performance-impacting things occur:

  • Half of the writes (assuming both FlashArrays offer an equal amount of paths for each device) sent from a host in site A will be sent to the FlashArray in site B. Since writes must be acknowledged in both sites, this means the data traverses the WAN twice. First the host issues a write across the WAN to the far FlashArray, and then the far FlashArray forwards it back across the WAN to the other FlashArray. This adds unnecessary latency. The optimal path is for the host to send writes to the local FlashArray and then the FlashArray forwards it to the remote FlashArray. In the optimal situation, the write must only traverse the WAN once.
  • Half of the reads (assuming both FlashArrays offer an equal amount of paths for each device) sent from a host in site A will be sent to the FlashArray in site B. Reads can be serviced by either side, and for reads there is no need for one FlashArray to talk to the other. So a read need not ever traverse the WAN in normal circumstances. Servicing all reads from the local array to a given host is the best option for performance.

The FlashArray offers an option to intelligently tell ESXi which FlashArray should optimally service I/O in the event an ESXi host can see paths to both FlashArrays for a given device. This is a FlashArray host object setting called Preferred Arrays.

In a situation where the FlashArrays are in geographically separate datacenters, it is important to set the preferred array for a host on BOTH FlashArrays.

For each host, log in to the FlashArray Web Interface for the array that is local to that host. Click on the Storage section, then the Hosts tab, then choose the host to be configured. Then, in the Details panel, click on the Add Preferred Arrays option.

BEST PRACTICE: For every host that has access to both FlashArrays that host an ActiveCluster volume, set the preferred FlashArray for that host on both FlashArrays. Tell FlashArray A that it is preferred for host A. Tell FlashArray B that FlashArray A is preferred for host A. Doing this on both FlashArrays allows a host to automatically know which paths are optimized and which are not.

PreferredvCenterGUI.png

Choose that FlashArray as preferred for that ESXi host and click Add.

PreferredArrayGUI.png

If that same host exists on the remote FlashArray, log in to the remote FlashArray Web Interface. Click on the Storage section, then the Hosts tab, then choose the host to be configured. Then, in the Details panel, click on the Add Preferred Arrays option.

PreferredvCenterGUI.png

Choose the earlier FlashArray as preferred for that ESXi host and click Add.

PreferredArrayGUI.png
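
The preferred-array assignments described above can also be scripted. The sketch below uses the classic purestorage Python SDK to apply the setting on both FlashArrays for the hosts in site A; the preferred_array keyword argument is an assumption to verify against the SDK/REST reference, and host and array names are placeholders.

```python
# Sketch: apply the preferred-array setting for the site A hosts on BOTH
# FlashArrays with the classic "purestorage" Python REST client. The
# preferred_array keyword is an assumption to verify against the SDK/REST
# reference for your Purity version; host and array names are placeholders.
import purestorage

array_a = purestorage.FlashArray("flasharray-a.example.com", api_token="TOKEN-A")
array_b = purestorage.FlashArray("flasharray-b.example.com", api_token="TOKEN-B")

site_a_hosts = ["esxi-01", "esxi-02", "esxi-03", "esxi-04"]

for host in site_a_hosts:
    # Tell FlashArray A that it is preferred for these hosts...
    array_a.set_host(host, preferred_array=["flasharray-a"])
    # ...and tell FlashArray B that FlashArray A is preferred for them as well.
    array_b.set_host(host, preferred_array=["flasharray-a"])

# Repeat with the site B hosts preferring "flasharray-b" on both arrays.
```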

In vSphere it can then be seen that half of the paths are now marked as Active and the other half as Active (I/O). The Active (I/O) paths are the paths used for VM I/O. The other paths will only be used if the paths to the preferred FlashArray go away.

PreferredArrayGUISteps.png

When the preferred array setting has been enabled, disabled, or changed, the FlashArray issues a UNIT ATTENTION with sense key 6h and ASC/ASCQ 2Ah/06h (ASYMMETRIC ACCESS STATE CHANGED) to the host to proactively inform it of the path state change.

Non-Uniform Configuration

In a non-uniform configuration, hosts only have storage access to the FlashArray local to them. Therefore, in the case of a SAN or storage failure, the hosts local to that array will lose all connectivity to the storage. 

To start, I will create a new VMFS volume on my FlashArray. The Pure vSphere HTML Client plugin can expedite this process, but for the purposes of explanation I will walk through it using the FlashArray Web Interface and the vSphere HTML Client.

In this environment, I have an eight-node ESXi cluster—four hosts are zoned to FlashArray 1 and the other four are zoned to FlashArray 2.

ac70.png

The vCenter cluster:

1.png

In a non-uniform environment, only the hosts local to a FlashArray have storage connectivity to it. So on FlashArray 1, the host group only includes four hosts:

ac72.png

And on the second FlashArray, the other four hosts:

ac73.png

The first step is to create a volume on my FlashArray in site A and add it to my pod “vMSC-pod01”.

ac53.png

The pod is not yet stretched to the FlashArray in site B.

ac75.png

The next step is to add it to my host group on the FlashArray in site A.

ac76.png

ac77.png

My volume is now in the host group “Non-Uniform” and is in an un-stretched pod on the FlashArray in site A.

NonUniformSideAArrayVolume.png

The next step is to rescan the vCenter cluster.

Rescan.png

Once the rescan is complete, click on one of the ESXi hosts that has access to the FlashArray that currently hosts the volume, then go to the Configure tab, then Storage Devices, and select the new volume that was provisioned and look at the Paths tab.

vCenterPaths.png

I currently have 8 paths to the new volume on the FlashArray in site A. 4 of them are active for I/O. Four hosts will have access and four hosts will not.

The next step is to stretch the pod to the FlashArray in site B.

AddArray.png

As soon as the FlashArray is added, the pod will start synchronizing; when synchronization is complete, the pod will go fully online and the volume will be available on both FlashArrays.

ResyncProcess.png

For the other four hosts to see the volume on the second FlashArray, connect the volume to the proper host or host group on that FlashArray as well.

ac83.png

To see the additional paths, rescan the ESXi cluster.

Rescan.png

Once the rescan completes, the other four hosts will now have paths to the volume via the second FlashArray.

After.png

The original four hosts will have access to the volume via paths on the first FlashArray.

ESXiPaths.png

After the host has the appropriate number of paths, configure the volume as a VMFS datastore. There are two options for this:

  1. To automatically create and configure the VMFS datastore on the FlashArray, use Pure's vCenter Plugin by following the directions in this KB.
  2. To create and configure the VMFS datastore manually, follow VMware's steps in this KB.

After the VMFS datastore is configured, if RDMs are required in the environment, follow VMware's steps in this KB to configure them.


RDM SCSI Inquiry Data

VMware has observed that some guest operating systems (the virtual machine's operating system) and/or applications require current SCSI INQUIRY data to operate without disruption to the guest or application. By default, for RDMs the guest OS gets SCSI INQUIRY data from the ESXi host rather than from the array directly. VMware's product documentation covers how an RDM device can be configured so that the guest OS ignores the ESXi host's cache and queries the array directly for this information. This can be found in this VMware KB.

The setting to ignore the inquiry cache may need to be set for RDMs that are leveraging ActiveCluster. This is particularly true when leveraging ActiveCluster for planned migration of RDMs between arrays, because the device states will change in a short amount of time and the guest OS or application may encounter stale data from the host cache. Whether this setting is needed depends on the guest OS or application requirements when using RDMs with ActiveCluster or ActiveDR.