Pure Technical Services

Promoting an ActiveDR Pod in a VMware environment

Test Recovery: Promoting a Target Pod

Recovery: Promoting a Target Pod after Source Pod Demotion

If the source pod has been demoted and you promote the target pod, this process is generally called a "recovery" or a "failover". The workload has been brought down on the original site and is being brought up at a new site. To first demote a pod, refer to the instructions here:

Demoting an ActiveDR Pod in a VMware Environment

Once that is completed, the target pod can be brought online via a promotion operation.

Log in to the target FlashArray, click Storage > Pods, and identify the desired pod. Click the respective vertical ellipsis and choose Promote.

clipboard_eb0e242c0483fc57214ff89f28ad6e8b4.png

Confirm the promotion.

clipboard_efe7820198e32f313f5f36db1e6654dba.png

This takes the latest point-in-time of the pod, refreshes the volumes with that data set, and makes them read/writeable.
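The same promotion can also be performed from the Purity CLI over SSH. A minimal sketch, assuming a target pod named activeDRpodB (verify the exact options with `purepod promote --help` on your Purity version):

```shell
# Run on the target FlashArray (pod name is this article's example name).
# Promote the target pod:
purepod promote activeDRpodB

# Watch the pod status change from "promoting" to "promoted":
purepod list activeDRpodB
```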

The pod will go into the promoting status briefly:

clipboard_e5b2bf089e3c4b40e411f4071bead7fdb.png

When completed it will have the status promoted:

clipboard_e79b90e6a17f97eabe096e09397eecbd5.png

Once the pod has been promoted, ActiveDR will see that the source pod is demoted and automatically swap the replication direction. If you look at the replica link on either site, you will see it is now replicating from the newly promoted pod back to the original source pod. There is no need to "re-establish" replication--ActiveDR does this for you.

clipboard_eefe72669f5eabb4ca87074419445c364.png

If the volumes are not connected, you can now connect them to the desired cluster(s). You may pre-connect volumes while the pod is still demoted, but you MUST set the ESXi personality on the hosts prior to connecting demoted volumes. See the following KB for details:

Setting the ESXi Host Personality

To connect with the FlashArray GUI:

1) Identify a host group or host 2) Click the vertical ellipsis and then Connect  3) Choose the volumes to connect.
clipboard_e840ec9615ae3c76bc5ae6dca242cd039.png clipboard_e07f5ffbe4f0ed0a6469b6c9fefbc339d.png clipboard_e24688834fe5340e1c6fe957abbae77e9.png
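These connection steps can also be sketched from the Purity CLI. The host, host group, and volume names below are hypothetical; volumes inside a pod are addressed as `<pod>::<volume>`:

```shell
# Run on the target FlashArray. Names below are placeholders for illustration.

# Set the ESXi personality on each host BEFORE connecting demoted volumes:
purehost setattr --personality esxi esxi-host-01

# Connect a pod volume to a host group:
purehgroup connect --vol activeDRpodB::vmfs-ds-01 prod-hgroup
```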

If you just connected the volumes, rescan the VMware clusters as needed to ensure the storage is seen:

clipboard_e242a3a6bef8caf06c702d7830b3cd570.png
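If you prefer to rescan from the command line, the equivalent on each ESXi host (via SSH) is:

```shell
# Rescan all storage adapters for newly connected volumes:
esxcli storage core adapter rescan --all
```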

Resignaturing Datastores

Once a promotion has occurred and the volumes are connected, they can be used in a VMware environment. However, VMFS datastores will not simply appear upon connection--they must first be resignatured.

A resignature is required because the volume in the newly promoted pod is a new, distinct volume from its source volume. In other words, the serial number of the recovered volume in pod B is different from that of the original production volume in pod A. VMFS has a signature--this signature is how VMware tracks the uniqueness of a file system. The signature is computed from the serial number of the underlying volume and stored in the VMFS header. So if a file system is copied block-for-block to a new volume (as in the case of ActiveDR), ESXi will ignore the volume when it first sees it: ESXi sees a file system identifier, but it does not match the serial number of the volume hosting it. ESXi therefore knows the file system has been copied--mounting a VMFS with an invalid signature could lead to data corruption (same file system, different volume) because multipathing would not know which device to send the I/O for that file system. In this situation ESXi requires that you first "resignature" any VMFS with an invalid signature. This updates the signature to match the serial number and allows ESXi to see it as a distinct datastore--allowing the source datastore and many, many copies to be presented at once.

For ActiveDR a promotion will take the data from the source side volumes and copy it to new target side volumes. Since this is a block-for-block copy a resignature is needed.

To resignature, login to the vSphere Client and right-click on an ESXi host or cluster and choose Storage then New Datastore.

1) Right-click on a host or cluster. 2) Choose Storage, then New Datastore. 3) Choose VMFS and click Next.
clipboard_eac37cb8d2335bfd7bfddf223390c91fc.png clipboard_e6a6e6c239f5b2da280b10b09bf34401e.png clipboard_e78494df6dbb1bcb641ec1e2956a1b436.png

If you chose a cluster, select a host to perform the resignature. There is no need to specify a datastore name in this case, as the name will be generated automatically from the original VMFS name. The datastores available for resignature report the original VMFS name in the column called Snapshot Volume. Find the datastore you want to resignature, select it, and click Next. Choose Assign a new signature.

The Assign a new signature option is not always the default, so be careful in this wizard. Do not select Keep existing signature (that will fail if the original is still present) and do not choose Format the disk--Format the disk will delete all of the data on the VMFS volume.

1) Choose a host 2) Find the volume with the correct snapshot name 3) Choose Assign a new signature
clipboard_ea3b957f197e66a379d5b28c68f43bbc5.png clipboard_e254a0a04c15b6feac077491b652e17f1.png clipboard_ea6823e27282cb83db9c4acd64086235f.png

Complete the wizard and click Finish.

clipboard_e1c0f972c9ab02e0dc689c2617dcce4b8.png

This will resignature the datastore and it will be mounted on the hosts that see it. 

clipboard_edde1457ee6d742937a512a5896c54538.png
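The resignature can also be performed per host with esxcli. A sketch, assuming a hypothetical original volume label of activeDR-ds-01:

```shell
# Run on a single ESXi host. List unresolved VMFS copies ("snapshots")
# along with their original volume labels:
esxcli storage vmfs snapshot list

# Resignature a copy by its original label (placeholder label shown):
esxcli storage vmfs snapshot resignature -l activeDR-ds-01
```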

As shown in the above image, the datastore will be mounted with a default name: a prefix assigned by VMware in the form of "snap-XXXXXXXX-" followed by the original VMFS name. There is no way to prevent VMware from adding this prefix. Once the volume has been resignatured, you can rename it to whatever you prefer.

Right-click on the datastore, choose rename then supply the new name.

1) Right-click on the datastore and choose Rename. 2) Enter a unique name and click OK. 3) The datastore will now reflect the new name.
clipboard_ea62d36c41cab4ff998c1ac6c278e6914.png clipboard_e52303def72f3b44586fc57e9e34f91df.png clipboard_ef6b1415bc4da7038ef1357d983c7d00d.png

The last step is to re-register and power-on the VMs. The full process to do this is beyond the scope of this document. An example:

1) Right-click the datastore and choose Register VM 2) Identify the target VM folder and choose the VMX file. 3) Specify a name and folder 4) Choose a host
clipboard_ed635c79664adf674e197bc502dbbbbf8.png clipboard_eaa4ec52e7c6bc34ebc0efa168386d6f6.png clipboard_e232a1d5f1402243dae878469f12bb1f2.png clipboard_e1e074f58e767640c0e636149585199c8.png

The above process will register the virtual machine--repeat for each one you need to register. You may also need to change VM Networks, Tags, Policies, and more. This all is dependent on your environment. Consult VMware documentation for details.
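For scripted recoveries, VMs can also be registered and powered on directly from an ESXi host shell. The datastore and VM paths below are hypothetical (note the "snap-" prefix remains until the datastore is renamed):

```shell
# Register a VM by its VMX path; on success this prints the new VM ID:
vim-cmd solo/registervm /vmfs/volumes/snap-1a2b3c4d-activeDR-ds-01/app-vm-01/app-vm-01.vmx

# Power it on using the VM ID returned above (12 is a placeholder):
vim-cmd vmsvc/power.on 12
```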

 

 

Updating Raw Device Mappings

For virtual machines that have been failed over with raw device mappings, an additional process must be followed to replace each old raw device mapping with a new one. When ActiveDR recovers the volumes in the target pod, they all have new serial numbers (they are different volumes with the same data). A Raw Device Mapping (RDM) presents an array block volume directly to a virtual machine: a virtual disk pointer file is created that tells the VM the path and address of the physical (raw) device--a literal mapping of a raw device. When that is failed over, the RDM is no longer correct--it maps to a volume serial number that does not exist in the target pod.

If you power on a VM with an RDM that points to a FlashArray volume, you will see an error that says "Unable to enumerate all disks."

clipboard_eff5b6eb997b46cfe7cf7b6240b930f61.png

If you look at the virtual disks, one (or more) will report 0 capacity--these are missing RDMs. clipboard_ea692b25529f394268501f1a37c4e3f39.png

The first step is to remove the pointers. Find the virtual machine, click on the Actions dropdown, and choose Edit Settings. Find the RDM virtual disk listing and click the x that appears on the right to remove it. Select Delete files from datastore so that the mapping file is removed.

1) Select the VM actions dropdown and choose Edit Settings 2) Remove the RDM pointer by clicking the X in the circle next to the hard disk listing 3) Choose the Delete files from datastore option and click OK.
clipboard_ef8129335290c9f5c0e780f1e033ee05f.png clipboard_eedd6df20af10a4be8d384390660b9cca.png clipboard_e66ff41196a20b9e9ed8ffee7799881bf.png

Now add the RDM back. The FlashArray volume will have the same name as the source RDM volume, so look for it in the newly promoted pod and note its serial number.

clipboard_e117718f2e64ffd6af3a5f111a6a03b6e.png

Now, back in the vSphere Client, click on the Actions dropdown of the virtual machine, choose Edit Settings, then click Add New Device, then RDM Disk. Choose the RDM that matches the serial number, click OK, and OK again to add the RDM.

1) Select the VM actions dropdown and choose Edit Settings 2) Choose Add New Device then RDM Disk 3) Pick the correct volume and click OK.
clipboard_ef8129335290c9f5c0e780f1e033ee05f.png clipboard_e63ebbadeb8973bf819ccd5f259afb4de.png clipboard_e76b4f34136a2dd6f1809b0d8d56592c3.png
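A host-side alternative is to recreate the RDM pointer file with vmkfstools. This is a sketch with hypothetical names; FlashArray volumes appear to ESXi as `naa.624a9370` followed by the volume serial number, which you can look up on the array (e.g. `purevol list activeDRpodB::rdm-vol-01`):

```shell
# Create a physical-mode (passthrough) RDM pointer file on a datastore.
# -z = physical mode; use -r instead for virtual mode. Replace <serial>
# with the actual volume serial from the newly promoted pod.
vmkfstools -z /vmfs/devices/disks/naa.624a9370<serial> \
  /vmfs/volumes/activeDR-ds-01/app-vm-01/app-vm-01-rdm.vmdk
```

Attach the resulting pointer file to the VM as an existing hard disk to complete the mapping.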

 

Re-Promoting a Source Pod

There are three situations where you would want to promote a source pod (or a pod that was recently a source pod):

  1. A source pod is promoted and the target is demoted. You then bring down all of the VMs, prepare the storage for demotion, and finally demote the source. Prior to promoting the target, you realize this was a mistake and need to bring the source pod back online.
  2. A source pod is promoted and the target is demoted. You then bring down all of the VMs, prepare the storage for demotion, and finally demote the source. You then promote the target pod. Less than 24 hours (or the configured eradication window) has elapsed since demotion.
  3. A source pod is promoted and the target is demoted. You then bring down all of the VMs, prepare the storage for demotion, and finally demote the source. You then promote the target pod. You proceed to resignature the datastores and power on the virtual machines, then realize this was a mistake and need to bring the original source pod back online. More than 24 hours (or the configured eradication window) has elapsed since demotion.

Let's walk through these scenarios one by one.

Scenario #1

In this scenario the VMware environment has been brought down and the source pod has been demoted and the remote pod is also demoted:

Source pod is demoted Remote pod is demoted
clipboard_ea95a6462e284202edd94b63822d35a09.png clipboard_ebfb1d5c5f695c600e78525325927e353.png

At this point it is decided that the environment does not need to be recovered on the remote pod but instead restored on the source pod (activeDRpodA in this example).

To do so, go to the source pod's FlashArray, identify the pod, and click the vertical ellipsis in the top right-hand corner. Click Promote.

clipboard_e294b6f632407e7ff89bf4ef394b6017e.png

The following window will appear--since the data has not changed, you do not need to select the promote from undo pod option if it appears.

clipboard_ee1026e85cf0cdbd890b0c737fc47c358.png
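From the Purity CLI, this scenario reduces to a single command on the original source array (pod name from this article's example):

```shell
# Re-promote the original source pod; no undo pod selection is needed
# because the data has not changed:
purepod promote activeDRpodA
```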

 

 

Scenario #2

In this scenario the VMware environment has been brought down, the source pod has been demoted, and the remote pod has been promoted:

Original source pod is demoted Original remote pod is promoted
clipboard_ef4e3d95d5acc26dd3afe17aa887cd3a8.png clipboard_e3b28aab68f4908e76d0ec5792e1131af.png

At this point, data might or might not have been changed on the newly promoted pod. Before you bring the environment back up on the original source pod, you will need to demote the original remote pod. Follow the instructions here to make sure all VMs using that pod are shut down and removed:

Demoting an ActiveDR Pod in a VMware Environment

If new data has been written on the original remote pod and it should be used in the recovery, follow the normal steps to demote and promote ActiveDR pod pairs.

If you wish to discard any changes and reset to where the environment was at the point of demoting the original source pod, continue on.

Once you have verified the environment using the original remote pod (activeDRpodB in this case) is shut down--go to the pod, click the vertical ellipsis, and choose Demote.

clipboard_e331f2bfd3f161dfa10327ce0faf84a7f.png

Choose Skip Quiesce if you are 100% certain that any changes on the original remote pod (activeDRpodB in this case) are not needed.

clipboard_e60419019069cc841b98cac3daa41a2ce.png

This will demote the pod.

clipboard_eaad5cd4c34957c8cab96702bf5ecb8e8.png

Note that the point-in-time captured upon demotion will be stored in an undo pod for the duration of the configured eradication window (usually 24 hours). So if you decide the changed data is needed, there is a restore point (though if you chose Skip Quiesce, that point-in-time is NOT protected on the opposing array).

clipboard_edf42ab33c3cda8ccf24964542e7f523a.png
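The demotion can also be sketched from the Purity CLI on the array hosting the original remote pod (confirm the flag names with `purepod demote --help` on your Purity version):

```shell
# Demote the original remote pod, discarding un-replicated changes.
# Only use --skip-quiesce if you are certain the changes are not needed:
purepod demote --skip-quiesce activeDRpodB

# The demoted point-in-time is retained in an undo pod for the
# eradication window; list pods to confirm status:
purepod list
```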

The next step is to promote the original source pod.

To do so, go to the source pod's FlashArray, identify the pod, and click the vertical ellipsis in the top right-hand corner. Click Promote.

clipboard_e294b6f632407e7ff89bf4ef394b6017e.png

The following window will appear--select the undo pod to restore from. This will reset the original source pod to the state it was in when it was last demoted. If there is no undo pod, that option will not appear and the pod will be restored from the latest point-in-time it has access to.

clipboard_e234c0b697612375b02090151fe30033d.png
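A CLI sketch of promoting from the undo pod. The syntax below is an assumption--confirm it with `purepod promote --help` on your Purity version; the undo pod is typically named `<pod>.undo-demote`:

```shell
# Promote the original source pod from its undo pod, resetting it to the
# state captured at its last demotion:
purepod promote --promote-from activeDRpodA.undo-demote activeDRpodA
```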

Now connect the volumes to the host and host groups and rescan the ESXi cluster(s) as needed. Right-click the datacenter(s), cluster(s) or ESXi host(s) and then Storage > Rescan Storage. Click OK to rescan.

1) Right click inventory object 2) Choose Storage > Rescan Storage 3) Select all options and click OK.
clipboard_e63a13ad9febccde16f44b7f7a0c698e4.png clipboard_ec8fe54268a313edce9abeadd16637b59.png clipboard_e768b081886893ad875e4b7e26df7223e.png

Use this process to identify the storage you need to reconnect:

https://support.purestorage.com/Solu...vSphere_Client 

For each volume, ensure that it is attached. Click on an ESXi host, then Configure, then Storage Devices. For the NAA devices in the pod, select them and click the Attach button.

1) Choose an ESXi host 2) Go to Configure then Storage Devices 3) Select a device and attach it.
clipboard_ec9667dbef8acb9a766adab8dd825512d.png clipboard_e8e20889e51b02e6cf170361c87a55467.png clipboard_e5b7509821f0eea9269c2884da6e69a07.png
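Attaching a detached device can also be done per host with esxcli (the NAA identifier below is a placeholder):

```shell
# Show the device and its current attach state:
esxcli storage core device list -d naa.624a9370<serial>

# Attach the device (--state=off would detach it):
esxcli storage core device set -d naa.624a9370<serial> --state=on
```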

 

Now rescan again, but only the VMFS option is needed:

1) Datastores are not yet mounted 2) Choose Rescan Storage 3) Rescan for new VMFS volumes
clipboard_e91117aa7295a3e7004130e9256112ff6.png clipboard_ea05231ae9ba9f6e23d2a06ffaef0f4e5.png clipboard_e4188830f52cdfff3c2eccdc9eb5f623a.png

The datastores will now appear as inaccessible.

clipboard_e03e4d1f49d8ddf42a20a83872a042bf5.png

Right-click on the datastores, one by one, and choose Mount.

1) Right-click a datastore and choose Mount 2) Choose the hosts to mount the datastore to and click OK 3) Repeat for each datastore
clipboard_e1243d86b365c383d1120a11e1bfa1a3d.png clipboard_e3e5aca7da16b345333a1a5a9a377ae9b.png clipboard_ee4ae3938fba713727aa0ce4bc5617494.png
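The mounts can also be sketched per host with esxcli. No resignature is needed in this scenario because the pod was restored to its original state, so the VMFS signatures still match the volume serial numbers (label below is hypothetical):

```shell
# List file systems; unmounted VMFS volumes show "Mounted: false":
esxcli storage filesystem list

# Mount a datastore by its volume label:
esxcli storage filesystem mount -l activeDR-ds-01
```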

To register VMs and update RDMs, follow the registration example and the Updating Raw Device Mappings section earlier in this document, repeating for each virtual machine you need to recover.

Scenario #3

Scenario three means that there is no longer a restore point on the original site, as the eradication window has passed and the undo pod has been permanently removed. If the data written on the currently promoted side is to be kept, the process to restore to the original site is identical to an intended failover. So you would follow the steps in the demotion KB:

Demoting an ActiveDR Pod in a VMware Environment 

and then this KB from the top.

If you would like to revert to a specific point-in-time on the formerly promoted site, you will need to follow the normal failover procedure and restore individual volumes from specific snapshots as necessary and then follow the normal recovery steps of those datastores (resignature, register VMs, power-on).