Promoting an ActiveDR Pod in a VMware Environment
Test Recovery: Promoting a Target Pod
ActiveDR was specifically designed so that testing a recovery is the same operation as performing an actual recovery--this keeps tests consistent and directly exercises how an actual recovery will operate.
The overall process is as follows:
- The source pod is online and promoted.
- The target pod then gets promoted while the source pod remains promoted.
- The target pod is now available for I/O and presents the point-in-time of the source pod according to the available recovery point upon promotion.
- The source pod continues to replicate writes and object changes to the target array--these changes are not reflected in the target pod while it is promoted.
- Once the test is over, the target pod can be demoted, and any changes to objects in the target pod (new volumes, writes, snapshots, etc.) will be discarded. The next promotion will result in a refreshed copy of the source pod from the latest recovery point.
In this environment, there is a source pod (activeDRpodA) replicating to a target pod (activeDRpodB).
To test the latest recovery point, leave the source pod promoted. The source pod is identified by the replication direction arrow, which points AWAY from the source pod.
Navigate to the UI of the FlashArray that owns the target pod. The target pod should show as demoted.
In the upper-right, click on the vertical ellipsis and choose Promote.
This will promote the pod using the latest available recovery point, which in this example is 2020-11-03 08:33. Confirm the promotion.
Promotion immediately updates the pod's inventory with the data, objects, and object configuration, and makes them accessible for use.
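If you prefer to script the test, the same promotion can be driven through the FlashArray REST API. Below is a minimal sketch in Python using the requests library; the array address, API token, pod name, and REST version (2.4) are placeholders and assumptions for your environment--verify the endpoints against your Purity REST reference.

```python
import requests

ARRAY = "https://target-array.example.com"  # target FlashArray (placeholder)
API_TOKEN = "xxxx-xxxx-xxxx"                # API token for a user (placeholder)

# Exchange the API token for a session token (REST 2.x login).
login = requests.post(f"{ARRAY}/api/2.4/login",
                      headers={"api-token": API_TOKEN}, verify=False)
login.raise_for_status()
session = {"x-auth-token": login.headers["x-auth-token"]}

# Request promotion of the target pod; Purity promotes it from the
# latest available recovery point.
resp = requests.patch(f"{ARRAY}/api/2.4/pods",
                      params={"names": "activeDRpodB"},
                      json={"requested_promotion_state": "promoted"},
                      headers=session, verify=False)
resp.raise_for_status()
print(resp.json())
```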
To continue on with mounting datastores and registering VMs, follow the sections below.
Recovery: Promoting a Target Pod after Source Pod Demotion
If the source pod has been demoted and you promote the target pod, the process is generally called a "recovery" or a "failover": the workload has been brought down at the original site and is being brought up at a new site. To first demote a pod, refer to the instructions here:
Demoting an ActiveDR Pod in a VMware Environment
Once that is completed, the target pod can be brought online via a promotion operation.
Log in to the target FlashArray, click Storage > Pods, and identify the desired pod. Click the respective vertical ellipsis and choose Promote.
Confirm the promotion.
This takes the latest point-in-time of the pod, refreshes the volumes with that data set, and makes them readable and writable.
The pod will go into the promoting status briefly.
When the operation completes, it will have the status promoted.
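If you are scripting the recovery, you can poll the pod until the promotion completes before moving on to the VMware steps. A minimal sketch, with the same placeholder address, token, and API version assumptions as the earlier REST example:

```python
import time

import requests

ARRAY = "https://target-array.example.com"  # target FlashArray (placeholder)
API_TOKEN = "xxxx-xxxx-xxxx"                # placeholder

login = requests.post(f"{ARRAY}/api/2.4/login",
                      headers={"api-token": API_TOKEN}, verify=False)
session = {"x-auth-token": login.headers["x-auth-token"]}

# Poll until the pod's promotion_status settles on "promoted".
while True:
    pod = requests.get(f"{ARRAY}/api/2.4/pods",
                       params={"names": "activeDRpodB"},
                       headers=session, verify=False).json()["items"][0]
    if pod["promotion_status"] == "promoted":
        print("Pod is promoted and ready for I/O.")
        break
    print("Current status:", pod["promotion_status"])
    time.sleep(5)
```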
Once the pod has been promoted, ActiveDR will see that the source pod is demoted and automatically swap the replication direction. If you look at the replica link on either site, it is now replicating from the newly promoted pod back to the original source pod. There is no need to "re-establish" replication--ActiveDR does this for you.
Connecting Volumes in a Newly Promoted Pod
If the volumes are not connected, you can now connect them to the desired cluster(s). You may pre-connect volumes while the pod is still demoted, but you MUST have the ESXi personality set on the hosts prior to connecting demoted volumes. See the following KB for details:
Setting the ESXi Host Personality
To connect with the FlashArray GUI:
1) Identify a host group or host.
2) Click the vertical ellipsis, then Connect.
3) Choose the volumes to connect.
If you just connected the volumes, rescan the VMware clusters as needed to ensure the storage is seen.
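The rescan can also be scripted with pyVmomi (the VMware vSphere Python SDK). A minimal sketch, assuming placeholder vCenter credentials; it rescans every host in the inventory, so scope the container view to a cluster if you want to limit it:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    # Rescan all HBAs for new devices, then scan for new VMFS volumes.
    host.configManager.storageSystem.RescanAllHba()
    host.configManager.storageSystem.RescanVmfs()
view.Destroy()
Disconnect(si)
```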
Resignaturing Datastores
Once a promotion has occurred and the volumes are connected, they can be used in a VMware environment. VMFS datastores, however, will not simply appear upon connection--they need to be resignatured.
A resignature is required because the volume in the newly promoted pod is a new, distinct volume from its source volume. In other words, the serial number of the recovered volume in pod B is different from that of the original production volume in pod A. VMFS has a signature--this signature is how VMware tracks the uniqueness of a file system. The signature is computed from the serial number of the underlying volume and stored in the VMFS header.

So if a file system is copied block-for-block to a new volume (as is the case with ActiveDR), ESXi will ignore the volume when it first sees it: ESXi sees a file system identifier, but it does not match the serial number of the volume hosting it, so ESXi knows the file system has been copied. Mounting a VMFS with an invalid signature could lead to data corruption (same file system, different volume) because multipathing would not know to which device to send the data for that file system. Therefore, ESXi requires that you first "resignature" any VMFS with an invalid signature. This updates the signature to match the serial number and allows ESXi to see it as a distinct datastore--allowing the source datastore and many, many copies to be presented at once.
For ActiveDR, a promotion takes the data from the source-side volumes and copies it to new target-side volumes. Since this is a block-for-block copy, a resignature is needed.
To resignature, log in to the vSphere Client, right-click on an ESXi host or cluster, and choose Storage, then New Datastore.
1) Right-click on a host or cluster.
2) Choose Storage, then New Datastore.
3) Choose VMFS and click Next.
If you chose a cluster, select a host to perform the resignature. There is no need to specify a datastore name in this case, as the name will be automatically generated from the original VMFS name. The datastores available for resignature report the original VMFS name in the column called Snapshot Volume. Find the datastore you want to resignature, select it, and click Next. Choose Assign a new signature.
The Assign a new signature option is not always the default, so be careful in this wizard. Do not select Keep existing signature (that will fail if the original is still present), and do not choose Format the disk--Format the disk will delete all of the data on the VMFS volume.
1) Choose a host.
2) Find the volume with the correct snapshot name.
3) Choose Assign a new signature.
Complete the wizard and click Finish.
This will resignature the datastore, and it will be mounted on the hosts that see it.
The datastore will be mounted with a default name: a prefix assigned by VMware in the form of "snap-XXXXXXXX-" plus the original VMFS name. There is no way to prevent VMware from adding this prefix and simply using the original name. Once the volume has been resignatured, you can rename it to whatever you prefer.
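When many datastores need to be recovered, the resignature can be scripted with pyVmomi. A minimal sketch with placeholder vCenter and host names; note that it resignatures every unresolved VMFS copy the host can see, so filter the list first if other snapshot copies are presented to the same host:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi-01.example.com")
storage = host.configManager.storageSystem

# Unresolved volumes are VMFS copies whose signature no longer matches
# the volume hosting them, which is exactly the state of a newly promoted pod.
for unresolved in storage.QueryUnresolvedVmfsVolume():
    print("Resignaturing copy of:", unresolved.vmfsLabel)
    spec = vim.host.UnresolvedVmfsResignatureSpec(
        extentDevicePath=[e.devicePath for e in unresolved.extent])
    WaitForTask(storage.ResignatureUnresolvedVmfsVolume_Task(spec))
view.Destroy()
Disconnect(si)
```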
Right-click on the datastore, choose Rename, then supply the new name (a scripted rename follows the steps below).
1) Right-click on the datastore and choose Rename.
2) Enter a unique name and click OK.
3) The datastore will now reflect the new name.
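Removing the "snap-XXXXXXXX-" prefix can likewise be scripted. A minimal sketch with the same placeholder vCenter assumptions; the rename fails if a datastore with the target name already exists (for example, if the original is visible in the same vCenter), so adjust the naming logic as needed:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.Datastore], True)
for ds in view.view:
    # "snap-1a2b3c4d-originalName" -> "originalName"
    if ds.name.startswith("snap-"):
        WaitForTask(ds.Rename_Task(newName=ds.name.split("-", 2)[2]))
view.Destroy()
Disconnect(si)
```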
The last step is to re-register and power on the VMs. The full process is beyond the scope of this document. An example:
1) Right-click the datastore and choose Register VM.
2) Identify the target VM folder and choose the VMX file.
3) Specify a name and folder.
4) Choose a host.
The above process registers one virtual machine--repeat it for each VM you need to register. You may also need to update VM networks, tags, policies, and more, depending on your environment. Consult VMware documentation for details.
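Registration can also be automated. A minimal pyVmomi sketch; the datacenter and cluster selection, datastore name, and VMX path are placeholders to adapt to your inventory:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
dc = content.rootFolder.childEntity[0]   # first datacenter (adjust as needed)
cluster = dc.hostFolder.childEntity[0]   # first cluster (adjust as needed)

# Register a VM from its VMX path on the recovered datastore; the name
# in the VMX file is used when no name is supplied.
WaitForTask(dc.vmFolder.RegisterVM_Task(
    path="[recovered-datastore] myvm/myvm.vmx",  # placeholder path
    asTemplate=False,
    pool=cluster.resourcePool,
    host=cluster.host[0]))
Disconnect(si)
```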
Updating Raw Device Mappings
For failed-over virtual machines that have raw device mappings, a separate, additional process is needed to replace the old raw device mapping with the new one. When ActiveDR recovers the volumes in the target pod, they all have new serial numbers (they are different volumes with the same data). A Raw Device Mapping (RDM) presents an array block volume directly to a virtual machine: a virtual disk pointer file is created that tells the VM the path and address of the physical (raw) device--a literal mapping of a raw device. After failover, the RDM is no longer correct--it maps to a volume serial number that does not exist in the target pod.
If you power on a VM with an RDM that points to a missing FlashArray volume, you will see an error that says "Unable to enumerate all disks."
If you look at the virtual disks, one (or more) will report 0 capacity--these are the missing RDMs.
The first step is to remove the stale pointers. Find the virtual machine, click the Actions dropdown, and choose Edit Settings. Find the RDM virtual disk listing and click the x that appears on the right to remove it. Be sure to select Delete files from datastore so that the mapping file is removed.
1) Select the VM Actions dropdown and choose Edit Settings.
2) Remove the RDM pointer by clicking the X next to the hard disk listing.
3) Choose the Delete files from datastore option and click OK.
Now add the RDM back. The FlashArray volume will have the same name as the source RDM volume, so look for the volume in the newly promoted pod and identify its serial number (a scripted way to match the serial to an ESXi device follows the steps below).
Back in the vSphere Client, click the Actions dropdown of the virtual machine, choose Edit Settings, click Add New Device, then RDM Disk. Choose the RDM that matches the serial number, click OK, and then OK again to add the RDM.
1) Select the VM Actions dropdown and choose Edit Settings.
2) Choose Add New Device, then RDM Disk.
3) Pick the correct volume and click OK.
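Matching a FlashArray serial number to the device an ESXi host sees can be tedious by hand. A minimal pyVmomi sketch; the serial, vCenter, and host name are placeholders, and it relies on the fact that FlashArray volumes surface with an NAA identifier of naa.624a9370 followed by the volume serial number:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VOLUME_SERIAL = "ABCDEF012345678900112233"  # serial from the FlashArray GUI (placeholder)

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi-01.example.com")

# FlashArray volumes present as naa.624a9370 + the lowercase volume serial.
target = ("naa.624a9370" + VOLUME_SERIAL).lower()
for lun in host.configManager.storageSystem.storageDeviceInfo.scsiLun:
    if lun.canonicalName.lower() == target:
        print("Matching device:", lun.canonicalName, "->", lun.deviceName)
view.Destroy()
Disconnect(si)
```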
Re-Promoting a Source Pod
There are three situations where you would want to promote a source pod (or a pod that was recently a source pod):
- A source pod is promoted and the target is demoted. You bring down all of the VMs, prepare the storage for demotion, and demote the source. Before promoting the target, you realize this was a mistake and need to bring the source pod back online.
- A source pod is promoted and the target is demoted. You bring down all of the VMs, prepare the storage for demotion, and demote the source. You then promote the target pod. Less than 24 hours (or the configured eradication window) has elapsed since demotion.
- A source pod is promoted and the target is demoted. You bring down all of the VMs, prepare the storage for demotion, and demote the source. You then promote the target pod, resignature the datastores, and power on the virtual machines, and then realize this was a mistake and need to bring the original source pod back online. More than 24 hours (or the configured eradication window) has elapsed since demotion.
Let's walk through these scenarios one by one.
Scenario #1
In this scenario, the VMware environment has been brought down, the source pod has been demoted, and the remote pod is also demoted.
At this point, it is decided that the environment does not need to be recovered on the remote pod, but instead restored on the source pod (activeDRpodA in this example).
To do so, go to the source pod's FlashArray, identify the pod, and click the vertical ellipsis in the top right-hand corner. Click Promote.
The following window will appear--since the data has not changed, you do not need to select the Promote from undo pod option if it appears.
Scenario #2
In this scenario, the VMware environment has been brought down, the source pod has been demoted, and the remote pod has been promoted.
At this point, data might or might not have changed on the newly promoted pod. Before you bring the environment back up on the original source pod, you will need to demote the original remote pod. Follow the instructions here to make sure all VMs using that pod are shut down and removed:
Demoting an ActiveDR Pod in a VMware Environment
If new data has been written to the original remote pod and it should be used in the recovery, follow the normal steps to demote and promote ActiveDR pod pairs.
If you wish to discard any changes and reset to where the environment was at the point of demoting the original source pod, continue on.
Once you have verified that the environment using the original remote pod (activeDRpodB in this case) is shut down, go to the pod, click the vertical ellipsis, and choose Demote.
Choose Skip Quiesce if you are 100% certain that any changes on the original remote pod (activeDRpodB in this case) are not needed.
This will demote the pod.
Note that the point-in-time at demotion will be stored in an undo pod for the period of the configured eradication window (usually 24 hours). So if you decide the changed data is needed, there is a restore point (though if you chose Skip Quiesce, that point-in-time is NOT protected to the opposing array).
The next step is to promote the original source pod.
To do so, go to the source pod's FlashArray, identify the pod, and click the vertical ellipsis in the top right-hand corner. Click Promote.
The following window will appear--select the undo pod to restore from. This will reset the original source pod to the state it was in when it was last demoted. If there is no undo pod, that option will not appear, and the pod will be restored from the latest point-in-time it has access to.
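The undo-pod promotion can also be scripted through the REST API. A minimal sketch; note that the promote_from parameter and the undo pod naming convention (<pod>.undo-demote) shown here are assumptions to verify against your Purity version's REST reference, along with the placeholder address, token, and API version:

```python
import requests

ARRAY = "https://source-array.example.com"  # original source FlashArray (placeholder)
API_TOKEN = "xxxx-xxxx-xxxx"                # placeholder

login = requests.post(f"{ARRAY}/api/2.4/login",
                      headers={"api-token": API_TOKEN}, verify=False)
session = {"x-auth-token": login.headers["x-auth-token"]}

# Promote the original source pod from its undo pod, restoring the
# point-in-time captured when it was demoted. promote_from and the
# ".undo-demote" suffix are assumptions to verify for your Purity release.
resp = requests.patch(f"{ARRAY}/api/2.4/pods",
                      params={"names": "activeDRpodA",
                              "promote_from": "activeDRpodA.undo-demote"},
                      json={"requested_promotion_state": "promoted"},
                      headers=session, verify=False)
resp.raise_for_status()
print(resp.json())
```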
Now connect the volumes to the hosts and host groups and rescan the ESXi cluster(s) as needed. Right-click the datacenter(s), cluster(s), or ESXi host(s), then choose Storage > Rescan Storage. Click OK to rescan.
1) Right-click the inventory object.
2) Choose Storage > Rescan Storage.
3) Select all options and click OK.
Use the following process to identify the storage you need to reconnect:
https://support.purestorage.com/Solu...vSphere_Client
Ensure that each volume is attached. Click on an ESXi host, then Configure, then Storage Devices. Select the NAA devices in the pod and click the Attach button (a scripted version follows the steps below).
1) Choose an ESXi host.
2) Go to Configure, then Storage Devices.
3) Select a device and attach it.
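Re-attaching many devices across many hosts is easier to script. A minimal pyVmomi sketch with placeholder vCenter credentials; it assumes detached devices report an operational state of "off" and matches Pure devices by their naa.624a9370 prefix, so verify both against your environment:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    storage = host.configManager.storageSystem
    for lun in storage.storageDeviceInfo.scsiLun:
        # Attach detached FlashArray devices (naa.624a9370 prefix).
        if (lun.canonicalName.startswith("naa.624a9370")
                and "off" in lun.operationalState):
            print(f"Attaching {lun.canonicalName} on {host.name}")
            storage.AttachScsiLun(lunUuid=lun.uuid)
view.Destroy()
Disconnect(si)
```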
Now rescan again; this time only the VMFS option is needed:
1) The datastores are not yet mounted.
2) Choose Rescan Storage.
3) Rescan for new VMFS volumes.
The datastores will now appear as inaccessible.
Right-click on the datastores, one by one, and choose Mount (a scripted version follows the steps below).
1) Right-click a datastore and choose Mount.
2) Choose the hosts to mount the datastore to and click OK.
3) Repeat for each datastore.
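Mounting can also be scripted with pyVmomi. A minimal sketch with placeholder vCenter credentials; it walks every VMFS datastore in the inventory and mounts it on any host that reports it as unmounted, so scope the container view if you want to limit it:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.Datastore], True)
for ds in view.view:
    if not isinstance(ds.info, vim.host.VmfsDatastoreInfo):
        continue  # only VMFS datastores have a VMFS UUID to mount
    for mount in ds.host:
        if not mount.mountInfo.mounted:
            storage = mount.key.configManager.storageSystem
            storage.MountVmfsVolume(vmfsUuid=ds.info.vmfs.uuid)
            print(f"Mounted {ds.name} on {mount.key.name}")
view.Destroy()
Disconnect(si)
```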
To register the VMs and update any RDMs, follow the steps in the Resignaturing Datastores and Updating Raw Device Mappings sections above.
Scenario #3
Scenario three means there is no longer a restore point on the original site: the eradication window has passed and the undo pod has been permanently removed. If the data written on the currently promoted side is to be kept, the process to restore to the original site is identical to an intended failover. Follow the steps in the demotion KB:
Demoting an ActiveDR Pod in a VMware Environment
and then follow this KB from the top.
If you would like to revert to a specific point-in-time on the formerly promoted site, follow the normal failover procedure, restore individual volumes from specific snapshots as necessary, and then follow the normal recovery steps for those datastores (resignature, register VMs, power on).