How-To: Manually Restoring a Virtual Machine in VMware
Overview
In Pure Storage's vSphere plugin release 5.2.0, this process is now greatly simplified by using a VMFS workflow built into the plugin. Read more about the workflow here.
This article is for restoring a virtual machine from a Pure Storage FlashArray snapshot of a VMFS Datastore only. It does not apply to VMs on vVols, other third-party snapshot recovery processes, or Pure Storage FlashBlade. For restoring or undeleting a VM on a vVol datastore, vSphere plugin release 5.1.0 simplifies the process with a built-in workflow.
How to Restore the Virtual Machine
Please follow the steps outlined below to successfully restore a virtual machine from a Pure Storage FlashArray snapshot:
- Identify the VMware datastore that contained the problematic VM before the issue occurred. This can be accomplished from the ESXi host CLI, the vCenter GUI, the FlashArray CLI, or the FlashArray GUI.
- Here are some examples:
Please note that these are different from the snapshot and VM example below; this is an example of correlating a datastore to a FlashArray volume.
vCenter GUI
- Navigate to the Datastores tab and select the datastore you want, then click on the Configure tab and the Device Backing option.
- Here you'll see that the Device Backing is naa.624a937098d1ff126d20469c000199eb
- On the array, you will be looking for the Volume with the Serial 98d1ff126d20469c000199eb.
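As a quick sanity check, the serial can be derived from the device backing ID itself, since Pure Storage devices share the 624a9370 OUI prefix. A minimal shell sketch using the device ID from the example above:

```shell
# Device backing ID from the example above
NAA="naa.624a937098d1ff126d20469c000199eb"
# Pure Storage devices share the OUI prefix 624a9370; the remaining
# 24 hex characters are the FlashArray volume serial.
SERIAL="${NAA#naa.624a9370}"
echo "$SERIAL"
```

This prints the 24-character serial you would then look up on the array.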
FlashArray GUI
A little less specific than using the FlashArray CLI: you'll need to know the name of the volume, or be prepared to look through a few volumes.
- Log into the FlashArray GUI, go to the Storage tab, and then the Volumes tab.
- From here, click on the volume you want to confirm correlates to that datastore.
- Here I'm looking at the FlashArray Volume that correlates to the device backing of naa.624a937098d1ff126d20469c000199eb
- Note the Serial matches: 98d1ff126d20469c000199eb
The easiest way to do this is from the ESXi CLI and FlashArray CLI.
ESXi CLI:
Locate the Datastore via esxcfg-scsidevs:

[root@ESXi-4:~] esxcfg-scsidevs -m
naa.624a937073e940225a2a52bb0003ae71:3  /vmfs/devices/disks/naa.624a937073e940225a2a52bb0003ae71:3  5b6b537a-4d4d8368-9e02-0025b521004f  0  ESXi-4-Boot-Lun
naa.624a9370bd452205599f42910001edc7:1  /vmfs/devices/disks/naa.624a9370bd452205599f42910001edc7:1  5b7d8183-f78ed720-ddf0-0025b521004d  0  sn1-405-25-Content-Library-Datastore
naa.624a9370bd452205599f42910001edc8:1  /vmfs/devices/disks/naa.624a9370bd452205599f42910001edc8:1  5b7d8325-b1db9568-4d28-0025b521004d  0  sn1-405-25-Datastore-1-LUN-150
naa.624a937098d1ff126d20469c000199ea:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199ea:1  5b7d78d7-2b993f30-6902-0025b521004d  0  sn1-405-21-ISO-Repository
naa.624a937098d1ff126d20469c000199eb:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199eb:1  5b7d8309-56bd3d78-a081-0025b521004d  0  sn1-405-21-Datastore-1-LUN-100
naa.624a937098d1ff126d20469c0001aad1:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c0001aad1:1  5b8f115f-2b499358-c2e1-0025b521004d  0  prod-sn1-405-c12-21-SRM-Placeholder
naa.624a937098d1ff126d20469c0001ae66:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c0001ae66:1  5b901f7e-a6bc0094-c0a3-0025b521003c  0  prod-sn1-405-c12-21-SRM-Datastore-1
naa.624a937098d1ff126d20469c00024c2e:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c00024c2e:1  5b96f277-04c3317f-85db-0025b521003c  0  Syncrep-sn1-405-prod-srm-datastore-1
naa.624a937098d1ff126d20469c00024c33:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c00024c33:1  5b96f28b-57eedbc0-ce59-0025b521004d  0  Syncrep-sn1-405-dev-srm-datastore-1
naa.624a9370bd452205599f42910003f8d8:1  /vmfs/devices/disks/naa.624a9370bd452205599f42910003f8d8:1  5ba8fbd3-f5f7b06e-5286-0025b521004f  0  Syncrep-sn1-405-prod-srm-datastore-2

[root@ESXi-4:~] esxcfg-scsidevs -m | grep "sn1-405-21-Datastore-1-LUN-100"
naa.624a937098d1ff126d20469c000199eb:1  /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199eb:1  5b7d8309-56bd3d78-a081-0025b521004d  0  sn1-405-21-Datastore-1-LUN-100
FlashArray CLI:

pureuser@sn1-405-c12-21> purevol list
Name                                             Size  Source                                                          Created                  Serial
dev-sn1-405-21-Datastore-1-LUN-41                5T    -                                                               2018-09-04 10:42:27 PDT  98D1FF126D20469C0001A6A1
prod-sn1-405-21-Datastore-1-LUN-100              15T   -                                                               2018-08-22 09:07:13 PDT  98D1FF126D20469C000199EB
prod-sn1-405-21-Prod-Cluster-RDM-FileShare-1     10T   -                                                               2018-08-24 20:24:02 PDT  98D1FF126D20469C00019ABD
prod-sn1-405-21-Prod-Cluster-RDM-Quorum-Witness  1G    -                                                               2018-08-24 20:23:37 PDT  98D1FF126D20469C00019ABC
prod-sn1-405-21-srm-datastore-1                  5T    sn1-405-c12-25:prod-sn1-405-21-srm-datastore-1-puresra-demoted  2018-09-05 11:24:15 PDT  98D1FF126D20469C0001AE66
prod-sn1-405-21-srm-placeholder                  100G  -                                                               2018-09-04 16:08:15 PDT  98D1FF126D20469C0001AAD1

pureuser@sn1-405-c12-21> purevol list prod-sn1-405-21-Datastore-1-LUN-100
Name                                 Size  Source  Created                  Serial
prod-sn1-405-21-Datastore-1-LUN-100  15T   -       2018-08-22 09:07:13 PDT  98D1FF126D20469C000199EB
Similar to the GUI, we can match the datastore UUID to the FlashArray volume serial number to confirm this is the datastore and volume we need to work with.
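The same correlation can also be scripted. A hedged sketch that parses one record of the 'esxcfg-scsidevs -m' output shown above to pull out the device ID and datastore name (field positions assume the standard five-column output):

```shell
# One record from the `esxcfg-scsidevs -m` output above:
# device:partition  console-path  VMFS-UUID  head  datastore-name
LINE="naa.624a937098d1ff126d20469c000199eb:1 /vmfs/devices/disks/naa.624a937098d1ff126d20469c000199eb:1 5b7d8309-56bd3d78-a081-0025b521004d 0 sn1-405-21-Datastore-1-LUN-100"
DEVICE=$(echo "$LINE" | awk '{print $1}' | cut -d: -f1)   # strip the :partition suffix
DATASTORE=$(echo "$LINE" | awk '{print $5}')
echo "$DATASTORE is backed by $DEVICE"
```

On a live host you would feed the real 'esxcfg-scsidevs -m' output into the same awk fields instead of the pasted example line.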
- Now that you have the datastore-to-volume mapping, determine the snapshot on the FlashArray that you would like to perform the restore from:
Please note that from this point we have a different datastore and array being used as an example.

pureuser@slc-405> purevol list slc-production --snap
Name                 Size  Source          Created                  Serial
slc-production.4674  500G  slc-production  2017-01-17 11:26:18 MST  309582CAEE2411F900011242
- After the snapshot has been identified create a new volume from the snapshot:
pureuser@slc-405> purevol copy slc-production.4674 slc-production-recovery
Name                     Size  Source          Created                  Serial
slc-production-recovery  500G  slc-production  2017-01-17 11:26:56 MST  309582CAEE2411F900011243
- Confirm the new volume has been created based off of the snapshot listed:
pureuser@slc-405> purevol list slc-production-recovery
Name                     Size  Source          Created                  Serial
slc-production-recovery  500G  slc-production  2017-01-17 11:26:18 MST  309582CAEE2411F900011243
The 'Created' date & time should match the timestamp of when the snapshot was created on the newly created volume.
- Map the newly created volume to the ESXi host you would like to deploy the virtual machine you are restoring:
pureuser@slc-405> purevol connect --hgroup ESXi-HG slc-production-recovery
Name                     Host Group  Host       LUN
slc-production-recovery  ESXi-HG     slc-esx-1  253
- Perform a rescan on the ESXi host that the newly created volume was presented to, in order to complete presentation of the LUN.
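For reference, the rescan can also be triggered from the ESXi host shell. A hedged sketch (the esxcli binary exists only on an ESXi host, so it is guarded here):

```shell
# Rescan all storage adapters so the newly mapped LUN is discovered.
RESCAN_CMD="esxcli storage core adapter rescan --all"
if command -v esxcli >/dev/null 2>&1; then
  $RESCAN_CMD
else
  # Not an ESXi host; just show what would be run.
  echo "Would run: $RESCAN_CMD"
fi
```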
- Add the recovery LUN as a datastore to the ESXi host(s) you plan on performing the recovery on:
Note in the output above that the recovery LUN is identified as a snapshot of the 'slc-production' datastore we will be recovering.
- While creating the datastore, ensure you choose the 'Assign a new signature' option.
It is not uncommon for the resignature process to take several minutes to complete. If this task does not complete after 10 minutes, engage additional resources for assistance.
- Once the datastore creation has completed, you will note that the datastore name is in the following format: 'snap-hexNumbers-originalDatastoreName'.
- With the recovery datastore highlighted, locate the 'Actions' wheel and click on 'Register VM...' to locate our VM that needs to be restored.
- After you have located the VM in need of restoration step through the VMware prompts to add the VM to the ESXi host inventory.
If the original VM is still live on the ESXi host while you register the recovery VM, ensure you rename the recovery VM. Otherwise you will have two VMs with the same name and will need to inspect the underlying datastore properties to determine which VM is the recovery and which is the original.
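Registration can also be scripted from the ESXi shell with vim-cmd. A hedged sketch; the .vmx path below is hypothetical and must be adjusted to the actual recovery datastore and VM folder:

```shell
# Hypothetical path on the resignatured datastore; adjust to your environment.
VMX_PATH="/vmfs/volumes/snap-1a2b3c4d-slc-production/app-vm-1/app-vm-1.vmx"
if command -v vim-cmd >/dev/null 2>&1; then
  vim-cmd solo/registervm "$VMX_PATH"   # prints the new VM ID on success
else
  # Not an ESXi host; just show the intent.
  echo "Would register: $VMX_PATH"
fi
```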
- Once the recovery VM is listed in the ESXi host inventory, proceed with powering on the VM and verifying that it contains the required data and is accessible as expected. If the original VM is still in the ESXi host inventory, ensure it is powered off so that no conflicts are encountered.
When powering on the recovery VM you may be asked whether the VM was 'Copied' or 'Moved'. If the original VM has already been destroyed and is no longer in inventory, you can safely choose 'I moved it'. If the original VM has not been deleted and will remain for additional time, select the 'I copied it' option so that there is no UUID conflict between the VMs.
- Once the recovery VM has been powered on and data integrity is confirmed, you can storage vMotion the VM from the 'snap-hexNumbers-originalDatastoreName' datastore back to the original datastore (if the original datastore will still be used). Otherwise, if the customer is going to destroy the old datastore, you can simply rename the recovery datastore and use it as needed.
- If the customer decides to keep the original datastore, and the storage vMotion of the recovery VM has been completed, you can now safely unmap and clean-up the recovery volume as needed. If they are going to keep the newly created recovery datastore no clean-up is required and you can simply rename the LUN as needed.
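The array-side clean-up can be sketched as follows. Run from the FlashArray CLI; the volume and host-group names are from the example above, and the commands are guarded since purevol exists only on the array:

```shell
RECOVERY_VOL="slc-production-recovery"
HGROUP="ESXi-HG"
if command -v purevol >/dev/null 2>&1; then
  purevol disconnect --hgroup "$HGROUP" "$RECOVERY_VOL"  # unmap from the host group
  purevol destroy "$RECOVERY_VOL"                        # destroy the recovery volume
else
  # Not a FlashArray; just show the intent.
  echo "Would disconnect $RECOVERY_VOL from $HGROUP and destroy it"
fi
```

Only do this after the storage vMotion has completed and the data has been verified on the original datastore.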