VASA and vVols Related Fixes by ESXi Release

One of the tricky problems observed with using VMware Virtual Volumes (vVols) has been knowing which issues with vVols or VASA (vSphere API for Storage Awareness) are fixed in which releases of ESXi.  VMware provides fairly detailed release notes for each ESXi patch and update release.  This KB's goal is to compile a list of all vVols- and VASA-related fixes by ESXi release.

For further information on ESXi version names, release names, and build numbers, please refer to VMware's KB.

ESXi 7.0

VMware ESXi 7.0 Update 3i - Build 20842708

  • PR 3006356: ESXi hosts fail with a purple diagnostic screen due to rebinding of virtual volumes

    In rare cases, vSphere Virtual Volumes might attempt to rebind volumes on ESXi hosts that have SCSI Persistent Reservations. As a result, the ESXi hosts fail with a purple diagnostic screen and an error such as Panic Message: @BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4933 - Usage error in dlmalloc in the backtrace.

    This issue is resolved in this release. 
  • PR 2981272: vSphere Client displays 0 KB regardless of the actual VMDK size of a virtual machine

    Due to a caching issue, in the vSphere Client you might see a VMDK size of 0 KB regardless of the actual size of virtual machines in a vSphere Virtual Volumes environment.

    This issue is resolved in this release.

VMware ESXi 7.0 Update 3f - Build 20036589

  • PR 2882789: While browsing a vSphere Virtual Volumes datastore, you see the UUID of some virtual machines, instead of their name

    In the vSphere Client, when you right-click to browse a datastore, you might see the UUID of some virtual machines in a vSphere Virtual Volumes datastore, such as naa.xxxx, instead of their names. The issue rarely occurs in large-scale environments with a large number of containers and VMs on a vSphere Virtual Volumes datastore. The issue has no functional impact, such as affecting VM operations or backups; it affects nothing other than the VM display in the vSphere Client.

    This issue is resolved in this release.
     
  • PR 2928268: A rare issue with the ESXi infrastructure might cause ESXi hosts to become unresponsive

    Due to a rare issue with the ESXi infrastructure, a slow VASA provider might lead to a situation where the vSphere Virtual Volumes datastores are inaccessible, and the ESXi host becomes unresponsive.

    This issue is resolved in this release.
     
  • PR 2956080: Read and write operations to a vSphere Virtual Volumes datastore might fail

    If a SCSI command to a protocol endpoint of a vSphere Virtual Volumes datastore fails, the endpoint might get an Unsupported status, which might be cached. As a result, following SCSI commands to that protocol endpoint fail with an error code such as 0x5 0x20, and read and write operations to a vSphere Virtual Volumes datastore fail.

    This issue is resolved in this release.
     
  • PR 2944919: You see virtual machine disks (VMDK) with stale binding in vSphere Virtual Volumes datastores

    When you reinstall an ESXi host after a failure, since the failed instance never reboots, stale bindings of VMDKs remain intact on the VASA provider and vSphere Virtual Volumes datastores. As a result, when you reinstall the host, you cannot delete the VMDKs due to the existing binding. Over time, many such VMDKs might accrue and consume idle storage space.

    This issue is resolved in this release. However, you must contact VMware Global Support Services to implement the task.

VMware ESXi 7.0 Update 3c - Build 19193900

Pure Storage and VMware worked together to correct the issue that vSphere 7.0 U3, U3a, and U3b re-introduced.  With vSphere 7.0 U3c, the duplicate vVol ID issue will not occur even when Purity//FA releases below 5.3.16 or below 6.1.8 are in use.  Pure Storage recommends that any vSphere 7.0 U3 environment using vVols with Pure run 7.0 U3c or later.

  • vSphere Virtual Volume snapshot operations might fail on the source volume or the snapshot volume on Pure storage

    Due to an issue that allows the duplication of the unique ID of vSphere Virtual Volumes, virtual machine snapshot operations might fail, or the source volume might get deleted. The issue is specific to Pure storage and affects Purity release lines 5.3.13 and earlier, 6.0.5 and earlier, and 6.1.1 and earlier.

    This issue is resolved in this release.

An additional finding from Pure Storage has shown that in vSphere 7.0 U3, vVols managed snapshots taken with the "snapshot memory" option complete much quicker than they previously did.  For example, on vSphere 7.0 U2, a VM with 16 GB of memory took 3 to 5 minutes for a memory snapshot to complete.  With 7.0 U3c, it took less than 15 seconds to complete.  Times will clearly vary between arrays, VMs, and environments, but memory snapshots are now much more closely aligned with standard managed snapshots.

VMware ESXi 7.0 Update 3a - Build 18825058

Purity//FA should be upgraded to 5.3.16+ or 6.1.8+ before upgrading to vSphere 7.0 U3 through U3b.  There is a critical issue that was re-introduced with the release of vSphere 7.0 U3 and U3a that will cause duplicate vVol IDs when vSphere takes managed snapshots of vVols-based VMs on the FlashArray.  Prior to upgrading the vSphere environment to 7.0 U3 or higher, please work with Pure Storage support to confirm that all FlashArrays using vVols are upgraded to Purity//FA 5.3.16+ or 6.1.8+.
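
Before upgrading vSphere, the running Purity//FA version can be confirmed over the FlashArray REST API. Below is a minimal sketch using the purestorage Python SDK; the array address and API token are placeholders, and the version check simply mirrors the 5.3.16+ / 6.1.8+ guidance above. Treat it as a hedged example under those assumptions, not a substitute for confirming the upgrade path with Pure Storage support.

    # Hedged sketch: verify Purity//FA meets the 5.3.16+ / 6.1.8+ guidance before
    # a vSphere 7.0 U3+ upgrade. The array address and API token are placeholders.
    import purestorage

    MINIMUMS = {"5.3": (5, 3, 16), "6.1": (6, 1, 8)}

    def purity_ok(version):
        parts = tuple(int(p) for p in version.split(".")[:3])
        minimum = MINIMUMS.get(".".join(version.split(".")[:2]))
        if minimum is None:
            # Other release lines are outside this check; confirm them with support.
            return True
        return parts >= minimum

    array = purestorage.FlashArray("flasharray.example.com", api_token="API-TOKEN")
    info = array.get()  # array attributes, including the running Purity version
    print(info["version"], "OK" if purity_ok(info["version"]) else "upgrade Purity first")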

While not a vVols- or VASA-specific bug or fix, it's a crucial fix, as vVols can only be thin provisioned on the FlashArray.

  • PR 2861632: If a guest OS issues UNMAP requests with large size on thin provisioned VMDKs, ESXi hosts might fail with a purple diagnostic screen

    ESXi 7.0 Update 3 introduced a uniform UNMAP granularity for VMFS and SEsparse snapshots, and set the maximum UNMAP granularity reported by VMFS to 2 GB. However, in certain environments, when the guest OS makes a trim or unmap request of 2 GB, such a request might require the VMFS metadata transaction to acquire locks on more than 50 resource clusters. VMFS might not handle such requests correctly.

    As a result, an ESXi host might fail with a purple diagnostic screen. VMFS metadata transaction requiring lock actions on greater than 50 resource clusters is rare and can only happen on aged datastores. The issue impacts only thin-provisioned VMDKs. Thick and eager zero thick VMDKs are not impacted.

VMware ESXi 7.0 Update 3 - Build 18644231

Purity//FA should be upgraded to 5.3.16+ or 6.1.8+ before upgrading to vSphere 7.0 U3 or higher.  There is a critical issue that was re-introduced with the release of vSphere 7.0 U3 and U3a that will cause duplicate vVol IDs when vSphere takes managed snapshots of vVols-based VMs on the FlashArray.  Prior to upgrading the vSphere environment to 7.0 U3 or higher, please work with Pure Storage support to confirm that all FlashArrays using vVols are upgraded to Purity//FA 5.3.16+ or 6.1.8+.

New Feature

  • vVols Managed Snapshots Can Now Be Batched

    Prior to vSphere 7.0 U3, when vSphere issued snapshotVirtualVolume requests to a VASA provider, only a single virtual disk was provided per request.  With the release of vSphere 7.0 U3, VMware has enhanced the snapshotVirtualVolume request to provide multiple virtual disks in one request.  This helps decrease the stun time during managed snapshot creation.
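
    From the vSphere side, the trigger for this batched call is still an ordinary managed snapshot of the VM. The sketch below is a hedged pyVmomi example (the vCenter address, credentials, and VM name are placeholders); for a VM with several vVol-backed disks on 7.0 U3 or later, the single CreateSnapshot_Task request lets ESXi and the VASA provider batch the underlying snapshotVirtualVolume work.

        # Hedged sketch: one managed snapshot request for a multi-disk vVols VM.
        # On vSphere 7.0 U3+, ESXi can batch the resulting snapshotVirtualVolume
        # calls to the VASA provider. Host, credentials, and VM name are placeholders.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVim.task import WaitForTask
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                          pwd="password", sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder,
                                                       [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "vvols-demo-vm")  # VM on a vVols datastore

        # A single snapshot request for the whole VM; batching of its vVol disks is
        # handled by ESXi and the VASA provider, not by this client code.
        WaitForTask(vm.CreateSnapshot_Task(name="pre-change", description="hedged example",
                                           memory=False, quiesce=False))
        Disconnect(si)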

Known Issue with vSphere 7.0 U3

  • Virtual machine snapshot operations fail in vSphere Virtual Volumes datastores on Purity version 5.3.12 or lower

    Virtual machine snapshot operations fail in vSphere Virtual Volumes datastores on Purity version 5.3.12 or lower with an error such as An error occurred while saving the snapshot: The VVol target encountered a vendor specific error.
    The issue is specific to Purity versions lower than 5.3.13.

VMware ESXi 7.0 Update 2 - Build 17630552

  • VMware vSphere Virtual Volumes statistics for better debugging

    With ESXi 7.0 Update 2, you can track performance statistics for vSphere Virtual Volumes to quickly identify issues such as latency in third-party VASA provider responses. By using a set of commands, you can get statistics for all VASA providers in your system, or for a specified namespace or entity in the given namespace, or enable statistics tracking for the complete namespace.

    For more information, see Collecting Statistical Information for vVols.
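
    The statistics commands themselves are described in that article. Alongside them, the esxcli storage vvol namespace is handy for confirming which VASA providers, storage containers, and protocol endpoints the host currently sees. The snippet below is a hedged sketch that assumes it runs in the ESXi Shell (or over SSH), where both Python and esxcli are available.

        # Hedged sketch: dump the host's current vVols/VASA view. The statistics
        # tracking introduced in 7.0 U2 is documented in "Collecting Statistical
        # Information for vVols"; the commands below only list existing objects.
        import subprocess

        CHECKS = [
            ["esxcli", "storage", "vvol", "vasaprovider", "list"],
            ["esxcli", "storage", "vvol", "storagecontainer", "list"],
            ["esxcli", "storage", "vvol", "protocolendpoint", "list"],
        ]

        for cmd in CHECKS:
            print("##", " ".join(cmd))
            # errors are printed rather than raised so one failing namespace
            # does not stop the remaining checks
            result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                                    universal_newlines=True)
            print(result.stdout or result.stderr)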

VMware ESXi 7.0 - Update 1c - Build 17325551

  • PR 2654686: The vSphere Virtual Volumes algorithm might not pick out the first Config-VVol that an ESXi host requests

    In a vSphere HA environment, the vSphere Virtual Volumes algorithm uses a UUID to pick out a Config-VVol when multiple ESXi hosts might compete to create and mount a Config-VVol with the same friendly name at the same time. However, the Config-VVol picked by UUID might not be the first one that the ESXi host requests, and this might create issues in the vSphere Virtual Volumes datastores.

    This issue is resolved in this release.

    The vSphere Virtual Volumes algorithm now uses a timestamp rather than a UUID to pick out a Config-VVol when multiple ESXi hosts might compete to create and mount a Config-VVol with the same friendly name at the same time.

ESXi 6.7

VMware ESXi 6.7 - Patch Release ESXi670-202011002 - Build 17167734

  • PR 2649677: You cannot access or power-on virtual machines on a vSphere Virtual Volumes datastore

    In rare cases, an ESXi host is unable to report protocol endpoint LUNs to the vSphere API for Storage Awareness (VASA) provider while a vSphere Virtual Volumes datastore is being provisioned. As a result, you cannot access or power on virtual machines on the vSphere Virtual Volumes datastore. This issue occurs only when a networking error or a timeout of the VASA provider happens exactly at the time when the ESXi host attempts to report the protocol endpoint LUNs to the VASA provider.

    This issue is resolved in this release.
     
  • PR 2656196: You cannot use a larger batch size than the default for vSphere API for Storage Awareness calls

    If a vendor provider does not publish or define a max batch size, the default max batch size for vSphere API for Storage Awareness calls is 16. This fix increases the default batch size to 1024.

    This issue is resolved in this release.

  • PR 2630045: The vSphere Virtual Volumes algorithm might not pick out the first Config-VVol that an ESXi host requests

    In a vSphere HA environment, the vSphere Virtual Volumes algorithm uses a UUID to pick out a Config-VVol when multiple ESXi hosts compete to create and mount a Config-VVol with the same friendly name at the same time. However, the Config-VVol picked by UUID might not be the first one that the ESXi hosts in the cluster request, and this might create issues in the vSphere Virtual Volumes datastores.

    This issue is resolved in this release.

    The vSphere Virtual Volumes algorithm now uses a timestamp rather than a UUID to pick out a Config-VVol when multiple ESXi hosts compete to create and mount a Config-VVol with the same friendly name at the same time.

VMware ESXi 6.7 - Patch Release ESXi670-202008001 - Build 16713306

  • PR 2601778: When migrating virtual machines between vSphere Virtual Volume datastores, the source VM disks remain undeleted

    In certain cases, such as when a VASA provider for a vSphere Virtual Volume datastore is not reachable but does not return an error (for instance, a transport error or a provider timeout), the source VM disks remain undeleted after migrating virtual machines between vSphere Virtual Volume datastores. As a result, the source datastore capacity is not correct.

    This issue is resolved in this release.
     
  • PR 2586088: A virtual machine cloned to a different ESXi host might be unresponsive for a minute

    A virtual machine clone operation involves a snapshot of the source VM followed by creating a clone from that snapshot. The snapshot of the source virtual machine is deleted after the clone operation is complete. If the source virtual machine is on a vSphere Virtual Volumes datastore in one ESXi host and the clone virtual machine is created on another ESXi host, deleting the snapshot of the source VM might take some time. As a result, the cloned VM stays unresponsive for 50 to 60 seconds and might cause disruption of applications running on the source VM.

    This issue is resolved in this release.
     
  • PR 2337784: Virtual machines on a VMware vSphere High Availability-enabled cluster display as unprotected when powered on

    If an ESXi host in a vSphere HA-enabled cluster using a vSphere Virtual Volumes datastore fails to create the .vSphere-HA folder, vSphere HA configuration fails for the entire cluster. This issue occurs due to a possible race condition between ESXi hosts to create the .vSphere-HA folder in the shared vSphere Virtual Volumes datastore.

    This issue is resolved in this release.
     
  • PR 2583029: Some vSphere vMotion operations fail every time when an ESXi host goes into maintenance mode

    If you put an ESXi host into maintenance mode and migrate virtual machines by using vSphere vMotion, some operations might fail with an error such as A system error occurred: in the vSphere Client or the vSphere Web Client.

    In the hostd.log, you can see the following error:

    2020-01-10T16:55:51.655Z warning hostd[2099896] [Originator@6876 sub=Vcsvc.VMotionDst.5724640822704079343 opID=k3l22s8p-5779332-auto-3fvd3-h5:70160850-b-01-b4-3bbc user=vpxuser:<user>] TimeoutCb: Expired

    The issue occurs if vSphere vMotion fails to get all required resources before the defined waiting time of vSphere Virtual Volumes due to slow storage or a slow VASA provider.
    This issue is resolved in this release. The fix makes sure vSphere vMotion operations are not interrupted by vSphere Virtual Volumes timeouts.

  • PR 2560998: Unmap operations might cause I/O latency to guest virtual machines

    If all snapshots of a booted virtual machine on a VMFS datastore are deleted, or after VMware Storage vMotion operations, unmap operations might start failing and cause slow I/O performance of the virtual machine. The issue occurs because certain virtual machine operations change the underlying disk unmap granularity of the guest OS. If the guest OS does not automatically refresh the unmap granularity, the VMFS base disk and snapshot disks might have different unmap granularity based on their storage layout. This issue occurs only when the last snapshot of a virtual machine is deleted or if the virtual machine is migrated to a target that has different unmap granularity from the source.

    This issue is resolved in this release. The fix prevents the effect of failing unmap operations on the I/O performance of virtual machines. However, to ensure that unmap operations do not fail, you must reboot the virtual machine or use guest OS-specific solutions to refresh the unmap granularity.
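
    Until the virtual machine is rebooted, one guest OS-specific approach on Linux (a hedged sketch, not taken from the release note) is to ask the SCSI layer to re-read the disk's block limits, which include the unmap/discard granularity, and then re-run discards. The device name below is a placeholder for the affected virtual disk.

        # Hedged sketch, Linux guest only: refresh a disk's block limits without a
        # reboot, then re-issue discards with fstrim. /dev/sdb is a placeholder.
        import pathlib
        import subprocess

        disk = "sdb"  # hypothetical guest disk whose unmap granularity changed
        pathlib.Path(f"/sys/block/{disk}/device/rescan").write_text("1\n")

        granularity = pathlib.Path(f"/sys/block/{disk}/queue/discard_granularity").read_text()
        print("discard_granularity:", granularity.strip())

        # Re-issue discards for all mounted filesystems that support them.
        subprocess.run(["fstrim", "--all", "--verbose"], check=False)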

VMware ESXi 6.7 - Patch Release ESXi670-202004002 - Build 16075168

  • PR 2458201: Some vSphere Virtual Volumes snapshot objects might not get a virtual machine UUID metadata tag

    During snapshot operations, especially a fast sequence of creating and deleting snapshots, a refresh of the virtual machine configuration might start prematurely. This might cause incorrect updates of the vSphere Virtual Volumes metadata. As a result, some vSphere Virtual Volumes objects, which are part of a newly created snapshot, might remain untagged or get tagged and then untagged with the virtual machine UUID.

    This issue is resolved in this release.
     
  • PR 2424969: If the first attempt of an ESXi host to contact a VASA provider fails, vSphere Virtual Volumes datastores might remain inaccessible

    If a VASA provider is not reachable or not responding at the time an ESXi host boots up and tries to mount vSphere Virtual Volumes datastores, the mount operation fails. However, if after some time a VASA provider is available, the ESXi host does not attempt to reconnect to a provider and datastores remain inaccessible.

    This issue is resolved in this release.
     
  • PR 2424363: During rebind operations, I/Os might fail with NOT_BOUND error

    During rebind operations, the source protocol endpoint of a virtual volume might start failing I/Os with a NOT_BOUND error even when the target protocol endpoint is busy. If the target protocol endpoint is in WAIT_RBZ state and returns a status PE_NOT_READY, the source protocol endpoint must retry the I/Os instead of failing them.

    This issue is resolved in this release. With the fix, the upstream relays a BUSY status to the virtual SCSI disk (vSCSI) and the ESXi host operating system to ensure a retry of the I/O.
     
  • PR 2449462: You might not be able to mount a Virtual Volumes storage container due to a stale mount point

    If the mount point was busy and a previous unmount operation has failed silently, attempts to mount a Virtual Volumes storage container might fail with an error that the container already exists.

    This issue is resolved in this release.
     
  • PR 2467765: Upon failure to bind volumes to protocol endpoint LUNs on an ESXi host, virtual machines on vSphere Virtual Volumes might become inaccessible

    If a VASA provider fails to register protocol endpoint IDs discovered on an ESXi host, virtual machines on vSphere Virtual Volumes datastores on this host might become inaccessible. You might see an error similar to vim.fault.CannotCreateFile. A possible reason for failing to register protocol endpoint IDs from an ESXi host is that the SetPEContext() request to the VASA provider fails for some reason. This results in failing any subsequent request for binding virtual volumes, and losing accessibility to data and virtual machines on vSphere Virtual Volumes datastores.

    This issue is resolved in this release. The fix is to reschedule SetPEContext calls to the VASA provider if a SetPEContext() request on a VASA provider fails. This fix allows the ESXi host eventually to register discovered protocol endpoint IDs and ensures that volumes on vSphere Virtual Volumes datastores remain accessible.
     
  • PR 2429068: Virtual machines might become inaccessible due to wrongly assigned second level LUN IDs (SLLID)

    The nfnic driver might intermittently assign a wrong SLLID to virtual machines, and as a result, Windows and Linux virtual machines might become inaccessible.

    This issue is resolved in this release. Make sure that you upgrade the nfnic driver to version 4.0.0.44.
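
    A quick way to confirm the installed nfnic version is to list the VIBs on the host. The snippet below is a hedged sketch that assumes it runs in the ESXi Shell where Python and esxcli are available, and that the output follows the usual esxcli software vib list column layout.

        # Hedged sketch: check that the nfnic VIB is at least 4.0.0.44 (the version
        # called out above). Parsing assumes the standard "esxcli software vib list"
        # columns: Name, Version, Vendor, Acceptance Level, Install Date.
        import subprocess

        out = subprocess.run(["esxcli", "software", "vib", "list"],
                             stdout=subprocess.PIPE, universal_newlines=True).stdout

        for line in out.splitlines():
            if line.startswith("nfnic"):
                name, version = line.split()[0], line.split()[1]
                # Versions look like "4.0.0.44-1OEM..."; compare the leading digits.
                lead = tuple(int(x) for x in version.split("-")[0].split("."))
                print(name, version, "OK" if lead >= (4, 0, 0, 44) else "upgrade needed")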

VMware ESXi 6.7 - Patch Release ESXi670-201912001 - Build 15160138

  • PR 2423301: After you revert a virtual machine to a snapshot, change block tracking (CBT) data might be corrupted

    When reverting a virtual machine that has CBT enabled to a snapshot which is not a memory snapshot, and if you use the QueryChangedDiskAreas() API call, you might see an InvalidArgument error.

    This issue is resolved in this release. With ESXi670-201912001, the output of the QueryChangedDiskAreas() call changes to FileFault and adds the message Change tracking is not active on the disk <disk_path> to provide more details on the issue.

    With the fix, you must power on or reconfigure the virtual machine to enable CBT after reverting it to a snapshot and then take a snapshot to make a full backup.

    To reconfigure the virtual machine, you must complete the following steps:
    1. In the Managed Object Browser graphical interface, run ReconfigVM_Task by using a URL such as https://<vc or host ip>/mob/?moid=<the virtual machine Managed Object ID>&method=reconfigure.
    2. In the <spec> tag, add <changeTrackingEnabled>true</changeTrackingEnabled>.
    3. Click Invoke Method.
    A scripted equivalent of this procedure is sketched after the last fix in this release section.
  • PR 2419339: ESXi hosts might fail with a purple diagnostic screen during power off or deletion of multiple virtual machines on a vSphere Virtual Volumes datastore

    ESXi hosts might fail with a purple diagnostic screen during power off or deletion of multiple virtual machines on a vSphere Virtual Volumes datastore. You see a message indicating a PF Exception 14 on the screen. This issue might affect multiple hosts.

    This issue is resolved in this release.
     
  • PR 2432530: You cannot use a batch mode to unbind VMware vSphere Virtual Volumes

    ESXi670-201912001 implements the UnbindVirtualVolumes() method in batch mode to unbind VMware vSphere Virtual Volumes. Previously, unbinding took one connection per vSphere Virtual Volume. This sometimes led to consuming all available connections to a vStorage APIs for Storage Awareness (VASA) provider and delayed response from or completely failed other API calls.

    This issue is resolved in this release.
     
  • PR 2385716: Virtual machines with Changed Block Tracking (CBT) enabled might report long waiting times during snapshot creation

    Virtual machines with CBT enabled might report long waiting times during snapshot creation due to the 8 KB buffer used for CBT file copying. With this fix, the buffer size is increased to 1 MB to overcome multiple reads and writes of a large CBT file copy, and reduce waiting time.

    This issue is resolved in this release.
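
Returning to PR 2423301 above: the CBT reconfiguration described there can also be scripted instead of using the Managed Object Browser. The following is a hedged pyVmomi sketch (the vCenter address, credentials, VM name, and disk key 2000 are placeholder assumptions) that re-enables changed block tracking after a revert, takes a fresh snapshot so the next backup is a full one, and then exercises QueryChangedDiskAreas().

    # Hedged sketch: re-enable CBT after a snapshot revert, take a new snapshot,
    # and query changed areas. Connection details, VM name, and the disk key are
    # placeholders, not values taken from the release note.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "reverted-vm")

    # Equivalent of the MOB step above: set changeTrackingEnabled=true via ReconfigVM_Task.
    WaitForTask(vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(changeTrackingEnabled=True)))

    # Take a fresh snapshot so the next backup can be a full one, per the release note.
    WaitForTask(vm.CreateSnapshot_Task(name="cbt-baseline", description="full backup base",
                                       memory=False, quiesce=False))
    snap = vm.snapshot.currentSnapshot

    # deviceKey 2000 is commonly the first virtual disk (an assumption); changeId "*"
    # asks for all allocated areas of the disk.
    changes = vm.QueryChangedDiskAreas(snapshot=snap, deviceKey=2000,
                                       startOffset=0, changeId="*")
    print(changes.length, len(changes.changedArea))
    Disconnect(si)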

VMware ESXi 6.7 Update 3 - Build 14320388

  • PR 2312215: A virtual machine with VMDK files backed by vSphere Virtual Volumes might fail to power on when you revert it to a snapshot

    This issue is specific to vSphere Virtual Volumes datastores when a VMDK file is assigned to different SCSI targets across snapshots. The lock file of the VMDK is reassigned across different snapshots and might be incorrectly deleted when you revert the virtual machine to a snapshot. Due to the missing lock file, the disk does not open, and the virtual machine fails to power on.

    This issue is resolved in this release.

  • PR 2363202: The monitoring services show that the virtual machines on a vSphere Virtual Volumes datastore are in a critical state

    In the vSphere Web Client, incorrect Read or Write latency is displayed for the performance graphs of the vSphere Virtual Volumes datastores at a virtual machine level. As a result, the monitoring service shows that the virtual machines are in a critical state.

    This issue is resolved in this release.

  • PR 2402409: Virtual machines with enabled Changed Block Tracking (CBT) might fail while a snapshot is created due to lack of allocated memory for the CBT bit map

    While a snapshot is being created, a virtual machine might power off and fail with an error similar to:
    2019-01-01T01:23:40.047Z| vcpu-0| I125: DISKLIB-CTK : Failed to mmap change bitmap of size 167936: Cannot allocate memory.
    2019-01-01T01:23:40.217Z| vcpu-0| I125: DISKLIB-LIB_BLOCKTRACK : Could not open change tracker /vmfs/volumes/DATASTORE_UUID/VM_NAME/VM_NAME_1-ctk.vmdk: Not enough memory for change tracking.

    The error is a result of lack of allocated memory for the CBT bit map.

    This issue is resolved in this release.

VMware ESXi 6.7 Update 2 - Build 13006603

  • PR 2250697: Windows Server Failover Cluster validation might fail if you configure Virtual Volumes with a Round Robin path policy

    If during the Windows Server Failover Cluster setup you change the default path policy from Fixed or Most Recently Used to Round Robin, the I/O of the cluster might fail and the cluster might stop responding.

    This issue is resolved in this release.

  • PR 2279897: Creating a snapshot of a virtual machine might fail due to a null VvolId parameter

    If a vSphere API for Storage Awareness provider modifies the vSphere Virtual Volumes policy unattended, a null VvolId parameter might update the vSphere Virtual Volumes metadata. This results in a VASA call with a null VvolId parameter and a failure when creating a virtual machine snapshot.

    This issue is resolved in this release. The fix handles the policy modification failure and prevents the null VvolId parameter.

  • PR 2227623: Parallel cloning of multiple virtual machines on a vSphere Virtual Volumes datastore might fail with an error message for failed file creation

    If a call from a vSphere API for Storage Awareness provider fails due to all connections to the VASA provider being busy, operations for parallel cloning of multiple virtual machines on a vSphere Virtual Volumes datastore might become unresponsive or fail with an error message similar to Cannot complete file creation operation.

    This issue is resolved in this release.

  • PR 2268826: An ESXi host might fail with a purple diagnostic screen when the VMware APIs for Storage Awareness (VASA) provider sends a rebind request to switch the protocol endpoint for a vSphere Virtual Volume

    When the VASA provider sends a rebind request to an ESXi host to switch the binding for a particular vSphere Virtual Volume, the ESXi host switches the protocol endpoint and other resources to change the binding without any I/O disturbance. During this process, the ESXi host might fail with a purple diagnostic screen.

    This issue is resolved in this release.

VMware ESXi 6.7 Update 1 - Build 10302608

  • PR 2039186: VMware vSphere Virtual Volumes metadata might not be updated with associated virtual machines and make virtual disk containers untraceable

    vSphere Virtual Volumes set with the VMW_VVolType metadata key Other and the VMW_VVolTypeHint metadata key Sidecar might not get the VMW_VmID metadata key for the associated virtual machines and cannot be tracked by using IDs.

    This issue is resolved in this release.

  • PR 2119610: Migration of a virtual machine with a Filesystem Device Switch (FDS) on a vSphere Virtual Volumes datastore by using VMware vSphere vMotion might cause multiple issues

    If you use vSphere vMotion to migrate a virtual machine with file device filters from a vSphere Virtual Volumes datastore to another host, and the virtual machine has either of the Changed Block Tracking (CBT), VMware vSphere Flash Read Cache (VFRC) or I/O filters enabled, the migration might cause issues with any of the features. During the migration, the file device filters might not be correctly transferred to the host. As a result, you might see corrupted incremental backups in CBT, performance degradation of VFRC and cache I/O filters, corrupted replication I/O filters, and disk corruption, when cache I/O filters are configured in write-back mode. You might also see issues with the virtual machine encryption.

    This issue is resolved in this release.

  • PR 2145089: vSphere Virtual Volumes might become unresponsive if an API for Storage Awareness (VASA) provider loses binding information from the database

    vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a VASA provider loses binding information from the database. Hostd might also stop responding. You might see a fatal error message.

    This issue is resolved in this release. This fix prevents infinite loops in case of database binding failures.

  • PR 2146206: vSphere Virtual Volumes metadata might not be available to storage array vendor software

    vSphere Virtual Volumes metadata might be available only when a virtual machine starts running. As a result, storage array vendor software might fail to apply policies that impact the optimal layout of volumes during regular use and after a failover.

    This issue is resolved in this release. This fix makes vSphere Virtual Volumes metadata available at the time vSphere Virtual Volumes are configured, not when a virtual machine starts running.

ESXi 6.5

VMware ESXi 6.5 - Patch Release ESXi650-201912002 - Build 15256549

  • PR 2271176: ESXi hosts might fail with a purple diagnostic screen during power off or deletion of multiple virtual machines on a vSphere Virtual Volumes datastore

    ESXi hosts might fail with a purple diagnostic screen during power off or deletion of multiple virtual machines on a vSphere Virtual Volumes datastore. You see a message indicating a PF Exception 14 on the screen. This issue might affect multiple hosts.

    This issue is resolved in this release.
     
  • PR 2423302: After you revert a virtual machine to a snapshot, change block tracking (CBT) data might be corrupted

    When reverting a virtual machine on which CBT is enabled to a snapshot which is not a memory snapshot, and if you use the QueryChangedDiskAreas() API call, you might see an InvalidArgument error.

    This issue is resolved in this release. With ESXi650-201912002, the output of the QueryChangedDiskAreas() call changes to FileFault and adds the message Change tracking is not active on the disk <disk_path> to provide more details on the issue.

    With the fix, you must power on or reconfigure the virtual machine to enable CBT after reverting it to a snapshot and then take a snapshot to make a full backup.
     
  • PR 2385716: Virtual machines with Changed Block Tracking (CBT) enabled might report long waiting times during snapshot creation

    Virtual machines with CBT enabled might report long waiting times during snapshot creation due to the 8 KB buffer used for CBT file copying. With this fix, the buffer size is increased to 1 MB to overcome multiple reads and writes of a large CBT file copy, and reduce waiting time.

    This issue is resolved in this release.

VMware ESXi 6.5 Update 3 - Build 13932383

  • PR 2282080: Creating a snapshot of a virtual machine from a virtual volume datastore might fail due to a null VVolId parameter

    If a vSphere API for Storage Awareness provider modifies the vSphere Virtual Volumes policy unattended, a null VVolId parameter might update the vSphere Virtual Volumes metadata. This results in a VASA call with a null VVolId parameter and a failure when creating a virtual machine snapshot.

    This issue is resolved in this release. The fix handles the policy modification failure and prevents the null VVolId parameter.

  • PR 2265828: A virtual machine with VMDK files backed by vSphere Virtual Volumes might fail to power on when you revert it to a snapshot

    This issue is specific to vSphere Virtual Volumes datastores when a VMDK file is assigned to different SCSI targets across snapshots. The lock file of the VMDK is reassigned across different snapshots and might be incorrectly deleted when you revert the virtual machine to a snapshot. Due to the missing lock file, the disk does not open, and the virtual machine fails to power on.

    This issue is resolved in this release.

  • PR 2113782: vSphere Virtual Volumes datastore might become inaccessible if you change the vCenter Server instance or refresh the CA certificate

    The vSphere Virtual Volumes datastore uses a VMware CA-signed certificate to communicate with VASA providers. When the vCenter Server instance or the CA certificate changes, vCenter Server imports the new vCenter Server CA-signed certificate, but the SSL reset signal that the vSphere Virtual Volumes datastore should receive might not be triggered. As a result, the communication between the vSphere Virtual Volumes datastore and the VASA providers might fail, and the vSphere Virtual Volumes datastore might become inaccessible.

    This issue is resolved in this release.

  • PR 2278591: Cloning multiple virtual machines simultaneously on vSphere Virtual Volumes might stop responding

    When you clone multiple virtual machines simultaneously from vSphere on a vSphere Virtual Volumes datastore, a setPEContext VASA API call is issued. If all connections to the VASA Providers are busy or unavailable at the time of issuing the setPEContext API call, the call might fail and the cloning process stops responding.

    This issue is resolved in this release.

VMware ESXi 6.5, Patch Release ESXi650-201811002 - Build 10884925

  • PR 2119609: Migration of a virtual machine with a Filesystem Device Switch (FDS) on a VMware vSphere Virtual Volumes datastore by using VMware vSphere vMotion might cause multiple issues

    If you use vSphere vMotion to migrate a virtual machine with file device filters from a vSphere Virtual Volumes datastore to another host, and the virtual machine has either of the Changed Block Tracking (CBT), VMware vSphere Flash Read Cache (VFRC), or IO filters enabled, the migration might cause issues with any of the features. During the migration, the file device filters might not be correctly transferred to the host. As a result, you might see corrupted incremental backups in CBT, performance degradation of VFRC and cache IO filters, corrupted replication IO filters and disk corruption, when cache IO filters are configured in write-back mode. You might also see issues with the virtual machine encryption.

    This issue is resolved in this release.

  • PR 2142767: VMware vSphere Virtual Volumes might become unresponsive if a vSphere API for Storage Awareness provider loses binding information from its database

    vSphere Virtual Volumes might become unresponsive due to an infinite loop that loads CPUs at 100% if a vSphere API for Storage Awareness provider loses binding information from its database. Hostd might also stop responding. You might see a fatal error message. This fix prevents infinite loops in case of database binding failures.

    This issue is resolved in this release.