What's New with VASA Provider Version 2.0.0?
The recommended Purity//FA versions for VASA Provider 2.0.0 are 6.2.8 or higher and 6.3.1 or higher. These are also the versions required for the new SPBM capabilities and the features that depend on them.
FlashArray VASA Provider 2.0.0
FlashArray VASA Provider 2.0.0 has been released with Purity//FA 6.2 and 6.3. Pure Storage has put a lot of work into this release, which brings several improvements and new features. This KB discusses the improvements and the new features in the 2.0.0 VASA Provider.
Improvements
The core work in VASA Provider 2.0.0 centered on revamping the VASA provider's architecture, with a few points of focus: increasing the object scale limits for large FlashArray models, updating the VASA architecture to support better performance and scale, and further improving the VASA provider's ability to process requests at higher concurrency and speed.
Increased Object Scale Limits
On large FlashArray models the object limit for volume groups has been increased from 2,000 to 5,000. The vVol limit is still the volume count limit of the given FlashArray model. Here is a breakdown of the vVol object count limits with the release of VASA Provider 2.0.0 on Purity//FA 6.2 and 6.3.
| FlashArray Limits | Small FlashArray Models | Medium FlashArray Models | Large FlashArray Models |
|---|---|---|---|
| Max # of vVols | 500 volumes | 10,000 volumes | 20,000 volumes |
| Max # of vVol VMs | 500 volume groups | 2,000 volume groups | 5,000 volume groups |
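As a rough illustration of how these limits translate to VM counts, the sketch below estimates the array volume count for a fleet of vVol VMs (one config vVol and one swap vVol per powered-on VM, plus data vVols, and optionally one snapshot vVol per data vVol, matching the arithmetic used for the test environments later in this article). The VM and disk counts are hypothetical examples, not sizing guidance.

```python
# Rough vVol object-count estimate for planning against the limits above.
# Assumptions (hypothetical example values, not sizing guidance): each powered-on
# vVol VM has 1 config vVol, 1 swap vVol, and `data_disks` data vVols; a managed
# snapshot adds one snapshot vVol per data vVol.

def estimate_vvol_objects(vms: int, data_disks: int, snapshots_per_vm: int = 0) -> int:
    per_vm = 1 + 1 + data_disks               # config + swap + data vVols
    per_vm += snapshots_per_vm * data_disks   # snapshot vVols per managed snapshot
    return vms * per_vm

LARGE_ARRAY_VVOL_LIMIT = 20_000

# Example from the managed snapshot tests later in this KB:
# 800 powered-on VMs with 10 data disks each, then one managed snapshot per VM.
base = estimate_vvol_objects(vms=800, data_disks=10)                            # 9,600 volumes
with_snap = estimate_vvol_objects(vms=800, data_disks=10, snapshots_per_vm=1)   # 17,600 volumes

print(base, with_snap, with_snap <= LARGE_ARRAY_VVOL_LIMIT)  # 9600 17600 True
```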
Updated VASA Architecture
In order to use vVols on the FlashArray and in the vSphere environment, the VASA providers must be registered with the vCenter Server as "Storage Providers" (see this KB for more information). Prior to VASA Provider 2.0.0, when both CT0's and CT1's VASA providers are registered in vCenter, they are not both active providers. Rather, one provider (the one that was registered first) is the "Active" provider, while the second one registered is the "Standby" provider. This means vCenter will only ever set one provider as active and send management requests to that provider (outside of heartbeating to the standby provider). Additionally, only the active provider's information is pushed from vCenter to the ESXi hosts registered there. In the event that the active provider is no longer reachable, vCenter will wait a period of time (which varies depending on the situation) and then attempt to promote the standby provider to the active provider. Once that is complete, the new active provider's information is pushed down to the ESXi hosts.
During an event in which the active provider is unreachable (controller upgrade, hardware replacement, network isolation, etc.), the time it takes vCenter to promote the standby provider, confirm its status, and notify the ESXi hosts of the change in active provider leaves room for delays or issues to occur. Often this could lead to delays in re-establishing the management path in vCenter or ESXi. Pure Engineering looked at various ways to address this and worked with VMware to see what could be improved at both the VASA provider and vSphere levels.
After research, collaboration with VMware Engineering, and comprehensive testing, Pure Engineering decided to switch the VASA provider to a hybrid model in which both providers are active providers in vCenter; however, the VASA provider on the array's primary controller is the one that processes the VASA requests. The secondary controller's VASA provider forwards any requests it receives to the primary controller's VASA provider. This way, the two VASA providers do not have to keep completely independent copies of state in sync, and requests are processed much more quickly because the primary controller is always the one forwarding them to Purity itself. Overall, requests handled through either the secondary or the primary controller's VASA provider are still much faster than they were previously.
In the end, the two big takeaways from the updated VASA architecture are these:
- The updated VASA architecture has huge improvements to overall performance of the VASA provider.
- The updated VASA architecture is able to take advantage of network HA on the array and apply it to the VASA providers.
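To make the request flow concrete, here is a minimal, purely illustrative Python sketch of the hybrid model described above. It is a toy model, not the actual VASA provider implementation: both controller endpoints accept requests, but only the primary controller's provider hands work to Purity, and the secondary simply forwards what it receives.

```python
# Toy model of the hybrid active/active VASA layout (illustrative only; this is
# not the actual provider implementation). Both CT0 and CT1 endpoints accept
# requests from vCenter/ESXi, but only the primary controller's provider passes
# work to Purity; the secondary forwards what it receives to the primary.

class PrimaryVasaProvider:
    def __init__(self, controller: str):
        self.controller = controller

    def handle(self, request: str) -> str:
        # Only the primary provider passes requests through to Purity.
        return f"{self.controller}: forwarded '{request}' to Purity"


class SecondaryVasaProvider:
    def __init__(self, controller: str, primary: PrimaryVasaProvider):
        self.controller = controller
        self.primary = primary

    def handle(self, request: str) -> str:
        # The secondary does not process the request itself; it forwards it.
        return f"{self.controller} -> " + self.primary.handle(request)


primary = PrimaryVasaProvider("CT0")
secondary = SecondaryVasaProvider("CT1", primary)

# vCenter can send a management request to either registered provider.
print(primary.handle("createVirtualVolume"))
print(secondary.handle("queryVirtualVolumeInfo"))
```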
Improved VASA Performance
There are a few parts of the VASA performance improvements that matter most when looking at the improvements holistically. First, VASA requests overall are much quicker, particularly at scale. The VASA provider can answer simpler requests significantly faster by storing frequently accessed information efficiently. Managed snapshot workflows in particular benefit a great deal from the overall performance improvements in VASA provider 2.0.0.
Managed Snapshot Performance Improvements
A major improvement specific to VASA provider 2.0.0 is the improved performance of managed snapshots. For more information about what managed snapshots are for vVols, see this KB. There are two parts to getting the best managed snapshot performance with vVols:
- Having Purity//FA 6.2.6+ or 6.3.0+ installed on the array to take advantage of VASA provider 2.0.0 improvements
- Having vSphere 7.0 U3c or higher so that both the allocated bitmap hint and the batched snapshot virtual volume features are available in vSphere
In the examples provided below, there were thousands of VMs set up on each array, and the workflows were issued in batches of 50 concurrent requests across all of those VMs. The VMs were configured with 12 virtual disks each (data vVols) and about 10 TB of provisioned space per VM. While most of the virtual disks were sparsely filled, they did have some data on them. These workflows were run on arrays running Purity//FA 6.1.10 and then Purity//FA 6.2.6, paired with vSphere 6.7 U3 p03 and then vSphere 7.0 U3c. In particular, we wanted to focus on the differences in average task time between these runs for four different workflows:
- Creating a normal managed snapshot
- Enabling CBT (Changed Block Tracking) and taking the first managed snapshot
- Creating a managed snapshot with CBT already enabled
- Disabling CBT and then taking a managed snapshot
These are the most common workflows when working with managed snapshots at scale. The primary reason we focused on average task time is that it can be compared more effectively across different batch jobs, scales, and total workloads. The results were quite impressive for the CBT-related workflows: enabling CBT and taking that first baseline snapshot, and taking a follow-up managed snapshot with CBT already enabled on the VMs.
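For reference, the sketch below shows roughly how the CBT-related workflows above can be driven with pyVmomi: enabling CBT via a VM reconfigure and then taking a managed snapshot. The vCenter address, credentials, VM names, and batch size are hypothetical placeholders; this is a minimal illustration, not the harness used to generate the test results that follow.

```python
# Minimal pyVmomi sketch of the CBT + managed snapshot workflows described above.
# Hypothetical vCenter address, credentials, and VM naming; error handling omitted.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

def enable_cbt(vm):
    # Enabling Changed Block Tracking is a simple VM reconfigure.
    spec = vim.vm.ConfigSpec(changeTrackingEnabled=True)
    WaitForTask(vm.ReconfigVM_Task(spec))

def take_managed_snapshot(vm, name="backup-snap"):
    # A crash-consistent managed snapshot (no memory, no quiesce), as used in the tests.
    WaitForTask(vm.CreateSnapshot_Task(name=name, description="managed snapshot",
                                       memory=False, quiesce=False))

# Example: process a (hypothetical) list of vVol VMs in batches of 50.
# The tests described here issued these requests concurrently; they are shown
# sequentially for brevity.
container = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
vms = [vm for vm in container.view if vm.name.startswith("vvol-vm-")]
for i in range(0, len(vms), 50):
    for vm in vms[i:i + 50]:
        enable_cbt(vm)
        take_managed_snapshot(vm)

Disconnect(si)
```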
This workflow simply took a managed snapshot of 50 VMs concurrently, for all VMs on the array in the environment it was issued against; in this example each of the four vCenters had 200 VMs per array, so each array had 800 actively powered-on vVol-based VMs. The primary reason for only having 800 VMs per array was that each VM had 10 virtual disks, and we wanted to stay as close to the object scale limits as we could. With 800 powered-on VMs there were 12 volume objects per VM (one config, ten data, and one swap vVol), for a total of 9,600 array volumes. Taking managed snapshots for all 800 VMs increases the object count to 17,600, which is still under the 20,000 volume count limit with a little headroom. The reason we see vSphere 7.0 U3c have a tougher time on Purity//FA 6.1.10 is that on this vSphere version the snapshot virtual volume requests no longer contain a single virtual volume each; rather, they are batched. Purity//FA 6.1 isn't designed to be as efficient when forwarding these batched requests from VASA to Purity, so the average task time is higher than we would see with vSphere 6.7 in this case.
This workflow enabled CBT for 50 VMs and then took a managed snapshot of those 50 VMs, repeating this for all 250 VMs per vCenter for a total of 1,000 VMs. This is the first test result where we start to see the dramatic improvement from Purity//FA 6.1 to 6.2 for both vSphere 6.7 and 7.0.
Here we can see how much more efficient both vSphere 7.0 and VASA provider 2.0.0 are with managed snapshots for VMs that already have CBT enabled. This would be an example of an incremental backup with a 3rd party backup vendor.
In this example we disable CBT on the VMs and then take managed snapshots for them in batches of 50. This workflow would be part of a full backup job where CBT is refreshed as part of that full backup with the 3rd party backup vendor.
Overall, if you have workflows that leverage managed snapshots for 3rd party backup software, upgrading to a Purity//FA release that supports VASA provider 2.0.0 should be prioritized, even more so if issues have been observed in those backup workflows.
Features
The primary feature added as part of VASA provider 2.0.0 is a set of new capabilities added to SPBM. For more detailed information on SPBM and all of the capabilities, see this KB. The other feature included with VASA provider 2.0.0 is support for thick provisioned data vVols.
Updated Storage Policy Based Management (SPBM) Capabilities
There are four new capabilities that have been added with VASA provider 2.0.0.
- QoS - Per Virtual Disk IOPS Limit
- QoS - Per Virtual Disk Bandwidth Limit
- Local Snapshot Placement
- Volume Tagging Placement
These are the new capability values that are available starting with Purity//FA 6.2.6 or Purity//FA 6.3.0 and VASA Provider 2.0.0.
| QoS Placement Capability Name | Value (not case-sensitive) |
|---|---|
| Per Virtual Disk IOPS Limit | A value and the unit of measurement (hundreds, thousands or millions) |
| Per Virtual Disk Bandwidth Limit | A value and the unit of measurement (KB/s, MB/s or GB/s) |
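Under the hood, these QoS rules map to the per-volume QoS limits on the FlashArray. As a rough illustration of the equivalent array-side setting, here is a sketch using the purestorage REST 1.x Python client; it assumes that set_volume accepts iops_limit and bandwidth_limit keyword arguments on your Purity/REST version, and the array address, API token, and volume name are placeholders. With vVols, the VASA provider applies these limits for you when the policy is assigned, so this is illustrative rather than a required step.

```python
# Illustrative sketch: setting per-volume QoS limits directly on a FlashArray with
# the `purestorage` REST 1.x Python client. With vVols, the VASA provider applies
# these limits automatically when a policy containing the QoS rules is assigned,
# so this only shows the array-side equivalent of the capability.
# Assumptions: management address, API token, and volume name are placeholders,
# and your Purity/REST version exposes iops_limit/bandwidth_limit on volumes.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

# A "Per Virtual Disk IOPS Limit" rule of 10,000 and a "Per Virtual Disk Bandwidth
# Limit" rule of 100 MB/s correspond to QoS settings like these on the data vVol's
# backing volume.
array.set_volume(
    "vvol-example-vm/Data-0001",         # hypothetical vVol volume name
    iops_limit=10000,                    # IOPS
    bandwidth_limit=100 * 1024 * 1024,   # bytes per second (100 MB/s)
)
```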
| Local Snapshot Protection Placement Capability Name | Value (not case-sensitive) |
|---|---|
| Snapshot Interval | A time interval in seconds, minutes, hours, days, weeks, months or years |
| Retain all Snapshots for | A time interval in seconds, minutes, hours, days, weeks, months or years |
| Retain Additional Snapshots | Number of additional snapshots to be retained |
| Days to Retain Additional Snapshots | Number of days to retain the additional snapshots |
| Volume Tagging Placement Capability Name | Value (not case-sensitive) |
|---|---|
| Key | Name of the volume tag key |
| Value | Name of the volume tag value |
| Copyable | Yes or No |
Here are the compliance check requirements for these new capabilities.
| QoS Placement Capability Name | An array offers this capability when… | A vVol is in compliance when… | A vVol is out of compliance when… |
|---|---|---|---|
| Per Virtual Disk IOPS Limit | ...it is a FlashArray running Purity//FA 6.2.6 or higher. | ...the volume's QoS IOPS limit matches the value of the rule. | ...the volume's QoS IOPS limit is either unset or does not match the value in the rule. |
| Per Virtual Disk Bandwidth Limit | ...it is a FlashArray running Purity//FA 6.2.6 or higher. | ...the volume's QoS bandwidth limit matches the value of the rule. | ...the volume's QoS bandwidth limit is either unset or does not match the value in the rule. |
| Local Snapshot Protection Placement Capability Name | An array offers this capability when… | A vVol is in compliance when… | A vVol is out of compliance when… |
|---|---|---|---|
| Snapshot Interval | ...it is a FlashArray running Purity//FA 6.2.6 or higher. | ...the protection group that VASA is using for the storage policy matches the ruleset, the vVol is a member of the protection group, and the snapshot schedule is enabled. | ...the vVol is not a member of the paired protection group, the interval does not match the policy rule, or the snapshot schedule is disabled. |
| Retain all Snapshots for | ...it is a FlashArray running Purity//FA 6.2.6 or higher. | ...the protection group that VASA is using for the storage policy matches the ruleset, the vVol is a member of the protection group, and the snapshot schedule is enabled. | ...the vVol is not a member of the paired protection group, the retention interval does not match the policy rule, or the snapshot schedule is disabled. |
| Retain Additional Snapshots | ...it is a FlashArray running Purity//FA 6.2.6 or higher. | ...the protection group that VASA is using for the storage policy matches the ruleset, the vVol is a member of the protection group, and the snapshot schedule is enabled. | ...the vVol is not a member of the paired protection group, the value does not match the policy rule, or the snapshot schedule is disabled. |
| Days to Retain Additional Snapshots | ...it is a FlashArray running Purity//FA 6.2.6 or higher. | ...the protection group that VASA is using for the storage policy matches the ruleset, the vVol is a member of the protection group, and the snapshot schedule is enabled. | ...the vVol is not a member of the paired protection group, the value does not match the policy rule, or the snapshot schedule is disabled. |
| Volume Tagging Placement Capability Name | An array offers this capability when… | A vVol is in compliance when… | A vVol is out of compliance when… |
|---|---|---|---|
| Key | ...it is a FlashArray running Purity//FA 6.2.6 or higher. | ...a tag exists on the volume that matches the key/value pair dictated by the rule. | ...a tag does not exist or does not match the key/value pair dictated by the rule. |
| Value | ...it is a FlashArray running Purity//FA 6.2.6 or higher. | ...a tag exists on the volume that matches the key/value pair dictated by the rule. | ...a tag does not exist or does not match the key/value pair dictated by the rule. |
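To make the compliance rules above concrete, here is a small, self-contained Python sketch that mirrors the QoS and volume tagging checks from the tables. The volume representation and its field names are invented for illustration; the real evaluation is performed by the VASA provider and surfaced through SPBM compliance in vCenter.

```python
# Toy compliance checks mirroring the QoS and volume tagging rules in the tables
# above. The volume dictionary and its field names are invented for illustration;
# the real evaluation is done by the VASA provider and reported via SPBM.

def qos_iops_compliant(volume: dict, rule_iops_limit: int) -> bool:
    # In compliance when the volume's QoS IOPS limit matches the rule;
    # out of compliance when it is unset or different.
    return volume.get("iops_limit") == rule_iops_limit

def qos_bandwidth_compliant(volume: dict, rule_bandwidth_limit: int) -> bool:
    return volume.get("bandwidth_limit") == rule_bandwidth_limit

def tag_compliant(volume: dict, rule_key: str, rule_value: str) -> bool:
    # In compliance when a tag exists on the volume matching the key/value pair.
    return any(t["key"] == rule_key and t["value"] == rule_value
               for t in volume.get("tags", []))

# Hypothetical data vVol as the array might describe it.
data_vvol = {
    "name": "vvol-example-vm/Data-0001",
    "iops_limit": 10000,
    "bandwidth_limit": 100 * 1024 * 1024,
    "tags": [{"key": "app", "value": "sql", "copyable": True}],
}

print(qos_iops_compliant(data_vvol, 10000))           # True
print(qos_bandwidth_compliant(data_vvol, 50 * 1024))  # False: limit does not match the rule
print(tag_compliant(data_vvol, "app", "sql"))         # True
```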
Those are the new capabilities and the constraints by which they are enforced. One thing we want to take time to point out is why moving local snapshot protection into the placement capabilities matters. Previously, if you wanted to create a storage policy with local snapshot protection, the rules had to be built in the replication capabilities. In that case you needed to have already created the protection groups on the array, and then, when applying the policy, you had to select a replication group for local snapshot protection.
Now that local snapshot protection has been added to the placement capabilities, this is no longer required. Instead, when a storage policy with local snapshot placement rule sets is first applied to a VM, the VASA provider on the array automatically creates a protection group with the constraints dictated by the policy. That protection group and the storage policy are then linked, or paired, together. This means that if the storage policy rule sets are changed, for example changing the interval from 2 hours to 1 hour, the VASA provider automatically changes the protection group schedule accordingly when the policy is re-applied. You no longer need to manually create protection groups or assign replication groups for storage policies when local snapshot protection is desired or required.
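For context, the sketch below shows what VASA is automating here: the array-side protection group and snapshot schedule that a local snapshot placement rule maps to, expressed with the purestorage REST 1.x Python client. The array address, API token, protection group name, and schedule values are placeholder examples, and the exact keyword arguments may vary by Purity/REST version; with VASA provider 2.0.0 you do not need to do any of this yourself.

```python
# Illustrative only: the kind of protection group and snapshot schedule that the
# VASA provider creates automatically when a local snapshot placement policy is
# first applied to a VM. Names, token, and values are placeholders; keyword
# argument availability may vary by Purity/REST version.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

# A policy with "Snapshot Interval = 1 hour" and "Retain all Snapshots for = 1 day"
# roughly corresponds to a protection group snapshot schedule like this.
array.create_pgroup("vasa-example-pgroup")   # hypothetical protection group name
array.set_pgroup(
    "vasa-example-pgroup",
    snap_frequency=3600,    # snapshot interval in seconds (1 hour)
    all_for=86400,          # retain all snapshots for 1 day (seconds)
    per_day=4,              # then retain 4 additional snapshots per day...
    days=7,                 # ...for 7 days
    snap_enabled=True,      # enable the snapshot schedule
)
# VASA also adds the VM's vVols to this protection group as members, which is
# what the compliance checks above look for.
```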
Support for Thick Provisioned vVols
With VASA provider 2.0.0, thick provisioned vVols are now supported. The first question here would be: if everything on the array is always thin provisioned, then how does VASA support thick provisioning? Essentially, the only difference is that when space statistics queries are issued against virtual volumes that are thick provisioned, the VASA provider reports that all of the provisioned space is used. For example, if I have a 40 GB data vVol that I chose to thick provision, a space statistics query against that vVol will report 40 GB of provisioned space and 40 GB of used space. The reasons we decided to support thick provisioning with vVols are:
- The change was trivial and does not impact space allocation or space usage on the array, nor does it decrease the performance of the VASA provider.
- Some 3rd party vendor workflows require thick provisioning and will not retry after a failure caused by thick provisioning not being supported. Now these workflows will no longer fail due to thick provisioning errors.
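As a simple model of the reporting behavior described above, here is a toy Python sketch. The function and field names are invented for illustration; the point is only that a thick provisioned vVol reports used space equal to provisioned space, while space consumption on the array itself is unchanged.

```python
# Toy model of how space statistics are reported for thin vs. thick provisioned
# vVols with VASA provider 2.0.0. Names are invented for illustration; the array
# itself still thin provisions everything either way.
GB = 1024 ** 3

def query_space_stats(provisioned_bytes: int, array_used_bytes: int, thick: bool) -> dict:
    # For a thick provisioned vVol, VASA reports the full provisioned size as used.
    reported_used = provisioned_bytes if thick else array_used_bytes
    return {"provisioned": provisioned_bytes, "used": reported_used}

# A 40 GB data vVol with only 5 GB of data actually written on the array:
print(query_space_stats(40 * GB, 5 * GB, thick=False))  # used: 5 GB (thin)
print(query_space_stats(40 * GB, 5 * GB, thick=True))   # used: 40 GB (thick)
```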