
What's new with VASA Version 1.2.0?


With the release of Purity//FA 6.1.6, a new version of the VASA Provider runs on the FlashArray. VASA 1.2.0 brings several improvements and new features to the VMware Virtual Volumes (vVols) ecosystem. This KB covers the new features and improvements found in VASA 1.2.0.

Improvements

The work dedicated to improving VASA performance at scale has allowed Pure Storage to increase the vVols limits with VASA 1.2.0 in the Purity//FA 6.1.7 release.  Pure Engineering has focused on supporting highly concurrent requests to VASA at higher scale and load on the FlashArray.  Here is a quick look at some of the improvements in Purity//FA 6.1.7 and VASA 1.2.0.

  • Optimized Purity's backend database for creating, connecting and disconnecting volumes
  • Added support for VASA certificates using IPv6 addresses
  • Added support for VASA 3.5 bitmap hint
  • Hardened VASA against malformed and incomplete requests
  • Optimized performance of async VASA tasks

The rest of this KB breaks down the scale increases and some of the performance test results.

vVols Scale Increases

For the first time since vVols became generally available, Pure Storage is increasing the vVols scale limits for the FlashArray.  With VASA 1.2.0 and Purity//FA 6.1.7 or higher, here are the new scale limits by FlashArray model.

FlashArray Limits   Small FlashArray Models   Medium FlashArray Models   Large FlashArray Models
Max # of vVols      500 volumes               10,000 volumes             10,000 volumes
Max # of vVol VMs   500 volume groups         2,000 volume groups        2,000 volume groups

The main takeaway here is that the volume group limits (a vVols-based VM maps to a single volume group on the FlashArray) have been doubled, along with the volume limits for vVols.  A significant amount of testing was done on various FlashArray models at varying levels of workload and across different workflows.
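
As a quick sanity check before scaling up, you can compare the array's current object counts against these limits. The sketch below is a minimal example that assumes the purestorage 1.x Python REST client and its list_volumes() and list_vgroups() calls; the counts include non-vVol volumes and volume groups as well, so treat them as an upper bound.

# Minimal sketch (assumes the purestorage 1.x Python REST client).
# Counts volumes and volume groups on the array and compares them against the
# VASA 1.2.0 limits for a medium/large FlashArray model.
import purestorage

VVOL_LIMIT = 10000     # max vVols (medium/large models, Purity//FA 6.1.7+)
VGROUP_LIMIT = 2000    # max vVol VMs (one volume group per vVol VM)

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

volumes = array.list_volumes()     # includes non-vVol volumes too
vgroups = array.list_vgroups()     # includes non-vVol volume groups too

print(f"Volumes: {len(volumes)} / {VVOL_LIMIT}")
print(f"Volume groups: {len(vgroups)} / {VGROUP_LIMIT}")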

Performance Testing Examples

In order to increase the default scale limits for vVols, Pure Storage needed to make sure it could support a high number of concurrent requests at a scale of 2,000 vVol VMs.  The test environment was as follows (a sketch of how such batched operations can be scripted follows the list).

  • One FlashArray X70 for the test connected to 4 vCenter Servers
  • Two vCenter Servers running 7.0 U2a and two vCenter Servers running 6.7 U3 p03
  • Each vCenter had 500 VMs deployed to it and powered on
  • Each VM has 3 virtual disks: vmdk1 is 350 GB, vmdk2 is 1 TB and vmdk3 is 1 TB
    • This results in 5 vVols per VM (Config + 3 Data + 1 Swap)
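
The tests drove operations in batches of 100 concurrent vCenter tasks. The snippet below is a minimal sketch of that batching pattern using pyVmomi; the template, resource pool, and datastore names are placeholders, and this is not the exact test harness used for these results.

# Minimal sketch of batched VM cloning with pyVmomi (placeholder names, not the
# exact test harness). Submits 100 clone tasks at a time and waits for each batch.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTasks
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Walk the inventory for the first object of the given type with this name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

template = find_by_name(vim.VirtualMachine, "vvol-template")
pool = find_by_name(vim.ResourcePool, "Resources")
datastore = find_by_name(vim.Datastore, "vvols-datastore")
folder = template.parent

clone_spec = vim.vm.CloneSpec(
    location=vim.vm.RelocateSpec(pool=pool, datastore=datastore),
    powerOn=False, template=False)

names = [f"vvol-vm-{i:04d}" for i in range(500)]
BATCH = 100
for start in range(0, len(names), BATCH):
    # Submit one batch of clone tasks, then wait for all of them before continuing.
    tasks = [template.CloneVM_Task(folder=folder, name=n, spec=clone_spec)
             for n in names[start:start + BATCH]]
    WaitForTasks(tasks, si=si)

Disconnect(si)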

Here are some quick numbers for the total times and average task times for these tests.

Tests at 50k IOPS and ~30% Load                              Purity//FA 6.1.7 - 2000 VM Test Time
Cloning 2000 VMs from a single template in batches of 100    1 hour 20 minutes (1:24 average task time)
Powering on 2000 VMs in batches of 100                       55 minutes (1:20 average task time)
Taking managed snapshots of 2000 VMs in batches of 100       30 minutes (0:55 average task time)
Destroying managed snapshots of 2000 VMs in batches of 100   16 minutes (0:30 average task time)
Powering off 2000 VMs in batches of 100                      17 minutes (0:28 average task time)
Destroying 2000 VMs in batches of 100                        30 minutes (0:50 average task time)

The main takeaway from these numbers is that the average task times all beat expectations and the total workflow times for operations across 2,000 VMs looked good.  As the load on the FlashArray increased, the goal was to make sure that no operations failed and that the average task times stayed within a reasonable variance.


Features

VASA 1.2.0 brings two new features with its release: IPv6 support for VASA and VASA 3.5 bitmap hint support.

IPv6 Support

Pure Storage has implemented and validated the support for IPv6 with Purity//FA 6.1.7 and VASA 1.2.0.

Here is an example of a FlashArray using IPv6 for CT0.ETH0 and CT1.ETH0 and registering the storage providers for this FlashArray.

From purenetwork eth list, we can see that both ct0.eth0 and ct1.eth0 have IPv6 enabled and in use.

>purenetwork eth list
Name      Enabled  Type      Subnet  Address                  Mask           Gateway                MTU   MAC                Speed       Services     Subinterfaces
ct0.eth0  True     physical  -       2620:125:9014:2001::100  64             2620:125:9014:2001::1  1500  24:a9:37:02:66:4b  1.00 Gb/s   management   -
ct1.eth0  True     physical  -       2620:125:9014:2001::101  64             2620:125:9014:2001::1  1500  24:a9:37:02:9e:f9  1.00 Gb/s   management   -
From vCenter, add a new storage provider; do not use the shortened form of the IPv6 address (for example, [2620:125:9014:2001::100]).
In this example, the URL used to register the VASA provider is https://[2620:125:9014:2001:0:0:0:100]:8084
[Screenshot: New Storage Provider registration using the IPv6 URL (New-Storage-Provider-IPv6.png)]
Now we can see that the storage provider is registered and the URL shows the IPv6 address that was provided.
[Screenshot: Storage provider summary showing the IPv6 URL (Storage-Provider-Sum-IPv6.png)]
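
If you want to build that expanded URL programmatically, the short sketch below uses Python's standard ipaddress module; the address and port are simply the values from this example, and the zero-stripping step reproduces the [2620:125:9014:2001:0:0:0:100] form shown above.

# Minimal sketch: expand a compressed IPv6 address into the un-shortened form
# used when registering the VASA provider URL (values from the example above).
import ipaddress

compressed = "2620:125:9014:2001::100"
addr = ipaddress.IPv6Address(compressed)

# .exploded gives "2620:0125:9014:2001:0000:0000:0000:0100"; strip the leading
# zeros in each hextet to match the 2620:125:9014:2001:0:0:0:100 style.
expanded = ":".join(format(int(hextet, 16), "x") for hextet in addr.exploded.split(":"))

vasa_url = f"https://[{expanded}]:8084"
print(vasa_url)   # https://[2620:125:9014:2001:0:0:0:100]:8084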

When looking at purecert list, the vasa-ct0 and vasa-ct1 certificates show the IPv6 address in the Issued To and Common Name fields.

>purecert list
Name        Status       Key Size  Issued To                     Issued By  Valid From               Valid To                 Country  State/Province  Locality  Organization        Organizational Unit  Email  Common Name
management  self-signed  2048                                               2018-08-15 14:21:53 PDT  2028-08-12 14:21:53 PDT  -        -               -         Pure Storage, Inc.  Pure Storage, Inc.   -      -
vasa-ct0    imported     2048      2620:125:9014:2001:0:0:0:100  CA         2021-05-24 13:39:30 PDT  2022-05-25 13:39:30 PDT  US       -               -         Pure Storage        Pure Storage         -      2620:125:9014:2001:0:0:0:100
vasa-ct1    imported     2048      2620:125:9014:2001:0:0:0:101  CA         2021-05-24 13:39:30 PDT  2022-05-25 13:39:30 PDT  US       -               -         Pure Storage        Pure Storage         -      2620:125:9014:2001:0:0:0:101

Additionally, support for importing custom CA-signed certificates was added. However, keep in mind that the CSR generated by the FlashArray does not include SAN (subject alternative name) entries in the signing request.  This means the CA must add the SAN entries for the IP address, and the entries need to follow the X.509 standard for IPv6 addresses.
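
As an illustration of what that looks like on the CA side, the sketch below signs the FlashArray-generated CSR while adding an IP-type SAN entry for the controller's IPv6 address. It is a minimal example that assumes Python's cryptography package and local PEM files for the CA key, CA certificate, and CSR; it is not the FlashArray or vSphere workflow itself.

# Minimal sketch (assumes the Python "cryptography" package and local PEM files).
# Signs the FlashArray CSR while adding an IP-type SAN entry, since the CSR itself
# does not include a subjectAltName.
import datetime
import ipaddress
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization

csr = x509.load_pem_x509_csr(open("vasa-ct0.csr", "rb").read())
ca_cert = x509.load_pem_x509_certificate(open("ca.crt", "rb").read())
ca_key = serialization.load_pem_private_key(open("ca.key", "rb").read(), password=None)

builder = (
    x509.CertificateBuilder()
    .subject_name(csr.subject)
    .issuer_name(ca_cert.subject)
    .public_key(csr.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    # The SAN entry the CA must add: an IP-address entry for the controller's IPv6 address.
    .add_extension(
        x509.SubjectAlternativeName(
            [x509.IPAddress(ipaddress.IPv6Address("2620:125:9014:2001::100"))]
        ),
        critical=False,
    )
)
cert = builder.sign(private_key=ca_key, algorithm=hashes.SHA256())

with open("vasa-ct0.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))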

VASA 3.5 - Bitmap Hint Support

VMware released a new version of VASA (3.5) with vSphere 7.0 U1.  Version 3.5 of the VASA spec added some new features, and the one most interesting to Pure Storage was bitmap hint support.  Essentially, bitmap hint support allows the VASA provider to give the requester a hint of where the next allocated block is.  Rather than needing to scan an entire 1 TB data vVol, the VASA provider can look ahead to find the next allocated block on the FlashArray volume and return an offset hint to the requester.

How does this help and when are bitmap operations issued to the VASA provider?  Great questions!  There are essentially three workflows that will generate these bitmap operations.

  1. CBT (Changed Block Tracking) has just been enabled on the VM.
    1. If the VM is powered on, then either the VM needs to be powered off and on again or a managed snapshot needs to be taken.
    2. If the VM is powered off, then the bitmap calls are issued as soon as CBT is enabled.
  2. A vVols VM is storage vMotioned from vVols to VMFS on the same array, or to vVols/VMFS on a different array.
  3. A vVols VM is cloned to either a VMFS datastore on the same array or to VMFS/vVols on a different array.

The most common time that VASA sees large numbers of these allocated-bitmap calls is when CBT is enabled on a powered-on VM and a managed snapshot is taken right after.  This workflow is common when getting VMs ready to be backed up via VADP (vSphere APIs for Data Protection) and backup vendors such as Veeam, Rubrik, etc.  Additionally, many of these backup vendors have workflows that periodically disable CBT and then enable it again to take a simulated "full backup" or complete a CBT refresh.
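
For reference, this "enable CBT, then take a managed snapshot" step is an ordinary vSphere reconfigure followed by a snapshot. The sketch below shows it with pyVmomi, assuming vm is an already looked-up vim.VirtualMachine and si is a connected service instance (see the earlier clone sketch for connecting); it is an illustration, not the exact backup-vendor workflow.

# Minimal sketch: enable CBT on a VM and take a managed snapshot right after.
# Assumes "vm" is a vim.VirtualMachine and "si" a connected service instance.
from pyVim.task import WaitForTask
from pyVmomi import vim

# Enable Changed Block Tracking via a reconfigure task.
cbt_spec = vim.vm.ConfigSpec(changeTrackingEnabled=True)
WaitForTask(vm.ReconfigVM_Task(spec=cbt_spec), si=si)

# Taking a managed snapshot of the powered-on VM lets CBT take effect without a
# power cycle (and is what generates the allocated-bitmap calls to VASA).
WaitForTask(
    vm.CreateSnapshot_Task(name="cbt-refresh", description="snapshot after enabling CBT",
                           memory=False, quiesce=False),
    si=si,
)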

So why are we going through this?  The process of enabling CBT (or the svMotions) requires vSphere to gather information about all allocated blocks on the volumes.  For something like a managed snapshot, vSphere issues allocated-bitmap calls over the virtual disk (on the FlashArray, the default segment size for these queries is 128 GB) until it has a list of all allocated blocks.  The larger the virtual disk (vVol), the longer this process can take and the longer the VM is stunned.

However, with the bitmap hint feature, Purity and VASA can scan for the next allocated block and return a hint to vSphere of where to issue the next bitmap request.  This helps a ton for large, sparsely filled virtual disks.  Instead of having to scan a 40 TB volume in 128 GB chunks, if only 10 TB is used, Purity/VASA can skip all of the empty blocks and tell vSphere where to scan next.  We'll look at these times in the tests below.
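
To make the impact concrete, here is a toy back-of-the-envelope model (not the actual VASA or Purity implementation) using the 40 TB / 10 TB example above and the 128 GB default segment size:

# Toy model only: compare how many allocated-bitmap queries are needed over a
# sparse 40 TB virtual disk with and without a next-allocated-block hint.
SEGMENT_GB = 128            # default FlashArray segment size for these queries
VOLUME_GB = 40 * 1024       # 40 TB virtual disk
ALLOCATED_GB = 10 * 1024    # only 10 TB actually allocated

# Without the hint, vSphere walks the entire address space segment by segment.
queries_without_hint = VOLUME_GB // SEGMENT_GB       # 320 queries

# With the hint, empty regions are skipped; in the best case (contiguous data)
# only the segments that actually hold allocated blocks are queried.
queries_with_hint = ALLOCATED_GB // SEGMENT_GB       # 80 queries

print(queries_without_hint, queries_with_hint)       # 320 80

Real volumes with scattered allocations land somewhere between those two numbers, but the trend explains the shorter stun times in the results below.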

First, let's lay out the environment and setup that was tested.

  • One FlashArray X70 for the test connected to 4 vCenter Servers
  • Two vCenter Servers running 7.0 U2a and two vCenter Servers running 6.7 U3 p03
  • Each vCenter has 500 vVols-based VMs on the FlashArray, for 2000 total vVols-based VMs, all powered on
  • Each VM has 3 virtual disks: vmdk1 is 350 GB, vmdk2 is 1 TB and vmdk3 is 1 TB

The numbers gathered from these tests are the averages of 10 different full test runs to help even out any outliers due to variance in vSphere or the FlashArray.

The first batch of tests enables CBT on VMs in batches of 100 and then takes a managed snapshot across all 2000 VMs.  1000 VMs are in vCenters running 6.7 and 1000 VMs are in vCenters running 7.0.  The FlashArray is running enough workload to sit at around 30% load.

Looking at CBT-related workflows and the difference between 3.0 and 3.5, we can see the impact of the bitmap hint with VASA 3.5.

Tests at 50k IOPS and ~30% Load                  VASA 3.0 (vSphere 6.7 U3 p03)                   VASA 3.5 (vSphere 7.0 U2a)
Enabling CBT and then taking managed snapshots   1 hour 25 minutes (07:01 average task time)     28 minutes (01:43 average task time)
Destroying snapshots after enabling CBT          7 minutes 30 seconds (00:30 average task time)  6 minutes 45 seconds (00:20 average task time)
Taking managed snapshots of CBT-enabled VMs      26 minutes (01:30 average task time)            15 minutes (00:55 average task time)
Destroying snapshots of CBT-enabled VMs          9 minutes 30 seconds (00:40 average task time)  7 minutes 30 seconds (00:25 average task time)

With VASA 3.0, it takes just under 90 minutes to enable CBT and take managed snapshots for 1000 VMs in batches of 100 concurrent requests.
With VASA 3.5, it takes just under 30 minutes to enable CBT and take managed snapshots for 1000 VMs in batches of 100 concurrent requests.

There is a big difference here: 7 minutes per snapshot task on VASA 3.0 and 1.75 minutes per snapshot task on VASA 3.5!  These were done in batches of 100; what would this look like at different loads and with different batch sizes?

Various batch sizes at ~60% load

The FlashArray X70 is first brought to about 60% load (around 115k IOPs), then the same 2000 VMs (1000 with 3.0 and 1000 with 3.5) have CBT enabled and finally managed snapshots are taken.

Enable CBT and take Managed Snapshot
~60% Load             Batch Size 10   Batch Size 20   Batch Size 30   Batch Size 40   Batch Size 50
VASA 3.0 - avg task   01:05           02:01           02:34           03:04           03:46
VASA 3.5 - avg task   00:27           00:43           00:56           01:01           01:14

For a batch of 10 concurrent requests, VASA 3.5 is a little more than twice as fast, and the gap between 3.0 and 3.5 widens as the batch sizes grow, reaching roughly a 3x difference at batches of 50.

Next, let's take a follow-up managed snapshot of the CBT-enabled VMs.

Take Managed Snapshot for VM w/ CBT enabled
~60% Load             Batch Size 10   Batch Size 20   Batch Size 30   Batch Size 40   Batch Size 50
VASA 3.0 - avg task   00:27           00:28           00:43           00:57           01:05
VASA 3.5 - avg task   00:11           00:17           00:21           00:26           00:29

While there aren't any bitmap calls being issued during these follow-up requests, the numbers do show that 7.0 U2a has a performance increase for these follow-up snapshots.  I didn't see anything in the release notes that specifically indicates these improvements, but there is clearly a difference.

The final round of testing increased the load on the FlashArray X70 to around 85% (around 150k IOPS) and then reran the different batch sizes.

Various batch sizes at ~85% load

Enable CBT and take Managed Snapshot
~85% Load             Batch Size 10   Batch Size 20   Batch Size 30   Batch Size 40   Batch Size 50
VASA 3.0 - avg task   01:31           03:13           03:57           04:35           06:09
VASA 3.5 - avg task   00:42           01:04           01:19           01:36           01:51

There is an increase in the average task time, but with a batch of 10, enabling CBT and taking the managed snapshot still averages under 60 seconds on VASA 3.5.

Next are the follow-up snapshot batches.

Take Managed Snapshot for VM w/ CBT enabled
~85% Load             Batch Size 10   Batch Size 20   Batch Size 30   Batch Size 40   Batch Size 50
VASA 3.0 - avg task   00:32           00:47           01:03           01:21           01:33
VASA 3.5 - avg task   00:17           00:27           00:30           00:43           00:55

We see that the 3.5 times are about twice as fast as 3.0 even at the 85% load point, and the batches of 10 for 3.5 average under 20 seconds.

Overall, the big takeaway is that if the FlashArray is under higher load, smaller batches of concurrent operations help decrease the average task time.  This matters a lot when we are talking about how long these VMs are stunned during the managed snapshots.
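
If you script these workflows, one way to apply that takeaway is to pick the batch size from the array's current load before submitting each batch. The helper below is only an illustrative sketch: the thresholds are made-up examples, and the load percentage would come from your own FlashArray monitoring.

# Illustrative sketch only: choose a smaller batch of concurrent vCenter tasks
# when the FlashArray is busier. Thresholds are examples, not recommendations,
# and "array_load_pct" would come from your own FlashArray monitoring.
def pick_batch_size(array_load_pct: float) -> int:
    if array_load_pct >= 80:
        return 10      # heavily loaded array: keep average task (stun) times short
    if array_load_pct >= 50:
        return 25
    return 100         # lightly loaded array: larger batches finish the workflow sooner

# Example: split 2000 VMs into batches sized for an array at ~85% load.
vms = [f"vvol-vm-{i:04d}" for i in range(2000)]
batch = pick_batch_size(85)
batches = [vms[i:i + batch] for i in range(0, len(vms), batch)]
print(f"{len(batches)} batches of up to {batch} VMs")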