The scale and dynamic nature of vVols intrinsically change VMware storage provisioning. To provide scale and flexibility for vVols, VMware adopted the T10 administrative logical unit (ALU) standard, which it calls a protocol endpoint (PE). vVols are connected to VMs through PEs acting as subsidiary logical units (SLUs, also called sub-luns).
The FlashArray vVol implementation makes PEs nearly transparent. Array administrators seldom deal with PEs, and not at all during day-to-day operations.
Protocol Endpoints (PEs)
A typical VM has multiple virtual disks, each instantiated as a volume on the array and addressed by a LUN. The ESXi Version 6.5 support limits of 512 SCSI devices (LUNs) per host and 2,000 logical paths to them can easily be exceeded by even a modest number of VMs.
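To see how quickly these limits bind, consider a hypothetical cluster; the VM count, disks per VM, and path count below are illustrative assumptions, not figures from this document:

```python
# Illustrative arithmetic: with one array volume per virtual disk (and no
# vVols), the ESXi 6.5 device and path limits are exhausted by a modest
# VM count. All sizing inputs here are hypothetical.

vms = 200               # VMs in the cluster (assumed)
disks_per_vm = 3        # virtual disks per VM (assumed)
paths_per_device = 4    # logical paths per SCSI device (assumed)

devices = vms * disks_per_vm          # 600 SCSI devices
paths = devices * paths_per_device    # 2,400 logical paths

print(devices > 512)    # exceeds the 512-device limit -> True
print(paths > 2000)     # exceeds the 2,000-path limit -> True
```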
Moreover, each time a new volume is created or an existing one is resized, VMware must rescan its I/O interconnects to discover the change. In large environments, rescans are time-consuming; rescanning each time the virtual disk configuration changes is generally considered unacceptable.
VMware uses PEs to eliminate these problems. A PE is a volume of zero capacity with a special setting in its Vital Product Data (VPD) page that ESXi detects during a SCSI inquiry. It effectively serves as a mount point for vVols. It is the only FlashArray volume that must be manually connected to hosts to use vVols.
Fun fact: Protocol endpoints were formerly called I/O de-multiplexers. PE is a much better name.
When an ESXi host requests access to a vVol (for example, when a VM is powered on), the array binds the vVol to it. Binding is a synonym for sub-lun connection. For example, if a PE uses LUN 255, a vVol bound to it would be addressed as LUN 255:1. The section titled vVol Binding describes vVol binding in more detail.
PEs greatly extend the number of vVols that can be connected to an ESXi cluster; each PE can have up to 16,383 vVols per host bound to it simultaneously. Moreover, a new binding does not require a complete I/O rescan. Instead, ESXi issues a REPORT_LUNS SCSI command with SELECT REPORT to the PE to which the sub-lun is bound. The PE returns a list of sub-lun IDs for the vVols bound to that host. In large clusters, REPORT_LUNS is significantly faster than a full I/O rescan because it is more precisely targeted.
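The REPORT_LUNS response format is defined by the SCSI SPC standard: a 4-byte LUN-list length, 4 reserved bytes, then one 8-byte entry per LUN. The sketch below decodes single-level, flat-space-addressed entries (address method 01b in the top two bits of the first byte). This is a simplified illustration of how a list of LUN IDs comes back in one command; real vVol sub-lun addressing uses the T10 conglomerate addressing scheme and is more involved:

```python
import struct

def decode_report_luns(data: bytes) -> list[int]:
    """Decode a simplified REPORT LUNS response: an 8-byte header
    (4-byte LUN list length + 4 reserved bytes) followed by 8-byte
    LUN entries. Only single-level flat-space and peripheral-device
    addressing are handled in this sketch."""
    (list_len,) = struct.unpack_from(">I", data, 0)
    luns = []
    for off in range(8, 8 + list_len, 8):
        b0, b1 = data[off], data[off + 1]
        if b0 >> 6 == 0b01:                       # flat-space addressing
            luns.append(((b0 & 0x3F) << 8) | b1)
        else:                                     # peripheral-device addressing
            luns.append(b1)
    return luns

# Build a response listing LUNs 254 and 255 (flat-space addressing).
entries = b"".join(bytes([0x40 | (lun >> 8), lun & 0xFF]) + b"\x00" * 6
                   for lun in (254, 255))
response = struct.pack(">I", len(entries)) + b"\x00" * 4 + entries
print(decode_report_luns(response))  # [254, 255]
```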
The FlashArray PE Implementation
A Protocol Endpoint on the FlashArray can be viewed and connected from either the FlashArray UI or CLI.
Using the FlashArray UI to Manage the Protocol Endpoint
When its first VASA provider is registered, a FlashArray automatically creates a PE called pure-protocol-endpoint. The pure-protocol-endpoint can be filtered in the Volumes view. A PE can be connected from the PE volume view or from a Host/Host Group view in the FlashArray UI.
From the Storage -> Volumes view, click the options menu and select Show Protocol Endpoints. This view displays the Protocol Endpoints on the FlashArray.
From the PE view, the PE can be connected to a Host or Host Group.
From the Connect Host Groups page, one or more Host Groups can be selected to connect the PE to.
Using the FlashArray CLI to Manage the Protocol Endpoint
From the FlashArray CLI, a storage admin can manage the Protocol Endpoint. This includes listing, creating, connecting, disconnecting, and destroying a protocol endpoint.
Protocol endpoints that have been created can be listed with purevol list --protocol-endpoint
pureuser@sn1-x50r2-b12-36> purevol list --protocol-endpoint
Name                    Source  Created                  Serial
pure-protocol-endpoint  -       2020-12-02 12:28:08 PST  F4252922ADE248CF000113E6
A protocol endpoint can be created with purevol create --protocol-endpoint
pureuser@sn1-x50r2-b12-36> purevol create --protocol-endpoint prod-protocol-endpoint
Name                    Source  Created                  Serial
prod-protocol-endpoint  -       2020-12-02 12:29:21 PST  F4252922ADE248CF000113E7
pureuser@sn1-x50r2-b12-36> purevol list --protocol-endpoint
Name                    Source  Created                  Serial
prod-protocol-endpoint  -       2020-12-02 12:29:21 PST  F4252922ADE248CF000113E7
pure-protocol-endpoint  -       2020-12-02 12:28:08 PST  F4252922ADE248CF000113E6
To connect a protocol endpoint use either purehgroup connect or purevol connect
pureuser@sn1-x50r2-b12-36> purevol connect --hgroup Prod-Cluster-FC --lun 10 prod-protocol-endpoint
Name                    Host Group       Host       LUN
prod-protocol-endpoint  Prod-Cluster-FC  ESXi-3-FC  10
prod-protocol-endpoint  Prod-Cluster-FC  ESXi-2-FC  10
prod-protocol-endpoint  Prod-Cluster-FC  ESXi-1-FC  10
pureuser@sn1-x50r2-b12-36> purevol list --connect
Name                    Size  LUN  Host Group       Host
prod-protocol-endpoint  -     10   Prod-Cluster-FC  ESXi-1-FC
prod-protocol-endpoint  -     10   Prod-Cluster-FC  ESXi-2-FC
prod-protocol-endpoint  -     10   Prod-Cluster-FC  ESXi-3-FC
A protocol endpoint can be disconnected from a host or host group with purevol disconnect.
However, if there are any active sub-lun connections, the operation will fail: disconnecting the PE would cause a data path failure (a severity-1 event) for that ESXi host.
pureuser@sn1-x50r2-b12-36> purevol connect --hgroup Prod-Cluster-FC --lun 11 pure-protocol-endpoint
Name                    Host Group       Host       LUN
pure-protocol-endpoint  Prod-Cluster-FC  ESXi-3-FC  11
pure-protocol-endpoint  Prod-Cluster-FC  ESXi-2-FC  11
pure-protocol-endpoint  Prod-Cluster-FC  ESXi-1-FC  11
pureuser@sn1-x50r2-b12-36> purevol disconnect --hgroup Prod-Cluster-FC pure-protocol-endpoint
Name                    Host Group       Host
pure-protocol-endpoint  Prod-Cluster-FC  -
A disconnected Protocol Endpoint can be destroyed with purevol destroy. Do not destroy the default pure-protocol-endpoint!
pureuser@sn1-x50r2-b12-36> purevol create --protocol-endpoint dr-protocol-endpoint
Name                  Source  Created                  Serial
dr-protocol-endpoint  -       2020-12-02 14:15:23 PST  F4252922ADE248CF000113EA
pureuser@sn1-x50r2-b12-36> purevol destroy dr-protocol-endpoint
Name
dr-protocol-endpoint
A FlashArray’s performance is independent of the number of volumes it hosts; the array’s full performance capability can be delivered through a single PE. PEs are not performance bottlenecks for vVols, so a single PE per array is all that is needed.
Configuring a single PE per array does not restrict multi-tenancy. Sub-lun connections are host-specific.
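The host-specific nature of sub-lun connections can be sketched with a toy model; the class below is purely illustrative and is not the FlashArray implementation. Two hosts share one PE, but each host's sub-lun namespace is independent, so neither host can see the other's vVols:

```python
# Toy model of host-specific sub-lun bindings on a shared PE.
# Illustrative only -- not the FlashArray implementation.

class ProtocolEndpoint:
    def __init__(self, lun: int):
        self.lun = lun
        # host name -> {sub-lun ID: vVol name}
        self._bindings: dict[str, dict[int, str]] = {}

    def bind(self, host: str, vvol: str) -> str:
        """Bind a vVol for one host and return its sub-lun address."""
        subs = self._bindings.setdefault(host, {})
        sub_lun = max(subs, default=0) + 1      # next free sub-lun ID for this host
        subs[sub_lun] = vvol
        return f"{self.lun}:{sub_lun}"          # e.g. "255:1"

    def report_luns(self, host: str) -> list[str]:
        """Each host sees only its own bindings (multi-tenancy)."""
        return sorted(self._bindings.get(host, {}).values())

pe = ProtocolEndpoint(lun=255)
print(pe.bind("esxi-1", "vm1-data"))   # 255:1
print(pe.bind("esxi-2", "vm2-data"))   # 255:1 -- same sub-lun ID, different host
print(pe.report_luns("esxi-1"))        # ['vm1-data']
```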
A FlashArray automatically creates a default pure-protocol-endpoint PE when its first VASA provider is registered. If necessary, additional PEs can also be created manually. However, in most cases the default pure-protocol-endpoint is fine to use. There is no additional HA value added by connecting a host to multiple protocol endpoints.
Do not destroy or eradicate the pure-protocol-endpoint PE on the FlashArray. This namespace is required for VASA to store the metadata it needs to work correctly with the FlashArray.
BEST PRACTICE: Use one (the default) PE per array. All hosts can share the same PE; vVol-to-host bindings are host-specific, so multi-tenancy is inherently supported.
More than one PE can be configured, but doing so is seldom necessary.
As is typical of the FlashArray architecture, vVol support, and in particular the PE implementation, is as simple as possible.
Protocol Endpoints in vSphere
There are multiple ways to view the Protocol Endpoints that an ESXi host is connected to or is currently using as a mount point for a vVol Datastore.
- From the Hosts and Datacenter view, Navigate to Host -> Configure -> Storage Devices.
This view shows all storage devices connected to this ESXi host.
All Protocol Endpoints connected via the SAN will show as a 1.00 MB device.
- From the Hosts and Datacenter view, Navigate to Host -> Configure -> Protocol Endpoints.
This view displays only the Protocol Endpoints that are actively being used as a mount point for a vVol Datastore, along with each one's Operational State.
A configured PE that is not being used as a mount point for a vVol Datastore does not appear here; this is expected behavior. If no vVol datastore is mounted to the ESXi host, then no configured PEs will display in this view.
Multipathing is configured on the Protocol Endpoint, not on individual sub-luns. Each sub-lun connection inherits the multipathing policy set on the PE.
BEST PRACTICE: Configure the round robin path selection policy for PEs.
- From the Datastore View, Navigate to a vVol Datastore -> Configure -> Protocol Endpoints.
This page displays all the PEs on the FlashArray for this vVol Datastore (storage container). By default there will only be one PE on the FlashArray.
In this example there are two PEs.
From here the mounted hosts are displayed. Note that a UI bug causes the Operational Status to always show as not accessible.
By comparison, when the second PE is viewed, there are no mounted hosts, because that PE is not connected via the SAN to any ESXi hosts in this vCenter.
- From the Datastore View page, Navigate to a vVol Datastore -> Configure -> Connectivity with Hosts.
Regarding PE queue depths: ESXi applies different queue depth limits to PEs than to other volumes. Pure Storage recommends leaving ESXi PE queue depth limits at their default values.
BEST PRACTICE: Leave PE queue depth limits at the default values unless performance problems occur.
The blog post at https://blog.purestorage.com/queue-depth-limits-and-vvol-protocol-endpoints/ contains additional information about PE queue depth limits.