ESXi abstracts virtual machine storage by introducing the concept of a datastore. Datastores conceal the complexities of the underlying host-attached storage devices from the vSphere administrator. Datastores may be formatted with VMware's Virtual Machine File System (VMFS) version 5 or 6. VMFS provides shared access to ESXi hosts. Multi-host disk access enables virtual machine mobility features (vMotion and Storage vMotion) as well as high availability.
Network File System (NFS), the widely adopted distributed file system that has become an open standard, also provides shared access to files stored on an NFS server. NFS datastore support, introduced with ESX version 3, greatly extended vSphere's storage capabilities, opening an entirely new type of device to be deployed in virtualized environments. NFS datastores conceptually resemble VMFS datastores and may be easily deployed and integrated into existing network infrastructure.
FlashBlade™ is a modern, flash-based, file and object storage system designed from the ground up to offer effortless setup, efficient rack space utilization, and high storage density. Additionally, FlashBlade offers an Evergreen™ upgrade model, eliminating the need for forklift upgrades. Furthermore, FlashBlade lends itself well to virtualized environments by providing NFS storage. The FlashBlade front view is depicted in Figure 1.
The basic building blocks of FlashBlade are the 15-slot chassis and DirectFlash™ modules (blades). The FlashBlade chassis houses DirectFlash modules and provides management and host connectivity network interfaces.
A single four-rack-unit FlashBlade chassis, with all 15 slots populated, delivers:
- 1.6 PB of usable capacity with 52 TB DirectFlash™ modules
- 17 GB/s bandwidth
- Up to 1 million IO/s
- Eight 40 Gb/s or 32 10 Gb/s network ports for client connectivity
- Low power consumption of 1.8 kW
Currently supported client protocols are:
- NFS version 3
A common problem with traditional file storage devices is that as the demand for storage capacity increases, so does the number of NFS-based back-end servers and arrays, and the administrative burden grows heavier and more complex with them. Storage administrators must configure and manage more servers or NAS devices, associated management IP addresses, multiple namespaces, and multiple mount points, making the management process cumbersome and prone to mistakes.
FlashBlade, due to its high storage density and expandability, simplifies storage administration. Its deployment consolidates the network address space to a single management IP address and makes it possible to use a single mount point IP address, eliminating NFS device sprawl.
Moreover, following Pure Storage design principles, FlashBlade also offers an intuitive user interface, allowing simple storage management and provisioning with snapshots and online capacity expansion.
FlashBlade, with its unique architecture, designed specifically as a scale-out, high-bandwidth, flash-friendly storage array, has created a new class of NAS devices.
The diagram of FlashBlade architecture is shown in Figure 2.
The 15-blade chassis delivers network connectivity via two redundant Fabric Modules (FMs). Each Fabric Module is equipped with four 40 Gb/s external network ports for storage client access. The Fabric Modules also provide low-latency, high-throughput DirectFlash™ (blade) connectivity.
The rear view of FlashBlade with two Fabric Modules is shown in Figure 3.
DirectFlash™ modules (blades) deliver high-capacity storage. Off-the-shelf Solid State Disks (SSDs) have a built-in Flash Translation Layer (FTL), which presents the SSD to the storage controller as a traditional spinning disk. The designers of DirectFlash have eliminated the FTL, instead exposing "raw" NAND cells to the array's software. This allows direct control over I/O scheduling, opening the NAND performance potential for parallel workloads via PCIe/NVMe protocols and maximizing storage parallelism. Each DirectFlash module consists of a CPU, memory, controllers, and NAND cells, and is hot-pluggable in the FlashBlade chassis.
The DirectFlash module is shown in Figure 4.
FlashBlade Technical Specifications

Chassis:
- 2+2 redundant power supplies
- 1+1 elastic Fabric Modules
- 15 blade slots
- Chassis management network

DirectFlash module (blade):
- Integrated CPU, DRAM, and redundant Ethernet links
- Capacity (per blade): 8 TB, 17 TB, or 52 TB

Fabric Module:
- Integrated chassis management module
- Four 40 Gb/s QSFP+ Ethernet ports
- Embedded network with load balancing and virtualization
FlashBlade™ client data is served via eight 40 Gb/s QSFP+ or 32 10 Gb/s Ethernet ports. While it is beyond the scope of this document to describe and analyze available network technologies, at a minimum two network uplinks (one from each Fabric Module) are recommended. Each uplink should be connected to a different LAN switch. This network topology protects against switch failures as well as FM and individual network port failures.
BEST PRACTICE: Provide at least two network uplinks (one per Fabric Module).
An example of a high-performance, highly redundant network configuration with four FlashBlade uplinks and Cisco UCS is shown in Figure 5. Please note that the Cisco Nexus switches are configured with virtual Port Channel (vPC).
BEST PRACTICE: Separate storage network from other networks
ESXi Host Connectivity
ESXi host connectivity to NAS devices is provided by Virtual switches with VMkernel adapters and port groups. A Virtual switch must have at least one physical adapter (vmnic) assigned. While it is possible to connect an ESXi host to an NFS datastore with a single vmnic, this configuration does not protect against NIC failures. Whenever possible, it is recommended to create a Virtual switch and assign two vmnics to each dedicated VMkernel adapter.
BEST PRACTICE: Assign two vmnics to a dedicated VMKernel Adapter and Virtual switch.
Additionally, to reduce the Ethernet broadcast domain, connections should be configured using separate VLANs and IP subnets. By default, the ESXi host directs NFS data traffic through a single NIC. Therefore, a single NIC's bandwidth, even in multi-vmnic Virtual switch configurations, is a limiting factor for NFS datastore I/O operations.
PLEASE READ: NFS datastore connection is limited by single NIC’s bandwidth.
Network traffic load balancing for the Virtual switches with multiple vmnics may be configured by changing the load-balancing policy – see VMware Load Balancing section.
ESXi Virtual Switch Configuration
A basic recommended ESXi Virtual switch configuration is shown in Figure 6.
For ESXi hosts with two high-bandwidth Network Interface Cards, adding a second VMkernel port group will increase IO parallelism – see Figure 7 and the Datastores section for additional details. Please note that the two VMkernel port groups are on different IP subnets.
For ESXi hosts with four or more high-bandwidth Network Interface Cards, it is recommended to create a dedicated Virtual switch for each pair of NICs – see Figure 8.
Please note that each Virtual switch, each VMkernel port group, and the corresponding datastores exist on different IP subnets. This configuration provides optimal connectivity to the NFS datastores by increasing IO parallelism on the ESXi host as well as on FlashBlade – see the Datastores section for additional details.
VMware Load Balancing
VMware supports several load balancing algorithms for virtual switches:
- Route based on originating virtual port – network uplinks are selected based on the virtual machine port id – this is the default routing policy.
- Route based on source MAC hash – network uplinks are selected based on the virtual machine MAC address.
- Route based on IP hash – network uplinks are selected based on the source and destination IP address of each datagram.
- Route based on physical NIC load – uplinks are selected based on the load evaluation performed by the virtual switch; this algorithm is available only on vSphere Distributed Switch.
- Explicit failover – uplinks are selected based on the order defined in the list of Active adapters; no load balancing.
The Route based on originating virtual port and Route based on source MAC hash teaming and failover policies rely on virtual machine properties (port ID and MAC address). Therefore, they are not applicable to VMkernel-only Virtual switches and NFS datastores. The Route based on IP hash policy is the only applicable teaming option.
With Route based on IP hash load balancing, egress network traffic may be directed through one vmnic while ingress traffic arrives on another.
The Route based on IP hash teaming policy also requires configuration changes on the network switches. The procedure to properly set up link aggregation is beyond the scope of this document. The following VMware Knowledge Base article provides additional details and examples for configuring EtherChannel / Link Aggregation Control Protocol (LACP) with ESXi/ESX and Cisco/HP switches.
For the steps required to change VMware’s load balancing algorithm, see Appendix A.
The performance of FlashBlade™ and DirectFlash™ modules (blades) does not depend on the number of file systems created and exported to the hosts. However, for each host connection there is an internal 10 Gb/s bandwidth threshold between the Fabric Module and the DirectFlash module (blade) – see Figure 2. While data is distributed among multiple blades, a single DirectFlash module services each host connection to the storage network. The blade selected to service a specific host connection is determined by a hashing function. This methodology minimizes the possibility of the same blade being used by multiple hosts. For instance, one connection from a host may be internally routed to blade 1, whereas another connection from the same host may be internally routed to blade 2 for storage access.
The number of the datastores connected to the ESXi host will depend on the number of available network interfaces (NICs), bandwidth, and performance requirements. To take full advantage of FlashBlade parallelism, create or mount at least one datastore per host per network connection.
BEST PRACTICE: Create or mount at least one datastore per host per network connection
The basic ESXi host single datastore connection is shown in Figure 9.
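The mount itself can also be scripted from the ESXi shell with esxcli. A minimal sketch, assuming a FlashBlade data VIP of 10.25.0.10 and an exported file system named DS10 (both placeholders from the examples in this paper):

```shell
# Mount the FlashBlade export DS10 as an NFS v3 datastore named DS10.
# 10.25.0.10 is the (example) FlashBlade data VIP created for client connectivity.
esxcli storage nfs add --host 10.25.0.10 --share /DS10 --volume-name DS10

# List mounted NFS datastores to confirm the mount succeeded.
esxcli storage nfs list
```

Repeating the same command on every ESXi host keeps the datastore name and server address consistent across the cluster.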
For servers with high-bandwidth NICs (40 Gb/s or higher), create two or more VMkernel port groups per Virtual switch and assign an IP address to each port group. These IP addresses need to be on different subnets. In this configuration, the connection to each exported file system will be established using a dedicated VMkernel port group and the corresponding NICs. This configuration is shown in Figure 10.
For servers with four or more high bandwidth network adapters, create a dedicated Virtual switch for each pair of vmnics. The VMkernel port groups need to have IP addresses which are on different subnets. This configuration parallelizes the ESXi host as well as internal FlashBlade connectivity. See Figure 11.
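The per-pair Virtual switch layout can also be built from the ESXi shell. A sketch, in which vSwitch2, vmnic2/vmnic3, the port group name VMkernelNFS2, and the 10.25.64.0/21 addressing are placeholders for your environment:

```shell
# Create a dedicated standard Virtual switch and attach a pair of uplinks.
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic3

# Add a port group and a VMkernel adapter with a static IP on its own subnet.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=VMkernelNFS2
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=VMkernelNFS2
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.25.64.21 --netmask=255.255.248.0 --type=static
```

Repeat with a different subnet for each additional NIC pair to preserve the one-subnet-per-switch layout described above.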
BEST PRACTICE: Mount datastores on all hosts.
FlashBlade Configuration
FlashBlade configuration includes the creation of the subnet and network interfaces for host connectivity.
All the tasks may be accomplished using FlashBlade's web-based HTML5 user interface (no client installation required), the command line, or the REST API.
Configuring Client Connectivity
Create subnet for client (NFS, SMB, HTTP/S3) connectivity
- Command Line Interface (CLI)
puresubnet create --prefix <subnet/mask> --vlan <vlan_id> <subnet_name>
puresubnet create --prefix 10.25.0.0/16 --vlan 2025 vlan2025
- Graphical User Interface (GUI) - see figure 12.
- Select Settings in the left pane.
- Select Network
- Select ‘+’ sign at top-right in the Subnets header.
- Provide values in Create Subnet dialog window.
- Name – subnet name
- Prefix – network address for the subnet with the subnet mask length.
- Gateway – optional IP address of the gateway.
- MTU – optional Maximum Transmission Unit size, default is 1500, change to 9000 for jumbo frames - see also Appendix B.
- Click Create.
Create a Virtual Network Interface and Assign It to an Existing VLAN
- Command Line Interface (CLI):
purenetwork create vip --address <IP_address> --servicelist data <interface_name>
purenetwork create vip --address 10.25.0.10 --servicelist data subnet25_NIC
- Using Graphical User Interface - see Figure 13.
- Select Settings in the left pane.
- Select Add interface ‘+’ sign.
- Provide values in Create Network Interface dialog box.
- Name – Interface name
- Address – IP address where file systems can be mounted.
- Services – not modifiable
- Subnet – not modifiable
- Click Create
Creating and Exporting File System
Create and Export File System
- Command Line Interface (CLI):
purefs create --rules <rules> --size <size> File_System
purefs create --rules '*(rw,no_root_squash)' --size 78GB DS10
For existing file systems, modify the export rules (if necessary):
purefs setattr --rules <rules> File_System
purefs setattr --rules '*(rw,no_root_squash)' DS10
where --rules are standard NFS (FlashBlade-supported) export rules, in the format 'ip_address(options)':
* (asterisk) – export the file system to all hosts.
rw – file system exported will be readable and writable.
ro – file system exported will be read-only.
root_squash – file system exported will be mapped to anonymous user ID when accessed by user root.
no_root_squash – file system exported will not be mapped to anonymous ID when accessed by user root.
fileid_32bit – file system exported will support clients that require 32-bit inode support.
Add the desired protocol to the file system:
purefs add --protocol <protocol> File_System
purefs add --protocol nfs DS10
Optionally enable fast-remove and/or snapshot options:
purefs enable --fast-remove-dir --snapshot-dir File_System
- Using Graphical User Interface – see Figure 14.
- Select Storage in the left pane.
- Select File Systems and ‘+’ sign.
- Provide values in Create File System.
- File system Name
- Provisioned Size
- Select unit (K, M, G, T, P)
- Optionally enable Fast Remove.
- Optionally enable Snapshot.
- Enable NFS.
- Provide Export Rules [*(rw,no_root_squash)].
- Click Create.
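Taken together, the CLI steps above can be run as a single provisioning pass. A sketch reusing the DS10 example from this section; the size, rules, and optional flags are illustrative:

```shell
# Create the file system with export rules and a provisioned size.
purefs create --rules '*(rw,no_root_squash)' --size 78GB DS10

# Export it over NFS.
purefs add --protocol nfs DS10

# Optionally enable the fast-remove and snapshot directories.
purefs enable --fast-remove-dir --snapshot-dir DS10
```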
For ESXi hosts, the rw,no_root_squash export rules are recommended. It is also recommended to export the file system to all hosts (include * in front of the parentheses). This allows the NFS datastores to be mounted on all ESXi hosts.
BEST PRACTICE: Use *(rw,no_root_squash) rule for exporting file systems to ESXi hosts.
ESXi Host Configuration
The basic ESXi host configuration consists of creating a dedicated Virtual switch and datastore.
Creating Virtual Switch
To create a Virtual switch and NFS-based datastores using the vSphere Web Client, follow the steps below:
1. Create a vSwitch – see Figure 15.
a. Select the Hosts tab, Host ➤ Configure (tab) ➤ Virtual switches ➤ Add host networking icon.
b. Select connection type: VMkernel Network Adapter – see Figure 16.
c. Select target device: New standard switch – see Figure 17.
d. Create a Standard Switch: assign free physical network adapters to the new switch (click the green '+' sign and select an available active adapter (vmnic)) – see Figure 18.
e. Select Next when finished assigning adapters.
f. Port properties – see Figure 19.
i. Network label (for example: VMkernelNFS)
ii. VLAN ID: leave at the default (0) if you are not planning to tag outgoing network frames.
iii. TCP/IP stack: Default
iv. Available services: all disabled (unchecked).
g. IPv4 settings – see Figure 20.
i. IPv4 settings: Use static IPv4 settings.
ii. Provide the IP address and the corresponding subnet mask.
iii. Review the settings and finish creating the Virtual switch.
2. Optionally verify connectivity from ESXi host to the FlashBlade file system.
a. Login as root to the ESXi host.
b. Issue vmkping command.
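For example, assuming the FlashBlade data VIP 10.25.0.10 configured earlier (a placeholder for your environment):

```shell
# Ping the FlashBlade data VIP through the VMkernel TCP/IP stack.
vmkping 10.25.0.10

# Optionally force the test through a specific VMkernel interface.
vmkping -I vmk1 10.25.0.10
```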
Creating NFS Datastore
1. Select the Hosts tab, Host ➤ Datastores ➤ New Datastore – see Figure 21.
2. New Datastore – see Figure 22.
a. Type: NFS
3. Select NFS version: NFS 3 – see Figure 23.
4. Name and configuration – see Figure 24.
a. Datastore name: a friendly name for the datastore (for example: DS10).
b. Folder: the folder where this datastore's file system was created on FlashBlade – see Creating and Exporting File System.
c. Server: the IP address or FQDN of the VIP on the FlashBlade.
When mounting an NFS datastore on multiple hosts, you must use the same server (FQDN or IP address) and datastore name on every host. If using an FQDN, ensure that DNS records have been updated and that the ESXi hosts are correctly configured with the IP address of the DNS server.
BEST PRACTICE: Mount NFS datastores using IP addresses
Mounting NFS datastore using IP address instead of the FQDN removes the dependency on the availability of DNS servers.
ESXi NFS Datastore Configuration Settings
Adjust the following ESXi parameters on each ESXi server (see Table 1):
- NFS.MaxVolumes – Maximum number of NFS mounted volumes (per host)
- Net.TcpipHeapSize – Initial TCP/IP heap size in MB
- Net.TcpipHeapMax – Maximum TCP/IP heap size in MB
- SunRPC.MaxConnPerIp – Maximum number of unique TCP connections per IP address
SunRPC.MaxConnPerIp should be increased to avoid sharing host-to-NFS-datastore connections. ESXi supports a maximum of 256 NFS datastores but only 128 unique TCP connections per IP address, so connection sharing is forced once the unique-connection limit is reached.
The settings listed in Table 1 must be adjusted on each ESXi host using the vSphere Web Client (Advanced System Settings) or the command line, and may require a reboot.
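As a sketch, all four parameters can be set from the ESXi shell. The values below are commonly cited VMware recommendations, not the authoritative values from Table 1 – substitute the values from the table for your environment:

```shell
# Illustrative values only – confirm against Table 1 before applying.
esxcli system settings advanced set --option="/NFS/MaxVolumes" --int-value=256
esxcli system settings advanced set --option="/Net/TcpipHeapSize" --int-value=32
esxcli system settings advanced set --option="/Net/TcpipHeapMax" --int-value=512
esxcli system settings advanced set --option="/SunRPC/MaxConnPerIp" --int-value=128
```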
Changing ESXi Advanced System Settings
To change ESXi advanced system settings using vSphere Web Client GUI – see Figure 25.
- Select Host (tab)➤Host➤Configure➤Advanced System Settings➤Edit.
- In the Edit Advanced System Settings window, use the search box to locate the required parameter, modify its value, and click OK.
- Reboot if required.
To change Advanced System Settings using esxcli:
esxcli system settings advanced set --option="/SectionName/OptionName" --int-value=<value>
esxcli system settings advanced set --option="/NFS/MaxVolumes" --int-value=16
Virtual Machine Configuration
For virtual machines residing on FlashBlade-backed NFS datastores, only thin provisioned virtual disks are available; FlashBlade does not support thick provisioned disks at this time. Support for thick provisioning will be added in the future.
Based on VMware recommendations, the additional disks (non-root (Linux) or other than c:\ (Windows)) should be connected to a VMware Paravirtual SCSI controller.
Snapshots provide a convenient means of creating a recovery point and can be enabled on FlashBlade on a per-file-system basis. The actual snapshots are located in the .snapshot directory on the exported file systems. The contents of the .snapshot directory may be copied to a different location, providing a recovery point. To recover a virtual machine using a FlashBlade snapshot:
1. Mount the .snapshot directory with the 'Mount NFS as read-only' option on the host where you would like to recover the virtual machine – see Figure 27.
2. Select the newly mounted datastore and browse the files to locate the directory where the virtual machine files reside – see Figure 28.
3. Select and copy all virtual machine files to a directory on a different datastore.
4. Register the virtual machine by selecting Host ➤ Datastore ➤ Register VM and browsing to the new location of the virtual machine files – see Figure 29.
5. Unmount the datastore mounted in step 1.
VMware managed snapshots are fully supported.
While the recommendations and suggestions outlined in this paper do not cover all possible ESXi and FlashBlade implementation details and configuration settings, they should serve as a guideline and provide a starting point for NFS datastore deployments. Continuous data collection and analysis of the network, active ESXi hosts and FlashBlade performance characteristics are the best method of determining if and what changes may be required to deliver the most reliable, robust, high-performing virtualized compute service.
BEST PRACTICE: Always monitor your network, FlashBlade, ESXi hosts
Appendix A – Changing Network Load Balancing Policy (see Figure A1)
To change network load balancing policy using command line:
esxcli network vswitch standard policy failover set -l iphash -v <vswitch-name>
esxcli network vswitch standard policy failover set -l iphash -v vSwitch1
To change network load balancing policy using vSphere Web Client:
- Select Host (tab)➤Host➤Configure➤Virtual switches
- Select the switch➤Edit (Pencil)
- Virtual switch Edit Setting dialog➤Teaming and failover➤Load Balancing➤Route based on IP hash
Appendix B – Jumbo Frames
While the typical Ethernet (IEEE 802.3) Maximum Transmission Unit is 1500 bytes, larger MTU values are also possible. Both FlashBlade and ESXi support jumbo frames with an MTU of 9000 bytes.
Create subnet with MTU 9000
1. Command Line Interface (CLI):
puresubnet create --prefix <subnet/mask> --vlan <vlan_id> --mtu <mtu> <subnet_name>
puresubnet create --prefix 10.25.64.0/21 --vlan 2064 --mtu 9000 vlan2064
2. Graphical User Interface (GUI) - see Figure B1
i. Select Settings in the left pane
ii. Select Network and ‘+’ sign next to “Subnets”
iii. Provide values in Create Subnet dialog window changing MTU to 9000
iv. Click Save
Change an existing subnet MTU to 9000
- Select Settings in the left pane
- Select Edit Subnet icon
- Provide new value for MTU
- Click Save
ESXi Host Configuration
Jumbo frames need to be enabled on a per-host and per-VMkernel-switch basis. Only command line configuration examples are provided below.
1. Login as root to the ESXi host
2. Modify MTU for the NFS datastore vSwitch
esxcfg-vswitch -m <MTU> <vSwitch>
esxcfg-vswitch -m 9000 vSwitch2
3. Modify MTU for the corresponding port group
esxcfg-vmknic -m <MTU> <portgroup_name>
esxcfg-vmknic -m 9000 VMkernel2vs
Verify connectivity between the ESXi host and the NAS device using jumbo frames
vmkping -s 8784 -d <destination_ip>
vmkping -s 8784 -d 192.168.1.10
The -d option disables datagram fragmentation and the -s option defines the size of the ICMP data. ESXi does not support an MTU greater than 9000 bytes; with 216 bytes of header overhead, the effective ICMP data size should be 8784 bytes.
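The 8784-byte figure follows directly from that arithmetic; a quick check in the shell:

```shell
# ESXi maximum MTU minus the 216-byte header overhead
# gives the largest ICMP payload vmkping can send unfragmented.
mtu=9000
overhead=216
echo $((mtu - overhead))
```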
- Best Practices for Running vSphere on NFS Storage – https://www.vmware.com/techpapers/2010/best-practices-for-running-vsphere-on-nfs-storage-10096.html
- Best Practices for Running VMware vSphere on Network Attached Storage – https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-nfs-bestpractices-white-paper-en.pdf
- NFS Best Practices – Part 1: Networking – https://cormachogan.com/2012/11/26/nfs-best-practices-part-1-networking/
- NFS Best Practices – Part 2: Advanced Settings – https://cormachogan.com/2012/11/27/nfs-best-practices-part-2-advanced-settings/