Pure1 Support Portal

Offloaded Data Transfer (ODX)

Overview

Offloaded Data Transfer (ODX) was introduced in Microsoft Windows Server 2012 to provide direct data transfer within or between storage arrays, bypassing the host system. By offloading file transfer operations from the host system, ODX reduces host-side latency, leverages the throughput capabilities of the storage array, and frees CPU and network resources on the host.

ODX is transparent to the Windows Server host, whether a user or administrator initiates a file transfer through Windows Explorer via drag-and-drop or through command-line tools.

Pure Storage implemented ODX as part of the Purity 4.10 maintenance release. Windows Server 2012, 2012 R2, and 2016 have ODX enabled by default, so Pure Storage customers can immediately take advantage of this Purity capability.

To check whether ODX is enabled on Windows Server, run the Windows PowerShell command below. If the result is 0, ODX is enabled; if the result is 1, ODX is disabled.

Get-ItemProperty -Path 'hklm:\system\currentcontrolset\control\filesystem' -Name 'FilterSupportedFeaturesMode'

If ODX is disabled, indicated by a result of 1 from the command above, run the following PowerShell to enable it.

Set-ItemProperty -Path 'hklm:\system\currentcontrolset\control\filesystem' -Name 'FilterSupportedFeaturesMode' -Value 0
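The check and the fix can be combined into a single snippet (a minimal sketch using the same registry path as above; run in an elevated PowerShell session):

```powershell
# FilterSupportedFeaturesMode: 0 = ODX enabled, 1 = ODX disabled.
$key = 'hklm:\system\currentcontrolset\control\filesystem'
$mode = (Get-ItemProperty -Path $key -Name 'FilterSupportedFeaturesMode').FilterSupportedFeaturesMode

# Enable ODX only if it is currently disabled.
if ($mode -ne 0) {
    Set-ItemProperty -Path $key -Name 'FilterSupportedFeaturesMode' -Value 0
}
```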

Implementing ODX benefits the following use cases:

  • Import and export of Hyper-V virtual machines.
  • Transfer of large files, such as SQL Server databases, Exchange databases, VHD(X) files, and image or video files.
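As an illustration of the first use case, a Hyper-V export to another volume on the array uses the standard Hyper-V cmdlet; when both volumes reside on an ODX-capable array, the copy is offloaded automatically. The VM name and destination path below are example values, not taken from this article:

```powershell
# Export a VM to a destination volume on the same FlashArray.
# 'SQL01' and 'E:\Exports' are hypothetical placeholder values.
Export-VM -Name 'SQL01' -Path 'E:\Exports'
```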

Without ODX, transferring data from one host system to another relied entirely upon the network. The basic operation was to read the content from the storage array on host system A, copy it across the network to host system B, and finally write it back to the same storage array.

To eliminate this inefficiency, ODX uses a token-based mechanism for reading and writing data within or between storage arrays. Instead of routing the data through the host, a small token is copied between the source server and destination server. The token serves as a point-in-time representation of the data. As an example, when you copy a file or migrate a virtual machine between storage locations (within or between storage arrays), a token representing the virtual machine file is copied, thereby removing the need to copy the underlying data through the servers.
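At a high level, the token exchange follows the Windows copy-offload sequence. This is a simplified sketch based on Microsoft's ODX documentation, not Pure-specific behavior:

```
1. The host issues FSCTL_OFFLOAD_READ for the source file extents;
   the storage stack translates this into a SCSI POPULATE TOKEN command.
2. The array returns a ROD (Representation of Data) token, the
   point-in-time representation of the source data.
3. The host issues FSCTL_OFFLOAD_WRITE with that token against the
   destination; the stack translates this into WRITE USING TOKEN.
4. The array moves (or simply remaps) the data internally; no data
   payload crosses the host or the network.
```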

 

Performance Examples

The following lab performance results are provided as an example to illustrate the benefits of using ODX.

Windows File Explorer

Windows File Explorer performs copy processing as a single-threaded operation. Single-threaded operation becomes a bottleneck when transferring a large number of files or very large files; with ODX this bottleneck is alleviated. To illustrate this point, a basic test was performed: copying files from one Pure Storage volume to another (Server01-Vol01 to Server01-Vol02). The file set consisted of EXE, ISO, and ZIP files, totaling ~1.25 TB across 1,040 files.

Figure 1 shows the copy process in progress, pushing >125 GB/s. Figure 2 is the view from the dashboard of a Pure Storage FlashArray//M20; note that it shows only 20.00 KB/s. This is because the array is only creating new metadata to represent the copied data; nothing has to move through the network.


Figure 1.


Figure 2.

 

Robocopy

Robocopy is a command-line file copy utility included with Windows Server. The same basic test performed with Windows File Explorer was repeated with Robocopy in multi-threaded (/MT) mode; /MT defaults to 8 threads. Figure 3 shows the command-line syntax to copy files from G:\Files to E:\ and mirror the structure, along with /MT for multithreading.
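The invocation shown in Figure 3 is along these lines (reconstructed from the description above; the exact flags appear in the figure). /MIR mirrors the directory structure and /MT enables multithreading with its default of 8 threads:

```
robocopy G:\Files E:\ /MIR /MT
```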

The lower portion of Figure 3 shows the output; the important detail is that it took 29 seconds to copy 1.252 TB of files. Figure 4 is the view from the dashboard of a Pure Storage FlashArray//M20.


Figure 3.


Figure 4.