
Raw Device Mappings (RDM)


In the early days of ESXi, VMware provided the ability to present a volume / LUN from a backend storage array directly to a virtual machine. This technology is called a Raw Device Mapping, also known as an RDM. While the introduction of RDMs provided several key benefits for end users, those benefits are outside the scope of this document. For an in-depth review of RDMs and their benefits, it is recommended that you review the Raw Device Mapping documentation from VMware.

RDMs are, in some ways, becoming obsolete with the introduction and maturation of VMware vSphere Virtual Volumes (vVols). Most of the features that RDMs provide are also available with vVols. Pure Storage recommends the use of vVols when possible; for more on this topic, see our Virtual Volume (vVol) documentation.

There may still be situations where RDMs are required or preferred over vVols. For example, some 3rd party applications work better with RDMs and have not yet integrated with vVols to leverage the same features. RDMs work very well with Pure Storage and remain a viable option when specific integrations and features with 3rd party applications are required.

Additionally, RDMs are not currently supported with NVMe-oF.

RDM Compatibility Modes

There are two compatibility modes for a Raw Device Mapping: physical and virtual. Which option you choose depends on what features are required within your environment.

Physical Compatibility Mode

An RDM used in physical compatibility mode, also known as a pass-through RDM or pRDM, exposes the physical properties of the underlying storage volume / LUN to the Guest Operating System (OS) within the virtual machine. This means that all SCSI commands issued by the Guest OS (with the exception of one, REPORT LUNS) are passed directly to the underlying device, allowing the VM to take advantage of some of the lower-level storage functions that may be required.

Virtual Compatibility Mode

An RDM used in virtual compatibility mode, also known as a vRDM, virtualizes the physical properties of the underlying storage and, as a result, appears the same way a virtual disk file on a VMFS volume would appear. The only SCSI commands that are not virtualized are READ and WRITE, which are still sent directly to the underlying volume / LUN. vRDMs still allow for some of the same benefits as a VMDK on a VMFS datastore and are a little more flexible to move throughout the environment.

In order to determine which compatibility mode should be used within your environment, it is recommended you review the Difference between Physical compatibility RDMs and Virtual compatibility RDMs article from VMware.

Due to the various configurations that are required for each RDM mode, Pure Storage does not have a best practice for which mode to use. Both are equally supported.

Managing Raw Device Mappings

Connecting a volume for use as a Raw Device Mapping

The process of presenting a volume to a cluster of ESXi hosts to be used as a Raw Device Mapping is no different than presenting a volume that will be used as a datastore. The most important step for presenting a volume that will be used as an RDM is ensuring that the LUN ID is consistent across all hosts in the cluster. The easiest way to accomplish this task is by connecting the volume to a host group on the FlashArray instead of individually to each host. This process is outlined in the FlashArray Configuration section of the VMware Platform Guide.

If the volume is not presented with the same LUN ID to all hosts in the ESXi cluster, then VMware may incorrectly report that a volume is not in use when it is. The VMware Knowledge Base article Storage LUNs that are already in use as an RDM appear available in the Add Storage Window explains this behavior further.
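
To double-check LUN ID consistency after presenting the volume, the paths can be compared across hosts with VMware PowerCLI. The following is a minimal sketch; the cluster name and naa identifier are placeholders for your environment.

# Compare the runtime name (which ends in the LUN ID, e.g. vmhba64:C0:T1:L250) across all hosts
$canonicalName = "naa.624a93708a75393becad4e430004e270"

Get-Cluster "Production-Cluster" | Get-VMHost | ForEach-Object {
    $lun = Get-ScsiLun -VmHost $_ -CanonicalName $canonicalName -ErrorAction SilentlyContinue
    [PSCustomObject]@{
        VMHost      = $_.Name
        RuntimeName = $lun.RuntimeName
    }
}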

Identifying the underlying volume for a Raw Device Mapping

There are times when you will need to determine which volume on the FlashArray is associated with a Raw Device Mapping. To do so:

1. Right click on the virtual machine and select "Edit Settings".
2. Locate the desired RDM you wish to inspect and expand the properties of the disk.
3. Under the disk properties locate the "Physical LUN" section and note the vml identifier.

rdm-inspect-1.png

4. Once you have the vml identifier, you can extract the LUN ID and the volume serial number to match it to a FlashArray volume.

vml.0200fa0000624a93708a75393becad4e430004e270466c61736841

The above string can be analyzed as follows:

  • fa - hex value of the LUN ID (250 in decimal)
  • 624a9370 - Pure Storage identifier
  • 8a75393becad4e430004e270 - volume serial number on the FlashArray

5. Now that we know the identifier and LUN ID, we can look at the FlashArray to determine which volume is backing the RDM.

vol-identified.png

As seen above, we are able to confirm that this particular RDM is backed by the volume called "space-reclamation-test" on the FlashArray.
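
The same breakdown can be applied in a few lines of PowerShell. The sketch below is based on the example string from step 4 and simply slices the identifier at the offsets described above; the resulting serial can then be matched against the volume serial shown in the FlashArray GUI or CLI.

$vml = "vml.0200fa0000624a93708a75393becad4e430004e270466c61736841"

$hex    = $vml.Substring(4)                              # strip the "vml." prefix
$lunId  = [Convert]::ToInt32($hex.Substring(2, 4), 16)   # "00fa" -> 250
$naa    = "naa." + $hex.Substring(10, 32)                # 624a9370 + 24-character volume serial
$serial = $hex.Substring(18, 24)                         # FlashArray volume serial number

"LUN ID: $lunId"
"NAA   : $naa"
"Serial: $serial"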

Removing a Raw Device Mapping from a virtual machine

The process for removing a Raw Device Mapping from a virtual machine is a little different than that of removing a virtual machine disk (VMDK).

1. Right click the virtual machine and select "Edit Settings".
2. Locate the desired RDM you wish to remove and click the "x".

remove-rdm-1.png

3. Ensure the box "Delete files from datastore" is checked.

remove-rmd-2.png

4. Click "OK".

5. If you no longer require the volume, you can safely disconnect it from the FlashArray and rescan the ESXi hosts.

Selecting "Delete files from datastore" does not delete the data on the volume. This step simply removes the mapping file on the datastore that points to the underlying volume (raw device).

Resizing a Raw Device Mapping

Depending on which compatibility mode you have chosen for your RDM, the resize process will vary. The process outlined in Expanding the size of a Raw Device Mapping (RDM) provides an example for both physical and virtual RDMs.
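
If you prefer to script the resize, the PureStorage.FlashArray.VMware module covered later in this article includes a Set-PfaRDMCapacity cmdlet. The sketch below is an assumption-heavy example only: the VM name, disk name, and the -SizeInTB / -Flasharray parameter names (modeled on New-PfaRDM) should be verified with Get-Help Set-PfaRDMCapacity before use.

# Assumed parameter names; confirm with: Get-Help Set-PfaRDMCapacity -Full
$flasharray = New-PfaArray -EndPoint flasharray-01.purestorage.com -Credentials (Get-Credential)
Get-VM SQLVM | Get-HardDisk -Name "Hard disk 2" | Set-PfaRDMCapacity -Flasharray $flasharray -SizeInTB 8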

Multipathing

A common question when using Raw Device Mappings (RDMs) is where the multipathing configuration should be completed. Because an RDM provides the ability for a virtual machine to access the underlying storage directly, it is often thought that configuration within the VM itself is required. Luckily, things are not that complicated and the configuration is no different than that of a VMFS datastore. This means that the VMware Native Multipathing Plugin (NMP) is responsible for RDM path management, not the virtual machine.

As a result, no MPIO configuration is required within the virtual machines utilizing RDMs. All of the available paths are abstracted from the VM and the RDM is presented as a disk with a single path. All of the multipathing logic is handled in the lower levels of the ESXi host.

Below is an example of what a pRDM looks like within a Windows VM:

DISKPART> detail disk

PURE FlashArray SCSI Disk Device
Disk ID: {DF47FF82-8901-423E-A774-E2F6B29049E2}
Type   : SAS
Status : Online
Path   : 0
Target : 1
LUN ID : 0
Location Path : PCIROOT(0)#PCI(1500)#PCI(0000)#SAS(P00T01L00)
Current Read-only State : No
Read-only  : No
Boot Disk  : No
Pagefile Disk  : No
Hibernation File Disk  : No
Crashdump Disk  : No
Clustered Disk  : No

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
  Volume 4     E   RDM-Example  NTFS   Partition    499 GB  Healthy
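
From the ESXi side, by contrast, every path to the backing device is visible and owned by NMP. Below is a minimal PowerCLI sketch for checking the path count and path selection policy of an RDM's backing device; the host name and naa identifier are placeholders.

$vmhost = Get-VMHost "esxi-01.purestorage.com"
$lun    = Get-ScsiLun -VmHost $vmhost -CanonicalName "naa.624a93708a75393becad4e430004e270"

$lun | Select-Object CanonicalName, MultipathPolicy     # Round Robin is recommended for FlashArray devices
($lun | Get-ScsiLunPath).Count                          # paths are managed here, not inside the VM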

RDM SCSI Inquiry Data

VMware has observed that some guest operating systems (the virtual machine's operating system) and/or applications require current SCSI INQUIRY data to operate without disruption to the guest or application. By default, the guest OS gets the SCSI INQUIRY data for RDMs from the ESXi host's cache rather than directly from the array. VMware has a section in their product documentation that covers how an RDM device can be configured so the guest OS ignores the ESXi host's cache and queries the array directly for this information. This can be found in this VMware KB.

The setting to ignore the inquiry cache may need to be enabled for RDMs that are leveraging ActiveCluster or ActiveDR, depending on the needs of the Guest OS or application. This is particularly true when leveraging ActiveCluster for planned migration of RDMs between arrays, because the device states change within a short amount of time and the guest OS or application may otherwise encounter stale data from the host cache.
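
As an illustration only, the per-disk advanced setting described in the VMware documentation is typically applied as a scsiX:Y.ignoreDeviceInquiryCache entry in the VM's configuration, which can be added with PowerCLI. The VM name and the scsi0:1 key below are placeholders; the key must match the controller and unit number of the RDM in question, and the exact option name should be confirmed against the referenced VMware KB.

# Assumes the RDM is attached as SCSI controller 0, unit 1 on the VM "SQLVM"
Get-VM "SQLVM" | New-AdvancedSetting -Name "scsi0:1.ignoreDeviceInquiryCache" -Value "TRUE" -Confirm:$false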

Multi-Writer

When utilizing RDMs, people often have questions regarding the "Sharing" option presented while adding the RDM to the virtual machine (illustrated below).

multi-writer_sharing.png

Since RDMs are most often used for situations like clustering, there is often concern about whether or not this value should be set. There is a fear that if it is left unspecified (which defaults to "No Sharing"), corruption of some kind can happen on the disk. This is a good mindset to have, as protecting data should always be the number one goal.

The first important thing to note here is that this option is meant for VMDKs or virtual RDMs (vRDM) only. It is not for use with physical RDMs (pRDM), as they are not "VMFS backed". If your environment is utilizing physical RDMs, then you do not need to worry about this setting.

If you are utilizing virtual RDMs, then there is a possibility that setting this option is required, specifically if you are running Oracle RAC on your virtual machines. As of this writing, this is the only scenario in which multi-writer is known to be required with virtual RDMs on a Pure Storage FlashArray. VMware has provided additional information on this in their Enabling or disabling simultaneous write protection provided by VMFS using the multi-writer flag KB.
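
For reference, below is a hedged sketch of enabling multi-writer on an existing vRDM through the vSphere API with PowerCLI. The VM and disk names are placeholders, and the Sharing property on the disk backing requires vSphere 6.0 or later; the Sharing dropdown in Edit Settings shown earlier accomplishes the same thing.

# Placeholders: adjust the VM name and hard disk name for your environment
$vm   = Get-VM "OracleRAC-Node1"
$disk = Get-HardDisk -VM $vm -Name "Hard disk 2"        # the vRDM to modify

$spec                = New-Object VMware.Vim.VirtualMachineConfigSpec
$devChange           = New-Object VMware.Vim.VirtualDeviceConfigSpec
$devChange.Operation = [VMware.Vim.VirtualDeviceConfigSpecOperation]::edit
$devChange.Device    = $disk.ExtensionData
$devChange.Device.Backing.Sharing = "sharingMultiWriter"
$spec.DeviceChange   = @($devChange)

# Depending on the configuration, the VM may need to be powered off for the change to apply
$vm.ExtensionData.ReconfigVM($spec)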

If there are questions around this topic please open a case with Pure Storage Technical Support for additional information.

Do not set multi-writer on RDMs that are going to be used in a Windows Server Failover Cluster (WSFC) as this may cause accessibility issues to the disk(s). Windows manages access to the RDMs via SCSI-3 persistent reservations and enabling multi-writer is not required.

Queue Depth

An additional benefit to utilizing a Raw Device Mapping is that each RDM has its own queue depth limit, which may in some cases provide increased performance. Because the VM sends I/O directly to the FlashArray volume, there is no shared datastore queue depth like there is with a VMDK.

Aside from the potentially shared queue depth on the virtual machine's SCSI controller, each RDM has its own dedicated queue depth and works under the same rules as a datastore would. This means that if a Raw Device Mapping is presented to a single virtual machine, the queue depth for that RDM is whatever the HBA queue depth is configured to. Alternatively, if the RDM is presented to multiple virtual machines, then the device's Number of Outstanding I/Os value is the queue limit for that particular RDM.
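
To see the effective limits for a given RDM's backing device, esxcli can be queried through PowerCLI. A minimal sketch, with the host name and naa identifier as placeholders:

$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi-01.purestorage.com") -V2
$esxcli.storage.core.device.list.Invoke(@{device = "naa.624a93708a75393becad4e430004e270"})
# In the output, look at "Device Max Queue Depth" and "No of outstanding IOs with competing worlds"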

For additional information on queue depths, you can refer to Understanding VMware ESXi Queuing and the FlashArray.

Unless directed by Pure Storage or VMware Support, there typically is no reason to modify either of these values. The default queue depths are sufficient for most workloads.

UNMAP

One of the benefits of using RDMs is that SCSI UNMAP is a much less complicated process than with VMDKs. Depending on the version of ESXi you are using, there are different caveats for UNMAP to be successful with VMDKs. With RDMs, the only requirements are that the Guest OS supports SCSI UNMAP and that the feature is enabled.

Windows

If utilizing Windows, please refer to the Windows Space Reclamation KB for UNMAP requirements and how to verify this feature is enabled.
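
As a quick sanity check inside the Windows guest (run from an elevated prompt; the drive letter is a placeholder):

# 0 = delete notifications (UNMAP/TRIM) are enabled, 1 = disabled
fsutil behavior query DisableDeleteNotify

# Optionally trigger a manual space reclamation pass on an NTFS volume
Optimize-Volume -DriveLetter E -ReTrim -Verbose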

Linux

If utilizing Linux, please refer to the Reclaiming Space on Linux KB for UNMAP requirements and how to verify this feature is enabled.

Managing RDMs with PowerShell

Pure Storage offers a PowerShell Module called PureStorage.FlashArray.VMware to help with PowerShell management of Pure Storage and VMware environments.

To install this module, run:

PS C:\> Install-Module PureStorage.FlashArray.VMware

In this module there are a few cmdlets that assist specifically with RDMs.

PS C:\> Get-Command -Name *RDM* -Module PureStorage.FlashArray.VMware.RDM -CommandType Function

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Function        Convert-PfaRDMToVvol                               1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Copy-PfaSnapshotToRDM                              1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Get-PfaConnectionFromRDM                           1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Get-PfaRDMSnapshot                                 1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Get-PfaRDMVol                                      1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        New-PfaRDM                                         1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        New-PfaRDMSnapshot                                 1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Remove-PfaRDM                                      1.1.0.2    PureStorage.FlashArray.VMware.RDM
Function        Set-PfaRDMCapacity                                 1.1.0.2    PureStorage.FlashArray.VMware.RDM

For instance, to create a new RDM, run:

PS C:\> Connect-VIServer -Server vcenter-01.purestorage.com
PS C:\> $flasharray = New-PfaArray -EndPoint flasharray-01.purestorage.com -Credentials (Get-Credential)
PS C:\> $vm = Get-VM SQLVM
PS C:\> $vm | New-PfaRDM -SizeInTB 4 -Flasharray $flasharray

Replace the vCenter FQDN, FlashArray FQDN, volume size, and VM name with your own values.