
Cloning an Oracle Database on VMware RDMs


 

Raw device mapping (RDM) provides a mechanism for a virtual machine to have direct access to a volume on a FlashArray™, and can be used only with Fibre Channel or iSCSI. An RDM can be thought of as a symbolic link from a VMFS volume to a physical volume on the FlashArray: it is a mapping file in a separate VMFS volume that acts as a proxy for the raw physical storage device and contains the metadata for managing and redirecting disk access to that device. The key advantage of an RDM is the one-to-one relationship between a virtual disk in the VM and a volume on the FlashArray. This makes it possible to use FlashArray features such as snapshots and replication at virtual disk granularity, which is a major advantage over VMFS datastores.

RDMs are not fully integrated with the VMware stack. VMware has addressed these limitations with a feature called Virtual Volumes (vVols), released in vSphere 6.0; vVols can be thought of as the "new and improved" RDM. Because of the advantages that vVols provide over VMFS or RDMs, Pure Storage recommends that customers use vVols for Oracle workloads whenever possible. However, due to various constraints, some customers may not be able to move to vVols immediately, and for them RDMs may be the next best option. This document provides detailed instructions on how to create a clone of an Oracle database that uses RDMs.

For instructions on cloning an Oracle database on VMFS, please refer to the KB article Cloning an Oracle Database on VMware VMFS.

To learn about cloning an Oracle database on Virtual Volumes, please refer to the KB article Virtualize an Oracle Database on VMware using Virtual Volumes.

 

RDMs can be configured in two different modes: physical compatibility mode (pRDM) and virtual compatibility mode (vRDM). 

Physical mode: Each pRDM is a volume on the array. I/O does not go through the virtual SCSI stack; with minor exceptions, the VM has direct access to the volume, and the guest OS sees the volume as presented by the array. VM snapshots are not available when an RDM is used in physical compatibility mode, but that is not a significant limitation, because with RDMs we would rather take FlashArray snapshots than VM snapshots.

Virtual mode: Each vRDM is also a volume on the array, but I/O goes through the virtual SCSI stack. The guest does not have direct access to the volume, so it looks like a VMware "virtual" disk to the OS. VM snapshot functionality is available for RDMs used in virtual compatibility mode.

Both physical and virtual RDMs are supported for running Oracle databases. For details on the differences between Physical and Virtual mode, please refer to the VMware document (KB 2009226) - Difference between Physical compatibility RDMs and Virtual compatibility RDMs

Irrespective of the compatibility mode, an RDM disk is a volume on the FlashArray. Therefore, the cloning steps described below are the same for both modes.

 

Cloning Process

We will now go over the detailed steps to clone an Oracle database on RDM disks with a live example.

In our example, we have a VM called vm-ora-prdm-prod-01 that contains three RDM disks of 2 TB each: one disk holds the Grid Infrastructure and Database software (an XFS filesystem mounted on /u01), while the remaining two hold the DATA and FRA ASM disk groups. The VM is running Oracle Enterprise Linux 7.7 (UEK 4.1.12-124.37). To keep things simple, we are using Linux udev rules for device persistence.
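Before proceeding, it can help to confirm this layout on the source VM. A minimal check might look like the following sketch, which uses standard Linux and ASM commands and assumes the grid user's login environment points at the Grid Infrastructure home and the +ASM instance:

# df -h /u01
# su - grid -c "asmcmd lsdg"

The first command should show the /u01 XFS filesystem, and the second should list the DATA and FRA disk groups.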

 

Clone the source VM

The first step in cloning an Oracle database that runs on a VM is to create a new VM to host the clone. There are various options here: we could create a new VM from scratch, install the OS from an ISO, apply any OS patches required to bring it to the same version level as the OS on the source VM, set kernel parameters to the Oracle-recommended values, create users and groups, and so on. As is evident, this method is lengthy and error-prone.

A better approach is to leverage the features provided by the VMware platform and create a template or clone from the production VM itself. Not only would it already have the required OS and Oracle patches applied, it would also have the OS parameters optimized for an Oracle database.

In the Hosts and Clusters view, right-click the source VM and select Clone -> Clone to Virtual Machine. The Clone Existing Virtual Machine wizard opens. Provide the required details for the new clone. On the Select clone options screen, make sure to select the Customize this virtual machine's hardware checkbox.

[Screenshot: Clone Existing Virtual Machine wizard - Select clone options screen]

 

The next screen allows customization of the clone VM's virtual hardware. Delete the 3 RDM disks here.

 

The Ready to complete screen will show that the 3 hard disks will be removed.

[Screenshot: Ready to complete screen showing the 3 hard disks to be removed]

 

Click on Finish to submit the task to create the clone VM - vm-ora-prdm-dev-01.

After the task completes, power on the VM and launch the console to watch the OS boot. The boot process will most likely hang, because the OS will try to mount the /u01 filesystem, which is no longer available - the volume for /u01 was on one of the 3 disks we removed while customizing the VM clone. Enter the root password to get to the console, comment out the /u01 line in /etc/fstab, and reboot the VM. Do not delete this line; we will uncomment it in a later step. The OS should now boot successfully. Assign a new IP address and make sure the VM can connect to the network. Do not change the hostname of the clone VM.
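For reference, the change is just a comment character at the start of the /u01 line in /etc/fstab; the UUID shown below is illustrative, not the one from our example VM:

# /u01 entry in /etc/fstab, commented out until the RDM volume is re-attached
#UUID=0a1b2c3d-1111-2222-3333-444455556666  /u01  xfs  defaults  0 0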

 

Create Volumes for Clone VM

Use the following CLI commands to create new volumes for the clone. The sizes of these clone volumes should be the same as the source volumes.

ssh pureuser@10.22.222.28 purevol create --size 2T  oracle-rt-prdm-dev-01-orahome
ssh pureuser@10.22.222.28 purevol create --size 2T  oracle-rt-prdm-dev-01-data
ssh pureuser@10.22.222.28 purevol create --size 2T  oracle-rt-prdm-dev-01-fra
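Optionally, confirm the new volumes and their sizes from the CLI before connecting them, using the same purevol CLI as above:

ssh pureuser@10.22.222.28 purevol list oracle-rt-prdm-dev-01-orahome oracle-rt-prdm-dev-01-data oracle-rt-prdm-dev-01-fra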

 

After the volumes are created, we connect them to the VMware cluster. If you would like to connect the volumes to an individual ESXi host instead, replace the --hgroup option with the --host option in the commands below.

ssh pureuser@10.22.222.28 purevol connect --hgroup ora-demo-vmcluster  oracle-rt-prdm-dev-01-orahome
ssh pureuser@10.22.222.28 purevol connect --hgroup ora-demo-vmcluster  oracle-rt-prdm-dev-01-data
ssh pureuser@10.22.222.28 purevol connect --hgroup ora-demo-vmcluster  oracle-rt-prdm-dev-01-fra

 

We can log in to the FlashArray GUI to verify that the new volumes were created and connected to the right host or host group, as the case may be. Make a note of the LUN number for each volume.

[Screenshot: FlashArray GUI showing the new volumes connected to the host group, with their LUN numbers]
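The same verification can be done from the Purity CLI. The command below should list the host group connection and LUN number for each volume (exact flags may vary slightly between Purity versions):

ssh pureuser@10.22.222.28 purevol list --connect oracle-rt-prdm-dev-01-orahome oracle-rt-prdm-dev-01-data oracle-rt-prdm-dev-01-fra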

 

Add New Volumes to Clone VM

Next, we will add these 3 new volumes to the clone VM as RDM disks. Open the Edit Settings dialog for the clone VM. Notice that the VM has only one hard disk, which contains the operating system. Select RDM Disk from the New device drop-down at the bottom and click the Add button. Note that we have also unchecked the Connected checkbox for the Network Adapter; this prevents the VM from connecting to the network on startup and potentially causing an IP address clash with the source VM.

[Screenshot: Edit Settings dialog for the clone VM - adding an RDM Disk, Network Adapter disconnected]

 

Another dialog will pop up listing the LUNs available to be added as RDM disks. Identify the three volumes we created using the LUN column and add them one by one. It's good practice to also match the volume serial number against the Name shown in the first column. If the three clone volumes do not show up in the list, rescan storage on the ESXi hosts (see the note below the screenshot) and try again.

[Screenshot: Select Target LUN dialog listing the LUNs available to be added as RDM disks]
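A rescan can be issued from the vSphere Client (right-click the host or cluster -> Storage -> Rescan Storage) or directly on an ESXi host over SSH; the command below is the standard esxcli form and rescans all adapters on that host:

# esxcli storage core adapter rescan --all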

 

Refresh the Clone Volumes from Source

Now that the new volumes are added to the clone VM, the next step is to refresh them with data from the source.

#  ssh pureuser@10.21.228.28 purevol copy --overwrite oracle-rt-prdm-prod-01-orahome oracle-rt-prdm-dev-01-orahome
Name                           Size  Source                          Created                  Serial                  
oracle-rt-prdm-dev-01-orahome  2T    oracle-rt-prdm-prod-01-orahome  2020-04-16 00:57:37 PDT  730D187406C14775000BED12

#  ssh pureuser@10.21.228.28 purevol copy --overwrite oracle-rt-prdm-prod-01-data oracle-rt-prdm-dev-01-data
Name                        Size  Source                       Created                  Serial                  
oracle-rt-prdm-dev-01-data  2T    oracle-rt-prdm-prod-01-data  2020-04-16 00:58:17 PDT  730D187406C14775000BED13

#  ssh pureuser@10.21.228.28 purevol copy --overwrite oracle-rt-prdm-prod-01-fra oracle-rt-prdm-dev-01-fra
Name                       Size  Source                      Created                  Serial                  
oracle-rt-prdm-dev-01-fra  2T    oracle-rt-prdm-prod-01-fra  2020-04-16 00:58:34 PDT  730D187406C14775000BED14

With the data refreshed, uncomment the /u01 entry in /etc/fstab that we commented out earlier; we should now be able to mount the /u01 filesystem. Next, identify the new RDM disks and their SCSI IDs using lsscsi:

 

# lsscsi --scsi_id
[1:0:0:0]    disk    VMware   Virtual disk     2.0   /dev/sda   36000c29513f1b35efa4b3b55f7c9ee74
[1:0:1:0]    disk    PURE     FlashArray       8888  /dev/sdb   3624a9370730d187406c14775000bed12      <-- /u01 (Oracle Home)
[1:0:2:0]    disk    PURE     FlashArray       8888  /dev/sdc   3624a9370730d187406c14775000bed13      <-- DATA
[1:0:3:0]    disk    PURE     FlashArray       8888  /dev/sdd   3624a9370730d187406c14775000bed14      <-- FRA

 

Update the udev rules: edit /etc/udev/rules.d/99-oracle-asmdevices.rules and update the RESULT values with the SCSI IDs obtained above.

KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/%k", RESULT=="3624a9370730d187406c14775000bed13", 
SYMLINK+="asm_data", ACTION=="add|change", OWNER="grid", GROUP="asmadmin", MODE="0660

KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/%k", RESULT=="3624a9370730d187406c14775000bed14", 
SYMLINK+="asm_fra", ACTION=="add|change", OWNER="grid", GROUP="asmadmin", MODE="0660"

 

Once the rules are updated, reload them and test using the commands below.

# udevadm control --reload-rules

# udevadm test /block/sdc
# udevadm test /block/sdd
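Note that udevadm test only simulates rule processing. To re-apply the updated rules to devices that are already present, without a reboot, the standard trigger command below should work; it re-fires change events for all devices, so the new rules get applied to /dev/sdc and /dev/sdd:

# udevadm trigger --type=devices --action=change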

 

The symbolic links should now have been created, and device ownership should show grid:asmadmin, as shown below.

# ls -l /dev/asm* /dev/sdc /dev/sdd
lrwxrwxrwx. 1 root root         3 Apr 16 01:35 /dev/asm_data -> sdc
lrwxrwxrwx. 1 root root         3 Apr 16 01:35 /dev/asm_fra -> sdd
brw-rw----. 1 grid asmadmin 8, 32 Apr 16 01:35 /dev/sdc
brw-rw----. 1 grid asmadmin 8, 48 Apr 16 01:35 /dev/sdd

 

Reboot the clone VM. The VM should boot without any errors this time.

If the ASM and database instances are configured to start on reboot, they should start automatically.
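A quick way to verify is with srvctl; the sketch below assumes a Grid Infrastructure standalone (Oracle Restart) setup, and rt01 is a placeholder for the actual database unique name in your environment:

$ srvctl status asm
$ srvctl status database -d rt01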

 

You might have noticed that we did not change the hostname of the cloned VM. This was done on purpose: changing the hostname breaks the Oracle Grid Infrastructure configuration, and neither ASM nor the database will come up.

To reconfigure Grid Infrastructure so that ASM comes back up after a hostname change, a few additional steps need to be performed. These steps are discussed in detail in Cloning Oracle Grid Infrastructure for a Standalone Server.