Virtualize an Oracle Database on VMware using Virtual Volumes
VMware is a widely used virtualization platform for running Oracle databases. In the past, the database storage options available to Oracle DBAs for creating virtual disks were limited to either a VMFS (Virtual Machine File System) datastore or an RDM (Raw Device Mapping) disk.
VMware Virtual Volumes (vVol) is a technology introduced in vSphere 6.0 to radically simplify storage management for vSphere admins as well as storage admins. vVol datastores address the over-abstraction of virtual disks in VMFS datastores on one end, and the rigidity of RDM disks on the other.
In this article, we'll go through the use case of migrating an Oracle database running on physical hardware to a VMware virtual machine using vVols. We'll see how simple it is to migrate from physical to virtual and vice versa using the vVol technology. The instructions provided here can also be used to move a virtualized Oracle database on VMware RDMs to VMware Virtual Volumes.
Many organizations run their production workloads on physical servers and their tier-two environments on virtual machines. This process can also be used to provision and refresh clones of a production Oracle database running on physical servers.
Before we dive deep into vVols, let's first take a quick look at VMFS and RDM, and how vVols address their shortcomings.
VMFS
VMFS is a cluster file system that provides storage virtualization optimized for virtual machines. Each virtual machine consists of a set of files (VMDKs), and VMFS is the default storage system for these files on physical SCSI disks and partitions. A VMFS datastore typically consists of a single storage volume, but it contains virtual disks from multiple virtual machines. A VMFS datastore supports VMware distributed infrastructure services such as vSphere vMotion, DRS, and vSphere HA across a cluster of ESXi hosts.

This gives us a lot of flexibility, but the increased abstraction leads to a loss of control and of monitoring capabilities. At the storage level, the granularity of array operations like snapshots and replication is the volume (i.e. the VMFS datastore). That makes it difficult to snapshot a VM, or an individual disk inside a VM, without snapshotting the entire datastore. Moreover, VM snapshots are known to suffer from long freezes of the VM when old snapshots are deleted (please refer to VMware KB 1002836 for more details). Storage-level snapshots do not have this problem and are much more efficient.
For detailed instructions on cloning an Oracle database on VMFS, please refer to KB article Cloning an Oracle Database on VMware VMFS.
RDM
RDM or Raw Device Mapping, as its name suggests, allows a virtual machine to directly access a volume on the FlashArray™. It can be used only with Fibre Channel or iSCSI. RDM can be thought of as providing a symbolic link from a VMFS volume to a physical volume on the FlashArray. This is great because it enables a one-to-one relationship between a virtual disk in the VM and the volume on the FlashArray, and opens up the possibility of using FlashArray features like snapshots and replication at the virtual disk granularity. However, RDMs are not fully integrated with the VMware stack and suffer from many limitations.
vVols deliver array-native capabilities to VMs. vVol technology can be thought of as a VMware-certified and integrated framework for provisioning RDMs. We get all the benefits of RDMs, such as data mobility and array-level granularity, but integrated into the VMware stack, with no additional plugins or workflows to manage.
Please note that vVols are currently not supported with NVMe-oF, ActiveCluster or ActiveDR.
With Virtual Volumes, virtual disks become native objects on the storage array. This means VM operations like clones and snapshots are executed natively by the storage array. Any VM placed onto a vVols datastore gets all of these benefits out of the box.
There are a few different types of vVols that get created on the FlashArray:
1. Config vVol – This is a 4GB volume that gets created per VM and stores the description of the hardware and settings for the virtual machine. It contains the VMX file, logs, and a few other descriptor files.
2. Data vVol – This gets created for each virtual disk added to the VM and is the volume that stores the actual data. Its size is the same as the requested size of the virtual disk. Different configurations and policies can be applied to each Data vVol.
3. Swap vVol – This is created automatically when a VM powers up and is deleted when the VM is powered off.
4. Memory vVol – This gets created when a snapshot is taken with the “Snapshot Memory” option.
VMware features like pausing, resuming, and snapshotting a VM are all available to vVol VMs.
At a high level, we will be performing the following steps:
- Connect VMware cluster to the FlashArray
- Configure VASA storage provider on the VMware cluster
- Gather information about the database on the physical server (source)
- Create a new VM to host the migrated database (target)
- Install Oracle Linux and Oracle 19c on the target VM
- Migrate Oracle database volumes from physical server to virtual machine using snapshot copy at the FlashArray level.
1. Connect VMware cluster to the FlashArray
This is a one-time step.
Log in to the FlashArray, go to Storage -> Hosts, and click the plus icon in the upper right corner of the Hosts panel to bring up the Create Host dialog.
Provide the name of one of the ESXi hosts in the cluster.
After you click on Create, this host will appear in the Hosts list.
Repeat this step to create FlashArray Host objects for all ESXi hosts in the cluster.
Go to Storage -> Hosts and click on the plus icon in the upper right corner of the Host Groups panel to bring up the Create Host Group dialog.
Give the Host Group a name.
Add all ESXi hosts to the host group just created.
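If you prefer to script this step, the same host and host group objects can be created from the Purity CLI. The following is a minimal sketch assuming Fibre Channel connectivity; the ESXi host names, host group name, and WWNs shown are placeholders for your environment.

# Create a FlashArray host object for each ESXi host (WWNs below are placeholders)
ssh pureuser@10.21.228.24 "purehost create --wwnlist 10:00:00:00:c9:aa:bb:01,10:00:00:00:c9:aa:bb:02 esx-host-01"

# Group the ESXi hosts into a host group (list all hosts in the cluster)
ssh pureuser@10.21.228.24 "purehgroup create --hostlist esx-host-01,esx-host-02 oracle-esx-cluster"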
2. Configure VASA Storage Provider on VMware cluster
This is a one-time step; once a VASA provider is registered, virtual volumes for any number of databases can be created.
It is much easier to set up vVols with the Pure Storage plugin, so we install that first.
Log in to the FlashArray and go to Settings -> Software.
On the vSphere Plugin panel, click on the Edit icon on the top right.
Enter connection details for the vCenter server.
After you press Save, the FlashArray will connect to the vCenter server and check whether the plugin is already installed.
If the plugin has not been installed, an Install button will appear on the bottom right side of the panel. If the plugin is already installed but is at a lower version, it will display an Update button.
Click on the Install button.
Next, log in to the vCenter server. Hover over the Home icon to bring up the menu. You should see a "Pure Storage" menu item at the bottom. If you were already logged in, you may have to log out and log back in to see it.
Click on the Pure Storage menu item to bring up the FlashArray page, then click the Add FlashArray button to bring up the Add FlashArray dialog.
Once the FlashArray is added, right-click on it and select Register Storage Provider.
To verify that the VASA provider has been successfully registered, go to the Hosts and Clusters view in the Web Client, click the target vCenter server in the inventory pane, and select the Configure tab. The newly registered providers should appear in the Storage Providers table with status Online and Active.
We can see in the screenshot above that although both FlashArray controllers' VASA providers are online, vCenter uses one provider at a time. The provider in use is marked Active; its companion is marked Standby.
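Optionally, registration can also be confirmed from an ESXi host itself. This is just a quick sanity check, not a required step:

# Run on any ESXi host in the cluster; lists the VASA providers the host can see
esxcli storage vvol vasaprovider list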
For more details on setting up Virtual Volumes, please refer to Virtual Volumes Quick Start Guide.
3. Gather information from the source database on the physical server
3a. Identify the FlashArray volume containing the software
[oracle@orademo1 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/19.0.0/dbhome_1

[oracle@orademo1 ~]$ cat /etc/fstab | grep u01
/dev/mapper/oracle-rt-ora01prd-home-01  /u01  xfs  defaults  0 0
3b. Query the data dictionary to identify the FlashArray volumes where the database is residing.
In our example, the source database is called ORA01PRD and all the data, temp, control and log files are located in an ASM disk group called ORA01PRD_DATA. In case the database is on an OS file system, you can identify the FlashArray volumes by looking at mount path or device information in /etc/fstab, like we did in the previous step for the ORACLE_HOME.
SYS@ORA01PRD >select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+ORA01PRD_DATA/ORA01PRD/DATAFILE/system.257.1029181373
+ORA01PRD_DATA/ORA01PRD/DATAFILE/sysaux.258.1029181409
+ORA01PRD_DATA/ORA01PRD/DATAFILE/undotbs1.259.1029181423
+ORA01PRD_DATA/ORA01PRD/DATAFILE/users.260.1029181425
+ORA01PRD_DATA/ORA01PRD/DATAFILE/mgmt_ecm_depot_ts.271.1029193195
+ORA01PRD_DATA/ORA01PRD/DATAFILE/mgmt_tablespace.272.1029193197
+ORA01PRD_DATA/ORA01PRD/DATAFILE/mgmt_ad4j_ts.273.1029193197
SYS@ORA01PRD >select file_name from dba_temp_files;

FILE_NAME
--------------------------------------------------------------------------------
+ORA01PRD_DATA/ORA01PRD/TEMPFILE/temp.269.102918148
SYS@ORA01PRD >select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
+ORA01PRD_DATA/ORA01PRD/ONLINELOG/group_3.265.1029181483
+ORA01PRD_DATA/ORA01PRD/ONLINELOG/group_3.268.1029181483
+ORA01PRD_DATA/ORA01PRD/ONLINELOG/group_2.264.1029181483
+ORA01PRD_DATA/ORA01PRD/ONLINELOG/group_2.267.1029181483
+ORA01PRD_DATA/ORA01PRD/ONLINELOG/group_1.263.1029181483
+ORA01PRD_DATA/ORA01PRD/ONLINELOG/group_1.266.1029181483
SYS@ORA01PRD >select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
+ORA01PRD_DATA/ORA01PRD/CONTROLFILE/current.262.1029181479
+ORA01PRD_DATA/ORA01PRD/CONTROLFILE/current.261.1029181479
3c. The spfile is also stored in ASM.
SYS@ORA01PRD >show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------------------------------------------------
spfile                               string      +ORA01PRD_DATA/ORA01PRD/PARAMETERFILE/spfile.270.1029181705
3d. Identify the destination for the Fast Recovery Area
SYS@ORA01PRD >show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +ORA01PRD_DATA
3e. Identify ASM disk paths for the disk groups identified in the previous step.
SYS@ORA01PRD >SELECT g.group_number, d.path
  2  FROM v$asm_disk d, v$asm_diskgroup g
  3  WHERE d.group_number = g.group_number and g.name='ORA01PRD_DATA';

GROUP_NUMBER PATH
------------ ------------------------------
           1 AFD:ORA01PRD_DATA01
3f. In this case, we have only one disk and one disk group. Using the command below, we can find the device-mapper device for each of the disks.
[grid@orademo1 ~]$ cat /dev/oracleafd/disks/ORA01PRD_DATA01
/dev/mapper/oracle-rt-ora01prd-data01
3g. Then, from /etc/multipath.conf, we identify the FlashArray volume serial numbers for both the ASM disk and the Oracle software volume.
multipath {
    wwid  3624a93706c1d7213605f49200006d926
    alias oracle-rt-ora01prd-data01
}
multipath {
    wwid  3624a93706c1d7213605f49200006d937
    alias oracle-rt-ora01prd-home-01
}
3h. Using the serial numbers, the FlashArray volumes can be identified either through the GUI, or using the CLI as shown below.
[oracle@orademo1 ~]$ ssh pureuser@10.21.228.24 "purevol list --filter \"serial = '*6c1d7213605f49200006d926'\""
Name                       Size  Source  Created                  Serial
oracle-rt-ora01prd-data01  2T    -       2019-10-24 03:04:03 PDT  6C1D7213605F49200006D926

[oracle@orademo1 ~]$ ssh pureuser@10.21.228.24 "purevol list --filter \"serial = '*6c1d7213605f49200006d937'\""
Name                        Size  Source  Created                  Serial
oracle-rt-ora01prd-home-01  500G  -       2019-10-24 03:07:19 PDT  6C1D7213605F49200006D937
At the end of this step, you should have a list of volumes, their sizes, and their serial numbers.
These volumes should be added to a Protection Group, if not done already. In our example, the protection group is called oracle-rt-ora01prd-pg.
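If the protection group does not exist yet, it can be created and populated from the CLI as well. This is a minimal sketch using the volume and protection group names from this example; the exact setattr flags may differ slightly between Purity versions, so treat them as an assumption.

# Create the protection group
ssh pureuser@10.21.228.24 "purepgroup create oracle-rt-ora01prd-pg"

# Add the database and software volumes to it (flag name assumed; verify with "purepgroup setattr --help")
ssh pureuser@10.21.228.24 "purepgroup setattr --addvollist oracle-rt-ora01prd-data01,oracle-rt-ora01prd-home-01 oracle-rt-ora01prd-pg"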
4. Create a new Virtual Machine for hosting the migrated database
We now need a VM to host the Oracle database server we are migrating. We can create it now, or it could have been created beforehand.
Add a hard disk for the operating system, as well as disks with the sizes identified above. To avoid problems, it is recommended to install the same operating system version on the VM as is installed on the physical server we are migrating from.
We have created a VM called lnx-rt-vm-ora01prd and have added the required disks to it from the vVol datastore.
While we are setting up the new VM, make sure disk.enableUUID is set to TRUE. To do that, go to VM Options -> Advanced -> Edit Configuration, and add the disk.enableUUID parameter if it is not present. Note that this change can only be made when the VM is powered off.
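If you manage VMs from the command line, the same advanced setting can be applied with a tool such as govc. This is a hedged sketch; it assumes govc is installed and already configured to talk to your vCenter, and uses the VM name from this example.

# Power off the VM, then set the advanced parameter
govc vm.power -off lnx-rt-vm-ora01prd
govc vm.change -vm lnx-rt-vm-ora01prd -e disk.enableUUID=TRUE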
Assign an IP address to the VM, and power it on.
5. Install OS and Oracle software on the target VM
The three disks should be visible on the VM. We have given the VM the same host name as the physical server.
[root@orademo1 ~]$ lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0             2:0    1     4K  0 disk
sda             8:0    0   100G  0 disk
├─sda1          8:1    0     1G  0 part /boot
└─sda2          8:2    0    99G  0 part
  ├─ol-root   249:0    0    50G  0 lvm  /
  ├─ol-swap   249:1    0   3.9G  0 lvm  [SWAP]
  └─ol-home   249:2    0  45.1G  0 lvm  /home
sdb             8:16   0   500G  0 disk
sdc             8:32   0     2T  0 disk
Create an xfs file system on the 500G volume.
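For example, assuming the 500G disk shows up as /dev/sdb as in the lsblk output above, something like the following creates the file system and reports the UUID used in the fstab entry below:

mkfs.xfs /dev/sdb   # create the file system (device name is an assumption; confirm with lsblk)
blkid /dev/sdb      # note the UUID for the /etc/fstab entry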
Create a mount point for the software home. Create an entry in /etc/fstab and mount /u01.
# mkdir /u01
# echo "UUID=fb70ad02-edb1-4f75-affb-3ce574608856  /u01  xfs  discard,noatime  0 0" >> /etc/fstab
# mount /u01
As we know, when we create a disk in a vVol datastore, a volume automatically gets created on the FlashArray. The volumes actually get created inside a volume group whose name is derived from the name of the VM, as shown below.
Clicking on the volume group will show the volumes it contains.
As mentioned earlier, at a minimum, three kinds of vVols get created: a Config vVol and a Swap vVol in addition to the Data vVols. As we can see, automatically generated names are used for the Data vVols. At this point we can rename them to the same names as the volumes connected to the physical server. Even though volumes for both databases are on the same array, there will be no naming conflict because the VM's volumes are in a volume group.
To rename a volume, simply click the button with three vertical dots (kebab icon) in the last column. After renaming, this is how our volumes look.
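The rename can also be done with the purevol CLI. A sketch, using the volume group name from this example; the auto-generated source name is a placeholder you would read from the volume group listing, and whether the new name needs the volume group prefix may vary by Purity version.

# Rename the auto-generated data vVol to match the source volume name
ssh pureuser@10.21.228.24 "purevol rename vvol-lnx-rt-vm-ora01prd-1de7f237-vg/<auto-generated-name> oracle-rt-ora01prd-data01"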
Follow the normal procedure to install Grid Infrastructure and the Oracle database software from the installation media. Create a dummy database that is identical to the one on the physical server in terms of the number of file system volumes or ASM disks, as the case may be. In the case of ASM, the dummy database should use the same method (udev, ASMLib, or ASMFD) for ASM disk persistence as the source. The database name should be the same, although the hostname of the VM can be different if required.
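The dummy database can be created interactively with DBCA or in silent mode. A hedged sketch of a silent-mode invocation, assuming ASM with the ORA01PRD_DATA disk group; the template and passwords are placeholders to adjust for your environment.

dbca -silent -createDatabase \
    -templateName General_Purpose.dbc \
    -gdbName ORA01PRD -sid ORA01PRD \
    -storageType ASM -diskGroupName +ORA01PRD_DATA \
    -sysPassword <password> -systemPassword <password>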
At this point, we have a database identical to the one on physical server up and running on the VM. The final step would be to refresh the contents of the dummy database with the production database whenever we are ready to make the switch.
We have created the dummy database so that the database installer performs some of the tasks for us, like registering the database with Oracle Clusterware, and as a sanity check that the Grid Infrastructure and database software are functioning correctly before we refresh the database volumes from production.
Alternatively, we can skip the step of creating the dummy database, i.e. install only the database software. In that case, we'll register the database with Oracle Clusterware manually as shown in Step 6d.
6. Migrate Oracle database volumes from the physical server to virtual machine
Whenever we are ready to make the switch, the following sequence of steps needs to be executed.
6a. Shutdown the dummy database on the VM.
6b. Dismount the file system or ASM disk groups, as shown in the sketch below.
6c. Copy the data volumes of the production database running on the physical host to the new database running on the VM.
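With the ASM configuration used in this example, steps 6a and 6b might look like the following. This is a sketch; for a file-system based database, unmount the relevant file systems instead of dismounting the disk group.

# 6a. Shut down the dummy database
oracle@orademo1 $ srvctl stop database -db ora01prd

# 6b. Dismount the ASM disk group (run as the grid user)
grid@orademo1 $ asmcmd umount ORA01PRD_DATA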
We will copy the source volumes to the target with the overwrite option. We will not copy the volumes directly; instead, we will first create a snapshot of the source protection group. Navigate to the protection group oracle-rt-ora01prd-pg (Storage -> Protection Groups) and click on it to go to its details page. On the Protection Group Snapshots panel, click the plus icon.
Enter a suffix and a protection group snapshot will get created immediately.
Use the following command to snapshot the protection group from the CLI.
SUFFIX=demo
ssh -l ${FA_USER} ${FA_IP} purepgroup snap --suffix $SUFFIX oracle-rt-ora01prd-pg
Click on the protection group snapshot that just got created. That will open a dialog that shows snapshots of the volumes in the protection group.
Click the copy icon for each volume snapshot; this opens the dialog window shown below. Select the volume group as the container and provide the name of the target volume. Make sure that the overwrite option is selected.
This step can be scripted as follows.
SUFFIX=demo
VOL_GROUP_NAME=vvol-lnx-rt-vm-ora01prd-1de7f237-vg
PG_NAME=oracle-rt-ora01prd-pg

ssh -l ${FA_USER} ${FA_IP} "purevol list --notitle --csv --snap --filter \"name ='*data*'\" ${PG_NAME}.${SUFFIX}*" |
while read line
do
    SNAP=`echo $line | cut -d "," -f1`
    TARG=`echo ${SNAP} | cut -d "." -f3`
    ssh -l ${FA_USER} ${FA_IP} "purevol copy --overwrite $SNAP ${VOL_GROUP_NAME}/$TARG" < /dev/null
done
Copy all the file system or ASM volumes for the database. In this example, to keep things simple, we have only one ASM disk. We do not copy the volume that contains the grid and database software because we installed it manually from the installation media.
6d. Recall that the spfile of the production database was stored in the ASM disk group, so we need to update the spfile setting of the dummy database on the VM with the spfile location from production.
oracle@orademo1$ srvctl modify database -db ora01prd -spfile +ORA01PRD_DATA/ORA01PRD/PARAMETERFILE/spfile.270.1029181705
If we chose not to create the dummy database, we need to register the database with Oracle Clusterware using the following command.
oracle@orademo1$ srvctl add database -db ora01prd -oraclehome /u01/app/oracle/product/19.0.0/dbhome_1 \
    -spfile +ORA01PRD_DATA/ORA01PRD/PARAMETERFILE/spfile.270.1029181705
6e. Start the migrated database on the VM.
oracle@orademo1 $ srvctl start database -db ora01prd

oracle@orademo1 $ ps -ef|grep smon
grid     25916     1  0 02:22 ?        00:00:00 asm_smon_+ASM
oracle   27598     1  0 02:30 ?        00:00:00 ora_smon_ORA01PRD
The actual switchover from the physical to the virtual server took only a couple of minutes. No data was physically moved.
In this article, we have seen how quick and easy it is to migrate an Oracle database running on a physical server to a VMware virtual machine using vVols. Since this use case was a physical-to-virtual migration, we did not have to change the SID or the database name, and no changes to the database were needed. For reverse migration, i.e. from virtual to physical, the process is essentially the same.
This procedure can be used for other use cases like creating and refreshing secondary environments from production. A few additional steps will need to be performed at the end for things like changing the SID and database name.