Cloning an Oracle Database on Oracle Linux KVM
Overview
Many of our customers are adopting Oracle Linux Kernel-based Virtual Machine (KVM), the successor to the Xen-based Oracle VM hypervisor. In this article, we examine how to quickly clone an Oracle database running on Oracle Linux KVM.
Cloning and refreshing Oracle databases is a common operational activity performed by DBA teams, and there are several methods for creating and refreshing clones. If the method can take advantage of space-efficient, nearly instantaneous Purity Snapshots, the operation completes in a matter of minutes. On Oracle Linux KVM, snapshot-based cloning and refresh requires the database to be on virtual disks that use Direct LUNs instead of Storage Domains. A Direct LUN enables a virtual machine to directly access raw, block-based storage (in our case, FlashArray volumes) without an abstraction layer between the virtual machine and the storage.
Oracle Cloning and Refresh on Direct LUNs
Each virtual disk created on a Direct LUN corresponds to a FlashArray™ volume. The process of cloning an Oracle database that is on Direct LUNs is very similar to cloning a database that is on a physical host. In this guide, we'll go through the steps of refreshing a development clone (lvmdev01) running on VM oracle-vm-dev-01 from the production database lvmprd01 running on VM oracle-vm-prod-01.
Environment Details
Database version : 19c
Storage Type : ASM using ASMLib
| | Source | Target |
|---|---|---|
| VM Name | oracle-vm-prod-01 | oracle-vm-dev-01 |
| DB Name | lvmprd01 | lvmdev01 |
Summary of the process
Create a new clone
The high-level steps to create a new clone can be summarized as follows. This clone will have the same hostname, ASM disk and disk group names, SID, and DB name as the source database.
- Create the target virtual machine
- Snapshot source volumes on the FlashArray
- Copy snapshot volumes to target volumes
- Connect target volumes to the host or host group
- Update multipath configuration
- Add virtual disks to the clone VM
- Mount software volume
- Scan ASM disks
- Restart the cloned VM
- The database clone is created; it should come up automatically.
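The FlashArray-side portion of these steps lends itself to scripting. Below is a dry-run sketch that only prints the purevol commands for the volume names used later in this guide; pipe its output to ssh pureuser@<array> to actually run the commands on the array:

```shell
# Dry-run generator: prints one snap/copy/connect sequence per volume.
# Volume prefixes and host name match the examples in this guide.
SRC=oracle-lvm-prod-002
DST=oracle-lvm-dev-002
TARGET_HOST=sn1-r720-e03-11
SUFFIX=SNAP-$(date +%Y-%m-%d-%H%M%S)
CMDS=""
for v in data-01 fra-01 redo-01 orasoft; do
  CMDS="${CMDS}purevol snap --suffix ${SUFFIX} ${SRC}-${v}
purevol copy ${SRC}-${v}.${SUFFIX} ${DST}-${v}
purevol connect --host ${TARGET_HOST} ${DST}-${v}
"
done
printf '%s' "$CMDS"
```

Reviewing the printed commands before piping them to the array is a cheap safeguard against overwriting the wrong volume.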
Change Disk Group Name, SID, DB Name
Now that the database has been cloned as an exact copy of the source, we can change the ASM disk and disk group names, the SID, and the DB name. For that, we need to perform the following one-time steps:
- Create a new initialization file
- Create a script to create the new control file
Once we have these, we can follow the steps in Refresh an existing clone to refresh the clone with the changed parameters.
Refresh an existing clone
The process to refresh the clone is similar to the steps listed above.
- Shutdown the clone database
- Unmount ASM disk groups
- Delete ASM disks
- Snapshot source volumes
- Copy snapshot volumes to target volumes
- Rename ASM disks on the source
- Rename Disk Groups
- Startup nomount the database
- Create control file
- Perform recovery
Oracle ASM is a volume manager and a file system for Oracle database files. Oracle ASM is the recommended storage platform for running Oracle databases and a large majority of the Oracle databases today run on ASM.
If the database is on a filesystem, the procedure to create a clone and refresh it follows the same concepts. It is much simpler, because we avoid the complexities ASM introduces, such as setting up an ASM instance and configuring device persistence. The ASM operations are replaced with the corresponding filesystem operations where applicable; for example, mounting and unmounting ASM disk groups is replaced with mounting and unmounting the filesystem.
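As a concrete illustration, the refresh sequence for a filesystem-based database might reduce to the following. This sketch only prints the host-side steps; the /u02/oradata mount point and the device alias are assumptions for illustration, not names from this guide's environment:

```shell
# Print the host-side steps for a hypothetical filesystem-based refresh.
DEV=/dev/mapper/oracle-lvm-dev-002-data-01   # assumed multipath alias
MNT=/u02/oradata                             # assumed mount point
cat <<EOF
srvctl stop database -d lvmdev01   # or: shutdown immediate
umount ${MNT}                      # unmount before the volume is overwritten
# ...purevol snap / purevol copy --overwrite on the FlashArray...
mount ${DEV} ${MNT}                # remount the refreshed filesystem
EOF
```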
Create New Database Clone
Create the target virtual machine
The first step is to create a virtual machine. This virtual machine should have the same OS and kernel version as the source, and all pre-installation tasks, such as creating OS users and groups and setting the recommended kernel parameters, need to be done. It is therefore easiest to create the target VM by cloning it from the source.
Go to Compute -> Virtual Machines and find the source VM. Click on the VM to go to the detail page. Click on the Snapshots tab and highlight one of the snapshots you would like to use to clone the VM.
After the VM is created, it will be in the "Down" status. We want to disconnect the network so that there is no IP address clash when we power the VM on. Go to the VM details page, click on the Network tab, and edit the network interface. Select the Down radio button to disable the network.
Next, we start the VM by clicking on the Run button.
Because the network is down, we click on the Console button to open the console. Log in as root, assign a new IP address to the VM, and restart the network service.
Now we go back to the "Edit Network Interface" screen and enable the network interface by selecting the Up radio button. Check that you are able to connect to this VM from other machines on the network.
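On Oracle Linux systems that use NetworkManager, the re-addressing step from the console is typically done with nmcli. The following prints a sketch of those commands; the connection name, address, and gateway are assumptions, so substitute your own values:

```shell
# Print (rather than run) nmcli commands to re-address the cloned VM.
CONN=eth0                   # assumed NetworkManager connection name
NEW_IP=10.21.184.150/24     # assumed address for the clone
GW=10.21.184.1              # assumed gateway
cat <<EOF
nmcli connection modify ${CONN} ipv4.method manual ipv4.addresses ${NEW_IP} ipv4.gateway ${GW}
nmcli connection up ${CONN}
EOF
```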
Snapshot source volumes on the FlashArray
Take a snapshot of the source volumes.
SUFFIX=`date +%Y-%m-%d-%H%M%S`
FLASH_ARRAY=10.21.184.113
ssh pureuser@${FLASH_ARRAY} purevol snap --suffix SNAP-$SUFFIX oracle-lvm-prod-002-data-01 oracle-lvm-prod-002-fra-01 oracle-lvm-prod-002-redo-01
Also snapshot the volume that contains the Oracle Homes. This is a one-time step, needed only when setting up a new clone.
ssh pureuser@${FLASH_ARRAY} purevol snap --suffix SNAP-$SUFFIX oracle-lvm-prod-002-orasoft
Copy snapshot volumes to target volumes
Copy them to the corresponding target volumes.
ssh pureuser@${FLASH_ARRAY} purevol copy oracle-lvm-prod-002-data-01.SNAP-$SUFFIX oracle-lvm-dev-002-data-01
ssh pureuser@${FLASH_ARRAY} purevol copy oracle-lvm-prod-002-fra-01.SNAP-$SUFFIX oracle-lvm-dev-002-fra-01
ssh pureuser@${FLASH_ARRAY} purevol copy oracle-lvm-prod-002-redo-01.SNAP-$SUFFIX oracle-lvm-dev-002-redo-01
Also copy the volume that contains the Oracle Homes. This is a one-time step, needed only when setting up a new clone.
ssh pureuser@${FLASH_ARRAY} purevol copy oracle-lvm-prod-002-orasoft.SNAP-$SUFFIX oracle-lvm-dev-002-orasoft
Connect target volumes to the host or host group
If we have a cluster of KVM hosts, we would first create a host group on the FlashArray that includes all the hosts in the cluster and then connect the volumes to the host group, replacing --host with --hgroup in the commands below.
ssh pureuser@${FLASH_ARRAY} purevol connect --host sn1-r720-e03-11 oracle-lvm-dev-002-orasoft
ssh pureuser@${FLASH_ARRAY} purevol connect --host sn1-r720-e03-11 oracle-lvm-dev-002-data-01
ssh pureuser@${FLASH_ARRAY} purevol connect --host sn1-r720-e03-11 oracle-lvm-dev-002-fra-01
ssh pureuser@${FLASH_ARRAY} purevol connect --host sn1-r720-e03-11 oracle-lvm-dev-002-redo-01
Update multipath configuration
Create multipath aliases for the new volumes on the host.
multipath {
        wwid  3624a937058f0a8a5b304695a00092778
        alias oracle-lvm-dev-002-data-01
}
multipath {
        wwid  3624a937058f0a8a5b304695a00092781
        alias oracle-lvm-dev-002-fra-01
}
multipath {
        wwid  3624a937058f0a8a5b304695a00092782
        alias oracle-lvm-dev-002-redo-01
}
multipath {
        wwid  3624a937058f0a8a5b304695a00092785
        alias oracle-lvm-dev-002-orasoft
}
Restart the multipath service and make sure that the paths have been added.

[root@sn1-r720-e03-11 ~]# multipath -ll|grep -i 58F0A8A5B304695A00092
oracle-lvm-dev-002-fra-01 (3624a937058f0a8a5b304695a00092781) dm-69 PURE ,FlashArray
oracle-lvm-dev-002-data-01 (3624a937058f0a8a5b304695a00092778) dm-68 PURE ,FlashArray
oracle-lvm-dev-002-orasoft (3624a937058f0a8a5b304695a00092785) dm-67 PURE ,FlashArray
oracle-lvm-dev-002-redo-01 (3624a937058f0a8a5b304695a00092782) dm-71 PURE ,FlashArray
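The WWID for each alias can be derived from the volume serial number shown by purevol list on the array: FlashArray device WWIDs are the NAA prefix 3624a9370 followed by the lowercase volume serial. A small sketch (the serial below is the one belonging to the data volume above):

```shell
# Derive the multipath WWID from a FlashArray volume serial number.
SERIAL=58F0A8A5B304695A00092778     # from 'purevol list' on the array
WWID="3624a9370$(printf '%s' "$SERIAL" | tr '[:upper:]' '[:lower:]')"
echo "$WWID"    # 3624a937058f0a8a5b304695a00092778
```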
Add virtual disks to the clone VM
Next, we need to add the four cloned volumes as four disks to the clone VM.
Go to the virtual machine details page and click on Disk, then click on the New button at the top to add a new virtual disk to the VM. Create a new disk for each of the four volumes. The multipath alias is listed under the LUN ID column; if not using multipath, the volume serial number can be used to identify the volume.
Repeat this step for the remaining disks.
At this point, all 4 disks should be visible in the VM.
[root@oracle-lvm-prod-002 /]# blkid
/dev/mapper/ol_oracle--lvm--prod--001-root: UUID="8b672ccb-974f-43c7-8122-6b8cc388616c" TYPE="xfs"
/dev/sda2: UUID="HVNj9Q-FlAZ-bZu7-DFWQ-um5M-MzTU-e4T8pq" TYPE="LVM2_member"
/dev/sda1: UUID="0402467d-3bcb-4302-af58-a6ef9d07db4f" TYPE="xfs"
/dev/mapper/ol_oracle--lvm--prod--001-swap: UUID="92ed2b9f-07f2-438b-819f-62399a139e45" TYPE="swap"
/dev/mapper/ol_oracle--lvm--prod--001-home: UUID="6e6f88a6-9531-4897-821f-4df3d0a39fb9" TYPE="xfs"
/dev/sdb1: LABEL="ASM_LVMPRD01_DATA01" TYPE="oracleasm"
/dev/sdc1: LABEL="ASM_LVMPRD01_FRA01" TYPE="oracleasm"
/dev/sdd1: LABEL="ASM_LVMPRD01_REDO" TYPE="oracleasm"
/dev/sde: UUID="33d1aec8-c6a7-4fd1-ad4e-6867f800259b" TYPE="xfs"
Mount software volume
Because we cloned the VM from the source, the /u01 mount point for the software directory is already present. Make any required changes to the /etc/fstab file. In this case, no changes were needed because the entry uses the UUID instead of the device path.
UUID=33d1aec8-c6a7-4fd1-ad4e-6867f800259b /u01 xfs defaults 0 0
Mount /u01 if it did not get automatically mounted.
Scan ASM disks
Because this new clone keeps the same ASM disk names as the source, we only need to scan for the disks so that ASMLib instantiates them.

[root@oracle-lvm-prod-002 /]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASM_LVMPRD01_DATA01"
Instantiating disk "ASM_LVMPRD01_FRA01"
Instantiating disk "ASM_LVMPRD01_REDO"
Restart the VM
At this point, all configuration, software, and disks have been restored to look like the source. We restart the VM so that ASM and the database start automatically.
[root@oracle-lvm-prod-002 ~]# ps -ef|grep smon
grid      6083     1  0 05:44 ?        00:00:00 asm_smon_+ASM
oracle    6261     1  0 05:44 ?        00:00:00 ora_smon_lvmprd01
root      6677  6626  0 05:45 pts/0    00:00:00 grep --color=auto smon
Change Disk Group Name, SID, DB Name
These are one-time steps. Once we have these two files, they can be reused for refreshing the clone database.
Create a new database initialization file
On the source database, create a pfile from the spfile, then modify the parameter file to match the target database configuration.
*.audit_file_dest='/u01/app/oracle/admin/lvmdev01/adump'
*.audit_trail='db'
*.compatible='19.0.0'
*.control_files='+LVMDEV01_DATA/LVMPRD01/CONTROLFILE/coltrol01.ctl'
*.db_block_size=8192
*.db_create_file_dest='+LVMDEV01_DATA'
*.db_domain='puretec.purestorage.com'
*.db_name='lvmdev01'
*.db_recovery_file_dest='+LVMDEV01_DATA'
*.db_recovery_file_dest_size=8256m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=lvmdev01XDB)'
*.local_listener='LISTENER_LVMDEV01'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=300
*.pga_aggregate_target=1009m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=3024m
*.undo_tablespace='UNDOTBS1'
Create a script to create the new control file
On the source database, run "alter database backup controlfile to trace" to obtain a create controlfile script to use as a starting point. Because we intend to change the database name and rename the disk groups, we edit the script as shown below.
CREATE CONTROLFILE SET DATABASE "LVMDEV01" RESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 (
    '+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_1.263.1079068693',
    '+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_1.266.1079068695'
  ) SIZE 200M BLOCKSIZE 512,
  GROUP 2 (
    '+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_2.264.1079068693',
    '+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_2.267.1079068695'
  ) SIZE 200M BLOCKSIZE 512,
  GROUP 3 (
    '+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_3.265.1079068693',
    '+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_3.268.1079068695'
  ) SIZE 200M BLOCKSIZE 512
DATAFILE
  '+LVMDEV01_DATA/LVMPRD01/DATAFILE/system.257.1079068545',
  '+LVMDEV01_DATA/LVMPRD01/DATAFILE/sysaux.258.1079068591',
  '+LVMDEV01_DATA/LVMPRD01/DATAFILE/undotbs1.259.1079068617',
  '+LVMDEV01_DATA/LVMPRD01/DATAFILE/users.260.1079068617'
CHARACTER SET AL32UTF8;
Refresh Clone
Shut down the database
Because we are refreshing the clone database, we first need to shut it down.
[root@oracle-vm-prod-003 ~]# srvctl stop database -d lvmdev01
Unmount the diskgroups
Unmount ASM disk groups before the underlying volumes can be overwritten with the latest copy.
ASMCMD> umount LVMDEV01_DATA -f
ASMCMD> umount LVMDEV01_FRA -f
ASMCMD> umount LVMDEV01_REDO -f
If using RAC, the disk groups need to be unmounted on all nodes. Instead of connecting to each node, it is easier to use the following command.
[root@oracle-vm-prod-003 ~]# srvctl stop diskgroup -diskgroup LVMDEV01_DATA
Delete ASM disks
If using ASMLib, delete the oracleasm disks.
[root@oracle-vm-prod-003 ~]# oracleasm deletedisk ASM_LVMDEV01_DATA01
Clearing disk header: done
Dropping disk: done
[root@oracle-vm-prod-003 ~]# oracleasm deletedisk ASM_LVMDEV01_FRA01
Clearing disk header: done
Dropping disk: done
[root@oracle-vm-prod-003 ~]# oracleasm deletedisk ASM_LVMDEV01_REDO
Clearing disk header: done
Dropping disk: done
If the disk group is still mounted when you try to delete the oracleasm disk, you will get an error message like below.
Clearing disk header: failed
Unable to clear disk "<ASM Disk Name>"
Snapshot and copy volumes on the FlashArray
Take a snapshot of the source volumes.
SUFFIX=`date +%Y-%m-%d-%H%M%S`
FLASH_ARRAY=10.21.184.113
ssh pureuser@${FLASH_ARRAY} purevol snap --suffix SNAP-$SUFFIX oracle-lvm-prod-002-data-01 oracle-lvm-prod-002-fra-01 oracle-lvm-prod-002-redo-01
Copy the snapshots to the corresponding target volumes. Because this is a refresh and the target volumes already exist, the --overwrite option is needed.

ssh pureuser@${FLASH_ARRAY} purevol copy --overwrite oracle-lvm-prod-002-data-01.SNAP-$SUFFIX oracle-lvm-dev-003-data-01
ssh pureuser@${FLASH_ARRAY} purevol copy --overwrite oracle-lvm-prod-002-fra-01.SNAP-$SUFFIX oracle-lvm-dev-003-fra-01
ssh pureuser@${FLASH_ARRAY} purevol copy --overwrite oracle-lvm-prod-002-redo-01.SNAP-$SUFFIX oracle-lvm-dev-003-redo-01
Capture REDO information from the source
Because we are creating this clone from a crash-consistent snapshot of the source database, there will be changes in the current redo log file that are required for instance recovery. Run the following SQL on the source database right before or after taking the snapshot; knowing which log file was current will help later when performing recovery on the target database.
SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ ---------
         1          1          7  209715200        512          2 NO  CURRENT                2163333 29-JUL-21   1.8447E+19
         2          1          5  209715200        512          2 NO  INACTIVE               2023319 28-JUL-21      2072663 28-JUL-21
         3          1          6  209715200        512          2 NO  INACTIVE               2072663 28-JUL-21      2163333 29-JUL-21
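The recovery step later needs the full path of a CURRENT log member, so it is convenient to capture the member paths at the same time. A sketch that prints the SQL to run (for example via sqlplus) on the source:

```shell
# Print SQL that returns the file paths of the CURRENT redo log group's members.
SQL="select member from v\$logfile
where group# = (select group# from v\$log where status = 'CURRENT');"
printf '%s\n' "$SQL"
# run it with e.g.: sqlplus -s / as sysdba <<< "$SQL"
```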
Rename ASM disks
As the target volumes have now been refreshed from production, the ASM disk names need to be updated to correspond to the target names. If using ASMLib, rename the disks of the cloned ASM volumes as shown below.
[root@oracle-vm-prod-003 ~]# oracleasm renamedisk -f /dev/asm-data-01 ASM_LVMDEV01_DATA01
Writing disk header: done
Instantiating disk "ASM_LVMDEV01_DATA01": done
[root@oracle-vm-prod-003 ~]# oracleasm renamedisk -f /dev/asm-fra-01 ASM_LVMDEV01_FRA01
Writing disk header: done
Instantiating disk "ASM_LVMDEV01_FRA01": done
[root@oracle-vm-prod-003 ~]# oracleasm renamedisk -f /dev/asm-redo-01 ASM_LVMDEV01_REDO01
Writing disk header: done
Instantiating disk "ASM_LVMDEV01_REDO01": done
Rename ASM disk groups
Use renamedg to rename the disk groups.
We first run the command with the check=true option, which performs a dry run without actually making any changes to the disk group.
[grid@oracle-vm-prod-003 ~]$ renamedg dgname=LVMPRD01_DATA newdgname=LVMDEV01_DATA verbose=true check=true asm_diskstring='ORCL:*'
[grid@oracle-vm-prod-003 ~]$ renamedg dgname=LVMPRD01_FRA newdgname=LVMDEV01_FRA verbose=true check=true asm_diskstring='ORCL:*'
[grid@oracle-vm-prod-003 ~]$ renamedg dgname=LVMPRD01_REDO newdgname=LVMDEV01_REDO verbose=true check=true asm_diskstring='ORCL:*'
Here is an example of what the output of this command looks like.
[grid@oracle-vm-prod-003 ~]$ renamedg dgname=LVMPRD01_DATA newdgname=LVMDEV01_DATA verbose=true check=true asm_diskstring='ORCL:*'
Parameters in effect:
Old DG name : LVMPRD01_DATA
New DG name : LVMDEV01_DATA
Phases :
Phase 1
Phase 2
Discovery str : ORCL:*
Check : TRUE
Clean : TRUE
Raw only : TRUE
renamedg operation: dgname=LVMPRD01_DATA newdgname=LVMDEV01_DATA verbose=true check=true asm_diskstring=ORCL:*
Executing phase 1
Discovering the group
Performing discovery with string:ORCL:*
Identified disk ASM:ASM Library - Generic Linux, version 2.0.12 (KABI_V2):ORCL:ASM_LVMDEV01_DATA01 with disk number:0 and timestamp (33120132 -1415871488)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:ORCL:*
Identified disk ASM:ASM Library - Generic Linux, version 2.0.12 (KABI_V2):ORCL:ASM_LVMDEV01_DATA01 with disk number:0 and timestamp (33120132 -1415871488)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for ORCL:ASM_LVMDEV01_DATA01
Leaving the header unchanged
Completed phase 2
If the above commands complete successfully, we can run the renamedg command without the check=true parameter.
[grid@oracle-vm-prod-003 ~]$ renamedg dgname=LVMPRD01_DATA newdgname=LVMDEV01_DATA confirm=true verbose=true asm_diskstring='ORCL:*'
[grid@oracle-vm-prod-003 ~]$ renamedg dgname=LVMPRD01_FRA newdgname=LVMDEV01_FRA verbose=true confirm=true asm_diskstring='ORCL:*'
[grid@oracle-vm-prod-003 ~]$ renamedg dgname=LVMPRD01_REDO newdgname=LVMDEV01_REDO verbose=true confirm=true asm_diskstring='ORCL:*'
If using udev rules for device persistence instead of ASMLib, we need to perform the following steps to change the path of the disks.

alter diskgroup <diskgroup> mount restricted;
alter diskgroup <new diskgroup> rename disk '<current disk name>' to '<new disk name>';
We can verify that the disk group names have actually changed. At this stage, they are in the DISMOUNTED state.
ASMCMD> lsdg --discovery
State       Type  Rebal  Sector  Logical_Sector  Block  AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
DISMOUNTED        N           0               0      0   0         0        0                0               0              0             N  LVMDEV01_DATA/
DISMOUNTED        N           0               0      0   0         0        0                0               0              0             N  LVMDEV01_FRA/
DISMOUNTED        N           0               0      0   0         0        0                0               0              0             N  LVMDEV01_REDO/
We can also check that the path names of the disks have changed in the v$asm_disk view.
SQL> select label,path,mount_status from v$asm_disk;

LABEL                           PATH                                                         MOUNT_S
------------------------------- ------------------------------------------------------------ -------
ASM_LVMDEV01_DATA01             ORCL:ASM_LVMDEV01_DATA01                                     CLOSED
ASM_LVMDEV01_FRA01              ORCL:ASM_LVMDEV01_FRA01                                      CLOSED
ASM_LVMDEV01_REDO01             ORCL:ASM_LVMDEV01_REDO01                                     CLOSED
Mount the disk groups
Next, we mount the disk groups. This can be done either with the SQL below or with the asmcmd mount command.
SQL> alter diskgroup LVMDEV01_DATA mount;
SQL> alter diskgroup LVMDEV01_FRA mount;
SQL> alter diskgroup LVMDEV01_REDO mount;
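For reference, the asmcmd equivalents of the SQL above can be generated as below; the printed commands would be run as the grid user with the ASM environment set:

```shell
# Print the asmcmd equivalents of the SQL mount statements.
for dg in LVMDEV01_DATA LVMDEV01_FRA LVMDEV01_REDO; do
  echo "asmcmd mount ${dg}"
done
```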
Startup nomount instance
Start the instance in nomount mode using the pfile we created earlier.
SQL> startup nomount pfile=initlvmdev01.ora
ORACLE instance started.

Total System Global Area 3170890664 bytes
Fixed Size                  8901544 bytes
Variable Size             637534208 bytes
Database Buffers         2516582400 bytes
Redo Buffers                7872512 bytes
Create the control file
We use the create control file script that we created earlier to create a new control file with new disk group names as well as a new db name.
SQL> CREATE CONTROLFILE SET DATABASE "LVMDEV01" RESETLOGS NOARCHIVELOG
  2      MAXLOGFILES 16
  3      MAXLOGMEMBERS 3
  4      MAXDATAFILES 100
  5      MAXINSTANCES 8
  6      MAXLOGHISTORY 292
  7  LOGFILE
  8    GROUP 1 (
  9      '+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_1.263.1079068693',
 10      '+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_1.266.1079068695'
 11    ) SIZE 200M BLOCKSIZE 512,
 12    GROUP 2 (
 13      '+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_2.264.1079068693',
 14      '+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_2.267.1079068695'
 15    ) SIZE 200M BLOCKSIZE 512,
 16    GROUP 3 (
 17      '+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_3.265.1079068693',
 18      '+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_3.268.1079068695'
 19    ) SIZE 200M BLOCKSIZE 512
 20  DATAFILE
 21    '+LVMDEV01_DATA/LVMPRD01/DATAFILE/system.257.1079068545',
 22    '+LVMDEV01_DATA/LVMPRD01/DATAFILE/sysaux.258.1079068591',
 23    '+LVMDEV01_DATA/LVMPRD01/DATAFILE/undotbs1.259.1079068617',
 24    '+LVMDEV01_DATA/LVMPRD01/DATAFILE/users.260.1079068617'
 25  CHARACTER SET AL32UTF8;

Control file created.
Recover database
Recover the database using the backup controlfile until cancel. When prompted, provide the full path to the current redo log file identified in the earlier step.
SQL> recover database using backup controlfile until cancel;
ORA-00279: change 2163333 generated at 07/29/2021 03:00:45 needed for thread 1
ORA-00289: suggestion : +LVMDEV01_DATA
ORA-00280: change 2163333 for thread 1 is in sequence #7

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+LVMDEV01_DATA/LVMPRD01/ONLINELOG/group_1.263.1079068693
Log applied.
Media recovery complete.
SQL> alter database open resetlogs;

Database altered.
Verify clone
[oracle@oracle-vm-prod-01 ~]$ ps -ef|grep smon
oracle 13104 13055 0 11:36 pts/9 00:00:00 grep --color=auto smon
grid 17158 1 0 Jul29 ? 00:01:32 asm_smon_+ASM
oracle 22481 1 0 Jul29 ? 00:01:33 ora_smon_lvmdev01
[oracle@oracle-vm-dev-01 ~]$ sqlplus / as sysdba
SQL> select name from v$database;
NAME
---------
LVMDEV01
SQL> select name from v$asm_disk;
NAME
------------------------------
ASM_LVMPRD01_DATA01
ASM_LVMPRD01_FRA01
ASM_LVMPRD01_REDO
SQL> select name from v$asm_diskgroup;
NAME
------------------------------
LVMDEV01_DATA
LVMDEV01_FRA
LVMDEV01_REDO
You might have noticed that the hostname of the cloned VM is still the same as the source. It was left unchanged on purpose: the hostname is hard-coded in the Oracle Grid Infrastructure configuration, and changing it will break that configuration, so neither ASM nor the database will come up.
To reconfigure Grid Infrastructure so that ASM comes back up after the hostname change, a few additional steps need to be performed. These steps are discussed in detail in Cloning Oracle Grid Infrastructure for a Standalone Server.