Migrate a Non-CDB Oracle Database to Multitenant Architecture using FlashArray Snapshots
In Oracle Database 12c, Oracle introduced a new technology called the Multitenant architecture, which allows an Oracle database to function as a container database (CDB) into which one or more pluggable databases (PDBs) can be plugged. Since its release, Oracle has been continuously enhancing it and adding more features with every release.
Multitenant databases offer a number of benefits, such as database consolidation, better manageability, database mobility, and easier patching, to name a few. However, Multitenant has been a separately licensed option, which is probably why adoption of the feature was not more widespread.
That is changing as a result of two announcements from Oracle.
First, the non-CDB architecture has been deprecated since 12.1.0.2, and the multitenant container database is the only supported architecture in Oracle Database 20c.
Second, the Oracle 19c license allows up to 3 user-created PDBs in a given container database at any time without additional cost.
These changes are making organizations take a closer look at multitenant databases and start planning the migration of their production databases from non-CDB to pluggable databases inside a CDB.
There are multiple ways to migrate a non-CDB database to the multitenant architecture. Most of them involve a physical copy of the data, either at the storage level or over the network. That's not a problem if the database is small, but what if its size runs into tens or hundreds of terabytes? Physically copying that much data takes time, and the resulting extended downtime is certainly not desirable; for a mission-critical production Oracle database, it may not even be feasible.
In this article, we'll see how we can leverage the power of FlashArray™ snapshots to plug a non-CDB database (Source) into a container database (Target) without actually moving any data, thereby drastically reducing the time needed to complete the migration.
Approach
Since the objective is to not physically move data, we would need to keep the Source database structures intact and find a way of literally plugging the Source database into the Target. Therefore, we won't be moving or merging the Source database files or ASM diskgroups into the Target mounts. Instead, we'll be creating new mount points. In case the database is on a file system, that will mean a new file system mount point, whereas for ASM, it'll mean new ASM disk groups.
The ASM case is a little more complicated and needs a few extra steps, so that's the one we will use for demonstration in this article.
At a high level, we will be performing the following actions:
- Create database file rename scripts
- Rename disk groups
- Start Source database in mount mode and execute file rename scripts
- Rename ASMLib disk labels
- Rename ASM disks
- Open database in read-only mode, and create XML file using DBMS_PDB.DESCRIBE
- Take a protection group snapshot on the Source
- Create new volumes from snapshot, and connect them to the Target host
- Check the compatibility of Source with Target and if OK, create the pluggable database using Source XML spec
- Run the dictionary conversion script - noncdb_to_pdb.sql and open the pluggable database.
Environment Setup
We have a non-CDB database dbtest39 running on host sn1-r720-e03-17 with two disk groups, DATA and FRA.
We also have a container database dbtest38 running on host sn1-r720-e03-15 that already contains a few pluggable databases.
Both dbtest38 and dbtest39 are using ASMLib for disk persistence.
 | Source | Target
---|---|---
Database Type | non-CDB | CDB
DB Name | dbtest39 | dbtest38
Host Name | sn1-r720-e03-17 | sn1-r720-e03-15
Disk Groups | DATA, FRA | DATA_39, FRA_39
ASM Disk Persistence Type | ASMLIB | ASMLIB
Prepare Source for migration
One of the primary benefits of a container database is that it can contain multiple pluggable databases. As we are going to mount the Source volumes on the Target host, we need to make sure that there are no naming conflicts with objects already present on the Target. To avoid conflicts, we'll rename volume labels, disks, disk groups/mount points, etc. as a preparatory step on the Source itself. In our demonstration, we will rename the disk groups DATA and FRA to DATA_39 and FRA_39 respectively.
Create file rename scripts
Create scripts on dbtest39 to rename data, temp and log files for use later on in the process.
SQL> select 'ALTER DATABASE RENAME FILE '''||member||''' to '''||replace(replace(member,'+DATA/','+DATA_39/'),'+FRA/','+FRA_39/')||''';' from v$logfile;

ALTER DATABASE RENAME FILE '+DATA/DBTEST39/ONLINELOG/group_3.264.1038414809' to '+DATA_39/DBTEST39/ONLINELOG/group_3.264.1038414809';
ALTER DATABASE RENAME FILE '+FRA/DBTEST39/ONLINELOG/group_3.283.1038414811' to '+FRA_39/DBTEST39/ONLINELOG/group_3.283.1038414811';
ALTER DATABASE RENAME FILE '+DATA/DBTEST39/ONLINELOG/group_2.263.1038414809' to '+DATA_39/DBTEST39/ONLINELOG/group_2.263.1038414809';
ALTER DATABASE RENAME FILE '+FRA/DBTEST39/ONLINELOG/group_2.261.1038414811' to '+FRA_39/DBTEST39/ONLINELOG/group_2.261.1038414811';
ALTER DATABASE RENAME FILE '+DATA/DBTEST39/ONLINELOG/group_1.262.1038414809' to '+DATA_39/DBTEST39/ONLINELOG/group_1.262.1038414809';
ALTER DATABASE RENAME FILE '+FRA/DBTEST39/ONLINELOG/group_1.260.1038414811' to '+FRA_39/DBTEST39/ONLINELOG/group_1.260.1038414811';
SQL> select 'ALTER DATABASE RENAME FILE ''' || file_name ||''' to '''|| replace(replace(file_name,'+DATA/','+DATA_39/'),'+FRA/','+FRA_39/') || ''';' from dba_data_files;

ALTER DATABASE RENAME FILE '+DATA/DBTEST39/DATAFILE/users.260.1038414745' to '+DATA_39/DBTEST39/DATAFILE/users.260.1038414745';
ALTER DATABASE RENAME FILE '+DATA/DBTEST39/DATAFILE/undotbs1.259.1038414745' to '+DATA_39/DBTEST39/DATAFILE/undotbs1.259.1038414745';
ALTER DATABASE RENAME FILE '+DATA/DBTEST39/DATAFILE/system.257.1038414695' to '+DATA_39/DBTEST39/DATAFILE/system.257.1038414695';
ALTER DATABASE RENAME FILE '+DATA/DBTEST39/DATAFILE/sysaux.258.1038414729' to '+DATA_39/DBTEST39/DATAFILE/sysaux.258.1038414729';
SQL> select 'ALTER DATABASE RENAME FILE ''' || file_name ||''' to '''|| replace(replace(file_name,'+DATA/','+DATA_39/'),'+FRA/','+FRA_39/') || ''';' from dba_temp_files;

ALTER DATABASE RENAME FILE '+DATA/DBTEST39/TEMPFILE/temp.265.1038414815' to '+DATA_39/DBTEST39/TEMPFILE/temp.265.1038414815';
Create a plain text format pfile from spfile.
SQL> show parameter spfile
+DATA/DBTEST39/PARAMETERFILE/spfile.266.1038415035

SQL> create pfile from spfile;
Rename Disk Groups
Dismount the diskgroups. Use the force option if needed.
SQL> alter diskgroup FRA dismount;
Diskgroup altered.

SQL> alter diskgroup DATA dismount;
alter diskgroup DATA dismount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "DATA" precludes its dismount

SQL> alter diskgroup DATA dismount force;
Diskgroup altered.
Run the renamedg command with check=true. This will only run a check to identify any potential issues and will not make any changes.
[grid@sn1-r720-e03-17 ~]$ renamedg dgname=data newdgname=data_39 verbose=true check=true asm_diskstring='/dev/oracleasm/disks/*'
Parameters in effect:
Old DG name : DATA
New DG name : DATA_39
Phases :
Phase 1
Phase 2
Discovery str : /dev/oracleasm/disks/*
Check : TRUE
Clean : TRUE
Raw only : TRUE
renamedg operation: dgname=data newdgname=data_39 verbose=true check=true asm_diskstring=/dev/oracleasm/disks/*
Executing phase 1
Discovering the group
Performing discovery with string:/dev/oracleasm/disks/*
Identified disk UFS:/dev/oracleasm/disks/DATA01 with disk number:0 and timestamp (33100352 346500096)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/oracleasm/disks/*
Identified disk UFS:/dev/oracleasm/disks/DATA01 with disk number:0 and timestamp (33100352 346500096)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for /dev/oracleasm/disks/DATA01
Leaving the header unchanged
Completed phase 2
Now that the check has run cleanly, we run the command again without the check parameter.
[grid@sn1-r720-e03-17 ~]$ renamedg dgname=data newdgname=data_39 verbose=true asm_diskstring='/dev/oracleasm/disks/*'
Parameters in effect:
Old DG name : DATA
New DG name : DATA_39
Phases :
Phase 1
Phase 2
Discovery str : /dev/oracleasm/disks/*
Clean : TRUE
Raw only : TRUE
renamedg operation: dgname=data newdgname=data_39 verbose=true asm_diskstring=/dev/oracleasm/disks/*
Executing phase 1
Discovering the group
Performing discovery with string:/dev/oracleasm/disks/*
Identified disk UFS:/dev/oracleasm/disks/DATA01 with disk number:0 and timestamp (33100352 346500096)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/oracleasm/disks/*
Identified disk UFS:/dev/oracleasm/disks/DATA01 with disk number:0 and timestamp (33100352 346500096)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for /dev/oracleasm/disks/DATA01
Modifying the header
Completed phase 2
Repeat the renamedg command for all other disk groups.
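For example, the FRA disk group can be renamed the same way: run the check pass first, then the actual rename. This is a sketch following the DATA example above; the output is similar.
[grid@sn1-r720-e03-17 ~]$ renamedg dgname=fra newdgname=fra_39 verbose=true check=true asm_diskstring='/dev/oracleasm/disks/*'
[grid@sn1-r720-e03-17 ~]$ renamedg dgname=fra newdgname=fra_39 verbose=true asm_diskstring='/dev/oracleasm/disks/*'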
Start the database with renamed diskgroups
The disk groups have been renamed. Let's mount the disk groups so that we can start the database.
SQL> alter diskgroup DATA_39 mount;
Diskgroup altered.

SQL> alter diskgroup FRA_39 mount;
Diskgroup altered.
ASMCMD should show the disk groups to be in MOUNTED state.
ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  1048576   2097152  2094615                0         2094615              0             N  DATA_39/
MOUNTED  EXTERN  N         512             512   4096  1048576   2097152  2096433                0         2096433              0             N  FRA_39/
Before we can start the database, we need to modify the diskgroup dependency.
[grid@sn1-r720-e03-17 ~]$ srvctl modify database -d dbtest39 -diskgroup "DATA_39,FRA_39"
We need to start the database in mount mode so that we can rename the database files to reference the new disk groups DATA_39 and FRA_39 instead of DATA and FRA.
Modify the pfile created earlier and replace references to the old disk groups with the new ones. Specifically, the following parameters will need changes.
*.control_files='+DATA_39/DBTEST39/CONTROLFILE/current.261.1038414807','+FRA_39/DBTEST39/CONTROLFILE/current.273.1038414807'
*.db_create_file_dest='+DATA_39'
*.db_recovery_file_dest='+FRA_39'
Start up the database in mount mode using the modified pfile.
[oracle@sn1-r720-e03-17 dbs]$ sqlplus / as sysdba

SQL> startup mount pfile='initdbtest39.ora';
ORACLE instance started.
Database mounted.
Run the previously created rename scripts to change the data, temp, and log file paths to reference the new disk groups.
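One simple way is to save the generated statements into a script (the file name rename_files.sql below is just an example) and run it in the mounted instance:
SQL> -- each ALTER DATABASE RENAME FILE statement should return "Database altered."
SQL> @/home/oracle/rename_files.sql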
Once the files have been renamed, open the database.
SQL> alter database open;

Database altered.
This is a good logical point to take a snapshot of the protection group on the Source that includes volumes for the Source database. In case something goes wrong in the subsequent steps, you can restore volumes from this snapshot and resume from here.
$ SUFFIX=after-renamedg
$ ssh -l ${FA_USER} ${FA_IP} purepgroup snap --suffix $SUFFIX oracle-dbtest39-pg
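If a later step does fail, the Source volumes can be overwritten from this snapshot using the Purity CLI. A minimal sketch with the volume names used in this environment (stop the database and dismount the disk groups before overwriting):
$ ssh -l ${FA_USER} ${FA_IP} purevol copy --overwrite oracle-dbtest39-pg.after-renamedg.oracle-rt-dbtest39-data oracle-rt-dbtest39-data
$ ssh -l ${FA_USER} ${FA_IP} purevol copy --overwrite oracle-dbtest39-pg.after-renamedg.oracle-rt-dbtest39-fra oracle-rt-dbtest39-fra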
Rename ASMLib labels
As we will be mounting the Source ASM disks in the Target ASM instance, we need to make sure that the ASMLib disk names are unique.
Shut down the database and ASM, then run oracleasm renamedisk with the force option to rename the disks.
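A possible shutdown sequence, assuming both the database and ASM are managed by Oracle Restart (adjust to how your environment is managed):
[oracle@sn1-r720-e03-17 ~]$ srvctl stop database -db dbtest39
[grid@sn1-r720-e03-17 ~]$ srvctl stop asm -f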
[root@sn1-r720-e03-17 ~]# oracleasm renamedisk -f /dev/mapper/oracle-rt-dbtest39-data DATA_39_01
Writing disk header: done
Instantiating disk "DATA_39_01": done

[root@sn1-r720-e03-17 ~]# oracleasm renamedisk -f /dev/mapper/oracle-rt-dbtest39-fra FRA_39_01
Writing disk header: done
Instantiating disk "FRA_39_01": done
If we list the disks, it shows both old and new ones.
[root@sn1-r720-e03-17 ~]# oracleasm listdisks
DATA01
DATA_39_01
FRA01
FRA_39_01
Run oracleasm scandisks to clear the old ones.
[root@sn1-r720-e03-17 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Cleaning disk "DATA01"
Cleaning disk "FRA01"
Rename ASM Disk Names
We will also rename the ASM disk names. Renaming a disk requires its disk group to be mounted in restricted mode, so each disk group is dismounted and remounted with the RESTRICTED option before the rename.
SQL> alter diskgroup data_39 dismount;
Diskgroup altered.

SQL> alter diskgroup data_39 mount restricted;
Diskgroup altered.

SQL> alter diskgroup fra_39 dismount;
Diskgroup altered.

SQL> alter diskgroup fra_39 mount restricted;
Diskgroup altered.

SQL> alter diskgroup data_39 rename disk 'DATA_0000' to 'DATA_0000_39';
Diskgroup altered.

SQL> alter diskgroup fra_39 rename disk 'FRA_0000' to 'FRA_0000_39';
Diskgroup altered.
Plug it in
The following steps should be performed when we are ready to do the switch.
Start the Source database in read-only mode.
[oracle@sn1-r720-e03-17 dbs]$ srvctl start database -db dbtest39 -startoption "read only"
Execute the following script on the Source to capture the Source metadata in an XML file.
[oracle@sn1-r720-e03-17 dbs]$ sqlplus / as sysdba

SQL> BEGIN
  2    DBMS_PDB.DESCRIBE(pdb_descr_file => '/home/oracle/dbtest39_18.xml');
  3  END;
  4  /

PL/SQL procedure successfully completed.
At this point, we will take a snapshot of the protection group covering the Source volumes using the following CLI command.
$ SUFFIX=after-read-only
$ ssh -l ${FA_USER} ${FA_IP} purepgroup snap --suffix $SUFFIX oracle-dbtest39-pg
Copy the snapshot to new volumes
$ ssh -l ${FA_USER} ${FA_IP} purevol copy oracle-dbtest39-pg.after-read-only.oracle-rt-dbtest39-data oracle-rt-dbtest39-plug-data
$ ssh -l ${FA_USER} ${FA_IP} purevol copy oracle-dbtest39-pg.after-read-only.oracle-rt-dbtest39-fra oracle-rt-dbtest39-plug-fra
Attach these volumes to the Target host.
$ ssh -l ${FA_USER} ${FA_IP} purevol connect --host sn1-r720-e03-15 oracle-rt-dbtest39-plug-data
$ ssh -l ${FA_USER} ${FA_IP} purevol connect --host sn1-r720-e03-15 oracle-rt-dbtest39-plug-fra
Rescan the SCSI bus on the Target host.
[root@sn1-r720-e03-15 ~]# rescan-scsi-bus.sh -a
Update /etc/multipath.conf with multipath aliases for the new volumes
multipath {
    wwid  3624a93703401ece8c2054b8200013174
    alias oracle-rt-dbtest38-plug-39-data
}
multipath {
    wwid  3624a93703401ece8c2054b8200013175
    alias oracle-rt-dbtest38-plug-39-fra
}
Restart multipath service
[root@sn1-r720-e03-15 ~]# service multipathd restart
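As a quick sanity check, the multipath topology can be listed to confirm the new aliases are in effect (the grep filter is just illustrative):
[root@sn1-r720-e03-15 ~]# multipath -ll | grep oracle-rt-dbtest38-plug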
Rescan ASMLib. The new volumes should get discovered and new disks should appear under /dev/oracleasm/disks/.
[root@sn1-r720-e03-15 ~]# oracleasm scandisks
[root@sn1-r720-e03-15 ~]# oracleasm listdisks
DATA01
DATA_39_01
FRA01
FRA_39_01
Log in to ASMCMD on the Target host. Mount the new disk groups that we migrated from Source.
ASMCMD> mount DATA_39
ASMCMD> mount FRA_39
ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  1048576   2097152  2097053                0         2097053              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  1048576   2097152  2093942                0         2093942              0             N  DATA_39/
MOUNTED  EXTERN  N         512             512   4096  1048576   2097152  2092112                0         2092112              0             N  FRA/
MOUNTED  EXTERN  N         512             512   4096  1048576   2097152  2096433                0         2096433              0             N  FRA_39/
Transfer the XML file dbtest39_18.xml to the Target host (a minimal copy example is shown below), then execute the compatibility check on the Target. It should output YES as shown below.
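One simple way to copy the file, assuming direct SSH connectivity between the hosts and the same /home/oracle location on the Target:
[oracle@sn1-r720-e03-17 ~]$ scp /home/oracle/dbtest39_18.xml oracle@sn1-r720-e03-15:/home/oracle/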
[oracle@sn1-r720-e03-15 dbs]$ sqlplus / as sysdba
SQL> SET SERVEROUTPUT ON
SQL> DECLARE
2 compatible CONSTANT VARCHAR2(3) := CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY( pdb_descr_file => '/home/oracle/dbtest39_18.xml',
pdb_name => 'dbtest39')
3 WHEN TRUE THEN 'YES'
4 ELSE 'NO'
5 END;
6 BEGIN
7 DBMS_OUTPUT.PUT_LINE(compatible);
8 END;
9 /
YES
PL/SQL procedure successfully completed.
If the previous command outputs YES, the Source is compatible with the Target, and we can go ahead with the next step: creating the pluggable database from the Source metadata spec. Notice that we are using the NOCOPY clause because the new disk groups are already mounted in the Target ASM instance and no files need to be copied.
SQL> CREATE PLUGGABLE DATABASE dbtest39 USING '/home/oracle/dbtest39_18.xml' NOCOPY;
Pluggable database created.
We can run the show pdbs command to see the pluggable databases in this container. The new PDB, DBTEST39, that we just migrated appears in the list in MOUNTED state.
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 DBTST39B READ WRITE NO
5 DBTST39C READ WRITE NO
6 DBTEST39 MOUNTED
Next, we need to convert the dictionary of the newly added PDB from non-CDB to PDB before we can open it. Make PDB DBTEST39 the current container.
SQL> alter session set container=DBTEST39;
Now we can run the Oracle provided script noncdb_to_pdb.sql located in $ORACLE_HOME/rdbms/admin.
SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql;
On successful completion of the script, we can go ahead and open the migrated pluggable database - DBTEST39.
SQL> alter pluggable database DBTEST39 open;

Pluggable database altered.
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
6 DBTEST39 READ WRITE NO
If there are errors while opening the pluggable database, the details can be found by querying the PDB_PLUG_IN_VIOLATIONS view.
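For example, a query along these lines (run from the root container) lists any outstanding violations for the new PDB:
SQL> SELECT name, cause, type, message, status
  2  FROM pdb_plug_in_violations
  3  WHERE name = 'DBTEST39' AND status <> 'RESOLVED';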