Physical Standby Database creation using Snapshot Replication
Purpose
A large number of Oracle customers use Oracle Data Guard to implement high availability, data protection, and disaster recovery (DR) for their business-critical Oracle databases. Creating an Oracle physical standby database at the DR site can be time-consuming, particularly when the database is large and the network bandwidth to the DR site is limited, because the primary database must be backed up and the backup transferred to the standby site.
In this article, we'll learn how to leverage Purity Snapshots to quickly build a physical standby database at the DR site, instead of performing a time-consuming RMAN restore. As the snapshots are deduplicated and compressed, the time it takes to transfer them to the remote FlashArray™ is significantly reduced. And since snapshot replication on a FlashArray is incremental, the initial replication of the volume snapshots, which transfers the bulk of the data, can be done ahead of time.
Please note that we are using snapshots only for the initial creation of the physical standby database. Subsequent transfer of data to keep the primary and the standby database in sync happens through the standard Data Guard Redo Transport services.
However, if the standby database were ever to fall out of sync, we can quickly redo the snapshot replication and volume overwrite process to build a new copy. In this case, the overwrite option in the Copy snapshot to volumes step needs to be enabled.
Environment
Here is how we have set up the lab environment for this demonstration.
We have a non-CDB 19c Oracle database called TPCH1T that is the primary database. It is using the Oracle ASM Filter Driver (ASMFD) for device persistence.
The database consists of 5 TB of allocated data files with about 3 TB of populated data.
Both database servers are bare metal.
| | Primary | Standby |
| --- | --- | --- |
| Host Name | sn1-r720-e03-07 | sn1-r720-e03-09 |
| FlashArray | sn1-x70r3-e03-30 | sn1-x70r2-e03-27 |
| Purity version | 6.1 | 6.1 |
| Database SID | TPCH1T | TPCH1T |
| Database Unique Name | TPCH1T | TPCH1T_S |
| Database Role | Primary | Physical Standby |
Asynchronous replication has been set up between the Primary and the Standby FlashArray.
Summary of the procedure
Here are the high-level steps that need to be performed.
- Prepare the Primary Database
- Enable Logging and Archive Log mode
- Initialization parameters
- Services setup
- Standby Server creation
- Identify the FlashArray volumes containing the Primary Database files
- Create a control and parameter file for Standby
- Copy the init.ora and control file from the Primary to the Standby
- Set up the environment to support the standby database
- Take a Purity Snapshot of Primary Database Volumes and replicate to the target FlashArray
- Copy replicated FlashArray volumes to target volumes and connect them to target host
- Rescan target host, create required multipath entries, and mount ASM diskgroups.
- Start the Physical Standby database and the Log Apply services
- Verify and monitor the Physical Standby database
Detailed Instructions
Let's go over each of the steps in detail.
Prepare the Primary Database
Enable Logging
Please carry out the following steps on the primary server if needed.
Check if the primary database is in archivelog mode and that force logging is enabled.
SQL> select name, log_mode, force_logging from v$database;

NAME      LOG_MODE     FORCE_LOGGING
--------- ------------ ---------------------------------------
TPCH1T    NOARCHIVELOG NO
Enable Archive Log mode
If the database is in NOARCHIVELOG mode, then we need to enable it.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1.6267E+11 bytes
Fixed Size                  30140696 bytes
Variable Size             1.8790E+10 bytes
Database Buffers          1.4334E+11 bytes
Redo Buffers               506728448 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.
Enable Force Logging
If force logging is not enabled, then enable it using the following commands.
SQL> alter database force logging;

Database altered.

SQL> alter system switch logfile;

System altered.
Initialization Parameters
In the primary database, we need to define initialization parameters that control redo transport services while the database is in the primary role.
There are additional parameters that need to be added that control the receipt of the redo data and apply services when the primary database is transitioned to the standby role.
DB_NAME=TPCH1T
DB_UNIQUE_NAME=TPCH1T
LOG_ARCHIVE_CONFIG='DG_CONFIG=(TPCH1T,TPCH1T_S)'
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TPCH1T'
LOG_ARCHIVE_DEST_2='SERVICE=TPCH1T_S ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TPCH1T_S'
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
The following shows the additional standby role initialization parameters on the primary database. These parameters take effect when the primary database is transitioned to the standby role.
FAL_SERVER=TPCH1T_S
DB_FILE_NAME_CONVERT='/TPCH1T_S/','/TPCH1T/'
LOG_FILE_NAME_CONVERT='/TPCH1T_S/','/TPCH1T/'
STANDBY_FILE_MANAGEMENT=AUTO
These parameters can be set by running the commands below.
SQL> alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(TPCH1T,TPCH1T_S)' scope=BOTH;
SQL> alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TPCH1T' scope=BOTH;
SQL> alter system set LOG_ARCHIVE_DEST_2='SERVICE=TPCH1T_S ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TPCH1T_S' scope=BOTH;
SQL> alter system set REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE scope=SPFILE;
SQL> alter system set LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' scope=SPFILE;
SQL> alter system set FAL_SERVER=TPCH1T_S scope=BOTH;
SQL> alter system set DB_FILE_NAME_CONVERT='/TPCH1T_S/','/TPCH1T/' scope=SPFILE;
SQL> alter system set LOG_FILE_NAME_CONVERT='/TPCH1T_S/','/TPCH1T/' scope=SPFILE;
SQL> alter system set STANDBY_FILE_MANAGEMENT=AUTO scope=BOTH;
Configure Services
Make sure the TNS entries for both the Primary and the Standby database are included in the $ORACLE_HOME/network/admin/tnsnames.ora file on both servers (Primary and Standby).
TPCH1T =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sn1-r720-e03-07.puretec.purestorage.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = TPCH1T)
    )
  )

TPCH1T_S =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sn1-r720-e03-09.puretec.purestorage.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = TPCH1T_S)
    )
  )
Password File Setup
Copy the password file from the Primary to the Standby server, renaming it to match the db_unique_name of the standby.
scp $ORACLE_HOME/dbs/orapwTPCH1T oracle@10.21.184.15:/u01/app/oracle/product/19.0.0/dbhome_1/dbs/orapwTPCH1T_S
Standby Server Setup
Backup the Primary Database
Instead of performing a full-fledged RMAN backup along with archived logs, we will take a storage level snapshot of the volumes that comprise the data files and archived log files. We need to identify all volumes that are part of the TPCH1T database. If we already have this information, we can skip the next step.
Identify Primary Database files
First, we begin with finding out the paths of the ASM disks.
SQL> select c.db_name, g.name, d.path
     from v$asm_diskgroup g, v$asm_disk d, v$asm_client c
     where d.group_number = g.group_number
       and c.group_number = g.group_number
       and c.db_name = 'TPCH1T'
     order by 2, 3;

DB_NAME  NAME                           PATH
-------- ------------------------------ ------------------------------
TPCH1T   TPCH1T_01_DATA                 AFD:TPCH1T_01_DATA_01
TPCH1T   TPCH1T_01_FRA                  AFD:TPCH1T_01_FRA_01
As the database has been configured with ASMFD, we need to refer to the configuration files created in the /dev/oracleafd/disks directory to find the path to the actual volume. In our case, we are using multipathing, so each file contains the path to the mapped multipath device.
[root@sn1-r720-e03-07 ~]# cat /dev/oracleafd/disks/TPCH1T_01_DATA_01 /dev/mapper/oracle-rt-tpch1T-01-data-01
From the multipath alias, the volume on the FlashArray can be identified by first getting the WWID using the following command.
[root@sn1-r720-e03-07 ~]# multipathd show multipaths|grep oracle-rt-tpch1T-01-data-01 oracle-rt-tpch1T-01-data-01 dm-6 3624a937058f0a8a5b304695a0002736e
The first 9 characters of the WWID contain a fixed string, the vendor ID, which we remove before matching the remainder against the volume serial numbers on the FlashArray.
[root@sn1-r720-e03-07 ~]# ssh pureuser@10.21.184.113 purevol list | grep -i `echo 3624a937058f0a8a5b304695a0002736e | cut -c 10-` oracle-rt-tpch1T-01-data-01 8T - 2020-07-16 14:32:51 PDT 58F0A8A5B304695A0002736E
Using the steps above, we have determined that the ASMFD path AFD:TPCH1T_01_DATA_01 corresponds to the volume oracle-rt-tpch1T-01-data-01 on the FlashArray. This is a one-time step and is only required if the mapping between the ASM devices and the volumes on the array is not known.
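The serial extraction above can be scripted for repeated use. A minimal sketch in shell, using the WWID from the multipathd output shown earlier; it assumes the 9-character vendor prefix is fixed, as described:

```shell
# The first 9 characters of the WWID (e.g. "3624a9370" for Pure Storage)
# are a fixed vendor prefix; the remainder is the volume serial number
# as displayed by `purevol list` (which shows it in upper case).
wwid=3624a937058f0a8a5b304695a0002736e
serial=$(echo "$wwid" | cut -c 10- | tr '[:lower:]' '[:upper:]')
echo "$serial"   # 58F0A8A5B304695A0002736E
```

The resulting serial can then be grepped in the `purevol list` output, as shown above.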
Create Standby control file
Run the following SQL on the primary database to create a standby controlfile.
SQL> alter database create standby controlfile as '/tmp/TPCH1T_S.ctl';
Create Standby init.ora
We need to create an init.ora file for the standby database. To make it simpler, we use the primary init.ora file as the starting point, and make the necessary changes to it.
SQL> create pfile='/tmp/initTPCH1T_S.ora' from SPFILE;
Update the file as shown below; the parameters at the top are the ones that were added or changed for the standby.
db_name=TPCH1T
db_unique_name=TPCH1T_S
*.control_files='+TPCH1T_01_DATA/TPCH1T_S/CONTROLFILE/control01.ctl'
*.fal_server='TPCH1T'
*.log_archive_config='DG_CONFIG=(TPCH1T,TPCH1T_S)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TPCH1T_S'
*.log_archive_dest_2='SERVICE=TPCH1T ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TPCH1T'
*.standby_file_management='MANUAL'
# Commented out the following for the initial Standby creation.
# Uncomment them once the Standby database is created
#db_file_name_convert='/TPCH1T/','/TPCH1T_S/'
#log_file_name_convert='/TPCH1T/','/TPCH1T_S/'
*.audit_file_dest='/u01/app/oracle/admin/TPCH1T_S/adump'
*.audit_trail='db'
*.compatible='19.0.0'
*.db_block_size=8192
*.db_create_file_dest='+TPCH1T_01_DATA'
*.db_name='TPCH1T'
*.db_recovery_file_dest='+TPCH1T_01_FRA'
*.db_recovery_file_dest_size=1099511627776
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TPCH1TXDB)'
*.local_listener='LISTENER_TPCH1T'
*.log_archive_format='%t_%s_%r.arc'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=300
*.pga_aggregate_target=51567m
*.processes=3840
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=154701m
*.star_transformation_enabled='TRUE'
*.undo_tablespace='UNDOTBS1'
STANDBY_FILE_MANAGEMENT has been set to MANUAL to avoid any automatic file management when we mount the standby database for the first time.
Note that the db_unique_name or SID on the standby should not exceed 8 characters.
Transfer files to Standby
Transfer the init.ora and control file to the standby site.
[root@sn1-r720-e03-07 ~]# scp /tmp/TPCH1T_S.ctl 10.21.184.15:/tmp
[oracle@sn1-r720-e03-07 tmp]$ scp /tmp/initTPCH1T_S.ora oracle@10.21.184.15:/u01/app/oracle/product/19.0.0/dbhome_1/dbs
Create Audit Dump destination on Standby
On the standby server, create the audit dump directory.
mkdir -p /u01/app/oracle/admin/TPCH1T_S/adump
Snapshot Primary Database files
After the volumes that contain the database and archive log files have been identified, we will create a snapshot of these volumes. All the volumes need to be snapshotted at the same instant, so we place them in a protection group and then take a snapshot of the protection group.
A protection group oracle-rt-tpch1T-01-pg has been created that consists of the volumes associated with this database. We also need to add the remote array as a replication target, so that when snapshots of this protection group are taken, they are automatically replicated to the target array.
Note that because we are going to perform recovery while bringing up the standby database, the snapshot backup needs to be application-consistent. Therefore, we need to enable "hot" backup before taking the snapshot, as shown in the script below. As the snapshot is created almost instantaneously, the overhead of the database running in hot backup mode is insignificant.
sqlplus -s / as sysdba << EOF1
alter database begin backup;
exit
EOF1
Note that while creating the snapshot, we have selected the Replicate Now option so that the snapshot gets transferred to the remote array.
sqlplus -s / as sysdba << EOF2
alter database end backup;
exit
EOF2
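The begin backup, snapshot, and end backup steps can also be combined into one script. A sketch, not a finished tool: the protection group name and array address come from this lab, and the `purepgroup snap --replicate-now` command is assumed to be available in the array's Purity CLI as an alternative to the GUI's Replicate Now option. The script defaults to a dry run that only prints the commands, so the ordering can be checked without a database.

```shell
#!/bin/sh
# Application-consistent snapshot of the primary database volumes.
# Set DRY_RUN=0 on the primary host to actually execute the commands.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then printf '%s\n' "$1"; else eval "$1"; fi; }

# 1. Put the database into hot backup mode.
run "sqlplus -s / as sysdba <<EOF1
alter database begin backup;
exit
EOF1"

# 2. Snapshot the protection group and replicate it immediately to the
#    target array (pgroup name and array IP are from this lab setup).
run "ssh pureuser@10.21.184.113 purepgroup snap --replicate-now oracle-rt-tpch1T-01-pg"

# 3. Take the database out of hot backup mode.
run "sqlplus -s / as sysdba <<EOF1
alter database end backup;
exit
EOF1"
```

Because the snapshot is nearly instantaneous, the window between begin backup and end backup stays very short even for a multi-terabyte database.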
If we log in to the remote array and navigate to Protection -> Protection Groups, we can see the protection group that has been replicated. As can be seen from the prefix of the protection group name, the protection group has been replicated from the source array, sn1-x70r3-e03-30.
Copy snapshot to volumes on DR array
On the target array, we create new volumes by copying them from the replicated snapshot.
Connect volumes to the host
The next step is to connect the new volumes to the standby host.
Rescan scsi bus
Rescan scsi bus on the standby host to make sure that the newly created volumes are visible to the host.
[root@sn1-r720-e03-09 ~]# rescan-scsi-bus.sh -a
Create multipath entries
Add new entries to /etc/multipath.conf
multipath {
    wwid  3624a93703401ece8c2054b8200111b76
    alias oracle-rt-tpch1T-01-stby-data-01
}
multipath {
    wwid  3624a93703401ece8c2054b8200111b7f
    alias oracle-rt-tpch1T-01-stby-fra-01
}
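With many volumes, the stanzas can be generated rather than typed by hand. A small sketch; `gen_mpath` is a hypothetical helper, not part of multipath-tools:

```shell
# Print a multipath { } stanza for each "wwid alias" pair read on stdin.
gen_mpath() {
    while read -r wwid alias; do
        printf 'multipath {\n    wwid  %s\n    alias %s\n}\n' "$wwid" "$alias"
    done
}

# WWIDs and aliases from this lab; append the output to /etc/multipath.conf.
gen_mpath <<'EOF'
3624a93703401ece8c2054b8200111b76 oracle-rt-tpch1T-01-stby-data-01
3624a93703401ece8c2054b8200111b7f oracle-rt-tpch1T-01-stby-fra-01
EOF
```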
Restart Multipathd service
Restart the multipath service for the new mapping to take effect.
[root@sn1-r720-e03-09 grid]# service multipathd restart
Redirecting to /bin/systemctl restart multipathd.service
[root@sn1-r720-e03-09 grid]# multipath -ll | grep 111b7
oracle-rt-tpch1T-01-stby-fra-01 (3624a93703401ece8c2054b8200111b7f) dm-21 PURE ,FlashArray
oracle-rt-tpch1T-01-stby-data-01 (3624a93703401ece8c2054b8200111b76) dm-22 PURE ,FlashArray
Mount ASM disk groups
As ASMFD is being used, we'll relabel the ASM disks.
[root@sn1-r720-e03-09 grid_base]# asmcmd afd_label TPCH1T_01_DATA_01 /dev/mapper/oracle-rt-tpch1T-01-stby-data-01 --migrate
[root@sn1-r720-e03-09 grid_base]# asmcmd afd_label TPCH1T_01_FRA_01 /dev/mapper/oracle-rt-tpch1T-01-stby-fra-01 --migrate
[root@sn1-r720-e03-09 grid_base]# ls -l /dev/oracleafd/disks/TPCH1T_01*
-rw-rw-r--. 1 grid oinstall 45 Jun 23 20:07 /dev/oracleafd/disks/TPCH1T_01_DATA_01
-rw-rw-r--. 1 grid oinstall 44 Jun 23 20:08 /dev/oracleafd/disks/TPCH1T_01_FRA_01
The ASM disk groups should be visible now.
ASMCMD> lsdg --discovery
State       Type  Rebal  Sector  Logical_Sector  Block  AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
DISMOUNTED        N           0               0      0   0         0        0                0               0              0             N  TPCH1T_01_DATA/
DISMOUNTED        N           0               0      0   0         0        0                0               0              0             N  TPCH1T_01_FRA/
We can now mount the disk groups.
ASMCMD> mount TPCH1T_01_DATA
ASMCMD> mount TPCH1T_01_FRA
ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  1048576   8388608  2312044                0         2312044              0             N  TPCH1T_01_DATA/
MOUNTED  EXTERN  N         512             512   4096  1048576   8388608  8304148                0         8304148              0             N  TPCH1T_01_FRA/
Rename directory
Login to the ASM instance as sysasm and rename directory names.
[grid@sn1-r720-e03-09 ~]$ sqlplus / as sysasm
SQL> alter diskgroup TPCH1T_01_DATA rename directory '+TPCH1T_01_DATA/TPCH1T' to '+TPCH1T_01_DATA/TPCH1T_S';

Diskgroup altered.
Create FRA directory
Create a directory for the new archive log files and RMAN backups. As per convention, this directory is named after the db_unique_name, which is TPCH1T_S.
ASMCMD> ls -l TPCH1T_01_FRA
Type  Redund  Striped  Time  Sys  Name
                             Y    TPCH1T/
ASMCMD> mkdir TPCH1T_01_FRA/TPCH1T_S
ASMCMD> ls -l TPCH1T_01_FRA
Type  Redund  Striped  Time  Sys  Name
                             Y    TPCH1T/
                             N    TPCH1T_S/
Copy control file to ASM
Copy the standby control file that we transferred from the primary server to the appropriate directory in ASM.
ASMCMD> cp /tmp/TPCH1T_S.ctl +TPCH1T_01_DATA/TPCH1T_S/CONTROLFILE/control01.ctl
copying /tmp/TPCH1T_S.ctl -> +TPCH1T_01_DATA/TPCH1T_S/CONTROLFILE/control01.ctl
Register standby database
Register the standby database and associate the disk group resources with the database.
[oracle@sn1-r720-e03-09 dbs]$ srvctl add database -d TPCH1T_S -o /u01/app/oracle/product/19.0.0/dbhome_1
[oracle@sn1-r720-e03-09 dbs]$ srvctl modify database -d TPCH1T_S -a "TPCH1T_01_DATA,TPCH1T_01_FRA"
[oracle@sn1-r720-e03-09 dbs]$ srvctl config database -d TPCH1T_S
Database unique name: TPCH1T_S
Database name:
Oracle home: /u01/app/oracle/product/19.0.0/dbhome_1
Oracle user: oracle
Spfile:
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: TPCH1T_01_DATA,TPCH1T_01_FRA
Services:
OSDBA group:
OSOPER group:
Database instance: TPCH1TS
Setup listener
Update listener.ora with an entry for the standby database.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = sn1-r720-e03-09.puretec.purestorage.com)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
)
SID_LIST_LISTENER =
(SID_LIST=
(SID_DESC=
(GLOBAL_DBNAME=TPCH1T_S)
(SID_NAME=TPCH1T_S)
(ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1)
)
)
Restart the listener.
Start the Physical Standby database
Startup mount the database with the pfile that we had created in an earlier step.
[oracle@sn1-r720-e03-09 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 24 01:18:11 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup mount pfile='initTPCH1T_S.ora'
ORACLE instance started.
Total System Global Area 1.6267E+11 bytes
Fixed Size 30140696 bytes
Variable Size 1.8790E+10 bytes
Database Buffers 1.4334E+11 bytes
Redo Buffers 506728448 bytes
Database mounted.
Next, we will update the paths of the database data, temp and online log files to correspond to the updated paths on the standby database.
cr_rename_script.sql:

set pages 0
set lines 180
set feed off
set echo off
spool rename_files.sql
select 'alter database rename file '||''''||name||''''||' to '||''''||replace(name,'/TPCH1T/DATAFILE/','/TPCH1T_S/DATAFILE/')||''''||';' from v$datafile;
select 'alter database rename file '||''''||name||''''||' to '||''''||replace(name,'/TPCH1T/TEMPFILE/','/TPCH1T_S/TEMPFILE/')||''''||';' from v$tempfile;
select 'alter database rename file '||''''||member||''''||' to '||''''||replace(member,'/TPCH1T/ONLINELOG/','/TPCH1T_S/ONLINELOG/')||''''||';' from v$logfile;
spool off
[oracle@sn1-r720-e03-09 dbs]$ sqlplus / as sysdba
SQL> @cr_rename_script.sql
SQL> @rename_files.sql
Standby log files are required for switchover. We need to add standby log files to both the primary and the standby database. They should be at least as large as the largest online log file, and we need one standby log file for each primary log file. Therefore, run the following command as many times as there are primary redo log files.
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('+TPCH1T_01_DATA') SIZE 10G;
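The repetition can be scripted by emitting one statement per primary log group. A sketch: the group count of 3 is an assumption for illustration; replace it with the actual count from v$log on the primary.

```shell
# Number of primary redo log groups (e.g. from: select count(*) from v$log;).
# The value 3 here is a placeholder for this sketch.
NGROUPS=3

# Emit one ADD STANDBY LOGFILE statement per primary log group; pipe the
# output into sqlplus on the standby (and primary) to execute it.
i=1
while [ "$i" -le "$NGROUPS" ]; do
    echo "ALTER DATABASE ADD STANDBY LOGFILE ('+TPCH1T_01_DATA') SIZE 10G;"
    i=$((i + 1))
done
```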
Catalog the archive log files
[oracle@sn1-r720-e03-09 ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Thu Jun 24 02:21:59 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TPCH1T (DBID=2051270623, not open)

RMAN> catalog start with '+TPCH1T_01_FRA/TPCH1T/ARCHIVELOG';

searching for all files that match the pattern +TPCH1T_01_FRA/TPCH1T/ARCHIVELOG

List of Cataloged Files
=======================
File Name: +TPCH1T_01_FRA/TPCH1T/ARCHIVELOG/2021_06_17/thread_1_seq_353.265.1075429619
File Name: +TPCH1T_01_FRA/TPCH1T/ARCHIVELOG/2021_06_16/thread_1_seq_351.267.1075405143
File Name: +TPCH1T_01_FRA/TPCH1T/ARCHIVELOG/2021_06_16/thread_1_seq_352.266.1075409413
File Name: +TPCH1T_01_FRA/TPCH1T/ARCHIVELOG/2021_06_08/thread_1_seq_350.268.1074722435
File Name: +TPCH1T_01_FRA/TPCH1T/ARCHIVELOG/2021_05_05/thread_1_seq_349.269.1071725825
File Name: +TPCH1T_01_FRA/TPCH1T/ARCHIVELOG/2021_03_31/thread_1_seq_348.264.1068620417
File Name: +TPCH1T_01_FRA/TPCH1T/ARCHIVELOG/2021_02_21/thread_1_seq_347.263.1065074411
File Name: +TPCH1T_01_FRA/TPCH1T/ARCHIVELOG/2021_01_17/thread_1_seq_346.270.1062054031
File Name: +TPCH1T_01_FRA/TPCH1T/ARCHIVELOG/2020_12_14/thread_1_seq_344.259.1059111889
File Name: +TPCH1T_01_FRA/TPCH1T/ARCHIVELOG/2020_12_14/thread_1_seq_345.258.1059112639
Catalog the online redo log files
RMAN> catalog archivelog '+TPCH1T_01_DATA/TPCH1T_S/ONLINELOG/group_4.266.1046005817';
cataloged archived log
archived log file name=+TPCH1T_01_DATA/TPCH1T_S/ONLINELOG/group_4.266.1046005817 RECID=11 STAMP=1076099504

RMAN> catalog archivelog '+TPCH1T_01_DATA/TPCH1T_S/ONLINELOG/group_5.267.1046005825';
cataloged archived log
archived log file name=+TPCH1T_01_DATA/TPCH1T_S/ONLINELOG/group_5.267.1046005825 RECID=12 STAMP=1076099527

RMAN> catalog archivelog '+TPCH1T_01_DATA/TPCH1T_S/ONLINELOG/group_6.268.1046005831';
cataloged archived log
archived log file name=+TPCH1T_01_DATA/TPCH1T_S/ONLINELOG/group_6.268.1046005831 RECID=13 STAMP=1076099550
Start the Redo Apply Process on the Standby
RMAN> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
Statement processed
Verify Standby
Verify that the Physical Standby database is applying redo properly.
SQL> select role, thread#, sequence#, action from v$dataguard_process where action != 'IDLE';

ROLE                        THREAD#  SEQUENCE# ACTION
------------------------ ---------- ---------- ------------
recovery logmerger                1        362 APPLYING_LOG

SQL> select client_process, process, thread#, sequence#, status from v$managed_standby where client_process = 'LGWR' or process = 'MRP0';

CLIENT_P PROCESS      THREAD#  SEQUENCE# STATUS
-------- --------- ---------- ---------- ------------
N/A      MRP0               1        354 WAIT_FOR_LOG

SQL> select client_process, process, thread#, sequence#, status from v$managed_standby where client_process = 'LGWR' or process = 'MRP0';

CLIENT_P PROCESS      THREAD#  SEQUENCE# STATUS
-------- --------- ---------- ---------- ------------
N/A      MRP0               1        358 APPLYING_LOG
LGWR     RFS                1        358 IDLE

SQL> select client_process, process, thread#, sequence#, status from v$managed_standby where client_process = 'LGWR' or process = 'MRP0';

CLIENT_P PROCESS      THREAD#  SEQUENCE# STATUS
-------- --------- ---------- ---------- ------------
N/A      MRP0               1        358 APPLYING_LOG
LGWR     RFS                1        358 RECEIVING
Post-Creation Steps
Update the init.ora parameters on the Standby database.
Update the following parameters in the init.ora file
db_file_name_convert='/TPCH1T/','/TPCH1T_S/'
log_file_name_convert='/TPCH1T/','/TPCH1T_S/'
standby_file_management='AUTO'
Create the spfile for the Standby database from the pfile.
SQL> create spfile from pfile='initTPCH1T_S.ora';
Restart standby database
Restart the standby database normally, i.e. without specifying the pfile option.
SQL> startup nomount
ORACLE instance started.
SQL> alter database mount standby database;

Database altered.

SQL> alter database recover managed standby database disconnect from session;

Database altered.
Troubleshooting
Check v$dataguard_status
The query below assumes that the standby is archive destination 2; if not, change the filter condition accordingly.
SQL> select severity, error_code, message, to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') from v$dataguard_status where dest_id=<Archive Dest. ID>;