Dataguard Steps

The document outlines the steps to configure Data Guard between a primary database (PDB) and a standby database (SDB): 1. Configure the PDB and SDB with log archive destinations and the other required parameters. 2. Copy the datafiles, control files, and redo logs from the PDB to matching locations on the SDB. 3. Add entries for the PDB and SDB to the tnsnames.ora and listener.ora files. 4. Mount the SDB and start managed recovery so it begins applying redo from the PDB's archive destination.

Uploaded by

koolhart
Copyright
© Attribution Non-Commercial (BY-NC)

DATAGUARD STEPS:

Step 1: Add standby logfiles on the primary

PDB> ALTER DATABASE ADD STANDBY LOGFILE '/<PATH>' SIZE 50M;
PDB> ALTER DATABASE ADD STANDBY LOGFILE '/<PATH>' SIZE 50M;

(Use the same location and size as the existing online redo logs.)

Step 2: Edit the parameters of the primary database

*.audit_file_dest='/export/home/oras10g/test/adump'
*.background_dump_dest='/export/home/oras10g/test/bdump'
*.compatible='10.2.0.1.0'
*.control_file_record_keep_time=30
*.control_files='/export/home/oras10g/test/control/control.ctl'
*.core_dump_dest='/export/home/oras10g/test/cdump'
*.db_16k_cache_size=0
*.db_block_size=8192
*.db_cache_size=322961408
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='SATISH'
*.db_unique_name='SATISH'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=SATISHXDB)'
*.fal_client='STANDBY'
*.fal_server='SATISH'
*.fast_start_mttr_target=60
*.java_pool_size=4194304
*.job_queue_processes=10
*.large_pool_size=4194304
*.log_archive_dest_1='location=/export/home/oras10g/test/archive valid_for=(all_logfiles,all_roles) db_unique_name=SATISH'
*.log_archive_dest_2='service=STANDBY valid_for=(online_logfiles,primary_role) db_unique_name=STANDBY'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.open_cursors=300
*.pga_aggregate_target=393216000
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_max_size=600
*.sga_target=725614592
*.shared_pool_size=184549376
*.standby_file_management='AUTO'
*.streams_pool_size=12582912
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/export/home/oras10g/test/udump'

Step 3: Create the parameter file for the standby database

*.audit_file_dest='/export/home/oras10g/STANDBY/adump'
*.background_dump_dest='/export/home/oras10g/STANDBY/bdump'
*.compatible='10.2.0.1.0'
*.control_file_record_keep_time=30
*.control_files='/export/home/oras10g/STANDBY/control/standbycontrol.ctl'
*.core_dump_dest='/export/home/oras10g/STANDBY/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='/export/home/oras10g/test/dbf/','/export/home/oras10g/STANDBY/dbf/','/export/home/oras10g/DEMO/dbf/tbs1.dbf','/export/home/oras10g/STANDBY/dbf/tbs1.dbf','/export/home/oras10g/DEMO/dbf/tbs2.dbf','/export/home/oras10g/STANDBY/dbf/tbs2.dbf'
*.db_name='SATISH'
*.db_unique_name='STANDBY'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=dbadb1XDB)'
*.fal_client='STANDBY'
*.fal_server='SATISH'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=/export/home/oras10g/STANDBY/archive valid_for=(all_logfiles,all_roles)'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.log_file_name_convert='/export/home/oras10g/test/redo/','/export/home/oras10g/STANDBY/redo/'
*.open_cursors=300
*.pga_aggregate_target=393216000
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.remote_os_authent=true
*.sga_target=536870912
*.standby_archive_dest='/export/home/oras10g/STANDBY/standby_arch'
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/export/home/oras10g/STANDBY/udump'

Step 4: Copy the datafiles to the new location (for the standby)

mkdir /.../../STANDBY
Create subdirectories: redo, dbf, archive, standby_arch, udump, bdump, control
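Before copying any files, the setup so far can be sanity-checked on the primary; a quick look along these lines (v$standby_log and SHOW PARAMETER are the standard SQL*Plus means of checking):

```sql
-- Confirm the standby redo logs added in Step 1 exist and match the 50M size
SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$standby_log;

-- Confirm the name-conversion parameters were picked up (on the standby instance)
SHOW PARAMETER db_file_name_convert
SHOW PARAMETER log_file_name_convert
```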

PDB> ALTER DATABASE BEGIN BACKUP;

Copy all the datafiles.

PDB> ALTER DATABASE END BACKUP;
PDB> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/<path>/control.ctl';

Step 5: Make entries in tnsnames.ora and listener.ora

listener.ora:

listener =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbavm.localdomain.com)(PORT = 2021))
  )

SID_LIST_listener =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = STANDBY)
      (ORACLE_HOME = /oras10g/admin/product/10.2.0.1)
    )
    (SID_DESC =
      (SID_NAME = SATISH)
      (ORACLE_HOME = /oras10g/admin/product/10.2.0.1)
    )
  )

tnsnames.ora:

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbavm.localdomain.com)(PORT = 2021))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = STANDBY)
    )
  )

SATISH =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbavm.localdomain.com)(PORT = 2021))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = SATISH)
    )
  )
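Once the listener and tnsnames entries are in place, connectivity can be checked from the OS prompt before starting the standby; a sketch, assuming the standard Oracle client utilities are on the PATH:

```shell
# Reload the listener so it picks up the new SID_LIST entries
lsnrctl reload listener

# Verify both aliases resolve and the listener answers on port 2021
tnsping SATISH
tnsping STANDBY
```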

STEP 6: START THE STANDBY DATABASE


Create a password file for STANDBY.

SDB> CREATE SPFILE FROM PFILE;
SDB> STARTUP NOMOUNT
SDB> ALTER DATABASE MOUNT STANDBY DATABASE;
SDB> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

(The last command starts the MRP process, which applies the archived logs. To check later whether it is already running, query:

SDB> SELECT sequence#, status FROM v$managed_standby;

If the status of the last sequence is WAIT_FOR_LOG, the process is waiting for the next archived log to apply; if it is APPLYING_LOG, a log is currently being applied.)
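The password file mentioned above can be created with the orapwd utility; a sketch, assuming the default $ORACLE_HOME/dbs location and the 10g orapw<SID> naming convention:

```shell
# The password must match the SYS password on the primary,
# since redo transport authenticates as SYS
orapwd file=$ORACLE_HOME/dbs/orapwSTANDBY password=<sys_password> entries=5
```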

To cross-check, run the following:

PDB> ALTER SYSTEM SWITCH LOGFILE;
PDB> SELECT error FROM v$archive_dest;

(This shows any error raised while connecting to the standby database; no error means the destination is fine.)

SDB> SELECT sequence#, applied FROM v$archived_log;

(If APPLIED is YES, the archived logs are being applied.)
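If logs stop being applied, a gap check on the standby is a useful next step; the v$archive_gap view reports any missing sequence range:

```sql
-- Returns rows only when the standby is missing a range of archived logs;
-- those sequences must be copied over and registered manually
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;
```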

The CURRENT_SCN on the SDB will not advance continuously the way it does on the PDB. It updates only when the SDB is restarted and there are logs to apply; if the SDB is restarted but no log is waiting to be applied, the SCN does not change and stays the same as it was before the shutdown.
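One way to observe the lag described above is to compare the CURRENT_SCN column of v$database on both sides:

```sql
-- On the primary:
PDB> SELECT current_scn FROM v$database;

-- On the standby (mounted):
SDB> SELECT current_scn FROM v$database;
```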
