Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup


Recently we had a glitch on a Data Guard (physical standby) database. This is not a critical database, so the monitoring was relatively lax; the fact that it is handled by an outsourcer does not help either. In any case, the laxness meant a failure remained undetected for quite some time, and it was discovered only when the customer complained. This standby database is usually opened for read-only access from time to time. This time, however, the customer saw that the data was significantly out of sync with the primary and raised a red flag. Unfortunately, by then it had become a rather political issue.
Since the DBA in charge couldn't resolve the problem, I was called in. In this post, I will describe the issue and how it was resolved. In summary, there are two parts to the problem:
(1) What happened
(2) How to fix it
What Happened
Let's look at the first question: what caused the standby to lag behind? First, I looked at the current SCN numbers of the primary and standby databases. On the primary:
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    1447102
On the standby:
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    1301571
Clearly there is a difference. But this by itself does not indicate a problem, since the standby is expected to lag behind the primary (this is an asynchronous, non-real-time apply setup). The real question is how much it is lagging in terms of wall clock time. To find out, I used the scn_to_timestamp function to translate the SCN to a timestamp:
SQL> select scn_to_timestamp(1447102) from dual;
SCN_TO_TIMESTAMP(1447102)
-------------------------------
18-DEC-09 08.54.28.000000000 AM
I ran the same query to get the timestamp associated with the SCN of the standby database as well (note that I ran it on the primary database, since it would fail on the standby in mounted mode):
SQL> select scn_to_timestamp(1301571) from dual;
SCN_TO_TIMESTAMP(1301571)
-------------------------------
15-DEC-09 07.19.27.000000000 PM

This shows that the standby is lagging by two and a half days! The data at this point is not just stale; it must be rotten.
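The lag can also be computed in one step on the primary by subtracting the two timestamps (the result is an interval of roughly two and a half days), for example:
SQL> select scn_to_timestamp(1447102) - scn_to_timestamp(1301571) as lag from dual;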
The next question is why it would be lagging so far behind. This is a 10.2 database, where the FAL server should automatically resolve any gaps in archived logs. Something must have happened that caused the FAL (fetch archived log) process to fail. To get that answer, I first checked the alert log of the standby instance. I found these lines that showed the issue clearly:

Fri Dec 18 06:12:26 2009


Waiting for all non-current ORLs to be archived...
Media Recovery Waiting for thread 1 sequence 700
Fetching gap sequence in thread 1, gap sequence 700-700

Fri Dec 18 06:13:27 2009
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 700-700
DBID 846390698 branch 697108460
FAL[client]: All defined FAL servers have been attempted.

Going back in the alert log, I found these lines:


Tue Dec 15 17:16:15 2009
Fetching gap sequence in thread 1, gap sequence 700-700
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:15 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Tue Dec 15 17:16:45 2009
Error 12514 received logging on to the standby
FAL[client, MRP0]: Error 12514 connecting to DEL1 for fetching gap sequence
Tue Dec 15 17:16:45 2009
Errors in file /opt/oracle/admin/DEL2/bdump/del2_mrp0_18308.trc:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
This clearly showed the issue. On December 15th at 17:16:15, the Managed Recovery Process encountered an error while receiving the log information from the primary. The error was ORA-12514 "TNS:listener does not currently know of service requested in connect descriptor". This is usually the case when the TNS connect string is incorrectly specified. The primary is called DEL1, and there is a connect string called DEL1 on the standby server.
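Two quick checks when this error shows up are the connect string the standby uses to reach the primary for gap fetching (the FAL_SERVER parameter) and whether that string actually resolves to a service the listener knows about. For example, on the standby host:
SQL> show parameter fal_server
$ tnsping DEL1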
The connect string works well; in fact, right now there is no issue with the standby getting the archived logs, so the connect string is fine - now. The standby is receiving log information from the primary. There must have been some temporary hiccup that caused that specific archived log not to travel to the standby. If that log was somehow skipped (it could have been an intermittent problem), it should have been picked up by the FAL process later on; but that never happened. Since sequence# 700 was not applied, none of the logs received later (701, 702 and so on) were applied either. This has caused the standby to lag behind ever since.
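Incidentally, the gap itself can be confirmed directly on the standby; for example:
SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;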
So the fundamental question was why FAL did not fetch archived log sequence# 700 from the primary. To answer that, I looked into the alert log of the primary instance. The following lines were of interest:

Tue Dec 15 19:19:58 2009


Thread 1 advanced to log sequence 701 (LGWR switch)
Current log# 2 seq# 701 mem# 0: /u01/oradata/DEL1/onlinelog/o1_mf_2_5bhbkg92_.log
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-00308: cannot open archived log /u01/oraback/1_700_697108460.dbf
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
Tue Dec 15 19:20:29 2009
FAL[server, ARC1]: FAL archive failed, see trace file.
Tue Dec 15 19:20:29 2009
Errors in file /opt/oracle/product/10gR2/db1/admin/DEL1/bdump/del1_arc1_14469.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed.
Archiver continuing
Tue Dec 15 19:20:29 2009
ORACLE Instance DEL1 - Archival Error. Archiver continuing.

These lines showed everything clearly. The issue was:


ORA-00308: cannot open archived log /u01/oraback/1_700_697108460.dbf
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
The archived log simply was not available. The process could not see the file and couldn't ship it across to the standby site.
Upon further investigation I found that the DBA had removed archived logs to make some room in the filesystem, without realizing that he had removed the most current one, which was yet to be transmitted to the remote site. The mystery of why FAL did not get that log was finally cleared.
Solution
Now that I knew the cause, the focus shifted to the resolution. If archived log sequence# 700 had still been available on the primary, I could have simply copied it over to the standby, registered the log file and let the managed recovery process pick it up. But unfortunately the file was gone, and I couldn't just recreate it. Until that logfile is applied, recovery will not move forward. So, what were my options?
One option is of course to recreate the standby - possible, but not really feasible considering the time required. The other option is to apply an incremental backup of the primary taken from that SCN number. That's the key: the backup must be from a specific SCN number. I have described the process here since it is not very obvious. The following shows the step-by-step approach for resolving the problem. I have marked where each action must be performed: [Standby] or [Primary].
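As an optional sanity check before taking the backup, the SCN to recover from can also be cross-checked against the datafile headers on the (still mounted) standby; the incremental backup should start at or below the lowest value returned:
SQL> select min(checkpoint_change#) from v$datafile_header;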
1. [Standby] Stop the managed standby apply process:
SQL> alter database recover managed standby database cancel;
Database altered.

2. [Standby] Shutdown the standby database
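A normal shutdown is sufficient here, for example:
SQL> shutdown immediate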


3. [Primary] On the primary, take an incremental backup from the SCN number where the standby has been stuck:
RMAN> run {
2> allocate channel c1 type disk format '/u01/oraback/%U.rmb';
3> backup incremental from scn 1301571 database;
4> }
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=139 devtype=DISK
Starting backup at 18-DEC-09
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/oradata/DEL1/datafile/o1_mf_system_5bhbh59c_.dbf

piece handle=/u01/oraback/06l16u1q_1_1.rmb tag=TAG20091218T083619 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:06
Finished backup at 18-DEC-09
released channel: c1
4. [Primary] On the primary, create a new standby controlfile:
SQL> alter database create standby controlfile as '/u01/oraback/DEL1_standby.ctl';
Database altered.
5. [Primary] Copy these files to standby host:
oracle@oradba1 /u01/oraback# scp *.rmb *.ctl oracle@oradba2:/u01/oraback
oracle@oradba2's password:
06l16u1q_1_1.rmb 100% 43MB 10.7MB/s 00:04
DEL1_standby.ctl 100% 43MB 10.7MB/s 00:04
6. [Standby] Bring up the instance in nomount mode:
SQL> startup nomount
7. [Standby] Check the location of the controlfile:
SQL> show parameter control_files
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /u01/oradata/standby_cntfile.ctl
8. [Standby] Replace the controlfile with the one you just created on the primary.
9. [Standby] Copy the new controlfile over the existing one:
$ cp /u01/oraback/DEL1_standby.ctl /u01/oradata/standby_cntfile.ctl
10.[Standby] Mount the standby database:
SQL> alter database mount standby database;

11.[Standby] RMAN does not know about these files yet, so you must let it know by a process called cataloging. Catalog these files:
$ rman target=/
Recovery Manager: Release 10.2.0.4.0 - Production on Fri Dec 18 06:44:25 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: DEL1 (DBID=846390698, not open)
RMAN> catalog start with '/u01/oraback';
using target database control file instead of recovery catalog
searching for all files that match the pattern /u01/oraback
List of Files Unknown to the Database
=====================================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb
Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done
List of Cataloged Files
=======================
File Name: /u01/oraback/DEL1_standby.ctl
File Name: /u01/oraback/06l16u1q_1_1.rmb
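Optionally, you can confirm at this point that RMAN now sees the incremental backup piece, for example:
RMAN> list backup summary;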
12.Recover the database using these files:
RMAN> recover database;
Starting recover at 18-DEC-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /u01/oradata/DEL2/datafile/o1_mf_system_5lptww3f_.dbf
...
channel ORA_DISK_1: reading from backup piece /u01/oraback/05l16u03_1_1.rmb
channel ORA_DISK_1: restored backup piece 1
piece handle=/u01/oraback/05l16u03_1_1.rmb tag=TAG20091218T083619
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
starting media recovery
archive log thread 1 sequence 8012 is already on disk as file /u01/oradata/1_8012_697108460.dbf
archive log thread 1 sequence 8013 is already on disk as file /u01/oradata/1_8013_697108460.dbf

13. After some time, the recovery fails with the message:
archive log filename=/u01/oradata/1_8008_697108460.dbf thread=1 sequence=8009
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 12/18/2009 06:53:02
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/u01/oradata/1_8008_697108460.dbf'
ORA-00310: archived log contains sequence 8008; sequence 8009 required
ORA-00334: archived log: '/u01/oradata/1_8008_697108460.dbf'
This happens because we have come to the last of the available archived logs. The expected archived log with sequence# 8009 has not been generated yet.
14.At this point, exit RMAN and start the managed recovery process:
SQL> alter database recover managed standby database disconnect from session;
Database altered.
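You can watch the standby catch up, for instance by checking the recovery processes and the log sequence each one is working on:
SQL> select process, status, sequence# from v$managed_standby;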
15.Check the SCNs on the primary and standby:
[Standby] SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    1447474
[Primary] SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    1447478
Now they are very close to each other. The standby has now caught up.
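As a final confirmation, the same scn_to_timestamp approach used earlier (run on the primary) shows how close the two databases now are in wall clock terms, for example:
SQL> select scn_to_timestamp(1447478) - scn_to_timestamp(1447474) as gap from dual;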
