Hot Backup Oracle
First, take a binary backup of your Control File. Issue the SQL command:
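The command in question is Oracle's control-file backup statement; the directory and file name shown here are just illustrative placeholders:

```sql
ALTER DATABASE BACKUP CONTROLFILE TO 'C:\BACKUPS\CONTROL.BKP';
```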
… replacing the directory and file name there with something appropriate for your
environment. Note that this generates a single binary image of the Control File which is
guaranteed to be consistent (and hence usable), regardless of whether the database is
actually using multiple mirrored Control Files.
Next, move on to the Data Files. First determine which tablespaces need backing
up:
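One query that would generate such a list is a simple select against dba_tablespaces; the two filters are a sketch, and simply encode the exclusions described next:

```sql
SELECT TABLESPACE_NAME
FROM   DBA_TABLESPACES
WHERE  STATUS   <> 'READ ONLY'
AND    CONTENTS <> 'TEMPORARY';
```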
Note here that we’re not interested in backing up read-only tablespaces (they do need to
be backed up at some point but, having once been backed up, they needn’t be included in
a regular backup strategy unless their read-only status changes). We’re also not
interested in backing up proper temporary tablespaces, since, by definition, they contain
nothing that is worth backing up.
Now, working through that list one tablespace at a time, issue the following commands:
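For a hypothetical USERS tablespace with a single Data File, the sequence would look something like this (the tablespace name and file paths are assumed for the sake of illustration):

```sql
ALTER TABLESPACE users BEGIN BACKUP;
HOST COPY C:\ORACLE\ORADATA\DB1\USERS01.DBF \BACKUPS
ALTER TABLESPACE users END BACKUP;
ALTER SYSTEM CHECKPOINT;
```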
In other words, put the entire tablespace into hot backup mode, then copy all the Data
Files associated with that tablespace to the appropriate backup directory. When all copies
are complete, take the tablespace out of hot backup mode. As each tablespace comes out
of hot backup mode, we issue a checkpoint request, to force its header SCN back into
synchronisation with the rest of the database.
When one tablespace has been backed up in this way, move on to the next and repeat the
same sequence: begin backup, copy the Data Files, end backup. It’s important to take it
one tablespace at a time, because placing a tablespace into hot backup mode means
switching on block-sized redo for all transactions affecting that tablespace. Placing the
entire set of tablespaces into hot backup mode, and only then copying all the Data Files,
would therefore be an extremely bad idea! Taking one tablespace at a time instead keeps
the flood of block-sized redo that is about to be generated to an absolute minimum.
When all tablespaces have been backed up, issue the command:
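The command is the log switch request referred to further down:

```sql
ALTER SYSTEM SWITCH LOGFILE;
```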
That will force a log switch, and thus force the archiving of the last possible bit of online
redo. Whilst that archiving is taking place, we can begin copying all the other,
pre-existing archives:
SET TRIMSPOOL ON
SET TERMOUT OFF
SET PAGESIZE 0
SET HEADING OFF
SET ECHO OFF
SET VERIFY OFF
SET FEEDBACK OFF
SPOOL ARCHIVECOPY
SELECT 'HOST COPY ' || NAME || ' \BACKUPS' FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
SPOOL OFF
@ARCHIVECOPY.LST
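The spooled ARCHIVECOPY.LST would then contain one "host copy" line per archive, along these lines (the archive directory and file names here are assumptions, purely for illustration):

```sql
HOST COPY C:\ORACLE\ORADATA\DB1\ARCHIVE\ARC00041.001 \BACKUPS
HOST COPY C:\ORACLE\ORADATA\DB1\ARCHIVE\ARC00042.001 \BACKUPS
```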
… that produces a script (and then runs it) which copies all the archives listed in the view,
in the order in which they were generated. The v$archived_log view only records new
archives once they have been successfully completed, so using it as the basis of the
copy should ensure that you don’t inadvertently try to copy the last archive (the one
generated as a result of the earlier ‘switch logfile’ command) whilst it is still being written
to. It might conceivably mean, however, that the last archive doesn’t get backed up at all,
so don’t be too eager about deleting that archive from its ‘live’ location after the backup
has finished.
In case it’s not obvious, this script works on NT only, because of the ‘host copy’
commands: if you were adapting it for Unix, you’d obviously use ‘host cp’ instead, and
would need to make sure that the capitalisation of the file names was correct.
That’s the basic procedure, anyway. You ought to finish the whole thing off by running
dbverify against all the copied Data Files, and making sure that there is no corruption in
the copies (see my tip on ‘How can I check that my backups are "clean" and free from
corruption?’ for details).