Oracle DBA Automation Scripts
By Rajendra Gutta
Having the right backup and recovery procedures is the lifeblood of any database. Companies live on data,
and, if that data is not available, the whole company collapses. As a result, it is the responsibility of the
database administrator to protect the database from system faults, crashes, and natural calamities resulting
from a variety of circumstances.
The choice of a backup and recovery mechanism depends mainly on the archiving mode of the database and on which backup methods (logical export, offline backup, or online backup) can be used.
Logical exports create an export file that contains the SQL statements and data needed to recreate the database. An export is performed while the database is open and does not affect users' work. Offline backups can only be performed when the database is shut down cleanly, and the database is unavailable to users while the offline backup is being performed. Online backups are performed while the database is open and do not affect users' work; the database must run in ARCHIVELOG mode to perform online backups.
The database can run in either ARCHIVELOG mode or NOARCHIVELOG mode. In ARCHIVELOG mode,
the archiver (ARCH) process archives the redo log files to the archive destination directory. These archive
files can be used to recover the database in the case of a failure. In NOARCHIVELOG mode, the redo log
files are not archived.
When the database is running in ARCHIVELOG mode, the choice can be one or more of the following:
• Export
• Hot backup
• Cold backup
When the database is running in NOARCHIVELOG mode, the choice of backup is as follows:
• Export
• Cold backup
Cold Backup
Offline or cold backups are performed when the database is completely shut down. The disadvantage of an
offline backup is that it cannot be done if the database needs to be run 24/7. Additionally, you can only
recover the database up to the point when the last backup was made unless the database is running in
ARCHIVELOG mode.
The general steps involved in performing a cold backup are shown in Figure 3.1. These general steps are used in writing the cold backup scripts for Unix and Windows NT. A cold backup consists of backing up the following files:
• Data files
• Control files
• Init.ora and config.ora files
CAUTION
Backing up online redo log files is not advised except when performing a cold backup with the database running in NOARCHIVELOG mode. If you make a cold backup in ARCHIVELOG mode, do not back up the redo log files: there is a chance that you may accidentally overwrite your real online redo logs when restoring, preventing you from doing a complete recovery.
If your database is running in ARCHIVELOG mode, when you perform a cold backup you should also back up any archive logs that exist.
Before performing a cold backup, you need to know the location of the files that need to be backed up.
Because the database structure changes from day to day as files get added or moved between directories, it is always better to query the database to get the physical structure of the database before making a cold backup.
To get the structure of the database, query the following dynamic data dictionary tables:
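For example, queries such as the following (an illustrative set; the cold backup script itself queries DBA_DATA_FILES and V$CONTROLFILE, among others) list the data files, control files, and online redo log members:
SQL> select name from v$datafile;
SQL> select name from v$controlfile;
SQL> select member from v$logfile;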
• Back up the control file and generate a trace of the control file (using the ALTER DATABASE BACKUP CONTROLFILE statements).
• Shut down the database cleanly:
$su - oracle
$sqlplus "/ as sysdba"
SQL>shutdown
In the first step, you generated a list of files to be backed up. To back up the files, you can use the Unix copy command (cp) to copy them to a backup location, as shown in the following code. You have to copy all the files that you generated in Step 1.
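For example (an illustrative sketch using the directory layout from the sample restore file in Listing 3.2; your file names and backup location will differ):
$ cp -p /u02/oracle/DEV/data/USERS01.dbf /bkp/DEV/cold/datafile_dir/
$ cp -p /u02/oracle/DEV/data/cntrl01.dbf /bkp/DEV/cold/controlfile_dir/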
You can perform the backup of the Init.ora and config.ora files as follows:
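For example (a sketch assuming both parameter files live in the default $ORACLE_HOME/dbs location; the configDEV.ora name is an assumption, so adjust the paths and names to your installation):
$ cp -p $ORACLE_HOME/dbs/initDEV.ora /bkp/DEV/cold/initfile_dir/
$ cp -p $ORACLE_HOME/dbs/configDEV.ora /bkp/DEV/cold/initfile_dir/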
After the backup is complete, you can start the database as follows:
$su - oracle
$sqlplus "/ as sysdba"
SQL> startup
Hot Backup
An online backup, or hot backup, is also referred to as an ARCHIVELOG backup. An online backup can only be done when the database is running in ARCHIVELOG mode and the database is open. When the database is running in ARCHIVELOG mode, the archiver (ARCH) background process makes a copy of each filled online redo log file in the archive log destination.
An online backup consists of backing up the data files, the control file, the archived redo log files, and the Init.ora file. But, because the database is open while performing the backup, you have to follow the procedure shown in Figure 3.2 to back up the files:
Step 1—Put the tablespace in the Backup mode and copy the data files.
Assume that your database has two tablespaces, USERS and TOOLS. To back up the files for these two
tablespaces, first put the tablespace in backup mode by using the ALTER statement as follows:
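For example, for the USERS tablespace:
SQL> alter tablespace users begin backup;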
After the tablespace is in Backup mode, you can use the SELECT statement to list the data files for the
USERS tablespace, and the copy (cp) command to copy the files to the backup location. Assume that the
USERS tablespace has two data files—users01.dbf and users02.dbf.
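A sketch of that step (the backup directory shown is an assumed example):
SQL> select file_name from dba_data_files where tablespace_name = 'USERS';
$ cp -p /u02/oracle/DEV/data/users01.dbf /bkp/DEV/hot/datafile_dir/
$ cp -p /u02/oracle/DEV/data/users02.dbf /bkp/DEV/hot/datafile_dir/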
The following command ends the backup process and puts the tablespace back in normal mode.
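For the USERS tablespace, that is:
SQL> alter tablespace users end backup;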
You have to repeat this process for all tablespaces. You can get the list of tablespaces by using the
following SQL statement:
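A minimal form of that query is:
SQL> select tablespace_name from dba_tablespaces;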
To back up the archived redo log files, first switch the redo log file and then temporarily stop the archiver process.
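A minimal sketch of those statements (the hot backup script issues the same pair and restarts the archiver after the copy):
SQL> alter system switch logfile;
SQL> alter system archive log stop;
-- after copying the archived files to the backup location:
SQL> alter system archive log start;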
To avoid backing up the archive file that is currently being written, we find the lowest sequence number that is still to be archived from the V$LOG view, and then back up all the archive files before that sequence number. The archive file location is defined by the LOG_ARCHIVE_DEST_n parameter in the Init.ora file.
An online backup of a database will keep the database open and functional for 24/7 operations. It is advised
to schedule online backups when there is the least user activity on the database, because backing up the
database is very I/O intensive and users can see slow response during the backup period. Additionally, if
the user activity is very high, the archive destination might fill up very fast.
There can be many reasons for the database to crash during a hot backup—a power outage or a reboot of the server, for example. If this happens during a hot backup, chances are that a tablespace will be left in backup mode. In that case you must manually recover the files involved, and the recovery operation will end the backup of the tablespace. It is important to check the status of the files as soon as you restart the instance and to end the backup for any tablespace that is still in backup mode.
You can check this with either of two statements. The first lists the data files with ACTIVE status in V$BACKUP; if a file is in the ACTIVE state, the corresponding tablespace is in backup mode. The second statement also gives the tablespace name, but it can't be used unless the database is open. You then need to end the backup mode of the tablespace.
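A sketch of the two checks and the follow-up statement (standard V$BACKUP and DBA_DATA_FILES columns; the USERS tablespace is an example):
SQL> select * from v$backup where status = 'ACTIVE';

SQL> select d.tablespace_name, b.status
       from dba_data_files d, v$backup b
      where d.file_id = b.file# and b.status = 'ACTIVE';

SQL> alter tablespace users end backup;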
Logical Export
Export is the single most versatile utility available to perform a backup of the database, de-fragment the
database, and port the database or individual objects from one operating system to another operating
system.
Export backup detects block corruption
Though you perform other types of backup regularly, it is good to perform a full export of the database at regular intervals, because export detects any data or block corruption in the database. By using the export file, it is also possible to recover individual objects, whereas the other backup methods do not support individual object recovery.
• Conventional path (default)—Uses the SQL layer to create the export file. The SQL layer introduces CPU overhead for character-set conversion and for converting numbers, dates, and so on, which makes it time consuming.
• Direct path (DIRECT=YES)—Skips the SQL layer and reads directly from the database buffers or private buffers. Therefore it is much faster than the conventional path (see the example after this list).
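For example, a minimal direct path full export might look like the following (the userid, file, and log names are placeholders):
$ exp system/manager full=y direct=y file=full.dmp log=full_exp.log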
We will discuss scripts to perform full, user-level, and table-level exports of the database. The scripts also show you how to compress and split the export file while performing the export. This is especially useful if the underlying operating system has a 2GB maximum file size limit.
Understand scripting
This chapter requires understanding of basic Unix shell and DOS batch programming techniques that are
described in Chapter 2 "Building Blocks." That chapter explained some of the common routines that will be
used across most of the scripts presented here.
This book could have provided much simpler scripts. But, considering standardization across all scripts and the reusability of individual sections in your own scripts, I am focusing on providing comprehensive scripts rather than temporary fixes. After you understand one script, it is easy to follow the flow of the rest of the scripts.
The backup scripts provided here work for HP-UX, Sun Solaris, and AIX with one slight modification. That is, the scripts use v$parameter and v$controlfile to get the user dump destination and control file information. Because the dollar sign ($) is a special character in Unix, you have to precede it with a backslash (\) that tells the shell to treat it as a regular character. However, this is different in each flavor of Unix. AIX and HP-UX need one backslash, and Sun Solaris needs two backslashes to make the dollar sign a regular character.
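For example, inside the backquoted here-documents used by these scripts, the reference to v$parameter appears in one of the following two forms; adjust it for your platform:
# HP-UX and AIX: one backslash
select value from v\$parameter where name='user_dump_dest';
# Sun Solaris: two backslashes
select value from v\\$parameter where name='user_dump_dest';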
These scripts are presented in a modular fashion. Each script consists of a number of small functions and a main section. Each function is designed to meet a specific objective so that it is easy to understand and modify. These small functions are reusable and can be used in the design of your own scripts. If you want to change a script to fit your unique needs, you can do so easily in the function where you want the change, without affecting the whole script.
After the backup is complete, it is necessary to check the backup status by reviewing log and error files
generated by the scripts.
Cold Backup
The cold backup program (see Listing 3.1) performs a cold backup of the database under the Unix environment. The script takes two input parameters—SID and OWNER. SID is the instance to be backed up, and OWNER is the Unix account under which Oracle is running. Figure 3.3 describes the functionality of the cold backup program. Each box represents a corresponding function in the program.
#####################################################################
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify(): Verify that database is online
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_verify(){
STATUS=`ps -fu ${ORA_OWNER} |grep -v grep| grep ora_pmon_${ORA_SID}`
funct_chk_ux_cmd_stat "Database is down for given SID($ORA_SID),
Owner($ORA_OWNER). Can't generate files to be backed up"
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify_shutdown(): Verify that database is down
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_verify_shutdown(){
STATUS=`ps -fu ${ORA_OWNER} |grep -v grep| grep ora_pmon_${ORA_SID}`
if [ $? = 0 ]; then
echo "`date`" >> $LOGFILE
echo "COLDBACKUP_FAIL: ${ORA_SID}, Database is up, can't make
coldbackup if the database is online."|tee -a ${BACKUPLOGFILE} >> $LOGFILE
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_shutdown_i(): Shutdown database in Immediate mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_shutdown_i(){
${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
shutdown immediate;
exit
EOF
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_shutdown_n(): Shutdown database in Normal mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_shutdown_n(){
${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
shutdown normal;
exit
EOF
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_startup_r(): Startup database in restricted mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_startup_r(){
${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
startup restrict;
exit
EOF
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_startup_n(): Startup database in normal mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_startup_n(){
${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
startup;
exit
EOF
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_build_dynfiles(): Identify the files to backup
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_build_dynfiles(){
# Build datafile list
echo "Building datafile list ." >> ${BACKUPLOGFILE}
datafile_list=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select file_name from dba_data_files order by tablespace_name;
exit
EOF`
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_cold_backup(): Perform cold backup
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_cold_backup(){
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "COLDBACKUP_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if not already existing
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_bkup_dir() {
RESTOREFILE_DIR="${BACKUPDIR}/restorefile_dir"
BACKUPLOG_DIR="${BACKUPDIR}/backuplog_dir"
DATAFILE_DIR="${BACKUPDIR}/datafile_dir"
CONTROLFILE_DIR="${BACKUPDIR}/controlfile_dir"
REDOLOG_DIR="${BACKUPDIR}/redolog_dir"
ARCLOG_DIR="${BACKUPDIR}/arclog_dir"
INITFILE_DIR="${BACKUPDIR}/initfile_dir"
BACKUPLOGFILE="${BACKUPLOG_DIR}/backup_log_${ORA_SID}"
RESTOREFILE="${RESTOREFILE_DIR}/restorefile_${ORA_SID}"
LOGFILE="${LOGDIR}/${ORA_SID}.log"
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
udump_dest=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select value from v\\$parameter
where name='user_dump_dest';
exit
EOF`
if [ ! -f $init_file ]; then
echo "COLDBACKUP_FAIL: init$ORA_SID.ora does not exist in
ORACLE_HOME/dbs"|tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_ux_cmd_stat(): Check the exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_ux_cmd_stat() {
if [ $? != 0 ]; then
echo "´date´" |tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
echo "COLDBACKUP_FAIL: ${1} "| tee -a ${BACKUPLOGFILE}
>> ${LOGFILE}
exit 1
fi
}
############################################################
# MAIN
############################################################
NARG=$#
ORA_SID=$1
ORA_OWNER=$2
DYN_DIR="${TOOLS}/DYN_FILES"
LOGDIR="${TOOLS}/localog"
JOBNAME="dbcoldbackup"
funct_chk_parm
funct_chk_bkup_dir
funct_get_vars
funct_verify
funct_build_dynfiles
funct_shutdown_i
funct_startup_r
funct_shutdown_n
funct_verify_shutdown
funct_cold_backup
funct_startup_n
• In the main function, set correct values for the BACKUPDIR, ORATABDIR, and TOOLS variables
highlighted in the cold backup script. The default location of ORATABDIR is different for each flavor
of Unix. For information about the default location of the ORATAB file for different flavors of Unix,
refer to Chapter 13, "Unix, Windows NT, and Oracle."
• Check for the existence of SID in oratab file. If not already there, you must add the instance.
• Check for existence of initSID.ora file in the ORACLE_HOME/dbs directory. If it is in a different
location, you can create a soft link to the ORACLE_HOME/dbs directory.
• Pass SID and OWNER as parameters to the program.
• The database must be running when you start the program. The program gets the required information by querying the database, and then shuts down the database and performs the cold backup.
• main() The main function defines the required variables and calls the functions to be executed. The variable BACKUPDIR defines the backup location, and ORATABDIR defines the oratab file location. The oratab file maintains the list of instances and their home directories on the machine. This file is created by default when Oracle is installed. If it is not there, you must create one. OWNER is the owner of the Oracle software directories. A sample oratab file can be found at the end of the chapter.
• funct_get_vars() This function gets ORACLE_HOME from the oratab file and
USER_DUMP_DEST from the initSID.ora file. The value of USER_DUMP_DEST is used to back up
the trace of the control file.
• funct_build_dynfiles() This function generates a list of files from the database for backup. It
also creates SQL statements for temporary files. These temporary files do not need to be backed
up, but can be recreated when a restore is performed. These temporary files are session-specific
and do not have any content when the database is closed.
• funct_shutdown_i() This function shuts down the database in Immediate mode, so that any
user connected to the database will be disconnected immediately.
• funct_startup_r() This function starts up the database in Restricted mode, so that no one can connect to the database except users with the RESTRICTED SESSION privilege.
• funct_shutdown_n() This function performs a clean shutdown of the database.
• funct_chk_ux_cmd_stat() This function is used to check the status of Unix commands,
especially after copying files to a backup location.
Restore File
The cold backup program creates a restore file that contains the commands to restore the database. This functionality is added because a lot of DBAs perform backups but, when it comes to recovery, have no procedures in place to make the recovery faster. With the restore file, it is easier to restore files to their original locations because it has all the commands ready to restore the backup. Otherwise, you need to know the structure of the database—which files are located where. A sample restore file is shown in Listing 3.2.
Listing 3.2 Sample Restore File
######### SQL for Temp Files
alter tablespace TEMP add tempfile '/u03/oracle/DEV/data/temp03.dbf' reuse;
alter tablespace TEMP add tempfile '/u03/oracle/DEV/data/temp04.dbf' reuse;
######### Data Files
cp -p /bkp/DEV/cold/datafile_dir/INDX01.dbf /u02/oracle/DEV/data/INDX01.dbf
cp -p /bkp/DEV/cold/datafile_dir/RBS01.dbf /u02/oracle/DEV/data/RBS01.dbf
cp -p /bkp/DEV/cold/datafile_dir/SYSTEM01.dbf /u02/oracle/DEV/data/SYSTEM01.dbf
cp -p /bkp/DEV/cold/datafile_dir/TEMP01.dbf /u02/oracle/DEV/data/TEMP01.dbf
cp -p /bkp/DEV/cold/datafile_dir/USERS01.dbf /u02/oracle/DEV/data/USERS01.dbf
######### Control Files
cp -p /bkp/DEV/cold/controlfile_dir/cntrl01.dbf /u02/oracle/DEV/data/cntrl01.dbf
######### Init.ora File
cp -p /bkp/DEV/cold/initfile_dir/initDEV.ora /u02/apps/DEV/oracle/8.1.7/dbs/initDEV.ora
The important thing here is that the backup log file defined by BACKUPLOGFILE contains detailed
information about each step of the backup process. This is a very good place to start investigating why the
backup failed or for related errors. This file will also have the start and end time of the backup.
A single line about the success or failure of a backup is appended to SID.log file every time a backup is
performed. This file is located under the directory defined by the LOGDIR variable. This file also has the
backup completion time. A separate file is created for each instance. This single file maintains the history of
performed backups and their status and timing information. The messages for a cold backup are
'COLDBACKUP_FAIL' if a cold backup failed and 'Coldbackup Completed successfully' if a backup
completes successfully.
Apart from the BACKUPLOGFILE and SID.log files, it is always good to capture the out-of-the-ordinary errors displayed onscreen if you are running the backup unattended. You can capture these errors by running the command shown next; the same thing can be done for hot backups. This command captures onscreen errors to the coldbackup.log file.
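For example (a sketch; the script name, SID, and owner shown are placeholders for your own values):
$ nohup coldbackup.sh DEV oracle > coldbackup.log 2>&1 &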
Hot Backup
Listing 3.4 provides the script to perform the hot backup of a database under the Unix
environment. The hot backup script takes two input parameters—SID and OWNER. SID is the
instance to be backed up, and OWNER is the Unix account under which Oracle is running.
Figure 3.4 shows the functionality of the hot backup program. Each box represents a
corresponding function in the program.
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify(): Verify that database is online
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_verify(){
STATUS=`ps -fu ${ORA_OWNER} |grep -v grep| grep ora_pmon_${ORA_SID}`
funct_chk_ux_cmd_stat "Database is down for given SID($ORA_SID),
Owner($ORA_OWNER). Can't perform hotbackup "
}
#:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_dblogmode(): Check DB log mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_dblogmode(){
STATUS=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select log_mode from v\\$database;
exit
EOF`
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_control_backup(): Backup control file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_control_backup(){
echo "Begin backup of controlfile and trace to trace file" >>${BACKUPLOGFILE}
${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter database backup controlfile to '${CONTROLFILE_DIR}/backup_control.ctl';
alter database backup controlfile to trace;
exit
EOF
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_archivelog_backup(): Backup archivelog files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_archivelog_backup(){
echo "Begin backup of archived redo logs" >> ${BACKUPLOGFILE}
#Switch logs to flush current redo log to archived redo before back up
${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter system switch logfile;
alter system archive log stop;
exit
EOF
${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter system archive log start;
exit
EOF
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_init_backup(): Backup init.ora file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_init_backup(){
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_temp_backup(): Prepare SQL for temp files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_temp_backup(){
echo "############# Recreate the following Temporary Files" >> ${RESTOREFILE}
${ORACLE_HOME}/bin/sqlplus -s <<EOF >> ${RESTOREFILE}
/ as sysdba
set heading off feedback off
select 'alter tablespace '||tablespace_name||' add tempfile '||''''||
file_name||''''||' reuse'||';'
from dba_temp_files;
exit
EOF
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
#funct_hot_backup(): Backup datafiles
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_hot_backup(){
exit 1
fi
done
${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter tablespace ${tblspace} end backup;
exit
EOF
echo " Ending back up of tablespace ${tblspace}.." >> ${BACKUPLOGFILE}
done
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "HOTBACKUP_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if not already existing
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_bkup_dir() {
RESTOREFILE_DIR="${BACKUPDIR}/restorefile_dir"
BACKUPLOG_DIR="${BACKUPDIR}/backuplog_dir"
DATAFILE_DIR="${BACKUPDIR}/datafile_dir"
CONTROLFILE_DIR="${BACKUPDIR}/controlfile_dir"
REDOLOG_DIR="${BACKUPDIR}/redolog_dir"
ARCLOG_DIR="${BACKUPDIR}/arclog_dir"
INITFILE_DIR="${BACKUPDIR}/initfile_dir"
BACKUPLOGFILE="${BACKUPLOG_DIR}/backup_log_${ORA_SID}"
RESTOREFILE="${RESTOREFILE_DIR}/restorefile_${ORA_SID}"
LOGFILE="${LOGDIR}/${ORA_SID}.log"
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
udump_dest=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select value from v\\$parameter
where name='user_dump_dest';
exit
EOF`
if [ ! -f $init_file ]; then
echo "HOTBACKUP_FAIL: init$ORA_SID.ora does not exist in ORACLE_HOME/dbs"
| tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
#:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_ux_cmd_stat(): Check the exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_ux_cmd_stat() {
if [ $? != 0 ]; then
echo "´date´" |tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
echo "HOTBACKUP_FAIL:${1} "|tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
}
############################################################
# MAIN
############################################################
NARG=$#
ORA_SID=$1
ORA_OWNER=$2
DYN_DIR="${TOOLS}/DYN_FILES"
LOGDIR="${TOOLS}/localog"
JOBNAME="dbhotbackup"
funct_chk_parm
funct_chk_bkup_dir
funct_get_vars
funct_verify
funct_chk_dblogmode
funct_hot_backup
funct_temp_backup
funct_control_backup
funct_archivelog_backup
funct_init_backup
• In the main function, set the correct values for BACKUPDIR, ORATABDIR, TOOLS, and
log_arch_dest variables highlighted in the script. The default location of ORATABDIR is
different for each flavor of Unix.
• Check for existence of the SID instance in the oratab file. If not already there, you must
add the instance.
• Check for the existence of the initSID.ora file in the ORACLE_HOME/dbs directory. If it is
in a different location, you must create a soft link to the ORACLE_HOME/dbs directory.
• main() BACKUPDIR defines the backup location. ORATABDIR defines the oratab file
location. oratab files maintain the list of instances and their home directories on the
machine. This file is created by default when Oracle is installed. If it is not there, you must
create one. OWNER is the owner of the Oracle software directories.
• funct_get_vars() Make sure that the USER_DUMP_DEST parameter is set correctly in the Init.ora file. I was reluctant to get LOG_ARCHIVE_DEST from the Init.ora file because there are some changes between Oracle 7 and Oracle 8 in the way the archive destination is defined, and there are a variety of ways you can define log_archive_dest based on how many destinations you are using. Consequently, I have given the option to define log_archive_dest in the main function.
• funct_temp_backup() Oracle 7 and Oracle 8 support permanent temporary tablespaces (created with create tablespace tablespace_name ... temporary). Apart from this, Oracle 8i has new features to create temporary tablespaces that do not need to be backed up (created with create temporary tablespace ...). Data in these temporary tablespaces is session-specific and is deleted as soon as the session disconnects. Because of the nature of these temporary tablespaces, you do not need to back them up; in the case of a restore, you can simply add the tempfiles for these temporary tablespaces. The files for these temporary tablespaces are listed in the dba_temp_files data dictionary view.
• funct_control_backup() In addition to taking a backup of the control file, this function also backs up a trace of the control file. The trace of the control file is useful for examining the structure of the database. This is the single most important piece of information that you need to perform a good recovery, especially if the database has hundreds of files.
• funct_chk_bkup_dir() This function creates backup directories for data, control, redo
log, archivelog, init files, restore files, and backup log files.
Restore File
The restore file for a hot backup looks similar to the one for a cold backup. Please refer to the explanation under the "Restore File" heading in the cold backup section.
The important thing here is that the backup log file defined by (BACKUPLOGFILE) contains
detailed information about each step of the backup process. This is a very good place to start
investigating why a backup has failed or for related errors. This file will also have the start and end
time of the backup.
A single line about the success or failure of a backup is appended to the SID.log file every time a
backup is performed. This file is located under the directory defined by the LOGDIR variable. This
file also has the backup completion time. A separate file is created for each instance. This single
file maintains the history of the performed backups, their status, and timing information. The
messages for a hot backup are 'HOTBACKUP_FAIL', if the hot backup failed, and 'Hotbackup
Completed successfully', if the backup completes successfully.
Export
The export program (see Listing 3.5) performs a full export of the database under Unix
environment. The export script takes two input parameters—SID and OWNER. SID is the instance to
be backed up, and OWNER is the Unix account under which Oracle is running. Figure 3.5 shows the
functionality of the export and split export programs. Each box represents a corresponding
function in the program.
Figure 3.5 Functions in export and split export scripts for Unix.
######################################################################
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify(): Verify that database is online
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_verify(){
STATUS=`ps -fu ${ORA_OWNER} |grep -v grep| grep ora_pmon_${ORA_SID}`
funct_chk_unix_command_status "Database is down for given SID($ORA_SID),
Owner($ORA_OWNER). Can't perform export "
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_cleanup(): Cleanup interim files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_cleanup() {
echo "Left for user convenience" > /dev/null
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_build_parfile(): This will create parameter file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_build_parfile() {
# This line makes sure that it always creates a new parameter file
echo " " >${PARFILE}
echo "userid=system/manager">>${PARFILE}
# if you use connect string. see next line.
#userid=system/manager@${CON_STRING}
#echo "Owner=scott">>${PARFILE}
#echo "Tables=scott.T1">>${PARFILE}
echo "Full=Y">>${PARFILE}
#echo "Direct=Y">>${PARFILE}
echo "Grants=Y">>${PARFILE}
echo "Indexes=Y">>${PARFILE}
echo "Rows=Y">>${PARFILE}
echo "Constraints=Y">>${PARFILE}
echo "Compress=N">>${PARFILE}
echo "Consistent=Y">>${PARFILE}
echo "File=${FILE}">>${PARFILE}
echo "Log=${EXPORT_DIR}/${ORA_SID}.exp.log">>${PARFILE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_export(): Export the database
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_export() {
# Remove old export file
rm -f ${FILE}
${ORACLE_HOME}/bin/exp parfile=${PARFILE}
if [ $? != 0 ]; then
echo `date` >> $LOGDIR/${ORA_SID}.log
echo "EXPORT_FAIL: ${ORA_SID}, Export Failed" >> $LOGDIR/${ORA_SID}.log
funct_cleanup
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "EXPORT_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if not already exist
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_bkup_dir() {
EXPORT_DIR=${BACKUPDIR}
if [ ! -d ${EXPORT_DIR} ]; then mkdir -p ${EXPORT_DIR}; fi
if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi
if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi
FILE="${EXPORT_DIR}/${ORA_SID}.dmp"
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
ORA_HOME=`sed /#/d ${ORATABDIR}|grep -i ${ORA_SID}|nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME}|nawk -F "/" '{for (i=2; i<=NF-2; i++)
print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE|tr -d " "`
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
#CON_STRING=${ORA_SID}.company.com
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_unix_command_status(): Check exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_unix_command_status() {
if [ $? != 0 ]; then
echo "´date´" >> ${LOGDIR}/${ORA_SID}.log
echo "EXPORT_FAIL: ${1} " >> ${LOGDIR}/${ORA_SID}.log
exit 1
fi
}
######################################
# MAIN
######################################
NARG=$#
ORA_SID=$1
ORA_OWNER=$2
DYN_DIR="${TOOLS}/DYN_FILES"
PARFILE="${DYN_DIR}/export.par"
LOGDIR="${TOOLS}/localog"
funct_chk_parm
funct_get_vars
funct_verify
funct_chk_bkup_dir
funct_build_parfile
funct_export
funct_cleanup
• In the main function, set the correct values for BACKUPDIR, ORATABDIR, and TOOLS
variables highlighted in the export script. The default location of ORATABDIR is different for
each flavor of Unix.
• Check for existence of SID in the oratab file. If not already there, you must add the
instance.
• The funct_build_parfile() function builds the parameter file. By default, it performs a
full export. You can modify the parameters to perform a user- or table-level export.
The log file specified by the 'Log' parameter in the parameter file will have detailed information about the status of the export. This is a very good place to start investigating why an export has failed or for related errors.
A single line about the success or failure of export is appended to SID.log file every time an export
is performed. This file is located under the directory defined by the LOGDIR variable. This file also
has the backup completion time. A separate file is created for each instance. This single file
maintains the history of performed backups, their status, and timing information. The messages for
an export are 'EXPORT_FAIL', if the export failed, and 'Export Completed successfully', if the
export completes successfully.
The split export program (see Listing 3.6) performs an export of the database. Additionally, if the export file is larger than 2GB, the script compresses the export file and splits it into multiple files to overcome the 2GB file size limitation of the file system. This is the only way to split the export file prior to Oracle 8i. New features in 8i allow you to split the export file into multiple files, but they do not compress the files on-the-fly to save space. The script uses the Unix commands split and compress to perform the splitting and compressing of the files, as sketched below. The functions of the script are explained in Figure 3.5.
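A minimal sketch of the pipe plumbing the script sets up (the pipe names match the variables in the listing for a SID of DEV; the 500MB piece size is an illustrative assumption, and the actual listing may differ):
# create the named pipes (the script uses /etc/mknod)
mknod /tmp/export_DEV_pipe p
mknod /tmp/split_DEV_pipe p
# compress the stream that exp writes into the export pipe
compress < /tmp/export_DEV_pipe > /tmp/split_DEV_pipe &
# split the compressed stream into 500MB pieces: DEV.dmp.Zaa, DEV.dmp.Zab, ...
split -b 500m /tmp/split_DEV_pipe /u02/DEV/export/DEV.dmp.Z &
# exp then writes to the export pipe via File=/tmp/export_DEV_pipe in the parameter file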
The split export script takes two input parameters—SID and OWNER. SID is the instance to be
backed up, and OWNER is the Unix account under which Oracle is running.
In 8i, Oracle introduced two new export parameters called FILESIZE and QUERY. FILESIZE specifies the maximum size of each dump file, which overcomes the 2GB file size limitation of the export command on some operating systems. By using the QUERY parameter, you can export a subset of a table's data. During an import that uses split export files, you have to specify the same FILESIZE limit.
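For illustration, a parameter file using these 8i parameters might look like the following (the schema, table, predicate, and file names are placeholder assumptions):
userid=system/manager
tables=scott.emp
query="where deptno = 10"
filesize=2000000000
file=(scott_emp_01.dmp,scott_emp_02.dmp)
log=scott_emp_exp.log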
######################################################################
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify(): Verify that database is online
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_verify(){
STATUS=`ps -fu ${ORA_OWNER} |grep -v grep| grep ora_pmon_${ORA_SID}`
funct_chk_unix_command_status "Database is down for given SID($ORA_SID),
Owner($ORA_OWNER). Can't perform export "
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_cleanup(): Cleanup interim files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_cleanup() {
rm -f ${PIPE_DEVICE}
rm -f ${SPLIT_PIPE_DEVICE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_splitcompress_pipe(): Creates pipe for compressing and splitting of file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_splitcompress_pipe() {
# Creates pipe for compressing
if [ ! -r ${PIPE_DEVICE} ]; then
/etc/mknod ${PIPE_DEVICE} p
fi
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_build_parfile(): Creates parameter file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_build_parfile() {
# This line makes sure that it always creates a new parameter file
echo " " >${PARFILE}
echo "userid=system/manager">>${PARFILE}
echo "Full=Y">>${PARFILE}
#echo "tables=scott.t1">>${PARFILE}
echo "Grants=Y">>${PARFILE}
echo "Indexes=Y">>${PARFILE}
echo "Rows=Y">>${PARFILE}
echo "Constraints=Y">>${PARFILE}
echo "Compress=N">>${PARFILE}
echo "Consistent=Y">>${PARFILE}
echo "File=${PIPE_DEVICE}">>${PARFILE}
echo "Log=${EXPORT_DIR}/${ORA_SID}.exp.log">>${PARFILE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_export(): Export the database
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_export() {
# Remove old export file
rm -f ${ZFILE}
${ORACLE_HOME}/bin/exp parfile=${PARFILE}
if [ $? != 0 ]; then
echo `date` >> $LOGDIR/${ORA_SID}.log
echo "EXPORT_FAIL: ${ORA_SID}, Export Failed" >> $LOGDIR/${ORA_SID}.log
funct_cleanup
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "EXPORT_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if not already existing
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_bkup_dir() {
EXPORT_DIR=${BACKUPDIR}
if [ ! -d ${EXPORT_DIR} ]; then mkdir -p ${EXPORT_DIR}; fi
if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi
if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi
ZFILE="${EXPORT_DIR}/${ORA_SID}.dmp.Z"
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
ORA_HOME=`sed /#/d ${ORATABDIR}|grep -i ${ORA_SID}|nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME}|nawk -F "/" '{for (i=2; i<=NF-2; i++)
print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE|tr -d " "`
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_unix_command_status(): Check exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_unix_command_status() {
if [ $? != 0 ]; then
echo "´date´" >> ${LOGDIR}/${ORA_SID}.log
echo "EXPORT_FAIL: ${1} " >> ${LOGDIR}/${ORA_SID}.log
exit 1
fi
}
#######################################
## MAIN
#######################################
NARG=$#
ORA_SID=$1
ORA_OWNER=$2
# Set up environment
BACKUPDIR="/u02/${ORA_SID}/export"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"
DYN_DIR="${TOOLS}/DYN_FILES"
PARFILE="${DYN_DIR}/export.par"
LOGDIR="${TOOLS}/localog"
PIPE_DEVICE="/tmp/export_${ORA_SID}_pipe"
SPLIT_PIPE_DEVICE="/tmp/split_${ORA_SID}_pipe"
funct_chk_parm
funct_get_vars
funct_verify
funct_chk_bkup_dir
funct_splitcompress_pipe
funct_build_parfile
funct_export
funct_cleanup
The checklist of things to verify before the splitZxport is run is the same as for the export
program.
Split Import
The split import program (see Listing 3.7) performs an import using the compressed split export
dump files created by the splitZxport program. The script takes two input parameters—SID and
OWNER. SID is the instance into which the import is performed, and OWNER is the Unix account under which Oracle is running.
######################################################################
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify(): Verify that database is online
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_verify(){
STATUS=`ps -fu ${ORA_OWNER} |grep -v grep| grep ora_pmon_${ORA_SID}`
funct_chk_unix_command_status "Database is down for given SID($ORA_SID),
Owner($ORA_OWNER). Can't perform import"
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_cleanup(): Cleanup interim files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_cleanup() {
rm -f ${PIPE_DEVICE}
rm -f ${SPLIT_PIPE_DEVICE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_desplitcompress_pipe(): Creates pipe for uncompressing and desplitting of file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_desplitcompress_pipe() {
# Creates pipe for uncompressing
if [ ! -r ${PIPE_DEVICE} ]; then
/etc/mknod ${PIPE_DEVICE} p
fi
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_build_parfile(): Creates parameter file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_build_parfile() {
# This line makes sure that it always creates a new parameter file
echo " " >${PARFILE}
echo "userid=system/manager">>${PARFILE}
#echo "indexfile=${BACKUPDIR}/${ORA_SID}.ddl">>${PARFILE}
#echo "Owner=scott">>${PARFILE}
#echo "Fromuser=kishan">>${PARFILE}
#echo "Touser=aravind">>${PARFILE}
#echo "Tables=T1,T2,t3,t4">>${PARFILE}
echo "Full=Y">>${PARFILE}
echo "Ignore=Y">>${PARFILE}
echo "Commit=y">>${PARFILE}
echo "File=${PIPE_DEVICE}">>${PARFILE}
echo "Log=${BACKUPDIR}/${ORA_SID}.imp.log">>${PARFILE}
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_import(): Import the database
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_import() {
${ORACLE_HOME}/bin/imp parfile=${PARFILE}
if [ $? != 0 ]; then
echo `date` >> $LOGDIR/${ORA_SID}.log
echo "IMPORT_FAIL: ${ORA_SID}, Import Failed" >> $LOGDIR/${ORA_SID}.log
funct_cleanup
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "IMPORT_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Check for backup directories
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_bkup_dir() {
if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi
if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
ORA_HOME=`sed /#/d ${ORATABDIR}|grep -i ${ORA_SID}|nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME}|nawk -F "/" '{for (i=2; i<=NF-2; i++)
print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE|tr -d " "`
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_unix_command_status(): Check exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_unix_command_status() {
if [ $? != 0 ]; then
echo "´date´" >> ${LOGDIR}/${ORA_SID}.log
echo "IMPORT_FAIL: ${1} " >> ${LOGDIR}/${ORA_SID}.log
exit 1
fi
}
#######################################
## MAIN
#######################################
NARG=$#
ORA_SID=$1
ORA_OWNER=$2
# Set up environment
BACKUPDIR="/u02/${ORA_SID}/export"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"
# List all split files in ZFILES variable
#ZFILES=`echo ${BACKUPDIR}/file.dmp.Z??|sort`
ZFILES="${BACKUPDIR}/file.dmp.Zaa ${BACKUPDIR}/file.dmp.Zab"
DYN_DIR="${TOOLS}/DYN_FILES"
PARFILE="${DYN_DIR}/import.par"
LOGDIR="${TOOLS}/localog"
PIPE_DEVICE="/tmp/import_${ORA_SID}_pipe"
SPLIT_PIPE_DEVICE="/tmp/split_${ORA_SID}_pipe"
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1;export NLS_LANG
funct_chk_parm
funct_get_vars
funct_verify
funct_chk_bkup_dir
funct_desplitcompress_pipe
funct_build_parfile
funct_import
funct_cleanup
• In the main() function, set the correct values for the BACKUPDIR, ORATABDIR, and TOOLS
variables highlighted in the import script. The default location of ORATABDIR is different for
each flavor of Unix.
• Check for the existence of the SID in the oratab file. If not already there, you must add the
instance.
• List all split filenames in the ZFILES variable in the main() function.
• The funct_build_parfile() function builds the parameter file. By default, it performs a
full import. You can modify the settings to perform a user or table import.
• funct_desplitcompress_pipe() The only trick here is that we need to reassemble and uncompress the split files before we can use them as input to the import command. That is accomplished by creating two pipes. Here, we use the cat command to send the output of the split files to the split pipe device. The split pipe device is passed to the uncompress command, and the output from the uncompress command is sent to the Oracle import command (cat and uncompress are standard Unix commands). Everything else is the same as a regular import, as sketched below.
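A minimal sketch of that plumbing, using the variable names from the script (ZFILES, SPLIT_PIPE_DEVICE, and PIPE_DEVICE are set in the main section); the actual listing may differ:
# create the named pipes (the script uses /etc/mknod)
mknod ${SPLIT_PIPE_DEVICE} p
mknod ${PIPE_DEVICE} p
# concatenate the compressed pieces into the split pipe in the background
cat ${ZFILES} > ${SPLIT_PIPE_DEVICE} &
# uncompress that stream into the pipe that imp reads as its File parameter
zcat < ${SPLIT_PIPE_DEVICE} > ${PIPE_DEVICE} &
# imp is then run with File=${PIPE_DEVICE} set in the parameter file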
This section discusses backing up the software directories of Oracle. We have already discussed
how to back up the database. Backing up software is also a very important part of a backup
strategy. The software might not need to be backed up as often as the database because it does not
change quite as often. But as you upgrade, or before you apply any patches to existing software, it
is important to make a backup copy of it to avoid getting into trouble.
Listing 3.8 contains the script to perform a backup of Oracle software. The script takes two input
parameters—SID and OWNER. SID is the instance to be backed up, and OWNER is the Unix account
under which Oracle is running.
#####################################################################
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify_shutdown(): Verify that database is down
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_verify_shutdown(){
STATUS=`ps -fu ${ORA_OWNER} |grep -v grep| grep ora_pmon_${ORA_SID}`
if [ $? = 0 ]; then
echo "`date`" >> ${LOGFILE}
echo "SOFTWAREBACKUP_FAIL: ${ORA_SID}, Database is up, can't do software
backup if the database is online." |tee -a ${BACKUPLOGFILE} >>
${LOGFILE}
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_shutdown_i(): Shutdown database in Immediate mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_shutdown_i(){
${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
shutdown immediate;
exit
EOF
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_startup_n(): Startup database in normal mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_startup_n(){
${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
startup;
exit
EOF
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_software_bkup(): Backup software
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_software_bkup(){
echo "tarring ${ORA_HOME}" >> ${BACKUPLOGFILE}
echo "tarring ${ORA_BASE}" >> ${BACKUPLOGFILE}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "SOFTWAREBACKUP_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if not already exist
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_bkup_dir() {
RESTOREFILE_DIR="${BACKUPDIR}/restorefile_dir"
BACKUPLOG_DIR="${BACKUPDIR}/backuplog_dir"
SOFTWARE_DIR="${BACKUPDIR}/software_dir"
BACKUPLOGFILE="${BACKUPLOG_DIR}/backup_log_${ORA_SID}"
RESTOREFILE="${RESTOREFILE_DIR}/restorefile_${ORA_SID}"
ORAHOMEFILE="${SOFTWARE_DIR}/orahome_${ORA_SID}.tar.Z"
ORABASEFILE="${SOFTWARE_DIR}/orabase_${ORA_SID}.tar.Z"
LOGFILE="${LOGDIR}/${ORA_SID}.log"
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
ORA_HOME=`sed /#/d ${ORATABDIR}|grep -i ${ORA_SID}|nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME}|nawk -F "/" '{for (i=2; i<=NF-2; i++)
print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE|tr -d " "`
init_file=$ORA_HOME/dbs/init$ORA_SID.ora
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
if [ ! -f $init_file ]; then
echo "SOFTWAREBACKUP_FAIL: int$ORA_SID.ora does not exist in
ORACLE_HOME/dbs. Used by funct_startup_n to start database"|tee -a
${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_unix_command_status(): Check the exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_unix_command_status() {
if [ $? != 0 ]; then
echo "´date´" |tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
echo "SOFTWAREBACKUP_FAIL: ${1}"| tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
}
############################################################
## MAIN
############################################################
NARG=$#
ORA_SID=$1
ORA_OWNER=$2
DYN_DIR="${TOOLS}/DYN_FILES"
LOGDIR="${TOOLS}/localog"
JOBNAME="orasoftware"
funct_chk_parm
funct_chk_bkup_dir
funct_get_vars
funct_shutdown_i
funct_verify_shutdown
funct_software_bkup
funct_startup_n
echo "${ORA_SID}, Software Backup Completed successfully on ´date +\"%c\"´ "
|tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
• In the main function, set correct values for BACKUPDIR, ORATABDIR, and TOOLS variables
highlighted in the software backup script. The default location of ORATABDIR is different for
each flavor of Unix.
• Check for the existence of the SID in the oratab file. If not already there, you must add the
instance.
• If your Oracle software directory structure does not follow OFA guidelines, set ORA_BASE
and ORA_HOME manually in funct_get_vars().
The important thing here is that the backup log file defined by BACKUPLOGFILE contains detailed
information about each step of the backup process. This is a very good place to start investigating
why a backup has failed or for related errors. This file will also have the start and end time of
backup.
A single line about the success or failure of backup is appended to SID.log file every time backup
is performed. This file is located under the directory defined by the LOGDIR variable. The messages
for a software backup are 'SOFTWAREBACKUP_FAIL', if the software backup failed, and 'Software Backup Completed successfully', if the backup completes successfully.
1. Shutdown database.
2. Use restore file from the backup to restore the directories.
3. Start up the database.
The restore command in the restore file first does a zcat (uncompress and cat) of the backup file and passes it to tar for extraction. For example,
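An illustrative example (the backup location and file name follow the ORAHOMEFILE naming used in the script; the paths are assumptions, so adjust them to your own backup location):
# run from the directory the tar backup was taken relative to
$ zcat /bkp/DEV/software_dir/orahome_DEV.tar.Z | tar xvf -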
Before reading through this section, I strongly recommend that you go through the Windows NT
programming section in Chapter 2. This section presents and explains the scripts for taking a
backup and recovering a database in the Windows NT environment. Here, we use the DOS Shell
batch programming techniques to automate the backup process.
After the backup is complete, it is important to check the backup status by reviewing log and error
files generated by the scripts.
Cold Backup
Listing 3.9 performs a cold backup of a database under the Windows NT environment. The cold
backup script takes SID, the instance to be backed up, as the input parameter. The general steps to
write a backup script in Unix and Windows NT are the same. The only difference is that we will be
using commands that are understood by Windows NT. Figure 3.6 shows the functionality of a cold
backup program under Windows NT. Each box represents a corresponding section in the program.
For example, the Parameter Checking section checks for the necessary input parameters and also
checks for the existence of the backup directories.
set ORA_HOME=c:\oracle\ora81\bin
set CONNECT_USER="/ as sysdba"
set ORACLE_SID=%1
set BACKUP_DIR=c:\backup\%ORACLE_SID%\cold
set INIT_FILE=c:\oracle\admin\orcl\pfile\init.ora
set TOOLS=c:\oracomn\admin\my_dba
set LOGDIR=%TOOLS%\localog
set LOGFILE=%LOGDIR%\%ORACLE_SID%.log
set CFILE=%BACKUP_DIR%\log\coldbackup.sql
set ERR_FILE=%BACKUP_DIR%\log\cerrors.log
set LOG_FILE=%BACKUP_DIR%\log\cbackup.log
set BKP_DIR=%BACKUP_DIR%
REM :::::::::::::::::::: End Declare Variables Section
(echo ColdBackup Completed Successfully & date/T & time/T) >> %LOG_FILE%
(echo ColdBackup Completed Successfully & date/T & time/T) >> %LOGFILE%
goto end
:usage
echo Error, Usage: coldbackup_nt.bat SID
goto end
:backupdir
echo Error creating Backup directory structure >> %ERR_FILE%
(echo COLDBACKUP_FAIL:Error creating Backup directory structure
& date/T & time/T) >> %LOGFILE%
REM :::::::::::::::::::: End Error handling section
• Check to see that ORA_HOME, BACKUP_DIR, and TOOLS are set to correct values according to
your directory structure. These variables are highlighted in the script.
• Verify that CONNECT_USER is set to the correct username and password.
• Define the INIT_FILE variable to the location of the Init.ora file.
• Be sure that the user running the program has Write access to backup directories.
• When you run the program, pass SID as a parameter.
Backup log files defined by LOG_FILE contain detailed information about each step of the backup
process. This is a very good place to start investigating why a backup has failed or for related
errors. This file will also have the start and end time of backup. ERR_FILE has error information.
A single line about the success or failure of backup is appended to the SID.log file every time a
backup is performed. This file is located under the directory defined by the LOGDIR variable. The
messages for a cold backup are 'COLDBACKUP_FAIL', if the cold backup failed, and 'Cold Backup
Completed successfully', if the backup completes successfully.
You can schedule automatic backups using the 'at' command, as shown in the following:
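For example (a sketch; the path to the batch file and the ORCL SID are placeholder assumptions):
C:\> at 23:00 /every:M,T,W,Th,F "c:\oracomn\admin\my_dba\coldbackup_nt.bat ORCL"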
This command runs a backup at 23:00 hours every Monday, Tuesday, Wednesday, Thursday, and
Friday.
The "Create Dynamic Files" section in the coldbackup_nt.bat program creates the
coldbackup.sql file (see Listing 3.10) under the log directory. coldbackup.sql is called from
coldbackup_nt.bat and generates a list of data, control, and redo log files to be backed up from
the database. A sample coldbackup.sql is shown in Listing 3.10 for your understanding. The
contents of this file are derived based on the structure of the database.
spool c:\backup\orcl\cold\log\coldbackup_list.bat
When the coldbackup.sql file is called from the coldbackup_nt.bat program, it spools output to the
coldbackup_list.bat DOS batch file (see Listing 3.11). This file has the commands necessary for
performing the cold backup.
This is only a sample file. Note that in the contents of the file, the data, control, redo log, and Init.ora files are copied to their respective backup directories.
Hot Backup
The hot backup program (see Listing 3.12) performs a hot backup of a database under the
Windows NT environment. The hot backup script takes SID, the instance to be backed up, as the
input parameter.
set ORA_HOME=c:\oracle\ora81\bin
set CONNECT_USER="/ as sysdba"
set ORACLE_SID=%1
set BACKUP_DIR=c:\backup\%ORACLE_SID%\hot
set INIT_FILE=c:\oracle\admin\orcl\pfile\init.ora
set ARC_DEST=c:\oracle\oradata\orcl\archive
set TOOLS=c:\oracomn\admin\my_dba
set LOGDIR=%TOOLS%\localog
set LOGFILE=%LOGDIR%\%ORACLE_SID%.log
set HFILE=%BACKUP_DIR%\log\hotbackup.sql
set ERR_FILE=%BACKUP_DIR%\log\herrors.log
set LOG_FILE=%BACKUP_DIR%\log\hbackup.log
set BKP_DIR=%BACKUP_DIR%
REM :::::::::::::::::::: End Declare Variables Section
echo. >>%HFILE%
echo dbms_output.put_line(chr(10) ); >>%HFILE%
echo dbms_output.put_line('host REM ******Data files' ); >>%HFILE%
echo for tbs in c1 loop >>%HFILE%
echo dbms_output.put_line(' alter tablespace '^|^| tbs.tablespace_name ^|^|' begin backup;'); >>%HFILE%
echo for dbf in c2(tbs.tablespace_name) loop >>%HFILE%
echo dbms_output.put_line(' host copy '^|^|dbf.file_name^|^|' %BKP_DIR%\data 1^>^> %LOG_FILE% 2^>^> %ERR_FILE%'); >>%HFILE%
echo end loop; >>%HFILE%
echo dbms_output.put_line(' alter tablespace '^|^|tbs.tablespace_name ^|^|' end backup;'); >>%HFILE%
echo end loop; >>%HFILE%
echo. >>%HFILE%
echo dbms_output.put_line(chr(10) ); >>%HFILE%
echo dbms_output.put_line('host REM ******Control files ' ); >>%HFILE%
echo dbms_output.put_line(' alter database backup controlfile to '^|^|''''^|^|'%BKP_DIR%\control\control_file.ctl'^|^|''''^|^|';'); >>%HFILE%
echo dbms_output.put_line(' alter database backup controlfile to trace;'); >>%HFILE%
echo. >>%HFILE%
echo dbms_output.put_line(chr(10) ); >>%HFILE%
echo dbms_output.put_line('host REM ******Init.ora file ' ); >>%HFILE%
echo dbms_output.put_line(' host copy %INIT_FILE% %BKP_DIR%\control 1^>^> %LOG_FILE% 2^>^> %ERR_FILE%'); >>%HFILE%
echo. >>%HFILE%
echo dbms_output.put_line(chr(10) ); >>%HFILE%
echo dbms_output.put_line('host REM ******Archivelog files' ); >>%HFILE%
echo dbms_output.put_line(' alter system switch logfile;'); >>%HFILE%
echo dbms_output.put_line(' alter system archive log stop;'); >>%HFILE%
echo dbms_output.put_line('host move %ARC_DEST%\* %BKP_DIR%\arch 1^>^> %LOG_FILE% 2^>^> %ERR_FILE%' ); >>%HFILE%
echo dbms_output.put_line(' alter system archive log start;'); >>%HFILE%
(echo HotBackup Completed Successfully & date/T & time/T) >> %LOG_FILE%
(echo HotBackup Completed Successfully & date/T & time/T) >> %LOGFILE%
goto end
:usage
echo Error, Usage: hotbackup_nt.bat SID
goto end
:backupdir
echo Error creating Backup directory structure >> %ERR_FILE%
(echo HOTBACKUP_FAIL:Error creating Backup directory structure & date/T & time/T) >> %LOGFILE%
REM :::::::::::::::::::: End Error handling section
The hot backup program's functionality can be shown with a diagram similar to the one for the cold
backup. The sections and their purposes in the program are the same as for the cold backup.
• Check to see that ORA_HOME, BACKUP_DIR, and TOOLS are set to the correct values according
to your directory structure. These variables are highlighted in the script.
• Verify that CONNECT_USER is set to the correct username and password.
• Define the INIT_FILE variable to the location of the Init.ora file.
• Define the ARC_DEST variable to the location of the archive destination.
• Be sure that the user running the program has Write access to the backup directories.
• When you run the program, pass SID as a parameter.
The backup log file defined by LOG_FILE contains detailed information about each step of the
backup process. This is a very good place to start investigating why a backup has failed or for
related errors. This file will also have the start and end time of backup. ERR_FILE has error
information.
A single line about the success or failure of backup is appended to the SID.log file every time a
backup is performed. This file is located under the directory defined by the LOGDIR variable. The
messages for a hot backup are 'HOTBACKUP_FAIL' if the hot backup failed and 'HotBackup
Completed Successfully' if the backup completed successfully.
The "Create Dynamic Files" section, in the hotbackup_nt.bat creates the hotbackup.sql file
(see Listing 3.13) under the log directory. This generates a list of tablespaces, data, control, and
redo log files from the database. It is called from the hotbackup_nt.bat program.
dbms_output.put_line(chr(10) );
dbms_output.put_line('host REM ******Data files' );
for tbs in c1 loop
dbms_output.put_line(' alter tablespace '|| tbs.tablespace_name ||' begin backup;');
for dbf in c2(tbs.tablespace_name) loop
dbms_output.put_line(' host copy '||dbf.file_name||' c:\backup\orcl\hot\data 1>> hbackup.log 2>> herrors.log');
end loop;
dbms_output.put_line(' alter tablespace '||tbs.tablespace_name ||' end backup;');
end loop;
dbms_output.put_line(chr(10) );
dbms_output.put_line('host REM ******Control files ' );
dbms_output.put_line(' alter database backup controlfile to '||''''||'c:\backup\orcl\hot\control\control_file.ctl'||''''||';');
dbms_output.put_line(' alter database backup controlfile to trace;');
dbms_output.put_line(chr(10) );
dbms_output.put_line('host REM ******Init.ora file ' );
dbms_output.put_line('host copy c:\oracle\admin\orcl\pfile\init.ora c:\backup\orcl\hot\control 1>> hbackup.log 2>> herrors.log');
dbms_output.put_line(chr(10) );
dbms_output.put_line('host REM ******Archivelog files' );
dbms_output.put_line(' alter system switch logfile;');
dbms_output.put_line(' alter system archive log stop;');
dbms_output.put_line('host move c:\oracle\oradata\orcl\archive\* c:\backup\orcl\hot\arch 1>> hbackup.log 2>> herrors.log' );
dbms_output.put_line(' alter system archive log start;');
dbms_output.put_line('exit;');
End;
/
spool off
exit;
The hotbackup.sql file is called from hotbackup_nt.bat and it spools output to the
hotbackup_list.sql SQL file (see Listing 3.14). This file has the commands necessary for
performing a hot backup.
This is only a sample file. Note in the file that the data, control, archive log, and Init.ora files are
copied to their respective backup directories. First, it puts the tablespace into Backup mode, copies
the corresponding files to the backup location, and then turns off Backup mode for that tablespace.
This process is repeated for each tablespace, and each copy command writes the status of the copy
operation to hbackup.log and reports any errors to the herrors.log file.
Listing 3.14 is generated based on the structure of the database. In a real environment, the database
structure changes as more data files or tablespaces get added. Because of this, it is important to
generate the backup commands dynamically, as shown in hotbackup_list.sql. It performs the
actual backup and is called from hotbackup_nt.bat.
The export program (see Listing 3.15) performs a full export of the database under a Windows NT
environment. The export script takes SID, the instance to be backed up, as the input parameter.
set ORA_HOME=c:\oracle\ora81\bin
set ORACLE_SID=%1
set CONNECT_USER=system/manager
set BACKUP_DIR=c:\backup\%ORACLE_SID%\export
set TOOLS=c:\oracomn\admin\my_dba
set LOGDIR=%TOOLS%\localog
set LOGFILE=%LOGDIR%\%ORACLE_SID%.log
:usage
echo Error, Usage: coldbackup_nt.bat SID
goto end
:backupdir
echo Error creating Backup directory structure
(echo EXPORT_FAIL:Error creating Backup directory structure & date/T & time/T) >> %LOGFILE%
This program performs an export of the database by using the parameter file specified by
export_par.txt. In Listing 3.16 is a sample parameter file that performs a full export of the
database. You can modify the parameter file to suit your requirements.
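As a sketch of what such a parameter file might contain (the values shown are assumptions, not the actual Listing 3.16):
full=y
file=c:\backup\orcl\export\full_export.dmp
log=c:\backup\orcl\export\export.log
consistent=y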
• Check to see that ORA_HOME, BACKUP_DIR, and TOOLS are set to the correct values according to
your directory structure. These variables are highlighted in the program.
• Verify that CONNECT_USER is set to the correct username and password.
• Be sure that the user running the program has Write access to the backup directories.
• Edit the parameter file to your specific requirements. Specify the full path of the location of
your parameter file in the program.
• When you run the program, pass SID as a parameter.
The log file specified in the parameter file contains detailed information about each step of the
export process. This is a very good place to start investigating why an export has failed or for
related errors.
A single line about the success or failure of export is appended to the SID.log file every time an
export is performed. This file is located under the directory defined by the LOGDIR variable. The
messages for an export are 'EXPORT_FAIL', if the export failed, and 'Export Completed
successfully', if the export completes successfully.
Recovery Principles
Recovery principles are the same, regardless of whether you are in a Unix or Windows NT
environment. The following are general guidelines for recovery using a cold backup, hot backup,
and export.
Definitions
• Control File—The control file contains records that describe and maintain information
about the physical structure of a database. The control file is updated continuously during
database use and must be available for writing whenever the database is open. If the control
file is not accessible, the database will not open.
• System Change Number (SCN)—The system change number is a clock value for the
database that describes a committed version of the database. The SCN functions as a
sequence generator for a database and controls concurrency and redo record ordering.
Think of the SCN as a timestamp that helps ensure transaction consistency.
• Checkpoint—A checkpoint is a data structure in the control file that defines a consistent
point of the database across all threads of a redo log. Checkpoints are similar to SCNs and
they also describe which threads exist at that SCN. Checkpoints are used by recovery to
ensure that Oracle starts reading the log threads for the redo application at the correct point.
For a parallel server, each checkpoint has its own redo information.
To perform either a complete media recovery or incomplete media recovery, you need to be
familiar with the following three media recovery commands.
• RECOVER DATABASE This command performs a media recovery on all the data files that
require the application of redo.
• This can be used only when the database is mounted but not open.
• This command is generally used when the system data file is lost.
• RECOVER TABLESPACE tablespace_name This command performs a media recovery on all
the data files in the tablespaces listed.
• The database must be mounted and open.
• The tablespace in question must be offline to perform the media recovery.
• To recover the tablespace, you need to mount the database first, put the data file that is in
trouble offline, and then open the database and put the tablespace offline. Then give the
recover tablespace tablespace_name command and put the tablespace online when
the recovery is complete.
• RECOVER DATAFILE 'filename' This command performs a recovery on listed data files.
• The database can be open or closed.
• If the database is open, data file recovery can only recover offline files.
• To recover the data file in question, mount the database and take the troubled data file
offline, open the database and issue the RECOVER DATAFILE 'file_name' command, and then
put the data file online (a minimal sketch follows this list). This command is generally used
when a non-system data file is lost.
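As a minimal sketch of that sequence (the data file name is an assumption used only for illustration):
startup mount
alter database datafile 'c:\oracle\oradata\orcl\users01.dbf' offline;
alter database open;
recover datafile 'c:\oracle\oradata\orcl\users01.dbf'
alter database datafile 'c:\oracle\oradata\orcl\users01.dbf' online;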
You are a new DBA and you get a call from the project manager saying that the users are not able
to connect to the database.
As a first step, try to establish a connection for yourself as a DBA as shown. If the connection
succeeds, try to connect as a regular user and see if you receive any errors during connection,
because some errors that are seen by regular users do not show up when you connect as Internal or
SYSDBA (such as Max sessions reached).
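For instance, you might first try a DBA connection using operating system authentication and then a regular user connection:
$sqlplus "/ as sysdba"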
$sqlplus user/pwd
Now you have determined that you are not able to connect to the database.
As a second step, try to see whether the processes are running by using the following command.
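On Unix, one quick check is a process listing such as the following (a common convention rather than the exact command from the original text):
$ps -ef | grep ora_
If background processes such as ora_pmon_<SID> and ora_smon_<SID> are not listed, the instance is down.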
As a third step, check the alert log file for any errors. The alert log file is located under the
directory defined by BACKGROUND_DUMP_DEST in the Init.ora file.
This file lists any errors encountered by the database. If you see any errors, note the time of the error,
the error number, and the error message. If you do not see any errors, start up the database (sometimes it
will report an error when you try to start up the database). If the database starts, that is wonderful!
If it doesn't start, it will generally complain about the error onscreen and also report the error in the
alert log file. Check the alert log again for more information.
Now you have determined from the error that the database cannot find one of its data files.
As a fourth step, inform the project manager that somebody has caused a problem in the database
and try to find out what happened (a hard disk problem or perhaps somebody deleted the file).
Limit the time you spend on this research based on the time available.
As a fifth step, try to determine what kind of backups you have taken recently and see which one is
most beneficial for recovering as much data as possible. This depends on the types of backups your
site is employing to protect from database crashes.
If you have a hot backup mechanism in place, you can be sure that you can recover all or most of
the data. If you have an export or cold backup mechanism in place, the data changes since the time
of last backup will be lost.
As a sixth step, follow the instructions in this chapter, given your recovery scenario.
To recover a database using a cold backup, just restore all the files from the backup location to
their original locations and open the database. You can find the original physical location in the
trace file you generated as part of the backup. You cannot recover the transactions that occurred
between the last backup and the point of failure—that information is lost.
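As a minimal sketch under Windows NT (the directories are assumptions based on the backup layout used by coldbackup_nt.bat; restore every file listed in the control file trace):
C:\>copy c:\backup\orcl\cold\data\*.* c:\oracle\oradata\orcl
C:\>copy c:\backup\orcl\cold\control\*.ctl c:\oracle\oradata\orcl
C:\>copy c:\backup\orcl\cold\redo\*.log c:\oracle\oradata\orcl
C:\>sqlplus "/ as sysdba"
SQL> startup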
Recovery When a Redo Log File Is Lost
To recover the database when a redo log file is lost or corrupted, clear the affected log file group:
alter database clear logfile group 1;
Or you can create a new control file and open the database in the Reset Logs mode (alter
database open resetlogs). For this, the database needs to be in the NOMOUNT state (startup
NOMOUNT). The resetlogs option resets the redo log sequence numbering and recreates any
missing logfiles. To create the new control file, you need to know the full structure of the database.
We have taken a trace of the control file by using alter database backup controlfile to trace
as part of the backup. Follow the steps explained in Chapter 10, "Database Maintenance and
Reorganization," for creating a new control file.
To recover the database in case of a lost control file, you simply recreate the control file knowing
the structure of the database (from the trace of control file) and open the database with reset logs.
Follow the steps explained in Chapter 10 for creating a new control file.
When the database is running in ARCHIVELOG mode and online backup is being used, there are a
variety of options for recovering the database, up to the point of failure, that provide maximum
protection for your data.
At all costs, we want to be able to fully recover the data in case of a database failure.
Consequently, we always try to perform a complete recovery unless the need is to recover the
database only to a specific point in time for specific reasons, such as those discussed in the next
section, "Incomplete Media Recovery."
The choice of whether to use a closed or open database recovery is based on the type of failure. If
you lose system data files, the only choice is a closed database recovery. If a non-system data file
is lost, you can perform recovery by using either a closed or open database method. Suppose that
you are running a 24/7, mission-critical database, and only part of the database (non-system) is
damaged. In this situation, you can open the database for users by taking the damaged data files
offline and then performing a recovery on the damaged files. This way, users can access the rest of
the database while the recovery is being performed on the damaged data files.
Incomplete media recovery is very useful as well, if a user drops a table accidentally and comes to
you for help, for example. If you know the time the table drop occurred, you can restore the
database from a backup. By using the latest control file, you can roll forward the changes by
applying redo log files up to the point just before the accidental drop (time-based recovery).
startup mount
recover database
At this point, you will be prompted for the location of the archived redo log files, if
necessary.
startup mount
5. After the database is open, take the tablespace offline. For example, if the corrupted data
file belongs to USERS tablespace, use the following command:
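For example, assuming the USERS tablespace as in the text:
alter tablespace users offline temporary;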
Here, tablespace can be taken offline either with a normal, temporary, or immediate
priority. If possible, take the damaged tablespace offline with a normal or temporary
priority to minimize the amount of recovery.
At this point, you will be prompted for the location of the archived redo log files, if
necessary.
startup mount
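The recover step in this cancel-based procedure would be the until cancel form; a minimal sketch:
recover database until cancel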
At this point, you will be prompted for the location of the archived redo log files, if
necessary. Enter cancel to cancel recovery after Oracle has applied the archived redo log
file just prior to the point of corruption. If a backup control file or recreated control file is
being used with incomplete recovery, you should specify the using backup controlfile
option. In cancel-based recovery, you cannot stop in the middle of applying a redo log file.
You either completely apply a redo log file or you don't apply it at all. In time-based
recovery, you can apply to a specific point in time, regardless of the archived redo log
number.
Whenever an incomplete media recovery is being performed or the backup control file is
used for recovery, the database should be opened with the resetlogs option. The resetlogs
option will reset the redo log files.
If you open the database with resetlogs, a full backup of the database should be
performed immediately after recovery. Otherwise, you will not be able to recover changes
made after you reset the logs.
startup mount
For example
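(The timestamp below is illustrative only; Oracle expects the 'YYYY-MM-DD:HH24:MI:SS' format.)
recover database until time '2002-06-14:18:30:00'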
At this point, you will be prompted for the location of the archived redo log files, if
necessary. Oracle automatically terminates the recovery when it reaches the correct time. If
a backup control file or recreated control file is being used with incomplete recovery, you
should specify the using backup controlfile option.
Whenever an incomplete media recovery is being performed or the backup control file is
used, the database should be opened with the resetlogs option, so that it resets the log
numbering.
5. Perform a full backup of the database.
If the database is opened with resetlogs, a full backup of the database should be
performed immediately after recovery. Otherwise, you will not be able to recover the
changes made after you reset the logs.
startup mount
For example
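(The SCN below is illustrative only.)
recover database until change 309121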
At this point, you will be prompted for the location of the archived redo log files, if
necessary. Oracle automatically terminates the recovery when it reaches the correct system
change number (SCN).
If a backup control file or a recreated control file is being used with an incomplete
recovery, you should specify using the backup controlfile option.
If the database is opened with resetlogs, a full backup of the database should be
performed immediately after recovery. Otherwise, you will not be able to recover the
changes made after you reset the logs.
When a system data file is lost or damaged, the only way to recover the database is by doing a
closed database recovery using the RECOVER DATABASE command.
The following command can be used to check the data file status. This command works when the
database is mounted or open.
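A query along these lines shows each data file and its status (a status of RECOVER or OFFLINE flags a file that needs attention):
select file#, name, status from v$datafile;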
The import utility is used to import the database from the dump file generated through the export
utility. This is very useful for transferring data across platforms and importing only specific objects
or users. It works whether archiving is turned on or off. Full database import performance can be
improved by turning off archiving during the import. An import can be performed at one of the following levels:
• Full
• User-level
• Table-level
Full Import
A full import can be used to restore the database in case of a database crash. For example, you
have a full export of the database from yesterday and your database crashed this afternoon. You
can use the import command to restore the database from the previous day's backup. The restore
steps are as follows.
3. Verify the import log for any errors—With this import, the data changes between your
previous backup and the crash will be lost.
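The import step itself might look like the following sketch (the dump and log file names, and the use of ignore=y, are assumptions):
C:\>imp system/manager full=y file=c:\backup\orcl\export\full_export.dmp log=c:\backup\orcl\export\import.log ignore=y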
Table-Level Import
A table level import allows you to import specific objects without importing the whole database.
Example 1:
Suppose, for example, that one of the developers requests that you transfer the EMP and DEPT tables of user
SCOTT from database ORCL to TEST. You can use the following steps to transfer these two tables.
C:\>set ORACLE_SID=ORCL
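The export command itself might look like the following sketch (the dump file name is an assumption):
C:\>exp system/manager tables=(scott.emp,scott.dept) file=emp_dept.dmp log=export.log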
This command exports table data, constraints and any indexes on the table. Because the
tables belong to owner scott, we need to precede them with the owner in the export
command. Verify the export.log file to make sure there are no errors in the export.
SQL>Connect system/manager@TEST
If the TEST database already has EMP and DEPT tables, you can either truncate or drop them before
the import, as shown in the sketch below.
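A sketch of the cleanup and the import (the dump file name and the fromuser/touser mapping are assumptions):
SQL> drop table scott.emp;
SQL> drop table scott.dept;
C:\>imp system/manager@TEST fromuser=scott touser=scott tables=(emp,dept) file=emp_dept.dmp log=import.log
If you truncate the tables instead of dropping them, add ignore=y to the import command so that imp loads the rows into the existing tables.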
Example 2:
Suppose you walk into the office in the morning and a developer meets you in the hallway and
says that he accidentally dropped the SALES table. He wants to see whether you can do anything to
restore the table.
Well, you could do something if you have an export dump file from your previous backup. The
steps to restore the table are as follows (assuming that this happened in the TEST database):
C:\>set ORACLE_SID=TEST
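The import command might look like the following sketch (it assumes the SALES table belongs to SCOTT and that the dump file is the full export from the previous backup):
C:\>imp system/manager fromuser=scott touser=scott tables=(sales) file=c:\backup\test\export\full_export.dmp log=imp_sales.log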
This command imports the SALES table from the previous backup. After the import, check the
import log file for any errors.
Backup and Recovery Tools
Recovery Manager (RMAN)
RMAN is an Oracle-provided tool that allows you to perform backup and recovery operations on the
database. Using RMAN, you can back up and restore data files, control files, and archived redo log
files.
RMAN operates using a recovery catalog to store metadata about backup and recovery operations.
Typically, the recovery catalog is stored in a separate database. If you do not want to use a recovery
catalog, RMAN can use the target database control file to perform backup and recovery operations,
because most of the information in the recovery catalog is also available there. The disadvantage of
using the control file is that RMAN does not support restore or recovery when the control file itself
is lost, so you should make frequent backups of the control file. Using the control file is especially
appropriate for small databases, where installing and administering another database for the sole
purpose of maintaining the recovery catalog is burdensome.
A single recovery catalog is able to store information for multiple target databases. Consequently,
loss of the recovery catalog can be disastrous. You should back up the recovery catalog frequently.
If the recovery catalog is destroyed and no backups of it are available, then you can partially
reconstruct the catalog from the current control file or control file backups.
When you perform a backup using RMAN, information about the backup is stored in the catalog,
and the actual backups (physical files) are stored on disk or on tape (which requires media
management software). When you use RMAN with a recovery catalog, the RMAN environment
comprises the following components (a sample session follows the list):
• RMAN executable
• Recovery catalog database (Database to hold the catalog)
• Recovery catalog schema in the recovery catalog database (Schema to hold the metadata
information)
• Optional Media Management Software (for tape backups)
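As a minimal sketch of an RMAN session against Oracle 8i with a recovery catalog (the connect strings, catalog schema, and backup format are assumptions):
C:\>rman target sys/manager@orcl catalog rman/rman@rcat
RMAN> run {
  allocate channel c1 type disk;
  backup database format 'c:\backup\orcl\rman\df_%U.bkp';
  release channel c1;
}
To operate without a catalog, connect with rman target sys/manager@orcl nocatalog; RMAN then records its metadata in the target database control file.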
Sample Files
Sample oratab File
The oratab file shown in Listing 3.17 is created by the Oracle installer when you install the Oracle
database under the Unix operating system. The installer adds the instance name, Oracle home
directory, and auto-startup flag (Y/N) for the database in the format [SID:ORACLE_HOME:FLAG]. The
auto-startup flag tells whether the Oracle database should be started automatically when the system is rebooted.
# SID:ORACLE_HOME:Y/N
DEV:/u02/oracle/DEV/oracle/8.1.7:N
TEST:/u05/oracle/TEST/oracle/8.1.7:N
#PREPROD:/u06/oracle/PREPROD/oracle/8.1.7:N
Sample Trace of Control File
Listing 3.18 shows the structure of the database. It lists the data files, control files, and redo
log files and their locations. This is useful if you need to recreate the control file. A trace of the
control file can be generated by using the alter database backup controlfile to trace command.