Unit 4 Database Backup Restore and Recovery
Physical backups are backups of the physical files used in storing and recovering
your database, such as datafiles, control files, and archived redo logs. Ultimately,
every physical backup is a copy of files storing database information to some other
location, whether on disk or some offline storage such as tape.
Logical backups contain logical data (for example, tables or stored procedures)
exported from a database with an Oracle export utility and stored in a binary file,
for later re-importing into a database using the corresponding Oracle import utility.
Physical backups are the foundation of any sound backup and recovery strategy.
Logical backups are a useful supplement to physical backups in many
circumstances but are not sufficient protection against data loss without physical
backups. Unless otherwise specified, the term "backup" as used in the backup and
recovery documentation refers to physical backups, and to back up part or all of
your database is to take some kind of physical backup. The focus in the backup and
recovery documentation set will be almost exclusively on physical backups.
As a backup administrator, you may also be asked to perform other duties related to backup and recovery.
While there are several types of problems that can halt the normal operation of an Oracle database or affect database I/O operations, only two typically require DBA intervention and media recovery: media failures and user errors.
Other failures may require DBA intervention to restart the database (after an
instance failure) or allocate more disk space (after statement failure due to, for
instance, a full datafile) but these situations will not generally cause data loss or
require recovery from backup.
For backup and recovery based on physical backups, two solutions are available: Recovery Manager (RMAN) and user-managed backup and recovery using operating system commands.
Both methods are supported by Oracle Corporation and are fully documented.
Recovery Manager is, however, the preferred solution for database backup and
recovery. It can perform the same types of backup and recovery available through
user-managed methods more easily, provides a common interface for backup tasks
across different host operating systems, and offers a number of backup techniques
not available through user-managed methods.
Most of the backup and recovery documentation set will focus on RMAN-based
backup and recovery. Whether you use RMAN or user-managed methods, you can
supplement your physical backups with logical backups of schema objects made
using data export utilities. Data thus saved can later be imported to re-create this
data after restore and recovery. However, logical backups are for the most part
beyond the scope of the backup and recovery documentation.
Introduction to Backup
A backup is a copy of data. This copy can include important parts of the database,
such as the control file and datafiles. A backup is a safeguard against unexpected
data loss and application errors. If you lose the original data, then you can
reconstruct it by using a backup.
Backups are divided into physical backups and logical backups. Physical
backups, which are the primary concern in a backup and recovery strategy, are
copies of physical database files. You can make physical backups with either the
Recovery Manager (RMAN) utility or operating system utilities. In contrast,
logical backups contain logical data (for example, tables and stored procedures)
extracted with an Oracle utility and stored in a binary file. You can use logical
backups to supplement physical backups.
Types of Backups:
Consistent Backups:
A consistent backup (also called a cold backup, a static backup, or an offline backup) is taken after the database has been completely shut down, so that no changes can be made to the datafiles while they are being copied. Because the database must remain closed for the duration of the backup, organizations that cannot afford to shut down the database even briefly cannot rely on the consistent backup technique.
A consistent backup is one in which the files being backed up contain all changes
up to the same system change number (SCN). This means that the files in the
backup contain all the data taken from the same point in time.
The only way to make a consistent whole database backup is to shut down the
database with the NORMAL, IMMEDIATE, or TRANSACTIONAL options and
make the backup while the database is closed. When the database is shut down cleanly, Oracle makes the control files and datafiles consistent to the same SCN during a database checkpoint. The only tablespaces in a consistent backup that are allowed
to have older SCNs are read-only and offline normal tablespaces, which are still
consistent with the other datafiles in the backup because no changes have been
made to them.
Inconsistent Backups:
An inconsistent backup is a backup of one or more database files taken while the database is open and running in ARCHIVELOG mode; it is also called a hot backup or an online backup. A backup taken after the database has shut down abnormally is likewise inconsistent. This technique is used for large databases that must run continuously and cannot be shut down for backups.
Redo log files: These files are created during database creation, and a database has a minimum of two redo log files. The redo log records the history of all changes made to your database.
No-archive log mode (NOARCHIVELOG): By default, an Oracle database runs in NOARCHIVELOG mode. In this mode the database overwrites the redo log files as they fill instead of archiving them, so the older redo is permanently lost. This can make recovery of old data impossible, which is why it is always recommended to run your database in ARCHIVELOG mode.
Archive log mode (ARCHIVELOG): In ARCHIVELOG mode the database makes copies of the online redo log files after they fill; these copies are called archived redo logs.
In this backup the files being backed up do not contain all the changes made at all
the SCNs. In other words, some changes are missing. This means that the files in
the backup contain data taken from different points in time. This can occur because
the datafiles are being modified as backups are being taken. Oracle recovery makes
inconsistent backups consistent by reading all archived and online redo logs,
starting with the earliest SCN in any of the datafile headers, and applying the
changes from the logs back into the datafiles.
If the database must be up and running 24 hours a day, seven days a week, then
you have no choice but to perform inconsistent backups of the whole database. A
backup of online datafiles is called an online backup. This requires that you run
your database in ARCHIVELOG mode.
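Before relying on online backups, you can confirm the archiving mode from SQL*Plus; a minimal check (not part of the original text):

SQL> archive log list
SQL> select log_mode from v$database;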
If you run the database in ARCHIVELOG mode, then you do not have to back up
the whole database at one time. For example, if your database contains seven
tablespaces, and if you back up the control file as well as a different tablespace
each night, then in a week you will back up all tablespaces in the database as well
as the control file. You can consider this staggered backup as a whole database
backup. However, if such a staggered backup must be restored, then you need to
recover using all archived redo logs that were created since the earliest backup was
taken.
The figure below illustrates the valid configuration options given the type of backup that is performed.
Tablespace Backups:
A tablespace backup is a backup of the datafiles that constitute the tablespace. For
example, if tablespace users contains datafiles 2, 3, and 4, then a backup of
tablespace users backs up these three datafiles.
Tablespace backups, whether online or offline, are valid only if the database is
operating in ARCHIVELOG mode. The reason is that redo is required to make the
restored tablespace consistent with the other tablespaces in the database.
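For example, an online backup of the users tablespace can be taken with a single RMAN command (a minimal sketch; the tablespace name is only illustrative):

RMAN> backup tablespace users;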
Datafile Backups:
A datafile backup is a backup of a single datafile. Datafile backups, which are not
as common as tablespace backups, are valid in ARCHIVELOG databases. The
only time a datafile backup is valid for a database in NOARCHIVELOG mode is
if:
Every datafile in a tablespace is backed up. You cannot restore the database
unless all datafiles are backed up.
The datafiles are read only or offline-normal.
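With the database in ARCHIVELOG mode, a single datafile can be backed up by number or by name; a minimal sketch (the file number and path are examples taken from the TESTDB listing later in this unit):

RMAN> backup datafile 7;
RMAN> backup datafile 'C:\ORACLE_19C_BASE\ORADATA\TESTDB\USERS01.DBF';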
You can instruct RMAN to automatically back up the control file whenever you run backup jobs. The command is CONFIGURE CONTROLFILE AUTOBACKUP ON.
Because the autobackup uses a default filename, RMAN can restore this backup
even if the RMAN repository is unavailable. Hence, this feature is extremely
useful in a disaster recovery scenario.
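For example, enabling and then verifying the autobackup takes two commands at the RMAN prompt:

RMAN> configure controlfile autobackup on;
RMAN> show controlfile autobackup;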
You can also make manual backups of the control file using SQL statements or RMAN.
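The following are standard ways to take a manual control file backup (the backup path is only an example):

SQL> alter database backup controlfile to 'E:\DB_BACKUP\control_backup.ctl';
SQL> alter database backup controlfile to trace;
RMAN> backup current controlfile;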
Because archived redo logs are essential to recovery, you should back them up
regularly. If possible, then back them up regularly to tape.
You can make backups of archived logs with RMAN or with operating system utilities.
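For example, RMAN can back up all archived logs, and optionally delete them from disk once they are safely backed up (a minimal sketch):

RMAN> backup archivelog all;
RMAN> backup archivelog all delete input;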
Before we dive into restoring and recovering databases, let’s first understand under
what circumstances we might need to recover a database, and what are the
different ways in which a database can fail?
User errors
Bad code
Loss of a file, control file, redo log, or datafile
Corrupt blocks
Upgrade issues
Bad changes
Disasters
Oracle provides various options for recovery, such as rolling back a query or
returning to a point before a change.
Restore and recovery are the phases used to bring the database from the backup to a desired SCN (System Change Number), which is usually the present.
To restore a datafile or control file from backup is to retrieve the file onto disk
from a backup location on tape, disk or other media, and make it available to the
database server.
The figure below illustrates the basic principle of backing up, restoring, and recovering a database.
In this example a full backup of a database (copies of its datafiles and control file)
is taken at SCN 100. Redo logs generated during the operation of the database
capture all changes that occur between SCN 100 and SCN 500. Along the way,
some logs fill and are archived. At SCN 500, the datafiles of the database are lost
due to a media failure. The database is then returned to its transaction-consistent
state at SCN 500, by restoring the datafiles from the backup taken at SCN 100,
then applying the transactions captured in the archived and online redo logs, and undoing the uncommitted transactions.
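Expressed as RMAN commands, this complete recovery amounts to restoring the datafiles and then applying all available redo; a minimal sketch, not the exact commands used in the figure:

RMAN> startup mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open;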
For a datafile to be available for media recovery, one of two things must be true: either the database is mounted but not open, or the database is open and the datafile to be recovered is offline.
A datafile that needs media recovery cannot be brought online until media
recovery has been completed. A database cannot be opened if any of the online
datafiles needs media recovery.
Occasionally, however, you need to return a database to its state at a past point in
time. For example, to undo the effect of a user error, such as dropping or deleting
the contents of a table, you may want to return the database to its contents before
the delete occurred.
Point-in-time recovery is also your only option if you have to perform a recovery
and discover that you are missing an archived log covering time between the
backup you are restoring from and the target SCN for the recovery. Without the
missing log, you have no record of the updates to your datafiles during that period.
Your only choice is to recover the database from the point in time of the restored
backup, as far as the unbroken series of archived logs permits, then perform an
OPEN RESETLOGS and abandon all changes in or after the missing log. (If you
discover that you have lost archived logs and your database is still up, you should
do a full backup immediately.)
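A database point-in-time recovery of this kind can be expressed in RMAN roughly as follows (a sketch only; the SCN value is illustrative):

RMAN> startup mount;
RMAN> run {
  set until scn 499;
  restore database;
  recover database;
}
RMAN> alter database open resetlogs;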
Media recovery must be explicitly invoked by a user. The database will not
run media recovery on its own.
Media recovery applies needed changes to datafiles that have been restored
from backup, not to online datafiles left over after a crash.
Media recovery must use archived logs as well as the online logs, to find
changes reaching back to the time of the datafile backup.
Unlike the forms of recovery performed manually after a data loss, crash recovery
uses only the online redo log files and current online datafiles, as left on disk after
the instance failure. Archived logs are never used during crash recovery, and
datafiles are never restored from backup.
Before you can access the database's contents, you must start Oracle. The Oracle instance is the collection of memory structures and background processes that gives you access to the database. If the database server terminates abnormally, the instance dies, but the data still survives in the database files. After the server is up and running again, you must start the instance before you can access the data.
When you start an instance, there are three stages the instance goes through:
1. NOMOUNT – In this mode, only the parameter file is accessed; the instance allocates its memory and spawns the background processes required for the Oracle instance. Typically, this mode is used only to modify the parameter file's contents or to create the database.
2. MOUNT – In mount mode, the control files are accessed for the first time. Certain maintenance operations require the database to be in MOUNT mode.
3. OPEN – This mode is the first mode to touch the datafiles and online redo logs.
Once Oracle has contacted these files, Oracle lets users connect to the instance so
as to interact with the data.
When you issue a normal STARTUP command, the instance proceeds from
NOMOUNT to MOUNT to OPEN. You can STARTUP MOUNT the instance, in
which case the instance goes from NOMOUNT to MOUNT mode, but does not get
to OPEN mode.
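The stages can also be stepped through explicitly from SQL*Plus; a minimal example:

SQL> startup nomount
SQL> alter database mount;
SQL> alter database open;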
After logging in successfully, go to RMAN for the backup and recovery operations. Here we will perform an inconsistent (hot) backup and recovery from the RMAN prompt.
If archive logging is disabled, first enable ARCHIVELOG mode using the following procedure:

SQL> shutdown immediate;
Database closed.
Database dismounted.

SQL> startup mount;
Database mounted.

SQL> alter database archivelog;
Database altered.

SQL> alter database open;
Database altered.
The existing tablespaces, the current container, and their datafiles are then listed from SQL*Plus (output from the original session, abridged):

TABLESPACE_NAME: SYSTEM, SYSAUX, UNDOTBS1, TEMP, USERS
CON_NAME: CDB$ROOT

FILE_NAME
C:\ORACLE_19C_BASE\ORADATA\TESTDB\SYSTEM01.DBF
C:\ORACLE_19C_BASE\ORADATA\TESTDB\SYSAUX01.DBF
C:\ORACLE_19C_BASE\ORADATA\TESTDB\UNDOTBS01.DBF
C:\ORACLE_19C_BASE\ORADATA\TESTDB\USERS01.DBF

A test tablespace TBS2 is then created (Tablespace created.) and extended with additional datafiles (Tablespace altered. twice). Querying DBA_DATA_FILES for TBS2 afterwards shows:

FILE_ID  FILE_NAME                  TABLESPACE_NAME
16       E:\DB_BACKUP\TBS2_01.DBF   TBS2
17       E:\DB_BACKUP\TBS2_02.DBF   TBS2
18       E:\DB_BACKUP\TBS2_03.DBF   TBS2
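The SQL statements behind this output are not reproduced in the handout; a sketch of the kind of statements typically used is shown below (the datafile sizes, and whether TBS2 was created with one datafile and then extended, are assumptions):

SQL> select tablespace_name from dba_tablespaces;
SQL> show con_name
SQL> select file_name from dba_data_files;
SQL> create tablespace tbs2 datafile 'E:\DB_BACKUP\TBS2_01.DBF' size 50m;
SQL> alter tablespace tbs2 add datafile 'E:\DB_BACKUP\TBS2_02.DBF' size 50m;
SQL> alter tablespace tbs2 add datafile 'E:\DB_BACKUP\TBS2_03.DBF' size 50m;
SQL> select file_id, file_name, tablespace_name from dba_data_files where tablespace_name = 'TBS2';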
RMAN is then started and connected to the target database (Recovery Manager banner, Version 19.3.0.0.0, abridged).
// Now look at the available datafiles in the database that we want to back up. The datafile listing from the original session (abridged; sizes in MB):

File  Size(MB)  Tablespace          RB segs  Datafile Name
3     780       SYSAUX              NO       C:\ORACLE_19C_BASE\ORADATA\TESTDB\SYSAUX01.DBF
4     65        UNDOTBS1            YES      C:\ORACLE_19C_BASE\ORADATA\TESTDB\UNDOTBS01.DBF
5     260       PDB$SEED:SYSTEM     NO       C:\ORACLE_19C_BASE\ORADATA\TESTDB\PDBSEED\SYSTEM01.DBF
6     280       PDB$SEED:SYSAUX     NO       C:\ORACLE_19C_BASE\ORADATA\TESTDB\PDBSEED\SYSAUX01.DBF
7     5         USERS               NO       C:\ORACLE_19C_BASE\ORADATA\TESTDB\USERS01.DBF
8     100       PDB$SEED:UNDOTBS1   NO       C:\ORACLE_19C_BASE\ORADATA\TESTDB\PDBSEED\UNDOTBS01.DBF
9     270       TESTPDB:SYSTEM      NO       C:\ORACLE_19C_BASE\ORADATA\TESTDB\TESTPDB\SYSTEM01.DBF
10    320       TESTPDB:SYSAUX      NO       C:\ORACLE_19C_BASE\ORADATA\TESTDB\TESTPDB\SYSAUX01.DBF
11    100       TESTPDB:UNDOTBS1    NO       C:\ORACLE_19C_BASE\ORADATA\TESTDB\TESTPDB\UNDOTBS01.DBF
12    5         TESTPDB:USERS       NO       C:\ORACLE_19C_BASE\ORADATA\TESTDB\TESTPDB\USERS01.DBF
13    0         TBS1                NO       E:\DB_BACKUP\TBS1_01.DBF
14    0         TBS1                NO       E:\DB_BACKUP\TBS1_02.DBF
16    0         TBS2                NO       E:\DB_BACKUP\TBS2_01.DBF
17    0         TBS2                NO       E:\DB_BACKUP\TBS2_02.DBF
18    0         TBS2                NO       E:\DB_BACKUP\TBS2_03.DBF

Temporary file: 2  36  PDB$SEED:TEMP  (maxsize 32767)  C:\ORACLE_19C_BASE\ORADATA\TESTDB\PDBSEED\TEMP012022-09-01_22-33-32-428-PM.DBF

The whole-database backup then completes, and RMAN reports the backup pieces it writes:

piece handle=E:\DB_BACKUP\TESTDB\BACKUPSET\2022_11_05\O1_MF_NNNDF_TAG20221105T190326_KPDRO2JM_.BKP tag=TAG20221105T190326 comment=NONE
piece handle=E:\DB_BACKUP\TESTDB\AUTOBACKUP\2022_11_05\O1_MF_S_1119985407_KPDRO42T_.BKP comment=NONE
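The RMAN commands for this part of the session are not reproduced in the handout; a datafile report and a whole-database backup of this kind are typically produced by commands of this form (a sketch, not the verbatim lab commands):

RMAN> report schema;
RMAN> backup database;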
// Now go to the backup location and you can see the backed-up data.
exit;
Then connect to SQL*Plus as SYSDBA (for example, with sqlplus / as sysdba) and check the recovery file destination.
// If you have already allocated space for the backup destination, there is no need to allocate it again, but space must be allocated before backups are taken. In the original session the db_recovery_file_dest parameter is displayed, and the recovery destination and its size are then changed with ALTER SYSTEM (each change returning: System altered.).
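A minimal sketch of the statements typically used here (the size and destination values are assumptions; the backup piece handles earlier in the session suggest that E:\DB_BACKUP was used as the destination):

SQL> show parameter db_recovery_file_dest
SQL> alter system set db_recovery_file_dest_size = 10g scope=both;
SQL> alter system set db_recovery_file_dest = 'E:\DB_BACKUP' scope=both;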
To simulate a failure, the database is now shut down from SQL*Plus:

Connected.

SQL> shutdown immediate;
Database closed.
Database dismounted.

// Now go to the backup file and corrupt the data by editing the file.

SQL> startup;
Database mounted.

RMAN is then started again (Version 19.3.0.0.0 banner abridged).
The available backups are then listed in RMAN (output abridged); two backup pieces are reported, each followed by a header line of the form "File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name" and the datafiles it contains:

Piece Name: E:\DB_BACKUP\TESTDB\BACKUPSET\2022_11_05\O1_MF_NNNDF_TAG20221105T190326_KPDRO2JM_.BKP
Piece Name: E:\DB_BACKUP\TESTDB\BACKUPSET\2022_11_05\O1_MF_NNNDF_TAG20221105T191604_KPDSDRW8_.BKP
// Then try to restore and recover the damaged file. In the original session a LIST command was mistyped (for example, LIST BACKUP DATAFILE ... instead of LIST BACKUP OF DATAFILE ...), which produced syntax errors of the form:

RMAN-00571: ===========================================================
RMAN-01009: syntax error: found "datafile": expecting one of: "backed, by, completed, controlfile, device, for, guid, like, of, recoverable, summary, tag, ;"

A datafile copy belonging to tablespace TBS2 is also listed:

Name: E:\DB_BACKUP\TESTDB\DATAFILE\O1_MF_TBS2_KPDSHJW3_.DBF
Tag: TAG20221105T191732
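For reference, when a valid backup of a damaged datafile does exist, the usual RMAN sequence is the following (a sketch; file number 18 is taken from the TBS2 listing above):

RMAN> sql 'alter database datafile 18 offline';
RMAN> restore datafile 18;
RMAN> recover datafile 18;
RMAN> sql 'alter database datafile 18 online';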
// No usable backup of the damaged file is available.
RMAN> exit
// We therefore have to re-create the datafile with the same name, location, and size as when it was initially created, and then recover it.
Database altered.
Database altered.
// Here the database opened successfully. Now you can run queries against the database again.
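A minimal sketch of this recovery path, assuming the damaged file is E:\DB_BACKUP\TBS2_03.DBF and that all redo generated since it was created is still available (the exact statements used in the original session are not shown):

SQL> alter database create datafile 'E:\DB_BACKUP\TBS2_03.DBF' as 'E:\DB_BACKUP\TBS2_03.DBF';
SQL> recover datafile 'E:\DB_BACKUP\TBS2_03.DBF';
SQL> alter database datafile 'E:\DB_BACKUP\TBS2_03.DBF' online;
SQL> alter database open;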
First of all, you have to identify your backup and recovery requirements. With these requirements in mind, decide how you can take advantage of the features related to backup and recovery, and look at how each feature meets some requirement of your backup strategy.
Once you decide which features to use in your recovery strategy, you can plan your
backup strategy.
Your data recovery strategy should include responses to any number of database
failure scenarios. The key to an effective, efficient strategy is envisioning failure
modes, matching Oracle database recovery techniques and tools to the failure
modes in which they are useful, and then making sure you incorporate the
necessary backup types to support those recovery techniques.
Your plans for data recovery are the basis of your backup strategy. After making a recovery plan, you have to decide when to perform database backups, which parts of a database you should back up, what tools Oracle provides
for those backups, and how to configure your database to improve its robustness
and make backup and recovery easier. Of course, the specifics of your strategy
must balance the needs of your restore strategy with questions of cost, resources,
personnel and other factors.
Protecting your redundancy set (the set of files needed to recover an Oracle database from the failure of any of its files, such as a datafile, control file, or online redo log, is called the redundancy set)
Deciding whether to use a Flash Recovery Area
Deciding Between Archivelog and Nonarchivelog Mode
Deciding whether to use Oracle Flashback Feature and Restore Point
Choosing a Backup Retention Policy
Archiving Older Backups
Determining Backup Frequency
Performing Backup before and after you make structural change
Scheduling Backups for Frequently-Updated Tablespaces
Backing up after NOLOGGING operations
Exporting Data for added protection and flexibility
Preventing the Backup of Online Redo Logs
Keeping records of the Hardware and Software Configuration of the Server
Practice backup and recovery techniques in a test environment before and after you
move to a production system. In this way, you can measure the thoroughness of
your strategies and minimize problems before they occur in a real situation.
Performing test recoveries regularly ensures that your archiving, backup, and
recovery procedures work. It also helps you stay familiar with recovery
procedures, so that you are less likely to make a mistake in a crisis.
If you use RMAN, then one option is to run the DUPLICATE command to create
a test database using backups of your production database. If you perform user-
managed backup and recovery, then you can either create a new database, a
standby database, or a copy of an existing database to test your backups.
Data Pump (Data Dump):
A dump file is a binary file that contains schema objects and data exported from a database. It is used to migrate individual objects, a schema, or a whole database from one system to another. Before dumping data, you have to create a dump directory on disk and a corresponding directory object in the database. In this way you can export and import a specific table or other data from an Oracle database; to export a single table, just mention the table name.
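A minimal sketch of this flow with the Data Pump utilities, assuming a directory object named dump_dir pointing at E:\DB_BACKUP, a user scott, and a table employees (all of these names are examples, not taken from the handout):

SQL> create directory dump_dir as 'E:\DB_BACKUP';
SQL> grant read, write on directory dump_dir to scott;

C:\> expdp scott/tiger directory=dump_dir dumpfile=emp.dmp tables=employees
C:\> impdp scott/tiger directory=dump_dir dumpfile=emp.dmp tables=employees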
1. After identifying which files are damaged, place the database in the
appropriate state for restore and recovery. For example, if some but not all
datafiles are damaged, then take the affected tablespaces offline while the
database is open.
2. Restore the files with an operating system utility. If you do not have a
backup, it is sometimes possible to perform recovery if you have the
necessary redo logs dating from the time when the datafiles were first
created and the control file contains the name of the damaged file.
If you cannot restore a datafile to its original location, then relocate the
restored datafile and change the location in the control file.
3. Restore any necessary archived redo log files.
4. Use the SQL*Plus RECOVER command to recover the datafile backups.
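For example, after restoring the damaged datafiles of the users tablespace from an operating system backup, the recovery itself might look like this (a sketch; the tablespace name is illustrative):

SQL> alter tablespace users offline immediate;
-- restore the damaged datafile(s) here with an operating system copy
SQL> recover tablespace users;
SQL> alter tablespace users online;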
High Availability:
Availability is the degree to which an application and database service is available.
Availability is measured by the perception of an application's user. Users
experience frustration when their data is unavailable or the computing system is
not performing as expected, and they do not understand or care to differentiate
between the complex components of an overall solution. Performance failures due
to higher than expected usage create the same disruption as the failure of critical
components in the architecture. If a user cannot access the application or database
service, it is said to be unavailable. Generally, the term downtime is used to refer
to periods when a system is unavailable.
Users who want their systems to be always ready to serve them need high
availability. A system that is highly available is designed to provide uninterrupted
computing services during essential time periods, during most hours of the day,
and most days of the week throughout the year; this measurement is often shown
as 24x365. Such systems may also need a high availability solution for planned
maintenance operations such as upgrading a system's hardware or software.
Recoverability: Even though there may be many ways to recover from a failure, it is important to determine what types of failures may occur in your high availability environment and how to recover from those failures quickly in order to meet your business requirements.
For example, if a critical table is accidentally deleted from the database, what
action should you take to recover it? Does your architecture provide the ability to
recover in the time specified in a service-level agreement (SLA)?
Importance of Availability
The importance of high availability varies among applications. Databases and the
internet have enabled worldwide collaboration and information sharing by
extending the reach of database applications throughout organizations and
communities. This reach emphasizes the importance of high availability in data
management solutions.
Both small businesses and global enterprises have users all over the world who
require access to data 24 hours a day. Without this data access, operations can stop,
and revenue is lost. Users now demand service-level agreements from their
information technology (IT) departments and solution providers, reflecting the
increasing dependence on these solutions. Increasingly, availability is measured in
dollars, euros, and yen, not just in time and convenience.
Oracle Data Guard:
The Oracle Data Guard broker logically groups the primary and standby databases into a broker configuration that allows the broker to manage and monitor them together as an integrated unit. You can manage a broker configuration using either Oracle Enterprise Manager Cloud Control (Cloud Control) or the Oracle Data Guard command-line interface (DGMGRL).
Primary Database:
An Oracle Data Guard configuration contains one production database, also
referred to as the primary database, that functions in the primary role. The primary
database is the database that is accessed by most of your applications. The primary
database can be either a single-instance Oracle database or an Oracle Real
Application Clusters (Oracle RAC) database.
Standby Databases:
A standby database is a transactionally consistent copy of the primary database.
Using a backup copy of the primary database, you can create up to thirty standby
databases and incorporate them into an Oracle Data Guard configuration.
Far Sync Instances:
A far sync instance manages a control file, receives redo into standby redo logs
(SRLs), and archives those SRLs to local archived redo logs, but that is where the
similarity with standbys ends. A far sync instance does not have user data files,
cannot be opened for access, cannot run redo apply, and can never function in the
primary role or be converted to any type of standby database.
Far sync instances are part of the Oracle Active Data Guard Far Sync feature,
which requires an Oracle Active Data Guard license.
Recovery Appliance:
Recovery Appliance offloads most Oracle Database backup and restore processing
to a centralized backup system. It enables you to achieve significant efficiencies in
storage utilization, performance, and manageability of backups.
Flashback Operation:
It is sometimes necessary to return some objects in your database, or the entire database, to a previous state following a mistaken database update. For example, a user or DBA might erroneously delete or update the contents of one or more tables, drop database objects that are still needed during an update to an application, or run a large batch update that fails midway.
In general, flashback features are more efficient and less disruptive than media
recovery in most situations in which they apply.
Most of the flashback features of Oracle operate at the logical level, enabling you
to view and manipulate database objects. Except for Oracle Flashback Drop, the
logical flashback features rely on undo data, which are records of the effects of
each database update and the values overwritten in the update.
Oracle Flashback Query: You can specify a target time and run queries
against a database, viewing results as they would appear at the target time.
To recover from an unwanted change like an update to a table, you could
choose a target time before the error and run a query to retrieve the contents
of the lost rows.
Oracle Flashback Version Query: You can view all versions of all rows
that ever existed in one or more tables in a specified time interval. You can
also retrieve metadata about the differing versions of the rows, including
start and end time, operation, and transaction ID of the transaction that
created the version. You can use this feature to recover lost data values and
to audit changes to the tables queried.
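Hedged examples of both kinds of query (the table, column, and key values are illustrative only):

SQL> select * from employees as of timestamp (systimestamp - interval '1' hour);
SQL> select versions_starttime, versions_endtime, versions_operation, salary
     from employees
     versions between timestamp (systimestamp - interval '1' hour) and systimestamp
     where employee_id = 100;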
A flashback data archive enables you to use some logical flashback features to
access data from far back in the past. A flashback data archive consists of one or
more tablespaces or parts of tablespaces. When you create a flashback data archive,
you specify the name, retention period, and tablespace. You can also specify a
default flashback data archive. The database automatically purges old historical
data the day after the retention period expires.
You can turn flashback archiving on and off for individual tables. By default,
flashback archiving is turned off for every table.
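For example, assuming a tablespace users with free space and illustrative names (not from the handout):

SQL> create flashback archive fda1 tablespace users retention 1 year;
SQL> alter table employees flashback archive fda1;
SQL> alter table employees no flashback archive;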
Flashback Database:
Flashback Database enables you to revert an Oracle Database to a previous point in
time. At the physical level, Oracle Flashback Database provides a more efficient
data protection alternative to database point-in-time recovery (DBPITR). If the
current data files have unwanted changes, then you can use the RMAN command
FLASHBACK DATABASE to revert the data files to their contents at a past time.
The end product is much like the result of a DBPITR, but is generally much faster
because it does not require restoring data files from backup and requires less redo
than media recovery.
Flashback Database uses flashback logs to access past versions of data blocks and
some information from archived redo logs. Flashback Database requires that you
configure a fast recovery area for a database because the flashback logs can only be
stored there. Flashback logging is not enabled by default. Space used for flashback
logs is managed automatically by the database and balanced against space required
for other files in the fast recovery area.
Oracle Database also supports restore points along with Flashback Database and
backup and recovery. A restore point is an alias corresponding to a system change
number (SCN). You can create a restore point at any time if you anticipate needing
to return part or all of a database to its contents at that time. A guaranteed restore
point ensures that you can use Flashback Database to return a database to the time
of the restore point.
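A minimal sketch, assuming flashback logging is enabled and a fast recovery area is configured (the restore point name is illustrative):

SQL> create restore point before_upgrade guarantee flashback database;

-- later, with the database mounted but not open:
RMAN> flashback database to restore point before_upgrade;
RMAN> alter database open resetlogs;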
End of Unit-4