Oracle Database 12c Administration Workshop
Done by
Course Content
• Introduction
• Exploring Oracle Database Architecture
• Creating an Oracle Database
• Configuring the Oracle Network Environment
• Managing the Database Instance
• Managing User Security
• Managing Database Storage Structures
• Implementing Oracle Database Auditing
• Backup and Recovery: Concepts & Configuration
• Performing Database Backups & Recovery
• Improving Your Backups
• Moving Data
Requirements
[Figure: a user process issues SQL (SQL> Select …) to a server process; the connection is the pathway between them, and the session is the user's state within the instance.]
Shared Pool

[Figure: System Global Area (SGA) showing the shared pool (library cache, reserved pool, server result cache, other), the database buffer cache, the redo log buffer, the large pool, the Java pool, the Streams pool, and the fixed SGA.]

A shared SQL area contains the parse tree and execution plan for a given SQL statement. Oracle Database saves memory by using one shared SQL area for SQL statements run multiple times, which often happens when many users run the same application.

The server result cache contains the SQL query result cache and the PL/SQL function result cache, which share the same infrastructure. The server result cache contains result sets, not data blocks.

The reserved pool is a memory area in the shared pool that Oracle Database can use to allocate large contiguous chunks of memory.
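As an illustrative sketch (the HR schema's EMPLOYEES table is used only as an example), a query can ask that its result set be kept in the server result cache by using the RESULT_CACHE hint; later executions of the same statement can then be answered from the cached result instead of re-reading data blocks:

-- Request that this query's result set be kept in the server result cache
SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM   hr.employees
GROUP  BY department_id;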
Database Buffer Cache
• Is part of the SGA
• Holds copies of data blocks that are read from data files
• Is shared by all concurrent users

[Figure: SGA showing the database buffer cache (with keep pool, recycle pool, and nK buffer caches), the shared pool, the redo log buffer, the large pool, the Java pool, the Streams pool, and the fixed SGA.]

The database buffer cache is the portion of the SGA that holds block images read from the data files or constructed dynamically to satisfy the read consistency model. All users who are concurrently connected to the instance share access to the database buffer cache.

The first time an Oracle Database user process requires a particular piece of data, it searches for the data in the database buffer cache. If the process finds the data already in the cache (a cache hit), it can read the data directly from memory. If the process cannot find the data in the cache (a cache miss), it must copy the data block from a data file on disk into a buffer in the cache before accessing the data. Accessing data through a cache hit is faster than accessing data through a cache miss.

The keep buffer pool and the recycle buffer pool are used for specialized buffer pool tuning. The keep buffer pool is designed to retain buffers in memory longer than the LRU would normally retain them. The recycle buffer pool is designed to flush buffers from memory faster than the LRU normally would.
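A minimal sketch of directing a segment to one of the specialized pools (the table name is illustrative; the keep pool must also be given memory, for example through DB_KEEP_CACHE_SIZE):

-- Size the keep pool (value is illustrative), then assign a table to it
ALTER SYSTEM SET db_keep_cache_size = 64M;
ALTER TABLE hr.employees STORAGE (BUFFER_POOL KEEP);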
Redo Log Buffer
• Is a circular buffer in the SGA
• Holds information about changes made to the database
• Contains redo entries that have the information to redo changes made by operations such as DML and DDL

[Figure: SGA showing the redo log buffer alongside the shared pool, the database buffer cache, the large pool, the Java pool, the Streams pool, and the fixed SGA.]

The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct (or redo) changes that are made to the database by DML, DDL, or internal operations. Redo entries are used for database recovery if necessary.

As the server process makes changes to the buffer cache, redo entries are generated and written to the redo log buffer in the SGA. The redo entries take up continuous, sequential space in the buffer. The log writer background process writes the redo log buffer to the active redo log file (or group of files) on disk.
a. Shared pool
b. PGA
c. Buffer cache
d. User session data
What is read into the database buffer cache from data files?
a. Rows
b. Changes
c. Blocks
d. SQL
Server Processes
[Figure: a user process connects through the listener to a dedicated server process. The server process has its own PGA and works against the instance's SGA. Required background processes include DBWn, CKPT, LGWR, SMON, PMON, RECO, LREG, MMON, and MMNL; optional processes include ARCn and others. In a Grid Infrastructure configuration (ASM and Oracle Restart), the ASM and database instances are separate, and processes such as ohas, ocssd, diskmon, orarootagent, oraagent, and cssdagent are also present.]

Oracle Database creates server processes to handle the requests of user processes connected to the instance. The user process represents the application or tool that connects to the Oracle database. It may be on the same machine as the Oracle database, or it may exist on a remote client and use a network to reach the Oracle database. The user process first communicates with a listener process that creates a server process in a dedicated environment.

Server processes created on behalf of each user's application can perform one or more of the following:
• Parse and run SQL statements issued through the application.
• Read necessary data blocks from data files on disk into the shared database buffers of the SGA (if the blocks are not already present in the SGA).
• Return results in such a way that the application can process the information.
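One way to see the server processes that have been created for connected sessions is to join the V$SESSION and V$PROCESS views, for example:

-- List connected user sessions and their associated server processes
SELECT s.sid, s.username, s.program AS client_program, p.spid AS server_os_pid
FROM   v$session s
JOIN   v$process p ON p.addr = s.paddr
WHERE  s.username IS NOT NULL;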
Database Writer Process (DBWn)
Writes modified (dirty) buffers in the database buffer cache to disk:
• Asynchronously while performing other processing
• To advance the checkpoint

[Figure: the Database Writer process (DBWn) writing buffers from the database buffer cache to the data files.]

The Database Writer process (DBWn) writes the contents of buffers to data files. The DBWn processes are responsible for writing modified (dirty) buffers in the database buffer cache to disk. Although one Database Writer process (DBW0) is adequate for most systems, you can configure additional processes to improve write performance.
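For example, the number of Database Writer processes is controlled by the DB_WRITER_PROCESSES initialization parameter; a sketch of raising it (the value is illustrative, and because the parameter is static, an instance restart is required):

ALTER SYSTEM SET db_writer_processes = 4 SCOPE=SPFILE;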
Process Monitor Process (PMON)
• Performs process recovery when a user process fails
• Cleans up the database buffer cache
• Frees resources that are used by the user process
• Monitors sessions for idle session timeout

[Figure: PMON cleaning up the database buffer cache and the server process after a user process fails.]

The Process Monitor process (PMON) performs process recovery when a user process fails. PMON is responsible for cleaning up the database buffer cache and freeing resources that the user process was using. For example, it resets the status of the active transaction table, releases locks, and removes the process ID from the list of active processes.

PMON periodically checks the status of dispatcher and server processes, and restarts any that have stopped running (but not any that Oracle Database has terminated intentionally).

Like SMON, PMON checks regularly to see whether it is needed; it can be called if another process detects the need for it.
Archiver Processes (ARCn)
• Copy redo log files to a designated storage device after a log switch has occurred
• Can collect transaction redo data and transmit that data to standby destinations

[Figure: the Archiver process (ARCn) copying redo log files to the archive destination.]

The Archiver processes (ARCn) copy redo log files to a designated storage device after a log switch has occurred. ARCn processes are present only when the database is in ARCHIVELOG mode and automatic archiving is enabled.

If you anticipate a heavy workload for archiving (such as during bulk loading of data), you can increase the maximum number of Archiver processes. There can also be multiple archive log destinations. It is recommended that there be at least one Archiver process for each destination. The default is to have four Archiver processes.
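For example, the maximum number of Archiver processes is set with the LOG_ARCHIVE_MAX_PROCESSES parameter, which can be changed dynamically (the value shown is illustrative):

ALTER SYSTEM SET log_archive_max_processes = 6;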
The Listener Control Utility
$ lsnrctl
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 09-JUL-2013 08:47:42
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> help
The following operations are available
An asterisk (*) denotes a modifier or extended command:
start        stop         status       services
version      reload       save_config  trace
spawn        quit         exit         set*
show*

The Listener Control Utility enables you to control the listener. With lsnrctl, you can:
• Start the listener
• Stop the listener
• Check the status of the listener
• Reinitialize the listener from the configuration file parameters
• Dynamically configure many listeners
• Change the listener password

The basic command syntax for this utility is:
LSNRCTL> command [listener_name]

When the lsnrctl command is issued, the command acts on the default listener (named LISTENER) unless a different listener name is specified or the SET CURRENT_LISTENER command is executed. If the listener name is LISTENER, the listener_name argument can be omitted. The valid commands for lsnrctl are listed above.

Note: The lsnrctl utility is located in both the Grid Infrastructure home and the Oracle Database home. It is important to set the environment variables to the appropriate home before using it.
Commands for the Listener Control Utility can be issued from the command line or from the lsnrctl prompt.

Command-line syntax:
$ lsnrctl <command name>
$ lsnrctl start
$ lsnrctl status

The lsnrctl commands can be issued from within the utility (prompt syntax) or from the command line. The following two commands have the same effect but use command-line syntax and prompt syntax, respectively:

Command-line syntax:
$ lsnrctl start

Prompt syntax:
$ lsnrctl
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 09-JUL-2013 08:47:42
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> start
Initialization Parameter Files

[Figure: the instance reading spfileorcl.ora or initorcl.ora at startup.]

When you start the instance, an initialization parameter file is read. There are two types of parameter files.
• Server parameter file (SPFILE): This is the preferred type of initialization parameter file. It is a binary file that can be written to and read by the database server and must not be edited manually. It resides on the server on which the Oracle instance is executing; it is persistent across shutdown and startup. The default name of this file, which is automatically sought at startup, is spfile<SID>.ora.
• Text initialization parameter file: This type of initialization parameter file can be read by the database server, but it is not written to by the server. The initialization parameter settings must be set and changed manually by using a text editor so that they are persistent across shutdown and startup. The default name of this file (which is automatically sought at startup if an SPFILE is not found) is init<SID>.ora.

It is recommended that you create an SPFILE as a dynamic way to maintain initialization parameters.

Note: The Oracle Database server searches the $ORACLE_HOME/dbs directory on Linux for the initialization files.
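As a minimal sketch (assuming the default file locations under $ORACLE_HOME/dbs), an SPFILE can be created from a text parameter file, and a text copy can be exported from the SPFILE, with the following commands run as SYSDBA; the backup path is illustrative:

-- Create an SPFILE from the default init<SID>.ora
CREATE SPFILE FROM PFILE;
-- Export the current SPFILE settings to a readable text file
CREATE PFILE='/tmp/initorcl_backup.ora' FROM SPFILE;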
Parameter: Specifies
• CONTROL_FILES: One or more control file names
• DB_FILES: Maximum number of database files
• PROCESSES: Maximum number of OS user processes that can simultaneously connect
• DB_BLOCK_SIZE: Standard database block size used by all tablespaces
• DB_CACHE_SIZE: Size of the standard block buffer cache

• CONTROL_FILES parameter: Specifies one or more control file names. Oracle strongly recommends that you multiplex and mirror control files. Range of values: from one to eight file names (with path names). Default value: OS dependent.
• DB_FILES parameter: Specifies the maximum number of database files that can be opened for this database. Range of values: OS dependent. Default value: 200.
• PROCESSES parameter: Specifies the maximum number of OS user processes that can simultaneously connect to an Oracle server. This value should allow for all background processes and user processes. Range of values: from 6 to an OS-dependent value. Default value: Dynamic and dependent on the number of CPUs.
• DB_BLOCK_SIZE parameter: Specifies the size (in bytes) of an Oracle database block. This value is set at database creation and cannot be subsequently changed. This specifies the standard block size for the database. All tablespaces will use this size by default. Range of values: 2048 to 32768 (OS-dependent). Default value: 8192.
• DB_CACHE_SIZE parameter: Specifies the size of the default buffer pool. Range of values: At least 4 MB times the number of CPUs (smaller values are automatically rounded up to this value). Default value: 0 if SGA_TARGET is set, otherwise the larger of 48 MB or (4 MB * CPU_COUNT).
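A short sketch of inspecting and changing such parameters from SQL*Plus (the value for PROCESSES is illustrative; because it is a static parameter, the change takes effect at the next restart):

-- Inspect current values
SHOW PARAMETER db_block_size
SHOW PARAMETER control_files
-- Change a static parameter in the SPFILE only
ALTER SYSTEM SET processes = 300 SCOPE=SPFILE;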
[Figure: the instance moving from SHUTDOWN through NOMOUNT (instance started) and MOUNT to OPEN.]

The database instance and database go through stages as the database is made available for access by users. The database instance is started, the database is mounted, and then the database is opened.

An instance is typically started only in NOMOUNT mode during database creation, during re-creation of control files, or in certain backup and recovery scenarios.

When an instance is started, the following takes place:
• Searching $ORACLE_HOME/dbs for a file of a particular name in this sequence:
  1. Search for spfile<SID>.ora.
  2. If spfile<SID>.ora is not found, search for spfile.ora.
  3. If spfile.ora is not found, search for init<SID>.ora.
  This is the file that contains initialization parameters for the instance. Specifying the PFILE parameter with STARTUP overrides the default behavior.
• Allocating the SGA
• Starting the background processes
• Opening the alert_<SID>.log file and the trace files

Note: SID is the system ID, which identifies the instance name (for example, ORCL).
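The stages can also be stepped through manually from SQL*Plus, which is useful for maintenance tasks that must stop at NOMOUNT or MOUNT; a minimal sketch:

STARTUP NOMOUNT          -- parameter file read, SGA allocated, background processes started
ALTER DATABASE MOUNT;    -- control files opened
ALTER DATABASE OPEN;     -- data files and online redo log files opened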
[Figure: SHUTDOWN NORMAL, SHUTDOWN TRANSACTIONAL, or SHUTDOWN IMMEDIATE. On the way down: uncommitted changes are rolled back, the database buffer cache is written to the data files, and resources are released, leaving a consistent database. On the way up: no instance recovery is required.]

SHUTDOWN NORMAL
NORMAL is the default shutdown mode if no mode is specified. A normal database shutdown proceeds with the following conditions:
• No new connections can be made.
• The Oracle server waits for all users to disconnect before completing the shutdown.
• Database and redo buffers are written to disk.
• The Oracle server closes and dismounts the database before shutting down the instance.
• The next startup does not require an instance recovery.

SHUTDOWN TRANSACTIONAL
A shutdown in TRANSACTIONAL mode prevents clients from losing data, including results from their current activity. A transactional database shutdown proceeds with the following conditions:
• No client can start a new transaction on this particular instance.
• A client is disconnected when the client ends the transaction that is in progress.
• When all transactions have been completed, a shutdown occurs immediately.
• The next startup does not require an instance recovery.

SHUTDOWN IMMEDIATE
A shutdown in IMMEDIATE mode proceeds with the following conditions:
• Current SQL statements being processed by the Oracle database are not completed.
• The Oracle server does not wait for the users who are currently connected to the database to disconnect.
• The Oracle server rolls back active transactions and disconnects all connected users.
• The Oracle server closes and dismounts the database before shutting down the instance.
• The next startup does not require an instance recovery.
Database User Accounts
To access the database, a user must specify a valid database user account and successfully authenticate as required by that user account. Each database user has a unique database account.

Each database user account has:
• A unique username: Usernames cannot exceed 30 bytes, cannot contain special characters, and must start with a letter.
• An authentication method: The most common authentication method is a password. Oracle Database supports password, global, and external authentication methods (such as biometric, certificate, and token authentication).
• A default tablespace: This is a place where a user creates objects if the user does not specify some other tablespace.
• A temporary tablespace: This is a place where temporary objects, such as sorts and temporary tables, are stored.
• A user profile: This is a set of resource and password restrictions assigned to the user.
• An account status: Users can access only “open” accounts. The account status may be “locked” and/or “expired.”

A schema:
• Is a collection of database objects that are owned by a database user
• Has the same name as the user account
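A minimal sketch of creating a user account with these attributes (the username, password, tablespace names, and profile are illustrative):

CREATE USER jsmith IDENTIFIED BY "StrongPwd#1"
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp
  PROFILE default
  ACCOUNT UNLOCK;
-- The account still needs at least this privilege before it can log in
GRANT CREATE SESSION TO jsmith;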
Secure Roles
• Roles can be nondefault and enabled when required:
  SET ROLE vacationdba;
• Roles can be protected through authentication.
• Roles can also be secured programmatically:
  CREATE ROLE secure_application_role
  IDENTIFIED USING <security_procedure_name>;

Roles are usually enabled by default, which means that if a role is granted to a user, then that user can exercise the privileges given to the role. Default roles are assigned to the user at connect time.

It is possible to:
• Make a role nondefault. The user must now explicitly enable the role before the role's privileges can be exercised.
• Have a role require additional authentication by using the IDENTIFIED clause to indicate that a user must be authorized by a specified method before the role is enabled with the SET ROLE statement. The default authentication for a role is None.
• Create secure application roles that can be enabled only by executing a PL/SQL procedure successfully. The PL/SQL procedure can check things such as the user's network address, the program that the user is running, the time of day, and other elements needed to properly secure a group of permissions.
• Administer roles easily using the Oracle Database Vault option. Secure application roles are simplified, and traditional roles can be further restricted.

Note: Role authentication can be defined by using Enterprise Manager Cloud Control, but not in Enterprise Manager Database Express.
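A short sketch of a nondefault role, using the vacationdba role mentioned above (the privileges granted and the grantee jsmith are illustrative):

CREATE ROLE vacationdba;
GRANT CREATE TABLE, CREATE VIEW TO vacationdba;         -- illustrative privileges
GRANT vacationdba TO jsmith;
ALTER USER jsmith DEFAULT ROLE ALL EXCEPT vacationdba;  -- make it nondefault
-- In the user's session, enable the role only when it is needed
SET ROLE vacationdba;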
A database role:
• Is a named group of related privileges that can be granted to users or other roles
Renaming an online data file:
ALTER DATABASE MOVE DATAFILE '/disk1/myexample1.dbf'
  TO '/disk1/myexample01.dbf';
• Queries and DML and DDL operations can be performed while the data file is being moved.
DBA Responsibilities
• Protect the database from failure wherever possible
• Increase the mean time between failures (MTBF)
• Protect critical components by using redundancy
• Decrease the mean time to recover (MTTR)
• Minimize the loss of data
• Base recovery requirements on data criticality:
  – Recovery Point Objective (RPO): Tolerance for data loss. How frequently should backups be taken? Is point-in-time recovery required?
  – Recovery Time Objective (RTO): Tolerance for down time. Down time is problem identification plus recovery planning plus systems recovery. RTO can be tiered per level of granularity (database, tablespace, table, row).
Categories of Failure
• Recovery can have two kinds of scope:
  – Complete recovery: Brings the database or tablespace up to the present, including all committed data changes made to the point in time when the recovery was requested
  – Incomplete or point-in-time recovery (PITR): Brings the database or tablespace up to a specified point in time in the past, before the recovery operation was requested

[Figure: timeline from the backup that is restored to the time of the crash and the time the recovery task started. Complete recovery brings the database up to the time of the crash; point-in-time recovery stops earlier, leaving missing transactions after the point-in-time recovery destination.]

When you perform complete recovery, you bring the database to the state where it is fully up-to-date, including all committed data modifications to the present time.

Incomplete recovery, however, brings the database or tablespace to some point of time in the past. This is also known as “point-in-time recovery (PITR).” It means there are missing transactions; any data modifications done between the recovery destination time and the present are lost. In many cases, this is the desirable goal because there may have been some changes made to the database that need to be undone. Recovering to a point in the past is a way to remove the unwanted changes.
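A hedged RMAN sketch of the two scopes (the timestamp is illustrative; the database must be mounted first, and an incomplete recovery is finished by opening with RESETLOGS):

RMAN> RESTORE DATABASE;       -- complete recovery: restore, then apply all redo
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;

RMAN> RUN {                   -- point-in-time recovery: stop at an earlier time
  SET UNTIL TIME "TO_DATE('2013-07-09 08:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;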
a. True
b. False
To configure your database for maximum recoverability, you must:
• Schedule regular backups
• Multiplex control files
• Multiplex redo log groups
• Retain archived copies of redo logs

• Schedule regular backups: Most media failures require that you restore the lost or damaged file from backup.
• Multiplex control files: All control files associated with a database are identical. Recovering from the loss of a single control file is not difficult; recovering from the loss of all control files is much more challenging.
• Multiplex redo log groups: To recover from instance or media failure, redo log information is used to roll data files forward to the last committed transaction. If your redo log groups rely on a single redo log file, the loss of that file means that data is likely to be lost.
• Retain archived copies of redo logs: If a file is lost and restored from backup, the instance must apply redo information to bring that file up to the latest SCN contained in the control file. With the default setting, the database can overwrite redo information after it has been written to the data files. Your database can be configured to retain redo information in archived copies of the redo logs. This is known as placing the database in ARCHIVELOG mode.
• Fast recovery area:
  – Strongly recommended for simplified backup storage management
  – Storage space (separate from working database files)
  – Location specified by the DB_RECOVERY_FILE_DEST parameter
  – Size specified by the DB_RECOVERY_FILE_DEST_SIZE parameter
  – Large enough for backups, archived logs, flashback logs, multiplexed control files, and multiplexed redo logs
  – Automatically managed according to your retention policy
• Configuration of the fast recovery area includes specifying the location, size, and retention policy.

The fast recovery area is a space that is set aside on disk to contain archived logs, backups, flashback logs, multiplexed control files, and multiplexed redo logs. A fast recovery area simplifies backup storage management and is strongly recommended. You should place the fast recovery area on storage space that is separate from the location of your database data files and primary online log files and control file.

The amount of disk space to allocate for the fast recovery area depends on the size and activity levels of your database. As a general rule, the larger the fast recovery area, the more useful it is. Ideally, the fast recovery area should be large enough for copies of your data and control files and for flashback, online redo, and archived logs needed to recover the database with the backups kept based on the retention policy. (In short, the fast recovery area should be at least twice the size of the database so that it can hold one backup and several archived logs.)

Space management in the fast recovery area is governed by a backup retention policy. A retention policy determines when files are obsolete, which means that they are no longer needed to meet your data recovery objectives. The Oracle Database server automatically manages this storage by deleting files that are no longer needed.
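A minimal sketch of configuring the fast recovery area with the two parameters named above (the size and location are illustrative; the size must be set before the location):

ALTER SYSTEM SET db_recovery_file_dest_size = 50G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra' SCOPE=BOTH;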
Multiplexing Control Files
• To protect against database failure, your database should have multiple copies of the control file.

Best practice:
• ASM storage: One copy on each disk group (such as +DATA and +FRA)
• File system storage: At least two copies, each on a separate disk (at least one on a separate disk controller)

Steps to create additional control files:
• ASM storage: No additional control file copies required
• File system storage:
  1. Alter the SPFILE with the ALTER SYSTEM SET control_files command.
  2. Shut down the database.
  3. Copy the control file to a new location.
  4. Open the database and verify the addition of the new control file.

A control file is a small binary file that describes the structure of the database. It must be available for writing by the Oracle server whenever the database is mounted or opened. Without this file, the database cannot be mounted, and recovery or re-creation of the control file is required. Your database should have a minimum of two control files on different storage devices to minimize the impact of a loss of one control file.

The loss of a single control file causes the instance to fail because all control files must be available at all times. However, recovery can be a simple matter of copying one of the other control files. The loss of all control files is slightly more difficult to recover from but is not usually catastrophic.
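A sketch of the file system steps listed above (paths are illustrative); the OS-level copy happens while the database is down:

ALTER SYSTEM SET control_files =
  '/u01/app/oracle/oradata/orcl/control01.ctl',
  '/u02/oradata/orcl/control02.ctl' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
-- At the OS level, copy control01.ctl to /u02/oradata/orcl/control02.ctl
STARTUP
SELECT name FROM v$controlfile;   -- verify that both copies are listed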
Redo Log Files
To preserve redo information, create archived copies of redo log files by performing the following steps:
• Specify an archived redo log file-naming convention.
• Specify one or more archived redo log file locations.
• Place the database in ARCHIVELOG mode.

[Figure: online redo log files being copied to archived redo log files.]

The Oracle Database server treats the online redo log groups as a circular buffer in which to store transaction information, filling one group and then moving on to the next. After all groups have been written to, the Oracle Database server begins overwriting information in the first log group.

To configure your database for maximum recoverability, you must instruct the Oracle Database server to make a copy of the online redo log group before allowing it to be overwritten. These copies are known as archived redo log files.

To facilitate the creation of archived redo log files:
1. Specify a naming convention for your archived redo log files.
2. Specify a destination or destinations for storing your archived redo log files.
3. Place the database in ARCHIVELOG mode.
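A sketch of steps 1 and 2 (the format string and directory are illustrative; LOG_ARCHIVE_FORMAT is static, so it is set in the SPFILE and takes effect at the next restart):

ALTER SYSTEM SET log_archive_format = 'orcl_%t_%s_%r.arc' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/app/oracle/arch';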
Configuring ARCHIVELOG Mode
To place the database in ARCHIVELOG mode, perform the following steps:
• Using Enterprise Manager Cloud Control:
  1. On the Recovery Settings page, select “ARCHIVELOG Mode” and click Apply. The database can be set to ARCHIVELOG mode only from the MOUNT state.
  2. Restart the database instance by clicking “Yes” when prompted.
• Using SQL commands:
  – Mount the database.
  – Issue the ALTER DATABASE ARCHIVELOG command.
  – Open the database.

Placing the database in ARCHIVELOG mode prevents redo logs from being overwritten until they have been archived.

In Enterprise Manager Cloud Control, select Availability > Backup & Recovery > Recovery Settings. Select “ARCHIVELOG Mode” and click Apply. The database instance must be restarted after making this change.

To issue the SQL command to put the database in ARCHIVELOG mode, the database must be in MOUNT mode. If the database is currently open, you must shut it down cleanly (not abort), and then mount it as shown in the following example:
shutdown immediate
startup mount
alter database archivelog;
alter database open;

With the database in NOARCHIVELOG mode (the default), recovery is possible only until the time of the last backup. All transactions made after that backup are lost.
a. FLASH_RECOVERY_AREA_SIZE
b. DB_RECOVERY_FILE_DEST
c. FLASH_RECOVERY_AREA_LOC
d. DB_RECOVERY_FILE_DEST_SIZE
User-Managed Backup
• Backup strategy may include:
  – Entire database (whole)
  – Portion of the database (partial)
• Backup type may indicate inclusion of:
  – All data blocks within your chosen files (full)
  – Only information that has changed since a previous backup (incremental)
    – Cumulative (changes since last level 0)
    – Differential (changes since last incremental)
• Backup mode may be:
  – Offline (consistent, cold)
  – Online (inconsistent, hot)

Whole database backup: Includes all data files and at least one control file (Remember that all control files in a database are identical.)
Partial database backup: May include zero or more tablespaces and zero or more data files; may or may not include a control file.
Full backup: Makes a copy of each data block that contains data and that is within the files being backed up.
Incremental backup: Makes a copy of all data blocks that have changed since a previous backup. Oracle Database supports two levels of incremental backup (0 and 1). A level 1 incremental backup can be one of two types: cumulative or differential. A cumulative backup backs up all changes since the last level 0 backup. A differential backup backs up all changes since the last incremental backup (which could be either a level 0 or level 1 backup). Change Tracking with RMAN supports incremental backups.
Offline backups (also known as “cold” or consistent backups): Are taken while the database is not open. They are consistent because, at the time of the backup, the system change number (SCN) in data file headers matches the SCN in the control files.
Online backups (also known as “hot” or inconsistent backups): Are taken while the database is open. They are inconsistent because, with the database open, there is no guarantee that the data files are synchronized with the control files.
RMAN Backup Types

[Figure: data files being backed up either into a backup set (binary, compressed files in Oracle proprietary format) or as image copies (duplicate data and log files in OS format).]

Backup sets: Are collections of one or more binary files that contain one or more data files, control files, server parameter files, or archived log files. With backup sets, empty data blocks are not stored, thereby causing backup sets to use less space on the disk or tape. Backup sets can be compressed to further reduce the space requirements of the backup.

Image copies: Are duplicates of data or log files in OS format.
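An illustrative RMAN sketch of producing each output format (the tablespace name is an example):

RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;   -- backup set, compressed
RMAN> BACKUP AS COPY TABLESPACE users;           -- image copies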
To open a database:
• All control files must be present and synchronized
• All online data files must be present and synchronized
• At least one member of each redo log group must be present

[Figure: the instance moving from SHUTDOWN through NOMOUNT and MOUNT to OPEN.]

As a database moves from the shutdown stage to being fully open, it performs internal consistency checks with the following stages:
• NOMOUNT: For an instance to reach the NOMOUNT (also known as STARTED) status, the instance must read the initialization parameter file. No database files are checked while the instance enters the NOMOUNT state.
• MOUNT: As the instance moves to the MOUNT status, it checks whether all control files listed in the initialization parameter file are present and synchronized. If even one control file is missing or corrupt, the instance returns an error (noting the missing control file) to the administrator and remains in the NOMOUNT state.
• OPEN: When the instance moves from the MOUNT state to the OPEN state, it checks whether all redo log groups known to the control file have at least one member present. Any missing members are noted in the alert log.
After the database is open, it fails in case of the loss of:
• Any control file
• A data file belonging to the SYSTEM or UNDO tablespaces
• An entire redo log group (As long as at least one member of the group is available, the instance remains open.)

After a database is open, instance failure can be caused by media failure: for example, by the loss of a control file, the loss of an entire redo log group, or the loss of a data file belonging to the SYSTEM or UNDO tablespaces. Even if an inactive redo log group is lost, the database would eventually fail due to log switches.

In many cases, the failed instance does not completely shut down but is unable to continue to perform work. Recovering from these types of media failure must be done with the database down. As a result, the administrator must use the SHUTDOWN ABORT command before beginning recovery efforts.

The loss of data files belonging to other tablespaces does not cause instance failure, and the database can be recovered while open, with work continuing in other tablespaces.

These errors can be detected by inspecting the alert log file or by using the Data Recovery Advisor.
Loss of a Control File
If a control file is lost or corrupted, the instance normally aborts.
• If control files are stored in ASM disk groups, recovery options are as follows:
  – Perform guided recovery using Enterprise Manager.
  – Put the database in NOMOUNT mode and use an RMAN command to restore the control file from an existing control file:
    RMAN> restore controlfile from
    '+DATA/orcl/controlfile/current.260.695209463';
• If control files are stored as regular file system files, then:
  – Shut down the database.
  – Copy an existing control file to replace the lost control file.
After the control file is successfully restored, open the database.

The options for recovery from the loss of a control file depend on the storage configuration of the control files and on whether at least one control file remains or all have been lost.

If you are using ASM storage and at least one control file copy remains, you can perform guided recovery using Enterprise Manager or perform manual recovery using RMAN as follows:
1. Put the database in NOMOUNT mode.
2. Connect to RMAN and issue the RESTORE CONTROLFILE command to restore the control file from an existing control file, for example:
   restore controlfile from '+DATA/orcl/controlfile/current.260.695209463';
3. After the control file is successfully restored, open the database.

If your control files are stored as regular file system files and at least one control file copy remains, then, while the database is down, you can just copy one of the remaining control files to the missing file's location. If the media failure is due to the loss of a disk drive or controller, copy one of the remaining control files to some other location and update the instance's parameter file to point to the new location. Alternatively, you can delete the reference to the missing control file from the initialization parameter file. Remember that Oracle recommends having at least two control files at all times.

Note: Recovering from the loss of all control files is covered in the course titled Oracle Database 12c: Backup and Recovery Workshop.
If a member of a redo log file group is lost and if the group still has at least one member, note the following results:
• Normal operation of the instance is not affected.
• You receive a message in the alert log notifying you that a member cannot be found.
• You can restore the missing log file by dropping the lost redo log member and adding a new member.
• If the group with the missing log file has been archived, you can clear the log group to re-create the missing file.

Recovering from the loss of a single redo log group member should not affect the running instance. To perform this recovery by using SQL commands:
1. Determine whether there is a missing log file by examining the alert log.
2. Restore the missing file by first dropping the lost redo log member:
   ALTER DATABASE DROP LOGFILE MEMBER '<filename>';
   Then add a new member to replace the lost redo log member:
   ALTER DATABASE ADD LOGFILE MEMBER '<filename>' TO GROUP <integer>;
   Note: If you are using Oracle Managed Files (OMF) for your redo log files and you use the preceding syntax to add a new redo log member to an existing group, that new redo log member file will not be an OMF file. If you want to ensure that the new redo log member is an OMF file, then the easiest recovery option would be to create a new redo log group and then drop the redo log group that had the missing redo log member.
3. If the media failure is due to the loss of a disk drive or controller, rename the missing file.
4. If the group has already been archived, or if you are in NOARCHIVELOG mode, you may choose to solve the problem by clearing the log group to re-create the missing file or files. You can clear the affected group manually with the following command:
   ALTER DATABASE CLEAR LOGFILE GROUP <integer>;
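In addition to the alert log, the affected member can usually be identified by querying V$LOGFILE, where an inaccessible member is reported with an INVALID status, for example:

SELECT group#, status, member FROM v$logfile ORDER BY group#;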
If the database is in NOARCHIVELOG mode and if any data file is lost, perform the following tasks:
1. Shut down the instance if it is not already down.
2. Restore the entire database (including all data and control files) from the backup.
3. Open the database.
4. Have users re-enter all changes that were made since the last backup.

The loss of any data file from a database in NOARCHIVELOG mode requires complete restoration of the database, including control files and all data files.

With the database in NOARCHIVELOG mode, recovery is possible only up to the time of the last backup. So users must re-enter all changes made since that backup.

To perform this type of recovery by using Enterprise Manager Cloud Control:
1. Shut down the instance if it is not already down.
2. Select Availability > Backup & Recovery > Perform Recovery.
3. Select Whole Database as the type of recovery.

If you have a database in NOARCHIVELOG mode that has an incremental backup strategy, RMAN first restores the most recent level 0 backup, and then RMAN recovery applies the incremental backups.
a. True
b. False
a. True
b. False
Oracle Data Pump: Overview
As a server-based facility for high-speed data and metadata movement, Oracle Data Pump:
• Is callable via DBMS_DATAPUMP
• Provides the following tools:
  – expdp
  – impdp
  – GUI interface in Enterprise Manager Cloud Control
• Provides four data movement methods:
  – Data file copying
  – Direct path
  – External tables
  – Network link support
• Detaches from and re-attaches to long-running jobs
• Restarts Data Pump jobs

Oracle Data Pump enables very high-speed data and metadata loading and unloading of Oracle databases. The Data Pump infrastructure is callable via the DBMS_DATAPUMP PL/SQL package. Thus, custom data movement utilities can be built by using Data Pump.

Oracle Database provides the following tools:
• Command-line export and import clients called expdp and impdp, respectively
• Export and import interface in Enterprise Manager Cloud Control

Data Pump automatically decides the data access methods to use; these can be either direct path or external tables. Data Pump uses direct path load and unload when a table's structure allows it and when maximum single-stream performance is desired. However, if there are clustered tables, referential integrity constraints, encrypted columns, or several other items, Data Pump uses external tables rather than direct path to move the data.

The ability to detach from and re-attach to long-running jobs without affecting the job itself enables you to monitor jobs from multiple locations while they are running. All stopped Data Pump jobs can be restarted without loss of data as long as the metainformation remains undisturbed. It does not matter whether the job is stopped voluntarily or involuntarily due to a crash.
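A minimal PL/SQL sketch of calling DBMS_DATAPUMP directly, as the notes describe (the schema, dump file name, and use of the default DATA_PUMP_DIR directory object are illustrative):

DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- Open a schema-mode export job and attach a dump file to it
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(h, 'hr_schema.dmp', 'DATA_PUMP_DIR');
  -- Restrict the job to the HR schema, then run it to completion
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''HR'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
  DBMS_OUTPUT.PUT_LINE('Data Pump job completed with state: ' || job_state);
END;
/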
Create Directory
CREATE OR REPLACE DIRECTORY DIRECTORY_NAME AS 'PATH';
GRANT READ, WRITE ON DIRECTORY test_dir TO USER_NAME;
• The directory object is only a pointer to a physical directory; creating it does not actually create the physical directory on the file system of the database server.

Export table example:
expdp system/manager1@ORCL tables=EMP,DEPT directory=DIR_NAME dumpfile=DUMP_NAME.dmp logfile=LOG_NAME.log

Export schema example:
expdp system/manager1@ORCL schemas=HR directory=DIR_NAME dumpfile=DUMP_NAME.dmp logfile=LOG_NAME.log
You can remap:
• Data files by using REMAP_DATAFILE
• Tablespaces by using REMAP_TABLESPACE
• Schemas by using REMAP_SCHEMA
• Tables by using REMAP_TABLE
• Data by using REMAP_DATA

REMAP_TABLE = 'EMPLOYEES':'EMP'

Because object metadata is stored as XML in the dump file set, it is easy to apply transformations when DDL is being formed during import. Data Pump Import supports several transformations:
• REMAP_DATAFILE is useful when moving databases across platforms that have different file-system semantics.
• REMAP_TABLESPACE enables objects to be moved from one tablespace to another.
• REMAP_SCHEMA provides the old FROMUSER/TOUSER capability to change object ownership.
• REMAP_TABLE provides the ability to rename entire tables.
• REMAP_DATA provides the ability to remap data as it is being inserted.
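A matching import sketch using the command-line client, remapping the exported HR schema into a different target schema (the file, directory, and target schema names are illustrative):

impdp system/manager1@ORCL directory=DIR_NAME dumpfile=DUMP_NAME.dmp remap_schema=HR:HR_TEST logfile=imp_LOG_NAME.log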
a. True
b. False
Answer: b
END