Oracle9i DBA Fundamentals II - Volume I
D11297GC10
Production 1.0
May 2001
D32714
Copyright © Oracle Corporation, 2000, 2001. All rights reserved.
Publisher
John B Dawson
Contents
1 Networking Overview
Objectives 1-2
Network Environment Challenges 1-3
Simple Network: Two-Tier 1-5
Simple to Complex Network: N-Tier 1-6
Complex Network 1-7
Oracle9i Networking Solutions 1-8
Connectivity: Oracle Net Services 1-9
Connectivity: Database Connectivity With IIOP and HTTP 1-11
Directory Naming 1-12
Directory Services: Oracle Internet Directory 1-13
Scalability: Oracle Shared Server 1-14
Scalability: Connection Manager 1-15
Security: Advanced Security 1-17
Advanced Security Encryption 1-18
Security: Oracle Net and Firewalls 1-19
Accessibility: Heterogeneous Services 1-20
Accessibility: External Procedures 1-21
Summary 1-22
2 Basic Oracle Net Architecture
Objectives 2-2
Oracle Net Connections 2-3
Client-Server Application Connection: No Middle-Tier 2-4
Web Client Application Connections 2-6
Web Client Application Connection: Java Application Client 2-7
Web Client Application Connection: Java Applet Client 2-8
Web Client Application Connection: Web Server Middle-Tier 2-9
Web Client Application Connection: No Middle-Tier 2-10
Summary 2-12
3 Basic Oracle Net Server-Side Configuration
Objectives 3-2
Overview: The Listener Process 3-3
The Listener Responses 3-4
Configuring the Listener 3-5
Bequeath Session 3-7
Redirect Session 3-9
Static Service Registration: The listener.ora File 3-10
Static Service Registration: Create a Listener 3-14
Configure Services 3-15
Logging and Tracing 3-16
Dynamic Service Registration: Configure Registration 3-17
Dynamic Service Registration: Configure PMON 3-18
Configure the Listener for Oracle9i JVM: IIOP and HTTP 3-19
Listener Control Utility (LSNRCTL) 3-21
LSNRCTL Commands 3-22
LSNRCTL SET and SHOW Modifiers 3-24
Summary 3-26
Practice 3 Overview 3-27
4 Basic Oracle Net Services Client-Side Configuration
Objectives 4-2
Host Naming 4-3
Host Naming Client Side 4-4
Host Naming Server Side 4-5
Select Host Name Method 4-6
Host Naming Method 4-7
Local Naming 4-8
Oracle Net Configuration Assistant 4-9
Choosing Local Naming 4-10
Configuring Local Net Service Names 4-11
Working with Net Service Names 4-12
Specify the Oracle Database Version 4-13
Database Service Name 4-14
Network Protocol 4-15
Host Name and Listener Port 4-16
Testing the Connection 4-17
Connection Test Result 4-18
Net Service Name 4-19
Save the Net Service Name 4-20
tnsnames.ora 4-21
sqlnet.ora 4-22
Troubleshooting the Client Side 4-23
Summary 4-25
Practice 4 Overview 4-26
5 Usage and Configuration of the Oracle Shared Server
Objectives 5-2
Server Configurations 5-3
Dedicated Server Processes 5-4
Oracle Shared Server 5-5
Benefits of Oracle Shared Server 5-7
Connecting 5-9
Processing a Request 5-10
The SGA and PGA 5-12
Configuring Oracle Shared Server 5-13
DISPATCHERS 5-14
SHARED_SERVERS 5-16
MAX_DISPATCHERS 5-18
MAX_SHARED_SERVERS 5-20
CIRCUITS 5-21
SHARED_SERVER_SESSIONS 5-22
Related Parameters 5-23
Verifying Setup 5-24
Data Dictionary Views 5-26
Summary 5-27
Practice 5 Overview 5-28
6 Backup and Recovery Overview
Objectives 6-2
Backup and Recovery Issues 6-3
Categories of Failures 6-4
Causes of Statement Failures 6-5
Resolutions for Statement Failures 6-6
Causes of User Process Failures 6-7
Resolution of User Process Failures 6-8
Possible User Errors 6-9
Resolution of User Errors 6-10
Causes of Instance Failure 6-11
Recovery from Instance Failure 6-12
Causes of Media Failures 6-14
Resolutions for Media Failures 6-15
Defining a Backup and Recovery Strategy 6-16
Business Requirements 6-17
Operational Requirements 6-18
Technical Considerations 6-20
Disaster Recovery Issues 6-22
Summary 6-24
7 Instance and Media Recovery Structures
Objectives 7-2
Overview 7-3
Large Pool 7-6
Database Buffer Cache, DBWn, and Datafiles 7-8
Redo Log Buffer, LGWR, and Redo Log Files 7-10
Multiplexed Redo Log Files 7-13
CKPT Process 7-15
Multiplexed Control Files 7-17
ARCn Process and Archived Log Files 7-19
Database Synchronization 7-21
Phases for Instance Recovery 7-22
Tuning Instance Recovery Performance 7-24
Tuning the Duration of Instance and Crash Recovery 7-25
Initialization Parameters Influencing Checkpoints 7-26
Tuning the Phases of Instance Recovery 7-28
Tuning the Rolling Forward Phase 7-29
Tuning the Rolling Back Phase 7-30
Fast-Start On-Demand Rollback 7-31
Fast-Start Parallel Rollback 7-32
Controlling Fast-Start Parallel Rollback 7-33
Monitoring Parallel Rollback 7-34
Summary 7-35
Practice 7 Overview 7-36
8 Configuring the Database Archiving Mode
Objectives 8-2
Redo Log History 8-3
Noarchivelog Mode 8-4
Archivelog Mode 8-6
Changing the Archiving Mode 8-8
Automatic and Manual Archiving 8-10
Specifying Multiple ARCn Processes 8-12
Stop or Start Additional Archive Processes 8-13
Enabling Automatic Archiving at Instance Startup 8-14
Enabling Automatic Archiving After Instance Startup 8-15
Disabling Automatic Archiving 8-16
Manually Archiving Online Redo Log Files 8-17
Specifying the Archive Log Destination 8-19
Specifying Multiple Archive Log Destinations 8-20
LOG_ARCHIVE_DEST_n Options 8-21
Specifying a Minimum Number of Local Destinations 8-22
Controlling Archiving to a Destination 8-24
Specifying the File Name Format 8-25
Obtaining Archive Log Information 8-26
Summary 8-29
Practice 8 Overview 8-30
9 Oracle Recovery Manager Overview and Configuration
Objectives 9-2
Recovery Manager Features 9-3
Recovery Manager Components 9-5
RMAN Repository: Using the Control File 9-7
Channel Allocation 9-8
Manual Channel Allocation 9-10
Automatic Channel Allocation 9-12
Media Management 9-13
Types of Connections with RMAN 9-15
Connecting Without a Recovery Catalog 9-16
Recovery Manager Modes 9-18
RMAN Commands 9-20
RMAN Configuration Settings 9-22
The CONFIGURE Command 9-23
The SHOW Command 9-25
LIST Command Operations 9-26
The LIST Command 9-27
The REPORT Command 9-28
The REPORT NEED BACKUP Command 9-29
Recovery Manager Packages 9-30
RMAN Usage Considerations 9-31
Summary 9-33
Practice 9 Overview 9-34
10 User-Managed Backups
Objectives 10-2
Terminology 10-3
User-Managed Backup and Recovery 10-5
Querying Views to Obtain Database File Information 10-6
Backup Methods 10-8
Consistent Whole Database Backup (Closed Database Backup) 10-9
Advantages of Making Consistent Whole Database Backups 10-10
Making a Consistent Whole Database Backup 10-12
Open Database Backup 10-14
Advantages of Making Open Database Backups 10-15
Open Database Backup Requirements 10-16
Open Database Backup Options 10-17
Making a Backup of an Online Tablespace 10-18
Ending the Online Tablespace Backup 10-19
Backup Status Information 10-20
Failure During Online Tablespace Backup 10-22
Read-Only Tablespace Backup 10-24
Read-Only Tablespace Backup Issues 10-25
Backup Issues with Logging and Nologging Options 10-26
Manual Control File Backups 10-27
Backing Up the Initialization Parameter File 10-29
Verifying Backups Using the DBVERIFY Utility 10-30
DBVERIFY Command-Line Interface 10-31
Summary 10-33
Practice 10 Overview 10-34
11 RMAN Backups
Objectives 11-2
RMAN Backup Concepts 11-3
Recovery Manager Backups 11-4
Backup Sets 11-5
Characteristics of Backup Sets 11-6
Backup Piece 11-7
The BACKUP Command 11-8
Backup Piece Size 11-11
Parallelization of Backup Sets 11-12
Multiplexed Backup Sets 11-15
Duplexed Backup Sets 11-16
Backups of Backup Sets 11-17
Archived Redo Log File Backups 11-18
Archived Redo Log Backup Sets 11-19
Datafile Backup Set Processing 11-20
Backup Constraints 11-21
Image Copies 11-22
Characteristics of an Image Copy 11-23
Image Copies 11-24
The COPY Command 11-25
Image Copy Parallelization 11-26
Copying the Whole Database 11-27
Making Incremental Backups 11-28
Differential Incremental Backup Example 11-29
Cumulative Incremental Backup Example 11-31
Backup in Noarchivelog Mode 11-32
RMAN Control File Autobackups 11-33
Tags for Backups and Image Copies 11-34
RMAN Dynamic Views 11-35
Monitoring RMAN Backups 11-36
Miscellaneous RMAN Issues 11-38
Summary 11-40
Practice 11 Overview 11-41
12 User-Managed Complete Recovery
Objectives 12-2
Media Recovery 12-3
Recovery Steps 12-4
Restoration and Datafile Media Recovery with User-Managed Procedures 12-5
Archivelog and Noarchivelog Modes 12-6
Recovery in Noarchivelog Mode 12-7
Recovery in Noarchivelog Mode With Redo Log File Backups 12-9
Recovery in Noarchivelog Mode Without Redo Log File Backups 12-10
Recovery in Archivelog Mode 12-11
Complete Recovery 12-12
Complete Recovery in Archivelog Mode 12-13
Determining Which Files Need Recovery 12-14
User-Managed Recovery Procedures: RECOVER Command 12-16
Using Archived Redo Log Files During Recovery 12-17
Restoring Datafiles to a New Location with User-Managed Procedures 12-19
Complete Recovery Methods 12-20
Complete Recovery of a Closed Database 12-22
Closed Database Recovery Example 12-23
Open Database Recovery When the Database Is Initially Open 12-25
Open Database Recovery Example 12-26
Open Database Recovery When the Database Is Initially Closed 12-28
Open Database Recovery Example 12-29
Recovery of a Datafile Without a Backup 12-32
Recovery Without a Backup Example 12-33
Read-Only Tablespace Recovery 12-35
Read-Only Tablespace Recovery Issues 12-36
Loss of Control Files 12-37
Recovering Control Files 12-38
Summary 12-39
Practices 12-1 and 12-2 Overview 12-40
13 RMAN Complete Recovery
Objectives 13-2
Restoration and Datafile Media Recovery Using RMAN 13-3
Using RMAN to Recover a Database in Noarchivelog Mode 13-4
Using RMAN to Recover a Database in Archivelog Mode 13-6
Using RMAN to Restore Datafiles to a New Location 13-7
Using RMAN to Recover a Tablespace 13-8
Using RMAN to Relocate a Tablespace 13-9
Summary 13-11
Practices 13-1 and 13-2 Overview 13-12
14 User-Managed Incomplete Recovery
Objectives 14-2
Incomplete Recovery Overview 14-3
Reasons for Performing Incomplete Recovery 14-4
Types of Incomplete Recovery 14-5
Incomplete Recovery Guidelines 14-7
Incomplete Recovery and the Alert Log 14-9
User-Managed Procedures for Incomplete Recovery 14-10
RECOVER Command Overview 14-11
Time-Based Recovery Example 14-12
UNTIL TIME Recovery 14-13
Cancel-Based Recovery Example 14-15
Using a Backup Control File During Recovery 14-18
Loss of Current Redo Log Files 14-21
Summary 14-23
Practices 14-1 and 14-2 Overview 14-24
15 RMAN Incomplete Recovery
Objectives 15-2
Incomplete Recovery of a Database Using RMAN 15-3
RMAN Incomplete Recovery UNTIL TIME Example 15-4
RMAN Incomplete Recovery UNTIL SEQUENCE Example 15-6
Summary 15-7
Practice 15 Overview 15-8
16 RMAN Maintenance
Objectives 16-2
Cross Checking Backups and Copies 16-3
The CROSSCHECK Command 16-4
Deleting Backups and Copies 16-5
The DELETE Command 16-6
Deleting Backups and Copies 16-7
Changing the Availability of RMAN Backups and Copies 16-8
Changing the Status to Unavailable 16-9
Exempting a Backup or Copy from the Retention Policy 16-10
The CHANGE … KEEP Command 16-11
Cataloging Archived Redo Log Files and User-Managed Backups 16-12
The CATALOG Command 16-13
Uncataloging RMAN Records 16-14
The CHANGE … UNCATALOG Command 16-15
Summary 16-16
Practice 16 Overview 16-17
17 Recovery Catalog Creation and Maintenance
Objectives 17-2
Overview 17-4
Recovery Catalog Contents 17-5
Benefits of Using a Recovery Catalog 17-7
Additional Features Which Require the Recovery Catalog 17-8
Create Recovery Catalog 17-9
Connecting Using a Recovery Catalog 17-12
Recovery Catalog Maintenance 17-13
Resynchronization of the Recovery Catalog 17-14
Using RESYNC CATALOG for Resynchronization 17-15
Resetting a Database Incarnation 17-16
Recovery Catalog Reporting 17-18
Viewing the Recovery Catalog 17-19
Stored Scripts 17-21
Script Examples 17-22
Managing Scripts 17-23
Backup of Recovery Catalog 17-24
Recovering the Recovery Catalog 17-25
Summary 17-26
Practice 17 Overview 17-27
18 Transporting Data Between Databases
Objectives 18-2
Oracle Export and Import Utility Overview 18-3
Methods to Run the Export Utility 18-5
Export Modes 18-6
Command-Line Export 18-7
Direct-Path Export Concepts 18-9
Specifying Direct-Path Export 18-10
Direct-Path Export Features 18-11
Direct-Path Export Restrictions 18-12
Uses of the Import Utility for Recovery 18-13
Import Modes 18-14
Command-Line Import 18-15
Invoking Import as SYSDBA 18-17
Import Process Sequence 18-18
National Language Support Considerations 18-19
Summary 18-20
Practice 18 Overview 18-21
19 Loading Data Into a Database
Objectives 19-2
Data Loading Methods 19-3
Direct-Load INSERT 19-4
Serial Direct-Load Inserts 19-5
Parallel Direct-Load Insert 19-7
SQL*Loader 19-8
Using SQL*Loader 19-9
Conventional and Direct Path Loads 19-10
Comparing Direct and Conventional Path Loads 19-11
Parallel Direct-Path Load 19-12
SQL*Loader Control File 19-13
Control File Syntax Considerations 19-16
Input Data and Datafiles 19-17
Logical Records 19-20
Data Conversion 19-21
Discarded or Rejected Records 19-22
Log File Contents 19-23
SQL*Loader Guidelines 19-25
Summary 19-26
Practice 19 Overview 19-27
20 Workshop
Objectives 20-2
Workshop Methodology 20-4
Workshop Approach 20-6
Business Requirements 20-7
Resolving a Database Failure 20-8
Troubleshooting Methods 20-10
Enable Tracing 20-11
Using Trace Files 20-12
Resolving a Network Failure 20-14
Summary 20-16
1
Networking Overview
(Diagram: a simple two-tier network, with a client connected to a server over a network.)
Two-Tier Networks
In a two-tier network, a client communicates directly with a server. This is also known as a
client-server architecture. A client-server network is an architecture that involves client
processes that request services from server processes. The client and server communicate over
a network using a given protocol, which must be installed on both the client and the server.
A common error in client-server network development is to prototype an application in a
small, two-tier environment and then scale up by simply adding more users to the server. This
approach can result in an ineffective system, as the server becomes overburdened. To
properly scale to hundreds or thousands of users, it may be necessary to implement an N-tier
architecture, which introduces one or more servers or agents between the client and server.
(Diagram: an N-tier network, with a client connected through a middle tier to a server across two networks.)
N-Tier Networks
In an N-tier architecture, the role of the middle-tier agent can be manifold. It can provide:
• Translation services (as in adapting a legacy application on a mainframe to a client-
server environment or acting as a bridge between protocols)
• Scalability services (as in acting as a transaction-processing monitor to balance the load
of requests between servers)
• Network agent services (as in mapping a request to a number of different servers,
collating the results, and returning a single response to the client)
(Diagram: a complex network in which clients and servers communicate over multiple protocols, including DECnet, TCP/IP, and APPC/LU6.2.)
• Connectivity
• Directory Services
• Scalability
• Security
• Accessibility
• Protocol independence
• Comprehensive platform support
• Integrated GUI administration tools
• Multiple configuration options
• Tracing and diagnostic toolset
• Basic security
LDAP
LDAP is an acronym for Lightweight Directory Access Protocol, which is an Internet
standard for directory services. LDAP has emerged as a critical infrastructure component for
network security and as a vital platform for enabling integration among applications and
services on the network. It simplifies management of directory information considerably by
providing the following:
• A well-defined standard interface to a single, extensible directory service, such as the
Oracle Internet Directory
• Rapid development and deployment of directory-enabled applications
• An array of programmatic interfaces that enables seamless deployment of Internet-
ready applications
Naming Methods
Oracle supports several naming methods. A naming method maps a simple alias (a net service name) to the complex network address of a service, so that users and administrators can connect with the alias instead of the full connect descriptor. The following naming methods are supported:
• Host naming: Used for simple networks using TCP/IP only
• Local naming: Uses a tnsnames.ora file
• Oracle Names naming: Uses an Oracle Names Server with Oracle8i and earlier versions
• Directory naming: Uses the Oracle Internet Directory
Directory Services: Oracle Internet Directory
Connection Manager
Connection Manager is a gateway process and control program configured and installed on a
middle tier. The Connection Manager can be configured for the following features:
Multiplexing
Connection Manager can handle several incoming connections and transmit them
simultaneously over a single outgoing connection. Multiplexing gives larger numbers of users
access to a server. The configuration is offered only in a TCP/IP environment.
Cross-Protocol Connectivity
Using this feature, a client and a server can communicate with different network protocols.
Network Access Control
Using Connection Manager, designated clients can connect to certain servers in a network
based on the TCP/IP protocol.
Benefits of Connection Manager
• Supports more users on the back-end tier when Connection Manager is deployed on a middle tier, and provides better use of resources and scalability
• Enables cross-protocol communication
• Can act as an access control mechanism
• Can act as a proxy server if your firewall cannot handle Oracle Net (SQL*Net) traffic directly
Scalability: Connection Manager
(Diagram: several clients, labeled 1 through 3, connecting through Connection Manager to a single server; the numbers correspond to the steps described below.)
Connection Multiplexing
This example shows how Connection Manager acts as a multiplexer to funnel data from
many clients to one server.
1. The initial connection from a client to a server is established by connecting to
Connection Manager.
2. Connection Manager establishes the connection to the server.
3. When additional clients request connections to the server through Connection Manager,
they use the same connection that Connection Manager used for the initial connection.
• Encryption
– Encodes between network nodes
– DES, RSA, 3DES
• Authentication
– Authenticates users through third-party services
and Secure Sockets Layer (SSL)
– Kerberos, Radius, CyberSafe
• Data Integrity
– Ensures data integrity during transmission
– MD5, SHA
(Diagram: data is encrypted between the client and the server, so only ciphertext travels across the network.)
Heterogeneous Services
Heterogeneous Services provide seamless integration between the Oracle server and
environments other than Oracle. Heterogeneous Services enable you to do the following:
• Use Oracle SQL to transparently access data stored in non-Oracle data-stores like
Informix, DB2, SQL Server and Sybase
• Use Oracle procedure calls to transparently access non-Oracle systems, services, or
application programming interfaces (APIs), from your Oracle distributed environment
A Heterogeneous Service agent is required to access a particular non-Oracle system.
Benefit
Heterogeneous Services enable integration with foreign data sources.
Note: Configuration of Heterogeneous Services is not covered in this class.
External Procedures
Oracle support of external procedures allows the developer more development choices than
standard SQL or PL/SQL provide. The listener can be configured to listen for external
procedure calls. When a PL/SQL or SQL application calls an external procedure, the listener
launches a network session-specific process called extproc. Through the listener service,
PL/SQL passes the following information to extproc:
• Shared library name
• External procedure name
• Parameters (if necessary)
The extproc program then loads the shared library and invokes the external procedure.
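As an illustration only (the key, SID, and path values are assumptions, not taken from this course), a listener.ora configured for external procedure calls commonly looks like the following sketch:
LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /home/oracle)
      (PROGRAM = extproc)
    )
  )
With such an entry, the listener accepts IPC calls on the EXTPROC0 key and starts the extproc program for the session.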
2 Basic Oracle Net Architecture
(Diagram: client-server application connection with no middle tier; an Oracle Forms or SQL*Plus client and the database server communicate through matching TTC, OPS, and network protocol layers.)
(Diagram: Java application and Java applet client connections; the Web application server acts as the client to the Oracle database, and the stack labels include Java applet, JavaTTC, Java Net, Oracle Net, OPS, HTTP, and TCP/IP.)
(Diagram: Web client connections; the user's Web browser connects over HTTP to a Web application server that connects to the Oracle server, or a browser supporting HTTP and IIOP connects directly to the Oracle database over TCP/IP.)
3 Basic Oracle Net Server-Side Configuration
(Diagram: the listener on the server accepts connections from the client; the configuration files involved are listener.ora on the server and tnsnames.ora and sqlnet.ora on the client.)
Listener Responses
Spawn and Bequeath Connection
The listener passes or bequeaths the connection to a spawned process. This method is used in
dedicated servers only.
Direct Hand Off Connection
The listener will hand off a connection to a dispatcher when an Oracle Shared Server is used.
This method is not possible with dedicated server processes.
Redirected Connection
The listener may redirect a connection to a dispatcher when Oracle Shared Server is used.
Note: Each of the connection types is covered in more detail later in the lesson.
Transparency of Direct Hand Off and Redirect
Whether a connection session is bequeathed, handed off, or redirected to an existing process,
the session is transparent to the user. It can be detected only by turning on tracing and
analyzing the resulting trace file.
(Diagram: bequeath session; in steps 1 through 4 the client contacts the listener, which spawns a dedicated server process and passes, or bequeaths, the connection to it.)
(Diagram: redirect session; in steps 1 through 6 the client contacts the listener, which replies with the port of a server process or dispatcher, and the client then connects directly to that port.)
LISTENER =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(Host = stc-sun02)(Port = 1521))
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /home/oracle)
      (GLOBAL_DBNAME = ORCL.us.oracle.com)
      (SID_NAME = ORCL)
    )
  )
• SERVICE_NAMES
• INSTANCE_NAME
INSTANCE_NAME=salesdb
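As a sketch (the domain shown is an assumption), the corresponding initialization parameters might be set as follows so that PMON can register the service and instance with the listener:
SERVICE_NAMES = salesdb.us.oracle.com
INSTANCE_NAME = salesdb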
Service Registration
Using a Non-default Listener
You can force PMON to register with a local listener on the server that does not use TCP/IP on the default port 1521 by configuring the LOCAL_LISTENER parameter in the init.ora file as follows:
LOCAL_LISTENER=listener_alias
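For example (the alias name, host, and port are assumptions used only for illustration), the alias could be defined in the init.ora file and resolved through tnsnames.ora:
LOCAL_LISTENER = listener_1421                                                  # init.ora
listener_1421 = (ADDRESS = (PROTOCOL = TCP)(HOST = stc-sun02)(PORT = 1421))     # tnsnames.ora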
• Prompt syntax:
LSNRCTL> <command name>
LSNRCTL Commands
Starting the Listener
You can use the START command to start the listener from the Listener Control utility. Any
manual changes to the listener.ora file must be made when the listener is shut down.
The argument for the START command is the name of the listener, and if no argument is
specified, the current listener is started. If a current listener is not defined, LISTENER is
started.
LSNRCTL> START [listener_name]
or
$ lsnrctl start [listener_name]
Stopping the Listener
The STOP command stops the listener. The listener must be running to stop it properly. If a
password is configured, the SET PASSWORD command must be used before the STOP
command can be used. The password must be set from within the LSNRCTL prompt; it
cannot be set from the operating system command line. It is good practice to send a warning
message to all network users before stopping a listener.
LSNRCTL> STOP [listener_name]
or
$ lsnrctl stop [listener_name]
LSNRCTL Commands (continued)
CHANGE_PASSWORD: Dynamically changes the encrypted password of a listener.
EXIT: Quits the LSNRCTL utility.
HELP: Provides the list of all available LSNRCTL commands.
QUIT: Provides the functionality of the EXIT command.
RELOAD: Shuts down everything except listener addresses and rereads the listener.ora file. Use this command to add or change services without actually stopping the listener.
SAVE_CONFIG: Creates a backup of your listener configuration file (called listener.bak) and updates the listener.ora file itself to reflect any changes.
SERVICES: Provides detailed information about the services the listener listens for.
SET parameter: Sets a listener parameter.
SHOW parameter: Lists the value of a listener parameter.
SET CONNECT_TIMEOUT: Determines the amount of time the listener waits for a valid connection request after a connection has been started.
SET CURRENT_LISTENER: Sets or shows the listener that subsequent commands apply to when multiple listeners are used.
SET LOG_DIRECTORY: Sets a nondefault location for the log file, or returns the location to the default.
SET LOG_FILE: Sets a nondefault name for the log file.
SET LOG_STATUS: Turns listener logging on or off.
SET PASSWORD: Changes the password sent from the LSNRCTL utility to the listener process for authentication purposes only.
SET SAVE_CONFIG_ON_STOP: Saves any changes made by the LSNRCTL SET command permanently if the parameter is on. All parameters are saved right before the listener exits.
SET STARTUP_WAITTIME: Sets the amount of time the listener sleeps before responding to a START command.
SET TRC_DIRECTORY: Sets a nondefault location for the trace file, or returns the location to the default.
SET TRC_FILE: Sets a nondefault name for the trace file.
SET TRC_LEVEL: Turns on tracing for the listener.
Note: The SHOW command has a corresponding parameter for each SET parameter except SET PASSWORD.
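The following sketch (the listener name and trace level are illustrative) shows how a few of these modifiers might be used in a single LSNRCTL session:
LSNRCTL> SET CURRENT_LISTENER listener02
LSNRCTL> SET TRC_LEVEL ADMIN
LSNRCTL> SHOW TRC_LEVEL
LSNRCTL> SET SAVE_CONFIG_ON_STOP ON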
4 Basic Oracle Net Services Client-Side Configuration
(Diagram: host naming, client side; the client connects to the server over TCP/IP. The client's sqlnet.ora contains, for example:
TRACE_LEVEL_CLIENT = OFF
sqlnet.authentication_services = (NTS)
names.directory_path = (HOSTNAME)
while the server side is configured through listener.ora.)
Client-Side Requirements
If you are using the host naming method, you must have TCP/IP installed on your client machine. In addition, you must install Oracle Net Services and the TCP/IP protocol adapter. The host name is resolved through an IP address translation mechanism such as Domain Name Services (DNS), Network Information Services (NIS), or a centrally maintained TCP/IP hosts file; this name resolution must be in place on the client before you attempt to use the host naming method.
(Diagram: host naming, server side; the listener listens on port 1521 over TCP/IP. The server's listener.ora contains, for example:)
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = stc-sun02.us.oracle.com)
      (ORACLE_HOME = /u03/ora9i/rel12)
      (SID_NAME = TEST)
    )
  )
Server-Side Requirements
If you are using the host naming method, you must have TCP/IP installed on your server as
well as your client. You also need to install Oracle Net Services and the TCP/IP protocol
adaptor on the server side.
A listener with the default name LISTENER must be started on port 1521, and if instance registration is not implemented, the listener.ora file must include the line:
GLOBAL_DBNAME = host name
The host name must match the connect string that you specify from the client; the other entries in the SID_DESC identify the database you want to connect to.
Example:
If all of the requirements are met on the client and server side, you can issue the connection
request from the client, and this connects you to the instance TEST:
sqlplus system/[email protected]
SQL*Plus: Release 9.0.0.0.0 - Beta on Tue Feb 24 3:11:07 2001
(c) Copyright 2000 Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.0.0.0.0 - Beta
SQL>
NAMES.DIRECTORY_PATH= (HOSTNAME)
(Diagram: local naming; the client uses sqlnet.ora and tnsnames.ora, and the server uses listener.ora.)
Test Result
If the data entered is correct, the connection should be made successfully. If not, the Details window should provide useful diagnostic information for troubleshooting the connection. Note that the default username used for the connection test is scott; if no such user exists, click Change Login, enter a valid username and password combination, and then retry the connection.
If the connection is successful, click Next to continue. Do not click Cancel because the
service information is not yet saved.
Note: The service name can also be tested from the command line by using the tnsping
utility. For example:
$ tnsping U01
TNS Ping Utility for Solaris: Version 9 - Production on 10-MAY-2001
Used parameter files:
/u01/user01/NETWORK/ADMIN/sqlnet.ora
/u01/user01/NETWORK/ADMIN/tnsnames.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (ADDRESS=(PROTOCOL=TCP)(HOST=stc-sun02)(PORT=1701))
OK (0 msec)
DESCRIPTION: Keyword for describing the connect descriptor. Descriptions are always specified the same way.
ADDRESS: Keyword for the address specification. If multiple addresses are specified, use the keyword ADDRESS_LIST before the ADDRESS entries.
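Putting these keywords together, a tnsnames.ora entry might look like the following sketch (the host, port, and service name are placeholders):
MY_SERVICE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = stc-sun02)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = TEST.us.oracle.com)
    )
  )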
NAMES.DEFAULT_DOMAIN = us.oracle.com
NAMES.DIRECTORY_PATH= (TNSNAMES, HOSTNAME)
SQLNET.EXPIRE_TIME=0
sqlplus system/manager@MY_SERVICE
SQL*Plus: Release 9.0.0.0.0 - Beta on Tue Feb 27 10:11:00 2001
(c) Copyright 2000 Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.0.0.0.0 - Beta
JServer Release 9.0.0.0.0 - Beta
SQL>
Troubleshooting
The following describes common errors and how they can be resolved.
ORA-12154: “TNS:could not resolve service name”
Cause Oracle Net Services cannot locate the connect descriptor specified in the
tnsnames.ora configuration file.
Actions
1. Verify that a tnsnames.ora file exists and that it is accessible.
2. Verify that the tnsnames.ora file is in the location specified by the TNS_ADMIN
environment variable.
3. In your tnsnames.ora file, verify that the service name specified in your connection
string is mapped to a connect descriptor in the tnsnames.ora file. Also, verify that
there are no syntax errors in the file.
4. Verify that there are no duplicate copies of the sqlnet.ora file.
5. If you are connecting from a login dialog box, verify that you are not placing an at
symbol (@) before your connection service name.
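As a quick check (the service name is illustrative), you can confirm which directory Oracle Net is reading its configuration files from and whether the alias resolves at all:
$ echo $TNS_ADMIN
$ tnsping MY_SERVICE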
5 Usage and Configuration of the Oracle Shared Server
(Diagram: dedicated server configuration; each user process on the client connects to its own server process, which works against the instance and its SGA on the database server.)
(Diagram: Oracle Shared Server configuration; user processes connect to dispatcher processes such as D001 and D002, and their requests are serviced by shared server processes (Snnn) on the database server.)
• Required Parameters
 – DISPATCHERS
 – SHARED_SERVERS
• Optional Parameters
 – MAX_DISPATCHERS
 – MAX_SHARED_SERVERS
 – CIRCUITS
 – SHARED_SERVER_SESSIONS
(All of these are initSID.ora parameters.)
Init.ora file:
dispatchers = "(PROTOCOL=TCP)(DISPATCHERS=2)\
(PROTOCOL=IPC)(DISPATCHERS=1)"
PROTOCOL (PRO or PROT): Specifies the network protocol for which the dispatcher generates a listening endpoint.
ADDRESS (ADD or ADDR): Specifies the network protocol address of the endpoint on which the dispatchers listen.
DESCRIPTION (DES or DESC): Specifies the network description of the endpoint on which the dispatchers listen, including the network protocol address. For example: (DESCRIPTION=(ADDRESS=...))
Default value: 0
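In Oracle9i the DISPATCHERS parameter can also be modified dynamically; the following sketch (the protocol and dispatcher count are illustrative) adds dispatchers without restarting the instance:
SQL> ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=3)';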
Init.ora file examples (instance TST8i):
max_dispatchers = 5
max_shared_servers = 10
CIRCUITS = 100
SHARED_SERVER_SESSIONS = 100
(Diagram: dispatcher processes D001 through D005 serving TCP/IP and IPC connections; MAX_DISPATCHERS limits how many can be started.)
Related Parameters
Other parameters affected by Oracle Shared Server that may require adjustment:
• LARGE_POOL_SIZE specifies the size in bytes of the large pool allocation heap.
Oracle Shared Server may force the default value to be set too high, causing
performance problems or problems starting the database.
• SESSIONS specifies the maximum number of sessions that can be created in the system and may need to be adjusted for Oracle Shared Server.
Use the large pool to allocate shared server-related UGA (User Global Area), not the shared
pool. This is because Oracle uses the shared pool to allocate SGA (Shared Global Area)
memory for other purposes, such as shared SQL and PL/SQL procedures. Using the large
pool instead of the shared pool decreases fragmentation of the shared pool.
To store shared server-related UGA in the large pool, specify a value for the initialization
parameter LARGE_POOL_SIZE. To see in which pool (shared pool or large pool) the
memory for an object resides, see the POOL column in V$SGASTAT. LARGE_POOL_SIZE
does not have a default value, but its minimal value is 300K. If you do not set a value for
LARGE_POOL_SIZE, then Oracle uses the shared pool for Oracle Shared Server user
session memory.
Oracle allocates some fixed amount of memory (about 10K) per configured session from the
shared pool, even if you have configured the large pool. The CIRCUITS initialization
parameter specifies the maximum number of concurrent shared server connections that the
database allows.
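A sketch of setting the large pool and checking where the memory is reported (the size is illustrative; the query simply reads V$SGASTAT as described above):
LARGE_POOL_SIZE = 50M        # init.ora entry
SQL> SELECT pool, name, bytes FROM v$sgastat WHERE pool = 'large pool';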
Verifying Setup
$ lsnrctl services
• V$CIRCUIT
• V$SHARED_SERVER
• V$DISPATCHER
• V$SHARED_SERVER_MONITOR
• V$QUEUE
• V$SESSION
V$CIRCUIT: Contains information about virtual circuits, which are user connections to the database through dispatchers and servers.
V$SHARED_SERVER: Contains information about the shared server processes.
V$DISPATCHER: Provides information about the dispatcher processes.
V$SHARED_SERVER_MONITOR: Contains information for tuning the shared server processes.
V$QUEUE: Contains information about the request and response queues.
V$SESSION: Lists session information for each current session.
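For example (a sketch), the following queries report dispatcher activity and the number of virtual circuits currently in use:
SQL> SELECT name, status, messages, busy FROM v$dispatcher;
SQL> SELECT COUNT(*) FROM v$circuit;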
6 Backup and Recovery Overview
Overview
One of a database administrator’s (DBA) major responsibilities is to ensure that the database
is available for use. The DBA can take precautions to minimize failure of the system.
In spite of the precautions, it is naive to think that failures will never occur. The DBA must
make the database operational as quickly as possible in case of a failure and minimize the
loss of data.
To protect the data from the various types of failures that can occur, the DBA must back up
the database regularly. Without a current backup, it is impossible for the DBA to get the
database up and running if there is a file loss, without losing data.
Backups are critical for recovering from different types of failures. The task of validating
backups cannot be overemphasized. Assuming that a backup exists without actually checking its existence can prove very costly if the backup is not valid.
• Statement failure
• User process failure
• User error
• Instance failure
• Media failure
• Network failure
Categories of Failures
Different types of failures may occur in an Oracle database environment. These include:
• Statement failure
• User process failure
• User error
• Instance failure
• Media failure
• Network failure
Each type of failure requires a varying level of involvement by the DBA to recover
effectively from the situation. In some cases, recovery depends on the type of backup strategy
that has been implemented. For example, a statement failure requires little DBA intervention,
whereas a media failure requires the DBA to employ a tested recovery strategy.
Statement Failure
Statement failure occurs when there is a logical failure in the handling of a statement in an
Oracle program. Types of statement failures include:
• A logical error occurs in the application.
• The user attempts to enter invalid data into the table, perhaps violating integrity
constraints.
• The user attempts an operation with insufficient privileges, such as an insert into a table on which the user has only SELECT privileges.
• The user attempts to create a table but exceeds the user’s allotted quota limit.
• The user attempts an INSERT or UPDATE on a table, causing an extent to be allocated,
but insufficient free space is available in the tablespace.
Note: When a statement failure is encountered, it is likely that the Oracle server or the
operating system will return an error code and a message. The failed SQL statement is
automatically rolled back, then control is returned to the user program. The application
developer or DBA can use the Oracle error codes to diagnose and help resolve the failure.
User Errors
DBA intervention is usually required to recover from user errors.
Common Types of User Errors
• The user accidentally drops or truncates a table.
• The user deletes all rows in a table.
• The user commits data, but discovers an error in the committed data.
Instance Failure
An instance failure may occur for numerous reasons:
• A power outage occurs that causes the server to become unavailable.
• The server becomes unavailable due to hardware problems such as a CPU failure,
memory corruption, or an operating system crash.
• One of the Oracle server background processes (DBWn, LGWR, PMON, SMON,
CKPT) experiences a failure.
To recover from instance failure, the DBA:
• Starts the instance by using the STARTUP command. The Oracle server will
automatically recover, performing both the roll forward and rollback phases.
• Investigates the cause of failure by reading the instance alert.log file and any other
trace files that were generated during the instance failure.
Instance Recovery
Instance recovery restores a database to its transaction-consistent state just prior to instance
failure. The Oracle server automatically performs instance recovery when the database is
opened if it is necessary.
No recovery action needs to be performed by you. All required redo information is read by
SMON. To recover from this type of failure, start the database:
SQL> CONNECT / AS sysdba;
Connected.
SQL> STARTUP;
. . .
Database opened.
After the database has opened, notify users that any data that they did not commit must be re-
entered.
Media Failure
Media failure involves a physical problem when reading from or writing to a file that is
necessary for the database to operate. Media failure is the most serious type of failure
because it usually requires DBA intervention.
Common Types of Media Related Problems
• The disk drive that held one of the database files experienced a head crash.
• There is a physical problem reading from or writing to the files needed for normal
database operation.
• A file was accidentally erased.
• Business requirements
• Operational requirements
• Technical considerations
• Management concurrence
• Mean-Time-To-Recover
• Mean-Time-Between-Failure
• Evolutionary process
Business Impact
You should understand the impact that down time has on the business. Management must
quantify the cost of down time and the loss of data and compare this with the cost of reducing
down time and minimizing data loss.
MTTR Database availability is a key issue for a DBA. In the event of a failure the DBA
should strive to reduce the Mean-Time-To-Recover (MTTR). This strategy ensures that the
database is unavailable for the shortest possible amount of time. Anticipating the types of
failures that can occur and using effective recovery strategies, the DBA can ultimately reduce
the MTTR.
MTBF Protecting the database against various types of failures is also a key DBA task. To
do this, a DBA must increase the Mean-Time-Between-Failures (MTBF). The DBA must
understand the backup and recovery structures within an Oracle database environment and
configure the database so that failures do not often occur.
Evolutionary Process A backup and recovery strategy evolves as business, operational, and
technical requirements change. It is important that both the DBA and appropriate
management review the validity of a backup and recovery strategy on a regular basis.
• 24-hour operations
• Testing and validating backups
• Database volatility
24-Hour Operations
Backups and recoveries are always affected by the type of business operation that you
provide, particularly in a situation where a database must be available 24 hours a day, 7 days
a week for continuous operation. Proper database configuration is necessary to support these
operational requirements because they directly affect the technical aspects of the database
environment.
Testing Backups
DBAs can ensure that they have a strategy that enables them to decrease the MTTR and
increase the MTBF by having a plan in place to test the validity of backups regularly. A
recovery is only as good as the backups that are available. Here are some questions to
consider when selecting a backup strategy:
• Can you depend on system administrators, vendors, backup DBAs, and other critical
personnel when you need help?
• Can you test your backup and recovery strategies at frequently scheduled intervals?
• Are backup copies stored at an off-site location?
• Is a plan well documented and maintained?
Natural Disaster
Perhaps your data is so important that you must ensure resiliency even in the event of a
complete system failure. Natural disasters and other issues can affect the availability of your
data and must be considered when creating a disaster recovery plan. Here are some questions
to consider when selecting a backup and recovery strategy:
• What will happen to your business in the event of a serious disaster such as:
– Flood, fire, earthquake, or hurricane
– Malfunction of storage hardware or software
• If your database server fails, will your business be able to operate during the hours,
days, or even weeks it might take to get a new hardware system?
• Do you store backups at an off-site location?
7 Instance and Media Recovery Structures
Overview
The Oracle server uses many memory components, background processes, and file structures for
its backup and recovery mechanism. This lesson reviews the concepts presented in the Oracle9i
DBA Fundamentals I course, with an emphasis on backup and recovery requirements.
Oracle Instance
An Oracle instance consists of memory areas (mainly System Global Area [SGA]) and
background processes, namely PMON, SMON, DBWn, LGWR, and CKPT. An instance is
created during the nomount stage of the database startup after the parameter file has been read.
If any of these processes terminate, the instance shuts down.
Memory structures:
Database buffer cache: Memory area used to store blocks read from datafiles. Data is read into the blocks by server processes and written out by DBWn asynchronously.
Log buffer: Memory containing the before and after image copies of changed data to be written to the redo logs.
Large pool: An optional area in the SGA that provides large memory allocations for backup and restore operations, I/O server processes, and session memory for the shared server and Oracle XA.
Shared pool: Stores parsed versions of SQL statements, PL/SQL procedures, and data dictionary information.
Background Processes
Database writer (DBWn): Writes dirty buffers from the database buffer cache to the datafiles. This activity is asynchronous.
Log writer (LGWR): Writes data from the redo log buffer to the redo log files.
System monitor (SMON): Performs automatic instance recovery. Recovers space in temporary segments when they are no longer in use. Merges contiguous areas of free space, depending on parameters that are set.
Process monitor (PMON): Cleans up the connection/server process dedicated to an abnormally terminated user process. Performs rollback and releases the resources held by the failed process.
Checkpoint (CKPT): Synchronizes the headers of the datafiles and control files with the current redo log and checkpoint numbers.
Archiver (ARCn, optional): Automatically copies redo logs that have been marked for archiving.
Dynamic Views
The Oracle server provides a number of standard views to obtain information on the database
and instance. These views include:
• V$SGA: Queries the size of the instance for the shared pool, log buffer, data buffer cache,
and fixed memory sizes (operating system-dependent)
• V$INSTANCE: Queries the status of the instance, such as the instance mode, instance
name, startup time, and host name
• V$PROCESS: Queries the background and server processes created for the instance
• V$BGPROCESS: Queries the background processes created for the instance
• V$DATABASE: Lists status and recovery information about the database. It includes
information on the database name, the unique database identifier, the creation date, the
control file creation date and time, the last database checkpoint, and other information.
• V$DATAFILE: Lists the location and names of the data files that are contained in the
database. It includes information relating to the file number and name, creation date, status
(online or offline), enabled (read-only, read-write), last data file checkpoint, size, and other
information.
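A sketch of typical queries against these views:
SQL> SELECT instance_name, status, startup_time FROM v$instance;
SQL> SELECT name, status, checkpoint_change# FROM v$datafile;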
(Diagram: multiplexed redo log files; each group has one member on Disk 1 (Log1a.rdo, Log2a.rdo, Log3a.rdo) and a mirrored member on Disk 2 (Log1b.rdo, Log2b.rdo, Log3b.rdo), alongside the database's datafiles, password file, and archived log files.)
Database Checkpoints
Database checkpoints ensure that all modified database buffers are written to the database files.
The datafile headers are then marked current, and the checkpoint sequence number is
recorded in the control file. Checkpoints synchronize the buffer cache by writing all buffers to
disk whose corresponding redo entries were part of the log file being checkpointed.
Incremental checkpoints are continuous, low overhead checkpoints that write buffers as a
background activity.
Checkpoint Process (CKPT) Features
• The CKPT process is always enabled.
• The CKPT process updates file headers at checkpoint completion.
• More frequent checkpoints reduce the time needed for recovering from instance failure at
the possible expense of performance.
Database Synchronization
An Oracle database cannot be opened unless all datafiles, redo logs, and control files are synchronized. If they are not synchronized, recovery is required.
Database File Synchronization
• For the database to open, all datafiles must have the same checkpoint number, unless they
are offline or part of a read-only tablespace.
• Synchronization of all Oracle files is based on the current redo log checkpoint and
sequence numbers.
• Archived and online redo log files recover committed transactions and roll back
uncommitted transactions to synchronize the database files.
• Archived and online redo log files are automatically requested by the Oracle server during
the recovery phase. Make sure logs exist in the requested location.
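One way to see whether the file headers and the control file agree (a sketch) is to compare the checkpoint SCNs recorded in each:
SQL> SELECT file#, checkpoint_change# FROM v$datafile;        -- as recorded in the control file
SQL> SELECT file#, checkpoint_change# FROM v$datafile_header; -- as recorded in the datafile headers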
Phase 1 - Unsynchronized files: The Oracle server determines whether a database needs recovery when unsynchronized files are found. Instance failure, such as a shutdown abort, can cause this to happen. This situation causes loss of uncommitted data because memory is not written to disk and files are not synchronized before shutdown.
Phase 2 - Roll forward phase: DBWn writes both committed and uncommitted data to the datafiles. The purpose of the roll forward phase is to apply all changes recorded in the log file to the data blocks.
Note:
- Undo segments are populated during the roll forward phase. Because redo logs store both before and after data images, an undo segment entry is added if an uncommitted block is found in the datafile and no rollback entry exists.
- Redo logs are applied using log buffers. The buffers used are marked for recovery and do not participate in normal transactions until they are relinquished by the recovery process.
- Redo logs are applied to a read-only datafile if a status conflict occurs (that is, the file header states the file is read-only, yet the control file recognizes it as read-write, or vice versa).
Phase 3 - Committed and uncommitted data in datafiles: Once the roll forward phase has successfully completed, all committed data resides in the datafiles, although uncommitted data still might exist.
Phase 4 - Roll back phase: To remove the uncommitted data from the files, undo segments populated during the roll forward phase or prior to the crash are used. Blocks are rolled back when requested by either the Oracle server or a user, depending on who requests the block first. The database is therefore available even while roll back is running; only those data blocks participating in roll back are not available.
Phase 5 - Committed data in datafiles: When both the roll forward and roll back phases have completed, only committed data resides on disk.
Phase 6 - Synchronized datafiles: All datafiles are now synchronized.
(Diagram: fast-start parallel rollback; SMON and parallel processes P000 through P003 roll back a transaction with more than 100 rollback blocks against the tables and rollback segment, giving improved response.)
FAST_START_PARALLEL_ROLLBACK parameter values:
• FALSE: None
• LOW: 2 * CPU_COUNT
• HIGH: 4 * CPU_COUNT
Views for monitoring parallel rollback:
• V$FAST_START_SERVERS
• V$FAST_START_TRANSACTIONS
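A sketch of controlling and monitoring fast-start parallel rollback (the degree shown is illustrative):
SQL> ALTER SYSTEM SET FAST_START_PARALLEL_ROLLBACK = 'HIGH';
SQL> SELECT * FROM v$fast_start_servers;
SQL> SELECT * FROM v$fast_start_transactions;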
8 Configuring the Database Archiving Mode
(Diagram: redo log history; LGWR writes to the online redo log files in a circular fashion (log sequence numbers 051 through 054). With archiving, the sequence of archived files preserves the redo history; without archiving, no redo history is kept once an online log is overwritten.)
Noarchivelog Mode
By default, a database is created in Noarchivelog mode. The characteristics of operating a
database in Noarchivelog mode are as follows:
• Redo log files are used in a circular fashion.
• A redo log file can be reused immediately after a checkpoint has taken place.
• After redo logs are overwritten, media recovery is only possible to the last full backup.
Implications of Noarchivelog Mode
• If a tablespace becomes unavailable because of a failure, you cannot continue to operate
the database until the tablespace has been dropped or the entire database has been
restored from backups.
• You can perform operating system backups of the database only when the database is
shut down. It must have been shut down with the Normal or Immediate option.
• You must back up the entire set of datafiles and control files during each backup.
Although you can back up the online redo log files, it is not necessary. The files in this
type of backup are all consistent and do not need recovery, so the online logs are not
needed.
• If the online redo log files have been overwritten, you will lose all data since the last
full backup.
(Diagram: Archivelog mode; LGWR cycles through the online redo log files (sequences 051 through 054) while filled groups are copied to archived log files, preserving the redo history.)
Archivelog Mode
A filled redo log file cannot be reused until a checkpoint has taken place and the redo log file
has been backed up by the ARCn background process. An entry in the control file records the
log sequence number of the archived log file.
The most recent changes to the database are available at any time for instance recovery, and
the archived redo log files can be used for media recovery.
Archiving Requirements
The database must be in Archivelog mode. Issuing the command to put the database into
Archivelog mode updates the control file. The ARCn background processes can be enabled to
implement automatic archiving.
Sufficient resources should be available to hold generated archived redo log files.
(Diagram: changing the archiving mode involves the init.ora file and the control file; step 1 is SHUTDOWN NORMAL or IMMEDIATE.)
1. Shut down the database: SQL> SHUTDOWN IMMEDIATE
2. Start the database in mount state so that you can alter the archiving mode of the database: SQL> STARTUP MOUNT
3. Set the database in Archivelog mode by using the ALTER DATABASE command: SQL> ALTER DATABASE ARCHIVELOG;
4. Open the database: SQL> ALTER DATABASE OPEN;
5. Take a full backup of the database.
Note: After the mode has been changed from Noarchivelog mode to Archivelog, you must
back up all the datafiles and the control file. Your previous backup is not usable anymore
because it was taken while the database was in Noarchivelog mode.
The new backup that is taken after putting the database into Archivelog mode is the backup to which all of your future archived redo log files will be applied.
Setting the database in Archivelog mode does not enable the Archiver (ARCn) processes.
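You can confirm the archiving mode afterward, for example:
SQL> ARCHIVE LOG LIST
SQL> SELECT log_mode FROM v$database;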
(Diagram: multiple archiver processes; ARC0 and ARC1 each archive filled redo log files, as configured by the DBA.)
LOG_ARCHIVE_MAX_PROCESSES Parameter
Parallel Data Definition Language (DDL) and parallel Data Manipulation Language (DML)
operations may generate a large number of redo log files. A single ARC0 process might not be able to keep up with archiving them, so Oracle starts additional archiver processes as needed.
However, if you wish to avoid the run-time overhead of invoking the additional processes, you
can specify the number of processes to be started at instance startup.
You can specify up to ten ARCn processes by using the LOG_ARCHIVE_MAX_PROCESSES
parameter.
When LOG_ARCHIVE_START is set to TRUE, an Oracle instance starts up with as many
archiver processes as defined by LOG_ARCHIVE_MAX_PROCESSES.
You can always spawn additional archive processes, up to the limit set by
LOG_ARCHIVE_MAX_PROCESSES, or kill archive processes at any time during the
instance life.
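Because the parameter is dynamic, the number of archiver processes can also be changed while the instance is running; for example:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;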
(Slide examples, enabling automatic archiving at and after instance startup: in the init.ora file set LOG_ARCHIVE_START=TRUE and LOG_ARCHIVE_MAX_PROCESSES=n; for example, LOG_ARCHIVE_MAX_PROCESSES=2 starts the archiver processes ARC0 and ARC1.)
To stop the archiver processes dynamically:
ALTER SYSTEM ARCHIVE LOG STOP;
Note: Stopping ARCn processes does not set the database in Noarchivelog mode. When all
groups of redo logs are used and not archived, the database will hang if it is in Archivelog
mode.
ALTER SYSTEM ARCHIVE LOG SEQUENCE 052;
1. Execute the ALTER SYSTEM SQL command: SQL> ALTER SYSTEM ARCHIVE LOG SEQUENCE 052;
2. The server process for the user executing the command performs the archiving of the online redo log files.
In addition, you can use manual archiving when automatic archiving is enabled to re-archive
an inactive group to another destination.
You must connect with administrator privileges to issue the ALTER SYSTEM ARCHIVE LOG
command.
THREAD: Specifies the thread containing the redo log file group to be archived (for Oracle Parallel Server).
SEQUENCE: Archives the online redo log file group identified by the log sequence number.
CHANGE: Archives based upon the SCN.
CURRENT: Archives the current redo log file group of the specified thread.
LOGFILE: Archives the redo log file group with members identified by filename.
NEXT: Archives the oldest online redo log file group that has not been archived.
ALL: Archives all online redo log file groups.
log_archive_dest_1 = "LOCATION=/archive1"
log_archive_dest_2 = "SERVICE=standby_db1"
log_archive_dest_1="LOCATION=/archive
MANDATORY REOPEN"
log_archive_dest_2="SERVICE=standby_db1
MANDATORY REOPEN=600"
log_archive_dest_3="LOCATION=/archive2
OPTIONAL"
• LOG_ARCHIVE_MIN_SUCCEED_DEST parameter
LOG_ARCHIVE_MIN_SUCCEED_DEST = 2
LOG_ARCHIVE_DEST_STATE_n Parameter
• The state of an archive destination can be changed dynamically. By default, an archive
destination is in the ENABLE state, indicating that the Oracle server can use this
destination.
• The state of an archive destination can be modified by setting the corresponding
LOG_ARCHIVE_DEST_STATE_n parameter. For example, to stop archiving to a
mandatory location temporarily when an error has occurred, the state of that destination
can be set to DEFER. A destination may be defined, but is set to DEFER in the
parameter file. This destination can then be enabled when another destination has an
error or needs maintenance.
Note: Archiving is not performed to a destination when the state is set to DEFER. If the state
of this destination is changed to ENABLE, any missed logs must be manually archived to this
destination.
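For example (the destination number is illustrative), a destination can be deferred and later re-enabled dynamically:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = DEFER;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;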
(Diagram: the archiver ARC0 copies filled online redo log groups 1 and 2 (sequences 052 and 053) to the destination specified by LOG_ARCHIVE_DEST_n, naming the files according to LOG_ARCHIVE_FORMAT, for example /ORADATA/archive/arch%s.arc.)
Specifying LOG_ARCHIVE_FORMAT
LOG_ARCHIVE_FORMAT = extension
where extension should include the variable %s or %S for the log sequence number. The default value is operating system-specific.
Example on UNIX and NT: LOG_ARCHIVE_FORMAT=arch%s.arc
Filename Options
• %s or %S: Includes the log sequence number as part of the filename.
• %t or %T: Includes the thread number as part of the filename.
• Using %S causes the value to be a fixed length padded to the left with zeros.
Dynamic Views
V$ARCHIVED_LOG
V$ARCHIVE_DEST
V$LOG_HISTORY
V$DATABASE
V$ARCHIVE_PROCESSES
Command Line
Dynamic Views
You can view information about the archived log files by using the following views:
• V$ARCHIVED_LOG: Displays archived log information from the control file.
• V$ARCHIVE_DEST: For the current instance, describes all archive log destinations,
the current value, mode, and status.
SELECT destination, binding, target, status
FROM v$archive_dest;
DESTINATION BINDING TARGET STATUS
---------------------- --------- ------- --------
/db1/oracle/DEMO/arch MANDATORY PRIMARY VALID
/db2/oracle/DEMO/arch OPTIONAL PRIMARY DEFERRED
standbyDEMO OPTIONAL STANDBY ERROR
OPTIONAL PRIMARY INACTIVE
OPTIONAL PRIMARY INACTIVE
9 Oracle Recovery Manager Overview and Configuration
(Diagram: Recovery Manager components; the RMAN executable, which can also be driven from Oracle Enterprise Manager, starts server sessions against the target database (a polling session) and against the recovery catalog database (an rcvcat session), and writes backups to disk.)
Channel Allocation
A channel represents one stream of data to a device type. A channel must be allocated before
you execute backup and recovery commands. Each allocated channel establishes a
connection from the RMAN executable to a target or auxiliary database instance (either a
database created with the duplicate command or a temporary database used in TSPITR)
by starting a server session on the instance. This server session performs the backup and
recovery operations. Only one RMAN session communicates with the allocated server
sessions.
Each channel usually corresponds to one output device, unless your MML is capable of
hardware multiplexing.
You can allocate channels manually or preconfigure channels for use in all RMAN sessions
using automatic channel allocation.
Manual Channel Allocation
The ALLOCATE CHANNEL command with a RUN command and the ALLOCATE
CHANNEL FOR MAINTENANCE command issued at the RMAN prompt are used to
allocate a channel manually. Manual channel allocation overrides automatic allocation.
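A sketch of each approach (the channel name and device type are illustrative):
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;      # automatic channel allocation
RMAN> RUN {
        ALLOCATE CHANNEL c1 DEVICE TYPE DISK;        # manual allocation inside a RUN block
        BACKUP DATABASE;
      }
RMAN> ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE DISK;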
(Diagram: media management; Recovery Manager and the Oracle server session link in the media management library, which communicates with the media management server software to read and write a tape library or a single tape drive.)
Media Management
To use tape storage for your database backups, RMAN requires a media manager. A media
manager is a utility that loads, labels, and unloads sequential media, such as tape drives for
the purpose of backing up, restoring, and recovering data. The Oracle server calls MML
software routines to back up and restore data files to and from media that is controlled by the
media manager.
Some media management products can manage the entire data movement between Oracle
data files and the backup devices. Some products that use high-speed connections between
storage and media subsystems can offload much of the backup work from the primary
database server.
Note that the Oracle server does not need to connect to the media management library
(MML) software when it backs up to disk.
The Oracle Backup Solutions Program (BSP) provides a range of media management
products that are compliant with Oracle’s MML specification. Software that is compliant
with the MML interface enables an Oracle server session to back up to a media manager and
request the media manager to restore backups. Check with your media vendor to determine
whether it is a member of the Oracle BSP.
• Target database
• Recovery catalog database
• Auxiliary database
– Standby database
– Duplicate database
– TSPITR instance
• Interactive mode
– Use it when doing analysis
– Minimize regular usage
– Avoid using with log option
• Batch mode
– Meant for automated jobs
– Minimize operator errors
– Set the log file to obtain information
Recovery Manager
Recovery Manager acts as a command-line interpreter (CLI) with its own command language.
There are two modes of operation with RMAN: interactive and batch.
Interactive Mode To run RMAN commands interactively, start RMAN and then type
commands into the command-line interface. For example, you can start RMAN from the
UNIX command shell and then execute interactive commands as follows:
$ rman target sys/sys_pwd@db1
RMAN> BACKUP DATABASE;
Batch Mode You can type RMAN commands into a file, and then run the command file by
specifying its name on the command line. The contents of the command file should be
identical to commands entered at the command line.
When running in batch mode, RMAN reads input from a command file and writes output
messages to a log file (if specified).
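A minimal sketch of a batch invocation (the file names are assumptions): the command file contains the same commands that would be typed interactively, and the log file captures the output.
$ cat backup_db.rcv
BACKUP DATABASE;
EXIT;
$ rman target sys/sys_pwd@db1 @backup_db.rcv log backup_db.log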
RMAN Commands
RMAN has two basic types of commands: stand-alone and job commands.
Stand-alone commands are executed at the RMAN prompt and are generally self-contained.
Following are some of the stand-alone commands:
• CHANGE
• CONNECT
• CREATE CATALOG, RESYNC CATALOG
• CREATE SCRIPT, DELETE SCRIPT, REPLACE SCRIPT
Job commands are usually grouped, and RMAN executes them sequentially inside a RUN
command block. If any command within the block fails, RMAN stops processing; no further
commands within the block are executed.
There are some commands that can be issued either at the prompt or within RUN. Executing
stand-alone commands at the RMAN prompt allows you to take advantage of the automatic
channel functionality.
You can execute the commands in interactive mode or batch mode.
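For example (a sketch that assumes automatic channels are configured), a stand-alone command runs directly at the prompt, whereas job commands are grouped in a RUN block and executed in order:
RMAN> RESYNC CATALOG;
RMAN> RUN {
2> BACKUP DATABASE;
3> SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT'; }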
[Slide graphic: Recovery Manager calls the PL/SQL packages dbms_rcvcat and dbms_rcvman against the recovery catalog, and dbms_backup_restore and dbms_rcvman against the target database control file.]
Backup Terminology
Whole Database Backup
Whole database backup (also known as whole backup) refers to a backup of all datafiles and the
control file of the database. Whole backups can be performed when the database is closed or
open. This is the most common method of backup.
The whole backup that is taken when the database is closed (after the database is shut down
using the NORMAL, IMMEDIATE, or TRANSACTIONAL options) is called a consistent
backup. In such a backup, all the database file headers are consistent with the control file, and
when restored completely, the database can be opened without any recovery. When the database
is operated in Noarchivelog mode, only a consistent whole database backup is valid for restore
and recovery.
When the database is open and operational, the datafile headers are not consistent with the
control file unless the database is open in read-only mode. When the database is shut down with
the ABORT option, this inconsistency persists. A backup of the database in such a state is
termed an inconsistent backup. Inconsistent backups need recovery to bring the database into
a consistent state. When databases need to be available 7 days a week, 24 hours a day, you have
no option but to use an inconsistent backup, and this can be performed only on databases
running in Archivelog mode.
V$DATAFILE
V$CONTROLFILE
V$LOGFILE
DBA_DATA_FILES
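These views can be queried to build the list of files that a whole database backup must include; a minimal sketch:
SQL> SELECT name FROM v$datafile
  2  UNION ALL
  3  SELECT name FROM v$controlfile
  4  UNION ALL
  5  SELECT member FROM v$logfile;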
[Slide graphic: a physical backup copies the database files to online or offline storage.]
• Conceptually simple
• Easy to perform
• Require little operator interaction
1 SHUTDOWN IMMEDIATE;
2 HOST cp <files> /backup/
3 STARTUP OPEN;
[Slide graphic: the files copied in a consistent whole database backup include the control files, datafiles, online redo log files, archived redo log files, password file, and parameter file.]
[Slide graphic: in Archivelog mode, LGWR writes to the online redo log files (sequences 051 through 054) and ARC0 copies each filled log to the archived redo log files before it can be overwritten.]
[Slide graphic: open database backups of Datafile 1 (checkpoint 143) and Datafile 2 (checkpoint 144); because the backups are taken while the database is open, the copies carry different checkpoint numbers.]
Dynamic views
V$BACKUP
V$DATAFILE_HEADER
Dynamic Views
You can obtain information about the status of datafiles while performing open database
backups by querying the V$BACKUP and V$DATAFILE_HEADER views.
V$BACKUP View
Query the V$BACKUP view to determine which files are in backup mode. When an ALTER
TABLESPACE BEGIN BACKUP command is issued, the status changes to ACTIVE.
SQL> SELECT * FROM v$backup;
FILE# STATUS CHANGE# TIME
------ ----------- ------- ---------
1 NOT ACTIVE 0
2 NOT ACTIVE 0
3 ACTIVE 312905 05-APR-01
…
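A minimal sketch of an open (hot) backup of the USERS tablespace, assuming it consists of the single datafile used elsewhere in this lesson and that the database runs in Archivelog mode:
SQL> ALTER TABLESPACE users BEGIN BACKUP;
SQL> HOST cp /ORADATA/users_01_db01.dbf /BACKUP/
SQL> ALTER TABLESPACE users END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;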
[Slide graphic: making a tablespace read-only; the QUERY_DATA datafile is frozen at SCN 1 and backed up once, while DBW0 continues to write to the read-write USERS datafiles at SCN 2.]
Legend
Number  Explanation
1       Change the status of a tablespace from read-write to read-only by
        using the ALTER TABLESPACE SQL command:
        SQL> ALTER TABLESPACE query_data READ ONLY;
2       When the ALTER TABLESPACE command is issued, a checkpoint is
        performed for all datafiles associated with the tablespace. The file
        headers are then frozen with the current SCN.
3       When you make a tablespace read-only, you must back up all of the
        datafiles for the tablespace.
Logging and Nologging
[Slide graphic: the DBVERIFY utility reads online and offline datafiles (steps 1 through 3) and directs error reporting to a log (step 4).]
Step  Explanation
1     The utility can be used to verify online datafiles.
2     You can invoke the utility on a portion of a datafile.
3     The utility can also be used to verify offline datafiles.
4     You can direct the output of the utility to an error log.
Running DBVERIFY
The name of the executable for the DBVERIFY utility varies across operating systems. It is
located in the bin directory under the Oracle Home directory. In the UNIX environment, you
execute the dbv executable.
DBVERIFY Parameters
Parameter   Description
FILE        Name of the database file to verify
START       Starting block address to verify. The block address is specified
            in Oracle blocks. If START is not specified, the first block in
            the file is assumed.
END         Ending block address to verify. If END is not specified, the last
            block in the file is assumed.
BLOCKSIZE   Required only if the file has a block size greater than 2 KB
LOGFILE     Specifies the file to which logging information should be
            written. The default is to send output to the terminal display.
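For example (a sketch; the file name, block size, and log file name are assumptions):
$ dbv FILE=/ORADATA/users_01_db01.dbf BLOCKSIZE=8192 LOGFILE=dbv_users.log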
[Slide graphic: three backup sets; datafile backup sets containing datafiles and a control file, and an archived log backup set containing copies of archived log files.]
Backup Sets
A backup set consists of one or more physical files stored in an RMAN-specific format, on
either disk or tape. You can make a backup set containing datafiles, control files, and
archived redo log files. You can also back up a backup set. Backup sets can be of two types:
• Datafile: Can contain datafiles and control files, but not archived logs
• Archived log: Contains archived logs, not datafiles or control files
Note: Backup sets may need to be restored by Recovery Manager before recovery can be
performed, unlike image copies, which are generally available on disk and can be used directly.
Control Files in Datafile Backup Sets
Each file in a backup set must have the same Oracle block size (control files and datafiles
have the same block size, whereas archived log block sizes are machine dependent). When a
control file is included, it is written in the last datafile backup set. A control file can be
included in a backup set either:
• Explicitly, by using the INCLUDE CURRENT CONTROLFILE syntax
• Implicitly, by backing up file 1 (the SYSTEM datafile)
The RMAN BACKUP command is used to back up datafiles, archived redo log files, and
control files. The BACKUP command backs up the files into one or more backup sets on disk
or tape. You can make the backups when the database is open or closed. Backups can be full
or incremental backups.
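For example, the control file can be added to a tablespace backup explicitly (a sketch that assumes automatic channel allocation):
RMAN> BACKUP TABLESPACE users
2> INCLUDE CURRENT CONTROLFILE;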
Backup Piece
A backup set is a logical grouping that usually consists of only one backup piece. A backup
piece is a single physical file that can contain one or more Oracle datafiles or archived logs.
For a large database, a backup set might exceed the maximum size for a single tape reel,
physical disk, or operating system file. The size of each backup set piece can therefore be
limited by using MAXPIECESIZE with the CONFIGURE CHANNEL or ALLOCATE
CHANNEL commands.
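For example, with automatic channels the limit can be preconfigured (a sketch; the 2G value is arbitrary):
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2G;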
RMAN> BACKUP
2> FORMAT ’/BACKUP/df_%d_%s_%p.bus’
3> DATABASE filesperset = 2;
Option                     Significance
full                       The server session copies all blocks into the backup set, skipping
                           only datafile blocks that have never been used. The server session
                           does not skip blocks when backing up archived redo logs or control
                           files. A full backup is not considered an incremental backup.
incremental level integer  The server session copies only those data blocks that have changed
                           since the last level n incremental backup, where n is any integer
                           from 1 to 4. When attempting an incremental backup at a level
                           greater than 0, the server process checks that a level 0 backup or
                           level 0 copy exists for each datafile in the BACKUP command.
                           If you specify incremental, then in the backup specification you
                           must set one of the following parameters: DATAFILE,
                           DATAFILECOPY, TABLESPACE, or DATABASE. Recovery Manager
                           does not support incremental backups of control files, archived
                           redo logs, or backup sets.
Option                       Significance
include current controlfile  Creates a snapshot of the current control file and places it into
                             each backup set produced by this clause.
format                       Specifies the format of the output file names. The format
                             parameters can be used either individually or in combination.
%c                           Specifies the copy number of the backup piece within a set of
                             duplexed backup pieces.
%p                           Specifies the backup piece number within the backup set. This
                             value starts at 1 for each backup set and is increased by 1 as
                             each backup piece is created.
%s                           Specifies the backup set number. This number is a counter in the
                             control file that is increased for each backup set.
RMAN> RUN {
2> ALLOCATE CHANNEL t1 TYPE ’SBT_TAPE’
3> MAXPIECESIZE = 4G;
4> BACKUP
5> FORMAT ’df_%t_%s_%p’ FILESPERSET 3
6> (tablespace users); }
[Slide graphic: with FILESPERSET = 3, one server process (channel) reads Datafile 1, Datafile 2, and Datafile 3 and multiplexes their blocks (1,2,3,1,2,3…) into a single backup set written through the MML to tape.]
[Slide graphic: backup sets containing copies of Datafile 1 and Datafile 2.]
RMAN> BACKUP
2> FORMAT ’/disk1/backup/ar_%t_%s_%p’
3> ARCHIVELOG ALL DELETE ALL INPUT;
Backup Constraints
When performing a backup using Recovery Manager, you must be aware of the following:
• The target database must be mounted for Recovery Manager to connect.
• Backups of online redo logs are not supported.
• If the target database is in Noarchivelog mode, only “clean” tablespace and datafile
backups can be taken (that is, backups of “offline normal” or “read only” tablespaces).
Database backups can be taken only if the database has first been shut down cleanly and
restarted in Mount mode.
• If the target database is in Archivelog mode, only “current” datafiles can be backed up
(restored datafiles are made current by recovery).
• If a recovery catalog is used, the recovery catalog database must be open.
Image Copies
An image copy contains a single datafile, archived redo log file, or control file. An image
copy can be created with the RMAN COPY command or an operating system command.
When you create the image copy with the RMAN COPY command, the server session
validates the blocks in the file and records the copy in the control file.
[Slide graphic: an image copy of an archived log file.]
RMAN> COPY
2> DATAFILE ’/ORADATA/users_01_db01.dbf’ TO
3> ’/BACKUP/users01.dbf’ tag=DF3,
4> ARCHIVELOG ’arch_1060.arc’ TO
5> ’arch_1060.bak’;
Image Copies
The RMAN COPY command creates an image copy of a file. The output file is always
written to disk. You can copy datafiles, archived redo log files, or control files. In many
cases, copying datafiles is more beneficial than backing them up, because the output is
suitable for use without any additional processing.
If you want to make a whole database backup with the COPY command, you must copy each
datafile with a separate COPY statement. You can also make a copy of the control file and
archived redo log files.
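For example, a copy of the current control file can be taken as follows (a sketch; the output path is an assumption):
RMAN> COPY CURRENT CONTROLFILE TO '/BACKUP/control01.bak';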
The example in the slide assumes that you are using automatic channel allocation. If you are
manually allocating channels, include the COPY command within the RUN statement as
follows:
RMAN> RUN {
2> ALLOCATE CHANNEL c1 type disk;
3> COPY
4> DATAFILE ’/ORADATA/users_01_db01.dbf’ to
5> ’/BACKUP/users01.dbf’ tag=DF3,
6> ARCHIVELOG ’arch_1060.arc’ to
7> ’arch_1060.bak’;}
[Slide graphic: image copies of Datafile 1 and Datafile 3 taken from a database consisting of datafiles, control files, and redo log files.]
[Slide graphics: two example weekly incremental backup schedules]
Day    Sun  Mon  Tue  Wed  Thu  Fri  Sat  Sun
Level  0    2    2    1    2    2    2    0

Day    Sun  Mon  Tue  Wed  Thu  Fri  Sat  Sun
Level  0    2    2C   1    2    2C   2C   0    (C = cumulative)
V$ARCHIVED_LOG
V$BACKUP_CORRUPTION
V$COPY_CORRUPTION
V$BACKUP_DATAFILE
V$BACKUP_REDOLOG
V$BACKUP_SET
V$BACKUP_PIECE
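For example, a quick look at the backup sets and their pieces recorded in the control file (a sketch; the column choice is illustrative):
SQL> SELECT set_stamp, set_count, pieces, completion_time
  2  FROM v$backup_set;
SQL> SELECT piece#, handle, status
  2  FROM v$backup_piece;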