Architecture Components Memory

Connecting to an Oracle Instance – Creating a Session

System users can connect to an Oracle database through SQL*Plus or through an application program such as the
Internet Developer Suite (the program becomes the system user). This connection enables users to execute SQL
statements.

The act of connecting creates a communication pathway between a user process and an Oracle Server. As is shown
in the figure above, the User Process communicates with the Oracle Server through a Server Process. The User
Process executes on the client computer. The Server Process executes on the server computer, and actually
executes SQL statements submitted by the system user.

The figure shows a one-to-one correspondence between the User and Server Processes. This is called a Dedicated
Server connection. An alternative configuration is to use a Shared Server where more than one User Process
shares a Server Process.

Sessions: When a user connects to an Oracle server, this is termed a session. The User Global Area is session
memory and these memory structures are described later in this document. The session starts when the Oracle
server validates the user for connection. The session ends when the user logs out (disconnects) or if the connection
terminates abnormally (network failure or client computer failure).

A user can typically have more than one concurrent session, e.g., the user may connect using SQL*Plus and also
connect using Internet Developer Suite tools at the same time. The limit of concurrent session connections is
controlled by the DBA.

If a system user attempts to connect and the Oracle Server is not running, the system user receives an Oracle
Not Available error message.
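Assuming the instance is running, a DBA can see which sessions are currently connected by querying the V$SESSION view (an illustrative query; output varies by site):

SQL> SELECT username, sid, serial#, status
  2  FROM v$session
  3  WHERE username IS NOT NULL;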

Oracle Database Memory Management and Memory Structures


Memory management - focus is to maintain optimal sizes for memory structures.
Memory is managed based on memory-related initialization parameters.
These values are stored in the init.ora file for each database.

Three basic options for memory management are as follows:


Automatic memory management:
DBA specifies the target size for instance memory.
The database instance automatically tunes to the target memory size.
Database redistributes memory as needed between the SGA and the instance PGA.
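Under this mode the DBA typically sets only the MEMORY_TARGET initialization parameter (optionally bounded by MEMORY_MAX_TARGET). A sketch of the relevant init.ora lines — the 1.5 GB values are illustrative, not taken from DBORCL:

memory_max_target=1610612736
memory_target=1610612736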

Automatic shared memory management:


This management mode is partially automated.
DBA specifies the target size for the SGA.
DBA can optionally set an aggregate target size for the PGA or manage PGA work areas individually.

Prior to Oracle 10g, a DBA had to manually specify SGA component sizes through initialization parameters, such
as the SHARED_POOL_SIZE, DB_CACHE_SIZE, JAVA_POOL_SIZE, and LARGE_POOL_SIZE parameters.

Automatic Shared Memory Management enables a DBA to specify the total SGA memory available through
the SGA_TARGET initialization parameter. The Oracle Database automatically distributes this memory among
various subcomponents to ensure most effective memory utilization.

The DBORCL database SGA_TARGET is set to 1.5 GB in the initDBORCL.ora file:


sga_target=1610612736

With automatic SGA memory management, the different SGA components are flexibly sized to adapt to
the SGA available.

Setting a single parameter simplifies the administration task – the DBA only specifies the amount of SGA memory
available to an instance – the DBA can forget about the sizes of individual components. No out of memory errors are
generated unless the system has actually run out of memory. No manual tuning effort is needed.
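Because SGA_TARGET is a dynamic parameter, it can also be changed without restarting the instance. An illustrative resize, assuming SGA_MAX_SIZE permits the new value:

SQL> ALTER SYSTEM SET sga_target = 1024M;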

The SGA_TARGET initialization parameter reflects the total size of the SGA and includes memory for
the following components:
Fixed SGA and other internal allocations needed by the Oracle Database instance
The log buffer
The shared pool
The Java pool
The buffer cache
The keep and recycle buffer caches (if specified)
Nonstandard block size buffer caches (if specified)
The Streams Pool

If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, then the SGA_MAX_SIZE
value is bumped up to accommodate SGA_TARGET.
When you set a value for SGA_TARGET, Oracle Database 11g automatically sizes the most commonly
configured components, including:
The shared pool (for SQL and PL/SQL execution)
The Java pool (for Java execution state)
The large pool (for large allocations such as RMAN backup buffers)
The buffer cache

There are a few SGA components whose sizes are not automatically adjusted. The DBA must specify the sizes of
these components explicitly, if they are needed by an application. Such components are:
Keep/Recycle buffer caches (controlled
by DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE)
Additional buffer caches for non-standard block sizes (controlled
by DB_nK_CACHE_SIZE, n = {2, 4, 8, 16, 32})
Streams Pool (controlled by the new parameter STREAMS_POOL_SIZE)
The granule size that is currently being used for the SGA for each component can be viewed in the
view V$SGAINFO. The size of each component and the time and type of the last resize operation performed on
each component can be viewed in the view V$SGA_DYNAMIC_COMPONENTS.
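For example, the following illustrative queries display the information described above (column lists abbreviated):

SQL> SELECT name, bytes, resizeable FROM v$sgainfo;

SQL> SELECT component, current_size, last_oper_type, last_oper_time
  2  FROM v$sga_dynamic_components;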

Manual memory management:


Instead of setting the total memory size, the DBA sets many initialization parameters to manage components of the
SGA and instance PGA individually.

If you create a database with Database Configuration Assistant (DBCA) and choose the basic installation option, then
automatic memory management is the default.

The memory structures include three areas of memory:


System Global Area (SGA) – this is allocated when an Oracle Instance starts up.
Program Global Area (PGA) – this is allocated when a Server Process starts up.
User Global Area (UGA) – this is allocated when a user connects to create a session.

User Global Area


The User Global Area is session memory.

A session that loads a PL/SQL package into memory has the package state stored in the UGA. The package
state is the set of values stored in all the package variables at a specific time. The state changes as program code
changes the variables. By default, package variables are unique to and persist for the life of the session.

The OLAP page pool is also stored in the UGA. This pool manages OLAP data pages, which are equivalent to data
blocks. The page pool is allocated at the start of an OLAP session and released at the end of the session. An OLAP
session opens automatically whenever a user queries a dimensional object such as a cube.

Note: Oracle OLAP is a multidimensional analytic engine embedded in Oracle Database 11g. Oracle OLAP cubes
deliver sophisticated calculations using simple SQL queries - producing results with speed of thought response
times.

The UGA must be available to a database session for the life of the session. For this reason, the UGA cannot be
stored in the PGA when using a shared server connection because the PGA is specific to a single process. Therefore,
the UGA is stored in the SGA when using shared server connections, enabling any shared server process access to it.
When using a dedicated server connection, the UGA is stored in the PGA.
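The UGA memory consumed by each session can be observed through the session statistic named 'session uga memory'. An illustrative query joining the relevant views:

SQL> SELECT s.username, st.value AS uga_bytes
  2  FROM v$session s, v$sesstat st, v$statname n
  3  WHERE s.sid = st.sid
  4  AND st.statistic# = n.statistic#
  5  AND n.name = 'session uga memory';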

Program Global Area (PGA)


A PGA is:
a nonshared memory region that contains data and control information exclusively for use by an Oracle process.
A PGA is created by Oracle Database when an Oracle process is started.
One PGA exists for each Server Process and each Background Process. It stores data and control information
for a single Server Process or a single Background Process.
It is allocated when a process is created and the memory is scavenged by the operating system when the process
terminates. This is NOT a shared part of memory – one PGA to each process only.
The collection of individual PGAs is termed the total instance PGA, or simply the instance PGA.
Database initialization parameters set the size of the instance PGA, not individual PGAs.

The Program Global Area is also termed the Process Global Area (PGA) and is a part of memory allocated that
is outside of the Oracle Instance.

The content of the PGA varies, but as shown in the figure above, generally includes the following:

Private SQL Area: Stores information for a parsed SQL statement – stores bind variable values and runtime
memory allocations. A user session issuing SQL statements has a Private SQL Area that may be associated with a
Shared SQL Area if the same SQL statement is being executed by more than one system user. This often happens in
OLTP environments where many users are executing and using the same application program.
Dedicated Server environment – the Private SQL Area is located in the Program Global Area.
Shared Server environment – the Private SQL Area is located in the System Global Area.

Session Memory: Memory that holds session variables and other session information.

SQL Work Areas: Memory allocated for sort, hash-join, bitmap merge, and bitmap create types of operations.
Oracle 9i and later versions enable automatic sizing of the SQL Work Areas by setting
the WORKAREA_SIZE_POLICY = AUTO parameter (this is the default!) and PGA_AGGREGATE_TARGET =
n (where n is some amount of memory established by the DBA). However, the DBA can let the Oracle DBMS
determine the appropriate amount of memory.
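For example, automatic PGA management could be enabled as follows (the 200M target is illustrative), and overall PGA usage can then be monitored through the V$PGASTAT view:

SQL> ALTER SYSTEM SET pga_aggregate_target = 200M;

SQL> SELECT name, value FROM v$pgastat
  2  WHERE name IN ('aggregate PGA target parameter',
  3                 'total PGA allocated');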

System Global Area


The SGA is a read/write memory area that stores information shared by all database processes and by all users of
the database (sometimes it is called the Shared Global Area).
This information includes both organizational data and control information used by the Oracle Server.
The SGA is allocated in memory and virtual memory.
The size of the SGA can be established by a DBA by assigning a value to the parameter SGA_MAX_SIZE in the
parameter file—this is an optional parameter.

The SGA is allocated when an Oracle instance (database) is started up based on values specified in the initialization
parameter file (either PFILE or SPFILE).

The SGA has the following mandatory memory structures:


Database Buffer Cache
Redo Log Buffer
Java Pool
Streams Pool
Shared Pool – includes two components:
Library Cache
Data Dictionary Cache
Other structures (for example, lock and latch management, statistical data)

Additional optional memory structures in the SGA include:


Large Pool

The SHOW SGA SQL command will show you the SGA memory allocations.
This is a recent clip of the SGA for the DBORCL database at SIUE.
In order to execute SHOW SGA you must be connected with the special privilege SYSDBA (which is only available
to user accounts that are members of the DBA Linux group).

SQL> connect / as sysdba


Connected.
SQL> show sga

Total System Global Area 1610612736 bytes


Fixed Size 2084296 bytes
Variable Size 1006633528 bytes
Database Buffers 587202560 bytes
Redo Buffers 14692352 bytes

Early versions of Oracle used a Static SGA. This meant that if modifications to memory management were
required, the database had to be shutdown, modifications were made to the init.ora parameter file, and then the
database had to be restarted.

Oracle 11g uses a Dynamic SGA. Memory configurations for the system global area can be made without shutting
down the database instance. The DBA can resize the Database Buffer Cache and Shared Pool dynamically.

Several initialization parameters are set that affect the amount of random access memory dedicated to the SGA of
an Oracle Instance. These are:

SGA_MAX_SIZE: This optional parameter is used to set a limit on the amount of virtual memory allocated to
the SGA – a typical setting might be 1 GB; however, if the value for SGA_MAX_SIZE in the initialization parameter
file or server parameter file is less than the sum of the memory allocated for all components, either explicitly in the
parameter file or by default, at the time the instance is initialized, then the database ignores the setting for
SGA_MAX_SIZE. For optimal performance, the entire SGA should fit in real memory to eliminate paging to/from disk
by the operating system.
DB_CACHE_SIZE: This optional parameter is used to tune the amount of memory allocated to the Database Buffer
Cache in standard database blocks. Block sizes vary among operating systems. The DBORCL database uses 8
KB blocks. The total blocks in the cache defaults to 48 MB on LINUX/UNIX and 52 MB on Windows operating
systems.
LOG_BUFFER: This optional parameter specifies the number of bytes allocated for the Redo Log Buffer.
SHARED_POOL_SIZE: This optional parameter specifies the number of bytes of memory allocated to shared SQL
and PL/SQL. The default is 16 MB. If the operating system is based on a 64 bit configuration, then the default size
is 64 MB.
LARGE_POOL_SIZE: This is an optional memory object – the size of the Large Pool defaults to zero. If the init.ora
parameter PARALLEL_AUTOMATIC_TUNING is set to TRUE, then the default size is automatically calculated.
JAVA_POOL_SIZE: This is another optional memory object. The default is 24 MB of memory.

The size of the SGA cannot exceed the parameter SGA_MAX_SIZE minus the combined sizes of the
additional parameters DB_CACHE_SIZE, LOG_BUFFER, SHARED_POOL_SIZE, LARGE_POOL_SIZE,
and JAVA_POOL_SIZE.

Memory is allocated to the SGA as contiguous virtual memory in units termed granules. Granule size depends on
the estimated total size of the SGA, which as was noted above, depends on the SGA_MAX_SIZE parameter. Granules
are sized as follows:
If the SGA is less than 1 GB in total, each granule is 4 MB.
If the SGA is greater than 1 GB in total, each granule is 16 MB.

Granules are assigned to the Database Buffer Cache, Shared Pool, Java Pool, and other memory structures, and
these memory components can dynamically grow and shrink. Using contiguous memory improves system
performance. The actual number of granules assigned to one of these memory components can be determined by
querying the database view named V$BUFFER_POOL.

Granules are allocated when the Oracle server starts a database instance in order to provide memory addressing
space to meet the SGA_MAX_SIZE parameter. The minimum is 3 granules: one each for the fixed SGA, Database
Buffer Cache, and Shared Pool. In practice, you'll find the SGA is allocated much more memory than this. The
SELECT statement shown below shows a current_size of 1,152 granules.

SELECT name, block_size, current_size, prev_size, prev_buffers


FROM v$buffer_pool;

NAME BLOCK_SIZE CURRENT_SIZE PREV_SIZE PREV_BUFFERS


-------------------- ---------- ------------ ---------- ------------
DEFAULT 8192 560 576 71244

For additional information on dynamic SGA sizing, enroll in Oracle's Oracle 11g Database Performance
Tuning course.

Shared Pool
The Shared Pool is a memory structure that is shared by all system users.
It caches various types of program data. For example, the shared pool stores parsed SQL, PL/SQL code, system
parameters, and data dictionary information.
The shared pool is involved in almost every operation that occurs in the database. For example, if a user executes
a SQL statement, then Oracle Database accesses the shared pool.
It consists of both fixed and variable structures.
The variable component grows and shrinks depending on the demands placed on memory size by system users
and application programs.

Memory can be allocated to the Shared Pool by the parameter SHARED_POOL_SIZE in the parameter file. The
default value of this parameter is 8MB on 32-bit platforms and 64MB on 64-bit platforms. Increasing the value of
this parameter increases the amount of memory reserved for the shared pool.

You can alter the size of the shared pool dynamically with the ALTER SYSTEM SET command. An example
command is shown in the figure below. You must keep in mind that the total memory allocated to the SGA is set by
the SGA_TARGET parameter (and may also be limited by the SGA_MAX_SIZE if it is set), and since the Shared Pool
is part of the SGA, you cannot exceed the maximum size of the SGA. It is recommended to let Oracle optimize the
Shared Pool size.
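A command of the kind described above might look like the following (the 80M value is illustrative; in practice, let Oracle size the pool):

SQL> ALTER SYSTEM SET shared_pool_size = 80M;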

The Shared Pool stores the most recently executed SQL statements and used data definitions. This is because some
system users and application programs will tend to execute the same SQL statements often. Saving this information
in memory can improve system performance.

The Shared Pool includes several cache areas described below.

Library Cache
Memory is allocated to the Library Cache whenever an SQL statement is parsed or a program unit is called. This
enables storage of the most recently used SQL and PL/SQL statements.

If the Library Cache is too small, the Library Cache must purge statement definitions in order to have space to load
new SQL and PL/SQL statements. Actual management of this memory structure is through a Least-Recently-Used
(LRU) algorithm. This means that the SQL and PL/SQL statements that are oldest and least recently used are
purged when more storage space is needed.

The Library Cache is composed of two memory subcomponents:


Shared SQL: This stores/shares the execution plan and parse tree for SQL statements, as well as PL/SQL
statements such as functions, packages, and triggers. If a system user executes an identical statement, then the
statement does not have to be parsed again in order to execute the statement.
Private SQL Area: With a shared server, each session issuing a SQL statement has a private SQL area in its
PGA.
Each user that submits the same statement has a private SQL area pointing to the same shared SQL area.
Many private SQL areas in separate PGAs can be associated with the same shared SQL area.
This figure depicts two different client processes issuing the same SQL statement – the parsed solution is already in
the Shared SQL Area.

Data Dictionary Cache


The Data Dictionary Cache is a memory structure that caches data dictionary information that has been recently
used.
This cache is necessary because the data dictionary is accessed so often.
Information accessed includes user account information, datafile names, table descriptions, user privileges, and
other information.
The database server manages the size of the Data Dictionary Cache internally and the size depends on the size of
the Shared Pool in which the Data Dictionary Cache resides. If the size is too small, then the data dictionary tables
that reside on disk must be queried often for information and this will slow down performance.

Server Result Cache

The Server Result Cache holds result sets and not data blocks. The server result cache contains the SQL query result
cache and PL/SQL function result cache, which share the same infrastructure.

SQL Query Result Cache

This cache stores the results of queries and query fragments.


Using the cache results for future queries tends to improve performance.
For example, suppose an application runs the same SELECT statement repeatedly. If the results are cached, then
the database returns them immediately.
In this way, the database avoids the expensive operation of rereading blocks and recomputing results.

PL/SQL Function Result Cache

The PL/SQL Function Result Cache stores function result sets.


Without caching, 1000 calls of a function at 1 second per call would take 1000 seconds.
With caching, 1000 function calls with the same inputs could take 1 second total.
Good candidates for result caching are frequently invoked functions that depend on relatively static data.
PL/SQL function code can specify that results be cached.
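A function opts in to result caching with the RESULT_CACHE clause. The sketch below is illustrative; the departments table and the function name are hypothetical:

CREATE OR REPLACE FUNCTION get_dept_name (p_dept_id NUMBER)
  RETURN VARCHAR2
  RESULT_CACHE
IS
  v_name VARCHAR2(30);
BEGIN
  -- The first call with a given p_dept_id runs the query;
  -- later calls with the same input can return the cached result.
  SELECT department_name INTO v_name
    FROM departments
   WHERE department_id = p_dept_id;
  RETURN v_name;
END;
/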

Buffer Caches
A number of buffer caches are maintained in memory in order to improve system response time.

Database Buffer Cache

The Database Buffer Cache is a fairly large memory object that stores the actual data blocks that are retrieved
from datafiles by system queries and other data manipulation language commands.

The purpose is to optimize physical input/output of data.

When Database Smart Flash Cache (flash cache) is enabled, part of the buffer cache can reside in the flash
cache.
This buffer cache extension is stored on a flash disk device, which is a solid state storage device that uses flash
memory.
The database can improve performance by caching buffers in flash memory instead of reading from magnetic
disk.
Database Smart Flash Cache is available only on Solaris and Oracle Enterprise Linux.

A query causes a Server Process to look for data.


The first look is in the Database Buffer Cache to determine if the requested information happens to already be
located in memory – thus the information would not need to be retrieved from disk and this would speed up
performance.
If the information is not in the Database Buffer Cache, the Server Process retrieves the information from disk and
stores it to the cache.
Keep in mind that information read from disk is read a block at a time, NOT a row at a time, because a
database block is the smallest addressable storage space on disk.
Database blocks are kept in the Database Buffer Cache according to a Least Recently Used (LRU) algorithm:
blocks that have not been used recently are aged out of memory to provide space for the insertion of newly
needed database blocks.

There are three buffer states:


Unused - a buffer is available for use because it has never been used or is currently unused.
Clean - a buffer that was used earlier - the data has been written to disk.
Dirty - a buffer that has modified data that has not been written to disk.

Each buffer has one of two access modes:


Pinned - a buffer is pinned so it does not age out of memory.
Free (unpinned).

The buffers in the cache are organized in two lists:


the write list and,
the least recently used (LRU) list.

The write list (also called a write queue) holds dirty buffers – buffers that hold data that has been modified, but
whose blocks have not yet been written back to disk.

The LRU list holds free buffers, pinned buffers, and dirty buffers that have not yet been moved to the write list.
Free buffers do not contain any useful data and are available for use. Pinned buffers are currently being
accessed.

When an Oracle process accesses a buffer, the process moves the buffer to the most recently used (MRU) end of
the LRU list – this causes dirty buffers to age toward the LRU end of the LRU list.

When an Oracle user process needs a data row, it searches for the data in the database buffer cache because
memory can be searched more quickly than hard disk can be accessed. If the data row is already in the cache
(a cache hit), the process reads the data from memory; otherwise a cache miss occurs and data must be read
from hard disk into the database buffer cache.
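The effectiveness of the cache can be estimated from system statistics. An illustrative hit-ratio query (statistic names as used in Oracle 10g/11g):

SQL> SELECT 1 - (phys.value / (gets.value + con.value)) AS hit_ratio
  2  FROM v$sysstat phys, v$sysstat gets, v$sysstat con
  3  WHERE phys.name = 'physical reads cache'
  4  AND gets.name = 'db block gets from cache'
  5  AND con.name = 'consistent gets from cache';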

Before reading a data block into the cache, the process must first find a free buffer. The process searches the LRU
list, starting at the LRU end of the list. The search continues until a free buffer is found or until the search reaches
the threshold limit of buffers.

Each time a user process finds a dirty buffer as it searches the LRU, that buffer is moved to the write list and the
search for a free buffer continues.

When a user process finds a free buffer, it reads the data block from disk into the buffer and moves the buffer to the
MRU end of the LRU list.

If an Oracle user process searches the threshold limit of buffers without finding a free buffer, the process stops
searching the LRU list and signals the DBWn background process to write some of the dirty buffers to disk. This
frees up some buffers.

Database Buffer Cache Block Size

The block size for a database is set when a database is created and is determined by the init.ora parameter file
parameter named DB_BLOCK_SIZE.
Typical block sizes are 2KB, 4KB, 8KB, 16KB, and 32KB.
The size of blocks in the Database Buffer Cache matches the block size for the database.
The DBORCL database uses an 8KB block size.
This figure shows that the use of non-standard block sizes results in multiple database buffer cache memory
allocations.
Because tablespaces that store Oracle tables can use different (non-standard) block sizes, there can be more than
one Database Buffer Cache allocated to match block sizes in the cache with the block sizes in the non-standard
tablespaces.

The size of the Database Buffer Caches can be controlled by the parameters DB_CACHE_SIZE
and DB_nK_CACHE_SIZE, which can be used to dynamically change the memory allocated to the caches
without restarting the Oracle instance.

You can dynamically change the size of the Database Buffer Cache with the ALTER SYSTEM command like the one
shown here:

ALTER SYSTEM SET DB_CACHE_SIZE = 96M;

You can have the Oracle Server gather statistics about the Database Buffer Cache to help you size it to achieve an
optimal workload for the memory allocation. This information is displayed from the V$DB_CACHE_ADVICE view. In
order for statistics to be gathered, you can dynamically alter the system by using the ALTER SYSTEM SET
DB_CACHE_ADVICE command (valid values are OFF, ON, and READY). However, gathering statistics on system performance always
incurs some overhead that will slow down system performance.

SQL> ALTER SYSTEM SET db_cache_advice = ON;

System altered.

SQL> DESC V$DB_cache_advice;


Name Null? Type
----------------------------------------- -------- -------------
ID NUMBER
NAME VARCHAR2(20)
BLOCK_SIZE NUMBER
ADVICE_STATUS VARCHAR2(3)
SIZE_FOR_ESTIMATE NUMBER
SIZE_FACTOR NUMBER
BUFFERS_FOR_ESTIMATE NUMBER
ESTD_PHYSICAL_READ_FACTOR NUMBER
ESTD_PHYSICAL_READS NUMBER
ESTD_PHYSICAL_READ_TIME NUMBER
ESTD_PCT_OF_DB_TIME_FOR_READS NUMBER
ESTD_CLUSTER_READS NUMBER
ESTD_CLUSTER_READ_TIME NUMBER

SQL> SELECT name, block_size, advice_status FROM v$db_cache_advice;

NAME BLOCK_SIZE ADV


-------------------- ---------- ---
DEFAULT 8192 ON
<more rows will display>
21 rows selected.

SQL> ALTER SYSTEM SET db_cache_advice = OFF;

System altered.

KEEP Buffer Pool


This pool retains blocks in memory (data from tables) that are likely to be reused throughout daily processing. An
example might be a table containing user names and passwords or a validation table of some type.

The DB_KEEP_CACHE_SIZE parameter sizes the KEEP Buffer Pool.

RECYCLE Buffer Pool


This pool is used to store table data that is unlikely to be reused throughout daily processing – thus the data blocks
are quickly removed from memory when not needed.

The DB_RECYCLE_CACHE_SIZE parameter sizes the Recycle Buffer Pool.
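Individual tables are assigned to these pools with the STORAGE clause; the schema and table names below are hypothetical:

SQL> ALTER TABLE appuser.validation_codes STORAGE (BUFFER_POOL KEEP);

SQL> ALTER TABLE appuser.audit_history STORAGE (BUFFER_POOL RECYCLE);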

Redo Log Buffer


The Redo Log Buffer memory object stores images of all changes made to database blocks.
Database blocks typically store several table rows of organizational data. This means that if a single column value
from one row in a block is changed, the block image is stored. Changes include INSERT, UPDATE, DELETE, CREATE,
ALTER, or DROP.
LGWR writes redo sequentially to disk while DBWn performs scattered writes of data blocks to disk.
Scattered writes tend to be much slower than sequential writes.
Because LGWR enables users to avoid waiting for DBWn to complete its slow writes, the database delivers better
performance.

The Redo Log Buffer is a circular buffer that is reused over and over. As the buffer fills up, copies of the images are
stored to the Redo Log Files that are covered in more detail in a later module.

Large Pool
The Large Pool is an optional memory structure that primarily relieves the memory burden placed on the Shared
Pool. The Large Pool is used for the following tasks if it is allocated:
Allocating space for session memory requirements from the User Global Area where a Shared Server is in use.
Transactions that interact with more than one database, e.g., a distributed database scenario.
Backup and restore operations by the Recovery Manager (RMAN) process.
RMAN uses this only if the BACKUP_DISK_IO = n and BACKUP_TAPE_IO_SLAVES = TRUE parameters are set.
If the Large Pool is too small, memory allocation for backup will fail and memory will be allocated from the Shared
Pool.
Parallel execution message buffers for parallel server operations. The PARALLEL_AUTOMATIC_TUNING =
TRUE parameter must be set.

The Large Pool size is set with the LARGE_POOL_SIZE parameter – this is not a dynamic parameter. It does not use
an LRU list to manage memory.

Java Pool
The Java Pool is an optional memory object, but is required if the database has Oracle Java installed and in use for
Oracle JVM (Java Virtual Machine).
The size is set with the JAVA_POOL_SIZE parameter that defaults to 24MB.
The Java Pool is used for memory allocation to parse Java commands and to store data associated with Java
commands.
Storing Java code and data in the Java Pool is analogous to SQL and PL/SQL code cached in the Shared Pool.

Streams Pool
This pool stores data and control structures to support the Oracle Streams feature of Oracle Enterprise Edition.
Oracle Streams manages the sharing of data and events in a distributed environment.
It is sized with the parameter STREAMS_POOL_SIZE.
If STREAMS_POOL_SIZE is not set or is zero, the size of the pool grows dynamically.

Types of Processes

A database instance contains or interacts with the following types of processes:

Client processes run the application or Oracle tool code.

Oracle processes run the Oracle database code. Oracle processes include the following subtypes:

Server processes perform work based on a client request. For example, these processes parse SQL queries,
place them in the shared pool, create and execute a query plan for each query, and read buffers from the
database buffer cache or from disk.

Background processes start with the database instance and perform maintenance tasks such as performing
instance recovery, cleaning up processes, writing redo buffers to disk, and so on.

Slave processes perform additional tasks for a background or server process.

The process structure varies depending on the operating system and the choice of Oracle
Database options. For example, the code for connected users can be configured
for dedicated server or shared server connections. In a shared server architecture, each
server process that runs database code can serve multiple client processes.

For each user connection, the application is run by a client process that is different from
the dedicated server process that runs the database code. Each client process is associated
with its own server process, which has its own program global area (PGA).
Overview of Client Processes

When a user runs an application such as a Pro*C program or SQL*Plus, the operating system
creates a client process (sometimes called a user process) to run the user application. The
client application has Oracle Database libraries linked into it that provide the APIs required
to communicate with the database.

Client and Server Processes

Client processes differ in important ways from the Oracle processes interacting directly with
the instance. The Oracle processes servicing the client process can read from and write to
the SGA, whereas the client process cannot. A client process can run on a host other than
the database host, whereas Oracle processes cannot.

Connections and Sessions

A connection is a physical communication pathway between a client process and a


database instance. A communication pathway is established using available interprocess
communication mechanisms or network software. Typically, a connection occurs between a
client process and a server process or dispatcher, but it can also occur between a client
process and Oracle Connection Manager (CMAN).

A session is a logical entity in the database instance memory that represents the state of a
current user login to a database. For example, when a user is authenticated by the database
with a password, a session is established for this user. A session lasts from the time the user
is authenticated by the database until the time the user disconnects or exits the database
application.

A single connection can have 0, 1, or more sessions established on it. The sessions are
independent: a commit in one session does not affect transactions in other sessions.

Multiple sessions can exist concurrently for a single database user. For example, user hr can have multiple
connections to a database. In dedicated server connections, the database creates a server
process on behalf of each connection. Only the client process that causes the dedicated
server to be created uses it. In a shared server connection, many client processes access a
single shared server process.

Overview of Server Processes

Oracle Database creates server processes to handle the requests of client processes
connected to the instance. A client process always communicates with a database through a
separate server process.

Server processes created on behalf of a database application can perform one or more of
the following tasks:

 Parse and run SQL statements issued through the application, including creating and
executing the query plan (see "Stages of SQL Processing")
 Execute PL/SQL code

 Read data blocks from data files into the database buffer cache (the
DBWn background process has the task of writing modified blocks back to disk)

 Return results in such a way that the application can process the information

Dedicated Server Processes

In dedicated server connections, the client connection is associated with one and only one
server process (see "Dedicated Server Architecture"). For example, on Linux, 20 client
processes connected to a database instance are serviced by 20 server processes.
Each client process communicates directly with its server process. This server process is
dedicated to its client process for the duration of the session. The server process stores
process-specific information and the UGA in its PGA (see "PGA Usage in Dedicated and
Shared Server Modes").

Shared Server Processes

In shared server connections, client applications connect over a network to a dispatcher


process, not a server process (see "Shared Server Architecture"). For example, 20 client
processes can connect to a single dispatcher process.

The dispatcher process receives requests from connected clients and puts them into a
request queue in the large pool (see "Large Pool"). The first available shared server
process takes the request from the queue and processes it. Afterward, the shared server
places the result into the dispatcher response queue. The dispatcher process monitors this
queue and transmits the result to the client.

Like a dedicated server process, a shared server process has its own PGA. However, the UGA
for a session is in the SGA so that any shared server can access session data.
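You can check whether shared server is configured, and observe the dispatcher and shared server processes, with the standard parameter and views (a sketch):

SQL> SHOW PARAMETER shared_servers

SQL> SELECT name, status FROM v$dispatcher;

SQL> SELECT name, status, requests FROM v$shared_server;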

Overview of Background Processes

A multiprocess Oracle database uses some additional processes called background


processes. The background processes perform maintenance tasks required to operate the
database and to maximize performance for multiple users.

Each background process has a separate task, but works with the other processes. For
example, the LGWR process writes data from the redo log buffer to the online redo log.
When a filled log file is ready to be archived, LGWR signals another process to archive the
file.

Oracle Database creates background processes automatically when a database instance


starts. An instance can have many background processes, not all of which always exist in
every database configuration. The following query lists the background processes running
on your database:

SELECT PNAME
FROM V$PROCESS
WHERE PNAME IS NOT NULL
ORDER BY PNAME;

This section includes the following topics:

 Mandatory Background Processes


 Optional Background Processes
 Slave Processes

Mandatory Background Processes

The mandatory background processes are present in all typical database configurations.
These processes run by default in a database instance started with a minimally configured
initialization parameter file (see Example 13-1).
This section describes the following mandatory background processes:

 Process Monitor Process (PMON)


 System Monitor Process (SMON)

 Database Writer Process (DBWn)

 Log Writer Process (LGWR)

 Checkpoint Process (CKPT)

 Manageability Monitor Processes (MMON and MMNL)

 Recoverer Process (RECO)

Process Monitor Process (PMON)

The process monitor (PMON) monitors the other background processes and performs
process recovery when a server or dispatcher process terminates abnormally. PMON is
responsible for cleaning up the database buffer cache and freeing resources that the client
process was using. For example, PMON resets the status of the active transaction table,
releases locks that are no longer required, and removes the process ID from the list of
active processes.

PMON also registers information about the instance and dispatcher processes with
the Oracle Net listener (see "The Oracle Net Listener"). When an instance starts, PMON
polls the listener to determine whether it is running. If the listener is running, then PMON
passes it relevant parameters. If it is not running, then PMON periodically attempts to
contact it.

System Monitor Process (SMON)


If the database crashes (for example, due to a power failure), then at the next startup SMON
observes that the database was not shut down gracefully and that it therefore requires
recovery, known as INSTANCE CRASH RECOVERY. During crash recovery, before the database
is completely open, any committed transaction whose changes are not yet in the datafiles is
applied from the redo log files to the datafiles.
 If SMON observes an uncommitted transaction that has already updated a table in a
datafile, it is treated as an in-doubt transaction and is rolled back with the help of the
before image available in rollback segments.
 SMON also cleans up temporary segments that are no longer in use.
 It also coalesces contiguous free extents in dictionary-managed tablespaces that
have PCTINCREASE set to a non-zero value.
 In a RAC environment, the SMON process of one instance can perform instance
recovery for other instances that have failed.
 SMON wakes up about every 5 minutes to perform housekeeping activities.

The system monitor process (SMON) is in charge of a variety of system-level cleanup


duties. The duties assigned to SMON include:
 Performing instance recovery, if necessary, at instance startup. In an Oracle RAC
database, the SMON process of one database instance can perform instance recovery
for a failed instance.
 Recovering terminated transactions that were skipped during instance recovery
because of file-read or tablespace offline errors. SMON recovers the transactions
when the tablespace or file is brought back online.

 Cleaning up unused temporary segments. For example, Oracle Database allocates


extents when creating an index. If the operation fails, then SMON cleans up the
temporary space.

 Coalescing contiguous free extents within dictionary-managed tablespaces.

SMON checks regularly to see whether it is needed. Other processes can call SMON if they
detect a need for it.

Database Writer Process (DBWn)

The database writer process (DBWn) writes the contents of database buffers to data
files. DBWn processes write modified buffers in the database buffer cache to disk
(see "Database Buffer Cache").

Although one database writer process (DBW0) is adequate for most systems, you can
configure additional processes—DBW1 through DBW9 and DBWa through DBWj—to improve
write performance if your system modifies data heavily. These additional DBWn processes
are not useful on uniprocessor systems.
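The number of database writer processes is controlled by the DB_WRITER_PROCESSES initialization parameter. It is a static parameter, so a change only takes effect after the instance is restarted (a sketch, assuming an SPFILE is in use):

SQL> ALTER SYSTEM SET db_writer_processes=4 SCOPE=SPFILE;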

The DBWn process writes dirty buffers to disk under the following conditions:

 When a server process cannot find a clean reusable buffer after scanning a threshold
number of buffers, it signals DBWn to write. DBWn writes dirty buffers to disk
asynchronously if possible while performing other processing.
 DBWn periodically writes buffers to advance the checkpoint, which is the position in
the redo thread from which instance recovery begins (see "Overview of
Checkpoints"). The log position of the checkpoint is determined by the oldest dirty
buffer in the buffer cache.

In many cases the blocks that DBWn writes are scattered throughout the disk. Thus, the
writes tend to be slower than the sequential writes performed by LGWR. DBWn performs
multiblock writes when possible to improve efficiency. The number of blocks written in a
multiblock write varies by operating system.

Log Writer Process (LGWR)

The log writer process (LGWR) manages the redo log buffer. LGWR writes one contiguous
portion of the buffer to the online redo log. By separating the tasks of modifying database
buffers, performing scattered writes of dirty buffers to disk, and performing fast sequential
writes of redo to disk, the database improves performance.

In the following circumstances, LGWR writes all redo entries that have been copied into the
buffer since the last time it wrote:

 A user commits a transaction (see "Committing Transactions").


 An online redo log switch occurs.

 Three seconds have passed since LGWR last wrote.

 The redo log buffer is one-third full or contains 1 MB of buffered data.

 DBWn must write modified buffers to disk.

Before DBWn can write a dirty buffer, redo records associated with changes to the
buffer must be written to disk (the write-ahead protocol). If DBWn finds that some
redo records have not been written, it signals LGWR to write the records to disk and
waits for LGWR to complete before writing the data buffers to disk.
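One of the triggers above, the online redo log switch, can also be forced manually, which causes LGWR to write out the current log group and advance to the next one:

SQL> ALTER SYSTEM SWITCH LOGFILE;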

Checkpoint Process (CKPT)

The checkpoint process (CKPT) updates the control file and data file headers with
checkpoint information and signals DBWn to write blocks to disk. Checkpoint information
includes the checkpoint position, SCN, location in online redo log to begin recovery, and so
on. As shown in Figure 15-4, CKPT does not write data blocks to data files or redo blocks to
online redo log files.
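You can request a checkpoint manually and then observe the checkpoint SCN that CKPT records in the data file headers (a sketch using the standard views):

SQL> ALTER SYSTEM CHECKPOINT;

SQL> SELECT file#, checkpoint_change# FROM v$datafile_header;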

Manageability Monitor Processes (MMON and MMNL)

The manageability monitor process (MMON) performs many tasks related to


the Automatic Workload Repository (AWR). For example, MMON writes an alert when
a metric violates its threshold value, takes snapshots, and captures statistics values for
recently modified SQL objects.
The manageability monitor lite process (MMNL) writes statistics from the Active
Session History (ASH) buffer in the SGA to disk. MMNL writes to disk when the ASH buffer is
full.

Recoverer Process (RECO)

In a distributed database, the recoverer process (RECO) automatically resolves failures


in distributed transactions. The RECO process of a node automatically connects to other
databases involved in an in-doubt distributed transaction. When RECO reestablishes a
connection between the databases, it automatically resolves all in-doubt transactions,
removing from each database's pending transaction table any rows that correspond to the
resolved transactions.

Optional Background Processes

An optional background process is any background process not defined as mandatory.


Most optional background processes are specific to tasks or features. For example,
background processes that support Oracle Streams Advanced Queuing (AQ) or Oracle
Automatic Storage Management (Oracle ASM) are only available when these features
are enabled.

This section describes some common optional processes:

 Archiver Processes (ARCn)


 Job Queue Processes (CJQ0 and Jnnn)

 Flashback Data Archiver Process (FBDA)

 Space Management Coordinator Process (SMCO)

Archiver Processes (ARCn)

The archiver processes (ARCn) copy online redo log files to offline storage after a redo
log switch occurs. These processes can also collect transaction redo data and transmit it
to standby database destinations. ARCn processes exist only when the database is
in ARCHIVELOG mode and automatic archiving is enabled.
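A sketch of placing the database in ARCHIVELOG mode (which is what makes the ARCn processes active) and then verifying the result:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ARCHIVE LOG LIST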

Job Queue Processes (CJQ0 and Jnnn)

Oracle Database uses job queue processes to run user jobs, often in batch mode. A job is
a user-defined task scheduled to run one or more times. For example, you can use a job
queue to schedule a long-running update in the background. Given a start date and a time
interval, the job queue processes attempt to run the job at the next occurrence of the
interval.

Oracle Database manages job queue processes dynamically, thereby enabling job queue
clients to use more job queue processes when required. The database releases resources
used by the new processes when they are idle.

Dynamic job queue processes can run a large number of jobs concurrently at a given
interval. The sequence of events is as follows:
1. The job coordinator process (CJQ0) is automatically started and stopped as
needed by Oracle Scheduler (see "Oracle Scheduler"). The coordinator process
periodically selects jobs that need to be run from the system JOB$ table. New jobs
selected are ordered by time.
2. The coordinator process dynamically spawns job queue slave processes (Jnnn) to
run the jobs.

3. The job queue process runs one of the jobs that was selected by the CJQ0 process for
execution. Each job queue process runs one job at a time to completion.

4. After the process finishes execution of a single job, it polls for more jobs. If no jobs
are scheduled for execution, then it enters a sleep state, from which it wakes up at
periodic intervals and polls for more jobs. If the process does not find any new jobs,
then it terminates after a preset interval.

The initialization parameter JOB_QUEUE_PROCESSES represents the maximum number of job


queue processes that can concurrently run on an instance. However, clients should not
assume that all job queue processes are available for job execution.
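JOB_QUEUE_PROCESSES is a dynamic parameter, so the maximum can be changed without restarting the instance, for example:

SQL> ALTER SYSTEM SET job_queue_processes=20 SCOPE=BOTH;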

Flashback Data Archiver Process (FBDA)

The flashback data archiver process (FBDA) archives historical rows of tracked tables
into Flashback Data Archives. When a transaction containing DML on a tracked table
commits, this process stores the pre-image of the rows into the Flashback Data Archive. It
also keeps metadata on the current rows.

FBDA automatically manages the flashback data archive for space, organization, and
retention. Additionally, the process keeps track of how far the archiving of tracked
transactions has occurred.

Space Management Coordinator Process (SMCO)

The SMCO process coordinates the execution of various space management related tasks,
such as proactive space allocation and space reclamation. SMCO dynamically spawns slave
processes (Wnnn) to implement the task.

Slave Processes

Slave processes are background processes that perform work on behalf of other
processes. This section describes some slave processes used by Oracle Database.

I/O Slave Processes

I/O slave processes (Innn) simulate asynchronous I/O for systems and devices that do not
support it. In asynchronous I/O, there is no timing requirement for transmission, enabling
other processes to start before the transmission has finished.

For example, assume that an application writes 1000 blocks to a disk on an operating
system that does not support asynchronous I/O. Each write occurs sequentially and waits for
a confirmation that the write was successful. With asynchronous I/O, the application can
write the blocks in bulk and perform other work while waiting for a response from the
operating system that all blocks were written.
To simulate asynchronous I/O, one process oversees several slave processes. The invoker
process assigns work to each of the slave processes, who wait for each write to complete
and report back to the invoker when done. In true asynchronous I/O the operating system
waits for the I/O to complete and reports back to the process, while in simulated
asynchronous I/O the slaves wait and report back to the invoker.

The database supports different types of I/O slaves, including the following:

 I/O slaves for Recovery Manager (RMAN)

When using RMAN to back up or restore data, you can make use of I/O slaves for both
disk and tape devices.

 Database writer slaves

If it is not practical to use multiple database writer processes, such as when the
computer has one CPU, then the database can distribute I/O over multiple slave
processes. DBWR is the only process that scans the buffer cache LRU list for blocks to
be written to disk. However, I/O slaves perform the I/O for these blocks.

Parallel Query Slaves

In parallel execution or parallel processing, multiple processes work together


simultaneously to run a single SQL statement. By dividing the work among multiple
processes, Oracle Database can run the statement more quickly. For example, four
processes handle four different quarters in a year instead of one process handling all four
quarters by itself.

Parallel execution reduces response time for data-intensive operations on large databases
such as data warehouses. Symmetric multiprocessing (SMP) and clustered systems gain the
largest performance benefits from parallel execution because statement processing can be
split up among multiple CPUs. Parallel execution can also benefit certain types of OLTP and
hybrid systems.

In Oracle RAC systems, the service placement of a particular service controls parallel
execution. Specifically, parallel processes run on the nodes on which you have configured
the service. By default, Oracle Database runs the parallel process only on the instance that
offers the service used to connect to the database. This does not affect other parallel
operations such as parallel recovery or the processing of GV$ queries.

Serial Execution

In serial execution, a single server process performs all necessary processing for the
sequential execution of a SQL statement.

Parallel Execution

In parallel execution, the server process acts as the parallel execution


coordinator responsible for parsing the query, allocating and controlling the slave
processes, and sending output to the user. Given a query plan for a SQL query, the
coordinator breaks down each operator in a SQL query into parallel pieces, runs them in
the order specified in the query, and integrates the partial results produced by the slave
processes executing the operators.
PFILE and SPFILE

When an Oracle Instance is started, the characteristics of the Instance are established by
parameters specified within the initialization parameter file. These initialization parameters
are either stored in a PFILE or SPFILE. SPFILEs are available in Oracle 9i and above; all
prior releases of Oracle used PFILEs.

SPFILEs provide the following advantages over PFILEs:

o An SPFILE can be backed-up with RMAN (RMAN cannot backup PFILEs)


o Reduce human errors. The SPFILE is maintained by the server. Parameters are
checked before changes are accepted.
o Eliminate configuration problems (no need to have a local PFILE if you want to start
Oracle from a remote machine)
o Easy to find - stored in a central location

What is the difference between a PFILE and SPFILE:

A PFILE is a static, client-side text file that must be updated with a standard text editor like
"notepad" or "vi". This file normally resides on the server; however, you need a local copy if
you want to start Oracle from a remote machine. DBAs commonly refer to this file as the
INIT.ORA file.

An SPFILE (Server Parameter File), on the other hand, is a persistent server-side binary file
that can only be modified with the "ALTER SYSTEM SET" command. This means you no
longer need a local copy of the pfile to start the database from a remote machine. Editing an
SPFILE will corrupt it, and you will not be able to start your database anymore.

How will I know if my database is using a PFILE or SPFILE:

Execute the following query to see if your database was started with a PFILE or SPFILE:

SQL> SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') "Init File Type"

FROM sys.v_$parameter WHERE name = 'spfile';

You can also use the V$SPPARAMETER view to check if you are using a PFILE or not: if the
"value" column is NULL for all parameters, you are using a PFILE.

Viewing Parameters Settings:

One can view parameter values using one of the following methods (regardless if they were
set via PFILE or SPFILE):

o The "SHOW PARAMETERS" command from SQL*Plus (i.e.: SHOW PARAMETERS


timed_statistics)
o V$PARAMETER view - display the currently in effect parameter values
o V$PARAMETER2 view - display the currently in effect parameter values, but "List
Values" are shown in multiple rows
o V$SPPARAMETER view - display the current contents of the server parameter file.

Starting a database with a PFILE or SPFILE:

Oracle searches for a suitable initialization parameter file in the following order:

o Try to use the spfile${ORACLE_SID}.ora file in $ORACLE_HOME/dbs (Unix) or


ORACLE_HOME/database (Windows)
o Try to use the spfile.ora file in $ORACLE_HOME/dbs (Unix) or
ORACLE_HOME/database (Windows)
o Try to use the init${ORACLE_SID}.ora file in $ORACLE_HOME/dbs (Unix) or
ORACLE_HOME/database (Windows)

One can override the default location by specifying the PFILE parameter at database startup:

SQL> STARTUP PFILE='/oradata/spfileORCL.ora'

Note that there is no equivalent "STARTUP SPFILE=" command. One can only use the
above option with SPFILEs if the PFILE you point to (in the example above) contains a single
'SPFILE=' parameter pointing to the SPFILE that should be used. Example:

SPFILE=/path/to/spfile

Changing SPFILE parameter values:

While a PFILE can be edited with any text editor, the SPFILE is a binary file. The "ALTER
SYSTEM SET" and "ALTER SYSTEM RESET" commands can be used to change parameter
values in an SPFILE. Look at these examples:

SQL> ALTER SYSTEM SET open_cursors=300 SCOPE=SPFILE;

SQL> ALTER SYSTEM SET timed_statistics=TRUE

COMMENT='Changed by Frank on 1 June 2003'

SCOPE=BOTH

SID='*';

The SCOPE parameter can be set to SPFILE, MEMORY or BOTH:

- MEMORY: Set for the current instance only. This is the default behaviour if a PFILE was
used at STARTUP.

- SPFILE: update the SPFILE, the parameter will take effect with next database startup
- BOTH: affect the current instance and persist to the SPFILE. This is the default behaviour if
an SPFILE was used at STARTUP.
The COMMENT parameter (optional) specifies a user remark.

The SID parameter (optional; only used with RAC) indicates the instance for which the
parameter applies (Default is *: all Instances).
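For example, in a RAC database you could change a parameter for a single instance only (the instance name orcl1 below is hypothetical):

SQL> ALTER SYSTEM SET open_cursors=500 SCOPE=SPFILE SID='orcl1';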

Use the following syntax to set parameters that take multiple (a list of) values:

SQL> ALTER SYSTEM SET utl_file_dir='/tmp/','/oradata','/home/' SCOPE=SPFILE;

Use this syntax to set unsupported initialization parameters (obviously only when Oracle
Support instructs you to set it):

SQL> ALTER SYSTEM SET "_allow_read_only_corruption"=TRUE SCOPE=SPFILE;

Execute one of the following command to remove a parameter from the SPFILE:

SQL> ALTER SYSTEM RESET timed_statistics SCOPE=SPFILE SID='*';

SQL> ALTER SYSTEM SET timed_statistics = '' SCOPE=SPFILE;

Converting between PFILES and SPFILES:

One can easily migrate from a PFILE to SPFILE or vice versa. Execute the following
commands from a user with SYSDBA or SYSOPER privileges:

SQL> CREATE PFILE FROM SPFILE;

SQL> CREATE SPFILE FROM PFILE;

One can also specify a non-default location for either (or both) the PFILE and SPFILE
parameters. Look at this example:

SQL> CREATE SPFILE='/oradata/spfileORCL.ora' from


PFILE='/oradata/initORCL.ora';

Here is an alternative procedure for changing SPFILE parameter values using the above
method:

o Export the SPFILE with: CREATE PFILE='pfilename' FROM SPFILE='spfilename';


o Edit the resulting PFILE with a text editor
o Shutdown and startup the database with the PFILE option: STARTUP PFILE=filename
o Recreate the SPFILE with: CREATE SPFILE='spfilename' FROM PFILE='pfilename';
o On the next startup, use STARTUP without the PFILE parameter and the new SPFILE
will be used.

Parameter File Backups:


RMAN (Oracle's Recovery Manager) will backup the SPFILE with the database control file if
setting "CONFIGURE CONTROLFILE AUTOBACKUP" is ON (the default is OFF). PFILEs cannot
be backed-up with RMAN. Look at this example:

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

Use the following RMAN command to restore an SPFILE:

RMAN> RESTORE SPFILE FROM AUTOBACKUP;

Password file (orapwd utility) in Oracle


Oracle password file stores passwords for users with administrative privileges.

If the DBA wants to start up an Oracle instance there must be a way for Oracle to
authenticate the DBA. Obviously, the DBA password cannot be stored in the database, because
Oracle cannot access the database before the instance is started up. Therefore, the
authentication of the DBA must happen outside of the database. There are two distinct
mechanisms to authenticate the DBA:
(i) Using the password file or
(ii) Through the operating system (groups). Any OS user in the dba group can log in as
SYSDBA.

The default location for the password file is:


$ORACLE_HOME/dbs/orapw$ORACLE_SID on Unix, and
%ORACLE_HOME%\database\PWD%ORACLE_SID%.ora on Windows.

REMOTE_LOGIN_PASSWORDFILE
The init parameter REMOTE_LOGIN_PASSWORDFILE specifies if a password file is used to
authenticate the Oracle DBA or not. If it is set to either SHARED or EXCLUSIVE, the password
file will be used.
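You can check the current setting with:

SQL> SHOW PARAMETER remote_login_passwordfile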

REMOTE_LOGIN_PASSWORDFILE is a static initialization parameter and therefore cannot be


changed without bouncing the database.

Following are the valid values for REMOTE_LOGIN_PASSWORDFILE:

NONE - Oracle ignores the password file if it exists, i.e. no privileged connections
are allowed over non-secure connections. If REMOTE_LOGIN_PASSWORDFILE is set to
EXCLUSIVE or SHARED and the password file is missing, this is equivalent to setting
REMOTE_LOGIN_PASSWORDFILE to NONE.

EXCLUSIVE (default) - Password file is exclusively used by only one (instance of the)
database. Any user can be added to the password file. Only an EXCLUSIVE file can be
modified. EXCLUSIVE password file enables you to add, modify, and delete users. It also
enables you to change the SYS password with the ALTER USER command.

SHARED - The password file is shared among databases. A SHARED password file can be
used by multiple databases running on the same server, or multiple instances of an
Oracle Real Application Clusters (RAC) database. However, the only user that can
be added/authenticated is SYS.

A SHARED password file cannot be modified i.e. you cannot add users to a SHARED
password file. Any attempt to do so or to change the password of SYS or other users with the
SYSDBA or SYSOPER or SYSASM (this is from Oracle 11g) privileges generates an error. All
users needing SYSDBA or SYSOPER or SYSASM system privileges must be added to the
password file when REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE. After all
users are added, you can change REMOTE_LOGIN_PASSWORDFILE to SHARED.

This option is useful if you are administering multiple databases or a RAC database.

Whether a password file is SHARED or EXCLUSIVE is also stored in the password file. After its
creation, the state is SHARED. The state can be changed by setting
REMOTE_LOGIN_PASSWORDFILE and starting the database i.e. the database overwrites the
state in the password file when it is started up.

ORAPWD
You can create a password file using orapwd utility. For some Operating systems, you can
create this file as part of standard installation.

Users are added to the password file when they are granted the SYSDBA or SYSOPER or
SYSASM privilege.

The Oracle orapwd utility assists the DBA while granting SYSDBA, SYSOPER and SYSASM
privileges to other users. By default, SYS is the only user that has SYSDBA and SYSOPER
privileges. Creating a password file, via orapwd, enables remote users to connect
with administrative privileges.

$ orapwd file=password_file_name [password=the_password] [entries=n] [force=Y|N]


[ignorecase=Y|N] [nosysdba=Y|N]

Examples:
$ orapwd file=orapwSID password=sys_password force=y nosysdba=y
$ orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=secret
$ orapwd file=orapwprod entries=30 force=y
C:\orapwd file=%ORACLE_HOME%\database\PWD%ORACLE_SID%.ora password=2012
entries=20
C:\orapwd file=D:\oracle11g\product\11.1.0\db_1\database\pwdsfs.ora password=id
entries=6 force=y
$ orapwd file=orapwPRODB3 password=abc123 entries=10 ignorecase=n
$ orapwd file=orapwprodb password=oracle1 ignorecase=y

There are no spaces permitted around the equal-to (=).

The following describe the orapwd command line arguments.

FILE
Name to assign to the password file, which will hold the password information. You must
supply complete path. If you supply only filename, the file is written to the current directory.
The contents are encrypted and are unreadable. This argument is mandatory.
The filenames allowed for the password file are OS specific. Some operating systems require
the password file to adhere to a specific format and be located in a specific directory. Other
operating systems allow the use of environment variables to specify the name and location
of the password file.

If you are running multiple instances of Oracle Database using Oracle Real Application
Clusters (RAC), the environment variable for each instance should point to the same
password file.

It is critically important to secure the password file.

PASSWORD
This is the password the privileged users should enter while connecting as SYSDBA or
SYSOPER or SYSASM.

ENTRIES
Entries specify the maximum number of distinct SYSDBA, SYSOPER and SYSASM users that
can be stored in the password file.

This argument specifies the number of entries that you require the password file to accept.
The actual number of allowable entries can be higher than the number of users, because
the orapwd utility continues to assign password entries until an OS block is filled. For
example, if your OS block size is 512 bytes, it holds four password entries. The number of
password entries allocated is always a multiple of four.

Entries can be reused as users are added to and removed from the password file. When you
exceed the allocated number of password entries, you must create a new password file. To
avoid this necessity, allocate a number of entries that is larger than you think you will ever
need.

FORCE
(Optional) If Y, permits overwriting an existing password file. An error will be returned if
password file of the same name already exists and this argument is omitted or set to N.

IGNORECASE
(Optional) If Y, passwords are treated as case-insensitive i.e. case is ignored when
comparing the password that the user supplies during login with the password in the
password file.

NOSYSDBA
(Optional) For Oracle Data Vault installations.

Granting SYSDBA or SYSOPER or SYSASM


privileges

Use the V$PWFILE_USERS view to see the users who have been granted SYSDBA or SYSOPER
or SYSASM system privileges for a database.

SQL> select * from v$pwfile_users;


USERNAME SYSDBA SYSOPER SYSASM
-------- ------ ------- ------
SYS TRUE TRUE FALSE

The columns displayed by the view V$PWFILE_USERS are:

Column    Description
--------  --------------------------------------------------------------
USERNAME  The name of the user that is recognized by the password file.
SYSDBA    If TRUE, the user can log on with the SYSDBA system privilege.
SYSOPER   If TRUE, the user can log on with the SYSOPER system privilege.
SYSASM    If TRUE, the user can log on with the SYSASM system privilege.

If orapwd has not yet been executed or the password file is not available, attempting to grant
SYSDBA or SYSOPER or SYSASM privileges will result in the following error:
SQL> grant sysdba to satya;
ORA-01994: GRANT failed: cannot add users to public password file

If your server is using an EXCLUSIVE password file, use the GRANT statement to grant the
SYSDBA or SYSOPER or SYSASM system privilege to a user, as shown in the following
example:
SQL> grant sysdba to satya;

SQL> select * from v$pwfile_users;


USERNAME SYSDBA SYSOPER SYSASM
-------- ------ ------- ------
SYS TRUE TRUE FALSE
SATYA TRUE FALSE FALSE

SQL> grant sysoper to satya;


SQL> select * from v$pwfile_users;
USERNAME SYSDBA SYSOPER SYSASM
-------- ------ ------- ------
SYS TRUE TRUE FALSE
SATYA TRUE TRUE FALSE

SQL> grant sysasm to satya;


SQL> select * from v$pwfile_users;
USERNAME SYSDBA SYSOPER SYSASM
-------- ------ ------- ------
SYS TRUE TRUE FALSE
SATYA TRUE TRUE TRUE

When you grant SYSDBA, SYSOPER, or SYSASM privileges to a user, that user's name and
privilege information are added to the password file. If the server does not have an
EXCLUSIVE password file (i.e. if the initialization parameter REMOTE_LOGIN_PASSWORDFILE
is NONE or SHARED, or the password file is missing), Oracle issues an error if you attempt to
grant these privileges.

Use the REVOKE statement to revoke the SYSDBA or SYSOPER or SYSASM system privilege
from a user, as shown in the following example:
SQL> revoke sysoper from satya;

SQL> select * from v$pwfile_users;


USERNAME SYSDBA SYSOPER SYSASM
-------- ------ ------- ------
SYS TRUE TRUE FALSE
SATYA TRUE FALSE TRUE

A user's name remains in the password file only as long as that user has at least one of
these three privileges. If you revoke all three privileges, Oracle removes the user from the
password file.

Because SYSDBA, SYSOPER and SYSASM are the most powerful database privileges, the
WITH ADMIN OPTION is not used in the GRANT statement. That is, the grantee cannot in turn
grant the SYSDBA or SYSOPER or SYSASM privilege to another user. Only a user currently
connected as SYSDBA can grant or revoke another user's SYSDBA or SYSOPER or SYSASM
system privileges. These privileges cannot be granted to roles, because roles are available
only after database startup.

If you receive the file full error (ORA-01996) when you try to grant SYSDBA or SYSOPER or
SYSASM system privileges to a user, you must create a larger password file and regrant the
privileges to the users.

Removing Password File


If you determine that you no longer require a password file to authenticate users, you can
delete the password file and then optionally reset the REMOTE_LOGIN_PASSWORDFILE
initialization parameter to NONE. After you remove this file, only those users who can be
authenticated by the OS can perform SYSDBA or SYSOPER or SYSASM database
administration operations.
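As a sketch of that procedure (assuming a Unix installation with an spfile; the password file name orapwICA is hypothetical and platform-specific):

```sql
-- Stop using the password file. REMOTE_LOGIN_PASSWORDFILE is a static
-- parameter, so the change takes effect only after a restart.
SQL> ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE = NONE SCOPE = SPFILE;
SQL> SHUTDOWN IMMEDIATE;
-- $ rm $ORACLE_HOME/dbs/orapwICA    (O/S command; delete the password file)
SQL> STARTUP;
```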

Managing Control Files in Oracle database


Every Oracle Database has a control file, which is a small binary file that records the
physical structure of the database. The control file includes:
 The database name
 Names and locations of associated datafiles and redo log files
 The timestamp of the database creation
 The current log sequence number
 Checkpoint information

It is strongly recommended that you multiplex control files, i.e. keep at least two copies of
the control file on separate disks. That way, if the control file on one disk becomes
corrupt, the copy on the other disk is still available and you do not have to recover the
control file.

You can multiplex the control file when you create the database, or at any time afterwards.
If you did not multiplex the control file at database creation, you can do so now with the
following procedure.

Multiplexing Control File

Steps:

1. Shutdown the Database.

SQL>SHUTDOWN IMMEDIATE;

2. Copy the control file from the old location to the new location using an operating
system command. For example:

$ cp /u01/oracle/ica/control.ora /u02/oracle/ica/control.ora

3. Now open the parameter file. It currently specifies the old location:

CONTROL_FILES=/u01/oracle/ica/control.ora

Change it to

CONTROL_FILES=/u01/oracle/ica/control.ora,/u02/oracle/ica/
control.ora

4. Start the Database.

SQL>STARTUP;


Oracle will now update both control files, and if one control file is lost you can copy it
from the other location.
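If the instance uses a server parameter file (spfile) rather than a text parameter file, step 3 cannot be done in an editor. A sketch of the equivalent procedure, using the same hypothetical paths as above:

```sql
-- Record the new control file locations in the spfile before shutting down
SQL> ALTER SYSTEM SET control_files =
       '/u01/oracle/ica/control.ora',
       '/u02/oracle/ica/control.ora' SCOPE = SPFILE;
SQL> SHUTDOWN IMMEDIATE;
-- copy the control file to the new location at the O/S level, then:
SQL> STARTUP;
```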

Changing the Name of a Database

If you ever want to change the name of the database, or change the settings of
MAXDATAFILES, MAXLOGFILES, or MAXLOGMEMBERS, you have to create a new
control file.

Creating A New Control File

Follow these steps to create a new controlfile.

Steps

1. First generate the create controlfile statement

SQL>alter database backup controlfile to trace;

When you issue this statement, Oracle writes a CREATE CONTROLFILE statement to a trace
file. The trace file has a generated name such as ORA23212.TRC and is created in the
USER_DUMP_DEST directory.

2. Go to the USER_DUMP_DEST directory and open the latest trace file in a text editor.
This file contains the CREATE CONTROLFILE statement in two versions: one with
RESETLOGS and one without RESETLOGS. Since we are changing the name of the
database, we have to use the RESETLOGS option of the CREATE CONTROLFILE
statement. Copy and paste that statement into a file. Let it be c.sql.

3. Now open the c.sql file in a text editor and change the database name from ica to prod,
as shown in the example below:

CREATE CONTROLFILE
SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/ica/redo01_01.log',
'/u01/oracle/ica/redo01_02.log'),
GROUP 2 ('/u01/oracle/ica/redo02_01.log',
'/u01/oracle/ica/redo02_02.log'),
GROUP 3 ('/u01/oracle/ica/redo03_01.log',
'/u01/oracle/ica/redo03_02.log')
RESETLOGS
DATAFILE '/u01/oracle/ica/system01.dbf' SIZE 3M,
'/u01/oracle/ica/rbs01.dbs' SIZE 5M,
'/u01/oracle/ica/users01.dbs' SIZE 5M,
'/u01/oracle/ica/temp01.dbs' SIZE 5M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;

4. Start the instance without mounting the database.

SQL>STARTUP NOMOUNT;

5. Now execute the c.sql script.

SQL> @/u01/oracle/c.sql

6. Now open the database with RESETLOGS

SQL>ALTER DATABASE OPEN RESETLOGS;

Managing Redo Logfiles in Oracle

Every Oracle database must have at least 2 redo logfile groups. Oracle records all changes
(every statement except SELECT) in the logfiles. It does this because it performs deferred
batch writes, i.e. it does not write changes to disk per statement; instead it writes them in
batches. So if a user updates a row, Oracle changes the row in the db_buffer_cache,
records the statement in the logfile, and tells the user that the row is updated, even
though the row has not yet been written back to the datafile. At a later point the row is
actually written to the datafile. This is known as deferred batch writes.

Since Oracle defers writing to the datafile, there is a chance of a power failure or system
crash before the row is written to the disk. That is why Oracle writes the statement to the
redo logfile, so that in case of a power failure or system crash Oracle can re-apply the
statements the next time you open the database.

Adding a New Redo Logfile Group

To add a new redo logfile group to the database, give the following command:

SQL>alter database add logfile group 3
    '/u01/oracle/ica/log3.ora' size 10M;

Note: You can add groups to a database up to the MAXLOGFILES setting you specified at
the time of creating the database. If you want to change the MAXLOGFILES setting, you
have to create a new controlfile.

Adding Members to an existing group

To add a new member to an existing group, give the following command:

SQL>alter database add logfile member '/u01/oracle/ica/log11.ora'
    to group 1;

Note: You can add members to a group up to the MAXLOGMEMBERS setting you specified
at the time of creating the database. If you want to change the MAXLOGMEMBERS setting,
you have to create a new controlfile.

Important: It is strongly recommended that you multiplex logfiles, i.e. have at least two
members per log group, one member on one disk and another on a second disk.

Dropping Members from a group

You can drop a member from a log group only if the group has more than one member and
it is not the current group. If you want to drop a member from the current group, force a
log switch (or wait until a log switch occurs) so that another group becomes current. To
force a log switch, give the following command:

SQL>alter system switch logfile;

The following command can be used to drop a logfile member:

SQL>alter database drop logfile member '/u01/oracle/ica/log11.ora';

Note: When you drop a logfile member, the file is not deleted from the disk. You have to
use an O/S command to delete the file from disk.

Dropping Logfile Group

Similarly, you can drop a logfile group only if the database has more than two groups and
the group is not the current group.

SQL>alter database drop logfile group 3;

Note: When you drop a logfile group, the files are not deleted from the disk. You have to
use O/S commands to delete the files from disk.

Resizing Logfiles

You cannot resize logfiles. If you want to resize a logfile, create a new logfile group with
the new size and then drop the old logfile group.
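For example, assuming group 1 is the group to be replaced and group 4 is unused (the path and sizes here are hypothetical), the sequence might look like this:

```sql
-- Create a replacement group with the desired size
SQL> alter database add logfile group 4
     ('/u01/oracle/ica/log4.ora') size 20M;

-- Make sure group 1 is not the current group before dropping it
SQL> alter system switch logfile;

SQL> alter database drop logfile group 1;
-- Finally, delete the old group's files at the O/S level
```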

Renaming or Relocating Logfiles

To rename or relocate logfiles, perform the following steps.

For example, suppose you want to move a logfile from '/u01/oracle/ica/log1.ora' to
'/u02/oracle/ica/log1.ora'. Then do the following:

Steps

1. Shutdown the database


SQL>shutdown immediate;

2. Move the logfile from the old location to the new location using an operating system command

$mv /u01/oracle/ica/log1.ora /u02/oracle/ica/log1.ora

3. Start and mount the database

SQL>startup mount

4. Now give the following command to change the location in the controlfile

SQL>alter database rename file '/u01/oracle/ica/log1.ora' to
    '/u02/oracle/ica/log1.ora';

5. Open the database

SQL>alter database open;

Clearing REDO LOGFILES

A redo log file might become corrupted while the database is open, ultimately stopping
database activity because archiving cannot continue. In this situation the ALTER
DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without
shutting down the database.

The following statement clears the log files in redo log group number 3:

ALTER DATABASE CLEAR LOGFILE GROUP 3;

This statement overcomes two situations where dropping redo logs is not possible:

 There are only two log groups
 The corrupt redo log file belongs to the current group

If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the
statement.

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;


This statement clears the corrupted redo logs and avoids archiving them. The cleared
redo logs are available for use even though they were not archived.

If you clear a log file that is needed for recovery of a backup, then you can no longer
recover from that backup. The database writes a message to the alert log describing the
backups from which you cannot recover.

Viewing Information About Logfiles

To see how many logfile groups there are and their status, type the following query:

SQL>SELECT * FROM V$LOG;


GROUP# THREAD# SEQ BYTES MEMBERS ARC STATUS
FIRST_CHANGE# FIRST_TIM
------ ------- ----- ------- ------- --- ---------
------------- ---------
1 1 20605 1048576 1 YES ACTIVE
61515628 21-JUN-07
2 1 20606 1048576 1 NO CURRENT
41517595 21-JUN-07
3 1 20603 1048576 1 YES INACTIVE
31511666 21-JUN-07
4 1 20604 1048576 1 YES INACTIVE
21513647 21-JUN-07

To see how many members there are and where they are located, give the following
query:

SQL>SELECT * FROM V$LOGFILE;

GROUP# STATUS MEMBER


------ ------- ----------------------------------
1 /U01/ORACLE/ICA/LOG1.ORA
2 /U01/ORACLE/ICA/LOG2.ORA

ARCHIVELOG:

ARCHIVELOG mode is a mode that you can put the database in for creating a backup of all
transactions that have occurred in the database, so that you can recover to any point in time.
NOARCHIVELOG mode is basically the absence of ARCHIVELOG mode and has the
disadvantage of not being able to recover to any point in time. NOARCHIVELOG mode does,
however, have the advantage of not having to write transactions to an archive log, which
slightly increases the performance of the database.

ARCHIVELOG MODE

Advantages
1. You can perform hot backups (backups when the database is online).
2. The archive logs together with the last full backup (offline or online), or an older backup,
can completely recover the database without losing any data, because all changes made in
the database are stored in the log files.

Disadvantages
1. It requires additional disk space to store archived log files. However, the agent offers the
option to purge the logs after they have been backed up, giving you the opportunity to free
disk space if you need it.

NOARCHIVELOG MODE

Advantages
1. It requires no additional disk space to store archived log files.

Disadvantages
1. If you must recover a database, you can only restore the last full offline backup. As a
result, any changes made to the database after the last full offline backup are lost.
2. Database downtime is significant because you cannot back up the database online. This
limitation becomes a very serious consideration for large databases.

Note: Because NOARCHIVELOG mode does not guarantee Oracle database recovery if there
is a disaster, the Agent for Oracle does not support this mode. If you need to maintain the
Oracle Server in NOARCHIVELOG mode, then you must back up the full Oracle database
files without the agent, using CA ARCserve Backup while the database is offline, to ensure
disaster recovery.
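To check which mode the database is currently in, and to switch it to ARCHIVELOG mode, the following sequence can be used as a sketch (the database must be mounted but not open when the mode is changed):

```sql
-- Check the current mode (returns ARCHIVELOG or NOARCHIVELOG)
SQL> SELECT log_mode FROM v$database;

-- Switch to ARCHIVELOG mode
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
```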
