Oracle DBA Q&A
Wow, this is a loaded question and almost begs you to answer it with "What DBA activities do you LIKE to do on a daily basis?" And that is how I would answer it. Again, do not get caught up in the "typical" day-to-day operational issues of database administration. Sure, you can talk about the index you rebuilt, the monitoring of system and session waits, or the space you added to a data file; these are all good things, and you should convey that you understand the day-to-day operational issues. What you should also throw into this answer are the meetings you attend to provide direction in the database arena, the people you meet and talk with daily to answer ad hoc questions about database use, the modeling of business needs within the database, and the extra time you spend early in the morning or late at night to get the job done. Just because the question stipulates "today," do not take "today" to mean literally today. Make sure you wrap a few good days up into "today" and talk about them. This question also begs you to ask, "What typical DBA activities are performed day to day within X Corporation?"
If you spend enough time on question 1, this question will never be asked. It is really a continuation of question 1, trying to get you to open up and talk about the type of things you like to do. Personally, I would continue with the theme of question 1 if you were cut short or if this question is asked later in the interview process. Just note that this question is not all geared toward the day-to-day operational issues you experience as a DBA. It also gives you the opportunity to see if they want to know about you as an individual. Since the question did not stipulate "on the job," I would throw in a few items like: I get up at 5:00am to get into work and get some quiet time to read up on new trends, or I help coach my son's/daughter's soccer team. Just test the waters to see what is acceptable. If the interviewer starts to pull you back to "job"-related issues, do not go too personal. Also, if you go to the interviewer's office, notice the surroundings: if there are pictures of his/her family, it is probably a good idea to venture down the personal path. If there is a fly-fishing picture on the wall, do not say you like deep-sea fishing. You get the picture.
3. What is difference between oracle SID and Oracle service name?
The Oracle SID is the unique name that identifies your instance/database, whereas the service name is a TNS alias that can be the same as or different from the SID.
4. What are the steps to install oracle on a Linux system? List two kernel parameters that affect oracle installation.
First set up the disks and kernel parameters, then create the oracle user and the dba group, and finally run the installer to start the installation process. SHMMAX and SHMMNI are two kernel parameters that must be set before the installation process.
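The steps above can be sketched as a minimal /etc/sysctl.conf fragment; the values are illustrative only and should be sized for your system per the installation guide.

```
# Illustrative values only -- size SHMMAX for your SGA per the install guide.
kernel.shmmax = 4294967295   # max size in bytes of a single shared memory segment
kernel.shmmni = 4096         # max number of shared memory segments system-wide
```

After editing /etc/sysctl.conf, run sysctl -p as root to load the new values.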
5.What are bind variables?
With bind variables in SQL, oracle can cache a statement once in the shared SQL area and reuse it. This avoids a hard parse on each execution, which saves on the various locking and latching resources used to check object existence and so on.
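As a sketch (the emp table and the values are hypothetical), the literal form below is hard parsed once per distinct value, while the bind form is parsed once and shared:

```sql
-- Literal: every new value produces a new statement text and a hard parse.
SELECT ename FROM emp WHERE empno = 7369;

-- Bind variable (SQL*Plus syntax): one shared cursor, soft parses thereafter.
VARIABLE v_empno NUMBER
EXEC :v_empno := 7369
SELECT ename FROM emp WHERE empno = :v_empno;
```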
A data block is the smallest unit of logical storage for a database object. As objects grow they
take chunks of additional storage that are composed of contiguous data blocks. These
groupings of contiguous data blocks are called extents. All the extents that an object takes when
grouped together are considered the segment of the database object.
When you are running a dedicated server, process information is stored inside the process global area (PGA); when you are using a shared server, the process information is stored inside the user global area (UGA).
The system global area is a group of shared memory structures dedicated to an oracle instance. All oracle processes use the SGA to hold information. The SGA is used to store incoming data and internal control information that is needed by the database. You can control the SGA memory by setting the parameters db_cache_size, shared_pool_size and log_buffer.
The shared pool portion contains three major areas:
Library cache (parsed SQL statements, cursor information and execution plans),
data dictionary cache (user account information, privilege information, segment and extent information), and
buffers for parallel execution messages and control structures.
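For illustration, the SGA components named above can be sized manually with parameters like these (the sizes are hypothetical; log_buffer is a static parameter and takes effect only after a restart):

```sql
ALTER SYSTEM SET db_cache_size    = 512M SCOPE=SPFILE;
ALTER SYSTEM SET shared_pool_size = 256M SCOPE=SPFILE;
ALTER SYSTEM SET log_buffer       = 16M  SCOPE=SPFILE;  -- static parameter
```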
SMON (System Monitor) performs recovery after instance failure, monitors temporary segments and extents, cleans temporary segments, and coalesces free space. It is a mandatory process of the DB and starts by default.
PMON (Process Monitor) cleans up failed process resources. In a shared server architecture it monitors and restarts any failed dispatcher or server process. It is a mandatory process of the DB and starts by default.
11. What is the main purpose of 'CHECKPOINT' in the oracle database? How do you automatically force oracle to perform a checkpoint?
A checkpoint is a database event which synchronizes the database blocks in memory with the datafiles on disk. It has two main purposes: to establish data consistency and to enable faster database recovery.
The following are the parameters a DBA can use to adjust the time or interval at which checkpoints should occur in the database:
LOG_CHECKPOINT_TIMEOUT = 3600; # Every one hour
LOG_CHECKPOINT_INTERVAL = 1000; # number of OS blocks.
First oracle checks the syntax and semantics and searches the library cache for an existing parse; after that it creates an execution plan.
If the data is already in the buffer cache it is returned directly to the client.
If not, the server process fetches the data from the datafiles, writes it to the database buffer cache, and then sends it to the client.
13. What is the use of large pool, which case you need to set the large pool?
You need to set the large pool if you are using MTS (Multi-Threaded Server) or RMAN backups. The large pool prevents RMAN and MTS from competing with other subsystems for the same memory. RMAN uses the large pool for backup and restore when you set the DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters to simulate asynchronous I/O. If neither of these parameters is enabled, then Oracle allocates backup buffers from local process memory rather than shared memory, and there is no use for the large pool.
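A hedged sketch of the parameters mentioned above (the sizes and slave counts are illustrative, not recommendations):

```sql
ALTER SYSTEM SET large_pool_size       = 128M SCOPE=SPFILE;
ALTER SYSTEM SET dbwr_io_slaves        = 4    SCOPE=SPFILE;  -- simulate async I/O
ALTER SYSTEM SET backup_tape_io_slaves = TRUE SCOPE=SPFILE;
```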
While mounting the database oracle reads the data from the controlfile, which is used for verifying the physical database files during the sanity check. Background processes are started before the database is mounted, at instance startup.
“CURRENT” state means that redo records are currently being written to that group. It remains current until a log switch occurs. At any time there can be only one current redo group.
If a redo group contains the redo of a dirty buffer, that redo group is said to be in the “ACTIVE” state. As we know, log files keep the changes made to data blocks, and data blocks are modified in the buffer cache (dirty blocks). These dirty blocks must be written to the disk (RAM to permanent media).
When a redo log group contains no redo records belonging to a dirty buffer it is in the “INACTIVE” state. These inactive redo logs can be overwritten.
One more state is “UNUSED”: initially, when you create a new redo log group, its log file is empty; at that time it is unused. Later it can be in any of the above mentioned states.
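The states above can be observed from SQL*Plus (the columns are from the V$LOG view):

```sql
SELECT group#, sequence#, status FROM v$log;
```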
The point at which oracle ends writing to one online redo log file and begins writing to another is called a log switch. Sometimes you can force a log switch.
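A log switch can be forced manually, for example:

```sql
ALTER SYSTEM SWITCH LOGFILE;
```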
Oracle Instance:
a means to access an Oracle database; always opens one and only one database; consists of memory structures and background processes.
Oracle server:
a DBMS that provides an open, comprehensive, integrated approach to information management; consists of an Instance and a database.
Oracle database:
a collection of data that is treated as a unit; consists of Datafiles, Control files and Redo log files (optionally a parameter file, password file and archived logs).
SGA Memory structures:
Includes the Shared Pool, Database Buffer Cache and Redo Log Buffer, among others.
Shared Pool:
Consists of two key performance-related memory structures: the Library Cache and the Data Dictionary Cache.
Library Cache:
Stores information about the most recently used SQL and PL/SQL statements and enables the sharing of commonly used statements.
User process:
Started at the time a database user requests connection to the Oracle server. It requests interaction with the Oracle server but does not interact directly with it.
Server process:
Connects to the Oracle Instance and is started when a user establishes a session. It fulfills calls generated and returns results. Each server process has its own non-shared PGA when the process is started.
The server process parses and runs SQL statements issued through the application, reads the necessary data blocks from datafiles on disk into the shared database buffers of the SGA if the blocks are not already present there, and returns results in such a way that the application can process the information.
In some situations, when the application and Oracle Database operate on the same computer, it is possible to combine the user process and the corresponding server process into a single process to reduce system overhead.
Background processes:
Started when an Oracle Instance is started. Background processes maintain and enforce relationships between physical and memory structures.
There are two types of database processes:
1. Mandatory background processes
2. Optional background processes
Mandatory background processes:
– DBWn, PMON, CKPT, LGWR, SMON
Optional background processes:
– ARCn, LMDn, RECO, CJQ0, LMON, Snnn, Dnnn, Pnnn, LCKn, QMNn
19. Why do you run orainstRoot and ROOT.SH once you finalize the Installation?
orainstRoot.sh needs to be run to change the permissions of the inventory to 770 and its group name to dba.
root.sh (run from the ORACLE_HOME location) needs to be run to create the oratab file in /etc/oratab (or /var/opt/oracle/oratab on Solaris) and to copy dbhome, oraenv and coraenv to /usr/local/bin.
orainstRoot.sh
[root@oracle11g ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to dba.
The execution of the script is complete
root.sh
[root@oracle11g ~]# /u01/app/oracle/product/11.1.0/db_1/root.sh
Running Oracle 11g root.sh script…
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.1.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …
Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
For an Oracle installation on unix/linux, we will be prompted to run the script ‘root.sh’ from the oracle inventory directory. This script needs to be run only the first time any oracle product is installed on the server.
It creates the additional directories and sets appropriate ownership and permissions on files for the root user.
oraInventory is the repository (directory) which stores/records the oracle software products and their oracle_home locations on a machine. This inventory is nowadays in XML format and called the XML Inventory, whereas in the past it used to be in binary format and called the binary Inventory.
There are basically two kinds of inventories:
One is the Local Inventory (also called the Oracle Home Inventory) and the other is the Global Inventory (also called the Central Inventory).
The Global Inventory holds information about the Oracle products on a machine. These products can be various oracle components like database, oracle application server, collaboration suite, soa suite, forms & reports or discoverer server. The global Inventory location is determined by the file oraInst.loc in /etc (on Linux) or /var/opt/oracle (Solaris). If you want to see the list of oracle products on a machine, check the file inventory.xml under ContentsXML in oraInventory. (Please note: if you have multiple global Inventories on the machine, check all oraInventory directories.)
You will see an entry like:
<HOME NAME="ORA10g_HOME" LOC="/u01/oracle/10.2.0/db" TYPE="O" IDX="1"/>
The Inventory inside each Oracle Home is called the local Inventory or oracle_home Inventory. This Inventory holds information for that oracle_home only.
The Oracle home inventory, or local inventory, is present inside each Oracle home. It only contains information relevant to that particular Oracle home and is located at:
$ORACLE_HOME/inventory
It contains the following files and folders:
· Components File
· Home Properties File
· Other Folders
Quite a common question is: can you have multiple global Inventories? The answer is YES, you can have multiple global Inventories, but if you are upgrading or applying a patch, change the Inventory pointer oraInst.loc to the respective location. If you are following a single global Inventory and you wish to uninstall any software, then remove it from the Global Inventory as well.
No need to worry if your global Inventory is corrupted; you can recreate the global Inventory on the machine using the Universal Installer and attach an already installed oracle home with the option
-attachHome
./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc
ORACLE_HOME=”Oracle_Home_Location” ORACLE_HOME_NAME=”Oracle_Home_Name”
CLUSTER_NODES=”{}”
29. If any one of these 6 mandatory background processes is killed/not running, will the instance be aborted?
SGA_MAX_SIZE is the largest amount of memory that will be available for the SGA in the instance, and it will be allocated from memory. You do not have to use it all, but it will be potentially wasted if you set it too high and don’t use it. It is not a dynamic parameter. Basically it gives the Oracle instance room to grow.
SGA_TARGET is the actual memory in use by the current SGA. This parameter is dynamic and can be increased up to the value of SGA_MAX_SIZE.
SGA_MAX_SIZE and SGA_TARGET are both parameters used to change the SGA size.
SGA_MAX_SIZE sets the maximum value for SGA_TARGET.
SGA_TARGET is a 10g feature used to change the SGA size dynamically. It specifies the total amount of SGA memory available to an instance.
This feature is called Automatic Shared Memory Management. With ASMM, the parameters java_pool_size, shared_pool_size, large_pool_size and db_cache_size are affected.
SGA_MAX_SIZE sets the overall amount of memory the SGA can consume but is not dynamic. The SGA_MAX_SIZE parameter is the maximum allowable size to which the SGA memory area parameters can be resized. If SGA_TARGET is set to some value then Automatic Shared Memory Management (ASMM) is enabled, and the SGA_TARGET value can be adjusted up to, but not beyond, the SGA_MAX_SIZE parameter value.
I.e. if SGA_MAX_SIZE=4GB and SGA_TARGET=2GB, at a later time you can resize your SGA_TARGET parameter up to the value of SGA_MAX_SIZE, i.e. 4GB, but you can’t resize the SGA_TARGET value to more than 4GB.
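Continuing that example, the online resize could be sketched as (the values are illustrative):

```sql
-- With SGA_MAX_SIZE=4G, raising SGA_TARGET online up to the cap is allowed:
ALTER SYSTEM SET sga_target = 4G SCOPE=BOTH;
-- A value above SGA_MAX_SIZE would be rejected by the instance.
```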
It is significant that SGA_TARGET includes the entire memory for the SGA, in contrast to earlier
releases in which memory for the internal and fixed SGA was added to the sum of the
configured SGA memory parameters. Thus, SGA_TARGET gives you precise control over the
size of the shared memory region allocated by the database. If SGA_TARGET is set to a value
greater than SGA_MAX_SIZE at startup, then the latter is bumped up to accommodate
SGA_TARGET.
Do not dynamically set or unset the SGA_TARGET parameter. This should be set only at
startup.
SGA_TARGET is a database initialization parameter (introduced in Oracle 10g) that can be
used for automatic SGA memory sizing.
SGA_TARGET provides the following:
§ Single parameter for total SGA size
§ Automatically sizes SGA components
§ Memory is transferred to where most needed
§ Uses workload information
§ Uses internal advisory predictions
§ STATISTICS_LEVEL must be set to TYPICAL
§ SGA_TARGET is dynamic
§ Can be increased up to SGA_MAX_SIZE
§ Can be reduced until some component reaches its minimum size
§ A change in the value of SGA_TARGET affects only automatically sized components
If I keep SGA_TARGET=0 then what will happen?
Automatic SGA tuning (ASMM) is disabled; 0 is in fact the default value.
http://www.orafaq.com/wiki/SGA_target
31. What happens when you run ALTER DATABASE OPEN RESETLOGS ?
The current online redo logs are archived, the log sequence number is reset to 1, a new database incarnation is created, and the online redo logs are given a new time stamp and SCN.
The reason to open the database with RESETLOGS is that after an incomplete recovery, the data files and control files still do not come to the same point as the redo log files. And as long as the database is not consistent across all three file types (data, redo and control), you can’t open the database. The RESETLOGS clause resets the log sequence numbers within the log files and restarts them from 1, enabling you to open the database, but at the cost of losing everything that was in the redo log files.
In what scenarios open resetlogs required ?
An ALTER DATABASE OPEN RESETLOGS statement is required,
1.after incomplete recovery (Point in Time Recovery) or
2.recovery with a backup control file.
3. recovery with a control file recreated with the reset logs option.
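A typical incomplete-recovery sequence ends with OPEN RESETLOGS; for example (the UNTIL TIME value is hypothetical):

```sql
RECOVER DATABASE UNTIL TIME '2014-01-01:12:00:00';
ALTER DATABASE OPEN RESETLOGS;
```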
http://onlineappsdba.com/index.php/2009/09/11/oracle-database-incarnation-open-resetlogs-scn/
http://web.njit.edu/info/limpid/DOC/backup.102/b14191/osrecov009.htm
Whenever you perform incomplete recovery or recovery with a backup control file, you must reset the online logs when you open the database. The new version of the reset database is called a new incarnation.
A database incarnation is effectively a new “version” of the database that happens when you reset the online redo logs using “alter database open resetlogs;”.
Database incarnations fall into the following categories: Current, Parent, Ancestor and Sibling.
i) Current Incarnation : The database incarnation in which the database is currently generating
redo.
ii) Parent Incarnation : The database incarnation from which the current incarnation branched
following an OPEN RESETLOGS operation.
iii) Ancestor Incarnation : The parent of the parent incarnation is an ancestor incarnation. Any
parent of an ancestor incarnation is also an ancestor incarnation.
iv) Sibling Incarnation : Two incarnations that share a common ancestor are sibling incarnations
if neither one is an ancestor of the other.
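The incarnation history can be inspected from RMAN, for example:

```
RMAN> LIST INCARNATION OF DATABASE;
```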
36. How would you decide your backup strategy and timing for backup?
In fact the backup strategy depends purely upon your organization’s business needs.
If no downtime is allowed, then the database must run in archivelog mode and you have to take frequent or daily backups.
If sufficient downtime is available and loss of data would not affect your business, then you can run your database in noarchivelog mode and backups can be taken infrequently, weekly or monthly.
In most cases, when an organization allows no downtime: frequent inconsistent backups are needed (daily backups), online redo log files are multiplexed (multiple copies) in different locations, the database must run in archivelog mode, and Data Guard can be implemented for an extra bit of protection.
Restoring means copying the database objects from the backup media to the destination where they are actually required, whereas recovery means applying redo to the objects copied earlier (rolling forward) in order to bring the database into a consistent state.
An incomplete database recovery is a recovery that does not reach the point of failure. The recovery can be to a point in time, a particular SCN, or a particular archive log, especially in case of a missing archive log or redo log failure, whereas a complete recovery recovers to the point of failure, which is possible when you have all the archive log backups.
39. What is the benefit of running the DB in archivelog mode over no archivelog mode?
When a database is in noarchivelog mode, whenever a log switch happens there will be a loss of some redo log information. To avoid this, redo logs must be archived. This can be achieved by configuring the database in archivelog mode.
If the database is in archivelog mode we can recover a transaction; otherwise we cannot recover any transaction that is not in the backup.
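Switching a database into archivelog mode can be sketched as follows (run as SYSDBA; a clean shutdown is required first):

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```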
For a hot backup we have to put the database (or tablespace) in begin backup mode and then take the backup, whereas RMAN does not put the database in begin backup mode. RMAN is faster, can perform incremental (changes only) backups, and does not place tablespaces in hot backup mode.
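A user-managed hot backup of a single tablespace can be sketched as (the tablespace name is hypothetical):

```sql
ALTER TABLESPACE users BEGIN BACKUP;
-- copy the tablespace's datafiles at the OS level, e.g. with cp
ALTER TABLESPACE users END BACKUP;
```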
43. Why RMAN incremental backup fails even though full backup exists?
This happens if you have taken the RMAN full backup using the command ‘backup database’. A level 0 backup is physically identical to a full backup; the only difference is that the level 0 backup is recorded as an incremental backup in the RMAN repository, so it can be used as the parent for a level 1 backup. Simply put, a full backup without level 0 cannot be considered a parent backup from which you can take a level 1 backup.
If no level 0 backup is available, then the behavior depends upon the compatibility mode setting (oracle version).
If the compatibility mode is less than 10.0.0, RMAN generates a level 0 backup of the file contents at the time of the backup.
If the compatibility is 10.0.0 or greater, RMAN copies all blocks changed since the file was created, and stores the result as a level 1 backup.
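So the parent/child pair described above looks like:

```
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;   -- parent, recorded as incremental
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;   -- child, changes since level 0
```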
In the case of a recovery catalog, you can register a backup piece by using the CATALOG command:
RMAN> CATALOG START WITH ‘/oracle/backup.ctl’;
If you want to check RMAN catalog version then use the below query from SQL*plus
SQL> Select * from rcver;
48. When you have moved oracle binary files from one ORACLE_HOME server to another server, which oracle utility will be used to make the new ORACLE_HOME usable?
Relink all.
50. When we applying single Patch, can you use opatch utility?
Yes, you can use OPatch in the case of a single patch. The only type of patch that cannot be applied with OPatch is a patchset.
As you know, to apply a patch your database and listener must be down, because when you apply a patch OPatch updates your current ORACLE_HOME. Thus, coming to your question: it is not possible with zero downtime in the case of a single instance, but in RAC you can apply a patch without downtime, as there are separate ORACLE_HOMEs and separate instances (one instance running on each ORACLE_HOME).
52. You have a collection of patches (nearly 100 patches) or a patchset. How can you apply only one patch from it?
With napply itself (by providing the patch location and a specific patch id) you can apply only one patch from a collection of extracted patches. For more information check ‘opatch util NApply -help’; it will give you a clear picture.
For example:
opatch util napply -id 9 -skip_subset -skip_duplicate
This will apply only patch id 9 from the patch location and will skip duplicates and subsets of patches already installed in your ORACLE_HOME.
53. If both CPU and PSU are available for given version which one, you will prefer to apply?
From the above discussion it is clear that once you apply a PSU, the recommended way is to apply only the next PSU. In fact, there is no need to apply a CPU on top of a PSU, as the PSU contains the CPU (applying a CPU over a PSU will be treated as trying to roll back the PSU and will in fact require more effort). So if you have not yet decided on or applied any of the patches, I suggest you use PSU patches. For more details refer to: Oracle Products [ID 1430923.1], ID 1446582.1.
54. PSU is superset of CPU then why someone choose to apply a CPU rather than a PSU?
CPUs are smaller and more focused than PSUs and mostly deal with security issues. A CPU is theoretically a more conservative approach and can cause less trouble than a PSU, as it has less code change in it. Thus, for anyone who is concerned only with security fixes and not functionality fixes, a CPU may be the better approach.
If you are using latest support.oracle.com then after login to metalink Dashboard
– Click on “Patches & Updates” tab
– On the left sidebar click on “Latest Patchsets” under “Oracle Server/Tools”.
– A new window will appear.
– Just mouse over your product in the “Latest Oracle Server/Tools Patchsets” page.
– The corresponding oracle platform versions will appear. Then simply choose the patchset version and click on it.
– You will go to the download page. From the download page you can also change your platform and patchset version.
1. You MUST read the Readme.txt file included in the opatch file; look for any prerequisite steps, post-installation steps or DB related changes. Also, make sure that you have the correct opatch version required by this patch.
2. Make sure you have a good backup of the database.
3. Make a note of all invalid objects in the database prior to the patch.
4. Shut down all the Oracle processes running from that Oracle Home, including the listener, database instance, management agent etc.
5. You MUST back up your Oracle Home and Inventory, e.g.:
tar cvf - $ORACLE_HOME $ORACLE_HOME/oraInventory | gzip > Backup_Software_Version.tar.gz
6. Unzip the patch in $ORACLE_HOME/patches
7. cd to the patch directory and run ‘opatch apply’ to apply the patch.
8. Read the output/log file to make sure there were no errors.
1. Download the required patch from Metalink based on the OS bit version and DB version.
2. You need to shut down the database before applying the patch.
3. Unzip and apply the patch using the ”opatch apply” command. On successful application of the patch you will see the message “OPatch succeeded.“; cross-check that your patch is applied by using the “opatch lsinventory” command.
4. Each patch has a unique ID; the command to roll back a patch is “opatch rollback -id <patch id>”. On successful rollback you will again see the message “OPatch succeeded.“; cross-check using the “opatch lsinventory” command.
5. The patch file format will be like “p<patchnumber>_<version>_<platform>.zip”.
6. We can check the opatch version using “opatch -version” command.
7. Generally, it takes about 2 minutes to apply a patch.
8. To get the latest OPatch version download “patch 6880880 – latest opatch tool”; it contains an OPatch directory.
9. The contents of downloaded patches will include etc and files directories and a README file.
10. Log file for Opatch utility can be found at $ORACLE_HOME/cfgtoollogs/opatch
11. OPatch also maintains an index of the commands executed with OPatch and the log files
associated with it in the history.txt file located in the /cfgtoollogs/opatch directory.
12. Starting with the 11.2.0.2 patch set, Oracle Database patch sets are full installations of the
Oracle Database software. This means that you do not need to install Oracle Database 11g
Release 2 (11.2.0.1) before installing Oracle Database 11g Release 2 (11.2.0.2).
13. Direct upgrade to Oracle 10g is only supported if your database is running one of the
following releases: 8.0.6, 8.1.7, 9.0.1, or 9.2.0. If not, you will have to upgrade the database to
one of these releases or use a different upgrade option (like export/ import).
14. Direct upgrades to 11g are possible from existing databases with versions 9.2.0.4+, 10.1.0.2+ or 10.2.0.1+. Upgrades from other versions are supported only via intermediate upgrades to a supported upgrade version.
Oracle ASM is Oracle’s volume manager specially designed for Oracle database data. It has been available since Oracle database version 10g, and many improvements have been made in versions 11g release 1 and 2.
ASM offers support for Oracle RAC clusters without the requirement to install 3rd party software, such as cluster-aware volume managers or filesystems.
ASM is shipped as part of the database server software (Enterprise and Standard editions) and does not cost extra money to run.
The ASM functionality is an extension of the Oracle Managed Files (OMF) functionality that also includes striping and mirroring to provide balanced and secure storage. The ASM functionality can be used in combination with existing raw and cooked file systems, along with OMF and manually managed files.
Prevents fragmentation of disks, so you don’t need to manually relocate data to tune I/O performance.
Adding disks is straightforward: ASM automatically performs online disk reorganization when you add or remove storage.
Using disk groups makes configuration easier, as files are placed into disk groups.
ASM provides striping and mirroring (fine and coarse grain; see below).
Striping—ASM spreads data evenly across all disks in a disk group to optimize performance and utilization. This even distribution of database files eliminates the need for regular monitoring and I/O performance tuning.
For example, if there are six disks in a disk group, pieces of each ASM file are written to all six
disks. These pieces come in 1 MB chunks known as extents. When a database file is created, it
is striped (divided into extents and distributed) across the six disks, and allocated disk space on
all six disks grows evenly. When reading the file, file extents are read from all six disks in
parallel, greatly increasing performance.
Mirroring—ASM can increase availability by optionally mirroring any file. ASM mirrors at the file
level, unlike operating system mirroring, which mirrors at the disk level. Mirroring means
keeping redundant copies, or mirrored copies, of each extent of the file, to help avoid data loss
caused by disk failures. The mirrored copy of each file extent is always kept on a different disk
from the original copy. If a disk fails, ASM can continue to access affected files by accessing
mirrored copies on the surviving disks in the disk group.
ASM supports 2-way mirroring, where each file extent gets one mirrored copy, and 3-way mirroring, where each file extent gets two mirrored copies.
Online storage reconfiguration and dynamic rebalancing—ASM permits you to add or remove
disks from your disk storage system while the database is operating. When you add a disk,
ASM automatically redistributes the data so that it is evenly spread across all disks in the disk
group, including the new disk. This redistribution is known as rebalancing. It is done in the
background and with minimal impact to database performance. When you request to remove a
disk, ASM first rebalances by evenly relocating all file extents from the disk being removed to
the other disks in the disk group.
Managed file creation and deletion—ASM further reduces administration tasks by enabling files
stored in ASM disk groups to be Oracle-managed files. ASM automatically assigns filenames
when files are created, and automatically deletes files when they are no longer needed.
1. ASM instances do not have a controlfile or datafiles, and do not have online redo logs.
Both an Oracle ASM instance and an Oracle Database instance are built on the same
technology. Like a database instance, an Oracle ASM instance has memory structures (System
Global Area) and background processes. Besides, Oracle ASM has a minimal performance
impact on a server. Rather than mounting a database, Oracle ASM instances mount disk groups
to make Oracle ASM files available to database instances.
There are at least two new background processes added for an ASM instance:
ARBx (ASM rebalance worker process): ARBn performs the actual rebalance data extent movements in an Automatic Storage Management instance. There can be many of these processes running at a time, named ARB0, ARB1, and so on. These processes are managed by the RBAL process. The number of ARBx processes invoked is directly influenced by the asm_power_limit parameter.
RBAL (Re-balancer): RBAL runs in both database and ASM instances. In the database instance, it does a global open of ASM disks. In an ASM instance, it also coordinates rebalance activity for disk groups. RBAL performs global opens on all disks in the disk group; a global open means that more than one database instance can be accessing the ASM disks at a time.
Failure groups are defined within a disk group to support the required level of redundancy. For two-way mirroring you would expect a disk group to contain two failure groups, so individual files are written to two locations.
INSTANCE_TYPE – Set to ASM or RDBMS depending on the instance type. The default is
RDBMS.
DB_UNIQUE_NAME – Specifies a globally unique name for the database. This defaults to +ASM but must be altered if you intend to run multiple ASM instances.
ASM_DISKGROUPS – The list of disk groups that should be mounted by an ASM instance during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM configuration changes are automatically reflected in this parameter.
ASM_DISKSTRING – Specifies a value that can be used to limit the disks considered for discovery. Altering the default value may improve the speed of disk group mount time and the speed of adding a disk to a disk group. Changing the parameter to a value which prevents the discovery of already mounted disks results in an error. The default value is NULL, allowing all suitable disks to be considered.
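Taken together, a minimal parameter file for an ASM instance might look like the sketch below. The disk group names and discovery path are illustrative assumptions, not prescriptive values:

```
INSTANCE_TYPE=ASM
ASM_DISKGROUPS='DATA','FLASH'
ASM_DISKSTRING='/dev/oracleasm/disks/*'
ASM_POWER_LIMIT=1
```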
Prevents fragmentation of disks, so you don’t need to manually relocate data to tune I/O performance.
Adding disks is straightforward – ASM automatically performs online disk reorganization when you add or remove storage.
Using disk groups makes configuration easier, as files are placed into disk groups.
ASM provides striping and mirroring (fine- and coarse-grained – see below).
Striping—ASM spreads data evenly across all disks in a disk group to optimize performance and utilization. This even distribution of database files eliminates the need for regular monitoring and I/O performance tuning.
For example, if there are six disks in a disk group, pieces of each ASM file are written to all six disks. These pieces come in 1 MB chunks known as extents. When a database file is created, it is striped (divided into extents and distributed) across the six disks, and allocated disk space on all six disks grows evenly. When reading the file, file extents are read from all six disks in parallel, greatly increasing performance.
Mirroring – ASM can increase availability by optionally mirroring any file. ASM mirrors at the file level, unlike operating system mirroring, which mirrors at the disk level. Mirroring means keeping redundant copies, or mirrored copies, of each extent of the file, to help avoid data loss caused by disk failures. The mirrored copy of each file extent is always kept on a different disk from the original copy. If a disk fails, ASM can continue to access affected files by accessing mirrored copies on the surviving disks in the disk group.
ASM supports 2-way mirroring, where each file extent gets one mirrored copy, and 3-way mirroring, where each file extent gets two mirrored copies.
Online storage reconfiguration and dynamic rebalancing—ASM permits you to add or remove disks from your disk storage system while the database is operating. When you add a disk, ASM automatically redistributes the data so that it is evenly spread across all disks in the disk group, including the new disk. This redistribution is known as rebalancing. It is done in the background and with minimal impact to database performance. When you request to remove a disk, ASM first rebalances by evenly relocating all file extents from the disk being removed to the other disks in the disk group.
Managed file creation and deletion—ASM further reduces administration tasks by enabling files stored in ASM disk groups to be Oracle-managed files. ASM automatically assigns filenames when files are created, and automatically deletes files when they are no longer needed.
ASM should be installed separately from the database software in its own ORACLE_HOME directory. This will allow you the flexibility to patch and upgrade ASM and the database software independently.
Several databases can share a single ASM instance. So, although one can create multiple ASM instances on a single system, normal configurations should have one and only one ASM instance per system.
For clustered systems, create one ASM instance per node (called +ASM1, +ASM2, etc).
Generally speaking, one should have only one disk group for all database files – and, optionally, a second for recovery files (see FRA).
Data with different storage characteristics should be stored in different disk groups. Each disk group can have different redundancy (mirroring) settings (high, normal and external), different fail-groups, etc. However, it is generally not necessary to create many disk groups with the same storage characteristics (i.e. +DATA1, +DATA2, etc. all on the same type of disks).
To get started, create 2 disk groups – one for data and one for recovery files. Here is an example:
CREATE DISKGROUP data EXTERNAL REDUNDANCY DISK '/dev/d1', '/dev/d2', '/dev/d3', ...;
CREATE DISKGROUP recover EXTERNAL REDUNDANCY DISK '/dev/d10', '/dev/d11', '/dev/d12', ...;
Automatic file management can then be enabled on top of such a setup.
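With those disk groups in place, Oracle-managed file creation can be enabled by pointing the file-destination parameters at the disk groups. A minimal sketch, in which the recovery area size and the tablespace name are illustrative assumptions:

```sql
-- Point datafile and recovery-file creation at the ASM disk groups
ALTER SYSTEM SET db_create_file_dest = '+DATA';
ALTER SYSTEM SET db_recovery_file_dest_size = 10G;
ALTER SYSTEM SET db_recovery_file_dest = '+RECOVER';

-- From now on no DATAFILE clause is needed; ASM names and places the file
CREATE TABLESPACE app_data;
```

Note that db_recovery_file_dest_size must be set before db_recovery_file_dest.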
Oracle ASM files are stored within the Oracle ASM disk group. If we dig into the internals, Oracle ASM files are stored within the Oracle ASM filesystem structures.
How are the Oracle ASM files stored within the Oracle ASM filesystem structure?
Oracle ASM files are stored within the Oracle ASM filesystem structures as objects that RDBMS instances (Oracle database instances) access. The RDBMS/Oracle instance treats the Oracle ASM files as standard filesystem files.
What are the Oracle ASM files that are stored within the Oracle ASM file hierarchy?
What happens when you create a file/database file in ASM? What commands do you use to create database files?
Once the ASM file is created in an ASM disk group, a filename is generated. This file is now visible to the user via the standard RDBMS view V$DATAFILE.
An incarnation number is part of the ASM filename syntax. It is derived from the timestamp. Once the file is created, its incarnation number does not change.
The incarnation number distinguishes a new file that has been created using the same file number from another file that has been deleted.
ASM's SPFILE resides inside ASM itself. This can be found out in a number of ways, for example by looking at the alert log of ASM when ASM starts:
Machine: x86_64
Using parameter settings in server-side spfile
+DATA/asm/asmparameterfile/registry.253.766260991
System parameters with non-default values:
large_pool_size = 12M
instance_type = "asm"
remote_login_passwordfile = "EXCLUSIVE"
asm_diskgroups = "FLASH"
asm_diskgroups = "DATA"
asm_power_limit = 1
diagnostic_dest = "/opt/app/oracle"
Or by using asmcmd's spget command, which shows the spfile location registered with the GPnP profile:
ASMCMD> spget
+DATA/asm/asmparameterfile/registry.253.766260991
ORACLE – RAC
What is RAC? What is the benefit of RAC over single instance database?
In Real Application Clusters environments, all nodes concurrently execute transactions against
the same database. Real Application Clusters coordinates each node’s access to the shared
data to provide consistency and integrity.
Benefits:
Improve response time
Improve throughput
High availability
Transparency
Oracle RAC One Node is a single instance running on one node of the cluster while the second node is in cold standby mode. If the instance fails for some reason, RAC One Node detects it and restarts the instance on the same node, or the instance is relocated to the second node in case there is a failure or fault in the first node. The benefit of this feature is that it provides a cold failover solution and automates instance relocation without any downtime, with no need for manual intervention. Oracle introduced this feature with the release of 11gR2 (available with Enterprise Edition).
Oracle's Real Application Clusters (RAC) option supports the transparent deployment of a single database across a cluster of servers, providing fault tolerance from hardware failures or planned outages. Oracle RAC running on clusters provides Oracle's highest level of capability in terms of availability, scalability, and low-cost computing.
Oracle Clusterware has two key components: the Oracle Cluster Registry (OCR) and the Voting Disk.
The cluster registry holds all information about nodes, instances, services and ASM storage (if used); it also contains state information, i.e. whether they are available and up, or similar.
The voting disk is used to determine if a node has failed, i.e. become separated from the majority. If a node is deemed to no longer belong to the majority then it is forcibly rebooted and will, after the reboot, add itself again to the surviving cluster nodes.
If a node fails, then the node’s VIP address fails over to another node on which the VIP address
can accept TCP connections but it cannot accept Oracle connections.
Give situations under which VIP address failover happens:
VIP address failover happens when the node on which the VIP address runs fails, when all interfaces for the VIP address fail, or when all interfaces for the VIP address are disconnected from the network.
Using virtual IPs we avoid the TCP/IP timeout problem, because the Oracle Notification Service maintains communication between each node and the listeners.
When a VIP address failover happens, clients that attempt to connect to the VIP address receive a rapid connection-refused error. They don't have to wait for TCP connection timeout messages.
The Voting Disk is a file that sits in the shared storage area and must be accessible by all nodes in the cluster. All nodes in the cluster register their heartbeat information in the voting disk, so as to confirm that they are all operational. If the heartbeat information of any node in the voting disk is not available, that node will be evicted from the cluster. The CSS (Cluster Synchronization Service) daemon in the clusterware maintains the heartbeat of all nodes to the voting disk. When any node is not able to send a heartbeat to the voting disk, it will reboot itself, thus helping to avoid split-brain syndrome.
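On a running 11gR2 cluster, the voting disk locations and their state can be inspected with crsctl. The output below is only an illustration (the identifier and path are placeholders, and the exact column layout varies by version):

```
$ crsctl query css votedisk
##  STATE    File Universal Id    File Name    Disk group
 1. ONLINE   6e5f...              /dev/...     (+DATA)
```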
For high availability, Oracle recommends that you have a minimum of three, or an odd number (3 or greater), of voting disks.
Voting Disk – a file that resides on shared storage and manages cluster membership. The voting disk reassigns cluster ownership between the nodes in case of failure.
The voting disk files are used by Oracle Clusterware to determine which nodes are currently members of the cluster. The voting disk files are also used in concert with other cluster components such as CRS to maintain the cluster's integrity.
Oracle Database 11g Release 2 provides the ability to store the voting disks in ASM along with the OCR. Oracle Clusterware can access the OCR and the voting disks present in ASM even if the ASM instance is down. As a result CSS can continue to maintain the Oracle cluster even if the ASM instance has failed.
http://www.toadworld.com/KNOWLEDGE/KnowledgeXpertforOracle/tabid/648/TopicID/RACR2ARC6/Default.aspx
Oracle expects that you will configure at least 3 voting disks for redundancy purposes. You should always configure an odd number of voting disks >= 3. This is because loss of more than half your voting disks will cause the entire cluster to fail.
You should plan on allocating 280MB for each voting disk file. For example, if you are using ASM and external redundancy then you will need to allocate 280MB of disk for the voting disk. If you are using ASM and normal redundancy you will need 560MB.
Cluster Synchronization Services (ocssd) — Manages cluster node membership and runs as the oracle user; failure of this process results in cluster restart.
Cluster Ready Services (crsd) — The crs process manages cluster resources (which could be a
database, an instance, a service, a Listener, a virtual IP (VIP) address, an application process,
and so on) based on the resource’s configuration information that is stored in the OCR. This
includes start, stop, monitor and failover operations. This process runs as the root user
Event manager daemon (evmd) —A background process that publishes events that crs creates.
Process Monitor Daemon (OPROCD) — This process monitors the cluster and provides I/O fencing. OPROCD performs its check, stops running, and if the wake-up is beyond the expected time, OPROCD resets the processor and reboots the node. An OPROCD failure results in Oracle Clusterware restarting the node. OPROCD uses the hangcheck timer on Linux platforms.
RACG (racgmain, racgimon) —Extends clusterware to support Oracle-specific requirements
and complex resources. Runs server callout scripts when FAN events occur.
Transfer of data across instances through the private interconnect is called Cache Fusion. Oracle RAC is composed of two or more instances. When a block of data is read from a datafile by an instance within the cluster and another instance is in need of the same block, it is easier to get the block image from the instance which has the block in its SGA rather than reading it from disk. To enable inter-instance communication, Oracle RAC makes use of interconnects. The Global Enqueue Service (GES) monitors, and the instance enqueue process manages, Cache Fusion.
Single Client Access Name (SCAN) is a new Oracle Real Application Clusters (RAC) 11g Release 2 feature that provides a single name for clients to access an Oracle Database running in a cluster. The benefit is that clients using SCAN do not need to change if you add or remove nodes in the cluster.
SCAN provides a single domain name via DNS, allowing end-users to address a RAC cluster as if it were a single IP address. SCAN works by replacing a hostname or IP list with virtual IP addresses (VIPs).
Single Client Access Name (SCAN) is meant to provide a single name for all Oracle clients to connect to the cluster database, irrespective of the number of nodes and node locations. Previously, we had to keep adding multiple address records to every client's tnsnames.ora whenever a new node was added to or deleted from the cluster.
Single Client Access Name (SCAN) eliminates the need to change the TNSNAMES entry when nodes are added to or removed from the cluster. RAC instances register with SCAN listeners as remote listeners. Oracle recommends assigning 3 addresses to SCAN, which will create 3 SCAN listeners, even if the cluster has dozens of nodes. SCAN is a domain name registered to at least one and up to three IP addresses, either in DNS (Domain Name Service) or GNS (Grid Naming Service). The SCAN must resolve to at least one address on the public network. For high availability and scalability, Oracle recommends configuring the SCAN to resolve to three addresses.
http://www.freeoraclehelp.com/2011/12/scan-setup-for-oracle-11g-release211gr2.html
1. SCAN Name
2. SCAN IPs (3)
3. SCAN Listeners (3)
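With SCAN in place, a client tnsnames.ora entry needs only the SCAN name, not the individual node VIPs. A sketch with hypothetical host and service names:

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl.example.com))
  )
```

This entry stays valid as nodes are added to or removed from the cluster.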
What is FAN?
Fast Application Notification, abbreviated to FAN, relates to events concerning instances, services and nodes. This is a notification mechanism that Oracle RAC uses to notify other processes about configuration and service-level information, including service status changes such as UP or DOWN events. Applications can respond to FAN events and take immediate action.
What is TAF?
After an Oracle RAC node crashes—usually from a hardware failure—all new application transactions are automatically rerouted to a specified backup node. The challenge in rerouting is to not lose transactions that were "in flight" at the exact moment of the crash. One of the requirements of continuous availability is the ability to restart in-flight application transactions, allowing a failed node to resume processing on another server without interruption. Oracle's answer to application failover is an Oracle Net mechanism dubbed Transparent Application Failover. TAF allows the DBA to configure the type and method of failover for each Oracle Net client.
TAF architecture offers the ability to restart transactions at either the transaction (SELECT) or
session level.
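TAF is configured in the client's tnsnames.ora via the FAILOVER_MODE clause. A sketch with hypothetical host and service names, using SELECT-level failover:

```
ORCL_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl.example.com)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )
```

TYPE=SELECT lets in-flight queries resume on the surviving node; METHOD=BASIC establishes the backup connection only at failover time.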
1. External shared disk to store the Oracle Clusterware files (Voting Disk and Oracle Cluster Registry – OCR).
2. Two network cards on each clusterware node (and three sets of IP addresses) –
Network Card 1 (with IP address set 1) for the public network
Network Card 2 (with IP address set 2) for the private network (for inter-node communication between RAC nodes, used by Clusterware and the RAC database)
IP address set 3 for Virtual IP (VIP) (used as a virtual IP address for client connections and for connection failover)
3. Storage option for OCR and Voting Disk – RAW, OCFS2 (Oracle Cluster File System), NFS, ...
Which enable the load balancing of applications in RAC?
Oracle Net Services enable the load balancing of application connections across all of the
instances in an Oracle RAC database.
If you need to find the location of OCR (Oracle Cluster Registry) but your CRS is down.
When the CRS is down:
Look into “ocr.loc” file, location of this file changes depending on the OS:
On Linux: /etc/oracle/ocr.loc
On Solaris: /var/opt/oracle/ocr.loc
When CRS is UP:
Set ASM environment or CRS environment then run the below command:
ocrcheck
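For example, on Linux the ocr.loc file is a simple key-value listing. The contents below are illustrative; the registry may point at a raw device or, from 11gR2 onward, an ASM disk group:

```
$ cat /etc/oracle/ocr.loc
ocrconfig_loc=+DATA
local_only=FALSE
```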
6 – 3 sets of IP addresses
## eth1-Public: 2
## eth0-Private: 2
## VIP: 2
The public IP address is the normal IP address, typically used by the DBA and system administrator to manage storage, the system and the database. Public IP addresses are reserved for the Internet.
The private IP address is used only for internal clustering processing (Cache Fusion), also known as the interconnect. Private IP addresses are reserved for private networks.
The VIP is used by database applications to enable failover when one cluster node fails. The purpose of having a VIP is that client connections can fail over to surviving nodes in case of failure.
No. The private IP address is used only for internal clustering processing (Cache Fusion), also known as the interconnect.
ORACLE – DATAGUARD
What is Dataguard?
Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby databases as copies of the production database. Data Guard can be used with traditional backup, restoration, and cluster techniques to provide a high level of data protection and data availability.
What is DG Broker?
The Data Guard Broker is the management framework for Data Guard: it automates the creation, maintenance and monitoring of Data Guard configurations, and provides both a GUI (Enterprise Manager) and a command-line interface (DGMGRL).
Dataguard:
Dataguard is a mechanism/tool to maintain a standby database.
Dataguard is set up between a primary and a standby instance.
Data Guard is only available in Enterprise Edition.
Standby Database:
A physical standby database provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis.
Standby capability is available in Standard Edition.
REFERENCE:
http://neeraj-dba.blogspot.in/2011/06/difference-between-dataguard-and.html
What are the differences between Physical/Logical standby databases? How would you decide
which one is best suited for your environment?
Physical standby DB:
As the name implies, it is a physically identical (datafiles, schema, other physical identity) copy of the primary database.
It is synchronized with the primary database by applying redo to the standby DB.
Logical standby DB:
As the name implies, the logical information is the same as in the production database, but the physical structure can be different.
It is synchronized with the primary database through SQL Apply: redo received from the primary database is transformed into SQL statements, which are then executed on the standby DB.
We can open a physical standby DB in read-only mode and make it available to application users (only SELECT is allowed during this period). We cannot apply redo logs received from the primary database at this time.
We do not see such issues with a logical standby database. We can open the database in normal mode and make it available to users. At the same time, we can apply archived logs received from the primary database.
For a large OLTP transactional database it is better to choose a logical standby database.
REFERENCE:
http://gavinsoorma.com/2009/07/11g-snapshot-standby-database/
Like a physical or logical standby database, a snapshot standby database receives and archives redo data from a primary database. Unlike a physical or logical standby database, a snapshot standby database does not apply the redo data that it receives. The redo data received by a snapshot standby database is not applied until the snapshot standby is converted back into a physical standby database, after first discarding any local updates made to the snapshot standby database.
REFERENCE:
http://docs.oracle.com/cd/B28359_01/server.111/b28294/title.htm
What is the default mode the standby will be in, SYNC or ASYNC?
ASYNC
Dataguard Architecture?
Dataguard Architecture
The Oracle 9i Data Guard architecture incorporates the following items:
• Primary Database – A production database that is used to create standby databases. The archive logs from the primary database are transferred and applied to standby databases. Each standby can only be associated with a single primary database, but a single primary database can be associated with multiple standby databases.
• Standby Database – A replica of the primary database.
• Log Transport Services – Control the automatic transfer of archive redo log files from the
primary database to one or more standby destinations.
• Network Configuration – The primary database is connected to one or more standby
databases using Oracle Net.
• Log Apply Services – Apply the archived redo logs to the standby database. The Managed
Recovery Process (MRP) actually does the work of maintaining and applying the archived redo
logs.
• Role Management Services – Control the changing of database roles from primary to standby.
The services include switchover, switchback and failover.
• Data Guard Broker – Controls the creation and monitoring of Data Guard. It comes with a GUI
and command line interface.
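The broker's command-line interface is DGMGRL. A sketch of a typical session (the database names follow the chicago/boston example used elsewhere in this document; credentials are omitted):

```
DGMGRL> CONNECT sys@chicago
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE 'boston';
DGMGRL> SWITCHOVER TO 'boston';
```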
Primary Database:
A Data Guard configuration contains one production database, also referred to as the primary database, that functions in the primary role. This is the database that is accessed by most of your applications.
Standby Database:
A standby database is a transactionally consistent copy of the primary database. Using a backup copy of the primary database, you can create up to nine standby databases and incorporate them in a Data Guard configuration. Once created, Data Guard automatically maintains each standby database by transmitting redo data from the primary database and then applying the redo to the standby database.
The types of standby databases are as follows: physical standby, logical standby and, from 11g onward, snapshot standby.
What are the services required on the primary and standby database ?
Redo transport services control the automated transfer of redo data from the production database to one or more
archival destinations. The redo transport services perform the following tasks:
a) Transmit redo data from the primary system to the standby systems in the configuration.
b) Manage the process of resolving any gaps in the archived redo log files due to a network
failure.
c) Automatically detect missing or corrupted archived redo log files on a standby system and
automatically retrieve replacement archived redo log files from the
primary database or another standby database.
Maximum Availability
This protection mode provides the highest level of data protection that is possible without
compromising the availability of a primary database. Transactions do not commit until all redo
data needed to recover those transactions has been written to the online redo log and to at least
one synchronized standby database. If the primary database cannot write its redo stream to at
least one synchronized standby database, it operates as if it were in maximum performance
mode to preserve primary database availability until it is again able to write its redo stream to a
synchronized standby database.
This mode ensures that no data loss will occur if the primary database fails, but only if a second
fault does not prevent a complete set of redo data from being sent from the primary database to
at least one standby database.
Maximum Performance
This protection mode provides the highest level of data protection that is possible without
affecting the performance of a primary database. This is accomplished by allowing transactions
to commit as soon as all redo data generated by those transactions has been written to the
online log. Redo data is also written to one or more standby databases, but this is done
asynchronously with respect to transaction commitment, so primary database performance is
unaffected by delays in writing redo data to the standby database(s).
This protection mode offers slightly less data protection than maximum availability mode and
has minimal impact on primary database performance.
This is the default protection mode.
Maximum Protection
This protection mode ensures that zero data loss occurs if a primary database fails. To provide
this level of protection, the redo data needed to recover a transaction must be written to both the
online redo log and to at least one synchronized standby database before the transaction
commits. To ensure that data loss cannot occur, the primary database will shut down, rather
than continue processing transactions, if it cannot write its redo stream to at least one
synchronized standby database.
Because this data protection mode prioritizes data protection over primary database availability,
Oracle recommends that a minimum of two standby databases be used to protect a primary
database that runs in maximum protection mode to prevent a single standby database failure
from causing the primary database to shut down.
On the primary database, you define initialization parameters that control redo transport services while the database is in the primary role. There are additional parameters you need to add that control the receipt of redo data and log apply services when the primary database is transitioned to the standby role.
DB_NAME=chicago
DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/chicago/control1.ctl', '/arch2/chicago/control2.ctl'
LOG_ARCHIVE_DEST_1=
'LOCATION=/arch1/chicago/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
'SERVICE=boston LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
Primary Database: Standby Role Initialization Parameters
FAL_SERVER=boston
FAL_CLIENT=chicago
DB_FILE_NAME_CONVERT='boston','chicago'
LOG_FILE_NAME_CONVERT=
'/arch1/boston/','/arch1/chicago/','/arch2/boston/','/arch2/chicago/'
STANDBY_FILE_MANAGEMENT=AUTO
DB_NAME=chicago
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
CONTROL_FILES='/arch1/boston/control1.ctl', '/arch2/boston/control2.ctl'
DB_FILE_NAME_CONVERT='chicago','boston'
LOG_FILE_NAME_CONVERT=
'/arch1/chicago/','/arch1/boston/','/arch2/chicago/','/arch2/boston/'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2='SERVICE=chicago LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=chicago
FAL_CLIENT=boston
High performance is a common expectation for end users. In fact, the database itself is never simply slow or fast; in most cases, sessions connected to the database slow down when they receive an unexpected hit. Thus, to solve this issue you need to find those unexpected hits. To know exactly what a session is doing, join v$session with v$session_wait:
SELECT NVL(s.username,'(oracle)') AS username, s.sid, s.serial#, sw.event, sw.wait_time,
       sw.seconds_in_wait, sw.state
FROM v$session_wait sw, v$session s
WHERE s.sid = sw.sid AND s.username = '&username'
ORDER BY sw.seconds_in_wait DESC;
More:
1. Run the TOP command in Linux to check CPU usage.
2. Run the VMSTAT, SAR and PRSTAT commands to get more information on CPU and memory usage and possible blocking.
3. Enable tracing before running your queries, then check the trace file using tkprof to create an output file.
According to the explain plan, check the elapsed time for each query, then tune them accordingly.
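Step 3 above can be sketched as follows; the tracefile identifier is an arbitrary illustrative tag, and the resulting trace file name will vary:

```sql
-- Tag the trace file so it is easy to find in the diagnostic trace directory
ALTER SESSION SET tracefile_identifier = 'perf_check';
ALTER SESSION SET sql_trace = TRUE;

-- ... run the statements under investigation here ...

ALTER SESSION SET sql_trace = FALSE;
```

The resulting trace file can then be formatted, e.g. tkprof <tracefile> out.txt sys=no sort=fchela.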
If you are getting high "buffer busy waits", how can you find the reason behind them?
Buffer busy waits mean that queries are waiting on blocks in the db cache: the block may be busy in the cache and the session is waiting for it. It could be an undo/data block or segment header wait.
Run the first query below to find the P1, P2 and P3 values of a session causing buffer busy waits, then run the second query using those P1, P2 and P3 values.
SQL> SELECT p1 "File #", p2 "Block #", p3 "Reason Code" FROM v$session_wait
     WHERE event = 'buffer busy waits';
SQL> SELECT owner, segment_name, segment_type FROM dba_extents
     WHERE file_id = &P1 AND &P2 BETWEEN block_id AND block_id + blocks - 1;
Many DBAs already know how to use STATSPACK but are not always sure what to check regularly.
Remember to separate OLTP and batch activity when you run STATSPACK, since they usually generate different types of waits. The SQL script "spauto.sql" can be used to run STATSPACK every hour on the hour. See the script in $ORACLE_HOME/rdbms/admin/spauto.sql for more information (note that JOB_QUEUE_PROCESSES must be set > 0). Since every system is different, this is only a general list of things you should regularly check in your STATSPACK output:
What is the difference between DB file sequential read and DB file scattered read?
DB file sequential read is associated with index reads, whereas DB file scattered read has to do with full table scans.
DB file sequential read reads a block into contiguous memory, while DB file scattered read reads multiple blocks and scatters them into the buffer cache.
Which factors are to be considered for creating index on Table? How to select column for index?
Creation of an index on a table depends on the size of the table and the volume of data. If the table is large and we need only a few rows for selects or reports, then we need to create an index. There are some basic reasons for selecting a column for indexing, like cardinality and frequent usage in the WHERE condition of select queries. Business rules also force index creation, like primary keys, because configuring a primary key or unique key automatically creates a unique index.
It is important to note that creating too many indexes would affect the performance of DML on the table, because a single transaction would need to operate on various index segments and the table simultaneously.
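A quick way to gauge whether a column is selective enough to index is to compare its distinct-value count to the row count. The table and column names below are hypothetical:

```sql
-- High selectivity (distinct count close to row count) favors a B-tree index;
-- low selectivity may favor a different access path, or no index at all
SELECT COUNT(*)               AS total_rows,
       COUNT(DISTINCT status) AS distinct_values
FROM   orders;
```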
YES. You can create and rebuild indexes online. This enables you to update base tables at the same time you are building or rebuilding indexes on those tables. You can perform DML operations while the index build is taking place, but DDL operations are not allowed. Parallel execution is not supported when creating or rebuilding an index online.
CREATE INDEX emp_name ON emp (mgr, emp1, emp2, emp3) ONLINE;
How can you track the password change for a user in oracle?
Oracle only tracks the date that the password will expire, based on when it was last changed. Thus, by listing DBA_USERS.EXPIRY_DATE and subtracting PASSWORD_LIFE_TIME, you can determine when the password was last changed. You can also check the last password change time directly from the PTIME column in the USER$ table (on which the DBA_USERS view is based). But if you have PASSWORD_REUSE_TIME and/or PASSWORD_REUSE_MAX set in a profile assigned to a user account, then you can reference the dictionary table USER_HISTORY$ for when the password was changed for that account.
SELECT user$.NAME, user$.PASSWORD, user$.ptime, user_history$.password_date
FROM SYS.user_history$, SYS.user$
WHERE user_history$.user# = user$.user#;
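To estimate the last change date from the dictionary views alone (without touching USER$), you can combine DBA_USERS with the profile limit; SCOTT below is just an example account:

```sql
-- EXPIRY_DATE minus PASSWORD_LIFE_TIME approximates the last change date
SELECT u.username,
       u.expiry_date,
       p.limit AS password_life_time
FROM   dba_users    u
JOIN   dba_profiles p
  ON   p.profile       = u.profile
 AND   p.resource_name = 'PASSWORD_LIFE_TIME'
WHERE  u.username = 'SCOTT';
```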
Through the use of SEPS (Secure External Password Store) you can store password credentials
for connecting to the database in a client-side Oracle wallet. This feature was introduced in
Oracle 10g. With it, application code, scheduled jobs, and scripts no longer need embedded
usernames and passwords. This reduces risk because the passwords are no longer exposed,
and password management policies can be enforced without changing application code every
time a username or password changes.
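Setting up SEPS typically looks like the following; the wallet path and the TNS alias "mydb" are example values:

```shell
# Create the client-side wallet (prompts for a wallet password)
mkstore -wrl /u01/app/oracle/wallet -create

# Store the username/password for the TNS alias "mydb"
mkstore -wrl /u01/app/oracle/wallet -createCredential mydb app_user app_password
```

The client's sqlnet.ora then needs WALLET_LOCATION pointing at that directory and SQLNET.WALLET_OVERRIDE = TRUE, after which scripts can connect with `sqlplus /@mydb`.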
Why do we need the CASCADE option with the DROP USER command, and why does "DROP
USER" fail when we don't use it?
If the user owns any objects, then you cannot drop that user without the CASCADE option. The
DROP USER command with CASCADE drops the user along with all of its associated objects.
Remember that this is a DDL command: once it has executed, it cannot be rolled back.
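For example (SCOTT is a placeholder schema):

```sql
DROP USER scott;          -- fails with ORA-01922 if SCOTT owns any objects
DROP USER scott CASCADE;  -- drops the user and every object in its schema
```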
I find there is always some confusion when talking about Redo, Rollback and Undo. They all
sound like pretty much the same thing or at least pretty close.
Redo: Every Oracle database has a set of (two or more) redo log files. The redo log records all
changes made to data, including both uncommitted and committed changes. In addition to the
online redo logs Oracle also stores archive redo logs. All redo logs are used in recovery
situations.
Rollback: More specifically rollback segments. Rollback segments store the data as it was
before changes were made. This is in contrast to the redo log which is a record of the
insert/update/deletes.
Undo: Rollback segments and undo are really one and the same. Undo data is stored in the
undo tablespace. Undo is helpful in building a read-consistent view of data.
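You can see where each of these lives in your own database with a couple of quick queries (SQL*Plus shown):

```sql
-- Online redo log members
SELECT group#, member FROM v$logfile;

-- Which undo tablespace the instance is using
SHOW PARAMETER undo_tablespace
```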
You have more than 3 instances running on a Linux server. How can you determine which
shared memory and semaphores are associated with which instance?
Oradebug is an undocumented utility supplied by Oracle. The oradebug help command lists
the commands available.
SQL>oradebug setmypid
SQL>oradebug ipc
SQL>oradebug tracefile_name
If you are using the SYS user to drop a table, the object will not go to the recycle bin, because
there is no recycle bin for objects in the SYSTEM tablespace, even when the recyclebin
parameter is enabled.
SELECT * FROM v$parameter WHERE name = 'recyclebin';
SHOW PARAMETER recyclebin;
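By contrast, a drop issued by an ordinary user does go to the recycle bin and can be restored; the table name below is illustrative:

```sql
DROP TABLE emp_copy;
SELECT object_name, original_name FROM user_recyclebin;
FLASHBACK TABLE emp_copy TO BEFORE DROP;
```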
Temp tablespace is 100% full and there is no space available to add datafiles to increase the
temp tablespace. What can you do in that case to free up TEMP space?
Closing some of the idle sessions connected to the database will free some TEMP space.
Otherwise, on a dictionary-managed temporary tablespace, you can prompt SMON to coalesce
free extents with 'ALTER TABLESPACE <temp_ts> DEFAULT STORAGE (PCTINCREASE 1)'
followed by 'ALTER TABLESPACE <temp_ts> DEFAULT STORAGE (PCTINCREASE 0)'.
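To decide which sessions to target, first see who is holding temp space via V$TEMPSEG_USAGE (older releases expose the same data as V$SORT_USAGE):

```sql
-- Sessions currently holding temporary segments, largest first
SELECT s.sid, s.serial#, s.username, u.tablespace, u.blocks
FROM   v$session       s
JOIN   v$tempseg_usage u ON u.session_addr = s.saddr
ORDER  BY u.blocks DESC;
```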
Row Migration:
A row migrates when an update to that row would cause it to not fit on the block anymore (with
all of the other data that exists there currently). A migration means that the entire row will move
and we just leave behind the "forwarding address". So, the original block just has the rowid of
the new block and the entire row is moved.
Row Chaining:
A row is too large to fit into a single database block. For example, if you use a 4KB blocksize for
your database, and you need to insert a row of 8KB into it, Oracle will use 3 blocks and store
the row in pieces.
Some conditions that will cause row chaining are: Tables whose rowsize exceeds the blocksize.
Tables with LONG and LONG RAW columns are prone to having chained rows. Tables with
more than 255 columns will have chained rows, as Oracle breaks wide tables up into pieces.
So, instead of just having a forwarding address on one block and the data on another we have
data on two or more blocks.
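Both conditions show up in the CHAIN_CNT column once the table has been analyzed with the ANALYZE command (DBMS_STATS does not populate this column); EMP below is an example table:

```sql
ANALYZE TABLE emp COMPUTE STATISTICS;

SELECT table_name, num_rows, chain_cnt   -- chained plus migrated rows
FROM   user_tables
WHERE  table_name = 'EMP';
```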