Jonn Interview Questions and Answers

Pravin Patil is an Oracle DBA with over 5 years of experience in various DBA tasks including user management, database upgrades, and performance tuning. He supports a US client with around 150 databases, managing both production and non-production environments, and performs daily activities such as monitoring alerts and executing scheduled jobs. His expertise includes working with Oracle databases in RAC environments, utilizing tools like SQL Developer and TOAD, and handling database creation, patching, and backups.


Topics

Self Introduction and Current Environment


1. Can you please briefly introduce yourself? What are your day-to-day activities? Could you tell me about your overall experience? (N-FU)

It's my pleasure to introduce myself. I am Pravin Patil. I have around 5+ years of IT experience as an Oracle DBA in Level 2 support, and I started my career as a trainee Oracle DBA. Over the last 5 years I have gained wide exposure to DBA tasks such as user management, tablespace management, RMAN as well as manual backups, creating and managing databases, Data Guard, database upgrades, and patching. I also have experience in Oracle performance tuning, along with good knowledge of managing databases in RAC environments and of GoldenGate.

2. What are your day-to-day activities?


I work for a US client, so we provide 24*7 support in shifts on on-premises infrastructure; our day-to-day activities differ depending on the shift. My day starts with attending the handover call. We then check our emails and alerts, and handle any pending tickets or emails on a priority basis. We have scheduled daily jobs for backups, tablespace alerting, blocking locks, load on the servers, long-running sessions, Data Guard lag, service status, and many more, so we check the corresponding emails. We verify that the backups completed successfully; if there was an issue, we fix it and retrigger the backup. We also check any alerts for tablespaces, blocking locks, long-running sessions, and so on, and then work on scheduled activities such as cloning, patching, database upgrades, RMAN, Data Guard, and RAC.

3. What is your current environment or infrastructure?


We have 12c and 19c databases, around 150 in total; 60 of them are production databases and the rest are non-production. Some are on RAC and some are standalone. We have Data Guard environments for both 12c and 19c databases, we use RMAN backups, and some of our databases are in the process of being upgraded.

4. What is your technology expertise? Current environment?


I have wide exposure to DBA activities such as creating databases, creating and managing tablespaces, user management, logical backups using Data Pump, physical backups both manual and with RMAN, cloning, patching, database upgrades, configuring and managing standby databases, and managing ASM and RAC services.
Coming to my current environment, I work for one dedicated US-based client. There we have almost 70-80 databases; 25 of them are production and the rest are non-production. Some databases are on 12c and some on 19c. Some production databases are on RAC, some are standalone. Apart from this, a few production databases have a Data Guard setup with a physical standby. For monitoring we have a separate tool from HP; a separate monitoring team monitors our databases and raises Jira tickets, and we work on the tickets on a priority basis. Some of the databases are still on 12c, and we are upgrading a few of them to 19c.

5. What is your job role?


I work in L1 and L2 support. In that role I monitor tablespace growth and disk space utilisation on file systems as well as ASM disks, check the load on servers, check and kill inactive sessions, check and gather statistics at table and schema level, check the plans of long-running SQL, export and import schemas and tables, and check standby status and rebuild standbys as required. Apart from this, I work on quarterly cloning and patching activities and restart database services on standalone as well as RAC systems.
6. What is the size of your databases?
Sizes vary from 15 GB to 17 TB. Two of those databases are around 15 TB (redo logs: 16 groups with 2 members each at 2 GB, SGA 65 GB, RAM 256 GB).

7. Which tools do you use to execute DML statements?


We use SQL Developer and TOAD.

8. While applying patches do you involve in any call? During downtime do you
start any bridge call? How do you manage downtime?
For non-production instances we don't set up calls; we simply drop an email to the respective team proposing a downtime window, and if they are okay with that window, we stop the database and apply the patch.
If it's a production database, our senior DBA schedules the downtime a couple of months in advance. During the downtime we join a bridge call with the DBA team and apply the patch; while one of us applies it, a colleague/teammate monitors the screen.

Database Creation and Software installation


9. What are the prerequisites for database configuration?
Check the disk space, create the mandatory directory structure for storing CRD (control, redo, data) files and backups, and check whether any other databases are running on the server (ps -ef | grep pmon). If yes, check the listener ports of the currently running databases, because we have to use a different port for the new listener.

10. On your server how many mounts or disks do you have and what is stored in
it?
We have 3 mount points: /d01, /d02, and /d04. /d01 contains the Oracle homes, /d02 contains the CRD files, and /d04 contains backups and archive log files.

11. What is the use of oraInventory?


oraInventory is the central inventory of Oracle installations on a server. Whenever you install Oracle software, the Oracle home gets registered in oraInventory.

12. What things will you consider before installing Oracle software?
Verify that all prerequisite packages are installed on the server, create the required groups and users, check the space on the mount points, create the directory structure, and set the permissions and owners of the directories.
13. How do you create a database? What do we have to consider before creating a database?
Using DBCA or manually. Check the disk space, create the mandatory directory structure for storing CRD files and backups, and check whether any other databases are running on the server (ps -ef | grep pmon). If yes, check the listener ports of the currently running databases, because we have to use a different port for the new listener.

14. what is dba group ? What is the use of that ?


The dba group is an OS-level group; OS users who belong to it can administer the database (for example, connect as SYSDBA using OS authentication).

15. Startup sequence of oracle database?


Nomount, mount, and then open. In the nomount stage Oracle reads the parameter file, in the mount stage Oracle reads the control files, and in open mode Oracle opens the database.
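The three stages above can be stepped through one at a time; a minimal SQL*Plus sketch (assuming the environment is already set):

```sql
-- Sketch: stepping through the startup stages manually
STARTUP NOMOUNT;        -- instance started: parameter file read, SGA allocated
ALTER DATABASE MOUNT;   -- control files read
ALTER DATABASE OPEN;    -- datafiles and redo logs opened, database usable
```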

16. What is the difference between shutdown transactional and shutdown immediate? (N-RE)
Shutdown transactional waits for users to complete their currently running transactions, whereas shutdown immediate does not wait for current transactions to complete.

17. At the time of shutdown immediate, what happens to running transactions? (N-RE)
They are rolled back and the sessions are disconnected.

18. What is the path of listener.ora. (N-RE)

$ORACLE_HOME/network/admin

19. What is the role of the listener? (N-ML)
The listener accepts incoming connection requests and hands them over to the database (a server process) for authentication.

20. What if we don't have a listener? (N-ML)
Then you won't be able to connect to the database remotely, for example from SQL Developer, TOAD, or the application.

21. How will you find how many instances are there on the server, and how many are running? How will you get the home location of a database?
We get information about all databases from the /etc/oratab file.
Use the ps -ef | grep pmon command to check the running instances on the server.
Check the inventory.xml file under oraInventory to get the Oracle home locations, or you can get them from the oratab file as well.
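A quick sketch of the commands above (the inventory path shown is an assumption; the real location is recorded in /etc/oraInst.loc):

```shell
cat /etc/oratab                 # SID:ORACLE_HOME:Y|N for each registered database
ps -ef | grep pmon              # one pmon process per running instance
cat /etc/oraInst.loc            # points to the central inventory directory
grep "HOME NAME" /u01/app/oraInventory/ContentsXML/inventory.xml   # Oracle homes
```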

22. How will you start the database, and which parameters will you use in the environment file? If you don't have an environment file, how will you log in to the database?
We use oraenv to set the environment and then start the database using the startup command. If oraenv doesn't exist, you can export ORACLE_HOME and ORACLE_SID manually.

23. If oraenv doesn't exist, how will you log in?
Export ORACLE_HOME and ORACLE_SID manually at OS level using the export command.
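A minimal sketch of setting the environment by hand (the home path and SID below are hypothetical; take the real values from /etc/oratab):

```shell
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1   # hypothetical path
export ORACLE_SID=ORCL                                       # hypothetical SID
export PATH=$ORACLE_HOME/bin:$PATH
sqlplus / as sysdba                                          # then: startup
```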

Architecture of Oracle database

24. What is the difference between SGA_Target and SGA_MAX_SIZE?


SGA_TARGET is the SGA size to set; SGA_MAX_SIZE is the maximum memory the instance can grow the SGA to when utilisation is high.

25. Do you know archive logs? How to resize archive logs?


Yes. We cannot resize archive logs, as they are copies of the online redo logs; to change their size, the online redo log groups must be recreated with a new size.

26. What is the select query flow in the database?


1. The server process receives the statement sent by the user process and hands it over to the library cache of the shared pool.
2. The first phase of SQL execution, parsing, is done in the library cache.
3. Then the optimizer (the brain of the Oracle SQL engine) generates many execution plans but chooses the best one based on time and cost (time: response time; cost: CPU resource utilisation).
4. The server process sends the parsed statement with its execution plan to the PGA, and the second phase, execution, is done there.
5. After execution, the server process searches for the data starting from the LRU end of the LRU list of the database buffer cache, continuing until it finds the data or reaches the MRU end. If it finds the data, it is returned to the user; if not, the data is not in the database buffer cache.
6. In that case, the server process copies the data from the datafiles to the MRU end of the LRU list of the database buffer cache.
7. From the MRU end the blocks are copied to the PGA, the required rows are filtered, and the result is returned to the user (displayed on the user's console).

27. What is the shared pool and LC? (N-FU)
The shared pool caches the text, parsed form, and execution plan of SQL and PL/SQL statements. It is further divided into two parts: the library cache and the data dictionary cache.
Library cache: contains the parsed SQL statements and their current execution plans.
Data dictionary cache: contains rows of data dictionary information.
28. How to find archive location in the database. (N-RE)
Using archive log list command or using
log_archive_dest parameter.

29. Tell me any 5 background processes. (N-RE)


PMON, SMON, DBWR, LGWR, CKPT, ARCn

30. What is a checkpoint? (N-RE)


Checkpoint writes the latest SCN to the datafile headers and control files.

31. What is instance recovery?

When the database crashes, SMON performs instance recovery at the next startup using the roll-forward and roll-backward mechanism: it compares the latest SCN and checkpoint numbers between the datafile headers and the redo logs, then committed data is written to the datafiles by rolling forward, and uncommitted data is rolled back.

32. What is the role of the checkpoint process?
Checkpoint writes the latest SCN to the datafile headers and control files.

33. What is parsing? When does Oracle perform hard parsing and soft parsing?
Parsing is the phase of SQL execution where the syntax and semantic checks are done; after that, the optimizer generates multiple execution plans and stores the parsed SQL, the chosen plan, and the hash value in the library cache of the shared pool.
If the execution plan already exists in the library cache of the shared pool, the server process does not need to perform parsing again; this is called soft parsing. If the plan does not exist in the library cache, the server process has to parse the SQL; that is called hard parsing.

34. What is the phase / stage of sql execution?

Parsing, execution, and fetch.

35. What is row chaining and row migration.

Row chaining - when a row is too large to fit into one data block when it is first inserted. In this case, Oracle stores the data for the row in a chain of one or more data blocks reserved for that segment.

Row migration - when a row that originally fit into one data block is updated so that the overall row length increases, and the block's free space is already completely filled. In this case, Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit in the new block.

36. What is HWM (High Water Mark)

The high water mark is the maximum number of database blocks used so far by a segment.
37. During the Startup of a database, in which order does Oracle software search a
parameter file?
First oracle will look for spfile and then pfile.

38. What do you mean by redo log?


Redo logs are database files to which LGWR writes redo entries from the redo log buffer.

39. What are the states of redo logs? How does it work?
Current, Active, Inactive, unused.
LGWR writes redo entries to the current redo log; once the redo log is full, a log switch happens, and LGWR stops writing to the current redo log and starts writing to the next one.

40. What is SCN?


SCN is a system change number which is generated
after every commit.

41. How will you protect online redolog files?

By multiplexing the redo logs, i.e. keeping multiple members per group, ideally on different disks.

42. Types of tablespace?


Permanent, undo and temp

43. How will you monitor tablespace?


We have cron jobs set up to monitor tablespaces and send alerts to the DBA team when a tablespace is 90% full.
OR
We have a separate monitoring team which uses HP tools to monitor tablespaces and creates Jira tickets when a tablespace is 90% full.
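A sketch of the kind of query such a monitoring job might run, built on the standard DBA_DATA_FILES and DBA_FREE_SPACE views (the 90% threshold is applied by the alerting script, not the query):

```sql
-- Percentage used per tablespace; alert when pct_used exceeds 90
SELECT df.tablespace_name,
       ROUND((df.total_mb - NVL(fs.free_mb, 0)) / df.total_mb * 100, 2) AS pct_used
FROM  (SELECT tablespace_name, SUM(bytes)/1024/1024 AS total_mb
       FROM   dba_data_files GROUP BY tablespace_name) df
LEFT JOIN
      (SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
       FROM   dba_free_space GROUP BY tablespace_name) fs
  ON  df.tablespace_name = fs.tablespace_name
ORDER BY pct_used DESC;
```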

44. How will you monitor databases?


Through OEM and through some third party tools.

45. How will you add space to tablespace?


By adding new datafiles (alter tablespace ... add datafile) or by resizing existing datafiles (alter database datafile ... resize).

46. What is a tempfile? Why do we use it, and for what kind of data?
A tempfile belongs to the temporary tablespace; it holds temporary data used for sorting operations.
47. What is stored in the sysaux tablespace?
Sysaux contains some indexes and non-sys-related
tables, as well as awr snapshots.

48. Can we add multiple datafiles to big file tablespace?


No, we can not add multiple datafiles to a big file
tablespace.

49. what stores in SYSTEM, SYSAUX, USERS, TEMP, UNDO tablespace?


The SYSTEM tablespace contains the data dictionary base tables.
SYSAUX contains indexes and AWR snapshot data.
The USERS tablespace contains the permanent data of users.
The TEMP tablespace is used for sorting purposes.
The UNDO tablespace stores the before-images of changed data blocks.

50. how to create / drop tablespace & datafile?


Create tablespace mytbs datafile '/path/mytbs01.dbf' size 100m;
Drop tablespace mytbs including contents and datafiles;
Alter tablespace mytbs drop datafile '/path/mytbs01.dbf';

51. Steps to add datafile?


Before adding a datafile to any tablespace, I check the free space on the mount point or ASM disks, then I check the existing datafiles of that tablespace, and then I add the datafile using a command like: Alter tablespace mytbs add datafile '/path/mytbs02.dbf' size 5m autoextend on next 100m maxsize 10G;

52. How to make tablespace offline/online ?


Alter tablespace tablespace_name offline/online;
53. What happens when you put a tablespace in offline mode? What about the SYSTEM tablespace?
Users won't be able to read or write data in that tablespace. The SYSTEM tablespace cannot be taken offline; without it online, the database will not come up.

54. What is the max size of tablespace? How many datafiles can we add to
tablespace?
We cannot set a maxsize on a tablespace; we set maxsize at the datafile level (32 GB for a smallfile datafile with an 8 KB block size). We can add a maximum of 1022 datafiles to one smallfile tablespace.

55. What Information Control File Contains?


Database name, database creation time stamp,
location of datafiles and online redo log files, rman
backup information, SCN number and current log sequence
number.

56. How to resize the datafile. (N-RE)


Alter database datafile ‘path’ resize 10G;
Alter database datafile ‘path’ autoextend on maxsize
10G;

57. How to check redo log file size. (N-RE)

Check the bytes column from v$log view.

58. How to increase the redo log file size. (N-RE)

We cannot resize redo log members; we have to drop the redo log groups/members and then recreate them with the new size.

59. How to add a redo log member in the redo log group. (N-RE)

alter database add logfile member '/path/redo05b.log' to group 5;
(A new member inherits the size of its group, so no size clause is needed.)

60. How to know the temporary tablespace/tempfile information. (N-RE)

Using the dba_temp_files or v$tempfile views.

61. How to find default temporary and default tablespace in database. (N-RE)

Check the DEFAULT_TEMP_TABLESPACE property value in the DATABASE_PROPERTIES view:

SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

62. What is meant by roll forward and roll backward.

They are used in instance recovery: committed data is written to the datafiles using the roll-forward mechanism, and uncommitted data is rolled back using the roll-backward mechanism.

63. What is the location of the parameter file?


$ORACLE_HOME/dbs

64. What are the types of parameter file?

Pfile and spfile

65. What is the difference between pfile and spfile?

A pfile is in plain-text format and can be edited with the vi editor; if you make any changes in the pfile, you need to bounce the database for them to take effect. An spfile is in binary format; to update parameters you change them at the database level using the alter system command, and if the parameter is dynamic you do not need to bounce the database.

66. Why do you need to multiplex control file and how do you do it?

To protect control files from loss, we keep multiple copies.
To multiplex: update the new control file location in the parameter file (control_files parameter), shut down the database, copy the old control file to the new location, and then start the database.
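The steps above, sketched for the spfile case (paths and database name are hypothetical):

```sql
-- 1. Register the extra copy in the spfile
ALTER SYSTEM SET control_files =
  '/d02/oradata/ORCL/control01.ctl',
  '/d02/oradata/ORCL/control02.ctl'
  SCOPE = SPFILE;
SHUTDOWN IMMEDIATE;
-- 2. At OS level: cp /d02/oradata/ORCL/control01.ctl /d02/oradata/ORCL/control02.ctl
STARTUP;
-- 3. Verify both copies are in use
SELECT name FROM v$controlfile;
```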

User Management

67. How will you create a profile and apply to users?


Use the create profile command: create profile profile_name limit ...;
Then assign it: alter user username profile profile_name;
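A minimal sketch (the profile name, limits, and user below are hypothetical):

```sql
CREATE PROFILE app_secure LIMIT
  FAILED_LOGIN_ATTEMPTS 5      -- lock the account after 5 bad passwords
  PASSWORD_LOCK_TIME    1      -- keep it locked for 1 day
  PASSWORD_LIFE_TIME    60;    -- force a password change every 60 days

ALTER USER scott PROFILE app_secure;   -- apply the profile to a user
```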

68. How will you check if the user is locked?


Check the account_status column in the dba_users view.

69. How to unlock the user.


Alter user username account unlock;

70. User is complaining that he is not able to connect to the database. What will be
your approach to solve the user problem?
First I check whether the database is up, then I check the listener status, then I check the user's account status in the dba_users view. If the user's account is locked, we unlock it; if the user's password is not working, we reset the password.

71. Which privileges are required to run an alter user command?


The ALTER USER system privilege.

72. How do you set a password that expires in 15 days?


Set the PASSWORD_LIFE_TIME limit in the user's profile to 15 days: alter profile profile_name limit password_life_time 15;

73. What is profile,privileges & role in oracle?


A profile is used to set password security rules and resource usage limits.
A privilege is permission to execute a particular type of SQL statement.
Roles are collections of multiple privileges; you can grant/revoke privileges easily through roles.

74. Client is asking for sysdba access. What is the command to give access?
It's not a good idea to grant the sysdba privilege to a normal user/client. I would ask the client/user why they need sysdba; instead of granting it, I would ask them to provide us the details so that we can perform the task on their behalf.

Database auditing

75. What is Database Auditing?


Auditing tracks the use of privileges, the activity of users, access to sensitive data, actions performed on database objects, and modifications made to database settings.

76. Why Do We Need Auditing?


To deter users from inappropriate actions, to investigate suspicious activity, to trace the actions of an unauthorised user, and to monitor and gather data about specific database activities.

77. What are the levels of auditing?


We can enable audit at 3 levels, statement, object
and system or privilege level.

78. What is by access and by session in database auditing?


By access means the database writes one audit record for each audited statement and operation; it generates more audit records.
By session means Oracle writes a single record for all SQL statements of the same type issued in a session; it generates fewer audit records.
79. How to enable audit?
We have to set the audit_trail parameter.
DB = save audit records to the database audit trail (SYS.AUD$ table).
OS = save audit records to operating system files (in the directory pointed to by AUDIT_FILE_DEST).
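audit_trail is a static parameter, so a bounce is needed; a sketch of enabling it and then switching on a statement-level audit (the audited user and actions are hypothetical):

```sql
ALTER SYSTEM SET audit_trail = DB SCOPE = SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
-- then enable the audits you need, e.g.:
AUDIT SELECT TABLE, UPDATE TABLE BY scott BY ACCESS;
-- audit records land in SYS.AUD$ (queried via DBA_AUDIT_TRAIL)
```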

80. Where are the audit files stored?


At OS level, in the directory set by the audit_file_dest parameter; at database level, in the AUD$ table under the SYS schema.

TDE - Transparent Data Encryption / Masking

81. What is TDE? Why do we enable TDE?


TDE is used to encrypt the data stored in the OS data files. It encrypts data at the storage level to prevent tampering with, or reading of, the data from outside the database.

82. What are the levels in TDE?


We can enable TDE at column level and at tablespace level.
83. What is masking on a table?
Masking here is nothing but enabling encryption at column level using TDE.

84. How to Configure TDE?


First create the directory to store the configuration files and update the wallet location in sqlnet.ora. Then create the keystore using the ADMINISTER KEY MANAGEMENT CREATE KEYSTORE command, open it using ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN, and set the master key using ADMINISTER KEY MANAGEMENT SET KEY. After that you can create a table or tablespace with the encryption option.
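A sketch of those steps for tablespace-level TDE (the wallet path, password, and tablespace name are hypothetical; the sqlnet.ora entry is shown as a comment):

```sql
-- sqlnet.ora: ENCRYPTION_WALLET_LOCATION =
--   (SOURCE = (METHOD = FILE)(METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet)))
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/wallet'
  IDENTIFIED BY "WalletPwd#1";
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "WalletPwd#1";
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "WalletPwd#1" WITH BACKUP;
-- now encrypted storage can be created:
CREATE TABLESPACE enc_tbs
  DATAFILE '/d02/oradata/ORCL/enc_tbs01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
```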

Logical Backup/ Data Pump


85. We have a 10TB database size? How much space do we need to take the
backup?
It totally depends on which method you use to take the backup. If it's a hot/cold backup, you need almost 10 TB; if you use RMAN, you will need around 6 TB for a single backup. If you want to retain the last 3-4 backups, you will need 3-4 times the backup size.

86. Do you know about export imports? traditional or data pump? I want to export
only data from the table. Which parameter will you use?
Use the content=data_only parameter.

87. Prerequisite before taking logical backup?


Check the size of the table or schema and then check whether you have enough space on disk. Check the load on the server; if the load is high, wait until it comes down.

88. Prerequisite before importing schema to different databases?


First we check the dump file size and whether we have enough free space in the tablespace as well as on disk. If the user already exists, we take the DDL of the user and cross-check the grants and default tablespace of that user. Then we start the import and monitor the alert log and tablespace usage.

89. How to kill running logical backup?


Check the job name in the dba_datapump_jobs view, connect to that job using the attach parameter, and run the KILL_JOB command (or STOP_JOB=IMMEDIATE to stop it with the option of restarting later).
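A sketch of the attach flow (the credentials and job name below are hypothetical; take the real job name from dba_datapump_jobs):

```shell
# find the job:  SELECT job_name, state FROM dba_datapump_jobs;
expdp system/manager attach=SYS_EXPORT_SCHEMA_01
# at the interactive Export> prompt:
#   Export> kill_job            -- removes the job and its dump files
#   Export> stop_job=immediate  -- stops it but leaves it restartable
```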

90. Difference between traditional exp/imp vs datapump?


Traditional export/import is slower than Data Pump; we don't have a stop/restart option in traditional export/import; and you need to create the user before importing a schema backup with traditional import.

91. How to perform transportable tablespace export import activity?


Put the tablespace in read-only mode, take an export of the tablespace using the transport_tablespaces parameter, then put the tablespace back in read-write mode. Copy the datafiles associated with that tablespace to the other server along with the dump file, and then import using the transport_datafiles parameter.

92. What are Estimate,network_link, parallel, compression parameters?


Estimate is used to estimate the space needed for an expdp backup.
Network_link is used to import data from a remote database over a database link.
Parallel defines the maximum number of processes that actually execute the export or import job.
Compression controls compression of the dump file contents (ALL, DATA_ONLY, METADATA_ONLY, NONE).

93. How to take schema-level backup in the data pump?


Using the schemas=schema_name parameter.
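A sketch of a schema-level export (the directory object, credentials, and schema name are hypothetical):

```shell
expdp system/manager directory=DATA_PUMP_DIR schemas=hr \
      dumpfile=hr_%U.dmp logfile=hr_exp.log parallel=2
```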

94. Which parameter is used in expdp or impdp to fetch data from different remote
database. (N-CI)

network_link

95. In the database 10 years old table is there we have to export some particular
information from that table how you will export that information. (N-CI)

Using query parameter

96. For the logical backups you have taken, what was the size of the database? (N-RE)

I can't predict an exact figure, but it varies from a few GB to 300 GB.

97. What is the size of the schema which you have taken backups of? Then what is
the size of the backup? (N-RE)

I can't predict an exact figure, but it varies from a few GB to 300 GB. The size of the backup is reduced to almost 50 to 60% of the schema size.

98. What are the parameters in logical backup? (N-RE)

Dumpfile, logfile, schemas, tables, remap_schema, remap_table, network_link, table_exists_action, estimate_only, attach, parfile.

99. Have you worked with expdp & impdp? Which parameters have you come across? If two tables a and b are remapped to c and d, how will you write the remap_table parameter?

I am not sure how to remap two tables at a time, but a single table can be remapped using the remap_table parameter.

100. What are different types of backups available?

Logical, cold, hot and rman


101. On which database will you create the DB link?

On the local database.

102. What is a db link ? How to create a db link?

DB links allow a user to access the objects of another user in a remote database. To create a DB link, add the TNS entry of the remote database on the local database server and then create the link using:

create public database link link_name connect to USERNAME identified by PASSWORD using 'Service_name';

103. What is the difference between public and private db links?

If a DB link is private, only the user who created the link has access to it; if it is public, all database users have access.

104. Which parameter is used in expdp or impdp to fetch data from different
remote databases.

Network_link

Physical Backup
105. Types of backups
Hot, cold and rman

106. How to check archive log mode and how to enable archive log mode?
Using the archive log list command or the v$database view.
To enable archive log mode, start the database in the mount stage and run the alter database archivelog command.
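The steps above as a sketch:

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;              -- archivelog mode can only be changed in mount
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST;           -- verify: "Database log mode  Archive Mode"
```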

107. How will you perform a restore using RMAN backup?


Catalog the backup pieces if needed (catalog start with '/backup/location'), then run restore database; followed by recover database;

108. Can you tell me the difference between hot backup and cold backup?
What is the difference between a hot/online backup and a cold/offline
backup?
A hot backup is inconsistent whereas a cold backup is consistent.
To take a hot backup the database must be in archive log mode; we can take a cold backup in either archive log or noarchivelog mode.
A hot backup needs recovery to make it consistent; a cold backup does not.
A cold backup needs downtime whereas a hot backup does not.

109. Can we take a cold backup using RMAN?


Yes: start the database in the mount stage (without opening it) and take the backup with RMAN; since the database is not open, this is effectively a cold (consistent) backup.

110. Where the metadata / information of RMAN backup is stored.


If you do not have a recovery catalog, it is stored in the control file; if a catalog is configured, it is stored in the catalog database.

111. What is the difference between obsolete and expired backup?


OBSOLETE means the backup exists but is no longer required for a complete recovery under the retention policy.
EXPIRED means Oracle thinks it has a backup but the file is no longer available, i.e. it was deleted: someone removed the backup set or backup pieces at OS level, so the control file has the details of the backup but the file no longer exists on disk.

112. Why do we use rman backup?


RMAN gives you access to several backup and recovery techniques and features not available with user-managed backup and recovery, such as incremental backups, block-level recovery, compression of backups, maintaining backup information in the control file or in a catalog, and automatic naming of backup files.

113. We have a 7 days old backup. How do you restore it? What is the
command?
Catalog the backup pieces (catalog start with '/backup/location'), then run restore database; and recover using the archive logs up to the desired point.

114. What is your backup strategy? What is the difference between differential incremental and cumulative incremental? What is BCT in RMAN?
We use RMAN backups: on Saturday we take a level 0 / full backup, from Monday to Wednesday differential incrementals, on Thursday a cumulative incremental backup, and on Friday again an incremental backup.
With a differential incremental, RMAN backs up the data blocks changed since the last level 0 or level 1 backup, whereas with a cumulative incremental, RMAN backs up the data blocks changed since the last level 0.
BCT is block change tracking. If you enable BCT, the CTWR background process keeps track of changed data blocks and records them in the BCT file; RMAN reads that file to back up only the changed blocks.

115. When would you recommend cumulative over differential backups?
A cumulative incremental makes recovery faster, because only one level 1 backup has to be applied on top of the level 0, at the cost of larger backups. I would take differential incrementals from Monday to Wednesday and a cumulative incremental on Thursday.

116. What is the difference between Level 0 and Full backup?


The only difference between a level 0 backup and a full
backup is that a full backup is never included in an
incremental strategy. Thus, an incremental level 0 backup
is a full backup that happens to be the parent of
incremental backups whose level is greater than 0.

117. Difference between RMAN and vs data pump?


You cannot really compare RMAN and Data Pump: RMAN is used for physical backups whereas Data Pump is used for logical backups.

118. What is the RMAN command to take control file and spfile backup?
Backup current controlfile;
Backup spfile;
If you have autobackup enabled, you do not need to back up the spfile and control file explicitly.

119. In RMAN, how do we take a backup of archive logs from a particular log sequence? (N-RE)

BACKUP ARCHIVELOG FROM SEQUENCE 288 UNTIL SEQUENCE 388;

120. I want to restore yesterday's archive logs from RMAN. What is the command? (N-RE)

restore archivelog from logseq=8619 until logseq=8632 thread=1;
121. How to check RMAN backup?

Using list backup command.

122. How to delete archive logs older than 24 hours at OS level? What is the command? (N-RE)

Using the find command with -mtime +1, for example:

find /archive/location -name '*.arc' -mtime +1 -exec rm -f {} \;
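A safe way to try the pattern is against a scratch directory; the sketch below simulates an old and a new file (the directory and file names are made up for the demonstration):

```shell
ARCH_DIR=$(mktemp -d)                          # stand-in for the archive destination
touch -d "3 days ago" "$ARCH_DIR/old_1.arc"    # simulate a 3-day-old archive log
touch "$ARCH_DIR/new_1.arc"                    # a fresh file that must survive
# -mtime +1 matches files whose age, in whole 24-hour periods, is greater than 1
find "$ARCH_DIR" -type f -mtime +1 -exec rm -f {} \;
ls "$ARCH_DIR"
```

Because find feeds rm directly, double-check the path before pointing this at a real archive destination.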

123. What is the crosscheck command? (N-RE)

Crosscheck is used to determine whether backups and copies recorded in the repository still exist on disk or tape. If RMAN cannot locate the backups and copies, it updates their records to EXPIRED status.

124. What is user managed backup

Hot backup and cold backup

125. What happens when we do DML and DDL statements when the database or
tablespace is in begin backup mode, User will be requesting data at that time can
he will fetch the data or not.

Yes, users can get the data without any issue; there
is no impact on DML and DDL statements.

126. What is a fractured block in user managed backup?

A fractured block is a block that DBWR was writing at
the same moment the OS copy utility was reading it, so the
copied block contains pieces of two different versions and
its data is not consistent. The extra redo generated in
begin backup mode allows such blocks to be repaired during
recovery.

127. How will you restore from a user managed backup?

Copy the files you want to restore using the cp
command and then perform recovery if required.

128. What happens when you execute alter database begin backup?

The datafile headers are frozen, which prevents the
CKPT process from writing SCNs to the datafile headers;
the database remains available for normal processing.

129. How will you check whether your database is in begin backup mode?

Check the status column in the v$backup view; files
in backup mode show status ACTIVE.
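A sketch of the check (v$backup is a standard view; files in backup mode report STATUS = 'ACTIVE'):

```sql
-- files currently in begin backup mode
SELECT file#, status, time
FROM   v$backup
WHERE  status = 'ACTIVE';
```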
130. If the database is up and running & you lost one of the control files what
will happen.

Nothing will happen to currently running sessions or
SQL; we will only get errors while adding a new datafile
or redo log files, or while creating a tablespace.

Cloning
131. How to clone/refresh the databases or how do you restore the database
on the dev/test server? How will you clone by using RMAN

a. Check the backup on the production server and copy
the last full backup plus all incremental backups,
the controlfile backup and the archive log backups
to the dev server.
b. Set the parameter file on the dev server. We need to
set two important parameters: log_file_name_convert
and db_file_name_convert.
c. Configure listener and start database in nomount
stage.
d. Connect to the auxiliary database using rman and
then execute the duplicate command:
e. duplicate database to DEV backup location
‘backup location’
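The duplicate step can be sketched as a targetless (backup-based) duplication; the database name DEVDB and the backup path below are illustrative:

```sql
-- run on the dev server; no connection to production needed
RMAN> CONNECT AUXILIARY /

RMAN> DUPLICATE DATABASE TO DEVDB
        BACKUP LOCATION '/backup/proddb'
        NOFILENAMECHECK;
```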

132. What are the post clone steps have you performed?
a. Change sys, system and all important users password.
b. Create a password file.
c. Create temporary tablespace and add temp files.
d. Create db links ( depends on requirement)
e. Execute some project specific scripts.
f. If you have taken export of any table/schema before
clone then you can import it back.
g. Change dbid using nid utility if you have cloned an
instance from hot/cold manual backup.

133. Why do you change dbid after a hot/cold manual clone?


If the DBID is the same in multiple instances and you
try to register all those databases in a catalog database
for RMAN backup, it will create a conflict in the catalog.

134. How to clone from 19c to 12c (Cross Version Cloning) (N-FU)
As far as I know it is not possible; you cannot clone
downward from a higher version (19c) to a lower one (12c).

135. How will you restore user managed backup in another server i.e., cloning
a. Copy the hot backup to DEV server.
b. Copy the controlfile backup from source.
c. Copy the required archive logs.
d. Configure oracle home and set pfile.
e. Create a controlfile using trace file, here you have
to set Noresetlogs to resetlogs and REUSE to SET and
change the database name and path of datafile.
f. Start recovery using command recover database using
backup controlfile until cancel;
g. Apply the archive logs.

136. How will you restore/clone the database without a duplicate command?
a. Copy the Full backup, incremental backup, archive
log backup, controlfile backup from production to
target database.
b. Configure Oracle home and pfile
(log_file_name_convert and db_file_name_convert,
keep same db name as source and change
db_unique_name to Non prod)
c. Start database in nomount stage.
d. Restore controlfile using restore controlfile
command.
e. Start database in mount stage.
f. Restore database using restore database backup
location command.
g. If you want to keep the same db name then keep it;
otherwise change the db name using the nid utility.

137. How do you make a clone from an existing clone?

There is no difference; you can perform the same
steps as you normally would when cloning from a production
database.

138. Have you changed the database name?

Using the nid utility we can change the db name and
DBID.


Data Guard
139. What is dataguard?
Dataguard is a service which will help us to
configure and manage standby databases.

140. What are the parameters to configure Data Guard?

log_archive_config, log_archive_dest_1,
log_archive_dest_2, fal_client, fal_server,
standby_file_management, log_archive_dest_state_2 and
db_unique_name.

141. How to implement a standby database? Briefly explain the steps to configure a physical standby.

On the primary database, enable force logging and set
a few parameters such as log_archive_config,
log_archive_dest_1, log_archive_dest_2, fal_client,
fal_server, standby_file_management and
log_archive_dest_state_2, then take a backup of the
primary database and transfer it to the standby server.

On the standby side, configure the Oracle home,
password file, listener and tns; add the standby tns
entries to the primary and the primary tns entries to the
standby; set a different db_unique_name and start the
standby database in nomount stage.
Connect to rman and restore the standby from the
backup; once restore and recovery are complete, add
standby redo logs and start MRP.
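The parameter settings described above can be sketched as follows; the PROD/PRODSTBY names, service names and log size are illustrative:

```sql
-- on the primary
ALTER DATABASE FORCE LOGGING;
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(PROD,PRODSTBY)';
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=PRODSTBY ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=PRODSTBY';
ALTER SYSTEM SET log_archive_dest_state_2='ENABLE';
ALTER SYSTEM SET fal_server='PRODSTBY';
ALTER SYSTEM SET fal_client='PROD';
ALTER SYSTEM SET standby_file_management='AUTO';

-- on the standby, once restored: add standby redo logs and start MRP
ALTER DATABASE ADD STANDBY LOGFILE SIZE 200M;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  DISCONNECT FROM SESSION;
```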

142. What is a snapshot standby and why is it used?

A snapshot standby database is used for real-time
testing. If users want to perform real-time testing, we
convert the physical standby database into a snapshot
standby database; once their testing is over, we convert
the snapshot standby back to a physical standby.

143. Steps to convert a physical standby to a snapshot standby?

a. Check the database role and open_mode on the physical
standby.
b. Cancel the recovery using alter database recover
managed standby database cancel;
c. Create a restore point.
d. Convert the physical standby into a snapshot standby
using the command alter database convert to snapshot
standby;
e. Open the database and perform testing.
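The steps above can be sketched in SQL (the restore point name is illustrative; the conversion back to physical standby is included for completeness):

```sql
-- on the physical standby
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
CREATE RESTORE POINT before_testing GUARANTEE FLASHBACK DATABASE;
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;   -- open read/write for testing

-- after testing: flash back and resume the standby role
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```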
144. How will you resolve the archive gap on the standby database? / How
do you resolve the gap between primary and standby? / How to resolve the
archive gap? If there is a lag/gap explain Steps?

a. It depends on how many archive logs / how big the
log sequence gap is between primary and standby.
b. If only few archive logs are missing and if archive
logs are available on the primary database then copy
missing archive logs manually to standby and
register archive logs to standby using command alter
database register logfile.
c. If archive logs are not available on the primary
database then check the last log sequence number on
standby and take the incremental backup from that
scn number on primary and restore and recover the
standby database from that backup.
d. We take daily archive log backups, we can regenerate
missing archive logs on production then
automatically those archive logs will get
transferred to standby and MRP will apply those
archive logs to standby.
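Point (c) above, rolling the standby forward with an SCN-based incremental backup, can be sketched as follows (the SCN value, paths and tag are illustrative):

```sql
-- on the standby: find the SCN recovery has reached
SELECT current_scn FROM v$database;

-- on the primary: incremental backup from that SCN
RMAN> BACKUP INCREMENTAL FROM SCN 4723456 DATABASE
        FORMAT '/backup/stby_%U' TAG 'STBY_GAP';

-- on the standby: catalog the copied pieces and recover
RMAN> CATALOG START WITH '/backup/';
RMAN> RECOVER DATABASE NOREDO;
```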

145. On Primary Current 10 archive logs are missing & current log sequence
is 50 what will you do?
Regenerate the missing archive logs from the RMAN
backup; then those archive logs will automatically be
transferred to the standby and MRP will apply them.

Or else, take an incremental backup from the missing
log sequence number to the current log sequence number and
restore it to the standby database.

146. Why do we add standby redo logs to the primary?

Standby redo logs are required for the RFS process.
When you perform a DR drill / switchover activity, your
primary becomes the standby, hence we also need standby
redo logs on the primary database.

147. Explain protection modes / modes of Data Guard. (N-CI)

a. MAX PERFORMANCE - uses ASYNC redo transport; never
expects an acknowledgment from the standby database.
b. MAX PROTECTION - uses SYNC mode and LGWR; the
primary database must write redo to the standby
database, and if the standby is unavailable the
primary will shut down.
c. MAX AVAILABILITY - uses SYNC mode and LGWR; the
primary database writes redo to the standby
database, but if the standby is unavailable it
continues processing transactions as in async mode.

148. Types of standby? Types of Data Guard?

Physical standby, logical standby and snapshot
standby.

149. What is the standby_file_management parameter? Suppose I have added
a datafile to the primary but it is not getting reflected on the standby; how
will you solve this issue?
The standby_file_management parameter controls whether
datafiles added on the primary are automatically created
on the standby. If a datafile is not getting reflected on
the standby, I will check the standby_file_management
parameter; it should be set to AUTO.

150. How do you find the gap?

Using the v$archived_log and v$log_history views,
check the sequence# and applied values.
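A sketch of the queries involved (standard Data Guard views):

```sql
-- on the standby: any rows here indicate a missing log range
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;

-- last sequence received per thread
SELECT thread#, MAX(sequence#) AS last_received
FROM   v$archived_log
GROUP  BY thread#;

-- last sequence applied per thread
SELECT thread#, MAX(sequence#) AS last_applied
FROM   v$archived_log
WHERE  applied = 'YES'
GROUP  BY thread#;
```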

151. Where do you diagnose if there are issues on the standby?

Using v$archived_log and v$log_history we get to know
the gap between the primary and standby databases. First I
will check the log_archive_dest_state_2 parameter value,
whether it is enable or defer; then I will cross-verify
all parameters like log_archive_config,
log_archive_dest_1 and dest_2, and
standby_file_management. If all is good, I will check the
listener and tns, verify the connectivity between primary
and standby, and take action accordingly.

152. Difference between switchover and failover?

Switchover means reversing the roles between primary
and standby: the primary becomes the standby and the
standby becomes the primary.
Failover means that when the primary goes down and
there is no possibility of starting the primary database,
we make the standby database the primary.

153. What is a switchover activity? How will you do it in normal SQL mode,
not DGMGRL? What are the pre-activity checks before a switchover?

Switchover means reversing the roles between primary
and standby: the primary becomes the standby and the
standby becomes the primary.

Check the log sequence number on the standby, check
the database role and switchover status, and verify that
your standby is ready to switch over. Stop the
dbms_scheduler jobs on the primary, and check invalid
objects and compile them on the primary. Then initiate the
switchover on the primary database using the command alter
database commit to switchover to standby;

154. Difference between async and sync?

Async mode never expects an acknowledgment from the
standby database; the primary does not wait for the
standby.

Sync mode - the primary database must write redo to
the standby database; in maximum protection mode, if the
standby is unavailable the primary will shut down.

155. What is the difference between AFFIRM and NOAFFIRM?

With AFFIRM, the redo transport destination
acknowledges redo only after it has been written to the
standby redo log on disk; with NOAFFIRM it acknowledges as
soon as the redo is received, without waiting for the disk
write.

156. Explain the background processes of standby.

LNS - captures redo entries from the log buffer cache
or online redo logs and ships them to the standby.
RFS - receives redo entries from LNS and writes them
into the standby redo logs.
MRP - reads redo entries from the standby redo logs
or from archive logs and applies them to the standby
database.

157. When do you rebuild the standby?

When the standby goes out of sync we have to rebuild
the standby.
158. Explain commands for Switchover / Failover?
alter database commit to switchover to standby;
alter database commit to switchover to primary;
alter database activate standby database;

159. How many Standbys are You Using in your current company?
4

160. Difference between physical standby and logical standby?

A physical standby captures and applies redo entries,
whereas a logical standby converts redo entries into SQL
and then applies that SQL to the standby.

161. How to stop the RFS process? (N-CI)

Set the log_archive_dest_state_2 parameter to defer
on the primary.

162. If MRP processes stop automatically and frequently, what would be the issue? (N-CI)

This happens mainly because the initialization
parameter STANDBY_FILE_MANAGEMENT is set to MANUAL; when
you add a datafile on the primary database, MRP on the
physical standby database might terminate.

Patching
163. What is a patch? Tell me about patching. (N-CI)
Oracle regularly makes patches available to upgrade
features, enhance security, and fix bugs in supported
software.

164. Why do we apply the patches and what is the process?

To fix bugs, enhance security, and upgrade features.

165. Which utility will you use to apply the patch?

opatch

166. What is a patching strategy and what are the steps to apply the patches?

We follow an N-2 policy to apply patches on
production databases.
1. Check if the patch is already applied to the
oracle_home using the opatch lsinventory command.
2. Check inventory.xml under oraInventory to confirm the
oracle_home is registered in oraInventory.
3. Check the opatch utility version; if it is lower than
required, upgrade the opatch utility (patch number
6880880).
4. Stop the database and listener services.
5. Apply the patch using the opatch apply command.
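Assuming a patch staged under an illustrative directory, the command flow looks roughly like this (the patch number and paths are made up):

```shell
# use the OPatch shipped in the Oracle home
export PATH=$ORACLE_HOME/OPatch:$PATH
opatch version                     # verify OPatch is recent enough
opatch lsinventory | grep 345678   # is the patch already applied?

# stop database and listener, then apply from the unzipped patch dir
cd /stage/345678
opatch apply

# after restarting the database, record the patch in the registry
datapatch -verbose
```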
167. What is datapatch? If the database is down, how will you run it?

After applying patches to the Oracle home we have to
register the patch details in the database registry using
datapatch. If the database is down, we have to start it
first and then run datapatch.

168. How to check if a patch is already applied or not?

Opatch lsinventory

169. What happens when you apply the patch using the opatch apply command?

The opatch utility takes backups of the files which
are going to be changed by the patch and stores them in
the .patch_storage directory under $ORACLE_HOME.

170. What is .patch_storage?

It is a hidden directory where opatch takes a backup
before applying the patch.

171. What happens if the .patch_storage directory is missing?

You may face difficulties while rolling back the
patch.

172. What is the difference between a CPU patch and a PSU patch?

A CPU patch contains only security-related fixes,
whereas a PSU patch contains the CPU fixes plus bug fixes.

173. The patch is running slow. What will you check? What could be the reasons?

Before applying patches, identify the inactive
patches using the opatch util listorderedinactivepatches
command, keep the few latest inactive patches, and delete
the rest using the opatch util deleteinactivepatches
command, which will help you reduce the patching time.
174. What is the difference between PSU patches and RU patches?

A Patch Set Update (PSU) usually contains security
fixes and regression (bug) fixes, whereas a Release Update
(RU) is a superset of a PSU: RU patches contain the PSU
content as well as optimizer fixes and functional fixes.

175. When does Oracle release PSU/security patches?

Oracle releases the patches quarterly (January,
April, July, October).

176. How do you apply the patch in a Data Guard environment?

We first apply the patch on the standby and then on
the primary: stop MRP, shut down the standby database and
listener, apply the patch there, and then apply the patch
on the primary.

Upgradation
177. How to upgrade the database from 12c to 19c - steps?

An upgrade is mainly divided into three parts:
pre-upgrade steps, the upgrade itself, and post-upgrade
steps. In the pre-upgrade steps we prepare the database
for upgrade: we purge the recycle bin, compile invalid
objects, gather dictionary stats, run the dbupgdiag.sql
script, and run the preupgrade.jar script, which creates
two scripts, preupgrade_fixup.sql and
post_upgrade_fixup.sql. We check the size of the system
and sysaux tablespaces and add enough space, stop
dbms_scheduler jobs, take a cold backup, install the
Oracle 19c binaries, and apply the latest security
patches. Then we perform the upgrade using DBUA; the
upgrade itself takes care of compiling invalid objects,
the time zone upgrade and so on. As post-upgrade steps we
execute post_upgrade_fixup.sql, configure the listener and
tns, and create a new password file.

178. What happens when you execute the preupgrade.jar file? What is the
location of the preupgrade.jar file?
The preupgrade.jar file creates two scripts,
preupgrade_fixup.sql and post_upgrade_fixup.sql. We
execute the preupgrade script prior to starting the
database upgrade and the post-upgrade script after the
database upgrade.
The preupgrade.jar file is under $ORACLE_HOME/rdbms/admin.

179. What are the pre steps you performed before upgrading?
In the pre-upgrade steps we prepare the database for
upgrade: purge the recycle bin, compile invalid objects,
gather dictionary stats, run the dbupgdiag.sql script, run
the preupgrade.jar script (which creates the
preupgrade_fixup.sql and post_upgrade_fixup.sql scripts),
check the size of the system and sysaux tablespaces and
add enough space, stop dbms_scheduler jobs, take a cold
backup, install the Oracle 19c binaries, and apply the
latest security patches before performing the upgrade
using DBUA.

180. What is a time zone?

Time zone support lets the database store a time and
display it according to each user's time zone. E.g. if a
time is stored as 7 AM on a server in PST, it can be
displayed to a user on the East Coast as 10 AM EST,
because of the three-hour difference between the two
locations.

181. Suppose the database upgrade fails. What will you do?

I will check the database upgrade logs to see why it
failed and take action accordingly. If the upgrade fails
during the upgrade stage, we will try to fix the issue or
restore the database from the backup / restore point. If
the upgrade fails in the post-upgrade steps, we will check
the logs and try to fix it. If we are not able to fix the
issue, we will upload all the required logs to Oracle
Support and work with them to complete the upgrade.

182. What is a restore point? Which restore point will you use during an
upgrade? What is the difference between a normal restore point and a
guaranteed restore point?
We create a restore point so we can restore the
database to that point in time. During a database upgrade
or switchover we create a restore point to protect against
failures.

We use a guaranteed restore point during a database
upgrade.
A normal restore point ages out after the restore
point retention period, whereas a guaranteed restore point
does not age out; we have to drop it manually.

Performance Tuning
183. Have you worked on performance issues? Tell me any five wait events.
Yes, I have sometimes worked on the PT part but I do
not have deep expertise in PT. Coming to wait events:
buffer busy waits, free buffer waits, library cache lock,
log buffer space, db file sequential read.

184. How will you find active sessions?

Using v$session, check for status = 'ACTIVE'.

185. How will you find which session is consuming more temp space?
Using view v$tempseg_usage and v$sort_usage
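A sketch of a query along these lines, joining v$tempseg_usage to v$session to see per-session temp usage (the MB arithmetic assumes the block size from dba_tablespaces):

```sql
-- temp usage per session; blocks * block size gives bytes
SELECT s.sid, s.serial#, s.username, u.tablespace,
       u.blocks * t.block_size / 1024 / 1024 AS used_mb
FROM   v$tempseg_usage u
JOIN   v$session s       ON s.saddr = u.session_addr
JOIN   dba_tablespaces t ON t.tablespace_name = u.tablespace
ORDER  BY used_mb DESC;
```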

186. SQL query is running slow. What will you do?One session is running
well before 5 days. Currently it is performing slowly. What is your approach?
SQL query taking time to execute- how will you resolve the issue?
a. I will login to the database server and check load
and memory usages on the server using top, free -m,
vmstat, iostat commands. If load is high then I will
identify the process ID which is consuming more
resources and will try to troubleshoot on those
processes or sessions.
b. I will check the disk space if any disk or
archivelog destination or FRA is full then
accordingly I will take action, If FRA or archive
log destination is full then I will clean up old
files
c. If load is normal then check the alert log of the
database if we notice any error in alert log then we
will fix that.
d. I will go ahead and check if any blocking locks or
locks on objects or any long running inactive
sessions, if there are any then I will clean it.
e. I will check the stale stats on some important
schema/table, if gather is stale then I will run the
gather stats.
f. If any SQL is in the top sessions and consuming more
resources, then we will try to tune that SQL by
checking the explain plan (whether it is using the
correct execution plan, the cost of the SQL, full
table scans) and recommendations from the tuning
advisor, such as creating an index or a profile.

187. How do you get the alert log and trace file location?

Check the value of the diagnostic_dest parameter.
Under that location there is a directory
diag/rdbms/<dbname>/<instance_name>/trace.

188. What do you check in alert logs? How do you resolve it?
It depends on the error. Normally, whenever any
tablespace is full, the archive log destination or FRA is
full, or any datafile is missing, errors are written to
the alert log, along with database startup and shutdown
messages; depending on the error we take action. E.g. if a
tablespace is full we get an error like "unable to extend
tablespace" and we add space to the tablespace; if the
archive log destination is full the database will hang
with an archival error and we clean up some old archive
logs.

189. Difference between AWR and ASH reports? AWR/ASH/ADDM?

An AWR report holds historic snapshot-interval and
session information for further analysis; it contains wait
events, SQL statistics, object statistics, time model
data, ASH data, and high-load SQL statements.

An ADDM report analyzes the AWR data on a regular
basis to give you the root cause of the problem affecting
your database's performance. It also provides suggestions
or recommendations for rectifying any problem identified
and lists the areas which have no issues.

ASH can help when a sudden performance degradation of
the database is felt. An ASH report contains top events,
load profile, top SQL, top PL/SQL, and top sessions.

190. Your database is open. You don’t want to interrupt currently connected
users but you want to temporarily disable further logons. What would you do
to achieve this, and how would you revert the database back to its normal state
after that?
Stop the listener; existing sessions continue while
new remote logons are blocked. Start the listener again to
revert.
191. Temp/Undo tablespace utilisation is 100% full. What will you do?
What will you do if the temp/Undo tablespace is full?
First, I will try to identify whether the issue is
permanent or intermittent. If it is intermittent, I will
check which session is consuming the temp tablespace, then
check the cost and explain plan for that session - whether
the cost is too high or the session is doing a full table
scan. I will check whether the session is running with a
good plan or the worst plan; if a better plan is available
I will try to pin the best plan, and if only one plan
exists I will run the tuning advisor on it and implement a
solution according to its recommendations.
If the cost and plan are good and the issue happens
frequently for many sessions, then I will check the AWR
report and add tempfiles / undo datafiles to the
temporary/undo tablespace.

192. If the database is up and running and you lose one of the control files,
what will happen? (N-CI)

Nothing will happen to the database and currently
connected users; new users can also connect to the
database and perform their operations. We will get errors
while creating a new tablespace, adding datafiles to a
tablespace, or creating or adding redo logs.

To fix this issue, shut down the database, copy the
missing controlfile from an existing controlfile, and
start the database.

193. What is the ORA-600 error? (N-CI)

ORA-00600: internal error code. An ORA-600 error is
signalled when a code check fails within the database. At
points throughout the code, Oracle Database performs
checks to confirm that the information being used in
internal processing is healthy: that the variables being
used are within a valid range, that changes are being made
to a consistent structure, and that a change won’t put a
structure into an unstable state. If a check fails, Oracle
Database signals an ORA-600 error.

194. How to check blocking locks and how will you kill blocking locks?
We can check blocking locks using v$session and
v$lock; we can kill inactive blocking sessions using the
alter system kill session command.
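A sketch of the checks described above (the sid/serial# values in the kill command are illustrative):

```sql
-- sessions currently waiting on a blocker
SELECT sid, serial#, username, event, blocking_session
FROM   v$session
WHERE  blocking_session IS NOT NULL;

-- terminate the blocking session once identified
ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;
```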

195. A user is complaining that the DB is running slow. What could be the
issue? (N-FU) Or: the database is hung, and old and new user connections
alike hang on impact. What do you do? Your SYS SQLPLUS session is able to
connect.

1. I will login to the database server and check load and
   memory usage on the server using the top, free -m,
   vmstat and iostat commands. If load is high then I will
   identify the process ID which is consuming more
   resources and will try to troubleshoot those processes
   or sessions.
2. I will check the disk space if any disk or archivelog
destination or FRA is full then accordingly I will take
action, If FRA or archive log destination is full then
   I will clean up old files.
3. If load is normal then check the alert log of the
database if we notice any error in alert log then we
will fix that.
4. On database level I will go ahead and check if any
blocking locks or locks on objects or any long running
inactive sessions, if there are any then I will clean
it.
5. I will check the stale stats on some important
schema/table, if gather is stale then I will run the
gather stats.
6. I will run the AWR report during poor performance time
and I will check the AWR report for db time, wait time,
cpu time, top sql, Host CPU and Instance CPU details,
Instance efficiency, and many more for further
analysis.
7. If any sql is in top sessions and consuming more
resources then we will try to tune those sql using
checking explain plan, if it’s using correct execution
plan, cost of the sql, table full scan, recommendations
from tuning advisor like to create index or create
profiles.
8. To perform all these tasks I will use oracle tools
like OEM, Solarwind, AWR, Time model techniques,
Explain plan.

196. How to reduce hard parsing? (N-ML)

Use bind variables in application SQL (or set
cursor_sharing) and increase the size of the shared pool.
197. How to enable automatic memory management? (N-ML)
AMM is configured using two initialization
parameters: MEMORY_TARGET and MEMORY_MAX_TARGET.

198. What are the things you monitor?

We monitor database server load, memory utilisation,
disk space, ASM disk space, tablespace size, inactive
sessions, blocking locks, standby database status, cluster
service status, backup status, the logical and physical
size of databases/schemas, and export job status.

199. What is a snapshot too old error? What is the ORA-01555 error?

The root cause of ORA-01555 is that the undo image
needed to satisfy a read-consistent request is no longer
present.

200. I accidentally renamed the Oracle base/home directory. What would you do?
The Oracle base directory is the location where
Oracle software and configuration files are stored. I will
check the location and name of the Oracle home in oratab
or inventory.xml and revert the directory to its original
name.

201. What is the SCN (System Change Number)?

The system change number (SCN) is an ever-increasing
value that uniquely identifies a committed version of the
database at a point in time. Every time a user commits a
transaction, Oracle records a new SCN in the redo logs.
202. What is an index ? Have you rebuilt the index ? Why do we need to
rebuild indexes?
Indexes are used to provide quick access to rows in
a table. Indexes provide faster access to data.
We need to rebuild indexes because indexes become
fragmented over time. This causes their performance to
degrade. Hence, rebuilding indexes every now and again
can be quite beneficial.

203. How to generate an AWR report? How to generate an ASH report?

Using awrrpt.sql under $ORACLE_HOME/rdbms/admin (we
have to provide a begin snap id and an end snap id).
Using ashrpt.sql under $ORACLE_HOME/rdbms/admin (we
have to provide a time such as sysdate - 30; it will
generate a report for the last 30 minutes).

204. What will you check in the AWR report?

Check the instance details such as name and version;
DB time (= wait time + CPU time); DB CPU time (the time
taken by SQL to finish execution); logical reads; physical
reads; hard parsing; the instance efficiency percentages
(buffer nowait, redo nowait, soft parse, library hit - all
should be close to 100%); host CPU details; wait events;
and the top SQL by CPU time and execution time.

205. What is the db file sequential read wait event?

If a data block is not present in the SGA, the Oracle
process waits for the data block to be read from disk and
copied into the SGA (typically single-block reads such as
index lookups).

206. What is the log buffer space wait event?

The log buffer space wait event occurs when server
processes write redo entries into the log buffer faster
than the LGWR process can write them out. When the log
buffer does not have enough space, the server process has
to wait to write redo entries into the log buffer.

207. What is table fragmentation? How to remove fragmentation?

Table fragmentation occurs when rows are not stored
contiguously, or when rows are split between multiple
blocks.
We can remove fragmentation with a table reorg (alter
table move or shrink space), by rebuilding indexes, and
then gathering fresh stats on the table.

208. How will you decide to create an index on a table?

We check the plan of the SQL; if the SQL is doing a
full table scan and its cost is very high, we run the
tuning advisor on the SQL, and if it suggests creating an
index then we create one.

209. What kinds of errors have you noticed in the alert log file?
ORA-1652: unable to extend temp segment by 128 in
tablespace TEMP
ORA-30036: unable to extend segment by 8 in undo
tablespace
ORA-1653: unable to extend table om.emp by 128 in
tablespace users
ORA-01555: snapshot too old: rollback segment number
184 with name "_SYSSMU184_2797134679$" too small
ORA-04031: unable to allocate x bytes of shared
memory
ORA-00257: archiver error, connect internal only,
until freed.
Shutdown, startup, log switch information...

ASM
210. What is ASM? How will you configure ASM?
Automatic Storage Management (ASM) is Oracle's
logical volume manager.
It prevents fragmentation of disks and provides automatic
load balancing over all the available disks.

To configure ASM:
1. Install all ASM prerequisite packages, configure
ASM using the oracleasm configure -i command, and
initialize oracleasm.
2. Once you get raw disks from the storage team,
format the disks using the fdisk command.
3. Create ASM disks using oracleasm createdisk
DISK_NAME /dev/sdb1.
4. As the oracle or grid user, install the grid
software and create a diskgroup.
5. Once Oracle ASM is configured, create a
database.

211. How to add/drop a disk to/from a diskgroup?

alter diskgroup DATA add disk
'/dev/oracleasm/disks/ASMDISK5' name DATA_ASMDISK5
rebalance power 5;

alter diskgroup DATA drop disk DATA_ASMDISK5
rebalance power 5;

212. When you drop a disk from a diskgroup, what will be the status of that
dropped disk?
FORMER, which means the disk can be reused.
213. What are the states of ASM disks?
PROVISIONED - the disk is not part of a disk group
and may be added to a disk group with the ALTER DISKGROUP
statement.

MEMBER - already a member of a diskgroup.

FORMER - once used; can be reused.

214. What is rebalancing in ASM?

Rebalancing is the process by which, when we add or
remove a disk from a diskgroup, data is moved between the
existing disks and the new disk. RBAL is the background
process which generates the rebalancing plan, and the ARBn
background processes carry out the rebalancing.

215. How to copy files from the file system to an ASM disk?

We can copy files between the file system and ASM
diskgroups using the asmcmd cp command (or
DBMS_FILE_TRANSFER); we cannot write to the raw ASM disks
directly.

216. What is the allocation unit in ASM?

The allocation unit (AU) is the fundamental unit of
space allocation within an ASM diskgroup; its size is
defined when the diskgroup is created, for example with
the asmca utility.

217. How to create a diskgroup?

create diskgroup diskgroup_name external redundancy
disk 'disk_name' name DATA_001;
If you are creating a diskgroup with normal or high
redundancy then you have to mention failure groups:

create diskgroup FRA normal redundancy failgroup
FRAGRP1 disk '/dev/oracleasm/disks/ASMDISK2' name
ASMDISK2 failgroup FRAGRP2 disk
'/dev/oracleasm/disks/ASMDISK3' name ASMDISK3;

Or use the ASMCA utility.

218. How to drop a diskgroup? How to drop a disk from a diskgroup?

DROP DISKGROUP DISKGROUP_NAME INCLUDING CONTENTS;
alter diskgroup DATA drop disk DATA_ASMDISK5
rebalance power 5;

219. What are the ASM background processes?

RBAL - runs in both database and ASM instances. In
the database instance it opens ASM disks; in the ASM
instance it provides the rebalancing plan to ARBn.
ARBn - moves the extents during rebalancing. There
can be many of these processes running at a time, named
ARB0, ARB1, and so on.

ASMB - runs in both database and ASM instances. In
the database instance, ASMB communicates with the ASM
instance, managing storage and providing statistics. ASMB
runs in ASM instances when the ASMCMD cp command runs.

GMON - maintains disk membership in ASM disk groups.

MARK - marks ASM allocation units as stale following
a missed write to an offline disk.

220. What is the redundancy level? What are those?

Mirroring protects data by storing copies of data on multiple disks. There are four redundancy levels: NORMAL, HIGH, FLEX and EXTERNAL.
NORMAL – 2-way mirroring.
HIGH – 3-way mirroring.
FLEX – the redundancy can be set per file within the disk group.
EXTERNAL – no Oracle ASM mirroring; used when redundancy is provided elsewhere, such as hardware RAID.
221. What is a failure group? What is the use of a failure group?

A failure group is a set of disks that share a common point of failure (for example, the same controller). ASM places mirror copies of each extent in different failure groups, so losing one failure group does not lose all copies of the data.
222. What is ASM_POWER_LIMIT? What is the default memory allocation for ASM?

ASM_POWER_LIMIT – the maximum power for a rebalancing operation on an ASM instance; the default is 1, and higher values make rebalancing faster at the cost of more I/O.
ASM_DISKGROUPS – the list of disk groups to be mounted by an ASM instance during instance startup.
ASM_DISKSTRING – limits the set of disks considered for discovery.
The ASM instance itself needs relatively little memory; its MEMORY_TARGET defaults to a small value that is usually left unchanged.
223. How will you monitor ASM disks/diskgroups/rebalancing?

Using the views V$ASM_DISKGROUP, V$ASM_DISK and V$ASM_OPERATION (or their GV$ equivalents in RAC).
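For example, a sketch of a free-space check against these views:

```sql
-- Free vs total space per disk group, in GB.
SELECT name,
       ROUND(total_mb / 1024, 1) AS total_gb,
       ROUND(free_mb  / 1024, 1) AS free_gb
FROM   v$asm_diskgroup;
```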
224. Adding a datafile in ASM. (N-RE)

ALTER TABLESPACE tablespace_name ADD DATAFILE;
(this short form requires DB_CREATE_FILE_DEST to be set, i.e. Oracle Managed Files)

Or:

ALTER TABLESPACE DATA_TBS ADD DATAFILE '+DATA' SIZE 1G AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
225. How to copy files from ASM to the file system?

Connect to ASM using asmcmd and copy the files with the cp command.
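A sketch of both directions with asmcmd (the paths and file names are hypothetical):

```shell
# ASM to file system
asmcmd cp +DATA/ORCL/DATAFILE/users.259.1234567890 /backup/users01.dbf

# File system to ASM
asmcmd cp /backup/users01.dbf +DATA/ORCL/DATAFILE/users01.dbf
```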
226. How to copy RMAN backups from an ASM diskgroup to another server?
We cannot copy files directly from ASM disks to another server; first copy them to the local file system (with asmcmd cp) and then scp them to the other server.
RAC
227. What is the difference between RAC and Single Instance?
RAC has multiple instances, one per node, serving a single shared database; a cluster is a combination of two or more physical servers, whereas a standalone/single-instance setup has one instance on one server.
228. What are the benefits of RAC?

High availability, scalability and load balancing.
229. How to identify whether our database is RAC or standalone. (N-RE)

SHOW PARAMETER cluster_database – if the value is TRUE, the database is RAC.
230. RAC implementation steps?

The storage/Linux team provides servers with the network IPs: public, private, virtual and SCAN. Set the hostname on all nodes and add the IP details to /etc/hosts on every node. Install the prerequisite packages, set the OS user passwords and cross-verify that all required groups exist. Configure passwordless SSH authentication using ssh-keygen, install all required RPMs and set the oraInventory location in oraInst.loc. Configure ASM and run runcluvfy.sh in pre-install stage mode to check that all nodes are ready for the cluster installation; fix any warnings or errors it reports. Once all issues are fixed, install the clusterware using gridSetup.sh, providing the SCAN details, node details and ASM disk group details.
231. How many IPs are required to configure a 4-node RAC?

4 private + 4 public + 4 virtual + 3 SCAN = 15 IPs. (There are always 3 SCAN IPs, regardless of the number of nodes.)
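The arithmetic generalises: each node needs one public, one private and one virtual IP, plus three SCAN IPs for the whole cluster. A tiny shell sketch (the function name is my own):

```shell
# IPs needed for an n-node RAC: public + private + VIP per node, plus 3 SCAN IPs.
rac_ips_needed() {
  local nodes=$1
  echo $(( 3 * nodes + 3 ))
}

rac_ips_needed 4   # prints 15
rac_ips_needed 2   # prints 9
```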
232. SCAN listener, public IP, VIP, private IP usages?
Private IP – used for node-to-node (interconnect) communication.
Public IP – used to connect to cluster nodes from the public network.
VIP – a virtual IP that can float between nodes; if one node goes down, its VIP fails over to another available node in the cluster.
SCAN IP – used for client-side configuration; clients use the SCAN IPs to connect to the database.
233. What is the use of crsctl, srvctl, ocrconfig?

crsctl – manages the cluster itself, e.g. start/stop of cluster services.
srvctl – manages resources in the cluster such as databases, listeners and services.
ocrconfig – takes backups of the OCR and restores it.
234. Purpose of the SCAN listener? Why don't we use VIPs instead of SCAN?
With SCAN there is no need to update TNS details on the client side. Suppose your database runs on a 2-node RAC cluster and, after some time, your organisation decides to add a 3rd node: with VIP-based connect strings you would have to add the 3rd VIP everywhere TNS entries exist and reconfigure the entire application, which again needs downtime. The same applies when removing nodes. To overcome this problem, SCAN was introduced.
Clients connect to the SCAN, and the SCAN passes connections on to node 1, 2, 3, 4..., so the client and application teams never need to add individual VIPs to their TNS entries.
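A sketch of a client-side tnsnames.ora entry using SCAN (the host and service names are hypothetical); it stays valid no matter how many nodes are added or removed:

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myrac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
```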
235. What is cache fusion in Oracle RAC? What are its benefits? Explain GCS, GES and GRD in depth.
Moving a data block from the buffer cache of one instance to another instance's buffer cache over the interconnect is called cache fusion. If one instance has read a data block from disk and another instance needs the same block, the block is transferred directly to the other instance's buffer cache instead of being re-read from disk.

Global Resource Directory (GRD): keeps track of the status and location of changed blocks; the GRD is distributed across all instances of the cluster.
Global Cache Services (GCS): responsible for transferring blocks from one instance to another. LMS is the GCS process; it used to be called the Lock Manager Server.
Global Enqueue Services (GES): responsible for managing locks across the cluster.
236. Node eviction reasons?

There are multiple reasons to evict a node: cluster nodes not responding, a node losing network connectivity, a node crashing frequently, or a node becoming unresponsive due to high load.
237. What is the network heartbeat?

It is node-to-node communication over the private interconnect: node 1 pings node 2 and node 2 responds to node 1, and vice versa, effectively asking "I am available as part of the cluster network, are you?". If a node does not respond within the specified time, it is evicted from the cluster.
238. What is a disk heartbeat?

At regular intervals, each node writes a timestamped record to the voting disk, registering that it is alive and part of the cluster. If a node cannot communicate with the voting disk within the allowed time, it is evicted from the cluster.
239. On what basis will the node be evicted?

Based on split-brain resolution, with the help of the voting disk.
240. How will you know if a node is evicted from the cluster?
The node eviction is reported in the alert log (for example as ORA-29740) and in the LMON trace file. To determine the root cause, we analyse the alert logs and trace files carefully.
241. How do you apply a patch in a RAC cluster?

We apply the patch in a rolling fashion, one node at a time, using the opatchauto utility.
Pre-checks: update the OPatch utility version on all nodes under $GRID_HOME and $ORACLE_HOME, check for conflicts, analyse the patch (opatchauto apply $PATCH_NUMBER -analyze), and check system space ($ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace). Then apply the patch: opatchauto stops the cluster and database services on the node, applies the patch, and restarts the services on that node.
242. OCR, VD, OLR; what is the GPnP profile?

VD – the voting disk is used to maintain node membership; the CSSD process updates it.
OCR – stores and maintains resource information such as databases, listeners, virtual IP addresses, services and any registered applications. ocrconfig is the utility to back up and restore the OCR.
OLR – essentially a local copy of the OCR on each node; it stores and maintains the information for the resources running on that node.
GPnP profile – stored under $GRID_HOME/gpnp/<hostname>/profiles/peer/profile.xml, it contains the cluster name, hostname, network profiles with IP addresses, and the OCR and voting-disk locations. If we make any modifications to the voting disk, the profile is updated.

Note – we store the OCR and voting disk in ASM, but the clusterware needs them to start the CRSD and CSSD processes. Since ASM is itself a cluster resource, CRSD and CSSD would need the OCR and voting files before ASM starts, which raises the question "how will the clusterware start?". To resolve this, Oracle introduced two node-specific files, the OLR and the GPnP profile; each node of the cluster maintains a local copy, maintained by the GPnP daemon along with the mDNS daemon.
243. RAC daemons; what are the roles of CRSD, CSSD, CTSSD, EVMD and GPNPD?

CRSD – manages the cluster resources: it monitors, starts, stops and automatically restarts services, and maintains resource information in the OCR.
CSSD – maintains consistency and membership between nodes; CSSD writes node details to the voting disk.
CTSSD – performs time synchronisation between nodes.
EVMD – a background process that publishes the events that Oracle Clusterware creates.
GPNPD – maintains the GPnP profiles.
244. What is the concept of split-brain syndrome?

Split-brain occurs when node 1 cannot communicate with node 2 and node 2 cannot communicate with node 1; each would then act as an independent cluster, so the clusterware must evict one of them. The decision is made using the voting disk: whichever node has registered the majority of votes in the voting disk is retained, and the node with fewer votes registered is evicted from the clusterware.
245. What is a service in RAC? How to configure a service? Why do we need services?
Services let you direct workload to specific instances. Suppose we have a 4-node RAC setup and want the reports/RMAN backup to run only on the 3rd and 4th nodes: we create a service with those instances as preferred nodes and run the workload through it.

srvctl add service -db db_name -service service_name -preferred inst3,inst4 -available inst1,inst2

-preferred (-r in the older syntax) stands for the preferred instances on which the job will run.
-available (-a in the older syntax) stands for the available instances; if a preferred instance is not available, the job runs on an available one.
246. What do you understand by rolling upgrades?

In a rolling upgrade, the software is upgraded or patched one node at a time while the remaining nodes keep servicing the workload, so the database stays available throughout (similar to the rolling patching done with opatchauto).
247. What are the RAC-related background processes?
LMON (Global Enqueue Service Monitor) – manages global enqueues and resources. LMON detects instance transitions and performs reconfiguration of GES and GCS resources; it also does the job of dynamic remastering.

LMD (Global Enqueue Service daemon) – manages global enqueues and global resource access; the LMD process also handles deadlock detection and remote enqueue requests.

LCK0 (Instance Lock Manager) – manages non-cache-fusion resource requests such as library cache and row cache requests.

LMS (Global Cache Service process) – its primary job is to transport blocks across the nodes for cache-fusion requests. GCS_SERVER_PROCESSES is the init.ora parameter that sets the number of LMS processes; increase it if global cache activity is very high.

ACMS (Atomic Controlfile to Memory Service) – ensures a distributed SGA memory update is either globally committed on success or globally aborted if a failure occurs.

RMSn (Oracle RAC Management processes) – help with tasks such as creating services when a new instance is added.

LMHB (Global Cache/Enqueue Service Heartbeat Monitor) – monitors the heartbeat of the LMON, LMD and LMSn processes to ensure they are running normally without blocking or spinning.
248. What is TAF?

TAF stands for Transparent Application Failover. When a RAC node goes down, in-flight SELECT statements fail over to a surviving node. INSERT, DELETE, UPDATE and ALTER SESSION statements are not supported by TAF, and temporary objects and PL/SQL package states are lost during the failover.

There are two failover methods used in TAF:

BASIC – the session connects to the backup node only at failover time, so there is no standing overhead, but the end user experiences a delay while the connection is re-established.

PRECONNECT – a session is established to both the primary and the backup node at connect time. This offers faster failover, at the cost of the overhead of maintaining the extra connection.
249. While a SELECT statement is running, the node on which it is running crashes?
With TAF configured, the SELECT statement transparently fails over to another node, is completed there, and the results are fetched.
250. Explain the RAC startup sequence?

There are two stacks: the lower stack, in which OHASD starts, and the upper stack, in which the CRSD process starts. There are 4 levels in the startup sequence:
Level 1 – OHASD starts cssdagent, cssdmonitor, oraagent and orarootagent.
Level 2 – cssdagent starts the CSSD daemon; orarootagent starts CRSD, CTSSD, diskmon and ACFS; oraagent starts Oracle ASM, GPNPD, GIPCD and EVMD.
Level 3 – CRSD (from level 2) spawns its own oraagent and orarootagent.
Level 4 – orarootagent spawns root-owned resources such as the network, SCAN VIPs, node VIPs and GNS; oraagent spawns oracle-owned resources such as the database, listener, SCAN listeners and ASM.
251. What does the OHASD process look for to start?
OLR.

252. What does CSSD look for to start?

Voting disk and GPnP profile.

253. What does CRSD look for to start?

OCR and GPnP profile.

254. What does the GPNPD process look for to start?

Name resolution (nslookup).
255. What is the relation between SCAN IP and listener? (N-ML)

The SCAN listeners run on the SCAN IPs. Each instance's local listener registers with the SCAN listeners (via the REMOTE_LISTENER parameter), so a SCAN listener knows the load on every node; when a client connects to the SCAN, the SCAN listener redirects the connection to the local (VIP) listener of the least-loaded node.
256. Tell me any 5 views in RAC?

gv$session, gv$instance, gv$database, gv$lock, gv$sql.
257. What is the difference between gv$session and v$session?

v$session shows the session details of only the current instance, whereas gv$session shows the session details of all instances, with an INST_ID column identifying the node.
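A sketch of how the extra INST_ID column is typically used (run from any node):

```sql
-- Count user sessions per RAC instance.
SELECT inst_id, COUNT(*) AS sessions
FROM   gv$session
WHERE  type = 'USER'
GROUP  BY inst_id
ORDER  BY inst_id;
```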
Multitenant Architecture
258. Do you know about multitenant architecture? What is CDB/PDB?
A CDB (container database) is a single database that can hold many PDBs (pluggable databases). The CDB owns the instance, background processes, undo and redo, while each PDB is a self-contained set of schemas and tablespaces that appears to applications as an ordinary database and can be unplugged from one CDB and plugged into another.
259. Features 10g vs 11g vs 12c vs 18c vs 19c vs 21c?

Other
260. How would you edit your CRONTAB to schedule the running of
/test/test.sh to run every other day at 2 PM?
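A sketch of the crontab entry (added with crontab -e); note that */2 in the day-of-month field matches odd days, so at a month boundary two consecutive days can both run:

```
# minute hour day-of-month month day-of-week command
0 14 */2 * * /test/test.sh
```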
261. Have you raised any SR or Do you know about support?
262. What is an undo retention policy? How do you estimate the undo
retention policy?

263. What is database incarnation? What happens when the database goes
into a new incarnation? What happens when you run ALTER DATABASE
OPEN RESETLOGS?

The current online redo logs are archived, the log sequence number is reset to 1, a new database incarnation is created, and the online redo logs are given a new timestamp and SCN.
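The incarnation history can be checked, for example, from RMAN or the data dictionary:

```
RMAN> LIST INCARNATION;

SQL> SELECT incarnation#, resetlogs_change#, resetlogs_time, status
     FROM   v$database_incarnation;
```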
TCS

264. Backup fails on a particular day? What could be the reason? What will
you do?
In most cases the backup fails due to a space issue; we can delete old/obsolete backups and rerun the backup during off hours. The RMAN log tells us the exact error.

265. A tablespace is 100% full. What will you do?

We add a datafile to the tablespace or resize an existing datafile.

266. What will you check after adding datafiles?

We check the free space of the tablespace and the size of the datafile; if it is filling up again immediately, we add a further datafile.

267. Can we take a backup using expdp in the mount stage?

No, we cannot take a logical backup in the mount stage. expdp backs up schemas, tables and tablespaces, and these are not accessible until the database is open.
268. How to start a cluster?
crsctl start crs (run as root on each node), or crsctl start has on an Oracle Restart (standalone grid) setup.
269. What is OCR and OLR?
Both maintain resource information. The OCR is common to all nodes and stored in ASM, while the OLR is a local copy of the OCR stored on each particular node.

270. What is GPNP?

GPNPD stands for Grid Plug aNd Play Daemon. A file located at $GRID_HOME/gpnp/<node_name>/profiles/peer/profile.xml is known as the GPnP profile. This profile contains the cluster name, hostname, network profiles with IP addresses and the OCR/voting-disk locations. If we make any modifications to the voting disk, the profile is updated.

271. If the OLR is missing or not present, what will happen?

OHASD will not start; OHASD needs the OLR to start.
272. Have you applied RAC patches? What issues have you faced in RAC so far?
Normally, if the environment is patched regularly, we don't hit many issues. Recently, though, we faced one: we applied the 19.22 (Jan 2024 security) patch on the Grid home, but the client intentionally asked to keep the database on 19.21, so we didn't patch the database home. Some days later we noticed that one database instance kept going down, so we raised an SR with Oracle Support. Oracle confirmed this was a bug and suggested applying the 19.22 patch on the database home as well; after patching the database, the issue was resolved.

273. How will you get to know if an instance is down?

We have alerts configured, so we receive an alert when an instance goes down. We then log in to the node, check the alert log (and cluster logs if needed) and start the instance.