Oracle DBA Interview Questions and Answers
Oracle Database uses a combination of memory areas and background processes to perform its tasks efficiently. The
architecture is broadly split into two parts: memory structures and processes.
SGA (System Global Area) is shared memory accessible by all sessions. It's used to cache data and reduce disk I/O:
• Shared Pool: Caches SQL statements, PL/SQL code, and data dictionary information.
• Database Buffer Cache: Holds copies of data blocks read from disk. Speeds up reads and writes.
• Redo Log Buffer: Stores change data (redo entries) temporarily before it's written to redo log files.
• Large Pool, Java Pool, Streams Pool: Used for RMAN, Java, or stream operations.
PGA (Program Global Area) is private memory allocated to each server or background process. It holds session-specific
data like sort areas, hash joins, and connection state.
These work together to manage data access, query processing, and ensure high performance and recoverability.
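The split described above can be inspected directly from the dynamic performance views. A minimal sketch (exact component names vary by version and configuration):

```sql
-- SGA components and their current sizes
SELECT name, bytes/1024/1024 AS size_mb
FROM   v$sgainfo
ORDER  BY bytes DESC;

-- Aggregate PGA usage across all sessions
SELECT name, value/1024/1024 AS value_mb
FROM   v$pgastat
WHERE  name IN ('total PGA allocated', 'aggregate PGA target parameter');
```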
100 Most Frequently Asked Oracle DBA Interview Questions for L2/L3 Roles -BRIJESH MEHRA
When you start an Oracle database, it goes through three main stages: NOMOUNT, MOUNT, and OPEN.
• In NOMOUNT: Oracle starts the instance by allocating memory (SGA) and starting background processes. It
reads the initialization parameter file (spfile or pfile). No access to database files yet.
• In MOUNT: Oracle reads the control file, which contains information about datafiles and redo logs. The instance
is associated with the database but it’s not available for users.
• In OPEN: Oracle opens datafiles and redo logs, checks their consistency, and makes the database available for
users. Read/write operations can now occur.
• (By contrast, SHUTDOWN ABORT terminates the instance immediately without cleanup, so instance recovery is required at the next startup.)
This process ensures controlled access and safe recovery in case of failures.
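The three stages can also be walked through manually from SQL*Plus as SYSDBA:

```sql
-- Stage 1: start the instance only (reads spfile/pfile, allocates SGA, starts processes)
STARTUP NOMOUNT;

-- Stage 2: read the control file and associate the instance with the database
ALTER DATABASE MOUNT;

-- Stage 3: open datafiles and redo logs for user access
ALTER DATABASE OPEN;
```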
Tablespaces are logical containers that group datafiles. They help manage space and organize data.
Permanent Tablespaces:
• SYSTEM and SYSAUX: Mandatory tablespaces that hold the data dictionary and auxiliary metadata.
• USERS: Default tablespace for user-created objects like tables and indexes.
• You can create custom tablespaces to separate data logically (e.g., SALES_TBS).
Temporary Tablespace:
• Used for sorting, hashing, and other temp operations during query execution.
• TEMP tablespace stores data that doesn’t need to persist after a session ends.
UNDO Tablespace:
• Stores before-images of changed data, used to roll back transactions, provide read consistency, and support flashback features.
Tablespaces allow Oracle DBAs to manage storage and performance efficiently. Each tablespace consists of one or more
datafiles on disk.
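Creating a custom tablespace like the SALES_TBS mentioned above is a single statement (file path, sizes, and the user name are illustrative):

```sql
CREATE TABLESPACE sales_tbs
  DATAFILE '/u01/oradata/ORCL/sales_tbs01.dbf' SIZE 500M
  AUTOEXTEND ON NEXT 100M MAXSIZE 4G;

-- Make it the default tablespace for an application user
ALTER USER sales_app DEFAULT TABLESPACE sales_tbs;
```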
Oracle stores data in a structured format to manage space efficiently. The smallest unit is a data block.
• Data Block: Smallest unit of I/O; 8KB by default (block sizes from 2KB to 32KB are supported). It stores rows of table data.
• Extent: A set of contiguous data blocks. Oracle allocates extents to store new data as needed.
• Segment: A set of extents allocated to a database object, such as a table, index, or LOB.
When a table is created, Oracle allocates an initial extent. As the table grows, more extents are added. All these extents
together form a segment.
Segments reside in tablespaces, and tablespaces use datafiles. This layered storage model helps Oracle efficiently
manage growth, fragmentation, and access speed.
Oracle’s storage model allows for dynamic space allocation, meaning objects can grow without manual intervention. DBAs
can monitor and tune object growth using views like DBA_EXTENTS and DBA_SEGMENTS.
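For example, segment sizes and extent counts for one schema can be checked like this (the schema name is illustrative):

```sql
-- Total size and extent count per segment for one schema
SELECT segment_name, segment_type,
       bytes/1024/1024 AS size_mb, extents
FROM   dba_segments
WHERE  owner = 'HR'
ORDER  BY bytes DESC;
```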
The control file is a small but critical file that tracks the overall state of the database. Without it, the database cannot
start.
It stores:
• The database name and creation timestamp
• Names and locations of datafiles and redo log files
• Checkpoint data and the current log sequence number
• RMAN backup metadata
Oracle reads the control file during the MOUNT stage of startup. If the control file is missing or corrupted, the instance
won’t proceed.
Oracle recommends having multiplexed control files—multiple copies on different disks—specified in the parameter file.
Backups are also critical. You can use RMAN (BACKUP CURRENT CONTROLFILE) or manually create a text backup
(ALTER DATABASE BACKUP CONTROLFILE TO TRACE).
In case of control file loss, you can restore it from a backup and recover the database. Managing this file properly is key to
database stability and recoverability.
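The backup commands mentioned above, plus a query to see where the multiplexed copies currently live:

```sql
-- Locations of the multiplexed control file copies
SELECT name FROM v$controlfile;

-- Text backup, written as a trace file in the diagnostic directory
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

-- Binary backup via RMAN
RMAN> BACKUP CURRENT CONTROLFILE;
```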
Redo logs record all changes made to the database (both committed and uncommitted). They are crucial for instance
recovery in case of failure.
Oracle uses a circular set of redo log groups, each with one or more members (copies):
• When changes are made to the data, redo entries are written to the redo log buffer.
• The LGWR process flushes this buffer to redo log files on disk.
• If the database crashes, redo logs are used to replay those changes.
ACID is an acronym for Atomicity, Consistency, Isolation, and Durability — the four key principles that guarantee
database transactions are reliable:
• Atomicity: A transaction completes entirely or not at all.
• Consistency: Transactions take the database from one valid state to another.
• Isolation: Concurrent transactions do not see each other's intermediate states.
• Durability: Once a transaction is committed, it must not be lost, even if the system crashes.
Oracle performs a log switch when the current redo log file is full and begins writing to the next log group. If the database is in ARCHIVELOG mode, each filled log is copied off as an archived redo log, which enables point-in-time recovery.
Redo logs are vital for ensuring that committed transactions are never lost and that the database remains consistent after
unexpected failures.
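The redo log groups, their members, and their status can be inspected as follows:

```sql
-- One row per redo log group: sequence, size, and status (CURRENT/ACTIVE/INACTIVE)
SELECT group#, sequence#, bytes/1024/1024 AS size_mb, members, status
FROM   v$log;

-- Physical member files of each group
SELECT group#, member FROM v$logfile;

-- Force a log switch manually (e.g., before a backup)
ALTER SYSTEM SWITCH LOGFILE;
```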
Physical backup refers to copying the actual binary files of the database—datafiles, control files, and redo logs. Tools like
RMAN are used for this. These backups are fast, complete, and ideal for full disaster recovery.
Logical backup, on the other hand, involves exporting individual database objects (like tables, schemas, or users) using tools like Data Pump (expdp/impdp) or the legacy exp/imp utilities.
Use Cases:
• Physical backups are used for complete recovery and are a must-have in production.
• Logical backups are useful for migrations, data moves, or restoring specific objects.
Physical backups can be hot (online) or cold (offline). Logical backups are usually done online.
Best practice: use both types. Use physical backups for disaster recovery, and logical for migrations or selective restores.
Oracle Multitenant was introduced in version 12c. It allows a single Container Database (CDB) to host multiple
Pluggable Databases (PDBs).
CDB contains:
• The root container (CDB$ROOT) with Oracle-supplied metadata and common users.
• A seed PDB (PDB$SEED) used as a template for creating new PDBs.
• One set of background processes and shared memory (SGA) serving all PDBs.
PDBs contain:
• Application-specific data and local users.
Benefits:
• Consolidation of many databases onto shared infrastructure.
• Patch and upgrade the CDB once instead of each database separately.
• Fast provisioning through PDB cloning and unplug/plug.
The CDB/PDB model helps organizations manage hundreds of databases more efficiently. It has been the only supported architecture since Oracle 21c.
Steps:
1. Open the non-CDB in read-only mode and run DBMS_PDB.DESCRIBE() to generate an XML manifest file.
2. Shut down the non-CDB cleanly.
3. Use the CREATE PLUGGABLE DATABASE command in the CDB to plug in the non-CDB using the manifest file.
4. Run the noncdb_to_pdb.sql script inside the new PDB, then open it.
Oracle provides tools like Data Pump, RMAN, and Clone PDB methods for other migration strategies. Testing this
conversion in non-production is highly recommended.
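The manifest-based plug-in can be sketched as follows (the PDB name and file paths are illustrative):

```sql
-- On the non-CDB, opened read-only: generate the manifest
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/noncdb.xml');

-- On the CDB: plug in the described database as a new PDB
CREATE PLUGGABLE DATABASE salespdb
  USING '/tmp/noncdb.xml'
  NOCOPY TEMPFILE REUSE;

-- Inside the new PDB: convert the dictionary, then open
ALTER SESSION SET CONTAINER = salespdb;
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE salespdb OPEN;
```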
The Data Dictionary is a collection of tables and views maintained by Oracle to store metadata about the database.
It includes:
• Definitions of all schema objects (tables, indexes, views, procedures)
• Users, roles, and privileges
• Storage structures and space usage
• Integrity constraints and auditing information
It is stored in the SYSTEM tablespace and updated automatically by Oracle when you create, modify, or drop objects.
Common views: DBA_TABLES, DBA_USERS, DBA_OBJECTS, DBA_SEGMENTS, and their ALL_/USER_ counterparts, plus the dynamic performance (V$) views.
The data dictionary is essential for Oracle to function correctly and is used heavily in scripting, automation, audits, and
troubleshooting.
In Oracle, users are accounts that can connect to the database and access data or perform operations. Each user owns their own set of objects (tables, procedures, etc.). Common administration tasks include:
• Creating users.
• Locking/unlocking accounts.
• Changing passwords.
For example, instead of granting 10 privileges to each user individually, you create a role with those 10 privileges and grant the role to each user. This is easier to manage and more secure.
Difference: a privilege is the right to perform one specific action (e.g., CREATE TABLE), while a role is a named collection of privileges that can be granted and revoked as a unit.
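The role pattern in SQL (all names are illustrative):

```sql
-- Create a role and collect privileges in it
CREATE ROLE app_read_role;
GRANT CREATE SESSION TO app_read_role;
GRANT SELECT ON hr.employees TO app_read_role;

-- Create a user and grant the role instead of individual privileges
CREATE USER report_user IDENTIFIED BY "StrongPw#1";
GRANT app_read_role TO report_user;
```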
Oracle lets you define password policies to enforce strong security. These are part of a PROFILE which controls:
• Password length
• Password expiration
Example:
CREATE PROFILE secure_profile LIMIT  -- profile name is illustrative
  FAILED_LOGIN_ATTEMPTS 5
  PASSWORD_LIFE_TIME 30;
If a user enters the wrong password 5 times, Oracle locks the account.
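Assigning a password profile and unlocking a locked account (user and profile names are illustrative):

```sql
-- Attach the profile to an existing user
ALTER USER report_user PROFILE secure_profile;

-- Unlock an account that hit FAILED_LOGIN_ATTEMPTS
ALTER USER report_user ACCOUNT UNLOCK;
```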
Oracle Wallet is a secure container that stores authentication and encryption credentials, such as:
• Database credentials
• SSL certificates
• Encryption keys
Use cases:
• Passwordless connections via the Secure External Password Store.
• Storing Transparent Data Encryption (TDE) master keys.
• SSL/TLS authentication between clients and the database.
Later, apps can use the wallet instead of plaintext passwords. It’s part of Oracle’s Advanced Security features.
Auditing in Oracle helps track who did what and when in the database. It’s used for:
• Security checks and detection of suspicious activity
• Compliance and regulatory requirements
• Tracking user sessions and changes to data or objects
RMAN (Recovery Manager) is Oracle’s built-in tool for performing backup and recovery operations. It automates and
simplifies the process of taking backups, validating them, and recovering data. Compared to user-managed backups
(using OS commands like cp or SQL*Plus), RMAN offers major advantages: block-level validation and corruption detection, incremental backups, built-in compression and encryption, and automatic tracking of backup metadata in the control file or a recovery catalog.
• Full Backup: Captures the entire database regardless of whether blocks changed.
• Incremental Backup: Backs up only the blocks that have changed since the last backup. It can be of two types:
• Differential Incremental: Backs up blocks changed since the last level 0 or level 1 backup (the default).
• Cumulative Incremental: Backs up all changes since last Level 0 (ignores intermediate level 1s).
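A common strategy along these lines is a weekly level 0 plus daily level 1 backups:

```sql
-- Weekly baseline: level 0 captures every used block
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Daily differential level 1: blocks changed since the last level 0 or 1
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Daily cumulative level 1: all blocks changed since the last level 0
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```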
The RMAN Recovery Catalog is a separate schema in a different database that stores metadata about backups, which is especially helpful if the control file is lost. Steps: create a catalog owner schema in the catalog database, connect to it from RMAN and run CREATE CATALOG, then connect to the target database and run REGISTER DATABASE.
Point-In-Time Recovery (PITR) is used to recover the database to a specific past time before a failure (like accidental data deletion). You can recover the whole database, a single tablespace (TSPITR), or, in recent releases, individual tables. For example (the timestamp is illustrative):
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
RMAN> RUN {
  SET UNTIL TIME "TO_DATE('2024-05-01 09:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
SQL> ALTER DATABASE OPEN RESETLOGS;
• It creates a temporary auxiliary database, restores the tablespace there, and copies back the dropped table.
Example:
RMAN> RECOVER TABLESPACE users
  UNTIL TIME 'SYSDATE-1'
  AUXILIARY DESTINATION '/u01/aux';
Alternatively, if the table was dropped and archived logs are available, RMAN TSPITR is your best option when Flashback can’t help.
Performance Tuning
To find what's slowing down the database, you start by identifying where the problem is—CPU, memory, I/O, locks, or
SQLs. Tools like AWR (Automatic Workload Repository), ASH (Active Session History), and v$ views are helpful.
You check:
• The Top 5 Wait Events and top SQL sections of the AWR report
• Active session waits in ASH and V$SESSION
If sessions are waiting on “db file sequential read,” it may be an I/O issue. If the “CPU time” wait class is high, it's a CPU issue. Start with what’s affecting performance the most, using the Top 5 Wait Events in the AWR report.
• AWR (Automatic Workload Repository) collects performance data like wait events, SQLs, sessions, and
system stats every hour (default).
o Use: @$ORACLE_HOME/rdbms/admin/awrrpt.sql
• ASH (Active Session History) samples active sessions every second and exposes the data through V$ACTIVE_SESSION_HISTORY, making it ideal for diagnosing short-lived, session-level problems.
Both are great for troubleshooting performance issues. AWR helps with historical patterns; ASH is more session-focused.
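A quick ASH-style check for the busiest wait events, here over the last 15 minutes (the interval is illustrative):

```sql
-- Top wait events sampled recently; NULL event means the session was on CPU
SELECT NVL(event, 'ON CPU') AS event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 15/1440
GROUP  BY NVL(event, 'ON CPU')
ORDER  BY samples DESC;
```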
23. How do you analyze and resolve high CPU usage in Oracle?
• Run:
SELECT * FROM v$sql ORDER BY cpu_time DESC;
to find top SQLs by CPU.
• Use top and ps -ef from OS to see which Oracle processes are heavy.
• Look into plans of SQLs causing high CPU—bad plans, missing indexes, or high parse rate can be reasons.
• Also check: V$SYSMETRIC for host CPU utilization, and V$SESSION/V$SESSTAT for the sessions consuming the most CPU.
Use SQL tuning, plan baselines, or rewriting inefficient queries to reduce load.
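A more targeted version of the query above, limiting output and showing per-execution cost (FETCH FIRST requires 12c+):

```sql
-- Top 10 statements by total CPU, with per-execution figures
SELECT sql_id,
       cpu_time/1000000 AS cpu_secs,
       executions,
       ROUND(cpu_time / GREATEST(executions, 1)) AS cpu_per_exec_us
FROM   v$sql
ORDER  BY cpu_time DESC
FETCH FIRST 10 ROWS ONLY;
```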
Bind variables are placeholders in SQL statements: instead of writing WHERE emp_id = 101, you write WHERE emp_id = :1 and supply the value at execution time.
Benefits:
Without bind variables, Oracle treats each literal as a new query, leading to more parsing and CPU usage.
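The difference can be seen from SQL*Plus (the table is illustrative):

```sql
-- Literal: every distinct emp_id produces a new cursor and a hard parse
SELECT ename FROM emp WHERE emp_id = 101;

-- Bind variable: one shared cursor regardless of the value
VARIABLE id NUMBER
EXEC :id := 101;
SELECT ename FROM emp WHERE emp_id = :id;
```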
SQL Plan Baseline is a feature to control execution plans for SQL queries.
When a good plan is captured, Oracle stores it in the baseline. If a future change (e.g., stats, patch, bind value) causes a
new bad plan, Oracle can reject it and keep using the trusted plan from the baseline.
Benefits:
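Capturing the current good plan of a statement into a baseline might look like this (the sql_id is illustrative):

```sql
-- Load the cached plan(s) of one statement into a SQL plan baseline
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abcd1234efgh5');
  DBMS_OUTPUT.PUT_LINE(n || ' plan(s) loaded');
END;
/

-- Review captured baselines
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;
```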
First, check the execution plan using EXPLAIN PLAN or DBMS_XPLAN.DISPLAY_CURSOR. Look for:
• Full table scans on large tables
• Missing indexes
• Stale optimizer statistics
• Inefficient join methods or join order
Then:
• Gather fresh statistics with DBMS_STATS if they are stale.
• Consider hints like /*+ index(emp emp_idx1) */ if Oracle chooses a wrong path.
Also analyze the buffer gets, CPU time, and elapsed time from v$sql to confirm inefficiency.
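Pulling the actual plan of the last-run statement, or of a specific cached one (the sql_id is illustrative):

```sql
-- Plan of the most recent statement executed in this session
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);

-- Plan of a specific cached statement
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('abcd1234efgh5', NULL));
```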
27. What is the difference between hard parse and soft parse?
• Hard Parse: Happens when a SQL statement is new to Oracle and not found in shared pool. It must go through:
o Syntax check
o Semantic check
o Optimization
o Plan generation
• Soft Parse: When the same SQL already exists in memory. Oracle skips optimization and reuses the plan.
Hard parses are expensive: they burn CPU and take latches/locks on the shared pool, so reducing them (for example with bind variables) lowers contention.
OPatch is Oracle’s manual patching tool used to apply interim patches to Oracle software like the database or Grid
Infrastructure. It requires the DBA to manually stop services, apply patches, and start services again.
OPatchAuto is an automated patching tool introduced to simplify patching in Oracle RAC and Grid Infrastructure
environments. It automates the process of patch application across all cluster nodes with minimal downtime, often
supporting rolling patches where nodes are patched one by one without stopping the entire cluster.
In short, OPatch is manual, and OPatchAuto automates patching for easier and safer operations.
Rolling patching means applying patches to one RAC node at a time while the rest of the cluster keeps running. This
approach allows Oracle RAC to stay online and available during patching, providing near-zero downtime for the
database users.
For example, if you have 3 nodes in a cluster, you patch node 1, reboot it, then patch node 2, and so on, without shutting
down the entire RAC cluster.
Always take a backup and check cluster health before and after patching.
Upgrading Oracle Database from 19c to 23c generally involves these steps:
1. Prepare by checking compatibility, backing up the database, and reading release notes.
2. Run the pre-upgrade checks (AutoUpgrade in analyze mode, or the pre-upgrade information tool) and fix any reported issues.
3. Install the new Oracle Home (software binaries) for 23c without overwriting 19c.
4. Run the Database Upgrade Assistant (DBUA) or use manual scripts (catctl.pl) to upgrade the database dictionary.
5. Perform post-upgrade tasks like recompiling invalid objects and verifying the upgrade.
Datapatch is a tool used to apply and roll back SQL changes (like patches to the database dictionary or new features)
that come bundled with Oracle patches or PSU (Patch Set Updates).
It is used after applying a patch to the Oracle software binaries to apply the necessary database changes.
$ORACLE_HOME/OPatch/datapatch -verbose
It ensures the database components are properly patched and up to date after binary patching.
Oracle RAC
Oracle Real Application Clusters (RAC) is a technology that enables multiple servers, called nodes, to run a single Oracle
database simultaneously. This means multiple servers share the workload and access the same database files stored on
shared storage devices like SAN or NAS. RAC provides high availability, ensuring that if one node fails, other nodes
keep the database available without interruption to users or applications. Besides availability, RAC offers scalability,
meaning you can add more servers (nodes) to the cluster as workload increases, balancing the load among nodes to
improve performance. This is different from standalone Oracle databases, which run on a single server and can become a
bottleneck or single point of failure. RAC relies on Oracle Clusterware, a software layer that manages node
communication, resource allocation, and failure detection. When a user connects to RAC, they can be routed to any
available node by the cluster, enhancing load distribution. Overall, RAC helps businesses achieve continuous uptime and
scale their database environment easily.
Cache Fusion is a key technology in Oracle RAC that makes multiple nodes appear like one by sharing data directly
between the nodes' memory caches. In a RAC cluster, each node has its own cache (memory area) holding copies of
database blocks. When a node needs a block that another node has, Cache Fusion transfers the block over a fast,
dedicated network called the interconnect, instead of reading it from the slower shared disk storage. This direct transfer
greatly improves performance by reducing disk I/O. Cache Fusion also keeps data consistent across all nodes using
sophisticated locking and coordination mechanisms. It ensures that when one node modifies a block, other nodes see the
changes or wait until the block becomes available. This technology is crucial because it maintains data integrity and
concurrency while allowing all nodes to work together efficiently as a single system. Without Cache Fusion, RAC would
face serious performance and consistency issues.
SCAN stands for Single Client Access Name and is a special DNS name assigned to an Oracle RAC cluster to simplify
client connection management. Instead of knowing multiple node IPs or hostnames, clients connect using the SCAN,
which Oracle Clusterware resolves to one of several SCAN listeners running on the cluster nodes. SCAN listeners
distribute incoming client connections across the available RAC nodes automatically for better load balancing and fault
tolerance. This means clients always have a single, stable connection point, even if nodes are added or removed.
VIPs, or Virtual IP addresses, are IPs assigned to each node but are not tied permanently to the physical server. If a node
fails, its VIP can quickly move to another node, allowing client connections to failover quickly without waiting for DNS
changes. VIPs are essential for fast failover and preventing clients from connecting to downed nodes, improving RAC’s
overall availability.
The Oracle Cluster Registry (OCR) and Voting Disk are fundamental components of Oracle Clusterware used in RAC
clusters.
OCR stores all cluster configuration information such as the list of cluster nodes, resource definitions, and database
configurations. It acts as the cluster’s “brain,” helping nodes understand cluster membership and resource ownership. It’s
critical to back up the OCR regularly because losing it can cause cluster failure.
The Voting Disk is used to maintain cluster integrity and quorum. Each node votes to indicate it is alive and
communicating. If a node can’t communicate with the majority of the voting disks, it is evicted from the cluster to prevent
inconsistent states, which could lead to split-brain problems. The Voting Disk ensures only one active cluster partition
exists at a time, preventing data corruption caused by multiple isolated cluster parts thinking they own the database.
Adding a node to a RAC cluster involves integrating a new server into the existing cluster environment so it can share
workload and access the database.
• Prepare the new server: install the OS and required packages, configure shared storage and networking, and verify readiness with cluvfy.
• Run the addnode.sh script or use Grid Infrastructure tools to add the node to the cluster.
• Configure network settings including SCAN and VIP on the new node.
• Update the OCR and Voting Disk with the new node information.
Removing a node is the reverse process:
• Run the delnode.sh script or equivalent tools to safely remove the node from the cluster.
• Update cluster configuration to remove the node from OCR and Voting Disk.
Proper planning and verification are essential to avoid downtime or cluster issues during node changes.
38. What causes node eviction in RAC and how to prevent it?
Node eviction occurs when Oracle Clusterware forcibly removes a node from the cluster due to health or communication
problems to protect the overall cluster integrity. Common causes include:
• Interconnect (network heartbeat) failures.
• Voting disk access timeouts.
• Node hangs or severe CPU/memory starvation.
• Hardware failures.
Eviction prevents unstable nodes from corrupting cluster data but can cause application interruptions. To prevent eviction: use redundant interconnects and voting disks, monitor the cluster alert and CSS logs, keep the OS and firmware patched, and avoid resource exhaustion on cluster nodes.
Split-brain is a dangerous cluster condition where nodes lose communication with each other but continue to operate
independently, potentially leading to multiple conflicting updates to the same database. This breaks data integrity and can
cause corruption.
Oracle RAC prevents split-brain by using quorum-based voting through the Voting Disk. A node must maintain
communication with a majority of voting disks to stay in the cluster. Nodes that lose quorum are evicted automatically,
shutting them down to avoid split-brain. Oracle Clusterware also monitors heartbeat and network communication to detect
and isolate problematic nodes quickly.
By enforcing a strict majority and evicting minority nodes, RAC ensures only one active cluster partition exists, protecting
data consistency.
40. What are the key RAC views you use for troubleshooting?
In Oracle RAC, global views prefixed with GV$ provide vital information aggregated from all nodes, which is crucial for
effective troubleshooting.
• GV$ACTIVE_SESSION_HISTORY: Helps track active sessions and wait events across the cluster, useful for
identifying performance bottlenecks.
• GV$INSTANCE: Shows the status of each node’s instance including uptime and host information.
• GV$SESSION: Gives session details from all nodes for auditing and monitoring.
• GV$RESOURCE_LIMIT: Tracks resource consumption like memory and processes per node.
These views help DBAs see the entire cluster’s health and activity in one place.
These are command-line utilities used for different Oracle RAC management tasks:
• crsctl is used to control Oracle Clusterware itself. It manages cluster-wide operations like starting/stopping the
cluster, checking cluster status, and handling node evictions.
• srvctl manages Oracle database and listener resources within the cluster. You use it to start/stop databases,
add/remove services, and configure listeners.
• olsnodes is a simple utility that lists all nodes currently part of the cluster.
In short, crsctl handles cluster infrastructure, srvctl manages database services, and olsnodes provides cluster node info.
Oracle Data Guard is a disaster recovery and data protection solution that ensures high availability, data integrity, and
disaster resilience for Oracle databases. It works by maintaining one or more standby databases as copies of a primary
database. These standby databases continuously receive and apply redo data (changes) from the primary database to
stay synchronized.
In simple terms, Data Guard creates a backup "live" copy of your database that is ready to take over in case the primary
database fails due to hardware failure, software issues, or disasters. It helps protect your business by minimizing
downtime and data loss.
Data Guard works with redo transport services, sending redo logs from the primary to the standby. The standby
database applies these logs either in real-time or with minimal delay, depending on configuration. It also supports
automatic failover, meaning if the primary goes down, the standby can be promoted quickly to become the new primary,
allowing your applications to continue working with minimal interruption.
Data Guard can be configured in different protection modes — Maximum Protection, Maximum Availability, or Maximum
Performance — which balance between zero data loss and system performance. This flexibility lets you choose how
much data risk you want to tolerate versus performance impact.
Overall, Oracle Data Guard provides a robust, tested solution for disaster recovery, ensuring your critical data is always
protected and your database environment stays available.
43. Difference between Physical, Logical, Snapshot, and Active Data Guard
Oracle Data Guard supports several types of standby databases, each designed for specific use cases:
• Physical Standby: This is an exact, block-for-block replica of the primary database. It uses Redo Apply to keep
synchronized by applying redo logs from the primary. It can be opened in read-only mode for reporting and
queries but mainly used for disaster recovery. Physical standby provides the highest data integrity and is most
commonly used.
• Logical Standby: This standby database is logically equivalent but physically different. It applies SQL statements
(changes) instead of redo logs, which allows the logical standby to be open read-write for certain purposes and
support additional operations like adding indexes or materialized views. Logical standby is good for reporting and
data warehousing but may have slightly higher lag and complexity.
• Snapshot Standby: A snapshot standby is a physical standby temporarily converted to read-write mode for
testing or reporting purposes. Changes made to it do not affect the primary. Later, it can be converted back to a
physical standby and resynchronized with the primary. This is useful for running tests or experiments without
risking the main data.
• Active Data Guard: This is an enhanced version of physical standby that allows real-time query access to the
standby database while redo logs continue applying in the background. It supports features like read-only
reporting, backups, and fast incremental backups without impacting the primary. Active Data Guard combines high
availability with performance optimization.
Each type balances availability, performance, and use case flexibility differently, and choosing the right standby depends
on your business needs.
A Switchover and Failover are two critical Data Guard operations used during planned maintenance or unplanned
outages:
• Switchover is a planned, controlled role reversal between the primary and standby databases. It is done when
you want to switch roles temporarily, such as for maintenance on the primary server or load balancing. The
current primary becomes standby, and the standby becomes primary, with no data loss. Switchover involves these
steps: confirm both databases are synchronized, stop database activities on primary, switch roles using Data
Guard Broker or manual commands, and redirect clients to the new primary.
• Failover is an unplanned role transition used when the primary database fails unexpectedly and cannot be
recovered quickly. Failover promotes the standby database to primary immediately to restore availability. It may
cause some data loss depending on protection mode and redo application lag. After failover, a manual
reintegration of the old primary (now failed) as a standby is necessary once repaired.
Both processes can be done using Oracle Data Guard Broker (recommended for automation and safety) or manually with
SQL*Plus and RMAN commands. Proper monitoring and communication are essential during these operations to ensure
minimal downtime and data consistency.
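With the Broker configured, both operations reduce to single DGMGRL commands (database names are illustrative):

```sql
-- Planned role reversal, no data loss
DGMGRL> SWITCHOVER TO 'standby_db';

-- Emergency promotion of the standby after primary loss
DGMGRL> FAILOVER TO 'standby_db';

-- Later, reinstate the repaired old primary as the new standby
DGMGRL> REINSTATE DATABASE 'primary_db';
```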
Monitoring Data Guard synchronization is vital to ensure that standby databases are properly receiving and applying
changes from the primary, and to detect any lag or issues before they cause problems.
• Using Oracle views like V$DATAGUARD_STATS, which show the status of log transport and apply services, lag
times, and error messages.
• Checking V$ARCHIVE_DEST_STATUS to confirm archive log shipping status and if the standby is receiving logs.
• Querying V$STANDBY_LOG and V$MANAGED_STANDBY to verify standby redo logs and apply progress.
• Monitoring delay metrics like transport lag (time taken to ship redo logs) and apply lag (time taken to apply logs
on standby). Significant lag could indicate network or system performance issues.
• Using Oracle Enterprise Manager (OEM) console provides a graphical interface for monitoring all Data Guard
components, alerting on problems and providing health overviews.
• Data Guard Broker can automate monitoring and send notifications about synchronization issues or protection
mode violations.
Proactive monitoring helps maintain data protection and minimize failover downtime.
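The lag checks described above, as queries run on the standby and primary:

```sql
-- Transport and apply lag as seen by the standby
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');

-- Archive destination status on the primary
SELECT dest_id, status, error
FROM   v$archive_dest_status
WHERE  status <> 'INACTIVE';
```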
Oracle Data Guard Broker is a management framework that simplifies configuration, monitoring, and management of Data
Guard environments through GUI or command-line tools.
• Enabling Broker on both primary and standby databases by setting the DG_BROKER_START parameter to
TRUE.
• Creating a Broker configuration file that defines the primary and standby databases, their roles, and connection
details.
• Using the command-line utility dgmgrl or Oracle Enterprise Manager to create the configuration with the command
CREATE CONFIGURATION.
• Setting properties like failover modes, protection levels, and log transport settings in the Broker configuration.
• Once configured, Broker automates role transitions, synchronization monitoring, and alerts on issues.
• You can also manage switchover/failover operations via Broker commands safely and consistently.
Using Data Guard Broker reduces human error and simplifies complex Data Guard administration tasks.
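A minimal Broker setup along the lines above (configuration name, database names, and connect identifiers are illustrative):

```sql
-- On both databases first:
ALTER SYSTEM SET DG_BROKER_START = TRUE;

-- Then, from dgmgrl connected to the primary:
DGMGRL> CREATE CONFIGURATION 'dg_cfg' AS
          PRIMARY DATABASE IS 'primary_db'
          CONNECT IDENTIFIER IS primary_tns;
DGMGRL> ADD DATABASE 'standby_db' AS CONNECT IDENTIFIER IS standby_tns;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;
```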
Before creating a standby database in Oracle Data Guard, certain prerequisites must be met to ensure the environment
supports data protection and failover:
• Both primary and standby servers should have compatible Oracle versions and patches applied to avoid
compatibility issues.
• The primary database must be in ARCHIVELOG mode to generate redo logs necessary for shipping.
• A shared storage or network file system setup is not mandatory but recommended for certain standby
configurations.
• Configure the primary database with proper initialization parameters such as LOG_ARCHIVE_CONFIG,
LOG_ARCHIVE_DEST_n, and FAL_SERVER to enable redo transport.
• Sufficient network bandwidth and reliable connectivity between primary and standby to transfer redo logs in a
timely manner.
• Standby server must have the Oracle software installed and configured properly, matching the primary database
edition and platform.
• Proper directory structure and permissions on the standby server to receive and apply redo logs.
• Backups of the primary database should be taken and restored on the standby server as a baseline copy.
• Optional but recommended: set up Oracle Data Guard Broker to manage and monitor the standby environment.
Meeting these prerequisites ensures a smooth creation and stable operation of the standby database.
48. How do you perform schema and full database export/import using Data Pump?
Oracle Data Pump is a powerful and faster utility used for exporting and importing database objects such as tables,
schemas, or the entire database. To perform a schema export/import, you specify the schema name using the schemas
parameter in the expdp (export) and impdp (import) commands. This exports all objects belonging to the schema, like
tables, indexes, and procedures.
For a full database export/import, you use the full=y parameter, which exports the entire database’s metadata and data,
including users, tablespaces, and roles. Full exports are useful for migrations or backup of entire databases.
During export, Data Pump creates dump files that store the data and metadata. During import, these dump files are read
to recreate objects in the target database. You can also use remapping features to change schema names or tablespaces
during import.
Data Pump is faster than the old Export/Import utilities because it uses direct path load and parallel processing. It also
allows fine control with filters and supports network mode export/import for moving data between databases without
intermediate dump files.
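Typical command lines for the two modes (credentials, directory object, and file names are illustrative):

```sql
-- Schema-level export and import
expdp system/password schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp logfile=hr_exp.log
impdp system/password schemas=HR directory=DATA_PUMP_DIR dumpfile=hr.dmp remap_schema=HR:HR_TEST

-- Full database export, split across parallel workers
expdp system/password full=y directory=DATA_PUMP_DIR dumpfile=full_%U.dmp parallel=4
```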
Transportable Tablespaces (TTS) is a feature in Oracle that allows you to move large amounts of data between databases
quickly by physically moving tablespace data files rather than exporting and importing all the data.
To use TTS, you first make tablespaces read-only on the source database. Then, you use the expdp utility with the
transport_tablespaces parameter to export metadata describing the tablespace. The actual data files are copied manually
or using OS commands to the target server.
On the target database, the tablespace data files are plugged in by importing the metadata dump using impdp. This
method avoids the overhead of row-by-row data export/import, making it much faster for large datasets.
TTS is often used in database migrations, upgrades, or when consolidating databases. It requires compatible database
versions and consistent tablespace structure between source and target.
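A minimal sketch of the TTS flow described above; tablespace, datafile, and directory names are illustrative:

```shell
# 1. Source: make the tablespace read-only (run in SQL*Plus)
#    ALTER TABLESPACE users_data READ ONLY;
# 2. Source: export only the tablespace metadata
expdp system/password transport_tablespaces=USERS_DATA directory=DATA_PUMP_DIR dumpfile=tts.dmp
# 3. Copy the datafiles to the target server with OS commands (cp, scp, rsync, ...)
# 4. Target: plug the tablespace in by importing the metadata
impdp system/password directory=DATA_PUMP_DIR dumpfile=tts.dmp \
      transport_datafiles='/u01/oradata/TRG/users_data01.dbf'
```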
• Use parallel processing in Data Pump (parallel parameter) to split the work into multiple threads, speeding up
the operation.
• Ensure sufficient disk space and fast I/O subsystems, especially for dump files and redo logs.
• Use direct path export/import (default in Data Pump) instead of conventional path to boost speed.
• Minimize redo generation during import by using NOLOGGING options where appropriate.
• For import, disable indexes and constraints temporarily to speed up data loading, then rebuild or enable them
after import.
• Monitor and tune network bandwidth if doing network mode operations to avoid bottlenecks.
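The parallelism and index-deferral tips above can be sketched as follows (schema and file names are illustrative; %U generates one dump file per worker):

```shell
# Parallel export across multiple dump files
expdp system/password schemas=HR parallel=4 dumpfile=hr_%U.dmp directory=DATA_PUMP_DIR

# Import the data first, skipping indexes, then build them separately afterwards
impdp system/password schemas=HR dumpfile=hr_%U.dmp directory=DATA_PUMP_DIR \
      exclude=INDEX exclude=CONSTRAINT
```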
51. What are Oracle High Availability solutions besides RAC and DG?
Besides Oracle RAC (Real Application Clusters) and Data Guard, Oracle offers several other high availability (HA)
solutions:
• Oracle GoldenGate: Provides real-time data replication between heterogeneous databases, supporting zero
downtime migrations and active-active configurations.
• Oracle Flashback Technologies: Allow recovery of data from logical errors without restoring backups.
• Oracle Active Data Guard: Extends Data Guard by enabling read-only queries on standby to offload reporting
workloads.
• Oracle ASM (Automatic Storage Management): Provides high availability and scalability for storage by
managing disk groups and redundancy.
• Oracle Clusterware: Manages server clustering and resource failover beyond just RAC.
• Oracle RMAN (Recovery Manager): Supports fast and reliable backup/recovery strategies integral to HA.
• Oracle Data Guard Broker: Simplifies management and automation of Data Guard configurations for HA.
These solutions together cover various failure scenarios and business continuity needs.
Cloning a database with RMAN involves creating an exact copy of a source database to a target server. The main steps
are:
• Take a backup of the source database, including datafiles, control files, and archived logs.
• Copy the backup pieces and parameter files to the target server.
• Use RMAN RESTORE and RECOVER commands on the target to recreate the database.
• Configure init.ora/spfile on the target to reflect the new environment (file paths, DB name if different).
• Optionally, use RMAN DUPLICATE command for an automated cloning process that handles backup, restore,
and recovery in one step.
• Ensure network connectivity and access permissions between source and target.
• After cloning, update configurations like listeners and TNS names for client connections.
RMAN cloning is widely used for creating test, development, or standby environments from production.
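A minimal sketch of the automated approach using RMAN DUPLICATE; the TNS aliases, database name, and paths are illustrative, and the exact clauses vary by version and configuration:

```shell
rman TARGET sys/password@prod AUXILIARY sys/password@clone

RMAN> DUPLICATE TARGET DATABASE TO clonedb
        FROM ACTIVE DATABASE
        SPFILE
          SET db_file_name_convert '/u01/oradata/PROD/','/u01/oradata/CLONE/'
          SET log_file_name_convert '/u01/oradata/PROD/','/u01/oradata/CLONE/';
```

FROM ACTIVE DATABASE clones directly over the network; omitting it makes RMAN restore from existing backups copied to the target instead.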
Refreshing a non-production database (such as test or dev) from a production copy involves updating the non-prod DB
with a recent snapshot of production data.
• Restoring the backup on the non-prod server or using RMAN duplicate to create a fresh copy.
• Optionally masking or anonymizing sensitive data in the non-prod copy for compliance.
• Adjusting environment-specific settings such as connection strings, mail servers, or application configurations to
avoid impacting production systems.
• Scheduling refreshes regularly to keep non-prod environments relevant for testing and development.
Proper refresh procedures ensure developers and testers work with real data scenarios while maintaining data security.
54. What are Oracle snapshots and how are they used?
Oracle Snapshots, now known as Materialized Views, are database objects that store the results of a query physically.
They allow you to replicate and cache remote data locally, improving query performance and enabling offline reporting.
Snapshots can be refreshed periodically or on demand to keep the data current; typical uses include replicating remote data for local reporting and precomputing expensive aggregations.
Materialized views can be simple or complex, support fast refresh using logs, and are tightly integrated into Oracle’s
optimizer for efficient query rewriting.
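A minimal sketch of a materialized view over a remote table; the table, database link, and column names are illustrative, and fast refresh additionally requires a materialized view log on the master table:

```sql
-- Store aggregated remote data locally, refreshable on demand
CREATE MATERIALIZED VIEW sales_mv
  BUILD IMMEDIATE
  REFRESH FAST ON DEMAND
  AS SELECT product_id, SUM(amount) AS total_amount
     FROM   sales@remote_db
     GROUP  BY product_id;

-- Trigger a fast ('F') refresh manually
EXEC DBMS_MVIEW.REFRESH('SALES_MV', 'F');
```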
A standby clone is a physical standby database used mainly for reporting or testing without impacting the primary.
• Open the standby database in read-only mode or use Active Data Guard for real-time query capability.
• Ensure the standby database continuously applies redo logs to stay synchronized.
• Schedule regular monitoring to ensure data freshness and apply lag is minimal.
• Use standby cloning to offload heavy reporting queries from primary, reducing load and risk.
Automatic Storage Management (ASM) is Oracle's integrated volume manager and file system designed for Oracle
database files. It simplifies storage management by abstracting the physical disks into logical disk groups.
ASM automatically manages striping and mirroring of data across disks to improve performance and provide redundancy.
It reduces the complexity of managing individual files and disks manually.
• Integrates fully with Oracle tools like RMAN and Grid Infrastructure.
ASM is the preferred storage solution in Oracle RAC and large database environments.
• Ensuring the disks are properly prepared (e.g., raw devices or partitioned).
• Once created, the disk group appears as a logical volume and is used to store database files.
• Using views like V$ASM_DISKGROUP, V$ASM_DISK, and V$ASM_OPERATION to check disk group space,
health, and ongoing operations.
• Monitoring I/O statistics and disk failures to detect bottlenecks or faults early.
• Regularly reviewing ASM metadata and disk status to maintain high availability.
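The creation and monitoring steps above can be sketched as follows; the disk group name and device paths are illustrative:

```sql
-- Create a disk group with normal (two-way) redundancy
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/oracleasm/disks/DISK1',
       '/dev/oracleasm/disks/DISK2';

-- Monitor space and health
SELECT name, state, total_mb, free_mb FROM v$asm_diskgroup;
SELECT path, mount_status, mode_status FROM v$asm_disk;
SELECT operation, state, est_minutes FROM v$asm_operation;  -- rebalance progress
```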
ASM templates define the attributes and defaults for disk groups and files in ASM, including redundancy and striping
policies.
• EXTERNAL: No ASM mirroring; redundancy is delegated to the underlying storage (e.g., hardware RAID).
• NORMAL: Two-way mirroring; data is mirrored across two failure groups for fault tolerance.
• HIGH: Three-way mirroring; data is mirrored across three failure groups for maximum protection.
Templates help automate the selection of redundancy and file placement for consistent performance and reliability.
• Checking ASM alert logs and V$ASM_DISK view to identify missing disks.
• Verifying physical connectivity and disk health at OS level using tools like lsblk, fdisk.
• Restarting ASM instance or using ALTER DISKGROUP commands to drop or re-add disks.
• Using Oracle support resources if hardware failure or firmware bugs are suspected.
Timely troubleshooting prevents ASM disk group degradation and data loss risks.
Troubleshooting
ORA-01555 occurs when a query tries to access an older version of data that has been overwritten in the undo
tablespace before the query completed. This usually happens during long-running queries or when undo retention is too
low.
To handle it:
• Increase undo tablespace size and undo retention period to keep undo data longer.
• Avoid unnecessary large transactions that hold undo for a long time.
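The sizing steps above can be sketched as follows; the retention value and datafile path are illustrative and should be derived from your longest-running queries:

```sql
-- See the longest query duration observed, as a guide for undo retention
SELECT MAX(maxquerylen) AS longest_query_secs FROM v$undostat;

-- Raise undo retention (in seconds)
ALTER SYSTEM SET undo_retention = 3600;

-- Grow the undo tablespace by resizing its datafile
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf' RESIZE 8G;
```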
62. What is the procedure for resolving “ORA-03113: end-of-file on communication channel”?
ORA-03113 means the connection between client and server was unexpectedly terminated. This could be caused by
server crash, network issues, or bugs.
To resolve:
• Check the database alert log and trace files to identify server-side errors or crashes.
• Check for server resource issues like memory, CPU, or disk space.
When the database is stuck in mount mode, it means the instance has started but the database is not open yet.
Steps to handle:
To resolve:
• If the datafile belongs to a read-only or offline tablespace, bring it online or drop the tablespace if data is not
needed.
• If no backup is available, use ALTER DATABASE DATAFILE ... OFFLINE DROP as a last resort, but this results in data loss.
• Always verify backup and recovery plans to avoid critical data loss.
• Query views like V$SESSION and V$LOCK to identify which session is holding locks and which are waiting, for example: SELECT blocking_session, sid, serial#, wait_class, event FROM v$session WHERE blocking_session IS NOT NULL;
• Terminate the blocking session with ALTER SYSTEM KILL SESSION if business rules allow.
• Sometimes, if the session does not terminate, use OS commands to kill the session process.
• After killing, monitor the system to ensure locks are released and performance returns to normal.
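A minimal sketch of killing a blocker once identified; the sid and serial# values come from the V$SESSION query and are illustrative:

```sql
-- Kill the blocking session (sid=123, serial#=45678 here are examples)
ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;
```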
• Use bulk operations (FORALL, BULK COLLECT) instead of row-by-row processing to reduce context switches
between SQL and PL/SQL.
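The bulk-processing tip above can be sketched in PL/SQL as follows; the table and column names come from the common HR sample schema and are illustrative:

```sql
-- One bulk fetch plus one FORALL update instead of a row-by-row loop
DECLARE
  TYPE t_ids IS TABLE OF employees.employee_id%TYPE;
  l_ids t_ids;
BEGIN
  SELECT employee_id BULK COLLECT INTO l_ids
  FROM   employees
  WHERE  department_id = 50;

  FORALL i IN 1 .. l_ids.COUNT
    UPDATE employees
    SET    salary = salary * 1.05
    WHERE  employee_id = l_ids(i);

  COMMIT;
END;
/
```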
• Procedure: A subprogram that performs an action but does not return a value directly. It may return values via
OUT parameters. Used to execute business logic.
• Function: Similar to a procedure but must return a single value and can be used in SQL statements.
• Package: A collection of related procedures, functions, variables, and cursors grouped as a single unit. Packages
improve modularity, encapsulation, and can maintain state via package variables.
Packages help organize code logically and provide better security and performance.
Autonomous transactions are independent transactions that run separately from the main transaction. They can commit or
rollback without affecting the main transaction.
• Performing operations like saving progress or sending notifications even if the main transaction fails.
Autonomous transactions improve flexibility in handling complex logic but should be used carefully to maintain data
consistency.
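A minimal sketch of the notification/logging use case above; the procedure and table names are illustrative:

```sql
-- An autonomous error logger: its COMMIT does not affect the caller's transaction
CREATE OR REPLACE PROCEDURE log_error(p_msg VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO error_log (logged_at, message)
  VALUES (SYSTIMESTAMP, p_msg);
  COMMIT;  -- commits only this insert, even if the caller later rolls back
END;
/
```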
To use it:
• Create a job with the DBMS_SCHEDULER.CREATE_JOB procedure, specifying the job type (PL/SQL block, stored procedure, external script).
• It replaces the older DBMS_JOB and offers more flexibility and control.
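A minimal sketch of creating a daily job with DBMS_SCHEDULER; the job name, schema, and schedule are illustrative:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_STATS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(''HR''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',   -- run daily at 02:00
    enabled         => TRUE);
END;
/
```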
Exception handling is a mechanism to catch and manage runtime errors in PL/SQL to prevent program termination and
ensure graceful error recovery.
• Handling exceptions helps maintain program flow and log errors for troubleshooting.
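A minimal sketch of an exception handler; the table and error handling policy are illustrative:

```sql
BEGIN
  UPDATE employees SET salary = salary * 1.1 WHERE employee_id = 999;
  IF SQL%ROWCOUNT = 0 THEN
    RAISE NO_DATA_FOUND;
  END IF;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('No such employee.');
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Unexpected error: ' || SQLERRM);
    RAISE;  -- re-raise after logging so the caller still sees the failure
END;
/
```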
71. How do you configure and use Oracle Enterprise Manager (OEM)?
To configure:
Alert log is the first place to check for troubleshooting critical database issues.
• Create incident rules to send email alerts for specific thresholds or errors.
• Alternatively, use DBMS_ALERT or UTL_SMTP packages in PL/SQL for custom email notifications.
75. What are the top OS-level metrics you monitor for DB performance?
• Filesystem free space: Running out of disk space affects database files.
Oracle provides several types of indexes to improve query performance by speeding up data retrieval:
• B-tree indexes: The most common type; ideal for high-cardinality columns with many unique values. They
organize data in a balanced tree structure for fast searches.
• Bitmap indexes: Efficient for columns with low cardinality (few distinct values), such as gender or status. They
use bitmaps to represent data and work well in data warehousing.
• Function-based indexes: Indexes based on expressions or functions on columns, like indexing UPPER(name)
to make case-insensitive searches faster.
• Reverse key indexes: Reverse the byte order of keys to reduce contention in insert-heavy environments.
• Domain indexes: Custom indexes for specific data types like spatial, text, or XML data.
• Clustered indexes: Oracle has no clustered index in the SQL Server sense; the closest equivalents are index-organized tables and table clusters, which physically group related rows for faster access.
Each type suits different use cases based on data distribution and query patterns.
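The main index types above can be sketched as follows; the table and column names are illustrative:

```sql
CREATE INDEX emp_name_ix   ON employees(last_name);                -- B-tree
CREATE BITMAP INDEX emp_status_bix ON employees(status);           -- bitmap
CREATE INDEX emp_upper_ix  ON employees(UPPER(last_name));         -- function-based
CREATE INDEX emp_id_rix    ON employees(employee_id) REVERSE;      -- reverse key
```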
Rebuilding an index reorganizes and defragments it to improve performance and space usage.
Why rebuild:
How to rebuild:
This creates a new copy of the index and replaces the old one without affecting database availability. You can also rebuild
with options like parallelism or online rebuild for minimal downtime.
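The rebuild options mentioned above can be sketched as follows; the index name is illustrative:

```sql
-- Online rebuild: DML against the table continues during the operation
ALTER INDEX emp_name_ix REBUILD ONLINE;

-- Parallel rebuild with reduced redo generation
ALTER INDEX emp_name_ix REBUILD PARALLEL 4 NOLOGGING;
```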
A bitmap index uses bitmaps (bit arrays) to represent the presence or absence of a value in rows, making it highly space-
efficient.
Use cases:
• Ideal for columns with low cardinality (few distinct values), like gender, marital status, or categories.
• Common in data warehouse environments where queries aggregate large data sets.
• Not suitable for OLTP systems with frequent updates, as bitmap indexes can cause locking issues.
Bitmap indexes speed up complex ad hoc queries involving AND, OR, NOT operations on multiple columns.
Partitioning divides a large table or index into smaller, manageable pieces called partitions, each stored separately but
appearing as one logical object.
Benefits:
• Simplifies maintenance tasks like backup, restore, and data loading on partitions.
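A minimal sketch of range partitioning by date; the table name and partition boundaries are illustrative:

```sql
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01'),
  PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```

Queries filtering on sale_date can then prune to a single partition instead of scanning the whole table.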
80. What is the difference between locally and globally partitioned indexes?
• Locally partitioned indexes: Have the same partitioning scheme as the underlying table. Each index partition
corresponds to a table partition, making maintenance easier (e.g., dropping a table partition automatically drops
the related index partition). Queries that access one table partition only scan the corresponding index partition.
• Globally partitioned indexes: Partitioning is independent of the table’s partitioning. The index partitions may not
align with table partitions. This allows more flexibility but increases complexity in maintaining index and table
consistency.
Choosing between local and global depends on workload, partition maintenance requirements, and query patterns.
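The two index styles can be sketched as follows, assuming the partitioned sales table above; names and boundaries are illustrative:

```sql
-- Local index: one index partition per table partition, equi-partitioned
CREATE INDEX sales_date_lix ON sales(sale_date) LOCAL;

-- Global index: partitioned independently of the table's partitioning
CREATE INDEX sales_amt_gix ON sales(amount)
  GLOBAL PARTITION BY RANGE (amount) (
    PARTITION small VALUES LESS THAN (1000),
    PARTITION large VALUES LESS THAN (MAXVALUE)
  );
```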
Unified Auditing consolidates all audit types into a single infrastructure, improving performance and manageability.
To enable:
• For new installations of Oracle 12c and later, Unified Auditing is enabled by default.
• For earlier versions or upgrades, you might need to enable the unified audit trail by running the enable script as
SYSDBA.
Configuration:
• Use CREATE AUDIT POLICY to define what actions to audit (e.g., logins, object access).
Unified Auditing supports fine-grained, user, and system auditing in one place.
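A minimal sketch of defining, enabling, and reviewing a unified audit policy; the policy name is illustrative:

```sql
-- Define and enable a policy auditing logon actions
CREATE AUDIT POLICY logon_policy ACTIONS LOGON;
AUDIT POLICY logon_policy;

-- Review captured records
SELECT event_timestamp, dbusername, action_name
FROM   unified_audit_trail
ORDER  BY event_timestamp DESC;
```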
Tracking unauthorized access involves monitoring and auditing all login attempts and resource access.
Steps:
• Use Oracle’s Fine-Grained Auditing (FGA) to track access to sensitive data at row or column level.
• Regularly review audit logs and correlate with user roles and permissions.
Combining audit with security tools ensures early detection of unauthorized access.
FGA policies audit access to specific rows or columns based on user-defined conditions.
Features:
• Policies are created using DBMS_FGA.ADD_POLICY specifying the object, columns, and conditions.
• Generates audit records only when conditions are met, reducing noise.
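A minimal sketch of adding an FGA policy; the schema, object, and condition are illustrative:

```sql
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'AUDIT_SALARY',
    audit_column    => 'SALARY',
    audit_condition => 'department_id = 10');  -- audit only matching rows
END;
/
-- Matching accesses then appear in DBA_FGA_AUDIT_TRAIL
```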
Data masking hides sensitive data to protect privacy while allowing realistic data in non-production environments.
Methods:
• Static Data Masking: Replace sensitive data in backups or copies with scrambled but realistic data using Oracle
Data Masking Pack.
• Dynamic Data Masking: Use Oracle Database Vault or SQL policies to mask data in real-time based on user
roles.
• Use built-in masking formats like random numbers, nulls, or custom functions.
• Masking ensures compliance with regulations and protects against data leaks.
Oracle Autonomous Database is a revolutionary cloud-based service that leverages advanced automation and machine
learning to manage, secure, and optimize database operations without human intervention. Designed to eliminate
complexity, it autonomously handles provisioning, patching, backups, tuning, and scaling while ensuring high availability
and robust security. By continuously analyzing workload patterns, it dynamically adjusts resources and optimizes SQL
execution for peak performance. Its self-repairing capabilities minimize downtime, identifying and resolving issues before
they impact operations. Integrated encryption, real-time threat detection, and automated security updates safeguard data
from unauthorized access, making it a highly secure solution for enterprises. With its autonomous nature, businesses can
reduce administrative overhead, lower operational costs, and accelerate innovation by focusing on strategic initiatives
instead of database maintenance. Oracle Autonomous Database is a game-changer for industries requiring reliable,
scalable, and high-performance data management, from finance and healthcare to retail and logistics.
- Self-Securing: Continuously applies security patches and protects against cyber threats.
- High Performance: Optimizes query execution and data retrieval using AI-driven analytics.
Oracle’s Multitenant Architecture helps manage multiple databases efficiently in the cloud. Instead of having separate databases on different servers, Oracle allows many databases to exist inside a single system, called a container database or CDB. Each individual database inside it is called a pluggable database or PDB. Think of it like a big hotel where every guest has their own room, but they all share the same building resources like electricity, water, and security.

This setup makes managing databases much easier. When Oracle updates or patches the system, it happens at the CDB level, so all PDBs get updated at once, reducing downtime. Adding a new database is quick since it only requires plugging in a new PDB rather than setting up everything from scratch. Security is strong because each PDB is isolated, meaning PDBs cannot interfere with each other’s data. Performance is also optimized, as Oracle balances workload across all PDBs, ensuring efficient resource use.

To remember this easily for interviews, think of a shopping mall. The mall itself is the container database. Each store inside is a pluggable database, operating independently but benefiting from shared electricity, security, and maintenance. When the mall gets a renovation, every store is upgraded at the same time. Similarly, Oracle’s Multitenant Architecture makes managing multiple databases simpler, faster, and more cost-effective.
Migrating Oracle workloads to Oracle Cloud Infrastructure (OCI) or AWS RDS involves several steps:
88. What is Oracle ZDLRA and how does it integrate with RMAN?
Oracle Zero Data Loss Recovery Appliance (ZDLRA) is a specialized backup system designed to eliminate data loss during database recovery. Unlike traditional backups that rely on scheduled snapshots, ZDLRA continuously captures redo logs in real time, ensuring that even the most recent transactions are preserved. Its integration with Recovery Manager (RMAN) allows for efficient and automated backups, reducing storage usage and speeding up recovery processes. ZDLRA
validates incoming RMAN backups, detecting corruption before issues arise, ensuring the integrity of stored data. Since
ZDLRA offloads backup management from the primary database, system performance remains optimal while backup
operations run smoothly. The appliance supports incremental forever backups, meaning only the changes since the last
backup are stored, which minimizes storage needs and backup durations. Fast recovery is enabled through optimized
data retrieval processes, making database restoration quicker and more reliable. By automating backup validation, data
protection, and efficient recovery, Oracle ZDLRA provides a streamlined solution for businesses requiring high availability
and data integrity for mission-critical operations.
Oracle Fleet Patching and Provisioning (FPP) is a powerful automation framework designed to streamline the patching and
provisioning process across multiple Oracle servers and databases. Traditional patching methods often require significant
manual effort, leading to errors, inconsistencies, and extended downtime. FPP eliminates these challenges by centralizing
patch management, ensuring uniform software updates across an entire Oracle fleet while reducing maintenance
complexities. It is particularly useful in environments with a large number of databases and applications that need
synchronized updates. One of FPP’s key strengths is its ability to perform rolling patching in Real Application Clusters (RAC) environments, meaning patches can be applied to individual nodes while the others remain operational. This
significantly minimizes downtime, ensuring that business-critical applications continue running without disruption.
Moreover, FPP allows database administrators to pre-validate patches in test environments before applying them to
production, enhancing reliability and reducing the risk of unexpected failures. Beyond patching, FPP is also an essential
tool for provisioning new Oracle installations. Whether setting up databases, middleware, or applications, FPP automates
the deployment process by applying predefined configurations, reducing setup time and human errors. This
standardization ensures that newly provisioned environments maintain consistency across the entire infrastructure. FPP
also supports out-of-place patching, where new Oracle homes are created alongside existing installations before the
switch is made. This method allows for easier rollback options and provides a safer mechanism for critical updates.
Administrators can schedule patch cycles, execute controlled updates, and monitor patching success across various
instances using its integrated management console. By integrating FPP into enterprise IT operations, organizations can
improve system stability, enforce compliance, and lower the operational overhead associated with manual updates. It
plays a vital role in ensuring databases, applications, and Oracle software stacks remain secure, up-to-date, and optimally
configured with minimal effort.
- Out-of-place patching allows for safer updates and quick rollbacks if needed
Transparent Data Encryption (TDE) is a security feature in Oracle that automatically encrypts sensitive data stored in
tablespaces or individual columns without requiring modifications to applications. It ensures that data at rest remains
protected from unauthorized access, even if someone gains access to the physical storage or backup files. TDE uses
encryption keys managed by Oracle Wallet or external Key Management Systems, preventing unauthorized decryption.
When data is stored, it is encrypted, and when authorized users or applications retrieve it, it is transparently decrypted
without additional steps.
The typical steps to implement TDE are:
1. Enable TDE: Configure the database to support encryption by setting up Oracle Wallet or an external keystore to store encryption keys.
2. Create and Manage Encryption Keys: Generate encryption keys using the ADMINISTER KEY MANAGEMENT command and securely store them in the wallet or keystore.
3. Encrypt Target Data Objects: Apply encryption to entire tablespaces or specific columns using the ENCRYPT clause when creating or modifying tablespaces and tables.
4. Validate Encryption: Check encryption status using views like DBA_ENCRYPTED_COLUMNS or DBA_TABLESPACES to confirm that data is properly secured.
5. Backup and Recovery Considerations: Ensure encrypted backups are properly managed, as TDE-encrypted data requires access to the correct encryption keys for restoration.
TDE is widely used in industries that require strong data protection, such as finance, healthcare, and government sectors, helping organizations meet compliance requirements like GDPR and PCI DSS.
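The TDE steps above can be sketched as follows; the keystore password, tablespace name, and datafile path are illustrative, and the keystore location is assumed to be configured via WALLET_ROOT or sqlnet.ora:

```sql
-- Open the keystore and set a master encryption key
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "WalletPwd1";
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "WalletPwd1" WITH BACKUP;

-- Create an encrypted tablespace
CREATE TABLESPACE secure_ts
  DATAFILE '/u01/oradata/ORCL/secure01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

-- Validate
SELECT tablespace_name, encrypted FROM dba_tablespaces;
```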
Real-World Scenarios
92. A critical table was accidentally dropped — what steps do you follow?
• Check if Flashback Table is enabled; if yes, use FLASHBACK TABLE to restore it.
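A minimal sketch of the flashback step above; the schema and table names are illustrative:

```sql
-- Restore a dropped table from the recycle bin
FLASHBACK TABLE hr.employees TO BEFORE DROP;

-- If the table was purged from the recycle bin, fall back to
-- RMAN point-in-time recovery or tablespace point-in-time recovery.
```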
• Analyze session activity and wait events using AWR and ASH reports.
95. Logs are filling up too fast — how do you control logging?
• Adjust logging levels and enable only necessary audit or debug logs.
Command-Line Usage
SQL*Plus is the primary command-line interface to interact with Oracle Database. As a DBA, you use SQL*Plus to connect
to the database, run queries, execute scripts, manage users, check system status, and perform administrative tasks.
Remember: SQL*Plus is your direct window to the database internals — learning its commands like CONNECT,
SELECT, SHOW PARAMETER, and STARTUP is crucial.
SRVCTL (Server Control) is a utility to manage Oracle RAC components. It lets you start, stop, and check the status of
database instances, listeners, and ASM in a RAC environment. Think of SRVCTL as the remote control for your RAC
cluster. Commands like srvctl start database, srvctl stop instance, and srvctl status listener are vital. You can add or
remove nodes with it too.
CRSCTL (Cluster Ready Services Control) manages the Oracle Clusterware stack—the foundation layer for RAC. You
use CRSCTL for cluster-wide operations such as starting/stopping the cluster, managing voting disks, checking cluster
resource status, and node membership. CRSCTL commands like crsctl start cluster, crsctl status resource, and crsctl get
css votedisk are essential for cluster health and troubleshooting.
Memory tip:
Think of SQL*Plus as your database command center, SRVCTL as your cluster device manager, and CRSCTL as your
cluster system administrator. Visualize these as layers: database (SQL*Plus) < RAC resource control (SRVCTL) <
cluster infrastructure (CRSCTL).
RMAN (Recovery Manager) is Oracle’s built-in backup and recovery tool. As a DBA, you’ll rely heavily on these
commands:
• BACKUP INCREMENTAL LEVEL 1 DATABASE; — incremental backup saving space and time.
You also use commands for backup validation, managing channels (for parallelism), and catalog maintenance.
Memory tip:
Imagine RMAN commands as the backup and restore toolbox. Visualize “BACKUP” as putting things safely in a box,
“RESTORE” as unpacking, and “RECOVER” as fixing broken pieces to make it whole again. Regular “REPORT” and
“DELETE” commands keep your backup storage neat and optimized.
DGMGRL (Data Guard Manager command-line) is used to manage Data Guard environments—ensuring disaster
recovery and data availability.
Common commands:
• EDIT DATABASE 'dbname' SET PROPERTY; — modify database properties like log transport.
Memory tip:
DGMGRL commands are your Data Guard command center. Think of “SHOW” commands as status checks,
“SWITCHOVER” as role exchange, and “FAILOVER” as emergency takeover. Remember: ENABLE means “turn on,” and
DISABLE means “turn off” the DG configuration.
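The core DGMGRL workflow can be sketched as follows; the credentials and database names are illustrative:

```shell
dgmgrl sys/password@prod
DGMGRL> SHOW CONFIGURATION;          # overall Data Guard health
DGMGRL> SHOW DATABASE 'stbydb';      # standby status and apply lag
DGMGRL> SWITCHOVER TO 'stbydb';      # planned, zero-data-loss role reversal
DGMGRL> FAILOVER TO 'stbydb';        # emergency takeover when primary is lost
```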
• srvctl status database -d dbname — shows database instance status across nodes.
These commands help diagnose node failures, resource issues, network problems, and cluster membership
inconsistencies.
Memory tip:
Visualize your RAC cluster as a machine with multiple parts. CRSCTL is the mechanic checking each part’s status,
SRVCTL controls the machine’s power and functions, and OLSNODES is your roster of active players (nodes). Repeat
these commands during troubleshooting to build strong muscle memory.
Memory tip:
Here is a creative ORACLE DBA acronym, breaking down each letter with clear, memorable daily DBA tasks and mind tips to help beginners and pros alike:
E = Ensure Security
Check user accounts, roles, and audit trails. Protect your data fortress.
This ORACLE DBA acronym helps you remember essential tasks and responsibilities in a simple, easy way. Perfect for
interviews and daily work! By following a structured daily routine, you minimize risks and keep production smooth.