200 Advanced Oracle DBA Q&A - Real-World, Deep-Level

The document provides a comprehensive guide for preparing for Oracle DBA expert interviews, featuring 200 technical questions covering various topics such as performance tuning, memory management, RMAN, and Data Pump. It includes detailed explanations of Oracle database architecture, installation processes, memory allocation, and backup strategies, particularly focusing on the advancements in Oracle 19c, 21c, and 23ai. The content is aimed at helping candidates understand complex Oracle DBA concepts and improve their interview readiness.


Oracle DBA Expert Interview Prep 200 Deep Technical Questions for Architects - Brijesh Mehra

Most Asked 200 Expert-Level Oracle DBA Questions | Oracle 19c/21c/23ai |
Performance Tuning, RMAN, AWR, ASH,
ADDM, Memory Sizing, Redo, Backups,
Flashback, Patching, Upgrades, Auditing,
DB Links, Data Pump, Troubleshooting,
Architecture, 23ai Features

Brijesh Mehra
ORACLE GOLDEN GATE DBA

ARCHITECTURE & INSTALLATION

1. Explain the detailed internal memory layout of an Oracle process in 21c, including stack, heap, UGA,
and PGA allocation paths.
Oracle 21c processes allocate memory in three key regions: stack (for local function calls), heap (dynamic
memory), and UGA (User Global Area). Dedicated server processes store UGA in PGA, whereas shared
server configurations keep UGA in SGA. PGA contains session-specific memory such as sort space, session
memory, and cursor state. Memory allocation follows OS-level malloc/calloc wrappers optimized by Oracle
memory managers. Process memory is dynamically adjusted via Automatic Memory Management (AMM) and
adaptive algorithms for PGA_AGGREGATE_TARGET in 21c.

2. Describe the internal flow of a query from client to disk, including listener, dispatcher, shared server,
library cache, and buffer cache involvement.
A query request reaches the Oracle listener (via TCP/IP), which hands off to a dispatcher if shared server is
enabled. The dispatcher places the request in the request queue, where a shared server process picks it up.
The query is parsed and optimized using the library cache and shared pool. Data access is managed through
the buffer cache; if not found, physical I/O is triggered to read from disk. The response flows back through the
server process to the client. Wait events and latches ensure concurrency and coordination during execution.

3. How does Oracle 23ai improve upon the traditional SGA resize and dynamic memory tuning
approach?
Oracle 23ai introduces enhanced in-memory adaptive sizing and fine-grained control over memory pools. It
improves upon traditional SGA_TARGET by providing automatic redistribution based on workload telemetry
and usage patterns. Memory advisors now analyze histograms over time for more predictive tuning. Real-time
reallocation of memory components like shared pool, buffer cache, and Java pool is faster and non-disruptive.
Oracle 23ai uses machine learning-based heuristics for managing SGA, reducing the need for manual
adjustments in dynamic workloads.

4. In what cases do SGA_TARGET and MEMORY_TARGET conflict, and how should a production
system prioritize between them?
SGA_TARGET and MEMORY_TARGET can conflict when both are set but values are not harmonized.
MEMORY_TARGET governs the total instance memory (SGA + PGA), while SGA_TARGET controls only the
SGA portion. If MEMORY_TARGET is set, Oracle auto-manages both SGA and PGA; setting SGA_TARGET
independently can lead to unpredictable resizing. In production, prioritize MEMORY_TARGET only if AMM is
stable and OS memory availability is high. Otherwise, use SGA_TARGET and PGA_AGGREGATE_TARGET
separately for granular and predictable control.

5. What happens internally during Oracle Grid Infrastructure installation — process flow, configuration
files, and background processes setup?
During Grid Infrastructure installation, Oracle installs Clusterware binaries, sets up required groups and users,
and configures OCR (Oracle Cluster Registry) and voting disks. It generates essential config files like
crsconfig_params and initializes systemd or init scripts for automatic startup. Once installed, the GI stack starts
core background processes: CRSD (the Cluster Ready Services daemon), OCSSD (the Cluster Synchronization
Services daemon, CSSD), EVMD (the Event Manager daemon), and the ASM instance. The GI stack ensures resource registration
and cluster health monitoring across nodes.

6. Explain the startup sequence of a RAC database from OS boot to full open, including CSSD, OCSSD,
CRSD, ASM, and database resource dependencies.
At OS boot, Oracle High Availability Services (OHASD) is invoked and brings up OCSSD (the CSS daemon), which
handles the low-level disk heartbeat and synchronizes cluster membership via the voting disks. CRSD then starts to manage cluster
resources like ASM and databases. ASM is initialized next to provide storage for datafiles and redo logs. The
database resource profile (defined in OCR) triggers the startup of the database instance via the Grid Agent.
Each node's instance joins the cluster-wide RAC database, progressing through NOMOUNT, MOUNT, and
OPEN states.

7. What is the detailed difference between Oracle 19c and 23ai in terms of memory management and
auto-tuning capabilities?
Oracle 19c uses AMM (Automatic Memory Management) and ASMM (Automatic Shared Memory
Management) with manual tuning advisories. Oracle 23ai enhances this with AI-driven memory analytics,
dynamic resizing, and workload-adaptive tuning. In 23ai, memory pools respond faster to usage trends using
historical workload patterns and ML-based tuning. SGA components like Result Cache and In-Memory Area
now resize more predictively. Also, 23ai introduces self-healing for memory pressure events, reducing manual
intervention compared to 19c.

8. How does Oracle handle automatic memory rebalancing across multiple PDBs in a multitenant 21c
architecture?
In Oracle 21c, memory for PDBs is managed via the Resource Manager and CDB-level memory policies. The
system allocates PGA and SGA quotas to PDBs dynamically based on workload. When memory pressure
occurs, the Database Resource Manager redistributes memory from under-utilized PDBs to those with higher
demands. Each PDB has a minimum guarantee and maximum limit, enforced via container memory policies.
This allows balanced memory usage and performance isolation in a multitenant environment.

9. What low-level OS libraries and system calls does Oracle use during the startup and background
process spawning?
Oracle uses POSIX-compliant system calls like fork(), exec(), mmap(), shmget(), and semop() for spawning and
managing processes and shared memory segments. Libraries such as libpthread (for threading), libaio (for
asynchronous I/O), and libclntsh are invoked. On Linux, Oracle heavily uses epoll and /proc interfaces for event
handling and resource checks. These calls initialize background processes like PMON, SMON, DBWn, LGWR,
and others during instance startup.

10. Explain step-by-step how to configure Oracle restart on a standalone server with role separation for
GI and RDBMS homes.
Start by installing Grid Infrastructure in standalone (Oracle Restart) mode. Configure ASM if used for storage.
Create a separate Oracle Home for the RDBMS and ensure it is registered with GI using srvctl. Set environment
variables (ORACLE_HOME, ORACLE_BASE) separately for both homes. Use srvctl add database and srvctl
add instance to register the database under Oracle Restart. Assign role separation by ensuring different OS
users for GI (grid) and DB (oracle) and using correct ownership and permissions. Enable autostart for resources
via crsctl.
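
A minimal registration sketch for a single-instance database under Oracle Restart, assuming a database named orcl, a 21c home path, and an ASM-stored SPFILE (all names and paths are placeholders):

$ srvctl add database -db orcl \
    -oraclehome /u01/app/oracle/product/21.0.0/dbhome_1 \
    -spfile +DATA/ORCL/spfileorcl.ora -startoption OPEN
$ srvctl start database -db orcl
$ srvctl config database -db orcl     # verify the registration details
$ crsctl status resource ora.orcl.db  # confirm the resource is ONLINE

Running srvctl from the RDBMS home as the oracle OS user, while GI is owned by the grid user, demonstrates the role separation described above.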

RMAN & BACKUPS

11. What is block change tracking in RMAN, and what internal bitmap structure does Oracle use to
maintain it?
Block Change Tracking (BCT) allows RMAN to skip unchanged blocks during incremental backups by using a
change-tracking file. Internally, Oracle maintains a bitmap file that logs block-level changes since the last
backup. Each bit corresponds to a range of blocks and is updated by the CTWR (Change Tracking Writer) background process during normal DML activity. During
backups, RMAN reads only the changed blocks identified via this bitmap, significantly improving incremental
backup performance. The tracking file is located outside the database and can be recreated if corrupted.
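
A typical way to enable and verify BCT (the file path below is only an example):

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
       USING FILE '/u01/app/oracle/bct/orcl_bct.f';
SQL> SELECT status, filename, bytes FROM V$BLOCK_CHANGE_TRACKING;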

12. How does incremental merge backup operate at the block level in Oracle 21c?
Incremental merge backup allows a level 1 backup to be merged into a level 0 image copy, making it current
without a full backup. RMAN identifies changed blocks using BCT and applies only those to the image copy.
This occurs at the block level using RMAN's internal block reader and writer APIs. Over time, the image copy
becomes equivalent to a new full backup. This method reduces backup windows and allows for faster recovery
while keeping disk usage optimal.
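
A commonly used incrementally updated backup script that implements this pattern (the tag name is arbitrary):

RMAN> RUN {
        RECOVER COPY OF DATABASE WITH TAG 'incr_merge';
        BACKUP INCREMENTAL LEVEL 1
          FOR RECOVER OF COPY WITH TAG 'incr_merge'
          DATABASE;
      }

On the first execution the BACKUP command creates the level 0 image copy; on subsequent runs it produces level 1 backups that the RECOVER COPY command rolls into the copy.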

13. Explain what happens if you try to restore an SPFILE from an autobackup when the control file is
also lost.
If both SPFILE and control file are lost, RMAN must first restore the control file from autobackup. Using startup
nomount and setting DBID manually, RMAN searches for the control file autobackup via DBID or default
naming. Once the control file is restored and mounted, the SPFILE can then be restored using RESTORE
SPFILE FROM AUTOBACKUP. Without the control file, RMAN has no knowledge of backup metadata, so
SPFILE restoration must be done only after control file recovery.
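
A hedged sketch of the sequence described above (the DBID is a placeholder that must be obtained from backup logs or autobackup file names):

RMAN> STARTUP NOMOUNT;   # RMAN starts the instance with a dummy parameter file if none exists
RMAN> SET DBID 1234567890;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE SPFILE FROM AUTOBACKUP;

Once a real SPFILE is restored, the instance should be restarted so it runs from its proper parameter file.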

14. Describe how RMAN detects corruption in backup sets and the mechanisms used to repair or skip
them.
RMAN detects corruption during backup and restore using checksums at the block level. It validates datafiles
and backupsets using BACKUP VALIDATE and RESTORE VALIDATE. Detected corrupt blocks are logged in
V$DATABASE_BLOCK_CORRUPTION. If configured, RMAN can skip corrupted blocks using SKIP
CORRUPTION or automatically repair them if valid copies exist in FRA, image copies, or Data Guard. Recovery
Catalog also helps in mapping healthy backups across different incarnations of the database.

15. What’s the difference in performance and internal storage behavior between compressed
backupsets and image copies?
Compressed backupsets use RMAN's internal compression algorithms (BASIC, LOW, MEDIUM, HIGH) to
reduce storage footprint by eliminating redundancy at the block level. They require more CPU but less disk I/O
and space. Image copies, however, are byte-for-byte replicas of datafiles and allow faster recovery due to direct
restore capability. Backupsets are sequential and require RMAN interpretation to extract data, while image
copies can be directly cataloged and switched to by the database during restore operations.

16. How does RMAN use V$BACKUP_DATAFILE and V$DATAFILE_HEADER to perform intelligent
restore decisions?
RMAN queries V$BACKUP_DATAFILE to retrieve the SCN and checkpoint metadata of backed-up datafiles. It
uses V$DATAFILE_HEADER to compare current file state and determine the most efficient restore path. If the
backup SCN matches or exceeds the needed recovery SCN, RMAN restores only those files. This intelligent
selection reduces unnecessary restores and enables point-in-time and tablespace-level recovery. These views
help RMAN validate consistency and avoid incomplete or invalid restores.

17. In Oracle 21c, how can you take an incremental backup of a PDB and recover only that PDB to a
PIT?
To perform point-in-time recovery (PITR) of a PDB, RMAN supports BACKUP INCREMENTAL at the PDB
level. Use BACKUP DATABASE <PDB_NAME> with level 1 or 0 and ensure archive logs are available. For
recovery, issue RECOVER PLUGGABLE DATABASE <PDB_NAME> UNTIL TIME '<timestamp>'. RMAN
handles auxiliary instance creation in the background to isolate and restore the PDB without affecting the CDB.
This allows granular restore while maintaining multi-tenant architecture integrity in 21c.
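
A sketch of PDB point-in-time recovery, assuming a PDB named pdb1 and an illustrative timestamp:

RMAN> ALTER PLUGGABLE DATABASE pdb1 CLOSE;
RMAN> RUN {
        SET UNTIL TIME "TO_DATE('2024-05-01 10:00:00','YYYY-MM-DD HH24:MI:SS')";
        RESTORE PLUGGABLE DATABASE pdb1;
        RECOVER PLUGGABLE DATABASE pdb1;
      }
RMAN> ALTER PLUGGABLE DATABASE pdb1 OPEN RESETLOGS;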

18. Describe the internal consistency checks done by RMAN when duplicating a database across
platforms.
During cross-platform duplication, RMAN validates endian formats and uses DBMS_TDB.CHECK_DB to
ensure transportability. RMAN performs checksum verification and file header validation for structural
compatibility. It checks tablespace read-only status if required, and uses transportable tablespace methods or
incremental cross-platform backups with automatic conversion using FROM PLATFORM. These checks
ensure datafile integrity, endian compatibility, and object-level consistency before proceeding with duplication or
transport.

19. How does Oracle RMAN determine SCN consistency across archive logs and redo logs during
restore?
RMAN uses control file metadata to map archive logs and redo logs to the SCN timeline. It identifies gaps using
log sequence numbers and SCNs from V$ARCHIVED_LOG and V$LOG_HISTORY. During recovery, it aligns
the target SCN with available logs to apply changes sequentially and maintain consistency. If gaps exist, RMAN
prompts for missing logs or switches to datafile media recovery if allowed. SCN continuity is ensured through
strict validation at every log application phase.

20. Explain how parallelism works in RMAN at the slave channel level and how memory distribution
happens for each stream.
RMAN parallelism is defined using ALLOCATE CHANNEL or PARALLELISM parameter in the configuration.
Each channel spawns a slave process responsible for handling a portion of the datafile or backupset. These
channels read/write concurrently, improving performance. Memory distribution across channels is governed by
PGA_AGGREGATE_TARGET, and RMAN dynamically allocates buffers for each stream based on workload.
The V$SESSION_LONGOPS view tracks parallel backup progress and memory consumption at each channel
level.
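
Parallelism can be configured persistently or allocated per run; for example (the channel count is illustrative):

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
RMAN> RUN {
        ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
        ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
        BACKUP DATABASE PLUS ARCHIVELOG;
      }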

LOGICAL BACKUPS & DATAPUMP

21. What internal optimizations were added to Data Pump in Oracle 21c for high-volume schema
exports?
Oracle 21c introduced several Data Pump optimizations for large exports, including parallel metadata filtering,
asynchronous I/O handling, and improved LOB streaming. The ACCESS_METHOD=DIRECT_PATH is further
enhanced to minimize context switching. Additionally, metadata loading has been optimized to avoid
unnecessary locking and reduce enqueue waits. Enhancements in buffer handling and task scheduling across
multiple worker processes lead to faster throughput in high-volume exports.

22. How does metadata filtering work in Data Pump, and what are the internal views and steps
involved?
Metadata filtering in Data Pump is implemented via the INCLUDE and EXCLUDE parameters, internally
processed using KUPF$ and KUPM$ packages. These filters allow selective export/import of specific object
types. Data Pump constructs object dependency trees and filters metadata accordingly before the data
movement phase begins. Views like DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS help
monitor the process, while filtering is resolved using in-memory structures during job execution.

23. In Oracle 23ai, how can you use the new Data Pump features for blockchain table export/import?
Oracle 23ai supports secure export and import of blockchain and immutable tables using Data Pump by
preserving the cryptographic metadata. You must set ACCESS_METHOD=direct_path and ensure
compatibility level is 23 or higher. During import, validation of row chains and signatures is performed to
preserve blockchain integrity. Oracle also includes checksum and audit trail reconstruction during import to
ensure no tampering or loss occurs.

24. How is Transportable Tablespace (TTS) validated internally before being plugged into a target CDB?
Before plugging TTS into a target CDB, Oracle uses DBMS_TTS.TRANSPORT_SET_CHECK to validate
structural consistency. It checks for self-contained objects, valid data dictionary entries, and verifies platform
endian format. Additionally, RMAN CONVERT may be used for cross-endian movement. Internally, Oracle also
verifies tablespace metadata using the control file and datafile headers to ensure compatibility and prevent
corruption during plugin.
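
The self-containment check typically looks like this (tablespace names are placeholders); TRANSPORT_SET_VIOLATIONS must return no rows before the set is transported:

SQL> EXEC DBMS_TTS.TRANSPORT_SET_CHECK('USERS_TS,APP_DATA_TS', TRUE);
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;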

25. Describe step-by-step how Oracle ensures data consistency during expdp when consistent=y and
multiple workers are used.
When consistent=y is set in expdp (legacy-mode syntax that Data Pump translates into FLASHBACK_TIME=SYSTIMESTAMP), Oracle establishes a consistent snapshot at a single read-consistent SCN. All
worker processes attach to this SCN and perform data extraction without locking source tables. Oracle uses
undo information to rollback uncommitted changes and ensure snapshot isolation. Each worker processes its
object set under the same snapshot to maintain global consistency. The master table tracks SCN adherence
and ensures job-level consistency across all threads.
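
In current releases the same effect is usually requested explicitly; a sketch in which the credentials, directory object, and file names are placeholders:

$ expdp system@orclpdb schemas=HR directory=DP_DIR dumpfile=hr_%U.dmp \
    parallel=4 flashback_time=systimestamp logfile=hr_exp.log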

26. What are the differences between legacy exp/imp and expdp/impdp in terms of SGA usage and
buffer management?
Legacy exp/imp tools relied on SQL*Net and buffer cache, causing heavy SGA and shared pool usage,
especially for large exports. expdp/impdp uses server-side processes with direct path reads and Data Pump
worker APIs, reducing dependency on SQL cursors and improving memory efficiency. Data Pump also
introduces buffer queues and memory-efficient streams, allowing better resource utilization and parallel
processing, which the legacy tools could not support natively.

27. How does Data Pump parallelism impact undo usage in large schema-level exports?
Data Pump parallelism uses multiple threads to read from consistent SCNs, which increases undo read activity
but not undo generation. Oracle reads undo segments to reconstruct data at the consistent SCN across
workers. If parallelism is high and undo_retention is low, it may cause ORA-01555 errors. Sizing undo
tablespace and using DATA_ACCESS_METHOD=direct_path can minimize this risk by bypassing undo for
certain data types.

28. What is the role of SYS.KUPM$ views during a Data Pump job and how do you interpret them for
tuning?
The SYS.KUPM$ views provide real-time insight into Data Pump job status and performance. Key views
include KUPM$DATAPUMP_JOBS, KUPM$DATAPUMP_WORKERS, and KUPM$MESSAGE_QUEUE.
These views expose worker activity, buffer sizes, processed rows, and job phases. They are useful for
identifying bottlenecks in metadata loading, data streaming, or network I/O. Tuning can be done by adjusting
PARALLEL, ESTIMATE, or buffer sizes based on observations from these views.

29. How does LOB storage impact Data Pump performance and how does SECUREFILE vs BASICFILE
affect it?
LOBs stored as SECUREFILE provide better performance for Data Pump operations due to support for
deduplication, compression, and efficient chunking. BASICFILE LOBs often result in slower exports and imports
as they lack these optimizations and require full data scans. SecureFiles also support LOB_STORAGE options
that Data Pump leverages for faster streaming. However, they may require more CPU and memory. Properly
sizing LOB cache and enabling parallelism helps improve LOB handling.

30. How can you clone a schema using Data Pump in Oracle 21c while remapping storage parameters
and tablespaces?
To clone a schema, use expdp with SCHEMAS=<source> and impdp with REMAP_SCHEMA,
REMAP_TABLESPACE, and REMAP_DATAFILE. You can also use
TRANSFORM=SEGMENT_ATTRIBUTES:N to avoid copying original storage parameters. Oracle maps
tables, indexes, and objects to the new schema and tablespaces during import while preserving metadata
integrity. Ensure roles, privileges, and directory access are pre-created in the target before running impdp.
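
A minimal clone sketch, assuming schema HR is cloned to HR_CLONE with a tablespace remap (all names are placeholders):

$ expdp system@orclpdb schemas=HR directory=DP_DIR dumpfile=hr.dmp logfile=hr_exp.log
$ impdp system@orclpdb directory=DP_DIR dumpfile=hr.dmp logfile=hr_imp.log \
    remap_schema=HR:HR_CLONE remap_tablespace=USERS:USERS2 \
    transform=segment_attributes:n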

MEMORY, PGA/SGA & BUFFER TUNING

31. Explain how to read V$SGA_RESIZE_OPS and its tuning implications for dynamic memory
components.
V$SGA_RESIZE_OPS shows historical SGA component resize attempts, successes, and failures. Key
columns include COMPONENT, OPER_TYPE, INITIAL_SIZE, TARGET_SIZE, and FINAL_SIZE. Monitoring this helps detect
frequent resizing of shared pool or buffer cache. High operation count indicates instability in memory distribution.
If COMPONENT resizes frequently, it may signal pressure from PGA or other areas. Use this to adjust
SGA_TARGET or fix under-allocated pools. Frequent SHRINK operations can impact performance.
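
A simple query to spot unstable components (columns and ordering can be adjusted as needed):

SQL> SELECT component, oper_type, status,
            initial_size/1024/1024 AS init_mb,
            final_size/1024/1024   AS final_mb,
            start_time
       FROM v$sga_resize_ops
      ORDER BY start_time DESC;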

32. What does V$PGA_TARGET_ADVICE reveal and how do you apply it in high OLTP workloads?
V$PGA_TARGET_ADVICE gives simulated memory allocation efficiency at different PGA_TARGET sizes.
Columns like ESTD_OVERALLOC_COUNT and ESTD_PGA_CACHE_HIT_PERCENTAGE show over-allocation risk and
the expected cache efficiency at each simulated size. For OLTP systems, choose the smallest target where the estimated cache hit percentage is high, ideally >90%. Too low a
PGA_TARGET leads to disk spills. Use this view during peak load to determine proper sizing. Adjust
PGA_AGGREGATE_TARGET based on desired performance and server capacity.
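
A quick way to read the advisory (sizes reported in MB):

SQL> SELECT pga_target_for_estimate/1024/1024 AS target_mb,
            estd_pga_cache_hit_percentage,
            estd_overalloc_count
       FROM v$pga_target_advice
      ORDER BY pga_target_for_estimate;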

33. Describe the interaction between shared pool latch contention and parse-to-hard parse ratios.
High latch contention in the shared pool often results from excessive hard parsing. A low parse-to-hard parse
ratio indicates poor cursor reuse. This contention affects concurrency and CPU. Monitoring V$LATCH and
V$SQLAREA can help diagnose. Frequent allocations, cursor pinning, and invalidations intensify latch waits.
Tuning involves using bind variables, increasing shared pool size, and enabling session cursor caching.
Avoiding dynamic SQL also reduces contention.

34. How can you monitor and resolve excessive reloads in the library cache using X$ views?
Use X$KSMLRU and V$LIBRARYCACHE to monitor reloads. High RELOADS/GETS ratio indicates memory
pressure in the shared pool. Reloads happen when objects are aged out due to insufficient memory. Increasing
SHARED_POOL_SIZE or enabling automatic memory management can help. Also review cursor sharing
behavior and pin frequently used packages. Session tracing with EVENT 10046 or AWR can show SQLs
causing the pressure. Pinned objects reduce reload risk.

35. In Oracle 21c, how does the In-Memory column store impact traditional buffer cache optimization?
The In-Memory column store (IMCS) enables parallel vector processing on compressed columnar formats. This
reduces reliance on buffer cache for analytical queries. Frequently scanned data bypasses traditional buffer
reads, lowering buffer cache I/O. This separation optimizes hybrid workloads—OLTP via buffer cache and
analytics via IMCS. It also minimizes cache pollution. However, memory sizing must account for both caches.
Use V$INMEMORY_AREA and AWR for visibility.

36. Describe how automatic PGA management behaves when multiple sessions exceed the aggregate
PGA target.
When total PGA usage exceeds PGA_AGGREGATE_TARGET, Oracle starts spilling to disk using TEMP
segments. The memory manager dynamically allocates optimal and maximum PGA per session. If usage hits
high-water marks, Oracle reduces allocations, prioritizing foreground processes. V$PROCESS and
V$SQL_WORKAREA_ACTIVE show usage patterns. For heavy batch workloads, PGA_AGGREGATE_LIMIT
acts as a hard cap. Overuse triggers ORA-4030 or performance degradation due to I/O.

37. What’s the impact of high multi-versioning in undo segments on buffer cache and PGA
performance?
High multi-versioning leads to increased undo block reads, consuming buffer cache space. Long-running
queries increase undo retention needs, keeping undo blocks active longer. This affects cache churn and
memory pressure. For the PGA, rollbacks and consistent reads may require additional memory to reconstruct
versions. Monitor with V$UNDOSTAT and DBA_HIST_UNDOSTAT. Solutions include tuning undo_retention
and segment usage, and avoiding unnecessary long queries.

38. Explain the use of V$MEM_DYNAMIC_COMPONENTS and how it helps in analyzing pressure on
memory areas.
V$MEM_DYNAMIC_COMPONENTS shows current and target sizes for SGA components under automatic
memory management. It helps identify memory-starved areas by comparing CURRENT_SIZE vs
TARGET_SIZE. Frequent differences signal resizing activity. Use this view to diagnose memory contention,
such as buffer cache shrinking to feed the shared pool. It also helps validate settings like SGA_TARGET and
MEMORY_MAX_TARGET. Consistent shrink/grow patterns may require manual tuning.

39. How do you determine memory pressure across RAC nodes using GV$ memory-related views?
GV$SGASTAT, GV$PGASTAT, and GV$MEM_DYNAMIC_COMPONENTS show memory usage per
instance. Comparing these across nodes reveals imbalance in component allocations or workload. Look for
uneven buffer cache or shared pool consumption. Also monitor GV$ACTIVE_SESSION_HISTORY for
memory-related waits. Memory pressure on one node may result in global cache inefficiencies. Adjust instance
memory parameters or rebalance sessions. Use AWR reports to track historical pressure.

40. Explain how to trace memory allocation per SQL_ID using fixed tables or heap dump analysis.
Use V$SQL_WORKAREA_ACTIVE and V$SQL_PLAN to trace memory usage per SQL_ID. For deep
analysis, enable heap dump using ALTER SESSION SET EVENTS 'immediate trace name heapdump level 2';
and filter by SQL_ID. This reveals memory heaps and structures used by that SQL. Trace files show allocations
in UGA, PGA, and SGA. Combine with ASH to correlate memory spikes with SQL execution. Heap dumps help
diagnose memory leaks or excessive allocation.

REDO LOGS, CHECKPOINTING & SIZING

41. What factors influence redo log sizing in a high-volume RAC setup?
Redo log sizing in RAC depends on transaction volume, checkpoint interval, and log switch frequency. Under-
sizing causes frequent switches and contention. Use log file size advisory in AWR to determine optimal sizes.
Each thread requires separate redo logs. LGWR and ARCH performance also guide sizing. Sizing must
consider Data Guard if enabled. Target 15–30 minutes per log switch. High redo generation from batch jobs
may need larger logs.

42. Describe the process of redo allocation from client memory to redo log file — including LGWR and
LNS behavior.
Redo is generated in user sessions and written to the log buffer in SGA. LGWR writes log buffer contents to
redo log files based on commit, timeout, or space thresholds. If Data Guard is configured, LNS picks up redo
and sends to standby. In RAC, each instance has a thread of redo. LGWR coordinates with GES and GCS for
log file access. Redo entries are flushed in SCN order to preserve consistency.

43. How does Oracle manage standby redo log conflicts in cascaded standby topologies?
Oracle assigns separate SRLs per standby thread. In cascaded setups, SRLs must match or exceed redo
thread count. Conflicts occur if SRLs are undersized or shared incorrectly. ORA-00313 or log gaps may occur.
Oracle manages SRLs using ARCH and RFS processes. Monitor with V$STANDBY_LOG and
V$ARCHIVE_DEST_STATUS. Proper configuration includes setting LOG_FILE_NAME_CONVERT and
ensuring enough SRLs on all downstream standbys.

44. What is the impact of log buffer waits on commit latency, and how do you trace it using wait
events?
Log buffer waits occur when LGWR is slow or log buffer is full. It directly delays commits. Sessions wait on log
file sync and log buffer space events. Trace using V$SESSION_WAIT, ASH, and AWR reports. Increasing log
buffer size or tuning LGWR I/O improves performance. Monitor log file I/O latency with V$IOSTAT_FILE. CPU
starvation and contention also worsen the delay. Avoid excessive commits in rapid succession.

45. How do you monitor and fix excessive log switches that affect performance?
Use V$LOG_HISTORY and AWR reports to track switch frequency. High switches cause frequent checkpoints
and increased I/O. Check alert logs for messages on log switch frequency. Increase redo log size to reduce
switch count. Target 4–5 switches per hour. Use DB_CREATE_ONLINE_LOG_DEST_x for auto sizing in new
setups. RAC databases must have matched redo log sets per instance. Avoid log switches during peak load.
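
A simple query to measure switch frequency per hour over the last week:

SQL> SELECT TO_CHAR(first_time,'YYYY-MM-DD HH24') AS hour,
            COUNT(*) AS log_switches
       FROM v$log_history
      WHERE first_time > SYSDATE - 7
      GROUP BY TO_CHAR(first_time,'YYYY-MM-DD HH24')
      ORDER BY 1;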

46. What are the internals of online log file recovery and how does Oracle maintain atomic
consistency?
During recovery, Oracle replays redo from current and archived logs to restore committed transactions. Crash
recovery involves roll forward using redo and rollback using undo. Redo log blocks have checksums and SCNs
for consistency. Oracle uses LGWR to guarantee all committed data is flushed to redo. DBWR handles datafile
consistency. Redo logs ensure atomicity via write-ahead logging and group commits. Oracle protects recovery
with multiplexed logs.

47. How does Adaptive Log File Sync tuning work in Oracle 21c?
Oracle 21c enhances log sync by dynamically adjusting LGWR wake-up behavior. Based on workload patterns,
it switches between post/wait and polling modes. It minimizes commit latency by tuning wakeup thresholds.
V$EVENTMETRIC and V$LATCH help monitor sync performance. Oracle evaluates redo generation and IOPS
to determine optimal sync timing. This reduces CPU overhead and improves commit throughput.
DBMS_LOGMNR can validate sync effectiveness.

48. What are redo transport compression features and when should they be enabled?
Redo compression is used in Data Guard to reduce network usage. Enabled via LOG_ARCHIVE_DEST_n
COMPRESSION=ENABLE. Useful when network bandwidth is limited or redo volume is high. Compression
applies to redo data shipped to standby, not primary disk writes. It introduces CPU overhead. Use when latency
exceeds acceptable limits or WAN is used. Monitor performance via V$ARCHIVE_DEST_STATUS and Data
Guard transport lag.
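
A sketch of enabling compression on an existing Data Guard destination (the service name and DB_UNIQUE_NAME are placeholders; the COMPRESSION attribute requires the Advanced Compression option):

SQL> ALTER SYSTEM SET log_archive_dest_2 =
       'SERVICE=stby_db ASYNC COMPRESSION=ENABLE VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby_db'
       SCOPE=BOTH;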

49. How does Oracle handle out-of-order log sequence numbers in logical standby?
In logical standby, logs must arrive in order for SQL apply. Out-of-order logs delay apply processes. Oracle
buffers and reorders if possible. Severe disorder triggers ORA-16106 or apply lag. ARCH and LNS configuration
should preserve sequence. Use DGMGRL VALIDATE to confirm transport integrity. Logical standby uses
metadata for dependency resolution. Resync may be required for persistent misalignment.

50. Describe the redo transport layer security mechanism in 19c and how encryption is enforced.
Oracle 19c uses redo transport encryption with Oracle Net Services. Enabled via LOG_ARCHIVE_DEST_n
parameters with ENCRYPTION=ON. Secure communication is established using SSL or Oracle native
encryption. The transport layer ensures data confidentiality across untrusted networks. Wallets and certificates
are used for identity validation. Oracle checks TNS encryption policies during ARC/LNS communication.
Monitoring is done via V$ARCHIVE_DEST_STATUS.

AWR, ASH & ADDM


51. How do you correlate AWR snapshots to specific user complaints about slowness?
Capture the exact timestamp of the user complaint and match it to the closest BEGIN_INTERVAL_TIME and
END_INTERVAL_TIME in DBA_HIST_SNAPSHOT. Generate an AWR report covering that interval. Cross-
reference SQL_IDs, wait events, and top sessions from the report with application logs or
V$ACTIVE_SESSION_HISTORY to identify relevant workload. This isolates high-load periods and problematic
SQLs tied to the complaint.
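
A quick way to find the snapshot pair bracketing the complaint (the timestamp literal is an example):

SQL> SELECT snap_id, begin_interval_time, end_interval_time
       FROM dba_hist_snapshot
      WHERE TIMESTAMP '2024-05-01 14:30:00'
            BETWEEN begin_interval_time AND end_interval_time;

The report itself can then be generated for that snapshot pair with @?/rdbms/admin/awrrpt.sql.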

52. What are the key sections in an AWR report you must analyze in a high-load batch system?
Focus on "Load Profile," "Top Timed Events," and "SQL ordered by Elapsed Time" to identify pressure points.
Check "Instance Efficiency Percentages" for parsing, buffer cache, and redo efficiency. Analyze "Time Model
Statistics" and "I/O Stats" for bottlenecks. For batch systems, also review "Advisory Statistics" to assess PGA,
buffer cache, and parallel execution. Investigate segments causing highest I/O and LIOs.

53. How do you identify contention between SQLs in ASH using blocking_session and event columns?
Use V$ACTIVE_SESSION_HISTORY to filter sessions with non-null BLOCKING_SESSION and event types
like "enq: TX" or "row lock contention". Join ASH rows on SQL_ID and SESSION_ID to determine which SQLs
are blocked and which are blockers. Time correlation across multiple sessions helps reveal blocking chains.
Visual tools like OEM Top Activity graph or manual pivoting in SQL can highlight peak contention windows.
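
A starting-point query for blocking analysis (filters can be widened to other enqueue events):

SQL> SELECT sample_time, session_id, blocking_session, sql_id, event
       FROM v$active_session_history
      WHERE blocking_session IS NOT NULL
        AND event LIKE 'enq: TX%'
      ORDER BY sample_time;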

54. Explain the background processes and mechanisms that gather and flush AWR/ASH data.
MMON (Manageability Monitor) and MMNL (Manageability Monitor Light) are responsible for capturing
performance statistics and flushing them to AWR at defined intervals. MMON gathers snapshots and ADDM
findings, while MMNL flushes ASH samples from memory to disk. Data is stored in WRH$ tables under
SYSAUX. AWR snapshots are taken every 60 minutes by default (ASH samples are flushed more frequently by MMNL), and snapshots can also be triggered manually.

55. What differences exist between AWR and Statspack in terms of granularity and performance
overhead?
AWR captures data at finer granularity and includes wait events, time models, and system metrics not available
in Statspack. AWR is integrated with Oracle Enterprise Manager and provides ADDM analysis. Statspack is
manual, lacks ASH integration, and imposes more overhead due to coarse locking. AWR data is flushed
automatically by MMON, while Statspack snapshots require DBMS_JOB or manual execution.

56. In ADDM reports, how do you differentiate between symptoms and root causes of performance
issues?
Symptoms include observations like high CPU, high parsing, or excessive waits. Root causes are underlying
problems such as unoptimized SQL, I/O contention, or latch issues. ADDM findings are prioritized with impact
percentages—focus on highest-impact causes. Review the "Findings," "Recommendations," and "Rationale"
sections carefully. True root causes often relate to configuration, resource contention, or poorly designed SQL.

57. What are the limitations of ADDM in a multitenant environment, and how do you supplement them?
ADDM operates at the CDB level by default and may not capture individual PDB workload accurately. Granular
performance issues inside PDBs can go undetected. Supplement with AWR for PDBs (AWR_PDB) and per-
PDB ASH analysis. Use DBMS_PERF and DBMS_WORKLOAD_REPOSITORY APIs with CON_ID filters.
Manually analyze wait events and SQL metrics at the PDB level to compensate for ADDM’s cross-container
aggregation.

58. How do you extract ASH data manually from DBA_HIST_ACTIVE_SESS_HISTORY using custom
time ranges?
Query DBA_HIST_ACTIVE_SESS_HISTORY using filters on SAMPLE_TIME, SQL_ID, EVENT, or
SESSION_ID. For example:
SELECT * FROM DBA_HIST_ACTIVE_SESS_HISTORY
 WHERE SAMPLE_TIME BETWEEN :start_time AND :end_time;
Use TO_DATE or TO_TIMESTAMP for accurate timestamp matching. This provides session-level activity over
historical intervals, even after the in-memory samples have been flushed. You can aggregate top wait events or SQLs over
defined time slices to identify hotspots.

59. What’s the role of the WRH$ tables in historical diagnostics and how do you purge them safely?
WRH$ tables store AWR history in the SYSAUX tablespace. They capture stats on SQLs, events, I/O, and
memory. Use DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS to control retention
and interval. Purge using DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE. Ensure no
diagnostics, baselines, or reports rely on purged data. Regular purging helps control SYSAUX bloat. Backup
prior to purging in sensitive environments.
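
Example retention and purge calls (retention and interval are in minutes; snapshot IDs are placeholders):

SQL> EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 43200, interval => 30);
SQL> EXEC DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(low_snap_id => 100, high_snap_id => 150);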

60. How do you use DBA_ADDM_FINDINGS to perform automated tuning reviews?


Query DBA_ADDM_FINDINGS for top findings by IMPACT_PCT. Each row includes the problem, impact, and
recommended action. Join with DBA_ADDM_RECOMMENDATIONS for suggested fixes. Use filters like
TASK_NAME or BEGIN_SNAP_ID for specific intervals. Automate report extraction using SQL scripts or
Enterprise Manager jobs. Findings give a quick health overview of the database without needing full AWR
analysis.

UPGRADES, PATCHING, FLASHBACK

61. Describe the step-by-step process of a multitenant upgrade using dbupgrade in silent mode.
Start by running preupgrade.jar from the new Oracle home. Back up the entire CDB and PDBs. Adjust init.ora or
SPFILE as per recommendations. Use dbupgrade -silent from the new Oracle home, specifying PDB upgrade
scope with -pdbs or -cdb. Post-upgrade, run datapatch and then utlrp.sql. Finally, validate with
dba_registry_sqlpatch and recompile invalid objects. Test PDB functionality before production cutover.

62. What are the risks of applying OJVM patches on production and how do you minimize impact?
OJVM patches may invalidate PL/SQL, Java, or scheduler objects. They often require downtime and can fail if
dependencies aren’t met. Use precheck with datapatch -verbose. Schedule patching during low activity
windows. Always apply OJVM alongside CPU patches to maintain compatibility. Validate with dba_registry and
Java object recompilation. Avoid using OJVM if not required. Backups are mandatory before applying.

63. Explain the internal behavior of Flashback Database and how it stores before-images in flashback
logs.
Flashback Database stores block-level before-images in flashback logs in the fast recovery area. The RVWR
(Recovery Writer) background process periodically copies changed blocks from the buffer cache into the flashback logs. During a flashback, Oracle
rewinds datafiles to a previous SCN using these logs. It doesn’t affect redo logs or undo directly. The flashback
log format is proprietary and optimized for fast rewind performance.

64. What are the scenarios where Guaranteed Restore Points fail and how can you prevent that?
Guaranteed Restore Points (GRPs) fail if the fast recovery area is full or flashback logs are lost. They also fail if
datafiles go offline or are dropped. Prevent issues by allocating sufficient FRA space and monitoring usage with
V$RECOVERY_FILE_DEST. Respond to FRA space alerts promptly and set DB_FLASHBACK_RETENTION_TARGET appropriately. Refrain from DDL
operations that drop or move critical segments after GRP creation.
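
Creating and monitoring a GRP typically looks like this (the restore point name is arbitrary):

SQL> CREATE RESTORE POINT before_patch GUARANTEE FLASHBACK DATABASE;
SQL> SELECT name, guarantee_flashback_database, storage_size
       FROM v$restore_point;
SQL> SELECT space_used, space_limit FROM v$recovery_file_dest;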

65. How do you plan and validate a downgrade from Oracle 21c to 19c with multitenant?
Check downgrade support in MOS. Run preupgrade and downgrade scripts (dbdowngrade) after full RMAN
backup. Remove 21c-specific features like blockchain or auto ML. Downgrade must be done CDB-wide; PDB-
only downgrade isn’t allowed. Validate with dba_registry and utlrp.sql. Restore catalog compatibility and patch
levels. Run catcon.pl for script execution across containers. Test downgrade in staging prior to production.

66. What are the new patching modes in 23ai (like hot patching) and how are they internally applied?
Oracle 23ai introduces hot patching for certain components like SQL engine and PL/SQL. Patches are applied
in-memory without database restart. It uses library replacement and code patching at runtime. Use datapatch
with -hotpatch flag or apply via Fleet Patching and Provisioning (FPP). Internally, the patched objects are
redirected via memory patch layers. Only supported patches can use this method, reducing downtime
significantly.

67. How do you detect and fix a failed datapatch post-upgrade scenario?
Check logs under $ORACLE_HOME/cfgtoollogs/datapatch. Look for ORA- or patch apply errors. Validate with
dba_registry_sqlpatch and opatch lsinventory. Common issues include missing OJVM, permissions, or invalid
objects. Rerun datapatch -verbose as SYSDBA. Fix prerequisite issues like invalid schema, ORACLE_HOME
ownership, or environment misconfigurations. Use APEX or Java recompilation scripts if needed. Always
ensure DB is open before retrying.

68. How does the presence of Data Guard affect patch application and flashback configuration?
Patching in Data Guard setups requires standby sync management. Use dgmgrl to disable apply and
switchover roles if needed. Flashback must be enabled on both primary and standby for rollback support.
Ensure SRLs and FRA exist on standby. Apply patches to standby first to reduce downtime via switchover.
Flashback aids rollback during patch failure. Test patch integrity before enabling recovery on standby.

69. How do you apply Release Updates (RU) using OPatchAuto on GI and RDBMS homes in silent
mode?
Run opatchauto apply with the patch directory from root on each node. Use -silent and -oh for specifying
ORACLE_HOME and -local for node-level operation. OPatchAuto coordinates patch application, service
relocation, and root script execution. Ensure Grid Infrastructure and RDBMS are at compatible patch levels.
Precheck with opatchauto apply -analyze. Monitor logs under $ORACLE_HOME/cfgtoollogs/opatchauto.
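
A minimal sketch of a node-level RU application, assuming a placeholder GRID_HOME and an unzipped patch staged under /stage/ru_patch:

# run as root on each node
$GRID_HOME/OPatch/opatchauto apply /stage/ru_patch -analyze
$GRID_HOME/OPatch/opatchauto apply /stage/ru_patch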

70. What’s the role of rollback segments in flashback undo and how do they differ from normal undo?
Flashback does not use rollback segments directly. It uses undo tablespace for Flashback Query (using
UNDO_RETENTION) and flashback logs for Flashback Database. Normal undo supports consistent read and
rollback, while flashback undo reconstructs past states using before-images. Rollback segments existed pre-
UNDO tablespace era and are now obsolete. Flashback logs store historical changes, while undo stores
transactional state for active sessions.

USER MGMT, SECURITY, AUDITING

71. Explain the internal structure of SYS.USER$ table and how it's tied to authentication.
The SYS.USER$ table stores core user information including usernames, hashed passwords, account status,
and default schema. It is the foundational table Oracle consults during authentication to validate credentials. The
PASSWORD column holds encrypted passwords using Oracle’s hashing algorithms. ASTATUS indicates user
lock or expiry. USER# acts as a primary key linking to privileges and roles across internal tables. Upon login,
Oracle compares input credentials with USER$ entries. This table is critical for both local and external
authentication mechanisms. Any changes here directly affect user access and security enforcement.

72. How does Unified Auditing work and what are the hidden performance pitfalls?
Unified Auditing consolidates multiple audit types into a single framework, logging events into
SYS.UNIFIED_AUDIT_TRAIL. It leverages the SGA for buffering audit records before writing to disk, reducing
overhead compared to traditional auditing. However, large volumes of audit data can cause I/O contention and
increased parsing time during analysis. Improperly scoped audit policies may generate excessive audit noise,
impacting performance. The audit trail can grow quickly, requiring regular purging or archiving. Audit-related
waits and CPU usage may increase under heavy transactional loads. Proper policy design and monitoring are
essential to balance security with performance.

73. What are best practices for hardening SYSDBA accounts in a multitenant environment?
Restrict SYSDBA privileges strictly to essential personnel and avoid shared credentials. Use external
authentication or strong password files with OS-level protections. Disable remote SYSDBA connections unless
absolutely necessary. In multitenant setups, delegate administrative duties via local users instead of granting
SYSDBA at the CDB level. Enable auditing specifically for SYSDBA activity to track changes. Implement role
separation and enforce multi-factor authentication where feasible. Regularly rotate passwords and monitor
access logs. Follow Oracle security patches and guidelines to minimize attack vectors.

74. What’s the difference between common users and local users in CDB architecture in 21c?
Common users are created in the root container and exist across all PDBs, with usernames prefixed by C## by
default. They can manage and connect to multiple PDBs and hold roles spanning containers. Local users reside
only within individual PDBs and manage PDB-specific resources and privileges. Common users enable
centralized administration and security policies across the entire CDB. Local users facilitate delegated
administration and isolation. Authentication and authorization scopes differ accordingly. Role grants to common
users do not propagate to local users unless explicitly granted. This design supports flexible multitenant security
models.

75. How can you audit user-level DMLs without using Fine Grained Auditing (FGA)?
Standard auditing can be enabled using the AUDIT command for INSERT, UPDATE, and DELETE at table
level. This creates audit records in the unified or traditional audit trail. It captures who performed the DML and
when but lacks row-level detail. Triggers may be implemented to log more granular data but with overhead.
Using database application context can help correlate user sessions to audit events. Supplementary tools like
LogMiner or Streams can provide deeper transaction insights. Proper filtering and retention policies are needed
to avoid audit trail bloat. This approach is simpler but less precise than FGA.
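
A sketch assuming traditional or mixed-mode auditing is in effect (under pure unified auditing an equivalent object policy would be created with CREATE AUDIT POLICY instead); the table name is a placeholder:

SQL> AUDIT INSERT, UPDATE, DELETE ON hr.employees BY ACCESS;
SQL> SELECT event_timestamp, dbusername, action_name, object_schema, object_name
       FROM unified_audit_trail
      WHERE object_name = 'EMPLOYEES'
      ORDER BY event_timestamp DESC;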

76. How does Oracle validate external password files (orapwd) and what are the security implications?
Oracle uses the orapwd file to authenticate privileged users such as SYSDBA remotely. The file contains
hashed credentials protected by file system permissions. Validation occurs by matching credentials against this
file during remote logins when REMOTE_LOGIN_PASSWORDFILE is enabled. Different modes (EXCLUSIVE,
SHARED) control concurrent user access. The password file is static and must be updated manually to rotate
credentials. Improper file protection risks unauthorized SYSDBA access. Best practice is to restrict file
permissions and use stronger authentication methods such as LDAP or Kerberos. Backup and restore
procedures must include secure handling of this file.

77. What is the structure and internal usage of WRL directories in auditing and wallet config?
WRL (Wallet Resource Locator) directories store Oracle wallets, which securely hold encryption keys and
certificates for TDE, SSL, and other security functions. Wallets are stored as cwallet.sso or ewallet.p12 files,
providing auto-login capabilities without user input. Oracle references the wallet location via sqlnet.ora
parameters during startup and runtime. Proper OS-level permissions ensure only authorized processes access
the wallet. Wallets enable transparent encryption operations and secure communication. Changes to wallet
contents require re-opening or restarting dependent services. Regular wallet backups and strong access
controls are critical to maintain data security.

78. How do you configure LDAP and Kerberos authentication in Oracle RAC environments?
LDAP integration involves configuring Oracle Internet Directory or Active Directory and setting Oracle Net to use
LDAP for user lookups in sqlnet.ora. RAC nodes require consistent LDAP client configuration. Kerberos setup
requires creating service principals, keytabs, and configuring krb5.conf. Oracle validates Kerberos tickets to
authenticate users securely. RAC nodes must share Kerberos credentials and synchronize time for ticket
validation. Integration provides centralized authentication, reducing password management overhead. Using
wallets or credential stores can facilitate secure authentication. Both methods enhance security and simplify
user management in clustered environments.

79. What’s the purpose of REDACTION policies and how are they enforced at parse/execute time?
Redaction policies mask sensitive data in query results dynamically without modifying stored data. Using
DBMS_REDACT, DBAs define columns, redaction types, and conditions for masking. Oracle applies policies at
parse time by rewriting queries to obscure data transparently. This prevents unauthorized users from viewing
confidential information. Redaction supports full, partial, and random masking methods. Enforcement does not
affect data storage or query plans. It is applied only during data retrieval and respects user privileges. Redaction
policies can be audited and must be carefully designed to balance security and usability.
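
A minimal full-redaction policy sketch (schema, table, column, and policy names are placeholders):

SQL> BEGIN
       DBMS_REDACT.ADD_POLICY(
         object_schema => 'HR',
         object_name   => 'EMPLOYEES',
         column_name   => 'SALARY',
         policy_name   => 'mask_salary',
         function_type => DBMS_REDACT.FULL,
         expression    => '1=1');
     END;
     /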

80. Explain the role of DBA_COMMON_AUDIT_TRAIL and how to filter noise during forensic
investigations.
DBA_COMMON_AUDIT_TRAIL consolidates audit records from multiple audit sources into a unified view. It
contains detailed information about user actions, object access, and timestamps. Forensic investigations require
filtering out background jobs, system processes, and irrelevant audit entries to focus on suspicious activities.
Filters based on usernames, actions, timestamps, and return codes improve signal-to-noise ratio. Applying audit
policies to scope important events and excluding noise proactively helps. Exporting audit data to external SIEM
or analytics platforms enhances investigation efficiency. Regular audit trail maintenance ensures manageable
dataset size and quick query response.

DB LINK, FLASHBACK, ODBC, 23AI

81. What’s the internal architecture of DB link resolution across multiple CDBs?
Database links (DB links) allow sessions in one database to access objects in another transparently. In a
multitenant environment, DB link resolution involves connecting from a PDB to the remote container or PDB.
The link stores connection credentials and network details. Oracle resolves the DB link using the local service
name, then authenticates remotely. Cross-container DB links require proper grants and sometimes common
user accounts for access. Session context, including container information, is propagated to maintain security.
Oracle manages connection pooling and failover for DB links in RAC. The architecture ensures transparent
distributed query execution.
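
A basic private DB link sketch (the link name, remote user, password, service name, and queried table are placeholders):

SQL> CREATE DATABASE LINK sales_link
       CONNECT TO app_ro IDENTIFIED BY "example_password"
       USING 'SALESPDB';
SQL> SELECT order_id, status FROM orders@sales_link FETCH FIRST 10 ROWS ONLY;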

82. How does ODBC connectivity work for Oracle using heterogeneous services — detail the flow.
Oracle’s Heterogeneous Services enable ODBC clients to access non-Oracle databases transparently. An
Oracle database acts as a gateway, translating SQL requests from ODBC into native queries. The client sends
SQL via ODBC driver to the Oracle listener. Oracle dispatches the request to the Heterogeneous Services
agent, which connects to the remote system using ODBC. Results are fetched, translated back into Oracle-
compatible formats, and returned to the client. Oracle maintains session and transaction context to ensure
consistency. The agent handles data type conversions, error mappings, and security context translation. This
architecture supports transparent heterogeneous access.

83. Describe how flashback archive stores historical data and the role of undo segments.
Flashback Archive captures historical data changes to enable long-term auditing and temporal queries. It stores
versions of table rows in a dedicated history table managed by the Oracle Flashback Data Archive feature.
Undo segments store recent transactional changes used for short-term flashback operations. Flashback Archive
leverages undo for immediate flashback, but archives older data to persistent storage for long retention. This
separation allows efficient historical querying without undo overhead growth. Oracle manages purge policies to
remove aged flashback data. Flashback Archive enforces read consistency and supports compliance
requirements for data retention.

84. What differences are introduced in Oracle 23ai for AI vector data types or JSON search
enhancements?
Oracle 23ai introduces native support for AI vector data types optimized for machine learning workloads,
enabling efficient storage and querying of high-dimensional vectors. JSON search capabilities are enhanced
with improved indexing strategies and faster predicate evaluation. The database offers new SQL macros and
functions for AI inferencing directly inside the engine. This reduces data movement and latency for AI
applications. Vector data is integrated with Oracle Text and Spatial features for richer analysis. Enhanced
memory management and query optimization adapt to these data types. These changes aim to streamline
AI/ML workloads within the database ecosystem.

85. Explain the security posture changes in 23ai for external data sources.
Oracle 23ai enforces stricter access controls and encrypted communications for external data integrations.
Connections to external data sources require mutual TLS authentication and support hardware-based key
management. Fine-grained auditing and monitoring are extended to external data fetch operations. Policies
prevent data leakage by restricting access based on user roles and context. Credential management is
centralized with enhanced wallet integration. Oracle applies runtime security checks on data retrieved from
external sources to detect anomalies. These enhancements improve compliance with modern security
frameworks and reduce attack surface related to federated data access.

86. How does 23ai support AI inferencing using SQL macros — and what background packages are
involved?
Oracle 23ai integrates AI inferencing via specialized SQL macros that encapsulate complex model execution
logic. These macros invoke internal machine learning engines optimized for in-database inferencing. Key
packages such as DBMS_AI and DBMS_ML provide APIs for model management and prediction calls.
Inferencing occurs within SQL execution plans, enabling seamless embedding into existing workflows.
Background processes optimize resource allocation and parallelism for these operations. This architecture
reduces latency and eliminates the need for external ML platforms. The system supports versioning, monitoring,
and auditing of AI models executed through SQL macros.

87. What’s new in 23ai regarding sharding enhancements and CDB scalability?
Oracle 23ai introduces dynamic shard rebalancing, allowing live redistribution of data across shards without
downtime. Enhanced global transactions provide stronger consistency guarantees across distributed shards.
The multitenant architecture is optimized for scale with improved PDB cloning and snapshot capabilities. Cluster
management processes leverage AI-driven monitoring to predict resource bottlenecks and automate
remediation. Shard catalog management is enhanced for easier administration. Memory and CPU resource
allocation across shards and PDBs is more granular and adaptive. These features enable highly available,
scalable, and manageable cloud-native database deployments.

88. Explain how Oracle 23ai optimizes memory for autonomous workloads.
Oracle 23ai uses AI-driven adaptive memory management that learns workload patterns to optimize SGA and
PGA allocation in real time. It dynamically resizes memory pools based on session activity and query
complexity. Autonomous memory tuning reduces manual DBA intervention while improving throughput.
Memory pressure across RAC nodes is coordinated via global views to avoid resource contention. Workloads
are classified and prioritized for resource allocation. Background processes continuously analyze wait events
and adjust cache sizes. The architecture supports hybrid transactional and analytical processing with minimal
latency.

89. What are changes in 23ai around SQL Plan Management (SPM) baselines and their internal
behavior?
In 23ai, SPM baselines support AI-driven plan evolution, allowing automatic acceptance or rejection of plans
based on learned performance metrics. The optimizer leverages historical execution statistics with machine
learning to predict plan stability. Plan capture is more granular, including bind variable effects. Baselines can be
automatically purged or archived depending on workload shifts. The system integrates tightly with Automatic
Workload Repository to provide continuous tuning feedback. SQL profiles and baselines co-exist with enhanced
management interfaces. This reduces manual plan management and improves query performance
consistency.

90. How does the new multitenant architecture in 23ai simplify cloning of PDBs at scale?
Oracle 23ai introduces fast PDB cloning using copy-on-write and incremental data replication technologies.
Cloning operations are offloaded to background processes that minimize storage and I/O impact. Metadata and
data files are synchronized efficiently across clusters using intelligent caching and delta tracking. Clone lifecycle
management is automated with simplified APIs supporting version control and snapshot management. The
architecture supports massive parallelism for simultaneous cloning requests. Network-aware optimizations
reduce latency in distributed environments. These improvements enable rapid provisioning and scaling of PDBs
for cloud and DevOps workflows.

TROUBLESHOOTING & PERFORMANCE TUNING

91. How do you capture call stacks using ORADEBUG and analyze stuck sessions?
Using ORADEBUG, you attach to the target OS process and dump its call stack. For stuck sessions, commands like
ORADEBUG SETOSPID <PID> followed by ORADEBUG SHORT_STACK (or DUMP ERRORSTACK) provide the current
execution path. The output reveals where the session is blocked or waiting inside the Oracle
kernel. Analysis involves identifying repeated or long waits, resource contention, or infinite loops in PL/SQL.
Correlating call stacks with V$SESSION and V$SESSION_WAIT helps pinpoint root causes. Regular capture
during hangs aids trend analysis and escalation with Oracle Support.
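A minimal ORADEBUG sketch, assuming SYSDBA access (the SID and OS PID are hypothetical):

-- Find the OS process ID of the stuck session
SELECT p.spid FROM v$process p JOIN v$session s ON s.paddr = p.addr WHERE s.sid = 123;
-- Attach and capture the call stack from SQL*Plus
ORADEBUG SETOSPID 45678
ORADEBUG SHORT_STACK
ORADEBUG DUMP ERRORSTACK 3
ORADEBUG TRACEFILE_NAME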

92. What are common causes for ORA-600, ORA-7445, and how to escalate with full evidence?
ORA-600 and ORA-7445 are internal Oracle errors indicating unexpected kernel exceptions or OS-level faults.
Common causes include memory corruption, bugs in Oracle code, or hardware issues. To escalate, gather full
trace files, alert logs, systemstate dumps, and reproduction steps. Use ADRCI to package diagnostics. Include
environment details, patch levels, and workload specifics. Full evidence enables Oracle Support to identify bugs
or misconfigurations. Early detection and detailed data prevent prolonged outages and expedite root cause
analysis.
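A hedged ADRCI packaging example (the incident number and upload directory are hypothetical):

adrci> show incident
adrci> ips create package incident 98765
adrci> ips generate package 1 in /tmp/diag_upload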

93. How do you handle “log file sync” events in async commit-heavy applications?
"Log file sync" waits occur when user sessions wait for commit acknowledgment from LGWR. In async commit
environments, tuning involves reducing commit frequency or batching transactions. Optimizing redo log file size
and placement on low-latency storage reduces wait times. LGWR performs group (piggyback) commits automatically;
where durability requirements allow, asynchronous commits (COMMIT WRITE NOWAIT BATCH, or the
COMMIT_WAIT/COMMIT_LOGGING parameters) also help. Monitoring LGWR process CPU and I/O usage identifies bottlenecks. Application
redesign to minimize frequent commits and use bulk processing can mitigate this event. Regular redo log switch
and checkpoint tuning are essential.
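A sketch of asynchronous/batched commits, assuming the application can tolerate the small loss window they introduce:

-- Per-statement asynchronous commit
COMMIT WRITE BATCH NOWAIT;
-- Or at session level
ALTER SESSION SET COMMIT_LOGGING = BATCH;
ALTER SESSION SET COMMIT_WAIT = NOWAIT;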

94. Explain how to identify session-level memory leakage using V$PROCESS_MEMORY.
V$PROCESS_MEMORY reports memory allocated by Oracle background and user processes. Identify
abnormal or continuously growing memory usage in specific sessions by comparing PGA and UGA usage over
time. High or increasing values without release suggest leaks. Correlate with V$SESSION to isolate users or
queries causing leakage. Use ASH and AWR data to track associated SQL or PL/SQL code. Memory leaks
often relate to faulty PL/SQL or third-party code, and tracking helps initiate fixes or restarts. Proper memory
monitoring avoids performance degradation.
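A sketch query to spot growing process memory per session (category names vary slightly by version):

SELECT s.sid, s.username, pm.category,
       ROUND(pm.allocated/1024/1024) alloc_mb,
       ROUND(pm.used/1024/1024) used_mb
FROM   v$process_memory pm
JOIN   v$process p ON p.pid = pm.pid
JOIN   v$session s ON s.paddr = p.addr
ORDER  BY pm.allocated DESC FETCH FIRST 20 ROWS ONLY;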

95. How do you investigate SCN divergence between primary and standby?
SCN divergence occurs when primary and standby databases fall out of sync, usually due to missing or delayed
redo application. Investigate by checking V$DATAGUARD_STATS and V$ARCHIVE_DEST_STATUS to
identify lag. Use V$STANDBY_EVENT_HISTOGRAM and V$STANDBY_LOG for standby apply delays.
Network issues or apply errors in logs can cause divergence. Resolving requires resynchronizing via
RECOVER MANAGED STANDBY DATABASE or rebuilding standby. Monitoring Data Guard broker states
and alert logs is essential to prevent future divergence.
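A sketch for comparing lag and SCNs (run the SCN query on both primary and standby):

SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');

SELECT current_scn FROM v$database;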

96. How do you detect internal contention using latch and mutex wait diagnostics?
Latch and mutex contention can be identified by querying V$LATCH, V$LATCH_CHILDREN, and
V$SESSION_WAIT views. High wait times on specific latches or mutexes indicate contention hotspots. Tools
like ASH reports can reveal sessions waiting excessively on these internal locks. Use ORADEBUG to dump
latch holders and waiters. Oracle 21c+ includes enhanced mutex diagnostics to pinpoint exact contention code
paths. Resolving involves tuning or patching contention-prone areas, increasing spin counts, or redesigning
workload to reduce concurrency on critical structures.

97. What’s the difference between high cursor version counts and bind mismatch issues?
High cursor version counts occur when multiple child cursors exist due to varying bind variable values or data
types, leading to increased parsing and memory usage. Bind mismatch happens when bind variables are used
inconsistently, for example mixing data types or lengths, causing unnecessary child cursor creation. Bind
mismatch increases hard parses and reduces cursor sharing efficiency. Monitoring V$SQL and
V$SQL_SHARED_CURSOR helps detect these. Resolving requires standardizing bind variable usage in
applications and enabling cursor sharing parameters. Effective bind management improves performance and
scalability.
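A sketch for spotting high version counts and then inspecting the mismatch reasons (the threshold and SQL_ID are arbitrary/hypothetical):

SELECT sql_id, version_count, executions
FROM   v$sqlarea
WHERE  version_count > 50
ORDER  BY version_count DESC;

-- Inspect the Y-flagged columns for one statement
SELECT * FROM v$sql_shared_cursor WHERE sql_id = 'abcd1234efgh5';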

98. How do you investigate ORA-01555 in long-running queries in 21c?
ORA-01555 ("snapshot too old") happens when undo information required for consistent reads is overwritten.
Investigate by checking undo tablespace size, undo retention period, and query execution time. Long-running
queries might exceed undo retention, causing snapshot loss. Use V$UNDOSTAT and AWR reports to monitor
undo usage and retention effectiveness. Also, check for undo segment contention or aggressive undo cleanup.
Increasing undo retention or optimizing queries to reduce runtime helps. Oracle 21c includes enhanced undo
management features to mitigate these errors.
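A sketch against V$UNDOSTAT comparing the longest query to tuned retention (SSOLDERRCNT counts ORA-01555 hits per interval):

SELECT begin_time, end_time, undoblks, maxquerylen,
       tuned_undoretention, ssolderrcnt
FROM   v$undostat
ORDER  BY begin_time DESC FETCH FIRST 24 ROWS ONLY;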

99. What are internal latch structures used for library cache pinning and how do you monitor them?
Library cache pinning uses latches like library cache pins and mutexes to serialize access to shared SQL and
PL/SQL objects in the shared pool. These internal latches prevent concurrent invalidation or modification
conflicts. Monitoring is done through V$LATCH, V$LATCH_MISSES, and V$SESSION_WAIT for latch wait
events related to library cache. High contention here can cause parsing and execution delays. Increasing
shared pool size, pinning frequently used objects, or applying patches reduce pin contention. Oracle 19c+ also
provides mutex statistics to monitor fine-grained locking.

100. How do you troubleshoot a complete database hang using HANGANALYZE and systemstate
dumps?
Use ORADEBUG HANGANALYZE together with systemstate dumps to capture process call stacks, wait events, and
resource usage during a hang. Generate the dumps at hang time via ORADEBUG (for example, HANGANALYZE at level 3
and DUMP SYSTEMSTATE at level 266, with the -G ALL option to cover all RAC instances). Analyze call stacks for
stuck sessions, latch waits, or infinite loops. Correlate with V$SESSION_WAIT and
V$PROCESS to identify blocking or hung processes. Review system resource statistics and OS-level metrics to
rule out external causes. Provide collected data to Oracle Support with detailed reproduction steps for advanced
analysis. Early intervention reduces downtime and data loss risk.
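A hedged ORADEBUG sequence for a hang; the levels shown are the ones commonly requested in Oracle Support notes:

ORADEBUG SETMYPID
ORADEBUG UNLIMIT
ORADEBUG -G ALL HANGANALYZE 3
ORADEBUG -G ALL DUMP SYSTEMSTATE 266
ORADEBUG TRACEFILE_NAME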

MULTITENANT & CDB/PDB OPERATIONS

101. How does Oracle internally isolate UNDO for local and shared undo in PDBs and what are the side
effects of switching undo modes?
Oracle supports two undo modes in multitenant: shared undo at CDB level or local undo per PDB. Shared undo
centralizes undo management but can cause contention with many PDBs. Local undo isolates undo in each
PDB, improving autonomy and reducing cross-PDB interference. Switching modes requires database restart
and careful consideration of transactions spanning containers. Local undo simplifies backup and recovery at
PDB level but increases resource usage. Shared undo eases management in smaller deployments. Side
effects include differences in undo retention and visibility across PDBs.
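Switching undo modes is done in UPGRADE mode and requires an outage; a minimal sketch:

SHUTDOWN IMMEDIATE
STARTUP UPGRADE
ALTER DATABASE LOCAL UNDO ON;   -- or LOCAL UNDO OFF to revert to shared undo
SHUTDOWN IMMEDIATE
STARTUP

-- Verify the current mode
SELECT property_value FROM database_properties
WHERE  property_name = 'LOCAL_UNDO_ENABLED';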

102. Describe the internal behavior of CONTAINERS clause in multitenant and how it’s resolved at
runtime.
The CONTAINERS clause scopes SQL statements or PL/SQL blocks to multiple PDBs or the entire CDB.
Internally, Oracle parses this clause and directs execution context to each targeted container sequentially or in
parallel. The CDB root manages the metadata, while container-specific operations execute in individual PDBs.
Runtime resolution involves session context switching, ensuring container-specific data and dictionary views are
correctly accessed. This clause enables efficient multi-PDB operations without reconnecting to each PDB. It is
used in administrative automation and cross-container maintenance tasks.
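A sketch of the CONTAINERS clause run from the CDB root; the table name is hypothetical and must be visible to the common user in each target PDB:

SELECT con_id, COUNT(*)
FROM   CONTAINERS(app_user.orders)
GROUP  BY con_id;

-- Restrict to specific PDBs by container ID
SELECT * FROM CONTAINERS(app_user.orders) WHERE con_id IN (3, 4);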

103. What are the limitations of hot cloning in 21c and how does 23ai improve this process?
In 21c, hot cloning supports PDB duplication while the source is open but has limitations with active transactions
and heavy workloads, potentially causing inconsistent clones or delays. It also restricts cloning of certain objects
and requires quota management. Oracle 23ai enhances hot cloning by improving consistency guarantees
through incremental cloning and automated conflict detection. It supports more complex object types and
reduces downtime by leveraging AI-driven dependency analysis. These improvements provide near-zero
downtime cloning, better resource efficiency, and higher reliability in large multitenant deployments.

104. How does Oracle validate metadata during unplug and plug operations of PDBs?
During unplug, Oracle captures PDB metadata, including data dictionary, tablespace info, and security settings
into XML files. Plug operations validate this metadata against the target CDB, ensuring compatibility in Oracle
versions, character sets, and existing container configurations. Oracle performs dictionary consistency checks
and verifies tablespace locations. Any mismatch triggers errors preventing PDB plug-in to avoid corruption. The
process also validates user and role mappings. This thorough metadata validation ensures PDB integrity and
prevents runtime conflicts after plug operations.
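A hedged unplug / compatibility-check / plug sequence (paths and names are hypothetical):

ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/pdb1.xml';

-- On the target CDB, validate before plugging in
SET SERVEROUTPUT ON
DECLARE
  ok BOOLEAN;
BEGIN
  ok := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/u01/app/pdb1.xml');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN ok THEN 'COMPATIBLE' ELSE 'NOT COMPATIBLE' END);
END;
/

CREATE PLUGGABLE DATABASE pdb1 USING '/u01/app/pdb1.xml' NOCOPY;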

105. What are the mechanisms behind PDB lockdown profiles and how are they enforced at the kernel
level?
PDB lockdown profiles restrict administrative and user actions within PDBs to enhance security and compliance.
These profiles define allowed commands and features via predefined or custom rules. Enforcement occurs in
the Oracle kernel by hooking into command parsing and execution stages, blocking disallowed operations at
runtime. Lockdown profiles prevent unauthorized DDL, access to sensitive parameters, or disabling auditing
within PDBs. They ensure container isolation and minimize privilege escalation risks. Profiles can be centrally
managed from the root container, providing consistent policy application across PDBs.
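A sketch of creating and applying a lockdown profile (the profile name is hypothetical):

CREATE LOCKDOWN PROFILE app_lock;
ALTER LOCKDOWN PROFILE app_lock DISABLE STATEMENT = ('ALTER SYSTEM');
ALTER LOCKDOWN PROFILE app_lock DISABLE FEATURE = ('NETWORK_ACCESS');
-- Assign to a PDB (run in that PDB, or in the root to apply broadly)
ALTER SYSTEM SET PDB_LOCKDOWN = app_lock;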

106. How does Oracle handle parallel DML across multiple PDBs in a single container database?
Parallel DML in a multitenant CDB executes within each PDB independently to maintain transaction isolation.
Oracle manages resource allocation and concurrency through the Container Resource Manager, which
governs CPU and memory distribution. Parallel DML statements spawn parallel servers scoped to the executing
PDB. Coordination across PDBs is minimal since transactions are container-local. Oracle ensures undo and
redo generation are correctly isolated per PDB. Inter-PDB parallel DML is not supported due to architectural
isolation. This model preserves multitenant security while enabling high-performance parallel processing.

107. What are the implications of using common users with local tablespaces in multitenant security?
Common users exist across all containers with a shared identity but can be assigned local tablespaces within
PDBs. This separation allows common users to maintain centralized privileges while storing data physically
isolated in each PDB. Implications include complexity in managing space quotas and auditing since data
ownership spans containers. Security risks arise if tablespace access is misconfigured, potentially exposing data
between PDBs. Best practices recommend strict privilege controls and monitoring. Oracle enforces namespace
isolation, but proper configuration is essential to prevent unauthorized access through common user
tablespaces.

108. How do you troubleshoot a PDB open failure due to dictionary mismatch in CDB?
A dictionary mismatch during PDB open typically arises from incompatible metadata versions or corrupt
dictionary objects. Troubleshooting begins by reviewing alert logs and trace files for specific error codes. Using
DBA_PDBS and V$CONTAINERS, verify PDB state and compatibility. Run utlrp.sql (typically driven through catcon.pl)
or datapatch to recompile invalid objects and resolve patch-level or version discrepancies between CDB and PDB. In
severe cases, restore the PDB from backup or recreate metadata using unplug/plug. Collaboration with Oracle
Support is often required for deep dictionary inconsistencies.
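A sketch of the first checks for a PDB that will not open:

SELECT name, open_mode, restricted FROM v$pdbs;

SELECT name, cause, type, message, status
FROM   pdb_plug_in_violations
WHERE  status <> 'RESOLVED'
ORDER  BY time DESC;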

109. How is memory distributed dynamically among PDBs when resource manager limits are used?
Oracle Resource Manager in multitenant environments dynamically allocates memory resources like PGA and
SGA among PDBs based on configured consumer groups and resource plans. Limits define maximum memory
usage per PDB, ensuring no single PDB exhausts system memory. Oracle continuously monitors workloads
and adjusts allocations to balance performance and fairness. Excessive memory consumption triggers throttling
and workload prioritization. This dynamic distribution enables effective multi-tenant consolidation while
preventing resource starvation. Monitoring via GV$RSRC_CONSUMER_GROUP helps optimize resource
plans.

110. What are the internal data dictionary changes between non-CDB and CDB architectures?
The CDB architecture introduces container-specific views and tablespaces to support multitenancy, unlike non-
CDB where a single database dictionary exists. Key dictionary tables are partitioned by container or enhanced
with CON_ID columns to identify container ownership. Metadata for common and local users, roles, and objects
is segregated accordingly. Oracle adds CDB-specific views such as CDB_OBJECTS alongside traditional views
for compatibility. This layered dictionary ensures isolation and shared resource management, requiring
applications and DBAs to be aware of container context when querying dictionary views.

RAC & CLUSTERWARE ADVANCED SCENARIOS

111. How does clusterware handle interconnect load balancing and what are the tuning options for
RAC interconnects?
Clusterware balances RAC interconnect traffic by dynamically routing Global Cache Service (GCS) and Global
Enqueue Service (GES) messages over multiple network interfaces. Load balancing minimizes latency and
congestion on any single interconnect. Tuning involves configuring multiple private interconnects with redundant
paths, setting proper MTU sizes, and enabling features like TCP_NODELAY or UDP for multicast traffic.
Monitoring tools like crsctl and olsnodes help verify network status. Oracle 23ai introduces AI-driven
interconnect load predictions for proactive tuning. Proper balancing improves cluster scalability and reduces
cross-instance block transfer delays.

112. Describe the full failover path when a node crashes and how CRSD and CSSD reallocate
resources.
When a RAC node crashes, the Cluster Ready Services Daemon (CRSD) detects the failure via heartbeat loss
and notifies Cluster Synchronization Services Daemon (CSSD). CSSD triggers fencing mechanisms to isolate
the failed node and prevent split-brain. CRSD initiates failover of services and resource groups owned by the
crashed node to surviving nodes, ensuring workload continuity. Resources like VIPs, database instances, and
ASM instances restart on target nodes. The Cluster Health Monitor (CHM) assists in diagnostics. Failover timing
depends on resource profiles and cluster policies. This mechanism ensures high availability and minimal
downtime.

113. What are the internals of the GES (Global Enqueue Services) and how do they resolve deadlocks
in RAC?
GES manages global locks on resources shared between RAC instances. It tracks lock states (granted, waiting)
and coordinates ownership across nodes. Lock requests go through GES, which serializes access to avoid
conflicts. Deadlock detection in GES uses wait-for graphs built from lock requests. When a cycle is detected,
GES aborts one session to break the deadlock. It communicates with LMS and other cluster processes to
propagate lock state changes efficiently. GES uses a combination of message passing and shared memory
structures to maintain consistency and minimize latency during lock management.

114. How does Oracle RAC ensure instance affinity during rolling upgrades and application continuity?
Instance affinity ensures that client sessions maintain connection to the same RAC instance to preserve session
state and reduce overhead. During rolling upgrades, Oracle uses connection load balancing policies and
services to redirect new connections while draining existing sessions from nodes undergoing patching.
Application Continuity features replay in-flight transactions seamlessly on failover or node restart. Oracle
Transparent Application Failover (TAF) supports affinity with session state preservation. These mechanisms
prevent transaction loss and maintain user experience during patching without full cluster downtime.

115. What steps does Oracle take to synchronize redo between nodes in a RAC system and how is
consistency maintained?
Oracle RAC synchronizes redo via instance-specific online redo logs and the Global Cache Service (GCS).
Each instance writes redo locally, and the Redo Transport Services ensure consistency by shipping redo to
standby sites if configured. Internally, LMS and LMON processes coordinate global cache state changes and
ensure consistency of blocks modified across nodes. Oracle uses SCNs and enqueue serialization to maintain
transactional order. Checkpoints synchronize datafile states cluster-wide. This coordination prevents data
corruption and guarantees transactional consistency across the cluster.

116. What happens if voting disks go stale on two out of three nodes in a 3-node RAC?
If two voting disks become stale or inaccessible, a 3-node RAC faces quorum loss, risking split-brain scenarios.
Oracle Clusterware requires a majority of voting disks (quorum) to function correctly. With only one active voting
disk, cluster nodes cannot confirm membership, leading to automatic cluster shutdown or node evictions. To
recover, administrators must restore access to voting disks or replace failed disks. This event highlights the
importance of voting disk redundancy and proper disk failure monitoring for cluster stability.
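A hedged check-and-replace sequence for voting files (the diskgroup name is hypothetical):

crsctl query css votedisk
# After restoring or adding healthy storage, relocate the voting files
crsctl replace votedisk +VOTE_DG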

117. How do you troubleshoot RAC split-brain situations at the IO fencing level?
Split-brain occurs when cluster nodes lose communication but continue operating independently, risking data
corruption. IO fencing isolates misbehaving nodes by forcibly resetting or powering them off via storage-level
commands (STONITH). Troubleshooting involves checking cluster logs for fencing failures, network partitions,
and verifying fencing device configurations. Tools like crsctl and cluvfy assist in diagnosing fencing issues.
Ensuring fencing devices are correctly configured, accessible, and tested prevents split-brain. Immediate
fencing action is critical to maintain cluster data integrity.

118. What enhancements exist in 23ai for Flex Clusters and GNS auto-naming?
Oracle 23ai introduces AI-driven optimizations for Flex Clusters, improving dynamic node membership
management and reducing manual configuration. GNS auto-naming enhancements automate network
resource naming, eliminating conflicts and accelerating cluster setup. AI algorithms predict optimal resource
placement and resolve DNS conflicts proactively. These features simplify large-scale RAC deployments by
reducing human errors and downtime during scaling or reconfiguration. They also enhance cluster resilience by
monitoring network health and auto-correcting naming inconsistencies.

119. How do you tune the LMS process to reduce cross-instance block shipping latency?
LMS tuning involves optimizing interconnect throughput, adjusting LMS process affinity to dedicated CPUs, and
tuning cache fusion-related parameters (hidden _gc* parameters only under Oracle Support guidance). Monitoring
global cache ('gc') wait events in GV$SESSION_WAIT together with V$CR_BLOCK_SERVER and V$CURRENT_BLOCK_SERVER identifies
bottlenecks. Reducing block size and batching cache fusion messages improves efficiency. Applying latest
patches optimizes LMS algorithms. Using multiple private interconnects and configuring network MTUs properly
minimizes latency. LMS tuning reduces wait times for global cache access, enhancing RAC performance in
high-concurrency environments.

120. How do you simulate and recover from a CRSD failure on one RAC node without restarting the
cluster?
To simulate CRSD failure, use crsctl stop crsd -n <node> to stop CRSD on the target node. Observe resource
failover handled by clusterware to surviving nodes. Recovery involves restarting CRSD with crsctl start crsd -n
<node>. The node rejoins the cluster, reclaims resources, and synchronizes state. Monitoring crsctl stat res -t
verifies successful recovery. This procedure tests cluster resilience and failover mechanisms without full cluster
downtime. Proper logging and alerting ensure administrators are notified of failures.

AUTONOMOUS DATABASE & ORACLE CLOUD

121. What architecture components differentiate Autonomous Database from traditional RDBMS?
Autonomous Database (ADB) integrates automation layers over Oracle’s core RDBMS, including self-tuning,
self-patching, and self-securing components. Key differentiators are automated workload management,
adaptive indexing, and machine learning-based optimizations. The architecture features a cloud-native control
plane, multi-tenant containers optimized for elasticity, and built-in security such as TDE and network encryption.
Autonomous background services continuously monitor performance and usage patterns, adjusting parameters
dynamically without DBA intervention. Unlike traditional RDBMS, ADB abstracts infrastructure management,
offering a serverless consumption model with on-demand scaling.

122. How is auto-scaling implemented internally in Autonomous Database and what background
processes control it?
Auto-scaling in ADB leverages Oracle Cloud Infrastructure (OCI) orchestration combined with internal DB
resource managers. The process continuously monitors CPU, I/O, and memory metrics via background tasks
such as Automatic Workload Repository (AWR) monitors and Resource Manager agents. When thresholds are
met, OCI dynamically provisions or decommissions compute nodes and storage capacity. Oracle’s internal
Autoscaling Daemon communicates with OCI APIs to adjust resources with minimal impact. The process is
transparent to users, maintaining SLAs and ensuring workload responsiveness through elastic scaling policies.

123. What are the data masking and redaction features enforced in ADB at rest and in transit?
ADB enforces data masking and redaction using built-in Transparent Data Encryption (TDE) for data at rest,
protecting stored data with encryption keys managed in Oracle Key Vault or OCI Vault. Redaction policies
dynamically mask sensitive data in query results using Oracle Data Redaction. In transit, ADB enforces network
encryption via TLS protocols ensuring secure client-server communication. Masking rules are applied
declaratively and automatically in Autonomous environments to comply with privacy regulations, minimizing
DBA overhead and protecting against unauthorized data exposure.

124. How is patching managed in Oracle Autonomous databases, and how do rollback options differ?
Oracle Autonomous Database applies rolling patches automatically during predefined maintenance windows
using Oracle’s Zero Downtime Patching (ZDP) technology. Patching is orchestrated at the cloud control plane
level, applying fixes to underlying infrastructure and software without user intervention. Rollback differs from
traditional patching as Autonomous maintains a full backup snapshot before patching, enabling point-in-time
restoration rather than simple patch reversal. Users can also revert to previous database backups using OCI
Recovery features. This approach minimizes downtime and reduces operational risks compared to manual
patching.

125. What does the Optimizer Statistics Advisor do in Autonomous and how does it integrate with SQL
plan baselines?
The Optimizer Statistics Advisor in Autonomous Database automatically identifies stale or missing statistics
affecting query performance. It analyzes workload patterns, recommends statistics gathering, and applies
changes in a controlled manner. Integration with SQL Plan Management (SPM) allows the advisor to validate
new execution plans against baselines, preventing performance regressions. If new statistics cause plan
changes, SPM ensures stable plans are maintained or new plans are accepted only after verification. This
automation improves plan stability and optimizes execution efficiency without manual tuning.

126. Explain the failure domains in OCI for RAC and how it differs from on-premise setups.
OCI failure domains represent physical and logical isolation units such as availability domains (ADs), fault
domains, and regions. OCI RAC clusters distribute nodes across fault domains within an AD to minimize
correlated failures. Unlike on-prem RAC relying on physical racks and switches, OCI provides software-defined
isolation ensuring high availability and fault tolerance. Fault domains isolate hardware components so a failure
impacts only one domain, allowing Oracle RAC to survive node or hardware failures transparently. OCI also
offers region-level disaster recovery unlike traditional on-prem setups.

127. How do you configure Transparent Data Encryption (TDE) for BYOL deployments in Oracle
Cloud?
For Bring Your Own License (BYOL) deployments, TDE is configured by enabling the Oracle Wallet or
integrating with OCI Vault for key management. The wallet stores encryption keys securely and is managed
through Oracle Wallet Manager or Oracle Key Vault. Encryption is enabled by setting appropriate parameters
and creating encrypted tablespaces or columns. BYOL customers must ensure proper wallet backups and
lifecycle management. OCI Vault integration automates key rotation and lifecycle policies, enhancing security
compliance and reducing manual key management overhead in cloud environments.
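A hedged software-keystore setup sketch (paths and passwords are hypothetical; from 18c onward the location is usually derived from WALLET_ROOT):

ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/wallet' IDENTIFIED BY "MyWalletPwd1";
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "MyWalletPwd1";
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "MyWalletPwd1" WITH BACKUP;

CREATE TABLESPACE secure_ts DATAFILE SIZE 100M
  ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);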

128. What tools are used to capture and visualize Autonomous Workload Repository reports in ADB?
ADB provides integrated tools like Oracle SQL Developer Web, OCI Console Performance Hub, and Oracle
Cloud Observability services to capture and visualize AWR reports. These tools generate detailed workload
insights including wait events, SQL statistics, and system performance. The Performance Hub offers graphical
dashboards for real-time monitoring and historical trend analysis. Additionally, REST APIs allow programmatic
extraction of AWR data for custom visualization or integration with third-party tools. These capabilities facilitate
proactive performance tuning and resource optimization in Autonomous environments.

129. What restrictions exist when connecting third-party BI tools to ADB via JDBC or ODBC?
Third-party BI tools connecting via JDBC/ODBC to ADB face restrictions including limited support for certain
SQL syntax, lack of direct SYSDBA privileges, and constraints on network configurations like VPN or private
endpoints. Some advanced Oracle features like DBMS_SCHEDULER or fine-grained access controls may not
be accessible due to security policies. Connection pooling and session multiplexing might be limited, impacting
concurrency. Additionally, large data extracts may be throttled due to Autonomous workload management.
Users must ensure drivers are certified and network ACLs permit access.

130. How do you control and monitor session-level SQL tuning in Autonomous using
DBMS_SQLTUNE?
ADB enables session-level SQL tuning by exposing limited DBMS_SQLTUNE packages with restricted
privileges to automate tuning tasks. DBMS_SQLTUNE can capture SQL Profiles, generate tuning tasks, and
perform automatic plan regression checks. Monitoring is facilitated by Autonomous’s integrated performance
tools that track tuning task status and effectiveness. SQL Tuning Advisor recommendations are automatically
applied or reviewed via cloud interfaces. This automation reduces manual tuning efforts and enforces consistent
performance improvements within user session scopes.
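A sketch of a manual tuning task where the package is exposed; the SQL_ID is hypothetical and the availability of individual DBMS_SQLTUNE calls in ADB may vary:

DECLARE
  t VARCHAR2(64);
BEGIN
  t := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'abcd1234efgh5', time_limit => 600);
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => t);
  DBMS_OUTPUT.PUT_LINE(DBMS_SQLTUNE.REPORT_TUNING_TASK(t));
END;
/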

ADVANCED PERFORMANCE DIAGNOSTICS

131. How do you analyze a RAC-wide performance problem using GV$ views across instances in real
time?
GV$ views aggregate instance-specific dynamic performance data across RAC nodes by including the INST_ID
column. To analyze RAC-wide issues, DBAs query GV$ views like GV$SESSION,
GV$ACTIVE_SESSION_HISTORY, and GV$SYSTEM_EVENT filtering and grouping by instance. This real-
time consolidated data reveals bottlenecks such as enqueue waits, interconnect latencies, or load imbalances.
Tools like Oracle Enterprise Manager also utilize GV$ views for cluster-wide diagnostics. Cross-instance
correlation aids root cause identification and workload redistribution in multi-node environments.
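A sketch of a cluster-wide wait summary (the grouping and ordering are arbitrary choices):

SELECT inst_id, event, COUNT(*) sessions_waiting
FROM   gv$session
WHERE  wait_class <> 'Idle'
GROUP  BY inst_id, event
ORDER  BY sessions_waiting DESC;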

132. What is the internal structure of SQL Monitor reports and how do they collect execution stats?
SQL Monitor reports collect detailed runtime statistics by tracing active SQL executions and their resource
usage. Internally, Oracle uses the SQL Monitoring infrastructure that samples execution plans, CPU, I/O, and
wait events periodically. Data is stored in V$SQL_MONITOR and related views, capturing execution steps,
cardinalities, and timings. The report integrates these statistics to provide visualization of execution paths, wait
hotspots, and resource consumption. It uses a combination of performance counters and AWR data, enabling
deep analysis of long-running or parallel queries.

133. How do you detect latch contention using V$LATCH_CHILDREN and how to resolve it?
V$LATCH_CHILDREN displays wait statistics for individual latch children, showing contention hotspots.
Detecting high wait times or contention on specific latch addresses indicates serialization points in Oracle’s
memory structures. Resolution involves identifying the related latch class (e.g., library cache, buffer cache) and
tuning the workload or parameters like _latch_classes and spin counts. Application tuning to reduce concurrent
access to hot blocks, increasing instance resources, or patching known bugs can also alleviate contention.
Monitoring latch activity over time helps preempt escalation.
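A sketch for ranking hot latch children (the top-N cutoff is arbitrary):

SELECT name, addr, gets, misses, sleeps
FROM   v$latch_children
ORDER  BY sleeps DESC
FETCH FIRST 20 ROWS ONLY;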

134. Explain the usage of DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS for tuning snapshot behavior.
DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS customizes snapshot frequency and
retention policies for AWR data collection. By adjusting the snapshot interval, DBAs can control the granularity of
workload data to balance detail versus overhead. Modifying retention periods influences the historical data
available for analysis and storage usage. This tuning is critical during periods of high activity to avoid excessive
overhead or when longer-term trending is needed. Proper configuration enables effective performance
diagnostics while optimizing system resource consumption.
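A sketch changing AWR snapshots to every 30 minutes with a 30-day retention (both values are in minutes):

BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,            -- minutes between snapshots
    retention => 30 * 24 * 60   -- keep 30 days of snapshots
  );
END;
/
SELECT snap_interval, retention FROM dba_hist_wr_control;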

135. How do you interpret the buffer busy waits wait event at object level using X$BH and
DBA_OBJECTS?
Buffer busy waits occur when multiple sessions compete to access the same data block in buffer cache. Using
X$BH, which contains real-time buffer header info, DBAs identify hot blocks by filtering high wait counts. Joining
X$BH with DBA_OBJECTS maps block addresses to objects, revealing the problematic segments or indexes.
High buffer busy waits indicate contention, often from concurrent DML on specific objects. Resolution includes
segment-level tuning like partitioning, increasing freelists, or optimizer hints to reduce contention hotspots.

136. What is the role of V$SYSSTAT vs V$SESSTAT in correlation analysis and what are their key
differences?
V$SYSSTAT provides cumulative system-wide statistics since instance startup, showing overall resource usage
trends. V$SESSTAT provides per-session statistics, allowing granular analysis of individual session resource
consumption. In correlation analysis, comparing session-level metrics against system totals helps identify
sessions causing performance issues. V$SYSSTAT is useful for baseline comparisons, while V$SESSTAT
assists in pinpointing problematic SQL or users. Key differences include scope (system vs session) and update
frequency, making them complementary for comprehensive performance diagnostics.

137. How does Oracle detect suboptimal plans in SQL Plan Management and how are fix plans
applied?
Oracle SQL Plan Management (SPM) detects suboptimal plans by comparing new execution plans against
accepted SQL plan baselines. Using performance metrics like cardinality estimates and execution cost, plans
causing regressions are flagged. Oracle retains known good plans and tests new ones in controlled
environments before acceptance. Fix plans are applied by forcing the use of accepted plans, preventing
regressions. New plans can be accepted automatically if performance improves or manually after review. This
framework ensures plan stability and prevents unexpected performance degradation.

138. What are key metrics to look at in ADDM to identify CPU starvation due to OS scheduling?
ADDM identifies CPU starvation by analyzing host CPU statistics and time spent waiting for CPU (for example, 'resmgr:cpu quantum' waits and OS run-queue data). Key
metrics include high CPU utilization nearing 100%, elevated context switches, and long CPU queue lengths.
ADDM correlates these with session-level wait times and identifies overloaded CPUs or runaway processes.
Additional indicators are increased run queue lengths and system-level statistics from OS views. These metrics
help diagnose if the OS scheduler is causing CPU bottlenecks, guiding administrators to rebalance workloads or
adjust OS-level process priorities.

139. How do you perform index usage analysis across a 20+ PDB multitenant deployment with minimal
impact?
Index usage analysis in multitenant involves querying DBA_INDEX_USAGE or V$OBJECT_USAGE in each
PDB, aggregated through scripts or tools using CONTAINERS clause for multi-PDB queries. To minimize
impact, snapshot intervals and data collection periods are optimized. Autonomous or scheduled jobs capture
index usage asynchronously. Using AWR and SQL Monitoring further helps correlate index scans with workload
patterns. Centralized reporting consolidates index health across PDBs, enabling targeted maintenance while
avoiding excessive overhead or performance impact.

140. How do you track adaptive plans in execution using V$SQL and V$SQL_PLAN statistics?
Adaptive plans are tracked in V$SQL by examining flags and columns such as PLAN_HASH_VALUE and
ADAPTIVE_PLAN. V$SQL_PLAN shows detailed step execution statistics and plan changes during runtime.
Oracle records adaptive plan evolutions like dynamic statistics feedback and plan shape modifications. By
correlating execution metrics with plan stability columns, DBAs monitor adaptive behavior and performance
impact. This insight aids in tuning or forcing plans through SQL Plan Baselines when adaptive plans cause
regressions or unexpected plan changes.
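A sketch for finding resolved adaptive plans and displaying the adaptive steps (the SQL_ID is hypothetical):

SELECT sql_id, child_number, is_resolved_adaptive_plan
FROM   v$sql
WHERE  is_resolved_adaptive_plan IS NOT NULL;

SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_CURSOR('abcd1234efgh5', 0, 'TYPICAL +ADAPTIVE'));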

DATA GUARD, LOGICAL STANDBY, TRANSPORT

141. What are the internal mechanics of SQL Apply in Logical Standby and how does Oracle ensure
data integrity?
SQL Apply in Logical Standby uses a process that reads redo data, transforms it into SQL transactions, and
applies those to the standby database. It parses redo, extracts DML changes, and executes equivalent SQL
statements, maintaining transactional consistency. Oracle ensures data integrity through strict ordering of
transactions, conflict detection, and use of rollback segments for recovery. Referential constraints and triggers
are maintained, and apply errors can be handled via deferred transactions or error tables, guaranteeing logical
and physical consistency between primary and standby.

142. How is transport lag calculated and what causes transport vs apply lag divergence?
Transport lag measures the delay between the generation of redo on the primary and its receipt by the standby,
calculated as the difference between the last archived redo log sequence number on primary and the last
received on standby. Apply lag measures the delay in applying redo at the standby. Divergence arises when
apply processes lag due to heavy workload, SQL transformation overhead in logical standby, or network
inconsistencies. Transport lag reflects transmission delay; apply lag reflects processing and application time,
which can vary significantly.

143. What happens when a standby redo log is missing and how do you force sync with primary?
If a standby redo log (SRL) is missing or corrupted, Data Guard cannot apply redo in real time, causing apply
and transport lag to increase. The standby may pause or fall behind. To force sync, DBAs can perform a
manual gap resolution using ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING
CURRENT LOGFILE; or force log shipping with ALTER SYSTEM SWITCH LOGFILE; on primary. Additionally,
copying missing archive logs manually (and registering them with ALTER DATABASE REGISTER LOGFILE) or relying on
the broker's automatic gap resolution restores synchronization and re-establishes redo flow.

144. How do Far Sync instances interact with real-time redo in a maximum protection configuration?
In Maximum Protection mode, Far Sync instances act as zero-latency relay nodes that receive redo from the
primary synchronously, then forward it asynchronously to remote standbys. They acknowledge the primary only
after redo is safely written to their disk, ensuring no data loss. This offloads the primary from waiting on distant
standby acknowledgments, improving performance while maintaining zero data loss guarantees. Far Sync
nodes reduce latency impact of geographic distances in Data Guard configurations.

145. What’s the impact of applying JSON or XML data types in Logical Standby and how does Oracle
handle it?
Applying JSON or XML data in Logical Standby can introduce complexity since these data types often store
large, nested, or semi-structured content. SQL Apply transforms redo into SQL DML, but certain functions or
internal representations may not replicate identically. Oracle handles this by applying changes at the SQL level
with built-in support for JSON/XML operators but may have limitations with advanced features. Performance
overhead might increase due to parsing and applying complex structures. Careful testing is required to ensure
data fidelity and performance.

146. How do you troubleshoot LSP errors in Logical Standby and what log files are most relevant?
LSP (Logical Standby Process) errors can be troubleshot by examining the alert.log of the standby database
and the LSP trace files located in the diagnostic destination (ADR). The dataguard and sql apply trace files
provide detailed error context. Common errors include SQL translation issues, conflicts, or data dictionary
mismatches. Enabling extended SQL Apply tracing can capture detailed SQL statements causing failures.
Using LOGMINER views and Oracle Support tools can assist in diagnosing root causes.

147. What are the conditions under which Data Guard Broker automatically performs failover?
Data Guard Broker automatically triggers failover under conditions like loss of the primary database, persistent
connectivity failure between primary and standby, and when configured with fast-start failover enabled. The
observer continuously monitors the primary’s health and quorum. If it detects primary failure and a majority of
nodes confirm, the broker promotes the standby to primary. It respects protection modes and failover policies to
avoid split-brain, ensuring automatic failover is safe and consistent.

148. How do you configure cascading standbys and what are performance caveats?
Cascading standby configuration involves setting a standby database to receive redo from another standby
instead of the primary. This is configured using the LOG_ARCHIVE_DEST_n parameters to point to the next
standby. Cascading reduces primary load but introduces additional lag points and potential delays. Performance
caveats include increased apply and transport lag, complexity in monitoring multiple hops, and cascading failure
risks. Network and resource bottlenecks at intermediate standbys can impact overall Data Guard
synchronization.
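A hedged example of the destination setting on the intermediate (cascading) standby; the service and DB_UNIQUE_NAME are hypothetical:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=stby2 ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=stby2'
  SCOPE=BOTH;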

149. Explain how FSFO interacts with observer and how quorum is maintained during network
partition.
Fast-Start Failover (FSFO) uses an observer process that monitors primary and standby databases. During a
network partition, the observer maintains quorum by requiring consensus among nodes before failover. If the
observer loses connectivity to primary and determines it is down, it initiates failover to standby. Quorum prevents
split-brain by ensuring only one primary is active. When partition heals, role transitions are managed carefully to
maintain data integrity and consistent cluster state.

150. How do you roll forward a standby with incremental SCN recovery and how is it different from full
restore?
Incremental SCN recovery applies redo logs starting from a known SCN checkpoint without a full database
restore, allowing faster catch-up of standby databases after a partial outage. It uses RMAN incremental backups
and archived logs to update the standby incrementally. Full restore involves restoring entire datafiles or backups
before applying redo. Incremental SCN recovery is more efficient, minimizes downtime, and reduces storage
usage compared to a full restore, making it ideal for large databases and quick recovery scenarios.
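A hedged RMAN roll-forward sketch using an SCN-based incremental backup (the SCN and paths are hypothetical; newer releases can also use RECOVER ... FROM SERVICE):

# On the primary
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/tmp/fwd_%U' TAG 'STBY_ROLLFWD';

# On the standby, after transferring the backup pieces
RMAN> CATALOG START WITH '/tmp/fwd_';
RMAN> RECOVER DATABASE NOREDO;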

TROUBLESHOOTING & SYSTEMSTATE ANALYSIS

151. What are the steps to collect a Level 10 trace for a session and interpret wait event sequencing?
To collect an extended 10046 trace (level 8 adds wait events; level 12 adds waits plus bind variables), use
DBMS_MONITOR.SESSION_TRACE_ENABLE or ALTER SESSION SET EVENTS. This captures detailed
wait events, bind variables, and execution paths. Analyze the trace using Oracle’s tkprof utility which formats
raw data into readable sections. Pay attention to wait event sequencing to identify bottlenecks, dependencies,
and idle periods. Correlate wait times with SQL execution steps to pinpoint performance issues at granular level.
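A sketch enabling and formatting an extended trace (the SID/serial# and trace file name are hypothetical):

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 4567, waits => TRUE, binds => TRUE);
-- ... reproduce the workload, then:
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);

$ tkprof orcl_ora_12345.trc out.prf sort=exeela,fchela sys=no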

152. How do you generate and analyze a SYSTEMSTATE dump for deadlock analysis across RAC?
Generate a SYSTEMSTATE dump using ORADEBUG on all RAC nodes involved during a suspected deadlock (the -G ALL
option dumps every instance from one node). The dump contains a snapshot of all sessions, locks, and waits. Analyze the dump files with
Oracle Support tools or by manual inspection for session states and locked resources. Look for wait chains and
blocking sessions. Correlate session IDs and SQL statements to identify deadlock victims and root causes,
facilitating resolution.

153. What’s the difference between process state dumps and session heap dumps for memory
corruption cases?
Process state dumps capture the entire process memory and thread context, providing a broad view useful in
severe corruption or crash scenarios. Session heap dumps are more focused snapshots of the user session’s
private memory structures, useful for analyzing memory leaks or corruption localized to a session. Process
dumps are larger and more comprehensive, while session heap dumps are lightweight and targeted. Both are
used for Oracle Support diagnostic analysis but differ in scope and granularity.

154. How do you decode ORA-00600 errors using incident packaging service and analyze trace files?
The Incident Packaging Service (IPS) automates collection of trace files, dumps, and logs related to ORA-00600
internal errors. By uploading these to Oracle Support, IPS correlates error codes with known bugs and provides
diagnostic reports. Trace files contain detailed error stack and context, which can be manually analyzed using
ORADEBUG commands and examining call stacks. Combining IPS reports with trace analysis helps pinpoint
root causes and resolution steps for these critical errors.

155. What are OS-level traces or logs you must collect for cases of repeated node eviction in cluster?
Collect clusterware logs such as crsctl output, alert.log, and evmd logs for Oracle Clusterware. Additionally, OS-
level logs like /var/log/messages, kernel traces, network interface statistics, and storage subsystem logs are
crucial. Tools like tcpdump or dtrace can capture network anomalies. These logs help diagnose connectivity
issues, heartbeat failures, or resource contention that cause node evictions. Coordinated analysis reveals
environmental causes outside Oracle software.

156. How do you analyze excessive mutex sleeps and how do they affect cursor sharing?
Excessive mutex sleeps indicate contention for shared internal Oracle structures. Analyze
V$MUTEX_SLEEP_HISTORY and wait event cursor: mutex X to identify hotspots. High sleeps reduce
concurrency, delay cursor operations, and increase CPU consumption. This negatively impacts cursor sharing
by serializing parse and execute phases. Resolving includes tuning application cursor usage, increasing
cursor_sharing parameter, or applying patches to reduce mutex contention in specific areas like shared pool or
library cache.

157. How does Oracle represent chained rows and how to trace fetch inefficiencies due to row
chaining?
Chained rows span multiple data blocks when row size exceeds block capacity or updates increase row size.
After ANALYZE, the chained-row count is reported in DBA_TABLES.CHAIN_CNT. Fetch inefficiencies arise because
multiple I/O operations are needed per row. Trace using ANALYZE TABLE ... LIST CHAINED ROWS (which populates a
CHAINED_ROWS table) and watch the 'table fetch continued row' statistic in V$SYSSTAT. Performance impact also
shows up as extra buffer gets and physical reads. Remedies include increasing PCTFREE, minimizing row migration,
or table reorganization (for example, ALTER TABLE ... MOVE).
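A sketch listing chained/migrated rows into the standard CHAINED_ROWS table created by utlchain.sql (the table name is hypothetical):

@?/rdbms/admin/utlchain.sql
ANALYZE TABLE app_user.orders LIST CHAINED ROWS INTO chained_rows;
SELECT COUNT(*) FROM chained_rows WHERE table_name = 'ORDERS';

-- System-wide symptom
SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';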

158. How do you use V$WAITCLASSMETRIC_HISTORY to correlate spikes in DB time?
V$WAITCLASSMETRIC_HISTORY captures historical wait event metrics grouped by wait classes like CPU,
I/O, and concurrency. Correlate spikes in DB time by analyzing time-based samples of wait time and count for
each class. Identifying dominant wait classes during performance degradation helps isolate root causes such as
CPU saturation or I/O bottlenecks. This data complements AWR and ADDM reports for trend analysis and
targeted tuning.

159. What are signs of shared pool fragmentation and how can it be resolved dynamically?
Signs include frequent library cache latch waits, parse failures, ORA-4031 errors (shared memory allocation
failure), and high mutex contention. Fragmentation reduces available contiguous memory chunks for caching
SQL or PL/SQL objects. Dynamic resolution involves flushing specific caches (ALTER SYSTEM FLUSH
SHARED_POOL), resizing shared pool via SHARED_POOL_SIZE parameter, or enabling Automatic Shared
Memory Management (ASMM). Application tuning to reduce hard parses also prevents fragmentation.

160. How do you isolate session-level buffer cache inefficiencies using V$BH and V$SESSION_WAIT?
Use V$BH to identify hot blocks causing buffer busy waits by examining buffer header statistics. Join with
DBA_OBJECTS to map blocks to segments. Simultaneously, query V$SESSION_WAIT for sessions waiting
on buffer cache events. Cross-referencing these identifies sessions impacted by cache contention. Further drill-
down into SQL execution plans reveals queries causing hot block access. This targeted analysis enables
focused tuning like partitioning or freelist adjustment to reduce contention.

STORAGE, IOPS, ASM, FILESYSTEMS

161. How does Oracle determine I/O priority in ASM across multiple diskgroups with different
redundancy levels?
Oracle ASM determines I/O priority based on diskgroup attributes such as redundancy (external, normal, high),
disk I/O latency, and weight settings. Rebalance operations prioritize higher redundancy levels to maintain
availability. Critical operations like redo log writes receive higher I/O priority regardless of diskgroup. ASM also
uses the disk’s performance profile and load statistics to balance I/O. Allocation Units (AUs) are evenly
distributed considering redundancy and available throughput. Diskgroup usage percentage can affect priority
dynamically.

162. What are the implications of using Exadata storage indexes vs normal B-tree indexes in
performance?
Exadata storage indexes dynamically eliminate disk I/O by tracking min/max column values per storage region,
allowing smart scans to skip irrelevant blocks. B-tree indexes, in contrast, are maintained in the database and
used during index range scans. Storage indexes reduce CPU and I/O at storage layer without affecting buffer
cache. They are not persisted and only benefit full table scans in Exadata. B-tree indexes are preferred for
OLTP but storage indexes shine in large analytical queries.

163. Describe the internal rebalance process in ASM and how it allocates AU during disk addition.
ASM rebalancing evenly distributes AUs across all disks within a diskgroup when new disks are added or
removed. It uses background processes like RBAL and ARBx to relocate extents while honoring redundancy
policies. The rebalance power parameter controls CPU usage and rebalance speed. ASM metadata tracks AU
placement and moves extents in small chunks to avoid service disruption. Only used or hot data segments are
prioritized during rebalance, making the process efficient.

164. How does Oracle optimize direct path reads in smart scan-enabled systems?
In smart scan systems like Exadata, Oracle bypasses the buffer cache for direct path reads and pushes SQL
predicates, projections, and filtering to the storage layer. This minimizes data transferred to the database server.
Direct path reads occur when large table scans are invoked, especially in parallel queries. Smart scan engines
apply filtering and return only needed rows/columns. This significantly reduces IOPS, memory consumption,
and CPU load on the database node.

165. What is the behavior of Oracle when a diskgroup becomes dismounted on one instance in RAC?
When a diskgroup dismounts from one RAC instance, that instance loses access to all files in that diskgroup,
including datafiles, tempfiles, and controlfiles. Other RAC nodes continue operation if the diskgroup is healthy on
their end. ASM automatically attempts to remount the diskgroup based on retry settings. If the dismount is due
to disk or path failure, corrective action must be taken. Cluster-level operations like rebalancing or recovery may
get impacted during the dismount.

166. How do you measure ASM metadata I/O load using X$KFFXP and V$ASM_DISKGROUP_STAT?
X$KFFXP provides granular extent-level metadata, allowing assessment of allocation unit distribution.
V$ASM_DISKGROUP_STAT shows real-time statistics on reads, writes, and IOPS at diskgroup level. By
querying both views, you can analyze disk utilization patterns and metadata hotspots. High metadata I/O may
indicate frequent file creations, extent relocations, or diskgroup rebalance activity. Monitoring these views helps
tune ASM performance and plan capacity proactively.

167. What are the pros and cons of using ASMFD vs ASMLib vs UDEV for ASM disk presentation?
ASMFD (ASM Filter Driver) offers best integration with Oracle, supports I/O fencing, and simplifies disk labeling.
ASMLib is Oracle-maintained but requires kernel compatibility and manual patching. UDEV is OS-native,
flexible, and widely supported but lacks fencing features. ASMFD provides best protection and performance in
RAC setups, but it’s Oracle-only. UDEV is preferred in heterogeneous environments. ASMLib may be
deprecated in some platforms, making ASMFD and UDEV future-safe options.

168. How do you identify and resolve IO slowness using AWR I/O statistics sections?
In AWR, the I/O statistics section (Tablespace I/O, File I/O, Segment I/O) shows read/write latencies, physical
IOPS, and wait times. High average wait times or queue lengths indicate I/O bottlenecks. Identifying top
segments or datafiles contributing to slowness helps isolate problematic SQL or objects. Resolution involves
tuning SQL, adding storage bandwidth, or redistributing data across faster disks. Monitoring over time ensures
the I/O path remains optimized.
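Between AWR snapshots, the same file-level counters can be checked cumulatively, for example (V$FILESTAT times are in centiseconds; AWR reports the deltas of these counters):

SELECT d.name AS datafile,
       f.phyrds, f.phywrts,
       ROUND(f.readtim  * 10 / NULLIF(f.phyrds, 0), 2)  AS avg_read_ms,
       ROUND(f.writetim * 10 / NULLIF(f.phywrts, 0), 2) AS avg_write_ms
FROM   v$filestat f
JOIN   v$datafile d ON d.file# = f.file#
ORDER  BY avg_read_ms DESC NULLS LAST;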

169. What new compression features are supported in 23ai for OLTP and how does it impact disk
writes?
Oracle 23ai introduces enhancements to Advanced Row Compression and Heat Map-driven compression for
OLTP. Data blocks are now compressed more efficiently without impacting write performance. It uses
lightweight CPU algorithms and modifies only changed columns, minimizing redo generation. This reduces
storage footprint and improves IOPS efficiency. Compression-aware indexes further improve lookup
performance. Disk writes are reduced due to fewer block modifications and better memory-to-disk compression
ratios.

170. How do you monitor redo log IO separately from datafile IO in high-throughput systems?
Use V$IOSTAT_FILE, which breaks I/O out by file type: filter on FILETYPE_NAME = 'Log File' for redo and 'Data File'
for datafiles (V$FILESTAT itself covers datafiles only). V$LOG_HISTORY, V$LOG, and V$ARCHIVED_LOG provide insights
into log switch frequency and archive lag. In AWR, compare redo IOPS versus data IOPS to spot imbalance. High redo I/O
may indicate commit-heavy workloads or inefficient logging. Monitoring the two streams separately enables focused
tuning of redo log size, LGWR latency, and disk configuration.
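A hedged sketch against V$IOSTAT_FILE (column and file-type names as documented for that view; verify on your release):

SELECT filetype_name,
       SUM(small_read_reqs  + large_read_reqs)  AS read_reqs,
       SUM(small_write_reqs + large_write_reqs) AS write_reqs,
       SUM(small_read_megabytes  + large_read_megabytes)  AS read_mb,
       SUM(small_write_megabytes + large_write_megabytes) AS write_mb
FROM   v$iostat_file
WHERE  filetype_name IN ('Data File', 'Log File')
GROUP  BY filetype_name;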

SECURITY, WALLET, ENCRYPTION, AUDITING

171. What are differences between software and hardware wallets in Oracle and how are they
integrated into TDE?
Software wallets store TDE keys in OS-encrypted files managed by Oracle Key Vault or local filesystem.
Hardware wallets use HSM (Hardware Security Module) for secure, tamper-proof key storage. Software wallets
are easier to manage but may be vulnerable to OS-level access. Hardware wallets provide better compliance
(FIPS, PCI DSS) and physical protection. Both integrate with TDE transparently, but hardware wallets offer
higher security assurance and centralized control for enterprise environments.

172. How does Oracle detect tampering or unauthorized change to a database wallet?
Oracle detects wallet tampering via cryptographic checksum and integrity validation mechanisms during wallet
open operations. Any unauthorized modification renders the wallet unreadable. Additionally, audit trails log
wallet access and changes. In HSM-backed wallets, access is tightly restricted to authorized modules only.
Oracle Key Vault logs access attempts and alerts for abnormal activities. Password changes, key rotations, and
wallet status are logged for compliance and auditability.

173. Describe how DBMS_CRYPTO and TDE coexist and where they differ in encryption scope.
DBMS_CRYPTO is a PL/SQL package for encrypting data programmatically at column-level or in transit,
offering application-controlled encryption. TDE (Transparent Data Encryption) encrypts entire tablespaces or
columns automatically at rest without application changes. DBMS_CRYPTO requires manual key management
and is suitable for selective encryption needs. TDE integrates with Oracle wallets and handles
encryption/decryption transparently. They can coexist, with TDE securing storage and DBMS_CRYPTO
securing application-specific fields.
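A minimal DBMS_CRYPTO sketch for application-level column encryption (key handling is deliberately simplified here; in practice the key is never generated and discarded inline like this):

DECLARE
  l_key RAW(32)    := DBMS_CRYPTO.RANDOMBYTES(32);   -- 256-bit key
  l_src RAW(2000)  := UTL_I18N.STRING_TO_RAW('card-1234', 'AL32UTF8');
  l_enc RAW(2000);
BEGIN
  l_enc := DBMS_CRYPTO.ENCRYPT(
             src => l_src,
             typ => DBMS_CRYPTO.ENCRYPT_AES256
                    + DBMS_CRYPTO.CHAIN_CBC
                    + DBMS_CRYPTO.PAD_PKCS5,
             key => l_key);
  DBMS_OUTPUT.PUT_LINE(RAWTOHEX(l_enc));
END;
/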

174. How do you enforce mandatory password complexity policies across CDBs and PDBs centrally?
Password complexity in a multitenant architecture is enforced using a shared profile or password verify function
at the CDB level, applied via CREATE PROFILE or ALTER PROFILE and associated with users. Oracle 21c+
allows centralized policy enforcement using COMMON users and profiles. Auditing and
ORA_SECURECONFIG baseline also help validate enforcement. Profiles can be cloned into PDBs and
adjusted using CONTAINER=ALL for global application. Policies include length, special characters, and reuse
prevention.
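A hedged example of a common profile applied across containers (profile and user names are illustrative; ORA12C_VERIFY_FUNCTION ships with the database, and common objects follow the COMMON_USER_PREFIX naming rule):

-- Run as a common user in CDB$ROOT
CREATE PROFILE c##secure_profile LIMIT
  PASSWORD_VERIFY_FUNCTION ora12c_verify_function
  FAILED_LOGIN_ATTEMPTS    5
  PASSWORD_REUSE_MAX       10
  PASSWORD_LIFE_TIME       90
  CONTAINER = ALL;

ALTER USER c##app_admin PROFILE c##secure_profile CONTAINER = ALL;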

175. What are changes in 23ai in passwordless authentication or biometric integration (if supported)?
Oracle 23ai introduces enhancements in passwordless login using FIDO2 standards and certificate-based
authentication via Oracle Identity Cloud Service (IDCS). Integration with biometric systems is available through
external identity providers and federated SSO. These methods reduce risk of credential theft and simplify user
experience. Authentication is offloaded to trusted devices or biometrics, verified via tokens or smartcards.
Passwordless authentication is enforced via network ACLs and centralized authentication policies.

176. How do you rotate TDE master keys securely in a RAC environment with minimal downtime?
In a RAC setup, rotate the TDE master key with ADMINISTER KEY MANAGEMENT SET KEY ... IDENTIFIED BY the keystore
password, optionally adding a USING TAG clause for traceability. Oracle propagates the new key across all RAC nodes
through the shared wallet or Oracle Key Vault. The operation is online and does not interrupt access to encrypted
data. Use an auto-login keystore, or a software/hardware wallet with a consistent configuration across nodes. Back up
the wallet and old keys before rotation for rollback safety.
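A hedged command sequence for an online rotation (keystore password, tag, and backup identifier are placeholders):

-- Confirm the keystore is OPEN on every instance first
SELECT con_id, status, wallet_type FROM v$encryption_wallet;

-- Rotate (re-key) the TDE master encryption key across all containers
ADMINISTER KEY MANAGEMENT SET KEY
  USING TAG 'Q3-2025-rotation'
  FORCE KEYSTORE
  IDENTIFIED BY "keystore_password"
  WITH BACKUP USING 'pre_rotation'
  CONTAINER = ALL;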

177. What are options to audit only failed DMLs or logins without performance impact?
Use Unified Auditing with policies that capture only failed attempts, defined with CREATE AUDIT POLICY and enabled
via AUDIT POLICY ... WHENEVER NOT SUCCESSFUL. For DML, audit only specific tables or actions (for example INSERT on
a sensitive table) under the same WHENEVER NOT SUCCESSFUL condition. This avoids full statement logging and keeps
the performance overhead low. The legacy AUDIT_TRAIL=DB,EXTENDED setting is relevant only if traditional auditing is
still in use. Unified Audit policies can be enabled selectively at the CDB or PDB level to minimize resource usage.
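A sketch of failure-only policies (policy and table names are illustrative):

-- Failed logons only
CREATE AUDIT POLICY failed_logon_pol ACTIONS LOGON;
AUDIT POLICY failed_logon_pol WHENEVER NOT SUCCESSFUL;

-- Failed DML against one sensitive table only
CREATE AUDIT POLICY failed_dml_pol
  ACTIONS INSERT ON hr.employees, UPDATE ON hr.employees, DELETE ON hr.employees;
AUDIT POLICY failed_dml_pol WHENEVER NOT SUCCESSFUL;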

178. What are key columns in UNIFIED_AUDIT_TRAIL and how do you extract per-user detailed
activities?
UNIFIED_AUDIT_TRAIL includes columns such as EVENT_TIMESTAMP, DBUSERNAME, ACTION_NAME, OBJECT_NAME, SQL_TEXT,
CLIENT_IDENTIFIER, and OS_USERNAME. To extract per-user activity, filter on DBUSERNAME and a time window in the
WHERE clause. Joining with DBA_USERS or V$SESSION, or carrying a login identifier in CLIENT_IDENTIFIER, adds session
context. Audit records can be offloaded to JSON/XML for downstream analysis, and creating views or exporting to
external systems enables flexible security monitoring.
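For example (user name and time window are illustrative):

SELECT event_timestamp, dbusername, action_name, object_schema, object_name, sql_text
FROM   unified_audit_trail
WHERE  dbusername = 'APP_USER'
AND    event_timestamp > SYSTIMESTAMP - INTERVAL '1' DAY
ORDER  BY event_timestamp;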

179. How do you prevent session hijacking in a multitenant system using network ACLs?
Network ACLs in Oracle restrict which hosts and users can access external resources. By limiting access via
DBMS_NETWORK_ACL_ADMIN, session spoofing or hijacking is mitigated. Use ACLs to restrict outbound
network calls and enforce strict privilege controls. Combine with VPD, SSL, and APP CONTEXT policies to
validate session origin. Use secure application roles and IP validation via login triggers to strengthen session
authentication.

180. What is the performance overhead of unified auditing and how do you reduce its size in SYS
tablespace?
Unified Auditing introduces minimal overhead due to its memory-efficient queue and batched writes. However,
excessive audit generation can grow the AUDSYS schema in the SYSAUX tablespace. To reduce space
usage, regularly purge old audit data using DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL. Move audit tables
to custom tablespace using DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION. Also, fine-tune policies to
log only necessary events, and use file-based audit trails for large environments.
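A hedged maintenance sketch using the documented DBMS_AUDIT_MGMT calls (tablespace name and 90-day retention are illustrative):

BEGIN
  -- Relocate the unified audit trail out of SYSAUX
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    audit_trail_location_value => 'AUDIT_TBS');

  -- Mark everything older than 90 days as archived, then purge up to that point
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    last_archive_time => SYSTIMESTAMP - INTERVAL '90' DAY);

  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    use_last_arch_timestamp => TRUE);
END;
/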

ADVANCED SQL, COSTING, PARSING & PLANS

181. How does bind peeking influence plan selection in adaptive cursor sharing?
Bind peeking lets Oracle choose a plan based on initial bind values, but it can lead to suboptimal plans if
subsequent values vary. Adaptive Cursor Sharing (ACS) improves this by tracking execution statistics and
generating multiple cursors for different bind scenarios. Oracle evaluates predicates, cardinality estimates, and
bind sensitivity to decide on reuse. ACS prevents performance issues due to skewed data. It reduces the need
for hints and helps stabilize plan quality. Execution plan history is maintained in the cursor cache. V$SQL and
V$SQL_CS_STATISTICS aid in monitoring.
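Bind sensitivity and bind awareness can be observed per child cursor, for example (the SQL ID is a placeholder):

SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware,
       is_shareable, executions, buffer_gets
FROM   v$sql
WHERE  sql_id = '&sql_id';

-- Selectivity ranges tracked by ACS for each bind-aware child cursor
SELECT child_number, predicate, range_id, low, high
FROM   v$sql_cs_selectivity
WHERE  sql_id = '&sql_id';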

182. What are internal structures of SQL plan directives and when do they expire or invalidate?
SQL Plan Directives (SPD) are stored in SYS.SPD$ and guide the optimizer when statistics are insufficient.
They advise on column groupings and dynamic sampling. SPDs are created automatically during parse and
maintained in SYSAUX. They expire after a defined inactivity threshold or are purged during upgrade.
Invalidation can occur if object statistics are refreshed or structure changes. Views like
DBA_SQL_PLAN_DIRECTIVES help manage them. Oracle 21c enhances SPDs with improved feedback
loops. SPDs influence cardinality and join selectivity estimates.

183. Describe the optimization process of star transformation join methods and how to force them.
Star transformation rewrites queries to access dimension tables first using bitmap indexes, then joins filtered
results to the fact table. It is ideal for star schemas in data warehouses.
STAR_TRANSFORMATION_ENABLED parameter must be set to TRUE or TEMP_DISABLE. Optimizer uses
subquery unnesting and join elimination techniques. Hints like STAR_TRANSFORMATION and FACT can
force it. Indexes and accurate stats on dimension keys are crucial. Execution plans will show BITMAP
CONVERSION and JOIN FILTER steps. Enhanced in 21c with vector filtering.
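A hedged sketch against an illustrative star schema (bitmap indexes on the fact-table foreign keys are assumed to exist):

ALTER SESSION SET star_transformation_enabled = TRUE;

SELECT /*+ STAR_TRANSFORMATION FACT(s) */
       t.calendar_year, p.prod_category, SUM(s.amount_sold)
FROM   sales s, times t, products p
WHERE  s.time_id = t.time_id
AND    s.prod_id = p.prod_id
AND    t.calendar_year  = 2024
AND    p.prod_category  = 'Electronics'
GROUP  BY t.calendar_year, p.prod_category;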

184. How do you track optimizer dynamic sampling decisions for a query in 21c?
Dynamic sampling decisions can be tracked using DBMS_XPLAN.DISPLAY_CURSOR with ALLSTATS
LAST. Notes section in the plan indicates sampling level used. V$SQL_PLAN contains flags showing sampling
activity. SQL Monitor and Real-Time SQL Monitoring give detailed insight into sampling delays.
_OPTIMIZER_TRACE parameter can be enabled for tracing. Adaptive Sampling Feedback is shown in
optimizer statistics. Sampling is triggered when stats are missing or stale. Enhanced heuristics in 21c improve
accuracy and reduce parse overhead.
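For example, the Note section of the cursor plan reports the sampling level actually used:

-- ALLSTATS figures require STATISTICS_LEVEL=ALL or the gather_plan_statistics hint;
-- the Note section shows e.g. "dynamic statistics used: dynamic sampling (level=2)"
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(sql_id => '&sql_id', format => 'ALLSTATS LAST'));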

185. What is cardinality feedback and how can it lead to plan instability?
Cardinality feedback adjusts estimates based on previous execution row counts. Initially, Oracle uses static
stats, then adapts with feedback. It improves future plan generation but can lead to plan instability in fluctuating
workloads. In 21c, it evolved into Statistics Feedback with better tracking. Execution info is stored in
V$SQL_PLAN and DBA_SQL_PLAN_DIRECTIVES. Excessive feedback causes frequent plan changes. SQL
Plan Baselines can be used to stabilize plans. Feedback is more accurate with histograms and extended stats.

186. What is the lifecycle of a SQL plan baseline — creation, evolution, acceptance, and fixing?
Plan baselines are created manually using DBMS_SPM or auto-captured. New plans are marked as non-
accepted and undergo evaluation. During evolution, SQL Tuning Advisor or automatic verification compares
performance. If better, the plan is accepted into the baseline. Fixing a plan locks it for consistent reuse unless
un-fixed. V$SQL, DBA_SQL_PLAN_BASELINES, and DBA_SQL_PROFILES help in monitoring. Evolution
statistics are stored and managed in SYSAUX. Baselines enhance stability and reduce regression during
upgrades.
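A hedged lifecycle sketch using DBMS_SPM (the SQL ID and SQL handle are placeholders):

DECLARE
  n PLS_INTEGER;
  r CLOB;
BEGIN
  -- 1. Capture: load the current cursor-cache plan as a baseline
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '7q9xyz123abcd');

  -- 2. Evolution: verify any non-accepted plans for that statement
  r := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(sql_handle => 'SQL_abc123');

  -- 3. Fixing: lock the accepted plan for consistent reuse
  n := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
         sql_handle      => 'SQL_abc123',
         attribute_name  => 'FIXED',
         attribute_value => 'YES');
END;
/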

187. How do you detect a wrong join order chosen by CBO and enforce a better one without hints?
Wrong join orders can be detected using DBMS_XPLAN output, cardinality estimates, and SQL Monitor.
Suboptimal joins usually result from poor stats or missing histograms. SQL Plan Baselines or Profiles allow
influencing plans without hardcoded hints. Creating extended stats or rewriting queries can help. Adjusting
OPTIMIZER_MODE or join methods (e.g., nested loop vs. hash) also influences behavior. V$SQL_PLAN and
10053 trace files offer insights. Oracle 23ai adds enhanced heuristics for join reordering.

188. How do you measure parse-to-execute ratio and what causes excessive parsing overheads?
The parse-to-execute ratio is measured from V$SQLAREA by comparing PARSE_CALLS to EXECUTIONS. A high ratio indicates
excessive parsing caused by literal SQL or cursor aging. Causes include missing bind variables, invalidations, or
application design. CURSOR_SHARING=FORCE affects cursor reuse (the SIMILAR setting is deprecated). Excessive parsing
drives up CPU usage and latch contention. Use CURSOR_SHARING and session cursor caching for improvements.
V$LIBRARYCACHE shows parse efficiency, and tools like AWR and ADDM help correlate parsing with system load.
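A quick sketch listing the statements with the worst parse behavior (the execution threshold is arbitrary):

SELECT sql_id, parse_calls, executions,
       ROUND(100 * parse_calls / NULLIF(executions, 0), 1) AS parse_pct
FROM   v$sqlarea
WHERE  executions > 100
ORDER  BY parse_pct DESC
FETCH FIRST 20 ROWS ONLY;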

189. What differences are introduced in Oracle 23ai optimizer for vector and AI-enhanced queries?
23ai optimizer introduces vector-aware path selection, adaptive embedding handling, and AI-index integration. It
recognizes VECTOR and JSON_EMBED types during parse. Plans now include VECTOR_SEARCH and
VECTOR_FILTER operators. Costing adjusts based on similarity search and ANN index performance. New
statistics types and column metadata assist in AI query selectivity. Execution feedback loop is enhanced for
embeddings. V$SQL_HINT and DBA_AI_INDEXES show vector usage. Hybrid OLTP/AI plans are common in
AI-augmented workloads.

190. How do you troubleshoot optimizer choosing index fast full scan instead of index range scan?
Use DBMS_XPLAN and SQL Monitor to confirm access paths. A fast full scan is preferred when the query can be answered
from the index alone and a large fraction of its blocks must be read, whereas a range scan is costed better for
selective predicates. Check statistics, missing histograms, or skewed data. Adjusting OPTIMIZER_INDEX_COST_ADJ
influences the costing. Re-gathering stats with METHOD_OPT on the indexed columns may help. Hints like INDEX, or
rewriting the access path, can be used to test alternatives. Adaptive plans may also toggle between access types.
Analyze with V$SQL_PLAN and autotrace output.

NEW FEATURES & DEEP ARCHITECTURE CHANGES (ORACLE 23AI)

191. What is the role of vector embeddings in 23ai and how does Oracle store and retrieve them
efficiently?
Vector embeddings enable similarity search and AI tasks inside Oracle SQL. Oracle stores them in VECTOR
datatype columns with optional ANN (Approximate Nearest Neighbor) indexes. Data is compressed and
structured for efficient cache and access. Queries use distance metrics like cosine or Euclidean. Indexing
methods include HNSW and IVF for high-performance lookup. Storage is managed within row format or direct-
lob structures. Retrieval leverages vector scan operators and memory-resident caches.

192. How do you enable and test Vector Search indexes and what are their optimizer implications?
Create VECTOR columns, then use CREATE INDEX with VECTOR clause. Optimizer includes
VECTOR_SEARCH operations in plans. SQL functions like VECTOR_DISTANCE or ANN_SEARCH are
used. Indexes can be tested with EXPLAIN PLAN or Real-Time SQL Monitoring. Costs are compared against
traditional scans. Optimizer considers dimension, distance type, and selectivity. Views like DBA_AI_INDEXES
and V$SQL_PLAN help track. Proper vector stats ensure efficient plan selection.
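A hedged sketch based on the documented 23ai AI Vector Search syntax (table, index, dimension, and accuracy values are illustrative; verify exact clauses on your release):

CREATE TABLE docs (
  id        NUMBER PRIMARY KEY,
  embedding VECTOR(768, FLOAT32)
);

-- HNSW-style in-memory neighbor-graph vector index
CREATE VECTOR INDEX docs_vec_idx ON docs (embedding)
  ORGANIZATION INMEMORY NEIGHBOR GRAPH
  DISTANCE COSINE
  WITH TARGET ACCURACY 95;

-- Approximate top-10 similarity search; check the plan for a vector index access step
SELECT id
FROM   docs
ORDER  BY VECTOR_DISTANCE(embedding, :query_vec, COSINE)
FETCH  APPROX FIRST 10 ROWS ONLY;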

193. What internal changes exist in memory allocation in 23ai for self-optimizing workloads?
23ai enhances Auto Memory with AI workload support. It introduces memory pools for vector processing,
inference caching, and adaptive memory zoning. Background processes adjust allocations based on telemetry
and execution patterns. Memory movement between SGA and PGA components is more dynamic.
V$MEM_DYNAMIC_COMPONENTS and V$AI_MEMORY_STATS show distribution. Oracle leverages
machine learning to predict usage spikes. This reduces contention and improves mixed workload concurrency.

194. How does 23ai integrate AI inference models in SQL and what PL/SQL interfaces support it?
Inference models can be called directly from SQL using DBMS_AI or custom PL/SQL wrappers. SQL macros
include EMBEDDING or PREDICT functions. Models are trained externally or via AutoML and stored in-
database. Inference results are cached and pipelined for fast reuse. The optimizer treats them as deterministic
functions when cost is predictable. Execution is distributed across AI vector cores if available. PL/SQL APIs
manage deployment, scoring, and auditing.

195. What are the changes in workload capture and replay in 23ai and how do they improve testing?
Workload capture now logs AI ops, vector scans, and embedding metadata. DBMS_WORKLOAD_CAPTURE
supports tagging of AI-influenced queries. Replay runs with concurrency awareness and mimics multi-model
behavior. AI metrics are tracked in V$REPLAY_STATS. Captures are OCI-integrated and can simulate edge
cases like plan drift or vector lookup delays. DBMS_WORKLOAD_REPLAY enhancements improve feedback.
Capture filters now support embedding and graph op flags.

196. How does sharding configuration change in 23ai and what is new in deployment automation?
Sharding in 23ai supports autonomous sharding with dynamic rebalancing. Deployment uses GDSCTL and
Oracle Shard Director with AI-based placement. Shard catalog supports multi-PDB per shardgroup. Index
replication and caching strategies are improved. Terraform and REST APIs aid automation. Failure detection is
faster using vector-aware heartbeat signals. OCI-native integration allows cross-region sharding. DBA_SHARD
and DBA_SHARD_DIRECTOR views help monitor.

197. What metadata enhancements are made to DBA_TAB_COLUMNS in 23ai for JSON and AI types?
New columns track VECTOR, JSON_REL, and EMBEDDING data types. JSON schema mapping and
validation status are recorded. AI metadata includes vector dimension, indexing strategy, and similarity method.
DBMS_METADATA and DBA_JSON_COLUMNS assist in introspection. Extended stats like vector distribution
and sparsity are collected. These changes enhance plan generation for hybrid workloads. 23ai metadata
supports real-time DDL evolution for AI-aware schemas.

198. How are Graph and ML tables indexed in 23ai and how do you monitor their performance?
Graph tables use EDGE and PATH indexes for traversals. ML tables use FEATURE indexes for fast model
lookup. DBMS_GRAPH and DBMS_ML manage index creation. Performance is tracked via
V$GRAPH_STATS and DBA_ML_MODEL_RUNTIME. Optimizer includes GRAPH_SCAN operators in plans.
Graph operators leverage recursive WITH clauses and hierarchical traversal. Monitoring includes inference
latency, node fan-out, and graph depth. Indexes are memory-optimized and support HTAP.

199. What background processes have changed or introduced in 23ai to support AI-enhanced SQL
operations?
New processes include AIMON (AI Monitor), VECIDX (Vector Index Manager), and AIEXEC (AI Executor).
They manage caching, index refresh, and inference execution. MMON is enhanced to track vector metrics.
SGA and PGA resize decisions consider AI workloads. AIMON interfaces with DBRM for resource governance.
V$BGPROCESS shows new process health. These enhancements support AI scaleout and concurrency.
Background logs are tagged for AI traceability.

200. Describe how Oracle 23ai handles hybrid transactional/analytical processing (HTAP) workloads
and its impact on memory/cpu.
23ai supports HTAP by blending OLTP and vector analytics. Memory is divided into fast-access transactional
buffers and vector caches. Execution plans include MIXED_OPS showing both access patterns. AI indexes
reduce analytical load. CPU usage is balanced using adaptive task queues. DBA_HTAP_STATS tracks HTAP
query patterns. Resource Manager isolates noisy analytics from transactional users. V$HTAP_VIEW and AI
usage telemetry help with tuning. Architecture favors real-time analysis with minimal duplication.
