
ODI Tool and PL/SQL Interview

Questions
Here are various scenario-based questions for your interview covering ODI (Oracle
Data Integrator) and PL/SQL:

ODI Scenario-Based Questions


Scenario 1: Data Integration Project
"You're assigned to build an ODI interface that loads customer data from multiple
source systems (SQL Server, Flat Files) into an Oracle data warehouse. The sources
have different structures but need to be consolidated. How would you approach
this?"

Scenario 2: Performance Issues


"An existing ODI load process that used to take 30 minutes is now taking 4 hours
after a data volume increase. What steps would you take to identify and resolve the
performance bottleneck?"

Scenario 3: Error Handling


"During a nightly ODI load, some records fail due to data quality issues. The business
wants these records logged but the load to continue for good records. How would
you implement this in ODI?"

Scenario 4: Incremental Loading


"You need to design an ODI process that only loads changed data from source to
target (incremental load) rather than full loads. The source system doesn't have
reliable change tracking. How would you implement this?"

Scenario 5: Complex Transformation


"Your interface needs to implement a business rule that says: 'If customer status is
'Gold' and purchase amount > 1000,apply151000,apply15500, apply 10% discount.'
Where would you implement this logic in ODI and why?"

PL/SQL Scenario-Based Questions


Scenario 1: Data Processing
"You need to write a PL/SQL procedure that processes millions of records from a
staging table, applies complex business rules, and loads them into a target table.
How would you structure this to be efficient and handle errors?"

Scenario 2: Performance Tuning


"A report query running a PL/SQL function is performing poorly. The function is called
for each row in a large table. How would you identify and fix the performance issue?"

Scenario 3: Error Handling


"Your PL/SQL procedure is part of a nightly batch process. It fails at 2 AM with a
storage space error. How would you design the procedure to handle this and similar
errors gracefully?"

Scenario 4: Data Validation


"Write a PL/SQL block that validates customer records before loading them into the
system, checking for required fields, data formats, and business rules. How would
you return validation errors to the calling program?"

Scenario 5: Complex Logic Implementation


"You need to implement a business rule in PL/SQL that calculates loyalty points
based on purchase history, customer tier, and promotional periods. The calculation
has multiple conditions and exceptions. How would you structure this code for
maintainability?"

Combined ODI + PL/SQL Questions


Scenario 1: Integration Approach
"For a complex ETL process, when would you choose to implement transformation
logic in ODI versus PL/SQL packages? What factors would influence your decision?"

Scenario 2: Custom Solution


"Your ODI interface needs to call a PL/SQL package that performs specialized data
cleansing. How would you integrate this package into your ODI workflow?"

Scenario 3: Debugging
"An ODI process calling a PL/SQL package is failing with unclear error messages.
How would you troubleshoot this issue?"

Scenario 4: Scheduling
"You need to schedule a complex workflow that involves ODI jobs calling PL/SQL
packages, with dependencies between steps. How would you design and implement
this?"

Would you like me to provide sample answers or more detailed scenarios for any of
these questions?
Hi Deepseek, I have an interview on the ODI tool and PL/SQL. Could you please
provide different scenario-based questions for the interview, with answers?
ODI and PL/SQL Scenario-Based
Interview Questions with
Answers
Here are scenario-based questions along with detailed answers to help you
prepare for your interview on Oracle Data Integrator (ODI) and PL/SQL.

ODI Scenario-Based Questions with Answers


Scenario 1: Incremental Data Loading
Question:
"You need to design an ODI process that only loads changed records from a source
table (Oracle) to a target table (Oracle) instead of doing a full refresh. The source
system does not have a timestamp column. How would you implement this?"

Answer:

1. Use CDC (Change Data Capture) if the source database supports it (Oracle
CDC, LogMiner).
2. Alternative Approach (No CDC):
o Create a temporary staging table to hold all source data.
o Compare key columns between the staging table and target table
using a JOIN in an ODI mapping.
o Use a FILTER condition to load only new or modified records.
o Example SQL in ODI:
sql
WHERE NOT EXISTS (SELECT 1 FROM TARGET_TABLE T
                   WHERE T.PK_COLUMN = S.PK_COLUMN
                     AND T.HASH_VALUE = S.HASH_VALUE) -- hash check also picks up changed rows
3. Post-Load Step: Update a control table to track the last extraction point.
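For example, the control table can be a single row per source; the names in this
sketch are illustrative:
sql
CREATE TABLE etl_control (
  source_name     VARCHAR2(30) PRIMARY KEY,
  last_extract_id NUMBER,          -- high-water mark: max PK already loaded
  last_run_date   DATE
);

-- After a successful load, advance the high-water mark
UPDATE etl_control
   SET last_extract_id = (SELECT MAX(pk_column) FROM staging_table),
       last_run_date   = SYSDATE
 WHERE source_name = 'CUSTOMERS';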

Scenario 2: Handling Data Quality Issues


Question:
"During a nightly ODI load, some records fail due to invalid data (e.g., NULL in
mandatory fields, wrong formats). The business wants to log these errors but
continue loading valid records. How would you implement this?"

Answer:

1. Use ODI’s Error Handling:


oEnable "Continue on Error" in the Interface.
oDefine "Error Tables" in the Physical tab to capture rejected rows.
2. Data Quality Checks:
o Use FILTER conditions to separate bad records.
o Example:
sql
WHERE CUSTOMER_NAME IS NOT NULL
  AND REGEXP_LIKE(PHONE_NUMBER, '^[0-9]{10}$')
3. Logging Bad Data:
o Route failed records to an error table with error descriptions.
o Use ODI Procedures to email error reports.

Scenario 3: Performance Tuning in ODI


Question:
"An ODI job that loads 1 million records is running very slowly. What optimization
techniques would you apply?"

Answer:

1. Optimize SQL in Mappings:


o Use PARTITIONING in source/target tables.
o Replace *"SELECT " with only required columns.
2. LKM & IKM Settings:
o Use BULK loading (e.g., Oracle Direct Path Load).
o Increase commit interval (e.g., commit every 10,000 rows).
3. Parallel Execution:
o Use Scenarios with Parallel Steps.
o Split large extracts into smaller chunks (e.g., by date ranges).
4. Index Management:
o Drop indexes before load, rebuild after.
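For the index step, a pre/post pair along these lines could be wired into ODI
Procedure steps (index and table names are assumptions):
sql
-- Pre-load step: drop the index so each insert avoids index maintenance
DROP INDEX idx_target_customer;

-- ... bulk load runs here ...

-- Post-load step: rebuild; PARALLEL and NOLOGGING speed up the rebuild
CREATE INDEX idx_target_customer ON target_table (customer_id)
  PARALLEL 4 NOLOGGING;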

PL/SQL Scenario-Based Questions with


Answers
Scenario 1: Efficient Batch Processing
Question:
"You need to write a PL/SQL procedure that processes 10 million records from a
staging table and updates a target table. How would you optimize it?"

Answer:

1. Use BULK COLLECT & FORALL:


sql
DECLARE
TYPE t_ids IS TABLE OF STAGING_TABLE.ID%TYPE;
v_ids t_ids;
BEGIN
SELECT ID BULK COLLECT INTO v_ids FROM STAGING_TABLE;

FORALL i IN 1..v_ids.COUNT
UPDATE TARGET_TABLE
SET STATUS = 'PROCESSED'
WHERE ID = v_ids(i);
END;
2. Limit Batch Size:
o Process in chunks (e.g., 10,000 rows at a time).
3. Use MERGE Instead of Cursor Loops:
sql
MERGE INTO TARGET_TABLE T
USING STAGING_TABLE S
ON (T.ID = S.ID)
WHEN MATCHED THEN UPDATE SET T.COLUMN = S.COLUMN;

Scenario 2: Error Handling in PL/SQL


Question:
"A PL/SQL procedure is part of a nightly job. If it fails, the entire batch stops. How
would you make it resilient?"

Answer:

1. Use Exception Handling:

sql
BEGIN
-- Main logic
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
INSERT INTO ERROR_LOG(ERR_MSG, ERR_DATE)
VALUES (SQLERRM, SYSDATE);
COMMIT;
-- Optionally send email alert
END;
2. Log Errors to a Table:
o Track failed records instead of failing the whole job.
3. Retry Mechanism:
o Use AUTONOMOUS_TRANSACTION for logging without affecting
main transaction.
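A minimal sketch of such an autonomous logging routine (names are illustrative):
sql
CREATE OR REPLACE PROCEDURE log_error(p_msg VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;   -- commits independently of the caller
BEGIN
  INSERT INTO error_log (err_msg, err_date) VALUES (p_msg, SYSDATE);
  COMMIT;                          -- leaves the main transaction untouched
END;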

Scenario 3: Dynamic SQL in PL/SQL


Question:
"You need to write a procedure that dynamically generates and executes SQL based
on user input. How would you ensure security and performance?"

Answer:

1. Use EXECUTE IMMEDIATE with Bind Variables:

sql
PROCEDURE run_dynamic_sql(p_table_name VARCHAR2, p_id NUMBER) IS
  v_sql   VARCHAR2(1000);
  v_count NUMBER;
BEGIN
  v_sql := 'SELECT COUNT(*) FROM '
        || DBMS_ASSERT.SQL_OBJECT_NAME(p_table_name)
        || ' WHERE ID = :id';
  EXECUTE IMMEDIATE v_sql INTO v_count USING p_id;  -- a dynamic SELECT needs INTO
END;
2. Prevent SQL Injection:
o Use DBMS_ASSERT to validate object names.
o Avoid string concatenation with user inputs.

Combined ODI + PL/SQL Scenario


Scenario: Complex Transformation Logic
Question:
"Should you implement complex business logic (e.g., customer scoring) in ODI
mappings or PL/SQL? Justify your choice."

Answer:

 Use ODI for:


o Simple transformations (filters, joins, aggregations).
o Data flow orchestration.
 Use PL/SQL for:
o Complex calculations (recursive logic, multi-step processing).
o Reusable business rules.
 Best Practice:
o Call a PL/SQL package from ODI using a Procedure step for
complex logic.
o Example:
sql

BEGIN
pkg_customer_scoring.calculate_score();
END;

Final Tips for the Interview


✔ For ODI: Focus on mapping design, performance tuning, error handling,
and CDC.
✔ For PL/SQL: Emphasize bulk processing, exception handling, and dynamic
SQL security.
✔ Combined Scenarios: Know when to use ODI vs. PL/SQL and how they
integrate.

Would you like more scenario variations or mock interview practice? 😊


yes
Great! Here are additional scenario-based questions with answers to further
strengthen your ODI and PL/SQL interview preparation.

Advanced ODI Scenario-Based Questions


with Answers
Scenario 1: Slowly Changing Dimension (SCD)
Implementation
Question:
"How would you implement a Type 2 Slowly Changing Dimension (SCD) in ODI,
where historical changes must be tracked with effective dates?"

Answer:

1. Approach:
o Use ODI’s SCD Knowledge Module (KM) or a custom mapping.
o For Type 2 SCD, maintain:
 Surrogate keys (e.g., CUSTOMER_SK)
 Effective dates (START_DATE, END_DATE)
 Current flag (IS_CURRENT = 'Y'/'N')
o Steps:
 Check for changes (compare source and target).
 Expire old records (set END_DATE = SYSDATE, IS_CURRENT = 'N').
 Insert new versions (new START_DATE, IS_CURRENT = 'Y').
2. Example ODI Flow:
o LKM: Extract source data.
o IKM: Use IKM Oracle Slowly Changing Dimension or custom SQL.
o Post-Process: Log audit details.
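In custom SQL, the expire-and-insert steps could be sketched as follows
(dimension, staging, and sequence names are assumptions):
sql
-- Step 1: expire the current version of changed customers
UPDATE dim_customer d
   SET d.end_date = SYSDATE, d.is_current = 'N'
 WHERE d.is_current = 'Y'
   AND EXISTS (SELECT 1 FROM stg_customer s
                WHERE s.customer_id = d.customer_id
                  AND s.hash_value <> d.hash_value);

-- Step 2: insert a new current version for new and changed customers
INSERT INTO dim_customer (customer_sk, customer_id, hash_value,
                          start_date, end_date, is_current)
SELECT dim_customer_seq.NEXTVAL, s.customer_id, s.hash_value,
       SYSDATE, NULL, 'Y'
  FROM stg_customer s
 WHERE NOT EXISTS (SELECT 1 FROM dim_customer d
                    WHERE d.customer_id = s.customer_id
                      AND d.is_current = 'Y');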
Scenario 2: Handling Large File Processing
Question:
"You need to process a 10GB CSV file in ODI. The load is failing due to memory
issues. How would you optimize this?"

Answer:

1. Split the File:


o Use OS commands (e.g., split in Unix) or ODI tools to break into
smaller files (e.g., 1GB each).
2. Use File Streaming:
o Configure File Driver in ODI to read line-by-line instead of loading
entirely into memory.
3. Optimize LKM Settings:
o Use BULK LOAD for databases.
o Adjust JDBC fetch size for better performance.
4. Database Optimization:
o Use external tables (Oracle) or SQLLDR for direct loads.
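An external table over the split files might look like this (directory object and
file names are assumptions):
sql
CREATE TABLE ext_customer_csv (
  customer_id   NUMBER,
  customer_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('customers_part1.csv', 'customers_part2.csv')
)
REJECT LIMIT UNLIMITED;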

Scenario 3: Real-Time Data Integration


Question:
"The business needs near real-time updates from an OLTP system to a data
warehouse. How would you design this in ODI?"

Answer:

1. Use Change Data Capture (CDC):


o Oracle CDC or GoldenGate to capture changes.
o Set up ODI Journalizing to track modified rows.
2. Micro-Batch Processing:
o Schedule ODI scenarios every 5-10 minutes instead of daily.
3. Event-Driven Triggers:
o Use ODI REST API or OSB calls to trigger jobs on data change.
4. Performance Considerations:
o Avoid full table scans; use indexed CDC tables.

Advanced PL/SQL Scenario-Based Questions


with Answers
Scenario 1: Recursive Data Processing
Question:
"You need to write a PL/SQL procedure that processes hierarchical data (e.g.,
organizational chart). How would you approach this?"

Answer:
1. Use Recursive WITH Clause (Oracle 11g+):

sql
WITH org_hierarchy (employee_id, manager_id, lvl) AS (
  SELECT employee_id, manager_id, 1
    FROM employees
   WHERE manager_id IS NULL
  UNION ALL
  SELECT e.employee_id, e.manager_id, h.lvl + 1
    FROM employees e
    JOIN org_hierarchy h ON e.manager_id = h.employee_id
)
SELECT * FROM org_hierarchy;
2. Alternative (PL/SQL Cursor Loop):
sql
FOR rec IN (SELECT * FROM employees WHERE manager_id IS NULL) LOOP
  -- Process parent
  FOR child_rec IN (SELECT * FROM employees
                     WHERE manager_id = rec.employee_id) LOOP
    -- Process child
  END LOOP;
END LOOP;

Scenario 2: Dynamic Pivot Query


Question:
"You need to generate a pivot report where columns are dynamic (based on user
input). How would you implement this in PL/SQL?"

Answer:

1. Use Dynamic SQL + PIVOT:

sql
DECLARE
  v_sql     VARCHAR2(4000);
  v_columns VARCHAR2(1000) := '''Sales'', ''Marketing'', ''IT'''; -- dynamic input
  v_result  SYS_REFCURSOR;
BEGIN
  v_sql := 'SELECT * FROM (
              SELECT department, salary FROM employees
            )
            PIVOT (
              AVG(salary) FOR department IN (' || v_columns || ')
            )';
  OPEN v_result FOR v_sql;  -- a dynamic SELECT is returned via a ref cursor
END;
2. Alternative (XML + DBMS_SQL):
o Use DBMS_SQL for highly dynamic queries.
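A bare-bones DBMS_SQL pattern for a fully dynamic statement (the statement text
here is illustrative):
sql
DECLARE
  v_cursor INTEGER := DBMS_SQL.OPEN_CURSOR;
  v_rows   INTEGER;
BEGIN
  DBMS_SQL.PARSE(v_cursor,
                 'UPDATE employees SET salary = salary * 1.05',
                 DBMS_SQL.NATIVE);
  v_rows := DBMS_SQL.EXECUTE(v_cursor);   -- returns rows affected for DML
  DBMS_SQL.CLOSE_CURSOR(v_cursor);
END;
For dynamic queries, DBMS_SQL.TO_REFCURSOR can convert the executed cursor into
a ref cursor for the caller to fetch from.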

Scenario 3: Optimizing Cursor Performance


Question:
"A PL/SQL procedure uses a cursor fetching 1M rows, causing memory issues. How
would you optimize it?"

Answer:

1. Use BULK COLLECT with LIMIT:

sql
DECLARE
CURSOR c_data IS SELECT * FROM large_table;
TYPE t_data IS TABLE OF c_data%ROWTYPE;
v_data t_data;
BEGIN
OPEN c_data;
LOOP
FETCH c_data BULK COLLECT INTO v_data LIMIT 1000; -- Process in batches
EXIT WHEN v_data.COUNT = 0;
-- Process batch
END LOOP;
CLOSE c_data;
END;
2. Use FORALL for DML:
o Reduces context switching between SQL and PL/SQL engines.
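Combining the two: a sketch that fetches keys in chunks and applies each chunk
with one FORALL. Table and column names are illustrative; the keys go into a
scalar collection because FORALL cannot reference individual fields of a record
collection.
sql
DECLARE
  CURSOR c_ids IS SELECT id FROM large_table WHERE status = 'NEW';
  TYPE t_ids IS TABLE OF large_table.id%TYPE;
  v_ids t_ids;
BEGIN
  OPEN c_ids;
  LOOP
    FETCH c_ids BULK COLLECT INTO v_ids LIMIT 1000;
    EXIT WHEN v_ids.COUNT = 0;
    FORALL i IN 1..v_ids.COUNT          -- one context switch per chunk
      UPDATE large_table
         SET status = 'PROCESSED'
       WHERE id = v_ids(i);
    COMMIT;                             -- commit per chunk, not per row
  END LOOP;
  CLOSE c_ids;
END;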

Combined ODI + PL/SQL Scenarios


Scenario 1: Calling PL/SQL from ODI
Question:
"You need to call a PL/SQL package from ODI that performs complex validation
before loading data. How would you structure this?"

Answer:

1. Use ODI Procedure Step:


o Drag a Procedure in the ODI interface.
o Define the call:
sql
BEGIN
pkg_validation.validate_data();
END;
2. Error Handling:
o Capture PL/SQL exceptions in ODI Scenarios.
o Log errors to an ODI error table.

Scenario 2: Reusable Transformation Logic


Question:
"A transformation rule (e.g., currency conversion) is used in multiple ODI interfaces.
Should you implement it in ODI or PL/SQL?"

Answer:

 PL/SQL is better for:


o Complex logic (e.g., multi-step calculations).
o Reusability (call from multiple interfaces).
 ODI is better for:
o Simple expressions (e.g., AMOUNT * 1.1).
 Best Practice:
o Create a PL/SQL function (e.g., convert_currency()) and call it from ODI
mappings.
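Such a function might be sketched like this (the rate table and names are
assumptions); in a mapping, the target column expression then simply becomes
convert_currency(SRC.AMOUNT, SRC.CCY, 'USD'):
sql
CREATE OR REPLACE FUNCTION convert_currency(
  p_amount   NUMBER,
  p_from_ccy VARCHAR2,
  p_to_ccy   VARCHAR2
) RETURN NUMBER IS
  v_rate NUMBER;
BEGIN
  SELECT rate INTO v_rate
    FROM exchange_rates
   WHERE from_ccy = p_from_ccy
     AND to_ccy = p_to_ccy;
  RETURN ROUND(p_amount * v_rate, 2);
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RAISE_APPLICATION_ERROR(-20001,
      'No rate for ' || p_from_ccy || ' -> ' || p_to_ccy);
END;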

Final Tips for Success


✔ For ODI: Focus on mapping strategies, CDC, performance tuning, and error
logging.
✔ For PL/SQL: Master bulk processing, dynamic SQL, and hierarchical
queries.
✔ For Combined Scenarios: Know when to use PL/SQL vs. ODI and how they
interact.

Would you like a mock interview drill or specific topic deep dive? 😊
ODI scenario-based interview questions and answers from basic to advanced
Here’s a comprehensive list of ODI scenario-based interview
questions categorized from basic to advanced, along with detailed answers:

🔹 Basic ODI Scenario Questions

1. Simple Data Loading

Question:
"How would you create an ODI interface to load data from a flat file (CSV) into an
Oracle table?"
Answer:

1. Create a File Model in ODI for the CSV (define delimiter, encoding).
2. Create an Oracle Model for the target table.
3. Design an Interface:
o Source Datastore: Flat file.
o Target Datastore: Oracle table.
4. Use LKM File to SQL (to stage data).
5. Use IKM Oracle Incremental Update (for insertion).
6. Run the Scenario.

2. Filtering Data in ODI

Question:
"You need to load only active customers (STATUS = 'ACTIVE') from a source table.
How would you do this in ODI?"

Answer:

1. In the Interface, drag the source table.


2. Add a Filter component:
sql
WHERE STATUS = 'ACTIVE'


3. Map remaining columns to the target.

🔹 Intermediate ODI Scenario Questions

3. Joining Multiple Sources

Question:
"You have customer data in SQL Server and order data in Oracle. How would you join
them in ODI before loading into a data warehouse?"

Answer:

1. Create Models for both sources (SQL Server + Oracle).


2. Design an Interface:
o Drag both tables into the Diagram.
o Use a Join component (e.g., CUSTOMER_ID = ORDER_CUSTOMER_ID).
3. Use LKM SQL Server to Oracle (to move data to staging).
4. Use IKM Oracle Control Append for final load.

4. Incremental Load Without CDC


Question:
"How would you implement incremental loading if the source table has no
timestamp or CDC?"

Answer:

1. Use a Checksum/Hash Comparison:


o Add a derived column in the source:

sql
ORA_HASH(COL1 || COL2 || COL3) AS HASH_VALUE


o Compare with target hash in the Filter:
sql
WHERE NOT EXISTS (SELECT 1 FROM TARGET T
                   WHERE T.PK = S.PK
                     AND T.HASH_VALUE = S.HASH_VALUE)
2. Alternative: Use a control table to track last extracted IDs.

🔹 Advanced ODI Scenario Questions

5. Handling Errors in Batches

Question:
"Your ODI job processes 1M records, but some fail due to data issues. How would
you log errors without stopping the load?"

Answer:

1. Enable "Continue on Error" in the Interface.


2. Define Error Tables:
o In the Physical tab, specify an error table (ERR$_TARGET).
3. Use ODI Tools:
o ODIWriteErrorTable to log bad records.
4. Post-Process:
o Email error summaries using ODI OdiOSCommand or REST API.

6. Slowly Changing Dimension (SCD Type 2)

Question:
"How would you design an ODI flow for SCD Type 2 (track history with start/end
dates)?"

Answer:

1. Use the SCD KM:


o Select IKM Oracle SCD.
2. Configure:
o Natural Key: CUSTOMER_ID.
o Surrogate Key: CUSTOMER_SK (sequence).
o Tracking Columns: START_DATE, END_DATE, CURRENT_FLAG.
3. Logic:
o Expire old records (END_DATE = SYSDATE).
o Insert new versions (START_DATE = SYSDATE, CURRENT_FLAG = 'Y').

7. Real-Time Integration with ODI

Question:
"How would you set up near real-time data integration using ODI?"

Answer:

1. Use Change Data Capture (CDC):


o Enable Oracle CDC or GoldenGate.
2. ODI Journalizing:
o Create a Journal (J$) table to track changes.
3. Micro-Batching:
o Schedule scenarios every 5 minutes using ODISleep or REST API
triggers.
4. Optimize Performance:
o Use IKM Oracle CDC Apply for minimal latency.

🔹 Expert-Level ODI Scenario Questions

8. Dynamic File Processing

Question:
"You receive daily files with varying schemas (e.g., new columns added). How would
you handle this in ODI?"

Answer:

1. Use ODI Knowledge Modules (KMs):


o LKM File Dynamic to read metadata.
2. Flexible Mapping:
o Store column mappings in a control table (see the sketch after this list).
3. PL/SQL Pre-Processing:
o Dynamically generate SQL for variable columns.
4. Error Handling:
o Log mismatched columns for manual review.
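One way to realize the flexible-mapping idea is to assemble the column list from
the control table at run time (all names here are hypothetical):
sql
DECLARE
  v_cols VARCHAR2(4000);
BEGIN
  -- One row per target column, maintained in a control table
  SELECT LISTAGG(src_column, ', ') WITHIN GROUP (ORDER BY position)
    INTO v_cols
    FROM file_column_map
   WHERE file_name = 'daily_feed';

  EXECUTE IMMEDIATE 'INSERT INTO stg_daily_feed SELECT ' || v_cols
                 || ' FROM ext_daily_feed';
END;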

9. Parallel Execution in ODI


Question:
"Your ODI job processes 10 tables sequentially. How would you parallelize it?"

Answer:

1. Use ODI Packages:


o Create parallel execution branches.
2. Set Up Scenarios:
o Use OdiStartScen to trigger multiple scenarios concurrently.
3. Database-Level Parallelism:
o Use PARALLEL hints in IKM SQL.
4. Thread Control:
o Limit parallelism using ODI Tools.

10. Securing Sensitive Data

Question:
"Your ODI job processes PII (e.g., SSNs). How would you secure this data?"

Answer:

1. Data Masking:
o Use Oracle Data Redaction or ODI Variable Substitution.
2. Encryption:
o LKM File Encrypted for file transfers.
3. ODI Credentials:
o Store passwords in Secure Master Repository.
4. Audit Logs:
o Enable ODI Audit Framework to track access.

🎯 Key Takeaways for Interviews


 Basic: Focus on interfaces, filters, and joins.
 Intermediate: Know incremental loads, error handling, and SCDs.
 Advanced: Master CDC, parallelism, and dynamic processing.
 Expert: Be ready for real-time, security, and optimization scenarios.

Would you like PL/SQL integration scenarios or a mock interview simulation?


😊
Yes
Here’s a mock interview drill combining ODI and PL/SQL scenarios with suggested
responses to simulate real interview pressure:

Round 1: Basic Concepts


Interviewer:
"Walk me through how you’d create a simple ODI interface to load data from Oracle
to Snowflake."

You:

1. Setup Models:
o Create Oracle (source) and Snowflake (target) models in Topology.
o Define JDBC connections for both.
2. Interface Design:
o Drag Oracle table as source, Snowflake table as target.
o Use LKM SQL to SQL (AWS/Azure/GCP) for staging.
o Select IKM Snowflake Control Append for loading.
3. Execution:
o Generate scenario → Schedule with OdiStartScen.

Why this works: Shows understanding of cross-database loads.

Round 2: Error Handling


Interviewer:
"Your ODI job fails mid-way due to a PK violation. How do you ensure only valid
records are loaded?"

You:

1. Preventive Approach:
o Use Pre-Integration Check (e.g., WHERE NOT EXISTS (SELECT 1 FROM
TARGET WHERE PK=SOURCE.PK)).
2. Recovery:
o Enable Error Tables in Physical tab → Log bad records to ERR$_TABLE.
o Use ODI Tools to send alerts with OdiOSCommand (email script).
3. Restartability:
o Implement SCD Type 1 for updates to avoid duplicates.

Pro Tip: Mention ODI 12c+ Smart Restart to skip successful steps.

Round 3: PL/SQL + ODI Integration


Interviewer:
"A mapping requires complex data cleansing. Would you use ODI expressions or
PL/SQL? How?"

You:
Choice: PL/SQL for complex logic (e.g., address standardization).
Implementation:

1. Create a PL/SQL package:


sql
CREATE OR REPLACE PACKAGE pkg_cleanse AS
  PROCEDURE clean_customer(p_data IN OUT CLOB);
END;
2. Call from ODI:
o Add Procedure step in interface → Use OdiPLSQL tool:

sql
BEGIN
pkg_cleanse.clean_customer(:IN_DATA);
END;
3. Advantage:
o Reusable logic across interfaces.
o Better error handling (LOG ERRORS INTO err_table).

Round 4: Performance Tuning


Interviewer:
"A daily ODI job takes 6 hours. How would you optimize it?"

You:
Diagnosis First:

1. Check Session Logs for slow steps (e.g., full table scans).
2. Common Fixes:
o SQL Tuning: Add /*+ PARALLEL(8) */ hints in LKM.
o Bulk Processing: Replace row-by-row with FORALL in PL/SQL.
o Commit Frequency: Adjust COMMIT_INTERVAL in IKM.
3. Infrastructure:
o Increase JVM heap size (ODI_JAVA_OPTS=-Xmx4G).

Advanced: Suggest partitioning source tables by date.

Round 5: Real-World Scenario


Interviewer:
"Your client wants nightly sales data refreshed by 6 AM, but the source system is
unreliable. How do you ensure SLA?"

Your Action Plan:

1. Early Start:
o Schedule job at 2 AM with buffer time.
2. Fallback:
o If source fails, use staged data from yesterday (flag as "unverified").
3. Monitoring:
o Implement OdiSleep loops to check source availability.
4. Alerting:
o Trigger Slack/Teams alerts via OdiOSCommand if delayed.

Key Phrase:
"Design for resilience, not just success."

Final Tip: Ask Back!


When asked "Do you have questions?", try:

 "How does your team handle CDC for real-time analytics?"


 "What’s your biggest ODI pain point I could help solve?"

This shows engagement + problem-solving mindset.

Want to practice specific scenarios like CDC or debugging? Let me know! 🚀


Yes
Here’s a targeted mock drill with 5 critical ODI+PL/SQL scenarios you’re likely to
face, designed to reveal your technical depth and problem-solving approach:

Scenario 1: The Broken Incremental Load


Interviewer:
"Your incremental load job is inserting duplicate records. The source has no
timestamp. How do you fix this?"

Your Response:

1. Immediate Diagnosis:
o Check if the interface uses a key-based comparison (e.g., WHERE NOT
EXISTS). If missing, add:

sql
WHERE NOT EXISTS (SELECT 1 FROM TARGET T
                   WHERE T.BUSINESS_KEY = S.BUSINESS_KEY)
2. Advanced Solution:
o Implement hash comparison for changed data:

sql

AND ORA_HASH(S.COL1||S.COL2) != (SELECT ORA_HASH(COL1||COL2)
                                   FROM TARGET T
                                  WHERE T.BUSINESS_KEY = S.BUSINESS_KEY)
3. Prevention:
o Create a control table to track last successful load’s high-water mark.

Key Insight: Shows you understand both quick fixes and robust patterns.

Scenario 2: The Performance Nightmare


Interviewer:
"Your PL/SQL function called from ODI slows down when processing 10M rows. How
do you optimize?"

Your Playbook:

1. Eliminate Row-by-Row Processing:


o Replace scalar function calls with BULK COLLECT + FORALL:

sql
-- Fetch keys into a scalar collection; FORALL cannot reference
-- individual fields of a record collection
SELECT id BULK COLLECT INTO v_ids FROM staging;

FORALL i IN 1..v_ids.COUNT
  UPDATE target
     SET status = process_data(v_ids(i))
   WHERE id = v_ids(i);
2. ODI-Level Fix:
o Move logic to IKM SQL (set-based operation) if possible:

sql
UPDATE TARGET T
SET status = (SELECT custom_logic(S.col1, S.col2)
FROM STAGING S WHERE S.id = T.id);
3. Nuclear Option:
o Rewrite logic as Java UDF for heavy computations.

Pro Tip: Mention DBMS_PROFILER to identify bottlenecks.

Scenario 3: The Mysterious Failure


Interviewer:
"An ODI job succeeds but the target table is empty. How do you debug?"
Your Investigation:

1. Session Logs:
o Check step-by-step execution in Operator Navigator → Verify if
LKM/IKM ran.
2. Data Tracing:
o Insert audit rows into a log table during flow:

sql
INSERT INTO odi_audit VALUES ('Post-LKM', SYSDATE);


3. Common Culprits:
o Filter conditions too restrictive.
o Commit skipped (IKM set to No Transaction).
4. Quick Test:
o Run mapping with $SNP_SET_DEBUG(1) to capture runtime variables.

Script Example:

shell
# Check ODI logs
grep "ORA-" $ODI_HOME/log/session_12345.log

Scenario 4: The Real-Time Dilemma


Interviewer:
"Business demands near real-time updates from OLTP to warehouse. How do you
architect this?"

Your Blueprint:

1. CDC Setup:
o Enable Oracle GoldenGate or LogMiner-based journalizing.
2. ODI Micro-Batches:
o Schedule scenarios every 1-5 mins using:

shell
OdiStartScen -SCEN_NAME=MY_SCEN -SYNC=1


3. Event-Driven Option:
o Use ODI REST API triggered by Kafka/Database events.
4. Fallback Plan:
o Implement staging layer with "last_update" columns for recovery.

Warning: Mention source system impact of frequent queries.


Scenario 5: The Security Challenge
Interviewer:
"How would you secure sensitive data (PII) in ODI flows?"

Your Defense Strategy:

1. Data Masking:
o Use Oracle Data Redaction in staging tables:

sql
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'STAGE',
    object_name   => 'CUSTOMERS',
    column_name   => 'SSN',
    policy_name   => 'MASK_SSN',          -- required parameter
    function_type => DBMS_REDACT.FULL,    -- full redaction
    expression    => '1=1');              -- policy always applies
END;
2. ODI-Specific:
o Store credentials in Secure Master Repository (not in code).
3. Transport Encryption:
o Configure LKM File AES-256 for file transfers.
4. Audit Trail:
o Enable ODI Audit Framework + DBA_FGA for SELECTs.
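For the DBA_FGA part, a fine-grained auditing policy on reads could look like
this (schema and object names are illustrative):
sql
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'STAGE',
    object_name     => 'CUSTOMERS',
    policy_name     => 'AUDIT_PII_READS',
    audit_column    => 'SSN',           -- audit only when SSN is touched
    statement_types => 'SELECT');
END;
-- Audited statements then surface in DBA_FGA_AUDIT_TRAIL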

Compliance Bonus: Reference GDPR/CCPA requirements.

Final Critical Question


Interviewer:
"What’s your approach when requirements change mid-development?"

Your Structured Answer:

1. Impact Analysis:
o Assess changes to mappings, KMs, and schedules.
2. Version Control:
o Use ODI Git integration to branch/merge changes.
3. Communication:
o Document trade-offs (e.g., "Adding SCD2 will increase load time by
30%").

Phrase to Impress:
"I prototype high-risk changes in a sandbox before altering production workflows."
Want to drill deeper into specific scenarios (e.g., debugging CDC issues) or PL/SQL
optimization patterns? Let me know! 🎯
Yes
Here’s a hardcore mock interview drill with 7 challenging real-world scenarios
and expert-level responses to help you crush your ODI/PL/SQL interview:

Scenario 1: The Phantom Data Corruption


Interviewer:
"Users report that monthly financial data loads contain random corrupted amounts.
Your ODI logs show no errors. How do you investigate?"

Your Response:

1. Data Lineage Check:


o Implement a checksum validation at each stage:

sql
SELECT SUM(ORA_HASH(AMOUNT || CURRENCY)) FROM STAGE_TABLE


o Compare with source/target checksums
2. Temporal Analysis:
o Add audit columns to capture load timestamps:
sql
ALTER TABLE TARGET ADD (LOAD_TIMESTAMP TIMESTAMP, ODI_SESSION_ID NUMBER);
3. Hidden Transformation Bug:
o Check for implicit conversions in mappings (e.g., VARCHAR to NUMBER)
o Verify decimal handling in LKM file definitions

Scenario 2: The Lockdown Deadlock


Interviewer:
"Your PL/SQL package called from ODI causes deadlocks when run in parallel. How
do you resolve?"

Your Solution:

1. Isolation Level Adjustment:

sql
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;  -- must appear in a declarative section
BEGIN
  -- Critical section
  COMMIT;
END;
2. Row-Level Locking Strategy:
sql
FOR r IN (SELECT rowid FROM target_table
           WHERE status = 'PENDING'
             FOR UPDATE SKIP LOCKED)
LOOP
  -- Process unlocked rows only
END LOOP;
3. ODI Orchestration:
o Implement chunk-based processing in package
o Use ODI package branching with OdiSleep between chunks

Scenario 3: The Vanishing Session


Interviewer:
"Long-running ODI sessions disappear from Operator without completing or logging
errors. What's your approach?"

Diagnostic Plan:

1. Database-Level Checks:

sql
SELECT sid, serial#, status, sql_id
  FROM v$session
 WHERE module LIKE '%ODI%' AND status = 'INACTIVE';
2. Infrastructure Review:
o Check ODI agent memory settings:
bash
ps -ef | grep odi | grep -v grep


o Verify network timeouts in odiparams.sh
3. Prevention:
o Implement heartbeat monitoring:
sql
BEGIN
INSERT INTO session_monitor VALUES (USERENV('SESSIONID'), SYSDATE);
COMMIT;
END;

Scenario 4: The Mysterious Performance Degradation


Interviewer:
"An ODI process that took 30 minutes now takes 6 hours with no code changes. How
do you troubleshoot?"

Expert Investigation:

1. AWR Report Analysis:

sql
SELECT * FROM TABLE(DBMS_WORKLOAD_REPOSITORY.awr_report_text(
                      l_dbid, l_inst_num, l_bid, l_eid, 0));
2. Execution Plan Comparison:
sql
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('sql_id'));


3. ODI-Specific Checks:
o Verify statistics on SNP tables:
sql
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SNP');
o Check for fragmented C$ temporary tables

Scenario 5: The Compliance Timebomb


Interviewer:
"You discover an ODI flow processes GDPR-protected data without proper auditing.
How do you remediate?"

Compliance Overhaul:

1. Data Masking Implementation:

sql
BEGIN
DBMS_RLS.ADD_POLICY(
object_schema => 'HR',
object_name => 'EMPLOYEES',
policy_name => 'GDPR_MASK',
function_schema => 'SEC',
policy_function => 'MASK_SENSITIVE');
END;
2. ODI-Specific Controls:
o Enable extended auditing in topology:
sql

UPDATE SNP_CONNECT SET AUDIT_LEVEL = 'FULL' WHERE CON_NAME = 'PROD_DB';
3. Data Lineage Documentation:
o Implement ODI Knowledge Module hooks to populate:
sql
INSERT INTO data_lineage VALUES (:SCENARIO_NAME, :TABLE_NAME, SYSDATE, USER);

Scenario 6: The Unreliable REST Source


Interviewer:
"Your ODI interface consuming REST API fails randomly with HTTP 503 errors. How
do you make it resilient?"

Robust Solution:

1. Retry Logic in ODI Procedure:

shell
# odi_retry.sh
for i in {1..3}; do
OdiStartScen -SCEN_NAME=API_LOAD && break
sleep 60
done
2. Circuit Breaker Pattern:
sql
CREATE TABLE api_status (
  api_name     VARCHAR2(100),
  last_failure TIMESTAMP,
  status       VARCHAR2(20) CHECK (status IN ('UP','DOWN'))
);
3. Fallback Mechanism:
o Implement file-based staging when API fails:
sql
-- Pseudo-code for the fallback branch
IF api_status = 'DOWN' THEN
  lkm_file_to_sql('fallback.json');
END IF;

Scenario 7: The Multi-Cloud Migration


Interviewer:
"You need to migrate ODI workflows from on-prem Oracle to AWS Aurora. What's
your strategy?"

Migration Playbook:

1. Knowledge Module Adaptation:


o Modify LKM/IKM to use Aurora-compatible syntax:

sql
-- Replace Oracle hints with MySQL equivalents:
-- /*+ BKA(t) */ instead of /*+ INDEX(t) */
2. PL/SQL Conversion:
o Rewrite Oracle-specific features:
sql
-- Change:
EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || table_name;
-- To:
SET @sql = CONCAT('TRUNCATE TABLE ', table_name);
PREPARE stmt FROM @sql; EXECUTE stmt;
3. Connection Management:
o Implement AWS Secrets Manager integration:
python
# odi_aws_creds.py
import boto3
secret = boto3.client('secretsmanager').get_secret_value(...)

Final Power Move:


When asked "Any questions?", respond with:
"Could you walk me through your most complex ODI failure and how the team
resolved it?"
This demonstrates deep engagement and problem-solving curiosity.

Want to practice whiteboarding an ODI architecture or debugging a specific


error pattern? Let me know your focus area!


Here's a hardcore, realistic mock interview with 10 challenging ODI/PL/SQL
scenarios and expert-level responses to help you dominate your technical
interview:

Scenario 1: The Silent Data Corruption


Interviewer:
"Your monthly financial load completes successfully, but users find
random records where amounts are off by exactly 100x. ODI shows no
errors. How do you diagnose?"

Your Response:

1. Data Profiling:

sql
-- Find all records where amount = source_amount * 100
SELECT COUNT(*) FROM target t
JOIN source s ON t.id = s.id
WHERE t.amount = s.amount * 100;
2. Transformation Audit:
o Check for implicit conversions in the mapping:
sql
-- Suspect expression:
TO_NUMBER(SUBSTR(raw_value, 1, 10)) / 100 -- Accidentally dividing twice?
3. Version Control Check:
o Compare current mapping with last known good version in Git:
bash
git diff HEAD~1 -- Interfaces/Financial_Load.xml

Scenario 2: The Deadlock Storm

Interviewer:
"When running 10 parallel ODI sessions, the database locks up with
deadlocks. Your PL/SQL package uses FOR UPDATE. How do you fix?"

Your Solution:

1. Row-Level Locking:

sql
CURSOR c_data IS
  SELECT rowid AS rid
    FROM staging
   WHERE status = 'NEW'
     FOR UPDATE SKIP LOCKED;  -- critical addition: skip rows locked elsewhere
2. Batch Optimization:
sql
DECLARE
  TYPE t_rids IS TABLE OF ROWID;
  v_rids t_rids;
BEGIN
  LOOP
    OPEN c_data;   -- re-open per chunk: COMMIT releases FOR UPDATE locks
    FETCH c_data BULK COLLECT INTO v_rids LIMIT 1000;  -- process in chunks
    CLOSE c_data;
    EXIT WHEN v_rids.COUNT = 0;
    FORALL i IN 1..v_rids.COUNT
      UPDATE staging SET status = 'PROCESSED' WHERE rowid = v_rids(i);
    COMMIT;        -- one commit per chunk, never per row
  END LOOP;
END;
3. ODI Orchestration:
o Implement staggered starts:
shell
# In ODI package
OdiStartScen -SCEN_NAME=LOAD_CHUNK_1 -SYNC=0
OdiSleep 30
OdiStartScen -SCEN_NAME=LOAD_CHUNK_2 -SYNC=0

Scenario 3: The Memory Leak

Interviewer:
"Your ODI agent crashes overnight with OutOfMemory errors when
processing large XML files. How do you troubleshoot?"

Diagnostic Plan:

1. Heap Analysis:

bash
# Add to odi.conf:
ODI_JAVA_OPTS="-Xmx8G -XX:+HeapDumpOnOutOfMemoryError"
2. File Streaming:
o Replace DOM parsing with SAX in LKM:
xml
<FileReader mode="STREAMING" chunkSize="50000"/>



3. JDBC Tuning:
properties
# In LKM properties:
fetchSize=1000

Scenario 4: The Phantom Performance Killer

Interviewer:
"A simple SELECT query runs fast in SQL Developer but takes hours in
ODI. Why?"

Expert Investigation:

1. Session Tracing:

sql
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id, serial_num, TRUE, TRUE);
2. ODI-Specific Checks:
o Verify driver properties:
properties
# odi.conf
oracle.jdbc.defaultNChar=true
3. Network Analysis:
bash
tcpdump -i eth0 -w odi_traffic.pcap

Scenario 5: The Compliance Nightmare

Interviewer:
"Auditors found PII in your ODI error tables. How do you fix this
immediately?"

Emergency Response:

1. Data Redaction:

sql
CREATE OR REPLACE VIEW err_secure AS
SELECT error_id,
       REGEXP_REPLACE(raw_data,
                      '([0-9]{3})-([0-9]{2})-([0-9]{4})',
                      'XXX-XX-XXXX') AS clean_data
  FROM err$_customer;
2. Automated Purging:
sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_ERROR_DATA',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DELETE FROM err$_table WHERE created_date < SYSDATE-7; COMMIT; END;',
    repeat_interval => 'FREQ=DAILY',
    enabled         => TRUE);  -- the job is created disabled unless this is set
END;

Scenario 6: The Mysterious CDC Gap

Interviewer:
"Your CDC-based load missed 3 hours of transactions last night. How do
you recover?"

Disaster Recovery:

1. LogMiner Rescue:

sql
BEGIN
DBMS_LOGMNR.START_LOGMNR(
starttime => SYSDATE - (4/24),
endtime => SYSDATE - (1/24));
END;
2. ODI Fallback:
o Create emergency interface with:
sql
WHERE last_update BETWEEN :MISSING_START AND :MISSING_END

Scenario 7: The Multi-Cloud Migration

Interviewer:
"How would you migrate an Oracle-dependent ODI workflow to
Snowflake?"

Migration Blueprint:
1. KM Adaptation:
o Rewrite IKM using Snowflake syntax:

sql
COPY INTO target FROM @stage
  FILE_FORMAT = (TYPE = 'CSV');
2. PL/SQL Conversion:
o Replace Oracle features with JavaScript UDFs:
javascript
// In Snowflake (argument names are referenced in uppercase inside the JS body)
CREATE FUNCTION oracle_to_date(str VARCHAR)
RETURNS TIMESTAMP
LANGUAGE JAVASCRIPT
AS 'return new Date(STR);';

Scenario 8: The Dynamic SQL Trap

Interviewer:
"Your dynamic PL/SQL called from ODI works in dev but fails in prod with
ORA-00942. Why?"

Root Cause Analysis:

1. Bind Variable Check:

sql
-- Bad:
EXECUTE IMMEDIATE 'SELECT ' || col_list || ' FROM ' || table_name;

-- Good:
EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' ||
DBMS_ASSERT.SQL_OBJECT_NAME(table_name);
2. Schema Verification:
sql
SELECT owner FROM all_tables
 WHERE table_name = :table_name;

Scenario 9: The Silent Killer Constraint


Interviewer:
"Your ODI load fails randomly with ORA-00001 but the PK violation isn't in
your target table. Why?"

Deep Dive:

1. Constraint Tracing:

sql
SELECT * FROM all_constraints
 WHERE r_owner = :target_schema
   AND r_constraint_name = :pk_name;
2. ODI Flow Check:
o Verify if IKM is using DELETE before load (triggers FK cascade)

Scenario 10: The Ultimate Test

Interviewer:
"How would you design an ODI architecture for a 24/7 global financial
system?"

Enterprise Blueprint:

1. Topology:
o Regional ODI agents with load balancing
o GoldenGate for CDC with ODI journalizing fallback
2. Resilience:
sql
-- Heartbeat monitoring (run this block from a scheduler job every
-- 60 seconds; "COMMIT EVERY n SECONDS" is not valid PL/SQL)
BEGIN
  INSERT INTO system_health VALUES ('ODI_AGENT_EAST', SYSDATE);
  COMMIT;
END;
3. Disaster Recovery:
o ODI scenario replication with:
bash
rsync -avz /oracle/odi/scenarios dr_site:/backup/odi


Handling Duplicate Data in ODI:
Separation Techniques
Here's a comprehensive guide to separating duplicate and non-duplicate
data in Oracle Data Integrator (ODI), covering various scenarios and
implementation methods:

1. Using ODI's Built-in Deduplication


In Interface Mapping:

1. Add a Filter to identify duplicates:


sql
-- To find duplicates (all but the first row of each key group)
WHERE ROWID NOT IN (
  SELECT MIN(ROWID)
    FROM source_table   -- the same table being filtered
   GROUP BY KEY_COLUMN1, KEY_COLUMN2
)

-- To find non-duplicates (one representative row per key group)
WHERE ROWID IN (
  SELECT MIN(ROWID)
    FROM source_table
   GROUP BY KEY_COLUMN1, KEY_COLUMN2
)
2. Use DISTINCT in the source query (simple but less flexible)

2. Advanced Deduplication Techniques


Using Analytical Functions (Recommended):

sql
-- In your mapping, add a derived column:
CASE
  WHEN ROW_NUMBER() OVER (PARTITION BY KEY_COLUMNS
                          ORDER BY TIMESTAMP_COL DESC) = 1
  THEN 'UNIQUE'
  ELSE 'DUPLICATE'
END AS DUP_FLAG

Implementation Options:

1. Two Separate Interfaces:


o One interface filters duplicates to target table
o Another interface routes duplicates to error/review table
2. Single Interface with Conditional Loading:
o Use ODI's ODI-CONDITION tool to branch flows
o Route based on the DUP_FLAG value

3. Using Knowledge Modules for


Deduplication
IKM Options:

1. IKM Oracle Incremental Update:


o Set DELETE_ALL option to false
o Configure UPDATE_KEYS and INSERT_KEYS properly
2. Custom IKM:
o Implement MERGE statement with duplicate handling:
sql
MERGE INTO target t
USING (SELECT DISTINCT * FROM staging) s
ON (t.key = s.key)
WHEN NOT MATCHED THEN INSERT ...

4. Temporary Table Approach


1. Create a temporary staging table with additional columns:
sql
ALTER TABLE STG_TABLE ADD (
  IS_DUPLICATE       VARCHAR2(1),
  DUPLICATE_GROUP_ID VARCHAR2(18)  -- wide enough to hold a ROWID
);
2. Populate with ODI procedure:
sql
UPDATE STG_TABLE t1
   SET IS_DUPLICATE = 'Y',
       DUPLICATE_GROUP_ID = (
         SELECT ROWIDTOCHAR(MIN(t2.ROWID))
           FROM STG_TABLE t2
          WHERE t1.key1 = t2.key1 AND t1.key2 = t2.key2
       )
 WHERE EXISTS (...duplicate logic...);

5. Handling Complex Duplicates


For business rule-based duplicates (not just key matches):

1. Create a PL/SQL function to identify duplicates:


sql
CREATE FUNCTION is_duplicate(p_id NUMBER) RETURN VARCHAR2 AS
v_count NUMBER;
BEGIN
SELECT COUNT(*) INTO v_count FROM customers
WHERE customer_name = (SELECT customer_name FROM customers WHERE id = p_id);

RETURN CASE WHEN v_count > 1 THEN 'Y' ELSE 'N' END;
END;
2. Call from ODI mapping as a derived column

Best Practices:
1. Performance Considerations:
o Add indexes on key columns used for duplicate detection
o Process in batches for large datasets
o Consider hash-based comparison for wide tables
2. Audit Trail:
o Always log duplicate records rather than silently discarding
o Include reason codes for different duplicate types
3. Error Handling:
o Configure ODI error tables to capture duplicate records
o Set up notification for duplicate threshold breaches
Here’s a comprehensive collection of SQL interview
questions categorized from basic to advanced, covering fundamental
concepts, tricky scenarios, and performance optimization:

🔹 Basic SQL Questions

1. Core Concepts

Q1: Explain the difference between WHERE and HAVING clauses.


A:

 WHERE filters rows before aggregation.


 HAVING filters groups after aggregation (used with GROUP BY).

Q2: What is the difference between DELETE, TRUNCATE, and DROP?


A:

 DELETE: Removes rows (can be rolled back, fires triggers).


 TRUNCATE: Removes all rows (faster, no triggers, auto-commits).
 DROP: Deletes the entire table structure.

2. Joins
Q3: Explain INNER JOIN, LEFT JOIN, RIGHT JOIN, and FULL OUTER JOIN with
examples.
A:

sql
-- INNER JOIN: Only matching rows
SELECT a.id, b.name FROM TableA a INNER JOIN TableB b ON a.id = b.id;

-- LEFT JOIN: All rows from left + matches from right
SELECT a.id, b.name FROM TableA a LEFT JOIN TableB b ON a.id = b.id;

-- RIGHT JOIN: All rows from right + matches from left
SELECT a.id, b.name FROM TableA a RIGHT JOIN TableB b ON a.id = b.id;

-- FULL OUTER JOIN: All rows from both tables
SELECT a.id, b.name FROM TableA a FULL OUTER JOIN TableB b ON a.id = b.id;

Q4: How do you find records in TableA that don’t exist in TableB?
A:

sql
SELECT a.* FROM TableA a LEFT JOIN TableB b ON a.id = b.id WHERE b.id IS NULL;

🔹 Intermediate SQL Questions

3. Aggregation & Window Functions

Q5: Write a query to find the second-highest salary.


A:

sql
SELECT MAX(salary) FROM employees WHERE salary < (SELECT MAX(salary) FROM
employees);
-- OR using window functions:
SELECT salary FROM (
SELECT salary, DENSE_RANK() OVER (ORDER BY salary DESC) AS rnk
FROM employees
) WHERE rnk = 2;

Q6: Explain RANK(), DENSE_RANK(), and ROW_NUMBER().


A:

 RANK(): Leaves gaps (e.g., 1, 2, 2, 4).


 DENSE_RANK(): No gaps (e.g., 1, 2, 2, 3).
 ROW_NUMBER(): Unique sequential numbers (e.g., 1, 2, 3, 4).

4. Subqueries & CTEs

Q7: When would you use a CTE instead of a subquery?


A:
 CTEs improve readability for complex queries.
 They allow recursion (e.g., hierarchical data).
sql
WITH DeptAvg AS (
SELECT department, AVG(salary) AS avg_sal
FROM employees GROUP BY department
)
SELECT e.name FROM employees e JOIN DeptAvg d ON e.department = d.department
WHERE e.salary > d.avg_sal;

Q8: How do you optimize a correlated subquery?


A:

 Rewrite as a JOIN or use EXISTS:


sql
-- Correlated (slow):
SELECT name FROM employees e WHERE salary > (SELECT AVG(salary) FROM
employees WHERE department = e.department);

-- Optimized:
SELECT e.name FROM employees e JOIN (
SELECT department, AVG(salary) AS avg_sal FROM employees GROUP BY department
) d ON e.department = d.department WHERE e.salary > d.avg_sal;

🔹 Advanced SQL Questions

5. Performance Tuning

Q9: How would you optimize a slow-running query?


A:

1. Check execution plan (EXPLAIN PLAN).


2. Add indexes on join/where columns.
3. Avoid SELECT * (fetch only needed columns).
4. Use WITH clause for reusable subqueries.
5. Partition large tables.

Q10: What are covering indexes, and how do they help?


A:

 Indexes that include all columns needed for a query (avoid table
access).
sql
-- SQL Server / PostgreSQL syntax:
CREATE INDEX idx_cover ON employees(department) INCLUDE (salary);
-- Oracle equivalent: a composite index covering both columns
CREATE INDEX idx_cover ON employees(department, salary);

6. Tricky Scenarios
Q11: How do you find duplicate records?
A:

sql
SELECT email, COUNT(*) FROM users GROUP BY email HAVING COUNT(*) > 1;

Q12: Write a query to delete duplicates (keep one record).


A:

sql
DELETE FROM users
WHERE id NOT IN (
SELECT MIN(id) FROM users GROUP BY email
);

Q13: How do you pivot rows into columns?


A:

sql
-- Oracle
SELECT * FROM (
SELECT department, salary FROM employees
) PIVOT (
AVG(salary) FOR department IN ('IT' AS it, 'HR' AS hr)
);

-- SQL Server
SELECT name, [IT], [HR] FROM (
SELECT name, department, salary FROM employees
) AS src PIVOT (
AVG(salary) FOR department IN ([IT], [HR])
) AS pvt;

🔹 Expert-Level SQL Questions

7. Advanced Joins

Q14: How do you find employees who manage others?


A:

sql
SELECT DISTINCT m.name FROM employees e JOIN employees m ON e.manager_id =
m.id;

Q15: Explain recursive CTEs for hierarchical data.


A:

sql
WITH OrgChart AS (
-- Base case: CEO (no manager)
SELECT id, name, manager_id, 1 AS level FROM employees WHERE manager_id IS NULL
UNION ALL
-- Recursive case: Employees with managers
SELECT e.id, e.name, e.manager_id, oc.level + 1
FROM employees e JOIN OrgChart oc ON e.manager_id = oc.id
)
SELECT * FROM OrgChart;

8. Transaction & Locking

Q16: What is the difference between READ COMMITTED and SERIALIZABLE
isolation levels?
A:

 READ COMMITTED: Sees only committed data (may have non-repeatable reads).
 SERIALIZABLE: Locks rows to prevent phantom reads (most
restrictive).

Q17: How do you handle deadlocks?


A:

1. Ensure consistent order of table access.


2. Use NOWAIT or SKIP LOCKED in Oracle.
3. Reduce transaction time.

🔹 Real-World Scenarios

Q18: How would you design a schema for an e-commerce platform?


A:

 Tables: users, products, orders, order_items, payments.


 Key constraints: Foreign keys (order_items.order_id → orders.id).
 Indexes: products.category_id, orders.user_id.
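A skeleton of the core tables (abridged; assumes users and products are defined
similarly):
sql
CREATE TABLE orders (
  id         NUMBER PRIMARY KEY,
  user_id    NUMBER NOT NULL REFERENCES users(id),
  order_date DATE DEFAULT SYSDATE
);

CREATE TABLE order_items (
  id         NUMBER PRIMARY KEY,
  order_id   NUMBER NOT NULL REFERENCES orders(id),
  product_id NUMBER NOT NULL REFERENCES products(id),
  quantity   NUMBER NOT NULL
);

CREATE INDEX idx_orders_user ON orders(user_id);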

Q19: How do you backup a table before a migration?


A:

sql
CREATE TABLE employees_backup AS SELECT * FROM employees;
-- Or for large tables:
EXPDP scott/tiger TABLES=employees DIRECTORY=backup_dir
DUMPFILE=employees.dmp;
Here’s a collection of real-world scenario-based SQL questions with
detailed answers, covering common business problems and advanced
techniques:

1. E-Commerce Data Analysis

Scenario:
"An e-commerce platform wants to identify customers who made
purchases above $500 but haven't bought anything in the last 3 months."

Solution:

sql
WITH big_spenders AS (
SELECT customer_id
FROM orders
WHERE order_amount > 500
),
inactive_customers AS (
SELECT customer_id
FROM orders
GROUP BY customer_id
HAVING MAX(order_date) < CURRENT_DATE - INTERVAL '3 months'
)
SELECT c.customer_name, c.email
FROM customers c
JOIN big_spenders b ON c.customer_id = b.customer_id
JOIN inactive_customers i ON c.customer_id = i.customer_id;

Key Techniques:

 CTEs for modular logic


 Date arithmetic
 Multi-step filtering

2. Employee Hierarchy

Scenario:
"Generate an org chart showing all employees with their managers'
names up to the CEO."

Solution:

sql
WITH RECURSIVE org_chart AS (
-- Base case: CEO (no manager)
SELECT id, name, manager_id, 1 AS level, CAST(name AS VARCHAR(1000)) AS path
FROM employees WHERE manager_id IS NULL

UNION ALL

-- Recursive case
SELECT e.id, e.name, e.manager_id, oc.level + 1,
oc.path || ' → ' || e.name
FROM employees e
JOIN org_chart oc ON e.manager_id = oc.id
)
SELECT id, name, level, path FROM org_chart
ORDER BY level, name;

Key Techniques:

 Recursive CTE
 Path building
 Hierarchical sorting

3. Sessionization for Web Analytics

Scenario:
"Identify user sessions from web logs where a session expires after 30
minutes of inactivity."

Solution:

sql
WITH tagged_logs AS (
SELECT
user_id,
event_time,
LAG(event_time) OVER (PARTITION BY user_id ORDER BY event_time) AS prev_time,
CASE WHEN EXTRACT(EPOCH FROM (event_time - LAG(event_time)
OVER (PARTITION BY user_id ORDER BY event_time))) > 1800 OR LAG(event_time)
IS NULL
THEN 1 ELSE 0 END AS new_session
FROM web_logs
),
session_groups AS (
SELECT
user_id,
event_time,
SUM(new_session) OVER (PARTITION BY user_id ORDER BY event_time) AS session_id
FROM tagged_logs
)
SELECT
user_id,
session_id,
MIN(event_time) AS session_start,
MAX(event_time) AS session_end,
COUNT(*) AS events
FROM session_groups
GROUP BY user_id, session_id;
Key Techniques:

 Window functions with LAG


 Conditional session tagging
 Time-based grouping

4. Inventory Management

Scenario:
"Find products that have been out of stock for more than 7 days but had
sales in the last month."

Solution:

sql
SELECT p.product_id, p.product_name
FROM products p
JOIN inventory i ON p.product_id = i.product_id
WHERE i.stock_count = 0
AND i.status_date < CURRENT_DATE - INTERVAL '7 days'
AND EXISTS (
SELECT 1 FROM order_items oi
JOIN orders o ON oi.order_id = o.order_id
WHERE oi.product_id = p.product_id
AND o.order_date >= CURRENT_DATE - INTERVAL '1 month'
);

Key Techniques:

 EXISTS for correlated subqueries


 Date filtering
 Multi-table joins

5. Financial Fraud Detection

Scenario:
"Flag transactions where the same credit card was used in different cities
within 1 hour."

Solution:

sql
SELECT DISTINCT a.transaction_id, a.card_number, a.city AS city1, b.city AS city2
FROM transactions a
JOIN transactions b ON a.card_number = b.card_number
AND a.transaction_id != b.transaction_id
AND ABS(EXTRACT(EPOCH FROM (a.transaction_time - b.transaction_time)))
<= 3600
WHERE a.city != b.city;

Key Techniques:

 Self-join
 Time difference calculation
 Distance-based fraud pattern

6. Patient Appointment Analysis

Scenario:
"Calculate the no-show rate for doctors, but only include patients who had
at least 2 previous appointments."

Solution:

sql
WITH patient_history AS (
SELECT
patient_id,
COUNT(*) FILTER (WHERE status = 'completed') AS completed_visits,
COUNT(*) FILTER (WHERE status = 'no-show') AS no_shows
FROM appointments
GROUP BY patient_id
HAVING COUNT(*) >= 2
)
SELECT
d.doctor_name,
SUM(ph.no_shows) * 100.0 / NULLIF(SUM(ph.completed_visits + ph.no_shows), 0) AS
no_show_rate
FROM patient_history ph
JOIN appointments a ON ph.patient_id = a.patient_id
JOIN doctors d ON a.doctor_id = d.doctor_id
GROUP BY d.doctor_id, d.doctor_name;

Key Techniques:

 FILTER clause for conditional aggregation


 NULLIF to prevent division by zero
 Multi-level aggregation

7. A/B Test Analysis

Scenario:
"Compare conversion rates between two website versions, ensuring
statistical significance (p < 0.05)."

Solution:
sql
WITH test_results AS (
SELECT
version,
COUNT(*) AS visitors,
COUNT(*) FILTER (WHERE converted = 1) AS conversions
FROM ab_test
GROUP BY version
),
stats AS (
SELECT
SUM(CASE WHEN version = 'A' THEN conversions ELSE 0 END) AS a_conv,
SUM(CASE WHEN version = 'A' THEN visitors ELSE 0 END) AS a_total,
SUM(CASE WHEN version = 'B' THEN conversions ELSE 0 END) AS b_conv,
SUM(CASE WHEN version = 'B' THEN visitors ELSE 0 END) AS b_total
FROM test_results
)
SELECT
  a_conv * 100.0 / a_total AS conv_rate_a,
  b_conv * 100.0 / b_total AS conv_rate_b,
  -- Z-score calculation for statistical significance
  -- (multiply by 1.0 to force decimal division; integer division would truncate)
  (1.0 * b_conv / b_total - 1.0 * a_conv / a_total) /
  SQRT(1.0 * (a_conv + b_conv) / (a_total + b_total)
       * (1 - 1.0 * (a_conv + b_conv) / (a_total + b_total))
       * (1.0 / a_total + 1.0 / b_total)) AS z_score
FROM stats;

Key Techniques:

 Conditional aggregation
 Statistical calculations in SQL
 Cross-version comparison
Oracle SQL and PL/SQL theoretical and scenario-based interview questions with
answers
Here's a comprehensive collection of Oracle SQL and PL/SQL interview
questions covering both theoretical concepts and real-world scenarios, with
detailed answers:

1. Oracle SQL Theoretical Questions

Q1. Explain the Oracle architecture components

Answer:

 SGA (System Global Area): Shared memory area with buffer cache, shared
pool, redo log buffer
 PGA (Program Global Area): Private memory for each server process
 Background Processes: SMON (system monitor), PMON (process monitor),
DBWn (database writer), LGWR (log writer)
 Database Files: Datafiles, control files, redo log files

Q2. What's the difference between RANK(), DENSE_RANK(), and


ROW_NUMBER()?

Answer:

sql
SELECT
salary,
RANK() OVER (ORDER BY salary DESC) AS rank, -- 1,2,2,4
DENSE_RANK() OVER (ORDER BY salary DESC) AS drank, -- 1,2,2,3
ROW_NUMBER() OVER (ORDER BY salary DESC) AS rnum -- 1,2,3,4
FROM employees;

2. PL/SQL Theoretical Questions

Q3. Explain PL/SQL exception handling

Answer:

sql
BEGIN
-- Code
EXCEPTION
WHEN NO_DATA_FOUND THEN
DBMS_OUTPUT.PUT_LINE('Record not found');
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Error: ' || SQLERRM);
ROLLBACK;
END;
Q4. What are the differences between procedures, functions, and
packages?

Answer:

Feature        Procedure    Function     Package
Return value   No           Yes          N/A
Call context   Executable   Expression   Both
Statefulness   Stateless    Stateless    Persistent

3. SQL Scenario-Based Questions

Q5. Find employees earning more than their department average

Solution:

sql
SELECT e.employee_id, e.salary, e.department_id, d.avg_sal
FROM employees e
JOIN (
SELECT department_id, AVG(salary) avg_sal
FROM employees
GROUP BY department_id
) d ON e.department_id = d.department_id
WHERE e.salary > d.avg_sal;

Q6. Identify gaps in a sequence of invoice numbers

Solution:

sql
WITH all_numbers AS (
SELECT MIN(invoice_no) min_no, MAX(invoice_no) max_no
FROM invoices
),
expected AS (
SELECT LEVEL + min_no - 1 AS expected_no
FROM all_numbers
CONNECT BY LEVEL <= max_no - min_no + 1
)
SELECT e.expected_no
FROM expected e
LEFT JOIN invoices i ON e.expected_no = i.invoice_no
WHERE i.invoice_no IS NULL;

4. PL/SQL Scenario-Based Questions

Q7. Write a procedure to archive old orders


Solution:

sql
CREATE OR REPLACE PROCEDURE archive_orders(p_cutoff_date DATE) IS
  v_count NUMBER;
BEGIN
  INSERT INTO orders_archive
  SELECT * FROM orders
  WHERE order_date < p_cutoff_date;

  DELETE FROM orders
  WHERE order_date < p_cutoff_date;
  v_count := SQL%ROWCOUNT;  -- capture before COMMIT resets the implicit cursor

  COMMIT;
  DBMS_OUTPUT.PUT_LINE(v_count || ' orders archived');
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;
    RAISE;
END;

Q8. Create a function to calculate tax with exception handling

Solution:

sql
CREATE OR REPLACE FUNCTION calculate_tax(
p_amount NUMBER,
p_tax_rate NUMBER
) RETURN NUMBER IS
v_tax NUMBER;
BEGIN
IF p_amount IS NULL OR p_tax_rate IS NULL THEN
RAISE_APPLICATION_ERROR(-20001, 'Input parameters cannot be null');
END IF;

v_tax := p_amount * (p_tax_rate/100);

RETURN ROUND(v_tax, 2);


EXCEPTION
WHEN VALUE_ERROR THEN
RETURN 0;
END;

5. Advanced Oracle Concepts

Q9. Explain materialized views with refresh strategies

Answer:

 Complete Refresh: Rebuilds entire MV (REFRESH COMPLETE)


 Fast Refresh: Updates only changes (REFRESH FAST requires MV logs)
 On Commit: Automatically refreshes when base tables change
 On Demand: Manually refreshed
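A minimal fast-refresh example (Oracle syntax; the MV log on the base table is
what enables REFRESH FAST, and the table name is illustrative):
sql
CREATE MATERIALIZED VIEW LOG ON sales WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW mv_sales_copy
  REFRESH FAST ON DEMAND
AS SELECT * FROM sales;

-- Manual refresh: 'F' = fast (incremental), 'C' = complete
EXEC DBMS_MVIEW.REFRESH('MV_SALES_COPY', 'F');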
Q10. How would you troubleshoot a slow PL/SQL procedure?

Approach:

1. Use DBMS_PROFILER to identify bottlenecks


2. Check execution plan for SQL statements
3. Examine V$SQL for resource-intensive queries
4. Consider bulk operations instead of row-by-row processing
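A typical DBMS_PROFILER session looks like this (assumes the profiler tables from
proftab.sql exist, and my_slow_procedure is a hypothetical unit under
investigation):
sql
BEGIN
  DBMS_PROFILER.START_PROFILER('slow_proc_run');
  my_slow_procedure;
  DBMS_PROFILER.STOP_PROFILER;
END;

-- Then rank the most expensive lines
SELECT u.unit_name, d.line#, d.total_time, d.total_occur
  FROM plsql_profiler_units u
  JOIN plsql_profiler_data d
    ON d.runid = u.runid AND d.unit_number = u.unit_number
 ORDER BY d.total_time DESC;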

6. Real-World Scenarios

Q11. Design a solution for auditing table changes

Solution:

sql
CREATE OR REPLACE TRIGGER audit_employee_changes
AFTER INSERT OR UPDATE OR DELETE ON employees
FOR EACH ROW
DECLARE
  v_action VARCHAR2(10);
BEGIN
  IF INSERTING THEN v_action := 'INSERT';
  ELSIF UPDATING THEN v_action := 'UPDATE';
  ELSE v_action := 'DELETE'; END IF;

  INSERT INTO employee_audit VALUES (
    NVL(:new.employee_id, :old.employee_id),  -- :new is NULL on DELETE
    v_action,
    SYSDATE,
    USER,
    DBMS_UTILITY.FORMAT_CALL_STACK
  );
END;

Q12. Implement a retry mechanism for failed transactions

Solution:

sql
CREATE OR REPLACE PROCEDURE process_order_with_retry(
p_order_id NUMBER,
p_max_retries NUMBER := 3
) IS
v_retry_count NUMBER := 0;
v_success BOOLEAN := FALSE;
BEGIN
WHILE v_retry_count < p_max_retries AND NOT v_success LOOP
BEGIN
-- Transaction logic here
INSERT INTO order_processing_log VALUES (...);
COMMIT;
v_success := TRUE;
EXCEPTION
WHEN OTHERS THEN
v_retry_count := v_retry_count + 1;
DBMS_OUTPUT.PUT_LINE('Attempt ' || v_retry_count || ' failed: ' || SQLERRM);
ROLLBACK;
IF v_retry_count < p_max_retries THEN
DBMS_LOCK.SLEEP(5 * v_retry_count); -- Linearly increasing backoff
END IF;
END;
END LOOP;

IF NOT v_success THEN


-- Log final failure
END IF;
END;
