Oracle PL/SQL Int Class Notes
LEAD – LEAD(expression, offset [default 1], default_value [default NULL]) OVER (PARTITION BY expression
ORDER BY expression) – returns a value from a following row in the window
LAG – LAG(expression, offset [default 1], default_value [default NULL]) OVER (PARTITION BY expression
ORDER BY expression) – returns a value from a preceding row in the window
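For illustration, a minimal sketch assuming a hypothetical emp table with deptno, ename, and sal columns; the third argument supplies the out-of-bounds default:
SELECT deptno,
       ename,
       sal,
       LAG(sal, 1, 0)  OVER (PARTITION BY deptno ORDER BY sal) AS prev_sal,
       LEAD(sal, 1, 0) OVER (PARTITION BY deptno ORDER BY sal) AS next_sal
FROM emp;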
Cursor
begin
  null;  -- minimal anonymous PL/SQL block
end;
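A fuller sketch of an explicit cursor loop, assuming a hypothetical emp table:
DECLARE
  CURSOR c_emp IS
    SELECT ename, sal FROM emp;
BEGIN
  FOR r IN c_emp LOOP
    DBMS_OUTPUT.PUT_LINE(r.ename || ': ' || r.sal);
  END LOOP;
END;
/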
Reference Cursor & Cursor attributes:
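A minimal sketch of a ref cursor together with the standard cursor attributes (%ISOPEN, %FOUND, %NOTFOUND, %ROWCOUNT), again assuming a hypothetical emp table:
DECLARE
  TYPE t_rc IS REF CURSOR;
  rc     t_rc;
  v_name emp.ename%TYPE;
BEGIN
  OPEN rc FOR SELECT ename FROM emp;
  LOOP
    FETCH rc INTO v_name;
    EXIT WHEN rc%NOTFOUND;   -- stop when the fetch returns no row
    DBMS_OUTPUT.PUT_LINE(rc%ROWCOUNT || ': ' || v_name);
  END LOOP;
  IF rc%ISOPEN THEN
    CLOSE rc;
  END IF;
END;
/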
Query rewrite against materialized views:
The first method matches the SQL text of the query to the SQL text of the materialized view definition. If the first
method fails, the optimizer uses the more general method, in which it compares joins, selections, data columns,
grouping columns, and aggregate functions between the query and the materialized views.
Common predefined exceptions:
o TOO_MANY_ROWS
o ZERO_DIVIDE
o NO_DATA_FOUND
o INVALID_CURSOR
o CURSOR_ALREADY_OPEN
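A minimal sketch of trapping these exceptions, assuming a hypothetical emp table:
DECLARE
  v_sal emp.sal%TYPE;
BEGIN
  SELECT sal INTO v_sal FROM emp WHERE ename = 'KING';
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('no matching row');
  WHEN TOO_MANY_ROWS THEN
    DBMS_OUTPUT.PUT_LINE('more than one row returned');
  WHEN ZERO_DIVIDE THEN
    DBMS_OUTPUT.PUT_LINE('division by zero');
END;
/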
SQL performance tuning
Driving_site()
o The DRIVING_SITE hint forces query execution to take place at a site other than the initiating instance,
which is especially useful when the remote table is much larger than the local table.
o This saves the back-and-forth network traffic and improves SQL query performance.
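A sketch, assuming a hypothetical database link remote_db and hypothetical local_orders/remote_orders tables:
SELECT /*+ DRIVING_SITE(r) */ l.order_id, r.order_total
FROM   local_orders l,
       remote_orders@remote_db r   -- the larger table; the join runs at its site
WHERE  l.order_id = r.order_id;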
Nested loops – each row of the outer (driving) table is matched against the inner table
Dbms_metadata.get_ddl('OBJECT_TYPE','OBJECT_NAME','OBJECT_OWNER')
Dbms_stats.gather_table_stats(ownname, tabname, estimate_percent => % of sample size)
Copy table stats?
dbms_stats.copy_table_stats(ownname, tabname, srcpartname, dstpartname);
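Hedged examples of the three calls above; the APP_OWNER schema, SALES table, and partition names are assumptions:
-- extract the DDL for a table
SELECT dbms_metadata.get_ddl('TABLE', 'SALES', 'APP_OWNER') FROM dual;
-- gather stats on a 10% sample
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'APP_OWNER',
    tabname          => 'SALES',
    estimate_percent => 10);
END;
/
-- copy stats from an analyzed partition to a freshly loaded one
BEGIN
  dbms_stats.copy_table_stats(
    ownname     => 'APP_OWNER',
    tabname     => 'SALES',
    srcpartname => 'SALES_P1',
    dstpartname => 'SALES_P2');
END;
/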
Performance tuning done for TFS:
Redesign ETL:
o Partition copy with CTAS & PEL (partition exchange loading) – see the sketch after this list
o Introduce a staging area for aggregations
o Implement NOLOGGING & PARALLEL for temp table creation
o Replace DELETE with TRUNCATE
o Drop and recreate indexes as part of the ETL pre- and post-processes
o Convert Informatica sequences into DB sequences using an Oracle 12c feature
o Use table partitions
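A sketch of the CTAS + PEL pattern from the first bullet; all table, partition, and column names are hypothetical:
-- build the replacement data set quickly, without redo logging
CREATE TABLE sales_stg NOLOGGING PARALLEL 4 AS
SELECT * FROM sales_src WHERE load_date = DATE '2020-01-01';
-- swap the staged data into the target partition (partition exchange load)
ALTER TABLE sales
  EXCHANGE PARTITION sales_p_20200101
  WITH TABLE sales_stg
  INCLUDING INDEXES WITHOUT VALIDATION;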
DB optimization –
o Implement PARALLEL & NOLOGGING for index creation
o Implement DOP (degree of parallelism) on DB tables:
alter table table_name parallel 4;
alter table table_name parallel;  -- revert to the default DOP
The DB configuration parameter PARALLEL_DEGREE_LIMIT caps the DOP the optimizer may choose.
o Gather table stats as part of the ETL post-load process
o Copy table stats for initial loads
o Incremental stats gathering (see the sketch after this list)
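A sketch of enabling incremental statistics (last bullet), with hypothetical owner/table names:
BEGIN
  -- maintain partition-level synopses so global stats can be derived
  -- without rescanning the whole table
  dbms_stats.set_table_prefs('APP_OWNER', 'SALES', 'INCREMENTAL', 'TRUE');
  dbms_stats.gather_table_stats('APP_OWNER', 'SALES');
END;
/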
SQL optimization:
o Using optimizer hints: PARALLEL, INDEX, NO_INDEX, USE_NL, NO_USE_NL, USE_HASH, NO_USE_HASH,
USE_MERGE, NO_USE_MERGE (usage sketch below)
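A sketch of embedding hints, with hypothetical table and index names:
SELECT /*+ INDEX(e emp_dept_ix) USE_NL(e d) */
       e.ename, d.dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;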
DB concepts:
Normalization rules
As per First Normal Form, no row of data may contain a repeating group of information, i.e. each column
must hold a single atomic value, so that multiple columns are not needed to fetch the same row. Each
table should be organized into rows, and each row should have a primary key that distinguishes it as unique.
As per Second Normal Form, there must not be any partial dependency of any column on the primary key. This
means that for a table with a concatenated (composite) primary key, each column that is not part of the
primary key must depend upon the entire concatenated key for its existence. If any column depends only on
one part of the concatenated key, the table fails Second Normal Form.
Third Normal Form requires that every non-prime attribute of a table be dependent on the primary key
directly; that is, no non-prime attribute may be determined by another non-prime attribute. Such
transitive functional dependencies must be removed, and the table must also be in Second Normal Form.
For example, in a table (student_id, name, zip_code, city), city is determined by zip_code, a non-prime
attribute, so the table violates Third Normal Form.
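A sketch of the 3NF decomposition described above; all names are hypothetical:
-- before (violates 3NF): student(student_id PK, name, zip_code, city),
-- where city depends on zip_code rather than on the key
-- after: move the transitive dependency into its own table
CREATE TABLE zip (
  zip_code VARCHAR2(10) PRIMARY KEY,
  city     VARCHAR2(50)
);
CREATE TABLE student (
  student_id NUMBER PRIMARY KEY,
  name       VARCHAR2(50),
  zip_code   VARCHAR2(10) REFERENCES zip
);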
Triggers
Table Functions
Table functions are used to return PL/SQL collections that mimic tables. They can be queried like a regular table by using the TABLE
function in the FROM clause. Regular table functions require collections to be fully populated before they are returned. Since
collections are held in memory, this can be a problem as large collections can waste a lot of memory and take a long time to return
the first row. These potential bottlenecks make regular table functions unsuitable for large Extraction Transformation Load (ETL)
operations. Regular table functions require named row and table types to be created as database objects.
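A sketch of a regular (non-pipelined) table function; the type and function names are hypothetical:
CREATE TYPE t_num_tab IS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION first_n (p_n IN NUMBER) RETURN t_num_tab AS
  l_tab t_num_tab := t_num_tab();
BEGIN
  -- the entire collection is built in memory before it is returned
  FOR i IN 1 .. p_n LOOP
    l_tab.EXTEND;
    l_tab(l_tab.LAST) := i;
  END LOOP;
  RETURN l_tab;
END;
/
SELECT * FROM TABLE(first_n(5));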
Instead of Triggers
When you issue a DML statement such as INSERT, UPDATE, or DELETE against a non-updatable view, Oracle
skips the original DML statement and fires the INSTEAD OF trigger, whose body performs the appropriate DML
against the underlying base tables instead.
Note that an INSTEAD OF trigger is fired for each row of the view that gets modified. You can create an
INSTEAD OF trigger only for a view; you cannot create an INSTEAD OF trigger for a table.
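A sketch, assuming hypothetical emp and dept base tables joined in a view:
CREATE OR REPLACE VIEW emp_dept_v AS
  SELECT e.empno, e.ename, d.dname
  FROM   emp e JOIN dept d ON e.deptno = d.deptno;
CREATE OR REPLACE TRIGGER emp_dept_v_ins
INSTEAD OF INSERT ON emp_dept_v
FOR EACH ROW
BEGIN
  -- fires once per inserted row; routes the data to the base table
  INSERT INTO emp (empno, ename, deptno)
  SELECT :NEW.empno, :NEW.ename, d.deptno
  FROM   dept d
  WHERE  d.dname = :NEW.dname;
END;
/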
External tables
How do you make sure that your jobs are running within SLA?
Identify the top SQL statements that are the most time-consuming, most frequently executed, or most resource-intensive:
SELECT
sql_fulltext,
sql_id,
elapsed_time,
child_number,
disk_reads,
executions,
first_load_time,
last_load_time
FROM v$sql
ORDER BY elapsed_time DESC;
Common optimizer hints:
ALL_ROWS
FIRST_ROWS
PARALLEL
FULL
INDEX
NO_INDEX
INDEX_FFS
NO_USE_NL
USE_NL
USE_HASH
NO_USE_HASH
USE_MERGE
NO_USE_MERGE
DRIVING_SITE
LISTAGG orders data within each group specified in the ORDER BY clause and then concatenates the values of the
measure column
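The output below matches a query of this shape (a sketch using the standard HR sample schema):
SELECT LISTAGG(last_name, '; ')
         WITHIN GROUP (ORDER BY hire_date) "Emp_list",
       MIN(hire_date) "Earliest"
FROM   employees
WHERE  department_id = 30;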
Emp_list                                            Earliest
--------------------------------------------------- ---------
Raphaely; Khoo; Tobias; Baida; Himuro; Colmenares   07-DEC-02
What are some of the DICT tables that you have used?
What is TKPROF?
TKPROF is an Oracle utility that formats raw SQL trace files into readable reports, showing parse, execute, and fetch statistics for each statement.
A common table expression (CTE) is a named temporary result set that exists within the scope of a single statement and
that can be referred to later within that statement, possibly multiple times.
A derived table can be referenced only a single time within a query. A CTE can be referenced multiple times. To
use multiple instances of a derived table result, you must derive the result multiple times.
A CTE may be easier to read when its definition appears at the beginning of the statement rather than
embedded within it.
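A minimal sketch of one CTE referenced twice, assuming a hypothetical sales table:
WITH dept_totals AS (
  SELECT deptno, SUM(amount) AS total
  FROM   sales
  GROUP BY deptno
)
SELECT a.deptno, a.total
FROM   dept_totals a
WHERE  a.total > (SELECT AVG(b.total) FROM dept_totals b);  -- second reference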
COALESCE function takes two or more compatible arguments and returns the first argument that is not null
select coalesce (smallintcol, bigintcol) from temp;
Data access setup steps:
o Define access groups
o Define data access roles
o Assign access groups to data access roles
o Assign data access roles to users
-- row generator: returns the numbers 1..100
SELECT ROWNUM r
FROM dual
CONNECT BY ROWNUM <= 100;
How does a file watcher work in AutoSys? Is a specific scripting language required, or does AutoSys handle it?
insert_job: APP_BATCH_FW_JOB
job_type: FW
machine: some@hostname
days_of_week: mon,tue,wed,thu,fri
start_times: "09:00"
watch_file: /app/input/infeed.txt
watch_interval: 60 /* check for the file every 60 seconds */
term_run_time: 15 /* terminate the job after 15 minutes if the file has not arrived */
insert_job: APP_BATCH_START_JOB
job_type: CMD
machine: some@hostname
command: bash /app/script/start.bash
condition: SUCCESS(APP_BATCH_FW_JOB)
insert_job: APP_INFORMATICA_BATCH_START_JOB
job_type: PMCMD
machine: some@hostname
command: pmcmd startworkflow -sv Integration_Service_Name -d Domain_Name -u User_Name -p Password -f Folder_Name Workflow_Name
condition: SUCCESS(APP_BATCH_START_JOB)
#!/bin/bash
sqlplus -s user/pass@oracle_sid @file.sql
echo "Hello"
cp source_file /path/to/target_file
mv file_or_directory [target_directory]
file [file_name] - checks a file type, such as TXT, PDF, or other
zip [options] zip_file_name file1 file2
unzip [options] zip_file_name
tar [options] tar_file_name file1 file2
tar -czf archive.tar.gz file1.txt file2.txt
tar [options] tar_file_name
cat file_name
ls | grep "file.txt"
sed 's/red/blue/' colors.txt > hue.txt - replaces "red" with "blue" in colors.txt, writing the result to hue.txt
head [options] file_name
tail [options] file_name
cut -d',' -f3-5 list.txt - extracts the third to fifth field from a comma-separated list
diff file_name1 file_name2
find path/to/folder -type f -name "file"
COUNT(*) and COUNT(1) both count all rows. COUNT(1) does not count using the first column or the primary
key; Oracle treats the two forms identically and generates the same execution plan (including any index
use), so neither is faster than the other.
CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(null, 'STAFF', 'c:\output\myfile.del', ';', '%', null, 0);
SYSCS_UTIL.SYSCS_IMPORT_TABLE (
  IN SCHEMANAME VARCHAR(128),
  IN TABLENAME VARCHAR(128),
  IN FILENAME VARCHAR(32672),
  IN COLUMNDELIMITER CHAR(1),
  IN CHARACTERDELIMITER CHAR(1),
  IN CODESET VARCHAR(128),
  IN REPLACE SMALLINT
);