What Are The Diff Navigators Available in Odi?

1. What are the diff navigators available in Odi?

ODI has 4 types of navigators,


 Designer
 Operator
 Topology
 Security

2. How to frame a loop in Odi?


We can frame a loop in Odi by creating a package that contains variables and
interfaces. Variables play a crucial role in framing the loop: using the variable step
options (Declare Variable, Set Variable, Refresh Variable, Evaluate Variable) we can
control and limit the loop.

3. What are the different components of an interface?


 Overview
 Mapping
 Quick-Edit
 Flow
 Controls
 Scenarios
 Execution

4. What are the properties of variables?


 Variables are run-time objects
 Variables support only SQL SELECT statements in their refresh queries
 Variables are always prefixed with # or :
 We can't see variable values in the execution log
 Variables can store only one value of a particular data type at a time

5. Which is first in C$, I$?


C$ - Loading work table
I$ - Integration work table

C$ comes first and I$ second: C$ is created while loading data from the source to the
staging area, whereas I$ is created while loading data from the staging area to the
target table.
6. What is the diff between static control and flow control?
Flow control checks the constraints on the target table against the input data
before loading. If the input data satisfies the constraint conditions on the target, it is
inserted into the target table; otherwise it goes to the error table (E$).
Static control first puts the data into the target table and then checks the
constraints on the target table. The invalid data is then copied into the error table (E$),
so the target table contains both valid and invalid data.

7. What is the difference between incremental update and Changed Data Capture
(CDC)?
Changed Data Capture is an ODI feature.
Incremental Update is a loading strategy.

8. What is the difference between append and control append?


In Append, data is inserted into the target table without truncating the target
table on each insert/run.
In Control Append, data is inserted into the target table and the target table is
truncated on every insert/run.
We use Append to store historical data along with present data.
We use Control Append to store the present data only.

10. How to identify whether the strategy is Incremental Update or scd2?


 If the target table contains only a sequence ID (SID), it is the Incremental Update
strategy.
 If the target table contains SID, Indicator, Start_date and End_date columns, it is
the SCD2 strategy.

11. What is data server?


A data server describes the connection to an actual physical application server or
database, i.e., the data server creates the connection from Odi to the database.

12. What is master repository and work repository?

 Master Repository stores the Topology information and Security information.


 Work Repository stores the Designer information and Operator information.

13. What is staging area and staging schema?


Temporary objects are always stored in a particular schema or directory called
the work schema or staging area. (Or) It is the place where different formats of data
are converted into one required format; this process is known as transformation.

The staging schema or work area is first defined when you create a physical
schema, but you can change it in the interface properties.

14. How to create a load plan in ODI and how to arrange its dependencies?
We create a load plan from the Load Plans and Scenarios tab in the Designer
navigator. In the Steps tab, after adding a step, we can arrange dependencies through
the Add Step Wizard options: Serial step, Parallel step, Run Scenario step, Case step,
When step and Else step.
15. What is the process of CDC?
If any change occurs in the source, the change is immediately reflected in the
target.
 Import JKM Oracle Consistent KM
 Remove all the simple cdc if you apply simple cdc already
 Apply KM on the model
 Add to CDC
 Add Subscriber
 Start Journal

16. Why do we need to go for CDC, why not incremental update?


CDC gives better performance than Incremental Update.

In CDC, only the data changed in the source since the last update is loaded into
the target. In Incremental Update, all the source data is loaded into the target along
with the updated data from the source.

18. What are the different knowledge modules you used in ODI?
In ODI we have 6 different knowledge modules. They are
 Reverse Engineering KM
 Loading KM
 Integration KM
 Check KM
 Journalization KM
 Service KM

Knowledge Modules are pre-built (inbuilt) code templates that contain the
sequence of commands needed to perform an integration task. In ODI we never write
a single piece of code; we get the code automatically by making use of these
Knowledge Modules.

19. Explain the architecture of your project?


20. How to configure JAVAEE agent?
21. How to start the javaee agent?
22. What is the use of JAVAEE agent?
A Javaee agent is slightly more complex to set up (we need to install WebLogic
Server first, set up a domain and so forth), but it gives us access to a different world in
terms of enterprise-scale deployment: clustering, load balancing, centralized
monitoring and so forth.

23. Write a query to delete the duplicate records?


To find duplicate records in an EMP table:
Select name, no, addr, count(*) as no_dup from EMP
group by name, no, addr having count(*) > 1;
Now we get the duplicate data.

To delete the duplicates, keeping one row per empno:
Delete from EMP where rowid not in (select max(rowid) from EMP group by empno);
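The two queries above can be tried with Python's built-in sqlite3 module, since SQLite also exposes a rowid pseudo-column much like Oracle's ROWID. The EMP rows below are made-up sample data for illustration only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, name TEXT, addr TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(1, "jack", "NY"), (1, "jack", "NY"), (2, "jill", "LA")])

# Find duplicates: group on the business columns, keep groups with count > 1
dups = conn.execute("""
    SELECT name, empno, addr, COUNT(*) AS no_dup
    FROM emp GROUP BY name, empno, addr HAVING COUNT(*) > 1
""").fetchall()
print(dups)  # [('jack', 1, 'NY', 2)]

# Delete duplicates, keeping the row with the highest rowid per empno
conn.execute("""
    DELETE FROM emp
    WHERE rowid NOT IN (SELECT MAX(rowid) FROM emp GROUP BY empno)
""")
print(conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0])  # 2
```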

24. What is the diff between truncate and delete?


Truncate deletes the whole data from a table at once and cannot be rolled back.
Delete removes the whole data or selected data (with a WHERE clause) from a table
and can be rolled back.

25. What is sub-query?


A sub-query is a query within a query. The outer query is called the main query and
the inner query is called the sub-query.
26. What is co-related sub-query?
A co-related sub-query is evaluated once for each row of the outer query, as opposed
to a normal sub-query, which is evaluated only once.

27. What is the difference between procedure and function?


 A function is mainly used where it must return a value, whereas a procedure may
or may not return a value, and can return more than one value using OUT parameters.
 Functions can be called from SQL statements, whereas procedures cannot be
called from SQL statements.
 Functions are normally used for computations, whereas procedures are normally
used for executing business logic.
 A function allows only SELECT statements in it, whereas a procedure allows
SELECT as well as DML (insert, update, delete) statements.
 A function returns only one value, whereas a procedure can return a maximum of
1024 values.
 A stored procedure is a pre-compiled execution plan, whereas a function is not.
 A function can have only input parameters, whereas a procedure can have
input/output parameters.
 A function can be called from a procedure, whereas a procedure cannot be called
from a function.

28. What is the difference between plSql procedure and Odi procedure?
29. How to pass the dates as input to interfaces in Odi?
Using the query "Select sysdate from dual;" in a variable, and assigning or
concatenating that variable into the file resource name, we can achieve the required task.

30. How to use REPLACE function in Sql?


The Replace function replaces a sequence of characters in a string with another set
of characters.
Select replace('jack and jue', 'j', 'bl') from dual;

The Translate function also replaces characters in a string with another set of
characters. However, it replaces a single character at a time.
Select translate('jack and jue', 'j', 'b') from dual;
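A quick way to see the difference: SQLite (via Python's sqlite3) has the same replace() function; it has no TRANSLATE, so Python's str.translate is used below as a stand-in for the character-by-character behaviour:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# REPLACE swaps every occurrence of a substring with another string
row = conn.execute("SELECT replace('jack and jue', 'j', 'bl')").fetchone()
print(row[0])  # black and blue

# TRANSLATE works character-by-character: each 'j' becomes the single
# character 'b' (shown here with Python's str.translate as an analogue)
print("jack and jue".translate(str.maketrans("j", "b")))  # back and bue
```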

31. What are diff Analytical or Window functions in Sql?


Analytical functions compute an aggregate value based on a group of rows. They
differ from aggregate functions in that they return multiple rows for each group.
 Rank
 Dense_rank
 Lead
 Lag
 Listagg
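A small demonstration of three of these functions, runnable with Python's sqlite3 module assuming SQLite 3.25 or later (which added window functions); the emp table and salaries are invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("a", 100), ("b", 200), ("c", 200), ("d", 300)])

# Window functions return one result per row, unlike plain aggregates.
# LAG is ordered by (sal, ename) so the result is deterministic on ties.
rows = conn.execute("""
    SELECT ename,
           RANK()       OVER (ORDER BY sal DESC) AS rnk,
           DENSE_RANK() OVER (ORDER BY sal DESC) AS drnk,
           LAG(sal)     OVER (ORDER BY sal DESC, ename) AS prev_sal
    FROM emp ORDER BY sal DESC, ename
""").fetchall()
for r in rows:
    print(r)
```

Note how the tied salaries make RANK jump from 2 to 4 while DENSE_RANK continues with 3.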

32. What is inline view?


An inline view is a select statement in the FROM clause of another select
statement.
Select * from (select deptno, count(*) emp_count from emp group by deptno) emp,
dept where dept.deptno = emp.deptno;
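The same idea can be exercised in SQLite through Python; the dept/emp tables and rows here are made up, and the query is a lightly restyled version of the one above (inline view aliased as e, explicit JOIN):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (deptno INTEGER, dname TEXT)")
conn.execute("CREATE TABLE emp (empno INTEGER, deptno INTEGER)")
conn.executemany("INSERT INTO dept VALUES (?, ?)", [(10, "ACCT"), (20, "SALES")])
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(1, 10), (2, 10), (3, 20)])

# The inner SELECT in the FROM clause is the inline view
rows = conn.execute("""
    SELECT dept.dname, e.emp_count
    FROM (SELECT deptno, COUNT(*) AS emp_count
          FROM emp GROUP BY deptno) e
    JOIN dept ON dept.deptno = e.deptno
    ORDER BY dept.dname
""").fetchall()
print(rows)  # [('ACCT', 2), ('SALES', 1)]
```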

33. What is the difference between exists and in operator?


Exists - TRUE if a sub-query returns at least one row.
In - "equivalent to any member of" test; equivalent to "=ANY".

The Exists operator tests for the existence of rows in the result set of the sub-query:
Select dname from dept where exists
(Select 1 from EMP where dept.deptno = emp.deptno);
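Both operators can be compared side by side in SQLite via Python; the dept/emp data is invented, with one department (HR) deliberately left without employees so the filtering is visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (deptno INTEGER, dname TEXT)")
conn.execute("CREATE TABLE emp (empno INTEGER, deptno INTEGER)")
conn.executemany("INSERT INTO dept VALUES (?, ?)",
                 [(10, "ACCT"), (20, "SALES"), (30, "HR")])  # HR has no employees
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(1, 10), (2, 20)])

# EXISTS: true when the correlated sub-query returns at least one row
with_emps = conn.execute("""
    SELECT dname FROM dept
    WHERE EXISTS (SELECT 1 FROM emp WHERE dept.deptno = emp.deptno)
    ORDER BY dname
""").fetchall()

# IN: membership test, equivalent to =ANY
in_emps = conn.execute("""
    SELECT dname FROM dept
    WHERE deptno IN (SELECT deptno FROM emp)
    ORDER BY dname
""").fetchall()

print(with_emps, in_emps)  # both [('ACCT',), ('SALES',)]
```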

34. What is synonym insert, update and insert/update and when do we use it?
35. What are different import and export methods in Odi?
We have 2 types of import and export methods:
 Smart import/export
 Import/export
In smart import/export, dependent objects are also exported, i.e., for interfaces
the source and target tables are also exported.
In import/export, only the selected object is exported.

36. Which knowledge modules are required to load data from a flat file to a DB?
LKM File to SQL
IKM SQL Control Append

37. What are the daily jobs in your project?


38. What is the difference between dimensional modeling and fact loading?
39. What are the monthly jobs in your current project?
40. What is the difference between a package and a load plan, and why do we need
scenarios?
41. Write a query to update the data of a table using not exists?
Update suppliers
Set supplier_name = (Select customers.customer_name From customers
                     Where customers.customer_id = suppliers.supplier_id)
Where Not Exists (Select customers.customer_name From customers
                  Where customers.customer_id = suppliers.supplier_id);

43. What are the different connections available in Odi? (Ex: jdbc)
jdbc stands for Java Database Connectivity, which is used to make a connection
with a database. It has 2 components:
 jdbc driver
 jdbc url

The jdbc driver creates the connection between Odi and the DB.
The jdbc url creates the connection to a particular schema.

44. How to execute different interfaces in parallel in a package?

Drag and drop all the scenarios generated for the interfaces into the package. Click
on each scenario and select Asynchronous mode in its properties to execute the
scenarios in parallel.

45. How to stop the parallel execution in package if any interface fails?
46. What is incremental update merge in Odi?
47. What is quick edit in interface and why do we need quick edit?
48. What are the options available in interface overview tab?
49. Where do we need to set lkm, ikm and how to set the CKM?
Through the Flow tab in an interface we can set the lkm and ikm based on the
loading. Through the Control tab we can set the ckm.
50. What is your database size?
51. How many fact tables and dimension tables are you having in your project?
52. What is a surrogate key and why do we require a surrogate key?
A surrogate key is a substitution for the natural primary key. It is a unique
identifier or number (normally created by the database sequence generator) for each
record of a dimension table that can be used for the primary key of the table.
A surrogate key is useful because natural keys may change

53. What are data marts?


A data mart is a subset of a data warehouse with a focused objective. (Or) A data
mart is usually sponsored at the department level and developed with a specific detail
or subject in mind.

54. What is difference between DataMart and dwh?


Data Mart:
 Usually sponsored at the department level and developed with a specific issue or
subject in mind; a data mart is a data warehouse with a focused objective.
 Used on a business division/department level.
 A subset of data from a data warehouse, built for specific user groups. By
providing decision makers with only a subset of data from the data warehouse,
privacy, performance and clarity objectives can be attained.

Data Warehouse:
 A "Subject-Oriented, Integrated, Time-Variant, Nonvolatile collection of data in
support of decision making".
 Used on an enterprise level.
 An integrated consolidation of data from a variety of sources, specially designed
to support strategic and tactical decision making. Its main objective is to provide an
integrated environment and a coherent picture of the business at a point in time.

55. How do you implement multiple sources and multiple targets within a single
interface, when the sources are of different technologies and the targets are also of
different technologies?
56. What is the difference between stored procedures and procedures?
57. What is the difference between 11g and 12c? What does g stand for and c stand
for?

ODI 11g:
 No multiple target tables in a single interface
 Interfaces
 No reusable mappings in global objects
 Standalone & J2EE agents
 Only NG profile is NG_Designer
 No wallet password
 No direct role selection while creating users
 No Hive technology for Hadoop in topology
 No pivot functions
 Sequence enhancements: only Nextval
 No Split, Sort or Set objects

ODI 12c:
 We can load multiple target tables in a single mapping
 Mappings & reusable mappings
 Reusable mappings added in global objects
 Standalone, Colocated & J2EE agents
 Additional NG profiles added: NG_Console, NG_Metadata_Admin,
NG_Version_Admin
 Wallet password added
 Direct role selection while creating users
 New Hive technology for Hadoop added in topology
 The following components available in Opatch 17053786: Pivot, UnPivot,
Table function, Subquery filter
 Sequence enhancements: Currval & Nextval
 Declarative flow-based user interface
 XML improvements

i stands for Internet
g stands for Grid
c stands for Cloud Computing

58. What is grid and what is clustering?


59. What is clustering?
Clustering, in the context of databases, refers to the ability of several servers or
instances to connect to a single database. An instance is a collection of memory and
processes that interacts with a database, which is a set of physical files that actually
store data. (Or) A database working with more than one server is called clustering.

Clustering offers two major advantages, especially in high-volume database
environments:
Fault Tolerance: Because there is more than one server or instance for users to
connect to, clustering offers an alternative in the event of an individual server failure.
Load Balancing: The clustering feature is usually set up to allow users to be
automatically allocated to the server with the least load.

60. What are nodes in JAVAEE agent?


61. Where your work tables will get created and what are the different work tables
that will create in ODI?
Generally, work tables are created in the work schema, and the different work
tables that get created in ODI are:
 Error  E$
 Loading  C$
 Integration  I$
 Temporary indexes  IX$
 Journals  J$
 Journal views  JV$
 Triggers  T$

62. How do you send a file to another team using Odi such that the file is password
protected?
Through encrypting and decrypting.

63. In the control append strategy, why do the work tables get dropped and not
deleted?

64. Have you customized the knowledge modules? If yes, which ones?
65. What is the difference between view and materialized view?
View:
 A view has a logical existence; it does not contain data.
 It's not a database object.
 We cannot perform DML operations on a view.
 When we do select * from a view, it fetches the data from the base table.
 A view cannot be scheduled to refresh.

Materialized view:
 A materialized view has a physical existence.
 It is a database object.
 We can perform DML operations on a materialized view.
 When we do select * from a materialized view, it fetches the data from the
materialized view itself.
 A materialized view can be scheduled to refresh.
 We can keep aggregated data in a materialized view, and a materialized view can
be created based on multiple tables.

66. What is the Difference between Delete, Truncate and Drop?


Delete
The delete command is used to remove rows from a table. A WHERE clause can be
used to only remove some rows. If no WHERE condition is specified, all rows will be
removed. After performing a delete operation you need to COMMIT or ROLLBACK
the transaction to make the change permanent or to undo it.
Truncate
Truncate removes all rows from a table. The operation cannot be rolled back. As such,
truncate is faster and doesn't use as much undo space as a delete.
Drop
The drop command removes a table from the database. All the tables' rows, indexes
and privileges will also be removed. The operation cannot be rolled back.
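The transactional nature of DELETE can be shown with Python's sqlite3 module (SQLite has no TRUNCATE, so only the rollback behaviour is demonstrated; the table and rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
conn.commit()

# DELETE is DML: until we COMMIT, the removal can be rolled back
conn.execute("DELETE FROM t WHERE id > 1")
mid = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]    # 1 inside the txn
conn.rollback()
after = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # back to 3
print(mid, after)  # 1 3
```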

67. Differences between where clause and having clause


Both the where clause and the having clause can be used to filter data. The where
clause cannot be used to restrict groups; you use the having clause to restrict groups.

Where clause:
 Using it with group by is not mandatory.
 Applies to individual rows.
 Used to restrict rows.
 Restricts a normal query.
 Every record is filtered based on the where condition.

Having clause:
 Must be used with group by.
 Used to test a condition on the group rather than on individual rows.
 Used to restrict groups.
 Restricts group by functions.
 Filters aggregate records (group by functions).
Merge Statement
You can use the merge command to perform insert and update in a single
command.
Ex:
Merge into student1 s1
Using (select * from student2) s2
On (s1.no = s2.no)
When matched then
  Update set marks = s2.marks
When not matched then
  Insert (s1.no, s1.name, s1.marks) Values (s2.no, s2.name, s2.marks);
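SQLite has no MERGE statement, but its UPSERT clause (INSERT ... ON CONFLICT, SQLite 3.24+) plays the same matched-then-update / not-matched-then-insert role, so the idea can be sketched in Python; the student rows are invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student1 (no INTEGER PRIMARY KEY, name TEXT, marks INTEGER)")
conn.executemany("INSERT INTO student1 VALUES (?, ?, ?)",
                 [(1, "amy", 50), (2, "bob", 60)])

# Incoming rows: no=2 matches (update), no=3 does not (insert)
incoming = [(2, "bob", 75), (3, "cal", 80)]

# UPSERT as a stand-in for MERGE: update on key match, insert otherwise
conn.executemany("""
    INSERT INTO student1 (no, name, marks) VALUES (?, ?, ?)
    ON CONFLICT(no) DO UPDATE SET marks = excluded.marks
""", incoming)

rows = conn.execute("SELECT no, name, marks FROM student1 ORDER BY no").fetchall()
print(rows)  # [(1, 'amy', 50), (2, 'bob', 75), (3, 'cal', 80)]
```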

68. What is the difference between sub-query & co-related sub-query?
A sub-query is executed once for the parent statement:
Select deptno, ename, sal from emp a where sal in
(select sal from Grade where sal_grade = 'A' or sal_grade = 'B');

Whereas the co-related sub-query is executed once for each row of the parent
query. For example, to find all employees who earn more than the average salary in
their department:
SELECT last_name, salary, department_id from employees A
Where salary > (select AVG(salary) from employees B
                Where B.department_id = A.department_id);

Sub-query: executed once for the parent query.
Example:
Select * from emp where deptno in (select deptno from dept);

Co-related sub-query: executed once for each row of the parent query.
Example:
Select e.* from emp e where sal >=
(select avg(sal) from emp a where a.deptno = e.deptno);
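The co-related form runs fine in SQLite through Python, which makes it easy to check which rows survive; the emp rows below are made-up sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, deptno INTEGER, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [("a", 10, 100), ("b", 10, 300), ("c", 20, 200)])

# The inner query re-runs for each outer row, using that row's deptno
rows = conn.execute("""
    SELECT e.ename FROM emp e
    WHERE e.sal >= (SELECT AVG(a.sal) FROM emp a WHERE a.deptno = e.deptno)
    ORDER BY e.ename
""").fetchall()
print(rows)  # [('b',), ('c',)]
```

Dept 10 averages 200, so only "b" (300) qualifies there; dept 20 averages 200, so "c" (200) qualifies too.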

69. Difference between Trigger and Procedure

 A stored procedure is normally used for performing tasks, but a trigger is
normally used for tracing and auditing logs.
 Stored procedures must be called explicitly by the user in order to execute, but a
trigger is called implicitly based on the events defined on the table.
 A stored procedure can run independently, but a trigger runs only as part of a
DML event on the table.
 A stored procedure can be executed from a trigger, but a trigger cannot be
executed from a stored procedure.
 Stored procedures can have parameters, but a trigger cannot have any parameters.
 Stored procedures are compiled collections of programs or SQL statements in the
database; using a stored procedure we can access and modify data present in many
tables, and a stored procedure is not associated with any particular database object.
Triggers are event-driven special procedures attached to a specific database object,
say a table.
 Stored procedures are not run automatically and have to be called explicitly by
the user, but triggers fire automatically (implicitly) when an INSERT, UPDATE or
DELETE statement is issued against the associated table.

70. Differences between stored procedure and functions


Stored Procedure:
 May or may not return values.
 Can be used to solve business logic.
 Is a pre-compiled statement.
 Accepts more than one argument.
 Mainly used to process tasks.
 Cannot be invoked from SQL statements, e.g. SELECT.
 Can affect the state of the database using commit.
 Stored as pseudo-code (compiled form) in the database.

Function:
 Should return at least one output; can return more than one value using OUT
arguments.
 Can be used for calculations.
 Is not a pre-compiled statement.
 Accepts only input arguments.
 Mainly used to compute values.
 Can be invoked from SQL statements, e.g. SELECT.
 Cannot affect the state of the database.
 Parsed and compiled at runtime.

71. Difference between Rowid and Rownum?


Rowid
A globally unique identifier for a row in a database. It is created at the time the row is
inserted into a table, and destroyed when the row is removed. Its format is
'bbbbbbbb.rrrr.ffff', where bbbbbbbb is the block number, rrrr is the slot (row)
number, and ffff is a file number.
Rownum
For each row returned by a query, the rownum pseudo-column returns a number
indicating the order in which Oracle selects the row from a table or set of joined rows.
The first row selected has a rownum of 1, the second has 2, and so on.

You can use rownum to limit the number of rows returned by a query, as in this
example:
select * from employees where rownum < 10;

Rowid:
 An Oracle-internal ID allocated every time a new record is inserted into a table;
this ID is unique and cannot be changed by the user.
 Permanent.
 A globally unique identifier for a row in a database, created when the row is
inserted into the table and destroyed when it is removed.

Rownum:
 A row number returned by a select statement.
 Temporary.
 A pseudo-column returning a number indicating the order in which Oracle selects
the row from a table or set of joined rows.
Order of where and having:
SELECT column,
group_function
FROM table
[WHERE condition]
[GROUP BY group_by_expression]
[HAVING group_condition]
[ORDER BY column];
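That clause order can be observed directly in SQLite via Python: WHERE filters rows first, GROUP BY forms groups, HAVING filters the groups, and ORDER BY sorts the result. The emp rows are invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (deptno INTEGER, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(10, 100), (10, 300), (20, 200), (20, 50), (30, 10)])

# WHERE runs before grouping; HAVING runs on the grouped totals
rows = conn.execute("""
    SELECT deptno, SUM(sal) AS total
    FROM emp
    WHERE sal > 20              -- drops the (30, 10) row before grouping
    GROUP BY deptno
    HAVING SUM(sal) > 300       -- drops dept 20 (total 250)
    ORDER BY deptno
""").fetchall()
print(rows)  # [(10, 400)]
```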

72. Different types of installations in Odi?


There are 3 types of installations:
 Developer installation
 Stand-alone installation
 Javaee installation

By default, developer installation is selected.

73. What is Odi console?


In Odi, if you want to give global access, you need to make use of a web-based
component called the Odi console. It has to be configured with an application server
like WebLogic or IBM WebSphere.
The Odi console was called Metadata Navigator in Odi 10g.

74. What is agent and what are the diff types of agents having in Odi?
Odi can't execute anything by itself; it makes use of a component called an agent.
The agent takes care of execution in Odi: it is a piece of Java code that carries out the
execution.
In Odi 11g we have 3 types of agents:
 Standalone agent
 Javaee agent
 Local agent
In Odi 12c, a new agent type was introduced, called Colocated.

75. What are the diff ways creating the master repository in Odi?
There are 2 diff ways to create master repository in Odi, they are
 Rcu  Repository Creation Utility.
 File tab or Odi studio.

76. What are the diff types of authentications available in Odi?

77. Who is supervisor?


The supervisor is the user who holds all the permissions and rights in Odi.

78. How do 'Contexts' work in ODI?


A context defines which environment we are in. The context maps the logical
architecture to the physical architecture: in a particular context, one logical schema
gets mapped to one physical schema.
ODI offers a unique design approach through the use of contexts and logical
schemas. Imagine a development team: within the ODI Topology manager a senior
developer can define the system architecture, connections, databases, data servers
(tables etc.) and so forth.
These objects are linked through contexts to 'logical' architecture objects that
are then used by other developers to simply create interfaces using these logical
objects. At run-time, on specification of a context within which to execute the
interfaces, ODI will use the correct physical connections, databases and tables (source
and target) linked to the logical objects being used in those interfaces, as defined
within the environment topology.

79. What components make up Oracle Data Integrator?


"Oracle Data Integrator" comprises:
Oracle Data Integrator (Topology Manager + Designer + Operator + Agent)
Oracle Data Quality for Data Integrator
Oracle Data Profiling
80. What is Oracle Data Integrator (ODI)?
Oracle acquired Sunopsis in 2006 and with it "Sunopsis Data Integrator".
Oracle Data Integrator (ODI) is an E-LT (Extract, Load and Transform) tool
used for high-speed data movement between disparate systems.
The latest version, Oracle Data Integrator Enterprise Edition (ODI-EE) brings
together "Oracle Data Integrator" and "Oracle Warehouse Builder" as separate
components of a single product with a single license.

81. What is E-LT?


E-LT is an innovative approach to extracting, loading and transforming data.
Typically, ETL application vendors have relied on a costly, heavyweight mid-tier
server to perform the transformations required when moving large volumes of data
around the enterprise.

ODI delivers unique next-generation Extract, Load and Transform (E-LT)
technology that improves performance and reduces data integration costs, even across
heterogeneous systems, by pushing the required processing down to the typically large
and powerful database servers already in place within the enterprise.

82. What is Oracle Data Integration Suite?


Oracle data integration suite is a set of data management applications for
building, deploying, and managing enterprise data integration solutions:

 Oracle Data Integrator Enterprise Edition


 Oracle Data Relationship Management
 Oracle Service Bus (limited use)
 Oracle BPEL (limited use)
 Oracle WebLogic Server (limited use)
Additional product options are:
 Oracle GoldenGate
 Oracle Data Quality for Oracle Data Integrator (Trillium-based DQ)
 Oracle Data Profiling (Trillium-based Data Profiling)
 ODSI (the former AquaLogic Data Services Platform)

83. What systems can ODI extract and load data into?
ODI brings true heterogeneous connectivity out of the box; it can connect
natively to Oracle, Sybase, MS SQL Server, MySQL, LDAP, DB2, PostgreSQL and
Netezza.

It can also connect to any data source supporting JDBC; it's even possible to use
the Oracle BI Server as a data source using the JDBC driver that ships with BI Publisher.

84. Does my ODI infrastructure require an Oracle database?


No. The ODI modular repositories (Master plus one or multiple Work
repositories) can be installed on any database engine that supports ANSI ISO 89
syntax, such as Oracle, Microsoft SQL Server, Sybase AS Enterprise, IBM DB2 UDB
and IBM DB2/400.

85. Does ODI support web services?


Yes, ODI is SOA-enabled and its web services can be used in 3 ways:
 The Oracle Data Integrator Public Web Service, which lets you execute a scenario
(a published package) from a web service call.
 Data Services, which provide a web service over an ODI data store (i.e. a table,
view or other data source registered in ODI).
 The ODI Invoke Web Service tool, which you can add to a package to request a
response from a web service.

86. Where does ODI sit with my existing OWB implementation(s)?


As mentioned previously, the ODI-EE license includes both ODI and OWB as
separate products; both tools will converge in time into "Oracle's Unified Data
Integration Product".

Oracle have released a statement of direction for both products, published January
2010:
http://www.oracle.com/technology/products/oracle-data-integrator/sod.pdf

OWB 11gR2 is the first step from Oracle to bring these two applications together;
it's now possible to use ODI Knowledge Modules within your OWB 11gR2
environment as 'Code Templates'. An Oracle white paper published February 2010
describes this in more detail:
http://www.oracle.com/technology/products/warehouse/pdf/owb-11gr2-code-template-mappings.pdf

87. Is ODI Used by Oracle in their products?


Yes, there are many Oracle products that utilize ODI; here are just a few:
 Oracle Application Integration Architecture (AIA)
 Oracle Agile products
 Oracle Hyperion Financial Management
 Oracle Hyperion Planning
 Oracle Fusion Governance, Risk & Compliance
 Oracle Business Activity Monitoring
Oracle BI Applications also uses ODI as its core ETL tool in place of
Informatica, but only for one release of OBIA and when using a certain source
system.

Future plans are to have ODI fully available through the OBIA offering.

88. Difference between Dimensional table and Fact table?


Dimension Table:
 Provides the context/descriptive information for fact table measurements.
 Structure includes a surrogate key, one or more other fields that compose the
natural key, and a set of attributes.
 Smaller than a fact table.
 In a schema, more dimension tables are present than fact tables.
 Values of fields are in text representation.
 Can be loaded directly.

Fact Table:
 Provides the measurements of an enterprise.
 Structure includes foreign keys, degenerate dimensions and measurements.
 Larger than a dimension table.
 In a schema, fewer fact tables are present than dimension tables.
 Values of fields are in integer form.
 Can't be loaded first: to load the fact table we need to load the dimension tables
first. While loading the fact table we do a lookup on the dimension tables, because
the fact table contains the measures/facts and the foreign keys, which are primary
keys in the surrounding dimension tables.

89. ORACLE abbreviation


Oak Ridge Automatic Computer Logical Engine

90. What is Repository in ODI?


The repository is a centralized component of the Odi architecture where the
complete information of Odi is stored: topology info, Designer info, security info,
Operator info, agent info and console info.

I.e., the repository stores information about whatever components are connected to
ODI.

91. Components of ODI architecture


 ODI Studio – the user interface of Oracle Data Integrator. Any user who wants to interact with ODI does so through ODI Studio.
 Repository
 Agent
 ODI Console
 Public Web Services – used to generate web services based on the data processed by ODI.

92. Types of Work repositories, Explain in detail


Work repositories in ODI are 2 types
 Development Work Repository – It stores both Designer information and Operator
information.
 Execution Work Repository – It stores the Operator information.
93. AGILE methodology means performing development, QA and delivery in short, overlapping iterations rather than in strictly sequential phases.
JDBC – Java Data Base Connectivity
JNDI – Java Naming Directory Interface
A DB Link is a database object that allows one database to access data in another database. In simple terms, a
DB Link allows a local user to access data on a remote DB.
At Design time we work with Logical resources, at Run time we work with Physical
resources.
Mask – Mask functionality in ODI is equivalent to LIKE Operator. The LIKE
operator performs the wild card search.
Interface is the Primary Dataflow Component in ODI.

94. What are the components of Integration Interface?


 Target Data Store
 Mapping
 Data sets
 Staging Area
 Quick edit
 Flow

95. Knowledge Modules Kinds


 Interface Knowledge Module – LKM, IKM, CKM.
 Model Knowledge Module – CKM, JKM, RKM, SKM.

Every Session contains an ID. The hierarchy of an Execution in an Operator is


1. Session
2. Step
3. Task
The panels in the ODI Studio navigators (Projects, Models, etc.) are termed accordions.
Session is the Run time job generated by ODI.
Task is the smallest execution unit in ODI.

Simulation – Simulation shows us the code which is going to be executed, before execution, without actually running it. It is a useful debugging and tuning aid in ODI.
Sql statements containing Set Operators are called as Compound Queries because
we are having multiple select statements combined by the set operator.
Set Operators are also called as Vertical Joins, as the result is combined data from
two or more
selects based on the columns instead of rows.
XML – Extensible Markup Language
XSD – XML Schema Definition
DTD – Document Type Definition
XSD/DTD store the metadata (the structure definition), whereas the XML file stores the
actual data.
JMS – Java Messaging Services
For XML there is no technology-specific LKM, hence we have to use LKM SQL to SQL,
because ODI's XML driver exposes the XML file as a relational schema.
Whenever we perform any filter and join in XML, they get executed on the source
itself. Whereas for file and table, they get executed in staging area.
In ODI, any object can be saved in 2 ways
1. XML
2. HTML
HSQL – Hypersonic SQL

In ODI, Staging area is Logical.


Default Standalone Agent port is 20910.

Loading Strategies
 Control Append – Source to Staging – Truncate & Reload
 Incremental Update (scd1) – Dimensional table loading – Update else Insert
 Slowly Changing Dimension (scd2) - Dimensional table loading – Insert and
Modify
 Append – Fact table loading – Insert

Source ----------------------------------- Stage ----------------------------------- Target

Source to stage: Control Append / CDC. Stage to target: SCD1 / SCD2 and Append.

The default value for ending timestamp is 2400-01-01 00:00:00.0


It is possible to convert a snowflake schema to a star schema; it can be implemented using views, but it is not recommended.

96. If star schema is good at performance, why snow flake schema is implemented
mostly?

97. What is the difference between snow flake and star schema?
Star Schema vs Snow Flake Schema:
 The star schema is the simplest data warehouse schema; the snowflake schema is a more complex data warehouse model.
 In a star schema each dimension is represented in a single table and there are no hierarchies between dimensions; in a snowflake schema at least one hierarchy exists between dimension tables.
 Both contain a fact table surrounded by dimension tables: if the dimensions are de-normalized we say it is a star design, and if a dimension is normalized we say it is a snowflaked design.
 In a star schema a single join establishes the relationship between the fact table and any one dimension table; in a snowflake schema, because of the relationships between dimension tables, many joins are needed to fetch the data.
 A star schema optimizes performance by keeping queries simple and providing fast response times: all the information about each level is stored in one row. Snowflake schemas normalize dimensions to eliminate redundancy, which results in more complex queries and reduced query performance.
 It is called a star schema because the diagram resembles a star; it is called a snowflake schema because the diagram resembles a snowflake.
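The join-count difference described above can be illustrated with a tiny in-memory SQLite example (the schema and names are made up for this sketch). Both queries return the same answer, but the snowflake version needs an extra join to reach the normalized hierarchy:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
-- Star: the denormalized dimension holds the city directly
CREATE TABLE dim_customer_star (cust_sk INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE fact_orders (cust_sk INTEGER, amount REAL);
-- Snowflake: the city is normalized into its own table
CREATE TABLE dim_city (city_sk INTEGER PRIMARY KEY, city TEXT);
CREATE TABLE dim_customer_snow (cust_sk INTEGER PRIMARY KEY, name TEXT, city_sk INTEGER);
INSERT INTO dim_customer_star VALUES (1, 'Asha', 'Pune');
INSERT INTO fact_orders VALUES (1, 200.0);
INSERT INTO dim_city VALUES (10, 'Pune');
INSERT INTO dim_customer_snow VALUES (1, 'Asha', 10);
""")
# Star schema: one join between the fact and the dimension
star = cur.execute("""SELECT d.city, SUM(f.amount) FROM fact_orders f
                      JOIN dim_customer_star d ON f.cust_sk = d.cust_sk
                      GROUP BY d.city""").fetchone()
# Snowflake schema: an extra join through the normalized dimension
snow = cur.execute("""SELECT c.city, SUM(f.amount) FROM fact_orders f
                      JOIN dim_customer_snow d ON f.cust_sk = d.cust_sk
                      JOIN dim_city c ON d.city_sk = c.city_sk
                      GROUP BY c.city""").fetchone()
print(star, snow)  # ('Pune', 200.0) ('Pune', 200.0)
```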

Lookups are mainly used to perform the fact table loading. For this we have to use the Append strategy.
Lookup table will be in Parrot color.
In lookup, if driving table and lookup table belongs to same logical schema then we
can execute the lookup on source or staging.
In lookup, if driving table and lookup table belongs to different logical schemas then
we can execute the lookup on staging.
98. Difference between Flow/Static Control and Filters
Flow/Static Control vs Filters:
 Flow/static control rejects the data during the flow into the target (flow control) or after loading the data into the target (static control); a filter rejects the data at the source end itself.
 With flow/static control, rejected records are stored in the error table; with a filter, rejected records are not stored anywhere.
 Flow/static control requires a Check knowledge module (CKM); a filter doesn't require any knowledge module.
 With flow/static control we define constraints, and a key must be defined to check them; with a filter no constraints or keys are defined.
 Use flow/static control if you want to keep the rejected data for future use (recycling); use a filter when you simply want to discard unwanted data.

Variables and Procedures in Odi are based on Logical Schema


If we create any Encryption, that encryption will get stored in XML format.
A package won't support knowledge modules or sequences. A package can contain interfaces, procedures, variables, models and data stores.
A scenario is a frozen copy of the generated code for the objects contained in the package.
We can't generate scenarios for sequences and knowledge modules, but we can generate scenarios for interfaces, procedures, packages and variables.
A procedure is a reusable component in ODI, which allows us to group actions that don't fit in the interface framework.
We provide security for procedures by encrypting and decrypting them.
In ODI, to implement sub-queries we make use of yellow interfaces.

Dimension tables are divided into 9 types:


 SCD – Slowly Changing Dimension
 RCD – Rapidly Changing Dimension
 UCD – Unchanged or Static Dimension
 Conformed – one dimension shared by multiple subject areas
 Junk – stores low-cardinality flags/indicators that don't fit in other dimensions
 Role-playing – one dimension playing multiple roles in the fact table
 Inferred – a dimension row that is empty except for the surrogate key, created so the fact can be loaded before the full dimension data arrives
 Shrunken – a subset of another dimension
 Degenerated – a dimension attribute stored in the fact table itself (it has no dimension table of its own)

Fact tables are divided into 3 types:


 Additive – all aggregate functions can be applied to the fact across all dimensions
 Semi-additive – aggregate functions can be applied across some dimensions but not all (e.g. account balances over time)
 Non-additive – no aggregate function can meaningfully be applied to the fact (e.g. ratios)

Sequence is an Odi object which automatically gets incremented whenever it is used.


Any object in Odi can be saved in XML format or HTML format.
A load plan is used to load data in a particular order, as instructed. Any ODI objects added to a load plan are converted to scenarios.
Scenario is the execution unit for production, which can be scheduled.

99. Difference between Load plan & Package


Load Plan vs Package:
 Load plans support reusability, native parallelism and exception handling; packages do not.
 Load plans are available from ODI 11.1.1.5 onwards; packages have been available since the first ODI releases.
 Every object inserted in a load plan is converted to a scenario; every object inserted in a package remains as it is.
 A load plan is the largest execution unit; a package is the second largest, after the load plan.
 Load plans are validated before execution; packages are not.

100. Suppose I’m having 6 interfaces and while running the 3rd interface it got failed.
How to run remaining interfaces?

101. What is load plan and types of load plans?


A load plan is a process to run or execute multiple scenarios as a sequential, parallel or condition-based execution. Accordingly, there are 3 types of load plan steps:
 Sequential
 Parallel
 Conditional

102. How to write Sub queries in Odi?


Using yellow interfaces and the sub-select option we can create sub-queries in ODI; or we can use a view; or, using an ODI procedure, we can call database queries directly.

103. Suppose having unique and duplicate but I want to load unique record in one
table and duplicate record in another?

104. How to implement data validations?


Use filters and mappings for row-level validations; for data quality checks based on constraints, use a CKM with flow control.

105. How to handle exceptions?

106. In a package one interface got failed. How to know which interface got failed if
we have no access to operator?
107. How to implement the logic in procedures if the source side data deleted
that will reflect the target side table?
Using this query in the "Command on Target" of a procedure step:
delete from target_table where not exists
  (select 'X' from source_table where source_table.id = target_table.id)
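A minimal runnable sketch of the delete-synchronization query above, using SQLite in place of Oracle (the table and column names follow the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE source_table (id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE target_table (id INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO source_table VALUES (?)", [(1,), (2,)])
# Row 3 was deleted at the source, so it must disappear from the target
cur.executemany("INSERT INTO target_table VALUES (?)", [(1,), (2,), (3,)])

# The NOT EXISTS delete from the answer above
cur.execute("""DELETE FROM target_table
               WHERE NOT EXISTS (SELECT 'X' FROM source_table
                                 WHERE source_table.id = target_table.id)""")
remaining = [r[0] for r in cur.execute("SELECT id FROM target_table ORDER BY id")]
print(remaining)  # [1, 2]
```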

108. If the source has a total of 15 records, with 2 updated and 3 newly inserted, and at the target side we have to load the newly changed and inserted records, how?
This is an incremental update (update-else-insert, SCD Type 1) strategy. Use the IKM Incremental Update knowledge module for both insert and update operations.

109. How to load the data with one flat file and one Rdbms source using joins?
110. If source and target are in oracle technology tell me the process to achieve
this requirement
(Interfaces, KMs, and Models)
Use LKM SQL to SQL (or) LKM SQL to Oracle, with IKM Oracle Incremental Update or IKM SQL Control Append.
111. What we specify in the Xml data server and parameters to connect to xml file?
112. How to reverse engineer views/ How to load data from views?
In the model, go to the reverse-engineer tab and select "View" as the type of object to reverse-engineer.

113. What is Profile in Odi?


A profile is a set of object-wise privileges. We can assign profiles to users; users get their privileges from the profile.

114. How to remove Duplicate in Odi?


Enable the Distinct option at the IKM level; it removes the duplicate rows while loading to the target.
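The effect of the distinct option can be sketched as a plain SELECT DISTINCT during the load (SQLite sketch with made-up table names; in ODI the KM generates equivalent code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO src VALUES (?, ?)", [(1, 'a'), (1, 'a'), (2, 'b')])
cur.execute("CREATE TABLE tgt (id INTEGER, name TEXT)")
# Deduplicate while loading, as the IKM's distinct option would
cur.execute("INSERT INTO tgt SELECT DISTINCT id, name FROM src")
count = cur.execute("SELECT COUNT(*) FROM tgt").fetchone()[0]
print(count)  # 2
```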

115. How will you bulk load data?


Using a knowledge module designed for bulk loading, e.g. LKM File to Oracle (SQLLDR) or a native bulk-load KM for the target technology.

116. How will you bring files from different locations?


117. How to prevent overwrite existing data in a table?
In the target properties of the Flow tab, if you are using Incremental Update,
set the option UPDATE = false; or use the Control Append KM.

118. What is Memos?


A memo is an unlimited amount of text attached to virtually any object, visible
on its memo tab.

119. What is Change Data Capture (CDC), Explain?


CDC is used to capture the data that is inserted, updated and deleted at the source side and to replicate the same at the target. ODI has journalizing knowledge modules (JKMs) to do the required implementation. CDC is of two types:
 Simple CDC – implemented on a single table
 Consistent CDC – implemented on multiple tables/a model

Steps to implement CDC


 Add the table (data store) to CDC.
 Right-click the data store, select Changed Data Capture, add a subscriber and execute it.
 Right-click the data store, select Changed Data Capture, then Start Journal.
ODI then creates a subscriber table in the work schema, plus the journal table, views and triggers that capture any insert, update or deletion.
 Drag the journalized table as the source and the required data store as the target. On the source data store, check the option "Journalized Data Only"; ODI automatically adds a filter with the required condition and subscriber information. Use the proper LKM and IKM for your technology.
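The trigger-based journalizing that the JKM sets up can be sketched in SQLite (the j_customers journal table and trigger names are illustrative; ODI's real J$ tables also carry subscriber and date columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
-- Journal table, similar in spirit to ODI's J$ table
CREATE TABLE j_customers (id INTEGER, op TEXT);
-- Triggers capture every insert, update and delete into the journal
CREATE TRIGGER trg_ins AFTER INSERT ON customers
  BEGIN INSERT INTO j_customers VALUES (NEW.id, 'I'); END;
CREATE TRIGGER trg_upd AFTER UPDATE ON customers
  BEGIN INSERT INTO j_customers VALUES (NEW.id, 'U'); END;
CREATE TRIGGER trg_del AFTER DELETE ON customers
  BEGIN INSERT INTO j_customers VALUES (OLD.id, 'D'); END;
""")
cur.execute("INSERT INTO customers VALUES (1, 'Asha')")
cur.execute("UPDATE customers SET name = 'Usha' WHERE id = 1")
cur.execute("DELETE FROM customers WHERE id = 1")
journal = cur.execute("SELECT id, op FROM j_customers").fetchall()
print(journal)  # [(1, 'I'), (1, 'U'), (1, 'D')]
```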

120. How to perform Load Balancing?


If you have two or three agents and want to do load balancing:
under the physical agent's Load Balancing tab, check the required agents.

121. Asynchronous/parallel execution?


In ODI, interfaces can be executed in parallel using the OdiStartScen tool:
1. Create scenarios of the interfaces.
2. In the package, drag in the OdiStartScen tool and provide the scenario details. Version = -1 calls the latest scenario.
3. In Synchronous/Asynchronous, select Asynchronous.
4. Do this for every interface to be executed in parallel.
5. Finally call OdiWaitForChildSession, which waits until all the parallel scenarios have completed in success or failure.

122. What is the difference between context, logical and physical schema?
The most important point is that none of them exists without the others.
Data server – the object that defines the connection to the database. It stores the host, user and password for the database instance.
Physical schema – defines two schemas: the data schema, from which the data is read, and the work schema, where ODI works (the area where the I$ and C$ tables are created).
Context – defines an environment, a particular instance for code execution: development, test, production.
Logical schema – an alias that allows the same code to be used in any environment.

123. Explain ODI Architecture?


The repository forms the central component of the ODI architecture. This stores
configuration information about the IT infrastructure, the metadata for all
applications, projects, scenarios and execution logs. Repositories can be installed on
any OLTP relational database. The repository also contains information about the
ODI infrastructure, defined by the administrators

Security Manager and Topology Manager are used for administering the infrastructure. Designer is used for reverse-engineering metadata and developing projects.
Operator is used for scheduling and monitoring runtime operations.
At design time, developers work in a repository to define metadata and business
rules. The resulting processing jobs are executed by the agent.
The agent orchestrates the execution by leveraging existing systems. It connects to the available servers and requests them to execute the code. It then stores all return codes and messages into the repository, along with statistics such as the number of records processed and the elapsed time.

Developers release their projects in the form of scenarios that are sent to production. In production, these scenarios are scheduled and executed on a scheduler agent that also stores all its information in the repository.
Operators have access to this information and are able to monitor the integration processes in real time. Business users, as well as developers, administrators and operators, can get web-based read access to the repository.

The Metadata Navigator (ODI Console) exposes the ODI repository through any web browser, such as Internet Explorer or Mozilla Firefox.

124. What are the ODI Design time components and Run time components?
The ODI design-time components are:
Designer – reverse-engineer, develop projects, release scenarios
Topology Manager – defines the infrastructure of the IS
Security Manager – manages user privileges
The ODI run-time components are:
Operator – operates production, monitors sessions
Scheduler Agent – lightweight, distributed architecture; handles schedules and orchestrates sessions.

125. What is execution repository?


When a work repository is used to store only execution information (typically for production purposes) it is called an "execution repository".

126. What is Topology, explain?


Topology is the complete representation of your information system.
It includes everything from data servers and schemas through to the reserved keywords of the languages used by the different technologies. ODI uses this topology to access the resources available in the information system to carry out integration tasks.

Technologies: Data types, Data servers, schemas


Agents: the run-time modules of ODI; agents carry out integration tasks at runtime.
Contexts: allow you to define an integration process at an abstract level, then link it to the physical data servers where it will be performed.
Languages: specify the keywords that exist for each technology.
Actions: are used to generate the DDL scripts.
If you are adding a new technology, you would need to modify these parts of the topology (languages and actions).

127. What is a physical schema?


A physical schema indicates the physical location of the data stores, such as tables, files, topics and queues, inside a data server.
An ODI physical schema always consists of two data server schemas: the data schema, which contains the data stores, and the work schema, which stores the temporary objects.

128. What a Data server has?


One or more physical schemas. One default physical schema for server level
temporary objects (work schema).

129. ODI recommendations for ODI temporary objects?


It is recommended that on each data server you create a dedicated area for
ODI’s temporary objects and use it as the work schema.

130. What is a logical schema?


A logical schema is a single alias for different physical schemas that have similar data structures based on the same technology, but in different contexts. One logical schema corresponds to the data structure of one application, implemented in several places called physical schemas.

For example, one logical schema may resolve to:
Dev – physical schema 1
Test – physical schema 2
Prod – physical schema 3

131. What is design time verses run time?


In Odi, you work at design time on logical resources. At runtime, execution
is started in a particular context, allowing access to the physical resources of that
context.

132. What is project?


A project is a collection of ODI objects that have been created by users. It
should encompass a single functional domain.

133. What can a project contain?


Folders –Packages, interfaces, procedure, Variables, Sequences, User functions.
(Either belongs to a project or can be created with global scope), Knowledge
modules, Markers.

134. What is a folder?


A folder gives a project a hierarchical structure. It can contain folder and
other objects. Every package, interface, or procedure in a project must belong to a
folder.
If you have a new functional domain create new project
If you have a large number of interfaces, procedures and packages in a project,
then create folders to group them
Objects in one folder can be used by any other object in the project.

135. What is a knowledge module?


A knowledge module is a code template containing the sequence of commands necessary to carry out a data integration task.
There are different KMs for loading, integration, checking, reverse-engineering and journalizing. KMs work by generating the code to be executed at run time.
Interfaces KM’s:
LKM- Loading-assembles data from source data stores to the staging area
IKM- Integration-uses a given strategy to populate the target data store from the
staging area
Models KM’s
CKM – Check: checks data in a data store for errors, statically or during an integration process.
RKM – Reverse-engineering: retrieves the structure of a data model from a database; it is needed only for customized reverse-engineering.
JKM – Journalizing: sets up a system for changed data capture to reduce the amount of data that needs to be processed.

136. What are ID number?


Every ODI object has an internal ID number, shown on its Version tab, for example:

Unique number for this interface in this repository: 6
ID of the repository where it was created: 002

The ID is unique only within each type of object, and it is exported with the object.

137. What is a Scenario?


A Scenario is a frozen copy of the generated code, ready for execution in any
context. It is the form in which all work in ODI should be released.
Scenarios are “Compiled” ODI objects
Easily deployed
Not editable
Scenarios have a unique name and version.
Scenarios are generated from packages, interfaces, procedures, variables with
common format Designer option.
Scenarios can be started on different contexts.

138. What are the various ways to execute scenarios?


The scenarios can be executed:
From the GUI
From the command line
Using the ODI tool OdiStartScen
Via scheduling
From Metadata Navigator (ODI Console)
139. What is Regenerating a Scenario?
Regeneration replaces the selected version of the scenario with a freshly generated copy:
Overwrites the scenario
Reattaches any schedules of the current scenario
Generation (of a new version), by contrast:
Preserves a history of the scenario
Requires schedules to be redefined

140. Explain the steps of LKM File to SQL?


The steps of LKM File to SQL are:
1) Drop work table C$
2) Create work table
3) Load data
4) Drop work table

141. Explain LKM SQL to Oracle?


Same as LKM File to SQL but loads data from an ISO-92 database to an
Oracle target database.
1) Validate KM Options
2) Drop work table
3) Create work table
4) Lock journalized table
5) Load data
6) Create temporary index on work table
7) Analyze work table
8) Clean up journalized table
9) Drop work table

142. What are steps LKM SQL to SQL?


Loads data from any ISO-92 database to any ISO-92 compliant target database.
1) Drop work table
2) Create work table
3) Lock journalized table
4) Load data
5) Clean up journalized table
6) Drop work table

143. What is CKM Oracle?


This module controls the validity of the constraints of a data store and rejects the invalid records into an error table. It can be used for static control as well as flow control. The module creates a non-unique index on the I$ table before checking AKs and PKs, and an index on the E$ table before removing erroneous records from the I$ table.
Restrictions:
Data cleansing can be performed only if an update key is defined on the controlled table.
This KM uses the Oracle ROWID column for data cleansing.

144. What are steps of CKM Oracle?


The steps of CKM Oracle are:
1) Validate KM Options
2) Drop Check table
3) Drop check view
4) Drop check EV
5) Create check table
6) Delete previous check sum
7) Drop error table
8) Upgrade process rename column
9) Upgrade process add new column
10) Create error table
11) Delete previous errors
12) Create index on PK
13) Insert PK errors
14) Create index on AK
15) Insert AK errors
16) Insert FK errors
17) Insert CK errors
18) Insert NOT NULL errors
19) Create index on error table
20) Delete errors from controlled table
21) Insert checksum into check table

145. Explain the IKM Oracle Incremental Update?


Non-existing rows are inserted; already existing rows are updated. Data can be controlled: invalid data is isolated in the error table and can be recycled.
Requirement – an update key defined in the interface is mandatory.
Restrictions:
The TRUNCATE option cannot work if the table is referenced by another table.
The flow-control and static-control options call the CKM to isolate invalid data; if no CKM is set, an error occurs.
Both options must be set to "No" when an integration interface populates a temporary target data store.
The default UPDATE option is true, which means it is assumed by default that there is at least one non-key column in the target table.

146. IKM Oracle Slowly Changing Dimension?


Description: Type 2 SCD on Oracle.
Data can be controlled. Invalid data is isolated in the error table and can be recycled.
Requirements:
When defining the target data store, it is necessary to set the following parameters (from the SCD Behavior list box, on the Description tab of each column concerned):
1) Surrogate key: Single column identifying records. This column must be
-Mapped to the next value
-Executed on target
2) Natural Key: Columns representing the natural key of the record.
3) Overwrite Column: List of Updatable columns
4) Insert row: Add row on change
5) Current Record Flag: Status of the record
6) Start time stamp: Indicating the beginning of the record availability
7) End Time Stamp: Indicating the end of the record availability.

Limitations:
If the table primary key is designated by the same column as the surrogate
key, then on the “Diagram” tab of the interface, uncheck the “Check Not Null”
check box for the column representing the primary key.
On the “Control” tab of the interface and in the “constraint” panel, set the
option Value to no for the primary control.

147. IKM SQL Control Append?


Integrates data into any ISO-92 compliant database target table in truncate/insert (append) mode.
1) Create target table
2) Drop Flow table
3) Create flow table I$
4) Lock journalized table
5) Insert flow into I$ table
6) Recycle previous errors
7) Flow control
8) Truncate target table
9) Delete target table
10) Insert new rows
11) Commit transaction
12) Cleanup journalized table
13) Post integration control
14) Drop flow table

148. ODI 11g features overview?


ODI 11.1.1 Features:
ODI is a data integration platform focused on fast bulk data movement and handling complex data transformations:
1) Enterprise scale development patterns
--Availability, failover and security
2) Developer productivity and runtime performance
--New IDE and case features improving integration flow design efficiency,
simplicity and performances
3) Component administration
--Template, driven deployment, enterprise manager integration
4) Run time management, Debugging and Diagnosability
--Code simulation, enhanced error management and session control
5) Hot plug ability and Heterogeneous connectivity
--Continues improvements each release to support news technologies for integration.

1) Architecture for enterprise-scale development:

ODI 11g provides its runtime components as Java Enterprise Edition applications, enhanced to fully leverage the capabilities of the Oracle WebLogic application server.
ODI components include exclusive features for enterprise-scale development: high availability, scalability and hardened security.
2) Simplified deployment and unified administration:
Components deploy easily and quickly in an Oracle WebLogic Server using preconfigured templates, or templates generated from the metadata defined in the topology. It is also possible to create data server definitions for sources and targets in the topology and to deploy these in a few clicks as data sources in a WebLogic server.
3) Better control over production
ODI Console
Enhanced session control
Enhanced error management
4) Design-time productivity
New IDE based on JDeveloper
Redesigned interface editor
Auto-fixing
Quick edit
Code simulation
Reverse-engineering context automatically set to the default context
5) ELT features for better performance
Data sets and set based operations
Partitioning
Lookups wizard
Support for natural joins
Support for native sequences
Automatic temporary index management

149. What is Full load?


This load process is executed once, at the time of production rollout, to load all the master and transactional data. Interfaces developed for full load use the knowledge modules below to load data into the data mart:
LKM – LKM SQL to Oracle
IKM – IKM SQL Control Append, IKM Oracle Multi Table Insert, IKM Oracle SCD
CKM – CKM Oracle

150. What is incremental load?


This load process is scheduled after production rollout to load all incremental master and transactional data:
LKM – LKM SQL to Oracle
IKM – IKM Oracle Incremental Update
- IKM Oracle Incremental Update (MERGE)
- IKM Oracle Incremental Update (PL/SQL)
- IKM Oracle SCD
CKM – CKM Oracle

151. What is Scheduling?


The scheduling of ELT jobs (scenarios) is done through the ODI scheduler. During scheduling, the following parameters are considered:
a) Expected time of availability of source files on the FTP server
b) Dependencies of jobs (scenarios)
c) Expected load time of each job into a single dimension or a single fact.

Following strategies will be followed for scheduling:


Scheduling will be time-based as well as event-based.
All independent jobs will be scheduled to run concurrently.
Dependent jobs will be scheduled event-based; these jobs wait for the completion of their parent jobs.
The job that transfers files from the FTP server to the landing stage will first check the availability of all files on the FTP server. It will transfer all available files to the landing stage and will wait for missing files for a given time window.

152. Loading data from landing stage to staging?


Before loading data into the data mart, data from the landing stage is first loaded into the staging area, where all data-type and size related validations are performed. For this load, a relational table is created in the stage schema for every feed file, with the same fields as the feed file to be loaded.
LKM – LKM File to Oracle (SQLLDR)
IKM – IKM SQL Control Append
CKM – CKM Oracle
During the load a truncate/insert strategy is used.

153. Loading data from stage to Data mart?


To load data from stage to data mart, ODI interfaces and procedures are developed. Every interface is responsible for loading data into a single dimension or a single fact. Interfaces are where all transformation logic, joins, lookups, filters and aggregation rules are specified. Details of the interfaces are provided in the low-level design document.

There will be two types of load process:


Full Load
Incremental Load

154. What are the different dataware housing methodologies that you are familiar
with?
The popular methodologies are:
1. Ralph Kimball – bottom-up approach
2. Bill Inmon – top-down approach

155. What is a View?


 A view is a database object that is a logical representation of a table
 It is derived from a table but has no storage of its own
 A view fetches data from its base table by running the base query
 A view takes the output of the query and treats it as a table

156. What are Materialized views?


157. What is a data warehouse?
158. Differences between facts and dimensions?
159. What is fact less fact table?
A fact table that contains only primary keys from the dimension tables and does not contain any measures is called a factless fact table.

160. What are fact constellations?


161. What is granularity?
In DWH grain refers to the level of detail available in a given fact table as well
as to the level of detail provided by a star schema.

165. What is a junk dimension?


166. What is fact constellation?
167. What are dirty dimensions?
168. Display second highest salary from emp table?
select max(sal) from emp where sal<(select max(sal) from emp);

169. Display nth row from the table?


Select * from emp where Rownum<=n
Minus
Select * from emp where Rownum<n;

170. To display p to q rows from the table?
select * from emp where rownum <= q
minus
select * from emp where rownum < p;

171. Count distinct data values in a column?


select count(sal) from (select distinct(sal) from emp);

172. Query to display any duplicate rows?


select sal, count(*) from emp group by sal having count(*) > 1;

173. Display every 4th row in a table?


select * from emp where (rowid, 0) in (select rowid, mod(rownum, 4) from emp);
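The aggregate-based queries above can be verified against a small in-memory table (SQLite sketch; the ROWNUM/ROWID-based answers are Oracle-specific and are not reproduced here). The duplicate query is written with an explicit column list, since SELECT * with GROUP BY is not valid:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (ename TEXT, sal REAL)")
cur.executemany("INSERT INTO emp VALUES (?, ?)",
                [("a", 100), ("b", 300), ("c", 300), ("d", 200)])
# Second-highest salary, as in the answer to Q168
second = cur.execute(
    "SELECT MAX(sal) FROM emp WHERE sal < (SELECT MAX(sal) FROM emp)").fetchone()[0]
# Duplicate values, as in the duplicate-detection query of Q172
dups = cur.execute(
    "SELECT sal FROM emp GROUP BY sal HAVING COUNT(*) > 1").fetchall()
print(second, dups)  # 200.0 [(300.0,)]
```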

174. What is IKM multi table insert?


175. What are the advantages of database partitions?
Partitioning brings many benefits:
You can analyze each partition quickly, instead of running an analyze on a 100-gig table.
You can reorganize each partition independently of any other partition.
You can easily redistribute the load over many disks: having evenly distributed the data into 100 1-gig partitions, you can move them as needed.
You no longer have to load this data into this file in direct-path mode, that data into that file, and so on; just let the software do that work for you.
And if you are a data warehouse and need to do a bulk update/delete, you can now do it in parallel, since the data is partitioned.

176. What are the Drawbacks of ELT Approach?


177. Creating User defined Data Quality rules?
Most common data quality rules can be implemented as constraints:
Primary or alternate keys, to ensure the uniqueness of records for a set of columns.
References (simple), to check the referential integrity of records against records in a referenced table.
References (complex), to check referential integrity based on a "semi-relationship" defined by a complex expression. For example, check that the first three characters of the address field, concatenated with the last four characters of the postal code of the customer table, reference an existing country code in my country table.

Conditions to check a rule based on several columns of the same record.


For example, check that the order update date is between the creation date and the
purchase date.
Mandatory columns to check that a column contains a non-null value.
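In plain SQL terms, the last two rule types map to NOT NULL and CHECK constraints (ODI can also enforce them via static or flow control without altering the database); table and column names here are hypothetical:

```sql
-- Mandatory column: order_date may not be null.
alter table orders modify (order_date not null);

-- Condition across columns of the same record.
alter table orders add constraint chk_upd_date
  check (update_date between creation_date and purchase_date);
```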

178. How variables can be used?


Variables can be used in:
Interfaces (mapping, filter and join expressions).
Procedures (in the code of commands and in options).
The resource name of a datastore.
A server URL.
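In an interface, a variable is referenced with the #PROJECT.VARIABLE (or #GLOBAL.VARIABLE) notation. For example, a filter expression on a source datastore might look like this (the variable name is hypothetical):

```sql
-- Only pick up rows changed since the last successful load.
ORDERS.LAST_UPDATE_DATE > #MY_PROJECT.LAST_LOAD_DATE
```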

179. What is Load Balancing?


There are cases when a single agent can become a bottleneck, especially when it has to deal with a large number of sessions started in parallel. For example, suppose you want to retrieve source data from 300 stores. If you attempt to do this in parallel on a single agent, it may lead to excessive overhead, as it will require opening 300 threads, one for every session. A way to avoid this is to set up load balancing across several agents.

180. What are the different knowledge modules u used?


Multi table insert / rollback – IKM Oracle Multi Table Insert
Router transformation – LKM File to SQL and IKM SQL Control Append
CDC – JKM Oracle Simple / JKM Oracle Consistent
Control append – IKM SQL Control Append
Append – IKM SQL Control Append
Incremental update (SCD1) – IKM Oracle Incremental Update
SCD2 – IKM Oracle Slowly Changing Dimension
File to table – LKM File to SQL and IKM SQL Control Append
XML to table – LKM SQL to SQL and IKM SQL Control Append
Table to file – IKM SQL to File Append
File to file – LKM File to SQL and IKM SQL to File Append

181. Difference between database and dataware house?


A database is a collection of schemas/users.
A data warehouse is a subject-oriented, non-volatile, time-variant, integrated collection of data in support of decision making.

A database consists of OLTP data.

A data warehouse consists of OLAP data.

Data in a database is in a normalized structure.

Data in a data warehouse is in a denormalized structure.

182. What is the default join condition that exists when two tables are joined in ODI?
Natural Join
183. What is Physical agent & Logical agent?
A physical agent corresponds to a single standalone agent or a Java EE agent, and must have a unique name in the topology.
Similar to schemas, physical agents having an identical role in different environments can be grouped under the same logical agent. A logical agent is linked to physical agents through contexts. When starting an execution, you indicate the logical agent and the context; ODI translates this information into the single physical agent that will receive the execution request.

184. What is Solution?

A solution is a comprehensive and consistent set of interdependent versions of objects (projects, models, scenarios and global objects). It is used to group these object versions together for release and deployment, and a solution can itself be checked in and restored from the version repository.
185. Running parallel ODI Interfaces

Generate a scenario for each interface and call the scenarios from a package with the OdiStartScen tool in asynchronous mode; each call then starts its own session, and the sessions run in parallel (optionally across load-balanced agents). Use OdiWaitForChildSession afterwards if the package must wait for all parallel sessions to finish.