What Are The Diff Navigators Available in Odi?
C$ comes first and I$ comes second: the C$ table is created while loading data from the source into the staging area, whereas the I$ table is created while loading data from the staging area into the target table.
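As a rough illustration of this ordering (all table, schema and column names below are made up for the example, not actual ODI-generated names), the generated steps follow a pattern similar to:
-- LKM step: extract from the remote source into the C$ (loading) table in the work schema
insert into C$_CUSTOMER (CUST_ID, CUST_NAME)
select CUST_ID, CUST_NAME from source_schema.customers;
-- IKM step: build the I$ (integration) flow table from the C$ table, then write it to the target
insert into I$_CUSTOMER (CUST_ID, CUST_NAME)
select CUST_ID, CUST_NAME from C$_CUSTOMER;
insert into target_schema.customers (CUST_ID, CUST_NAME)
select CUST_ID, CUST_NAME from I$_CUSTOMER;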
6. What is the difference between static control and flow control?
Flow control checks the constraints of the target table against the incoming data during the flow. Rows that satisfy the constraints are inserted into the target table; rows that violate them are routed to the error table (E$), so the target receives only valid data.
Static control first loads the data into the target table and then checks the constraints on the target. The invalid rows are afterwards copied into the error table (E$), so the target table ends up containing both valid and invalid data.
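As a minimal sketch of how invalid rows end up in the error table (the table, columns and constraint below are invented for illustration, not the exact code a CKM generates), the check step is conceptually similar to:
-- copy rows that violate a NOT NULL constraint into the E$ error table
insert into E$_CUSTOMER (ERR_TYPE, ERR_MESS, CUST_ID, CUST_NAME)
select 'F', 'CUST_NAME is mandatory', CUST_ID, CUST_NAME
from I$_CUSTOMER
where CUST_NAME is null;
-- with flow control, the offending rows are then removed from the flow table before loading the target
delete from I$_CUSTOMER where CUST_NAME is null;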
7. What is the difference between incremental update and Change Data Capture (CDC)?
Change Data Capture is an ODI feature, whereas incremental update is a loading strategy.
The staging schema (work area) is first defined when you create a physical schema, but it can be changed in the interface properties.
14. How to create a load plan in ODI and how to arrange its dependencies?
We create a load plan from the Load Plans and Scenarios tab in the Designer navigator. In the Steps tab, after adding a step, the Add Step wizard offers Serial step, Parallel step, Run Scenario step, Case step, When step and Else step options, through which we arrange the dependencies.
15. What is the process of CDC?
If any change occurs in the source, that change is then reflected in the target.
Import the JKM Oracle Consistent knowledge module
Remove simple CDC if it has already been applied
Apply the JKM on the model
Add to CDC
Add Subscriber
Start Journal
In CDC, only the data changed in the source since the last update is loaded into the target. In incremental update, all the source data is loaded into the target along with the updated data from the source.
18. What are the different knowledge modules you used in ODI?
In ODI we have 6 different knowledge modules. They are:
Reverse-engineering KM (RKM)
Loading KM (LKM)
Integration KM (IKM)
Check KM (CKM)
Journalizing KM (JKM)
Service KM (SKM)
Knowledge Modules are pre-built code templates that contain the sequence of commands needed to perform an integration task. In ODI we typically do not write code by hand; the code is generated automatically from these Knowledge Modules.
To delete duplicate rows from a table, keeping one row per key:
Delete from EMP where rowid not in (select max(rowid) from EMP group by empno);
28. What is the difference between a PL/SQL procedure and an ODI procedure?
29. How to pass dates as input to interfaces in ODI?
Put the query select sysdate from dual in the refreshing query of an ODI variable, then assign or concatenate that variable into the file resource name to achieve the required task.
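A minimal sketch, assuming a project variable named V_LOAD_DATE (the variable name, format and file name are illustrative): the variable's refreshing query returns the date as text, and the variable is then referenced in the file datastore's resource name.
-- refreshing query of the V_LOAD_DATE variable
select to_char(sysdate, 'YYYYMMDD') from dual
-- the file resource name can then be set to something like sales_#PROJECT.V_LOAD_DATE.txt
-- (where PROJECT stands for the project code in which the variable is defined)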
34. What is synonym insert, update and insert/update and when do we use it?
35. What are the different import and export methods in ODI?
We have 2 types of import and export methods:
Smart import/export
Regular import/export
In smart import/export the dependent objects are also exported; for example, for an interface the source and target datastores are exported as well.
In regular import/export only the selected object is exported.
36. Which knowledge modules are required to load data from a flat file to a database?
LKM File to SQL
IKM SQL Control Append
43. What are the different connections available in ODI? (Ex: JDBC)
JDBC stands for Java Database Connectivity and is used to connect to a database. It has two components:
JDBC driver
JDBC URL
The JDBC driver creates the connection between ODI and the database.
The JDBC URL points the connection to a particular database instance or schema.
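For example, for an Oracle data server the values typically look like the following (host, port and SID are placeholders to be replaced with the actual connection details):
JDBC driver: oracle.jdbc.OracleDriver
JDBC URL: jdbc:oracle:thin:@<host>:<port>:<SID>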
45. How to stop the parallel execution in package if any interface fails?
46. What is incremental update merge in Odi?
47. What is quick edit in interface and why do we need quick edit?
48. What are the options available in interface overview tab?
49. Where do we set the LKM and IKM, and how do we set the CKM?
Through the Flow tab of the interface we set the LKM and IKM, depending on the loading strategy. Through the Control tab we set the CKM.
50. What is your database size?
51. How many fact tables and dimension tables do you have in your project?
52. What is a surrogate key and why do we require a surrogate key?
A surrogate key is a substitute for the natural primary key. It is a unique identifier or number (normally created by a database sequence generator) for each record of a dimension table, and it can be used as the primary key of that table.
A surrogate key is useful because natural keys may change over time.
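A minimal Oracle sketch of generating a surrogate key with a sequence (the sequence, table and column names are illustrative):
create sequence customer_dim_seq start with 1 increment by 1;
-- the surrogate key column is populated from the sequence, independently of the natural key
insert into customer_dim (customer_sk, customer_natural_key, customer_name)
values (customer_dim_seq.nextval, 'CUST-1001', 'Acme Ltd');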
55. How do you implement it if you have multiple sources and multiple targets within a single interface, and the sources and the targets are in different technologies?
56. What is the difference between stored procedures and procedures?
57. What is the difference between 11g and 12c? What do the g and the c stand for?
The g in 11g stands for Grid and the c in 12c stands for Cloud.
ODI 11g: no multiple target tables in a single interface. ODI 12c: we can load multiple target tables in a single mapping.
ODI 11g: interfaces. ODI 12c: mappings and reusable mappings.
ODI 11g: no reusable mappings added in global objects. ODI 12c: reusable mappings added in global objects.
ODI 11g: Standalone and J2EE agents. ODI 12c: Standalone, Colocated and J2EE agents.
ODI 11g: the only NG profile is NG_Designer. ODI 12c: additional NG profiles NG_Console, NG_Metadata_Admin and NG_Version_Admin.
ODI 11g: no wallet password. ODI 12c: wallet password added.
ODI 11g: no direct role selection while creating users. ODI 12c: direct role selection while creating users.
ODI 11g: no Hive technology for Hadoop in the topology. ODI 12c: new Hive technology for Hadoop added in the topology.
ODI 11g: no pivot functions. ODI 12c: the Pivot, Unpivot, Table Function and Subquery Filter components are available (via Opatch 17053786).
ODI 11g: sequence enhancements support only Nextval. ODI 12c: sequence enhancements support Currval and Nextval.
ODI 11g: no Split, Sort or Set components. ODI 12c: these components are available, along with a declarative flow-based user interface and XML improvements.
62. How do you send a file to another team using ODI so that the file is password protected?
By encrypting the file before sending it and decrypting it at the receiving end.
63. In the Control Append strategy, why are the work tables dropped rather than deleted?
64. Have you customized any knowledge modules? If yes, which ones?
65. What is the difference between view and materialized view?
A view has a logical existence and does not contain data; a materialized view has a physical existence.
A view stores only its definition (the query), not the data, whereas a materialized view is a physical database object that stores the query result.
We cannot perform DML operations on a view, but we can perform DML operations on a materialized view.
When we select from a view, the data is fetched from the base tables; when we select from a materialized view, the data is fetched from the materialized view itself.
A view cannot be scheduled to refresh; a materialized view can be scheduled to refresh.
Aggregated data can be kept in a materialized view, and a materialized view can be created based on multiple tables (see the example below).
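A short illustration of the two objects (the object names and the aggregation are invented for the example):
-- a plain view: only the query is stored, the data always comes from the base table
create view emp_v as
select empno, ename, sal from emp;
-- a materialized view: the result set is stored physically and can be refreshed on a schedule
create materialized view dept_sal_mv
build immediate
refresh complete on demand
as select deptno, sum(sal) as total_sal from emp group by deptno;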
68. What is the difference between a sub-query and a correlated sub-query?
A sub-query is executed once for the parent statement:
Select deptno, ename, sal from emp where sal in (select sal from grade where sal_grade='A' or sal_grade='B');
A correlated sub-query is executed once for each row of the parent query. For example, to find all employees who earn more than the average salary in their department:
Select last_name, salary, department_id from employees A where salary > (select avg(salary) from employees B where B.department_id = A.department_id group by B.department_id);
Another pair of examples:
Select * from emp where deptno in (select deptno from dept);
Select e.* from emp e where sal >= (select avg(sal) from emp a where a.deptno = e.deptno group by a.deptno);
Stored procedures are not run automatically; they have to be called explicitly by the user. Triggers, on the other hand, are executed automatically when their associated event is fired.
You can use rownum to limit the number of rows returned by a query, as in this example:
select * from employees where rownum < 10;
Rowid vs Rownum:
Rowid is an Oracle internal ID that is allocated every time a new record is inserted into a table; this ID is unique and cannot be changed by the user. Rownum is a row number returned by a select statement.
Rowid is permanent; rownum is temporary.
Rowid is a globally unique identifier for a row in a database: it is created when the row is inserted into the table and destroyed when the row is removed. The rownum pseudocolumn returns a number indicating the order in which Oracle selects the row from a table or set of joined rows.
Order of WHERE and HAVING:
SELECT column, group_function
FROM table
[WHERE condition]
[GROUP BY group_by_expression]
[HAVING group_condition]
[ORDER BY column];
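A concrete example against the standard EMP demo table (the filter and the threshold are arbitrary):
select deptno, avg(sal) as avg_sal
from emp
where job <> 'PRESIDENT'   -- WHERE filters rows before grouping
group by deptno
having avg(sal) > 2000     -- HAVING filters groups after aggregation
order by deptno;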
74. What is an agent and what are the different types of agents in ODI?
ODI cannot execute anything by itself; it uses a component called an agent. The agent is a piece of Java code that takes care of execution in ODI.
In ODI 11g there are 3 types of agents:
Standalone agent
Java EE agent
Local agent
In ODI 12c a new agent type, the Colocated agent, was introduced.
75. What are the different ways of creating the master repository in ODI?
There are 2 different ways to create the master repository in ODI:
RCU (Repository Creation Utility)
From ODI Studio (via the File menu)
83. What systems can ODI extract from and load data into?
ODI brings true heterogeneous connectivity out of the box: it can connect natively to Oracle, Sybase, MS SQL Server, MySQL, LDAP, DB2, PostgreSQL and Netezza.
It can also connect to any data source supporting JDBC; it is even possible to use the Oracle BI Server as a data source using the JDBC driver that ships with BI Publisher.
Oracle has released a statement of direction for both products, published in January 2010:
http://www.oracle.com/technology/products/oracle-data-integrator/sod.pdf
OWB 11gR2 is the first step from Oracle to bring these two applications together; it is now possible to use ODI Knowledge Modules within your OWB 11gR2 environment as 'Code Templates'. An Oracle white paper published in February 2010 describes this in more detail:
http://www.oracle.com/technology/products/warehouse/pdf/owb-11gr2-code-template-mappings.pdf
Future plans are to have ODI fully available through the OBIA offering.
Loading Strategies
Control Append – Source to Staging – Truncate & Reload
Incremental Update (SCD1) – Dimension table loading – Update else Insert (see the SQL sketch after this list)
Slowly Changing Dimension (SCD2) – Dimension table loading – Insert and Modify
Append – Fact table loading – Insert
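As a rough SQL sketch of the update-else-insert pattern that the Incremental Update strategy corresponds to (the table and column names are invented for the example, not ODI-generated code):
merge into customer_dim t
using stg_customer s
on (t.customer_id = s.customer_id)
when matched then
  update set t.customer_name = s.customer_name   -- existing rows are updated
when not matched then
  insert (t.customer_id, t.customer_name)        -- new rows are inserted
  values (s.customer_id, s.customer_name);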
96. If the star schema is good for performance, why is the snowflake schema mostly implemented?
97. What is the difference between a snowflake schema and a star schema?
The star schema is the simplest data warehouse schema, whereas the snowflake schema is a more complex data warehouse model than a star schema.
In a star schema each dimension is represented in a single table and there should be no hierarchies between dimension tables. In a snowflake schema at least one hierarchy exists between dimension tables.
Both contain a fact table surrounded by dimension tables. If the dimensions are de-normalized, we call it a star schema design; if a dimension is normalized, we call it a snowflaked design.
In a star schema only one join establishes the relationship between the fact table and any one of the dimension tables. In a snowflake schema, since there are relationships between the dimension tables, many joins are needed to fetch the data (see the example below).
A star schema optimizes performance by keeping queries simple and providing fast response times; all the information about each level is stored in one row. Snowflake schemas normalize dimensions to eliminate redundancy; the result is more complex queries and reduced query performance.
The star schema is so called because the diagram resembles a star; the snowflake schema is so called because the diagram resembles a snowflake.
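The difference in the number of joins can be seen in a small example (all table and column names are made up):
-- star schema: the fact table joins directly to a denormalized dimension
select d.product_name, sum(f.sales_amount) as total_sales
from sales_fact f
join product_dim d on f.product_key = d.product_key
group by d.product_name;
-- snowflake schema: the dimension is normalized, so an extra join is needed
select c.category_name, sum(f.sales_amount) as total_sales
from sales_fact f
join product_dim d on f.product_key = d.product_key
join category_dim c on d.category_key = c.category_key
group by c.category_name;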
Lookups are mainly used when loading fact tables; for this we use the Append strategy.
The lookup table is displayed in parrot (green) colour.
In a lookup, if the driving table and the lookup table belong to the same logical schema, the lookup can be executed on the source or on the staging area.
If the driving table and the lookup table belong to different logical schemas, the lookup can only be executed on the staging area.
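Conceptually, a lookup resolved on the staging area behaves like an outer join between the driving table and the lookup table (the table and column names below are illustrative):
select o.order_id, o.product_id, p.product_name
from stg_orders o
left outer join product_lkp p   -- lookup table; unmatched rows keep a null product_name
on o.product_id = p.product_id;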
98. Difference between Flow/Static Control and Filters
Flow/Static Control rejects the data during the flow into the target (flow control) or after the data has been loaded into the target (static control); filters reject the data at the source itself.
With Flow/Static Control the rejected records are stored in the error table (E$); with filters the rejected records are not stored anywhere.
Flow/Static Control requires a Check knowledge module; filters do not require any knowledge module.
With Flow/Static Control we define constraints, and a key must be defined to check them; with filters no constraints or keys need to be defined.
Use Flow/Static Control when the rejected data may be needed later; use filters when the unwanted data is not needed.
100. Suppose I have 6 interfaces and the 3rd interface fails while running. How do I run the remaining interfaces?
103. Suppose the source has both unique and duplicate records, and I want to load the unique records into one table and the duplicate records into another?
106. In a package one interface failed. How do we know which interface failed if we have no access to the Operator?
107. How to implement the logic in a procedure so that data deleted on the source side is also removed from the target table?
Use the following query in the Command on Target of the procedure:
Delete from target_table where not exists (select 'X' from source_table where source_table.id = target_table.id);
108. The source has 15 records in total, with 2 updated and 3 newly inserted. At the target side we have to load only the changed and newly inserted records. How?
This is an update-else-insert (incremental) strategy: use the IKM Incremental Update knowledge module, which handles both insert and update operations.
109. How to load the data with one flat file and one Rdbms source using joins?
110. If the source and target are both in Oracle technology, describe the process (interfaces, KMs and models) to achieve this requirement.
Use LKM SQL to SQL (or LKM SQL to Oracle) with IKM Oracle Incremental Update or IKM Oracle Control Append.
111. What do we specify in the XML data server and which parameters are used to connect to an XML file?
112. How to reverse-engineer views / how to load data from views?
In the model, go to the Reverse Engineer tab and select View as the type of object to reverse-engineer.
122. What is the difference between context, logical schema and physical schema?
The most important point is that none of them exists without the others.
Data server – the object that defines the connection to the database; it stores the host, user and password for the database instance.
Physical schema – defines two schemas: the data schema, from which the data is read, and the work schema, which ODI uses as a work area (where the C$ and I$ tables are created).
Context – defines an environment, i.e. a particular instance for code execution, such as Development, Test or Production.
Logical schema – an alias that allows the same code to be used in any environment; the context maps the logical schema to a physical schema.
Security Manager and Topology Manager are used for administering the infrastructure. Designer is used for reverse-engineering metadata and developing projects. Operator is used for scheduling and monitoring runtime operations.
At design time, developers work in a repository to define metadata and business rules. The resulting processing jobs are executed by the agent.
The agent orchestrates the execution by leveraging existing systems: it connects to the available servers and requests them to execute the code. It then stores all return codes and messages in the repository, along with statistics such as the number of records processed and the elapsed time.
Developers release their projects in the form of scenarios that are sent to production.
In production, these scenarios are scheduled and executed by a scheduler agent, which also stores all its information in the repository.
Operators have access to this information and are able to monitor the integration processes in real time. Business users, as well as developers, administrators and operators, can get web-based read access to the repository.
124. What are the ODI design-time components and run-time components?
The ODI design-time components are:
Designer – reverse-engineer metadata, develop projects, release scenarios
Topology Manager – defines the infrastructure of the information system
Security Manager – manages user privileges
The ODI run-time components are:
Operator – operates production and monitors sessions
Scheduler Agent – lightweight, distributed architecture; handles schedules and orchestrates sessions
Limitations:
If the table's primary key is designated by the same column as the surrogate key, then on the Diagram tab of the interface uncheck the Check Not Null check box for the column representing the primary key.
On the Control tab of the interface, in the Constraints panel, set the option value to No for the primary key control.
154. What are the different data warehousing methodologies that you are familiar with?
The popular methodologies are:
1. Ralph Kimball – bottom-up approach
2. Bill Inmon – top-down approach
170. How to display rows p to q from a table?
select * from emp where rownum <= q
minus
select * from emp where rownum < p;
182. What is the default join condition that exists when two tables are joined in ODI?
Natural Join
183. What is a physical agent and a logical agent?
A physical agent corresponds to a single standalone agent or a Java EE agent. A physical agent must have a unique name in the topology.
Similarly to schemas, physical agents having an identical role in different environments can be grouped under the same logical agent. A logical agent is mapped to physical agents through contexts. When starting an execution, you indicate the logical agent and the context, and ODI translates this information into the single physical agent that will receive the execution request.