Guru99 Interview Questions
Info Processing
Analytical Processing
Data Mining
Additive Facts: facts that can be summed across all dimensions.
Semi-additive Facts: facts that can be summed across some dimensions but not others (e.g., account balances cannot be summed over time).
Non-additive Facts: facts that cannot be summed across any dimension (e.g., ratios and percentages).
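The three kinds of additivity can be illustrated with a tiny fact table. This is a minimal sketch over hypothetical account-snapshot data; the column names are invented for illustration only.

```python
# Hypothetical fact rows: (account, month, deposits, balance)
facts = [
    ("A1", "2024-01", 100, 100),
    ("A1", "2024-02", 50, 150),
    ("A2", "2024-01", 200, 200),
    ("A2", "2024-02", 0, 200),
]

# Additive fact: deposits can be summed across every dimension,
# including time.
total_deposits = sum(f[2] for f in facts)

# Semi-additive fact: balance sums across accounts within one month,
# but summing balances across months (100+150+200+200) is meaningless.
jan_balance = sum(f[3] for f in facts if f[1] == "2024-01")

# Non-additive fact: a ratio such as deposits/balance cannot be summed
# at all; it has to be recomputed at each aggregation level.
feb_ratio = total_deposits / sum(f[3] for f in facts if f[1] == "2024-02")
```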
OLAP stands for Online Analytical Processing. An OLAP cube stores large volumes of data in multi-dimensional form for reporting purposes. It consists of facts, called measures, categorized by dimensions.
Tracing level is the amount of data stored in the log files. Tracing levels can be classified into two types: Normal and Verbose. The Normal level logs session information in a summarized manner, while Verbose logs details for each and every row.
Grain fact can be defined as the level at which the fact information is stored. It is also known as fact granularity.
A fact table without measures is known as a factless fact table. It can be used to count or track occurring events. For example, it can record an event such as an employee count in a company.
Round-Robin Partitioning: data is distributed evenly among all partitions, so each partition holds approximately the same number of rows.
Hash Partitioning: a hash function is applied to a partition key, so all rows with the same key value land in the same partition.
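The difference between the two schemes can be sketched as follows. This is a toy illustration, not how a real engine implements it: the row data and the byte-sum hash are invented for the example.

```python
# Sketch of round-robin vs. hash partitioning over 4 partitions.
NUM_PARTITIONS = 4
rows = [{"key": k} for k in ["a", "b", "a", "c", "b", "a"]]

# Round-robin: rows are dealt out in arrival order, so each partition
# gets roughly the same row count regardless of the row's content.
round_robin = [[] for _ in range(NUM_PARTITIONS)]
for i, row in enumerate(rows):
    round_robin[i % NUM_PARTITIONS].append(row)

# Hash partitioning: the partition is derived from the key, so all rows
# with the same key end up together (useful before a keyed aggregation).
def hash_partition(key: str) -> int:
    # sum of byte values is a toy hash; real engines use stronger hashes
    return sum(key.encode()) % NUM_PARTITIONS

hashed = [[] for _ in range(NUM_PARTITIONS)]
for row in rows:
    hashed[hash_partition(row["key"])].append(row)
```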
15) In case you have a non-OLEDB (Object Linking and Embedding Database) source for the lookup, what would you do?
If you have a non-OLEDB source for the lookup, then you have to use a cache to load the data and use it as the source.
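The cache-then-lookup pattern looks roughly like this. A minimal sketch with a hypothetical flat-file source; the column names and in-memory dict stand in for the lookup cache an ETL tool would build.

```python
import csv
import io

# Hypothetical flat-file lookup source: product_id -> product_name.
flat_file = io.StringIO("product_id,product_name\n1,Phone\n2,Laptop\n")

# Build the cache once, up front, from the non-OLEDB source.
cache = {row["product_id"]: row["product_name"]
         for row in csv.DictReader(flat_file)}

# Incoming rows resolve against the cache instead of the source itself.
incoming = [("1", 100), ("2", 250), ("9", 30)]
resolved = [(cache.get(pid, "UNKNOWN"), amt) for pid, amt in incoming]
```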
16) In what case do you use dynamic cache and static cache in connected and unconnected transformations?
Dynamic cache is used when you have to update a master table or load Slowly Changing Dimensions (SCD) Type 1.
Static cache is used for flat files.
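The contrast between the two caches can be sketched with plain dictionaries. This is a toy model with hypothetical customer rows, not the actual Informatica mechanism: the point is that a dynamic cache changes as rows flow through, which is what an SCD Type 1 load needs.

```python
# Hypothetical master table and incoming change rows.
target = {"C1": "Alice", "C2": "Bob"}
source = [("C2", "Robert"), ("C3", "Carol")]

# Static cache: a snapshot taken once; it never changes mid-session.
static_cache = dict(target)

# Dynamic cache: updated as rows are processed, so later rows in the
# same run see earlier inserts and updates.
dynamic_cache = dict(target)
for key, name in source:
    # SCD Type 1: overwrite the existing value in place, or insert.
    dynamic_cache[key] = name
```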
17) Explain what are the differences between Unconnected and Connected lookup?
A Connected lookup is part of the mapping data flow: it receives input directly from the pipeline and can return multiple columns. An Unconnected lookup is called from another transformation as an expression (:LKP) and returns a single output value.
18) Explain what is a data source view?
A data source view allows you to define the relational schema which will be used in the Analysis Services databases. Dimensions and cubes are created from data source views rather than directly from data source objects.
19) Explain what is the difference between OLAP tools and ETL tools?
An ETL tool is meant for the extraction of data from legacy systems and loading it into a specified database, with some process of cleansing the data.
OLAP tools, by contrast, are meant for reporting; in OLAP, data is available in a multi-dimensional model.
20) Explain how you can extract SAP data using Informatica?
With the Power Connect option, you extract SAP data using Informatica:
Install and configure the PowerConnect tool.
Import the source into the Source Analyzer; PowerConnect acts as a gateway between Informatica and SAP.
Generate the ABAP code for the mapping; only then can Informatica pull data from SAP.
Power Connect is used to connect to and import sources from external systems.
21) Mention what is the difference between Power Mart and Power Center?
Power Center: designed to process huge volumes of data; it supports ERP sources such as SAP, PeopleSoft, etc.
Power Mart: designed to process low volumes of data; it does not support ERP sources.
22) Explain what staging area is and what is the purpose of a staging area?
Data staging is an area where you hold data temporarily on the data warehouse server. Data staging includes steps such as source data extraction, data transformation and cleansing, and surrogate key assignment.
23) Explain what is BUS schema?
A BUS schema is used to identify the common dimensions across the various business processes. It comes with conformed dimensions along with a standardized definition of information.
24) Explain what is data purging?
Data purging is the process of deleting data from a data warehouse. It deletes junk data such as rows with null values or extra spaces.
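A purge pass of this kind can be sketched in a few lines. This is a minimal illustration over hypothetical staged rows; real purging would run as SQL DELETEs against the warehouse.

```python
# Drop rows carrying null values and trim extra spaces from the rest.
rows = [
    {"id": 1, "name": "  Ann "},
    {"id": 2, "name": None},      # junk: null value, will be purged
    {"id": 3, "name": "Bob"},
]

def purge(rows):
    cleaned = []
    for row in rows:
        if any(v is None for v in row.values()):
            continue  # delete rows with null values
        cleaned.append({k: v.strip() if isinstance(v, str) else v
                        for k, v in row.items()})
    return cleaned

purged = purge(rows)
```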
25) Explain what are schema objects?
Schema objects are the logical structures that directly refer to the database's data. Schema objects include tables, views, sequences, synonyms, indexes, clusters, functions, packages, and database links.
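A few of these schema objects can be created and listed with SQLite as a stand-in database. The object names below are hypothetical, and SQLite supports only a subset of the objects listed above (no sequences, clusters, packages, or database links).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL);
CREATE VIEW big_orders AS SELECT * FROM orders WHERE amount > 100;
CREATE INDEX idx_orders_amount ON orders (amount);
""")

# The catalog lists the schema objects defined on the database.
objects = {(r[0], r[1]) for r in conn.execute(
    "SELECT type, name FROM sqlite_master")}
```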