SAP BW Modelling, Extraction and Reporting
Tabs:
Data Source:
1. General Information
2. Extraction
3. Proposal
4. Fields
5. Preview
Info Objects:
1. General
2. Business Explorer
3. Master Data/Texts
4. Hierarchy
5. Attribute
6. Compounding
DTP:
1. Extraction
2. Update
3. Execute
DSO (Manage):
1. Contents
2. Requests
3. Reconstruction
Cube:
1. Contents
2. Performance
3. Requests
4. Rollup
5. Collapse/Compression
6. Reconstruction
General Info: Give short, medium, and long descriptions.
Extraction: Name of the file,
Header rows to be ignored,
Data format (CSV),
Data separator,
Escape sign.
Proposal: Load example data.
Fields: Field --- Description --- Data type --- Length.
Preview: Read preview data (10,000 records).
BW/BI: Data warehousing is a methodology; the tool or application we use to implement it
is called BW/BI.
VERSIONS:
1999: BIW1.2
2000: BIW3.0A
2003: BIW3.1C
2004: BW3.5
2006: BI7.0
2010: BW7.3
SAP R/3 Applications: SD MM PP PM QM WM FI CO
NAMING CONVENTION:
0 ----> SAP Standard Delivery Objects.
Z ----> Custom Defined Objects (Developer Creates).
Y ----> Temporary/Testing Objects.
B ----> Super User.
Technical names of Info Objects are alphanumeric
(no special characters are supported).
Length is 3 to 9 characters.
Data dictionary (table) naming convention:
Standard Objects table: /BI0/<XXXXXXXX>
Custom Object Table: /BIC/<XXXXXXXX>
Characteristics:
Data Types: CHAR (max length 60), NUMC (max length 60),
DATS (length 8), TIMS (length 6).
Key figures:
Data Types: AMOUNT, QUANTITY, NUMBER,
INTEGER, DATE, TIME.
Lowercase Letters Check box:
If this check box is selected: both lower- and uppercase letters are allowed.
If it is not selected: only uppercase letters are allowed,
for ex: when the field is a primary key.
Conversion Routine: Converts values from the source format to the database format.
ALPHA: Numeric values are prefixed with leading zeros up to the field length.
Ex: 9999 ----> 0000009999
C009999 ----> C009999 (alphanumeric values are unchanged).
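A minimal sketch of the same conversion in ABAP (the variable name is illustrative; CONVERSION_EXIT_ALPHA_INPUT is the standard function module behind the ALPHA routine):
* Pad a numeric value with leading zeros up to the field length.
DATA lv_material(10) TYPE c VALUE '9999'.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = lv_material
  IMPORTING
    output = lv_material.
* lv_material now holds '0000009999'; 'C009999' would come back unchanged.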
The system generates a minimum of 3 database tables:
1. Master Data table: P table, maintains the time-independent master data.
/BIC/P<Info Obj Name>
Ex: /BIC/PZMATERIAL
2. SID table: S table, maintains the SIDs of the characteristic.
/BIC/S<Info Obj Name>
Ex: /BIC/SZMATERIAL
3. Text table: T table, maintains the text data.
/BIC/T<Info Obj Name>
Ex: /BIC/TZMATERIAL
4. Q table, maintains the time-dependent master data (generated only if time-dependent
attributes exist).
/BIC/Q<Info Obj Name>
Ex: /BIC/QZMATERIAL
IP UPDATE MODES:
1. FULL UPDATE
2. INITIAL UPDATE
3. DELTA UPDATE
FULL UPDATE: Extracts all the data from the source system with respect to the data selection.
Flat files support only FULL UPDATE.
INITIAL UPDATE: Extracts all the data from the source system and enables the delta load
(FULL UPDATE + delta queue).
DELTA UPDATE: Extracts only new records and changed/modified records.
Note: If you run the initialization load with some selections, the delta load also runs with the
same selections only (mandatory).
Info Provider: An Info Provider is an object on which we can do reporting.
Info Area: It's like a folder and main area, which maintains all the data targets, i.e. Cube,
DSO, ...
Application Component: It's like a folder; it maintains Data Sources & Info Sources.
Catalogs: They are like folders; they maintain characteristics & key figures.
Data Targets: Data targets are objects which store data physically.
Data Source: A Data Source is an object which is related to the source system.
Info Pack: An Info Package is an object used to load data up to the data targets in BW 3.5
& up to the PSA in BI 7.0.
Transformations: Converting the data from one state to another state; we use
transformations between source and targets.
DTP: This object was newly introduced in BI 7.0; it is used to load data from the PSA to the
data targets, and also to update data from one data target to another data target. It
also enables error handling of records.
Re-Modelling: T_CODE: RSMRT
http://help.sap.com/saphelp_nw04s/helpdata/en/58/85e5414f070640e10000000a1550b0/
frameset.htm
If a cube is already loaded, we can change the structure of the Info Cube without losing the
cube data.
If the Info Cube contains a few compressed and a few uncompressed requests, you need to
compress all requests before you can start the remodelling. Remodelling cannot be
performed on a partially compressed Info Cube.
Add characteristic
Delete characteristic
Replace characteristic
Filled By (Add/Replace Characteristic):
1. Attribute of another characteristic within the same dimension.
2. From another characteristic within the same dimension (1:1 mapping with the
characteristic).
3. Constant.
4. Customer exit.
Add Key figure
Delete Key figure
Replace Key figure
Filled By (Key Figures):
Add Key Figure supports:
1. Constant
2. Customer Exit
Replace Key Figure supports only:
1. Customer Exit
Points to remember whenever we do remodelling:
1. Before remodelling, make sure that any process chain loading data into that Info Cube
is stopped.
2. Check whether additional database space is available in the cube for the
characteristics or key figures you are adding.
3. After remodelling, check that the transformations are all working fine, then
reactivate the process chains and schedule the data.
Star & Extended Star Schema:
Structure-wise both are the same: a fact table at the centre, surrounded by (linked to)
dimension tables. The differences:
1. In the Star Schema we have a total of 16 dimensions [3 SAP-defined & 13 user-defined] &
1 fact table; the fact table has 255 columns [233 for key figures, 16 for dimensions
& 6 SAP-defined].
2. In the Star Schema, 1 dimension is assigned to 1 characteristic.
3. Here master data is maintained inside the dimension table.
4. In the Extended Star Schema, 1 dimension is assigned up to 248 characteristics.
5. Here master data is maintained outside the Info Cube.
6. The new concept here is the SID; there is no SID table in the Star Schema. SID means
Surrogate ID.
7. In the Extended Star Schema, master data is linked to the SID table.
8. The SID table generates a numeric value for every master data record.
9. In the Extended Star Schema, the SID table is linked to the dimension table.
10. The dimension table generates numeric values (DIM IDs); these dimension table numeric
values are linked to the fact table.
11. So in the Extended Star Schema, master data is linked to the SID table, the SID table is
linked to the dimension table, and the dimension table is linked to the fact table.
12. SID table: generates the numeric values for every master data record; the SID table is
the interface between the master data tables and the dimension tables.
[Diagram: Star Schema vs. Extended Star Schema]
Targets:
1. Info Object
2. Info Cube
3. Real Time Info Cube
4. Data Store Object
5. Multi Provider
6. Virtual Provider
7. Info Set
Info Cube
An Info Cube is one of the physical data targets as well as an Info Provider.
From a reporting point of view, an Info Cube describes a self-contained dataset; this
dataset can be evaluated in a BEx query.
An Info Cube maintains historical data, i.e. older data.
It maintains data physically.
A cube maintains summarised data.
Cube functionality is additive.
Types of cubes:
1. Standard Info Cube (Basic Info Cube)
2. Real Time Info Cube (Transactional)
3. Virtual Provider (Remote)
4. Aggregate Cube
An Info Cube has 16 dimensions:
3 are predefined/SAP-defined:
1. UNIT Dim.
2. TIME Dim.
3. DATA PACKAGE Dim.
13 are user/customer-defined.
Here one dimension = 248 characteristics (7.0).
In 3.5, one dimension = 1 characteristic.
A cube has one FACT table.
One FACT table = 255 columns:
16 for dimensions,
6 for predefined/SAP-defined,
233 for key figures.
Characteristics are assigned to dimensions & key figures are assigned to the fact table.
For every Info Cube we have to assign at least one time characteristic.
Time characteristics are provided by SAP; we can't create time characteristics.
1. Calendar Day
2. Calendar Month
3. Calendar Year
4. Quarter
5. Half Year
6. Week Day
7. Calendar Year/Week
8. Calendar Year/Month
9. Calendar Year/Quarter
10. Fiscal Year
11. Fiscal Year Variant
12. Posting Period
13. Fiscal Year/Period
VIRTUAL Providers
Real-Time Data Target can be loaded with data; planning is not allowed.
Real-Time Data Target can be planned with data; loading is not allowed.
Diff B/W CUBE & VIRTUAL CUBE
CUBE:
1. Maintains physical data.
2. Uses a standard DTP.
3. Generates SIDs.
VIRTUAL CUBE:
1. Maintains data logically.
2. Uses a direct access DTP.
3. Does not generate any SIDs.
If you want to convert a STD CUBE to a REAL-TIME CUBE, or a REAL-TIME CUBE to a STD CUBE,
use the program SAP_CONVERT_NORMAL_TRANS (run via SE38).
DSO: A DSO is a data storage area for cleaned-up & consolidated master or
transaction data at document level (detailed level).
1. The DSO is one of the data targets and maintains data physically.
2. A DSO maintains operational data, i.e. the latest data.
3. It maintains data at a detailed level.
4. The DSO has OVERWRITE functionality.
5. The key field values must be the same; only then is the data overwritten, otherwise a
new record is inserted.
6. Characteristics are assigned to key fields & key figures are assigned to data fields.
7. A field that acts as a PK goes into the key fields & a field that acts as a non-PK goes into
the data fields.
8. A maximum of 16 key fields and 749 data fields can be assigned.
There are 3 types of DSOs:
1. Standard DSO (Standard).
2. Direct Update DSO (Transactional).
3. Write-Optimized DSO.
A Standard DSO has 3 tables:
1. New Table (Activation Queue).
2. Active Data Table.
3. Change Log Table.
The Direct Update DSO & Write-Optimized DSO have only one table, i.e. the Active Data Table.
Standard DSO:
1. Use a Standard DSO for operational reporting and for consolidating
data.
2. Data is provided using a Data Transfer Process.
3. SID values can be generated.
4. Data is available for reporting after activation.
Data Base Structure:
1. New Data Table (/BIC/A<DSO NAME>40)
2. Active Data Table (/BIC/A<DSO NAME>00)
3. Change Log table (/BIC/B<SYS GEN.NUMBER>)
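Ex (hypothetical DSO named ZSALES): /BIC/AZSALES40 (new data), /BIC/AZSALES00 (active
data); the change log name, e.g. /BIC/B0000123000, is a system-generated number.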
Write-Optimized DSO:
In a WODSO, instead of key fields & data fields there is a technical key (generated) &
a semantic key.
The technical keys in the WODSO are:
1. RECORD NUMBER
2. REQUEST NUMBER
3. DATA PACKET NUMBER
Data Base Structure:
Active Data Table (/BIC/A<DSO NAME>00).
Full/Init & Delta loads, reporting & further data targets all work from the Active Data Table.
Whenever you load data into a DSO, it first goes to the New Table. Then you
activate; once activation is done, the data moves from the New Table to the Active Data
Table & the Change Log Table.
Once the data has moved to the Active Data Table & Change Log Table, it is no longer in
the New Table.
Key fields can be dragged & dropped into the data fields, but data fields can't be dragged &
dropped into the key fields.
When a DSO already contains data, we can add a data field, but we can't add a key field.
To activate directly in the DSO: right-click ---> Activate Data.
Images are maintained under 0RECORDMODE in the Change Log Table, not in the New &
Active Data Tables:
1. NEW Image ----- N
2. BEFORE Image ----- X
3. AFTER Image ----- ' ' (blank)
4. REVERSE Image ----- R
5. ADD Image ----- A
6. DELETE Image ----- D
7. UPDATE Image ----- Y
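Ex (illustrative, not from the original notes): if the quantity of order 4711 changes from 10 to 20,
the change log holds a before image with the key figure negated and an after image with the new value:
4711 ----- X ----- -10
4711 ----- ' ' ----- 20
A delta load to a cube then adds -10 + 20 = +10, so the cube is corrected additively.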
Reporting on a DSO picks the data from the Active Data Table.
If you want to load data from the DSO to further data targets, the data moves from the
Change Log.
A FULL/INIT load moves data from the Active Data Table & a DELTA load moves data
from the Change Log Table.
Only once the data is activated are the SIDs generated.
Whenever we extract data from a flat file, we can use a DSO for data staging.
Reporting on a DSO is faster compared to reporting on a CUBE.
DSO Settings:
1. SID Generation upon Activation.
2. Unique Data Records.
3. Set Quality Status to OK Automatically.
4. Activate Data Automatically.
5. Update Data Automatically.
1. SID Generation upon Activation:
Each master data record generates one SID value; these SIDs are stored in a
separate SID table. The SAP system checks whether every master data record has a SID or not.
This check box is selected automatically.
2. Unique Data Records:
When you select this check box, duplicate records are not accepted; if any duplicate
records come in, you will get an error message.
3. Set Quality Status to OK Automatically:
When data is loaded into the DSO it arrives in the New Data Table with the status
yellow; we can change the status from yellow to green and then activate. If we select this
check box, the status is changed from yellow to green automatically.
4. Activate Data Automatically:
If we select this check box, the data is activated automatically in the ODS.
5. Update Data Automatically:
If we select this check box, the data is updated automatically from the DSO to the other data targets.
Diff B/W CUBE & DSO:
1. The cube has ABR functionality; the DSO has AIE functionality.
9. The cube database structure consists of a fact table & dimension tables; the DSO database
structure consists of the New Table, Active Data Table & Change Log Table.
Data Mart:
Data moves from one data target to another data target.
Here SAP BI acts as a source as well as a target.
To move data from DSO to DSO (further data target):
Go to the target DTP --> Extraction Tab --> Extraction From -->
1. Active Table.
2. Data Store Change Log.
ERROR STACK:
a. It is a temporary storage area; it maintains all the error records.
b. These error records are not updated through the normal DTP.
c. These error records are updated through the error DTP.
Multi Providers:
1. A MultiProvider is a type of Info Provider that contains data from a number of Info
Providers and provides data for reporting. It can be built on:
Info Objects, DSO Objects (Standard only), Info Cubes, Info Sets & Aggregation Levels.
2. It improves query performance.
3. A MultiProvider, like an Info Set, doesn't contain data physically; it contains data
logically.
4. Don't use Direct Update & Write-Optimized DSOs in a MultiProvider.
5. If you have to use a Direct Update DSO in a MultiProvider, first assign it to an
Info Set, then assign that Info Set to the MultiProvider.
6. Do not use more than one non-cumulative Info Cube, because this could lead to
incorrect query results.
7. A MultiProvider can also be built on a single target.
Info Set:
An Info Set is used to JOIN the data from different targets.
At least one object should be the same in all of them.
It provides intersection data.
Don't use more than 10 Info Providers in one Info Set. It is better to create multiple
Info Sets depending on reporting needs.
When reporting on an Info Set, we get one extra key figure, 'Record Count'.
Record Count: how many records have been combined.
The join type of an Info Set is Inner Join (intersection) or Left Outer Join.
Inner Join: brings only the common records.
Left Outer Join: brings the common values & the additional values from the left operand.
Temporal Join: a join is called temporal if at least one member is time-dependent.
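Ex (illustrative): DSO1 contains customers C1 & C2, DSO2 contains C2 & C3. An inner join
returns only C2 (the common value); a left outer join with DSO1 as the left operand returns
C1 & C2, with the DSO2 fields blank for C1.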
What are the inputs of an Info Set?
In BW 3.x an Info Set can join data from ODSs & Info Objects only.
In BI 7.x an Info Set can join data from ODSs, Info Objects & at most 2 CUBEs.
What is the diff B/W MP & Info Set:
1. Info Sets are for JOIN & MPs are for UNION.
2. MPs are for multidimensional analysis & Info Sets for tabular-format analysis.
Performance Tuning:
Whenever we execute a query, the query display might be slow: Query Performance.
When we extract data from the source system to BI, the loading might be slow:
Loading Performance.
Query Performance: Whenever we execute a query it triggers the OLAP processor, which
first checks whether the data is available in the OLAP cache; if the data is not in the OLAP
cache, it goes to the cube & executes the BEx report.
How to improve Query Performance:
1. Aggregates
2. Compression
3. Create Index
4. Partitioning
-----> http://explore-sapbw.blogspot.in/2012/02/use-of-aggregatescompression-roll-up.html
How to improve Load Performance:
1. Delete the index before loading.
2. Prefer delta updates.
3. Load master data before transaction data,
because the SIDs are generated for the master data first, and the cube DIM IDs are generated
based on them.
4. Create Line Item Dimensions:
since the fact table refers to the SID directly, no DIM IDs need to be generated, which
improves the loading performance.
Deleting an index improves the load performance.
Creating an index improves the query performance.
Aggregates: T_code: RSDDV
Whenever you create aggregates, 2 options come up:
1. Generate Proposal: the system proposes which characteristics take more time at query
execution time; those characteristics are displayed, and you select and assign them to the
aggregate.
2. Create Yourself: the system doesn't propose anything; you select and assign the
characteristics to the aggregate yourself.
Different options for aggregates: SWITCH OFF/ON,
ACTIVATE/DEACTIVATE,
DELETE.
SWITCH OFF/ON:
If you don't use an aggregate, simply SWITCH it OFF.
If you SWITCH OFF an aggregate, the structure is available & the data is also available, but
the data will not be used.
ON ------> Data is available to the query.
OFF -----> Data is not available to the query.
If the aggregates are SWITCHED OFF and you execute the query, it goes directly to the cube
to search for the data.
Aggregates that are SWITCHED OFF don't provide data to the query, but you can still load
data from the cube into the aggregates without a problem.
ACTIVATE/DEACTIVATE:
If you DEACTIVATE a particular aggregate, the structure stays available but the data is deleted.
DELETE:
Whenever you delete an aggregate, the structure as well as the data is deleted.
ROLL-UP:
Data moves from the cube to the aggregates by using roll-up, based on request IDs.
Without request IDs we can't move data from the cube to the aggregates.
If you want to move data from the cube to the aggregates, it has to be moved before
compression of the cube data.
We can compress the aggregates without compressing the cube data, but if you compress the
cube, it compresses the aggregates as well.
Once the aggregate is activated, compression is done automatically if we select the
check box ----> Compress after Roll-Up in the Roll-Up tab.
Compression: (http://sapbi-allaboutcompression.blogspot.in/)
Zero Elimination: if you select this check box, then after compressing the data, any records
whose key figure values are all zero are deleted.
The Zero Elimination check box is on the Collapse tab.
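Ex (illustrative): a compressed row C1 / M1 / Rev 0 / Qty 0, where every key figure is zero,
is deleted from the E table; a row with at least one non-zero key figure is kept.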
Selective Deletion:
Suppose we compress the cube data with no errors, and after compression, during the testing or
reporting process, we find some errors. After compression we cannot delete the data based on
request IDs, so in that case we use Selective Deletion to delete those records, and after that we
once again schedule the data from the source system with the appropriate selections.
What is Reverse Posting?
Reverse Posting is a concept of SAP BW 3.x, not SAP BI 7.0.
Once a cube is compressed, you cannot alter the information. This creates a big problem when
the compressed data has some wrong entries. To delete these wrong entries SAP provides a
way called "Reverse Posting".
Reverse Posting is possible only if the data is loaded into the cube via the PSA; if the data is
loaded to an ODS, reverse posting is not possible.
F table:
Req No Cno Mno Calday Curr Unit Rev Qty
161 C1 M1 20080101 INR EA 100 20
161 C1 M1 20080102 USD EA 200 40
162 C1 M1 20080101 INR EA 40 60
162 C1 M1 20080102 USD EA 80 90
163 C1 M1 20080102 INR EA 200 400
E table:
Req No Cno Mno Calday Curr Unit Rev Qty
0 C1 M1 20080101 INR EA 140 80
0 C1 M1 20080102 USD EA 280 130
0 C1 M1 20080102 INR EA 200 400
Ex: In the Collapse tab, if you give request 377 & collapse, that particular request and all
requests below it are collapsed.
How to see the data in the /F & /E fact tables:
/F -----> Cube ----> Manage ----> Contents Tab ----> FACT Table.
/E ----> SE11 ----> Database Table ----> /BIC/EIBM_CUBE.
Contents: Displays all the characteristics in the cube.
Performance: Maintains all the indexes.
Indexes are of 2 types: 1. Primary Index.
2. Secondary Index.
Primary Index: The primary index is maintained by the system by default; we can't create
or delete it.
Secondary Index: Secondary indexes are the ones we can create.
These are of 2 types: 1. Bitmap.
2. B-Tree.
Bitmap: Each record is represented by a binary value (e.g. 10110100); the data is retrieved
based on the binary value.
B-Tree: The data is maintained in the form of a parent & child relationship.
Every parent has two children. The value of the left child is less than the parent; the value of
the right child is greater than the parent. It divides the data like a tree.
SE14 ----> F table ----> Edit ----> Indexes; 0 indicates the primary index, all others are secondary.
Bitmap is faster when the number of customers is low (low cardinality).
B-Tree is faster when the number of customers is high (high cardinality).
Line Item Dimension:
If the size of the dimension table is more than 20% of the size of the fact table, then
create that dimension as a Line Item Dimension.
By making the dimension a Line Item Dimension, the SID values are directly linked to the
fact table.
Only one characteristic can be assigned to a Line Item Dimension.
Ex: Sales Order No, Accounting Doc No, Billing Doc No.
Instead of the fact table referring to the dimension table, the fact table refers to the SID when
a Line Item Dimension is assigned.
If the cardinality of the dimension table is more than 20% of the fact table, a Line Item
Dimension is assigned.
ADV: When a Line Item Dimension is assigned, the fact table refers to the SID;
hence the number of joins is reduced, which improves the query performance.
In what scenario is the dimension table optional?
When you create a dimension as a Line Item Dimension; in this scenario the dimension table
is optional and the SID table is mandatory.
High Cardinality:
If the size of the dimension table is more than 10% and less than 20% of the size of
the fact table, then create that dimension as a High Cardinality dimension.
What happens when you create a dimension as a High Cardinality dimension?
The system generates a B-Tree index instead of a Bitmap index.
How should we find the cardinality?
RSDEW_INFOCUBE_DESIGNS gives the cardinality of the fact table & dimension tables:
SE37 ----> RSDEW_INFOCUBE_DESIGNS ----> Single Test (F8) ----> Info Cube name ----> Execute.
Check the table sizes & percentages.
Partitioning:
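In brief (the exact menu path may vary by release): partitioning splits the E fact table on a
time characteristic (0CALMONTH or 0FISCPER), set via Extras ---> DB Performance --->
Partitioning before the cube is activated; a query restricted to a period then reads only the
matching partitions.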
RDA (Real Time Data Acquisition): Only in 7.x
1. Whenever you want to extract up-to-the-minute data (every 5 or 10 minutes) from your
OLTP system to the BW system physically, use Real Time Data Acquisition.
2. If you want to extract data from a source system using Real Time Data Acquisition, the
data source must support the Delta update mode; if it does not support Delta update,
we can't extract data from that data source using Real Time Data Acquisition.
3. Real Time Data Acquisition always loads the data to a DSO.
4. We have 2 types of DTPs to upload the data into the DSO:
a. Standard DTP
b. Real Time Data DTP (RDA DTP)
Only with an RDA DTP can we load data into the DSO.
In Real Time Data Acquisition we do not execute the IP ourselves; the IP is scheduled by the
DAEMON, & the RDA DTP is also scheduled by the DAEMON.
How to create:
1. Create the Data Source and activate it in BI.
2. Create a DSO.
3. Create transformations.
4. Create an IP for delta.
Next: go to the DSO ----> right-click ----> Create DTP.
Once the init without data transfer is complete, set Delta Update.
Next go to the Schedule tab and click on Assign.
Next click on DAEMON and give the short description, then save.
Give the DAEMON ID.
Attribute: A characteristic Info Object that we add into a newly created characteristic
Info Object is called an 'Attribute'.
Types of Attributes:
1. Display Attribute.
2. Navigational Attribute.
3. Exclusive Attribute.
4. Time Dependent Attribute.
5. Time Dependent Navigational Attribute.
6. Compounding Attribute.
7. Transitive Attribute.
Display Attribute:
Used only for display purposes in the report.
Navigational Attribute:
1. If you create an attribute as a navigational attribute, it behaves like a display attribute
as well as a characteristic, if it is selected at the Info Provider level or cube level.
2. A navigational attribute behaves like a regular characteristic in the report.
3. It is stored in the attribute table ---> /P (master data table).
4. The naming convention of a navigational attribute is Main Characteristic Name_Attribute
Name.
In what scenario do we use a navigational attribute?
Ans: If you want to make an attribute behave like a display attribute as well as a
characteristic, in that scenario we use the attribute type Navigational Attribute.
Diff B/W Display & Navigational Attributes:
If you select a navigational attribute at the Info Provider level it behaves like a
characteristic, but a display attribute does not behave like a characteristic.
A display attribute is used only for display purposes in the report, whereas a navigational
attribute is used for the drill-down option in the report.
Exclusive Attribute:
a) If the 'Attribute Only' check box is unselected, it becomes an exclusive attribute.
b) It is stored in the attribute table ---> /P (master data table).
c) An exclusive attribute can be used as a characteristic in the cube.
Time Dependent Attribute:
Whenever data has to be maintained for different time intervals with different values, a
time-dependent attribute is used.
a) It is stored in the /Q table (time-dependent master data table).
b) 2 new fields (Date To & Date From) are added by default.
c) Date To acts as part of the primary key.
d) 'N' number of records can be maintained with time intervals; Date To is part of the
primary key.
Time Dependent Navigational Attribute:
It is both time-dependent & navigational.
a. It is stored in the /Q table.
b. Enables the drill-down facility.
c. Displays w.r.t. the key date in the query properties.
Compounding Attribute:
When the value of one Info Object is completely dependent on the value of another Info
Object, we go for a compounding attribute.
Ex: Company Code ---> Controlling Area, Storage Loc ---> Plant,
GL Account ---> Chart of Accounts.
a. The compounding attribute is also called the 'Superior' Info Object.
b. If a compounding attribute is used, it degrades the load performance.
Transitive Attribute:
a. 2nd level of navigational attribute.
b. A navigational attribute that itself has one more attribute attached to it.
Ex: Sales Emp -----> Sales Office (Nav. Attr) -----> Location.
BI Content Installation: T_CODE: RSORBCT
BI Content Versions:
1. N - New
2. A - Active
3. M - Modified
4. D - Delivered
All objects are given by SAP in the Delivered version.
If an object is in the Delivered version it can't be used; it should be converted to the Active
version.
Is it possible to install only a cube from Delivered to Active?
Ans: No, a cube always contains characteristics & key figures.
Is it possible to install only a query from Delivered to Active?
Ans: No, queries always depend on Info Providers.
Step 2: Grouping
1. Only Necessary Objects
2. In Data Flow Before
3. In Data Flow Afterwards
4. In Data Flow Before & After
Only Necessary Objects:
When you select Only Necessary Objects, only the minimum required objects are collected.
Ex: Info Cube: Info Area + Info Objects + Info Cube
In Data Flow Before:
Also collects whatever objects submit data to the collected objects.
Ex: Info Cube: Info Area + Info Objects + Info Cube + Data Source +
Transformations + DTP + IP.
In Data Flow After:
Also collects whatever objects obtain data from the collected objects.
Ex: Info Cube: Info Area + Info Objects + Info Cube + Queries + Workbooks +
Web Templates + Roles + Reports.
In Data Flow Before & After:
Collects whatever objects submit data to the collected objects as well as whatever objects
obtain data from the collected objects.
Ex: Info Cube: Info Area + Info Objects + Info Cube + Data Source + Transformations +
DTP + IP + Queries + Workbooks + Web Templates + Roles + Reports.
Step 5: Install
1. Simulate Installation
2. Install
3. Install in Background
4. Installation and Transport
Simulate Installation: checks logically for errors without installing anything; i.e. it checks
whether installing physically would create any errors.
Install: starts installing in the foreground.
Install in Background: starts installing as a background process.
Installation and Transport: installs and also asks for a transport request.
Display:
1. List
2. Hierarchy
MATCH(X) & COPY: use this option to avoid overwriting previous changes;
it merges your changes with the new changes.
Ex: If an object is already in the Active version and you install the same object again, select
this check box; otherwise the existing object is overwritten.
Cumulative & Non-Cumulative Key Figures:
Ex: Company ----- Code ----- Year ----- No. of Employees
Reliance ----- RELI ----- 2006 ----- 3000
Reliance ----- RELI ----- 2007 ----- 5000
Non-Cumulative (latest value):
Company Code ----- No. of Employees
RELI ----- 5000
Cumulative:
Cumulative means the value of the key figure is aggregated over time.
It is used to summarise data.
Company Code ----- No. of Employees
RELI ----- 8000
Go to Info Object --> No. of Employees --> right-click --> Change --> Aggregation Tab -->
Exception Aggregation (Last Value) ----> Activate.
Non-Cumulative with Inflow & Outflow:
Coming in -----> Inflow; Going out ------> Outflow.
Ex (inflow/outflow movements):
01.08.2008 ----- SL1 ----- M1 ----- 50
01.08.2008 ----- SL2 ----- M2 ----- 90
02.08.2008 ----- SL1 ----- M1 ----- 40
02.08.2008 ----- SL2 ----- M2 ----- 60
17.08.2008 ----- SL1 ----- M1 ----- 30
17.08.2008 ----- SL2 ----- M2 ----- 50
Stock Overview Report: Opening Stock + Inflow – Outflow.
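Ex (illustrative figures): opening stock 100 + inflow 50 – outflow 30 = stock 120 as of the key date.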
Historical movements 01.08.2008 to 16.08.2008 --> Init.
The initial stock is loaded only once.
Latest movements from 17.08.2008 onwards -----> Delta.
Movements are loaded daily.
Routines:
(http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/6090a621-c170-
2910-c1ab-d9203321ee19?QuickLink=index&overridelayout=true)
1. Start Routine.
2. End Routine.
3. Expert Routine.
4. Characteristic Routine / Field Routine
Start Routine:
The start routine is available within the transformation and is triggered before the
transformation rules. Generally the start routine is used for filtering records or fetching
global data. The start routine works on the same structure as that of the source.
Example:
I need to write a start routine. My data source has three columns:
1. Rec type 2. Key code 3. Text
Rec type--------Key code-----Text
Region-----------ASA-----------Asia
Region-----------USA-----------America
Region-----------AUS-----------Australia
Country-----------IND-----------India
Country-----------SAF-----------South Africa
My requirement: I need to extract the data only if Rec type = "REGION".
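A minimal sketch of the matching start routine body (in a BI 7.0 transformation this goes into
the generated start_routine method, which receives SOURCE_PACKAGE as a changing parameter;
the field name RECTYPE is an assumption):
* Drop every record whose record type is not REGION before the rules run.
DELETE SOURCE_PACKAGE WHERE rectype <> 'REGION'.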
End Routine:
The end routine is available within the transformation and is triggered after the
transformation rules. Generally the end routine is used for updating data based on existing
data. The end routine works on the same structure as that of the target.
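For illustration only, a sketch of an end routine that post-processes the target rows (the field
name REGION and the default value are assumptions; RESULT_PACKAGE and <RESULT_FIELDS> are
provided by the generated end_routine method):
* Fill a default value where the transformation rules left the field empty.
LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
  IF <result_fields>-region IS INITIAL.
    <result_fields>-region = 'UNKNOWN'.
  ENDIF.
ENDLOOP.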
Expert Routine:
To create an expert routine, go to the Edit menu and select Expert Routine. The expert
routine triggers without any transformation rules; all existing rules are deleted once you
create an expert routine. Generally the expert routine is used for fully customised rules.
SOURCE_PACKAGE has the same structure as the source of the transformation.
RESULT_PACKAGE has the same structure as the target object.
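A skeletal sketch of an expert routine (the generated row types _ty_s_SC_1 / _ty_s_TG_1 and the
work-area name are assumptions; because no rules exist, the routine itself must fill
RESULT_PACKAGE):
* Copy each source row to the target, applying any custom logic on the way.
FIELD-SYMBOLS: <source_fields> TYPE _ty_s_sc_1.
DATA: ls_result TYPE _ty_s_tg_1.
LOOP AT SOURCE_PACKAGE ASSIGNING <source_fields>.
  CLEAR ls_result.
  MOVE-CORRESPONDING <source_fields> TO ls_result.
  APPEND ls_result TO RESULT_PACKAGE.
ENDLOOP.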
Rule Types:
1. Constant
2. Direct Assignment
3. Formula
4. Initial (Characteristics)
5. Read Master Data
6. Routine
7. No Transformation (Key Figures only)
Constant: Irrespective of the data coming from the source system, if you want to make some
field value constant, we use the constant rule type (a fixed value for all records).
Direct Assignment: Maps the value of the source field to the target field.
Formula: When we want to transfer the data from the data source to the target by
implementing simple logic, we use the formula rule type.
Routine: When we want to transfer the data from the data source to the target by
implementing complex logic, we use the routine rule type.
Creating Currency Translation Type
We need to migrate the transfer rules first, before migrating the data source; this copies
all transformation mappings as well as routines from the transfer rule to the newly created
transformation. If you migrate the data source first, you will lose the transfer rules.
Whenever we load master data we have to perform the Attribute Change Run; the master
data tables then get refreshed, and if any navigational attributes or hierarchies are used in
aggregates, that data is refreshed too, since aggregates must give the present truth.
DTP & IP:
A DTP follows a one-to-one mechanism, i.e. there is one DTP per data target, whereas an IP
loads to all data targets at once.
A DTP can do both full & delta loads to the same target via the same DTP, which is not
possible via the same IP.
IP Delta: checked record-wise.
DTP Delta: checked request-wise.
Queries:
Repeat ---> Repeat creates a new instance... the IP uses Repeat.
Repair ---> Repair continues the same instance... the DTP uses Repair.
DATA Packet: a group of logically related data in a single unit.
The data packet is related to the Info Package, which is used to load data from the source
system to BI (PSA). As per the SAP standard, we prefer to have 50,000 records per data
packet.
DATA Package: a group of data packets.
The data package is related to the DTP, which is used to load data from the PSA to further
data targets. Start and end routines work at package level, so the routines run for each
package one by one.
Replication:
Bringing the Data Source from R/3 to BW is called replication.
Virtual Cube Based on BAPI: in the case of a virtual cube based on a BAPI, we can extract
the data from any other external system with the help of a BAPI (RFC) at run time.
How to delete the init request in an Info Pack?
Go to the Info Pack ----> Schedule menu ------> Initialization Options for Source System
----> select the request -----> Delete.
In a cube we have 5 requests; if we delete the 3rd request it won't delete the 4th & 5th
requests, but in a DSO if we delete the 3rd request it deletes the 4th & 5th requests also.
Can you add a new field at the DSO level?
Ans: Yes, a DSO is nothing but a table.
Where is the PSA data stored?
Ans: In the PSA table.
DTP Info:
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/16761
In the Extended Star Schema, why is master data maintained outside the cube?
Ans: For reusability purposes.
Diff B/W Template & Reference
Reference: If we have an Info Object 'A' and we create Info Object 'B' by taking 'A' as
reference, all the properties of 'A' are copied to 'B' & the data is also shared; we can't load
data separately, and we can't change the properties of 'B'.
Template: If we have an Info Object 'A' and we create Info Object 'C' by taking 'A' as
template, all the properties of 'A' are copied to 'C', we can change the properties of 'C', &
we can load data separately.
[Go to Info Object ---> Char Catalog --> right-click ---> Create Info Object ---> Here 2 options:
---> 1. Reference Char
2. Template]
I have loaded the '#' symbol from ECC to PSA and it works fine, but when I try to load this
data from PSA to the cube I get an error – INVALID CHARACTERS?
Ans: Write code in the field routine for that particular field if you need to eliminate the '#'.
Sample code, say your source field is KOKRS:
RESULT = SOURCE_FIELDS-kokrs.
REPLACE ALL OCCURRENCES OF '#' IN RESULT WITH space.
CONDENSE RESULT.
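Note: if the character should be permitted rather than removed, the set of allowed characters
can also be maintained globally via transaction RSKC.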
I loaded data into a DSO using flat file extraction. The problem is that daily I am loading some
millions of records, and I have written some ABAP code for this. Still I face a performance
problem; how do I rectify this issue?
Ans: 1. Better to load the data into a Write-Optimized DSO from the flat file, then load from
the Write-Optimized DSO into a Standard DSO (in a WODSO data won't overwrite; it
maintains detailed data).
2. Better to uncheck the SID generation check box.
Types of DTPs:
1. Standard DTP
2. Direct Access DTP
3. Error DTP
4. RDA DTP
Diff B/W Normal DTP & Direct Access DTP?
1. A normal DTP generates a request number.
2. A direct access DTP doesn't generate any request number.
Why create a DSO?
1. The main purpose of a DSO is the overwrite functionality.
2. Detail-level data is available in the DSO.
How to handle duplicate records?
Ans: Go to the master data DTP ---> Update tab ---> select the Handle Duplicate Records check box.
Go to SE14 ---> Error Stack table ----> delete records.
Go to SE16 ---> see the data in the Error Stack table.
If an IP is scheduled 2/3 times, the PSA has 2/3 requests (total 1,000 records, each request
200 records). Then:
1. Full load: picks all the records.
2. Delta load: it also picks all the records.
What is reconstruction?
It is the process by which you reload data from the PSA (or an ODS) into a cube/ODS.
Can a number of data sources have one Info Source?
Ans: Yes, of course. For example, for loading texts and hierarchies we use different data
sources but the same Info Source.
There is one ODS and 4 Info Cubes. We send data at a time to all the cubes; if one cube gets
a lock error, how can you rectify the error?
Ans: Go to T-code SM66, see which one is locked, select that PID from there, then go to
T-code SM12 and unlock it. This happens when lock errors occur while scheduling.
How many characteristics can be found in a dimension table?
Ans: 248.
What is the limit on key figures in a fact table?
Ans: 233.
When are Dimension IDs created?
Ans: When transaction data is loaded into the Info Cube.
When are SIDs generated?
Ans: When master data is loaded into the master data tables (attributes, texts, hierarchies).
How would we delete the data in an ODS?
Ans: By request IDs, selective deletion & change log entry deletion.
4 types of data sources:
1. Transaction Data
2. Master Data Attributes
3. Texts
4. Hierarchies
How to create a Transport Request?
Go to the particular object .......> double-click ...> Extras (menu) ...> Object Directory
Entry ....> give the package .....> Request.
Master Data Data Sources:
1. 0PAYER ----> Payer
2. 0MAT ----> Material number
3. 0MATERIAL ----> Material
4. 0CUSTOMER ----> Customer number
5. 0PLANT ----> Plant