SAP BW Modelling, Extraction and Reporting

This document provides information about the implementation and development of data in Business Warehouse/Business Intelligence (BW/BI). It includes details about tabs, data sources, info objects, key figures, data transfer processes (DTP), and cubes. It also discusses topics like data selection, extraction, processing, data targets, scheduling, re-modelling, and the differences between BW 3.5 and BI 7.0.


 Implementation/Development Data:

Tabs:

Data Source:
1. General Information
2. Extraction
3. Proposal
4. Fields
5. Preview

Info Objects:
1. General
2. Business Explorer
3. Master Data/Texts
4. Hierarchy
5. Attribute
6. Compounding

Info Package:
1. Data Selection
2. Extraction (flat file only)
3. Processing
4. Data Targets
5. Update
6. Schedule

Key Figures:
1. Type/Unit
2. Aggregation
3. Additional Properties

DTP (Data Transfer Process):
1. Extraction
2. Update
3. Execute

DSO:
1. Contents
2. Requests
3. Reconstruction

Cube:
1. Contents
2. Performance
3. Requests
4. Rollup
5. Collapse/Compression
6. Reconstruction

General Info: Give short, medium, long descriptions.
Extraction: Name of the file,
header rows to be ignored,
data format (CSV),
data separator,
escape sign.
Proposal: Load example data.
Fields: Field --- Description --- Data type --- Length.
Preview: Read preview data (10,000 records).
Info Package:

Data Selection: Info object --- Technical name in the source system ---
Description --- From value --- To value.
Extraction: Name of the file.
Data format (CSV)
Data separator
Escape sign.
Processing: 1. PSA and then data targets.
2. PSA and data targets in parallel.
3. Only PSA.
4. Only data targets.
Data target: Data target name.
Update: 1. Full 2. Initialize delta process
a. Init with data transfer b. Init without data transfer
c. Early delta initialization.
Schedule: Start.

DTP:

Extraction: Extraction Mode: 1.Full 2. Delta


Update:
Execute: Execute.

BW/BI: Data warehousing is a methodology; the tool or application with which we implement
it is called BW/BI.
VERSIONS:
1999: BIW 1.2
2000: BIW 3.0A
2003: BIW 3.1C
2004: BW 3.5
2006: BI 7.0
2010: BW 7.3
SAP R/3 Applications: SD MM PP PM QM WM FI CO

CLIENT NO: 000 to 999

TRFC: Transactional Remote Function Call.

NAMING CONVENTION:
0 ----> SAP Standard Delivery Objects.
Z/Y ----> Custom Defined Objects (Developer Creates)
Y/Z ----> Temporary Testing
B ----> Super User
 Technical names of the Info Objects are alphanumeric
(no special characters supported).
 Length is 3 – 9.
Data dictionary (table) naming convention:
 Standard Objects table: /BI0/<XXXXXXXX>
 Custom Object Table: /BIC/<XXXXXXXX>
Characteristics:
Data Types: CHAR 60
NUMC 60
DATS 8
TIMS 6
Key figures:
Data Types: AMOUNT QUANTITY NUMBER
INTEGER DATE TIME

Lowercase Letters check box:
If you select this check box: it allows both lower & upper case letters.
If you do not select this check box: it allows only upper case letters,
e.g. when the field is a primary key.
Conversion Routine: Converts from the external (source) format to the internal (database) format.
ALPHA: For numeric values, it prefixes leading zeros based on the field length.
Ex: 9999 ----> 0000009999
C009999 ----> C009999 (unchanged, since it is not purely numeric).
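A minimal ABAP sketch of this conversion, using the standard function module
CONVERSION_EXIT_ALPHA_INPUT (the 10-character field length is an assumption for illustration):

DATA lv_value(10) TYPE c VALUE '9999'.

* Internal format: pads purely numeric values with leading zeros
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = lv_value
  IMPORTING
    output = lv_value.
* lv_value is now '0000009999'; a value like 'C009999' would stay unchanged.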

 Diff Between Fixed Currency & Unit/Currency:

1. MATERIAL   PRICE (INR)
   M01        100
   M02        300
   M03        500
In this scenario we are using FIXED CURRENCY.

2. MATERIAL   PRICE   CURRENCY
   M01        100     INR
   M02        300     EUR
   M03        500     USD
In this scenario we are using UNIT/CURRENCY.

 Diff Between Fixed Unit & Unit/Currency:

1. MATERIAL   QTY (KG)
   M01        500
   M02        1000
In this scenario we are using FIXED UNIT.

2. MATERIAL   QTY    UNIT
   M01        500    L
   M02        700    KG
   M03        1000   EA
In this scenario we are using UNIT/CURRENCY.

System generates a minimum of 3 database tables:
1. Master Data table: P table maintains the time-independent master data
/BIC/P<Info Obj Name>
Ex: /BIC/PZMATERIAL
2. SID table: S table maintains the SIDs of the characteristic
/BIC/S<Info Obj Name>
Ex: /BIC/SZMATERIAL
3. Text table: T table maintains the text data
/BIC/T<Info Obj Name>
Ex: /BIC/TZMATERIAL
4. Q table maintains the time-dependent master data (generated only if the Info Object
has time-dependent attributes)
/BIC/Q<Info Obj Name>
Ex: /BIC/QZMATERIAL
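A minimal ABAP sketch of reading one of these generated tables (assumes a custom Info Object
ZMATERIAL whose P table /BIC/PZMATERIAL has been activated and loaded):

DATA lt_pmat TYPE STANDARD TABLE OF /bic/pzmaterial.

* Read only the active version of the time-independent master data
SELECT * FROM /bic/pzmaterial
  INTO TABLE lt_pmat
  WHERE objvers = 'A'.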
IP UPDATE MODES:
1. FULL UPDATE
2. INITIAL UPDATE
3. DELTA UPDATE
FULL UPDATE: It extracts all the data from the source system with respect to the data selection.
Flat files support only FULL UPDATE.
INITIAL UPDATE: It extracts all the data from the source system and enables the delta load.
FULL UPDATE + Delta Queue.
DELTA UPDATE: It extracts only new records and changed/modified records.
Note: If you run the initialization load with some selections, the delta load also runs with the
same selections only (mandatory).

DTP Extraction Modes:


1. Full 2. Delta
PSA (Persistent Staging Area): Error handling.
1. PSA holds the same data as the source and is used for error handling.
2. PSA structure is based on the Data Source structure.
3. PSA structure:
Data source fields + Req. No + Record No + Data Pkg. No + Partition Value.
4. PSA table naming convention: /BIC/B000*
5. One Data Source has only one PSA.
6. PSA data is temporary; once you load the data properly into further data targets we
should delete the data from the PSA.
7. Source system is an SAP system ----> Replicate the Data Source.
8. Source system is a non-SAP system ----> Create the Data Source.

 Info Provider: Info Provider is an object on which we can do reporting.
 Info Area: It's like a folder and main area, which maintains all the data targets, i.e. Cube,
DSO, ...
 Application Component: It's like a folder; it maintains Data Sources & Info Sources.
 Catalogs: They are like folders; they maintain characteristics & key figures.
 Data Targets: Data targets are objects which store data physically.
 Data Source: Data Source is an object which is related to the source system.
 Info Pack: Info Package is an object used to load the data up to the data targets in BW 3.5
& up to the PSA in BI 7.0.
 Transformations: Converting the data from one state to another state; we use
transformations between source and targets.
 DTP: This object is newly introduced in BI 7.0; it is used to load the data from PSA to
data targets, and also to update data from one data target to another data target. It
also enables error handling of records.

Diff B/W BW (3.5) & BI (7.0):

BW 3.5                                      BI 7.0
1. No Re-Modelling concept.                 1. Re-Modelling concept is there.
2. Info Set includes Info Objects & DSO.    2. In 7.x it includes Cubes also.
3. No RDA concept.                          3. RDA concept is there.
4. There is no DTP in BW 3.5.               4. DTP is newly introduced in 7.0 &
                                               loads the data from PSA to data target.
5. Info Pack loads data from the            5. Info Pack loads data from the
   source system to the target.                source system to the PSA only.
6. No error stack.                          6. DTP error stack is there.
7. ODS (Operational Data Store).            7. DSO (Data Store Object).
8. No Write-Optimized DSO.                  8. Write-Optimized DSO is there.
9. Transfer rules & update rules.           9. Transformations.
10. Unit conversion not implemented.        10. Unit conversion implemented.
11. Return table – possible.                11. Return table – not implemented.
12. PSA is not mandatory in 3.5.            12. PSA is mandatory in 7.0.

Re-Modelling: T_CODE: RSMRT
http://help.sap.com/saphelp_nw04s/helpdata/en/58/85e5414f070640e10000000a1550b0/frameset.htm
 When a cube is already loaded, we can change the structure of the Info Cube without
losing its data.
 If the Info Cube contains a few compressed and a few uncompressed requests, you need to
compress all requests before you can start the remodelling. Remodelling cannot be
performed on a partially compressed Info Cube.
 Add characteristic
 Delete characteristic
 Replace characteristic
Condition Filled By (Add/Replace Characteristic):
1. Attribute of another characteristic within the same dimension.
2. From another characteristic within the same dimension (1:1 mapping with the
characteristic).
3. Constant.
4. Customer exit.
 Add key figure
 Delete key figure
 Replace key figure
Condition Filled By:
Add key figure supports:
1. Constant
2. Customer exit
Replace key figure supports only:
1. Customer exit
Points to remember whenever we do Re-Modelling:
1. Before going to Re-Modelling, make sure that the process chain loading data into that
Info Cube is stopped.
2. For whatever characteristics or key figures you are adding, check whether additional
database space is available for that cube.
3. After doing the Re-Modelling, check that the transformations are all working fine, then
activate the process chains and schedule the data.

Star & Extended Star Schema:
Structure-wise both are the same: a fact table at the centre, surrounded by (linked to)
dimension tables. The differences:
1. In the Star Schema we have a total of 16 dimensions [3 SAP defined & 13 user defined] &
1 fact table; the fact table has 255 columns [233 for key figures, 16 for dimensions
& 6 SAP defined].
2. In the Star Schema, 1 dimension is assigned to 1 characteristic.
3. Here master data is maintained inside the dimension table.
4. In the Extended Star Schema, 1 dimension is assigned to up to 248 characteristics.
5. Here master data is maintained outside the Info Cube.
6. Here the new concept is the SID; there is no SID table in the Star Schema. SID means
Surrogate Id.
7. In the Extended Star Schema, master data is linked to the SID table.
8. The SID table generates numeric values for every master data record.
9. In the Extended Star Schema, the SID table is linked to the dimension table.
10. The dimension table generates numeric values; these dimension table numeric values are
linked to the fact table.
11. So in the Extended Star Schema: master data is linked to the SID table, the SID table is
linked to the dimension table, and the dimension table is linked to the fact table.
12. SID table: generates the numeric values for every master data record; the SID table is
the interface between the master data table and the dimension tables.

Star Schema                             Extended Star Schema
1. One dimension assigned to 1 char.    1. Here 1 dimension assigned to 248 chars.
2. Master data maintained inside        2. Master data maintained outside the cube.
   the dimension table.
3. No SID concept.                      3. Generates SID values.
4. Doesn't support multiple languages.  4. Supports multiple languages.

Targets:
1. Info Object
2. Info Cube
3. Real Time Info Cube
4. Data Store Object
5. Multi Provider
6. Virtual Provider
7. Info Set

Info Object:

 Info Object is one of the data targets.
 Info Objects are the basic/smallest information providers in SAP BW.
 They are of different types:
1. Characteristic Info Object: It maintains only master data.
Ex: Customer (0CUSTOMER)
Material (0MATERIAL)
2. Key Figure Info Object: It maintains transaction data.
Quantity (0QUANTITY)
Amount (0AMOUNT)
3. Unit Info Object (Currency, Unit).
4. Time Characteristics (Cal Year, Month, ...).
5. Technical Info Objects (Request ID, Change ID).
Master Data: Master data means the data which won't be changed frequently.
It is the detailed level of information about the entities. It overwrites the data; it gives the
present truth. Ex: EMP ID (PK)
Emp Name
Emp Address
Master data is divided into 3 types:
1. Attribute. 2. Text. 3. Hierarchy.
Transaction Data: Transaction data means the data which changes with every transaction.
Data related to the occurrence of business is called transaction data. It never overwrites;
it is additive. It gives the fact truth.
(CID, MID, EID, Price, Qty, Revenue).
Primary Key: It is a column which can be used to uniquely identify a row/record in a table
and to avoid duplication. A primary key does not support null values.
Foreign Key: When the primary key of one table is used in another table we call it a foreign
key. A foreign key always refers to a primary key in another table. A foreign key supports null
values. With the help of primary key & foreign key we can connect multiple tables.
Composite Key: When two or more columns act as a primary key, we call it a CK.
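Ex: In a sales item table, Order No + Item No together act as the primary key, so they form a
composite key.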

Info Cube

 Info Cube is one of the physical data targets as well as an Info Provider.
 From a reporting point of view, an Info Cube describes a self-contained dataset; this
dataset can be evaluated in a BEx query.
 Info Cube maintains historical data, i.e. older data.
 It maintains data physically.
 Cube maintains summarised data.
 Cube functionality is additive.
 There are 3 types of cubes:
1. Standard Info Cube (Basic Info Cube)
2. Real Time Info Cube (Transactional)
3. Virtual Provider (Remote)
(A cube that has aggregates is additionally called an Aggregate Cube.)
 Info Cube has 16 dimensions.
 3 are predefined/SAP defined:
1. UNIT Dim.
2. TIME Dim.
3. DATA PACKAGE Dim.
 13 are user/customer defined.
 Here one dimension = 248 characteristics (7.0).
 In 3.5, one dimension = 1 characteristic.
 Cube has one FACT table.
 One FACT table = 255 columns:
16 for dimensions.
6 predefined/SAP defined.
233 for key figures.
 Characteristics are assigned to dimensions & key figures are assigned to the fact table.
 For every Info Cube, we have to assign at least one time characteristic.
 These time characteristics are provided by SAP; we can't create time characteristics.
1. Calendar Day
2. Calendar Month
3. Calendar Year
4. Quarter
5. Half Year
6. Week Day
7. Calendar Year/Week
8. Calendar Year/Month
9. Calendar Year/Quarter
10. Fiscal Year
11. Fiscal Year Variant
12. Posting Period
13. Fiscal Year/Period

Top-down approach: Based on the reporting requirement.
Bottom-up approach: Based on the structure of the data source.
How many dimensions can we create: Min 1, Max 13.
Technical names of dimensions: Cube name followed by T, U, P, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D.
Ex: ZIC_RVITMP, ZIC_RVITMT, ZIC_RVITMU, ...
 The Time dimension holds time characteristics; we can assign only the fixed time characteristics.
 When you activate an Info Cube, the system generates backend tables.
Ex: 2 fact tables & a dimension table for each and every dimension.
1. F – Fact Table ----- /BIC/F<Info Cube Name>
Ex: /BIC/FZIC_RVITM
2. Compressed E – Fact Table ----- /BIC/E<Info Cube Name>
Ex: /BIC/EZIC_RVITM
Dimension table names: /BIC/D<Info Cube Name><Dim suffix>
Ex: /BIC/DZIC_RVITMP, /BIC/DZIC_RVITMT, /BIC/DZIC_RVITMU
How to see the database structure of an Info Cube: T_CODE: /NLISTSCHEMA
How to see the target data: T_CODE: /NLISTCUBE

VIRTUAL Providers

 A Virtual Info Cube is also a cube.
 It maintains data logically.
 Virtual providers are mainly used for live reporting, accessing data directly from the
source system at query run time.
 A virtual cube has no data; whenever a report runs, it fetches the data from the source
tables.

When can we use a Virtual Info Cube:

1. The volume of data should be small.
2. The number of users accessing the data in parallel should be small.
3. Access should be infrequent.
 Here we can use a Direct Access DTP.
 We can bring data in 3 different ways:
1. Using a Direct Access DTP (remote system is SAP).
2. Based on BAPI (remote system is general (non-SAP)).
3. Based on a Function Module (if you want to provide any calculations).

Real Time Info Cube

 It is also a physical data target.
 Standard Info Cube + transactional property (planning).
 2 options in a Real Time Info Cube:
1. Planning 2. Loading.
 Only one option works at a time: if planning is active, loading does not work, and if
loading is active, planning does not work.
 In a Real Time Info Cube there are 2 settings:
Real-Time Data Target Can Be Loaded With Data: planning not allowed.
Real-Time Data Target Can Be Planned: loading not allowed.

Diff B/W CUBE & VIRTUAL CUBE
CUBE                                VIRTUAL CUBE
1. Maintains data physically.       1. Maintains data logically.
2. Uses standard DTP.               2. Uses Direct Access DTP.
3. Generates SIDs.                  3. Does not generate any SIDs.

Diff B/W STD CUBE & REAL TIME CUBE:

STD CUBE                                  REAL TIME CUBE
1. Used for BW reporting.                 1. Use an RTC when you want to
                                             implement a cube for BPS or IP
                                             or BPC (planning applications).
2. Only reads data by executing queries.  2. Reads as well as writes data.

If you want to convert a STD CUBE to a REAL TIME CUBE or a REAL TIME CUBE to a STD CUBE,
use the program SAP_CONVERT_NORMAL_TRANS (run via SE38).

DSO (Data Store Object)

DSO: A DSO is a data storage area for cleaned-up & consolidated master or
transaction data at document level (detailed level).
1. DSO is one of the data targets and maintains data physically.
2. DSO maintains operational data, i.e. the latest data.
3. It maintains data at a detailed level.
4. DSO has OVERWRITE functionality.
5. The key fields must be the same; only then is data overwritten, otherwise a
new record is inserted.
6. Characteristics are assigned to key fields & key figures are assigned to data fields.
7. A field that acts as a PK is put into the key fields & a field that acts as a non-PK is put
into the data fields.
8. A maximum of 16 key fields and a maximum of 749 data fields can be assigned.
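Ex (overwrite behaviour, illustrative values): Key field = Order No. A first load brings
(4711, Qty 10); a second load with (4711, Qty 15) overwrites it, so the active table holds
(4711, 15). A record with a new key (4712, Qty 20) is inserted as a new row.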

 There are 3 types of DSOs:
1. Standard DSO (Standard).
2. Direct Update DSO (Transactional).
3. Write-Optimized DSO.
 Standard DSO has 3 tables:
1. New Data Table (Activation Queue).
2. Active Data Table.
3. Change Log Table.
Direct Update DSO & Write-Optimized DSO have only one table, i.e. the Active Data table.

Standard DSO:
1. Use a Standard DSO for operational reporting and consolidating data.
2. Data is provided using a Data Transfer Process.
3. SID values can be generated.
4. Data is available for reporting after activation.
Database structure:
1. New Data Table (/BIC/A<DSO NAME>40)
2. Active Data Table (/BIC/A<DSO NAME>00)
3. Change Log Table (/BIC/B<SYS GEN. NUMBER>)

Write-Optimized DSO: It exists only in 7.x.

1. Use a Write-Optimized DSO for faster loading of large amounts of data and for updating
data into further data targets.
2. Data is provided using a Data Transfer Process.
3. SID values cannot be generated.
4. Data is available for reporting immediately after it is loaded.
5. Used mainly for mass uploads.
6. It supports delta loads based on the request numbers of the active data table.
7. We can also run queries on a WODSO, but SIDs are generated at query run
time, which affects query performance.
8. WODSO has only the "Uniqueness of Data" setting.
Unique Data Records: If you don't want to overwrite the data in the DSO but still want
to maintain unique data records, use this option.

9. In a WODSO, instead of key fields & data fields there are a Technical Key (generated) &
a Semantic Key.
What are the technical keys in the WODSO?
1. RECORD NUMBER
2. REQUEST NUMBER
3. DATA PACKET NUMBER
Database structure:
Active Data Table (/BIC/A<DSO NAME>00).

Direct Update DSO:

1. Use a Direct Update DSO for implementing BPS (Business Planning & Simulation) or
IP (Integrated Planning), not for a plain BW requirement.
2. Data is provided using APIs.
3. SID values cannot be generated.
4. We can do reporting directly on this DSO.
5. It doesn't support delta loads because there is no change log table.
Database structure:
Active Data Table (/BIC/A<DSO NAME>00).
 In a DUDSO we can read as well as write the data, but SDSO & WODSO only read
data.

[Diagram: DSO data flow – Full/Init loads read from the Active Data table; Delta loads read
from the Change Log; data goes to reporting and further data targets.]

 Whenever you load data into a DSO it first goes to the New table; then you
activate it, and once activation is done the data moves from the New table to the Active Data
table & the Change Log table.
 Once the data has moved to the Active Data table & Change Log table, it is no longer in
the New table.
 Key fields can be dragged & dropped into the data fields, but we can't drag & drop data
fields into the key fields.
 When a DSO already has data we can add a data field, but we can't add a key field.
 Direct activation process in DSO ---> RC ---> Activate Data.
 Images are maintained under 0RECORDMODE in the Change Log table, not in the New &
Active Data tables.
1. NEW Image ----- N
2. BEFORE Image ----- X
3. AFTER Image ----- ' ' (space)
4. REVERSE Image ----- R
5. ADDITIVE Image ----- A
6. DELETE Image ----- D
7. UPDATE Image ----- Y
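Ex (illustrative): Order 4711 with Qty 10 is changed to Qty 15. The Change Log stores a BEFORE
image (X) with Qty -10 and an AFTER image (' ') with Qty 15, so a delta load to a cube adds the
correct net +5.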
 When reporting on a DSO, the data is picked from the Active Data table.
 If you want to load data from a DSO to further data targets, the delta data moves from
the Change Log.
 A FULL/INIT load moves data from the Active Data table & a DELTA load moves data from
the Change Log table.
 Only once the data is activated can SIDs be generated.
 Whenever you extract data from a flat file you can use a DSO for data staging.
 Reporting on a cube is faster compared to reporting on a DSO.

DSO Settings: 1. SID Generation upon Activation.
2. Unique Data Records.
3. Set Quality Status to 'OK' Automatically.
4. Activate Data Automatically.
5. Update Data Automatically.
1. SID Generation upon Activation:
Each master data record generates one SID value; these SIDs are stored in a
separate SID table. The SAP system checks whether every master data record has a SID or not.
This check box is selected automatically.
2. Unique Data Records:
When you select this check box, duplicate records are ignored, and if any
error records come in you will get an error message.
3. Set Quality Status to 'OK' Automatically:
Whenever data is loaded into the DSO it arrives in the New Data table with the status
yellow; here we can change the status from yellow to green and then activate.
When we select this check box, the status is changed from yellow to green automatically.
4. Activate Data Automatically:
If we select this check box, the data is activated automatically in the DSO.
5. Update Data Automatically:
If we select this check box, the data is updated automatically from the DSO to the other
data target.

 Is activating multiple requests at the same time possible or not?
It is possible. Go to the DSO Requests tab
------> Select multiple requests.
------> Click Activate.
------> Select the check box "Do not condense requests into one request
when activation takes place".

Diff B/W CUBE & DSO:

CUBE                                    DSO
1. Cube has ABR functionality.          1. DSO has AIE functionality.
2. Additive functionality.              2. Overwrite functionality.
3. Maintains summarised data.           3. Maintains detailed data.
4. We can create aggregates.            4. No aggregates.
5. We can compress data.                5. We can't compress data.
6. No activation process.               6. Activation process is there.
7. No images.                           7. DSO maintains images.
8. Cube maintains historical data       8. DSO maintains operational data for
   for historical reporting.               operational reporting.
9. Cube database structure is a         9. DSO database structure is a flat
   multidimensional model.                 relational database table.

Data Mart:
 Data moves from one data target to another data target.
 Here SAP BI acts as a source as well as a target.
 To move data from DSO to DSO (further data target):
Go to the target DTP --> Extraction tab --> Extraction From -->
1. Active Table.
2. Data Store Change Log.
 ERROR STACK:
a. It is a temporary storage area; it maintains all error records.
b. These error records are not updated through the normal DTP.
c. These error records are updated through the Error DTP.

Multi Providers:
1. A Multi Provider is a type of Info Provider that contains data from a number of Info
Providers and provides data for reporting. It can be built on:
1. Info Objects  2. DSO Objects (Standard only)
3. Info Cubes  4. Info Sets  5. Aggregation Levels
2. It improves query performance.
3. A Multi Provider, like an Info Set, doesn't contain data physically; it contains data
logically.
4. Don't use Direct Update & Write-Optimized DSOs in a Multi Provider.
5. In case you have to assign a Direct Update DSO to a Multi Provider, first assign it to an
Info Set, then assign this Info Set to the Multi Provider.
6. Do not use more than one non-cumulative Info Cube because this could lead to
incorrect query results.
7. A Multi Provider can also be built on a single target.
Info Set:
 An Info Set is used to JOIN the data from different targets.
 At least one object should be common to all of them.
 It provides the intersection of the data.
 Don't use more than 10 Info Providers in one Info Set. It is better to create multiple
Info Sets depending on reporting needs.
 When reporting on an Info Set, we get one extra key figure, 'Record Count'.
Record Count: How many records have been combined.
 The join type of an Info Set is Inner Join (intersection) or Left Outer Join.
Inner Join: It brings only the common records.
Left Outer Join: It brings the common records & the additional records from the left operand.
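Ex (illustrative): An Info Set joins DSO1 (orders) and DSO2 (deliveries) on Order No. An inner
join returns only orders that have deliveries; a left outer join with DSO1 as the left operand
also returns orders without deliveries, with the delivery fields empty.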
 Temporal Join: A join is called temporal if at least one member is time-dependent.
What are the inputs of an Info Set?
 In BW 3.x an Info Set can join data from ODSs & Info Objects only.
 In BI 7.x an Info Set can join data from ODSs, Info Objects & at most 2 CUBEs.
What is the diff B/W MP & Info Set?
1. Info Sets are for JOIN & MPs are for UNION.
2. MPs are for multidimensional analysis & Info Sets for tabular-format analysis.

Performance Tuning:

 When we execute a query and the query display is slow: Query Performance.
 When we extract data from the source system to BI and the loading is slow:
Loading Performance.
 Query Performance: Whenever we execute a query it triggers the OLAP processor, which
first checks whether the data is available in the OLAP cache; if the data is not there in
the OLAP cache it goes to the cube & executes the BEx report.
 How to improve query performance:
1. Aggregates
2. Compression
3. Create Index
4. Partitioning
-----> http://explore-sapbw.blogspot.in/2012/02/use-of-aggregatescompression-roll-up.html
 How to improve load performance:
1. Delete indexes before loading.
2. Prefer delta updates.
3. Load master data before transaction data,
because SIDs are generated for the master data, and based on them the cube Dim IDs are
generated.
4. Create line item dimensions:
since the fact table refers to the SID directly, no Dim IDs need to be generated, which
improves loading performance.
 Deleting indexes improves load performance.
 Creating indexes improves query performance.

Aggregates: T_CODE: RSDDV

 Aggregates are small cubes or baby cubes.
 Aggregates improve query performance.
 Aggregates are created on Standard Info Cubes & Real Time Info Cubes; don't create
aggregates on a Virtual Info Cube or a Multi Provider.
 We can create an unlimited number of aggregates.
 Don't give a technical name to aggregates; it takes a default value starting with
100000...
 If a cube contains aggregates, that cube is called an 'Aggregate Cube'.
 Don't create aggregates on your own; create them whenever a user raises a ticket.
 After the ticket is raised, first check the conditions in ST03:
 Percentage of DB time > 30% of total time.
 Aggregation ratio > 10.
 When the query executes, it goes and searches the data in the cube; the searching time
is called 'Aggregation Time'.
 The found data is displayed; the display time is called 'DB Time'.
 Aggregation ratio = No. of records selected / No. of records transferred.
 If the above 2 conditions are exceeded, then create aggregates.
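Ex (illustrative numbers): If a query selects 50,000 records from the cube but transfers only
500 to the front end, the ratio is 50,000 / 500 = 100, so an aggregate on the queried
characteristics would pay off.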
 Whenever a query executes it first goes to the aggregates and searches; if the aggregates
have no data it then goes to the cube, searches, and displays.
 In aggregates, 0FACTCOUNT is available by default as a key figure.
 Initial fill: After creation of the aggregate, the first load of data from the cube to
the aggregate is called the initial fill.
 Flat Aggregate:
An aggregate with 1-15 characteristics assigned is called a 'Flat Aggregate';
generally we assign 7-8 characteristics to one aggregate.
 When we run a query, how can we know whether the data is fetched from the aggregate or
from the cube?
RSRT ---> specify the query name ---> Execute + Debug --->
select 'Display Aggregate Found' ---> OK

Whenever you create aggregates, 2 options come up:

1. Generate Proposal: The system proposes which characteristics take more time at query
execution time; those characteristics are displayed, and you select and assign them to the
aggregate.
2. Create Yourself: The system proposes nothing; you yourself select and assign
characteristics to the aggregate.
Different options for aggregates: SWITCH OFF/ON
ACTIVATE/DEACTIVATE
DELETE
SWITCH OFF/ON:
 If you don't use an aggregate, simply SWITCH it OFF.
 If you SWITCH OFF an aggregate, the structure is available & the data is also available,
but the data will not be displayed.
ON ------> Data is available to the query.
OFF -----> Data is not available to the query.
 If aggregates are SWITCHED OFF, the query goes directly to the cube to search the data.
 Aggregates that are SWITCHED OFF don't provide data to the query, but you can still load
data from the cube to the aggregates without a problem.
ACTIVATE/DEACTIVATE:
 If you DEACTIVATE a particular aggregate, the structure is available but the data is
deleted.
DELETE:
 When you DELETE an aggregate, the structure as well as the data is deleted.

ROLL-UP
 Data moves from the cube to the aggregates via Roll-Up, based on request IDs.
 Without request IDs we can't move data from the cube to the aggregates.
 If you want to move data from the cube to the aggregates, you have to move it before
compression of the cube data.
 We can compress the aggregates without compressing the cube data, but if you compress
the cube, it compresses the aggregates too.
 Once the aggregate is activated, compression is done automatically if we select the
check box ----> 'Compress After Roll-Up' in the Roll-Up tab.

Compression: (http://sapbi-allaboutcompression.blogspot.in/)

 Compression improves performance & removes redundant data.
 Compression is the last activity on the cube.
 When you create a cube it creates 2 fact tables, F & E; the structures of the E & F
tables are the same.
 Initially data arrives in the /F fact table.
 After compression the data moves from the F table to the E table.
 Once you compress an Info Cube it is not possible to delete data from the Info Cube using
the request number, because when you compress an Info Cube the system deletes all the
records from the data package dimension and nullifies all request IDs.

Zero Elimination: If you select this check box, then after compressing the data, any records
whose key figure values are all zero are deleted.
In the Collapse tab there is a checkbox - Zero Elimination.
Selective Deletion:
Suppose you compress the cube data with no errors, and after compression, during the testing
or reporting process, you find some errors. After compression we cannot delete the data based
on request IDs; in that case, using Selective Deletion, we delete those records and after that
schedule the data from the source system once again with the appropriate selections.
What is Reverse Posting?
Reverse Posting is a concept of SAP BW 3.x, not SAP BI 7.0.
Once a cube is compressed, you cannot alter the information. This creates a big problem when
the compressed data has some wrong entries. To delete these wrong entries SAP provides a way
called "Reverse Posting".
Reverse Posting is possible only if the data is loaded to the cube via the PSA; if the data is
loaded via an ODS, reverse posting is not possible.
 F table:
Req No  Cno  Mno  Calday    Curr  Unit  Rev  Qty
161     C1   M1   20080101  INR   EA    100  20
161     C1   M1   20080102  USD   EA    200  40
162     C1   M1   20080101  INR   EA    40   60
162     C1   M1   20080102  USD   EA    80   90
163     C1   M1   20080102  INR   EA    200  400

 E table:
Req No  Cno  Mno  Calday    Curr  Unit  Rev  Qty
0       C1   M1   20080101  INR   EA    140  80
0       C1   M1   20080102  USD   EA    280  130
0       C1   M1   20080102  INR   EA    200  400
Ex: In the Collapse tab, if you give request 377 & Collapse, that particular request and the
requests below it (older requests) are also collapsed.
 How to see the data in the /F & /E fact tables:
/F -----> Cube ----> Manage ----> Contents tab ----> Fact Table.
/E -----> SE11 ----> Database Table ----> /BIC/EIBM_CUBE.
Contents: Displays all the characteristics in the cube.
Performance: Maintains all the indexes.
Indexes are of 2 types: 1. Primary Index.
2. Secondary Index.
Primary Index: The primary index is maintained by the system by default; we can't create
or delete it.
Secondary Index: Secondary indexes are the ones we can create.
These are of 2 types: 1. Bitmap.
2. B-Tree.
Bitmap: Each record is assigned a binary value (e.g. 10110100); the data is retrieved based
on the binary value.
B-Tree: The data is maintained in the form of a parent & child relationship.
Every parent has two children; the value of the left child is less than the parent, the value
of the right child is greater than the parent. It divides the data like a tree.
 SE14 ---> F table ---> Edit Indexes ---> 0 indicates the primary index ---> all others
are secondary.
 Bitmap is faster when the number of distinct values is small (low cardinality).
 B-Tree is faster when the number of distinct values is large (high cardinality).

Requests: Maintains all the request IDs.

Reconstruction: This is the process of reloading data from the PSA (or an ODS) to a cube/ODS.
There is no requirement for the Reconstruction tab in BI 7.0; we used the Reconstruction tab
heavily in 3.x.
Ex: The PSA has data, and in 3.x we loaded it from the PSA into the cube. If we got an error
after loading into the Info Cube, we immediately had to delete the request from the cube,
check the TR & UR where we got the error and fix it, and after that reschedule the data from
the PSA to the target. In 7.x there is a DTP for this, but in 3.x the way to reschedule data
from the PSA to your target was Reconstruction.
Go to the Reconstruction tab ----> Select Request & press the Reconstruct/Insert button.

Line Item Dimension:

 If the size of the dimension table is more than 20% of the size of the fact table, then
create that dimension as a line item dimension.
 By making the dimension a line item dimension, the SID values are directly linked to the
fact table.
 Only one characteristic can be assigned to a line item dimension.
Ex: Sales Order No, Accounting Doc No, Billing Doc No.
 Instead of the fact table referring to the dimension table, the fact table refers to the
SID when a line item dimension is assigned.
 When the cardinality of the dimension table is more than 20% of the fact table, a line
item dimension is assigned.
ADV: When a line item dimension is assigned, the fact table refers to the SID;
hence the number of joins is reduced, which improves query performance.
In what scenario is the dimension table optional?
When you create a dimension as a line item dimension: in this scenario the dimension table is
optional, the SID table is mandatory.

High Cardinality:
 If the size of the dimension table is more than 10% and less than 20% of the size of
the fact table, then create that dimension as a high cardinality dimension.
What happens when you create a dimension as a high cardinality dimension?
The system generates a B-Tree index instead of a Bitmap index.
How should we find the cardinality?
 RSDEW_INFOCUBE_DESIGNS gives the cardinality of the fact table & dimension tables.
 SE37 ---> RSDEW_INFOCUBE_DESIGNS ---> Single Test (F8) ---> Info Cube name ---> Execute.
 Check the table sizes & percentages.

Partitioning:

 Partitioning is the method of dividing a table (either column- or row-wise).
 Partitioning improves query performance, because it is easier to search for data in
smaller tables.
 We can do partitioning in 2 ways:
1. Physical Partitioning  2. Logical Partitioning
 Physical partitioning is done at the database level & logical partitioning is done at the
target level.
 By default the F fact table is partitioned based on the request number.
 Physical partitioning is done on the E fact table.
 We can partition a cube based on two time characteristics:
1) 0CALMONTH (Calendar Year/Month).
2) 0FISCPER (Fiscal Year/Period).
 No. of partitions: N+2 (N partitions for the chosen time range, plus one each for values
before and after the range).
 We can't partition a cube which already has data.
 If we are not doing compression, there is no use in doing partitioning.
 Go to Cube Monitor ----> Extras ----> DB Performance ---> Partitioning.
Ex: I have data in my Sales Cube from 2003 to 2011; my requirement is to compare sales data
quarterly in 2008 & 2009.
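Ex (illustrative): Partitioning on 0CALMONTH for the range 01.2008 – 12.2009 gives 24 monthly
partitions + 2 partitions for values outside the range = 26 partitions in total.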
 Re-partitioning is not possible in 3.x.
 Re-partitioning is possible in BI 7.0.

Re-Partitioning: It is available in 7.x.

Even after loading the data into the cube we can do re-partitioning in 7.x.
Processing options:
1. Merge Previous Partitions
2. Add Partitions
3. Entirely New Partitions
Go to Cube ---> R.C ---> Additional Functions ----> Re-Partitioning.

RDA (Real Time Data Acquisition): Only in 7.x
1. Whenever you want to extract up-to-the-minute data (every 5 or 10 minutes) from your
OLTP system to the BW system physically, use Real Time Data Acquisition.
2. If you want to extract data from a source system using Real Time Data Acquisition, that
Data Source must support the delta update; if it does not support the delta update mode we
can't extract data from that Data Source using Real Time Data Acquisition.
3. Real Time Data Acquisition always loads the data to a DSO.
4. We have 2 types of DTPs to upload the data into the DSO:
a. Standard DTP
b. Real Time Data DTP (RDA DTP)
Only an RDA DTP is used to load data into the DSO.
In Real Time Data Acquisition we do not execute the IP ourselves; the IP is scheduled by the
DAEMON & the RDA DTP is also scheduled by the DAEMON.
How to create:
1. Create the Data Source and activate it in BI.
2. Create a DSO.
3. Create transformations.
4. Create an IP for delta.

Step 1: Create the DS and replicate the DS into BI.

Step 2: Create the DSO.

Step 3: Create the transformations.

Step 4: Go to the DSO ---> RC ---> Create DTP.

Choosing an RDA DTP enables the Execute button.

Step 5: Create an IP for the initial update.

Once the init without data transfer is complete, set the delta update.

Go to the Processing tab and set the period, e.g. daily / every 1 hour.
Next go to the Schedule tab and click Assign.

Next click on DAEMON, give a short description, then save.

The DS now appears under Unassigned Nodes.

Now change the assignment from Unassigned to YSALES.

Go to the DS ---> RC ---> click Assign DAEMON.
Give the DAEMON ID.

Now it comes under YSALES.

Next go to the DAEMON ---> RC ---> click 'Start Daemon with All IPs'.

Now the daemon starts the IP.

Refresh repeatedly to see the requests arriving.

Attribute: An already created characteristic Info Object added to a newly created
characteristic Info Object is called an 'Attribute'.
Types of Attributes:
1. Display Attribute.
2. Navigational Attribute.
3. Exclusive Attribute.
4. Time Dependent Attribute.
5. Time Dependent Navigational Attribute.
6. Compounding Attribute.
7. Transitive Attribute.

Display Attribute:

1. Used to display the properties of a characteristic in the report.
2. A display attribute is only for display purposes; no analysis can be done on it.
3. A display attribute is completely dependent on the main characteristic.
4. If the 'Attribute Only' check box is selected, it indicates that the Info Object is a
display attribute.
5. It is stored in the attribute table ---> /P (Master Data table).

Navigational Attribute:
1. If you create an attribute as a navigational attribute, it behaves like a display
attribute as well as a characteristic, if it is selected at the Info Provider level or
cube level.
2. A navigational attribute behaves like a regular characteristic in the report.
3. It is stored in the attribute table ---> /P (Master Data table).
4. The naming convention of a navigational attribute is Main Characteristic Name_Attribute
Name.
1. In what scenario do we use a navigational attribute?
Ans: If you want to make an attribute behave like a display attribute as well as a
characteristic, in that scenario we use the attribute type Navigational Attribute.

2. When does a navigational attribute take effect?
Ans: If you select the navigational attribute at the Info Provider level, it behaves like a
characteristic.
3. Why can't we create all the attributes as navigational attributes?
Ans: Because a navigational attribute, even though it behaves like a characteristic,
decreases performance.

Diff B/W Display & Navigational Attributes:
 If you select a navigational attribute at the Info Provider level it behaves like a
characteristic, but a display attribute does not behave like a characteristic.
 A display attribute is used only for display purposes in the report, whereas a
navigational attribute is used for the drill-down option in the report.
Exclusive Attribute:
a) If the 'Attribute Only' check box is unselected, it becomes an exclusive attribute.
b) It is stored in the attribute table ---> /P (Master Data table).
c) An exclusive attribute can be used as a characteristic in the cube.
Time Dependent Attribute:
Whenever data has to be maintained for different time intervals with different data, a time
dependent attribute is used.
a) It is stored in the /Q table (time-dependent master data table).
b) 2 new fields (Date To & Date From) are added by default.
c) Date To acts as part of the primary key.
d) 'N' number of records can be maintained with time intervals.
Time Dependent Navigational Attribute:
It is both time dependent & navigational.
a. It is stored in the /Q table.
b. Enables the drill-down facility.
c. Displays with respect to the key date in the query properties.
Compounding Attribute:
When the value of one Info Object is completely dependent on the value of another Info
Object, we go for a compounding attribute.
Ex: Company Code ---> Controlling Area, Storage Loc ---> Plant,
GL Account ---> Chart of Accounts.
a. A compounding attribute is also called a 'Superior' Info Object.
b. If a compounding attribute is used, it degrades load performance.
Transitive Attribute:
a. 2nd level navigational attribute.
b. A navigational attribute that itself has one more attribute attached to it.
Ex: Sales Emp -----> Sales Office (Nav. Attr) -----> Location.

BI Content Installation: T_CODE: RSORBCT
BI Content versions: 1. N - New
2. A - Active
3. M - Modified
4. D - Delivered
 All objects are given by SAP in the Delivered version.
 If the object is in the Delivered version it can't be used; it should be converted to the
Active version.
 Is it possible to install only the cube from Delivered to Active?
Ans: No, a cube always contains characteristics & key figures.
 Is it possible to install only the query from Delivered to Active?
Ans: No, queries always depend on Info Providers.

Steps to install BI Content objects:

1. Source system assignment
2. Grouping tab
3. Collection mode
4. Collect required objects
5. Version comparison
6. Install tab

Step 1: Source system assignment

Step 2: Grouping

1. Only Necessary Objects
2. In Data Flow Before
3. In Data Flow Afterwards
4. In Data Flow Before & After
Only Necessary Objects:
Collects only the objects that are the minimum requirement for the selected object.
Ex: Info Cube: Info Area + Info Objects + Info Cube
In Data Flow Before:
Also collects the objects which submit data to the collected objects.
Ex: Info Cube: Info Area + Info Objects + Info Cube + Data Source +
Transformations + DTP + IP.
In Data Flow Afterwards:
Also collects the objects which obtain data from the collected objects.
Ex: Info Cube: Info Area + Info Objects + Info Cube + Queries + Workbooks +
Web Templates + Roles + Reports.
In Data Flow Before & After:
Collects the objects which submit data to the collected objects as well as those which
obtain data from the collected objects.
Ex: Info Cube: Info Area + Info Objects + Info Cube + Data Source + Transformations +
DTP + IP + Queries + Workbooks + Web Templates + Roles + Reports.

Step 3: Collection Mode:

1. Collect Automatically
2. Collect Manually
Collect Automatically: The system automatically collects for us whatever dependent objects
submit data and obtain data.
Collect Manually: If you select Collect Manually then you have to search and collect the
objects one by one.
Step 4: Version Comparison
1. No active version is available ----> select INSTALL.
2. If an active version is available ----> check the version.

Step 5: Install:

1. Simulate Installation
2. Install
3. Install in Background
4. Installation and Transport

Simulate Installation: Logically checks for errors without installing; i.e. it checks whether
installing physically would create any errors.
Install: Starts installing in the foreground.
Install in Background: Starts installing as a background process.
Installation and Transport: Installs and also asks for a transport request.
Display:
1. List
2. Hierarchy

MATCH(X) & COPY: Use this option to avoid overwriting previous changes; it merges your
changes with the new changes.
Ex: If an object is already in the Active version and you install the same object again,
select this check box; otherwise it overwrites the existing object.

Key Figure Types

1. Cumulative.
2. Non-Cumulative.
3. Virtual.
Non-Cumulative:
Non-cumulative means the value of the key figure changes frequently.
Ex: The stock value in a warehouse differs from the start of the day to the end of the day;
there is a lot of inflow of material and outflow of finished goods or products.
 The latest value should be given based on year/date.
 When we use a non-cumulative key figure in a cube, the cube is called a
non-cumulative cube.
Scenarios: 1. When extracting data for the HR module.
2. To calculate the number of employees.
3. To calculate stock.
Ex: Company    Company Code   Year   No. of Employees
    Reliance   RELI           2006   3000
    Reliance   RELI           2007   5000
Non-Cumulative (latest value):
    Company Code   No. of Employees
    RELI           5000
Cumulative:
 Cumulative means the value of the key figure does not change frequently.
 It is used for summarised data.
    Company Code   No. of Employees
    RELI           8000
Go to Info Object --> No. of Employees --> R.C --> Change --> Aggregation tab
Exception Aggregation (Last Value) ----> Activate.
Non-Cumulative with Inflow & Outflow:
Coming in -----> Inflow    Going out ------> Outflow
Ex:                            Inflow   Outflow
01.08.2008   SL1   M1          50
01.08.2008   SL2   M2          90
02.08.2008   SL1   M1                   40
02.08.2008   SL2   M2                   60
17.08.2008   SL1   M1          30
17.08.2008   SL2   M2          50

SL1 M1 -----> 50 - 40 + 30 = 40
SL2 M2 -----> 90 - 60 + 50 = 80

 Stock Overview Report: Opening Stock + Inflow – Outflow.
 Historical movements 01.08.2008 to 16.08.2008 --> Init.
 The initial stock is loaded only once.
 Latest movements 17.08.2008 -----> Delta.
 Movements are loaded daily.

Routines:
(http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/6090a621-c170-
2910-c1ab-d9203321ee19?QuickLink=index&overridelayout=true)

1. Start Routine.
2. End Routine.
3. Expert Routine.
4. Characteristic Routine / Field Routine.
Start Routine:
The start routine is available with the transformation. It is triggered before the
transformation. Generally the start routine is used for filtering records or fetching global
data. The start routine works on the same structure as that of the source.
Example:
I need to write a start routine. My data source has three columns:
1. Rec type  2. Key code  3. Text
Rec type --------- Key code ----- Text
Region ----------- ASA ----------- Asia
Region ----------- USA ----------- America
Region ----------- AUS ----------- Australia
Country ---------- IND ----------- India
Country ---------- SAF ----------- South Africa
My requirement is that I need to extract the data only if the Rec type = "REGION".
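A minimal start routine sketch for this requirement (the source field name /BIC/ZRECTYPE is an
assumption; use the field name from your own source structure):

* SOURCE_PACKAGE holds all records of the current data package; deleting
* from it here filters the records before the transformation rules run.
DELETE SOURCE_PACKAGE WHERE /bic/zrectype <> 'REGION'.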

End Routine:
The end routine is available with the transformation. It is triggered after the
transformation. Generally the end routine is used for updating data based on existing data.
The end routine works on the same structure as that of the target.
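A minimal end routine sketch (the target field /BIC/ZREGION and the default value are
assumptions for illustration; _ty_s_TG_1 is the target-structure type generated into the
routine class):

FIELD-SYMBOLS <result_fields> TYPE _ty_s_tg_1.

* RESULT_PACKAGE holds the already-transformed records of the package
LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
* Fill a default where the transformation rules left the field empty
  IF <result_fields>-/bic/zregion IS INITIAL.
    <result_fields>-/bic/zregion = 'UNKNOWN'.
  ENDIF.
ENDLOOP.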

Expert Routine:
To create an expert routine, go to the Edit menu and select Expert Routine. The expert
routine triggers without any transformation rule. All existing rules are deleted once you
develop an expert routine. Generally the expert routine is used for fully customised rules.
SOURCE_PACKAGE has the same structure as the source of the transformation.
RESULT_PACKAGE has the same structure as the target object.

Rule Types:

1. Constant
2. Direct Assignment
3. Formula
4. Initial (characteristics)
5. Read Master Data
6. Routine
7. No Transformation (key figures only)
Constant: Irrespective of the data coming from the source system, if you want to make some
field value constant, we use the constant type of rule (a fixed value for all records).
Direct Assignment: Maps the value of the source field to the target field.
Formula: When we want to transfer the data from the data source to the target by
implementing simple logic, we use the formula type of rule.
Routine: When we want to transfer the data from the data source to the target by
implementing complex logic, we use the routine type of rule.

Initial: The field is not filled; it remains empty.

Open Hub Destination: T_CODE: RSBO

The Open Hub Destination distributes data from the SAP system to other external systems.
This is a tool with which you can send data from BW to different targets.
Open Hub supports 3 types of targets --------> Flat file, Relational (database) table,
Third-party tool.
Open Hub data sources are ---------> Cube, DSO, Info Object.
Ex: Info Cube to flat file.
RSA1 ---> click on Open Hub Destination ---> go to the Info Area ---> give Open Hub
Destination: XXXX & description ----> give the object type & name -----> select Destination
Type: Database Table, File, or Third-party tool; give the file name & directory ----> save
and activate ----> create T/R & DTP.

Creating a Currency Translation Type

1. The transaction code for creating a currency translation type in BI 7.0 is RSCUR.
2. The different parameters that determine the exchange rate for currency conversion are
the exchange rate type, the source and target currencies and the time reference for the
translation.
3. TCURR table – all exchange rates.
4. Create one variable ----> exchange rate type -----> M rate, P rate (medium/average
rate) -----> at run time the user gives INR, USD, ...
5. T006 table – to see all units of measurement.
6. To bring the exchange rates up to date: Source Systems → select the SAP R/3
connection → context menu → Transfer Exchange Rates.
7. To replicate the currencies, units of measurement, fiscal year variants & factory
calendar:
Source Systems → select the SAP R/3 connection → context menu →
Transfer Global Settings → select the contents which you want to replicate → Execute.
OB08 – We can define our own exchange rates; by default they go into the TCURR table.

Read Mode of the Query:

A ------> Query reads all data at once.
X ------> Query reads data during navigation.
H ------> Query reads data when you navigate or expand hierarchies.
Go to RSRT -----> <query name> -----> Properties -----> Read Mode -----> select the option
as per the requirement.
We need to migrate the transfer rules first, before migrating the data source; this copies
all transformation mappings as well as the routines from the transfer rules to the newly
created transformation. If you migrate the data source first, you will lose the transfer
rules.

Migrating from 3.x to 7.0:

Step 1: Migrate the transfer rules
Go to Transfer Rules ------> right-click the transfer rules --------> Additional Functions
-------> Create Transformation.
Next select the Info Source ---------> Copy Info Source 3.x to New Info Source.
Give the Info Source name.
Next map & activate the transformation.
Step 2: Migrate the data source
Go to the Data Source --------> right-click the Data Source ---------> click Migrate
--------> click With Export.
With Export ---------> changes 3.x to 7.0 and can be restored from 7.0 back to 3.x.
Without Export ---------> only 3.x to 7.0; it is not possible to change back from 7.0 to 3.x.
Step 3: Migrate the update rules
Go to Update Rules ---------> right-click the update rules ---------> Additional Functions
-------> Create Transformation.
Next select the Info Source ---------> Use Available Info Source.
Next map & activate the transformation.
 How to restore a 7.x data source to a 3.x data source:
Go to RSDS ----> give the data source name & source system name.
Next go to the Data Source menu ----> click Restore 3.x Data Source.

Attribute Change Run:

Whenever we load master data we have to perform the attribute change run; then the
master data tables get refreshed, and if there are any navigational attributes or
hierarchies used in aggregates, that data is refreshed as well, so that the aggregates give
the present truth.

RSA1 ---> Tools menu ---> Apply Hierarchy/Attribute Change Run.

DTP & IP:
A DTP follows a one-to-one mechanism, i.e. there is one DTP per data target, whereas an IP
loads all data targets at once.
A DTP can run full/delta loads to the same target via the same DTP, which is not possible
via the same IP.
IP Delta: checked record-wise.
DTP Delta: checked request-wise.

Queries:
 Repeat ---> Repeat creates a new instance... an IP uses Repeat.
Repair ---> Repair continues the same instance... a DTP uses Repair.
 DATA Packet: It is a group of logically related data in a single unit.
A data packet is related to the Info Package, which is used to load data from the source
system to BI (PSA). As per the SAP standard, we prefer to have 50,000 records per data
packet.
DATA Package: It is a group of data packets.
A data package is related to the DTP, which is used to load data from the PSA to
further data targets. Start and end routines work at package level, so the
routine runs for each package one by one.
 Replication:
Bringing the Data Source from R/3 to BW is called replication.
 Virtual Cube based on BAPI: In the case of a virtual cube based on a BAPI we can extract
the data from any other external system with the help of the BAPI (RFC) at run time.
 How to delete the (init) request in an Info Pack?
Go to the Info Pack ----> Schedule menu ------> Initialization Options for Source System
----> select the request -----> Delete.
 In a cube we have 5 requests; if we delete the 3rd request it won't delete the 4th
& 5th requests, but in a DSO if we delete the 3rd request it deletes the 4th & 5th
requests also.
 Can you add a new field at the DSO level?
Ans: Yes, a DSO is nothing but a table.
 Where is the PSA data stored?
Ans: In the PSA table.
 DTP Info:
http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/16761

 When we do the Roll-Up on an Info Cube, are records moved to the aggregates or moved
from /F to /E?
Ans: When you do the Roll-Up, records move from the cube to the aggregates, not from /F
to /E.

 How to find where the report data was fetched from (cube or aggregate)?
Go to RSRT ----> Query: RAJAQRY ------> Execute + Debug ----> Aggregates ----->
Display Aggregate Found.

 In the Extended Star Schema, why is master data maintained outside the cube?
Ans: For reusability.

Partitioning parameters -----> 0CALMONTH, Fiscal Year/Period.

 How many days can you keep the data in the PSA?
Ans: We can set the time.
 Diff B/W PSA & DSO.
Ans: PSA: This is an intermediate data container; it is not a data target.
DSO: Reporting can be done on a DSO; it is a data target and its data is overwritable.
You can do reporting on a DSO; on the PSA we cannot do reporting directly.
 Lowercase letters not allowed?
Ans: Uncheck the Lowercase Letters check box ----> under the General tab ---> in the Info
Object.
 Compression advantages & disadvantages.
ADV: Compressed Info Cubes need less storage space & are faster for retrieving information.
DISADV: Once a cube is compressed, you cannot alter the information; this can be a
big problem.
 When you ROLL-UP a particular request ID, the earlier request IDs are also rolled up.
 Where do you get the Exception Aggregation option?
Go to the KF tabs ---> Aggregation tab ---> Aggregation: SUM, MIN, MAX.
Exception Aggregation: SUM, MIN, MAX, AVG, LAST VALUE,
FIRST VALUE, VARIANCE, ...
How to delete the data after compression:
Request-based deletion is not possible after compression; you need to go
for selective deletion.
Go to Info Cube Manage --> first tab --> you can find the selective deletion option there.
 Data is available in R/3 but does not come to the BI side. Why? What to do?
Ans: Use a Repair Full Request.
 Where do you get the Authorization Relevant option?
Go to the characteristic's tabs ---> Business Explorer tab ---> Authorization Relevant
option.

Diff B/W Template & Reference

Reference: If we have an Info Object 'A' and we create the Info Object 'B' by taking 'A' as
a reference, all the properties of 'A' are copied to 'B' & the data is shared as well; we
can't load data separately and we can't change the properties of 'B'.
Template: If we have an Info Object 'A' and we create the Info Object 'C' by taking 'A' as
a template, all the properties of 'A' are copied to 'C' & we can change the properties of
'C' & we can load data separately.
[Go to Info Object ---> Char. Catalog --> R.C ---> Create Info Object ---> here there are
2 options ---> 1. Reference Char.
2. Template]

Diff B/W Reference & BI Objects:

Ans: Whenever you create a reference it creates a SID table, but BI objects do not create
a SID table.
How many hierarchy levels can be created for a characteristic Info Object?
Ans: A maximum of 98 levels.

I have loaded the '#' symbol from ECC to the PSA and it works fine, but when I try to load
this data from the PSA to the cube I get an error – INVALID CHARACTERS?
Ans: Write code in the field routine for that particular field if you need to eliminate
the '#'.
Sample code (say your source field is KOKRS):
RESULT = SOURCE_FIELDS-kokrs.
REPLACE ALL OCCURRENCES OF '#' IN RESULT WITH space.
CONDENSE RESULT.
I loaded data into a DSO using flat file extraction. The problem is that daily I am loading
some millions of records, and for this I have written some ABAP code. Still I face a
performance problem; how do I rectify this issue?
Ans: 1. Better to load the data to a Write-Optimized DSO from the flat file, then load data
from the Write-Optimized DSO to a Standard DSO (in a WODSO data won't overwrite; it
maintains detailed data).
2. Better to uncheck the SID generation check box.

Diff B/W IP Delta & DTP Delta loads:

IP Delta: checked record-wise.
DTP Delta: checked request-wise.

 In BI 7.0 the PSA is mandatory.
In BW 3.x the PSA is not mandatory.

Types of DTPs:
1. Standard DTP
2. Direct Access DTP
3. Error DTP
4. RDA DTP
Diff B/W Normal DTP & Direct Access DTP?
1. A normal DTP generates a request number.
2. A Direct Access DTP doesn't generate any request number.
Why create a DSO?
1. The main purpose of a DSO is the overwrite functionality.
2. Detailed data is available in the DSO.
How to handle duplicate records?
Ans: Go to the master data DTP ---> Update tab ---> select the Handle Duplicate Records
check box.
Go to SE14 ---> Error Stack table ----> delete records.
Go to SE16 ---> see the data in the Error Stack table.
The IP is scheduled 2/3 times, so the PSA has 2/3 requests (1000 records in total, each
request 200 records). Then:
1. What happens on a full load? -----> It picks all the records.
2. What happens on a delta load? ----> It picks all the records (each request once).
What is reconstruction?
This is the process by which you reload data from the PSA (or an ODS) into the cube/ODS.
Can a number of data sources have one Info Source?
Ans: Yes, of course. For example, for loading texts and hierarchies we use different data
sources but the same Info Source.
There is one ODS and 4 Info Cubes. We send data at one time to all the cubes and one cube
gets a lock error. How can you rectify the error?
Ans: Go to T-code SM66, see which one is locked, take that PID from there, go to T-code
SM12 and unlock it. Lock errors like this happen when loads are scheduled in parallel.
How many characteristics can be found in a dimension table?
Ans: 248.
What is the limit on key figures in a fact table?
Ans: 233.
When are Dim IDs created?
Ans: When transaction data is loaded into the Info Cube.
When are SIDs generated?
Ans: When master data is loaded into the master data tables (attributes, texts,
hierarchies).

How would we delete the data in an ODS?
Ans: By request IDs, selective deletion & change log entry deletion.

Tables starting with – description:

M table - View over all master data tables (combines P and Q)
P table - Time-independent master data table (/BIC/P0MATERIAL)
Q table - Time-dependent master data table (/BIC/Q0MATERIAL)

H table - Hierarchy table
K table - Hierarchy SID table
I table - SID hierarchy structure
J table - Hierarchy interval table

S table - SID table
X table - Time-independent SID table (interface between master data SIDs and
time-independent navigational attribute SIDs)
Y table - Time-dependent SID table (interface between master data SIDs and time-dependent
navigational attribute SIDs)
T table - Text table
F table - Fact table - direct data for the cube (B-Tree index)
E table - Fact table - compressed cube (Bitmap index)

How would we delete the data in the change log table of an ODS?
Ans: Context menu of the ODS → Manage → Environment → Change Log Entries.
What is NODIM?
Ans: NODIM returns the value of a key figure without its unit, so values with different
units can be combined; for example NODIM(5 l) + NODIM(5 kg) = 10.

Diff B/W /F & /E fact tables:

/F Fact ----> 1. The /F fact table is optimized for loading data, because it is
automatically partitioned by the package dimension.
2. In the /F fact table, deletion by request ID is still possible.
/E Fact ----> 1. The /E fact table is optimized for data requests (reporting), because the
Dim ID of the package dimension is set to zero, which reduces the key combinations.
2. In the /E fact table it is not possible to delete by request ID (the request IDs are
nullified).

 4 types of data sources:
1. Transaction data.
2. DS for master data attributes.
3. Texts.
4. Hierarchies.
 How to create a transport request?
Go to the particular object ---> double-click ---> Extras (menu) ---> Object Directory
Entry ---> give the package ---> request.
 Master data data sources:
1. 0PAYER ----> Payer
2. 0MAT ----> Material number
3. 0MATERIAL ----> Material
4. 0CUSTOMER ----> Customer number
5. 0PLANT ----> Plant
