

SAP SD FLOW
SAP | January 27, 2014 | SAP ABAP, SAP BI/BW, Tutorials
Sales and Distribution:
The Sales and Distribution (SD) module is a part of logistics and handles the entire order-to-delivery process. It is fully integrated with the other modules of the SAP system, such as MM and PP.

SD covers many phases, such as inquiry, quotation, sales order, sales returns, credit management, pricing, tax determination, and goods delivery.

SD Flow:
Inquiry: Once we receive an inquiry from the customer, as a vendor we check whether we can deliver the goods under the customer's conditions.
Quotation: Once the inquiry is complete, as a vendor we send a quotation to that customer.
Sales Order: Once we receive the purchase order from the customer, as a vendor we raise a sales order. While raising the sales order we should know the partner functions:

Sold-to-party - the partner who raised the purchase order.

Ship-to-party - the partner to whom we deliver the goods.

Bill-to-party - the partner who receives the bill.

Payer - the partner who pays.

Goods Delivery: After raising the sales order, as a vendor we deliver the goods.
Billing: While delivering the goods, we send the billing document.
Finance: Once the goods are delivered and billed, the vendor's finance team interacts with the customer's finance team for settlement.
Transaction Codes:

Inquiry - VA11

Quotation - VA21

Sales Order - VA01

Goods Delivery - VL01N

Billing - VF01

Commonly used SAP SD tables:


Sales Documents:
VBAK - Sales Document: Header Data

VBAP - Sales Document: Item Data

VBUP - Sales Document: Item Status

VBUK - Sales Document: Header Status and Administrative Data

VBFA - Sales Document Flow

Delivery Tables:
LIKP - SD Document: Delivery Header Data

LIPS - SD Document: Delivery Item Data

Customer Tables:
KNA1 - General Data in Customer Master

KNB1 - Customer Master (Company Code)

KNB5 - Customer Master (Dunning Data)

KNBK - Customer Master (Bank Details)

KNVV - Customer Master Sales Area Data

Pricing Tables:
KONV - Conditions (Transaction Data)

KONH - Conditions (Header)

KONP - Conditions (Item)

Billing Tables:
VBRK - Billing Document: Header Data

VBRP - Billing Document: Item Data

Shipping Tables:
VEKP - Handling Unit Header Table

VEPO - Packing: Handling Unit Item (Contents)

Vendor Tables:
LFA1 - Vendor Master (General Section)

LFB1 - Vendor Master (Company Code)

LFB5 - Vendor Master (Dunning Data)

LFBK - Vendor Master (Bank Details)
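To make the relationships between these tables concrete, here is a hedged, generic-SQL sketch (the order number is a placeholder) that reads a sales order header and items and follows the document flow in VBFA to the subsequent deliveries and invoices:

SELECT k.VBELN   AS sales_order,
       p.POSNR   AS item,
       p.MATNR   AS material,
       f.VBELN   AS follow_on_document,
       f.VBTYP_N AS follow_on_category   -- 'J' = delivery, 'M' = invoice
FROM VBAK k
JOIN VBAP p
  ON p.VBELN = k.VBELN                   -- header-to-item join
LEFT JOIN VBFA f
  ON f.VBELV = k.VBELN                   -- preceding document
 AND f.POSNV = p.POSNR                   -- preceding item
WHERE k.VBELN = '0000012345';            -- placeholder order number

Reading VBFA this way is how the complete inquiry-to-billing chain of the SD flow above can be reconstructed for any document.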

Enjoy

SAP MM FLOW
SAP | January 26, 2014 | SAP ABAP, SAP BI/BW, Tutorials

Materials Management:
The Materials Management (MM) module is a part of logistics and helps manage the end-to-end procurement and logistics business process. It is fully integrated with the other modules of the SAP system (SD, FI, CO, PM, QM, WM).

The MM module covers many phases of materials management, such as materials planning and control, purchasing, goods receipt, inventory management, and invoice verification.

SAP MM Flow:
Purchase Requisition: First, the customer prepares a purchase requisition for the required goods.
Request for Quotation: As a customer, we send an RFQ to different vendors.
Vendor Evaluation: Once we receive quotations from different vendors, as a customer we analyze all the quotations and finally select one vendor.
Purchase Order: After vendor evaluation, as a customer we raise a purchase order.
Goods Receipt: We receive the goods with a goods receipt; as a customer we need to check the goods receipt.
Invoice Verification: Once we receive the goods and the bill from the vendor, as a customer we verify all the goods; if there is any problem with the goods, the customer informs the vendor.
Finance: Once the goods and billing are received, the customer's finance team interacts with the vendor's finance team for settlement.
Transaction codes:

Purchase Requisition - ME51N

Request for Quotation - ME41

Vendor Evaluation - ME61

Purchase Order - ME21N

Goods Receipt - MIGO

Invoice Verification - MIRO

Goods Issue - MB1A

Physical Inventory - MI01

Frequently used tables:


MARA - General Material Data

MARC - Plant Data for Material

MARD - Storage Location Data for Material

MAKT - Material Descriptions

MVKE - Sales Data for Material

MSEG - Document Segment: Material

MBEW - Material Valuation

MKPF - Header: Material Document

EKKO - Purchasing Document Header

EKPO - Purchasing Document Item

EBAN - Purchase Requisition

EBKN - Purchase Requisition Account Assignment

EINA - Purchasing Info Record: General Data

EINE - Purchasing Info Record: Purchasing Organization Data

LFA1 - Vendor Master (General Section)

LFB1 - Vendor Master (Company Code)

T023 - Material Groups

T024 - Purchasing Groups

T156 - Movement Type
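As with the SD tables, a short generic-SQL sketch shows how the purchasing tables hang together (the purchase order number is a placeholder): EKKO holds the header, EKPO the items, LFA1 resolves the vendor, and EKPO's BANFN field points back to the originating purchase requisition in EBAN.

SELECT h.EBELN AS purchase_order,
       i.EBELP AS item,
       i.MATNR AS material,
       t.MAKTX AS description,
       v.NAME1 AS vendor,
       i.BANFN AS requisition            -- link back to EBAN
FROM EKKO h
JOIN EKPO i
  ON i.EBELN = h.EBELN                   -- header-to-item join
JOIN LFA1 v
  ON v.LIFNR = h.LIFNR                   -- vendor master lookup
LEFT JOIN MAKT t
  ON t.MATNR = i.MATNR
 AND t.SPRAS = 'E'                       -- English descriptions
WHERE h.EBELN = '4500000123';            -- placeholder PO number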

Enjoy


Link Between SAP SD and MM Flows


SAP | January 27, 2014 | SAP ABAP, SAP BI/BW, Tutorials

What is Materials Management?

The Materials Management (MM) module is a part of logistics and helps manage the end-to-end procurement and logistics business process. It is fully integrated with the other modules of the SAP system (SD, FI, CO, PM, QM, WM).

The MM module covers many phases of materials management, such as materials planning and control, purchasing, goods receipt, inventory management, and invoice verification.

What is Sales and Distribution?

The Sales and Distribution (SD) module is a part of logistics and handles the entire order-to-delivery process. It is fully integrated with the other modules of the SAP system, such as MM and PP.

SD covers many phases, such as inquiry, quotation, sales order, sales returns, credit management, pricing, tax determination, and goods delivery.

Link Between SAP SD and MM Flows:


1. First, the customer prepares a purchase requisition for the required goods and sends a request for quotation (RFQ) to different vendors.

2. Once we receive the RFQ from the customer, as a vendor we check whether we can deliver the goods under the customer's conditions.

3. Once the inquiry is complete, as a vendor we prepare the quotation.

4. Once the quotation is prepared, as a vendor we send it to that customer.

5. Once we receive quotations from different vendors, as a customer we analyze all the quotations and finally select one vendor.

6. After vendor evaluation, as a customer we raise the purchase order.

7. Once we receive the purchase order from the customer, as a vendor we raise the sales order.

8. While raising the sales order we should know the partner functions:

Sold-to-party - the partner who raised the purchase order.

Ship-to-party - the partner to whom we deliver the goods.

Bill-to-party - the partner who receives the bill.

Payer - the partner who pays.

9. After raising the sales order, as a vendor we need to deliver the goods.

10. The goods are delivered.

11. We receive the goods with a goods receipt; as a customer we need to check the goods receipt.

12. As a vendor, we prepare the bill.

13. While delivering the goods, we send the billing document.

14. Once we receive the goods and bills from the vendor, as a customer we verify all the goods. If there is any problem with the goods, the customer informs the vendor.

15. If all the goods are received, as a customer we inform the finance team.

16. Once the goods are delivered and billed, the vendor's finance team interacts with the customer's finance team.

17. Finally, the two finance teams settle the payment.
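If both partners run SAP, the link between the two flows can also be seen in the data itself: the customer's purchase order number typically lands in the vendor's sales order header as the customer reference (VBAK-BSTNK). The sketch below is purely illustrative and assumes both document sets are visible in one system (for example, an intercompany scenario):

SELECT po.EBELN AS customer_purchase_order,
       so.VBELN AS vendor_sales_order,
       so.AUDAT AS sales_order_date
FROM EKKO po
JOIN VBAK so
  ON so.BSTNK = po.EBELN;                -- customer PO number stored as the sales order reference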

Transaction Codes:
Purchase Requisition - ME51N

Request for Quotation - ME41

Vendor Evaluation - ME61

Purchase Order - ME21N

Goods Receipt - MIGO

Invoice Verification - MIRO

Goods Issue - MB1A

Physical Inventory - MI01

Inquiry - VA11

Quotation - VA21

Sales Order - VA01

Goods Delivery - VL01N

Billing - VF01

Enjoy

Effective Date Transformation in SAP BODS


SAP | February 11, 2014 | SAP BODS, Tutorials
Summary:
The Effective Date transform is ready-made logic that helps define time-dependent dimensions. Time-dependent dimensions are attributes whose validity expires.

The Effective Date transform defines the validity of each and every record in a time-dependent table.

It calculates an Effective-to (end date) value for each Effective Date (start date) in the input dataset. The logic behind the Effective-to value depends on the sequence column provided.

A default date (e.g. 9999.12.31) is assigned to the latest or active record, whose validity cannot yet be defined.

Prerequisite:
The input dataset must contain an Effective Date field.

(Note: If the field is named EFFDT, Data Services automatically selects it as the Effective Date column; otherwise, we have to select the field manually.)

The input to the Effective Date transform must define at least one primary key.

The sequence column must be defined as a primary key, which helps avoid primary key constraint errors.

NOTE:

A default date is provided in the Effective Date transform. It can be changed to the required value manually.

If no sequence column is defined, the Effective-to value of a record will simply equal the Effective Date value of the next record, regardless of the logic.

Example Scenario:
Figure 1 shows a sample input dataset with employee designation details and the start date in the EFFDT field.

Figure 1: Sample Source Data

The Effective-to value, or end date, is determined using the Effective Date transform, as shown in the sample target in Figure 2.

Figure 2: Sample Target Data

The logic behind calculating the Effective-to value is:

Effective-to value of the nth record = Effective Date of the (n+1)th record.
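In SQL terms, this is exactly what a LEAD window function computes. The sketch below is a generic-SQL analogue (the table name and the non-EFFDT column names are illustrative, based on the scenario above); it derives the Effective-to value per employee and assigns the default date to the latest record:

SELECT EMPID,
       DESIGNATION,
       EFFDT,
       COALESCE(
         LEAD(EFFDT) OVER (PARTITION BY EMPID ORDER BY EFFDT),
         DATE '9999-12-31'               -- default date for the active record
       ) AS EFFECTIVE_TO_COLUMN
FROM EMP_DESIGNATION;

(The Query_1 mapping shown later in Figure 6 would additionally subtract one day from this value.)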

Figure 3: Effective Date Transformation Object Hierarchy

Figure 4: Effective Date Transformation job ETL flow

In Figure 4, the target T1_Imperio_Data_Transfer shows the actual behavior of the Data Transfer transform.
Figure 5: Effective Date Transformation Defining rules

In Figure 5, the output schema (Schema Out) contains an extra field, EFFECTIVE_TO_COLUMN, generated by the Effective Date transform.

The default Effective-to date value, 9999.12.31, is assigned to the latest or active records.

Figure 6: Query Transformation Mapping rules

Figure 6 shows the mapping rules in Query_1 for the target T2_Imperio_Eff_Date.

The mapping rule for the Effective_To_Column field derives the Effective-to value as one day before the Effective Date value of the next record. (Note: in target T1_Imperio_Eff_Date, the Effective-to value of a record equals the Effective Date value of the next record, which is the actual logic inside the Effective Date transform.)

The mapping rule for the No_Of_Days field calculates the number of days between the start and end dates.

Attachments:
Source File:
Use the file below as the source for working through the transformation.

Click Here to Download Source Data.

.atl File:
Import the .atl file below into the Data Services Designer to find the job for the Effective Date transformation.

Click Here to Download ATL Files.


That's it.

Enjoy

Hierarchy Flattening Transformation in SAP BODS


SAP | February 11, 2014 | SAP BODS, Tutorials
Hierarchy Flattening Transformation:

Summary:
Hierarchy Flattening is ready-made logic, in the form of a transform, used to load hierarchies in table form.

The Hierarchy Flattening transform is used to group hierarchies (master data).

It supports only tree-structured inputs.

In BusinessObjects Data Services, hierarchies are loaded in two fashions:

1. Horizontal

2. Vertical

Horizontal Flattening:

The root-to-node relationship is flattened.

Each row in the output describes a single node and the path to that node from the root node.

The number of records in the output equals the number of nodes in the input tree.

Enables drill-up and drill-down.
Vertical Flattening:

It takes a node and defines its relationship with all the other nodes.

Each record of the output describes a single relationship between an ancestor and a descendent and the number of nodes the relationship includes (see the SQL sketch below).

Enables global filtering processes.
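A standard-SQL recursive CTE captures the same ancestor-descendant expansion (table and column names here are illustrative; SQL Server omits the RECURSIVE keyword). The input is a flat parent/child table, as the transform requires:

WITH RECURSIVE rel (ancestor, descendent, depth) AS (
    SELECT parent_id, child_id, 1       -- direct parent-child relationships
    FROM hierarchy_src
    UNION ALL
    SELECT r.ancestor, h.child_id, r.depth + 1
    FROM rel r
    JOIN hierarchy_src h
      ON h.parent_id = r.descendent     -- walk one level further down
)
SELECT ancestor, descendent, depth
FROM rel;                               -- one row per ancestor-descendant pair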
Prerequisite:
The input to the Hierarchy Flattening transform must be a tree in flat (parent/child) structure.

NOTE:

In a hierarchy flattening scenario, horizontal flattening is mandatory, whereas vertical flattening is optional.
Example Scenario:
Figure 1 shows a sample input dataset, which is the flat structure of a given tree.

Figure 1: Sample Source Data

Figure 2 shows sample target data: the horizontal flat structure of the input dataset, obtained using the Hierarchy Flattening transform (horizontal fashion).

Figure 2: Sample Target Data

Both horizontal and vertical flattening outputs have a predefined structure, which is discussed in later sections.

Figure 3: Hierarchy Flattening Transformation Object Hierarchy

Figure 4: Hierarchy Flattening Transformation job structure

Figure 5: Hierarchy Flattening Transformation Job Horizontal ETL flow


Figure 6: Hierarchy Flattening Transformation Job Vertical ETL flow

Figure 7: Horizontal Flattening Transform Defining rules

Select the Horizontal flattening type for horizontal hierarchy flattening, as shown in Figure 7.

The output of horizontal hierarchy flattening has a predefined structure, as in the Schema Out of the figure above.

The number of levels depends on the maximum depth of the input tree.

Figure 8: Vertical Flattening Transform Defining rules

Select the Vertical flattening type for vertical hierarchy flattening, as shown in Figure 8.

The output of vertical flattening has a predefined structure, as in the Schema Out of the figure above.

Attachments:
Source File:
Use the file below as the source for working through the transformation.

Click Here to Download Source Data.

.atl File:
Import the .atl file below into the Data Services Designer to find the job for the Hierarchy Flattening transformation.

Click Here to Download ATL File.


That's it.

Enjoy

Map Operation Transformation in SAP BODS


SAP | February 11, 2014 | SAP BODS, Tutorials
Map Operation Transformation

Summary:-
The Map Operation transform enables us to change the operation code of records.

The operation code is a flag that indicates how each row is applied to the target.

There are four operation codes, plus a Discard option, as described below.

Normal:- Creates a new row in the target. All rows in a dataset are flagged as NORMAL when they are extracted from the source. A row flagged as NORMAL is loaded into the target. It is the most common flag used by transforms.
Insert:- Creates a new row in the target.
Note:- Only the History Preserving and Key Generation transforms can accept datasets with rows flagged as INSERT as input.
Delete:- A row flagged as DELETE is not loaded into the target.
Only the History Preserving transform, with the Preserve delete rows as update rows option selected, can accept datasets with rows flagged as DELETE.

Update:- Overwrites an existing row in the target table.

Only the History Preserving and Key Generation transforms accept datasets with rows flagged as UPDATE as input.

Discard:- If you select this option, the rows are not loaded into the target.
Example Scenario:-
In the Figure 1 source file, the changed (updated and deleted) records need to be inserted into different template tables: if the record MT03 has been updated to MT01, that row needs to be inserted into a separate table, and if the second row is deleted, it has to be inserted into another template table.

Sequence Column | Material | City Code | Quarter
101 | MT03 | CT01 | Q1
102 | MT02 | CT90 | Q2

Figure 1:- Sample Source Data


In Figure 2 we can observe that the updated record has been inserted.

Sequence Column | Material | City Code | Quarter
101 | MT01 | CT01 | Q1

Figure 2:- Sample Target Data 1 (update)


In Figure 3 we can observe that the deleted record has been inserted.

Sequence Column | Material | City Code | Quarter
102 | MT02 | CT90 | Q2

Figure 3:- Sample Target Data 2 (delete)


Figure 4 below shows the flow of objects in hierarchy format.

Figure 4:- Map Operation transformation object hierarchy

Figure 5 below shows the flow of the ETL job.

Figure 5:- ETL flow of the Map Operation transformation

We can see the source data in Figure 6 below.

Figure 6:- Source data

Delete Option:-
We can insert the deleted records into a separate table by changing the Delete option settings, as shown in Figure 7.

Figure 7:- Map Operation transformation rule for delete
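Map Operation does this routing inside the data flow; as a rough T-SQL analogue, SQL Server's OUTPUT clause can capture deleted rows into a separate table in one statement (the table and column names below are illustrative, matching the sample data above):

DELETE FROM MaterialData
OUTPUT DELETED.SequenceColumn,
       DELETED.Material,
       DELETED.CityCode,
       DELETED.Quarter
INTO MaterialData_Deleted               -- separate table holding deleted rows
     (SequenceColumn, Material, CityCode, Quarter)
WHERE SequenceColumn = 102;             -- the row deleted in the example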

Update Option:-
We can insert the updated records into a separate table by changing the Update option settings, as shown in Figure 8.

Figure 8:- Map Operation transformation rule for update

We can see that the updated records are inserted into a separate table, as shown in Figure 9.

Figure 9:- Updated row in target data

We can see that the deleted records are inserted into a separate table, as shown in Figure 10.

Figure 10:- Deleted row in target data

Source Files:-
Use the files below as sources for working through the transformation.
Click Here to Download Source Data.
Attachment of ATL File:-
Import the .ATL file below into the Data Services Designer to find the job for the above transformation.

Click Here to Download ATL File.


That's it.

Enjoy

Map CDC Operation Transformation in SAP BODS


SAP | February 11, 2014 | SAP BODS, Tutorials
Map CDC Operation Transformation:

Summary:
The Map CDC Operation transform enables source-based CDC (Changed Data Capture), or delta, implementation.

Microsoft SQL Server (2000 onwards) and Oracle (9i onwards) support source-based CDC.
Using the values in the Sequencing column and the Row operation column, the transform performs three functions:
It sorts input data based on the values in the Sequencing column box and (optionally) the Additional Grouping Columns box.
It maps output data based on the values in the Row Operation Column box. Source table rows are mapped to INSERT, UPDATE, or DELETE operations before being passed on to the target.
It resolves missing, separated, or multiple before- and after-images for UPDATE rows.
Implementing source-based CDC depends completely on source behavior.
Prerequisite:
A CDC datastore should be created for CDC tables, providing the required credentials on the BODS side.

A CDC subscription name should be provided in the source editor properties.

NOTE:

The Map CDC Operation transform reads data only from a CDC table source.

The CDC table on top of the SQL table should be created in the MS SQL Server database and imported into the CDC datastore in BODS.

Setting up Microsoft SQL Server for CDC:

The following steps summarize enabling CDC on an MS SQL Server (Enterprise edition) database.

Enable CDC on the database you are working on before creating a CDC table, as in the figure below.

Figure 1: Procedure to enable CDC on a database

Create the table on top of which you want to enable CDC, as in the figure below.

Figure 2: Query to create the base table

Enable CDC on top of the table, as in the figure below.

Figure 3: Procedure to create a CDC table on top of the base table
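On SQL Server 2008 and later, the same two steps can be scripted with the built-in CDC procedures; this is a sketch with illustrative database and table names (older releases rely on the replication-based setup described next):

USE SalesDB;                            -- illustrative database
EXEC sys.sp_cdc_enable_db;              -- step 1: enable CDC on the database

EXEC sys.sp_cdc_enable_table            -- step 2: enable CDC on the base table
     @source_schema = N'dbo',
     @source_name   = N'Material',      -- illustrative base table
     @role_name     = NULL;             -- no gating role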

The following steps summarize the procedure to configure the SQL Replication Server for your Microsoft SQL Server database.

On the Replication node of the Microsoft SQL Enterprise Manager, select the Configure Publishing and Distribution option. Follow the wizard to create the Distributor and distribution database. The wizard generates the following components, which you need to specify in the Datastore Editor when you define a Microsoft SQL Server CDC datastore:

MSSQL distribution server name

MSSQL distribution database name

MSSQL distribution user name

MSSQL distribution password


To create new publications that specify the tables you want to publish, the software requires the following steps:

1. Right-click Replication menu (or Local Publications menu), then select New Publication. The
New Publication Wizard opens.

2. In the New Publication Wizard, click Next.

3. Select the database that you want to publish and click Next.

4. Under Publication type, select Transactional publication, and then click Next to continue.

5. Select tables and other objects to publish as articles. Set columns to filter tables. Then click to
open Article Properties.

6. Set the following to False:


- Copy clustered index

- Copy INSERT, UPDATE and DELETE

- Create schemas at subscriber.

7. Set the Action if name is in use to keep existing object unchanged.

8. Set Update delivery format and Delete delivery format to XCALL <stored procedure> if you want before-images for UPDATE and DELETE commands. Click OK to save the article properties, then click Next.

9. Add filters to exclude unwanted rows from published tables (optional). Click Next.

10. Select Create a snapshot immediately and keep the snapshot to initialize subscriptions and, optionally, Schedule the snapshot agent to run at the following times. Click Next.

11. Configure Agent Security and specify the account connection settings. Click Security Settings to set the Snapshot Agent.

12. Configure the Agent Security account with system administration privileges and click OK.

13. Enter the login password for the Log Reader Agent by clicking Security Settings. Note that it has to be a login granting system administration privileges.

14. In the Log Reader Agent Security window, enter and confirm the password information.

15. Select Create the publication, then click Finish to create the new publication.

16. To complete the wizard, enter a Publication name and click Finish to create your publication.
Setting up BusinessObjects Data Services for CDC:
To use SAP BusinessObjects Data Services to read and load changed data from SQL Server databases, perform the following procedures in the Designer:

Create a CDC datastore for SQL Server

The CDC datastore option is available for SQL Server.

1. Open the Datastore Editor.

2. Enter a name for the datastore.

3. In the Datastore type box, select Database.


4. In the Database type box, select Microsoft SQL Server.

5. Check the Enable CDC box to enable the CDC feature.

6. Select a Database version. Change-data tables are only available from SQL Server 2000
Enterprise.

7. Enter a Database name (use the name of the Replication server).

8. Enter a database User name and Password.

9. In the CDC section, enter the following names that you created for this datastore when you
configured the Distributor and Publisher in the MS SQL Replication Server:

MSSQL distribution server name

MSSQL distribution database name

MSSQL publication name

MSSQL distribution user name

MSSQL distribution password

10. If you want to create more than one configuration for this datastore, click Apply, then click Edit and repeat step 9 for each additional configuration.

11. Click OK.

You can now use the new datastore connection to import metadata tables into the current repository.

Import metadata for SQL Server tables

Configure a CDC source


Figure 4: CDC Datastore configuration in Designer for MS SQL server

The steps to create a CDC datastore are described in the previous section.

Figure 5: Map CDC Operation Transformation object hierarchy

Figure 6: Map CDC Operation transform Job ETL flow

The Map CDC Operation transform reads data only from a CDC table, as in Figure 6.

One CDC table is accepted per job.

No other table is allowed in a job that contains a CDC table.


Figure 7: Map CDC Operation Transformation Defining rules

The schema out of the Map CDC Operation transform has the same structure as the CDC table in MS SQL Server, as in Figure 7.

CDC Table:
A CDC table is a separate table generated by a procedure that comes with the CDC package. It consists of two types of fields:

Business fields - the fields of the SQL table on which CDC is enabled.
CDC or technical fields - fields generated by SQL Server.
A CDC table cannot be created through the Data Manipulation Language like a SQL table.

After importing the table, two fields are generated by the Data Services software.

DI_SEQUENCE_NUMBER - acts like a surrogate ID or serial number column.

DI_OPERATION_TYPE - holds the operation type. Valid values for this column are:

I for INSERT

D for DELETE

B for the before-image of an UPDATE

U for the after-image of an UPDATE

If a record is updated in the base SQL table, the CDC table is updated with two records:

the before-image of the update, and

the after-image of the update.
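On the database side, SQL Server exposes the same change stream through a generated function per capture instance. This T-SQL sketch (the capture instance name dbo_Material is illustrative) returns one row per change with an __$operation code corresponding to the I/D/B/U values above (1 = delete, 2 = insert, 3 = before-image, 4 = after-image of an update):

DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Material');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

SELECT __$operation, *                  -- operation code plus business fields
FROM cdc.fn_cdc_get_all_changes_dbo_Material(
       @from_lsn, @to_lsn,
       N'all update old');              -- include before-images for updates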
CDC table source editor properties:

CDC options tab:

CDC subscription name - SQL Server CDC uses the subscription name to mark the last row read, so that the next job starts reading the CDC table from that position.

Enable check-point - once a check-point is placed, the next time the CDC job runs it reads only the rows inserted into the CDC table since the last check-point.

Get before-image for each update row - if checked, the database allows two images to be associated with an UPDATE row: a before-image and an after-image.

Attachment:
.atl File:
Import the .atl file below into the Data Services Designer to find the job for the Map CDC Operation transformation.

Click Here to Download ATL File.


That's it.

Enjoy

Architecture of SAP BusinessObjects Data Services (BODS)


SAP | January 26, 2014 | SAP BODS, Tutorials
SAP BusinessObjects Data Services is a data warehousing product that delivers a single enterprise-class software solution for data integration (ETL), data management (data quality) and text data processing.

Data Integration is the Extraction, Transformation and Loading (ETL) technology that moves enterprise data between heterogeneous sources and targets.

Sources and targets can be SAP applications (ERP, CRM, etc.), SAP BW, SAP HANA, any relational database (MS SQL Server, Oracle, etc.), any file (Excel workbook, flat file, XML, HDFS, etc.), unstructured text, Web services, and so on.

SAP BODS supports data integration in both batch mode and real-time mode.

Data Management, or Data Quality, processing cleanses, enhances, matches and consolidates enterprise data to produce an accurate, quality form of the data.

Text Data Processing analyzes and extracts specific information (entities or facts) from large volumes of unstructured text such as emails and paragraphs.
Architecture:

The following figure outlines the architecture of the standard components of SAP BusinessObjects Data Services.

Note: From the 4.x versions onward, a full SAP BusinessObjects BI Platform or SAP BusinessObjects Information Platform Services (IPS) must be installed on top of SAP BODS for user and rights security management. Data Services relies on the CMC (Central Management Console) for authentication and security features. In earlier versions this was done in the Management Console of SAP BODS.
BODS Designer:

SAP BusinessObjects Data Services Designer is a developer or designer tool. It is an easy-to-use graphical user interface where developers can design objects that consist of data mappings, transformations, and control logic.

Repository

A repository is the space in a database server that stores the metadata of the objects used in SAP BusinessObjects Data Services. Each repository must be registered in the Central Management Console (CMC) and associated with one or more Job Servers, which run the jobs you create.
There are three types of repositories used with SAP BODS:

Local repository:
A local repository stores the metadata of all the objects (such as projects, jobs, work flows, and data flows) and the source/target metadata defined by developers in the SAP BODS Designer.

Central repository:
A central repository is used for multi-user development and version management of objects. Developers can check objects in and out of their local repositories to a shared object library provided by the central repository. The central repository preserves all versions of an application's objects, so you can revert to a previous version if needed.

Profiler repository:
A profiler repository is used to store all the metadata of profiling tasks performed in the SAP BODS Designer.

In addition, the CMS repository stores the metadata of all the tasks done in the CMC of the SAP BO BI Platform or IPS, and the Information Steward repository stores the metadata of profiling tasks and objects defined in SAP Information Steward.

Job Server

The SAP BusinessObjects Data Services Job Server retrieves the job information from its respective repository and starts the data engine to process the job.

The Job Server can move data in either batch or real-time mode and uses distributed query
optimization, multi-threading, in-memory caching, in-memory data transformations, and parallel
processing to deliver high data throughput and scalability.

Access Server
The SAP BusinessObjects Data Services Access Server is a real-time, request-reply message broker that
collects message requests, routes them to a real-time service, and delivers a message reply within a
user-specified time frame.

Management Console

SAP BusinessObjects Data Services Management Console is a Web-based application with the following capabilities: Administration, Impact and Lineage Analysis, Operational Dashboard, Auto Documentation, Data Validation, and Data Quality Reports.

Enjoy

History of SAP BODS


SAP | January 27, 2014 | SAP BODS, Tutorials
Evolution of SAP BODS:

SAP BusinessObjects Data Services was not originally developed by SAP. It was acquired from the BusinessObjects company, and BusinessObjects acquired it from Acta Technology Inc.

Acta Technology Inc., headquartered in Mountain View, CA, was the provider of the first real-time data integration platform. The two software products provided by Acta were an ETL tool named Data Integration (DI), also known as ActaWorks, and a Data Management or Data Quality (DQ) tool.

BusinessObjects, a French company and the world's leading provider of Business Intelligence (BI) solutions, acquired Acta Technology Inc. in 2002. BusinessObjects rebranded the two Acta products as the BusinessObjects Data Integration (BODI) tool and the BusinessObjects Data Quality (BODQ) tool.

In 2007 SAP, a legend in ERP solutions, acquired BusinessObjects and renamed the products SAP BODI and SAP BODQ. Later, in 2008, SAP integrated both software products into a single end-to-end product and named it SAP BusinessObjects Data Services (BODS), which provides both data integration and data management solutions. A text data processing solution is also included with SAP BODS.

Why SAP BODS?

In the present market there are many ETL tools that do extraction, transformation and loading tasks, such as SAP BODS, Informatica, IBM InfoSphere DataStage, Ab Initio, Oracle Warehouse Builder (OWB), etc.

Let's look at why SAP BODS has gained so much importance in today's market:

Firstly, it is an SAP product; SAP serves about 70% of the present world market, and BODS integrates tightly with any database.

SAP BODS is a single product that delivers data integration, data quality, data profiling and text data processing solutions.

SAP Data Services can move, unlock and govern enterprise data effectively.

SAP Data Services delivers its solutions cost-effectively and is a single-window application with a complete, easy-to-use GUI (graphical user interface).

Enjoy

Table Comparison Transformation in SAP BODS


SAP | February 11, 2014 | SAP BODS, Tutorials

Table Comparison Transformation

Summary:-
The Table Comparison transform compares two datasets and determines which records were inserted, updated, or deleted.
Comparing two datasets, this transform generates the difference between them as a resultant dataset, with each row of the result flagged as insert, update, or delete.

While loading data into a target table, this transform can be used to ensure that rows are not duplicated in the target table, and hence it is very helpful for loading dimension tables.

The transform takes in records from a query and compares them with the target table.

First it identifies incoming records that match target records based on the key columns you select.
Any records that do not match come out of the transform as inserts.
Records that match on the key columns are then compared based on the selected compare columns.

Records that match exactly on the key and compare columns are ignored, i.e. not output by the Table Comparison transform.
Records that match on the key columns but differ on the compare columns are output as update records.
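As a plain-SQL analogue of this matching logic (table names are illustrative; the column names follow the example below), insert candidates are rows with no key match in the target, and update candidates are key matches whose compare columns differ:

SELECT s.KEY_COLUMN,
       s.CUSTOMER_NUMBER, s.FIRST_NAME, s.LAST_NAME,
       CASE
         WHEN t.KEY_COLUMN IS NULL THEN 'I'      -- no match on key: insert
         ELSE 'U'                                -- key match, compare columns differ: update
       END AS op_code
FROM source_customers s
LEFT JOIN target_customers t
  ON t.KEY_COLUMN = s.KEY_COLUMN
WHERE t.KEY_COLUMN IS NULL
   OR s.FIRST_NAME <> t.FIRST_NAME
   OR s.LAST_NAME  <> t.LAST_NAME;               -- exact matches are filtered out (ignored)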

Example scenario:-
Table Comparison is used here to update, insert and delete records.

In the Figure 1 source data, in the FIRST_NAME column we modify the record TOM to TOMY, so this record needs to be updated: TOMY must replace TOM. We also insert a new row after row number 3. The result can be seen in the target table below.

KEY_COLUMN | CUSTOMER_NUMBER | FIRST_NAME | LAST_NAME
1 | C111 | TOM | HANKS
2 | C222 | RAGHU | RAM
3 | C333 | MACRO | POLY

Figure 1: Sample Source Data


In Figure 2 we can observe the updated and inserted records in the target table.

KEY_COLUMN | CUSTOMER_NUMBER | FIRST_NAME | LAST_NAME
1 | C111 | TOMY | HANKS
2 | C222 | RAGHU | RAM
3 | C333 | MACRO | POLY
4 | C444 | DAVID | UGANDA

Figure 2: Sample Target Data

Analyzing the table above, the updated record is in the first row and the inserted record is in the fourth row.

Whatever changes happen in the source file need to be reflected in the target table, which means the updated, inserted and deleted records must be known.

NOTE:- In the Table Comparison transform, the generated key column cannot be VARCHAR; it must always be INT.
Figure 3 below shows the object hierarchy for the Table Comparison transformation.

Figure 4: ETL flow for the Table Comparison transformation

We can see the source data in Figure 5 below.

Figure 5: Source Data

The settings for the Table Comparison transform are shown in Figure 6. If you want to detect deleted records, you have to enable DETECT DELETED ROW(S) FROM COMPARISON TABLE, and if you want to allow duplicates for the primary key, you have to enable INPUT CONTAINS DUPLICATE KEYS.

Figure 6: Table Comparison transformation settings

The inserted, updated and deleted records are indicated by the I, U and D flags. These flags can be seen in debug mode only.

Figure 7: Target table

That's it.

Enjoy

Validation Transformation in SAP BODS


SAP | February 11, 2014 | SAP BODS, Tutorials
Validation Transformation

Summary:-
The Validation transform is used to filter or replace the source dataset based on validation rules, to produce the desired output dataset.

This transform is used for NULL checks on mandatory fields, pattern matching, checking the existence of a value in a reference table, validating data types, etc.

The Validation transform can generate two output datasets: Pass and Fail.

The Pass output schema is identical to the input schema. The Fail output schema has two extra columns, DI_ERRORACTION and DI_ERRORCOLUMNS.
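As a minimal SQL analogue of a single "mandatory field is not NULL" rule (table and column names are illustrative), the Pass and Fail sets are simply complementary filters over the input:

SELECT * FROM staging_data
WHERE mandatory_col IS NOT NULL;        -- Pass: keeps the input schema

SELECT * FROM staging_data
WHERE mandatory_col IS NULL;            -- Fail: rows kept for audit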

Example Scenario:-
In this scenario, source table 1 has legacy values, as shown in Figure 1, but we want SAP values.
We get the SAP values from lookup tables 1, 2 and 3, as shown below in Figures 2, 3 and 4.

With the lookup function we get null values wherever no matching records exist between the source table and the lookup tables, but we do not want null values in our target table.

To avoid this problem we use the Validation transform, which generates the two output datasets, Pass and Fail, based on conditions.

The Pass target table holds the clean output, and the Fail table holds the history of failed records.
Sample Figures 5 and 6 are shown below.

Legacy GL Account | Legacy Cost Center | Legacy Profit Center | Amount
1000 | C1 | P1 | 300

Figure 1: Sample Source Data 1

Legacy GL Account | SAP GL Account
1000 | 100000001

Figure 2: Sample Lookup File 1

Legacy Cost Center | SAP Cost Center
C1 | C100

Figure 3: Sample Lookup File 2

Legacy Profit Center | SAP Profit Center
P1 | 1000

Figure 4: Sample Lookup File 3

Figure 5: Sample Target Data1 (PASS)

Figure 6: Sample Target Data2 (FAIL)

Figures 7, 8 and 9 show the object hierarchy for the validation transformation job, the ETL job flow, and the way we define the validation rules, respectively.

Figure 7: Validation Transformation Object Hierarchy

Figure 8: Validation Transformation ETL Job Flow

In the Validation Transformation:-
Exists in Table option:-

The Exists in Table option is used to specify that a column's value must exist in another table's column.
Click the drop-down arrow to open the selection window and choose the column there.
This option uses the LOOKUP_EXT function. Define the NOT NULL constraint for the column in the lookup table to ensure the Exists in Table condition executes properly.
Figure 9 shows the validation rules in the Validation transform.

Figure 9: Validation Transformation Defining Rules

That's it.

Enjoy
