SAP SD FLOW
Sales and Distribution:
The Sales and Distribution (SD) module is part of Logistics and handles the entire order-to-delivery process. It is fully integrated with other modules of the SAP system, such as MM and PP.
SD covers many phases, such as Inquiry, Quotation, Sales Order, Sales Returns, Credit Management, Pricing, Tax Determination, and Goods Delivery.
SD Flow:
Inquiry: Once we receive the inquiry from the customer, as a vendor we need to check whether we can deliver the goods under the customer's conditions.
Quotation: Once the inquiry is processed, as a vendor we need to send a quotation to that customer.
Sales Order: Once we receive the purchase order from the customer, as a vendor we raise the sales order. While raising the sales order we should know the partner functions, for example the Sold-To-Party, which is the partner that raised the purchase order.
Goods Delivery: After raising the sales order, as a vendor we need to deliver the goods.
Billing: While delivering the goods, we need to send the billing document.
Finance: Once the goods are delivered and billed, the vendor's finance team interacts with the customer's finance team for the financial settlement.
Transaction Codes:
Inquiry VA11
Quotation VA21
Billing VF01
Delivery Tables:
LIKP - SD Document: Delivery Header Data
Customer Tables:
KNA1 - General Data in Customer Master
Pricing Tables:
KONV - Conditions (Transaction Data)
Billing Tables:
VBRK - Billing Document: Header Data
Shipping Tables:
VEKP - Handling Unit Header Table
Vendor Tables:
LFA1 - Vendor Master (General Section)
Enjoy
SAP MM FLOW
Materials Management:
The Materials Management (MM) module is part of Logistics and helps manage end-to-end procurement and logistics business processes. It is fully integrated with the other modules (SD, FI, CO, PM, QM, WM) of the SAP system.
The MM module covers many phases of materials management, such as materials planning and control, purchasing, goods receipt, inventory management, and invoice verification.
SAP MM Flow:
Purchase Requisition: First, the customer prepares a purchase requisition for the required goods.
Request for Quotation: As a customer, we need to send RFQs to different vendors.
Vendor Evaluation: Once we receive the quotations from the different vendors, as a customer we analyze all of them and finally select one vendor.
Purchase Order: After vendor evaluation, as a customer we need to raise the purchase order.
Goods Receipt: Here we receive the goods against a goods receipt; as a customer we need to check the goods receipt.
Invoice Verification: Once we receive the goods and the invoice from the vendor, as a customer we need to verify all the goods; if there is any problem with the goods, the customer informs the vendor.
Finance: Once the goods and invoice are received, the customer's finance team interacts with the vendor's finance team for the financial settlement.
Transaction codes:
Enjoy
Link Between SAP SD and MM Flows.
As described above, the MM module covers materials planning and control, purchasing, goods receipt, inventory management, and invoice verification, while the SD module is part of Logistics and handles the entire order-to-delivery process, covering Inquiry, Quotation, Sales Order, Sales Returns, Credit Management, Pricing, Tax Determination, and Goods Delivery. The combined flow between the two modules looks like this:
3. Once the inquiry is processed, as a vendor we need to prepare the quotation.
4. Once the quotation is prepared, as a vendor we need to send it to that customer.
5. Once we receive the quotations from different vendors, as a customer we analyze all of them and finally select one vendor.
9. After raising the sales order, as a vendor we need to deliver the goods.
11. Here we receive the goods against a goods receipt; as a customer we need to check the goods receipt.
13. While delivering the goods, we need to send the billing document.
14. Once we receive the goods and the invoice from the vendor, as a customer we need to verify all the goods. If there is any problem with the goods, as a customer we need to inform the vendor.
15. If we receive all the goods correctly, then as a customer we need to inform the finance team.
16. Once the goods are delivered and billed, the vendor's finance team interacts with the customer's finance team.
17. Finally, the two finance teams settle the payment.
Transaction Codes:
Purchase Requisition ME51N
Inquiry VA11
Quotation VA21
Billing VF01
Enjoy
The Effective Date transform defines the validity of each and every record in a time-dependent table.
It calculates and provides an Effective-to (end date) value for each Effective Date (start date) in the input dataset. The logic behind the Effective-to value depends on the Sequence column provided.
A default date (e.g. 9999.12.31) is assigned to the latest or active record, for which the end of validity cannot be defined.
Prerequisite:
The input dataset must contain an Effective Date field.
(Note: If the field name is EFFDT, Data Services automatically selects it as the Effective Date column; otherwise, we have to select the field manually.)
The input to the Effective Date transform must define at least one primary key.
The Sequence column must be defined as a primary key, which helps avoid primary key constraint errors.
NOTE:
The default date is provided in the Effective Date transform; it can be changed to the required value manually.
If the Sequence column is not defined, the Effective-to value of a record will simply be the Effective Date value of the next record, regardless of the logic.
Example Scenario:
Figure 1 shows a sample input dataset that gives employee designation details, with the start date in the EFFDT field.
Figure 2: Sample Target Data
The logic behind calculating the Effective-to value is: each record's Effective-to value is derived from the Effective Date of the following record in the sequence, and the mapping rule for the No_Of_Days field finds the number of days between the start and end dates. A sketch of this logic follows.
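To make the Effective-to logic concrete, here is a minimal Python sketch (an illustration, not the Data Services engine). The column names EMP_ID, EFFDT, EFF_TO, and NO_OF_DAYS, and the use of EMP_ID as the sequence/grouping column, are assumptions for this example; the 9999.12.31 default matches the default date mentioned above.

```python
# Sketch of the Effective Date transform logic: within each sequence group,
# a record is valid until the next record's start date; the latest record
# gets the default end date.
from datetime import date
from itertools import groupby

DEFAULT_END = date(9999, 12, 31)  # default "still valid" end date

def add_effective_to(rows):
    """rows: list of dicts with EMP_ID (sequence column) and EFFDT (start date)."""
    out = []
    rows = sorted(rows, key=lambda r: (r["EMP_ID"], r["EFFDT"]))
    for _, grp in groupby(rows, key=lambda r: r["EMP_ID"]):
        grp = list(grp)
        for i, row in enumerate(grp):
            nxt = grp[i + 1]["EFFDT"] if i + 1 < len(grp) else DEFAULT_END
            row = dict(row, EFF_TO=nxt)
            # No_Of_Days mapping: number of days between start and end dates
            row["NO_OF_DAYS"] = (row["EFF_TO"] - row["EFFDT"]).days
            out.append(row)
    return out

sample = [
    {"EMP_ID": 1, "EFFDT": date(2012, 1, 1), "DESIGNATION": "Analyst"},
    {"EMP_ID": 1, "EFFDT": date(2013, 6, 1), "DESIGNATION": "Consultant"},
]
print(add_effective_to(sample))
```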
Attachments:
Source File:
Use the file below as the source for trying out the transform.
Enjoy
Summary:
Hierarchy Flattening is ready-made logic, in the form of a transform, that is used to load hierarchies in table form.
Hierarchy flattening can be done in two ways:
1. Horizontal
2. Vertical
Horizontal Flattening:
It takes a node and defines its relationship with all the other nodes.
Each record of the output describes a single relationship between an ancestor and a descendant and the number of nodes the relationship includes.
This enables global filtering and improves processing performance (see the sketch below).
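As a rough illustration of the relationship records described above, here is a minimal Python sketch (not the Data Services transform itself) that expands a tree, supplied as flat parent/child pairs, into one row per ancestor-descendant relationship with its depth. The column names PARENT/CHILD, ANCESTOR, DESCENDENT, and DEPTH are assumptions; the real transform's output has its own predefined structure, as noted later in this article.

```python
# Sketch of hierarchy flattening: enumerate every ancestor-descendant
# relationship in a tree given as flat parent/child pairs.
def flatten(parent_child_pairs):
    children = {}
    for parent, child in parent_child_pairs:
        children.setdefault(parent, []).append(child)

    rows = []  # one row per ancestor-descendant relationship

    def walk(ancestor, node, depth):
        for child in children.get(node, []):
            rows.append({"ANCESTOR": ancestor, "DESCENDENT": child, "DEPTH": depth})
            walk(ancestor, child, depth + 1)

    all_nodes = set(children) | {c for cs in children.values() for c in cs}
    for node in all_nodes:
        walk(node, node, 1)
    return rows

pairs = [("Company", "Sales"), ("Company", "IT"), ("Sales", "East"), ("Sales", "West")]
for r in flatten(pairs):
    print(r)
```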
Prerequisite:
The input to the Hierarchy Flattening transform must be a tree represented as a flat (parent/child) structure.
NOTE:
Figure 2 shows sample target data: the horizontal flat structure of the input dataset, obtained using the Hierarchy Flattening transform in horizontal fashion.
Figure 2: Sample Target Data
Both the horizontal and vertical flattening outputs have a predefined structure, which is discussed in later sections.
Select the Horizontal flattening type for horizontal hierarchy flattening, as shown in Figure 7.
The output of horizontal hierarchy flattening has a predefined structure, as in the Schema Out of the above figure.
Select the Vertical flattening type for vertical hierarchy flattening, as shown in Figure 8.
The output of vertical flattening has a predefined structure, as in the Schema Out of the above figure.
Attachments:
Source File:
Use the file below as the source for trying out the transform.
Enjoy
Summary:
The Map Operation transform enables us to change the operation code of records.
The operation code is a flag that indicates how each row is applied to the target.
Normal: Creates a new row in the target. All rows in a dataset are flagged as NORMAL when they are extracted from a source; a row flagged as NORMAL is loaded into the target. It is the most common flag and is used by most transforms.
Insert: Creates a new row in the target.
Note: Only the History Preserving and Key Generation transforms can accept datasets with rows flagged as INSERT as input.
Update: Overwrites an existing row in the target.
Delete: Rows flagged as DELETE are ignored by the target (they are not loaded). Only the History Preserving transform, with the Preserve delete row(s) as update row(s) option selected, can accept datasets with rows flagged as DELETE.
Discard: If you select this option for an operation code, the corresponding rows are not loaded into the target.
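As a rough illustration of this remapping, here is a minimal Python sketch (not the actual transform) in which each row carries an operation code and the transform maps every input code to an output code or discards the row. The mapping used in the example call is an assumption chosen to mirror the scenario below, where updated rows are routed as plain inserts into a separate template table.

```python
# Sketch of the Map Operation idea: remap each row's operation code,
# optionally discarding rows entirely.
DISCARD = "DISCARD"

def map_operation(rows, mapping):
    """rows: iterable of (op_code, record); mapping: e.g. {"UPDATE": "NORMAL"}."""
    for op, record in rows:
        new_op = mapping.get(op, op)     # unlisted codes pass through unchanged
        if new_op == DISCARD:
            continue                     # discarded rows are not loaded
        yield new_op, record

rows = [("NORMAL", {"ID": 1}), ("UPDATE", {"ID": 2}), ("DELETE", {"ID": 3})]
# Route updates as inserts into a separate table, drop everything else:
print(list(map_operation(rows, {"UPDATE": "NORMAL", "NORMAL": DISCARD, "DELETE": DISCARD})))
```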
Example Scenario:
In the source file shown in Figure 1, the changed (updated) and deleted records need to be inserted into different template tables. That is, if the record MT03 has been updated to MT01, that row needs to be inserted into a separate table, and if the second row is deleted, it has to be inserted into another template table.
Delete Option:
We can insert the deleted records into a separate table by changing the delete option settings, as shown in Figure 7.
Figure 7: Map Operation transform rule for delete
Update Option:
We can insert the updated records into a separate table by changing the update option settings, as shown in Figure 8.
We can see that the updated records are inserted into a separate table, as shown in Figure 9.
Figure 9: Updated row in target data
We can see that the deleted records are inserted into a separate table, as shown in Figure 10.
Source Files:
Use the files below as sources for trying out the transform.
Click Here to Download Source Data.
Attachment of ATL File:
Import the .atl file below into the Data Services Designer to find the job for the above transformation.
Enjoy
Summary:
The Map CDC Operation transform enables source-based CDC (Changed Data Capture), i.e. delta, implementation.
Microsoft SQL Server (2000 onwards) and Oracle (9i onwards) support source-based CDC.
Using the values of the Sequencing column and the Row operation column, the transform performs three functions:
Sorts input data based on the values in the Sequencing column box and (optionally) the Additional Grouping Columns box.
Maps output data based on the values in the Row Operation Column box. Source table rows are mapped to INSERT, UPDATE, or DELETE operations before being passed on to the target (see the sketch after this list).
Resolves missing, separated, or multiple before- and after-images for UPDATE rows.
Implementing source-based CDC depends completely on the behavior of the source.
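To make those three functions concrete, here is a minimal Python sketch (an illustration, not the actual transform) that sorts change-log rows on a sequencing column, pairs a before-image with its after-image, and emits rows flagged as INSERT, UPDATE, or DELETE. The column names CDC_SEQ and CDC_OP and the operation values I/B/A/D are assumptions for illustration; the real CDC columns come from the database.

```python
# Sketch of Map_CDC_Operation: sort on the sequencing column, resolve
# before-/after-image pairs, and flag rows as INSERT / UPDATE / DELETE.
def map_cdc(rows):
    rows = sorted(rows, key=lambda r: r["CDC_SEQ"])   # 1) sort on sequencing column
    i = 0
    while i < len(rows):
        r = rows[i]
        if r["CDC_OP"] == "I":
            yield "INSERT", r
        elif r["CDC_OP"] == "D":
            yield "DELETE", r
        elif r["CDC_OP"] == "B" and i + 1 < len(rows) and rows[i + 1]["CDC_OP"] == "A":
            # 3) resolve before-image + after-image into a single UPDATE row
            yield "UPDATE", rows[i + 1]
            i += 1
        i += 1

log = [
    {"CDC_SEQ": 1, "CDC_OP": "I", "ID": 10, "NAME": "TOM"},
    {"CDC_SEQ": 2, "CDC_OP": "B", "ID": 10, "NAME": "TOM"},   # before-image
    {"CDC_SEQ": 3, "CDC_OP": "A", "ID": 10, "NAME": "TOMY"},  # after-image
    {"CDC_SEQ": 4, "CDC_OP": "D", "ID": 10, "NAME": "TOMY"},
]
print(list(map_cdc(log)))
```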
Prerequisite:
A CDC datastore should be created for the CDC tables, providing the required credentials, on the BODS side.
NOTE:
The Map CDC Operation transform reads data only from a CDC table source.
The CDC table on top of the SQL table should be created in the MS SQL Server database and imported into the CDC datastore in BODS.
Enable CDC on the database you are working on before creating a CDC table, as in the figure below.
Create the table on which you want to enable CDC, as in the figure below.
The following steps summarize the procedure to configure SQL Replication Server for your Microsoft SQL Server database.
On the Replication node of the Microsoft SQL Enterprise Manager, select the Configure Publishing and Distribution option. Follow the wizard to create the Distributor and the Distribution database.
The wizard generates the following components, which you need to specify in the Datastore Editor when you define a Microsoft SQL Server CDC datastore:
1. Right-click Replication menu (or Local Publications menu), then select New Publication. The
New Publication Wizard opens.
3. Select the database that you want to publish and click Next.
4. Under Publication type, select Transactional publication, and then click Next to continue.
5. Select tables and other objects to publish as articles. Set columns to filter tables. Then click to
open Article Properties.
2. Set Update delivery format and Delete delivery format to XCALL <stored procedure> if you
want before images for UPDATE and DELETE commands. Click OK to save the article properties
and click Next.
3. Add filters to exclude unwanted rows from published tables (optional). Click Next.
4. Select Create a snapshot immediately and keep the snapshot to initialize subscriptions and, optionally, select Schedule the snapshot agent to run at the following times. Click Next.
5. Configure Agent Security and specify the account connection setting. Click Security Settings to
set the Snapshot agent.
6. Configure the Agent Security account with system administration privileges and click OK.
7. Enter the login password for the Log Reader Agent by clicking Security Settings. Note that it
has to be a login granting system administration privileges.
8. In the Log Reader Agent Security window, enter and confirm password information.
9. Click to select Create the publication then click Finish to create a new publication.
10. To complete the wizard, enter a Publication name and click Finish to create your publication.
Setting up Business Objects Data Services for CDC:
To use SAP Business Objects Data Services to read and load changed data from SQL Server databases,
do the following procedures on the Designer:
6. Select a Database version. Change-data tables are only available from SQL Server 2000
Enterprise.
9. In the CDC section, enter the following names that you created for this datastore when you
configured the Distributor and Publisher in the MS SQL Replication Server:
1. If you want to create more than one configuration for this datastore, click Apply, then click Edit
and follow step 9 again for any additional configurations.
2. Click OK.
You can now use the new datastore connection to import metadata tables into the current repository.
The steps to create a CDC datastore are described in the previous section.
The Map CDC Operation transform reads data only from a CDC table, as in Figure 6.
The Schema Out of the Map CDC Operation transform has the same structure as the CDC table in MS SQL Server, as in Figure 7.
CDC Table:
A CDC table is a separate table generated using a procedure that comes with the CDC package. It consists of two types of fields.
Business fields: the fields of the SQL table on which CDC is enabled.
CDC (technical) fields: fields generated by SQL Server.
A CDC table cannot be created through the Data Manipulation Language like an ordinary SQL table.
After importing the table, two additional fields are generated by the Data Services software.
If a record is updated in the base SQL table, the CDC table is updated with two records (a before-image and an after-image).
Enable check-point: once a check-point is placed, the next time the CDC job runs it reads only the rows inserted into the CDC table since the last check-point.
Get before-image for each update row: if this is checked, the database allows two images to be associated with an UPDATE row: a before-image and an after-image.
Attachment:
.atl File:
Import the below .atl file in the Data Services Designer to find the Job for Map CDC Operation
transformation.
Enjoy
Sources and Targets can be SAP Applications (ERP, CRM etc.), SAP BW, SAP HANA, Any Relational
Database (MS SQL Server, Oracle etc.), Any File (Excel Workbook, Flat file, XML, HDFS etc.),
unstructured text, Web services etc.
The ETL technology of SAP BODS supports data integration in both batch mode and real-time mode.
The Data Management, or Data Quality, process cleanses, enhances, matches, and consolidates enterprise data to produce an accurate, high-quality form of the data.
Text Data Processing analyzes and extracts specific information (entities or facts) from large volumes of unstructured text such as emails, paragraphs, etc.
Architecture:
The following figure outlines the architecture of standard components of SAP BusinessObjects
Data Services.
Note: From the 4.x versions onwards, a full SAP BusinessObjects BI Platform or SAP BusinessObjects Information Platform Services (IPS) installation is required on top of SAP BODS for user and rights/security management. Data Services relies on the CMC (Central Management Console) for authentication and security features. In earlier versions this was done in the Management Console of SAP BODS.
BODS Designer:
The Designer is the development interface in which developers create and configure the objects (projects, jobs, work flows, and data flows) that are stored in the repository.
Repository
The repository is the space in a database server that stores the metadata of the objects used in SAP BusinessObjects Data Services. Each repository must be registered in the Central Management Console (CMC) and associated with one or more Job Servers, which run the jobs you create.
There are three types of repositories used with SAP BODS:
Local repository:
A local repository stores the metadata of all the objects (like projects, jobs, work flows, and data flows) and the source/target metadata defined by developers in the SAP BODS Designer.
Central repository:
A central repository is used for multi-user development and version management of objects.
Developers can check objects in and out of their local repositories to a shared object library provided
by the central repository. The central repository preserves all versions of an application's objects, so you can revert to a previous version if needed.
Profiler repository:
A profiler repository is used to store all the metadata of profiling tasks performed in SAP BODS
designer.
The CMS repository is used to store the metadata of all the tasks done in the CMC of the SAP BO BI platform or IPS.
The Information Steward repository is used to store the metadata of profiling tasks and objects defined in SAP Information Steward.
Job Server
The SAP BusinessObjects Data Services Job Server retrieves job information from its respective repository and starts the data engine to process the job.
The Job Server can move data in either batch or real-time mode and uses distributed query
optimization, multi-threading, in-memory caching, in-memory data transformations, and parallel
processing to deliver high data throughput and scalability.
Access Server
The SAP BusinessObjects Data Services Access Server is a real-time, request-reply message broker that
collects message requests, routes them to a real-time service, and delivers a message reply within a
user-specified time frame.
Management Console
The SAP BusinessObjects Data Services Management Console is a web-based application that provides the following components:
Administration, Impact and Lineage Analysis, Operational Dashboard, Auto Documentation, Data Validation, and Data Quality Reports.
Enjoy
SAP BusinessObjects Data Services was not originally developed by SAP. It was acquired from BusinessObjects, and BusinessObjects in turn acquired it from Acta Technology Inc.
Acta Technology Inc., headquartered in Mountain View, CA, was the provider of the first real-time data integration platform. The two software products provided by Acta were an ETL tool named Data Integration (DI), also known as ActaWorks, and a Data Management or Data Quality (DQ) tool.
BusinessObjects, a French company and the world's leading provider of Business Intelligence (BI) solutions, acquired Acta Technology Inc. in 2002. BusinessObjects rebranded the two Acta products as the BusinessObjects Data Integration (BODI) tool and the BusinessObjects Data Quality (BODQ) tool.
In 2007, SAP, a legend in ERP solutions, acquired BusinessObjects and renamed the products SAP BODI and SAP BODQ. Later, in 2008, SAP integrated both software products into a single end-to-end product and named it SAP BusinessObjects Data Services (BODS), which provides both data integration and data management solutions. In later versions of SAP BODS, a text data processing solution was also included.
In the present market there are many ETL tools that perform Extraction, Transformation, and Loading tasks, such as SAP BODS, Informatica, IBM InfoSphere DataStage, Ab Initio, Oracle Warehouse Builder (OWB), etc.
Let's look at why SAP BODS has gained so much importance in today's market:
Firstly, it is an SAP product; SAP serves around 70% of the present world market, and BODS integrates tightly with any database.
SAP BODS is a single product that delivers Data Integration, Data Quality, Data Profiling, and Text Data Processing solutions.
SAP Data Services can move, unlock, and govern enterprise data effectively.
SAP Data Services delivers its solutions cost-effectively and is a single-window application with a complete, easy-to-use GUI (Graphical User Interface).
Enjoy
Summary:
The Table Comparison transform is used to compare two datasets and determine which records were changed: updated, inserted, or deleted.
Comparing the two datasets, this transform generates the difference between them as a resultant dataset, with each row of the result flagged as insert, update, or delete.
While loading data into a target table, this transform can be used to ensure that rows are not duplicated in the target table, and hence it is very helpful for loading dimension tables.
The transform takes in records from a query and compares them with the target table.
First, it identifies incoming records that match target records based on the key columns you select. Any records that do not match come out of the transform as inserts.
Records that match on the key columns are then compared based on the selected compare columns.
Records that match exactly on both the key and compare columns are ignored, i.e. not output by the Table Comparison transform.
Records that match on the key columns but differ in the compare columns are output as update records.
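The comparison logic just described can be summarized in a short Python sketch (an illustration only, not the transform's implementation). The column names and the delete-detection flag are assumptions; the flag corresponds to the Detect deleted row(s) from comparison table option mentioned later in this article.

```python
# Sketch of Table Comparison: match incoming rows to the comparison table on
# key columns, then check the compare columns to decide insert/update/ignore.
def table_comparison(incoming, target, key_cols, compare_cols, detect_deletes=False):
    target_by_key = {tuple(t[k] for k in key_cols): t for t in target}
    seen = set()
    for row in incoming:
        key = tuple(row[k] for k in key_cols)
        seen.add(key)
        existing = target_by_key.get(key)
        if existing is None:
            yield "INSERT", row                      # no key match -> insert
        elif any(row[c] != existing[c] for c in compare_cols):
            yield "UPDATE", row                      # key match, columns differ -> update
        # identical on key + compare columns -> row is not output at all
    if detect_deletes:                               # "Detect deleted row(s)" option
        for key, t in target_by_key.items():
            if key not in seen:
                yield "DELETE", t

source = [{"ID": 1, "FIRST_NAME": "TOMY"}, {"ID": 4, "FIRST_NAME": "ANNA"}]
target = [{"ID": 1, "FIRST_NAME": "TOM"}, {"ID": 2, "FIRST_NAME": "BOB"}]
print(list(table_comparison(source, target, ["ID"], ["FIRST_NAME"], detect_deletes=True)))
```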
Example Scenario:
The Table Comparison transform is used here to detect the records to update, insert, and delete.
In the source data in Figure 1, in the FIRST_NAME column we modify the record TOM to TOMY, so this record needs to be updated, meaning that TOMY must replace TOM. We also insert a new row, which has to be inserted after row number 3.
Now, by looking at the target table below, we can see which changes in the source file need to be reflected in the target table, i.e. the updated, inserted, and deleted records must be identified.
NOTE: In the Table Comparison transform we cannot use a VARCHAR column as the generated key column; it must always be an INT.
Figure 3 below shows the object hierarchy for the Table Comparison transform job.
We can see the settings for the Table Comparison transform in Figure 6. If you want to detect deleted records, you have to enable DETECT DELETED ROW(S) FROM COMPARISON TABLE, and if you want to allow duplicate primary keys in the input, you have to enable INPUT CONTAINS DUPLICATE KEYS.
The inserted, updated, and deleted records are indicated by the I, U, and D flags. These flags can be seen in debug mode only.
That's it.
Enjoy
Summary:
The Validation transform is used to filter or replace the source dataset based on validation rules, to produce the desired output dataset.
This transform is used for NULL checks on mandatory fields, pattern matching, checking the existence of a value in a reference table, validating data types, etc.
The Validation transform can generate two output datasets: Pass and Fail.
The Pass output schema is identical to the input schema. The Fail output schema has two more columns, DI_ERRORACTION and DI_ERRORCOLUMNS.
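The Pass/Fail split can be pictured with a minimal Python sketch (an illustration under assumed names, not the transform itself): rows that satisfy every rule keep the input schema, while failing rows get the two extra error columns. The sample rule, the "S" value written into DI_ERRORACTION, and the sample columns are all assumptions for this example.

```python
# Sketch of the Validation transform's Pass/Fail split.
def validate(rows, rules):
    """rules: {column: predicate}. Returns (pass_rows, fail_rows)."""
    passed, failed = [], []
    for row in rows:
        bad_cols = [col for col, ok in rules.items() if not ok(row.get(col))]
        if not bad_cols:
            passed.append(row)                       # Pass schema = input schema
        else:
            failed.append({**row,                    # Fail schema adds two columns
                           "DI_ERRORACTION": "S",    # assumed error-action value
                           "DI_ERRORCOLUMNS": ",".join(bad_cols)})
    return passed, failed

rules = {"SAP_VALUE": lambda v: v is not None}       # NULL check on a mandatory field
rows = [{"ID": 1, "SAP_VALUE": "1000"}, {"ID": 2, "SAP_VALUE": None}]
print(validate(rows, rules))
```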
Example Scenario:
In this scenario, source table 1 has legacy values, as shown in Figure 1, but we want SAP values.
We get the SAP values from lookup tables 1, 2, and 3, as shown below in Figures 1, 2, and 3.
With the lookup function we get null values wherever a source record has no matching record in the lookup table, but we do not want null values in our target table.
To avoid this problem we have to use the Validation transform. The Validation transform can generate two output datasets, Pass and Fail, based on conditions.
In the Pass target table we have clean output, and in the Fail table we have the history of failed records.
Sample Figures 5 and 6 are shown below.
Figures 7, 8, and 9 show the object hierarchy for the validation transform job, the ETL job flow, and the way we define the validation rules, respectively.
In the Validation Transform:
Exists in Table option:
The Exists in table option is used to specify that a column's value must exist in another table's column.
Click the drop-down arrow to open the selection window and choose the column in the window provided.
This option uses the LOOKUP_EXT function. Define a NOT NULL constraint for the column in the lookup table to ensure that the Exists in table condition executes properly (a small sketch of the idea follows).
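Conceptually, an Exists in table rule behaves like a membership lookup: the value passes only if it is found in the reference table's column. Here is a minimal Python sketch of that idea (the table and column names are assumptions, and this is not the LOOKUP_EXT implementation); the returned predicate could be plugged into the rules of the previous sketch.

```python
# Sketch of an "Exists in table" style rule: pass only values present in the
# reference (lookup) table's column.
def exists_in_table(reference_rows, ref_column):
    allowed = {r[ref_column] for r in reference_rows if r[ref_column] is not None}
    return lambda value: value in allowed            # usable as a validation rule

lookup_table = [{"SAP_VALUE": "1000"}, {"SAP_VALUE": "2000"}]
rule = exists_in_table(lookup_table, "SAP_VALUE")
print(rule("1000"), rule("9999"))                    # True False
```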
Figure 9 shows the validation rules in the Validation transform.
That's it.
Enjoy