SPM For Partners 2-Day Training - Bengaluru 201910 - Part 2 v1
• High-volume bulk and batch-based data loads via flat files
• Requirement to load/extract data via direct API calls in a real/near-real-time manner
• Ability to directly connect and pull data from source systems for Commissions
                                   | Excel Data Loaders       | Commissions Data Loader (CDL)            | SAP DI Solutions / DW
Volume Range (rows of data per run)| < 1,000                  | Up to 5M per file or 1 GB compressed file| Depends on the middleware and source/target
Data Pre-Processing                | None                     | HANA stored procedure                    | ETL tool
Use Case                           | Routine data management  | High-volume batch loads                  | SaaS to/from Commissions with ETL needs
Use Case                            | Excel Data Loaders | CDL (HANA) | CDL (Oracle) | Landing Pad | SDI | Middleware-based ETL* | REST API
File-based integrations             | X                  | X          | X            | X           | X   | X                     |
A2A/B2B integrations                |                    |            |              |             | X   | X                     | X
Automated integrations              |                    | X          | X            | X           | X   | X                     |
SAP source/target                   |                    | X          | X            | X           | X   | SAP CPI recommended   |
Non-technical user                  | X                  | X          |              |             |     |                       |
High volume                         |                    | X          | X            | X           | X   |                       |
Real/near-real time                 |                    |            |              |             | X   | X                     | X
Equivalent in Commissions on Oracle | Both               | HANA       | Oracle       | Oracle      | HANA| Both                  | n/a

CDL = Commissions Data Loader; SDI = Smart Data Integration.
* Middleware-based ETL: SAP-based or your own third-party ETL tool.
• Templates are downloaded directly from the UI and perform validations as data is loaded into the system.
• Used for loading transactions into Commissions.
• Ideal for one-time data loads or non-systemic data sources.
Batch Data Integration Overview
[Diagram: batch data integration architecture. Inbound: files are submitted through the UI (no VPN required) or pulled via SFTP (VPN required); the Data Loader on the CallidusCloud side performs file processing and validations using HANA and Redis components, with Kafka messaging and email notifications. Outbound: extracts are delivered to the landing area via SFTP pull, DB link, or ODI files.]
File Naming
• Tenant ID
• Timestamp (hours, minutes, seconds)
• Content Tag
Compression
• gzip is the supported compression method; files can be unzipped with WinZip.
Encryption
• GPG is the supported encryption method.
Processing Summary

Batch ID | File                        | Target Staging Table      | Pre-Processing Results | No. Recs (File) | No. Recs (Table) | No. Dups (Skipped) | Errors | Alerts
         | ACME_TXSTA_DEV_20091117.txt | CS_STAGESALESTRANSACTION  | Success                | 42411           | 42411            | 0                  | 0      | 0
         | ACME_TXTA_DEV_20091117.txt  | CS_STAGETRANSACTIONASSIGN | Success                |                 |                  |                    |        |
Triggers Commissions jobs when an “ODJB” file arrives in the Inbox folder.
Gzip large files (files over 1 GB are not recommended) and split them where possible.
You can access the document with the full list of APIs via the following URL:
https://{4 letter moniker}-{environment}.callidusondemand.com/APIDocument/
For example, a tenant with moniker ACME on the DEV environment would use https://acme-dev.callidusondemand.com/APIDocument/.
Once the reports (a and b) are generated, they are published to the Sales Performance Home Page of the users.
Default BO Groups:
Custom BO Groups
A custom BO group allows dividing the user data further into smaller chunks and gives more flexibility in how the reports are generated.
Users in custom BO groups must be added and removed explicitly as needed.
Package/Procedure
A driver package should be developed as the single point of entry. It is called internally by the Generate Statements calculation (via ODSReportsGenerationConfig.xml) and can in turn call the other required reporting procedures/packages to populate the reporting tables. Once populated, the reporting tables should be used in Crystal Reports directly instead of going via the universe. A sketch of such a package follows the parameter list below.
PeriodSeq
ProcessingUnitSeq
CalendarSeq
TenantID
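A minimal sketch of such a driver package, assuming the four values above are its input parameters; pkg_rep_sales_summary and pkg_rep_attainment are hypothetical reporting packages named here only for illustration:

CREATE OR REPLACE PACKAGE pkg_rep_driver AS
  -- Single point of entry, invoked by the Generate Statements calculation
  -- (registered via ODSReportsGenerationConfig.xml).
  PROCEDURE main(p_periodseq         IN NUMBER,
                 p_processingunitseq IN NUMBER,
                 p_calendarseq       IN NUMBER,
                 p_tenantid          IN VARCHAR2);
END pkg_rep_driver;
/
CREATE OR REPLACE PACKAGE BODY pkg_rep_driver AS
  PROCEDURE main(p_periodseq         IN NUMBER,
                 p_processingunitseq IN NUMBER,
                 p_calendarseq       IN NUMBER,
                 p_tenantid          IN VARCHAR2) IS
  BEGIN
    -- Each call populates one set of reporting tables read by Crystal.
    pkg_rep_sales_summary.load(p_periodseq, p_processingunitseq);
    pkg_rep_attainment.load(p_periodseq, p_processingunitseq);
  END main;
END pkg_rep_driver;
/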
In the reporting table, include the following fields apart from the other required fields:
PositionSeq
PayeeSeq
ProcessingUnitSeq
PeriodSeq
PeriodName
ProcessingUnitName
CalendarName
Create indexes on the reporting tables based on the SQL in the Crystal Reports: index PeriodSeq, ProcessingUnitSeq, PositionSeq and PayeeSeq. Use local indexes instead of global ones on partitioned tables (sketched below).
Make sure the datatypes of the indexed columns match those of the joined columns in the queries that use the tables.
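A hedged sketch of such a reporting table and its local index; the table name, measure column, and list partitions (one per period, with hypothetical PeriodSeq values) are illustrative only:

CREATE TABLE ext_rep_comp_summary (
  tenantid           VARCHAR2(30) NOT NULL,
  positionseq        NUMBER(38)   NOT NULL,
  payeeseq           NUMBER(38)   NOT NULL,
  processingunitseq  NUMBER(38)   NOT NULL,
  periodseq          NUMBER(38)   NOT NULL,
  periodname         VARCHAR2(100),
  processingunitname VARCHAR2(100),
  calendarname       VARCHAR2(100),
  commission_amount  NUMBER
)
PARTITION BY LIST (periodseq) (
  PARTITION p_201901 VALUES (12345),
  PARTITION p_201902 VALUES (12346)
);

-- Local (not global) index on the keys used by the Crystal Reports SQL;
-- the NUMBER datatypes match the keys of the joined tables.
CREATE INDEX ext_rep_comp_summary_n1
  ON ext_rep_comp_summary (periodseq, processingunitseq, positionseq, payeeseq)
  LOCAL;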
On the reporting tables, truncate the partition relevant to the period and processing unit being processed, analyze the truncated partition, then populate it and update the partition's statistics again after the data is loaded.
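A sketch of that per-period refresh, assuming the partitioned table above:

BEGIN
  -- Remove only the slice being rebuilt, not the whole table.
  EXECUTE IMMEDIATE
    'ALTER TABLE ext_rep_comp_summary TRUNCATE PARTITION p_201901';

  -- ... repopulate the partition for this period/processing unit here ...

  -- Then refresh optimizer statistics for just that partition.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => USER,
    tabname     => 'EXT_REP_COMP_SUMMARY',
    partname    => 'P_201901',
    granularity => 'PARTITION');
END;
/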
Log each step in the reporting package into the cs_debug table. Tech Ops requires this to help support any issues found in future.
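For example, a small autonomous logging procedure; the cs_debug column names used here are assumptions and should be matched to the actual table:

CREATE OR REPLACE PROCEDURE log_step(p_source  IN VARCHAR2,
                                     p_message IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- log rows survive even if the load rolls back
BEGIN
  INSERT INTO cs_debug (debug_time, source_name, message)  -- assumed columns
  VALUES (SYSTIMESTAMP, p_source, p_message);
  COMMIT;
END log_step;
/
-- Usage inside the reporting package:
--   log_step('PKG_REP_DRIVER.MAIN', 'sales summary load started');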
Design the solution so that intermediate results can be re-used in more than one report.
Do not combine two or more SQLs with UNION ALL; instead, run them as separate statements.
Instead of joining tables such as CS_ProcessingUnit and CS_Period into the report SQLs, query them further up in the procedure and fetch the required values into variables. Use those variables in the SQLs; this reduces the number of tables involved in each SQL (see the sketch below).
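For example (column names are illustrative), inside the reporting procedure:

-- Fetch the descriptive values once, up front
SELECT name INTO v_period_name
  FROM cs_period
 WHERE periodseq = p_periodseq;

SELECT name INTO v_pu_name
  FROM cs_processingunit
 WHERE processingunitseq = p_processingunitseq;

-- ... then use the variables, keeping CS_Period and CS_ProcessingUnit
-- out of the main SQL entirely
INSERT INTO ext_rep_comp_summary
  (periodseq, processingunitseq, periodname, processingunitname,
   positionseq, payeeseq)
SELECT p_periodseq, p_processingunitseq, v_period_name, v_pu_name,
       c.positionseq, c.payeeseq
  FROM cs_credit c
 WHERE c.periodseq = p_periodseq;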
Query the pipeline table to identify the reports being generated, and populate only the reporting tables used by those reports.
Always reference the custom objects from Crystal using schema-name qualification. The <TENANTID> schema will need to be given read access to the custom objects in the EXT schema. For example, NTELEXT objects will be accessed from the reports using the NTEL schema, so NTEL should be given access to the NTELEXT database objects (see below).
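A sketch of the corresponding grant and qualified reference, using the NTEL/NTELEXT example above and the illustrative table from earlier:

-- Run as NTELEXT (or by the DBA): allow the tenant schema to read the custom object
GRANT SELECT ON ntelext.ext_rep_comp_summary TO ntel;

-- In the Crystal Reports SQL, always schema-qualify the custom object:
--   SELECT ... FROM ntelext.ext_rep_comp_summary WHERE periodseq = ...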
If for some reason synonyms have to be created, create only private synonyms.
Never do any data manipulation or calculations in reports; instead, have them aggregated in rules.
Never hard-code values in Crystal or PL/SQL code. Use a constants package, or store them in a reference table/generic classifier, and use those in the reporting logic (see the sketch below).
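For example, a hypothetical constants package keeps literals out of Crystal and PL/SQL:

CREATE OR REPLACE PACKAGE pkg_rep_constants AS
  -- Values that would otherwise be hard-coded in reports or procedures
  c_min_txn_count  CONSTANT NUMBER       := 5;
  c_retail_channel CONSTANT VARCHAR2(20) := 'RETAIL';
END pkg_rep_constants;
/
-- Usage in the reporting logic:
--   WHERE t.channel = pkg_rep_constants.c_retail_channel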
An OD ticket will be required for the following tasks, to support report development:
Developer access to the database: access to the EXT schema (database user fdc1ext, password and TNS details).
New TrueComp Admin users are by default set up under the BO user group; a ticket needs to be raised to move them to the admin user group.
Adding the driver package to the Generate Statements calculation (ODSReportsGenerationConfig.xml) requires a ticket.
Code Promotion – a ticket must be raised to move the reporting code from one environment to another (e.g. DEV to UAT, or UAT to PROD).
User Sync Request – to be requested as and when needed; it is also required just before go-live.
Stage     Description
Classify  Classifies valid transactions.
Allocate  Calculates credits and primary measurements and allocates them to position assignments. The output of the Allocate stage is credits and primary measurements.
Reward    Calculates secondary measurements, incentives, commissions, and deposits for each position assignment. The output of the Reward stage is deposit amounts for participants for the period in which the calculation is run.
Pay       Calculates trial payments for periods that are not finalized (marked as closed), or trial balances for periods that are finalized.
Post      Changes trial payments and balances to posted status (read-only).
Finalize  For a position group, or for all positions effective in a period, Commissions marks the period as finalized. Thereafter, Commissions generates balances instead of payments if you run the Pay stage.

Hook           Description
pre-stage      Pre-stage hooks are executed prior to the specified stage. For example, a pre-stage hook for the Allocate stage will execute before the Allocate stage begins.
post-stage     Post-stage hooks are executed after the specified stage has completed. For example, a post-stage hook for the Allocate stage will execute when the Allocate stage completes.
stage failure  A stage failure hook is called if any stage exits because of an exception during the pipeline run. Only one stage failure hook can be specified for the whole pipeline. This is not commonly used.
[Diagram: SDI-based connectivity. In the Commissions cloud, the HANA server and the Commissions pipeline database (REST APIs) exchange flat files over HTTPS through the firewall with the customer's on-premise landscape, where a DP Agent uses adapters (ECC, Hadoop, RDBMS, file) to connect to the source systems; extract files land in the PRD drop box.]
If the credit processing requires an externally calculated value to be available on the transactions, a stagehook (SH) can be used to update certain generic attributes (GAs) or generic numbers (GNs) on the transaction. It is not always possible to have this information available when the file is loaded into Commissions.
A transaction should be credited only if the payee has at least N transactions for the product in the month.
If there is a large number of products and the list changes often, it may not be practical to create a PMR rule to count every product's sales for the month.
In this case, a stagehook can be run that counts the transactions in the system and, for each transaction for the product, stamps the count on the transaction (sketched below).
This count is then readily available to the credit rule, and the logic can be implemented in credits.
Note: files may not be loaded for the entire month at once; they are loaded day by day, so the total count is not known until the pipeline stagehook runs.
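A hedged sketch of such a stagehook update, assuming the count is stamped into a generic number; the table and column names are illustrative, not the exact product schema:

-- Stamp every transaction with the month's count for its payee/product combination
MERGE INTO cs_salestransaction t
USING (SELECT salestransactionseq,
              COUNT(*) OVER (PARTITION BY payeeid, productid) AS txn_count
         FROM cs_salestransaction
        WHERE periodseq = p_periodseq) s  -- period supplied to the stagehook
ON (t.salestransactionseq = s.salestransactionseq)
WHEN MATCHED THEN
  UPDATE SET t.genericnumber1 = s.txn_count;
-- The credit rule can now simply compare GENERICNUMBER1 against the threshold N.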
• Example: a payee’s compensation may depend on the overall achievement of his team.
This can be done via rollup and rolldown in the pipeline. However, if the team changes often, relationships can be too effort-intensive to maintain. Storing the relationships in a classifier or position attribute is often easier to maintain (see the sketch below).
Performance issues can occur due to rollups if the SMRs/IRs of the teams are too large.
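A sketch of the classifier/attribute-based alternative, assuming the team ID is stored in a position generic attribute (all names illustrative):

-- Team totals without maintaining rollup relationships in the hierarchy
SELECT p.genericattribute1 AS team_id,
       SUM(c.value)        AS team_total
  FROM cs_position p
  JOIN cs_credit   c ON c.positionseq = p.positionseq
 WHERE c.periodseq = p_periodseq
 GROUP BY p.genericattribute1;
-- Each payee's result can then be compared with the total for their team.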