Interview Questions SAP CPI

Difference between Header and Property:

If the information needs to be transferred to the receiver system, then a 'Header' should be used in the
Content Modifier. If the information is internal to the iflow, then a 'Property' should be used. The property
lasts for the entire duration of an exchange, but it is not transferred to a receiver.

Suppose I have two integration flows (IF1, IF2), where IF2 is connected to IF1 via ProcessDirect. Properties
set in IF1 cannot be accessed in IF2, because the scope of a property is the iflow (IF1), whereas headers
created in IF1 can be accessed in IF2 because a header has global scope.

Note: you can define properties in a local integration process and use them in the main integration process
as well.
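
For illustration, a minimal Groovy script sketch (standard CPI script step; the header and property names are
only examples) showing the difference in practice:

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        // Property: visible only inside this iflow, not transferred to the receiver
        message.setProperty("orderCount", "5")
        // Header: travels with the message and can reach the receiver (or a ProcessDirect consumer)
        message.setHeader("SourceSystem", "Ariba")
        return message
    }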

If we want to share data between iflows, what options do we have?

 Global Variable
 Data store
Difference between variable vs Data store :

If you store values in headers or properties in your iflow, those values will be gone when the iflow
finishes. If you need to store values longer than that, you can store them in a variable using the Write
Variables step. In other words, they're a way to persist data.

A global variable can be accessed by any iflow, but a local variable can only be accessed by the iflow that
wrote it.

You can see all the variables you've stored in the Operations view => Manage Stores => Variables.

Variables are similar to Data Stores, really. But variables store scalar values (i.e. one number, one
timestamp etc.) whereas Data Stores contain complete payloads (e.g. an XML or JSON document).

Please note that while there's a Write Variables step, there's no Read Variables step. To fetch the value
of a variable into a property or header, you use a Content Modifier with the type set to either Local
Variable or Global Variable.
------------------------------------------------------------------------------------------------------------------------------------

In SAP Cloud Integration (CPI), variables have two types of scope: local and global:
 Local scope: Variables are only accessible and visible to one IFlow.
 Global scope: Variables can be read and modified by any IFlow.

------------------------------------------------------------------------------------------------------------------------------------------

Difference between Request Reply vs Content Enricher:

Both steps make a synchronous call to an external system. Where they differ is in how the response is
handled.

In the case of Request-Reply, the response from the external system replaces the current payload. This
means that if you need both the old payload and the response from the external system, you need to
store the old payload in e.g. an exchange property before making the external call.
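
As a hedged sketch of that idea, a small Groovy step before the Request-Reply can copy the body into an
exchange property (the property name originalPayload is just an example); after the call, the property still
holds the old payload:

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        def body = message.getBody(java.lang.String)
        // keep the current payload so it survives the Request-Reply response
        message.setProperty("originalPayload", body)
        return message
    }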

In the case of Content Enricher, the response from the external system is merged into the current
payload, as per the step configuration. An example of this could be a lookup of e.g. customer details.
The returned information is added to a customer element in the current payload, thereby enriching it.

Difference between Combine vs Enrich in Content Enricher? / Aggregation algorithms in Content Enricher?

Now, we select the aggregation algorithm for the content enricher. There are two options for
aggregation algorithm:

1. Combine: On using this option, the Lookup Message is appended to the Original message.
2. Enrich: On using this option, the Lookup Message is merged with the Original message using a
common field.

A Content Modifier is used to modify messages, such as adding/removing headers and properties and
altering the payload of the message.

What is exception subprocess:

An exception subprocess is used to handle errors that occur in an iflow. It allows developers to manage
errors gracefully by specifying the actions to be taken when an exception occurs.

Example : can be used to send an alert email or log the error
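
A minimal Groovy sketch, assuming a script step placed inside the exception subprocess, that reads the
caught exception from the standard CamelExceptionCaught exchange property and stores its message
(e.g. for an alert mail body):

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        // the exception that triggered the subprocess is available as an exchange property
        def ex = message.getProperty("CamelExceptionCaught")
        if (ex != null) {
            message.setProperty("errorMessage", ex.getMessage())
        }
        return message
    }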

How can sensitive data be handled securely in sap cpi?

Sensitive data such as credentials or API keys should be handled using security artifacts (like keystores
and credential artifacts) and stored in a secure format in CPI rather than hard-coded in the iflow.

How do you handle large messages in sap cpi?

For large messages, a Splitter can be used to break them into manageable chunks.

How can you encrypt a message payload in sap cpi?

Payload encryption can be achieved using the PGP Encryptor step, which encrypts the payload with a PGP
public key, making it secure for sensitive data transfer.

How do you debug the iflow :

Debugging in CPI can be done using Trace mode, which provides detailed information about each step,
including the message payload and headers.

How can you test connectivity between SAP CPI and external systems?

You can use the Connectivity Tests feature in the Monitoring view (Monitor -> Manage Security ->
Connectivity Tests).


How is data encryption handled in sap cpi?

Data encryption in SAP CPI can be implemented using keystore artifacts to encrypt/decrypt messages.

How is exception handling implemented in sap cpi?

Exception handling can be implemented using an exception subprocess. A Groovy script can be used to log
errors, and alerts can be configured to notify about exceptions.

What is a data store?

A data store is used to store messages temporarily. It allows integration flows to be more reliable by
enabling message persistence, which ensures that a message is not lost when an error occurs.

Security artifacts are used to store credentials. These artifacts are essential for securing communication
between CPI and external systems.

Examples include: user credentials, OAuth credentials.

A Timer is used to trigger the iflow at specific intervals or times.

-------------------------------------------------------------------------------------------------------------------------------------

Difference between content enrich vs Poll enrich

Content Enrichment is used when you need to enrich a message by fetching additional data in real-time
from an external system, based on information available in the incoming message.

Poll Enrich:

The Poll Enrich pattern in SAP Cloud Integration (CPI) is used to enrich a message payload with
additional data from a file on an SFTP server. It can also be used to poll from an SFTP server that is
triggered by an external trigger, such as an HTTP call.

To poll multiple files using the Poll Enrich pattern, you can implement a loop in one integration flow or
call another integration flow via ProcessDirect adapter

Tips :
 It will poll only one file at one execution
 As we want to poll several files from a folder, we need to encapsulate a Poll Enrich and an SFTP
Sender Adapter in a local integration process, which is called via a Looping Process Call.
How to upload certificates in SAP CPI? / Where do you upload certificates in SAP CPI?
Monitor -> Manage Security -> Keystore -> Add -> Certificate (upload the certificate here).

Did you work on OAuth authentication? How did you do the OAuth authentication?

How do you authenticate your REST API using OAuth?

Grant type: client credentials
Client ID
Client Secret
We pass these in the body to the token server and in response we get an access token. This access token
then needs to be passed as a header to the REST API.
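
As an illustrative sketch only (the property name accessToken is an assumption, e.g. set by an earlier step
that parsed the token response), the access token can then be placed into the Authorization header used by
the REST API receiver:

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        // assume an earlier step parsed the token response into this property
        def token = message.getProperty("accessToken")
        // pass the token as a Bearer header to the REST API receiver channel
        message.setHeader("Authorization", "Bearer " + token)
        return message
    }

Note: depending on the tenant, the HTTP/OData receiver adapter can also reference an OAuth2 Client
Credentials artifact deployed in Security Material, in which case the token handling is done by the adapter
itself.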

Example 1: Repeat the Purchase Order Number Using UseOneAsMany.


Let’s assume we have an interface to transfer Purchase Order information. The source message and
desired target message are as follows.

Purchase Order (PONumber) should be repeated at target for each line item.

--------------------------------------------------------------------------------------------------------------------------------

Difference between Looping Process Call vs Process Call

Looping Process Call: add a Looping Process Call step to repeatedly execute the steps defined in a local
integration process until the condition is met or the maximum number of iterations allowed is reached,
whichever comes first.

Example: in a single call we cannot pull the entire data set from SuccessFactors, so we go for a Looping
Process Call; it will iterate/recall the same local integration process multiple times.

If you are querying SAP SuccessFactors EC using the CompoundEmployee API, how would you query all
the records if the page size is set to 200 and there are a thousand records in EC?

This can be useful for automating tasks or working with large data sets.

Looping Process call:


You need to specify how many records you want to pull for each call.

We specify that the Looping Process Call works according to the condition expression specified in the
"Condition Expression" field. By stating that hasMoreRecords contains 'true', we indicate that the loop
will continue to run as long as there are more records to fetch.

${property.payloadStack}${in.body}: this expression is used in a Content Modifier to keep appending each
incoming data fragment (page) to the accumulated body.
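
A hedged Groovy alternative to the payloadStack Content Modifier expression above (same property name),
appending each fetched page to the accumulated payload:

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        def page = message.getBody(java.lang.String)
        def stack = message.getProperty("payloadStack") ?: ""
        // keep appending each fetched page so nothing is lost between loop iterations
        message.setProperty("payloadStack", stack + page)
        return message
    }
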
What if we still have more records to fetch but the loop max iteration value is reached? In such a scenario,
how do we fetch the missing records?

Separately, the runtime property SAP_PollEnrichMessageFound can be used to check a file-exists condition
in a Looping Process Call step (Poll Enrich scenario).

Process call:

A process call in SAP Cloud Integration (CPI) is a way to connect a local integration process to an
integration flow

Idempotent process call :Detects if a message ID has already been processed and stores the status of
the successful process. This can be useful for modeling exactly-once or at-most-once handling

LOCAL INTEGRATION PROCESS: A Local Integration Process is like a subprocess-call within an IFlow

(Example integration flow demonstrating how to use a Local Integration Process; the iflow is divided into
3 parts.)
---------------------------------------------------------------------------------------------------------------------------------------

How do you create reusable integration iflow in sap cpi?

Reusable integration flows can be created using subprocesses and the ProcessDirect adapter; common
functionality can then be reused across different iflows.

SFTP Authentication:

 User credentials
 Public Key

Public Key Authentication:

 If the authentication type is public key, go to Monitor -> Manage Keystore, click the Create
dropdown and select SSH Key.
 Once you create the SSH key, you need to download the public key and share it with the SFTP
server team.
 Once you download the public key and send it to the SFMC team, they will assign a user to this
public key.
 The SFTP team will add the user to this public key; this is not our responsibility, the SFTP team
takes care of it.
 Once the SFTP connectivity test is done, you can use it in the iflow SFTP adapter configuration.

 How to check whether my file exists on the SFTP server or not?

Test Connectivity -> SSH -> tick the "Check Directory Access" checkbox and provide the path of the
directory to see all the files in that directory.

SFTP – test connectivity

SSH Connectivity Tests: When you have selected the SSH connection type, the test tool checks if the SSH
outbound connection reaches the associated SFTP server.

Depending on the chosen authentication, the following is checked by the test:


 If the server (host) is reachable for the tenant

 If the configured known_hosts file deployed on the tenant contains the certificate of the SSH
server.

Cloud Connector Connectivity Tests: When you have chosen the Cloud Connector, the test tool checks
whether the Cloud Connector has been configured and can be reached.

Kafka Connectivity Tests: If you choose the Kafka Connectivity test, the test tool checks if the
connection is successful or not.

AMQP Connectivity Tests: If you choose AMQP (Advanced Message Queuing Protocol), the test tool
checks if the connection is successful or not.

SMTP Connectivity Tests: You can perform SMTP connectivity tests to check the settings required for
configuring the receiver mail adapter.

TLS Connectivity Tests

When you've chosen the TLS connection, the test tool checks the following:

 if the receiver (host) is reachable for the tenant.

 if the keystore is deployed correctly and contains those keys that are required for the specified
authentication method during TLS handshake.

Common Use Cases of Dynamic File Name and Directory (see the sketch after this list)

 Adding a timestamp to file names
 Appending a unique message ID to avoid file overwriting
 Dynamically including data elements from the incoming message, like OrderID, InvoiceID,
etc., in the file name (e.g., <OrderID>_yyyyMMddHHmmss.xml)
 Determining at runtime the target directory where the file should be saved, based on incoming
message content, incoming filename pattern, etc. (e.g., move files starting with "Order_" to the
"Order" directory)
 How can the target directory be determined during runtime and dynamically assigned to the
receiver adapter? (For example, files starting with "Orders" should be moved to the "Orders"
target folder on the SFTP server, invoices to the "Invoices" folder and all others to the "Others"
folder.)
 How to pick a file from the SFTP sender based on a timestamp in CPI?
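
A minimal sketch of how such a dynamic file name could be built in a Groovy step (the OrderID property
and the naming pattern are assumptions); the receiver SFTP adapter then uses ${header.CamelFileName} as
the file name:

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        def orderId = message.getProperty("OrderID") ?: "NA"
        def ts = new Date().format("yyyyMMddHHmmss")
        // CamelFileName is evaluated by the receiver file/SFTP adapter
        message.setHeader("CamelFileName", orderId + "_" + ts + ".xml")
        return message
    }
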
SFTP:

Mandatory configurations:

 Address: host + port
 Proxy type: Internet or On-Premise

Authentication: public key

Connectivity test -> SSH

SFTP: will pick only one file from the directory.

In the SFTP configuration, File Name: * (if you give a star, it will pick all the files in the folder).

Cloud Connector: used to send any data to an on-premise application.

--------------------------------------------------------------------------------------------------------------------------------------

Process Direct: it is a synchronous call.

Process Direct: a common iflow that can be called by other iflows.

It is nothing but calling a common routine, as in any programming language: we usually write reusable
code once, put it in a separate place as a function or subroutine, and many programs call that routine from
many places. The same idea applies in CPI.

Most integration flows, whatever the business requirement, share some common routine that has to be done
in every flow. Instead of repeating that common code in every integration flow, we put it in a separate
place and each integration flow calls it and comes back. The call to that common routine is made through an
adapter called the ProcessDirect adapter.

For example, writing log files: almost every integration flow writes logs, so instead of repeating the same
code everywhere, we put it in a common place and each iflow calls that common routine.

Getting a token: before calling an API you need a token, and this is needed in many iflows, so instead of
repeating the get-token logic, we put it in a common place and each iflow calls that routine.

Querying master data, or writing a file to an FTP server, are other common routines that we put in a
common iflow; the other integration flows consume that iflow through ProcessDirect.

When two integration flows need to communicate with each other, the value specified for the
Address parameter in both the sender and receiver ProcessDirect adapters must match.

If you see an area of re-usability like [sending notifications] or [sending mail alerts] or [establishing a
connection] you could use process direct adapter.

Monitoring: you can leverage correlation id in message monitoring which list down all related messages
( called from main flow to multiple flows connected via process direct adapter )
JMS and Process direct are both Internal Adaptors they don’t connect to external server
Process Direct – will send the data from one integration flow to another integrationflow.

Producer flow- will provide data to other integration flow within the tenant

Consumer flow- it is used to consume the data from producer flow

Process Direct adapter:

 An internal adapter that allows calling another iflow very quickly
 Used as a way to build shared functions within the tenant
 Can be used for logging to a server
 Common functions that you need to include in many places
 It is a synchronous process
 Multiple producers can connect to a single consumer, but the reverse is not possible. The
cardinality restriction is also valid across integration projects. If two consumers with the same
endpoint are deployed, then the start of the second consumer fails.
 At runtime, data is transferred from one iflow to another iflow

Producer Integration flow:


 When the ProcessDirect adapter is used to send data to another integration flow, we consider
the integration flow a Producer integration flow. This is when the integration flow has a
ProcessDirect receiver adapter.
Consumer integration flow:
When the ProcessDirect adapter is used to consume data from another integration flow, we consider the
integration flow a Consumer integration flow. This is when the integration flow has a ProcessDirect
sender adapter.

Can an iflow call another iflow in a different package?

Is it possible to call an iflow from another iflow in a different package?

For example:

1. Package A: IDoc to CPI to ProcessDirect

2. Package B: ProcessDirect to CPI to IDoc

Can I point the receiver ProcessDirect address to a ProcessDirect sender address in another package?

Answer: Yes, it is possible; the ProcessDirect adapter is not affected by the flows being in different packages.

-----------------------------------------------------------------------------------------------------------------------------------------

Aggregator

Aggregator : used to combine multiple related messages into a single message

This is useful when a receiver expects a single message for related items, but the items are sent as
separate messages. For example, if an order is divided into an order header and multiple order items,
the Aggregator can combine the items into a single message to send to the receiver.

you can collect and store individual messages until a complete set of related messages has been
received. The aggregated message is then sent to the actual receiver.

Difference between Join vs Gather

If you want to combine messages that are transmitted to more than one route by a Multicast step, you
need to place Join step before the Gather step. If you want to combine messages that are split using a
Splitter step, you use only the Gather step.

Gather
Merges messages from different routes into a single message
Join
Combines messages from different routes without affecting the content of the messages. The Join step
can be used in combination with the Gather step
------------------------------------------------------------------------------------------------------------------------------------------

How do you debug an iflow in SAP CPI?

Debugging in SAP CPI can be done using Trace mode, which provides detailed information about each
step, including message payloads and headers.

1. After switching on TRACE mode, you have to replicate the steps again, so as to reproduce the
issue in CPI.
2. TRACE mode lasts for only 10 mins.
3. The data in TRACE mode lasts for only an hour.

Difference between Process direct and Process call

Difference :Process Direct allows you to invoke other integration flows, while Process Call is for calling
sub-processes within the same integration flow.

Process call : It’s mainly used for breaking down a larger integration flow into smaller, reusable sub-
processes. Process Call is used to call a sub-process within the same integration flow.

Use Case:

 Scenario: You have a complex iFlow that handles multiple tasks, such as data transformation,
validation, routing, and logging.
o Implementation: Use Process Call to divide the iFlow into sub-processes, each handling
a specific task. For example, one sub-process could handle transformation, another
validation, and another logging.

Process Direct :

Use Case:

 Scenario: Let’s say you have multiple iFlows that need to validate incoming data using the same
validation logic.
o Implementation: You can create a dedicated "Data Validation" iFlow with the validation
logic and expose it via Process Direct. Other iFlows that need to perform validation can
call this iFlow using the Process Direct adapter, rather than replicating the same logic in
each iFlow.
-------------------------------------------------------------------------------------------------------------------------------------------

Data Store related questions:

Data Store:

Q: How to write to data store?


A: Use DS write step.

Q: What is the difference between Visibility 'Global' and 'iFlow'?

A: 'Global' means any iFlow can access the DS; 'iFlow' means only the same iFlow that wrote it can read
it back.

Q: At Write DS, Is it mandatory to specify Entry ID?


A: No.

Q: What happens if you write to the DS with the same entry ID twice?

A: By default it will fail/error. If 'Overwrite Existing Message' is selected, it will replace/update the entry.

Q: Is it only the message body that is written to the DS?

A: The body is always written. If 'Include Message Headers' is selected, the headers are written to the DS
as well.

Q: What kind of payload format can be written to the DS?

A: No restriction; XML/JSON/text are all fine.

Q: For DS Get, what happens if no entry ID is specified?

A: It will fail; the entry ID is mandatory for DS Get.

Q: What are the main differences between DS Get and DS Select?

A: DS Get fetches a single entry; DS Select fetches multiple.
A: DS Get requires an entry ID; DS Select has no option to enter an entry ID.
A: DS Get supports different data formats; DS Select only supports XML.

Q: After Get or Select from DS, what are the ways to delete the entry?
A: Use 'delete on completion' or DS delete by entry id.

Q: When writing a list of records to the DS, if some record processing fails, can the DS operation be
partially successful and partially failed?
A: Yes. In new iflows the default 'Transaction Handling' is none.

Q: How to select multiple entries from the DS, process each entry independently one by one, process the
successful ones, skip the failed ones, and further write to different data stores depending on success or
failure?
A: Use a combination of DS Select, Splitter, Router, exception subprocess, the 'Transaction Handling'
setting, and 1 source DS + 2 target DSs to achieve this. Shown in the course lesson.

Q: What data formats are supported by the DS sender adapter?

A: XML, non-XML or any other format is OK.

Q: What is special about the DS sender adapter compared to DS Get & Select?

A: The DS sender adapter has an automatic retry feature.

Q: Why is the DS sender retry considered a 'smart' retry? Describe it, please.

A: It has an 'Exponential Backoff' retry option. Each retry doubles the wait time.

Data Store :

Business scenario 1:

Whenever a user creates an invoice in the source system (Ariba), we receive the invoice from the sender
system, CPI does the transformation, and we send it to the target system for payment; this process
continues.

If for any reason the target system is down, or there is an issue with the CPI transformation (in a Groovy
script or message mapping), the CPI interface will fail and whatever data we received from the source
system would be lost. To avoid this situation, we store the data in a data store.

Every day thousands of invoices are being sent to the target system (Oracle). If some upgrade is happening
in the target system at that time, and there is no option to retrigger from the source system again, we need
to store the payload in the data store.

We can then store the data and read it back from the data store.

Say in one day 30 invoices have failed for some reason: we can create an iflow with a Timer, fetch those
30 invoices from the data store and reprocess them to the target system, with no dependency on the source
system. Based on the schedule, it will read the data from the data store and send it to Oracle, the target
system.

Business scenario 2:

Say I have an HR system where we maintain employee details, and I also have a Manager system that holds
the manager information of each employee.

The HR system sends its data to the SFTP server as Employee.csv, and the Manager system also sends its
data to the SFTP server as Manager.csv.

I create one iflow/interface in CPI to write the Employee.csv data to a data store.

I create another iflow/interface in CPI to write the Manager.csv data to a data store.

In my third interface, I read the data written by both iflows, merge the data, do the transformations and
send it to Oracle, which is my target system.

Data store: it is like a database table. Entry ID: you can give a dynamic or a constant value (for example,
you can store the employee ID in a property and provide it here; if you don't give one, a unique value will
be generated).

Difference between Get and Select: Get reads a single message/entry, while Select reads multiple
messages/entries.

Get = message header + body; Select = message body only.

Data store: it is nothing but a table in a database used to store data; we write data into the data store and
read data from the data store using the entry ID.

One data store can have multiple messages with multiple entry IDs.

 Get
 Select
 Write
 Delete
 Once you write the data into data store it will be shown in Monitor->Manage stores -
>Datastores
 Once you create a Queue using Jms adaptor at receiver side it will be shown in Monitor -
>Manage stores ->Message queues
 In order to create Number ranges you have to Go to Monitor ->Manage stores -> Number
Ranges
 When you create variable using write variables it will be shown in Monitor ->Manage stores-
>Variable

Once you create a variable and store a value in it, there is no Get Variable step in CPI; you need to use a
Content Modifier and select the type Local Variable or Global Variable.

Select – This step selects entries from a transient data store and creates a bulk message containing the
data store entries.
Write – This step performs a Write operation on the transient data store
Get – This step gets a specific entry from the transient data store.
Delete – This step deletes an entry from a transient data store.

Visibility drop-down:

 Integration flow
 Global

Select multiple records from the data store using the Select operation.

The Entry ID can be static or dynamic: use a Content Modifier to get the value from the source XML, save
it in a property, and provide that property in the Entry ID field, something like ${property.ProductID}.

Once you successfully read the data from the data store, it will be deleted if you check the 'Delete on
Completion' checkbox.

Say you are getting multiple products from the source: you can use a Splitter before the data store so that
one large message is split into individual messages.

Say your XML has 30 products with product IDs: we can write them into a single data store with different
product IDs by providing the Entry ID dynamically, and you can read multiple entries or records back by
using the Select operation.

If you write the same message to the data store using the Write operation, it will fail unless you check the
'Overwrite Existing Message' checkbox.

After storing the data in the data store, if the retention threshold is set to 2, then for 2 days the entry will
be in Waiting status so that another flow can pick the data; after 2 days the status becomes Overdue.

You need to check the 'Delete on Completion' checkbox.

In Data Store Write, to provide the Entry ID dynamically we followed the process below:

 In a Content Modifier we created a property ProductID with the XPath expression //productID,
and we read that property in the Write Data Store step.
 We created a global variable called productID and wrote the property value into it.

In Data Store Read (Get):

 Using a Content Modifier we created a property and stored the variable value in it.
 That property is given in the Data Store Get Entry ID.

Variable Related questions :

Q: What is the difference between a local variable and a global variable?

A: A local variable can be accessed only by the same iFlow. A global variable can be accessed by different
iFlows.

Q: How to read a local variable and a global variable?

A: Use a Content Modifier to read it into either a header or a property.

Q: How to write a variable?

A: In the iFlow, use the 'Write Variables' step; take the value from a header/property/XPath/expression.

Q: Is it possible for a local and a global variable to have the same name?

A: Yes, since the scope is different between local and global.

Q: How to do delta synchronization via timestamp?

A: Use a variable to remember the last processed timestamp, so that the next scheduled run resumes from
the last processed timestamp onward.

Q: What needs to be considered when designing delta synchronization via timestamp?
A: (1) Data should be sorted by timestamp.
(2) The timestamp should be unique (e.g. only a date without time might not work).
(3) The right date field should be used for delta synchronization.
(4) Only update the last processed timestamp in the last step, if all processing succeeded.
(5) Timer/scheduler interval.

Q: What if I need to revert back to an earlier timestamp?

A: Build into the same iFlow a manualRun/adhocRun flag to set a manual timestamp that overrides the
value in the variable.
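
A sketch of that override logic, assuming the stored value was already read from the variable into a
property lastRunTimestamp, and that adhocRun/manualTimestamp are externalized parameters copied into
properties (all names are assumptions):

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        def props = message.getProperties()
        // if an ad-hoc run is requested, override the stored timestamp
        def effective = (props.get("adhocRun") == "true" && props.get("manualTimestamp"))
                ? props.get("manualTimestamp")
                : props.get("lastRunTimestamp")
        message.setProperty("deltaTimestamp", effective)
        return message
    }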

Q: Should I use a global variable or a local variable?

A: Use global if another iFlow needs to access the same variable. Global can behave like local, but not the
other way round.

Q: What ways can be used to delete a variable?

A: Manual deletion via the 'Manage Variables' page.

Q: What other potential use does a variable have?

A: Accessing the same value in different branches of a Multicast (because a property will not work for
that).

Q: At the iFlow's first run the variable is not created yet, but some initial/default value is needed for
processing; how to handle this chicken-and-egg situation?
A: Use a Content Modifier to read the variable and set a default value.

------------------------------------------------------------------------------------------------------------------------------------

 Splitter should be followed by Gather


 Multicast : should be followed by Join+ Gather

----------------------------------------------------------------------------------------------------------------------------

Simulate vs simulation

Simulate: it is done only in Message Mapping.

Simulation: we can run a simulation between palette steps in the integration flow.

Testing the Message Mapping:

 Display Queue
 Simulate
--------------------------------------------------------------------------------------------------------------------------------------

IDoc:

 IDoc: a combination of message type and IDoc type/basic type
 RFC
 Proxy: the structure is created in the ECC/S4 HANA system using proxy tools, and the WSDL is
shared with the CPI team; through that WSDL they send the data to CPI
 SFTP file

Prerequisites to implement an IDoc flow in CPI

Outbound Configuration:

High Level Steps :


In CPI

Consumer Proxy: it is also called the receiver proxy (ABAP proxy on the receiver side).

Provider Proxy: it is also called the sender proxy (ABAP proxy on the sender side).

SOAP adapter:
We generate the WSDL and give it to the ABAP team.

The endpoint is given to the Basis team so that they can create a logical port in SOAMANAGER.

XI adapter:

RFC connection

SXMB_ADM (transaction code): maintain the destination pointing to the CPI URL (IS_URL).

CPI:

 STRUST (T-code): import the SSL certificate
 RFC -> HTTP destination (HTTP destinations are of two types, G and H); we use a type G
connection
 Port
 Logical system
 We need to import certificates at both ends, i.e., the ECC side and the CPI side

Cloud Connector:

 If the SAP on-premise application is on the sender side, then we don't need the Cloud
Connector, i.e., for RFC/IDoc/Proxy.

If the IDoc is on the sender side, the control record contains the document number field, i.e., the IDoc
number; it is created in the SAP application and we get it from the SAP application.

If the IDoc is on the receiver side, the number is generated in the SAP application and comes back in the
IDoc response header property (SAP generates it), i.e., the Application Message ID.
-------------------------------------------------------------------------------------------------------------------------------------------

To call SuccessFactors we need to use the SuccessFactors adapter, which is adapter-specific.

Splitter: if there is a requirement to process record by record, we can use the Splitter.

General splitter vs iterating splitter?

 General splitter: breaks down a message into multiple messages keeping the encapsulating
elements (root element).
 Iterating splitter: breaks down a message into multiple messages without the encapsulating
elements (root element).

What is difference between Sequential Multicast vs Parallel multicast

Multicast : You can use the Multicast step to send copies of the same message to multiple routes. You
can send copies to all routes at once using Parallel Multicast.

Sequential Multicast: In sequential multicast, messages are sent to each receiver one after the other.
The next receiver only receives the message after the previous one has been processed.
If one of the branches fails during processing, the sequential multicast will stop further processing of the
remaining branches.

Parallel Multicast: :

In parallel multicast, messages are sent to all receivers simultaneously. Each receiver processes the
message independently and concurrently.

If one branch fails in a parallel multicast, the other branches continue processing. Each branch can
handle its own errors independently,

----------------------------------------------------------------------------------------------------------------------------------

Node functions: when an interviewer asks about node functions, always explain how many inputs each
function takes and what it does.
 createIf – it always expects a Boolean value, true or false
 exists – if the value exists in the source it returns true; if the value doesn't exist in the source it
returns false
 formatByExample – also takes two inputs
 useOneAsMany – takes three inputs
 ----------------------------------------------------------------------------------------------------------------------

What is difference between fixed value mapping vs Value Mapping

Fixed Value Mapping: fixed value mapping is very useful when you have lots of conditions on one source
field during mapping; to avoid IfThenElse or a UDF, we go for fixed value mapping.

Example: your source field contains 01 or 02 or 03 or ... 12 and you want the result as JAN or FEB or
MAR or ... DEC.

Advantage: Using fixed value table you can see the result immediately in the mapping.

Disadvantage: If you have used the same fixed value mapping in several places in your mapping, then in
case of any change in the fixed values you have to make the change in several places. So, in short, fixed
value mapping is not good from the maintenance point of view.
Value Mapping: value mapping works in the same way as fixed value mapping, only you get the result at
runtime, and you define the value mapping as a separate Value Mapping artifact.

Advantage: If you have used the same value mapping in several places in your mapping, then in case of
any change in values you don't have to change your mapping; just make the change in the value mapping
artifact, that's it.

Disadvantage: you can't see the result immediately in the mapping. Results can be seen only at runtime.

Scenario:

Fixed values – we can give a maximum of 10 fields.

Value Mapping interview question: say the source XML has one field with 100 possible values; is it
necessary to create 100 entries manually in the value mapping? No – I would put all 100 values in a CSV
and upload it into the value mapping.
Value mapping also has different failure options: 1. Use Key 2. Use Default Value 3. Throw Exception.

 Router step to send message to different routes based on condition, process using different
logic and optionally back to single main route.
 Multicast step to send the same message to multiple routes to process differently, and
optionally gather back results from all different routes.

Splitter step to tackle challenge of splitting large payload to smaller one then only process.

A Router should have a minimum of two branches, one of which is the default route.

-----------------------------------------------------------------------------------------------------------------------------

Encryption and Decryption:

Source -> CPI (here we share the public key of CPI with the source system, so that the source system
encrypts the data and CPI decrypts it with its own private key).

CPI -> Receiver system (here the receiver system shares its public key with CPI, so that CPI encrypts the
data and the receiver system decrypts it with its own private key).
Outbound flow:

From CPI to the receiver system: the receiver system generates two keys, public and private, and the
public key is provided to the CPI developer so that he can encrypt before sending to the receiver system.
Once the encrypted content has reached the receiver system, they decrypt it using their own private key.
Whatever public key you have uploaded in Monitor -> Manage Security -> PGP Keys, that key's user ID
has to be provided in the Encryption Key User ID field.

Signing: if you select the signing option, it asks for a private key; we need to generate the private key, and
the corresponding public key has to be given to the recipient.

If you want to retrigger failed messages from CPI:

Primarily, messages can fail at three layers.

Extract: if the source system is down, that can be managed fairly well.

Transformation: the case where we have received the message and it fails during the transformation.

Load: the case where delivery to the target fails.

You can use the JMS adapter to build a scenario to retrigger failed messages, where you hold the message
in a queue and reprocess it in case of any failure. But this adapter and design come with some constraints,
because messages are stored in a queue and you have to manage that queue (queue management), and
there is also the case where the customer is not willing to use the JMS adapter.
CPI Data Stores


There are many scenarios where message content needs to be stored either as a whole, parts of, as an
error response or specific configuration data. Data Stores offer users with a means to achieve this
temporary storage in CPI. There are two types of Data Stores - Local and Global. Local Data Stores are
restricted to the iFlows they are created in while Global Data Stores can be accessed by various iFlows.
Use Cases
There are three common use cases for data stores:
Store & Pick Up Scenario
 This scenario globally stores the message for later use by another iFlow.
 It can be seen as a push pull pattern. The message received from a sender is stored in the data store. A
receiver seeking information actively polls (reads) the message in another message processing run
(iFlow).

Asynchronous Decoupling Scenario

 This scenario decouples the sender & receiver systems.


 The message is received by the sender is stored and independently received by another iFlow to forward
to the receiver.

Asynchronous Decoupling with Error Scenario

 This scenario decouples sender & receiver systems when an error occurs.
 If the message is successful, it is passed to the receiver within the same iFlow.
 If an error occurs, the error message is stored in the Data Store to be accessed by another iFlow for
processing.
 In this scenario by using Data Stores SAP CPI users can create a generic error handling approach for
iFlows.

Security in SAP CPI:


 Server room
 MD room
 CEO room
Message Monitoring :
Encapsulating element: means the header. The first record always acts as the header, and that header is
populated in each and every message.

General splitter: splits the composite incoming message into individual messages, keeping the
encapsulating element.

Thumb rule: the general splitter interprets the first line of the inbound message as the header or root
element.

Iterating splitter: splits the composite incoming message into individual messages without the
encapsulating element.

The general splitter output comes with the root element, but the iterating splitter output doesn't come with
the root element.
Difference between node vs node list in filter:? Need answer

Filter limitations:

Limitation:

 It works only for XML messages:


 Content Modifier to be used post Filter step to rebuild the Valid XML since Filter removes the
Root node.

Use content modifier for root node if required


-------------------------------------------------------------------------------------------------------------------------------------------

 How will Gather know that the last split message has been reached? – the CamelSplitComplete header

The Camel headers listed below are generated every time the runtime finishes splitting an Exchange.

 CamelSplitIndex :Provides a counter for split items that increases for each Exchange that is
split (starts from 0).
 CamelSplitSize: Provides the total number of split items (if you are using stream-based
splitting, this header is only provided for the last item, in other words, for the completed
Exchange).
 CamelSplitComplete :Indicates whether an Exchange is the last split.

----------------------------------------------------------------------------------------------------------------------------

 In what scenarios do we use sequential multicast


 Do you have any idea on PGP Keys
 What kind of issues you supported
 What is the complex scenario you have implemented
 Difference between poll enrich vs content enrich
 What is pagination in cpi
 How to read the 3rd employee ID in CPI
 I have payload how to remove certain characters from it
 What kind of encoding did you use
 What is difference between security material vs keystore
 Difference between cloud foundry vs NEO
 CPI Architecture
 How to connect to SFTP server

Where do you monitor the messages in CPI?

How do you test the iflow after completion?

 How to handle exceptions in iflow?


 How will Gather know that the last split message has been reached?
 What does the Process Direct adapter do?
 What are the different ways in which an Integration Flow can be migrated from one tenant to
another?

General tips for easy reference :

 We always store csv in sftp


 Process Call is used in conjunction with a local integration process
 Splitter is used in conjunction with the Gather step (general vs iterating; with encapsulating
element vs without encapsulating element)
 Multicast is used in conjunction with Join + Gather
 Router needs to have a default route (each branch has one condition); any number of branches
 Write Variables: once created, the variable can be read via a Content Modifier
 Data store: once you write, you can read it back with either the Get or the Select operation;
Select is for multiple entries and Get is for one entry
 Cloud Connector will have one Location ID
 Trace lasts 10 mins; once trace is enabled, the message logs are kept for 1 hour
 Aggregator is used when we are getting multiple incoming messages from the sender system and
want to send a single message to the receiver system
 Simulate vs simulation
 Headers will be sent to the receiver system, while properties are not sent to the receiver system
 If you use a Timer, no endpoint will be generated
While creating a variable using the Write Variables palette step, check the Global Scope checkbox if you
want the variable to be used in another iflow.

Write Variables use cases: last successful run / delta load or full load.

Another way of achieving looping is by using the General Splitter. Sometimes that might make it a lot
easier too. Though the Splitter is not a replacement for the Looping Process Call, based on the
requirement/scenario the best-fit approach can be chosen, with consideration of performance and the
expected load.

SuccessFactors also supports OData.

------------------------------------------------------------------------------------------------------------------------------------------

Looping concept in CPI

If you come from an imperative/procedural programming background (e.g. Java, ABAP, C#), you might be
wondering how to loop in SAP CPI. What are the equivalent CPI ways to do a for loop/while loop, since
all these CPI steps are drag-and-drop only without coding? The answer is using the CPI "Looping Process
Call" with a condition to exit the loop. You should also be aware of the OData V2 vs V4 looping concept
and leverage the built-in looping feature of the OData adapter.

7) Filter concept in CPI

The filter concept means taking only the necessary data: either the source system sends only the required
data, or a CPI Filter step is used to retain only the required data. For the Filter step you will need fairly
good knowledge of XPath to filter effectively.

8) Enrich/Lookup concept in CPI


There will be cases where you need to get different payloads from multiple sources or multiple calls, and
only then enrich the main payload with data from the lookup payload.

13) Persistence/Variable/Data Store mean CPI keep data for later usage

For normal integration flow processing, all headers, properties and the body are held in CPI only
temporarily during runtime; you will not be able to retrieve them again after iFlow processing ends. The
persistence/variable/data store concept asks CPI to 'remember' the data by storing it in SAP CPI.
Generally, a global/local variable stores a single value, while a data store stores a list of values/payloads
under the same data store name.
14) Exception handling concept in CPI

When an error happens in CPI message processing, we can either do nothing and let it fail in CPI with a
red error, or add an exception subprocess. Ideally we should at least be able to get back the error message
and error stack trace, and then decide how to handle the error, e.g. send an alert email, store the error
message on an SFTP server, or design an advanced reprocessing mechanism using the data store.

Local variables are only visible and accessible within one IFlow, while global variables can be used by
multiple IFlows.
 ${property.SAP_MessageProcessingLogID}: Message GUID
 ${date:now:yyyy-MM-dd}: Current date and time
 ${CamelFileName}: File name

When to use cloud connector ?


It is not a matter of sender or receiver: if CPI is initiating the pull/push request, then we need to use the
SAP CC (Cloud Connector), e.g. for SFTP, JDBC, OData, etc.

In case the source server pushes the data to CPI, there is no need for the SAP CC, e.g. HTTP, SOAP,
IDoc, etc.; we just generate the URL and share it with the source server team, they consume our CPI URL,
and so in this case the CC is not required.

Need answers to these questions:


 How is versioning managed for integration flows in SAP CPI?
 How can we migrate integration flows between different environments in SAP CPI?
 Where do we need to store these keys and certificates in SAP CPI? / Where do we upload
certificates in SAP CPI?
 Have you used the Encryption and Decryption palette steps? Assume the target wants an
encrypted file; how are you going to implement this? Which key, and whose key, do you use to
encrypt?
 What are external parameters/Externalization and how do you use them?

Did you get a chance to work on the OData adapter? Can you tell me some OData commands you used for
a business case?
How to filter content while querying data from OData entities?

Can you tell me any 5 Apache Camel expressions you used in your current project?

Have you used the Encryption and Decryption palette steps? Assume the target wants an encrypted file;
how are you going to implement this? Which key, and whose key, do you use to encrypt?

Where do we need to store these keys and certificates in SAP CPI? / Where do we upload certificates in
SAP CPI?

How are you going to handle exceptions in SAP CPI? Which Apache Camel expressions hold these
exceptions?

 What is the difference between parallel and sequential multicast?
 In what scenarios do we use sequential multicast?
 In what scenarios will you use Gather/Join?
 What are external parameters/Externalization and how do you use them?
 What are the removeContexts and useOneAsMany functions?

Externalization:

Integration Developers build their integration flows using the SAP Cloud Platform Integration tools on
their development systems and once the development is complete, they move them to the test and
production systems. During this development phase, they realize that the same integration flow may not
work as is, across different systems and would require changes in the configurations of adapters or flow
steps. To overcome this situation, they use the externalization feature offered by SAP Cloud Platform
Integration tools.

Externalization feature enables an integration developer to define parameters for certain configurations
of adapters or flow steps of an integration flow, whose values can be provided at a later point in time,
without editing the integration flow.

ODATA Operations – Create, Merge, Update, Query, Read, Delete, Patch and Function Import.
Multicast question:

If you use sequential multicast and one of the receivers fails, what happens next? Will the data go to the
next receiver or not?

If you use parallel multicast and one of the receivers fails, what happens? Will the data go to the other
receivers or not?

Content enrich questions:

Content Enrich

 What is the difference between Combine and Enrich in the Content Enricher?
 How do you correlate the original message and the lookup message?
 In the Content Enricher, if you use the aggregation algorithm Enrich, what are the mandatory
parameters?

 ODATA Receiver Adapter Configuration –

Transport Protocol – HTTP

Message Protocol – OData V2

Connection Details-

We need to provide Address, Proxy Type and Authentication Parameters details which are
Mandatory.




Output Payload with Aggregation Algorithm 'Combine'

When you use the aggregation algorithm Combine, you get all the details as per the query execution; when
you use Enrich, the product details are added to the original supplier message based on the Supplier ID.

Mail Adaptor questions:

 Have you worked on the Mail adapter? If yes, what are the prerequisites to use the Mail adapter?
 How do you populate the subject line dynamically? In the body I want to populate the iflow
name, message ID, date/time, etc. if any integration process fails; how do you implement this
requirement? (See the sketch below.)
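
One hedged way to do it: build the subject and body text in a Groovy step from standard values and
reference them in the mail adapter as ${property.mailSubject} and ${property.mailBody} (the property
names, and the errorMessage property, are just examples):

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        def mplId = message.getProperty("SAP_MessageProcessingLogID")
        def now = new Date().format("yyyy-MM-dd HH:mm:ss")
        message.setProperty("mailSubject", "Iflow failed - MPL " + mplId + " at " + now)
        message.setProperty("mailBody", "Error: " + message.getProperty("errorMessage"))
        return message
    }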

General Questions :

 Did you get any chance to interact with the business directly? If yes, in which case did you
interact with the business?
 Can you take one interface you implemented recently and explain the business case and the
step-by-step implementation?
 What is the CPI architecture?
 What is the CPI tenant landscape in your current project, and what type of work did you do in
each environment?
 How many interfaces do you have in the current project? How many have you implemented?
 What is the project landscape, or what systems are involved in the current project?

Request Reply- HTTP ,ODATA,JDBC

JMS adaptor : Queue based application ...first in first out

Exception sub process:

 We can have Exception sub process in both main integration process and Local Integration
Process
 Camel Expressions ${exception.stacktrace} and ${exception.message}

 ----------------------------------------------------------------------------------------------------------------

Cloud connector:

We use it only on the receiver side, but we also use it on the sender side when there is an action initiated
from CPI:

 JDBC – we are pulling data from a database, so the action is from CPI; we use the Cloud
Connector on both sender and receiver sides
 SFTP/FTP – we are pulling the file from SFTP, so the action is from CPI; we use it on both
sender and receiver sides
 OData – we are pulling the data; we use it on both sender and receiver sides

Any action from CPI, push or pull.

45. What is Exception Handling in SAP CPI?


A: Exception Handling in CPI involves capturing and managing errors during integration flow execution. It
typically involves using exception subprocesses, alerting, and retries to handle and mitigate issues.

42. Explain the concept of Retry Mechanism in SAP CPI.


A: The CPI retry Mechanism automatically retries failed message processing steps, either due to
temporary network issues or other system errors, ensuring the reliability of message delivery.

41. How do you configure Security Certificates in SAP CPI?


A: Security certificates in CPI are configured in the Keystore. These certificates ensure secure
communication channels, like HTTPS and FTPS, and enable digital message signing.

26. How do you enable Trace for an iFlow in SAP CPI?


A: Tracing in SAP CPI can be enabled within the iFlow by setting the Log Level to Trace in the integration
flow properties. This allows detailed logging of message payloads, headers, and properties during
runtime.
How will the Gather know that the last message has been reached? How will Gather know how many split
messages there are?

Some properties get created for that.

Gather looks at two properties:

 CamelSplitComplete: it is always true or false; if it is false, there are still messages to come
 CamelSplitIndex

If CamelSplitComplete is true, then Gather knows that it has to combine the messages.

Join with Gather is used in combination when you have multiple branches – thumb rule.

If you are using Multicast, whether sequential or parallel, we use Join and then the Gather step to combine
all the messages.

Otherwise splitter with gather combination is used

Class -8

 Router – there will be multiple branches, but based on the condition, whichever is satisfied, the
message will go to that branch (one default route if none of the conditions is satisfied)
 Parallel multicast – messages go to all the branches/multiple branches independently
 Sequential multicast – messages go to multiple branches in sequential order; if branch 1 fails, it
will not go to branch 2; one branch is dependent on the other
 Gather vs Aggregator
 If the sender is sending multiple messages, we can merge the multiple messages and send them
to the receiver system – Aggregator
 If the sender is sending a single message, we split it into multiple messages, then merge all the
messages and send them to the receiver system – Gather
 If the sender is sending 10 records and the Last Message field is false for 9 records and true for
the 10th record, then merging starts
 The sender-side message will have a Last Message field like this; merging starts when it is true
 We merge based on either data (completion condition) or time


Ways to connect from S4 Hana to SAP CPI :

 Idoc
 RFC
 Proxy
 SFTP File

 Outbound configuration

 IDoc type
 Control records: IDoc number, IDoc type, sender port, sender port type, receiver port, receiver
port type, sender partner, receiver partner
 As we have Write Variables but no step to get/read variables, you need to read the variable from
a Content Modifier by selecting the type Local Variable



In real time as well, we can't poll more than one file using Poll Enrich in a single run; Poll Enrich will pull
only one file at runtime too.

SFTP Adaptor :

 Have you worked on the SFTP adapter? If yes, what are the prerequisites to use the SFTP adapter?
 How to check the SFTP connectivity from CPI?
 What is the use of the known_hosts file? How do you get those details?
 How to check whether my files exist on the SFTP server or not?
 Assume we are getting order details from the source system but the business wants the order
number in the file name on the target system; can we implement this requirement in SAP CPI,
and if yes, how?
 How to populate file names dynamically using the FTP/SFTP adapter?
 Append Timestamp (checkbox) to get the timestamp in the file name, but I don't want minutes
and seconds
 We have used key-based authentication for SFTP

You have to use multiple Poll Enrich steps to pull data from multiple folders.

How to maintain multiple directories, and how can we achieve this (via value mapping?).
Let’s Enrich (Part 1)

Why need enrich?


The data that needs to be integrated will most likely not come from the same source. It could come from
different systems, APIs or tables, but still be related via some keys. Normally, before the actual mapping,
you need to enrich the original data: do a lookup to another source (e.g. OData) based on the matching
keys, then merge the results into a single data set. The Content Enricher is designed for this purpose.

Content enricher help us do below:


Original XML payload + Lookup XML payload = Enriched XML payload

There are different strategies, things to avoid or consider, when enrich original payload with lookup
payload from OData. If we not pay attention to those details, our flow might not work, become very
slow when dealing with large data, or worse if it works intermittent (sometime ok, sometime error).

Trust me, you don't want the error come and punch you in the face, every now and then.

How to enrich?
How big is the data that needs to be looked up?
How do we look up large data efficiently and reliably?

(1) Small data

If the lookup data size is relatively small, this is the simplest case; you can likely use a single Content Enricher and a single OData call to make the enrichment to the original XML.
This stays true regardless of the original XML payload size (big or small); of course the original XML should be of a reasonable size so that it does not cause memory issues during iFlow processing.

(2) Bigger data with dynamic $filter

If the original XML is relatively small but the lookup data is large, say 50K, 200K rows or more, then the Content Enricher step will be unbearably slow, because it queries all the data unnecessarily.
To speed up the enrichment, you should query OData with a $filter query string built dynamically from the original payload's keys, so that only the required rows are returned, reducing the number of rows retrieved.
In the lesson, I will show you a reusable Groovy script to build the dynamic $filter query string, for example:

$filter=(OrderID eq 10248) or (OrderID eq 10249) or ...
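As a minimal sketch of such a script (assuming the original payload carries OrderID elements; the element and property names are illustrative):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Assumption: the original payload contains <Order><OrderID>10248</OrderID></Order> records
    def xml = new XmlSlurper().parseText(message.getBody(String))
    def ids = xml.'**'.findAll { it.name() == 'OrderID' }*.text().unique()
    // Build: (OrderID eq 10248) or (OrderID eq 10249) or ...
    def filter = ids.collect { "(OrderID eq ${it})" }.join(' or ')
    message.setProperty("dynamicFilter", filter)
    return message
}

The property can then typically be referenced in the OData receiver channel's query options, e.g. $filter=${property.dynamicFilter}.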

(3) Query string length

Be mindful when building the query string: there is a limit on the number of characters in a query string. If you do not take the maximum URL length into consideration, you will end up with intermittent errors (like I experienced in the past). In the lesson, I will show you how to handle these concerns.
(4) URL encoding

Also, you should consider URL encoding when building the query string, otherwise you might end up with intermittent errors. I will show you an alternative way to use + as shorthand for space instead of %20, and why it is a good alternative.
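A small illustration of the encoding (plain Groovy/Java, nothing CPI-specific; the sample filter value is made up):

def filter  = "(CompanyName eq 'Alfreds Futterkiste')"
// URLEncoder targets application/x-www-form-urlencoded, so spaces become '+'
def encoded = URLEncoder.encode(filter, "UTF-8")
// -> %28CompanyName+eq+%27Alfreds+Futterkiste%27%29
println encoded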

Let’s Enrich (Part 2)

(5) Loop with best-guess $filter

For large original data plus large lookup data, using $filter is a must for performance reasons. You can avoid the query string length limitation by doing the enrichment in a Splitter loop with best-guess grouping, then combining all the results back. I will guide you through how to achieve this in the lesson.

(6) Multiple reliable $filter

Besides best-guess Splitter grouping, there is a reliable and efficient way to query OData with several different $filter query strings that never exceed the maximum URL length and still work regardless of how complex the $filter clause is, with multiple "and" and "or" conditions. I have prepared another reusable Groovy script to build multiple $filter strings, plus an example iFlow, so that you can apply this concept to your real-world integration requirements. (I built these examples based on my real-world project experience.)

(7) Enrich XML with an XML lookup using Groovy

Why limit yourself to the Content Enricher? In some cases you might want to do other XML manipulation besides plain enrichment. Or you have two different XML payloads sitting in exchange properties; then you can simply use a Groovy script to enrich and combine both XMLs into a single enriched XML, together with any other necessary transformation. This is the most flexible way to enrich, caters for more complex requirements, and supports local testing as well (see the sketch below).
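A minimal sketch of this idea, assuming an order payload in the body and a customer lookup payload stored earlier in the exchange property lookupPayload (the structure and names are illustrative, not from the notes):

import com.sap.gateway.ip.core.customdev.util.Message
import groovy.xml.XmlUtil

def Message processData(Message message) {
    // Assumption: body = <Orders><Order><CustomerID>..</CustomerID></Order>...</Orders>
    // Assumption: property "lookupPayload" = <Customers><Customer><CustomerID>..</CustomerID><CustomerName>..</CustomerName></Customer>...</Customers>
    def orders    = new XmlParser().parseText(message.getBody(String))
    def customers = new XmlParser().parseText(message.getProperty("lookupPayload") as String)

    orders.Order.each { order ->
        def match = customers.Customer.find { it.CustomerID.text() == order.CustomerID.text() }
        if (match) {
            // Add the looked-up name as a new element under the matching order
            order.appendNode('CustomerName', match.CustomerName.text())
        }
    }
    message.setBody(XmlUtil.serialize(orders))
    return message
}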

(8) Enrich XML with a JSON lookup using Groovy

What if your data is JSON? Groovy mapping enables you to enrich directly across different formats involving JSON.

(9) Enrich JSON with an XML lookup using Groovy

The Content Enricher only accepts XML format. Using Groovy enables you to preserve the original JSON, look up the XML flexibly, and enrich back into JSON.

(10) Enrich JSON with a JSON lookup using Groovy

Lastly, nowadays many APIs are already in pure JSON format only. This direct JSON-to-JSON lookup keeps things simple: no need to convert between XML and JSON back and forth. It can enrich a subset of JSON nodes as well (see the sketch below).
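A minimal sketch for the JSON-to-JSON case, assuming the body is an array of orders and the lookup JSON was stored in the exchange property lookupJson (the field names are illustrative):

import com.sap.gateway.ip.core.customdev.util.Message
import groovy.json.JsonSlurper
import groovy.json.JsonOutput

def Message processData(Message message) {
    def orders    = new JsonSlurper().parseText(message.getBody(String))
    def customers = new JsonSlurper().parseText(message.getProperty("lookupJson") as String)

    // Index the lookup data once, then enrich each order in place
    def byId = customers.collectEntries { [(it.customerId): it] }
    orders.each { order ->
        def match = byId[order.customerId]
        if (match) {
            order.customerName = match.customerName
        }
    }
    message.setBody(JsonOutput.toJson(orders))
    return message
}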
Exception Subprocess

Similarly, if we are using JMS as a sender adapter and have set a condition to end retries after a certain retry count, then an Escalation End event is the best choice.

How do you handle exceptions in an iFlow?

To handle exceptions in an SAP Cloud Integration (CPI) iFlow, you can use an exception subprocess to capture the exception message and send an email notification to the relevant stakeholders.

The exception subprocess captures the exception message and prepares an XML message body that includes the date, time, message ID, iFlow name, and exception message. The email notification includes the same information, plus details about the interface and the exception caught.

Let's say you have an iFlow named Exception Subprocess. In this case we ought to develop an exception subprocess, so that the support team receives a notification email if any exceptions or errors occur in that flow.
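As a minimal sketch (the output property names are free to choose; CamelExceptionCaught is the standard Camel property, and SAP_MessageProcessingLogID is, to my knowledge, the standard property holding the message ID), a Groovy step inside the exception subprocess could collect the details like this:

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // The exception caught by the Exception Subprocess is exposed as this exchange property
    def ex = message.getProperty("CamelExceptionCaught")
    message.setProperty("errorMessage", ex != null ? ex.getMessage() : "Unknown error")
    message.setProperty("errorTimestamp", new Date().format("yyyy-MM-dd HH:mm:ss"))
    // Message ID of the current run, useful for the notification mail
    message.setProperty("mplId", message.getProperty("SAP_MessageProcessingLogID"))
    return message
}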
Remove Context: removes all the context changes and puts the values in the same context so that we can sort them, i.e. as a single record.

From every record it takes all the values and puts them into a single record.

createIf: creates the target node based on a certain condition. It takes true or false as input; if true, it creates the target node, otherwise it suppresses it.

When mapping a field to a structure, we need to use Remove Context:

 Field – one value per record

 Structure – all the values come under one record
How do you reprocess failed IDocs?

OData interview questions

The OData adapter allows you to communicate with an OData API using the OData protocol.

 OData vs SOAP – OData supports both XML and JSON, whereas SOAP supports only XML
 OData is lightweight, but SOAP is heavyweight and takes more time for processing
 Which operations do we have in OData? Query (GET), Create, Delete, Update (PUT), Function Import; on the receiver side there is additionally the Patch operation
 OData authentication:
Sender side: User Role, certificate-based authentication
Receiver side: Basic Authentication, OAuth authentication, None

To make an OData API call, use the OData adapter.

We can use it with Request Reply and Content Enricher.

Project explanation:

I have 3 years of experience in the integration area and I am currently working for Nisum as a CPI developer; prior to that I worked on PI/PO. For the last 2 years I have been completely in the CPI space. The client has S/4HANA on cloud and the CPI Integration Suite; it is an implementation project where we have implemented material master data, invoices, and product master data with the client.

We have integrated our S/4HANA with third-party applications. We have mostly leveraged OData APIs: we created communication arrangements and communication user details on the S/4 side and used standard OData APIs to connect to S/4HANA Cloud. We have integrated with Salesforce and an SFTP server. Most of the data we transferred belongs to O2C and P2P interfaces, like sales order creation, sales order confirmation, outbound delivery, advance shipping notification, PGI, and so on.

Third-party applications: SFTP server, a government web portal, and on-premise applications, e.g. we have connected to an on-premise ECC system.

I am familiar with adapters like JDBC, JMS, SOAP, HTTP, OData, IDoc, and SFTP.

Purchase order creation / purchase requisition / delivery notification / shipment confirmation / order confirmation / invoicing.

In my current project we have implemented order processing with third-party applications, where we read the order from the SFTP server and create an IDoc from it. We have also worked on an e-invoicing project where we implemented invoicing for each country.

We developed over 40+ interfaces during that implementation.

I am Sai, with around 9+ years of experience in the IT industry and working on CPI for the past 3+ years. In CPI I have majorly worked on the Integration Suite connected with multiple systems like SAP ECC, S/4HANA, Ariba, Oracle, and also customer-specific systems.

I have used all the palette steps to transform data from one system to another as per the business requirement, and I have connected third-party systems using HTTP, SOAP, RFC, and JDBC adapters.

Complex scenario:

Requirement: We want to post invoices to Ariba from third-party systems. Ariba accepts the cXML format, so we have to prepare the target XML in the format Ariba accepts.

The complex part was that we needed to send the cXML to Ariba with two attachments. I tried different methods and converted the attachments to MIME format, because while doing R&D I learned that Ariba accepts attachments in MIME format. So I used the MIME multipart converter in CPI, passed the headers in the MIME-accepted format, and sent the data to Ariba.

That was the complex scenario where I did my own research and successfully sent the cXML with attachments to Ariba.

Hypercare support – 30 days

End users contact the support team using incident management tickets, governed by an SLA (Service Level Agreement).

Support team – does monitoring + incident management.

We need to build the interface across Dev / QA / Production environments:

 QA – system testing (functional consultant)
 UAT – user acceptance testing (end user), followed by sign-off
 Generally, in production we do smoke testing – a dry run

Mandatory things before an interface goes to production:

 UAT sign-off
 KT sign-off to the support team

PI/PO support project

On the PI/PO front I was involved in a support project where we used to get mail alerts when an iFlow failed. We would analyze that error and, once we found its cause, create an incident in ServiceNow with the details of the error, the root cause, and the proposed solution. We shared it with the concerned team, they worked on it, and we closed the incident once the error was resolved.

Why did you not resolve the issue yourself?

Actually, I was working on the support project, where we only had access for monitoring; we did not have access to the development environment, so we had to create a ServiceNow ticket for the concerned team.

 We have three environments: dev, QA, and prod. First we develop in dev and do our unit testing, then we download the iFlow and import it into the QA environment. End-to-end testing is mostly done here, involving the third party and the functional team. Once end-to-end testing is successful, you need business approval to move the interface to production: you raise a transport request, and once it is approved you download the iFlow from QA in the same way, import it into prod, and make the necessary changes as per the channel configuration. Then just monitor the iFlow in CPI to check whether messages are processing successfully or not.
 For every change you make after moving to prod, save it as a new version
 We have 300+ interfaces in production
 Gather the requirements, build the interface, perform the hypercare support
 Phase-wise: wave 1, wave 2, wave 3

After a successful implementation we dive into the hypercare phase; once hypercare wraps up, the implementation team hands over knowledge to the support team, ensuring a smooth transition.

Support Maintenance

The support team takes the reins to maintain the interfaces. We handle incidents and service requests promptly, ensuring everything runs smoothly. These are often referred to as AMS (Application Maintenance and Support) projects and continue as long as the client uses the interfaces.

Daily operations:

In a support project the business team manages their daily operations using the newly implemented system; if they encounter any issues they raise an incident. Incidents come with priorities.

Alongside incidents, the business can raise service requests for new requirements. These involve minor changes to an interface to enhance operations and go through a change request process with regular CAB meetings.

Incidents come with priorities:

 P1: critical issues, must be resolved within 24 hours
 P2: 1–2 days for significant issues
 P3: 3–4 days for moderate issues
 P4: usually a week for low-priority concerns

Firefighter ID:

Sometimes we need special access. The firefighter ID allows direct access to the production system for urgent fixes. When requesting this ID, detail your planned activities and provide a strong business justification for audit purposes. Remember, all actions are recorded.

CTS+ transport – here the BTP team is involved. You transport the interface and provide the transport request (TR ID): copy the TR ID and give it to the BTP team so that they can move the interface to the QA tenant. Once they move the package/interface, you need to check the interface, configure the necessary details, and deploy the interface in the QA tenant (there is a Transport button/option inside the package).

CTS+ requires some configuration, which is done by the BTP team in the BTP cockpit.

The testing team / UAT team will be involved to do integration testing, regression testing, and user acceptance testing.

Once we get approval from the testing team, you can either use the same TR ID or generate a new TR and provide it to the BTP team to move the interface to production.

Rarely, we will have an SSL certificate that needs to be deployed in the CPI tenant; in order to connect over SSL we download it from the target URL.

File based – means export and import (once you click on Export in the development tenant, a zip file is generated on your local system; you then navigate to the QA tenant and import the downloaded zip file).
The third party will send XML data with the ORDERS05 structure – a sales order – to SFTP.

The flow: the sender is SFTP and the receiver is SAP ERP.

After the data is sent successfully, we check that the sales order was created.

1) SAP-side configuration:

 First, we have to activate the IDoc service in transaction SICF.
 Then we have to register the service in transaction SRTIDOC.
 Then run the test service to get the endpoint:
http://host:port/sap/bc/srt/idoc?sap-client=<clientnumber>
 Then create the logical system in BD54.
 Then create the partner profile in WE20 and add the inbound parameters.
 Then test the service in Postman.

2) Cloud Connector setup:

 Then we need to do the Cloud Connector setup in the Cloud Connector.
 Here we create the destination; once we create it, we get the location ID, which we use at the adapter configuration level.

3) CPI configuration:

 Then we need to create the user credentials under Monitor → Manage Security → Security Material → Create.
 This user credential is then used at the adapter configuration level.

So in the Integration Suite we create the iFlow.

SFTP to IDoc –

We are getting the data from the SFTP server; to connect to the SFTP server we need the location ID, and for that we have to do the Cloud Connector setup in the Cloud Connector to get the location ID.

Then we transform the data in CPI using the required palette options, do the mapping, and post the data to S/4HANA via IDoc. For that we get the IDoc endpoint address, configure the IDoc receiver adapter, and then we can successfully post the data into S/4HANA.
We will be getting the functional spec/architecture from the business/client/functional team. In the functional spec we will have details about the source structure, payload format, message transformation, target structure, and a sample payload. I test the connectivity prior to my actual development, just to make sure the connectivity is fine (certificates, client credentials). Then I start my development and perform end-to-end unit testing once the development is done.

End-to-end unit testing – you can ask the source team to trigger a message, or you can do it yourself depending on the adapter; for example, with SFTP you can place a file in the directory and let the iFlow pull it.

Once unit testing is successful, we move the interface from the development tenant to the QA tenant.

Basically, we get the requirements in the form of user stories. The user stories contain the requirements along with the functional specification document. We have to assess the functional specification document and start the build from there. If there are any gaps, I connect with the functional consultant, and the functional consultant provides the necessary information. Once that is done, I prepare the technical specification document; my lead reviews it, and once it is approved I commence the build. In CPI we have two tenants, a test tenant and a production tenant. In the test tenant I build my iFlow; once it is done I do technical unit testing. Once the end-to-end connection is verified, I inform the functional consultant, who does functional unit testing. After that, they move the interface to their test environment and do SIT, and the end user does user acceptance testing. Once that is done,

we move the interface to production. While moving the interface to production we have to perform cutover activities like manual setup, and afterwards we do smoke testing.

Once the user story is assigned to me, the TS has been approved, and the user story is owned by me, I connect with the functional consultant and the respective technical teams; if there are any concerns regarding the development, I have a call with them. Daily, in the scrum call, we have to report any impediments and our development status.

I check with the functional consultant whether the logic was placed wrongly or whether we need extra explanation, and I reach out to him; in that case, based on the input provided by the functional consultant, we escalate to the lead if needed. He will bring up each and every point and the status of each and every user story and interface development.

Walk me through... disaster recovery... I want to experience whatever comes my way, because I am still young in the CPI space, and I welcome those opportunities.

What kind of projects – we describe the client's projects.


Learn how to run a Business-to-Government (B2G) scenario using pre-configured content in Cloud Integration.

Real time scenarios


Write Variables in CPI | Full/Delta Load | Automatic and Manual Mode

Step-by-step explanation:

Step 1: Content Modifier

Last Sync: this is the property name, and its value is read from the variable written by Write Variables (as we don't have a Get Variable step, we need to read the variable using a Content Modifier).

Is Manual: by default it is false. We need this property for the Router condition to decide between Manual Mode and Automatic Mode; depending on this value, the message is routed to a different path.

Step 2: Router to check the mode, whether it is Automatic or Manual: if Is Manual is true then it is Manual, if Is Manual is false then it is Automatic.

Step 3: Content Modifier

In the Manual branch, the value of Last Sync is overwritten with the manual-run value defined in the Content Modifier, which is 1994-01-01.

Step 4: Request Reply to call the OData service (a sketch of the query follows below).

Step 5: Router to check whether any new employee records were found; only if records are found do we write the variable (//Employee).

Write Variables: from all the records we need to retrieve the latest hire date. We have sorted the records so that the first record has the latest hire date; in this case the variable takes only the first record's value and saves it.

In Automatic Mode the flow is controlled by the Write Variables value.

But in Manual Mode the flow is controlled by the value defined in the Content Modifier.
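For illustration only (the entity and field names are assumptions, not from the notes), the OData query options for such a delta load typically look like:

$filter=HireDate gt datetime'${property.LastSync}'&$orderby=HireDate desc

so that only records changed since the last run are returned and the newest hire date arrives first, ready to be written back by the Write Variables step.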
Write Variables – we have local and global variables.

While creating a variable, if you don't check the Global Scope checkbox, it becomes a local variable.

If it is a global variable it can be used in other iFlows; if it is a local variable it can be used only within the same iFlow.

Looping Process Call with OData service:

Content Modifier:

Local Integration Process:

Content Modifier (append payload body)

Source value: ${property.StorePayload}${in.body}

Filter condition:

When you are doing an OData call using Request Reply and there are thousands of records, how will you know whether there are more records or not? Using the property Receiver.Odata.hasMoreRecords: if it is false, there are no further records; if the value is true, there are more records.
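As a sketch (the exact prefix depends on the receiver and channel names of the OData call in your iFlow), the condition expression of the Looping Process Call is usually built on that property, e.g.:

${property.<ReceiverName>.<ChannelName>.hasMoreRecords} contains 'true'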
Looping process call:
Manual and scheduled run in a single flow

SAP CPI | REST API OAuth 2.0 Token Authentication

Content Modifier:

JSON to XML converter – added the root element "token"

Content Modifier 2:

Content Modifier 3:

Data Store with OData to SFTP flow: the Entry ID should be dynamic; for that we get the product ID, store it in a property, and provide that property as the Entry ID.

Content Modifier:

Write to Data Store:

Write Variables: as we need to share this Entry ID with another iFlow (a property's scope is only within the iFlow).

Using a Content Modifier we read the variable value and pass it as the Entry ID to the Get Data Store operation (see the sketch below).
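For illustration (the property name and XPath are assumptions, not from the notes), the dynamic Entry ID is typically wired like this:

Content Modifier: create property ProductID, type XPath, value //Product/ProductId
Data Store Write / Get: Entry ID = ${property.ProductID}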
IDoc to REST API

OAuth 2.0

OAuth 2.0 uses tokens instead of credentials, offering enhanced security. It involves obtaining an access token through various grant types like Authorization Code, Client Credentials, etc.

OAuth 2.0: set up OAuth credentials in the security material and reference them in the adapter.

To use OAuth2 Client Credentials in SAP CPI, you need to create a new security material of type OAuth2 Client Credentials. This security material stores the client ID and client secret for the application.

Here are the steps to configure OAuth2 Client Credentials in SAP CPI:

1. The most important step is to create the service key in the SAP BTP cockpit. This key is created when you create the Process Integration Runtime artifact.

2. Create the security material under Manage Security in SAP CPI.

3. Click on Create and, from the dropdown, select OAuth2 Client Credentials.

4. Provide the token URL, client ID, client secret, and a meaningful name for the client credentials. Click on Deploy.

5. Create a simple integration flow in which we can use these credentials.

6. In the adapter, select "OAuth2 Client Credentials" as the Authentication and, in the Credential Name, provide the name created in step 4.

Alternatively, we pass the following in the body to the token server to get the access token:

 Grant type: client_credentials
 Client ID
 Client secret

Once we get the access token, we need to pass it as a header to the REST API call (see the sketch below).
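A minimal Groovy sketch for that last step, assuming the token endpoint returned a standard JSON body with an access_token field:

import com.sap.gateway.ip.core.customdev.util.Message
import groovy.json.JsonSlurper

def Message processData(Message message) {
    // Assumption: the body holds the token response, e.g. {"access_token":"...","token_type":"Bearer","expires_in":3600}
    def token = new JsonSlurper().parseText(message.getBody(String))
    // Pass the access token as a bearer token to the subsequent REST API call
    message.setHeader("Authorization", "Bearer " + token.access_token)
    return message
}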

------------------------------------------------------------------------------------------------------------------------------

API Management :

Imagine you have hundreds of these APIs. Does that sound unrealistic to you? Nevertheless, it’s a real-
world example from SAP Business Suite and SAP S/4HANA. Every SAP Fiori app makes use of one or
more OData APIs. If the APIs are only consumed internally, the monitoring of APIs during operation is
often neglected. If you also want to make APIs available to external developers for use in their own
apps, requirements such as documentation, billing, security, and monitoring suddenly come into focus.
This is exactly where API Management helps you. It enables you to centrally provide and document
your interfaces and monitor their ongoing operation.

API Management consists of two main components: the API designer and the developer portal. With
the API designer, you create and model your APIs. You can create products and integrate one or more
APIs from one or more providers into them. You can intervene in the data flow and, for example, check
an API key or cache data. The number of requests within a certain period can also be limited with a
spike arrest.
