Interview Questions SAP CPI
If the information needs to be transferred to the receiver system, then a 'Header' should be used in
the Content Modifier. If the information is internal to the iflow, then a 'Property' should be used. A property
lasts for the entire duration of an exchange, but it is not transferred to the receiver.
Suppose I have two integration flows (IF1, IF2), where IF2 is connected to IF1 via ProcessDirect. Properties
set in IF1 cannot be accessed in IF2, because the scope of a property is the iflow that created it (IF1),
whereas headers created in IF1 can be accessed in IF2, because headers travel with the message.
Note: you can define properties in a local integration process and read them in the main integration process
as well.
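A minimal Groovy sketch of the difference, using the standard CPI script-step Message API (the names orderType and orderId are just illustrative):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // exchange property: visible only inside this iflow's exchange
    message.setProperty("orderType", "STANDARD")
    // header: also carried along to the receiver or to an iflow called via ProcessDirect
    message.setHeader("orderId", "12345")
    return message
}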
What if we want to share data between iflows – what options do we have?
Global Variable
Data store
Difference between Variable and Data Store:
If you store values in headers or properties in your iflow, those values will be gone when the iflow
finishes. If you need to store values longer than that, you can store them in a variable using the Write
Variables step. In other words, they're a way to persist data.
A global variable can be accessed by any iflow, but a local variable can only be accessed by the iflow that
wrote it.
You can see all the variables you've stored in the Operations view => Manage Stores => Variables.
Variables are similar to Data Stores, really. But variables store scalar values (i.e. one number, one
timestamp etc.) whereas Data Stores contain complete payloads (e.g. an XML or JSON document).
Please note that while there's a Write Variables step, there's no Read Variables step. To fetch the value
of a variable into a property or header, you use a Content Modifier with the type set to either Local
Variable or Global Variable.
------------------------------------------------------------------------------------------------------------------------------------
In SAP Cloud Integration (CPI), variables have two types of scope: local and global:
Local scope: Variables are only accessible and visible to one IFlow.
Global scope: Variables can be read and modified by any IFlow.
------------------------------------------------------------------------------------------------------------------------------------------
Request Reply vs Content Enricher: both steps make a synchronous call to an external system. Where they
differ is in how the response is handled.
In the case of Request-Reply, the response from the external system replaces the current payload. This
means that if you need both the old payload and the response from the external system, you need to
store the old payload in e.g. an exchange property before making the external call.
In the case of Content Enricher, the response from the external system is merged into the current
payload, as per the step configuration. An example of this could be a lookup of e.g. customer details.
The returned information is added to a customer element in the current payload, thereby enriching it.
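For example, before a Request-Reply the current payload can be parked in an exchange property, either via a Content Modifier or with a small Groovy script like the sketch below (the property name originalPayload is illustrative):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // keep a copy of the payload before the Request-Reply response overwrites it
    def body = message.getBody(String)
    message.setProperty("originalPayload", body)
    return message
}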
Difference between Combine vs Enrich in Content Enricher? / Aggregation algorithms in the Content
Enricher?
Now, we select the aggregation algorithm for the content enricher. There are two options for
aggregation algorithm:
1. Combine: On using this option, the Lookup Message is appended to the Original message.
2. Enrich: On using this option, the Lookup Message is merged with the Original message using a
common field.
Content Modifier is used to modify messages, such as adding/removing headers and properties and altering
the payload of the message.
An Exception Subprocess is used to handle errors that occur in an iflow. It allows developers to manage
errors gracefully by specifying actions to be taken when an exception occurs.
Sensitive data such as credentials or API keys should be handled using security artifacts (like keystores)
and stored in a secure format in CPI, rather than hard-coding them in the iflow.
For large messages, a Splitter can be used to break them into manageable chunks.
Payload encryption can be achieved using the PGP Encryptor step, which encrypts the payload using a PGP
public key, making it secure for sensitive data transfer.
Debugging in CPI can be done using "Trace" mode, which provides detailed information about each step,
including message payload and headers.
How can you test connectivity between SAP CPI and external systems?
Data encryption in SAP CPI can be implemented using keystore artifacts to encrypt/decrypt messages.
Exception handling can be implemented using an Exception Subprocess; a Groovy script can be used to log
errors, and alerts can be configured to notify about exceptions.
A Data Store is used to store messages temporarily. It makes integration flows more reliable by
enabling message persistence, which ensures that a message is not lost when an error occurs.
Security artifacts are used to store credentials. These artifacts are essential for securing communication
between CPI and external systems.
-------------------------------------------------------------------------------------------------------------------------------------
Content Enrichment is used when you need to enrich a message by fetching additional data in real-time
from an external system, based on information available in the incoming message.
Poll Enrich:
The Poll Enrich pattern in SAP Cloud Integration (CPI) is used to enrich a message payload with
additional data from a file on an SFTP server. It can also be used to poll from an SFTP server when the
iflow is triggered by an external trigger, such as an HTTP call.
To poll multiple files using the Poll Enrich pattern, you can implement a loop in one integration flow or
call another integration flow via the ProcessDirect adapter.
Tips:
It polls only one file per execution.
As we want to poll several files from a folder, we need to encapsulate a Poll Enrich and an SFTP
Sender Adapter in a local integration process, which is called via a Looping Process Call.
How do you upload certificates in SAP CPI? Or: where do you upload certificates in SAP CPI?
Monitor -> Manage Security -> Keystore -> Add -> Certificate (upload the certificates here)
Did you work on OAuth authentication? If yes, how did you do the OAuth authentication?
Purchase Order (PONumber) should be repeated at target for each line item.
--------------------------------------------------------------------------------------------------------------------------------
Looping Process Call: add a Looping Process Call step to repeatedly execute the steps defined in a local
integration process until the condition is met or the maximum number of iterations allowed is reached,
whichever comes first.
Example: in a single call we cannot pull the entire data set from SuccessFactors, so we go for a Looping
Process Call. It iterates/recalls the same local integration process multiple times.
If you are querying SAP SuccessFactors EC using the CompoundEmployee API, how would you query all
the records if the page size is set to 200 and there are a thousand records in EC?
This can be useful for automating tasks or working with large data sets.
We specify that the Looping Process Call works according to the expression in the "Condition Expression"
field. By stating that the hasMoreRecords property contains 'true', we indicate that the loop will continue
to run as long as the adapter reports that more records are available.
${property.payloadStack}${in.body}: we use this expression in a Content Modifier inside the loop to keep
appending each incoming page/fragment to the data collected so far.
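The same page-stacking can also be done in a Groovy script instead of a Content Modifier; a rough sketch (the property name payloadStack follows the example above):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // append the current page to whatever was collected in previous loop iterations
    def collected = message.getProperties().get("payloadStack") ?: ""
    message.setProperty("payloadStack", collected + message.getBody(String))
    return message
}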
What if we still have more records to fetch but the loop's maximum iteration value is reached? In such a
scenario, how do we fetch the missing records?
We can also use the runtime property SAP_PollEnrichMessageFound to check a file-exists condition in a
Looping Process Call step.
Process Call:
A Process Call in SAP Cloud Integration (CPI) is the way to call a local integration process from the main
process of an integration flow.
Idempotent Process Call: detects whether a message ID has already been processed and stores the status of
the successful processing. This can be useful for modeling exactly-once or at-most-once handling.
Local Integration Process: a Local Integration Process is like a subprocess call within an iflow.
---------------------------------------------------------------------------------------------------------------------------------------
Reusable integration flows can be created using subprocesses and the ProcessDirect adapter; common
functionality can be reused across different iflows.
SFTP Authentication:
User credentials
Public Key
If the authentication type is public key, go to Monitor -> Manage Keystore, click the Create
dropdown and select SSH Key.
Once you create the SSH key, download the public key and share it with the SFTP server team.
The SFTP team will assign a user to this public key; this is not our responsibility, the SFTP team
will take care of it.
Once the SFTP connectivity test is done, you can use it in the iflow's SFTP adapter
configuration.
Test Connectivity -> SSH -> select the Check Directory Access checkbox and provide the path of the
directory to see all the files in that directory.
SSH Connectivity Tests: when you have selected the SSH connection type, the test tool checks whether the SSH
outbound connection reaches the associated SFTP server, and whether the known_hosts file deployed on the
tenant contains the host key of the SSH server.
Cloud Connector Connectivity Tests: when you have chosen the Cloud Connector, the test tool checks
whether the Cloud Connector has been configured and can be reached.
Kafka Connectivity Tests: if you choose the Kafka connectivity test, the test tool checks whether the
connection is successful or not.
AMQP Connectivity Tests: if you choose AMQP (Advanced Message Queuing Protocol), the test
tool checks whether the connection is successful or not.
SMTP Connectivity Tests: you can perform SMTP connectivity tests to check the settings required for
configuring the receiver mail adapter.
When you have chosen the TLS connection, the test tool checks the following:
whether the keystore is deployed correctly and contains the keys that are required for the specified
authentication method during the TLS handshake.
Mandatory configurations:
In the SFTP configuration, File Name: * (if you give a star it will pick up all the files in the folder).
Cloud Connector: needed to send any data to an on-premise application.
--------------------------------------------------------------------------------------------------------------------------------------
ProcessDirect is nothing but calling a common routine, the same as in a programming language: generally we
write reusable code once, put it in a separate place as a function or subroutine, and let many programs call
that subroutine from many places. We do the same thing in CPI.
Each integration flow implements its own business requirement, but most (or some) of them share a common
routine that has to be done in every integration flow. Instead of repeating that common code in every
integration flow, we put it in a separate place, and each integration flow calls it and comes back. The
call to that common routine is done via the ProcessDirect adapter.
For example, writing log files: almost every integration flow writes logs, so instead of repeating that
same code everywhere, we put it in a common place and each iflow calls that common routine.
Another example is getting a token: before calling an API you need a token, and that can be required in
many iflows, so instead of repeating the get-token process we put it in a common place and each iflow
calls that routine. The same applies to querying master data, or writing a file to an FTP server. These
common routines go into one common iflow, and the other integration flows consume that iflow through
ProcessDirect.
When two integration flows need to communicate with each other, the value specified for the
Address parameter in both the sender and receiver ProcessDirect adapters must match.
If you see an area of re-usability like [sending notifications] or [sending mail alerts] or [establishing a
connection] you could use process direct adapter.
Monitoring: you can leverage the correlation ID in message monitoring, which lists all related messages
(calls from the main flow to the multiple flows connected via the ProcessDirect adapter).
JMS and ProcessDirect are both internal adapters; they don't connect to an external server.
ProcessDirect sends the data from one integration flow to another integration flow.
Producer flow: provides data to another integration flow within the same tenant.
For example:
Can I point a receiver ProcessDirect address to a sender ProcessDirect address in another package?
Answer: yes, it's possible; the ProcessDirect adapter is not affected by the flows being in different packages.
-----------------------------------------------------------------------------------------------------------------------------------------
Aggregator
This is useful when a receiver expects a single message for related items, but the items are sent as
separate messages. For example, if an order is divided into an order header and multiple order items,
the Aggregator can combine the items into a single message to send to the receiver.
You can collect and store individual messages until a complete set of related messages has been
received. The aggregated message is then sent to the actual receiver.
If you want to combine messages that are transmitted to more than one route by a Multicast step, you
need to place Join step before the Gather step. If you want to combine messages that are split using a
Splitter step, you use only the Gather step.
Gather
Merges messages from different routes into a single message
Join
Combines messages from different routes without affecting the content of the messages. The Join step
can be used in combination with the Gather step
------------------------------------------------------------------------------------------------------------------------------------------
Debugging in SAP CPI can be done using "Trace" mode, which provides detailed information about each
step, including message payloads and headers.
1. After switching on TRACE mode, you have to replicate the steps again, so as to reproduce the
issue in CPI.
2. TRACE mode lasts for only 10 minutes.
3. The data in TRACE mode lasts for only an hour.
Difference: ProcessDirect allows you to invoke other integration flows, while Process Call is for calling
sub-processes within the same integration flow.
Process call : It’s mainly used for breaking down a larger integration flow into smaller, reusable sub-
processes. Process Call is used to call a sub-process within the same integration flow.
Use Case:
Scenario: You have a complex iFlow that handles multiple tasks, such as data transformation,
validation, routing, and logging.
Implementation: Use Process Call to divide the iFlow into sub-processes, each handling
a specific task. For example, one sub-process could handle transformation, another
validation, and another logging.
Process Direct :
Use Case:
Scenario: Let’s say you have multiple iFlows that need to validate incoming data using the same
validation logic.
Implementation: You can create a dedicated "Data Validation" iFlow with the validation
logic and expose it via ProcessDirect. Other iFlows that need to perform validation can
call this iFlow using the ProcessDirect adapter, rather than replicating the same logic in
each iFlow.
-------------------------------------------------------------------------------------------------------------------------------------------
Data Store:
Q: After a Get or Select from the data store, what are the ways to delete the entry?
A: Use 'Delete On Completion' or a Data Store Delete by entry ID.
Q: When writing a list of records to the data store, if processing of some records fails, can the data store
operation be partially successful and partially failed?
A: Yes. In new iflows, 'Transaction Handling' is 'None' by default.
Q: How do you select multiple entries from the data store, process each entry independently one by one,
process the successful ones, skip the failed ones, and then write to different data stores depending on
success or failure?
A: Use a combination of Data Store Select, Splitter, Router, Exception Subprocess, the 'Transaction Handling'
setting, and 1 source data store + 2 target data stores to achieve this. Will show in course lesson.
Data Store:
Business scenario 1:
Whenever a user creates an invoice in the source system (Ariba), we get the invoice from the sender
system, CPI does the transformation, and we send it to the target system for payment; this process keeps
repeating.
If for any reason the target system is down, or there is an issue with the CPI transformation (in a Groovy
script or message mapping), the CPI interface fails, and whatever data we received from the source system
would be lost. To avoid this situation we store the data in a Data Store.
Every day thousands of invoices are sent to the target system (Oracle). If an upgrade is happening on the
target system at that time, and there is no option to retrigger from the source system to the target
system again, we need to store the payload in a Data Store.
I can store the data and read the data from the Data Store.
Say in a day 30 invoices have failed for some reason.
We can create an iflow, place a Timer, fetch the 30 failed invoices from the Data Store and reprocess the
invoices to the target system, with no dependency on the source system: based on a schedule it reads the
data from the Data Store and sends it to Oracle, the target system.
Business scenario 2:
Say I have an HR system where we maintain employee details, and I also have a Manager system that
holds the manager information of each employee.
The HR system sends its data to the SFTP server as Employee.csv, and the Manager system also
sends its data to the SFTP server as Manager.csv.
I will create one iflow/interface in CPI to write/store the Employee.csv data into a Data Store.
I will create another iflow/interface in CPI to write/store the Manager.csv data into a Data Store.
In my third interface I will read the data written by both iflows, merge the data, do the
transformations, and send it to Oracle, which is my target system.
Data Store: it is like a database table. Entry ID -> you can give a dynamic value or a constant
value (you can store the employee ID in a property and provide it here; if you don't give it, a
unique value will be generated).
Difference between Get and Select – Get reads a single message/entry, Select reads multiple
messages/entries.
One data store can have multiple messages with multiple entry IDs.
Get
Select
Write
Delete
Once you write data into a data store it will be shown in Monitor -> Manage Stores ->
Data Stores.
Once you create a queue using the JMS adapter on the receiver side it will be shown in Monitor ->
Manage Stores -> Message Queues.
In order to create number ranges you have to go to Monitor -> Manage Stores -> Number
Ranges.
When you create a variable using Write Variables it will be shown in Monitor -> Manage Stores ->
Variables.
Once you create a variable and store a value in it, there is no Get Variable option in CPI: you need to use a
Content Modifier and select the type Local Variable or Global Variable.
Select – This step selects entries from a transient data store and creates a bulk message containing the
data store entries.
Write – This step performs a Write operation on the transient data store.
Get – This step gets a specific entry from the transient data store.
Delete – This step deletes an entry from a transient data store.
Data store visibility (scope): Integration Flow or Global.
The Entry ID can be static or dynamic. Use a Content Modifier to get the value from the source XML, save it
in a property, and provide that property in the Entry ID field.
Once you successfully read the data from the data store, it is deleted if you check the 'Delete On
Completion' checkbox.
Say you are getting multiple products from the source: you can use a Splitter before the data store so that
one large message is split into individual messages.
Say your XML has 30 products with product IDs: we can write them into a single data store
with different product IDs by providing the Entry ID dynamically, and you can read multiple entries or
records by using the Select operation.
If you write the same message to the data store again using the Write operation, it will fail unless you
check the 'Overwrite Existing Message' checkbox.
After storing the data in the data store, if the retention threshold is set to 2, the entry stays in
Waiting status for 2 days so that the other flow can pick up the data; after 2 days the status becomes
Overdue.
Data Store Write: to set the Entry ID dynamically we followed the process below.
In a Content Modifier we created a property productID with the XPath expression //productID,
and we used that property in the Write Data Store step's Entry ID.
We also created a global variable called productID and stored the property value in it.
Using a Content Modifier we then read the variable value back into a property,
and that property is given in the Data Store Get step's Entry ID.
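A rough configuration sketch of the dynamic Entry ID (parameter names as they appear in the palette; the data store name Products and the property productID are illustrative):

Content Modifier -> Exchange Property:
  Name: productID | Source Type: XPath | Source Value: //productID | Data Type: java.lang.String
Write Data Store:
  Data Store Name: Products | Visibility: Integration Flow | Entry ID: ${property.productID}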
Q: At the iflow's first run the variable is not created yet, but some initial/default value is needed for
processing. How do we handle this chicken-and-egg situation?
A: Use a Content Modifier to read the variable and set a default value.
------------------------------------------------------------------------------------------------------------------------------------
Simulate vs simulation
Display queue
Simulate
--------------------------------------------------------------------------------------------------------------------------------------
Idoc :
Outbound Configuration:
Consumer proxy: also called an outbound/client proxy (the ABAP proxy on the side where SAP sends)
Provider proxy: also called an inbound/server proxy (the ABAP proxy on the side where SAP receives)
SOAP adapter:
We generate the WSDL and give it to the ABAPer or ABAP team.
The endpoint is given to the Basis team so that they can create the logical port in SOA Manager.
XI adapter:
RFC connection
CPI:
Cloud Connector:
If the SAP on-premise application is on the sender side (RFC/IDoc/Proxy pushing to CPI), then we
don't need the Cloud Connector.
If the IDoc adapter is on the sender side, the control record contains the Document Number field – the
IDoc number. It is created in the SAP application and we get it from the SAP application.
If the IDoc adapter is on the receiver side, the IDoc number is generated in the SAP application and comes
back in the IDoc response header property (SAP generates it),
i.e. the Application Message ID.
-------------------------------------------------------------------------------------------------------------------------------------------
To call SuccessFactors we need to use the SuccessFactors adapter, which is application-specific.
Splitter – if there is a requirement to process record by record, then we can use the Splitter.
General Splitter: breaks down a message into multiple messages, keeping the encapsulating element
(root element).
Iterating Splitter: breaks down a message into multiple messages without the encapsulating element (root element).
Multicast : You can use the Multicast step to send copies of the same message to multiple routes. You
can send copies to all routes at once using Parallel Multicast.
Sequential Multicast: In sequential multicast, messages are sent to each receiver one after the other.
The next receiver only receives the message after the previous one has been processed.
If one of the branches fails during processing, the sequential multicast will stop further processing of the
remaining branches.
Parallel Multicast:
In parallel multicast, messages are sent to all receivers simultaneously. Each receiver processes the
message independently and concurrently.
If one branch fails in a parallel multicast, the other branches continue processing. Each branch can
handle its own errors independently,
----------------------------------------------------------------------------------------------------------------------------------
Node functions: when the interviewer asks about node functions, always speak about how many inputs each
takes and what it does.
createIf – it always expects a Boolean value: true or false.
exists – if the value exists in the source it returns true; if the value doesn't exist in the source it
returns false.
formatByExample also takes two inputs.
useOneAsMany takes three inputs.
----------------------------------------------------------------------------------------------------------------------
Fixed Value Mapping: Fixed Values are very useful when you have lots of conditions on one source
field during mapping; to avoid IfThenElse or a UDF we go for Fixed Value mapping.
Example: if your source field contains 01 or 02 or 03 or ... 12 and you want the result as JAN or FEB or
MAR or ... DEC.
Advantage: using a fixed value table you can see the result immediately in the mapping.
Disadvantage: if you have used the same fixed value mapping at several places in your mapping, then in
case of any change in the fixed values you have to make the change at several places. So, in short, fixed
value mapping is not good from the maintenance point of view.
Value Mapping: Value Mapping works the same way as fixed value mapping, except that you get the result
only at runtime, and you define the value mapping as a separate artifact (the Integration Directory in
PI/PO, a Value Mapping artifact in CPI).
Advantage: if you have used the same value mapping at several places in your mapping, then in case of
any change in values you don't have to change your mapping; just make the change in the value mapping
artifact, that's it.
Disadvantage: you can't see the result immediately in the mapping. Results can be seen only at runtime.
Scenario:
Value Mapping interview question: say the source XML has one field with 100 possible values; is it
necessary to create 100 entries in the value mapping manually? – I would put all 100 values in a CSV and
then upload it into the value mapping.
Value mapping also has different failure options: 1. Use Key 2. Use Default Value 3. Throw Exception.
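If a value mapping ever has to be read from a Groovy script instead of graphically, CPI offers a ValueMappingApi; a minimal sketch, assuming a Value Mapping artifact maintained with the (illustrative) agencies/identifiers SourceAgency/SourceId and TargetAgency/TargetId:

import com.sap.gateway.ip.core.customdev.util.Message
import com.sap.it.api.ITApiFactory
import com.sap.it.api.mapping.ValueMappingApi

def Message processData(Message message) {
    def api = ITApiFactory.getApi(ValueMappingApi.class, null)
    // maps e.g. "01" -> "JAN" if such an entry is maintained in the Value Mapping artifact
    def month = api.getMappedValue("SourceAgency", "SourceId", "01", "TargetAgency", "TargetId")
    message.setProperty("monthName", month)
    return message
}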
Router step: sends the message to different routes based on conditions, processes it using different
logic, and optionally routes back to a single main route.
Multicast step: sends the same message to multiple routes to process differently, and
optionally gathers back results from all the routes.
Splitter step: tackles the challenge of splitting a large payload into smaller ones before processing.
A Router should have a minimum of two branches – one of them is the default route.
-----------------------------------------------------------------------------------------------------------------------------
Source -> CPI (here we share the public key of CPI with the source system, so that the source system
encrypts the data and CPI decrypts the data with its own private key).
CPI -> Receiver system (here the receiver system shares its public key with CPI, so that CPI encrypts
the data and the receiver system decrypts the data with its own private key).
Outbound flow:
From CPI to the receiver system: the receiver system generates two keys, public and private, and the
public key is provided to the CPI developer so that he can encrypt before sending to the receiver
system. Once the encrypted content has reached the receiver system, they decrypt it using their own
private key.
Whatever public key you have uploaded in Monitor -> Manage Security -> PGP Keys, that key's name has to be
provided in the Encryption Key User ID.
Signing: select the signing option; it asks for a private key. We need to generate the private key, and the
corresponding public key has to be given to the recipient.
Transformation: what about the case where we have received the message and it fails while doing some
transformation?
Load:
You can use the JMS adapter to build a scenario to retrigger failed messages: you hold the message in a
queue and reprocess it in case of any failure. But this adapter and design come with some constraints,
because the messages are stored in a queue and you have to manage the queue (queue management). And what
about the case where the customer is not willing to use the JMS adapter?
This scenario decouples the sender and receiver systems when an error occurs.
If the message is successful, it is passed to the receiver within the same iFlow.
If an error occurs, the error message is stored in a Data Store to be accessed by another iFlow for
reprocessing.
In this scenario, by using Data Stores, SAP CPI users can create a generic error-handling approach for
iFlows.
General Splitter: splits the composite incoming message into individual messages, keeping the encapsulating
element.
Thumb rule: the General Splitter interprets the first line of the inbound message as the header or root element.
Iterating Splitter: splits the composite incoming message into individual messages without the encapsulating
element.
The General Splitter keeps the root element, but the Iterating Splitter does not.
Difference between node vs node list in filter:? Need answer
Filter limitations:
Limitation:
-------------------------------------------------------------------------------------------------------------------------------------------
How will Gather know that the last split message has arrived? – via the CamelSplitComplete header.
The Camel headers listed below are generated every time the runtime finishes splitting an Exchange.
CamelSplitIndex: provides a counter for split items that increases for each Exchange that is
split (starts from 0).
CamelSplitSize: provides the total number of split items (if you are using stream-based
splitting, this header is only provided for the last item, in other words, for the completed
Exchange).
CamelSplitComplete: indicates whether an Exchange is the last split.
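These headers can be checked in a Router condition (e.g. ${header.CamelSplitComplete} = 'true') or read in a Groovy script; a minimal sketch:

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Camel sets these headers on every split Exchange
    def index  = message.getHeaders().get("CamelSplitIndex")     // 0-based counter
    def isLast = message.getHeaders().get("CamelSplitComplete")  // true for the last split
    message.setProperty("isLastSplit", String.valueOf(isLast))
    return message
}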
----------------------------------------------------------------------------------------------------------------------------
Write Variables use cases: last successful run / delta load vs. full load.
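A rough sketch of the delta-load pattern (the variable, property and field names are illustrative, not fixed):
1. Content Modifier: read the variable lastSyncDate into a property (Source Type: Local Variable), with a default value for the very first run.
2. OData/SuccessFactors receiver: filter the query with that property, e.g. $filter=lastModifiedDateTime gt datetime'${property.lastSyncDate}'.
3. Write Variables (at the end of the flow): store ${date:now:yyyy-MM-dd'T'HH:mm:ss} (or the latest record timestamp) back into lastSyncDate for the next run.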
Another way of achieving looping is by using the General Splitter; sometimes that can make it a lot
easier too. Though the Splitter is not a replacement for the Looping Process Call, based on the
requirement/scenario the best-fit approach can be chosen, with consideration of performance and expected load.
------------------------------------------------------------------------------------------------------------------------------------------
If you come from an imperative/procedural programming background (e.g. Java, ABAP, C#), you might
wonder how to loop in SAP CPI. What are the equivalent CPI ways to do a for/while loop, since
all these CPI steps are drag-and-drop only, without coding? The answer is to use the CPI "Looping Process
Call" with a condition to exit the loop. You should also be aware of OData V2 vs V4 looping and leverage the
built-in looping (pagination) feature of the OData adapter.
The filter concept means taking only the necessary data: either the source system sends only the required
data, or you use the CPI Filter step to retain only the required data. For the Filter step you will need
fairly good knowledge of XPath to filter effectively.
13) Persistence/Variable/Data Store means CPI keeps data for later usage.
For normal integration flow processing, all headers, properties and the body are held in CPI only
temporarily during runtime and cannot be retrieved again after the iFlow processing ends. The
persistence/variable/data store concept is about asking CPI to 'remember' by storing the data in SAP CPI.
Generally, a global/local variable is for storing a single value, while a Data Store stores a list of
values/payloads under the same data store name.
14) Exception handling concept in CPI
When an error happens in CPI message processing, we can either do nothing and let it fail in CPI with a red
error, or add an Exception Subprocess. Ideally we should at least be able to get back the error message and
error stack trace, and then decide how to handle the error, e.g. send an alert email, store the error
message on an SFTP server, or design an advanced reprocessing mechanism using a data store.
Local variables are only visible and accessible within one IFlow, while global variables can be used by
multiple IFlows.
${property.SAP_MessageProcessingLogID}: message GUID (MPL ID)
${date:now:yyyy-MM-dd}: current date in the given format
${header.CamelFileName}: file name
In case the source server pushes the data to CPI, there is no need for the SAP Cloud Connector (e.g. HTTP,
SOAP, IDoc, etc.): we just generate the URL and share it with the source server team, they consume our CPI
URL, and in this case the Cloud Connector is not required.
Did you get a chance to work on the OData adapter? Can you tell me some OData commands/operations you used
for a business case?
How do you filter content while querying data from OData entities?
Can you tell me any 5 Apache Camel expressions you used in your current project?
Have you used the encryption and decryption palette steps? Assume the target wants an encrypted file: how
are you going to implement this? Which key, and whose key, do you use to encrypt?
Where do we store these keys and certificates in SAP CPI? / Where do you upload certificates in SAP CPI?
How are you going to handle exceptions in SAP CPI? Tell me the Apache Camel expression names that hold these
exceptions.
Externalization:
Integration Developers build their integration flows using the SAP Cloud Platform Integration tools on
their development systems and once the development is complete, they move them to the test and
production systems. During this development phase, they realize that the same integration flow may not
work as is, across different systems and would require changes in the configurations of adapters or flow
steps. To overcome this situation, they use the externalization feature offered by SAP Cloud Platform
Integration tools.
Externalization feature enables an integration developer to define parameters for certain configurations
of adapters or flow steps of an integration flow, whose values can be provided at a later point in time,
without editing the integration flow.
ODATA Operations – Create, Merge, Update, Query, Read, Delete, Patch and Function Import.
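A rough sketch of how these operations and query options typically look in the OData receiver adapter (the entity and field names are illustrative):

Operation: Query (GET)
Resource Path: Products
Query Options: $filter=Status eq 'OPEN'&$select=ProductID,Status,LastModified&$orderby=LastModified desc&$top=200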
Multicast questions:
If you use Sequential Multicast and one of the receivers fails, what happens next? Will the data
go to the next receiver or not?
If you use Parallel Multicast and one of the receivers fails, what happens? Will the data
go to the other receivers or not?
Content Enrich
Connection details:
We need to provide the Address, Proxy Type and Authentication parameters, which are
mandatory.
When you use the aggregation algorithm Combine, you get all the details as per the query
execution; when you use Enrich, the product details are added to the original supplier message
based on the Supplier ID.
Have you worked on the Mail adapter? If yes, what are the prerequisites to use the Mail adapter?
How do you populate the subject line dynamically? In the body I want to populate the iflow name, message
ID, date/time etc. if any integration process fails – how do you implement this requirement?
General Questions:
Did you get a chance to interact with the business directly? If yes, in which case did you interact
with the business?
Can you take one interface you implemented recently and explain the business case and
step-by-step implementation?
What is the CPI architecture?
What is the CPI tenant landscape in your current project, and what type of work did you do in each
environment?
How many interfaces do you have in the current project? How many have you implemented?
What is the project landscape, or which systems are involved in the current project?
We can have an Exception Subprocess in both the main integration process and a Local Integration
Process.
Camel expressions: ${exception.stacktrace} and ${exception.message}
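Inside an Exception Subprocess, the caught exception is also available to a Groovy script via the CamelExceptionCaught exchange property; a minimal sketch that logs it as an attachment (the attachment name ErrorDetails is illustrative):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // exception object that triggered the Exception Subprocess
    def ex = message.getProperties().get("CamelExceptionCaught")
    if (ex != null) {
        message.setProperty("errorText", ex.getMessage())
        // attach the error text so it is visible in message monitoring
        def log = messageLogFactory.getMessageLog(message)
        log?.addAttachmentAsString("ErrorDetails", ex.getMessage(), "text/plain")
    }
    return message
}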
----------------------------------------------------------------------------------------------------------------
Cloud Connector:
We normally use it on the receiver side, but we also use it on the sender side when the action is initiated
from CPI (polling):
JDBC – we are pulling data from a database, so the action is from CPI; we use it on both sender
and receiver sides.
SFTP/FTP – we are pulling files from SFTP, so the action is from CPI; we use it on both sender and
receiver sides.
OData – we are pulling data, so we use it on both sender and receiver sides.
If CamelSplitComplete is true, then Gather knows that it has to combine the messages.
Thumb rule: Join is used in combination with Gather when you have multiple branches.
If you are using Multicast (sequential or parallel), you use a Join step and then a Gather step to
combine all the messages.
Class -8
Router – there are multiple branches, but based on the conditions the message goes only to the
branch whose condition is satisfied (with one default route if none of the conditions is satisfied).
Parallel Multicast – messages go to all the branches/multiple branches independently.
Sequential Multicast – messages go to the multiple branches in sequential order; if branch 1
fails, the message does not go to branch 2 (one branch is dependent on the other).
Gather vs Aggregator
If the sender is sending multiple messages, we can merge the multiple messages and send them to the
receiver system – Aggregator.
If the sender is sending a single message, we split it into multiple messages, then merge all
the messages and send them to the receiver system – Gather.
If the sender is sending 10 records and the Last Message field is false for 9 records and true for
the 10th record, then merging starts.
The messages sent from the sender side carry a Last Message field like this, and the Aggregator's Last
Message Condition checks whether it is true.
Idoc
RFC
Proxy
SFTP File
- outbound configuration
IDoc type
Control records: IDoc number, IDoc type, sender port, sender port type, receiver port, receiver
port type, sender partner, receiver partner
As we have a Write Variables step but no option to get/read variables, you need to read the
variable with a Content Modifier by selecting the type Local Variable (or Global Variable).
In real time as well, we can't poll more than one file using Poll Enrich in a single run; Poll Enrich
pulls only one file in real time too.
SFTP Adapter:
Have you worked on the SFTP adapter? If yes, what are the prerequisites to use the SFTP adapter?
How do you check the SFTP connectivity from CPI?
What is the use of the known_hosts file? How do you get those details?
How do you check whether your files exist on the SFTP server or not?
Assume we are getting order details from the source system, but the business order number must be
part of the file name on the target system. Can we implement this requirement in SAP CPI, and if yes, how?
How do you populate file names dynamically using the FTP/SFTP adapter? (see the sketch after this list)
Append Timestamp (checkbox) adds a timestamp to the file name, but what if I don't want minutes and
seconds?
We have used key-based authentication for SFTP.
You have to use multiple Poll Enrich steps to pull data from multiple folders.
How to maintain multiple directories – how can we achieve this (via value mapping?)
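A rough sketch of dynamic file names in the SFTP/FTP receiver adapter's File Name field, using Camel Simple expressions (the property orderNo is illustrative and would be set earlier, e.g. via a Content Modifier with an XPath):

Order_${property.orderNo}_${date:now:yyyyMMdd}.xml   (date only, no minutes/seconds)
${header.CamelFileName}                              (keep the original file name from the sender)

Using an explicit ${date:now:...} pattern instead of the Append Timestamp checkbox gives control over which date/time parts appear in the name.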
Let's Enrich (Part 1)
There are different strategies, and things to avoid or consider, when enriching the original payload with a
lookup payload from OData. If we don't pay attention to those details, our flow might not work, become very
slow when dealing with large data, or, worse, work intermittently (sometimes OK, sometimes error).
Trust me, you don't want the error to come and punch you in the face every now and then.
How to enrich?
How big is the data that needs to be looked up?
How do we efficiently and reliably look up large data?
Similarly, if we are using JMS as a sender adapter and have put up a condition to end retries after a
certain retry count, in that case escalation end is the best choice.
How to handle exceptions in an iflow?
To handle exceptions in an SAP Cloud Integration (CPI) iflow, you can use an Exception Subprocess to
capture the exception message and send an email notification to the relevant stakeholders.
The Exception Subprocess captures the exception message and prepares an XML message body that
includes the date, time, message ID, iflow name, and exception message. The email notification
includes the same information, plus details about the interface and the exception caught.
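A rough sketch of such an error body, e.g. built in a Content Modifier inside the Exception Subprocess (the property iflowName would have to be set by the developer earlier; the other expressions are standard CPI/Camel ones):

<ErrorNotification>
   <IFlowName>${property.iflowName}</IFlowName>
   <MessageID>${property.SAP_MessageProcessingLogID}</MessageID>
   <DateTime>${date:now:yyyy-MM-dd HH:mm:ss}</DateTime>
   <ErrorMessage>${exception.message}</ErrorMessage>
</ErrorNotification>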
Let's say you have the iflow below with the name Exception Subprocess. In this case, we ought to
develop an exception subprocess so that the support team receives a notification email if any
exception or error occurs in that flow.
Remove Contexts: removes all the context changes and puts the values in the same context so that we can
sort them, i.e. into a single record. From every record it takes all the values and puts them in one context.
createIf: creates the target node based on a certain condition. It takes true or false as input; if it is
true it creates the target node, otherwise it suppresses it.
The OData adapter allows you to communicate with an OData API using the OData protocol.
OData vs SOAP – OData supports both XML and JSON, but SOAP supports only XML.
OData is lightweight, whereas SOAP is heavyweight and takes more time for processing.
Project Explanation:
I have 3 years of experience in the integration area, currently working for Nisum as a CPI developer;
prior to that I worked on PI/PO. For the last 2 years I have been completely in the CPI space. The client
has S/4HANA Cloud and the SAP Integration Suite (CPI). It is an implementation project where we have
implemented material master data, invoice and product master data interfaces with the client.
So we have integrated S/4HANA with third-party applications. We have mostly leveraged OData APIs: we
created communication arrangements and communication users on the S/4 side and used the standard OData
APIs to connect to S/4HANA Cloud. We have also integrated with Salesforce and an SFTP server. Most of the
data we transferred belongs to O2C and P2P interfaces, like sales order creation, sales order confirmation,
outbound delivery, advance shipping notification, PGI and so on.
Third-party applications: SFTP server / government web portal; we also have on-premise applications, e.g.
we connected to on-premise ECC.
I am Sai, with around 9+ years of experience in the IT industry, working on CPI for the past 3+ years. In
CPI, I have majorly worked on Integration Suite connected with multiple systems like SAP ECC, S/4HANA,
Ariba, Oracle and also customer-specific systems.
I have used all the palette steps to transform data from one system to another as per the business
requirement, and connected third-party systems using the HTTP, SOAP, RFC and JDBC adapters.
Complex Scenario:
Requirement: we want to post invoices to Ariba from third-party systems. Ariba accepts the cXML
format, so we have to prepare the target XML as per the Ariba-accepted format.
The complex part is that we need to send the cXML to Ariba with 2 attachments. I tried different
methods and converted the attachments to MIME format, because while doing R&D I came to know that Ariba
accepts attachments in MIME format. So I used the MIME Multipart encoder in CPI, passed the headers as per
the MIME-accepted format, and sent the data to Ariba.
That was the complex scenario where I started my own research and successfully sent the cXML with
attachments to Ariba.
End users contact the support team via incident-management tickets, governed by an SLA (service level agreement).
On the PI/PO front I was involved in a support project where we used to get mail alerts when an iflow
failed. We would analyse the error, and once we found the cause we would create an incident in ServiceNow,
mentioning the details of the error, the root cause and the solution. We would share it with the concerned
team, they would work on it, and we would close the incident once the error was resolved.
Actually, I am working in a support project right now where we only have access for monitoring; we don't
have access to the development environment, so we have to create a ServiceNow ticket for the concerned
team.
We have three environments: dev, QA, prod. First we develop in dev and do our unit testing,
then download the iflow and import it into the QA environment. Mostly end-to-end testing is done
here, involving the third party and the functional team. Once your end-to-end testing is successful,
you need business approval to move it to production: you raise a transport request, and once it is
approved you download the iflow from QA in the same way, import it into prod, do the necessary
changes in the channel configuration, and then just monitor in CPI whether message processing is
successful or not.
For every change you do after moving to prod, save it as a version.
We have 300+ interfaces in production.
Gather the requirements, build the interface, perform the hypercare support.
Phase-wise: wave 1, wave 2, wave 3.
After a successful implementation we dive into the hypercare phase. Once hypercare wraps up, the
implementation team hands over knowledge to the support team, ensuring a smooth transition.
Support / Maintenance:
The support team takes the reins to maintain the interfaces. We handle incidents and service
requests promptly, ensuring everything runs smoothly. These are often referred to as AMS (Application
Maintenance Support) projects and continue as long as the client uses the interfaces.
Daily Operations:
In a support project the business team manages their daily operations using the newly implemented
system. If they encounter any issue, they raise an incident; incidents come with priorities.
Alongside incidents, the business can raise service requests for new requirements. These involve minor
changes to an interface to enhance operations and go through a change-request process with regular
CAB meetings.
Sometimes we need special access: the firefighter ID allows direct access to the production system for
urgent fixes. When requesting this ID, detail your planned activities and provide a strong business
justification for audit purposes. Remember, all actions are recorded.
CTS+ transport – here the BTP team is involved: you transport the interface and provide the transport
request (TR ID). Copy the TR ID and provide it to the BTP team so that they can move the interface to the
QA tenant. Once they move the package/interface, you need to check the interface, configure the necessary
details and deploy the interface in the QA tenant (there is a Transport button/option in the interface,
inside the package).
The testing team / UAT team is involved to do integration testing, regression testing and user
acceptance testing.
Once we get approval from the testing team, you can either use the same TR ID or generate a new TR and
provide it to the BTP team to move the interface to production.
Rarely, an SSL certificate needs to be deployed in the CPI tenant in order to connect over SSL; we
download it from the target URL.
File-based transport means export and import (once you click Export in the development tenant, a zip file
is generated on your local system; you then navigate to the QA tenant and import the
downloaded zip file).
A third party sends XML data with the ORDERS05 structure (sales order) to SFTP.
After the data is sent successfully, we check that the sales order was created.
First, we have to activate the service in transaction SICF.
Then we have to register the service in transaction SRTIDOC.
Then run the test service to get the endpoint:
http://host:port/sap/bc/srt/idoc?sap-client=<clientnumber>
Then create the logical system in BD54.
Then create the partner profile in WE20 and add the inbound parameters.
Then test the service in Postman.
Then do the Cloud Connector setup in the Cloud Connector:
here we create the destination; once created we get the Location ID, which we use at the
adapter configuration level.
3) CPI Configuration:
SFTP to IDoc –
We are getting the data from the SFTP server; to connect to the SFTP server via a Location ID we have to
do the Cloud Connector setup in the Cloud Connector to get the Location ID.
Then we transform the data in CPI using the required palette options, do the mapping, and post the data
to S/4HANA via IDoc: we get the IDoc endpoint address, do the adapter configuration for the IDoc receiver,
and then we can successfully post the data into S/4HANA.
We get the functional spec/architecture from the business/client/functional team. In the functional
spec we have details about the source structure, payload format, message transformation, target structure
and a sample payload. I do a connectivity test prior to the actual development, just to make sure the
connectivity is fine (certificates, client credentials). Then I start the development and perform
end-to-end unit testing once the development is done.
End-to-end unit testing – you can ask the source team to trigger a message, or you can do it yourself
depending on the adapter; e.g. with SFTP you can pull a file from the directory.
Once unit testing is successful, we move the interface from the development tenant to the QA tenant.
Basically we get the requirements in the form of user stories. The user stories contain the
requirements along with a functional specification document. We assess the functional specification
document and start the build from there; if there are any gaps I connect with the functional consultant
and they provide the necessary information. Once that is done I prepare the technical specification
document; my lead reviews it, and once it is approved I commence the build. In CPI we have two tenants,
test and production. In the test tenant I build my iflow and do technical unit testing; once the
end-to-end connection is done I inform the functional consultant, they do functional unit testing, then
they move the interface to their test environment and do SIT, and the end users do user acceptance
testing. Once that is done,
we move the interface to production. While moving the interface to production we perform
cutover activities like manual setup, and afterwards we do smoke testing.
Once the user story is assigned to me, the technical spec is approved and the user story is owned by me, I
connect with the functional consultant and the respective technical team; if there are any concerns
regarding the development I have a call with them. In the daily scrum call we update our development
status and any impediments.
If the logic looks wrongly placed or we need extra explanation, I check with the functional consultant and
reach out to him; if the information provided by the functional consultant is not sufficient, we escalate
to the lead, and he brings up the status of each and every user story and the interface development.
Walk me through... disaster recovery... I want to experience whatever comes my way because I am still
young in the CPI space; I am happy to take those opportunities.
Content Modifier
Last Sync: this is the property name, and its value is read from the written variable (as we don't have a
Get Variable step, we read the variable using a Content Modifier).
IS Manual: by default it is false. We need this property in the Router condition to decide between Manual
mode and Automatic mode; depending on this value the message is routed to a different path.
Step 2: Router to check the mode, whether it is Automatic or Manual: if the IS Manual value is true then
it is Manual; if the IS Manual value is false then it is Automatic.
Step 5: Router to check whether any new employee records were found; only if records were found do we
write the variable (//Employee).
Write Variables: from all the records we need to retrieve the latest hire date, so we have sorted the
records in such a way that the first record has the latest hire date; in this case it takes the first
record only and saves it.
But in Manual mode the flow is controlled by the value defined in the Content Modifier.
Write Variables – we have local and global variables.
While creating a variable, if you don't check the Global Scope checkbox it becomes a local variable.
If it is a global variable it can be used in another iflow; if it is a local variable it can be used only
within the same iflow.
Content Modifier:
Local Integration Process:
Source Value: ${property.StorePayload}${in.body}
Filter condition:
When you are doing an OData call using Request Reply and you have thousands of records, how will you know
whether there are more records or not? Using the hasMoreRecords property: if Receive.Odata.HasMoreRecord
is false there are no further records; if the value is true there are more records.
Looping Process Call:
Manual and scheduled run in a single flow
SAP CPI | REST API OAuth 2.0 token authentication
Content Modifier:
Content Modifier 3:
Data Store with OData-to-SFTP flow: the Entry ID should be dynamic; for that we get the product ID,
store it in a property, and provide that property in the Entry ID.
Content Modifier:
Using a Content Modifier we read the variable value and pass it as the Entry ID to the Get Data
Store operation.
IDoc to REST API
OAuth 2.0
OAuth 2.0 uses tokens instead of credentials, offering enhanced security. It involves obtaining an
access token through various grant types like Authorization Code, Client Credentials, etc.
OAuth 2.0: set up OAuth credentials in the security material and reference them in the adapter.
To use OAuth2 Client Credentials in SAP CPI, you need to create a new security material of type
OAuth2 Client Credentials. This security material stores the client ID and client secret for the
application.
Here are the steps to configure OAuth2 Client Credentials in SAP CPI:
1. The most important step is to create the service key in the SAP BTP cockpit. This key is created
when you have created the Process Integration Runtime artifact.
2. In CPI, go to Monitor -> Manage Security -> Security Material.
3. Click Create and, from the dropdown, select OAuth2 Client Credentials.
4. Provide the Token URL, Client ID, Client Secret and a meaningful name for the client
credentials. Click Deploy.
5. Create a simple integration flow in which we can use these credentials.
6. In the adapter's Authentication, select "OAuth2 Client Credentials" and in the Credential Name
provide the name which we created in step 4.
Once we get the access token, we need to pass it as a header to the REST API call.
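When the token is fetched manually (e.g. with a Request-Reply to the token URL instead of letting the adapter handle it), the JSON response can be parsed and set as the Authorization header; a minimal Groovy sketch, assuming a standard access_token field in the response:

import com.sap.gateway.ip.core.customdev.util.Message
import groovy.json.JsonSlurper

def Message processData(Message message) {
    // token endpoint response assumed to be JSON containing access_token
    def json = new JsonSlurper().parseText(message.getBody(String))
    message.setHeader("Authorization", "Bearer " + json.access_token)
    return message
}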
------------------------------------------------------------------------------------------------------------------------------
API Management :
Imagine you have hundreds of these APIs. Does that sound unrealistic to you? Nevertheless, it’s a real-
world example from SAP Business Suite and SAP S/4HANA. Every SAP Fiori app makes use of one or
more OData APIs. If the APIs are only consumed internally, the monitoring of APIs during operation is
often neglected. If you also want to make APIs available to external developers for use in their own
apps, requirements such as documentation, billing, security, and monitoring suddenly come into focus.
This is exactly where API Management helps you. It enables you to centrally provide and document
your interfaces and monitor their ongoing operation.
API Management consists of two main components: the API designer and the developer portal. With
the API designer, you create and model your APIs. You can create products and integrate one or more
APIs from one or more providers into them. You can intervene in the data flow and, for example, check
an API key or cache data. The number of requests within a certain period can also be limited with a
spike arrest.