Basics

HTTP adapter, sender side:

We generate the endpoint address and provide it to the sender system, much like sharing your postal address with someone so that they can send you a courier.

After deployment, this endpoint (e.g. /address) is appended to the CPI tenant URL, and that URL is shared with the sender system.

At runtime, the XSD holds (validates) the structure of the incoming XML.

Message Mapping:

 Source: XSD
 Target: WSDL (web service)

The WSDL contains a number of operations; you choose which operation to call.

HTTP will accept any kind of data: XML, JSON, plain text, etc.

Only for SOAP web services do we use the SOAP adapter; in all other cases we use HTTP.

You can download the WSDL definition.

Receiver side: the receiver/target system's server address must be given in the receiver adapter configuration.

XSLT mapping: used to modify the XML structure, e.g. to remove tags or namespaces; a Groovy script can do the same.

We can also use the palette function XML Modifier to remove XML tags and namespaces.

Is there any other way to remove namespaces from payloads apart from XSLT mapping? One option is a Groovy script, sketched below.
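A minimal Groovy sketch of this approach, using simple regex replacements (good enough for well-formed payloads; an XSLT identity transform is safer for complex documents):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def body = message.getBody(java.lang.String) as String
    // Remove xmlns declarations, then strip element prefixes.
    body = body.replaceAll(/xmlns(:\w+)?="[^"]*"/, "")
               .replaceAll(/<(\/?)\w+:/, '<$1')
    message.setBody(body)
    return message
}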

 Apart from ProcessDirect, the other options to call one iflow from another are HTTP or SOAP.
 The Content Enricher has two modes: combine mode and enrich mode. In combine mode, the original message and the lookup message obtained from the external call are combined into a single enhanced payload.
 Another difference: for the Content Enricher, the connector points from the receiver towards the Content Enricher; for Request Reply, the connector points towards the receiver.
 Request Reply (synchronous): HTTP, OData, JDBC

SFTP: using the SFTP adapter we connect to an SFTP server.

You need the connectivity details to connect to the SFTP server.


Authentication:

 User Credentials
 Public Key

Generate SSH key & SFTP connectivity using a public key

Go to Manage Keystore and click on the dropdown next to Create.
Once you create the SSH key, download the public key and share it with the SFTP server team (the SFMC team in this example). They will assign a user to this public key; that step is the SFTP team's responsibility and is shown here only for understanding.
Now we need to test the connectivity in CPI.

If the connectivity test returns an error:

 Change the Proxy Type to On-Premise.
 Change the Authentication to Public Key and enter the user name and the private key we generated in CPI.

Once the SFTP connectivity test succeeds, you can use it in the iflow's SFTP adapter configuration.
Always remember: Address = Host + Port.

The Proxy Type decides whether the connection is on-premise or cloud (internet), based on the option you select.

Cloud Connector:

It is not tied to sender or receiver: if CPI initiates the pull/push request, then we need SAP CC (Cloud Connector), e.g. for SFTP, JDBC, OData, etc.

In case the source server pushes the data to CPI (e.g. HTTP, SOAP, IDoc), SAP CC is not needed: we just generate a URL and share it with the source server team, who then consume our CPI URL, so CC is not required in this case.

SFTP: Public key

 Create an SSH key under Manage Security -> Keystore -> Create -> SSH Key.
 Once you have created the key, you have the option to download the public key, which is shared with the SFTP team.
 The SFTP team will add a user to that public key.
 Now you can test the connection from CPI to the SFTP server by selecting Public Key as the authentication.
 Once the connection test is successful, you can configure it at the SFTP adapter level.

Note: if you select Proxy Type as Internet, it is a cloud connection; if you select Proxy Type as On-Premise, it is an on-premise connection.

If an on-premise SAP application is on the sender side, no Cloud Connector is needed.
In exception handling we mostly put a Content Modifier to capture the error response.
 Data issues: we need to involve the functional team.
 Network issues / external system downtime.

 Splitter: breaks down a composite message into a series of individual messages.
 General Splitter: breaks down a message into individual messages, keeping the encapsulating elements.
 Iterating Splitter: breaks down a message into individual messages without the encapsulating elements.
 ------------------------------------------------------------------------------------------------------------------------------------------
 Multicast: sends the same message to more than one route.
 Parallel Multicast: sends the same message to more than one route in parallel.
 Sequential Multicast: sends the same message to more than one route in a defined sequence.
 Gather: merges messages from different routes into a single message, with the option to define strategies for how to combine the initial messages.
 Aggregator: aggregates several incoming messages into a single one.
 Join: routes multiple incoming messages into a single outgoing path (without combining them; that is Gather's job).
 Encryptor: encrypts the content of the incoming message body.
 Decryptor: decrypts the encrypted data contained in the incoming message body.
 Signer: digitally signs the message content.
 Verifier: verifies the signature of the message content.
 Process Call: adds a call step to execute the steps defined in a local integration process.
 Looping Process Call: adds a process call step to repeatedly execute the steps defined in a local integration process.
 Idempotent Process Call: adds a process call step to check for duplicate IDs and conditionally execute the steps defined in a local integration process.
 Poll Enrich: polls an external system (e.g. an SFTP server) and enriches the incoming message.
 Request Reply: a call to an external system that waits for the response (synchronous call).
 Content Enricher: mechanism to combine the incoming message with additional data received from an external system.

Router – default route

 We can specify any number of branches, each branch having a condition.
 If the incoming message satisfies a condition, it is routed to the corresponding branch.

 A Splitter should always be paired with a Gather.
 A Multicast should have the combination of Join + Gather.

Aggregator: used when you receive more than one incoming message from the source.

Simulate vs. simulation

Simulate: done only in Message Mapping.

Simulation: can be performed between palette functions in the iflow.

Test the Message Mapping with:

 Display Queue
 Simulate

Trace is enabled for only 10 minutes; once enabled, the trace logs can be seen for up to 1 hour.

Manage Security

 Keystore: upload certificates, keystores, SSH keys.
 PGP Keys: add and download public keys / secret keys.
 Security Material: create user credentials, OAuth credentials.

Manage Integration Content


Monitor Message Processing: failed messages / retry messages / completed messages.

COMPLETED: Message has been delivered to receiver successfully.

PROCESSING: Message is currently being processed.

RETRY: Status retry is set if an error occurred during message processing, and a retry was automatically
started.

ESCALATED: During message processing an error occurred, and no retry has been triggered. For
synchronous messages, an error message is sent to the sender.

FAILED: Message processing failed, message hasn’t been delivered to receiver, and no retries are
possible. In other words: FAILED is a final status, and message processing ultimately has failed.

CANCELLED: Manual cancellation of entries in the JMS queue - MPL is set to status cancelled.

DISCARDED: For scheduler triggered integration flows, the MPL is shown on the worker node where the
message processing started first. For all subsequent message processing starts, the message status is set
to DISCARDED.

For example, assume that an integration flow is initiated by a Timer event (scheduler) and a worker
node goes into an out of memory state. The system starts the worker node again and synchronizes the
integration flow as soon as the node is active. The message is restarted in that case, and a new message
ID generated. The message with the original ID goes into status DISCARDED.

ABANDONED: Message processing was interrupted or the log wasn't updated for an uncommonly long time. The status can change if processing is resumed.

"Failed" indicates a complete inability to process the message, while "Escalated" suggests that the message encountered an issue that needs further investigation but isn't outright failed. Both statuses are crucial for monitoring and maintaining the health of integrations in SAP CPI.

Failed

This status indicates that the message processing encountered a critical error that prevented it from
being processed successfully. Common reasons for a failure might include configuration errors, missing
mappings, or connectivity issues.

CPI has two types of nodes.

In SAP Cloud Platform Integration (CPI), the main difference between the tenant management node and the runtime (worker) node is that a developer designs, deploys and administers content on the tenant management node, while messages are actually processed on the runtime node, where the developer can only check the status of messages.

We focus on Integration Suite, which in turn consists of four services:

 SAP Cloud Integration (CPI)
 SAP API Management
 Open Connectors
 Integration Advisor

Open Connectors: if there is no adapter provided by SAP and you want to connect CPI with a third-party system, you go for Open Connectors.

Integration Advisor: when you work with EDI, you go for Integration Advisor.

Property: Suppose I have two integration flows (IF1, IF2), where IF2 is connected to IF1 via ProcessDirect. Properties in IF1 cannot be accessed in IF2, because the scope of a property is the iflow (IF1), whereas headers created in IF1 can be accessed in IF2, because headers have global scope.
Properties and headers are only available at runtime in that particular iflow. Unlike variables and data stores, there's no local or global scope, and when the iflow has executed, they're gone; there's no persistence.

If you store values in headers or properties in your iflow, those values will be gone when the iflow finishes. If you need to store values longer than that, you can store them in a variable using the Write Variables step. In other words, variables are a way to persist data.

A global variable can be accessed by any iflow, but a local variable can only be accessed by the iflow that wrote it.

You can see all the variables you've stored in the Operations view => Manage Stores => Variables.

Variables are similar to Data Stores, really, but variables store scalar values (i.e. one number, one timestamp, etc.), whereas Data Stores contain complete payloads (e.g. an XML or JSON document).

Q: What is the difference between a local variable and a global variable?
A: A local variable can be accessed only within the same iflow. A global variable can be accessed from different iflows.

Q: How to read a local variable or a global variable?
A: Use a Content Modifier to read it into either a header or a property.

Q: How to write a variable?
A: In the iflow, use the 'Write Variables' step, taking the value from a header/property/XPath/expression.

Q: Is it possible for a local and a global variable to have the same name?
A: Yes, since the scope is different between local and global.

Q: How to do delta synchronization via timestamp?
A: Use a variable to remember the last processed timestamp, so that the next scheduled run resumes from the last processed timestamp onward.

Q: What needs to be considered when designing delta synchronization via timestamp?
A: (1) Data should be sorted by timestamp.
(2) The timestamp should be unique (e.g. only a date without a time might not work).
(3) The right date field should be used for delta synchronization.
(4) Only update the last processed timestamp in the last step, if all processing succeeded.
(5) The timer/scheduler interval.
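A minimal Groovy sketch of the pattern, assuming the variable has been read into a property named lastRunTimestamp by a preceding Content Modifier, and that the source exposes a lastModifiedDateTime field (both names are illustrative):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // The default covers the very first run, when the variable does not exist yet.
    def lastRun = message.getProperty("lastRunTimestamp") ?: "1970-01-01T00:00:00"
    // Build an OData $filter so only records changed since the last run are fetched.
    message.setProperty("deltaFilter", "lastModifiedDateTime gt datetime'" + lastRun + "'")
    return message
}

After successful processing, the final step writes the new timestamp back with a Write Variables step.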

Q: What if I need to revert back to an earlier timestamp?
A: Build into the same iflow a manualRun/adhocRun flag to set a manual timestamp, overriding the value in the variable.

Q: Should I use a global variable or a local variable?
A: Use global if another iflow needs to access the same variable. Global can behave like local, but not the other way round.

Q: What ways can be used to delete a variable?
A: Manual deletion via the 'Manage Variables' page.

Q: What are other potential uses of a variable?
A: Accessing the same value in different branches of a Multicast (because a property will not work there).

Q: At an iflow's first run, the variable is not created yet, but some initial/default value is needed for processing. How to handle this chicken-and-egg situation?
A: Use a Content Modifier to read the variable and set a default value.
-------------------------------------------------------------------------------------------------------------------------------

Data Store:

Q: How to write to a data store?
A: Use the Data Store Write step.

Q: What is the difference between Visibility 'Global' and 'Integration Flow'?
A: 'Global' means any iflow can access the DS; 'Integration Flow' means only the iflow that wrote it can read it back.

Q: At DS Write, is it mandatory to specify an Entry ID?
A: No.

Q: What happens if you write to the DS with the same Entry ID twice?
A: By default it will fail with an error. If 'Overwrite Existing Message' is selected, it will replace/update the entry.

Q: Is only the message body written to the DS?
A: The body is always written. If 'Include Message Headers' is selected, the headers are written to the DS as well.

Q: What payload formats can be written to the DS?
A: No restriction; XML/JSON/text are all fine.

Q: For DS Get, what happens if no Entry ID is specified?
A: It will fail; the Entry ID is mandatory for DS Get.

Q: What are the main differences between DS Get and DS Select?
A: DS Get fetches a single entry; DS Select fetches multiple.
A: DS Get requires an Entry ID; DS Select has no option to enter an Entry ID.
A: DS Get supports different data formats; DS Select only supports XML.

Q: After a Get or Select from the DS, what are the ways to delete the entry?
A: Use 'Delete On Completion' or a DS Delete by Entry ID.

Q: When writing a list of records to the DS, if some record processing fails, can the DS operation be partially successful and partially failed?
A: Yes. In new iflows, 'Transaction Handling' is 'None' by default.

Q: How to select multiple entries from the DS, process each entry independently one by one, keep the successful ones, skip the failed ones, and write to different DSs depending on success or failure?
A: Use a combination of DS Select, Splitter, Router, an exception subprocess, the 'Transaction Handling' setting, and 1 source DS + 2 target DSs to achieve this. Shown in the course lesson.

Q: What data formats are supported by the DS sender adapter?
A: XML, non-XML or any other format is fine.

Q: What is special about the DS sender adapter, compared to DS Get & Select?
A: The DS sender adapter has an automatic retry feature.

Q: Why is the DS sender retry considered a 'smart' retry?
A: It has an 'Exponential Backoff' retry option: each retry doubles the wait time (e.g. 1, 2, 4, 8 minutes and so on).

Looping Process Call: when we cannot pull all the data from SuccessFactors in a single call, we go for a Looping Process Call. It iterates (recalls) the same local integration process multiple times, based on the condition we provide, and you can select which local integration process to call.

If you are querying SAP SuccessFactors EC using the CompoundEmployee API, how would you query all the records if the page size is set to 200 and there are a thousand records in EC? (1,000 records at a page size of 200 means 5 loop iterations.)

Looping Process Call: calls the local integration process as long as the specified condition is true.

For example, it can be used to iterate through a list of failed messages present in a data store for retry. Similarly, it can be used for sending a small subset of data at a time, which in turn improves performance by reducing the memory and processing time of each iteration.
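A hedged Groovy sketch of the bookkeeping for such a paged pull; all property names here (fetchedSoFar, pageSize, totalRecords, hasMore) are illustrative, and the actual "more pages" flag exposed by the SuccessFactors adapter may differ:

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def fetched  = (message.getProperty("fetchedSoFar") ?: "0") as Integer
    def pageSize = (message.getProperty("pageSize") ?: "200") as Integer
    def total    = (message.getProperty("totalRecords") ?: "1000") as Integer

    // Advance the counter after each processed page.
    fetched += pageSize
    message.setProperty("fetchedSoFar", fetched.toString())
    // The Looping Process Call condition can then be: ${property.hasMore} = 'true'
    message.setProperty("hasMore", (fetched < total).toString())
    return message
}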

A Looping Process Call involves invoking a subprocess multiple times within a single flow, based on a collection of items or until a certain condition is met.

For each call, you need to specify how many records to pull.

Lesson: fetching data from one iflow via ProcessDirect and invoking a Looping Process Call with exception handling.

JMS: JMS queues bring a very critical capability to iflow design and that is the Guaranteed Delivery
pattern. Using queues, we can decouple the communication between the sender and receiver
applications. This can be implemented by developing an iflow that receives messages using any type of
sender adapter, confirms technically that messages were received, and then stores the messages
securely on the tenant in a JMS queue.

From the queue, the message can then be processed with the frequency and logic most suitable for the specific requirements and capabilities of the receiving application. By using queues for decoupling, we make sure that messages will be delivered to the receiver irrespective of the capacity or current state of the target application.

Q: Can you please make a video of an OData query that returns thousands of records (e.g. 20,000) and writes them to an FTP server as a single CSV file? I heard OData pagination, page size and a Looping Process Call can be used to retrieve very large record sets, but I don't know how it works. I would be very happy if you can solve this.

Request Reply vs. Content Enricher: as rightly pointed out, both steps make a synchronous call to an external system. Where they differ is in how the response is handled. In the case of Request Reply, the response from the external system replaces the current payload. This means that if you need both the old payload and the response from the external system, you need to store the old payload in e.g. an exchange property before making the external call.
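A minimal Groovy sketch of that preparation step, placed just before the Request Reply (the property name originalPayload is illustrative):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Save the current payload in an exchange property so it is still
    // available after the Request Reply response replaces the body.
    message.setProperty("originalPayload", message.getBody(java.lang.String))
    return message
}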

Headers and Properties can be used in any local integration process in an iflow.


If the information needs to be transferred to the receiver system, then 'Header' should be used in the Content Modifier. If the information is internal to the iflow, then 'Property' should be used. A property lasts for the entire duration of an exchange, but it is not transferred to the receiver.

JDBC:
Mandatory fields when configuring the JDBC adapter:

 User name
 Password
 JDBC URL

 The JDBC adapter is not available on the sender side.
 We need to use a Timer with a Content Modifier (where we write the query) and a Request Reply -> JDBC adapter -> database.


 First, you need to install the JDBC drivers.

 Data source creation: to connect to the database we need to create a data source under Manage JDBC Material -> JDBC Data Source.

 We need to provide the following information for the data source:

Name: name of the data source
Description: a relevant description for the data source
Database Type: select your database type
User: user ID for the database
Password: password for the database
JDBC URL: this we need to form, e.g.
  SQL Server: jdbc:sqlserver://virtualHost:virtualPort;DatabaseName=databaseName;
  Oracle: jdbc:oracle:thin:@virtualHost:virtualPort:databaseName
Location ID: CPI tenant location ID
Cloud Connector: check this checkbox

 JDBC adapter configuration: configure the JDBC adapter in the integration flow to communicate with the database. Once these prerequisites are done, we can create an iflow that pulls data from the SQL database and generates a file on the target FTP server.
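A minimal Groovy sketch of the step that prepares the query for the Request Reply (table and column names are purely illustrative; a Content Modifier writing the SQL into the body works just as well):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // The JDBC receiver adapter executes the SQL statement found in the body.
    message.setBody("SELECT OrderId, Status FROM Orders WHERE Status = 'OPEN'")
    return message
}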

RFC adapter: not available on the sender side.


Sender side:

 OData
 ProcessDirect
 SFTP
 HTTPS
 SuccessFactors
 JMS
 IDoc
 Data Store

Receiver side:

 JDBC
 JMS
 SFTP
 RFC
 IDoc
 SuccessFactors
 HTTP
 OData

Receiver side only:

 Slack
 Splunk
 Workday
 Twitter
 Sugar CRM
 Facebook
 Coupa
 Amazon DynamoDB

JMS sender side:

Mandatory configuration:

 Queue Name
 Retry Interval
 Max Retry Interval

Receiver side:

Mandatory configuration:

 Queue Name

Number of Concurrent Processes: Enter the number of concurrent processes for each worker node. The recommended value depends on the number of worker nodes, the number of queues on the tenant, and the incoming load. Make sure to enter a value that is as small as possible (1-5), because JMS resources are limited.

Retry Interval (in min): Enter a value for the amount of time to wait before retrying message delivery.

Exponential Backoff: Enter a value to double the retry interval after each unsuccessful retry.

Maximum Retry Interval (in min, only configurable when Exponential Backoff is selected): Enter a value for the maximum amount of time to wait before retrying message delivery. The minimum value is 10 minutes; the default value is 60 minutes.

Dead-Letter Queue: If selected, the message is taken out of processing and marked as Blocked in the queue in the following situation: message processing has been stopped due to an out-of-memory error in the worker node, and the message has then been retried twice by the JMS sender adapter (each retry again leading to an out-of-memory error in the worker node).
Processing large messages in scenarios with the JMS sender adapter can sometimes cause a worker node failure. Node failure can happen in such a scenario if the JMS adapter repeatedly tries to process a failed (large) message. To avoid such a situation, select this option (switched on by default).
In such cases, a lock entry is created, which you can view and release in the Monitoring application in the Message Locks tile under Managing Locks. When you release the lock, the system starts retrying the message again.
A dead-letter queue (DLQ) is a special type of message queue that temporarily stores messages that a software system cannot process due to errors. It stores erroneous messages until you process them to investigate the reason for the error.

For example, if your developer sets the maximum retry count to one, the system moves all unsuccessful deliveries to the DLQ after a single attempt. Some failed deliveries may be caused by temporary network overload or software issues, which would send many undelivered messages to the DLQ. To get the right balance, developers optimize the maximum retry count to ensure the software performs enough retries before moving messages to the DLQ.

OData sender side:

OData is adapter-specific: to call an OData service we need to use the OData adapter.

OData receiver side:

Mandatory fields:

 Address
 Operation Details
 Resource Path

To call SuccessFactors we need to use the SuccessFactors adapter, which is likewise adapter-specific.
The ABAP language is used on the SAP application side.

Ways to connect SAP S/4HANA to CPI:

 IDoc
 RFC
 Proxy
 SFTP / File

IDoc = Intermediate Document.

IDoc number:

This means that for each IDoc type that flows from S/4HANA to CPI, a separate Type G connection has to be created in SM59 (/cxf/matmas, /cxf/debmas, ...).

One could mitigate this by having a generic routing iflow (/cxf/idoc, and thereby only one SM59 connection for IDocs to this CPI iflow) and then making ProcessDirect calls to the individual iflows processing the different IDoc types, depending on field values in the control segment.

It would be ideal, though, if SAP could build something on S/4HANA that allows specifying a generic path prefix (/cxf/idoc/<idoctype>) in the SM59 connection (so that there is one connection, like the tRFC port for IDocs between ECC and PO) and dynamically route the calls to the iflow endpoints (/matmas or /debmas..., maintained in the Connection settings of the respective iflows) from S/4HANA.

Control records: IDoc type, message type, IDoc number, sender port, receiver port, receiver partner, sender partner, etc.

Data records

Direction 1 = outbound
Direction 2 = inbound

High-level steps:

In Part 1, the necessary settings were described: RZ11, STRUST, the SM59 Type G RFC connection, the IDoc port in WE21 and the partner profile in WE20.

---------------------------------------------------------------------------------------------------------

1. System connection between S/4HANA and CPI
2. Background configurations required
3. Configure the IDoc sender adapter
4. Search messages in message monitoring based on the IDoc number

To be able to send messages from SAP S/4HANA to SAP CPI, import the CPI certificates into S/4HANA STRUST. You can download the CPI certificates either from the keystore or from any browser.

1. Log in to the S/4HANA system --> Tcode STRUST (import all 3 CPI certificates) --> Add to Certificate List
2. Create logical systems for the S/4HANA and CPI systems in Tcode BD54
3. Create an RFC destination of Type G: Host = CPI tenant, Port = 443, Path Prefix = from the sender adapter (CPI iflow); maintain Logon & Security
4. The RFC connection test returns an HTTP 500 response (expected, since the test does not send a proper IDoc request)
5. Create a port in WE21 --> Port Type --> XML HTTP
6. Partner profile in WE20 --> import the outbound message type with the port created in the earlier step

Exception Subprocess with Error End/Escalated End


Exception Subprocess

Similarly, if we are using JMS as a sender adapter and have set a condition to end retries after a certain retry count, an Escalation End is the best choice in that case.

SFTP:

Q: If we get an "unknown host" exception while performing the connectivity test, how can we resolve the issue?
A: The host is incorrect or not reachable. Cross-check with the SFTP team, or check with the network team whether the Cloud Connector and SFTP IPs are whitelisted at the network level.

I understand that property scope is within the iflow and a header can be accessed outside the iflow. When we say "outside the iflow", does that mean outside the iflow but within the package, or are headers accessible everywhere in the tenant? Please clarify.

Answer: Headers declared are accessible outside your tenant. For instance, if you are using SOAP or HTTPS, you can get these headers in the header part of the SOAP or HTTP response; alongside the standard headers you will see these custom headers as well.

What is meant is that if you use an HTTP-based adapter to call an external system, properties will never be passed to that system, but headers might. The HTTP receiver adapter requires you to add headers to an allow list, though, before they are passed to the external system.

----------------------------------------------------------------------------------------------------------------------------------

Please note that while there is a Write Variables step, there is no Read Variables step. To fetch the value of a variable into a property or header, you use a Content Modifier with the type set to either Local Variable or Global Variable.

What is the difference between Header and Property?

The scope of an element declared in the Header goes beyond the iflow; the scope of an element declared in the Property is only within the iflow, i.e. the property parameters are not handed over to the receiver.
Headers and properties can be used in any local integration process of an iflow.


A property is like a container that is internal to a process and can't be sent to the receiver. What is the scope of a property in an iflow? The property lasts for the duration of the iflow; headers can be passed on to another flow, but properties can only be used locally.

-------------------------------------------------------------------------------------------------------------------------------------------

Process Call: a step in an integration flow that invokes a local integration process.

You can use the ProcessDirect adapter to provide direct communication between two integration flows, reducing latency and network overhead.

 A Splitter should be used in conjunction with a Gather. (If a Split or Multicast step is used, the steps belonging to one sub-message get the same segment number and can therefore be grouped together.)
 Multicast: Join + Gather.
 In an ideal scenario we provide the endpoint to the source system, and it is their responsibility how and when they call it.

${exception.message} - Error message
${exception.stacktrace} - Error stack trace
${date:now:yyyy-MM-dd} - Current date/time with format
${in.body} or ${body} - Message payload
${header.SAPJMSRetries} - JMS retry count
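In an exception subprocess, the same error information is also reachable from a Groovy script; a minimal sketch (CamelExceptionCaught is the exchange property where Camel stores the caught exception):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // The caught exception is available as an exchange property.
    def ex = message.getProperty("CamelExceptionCaught")
    if (ex != null) {
        // Build a simple error payload from the exception message.
        message.setBody("Error: " + ex.getMessage())
    }
    return message
}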

 Monitor Message Processing
 Manage Integration Content

Statuses:

 Failed: message processing failed, the message hasn't been delivered to the receiver, and no retries are possible. In other words, FAILED is a final status.
 Retry: set if an error occurred during message processing and a retry was automatically started.
 Completed: the message has been delivered to the receiver successfully.
 Processing: the message is currently being processed.
 Cancelled
 Discarded
 Abandoned
 Escalated: an error occurred during message processing and no retry has been triggered. For synchronous messages, an error message is sent to the sender.

Time: messages from the last 30 days can be viewed.

Every message has a Message ID and a Correlation ID.



ProcessDirect

Both iflows will have the same correlation ID.

 You can store messages in a Data Store.
 You can send the message to SFTP.

The correlation ID lets you track related messages in the monitor. For instance, when you call one iflow from another using the ProcessDirect adapter, the correlation ID lets you follow the calls; without it, you can't immediately tell that those two messages are related. The application ID is set by you; it lets you identify messages in monitoring based on a meaningful domain value such as, say, an invoice number.
Correlation ID: suppose you need to send a second message that depends on the first message.

The correlation ID is quite interesting for monitoring processes. In this example, we have a process that spans two CPI processes using the ProcessDirect adapter. When we search for the correlation ID, we can see both processes it is used for.

It would make sense to set this correlation ID to the PI message ID; that way it is a lot easier to find which CPI process was started by which PI message, and you can just search on the PI message GUID.

We simply need to set the header SAP_MplCorrelationId in the HTTP call to the SAP PI message ID, and then we can search for that value in our CPI system.
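A minimal Groovy sketch of setting that header on the CPI side, assuming the PI message GUID arrives in a header named PiMessageId (an illustrative name):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Copying the PI GUID into SAP_MplCorrelationId makes the MPL
    // searchable by the PI message ID.
    def piId = message.getHeaders().get("PiMessageId")
    if (piId != null) {
        message.setHeader("SAP_MplCorrelationId", piId as String)
    }
    return message
}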

Sender and receiver names

One of the cool things about this is that I also get to learn something. Marco Verhoef, whose CPI implementation I did a podcast about, had an extra way to log information: it is also possible to use the sender and receiver fields to save extra information about the processing. That way you can specify what was done with the message, so it is possible to see where the message was sent.

Open in text view


Most of us already know that ID-based message search in SAP CPI is limited to the Message ID, Correlation ID or Application ID. In this blog, let us explore using more searchable identifiers.

While the Message ID and Correlation ID are technical identifiers generated by CPI, we are left with just the Application ID to set a process/functional identifier like a purchase order number, sales order number, customer number, etc. This helps to search the MPL based on a business identifier for monitoring/troubleshooting.

However, in my SAP CPI journey there have been multiple instances where I needed more than one ID for the MPL search, e.g.:

 Application ID set to the customer number from an IDoc, while I also need a search based on the IDoc number.
 Application ID set to the order number from a message sent over a message broker, while I also need a search based on the message ID from the broker.

I achieved this by setting the additional ID as a custom header in the MPL. Let us see in this blog how to implement it.

Scenario

Let us see the implementation in a scenario where a DEBMAS IDoc is sent for Business Partner
Replication. We would set BP Number as Application ID and IDoc Number as Custom Header. Then we
see how to search MPL based on both these IDs.
IFLOW

Set Application ID

Use the Content Modifier step to set the standard MPL headers below.

 SAP_Sender (Header, Constant): sender system name
 SAP_Receiver (Header, Constant): receiver system name
 SAP_MessageType (Header, Constant): message type
 SAP_ApplicationID (Header, XPath): XPath to read the business identifier, i.e. /DEBMAS07/IDOC/E1KNA1M/KUNNR
 SAP_MessageProcessingLogCustomStatus (Property, Constant): custom status

Also, extract the IDoc number into a header field:

 IDOCNUM (Header, XPath): XPath to the IDoc number field, i.e. /DEBMAS07/IDOC/EDI_DC40/DOCNUM
Content Modifier

Set Custom Header

Use the Groovy script below to read the IDoc number extracted in the Content Modifier and set it as a custom header using the addCustomHeaderProperty method of the MessageLogFactory interface.

import com.sap.gateway.ip.core.customdev.util.Message;

def Message processData(Message message) {
    def messageLog = messageLogFactory.getMessageLog(message);
    if (messageLog != null) {
        // Read the IDoc number from the header
        def IDOCNUM = message.getHeaders().get("IDOCNUM");
        // Set the IDoc number as a custom header
        if (IDOCNUM != null) {
            messageLog.addCustomHeaderProperty("IDOCNUM", IDOCNUM);
        }
    }
    return message;
}

Message Processing Log

This is how the MPL looks for BP 1000000001 processed with IDoc 0000000123456789.
Standard MPL headers

Search based on Application ID

This is a straightforward search in the SAP CPI message processing monitor. A search for ID "1000000001" yields the two messages processed for this BP.

Search based on MPL headers


Search based on custom header ID

The standard UI does not allow us to search messages based on a custom header identifier out of the box, so we use the MPL OData API to retrieve the message ID.

URL:

https://{{cpi_tmn_host}}/itspaces/odata/api/v1/MessageProcessingLogCustomHeaderProperties?$filter=Na... eq 'IDOCNUM' and Value eq '0000000123456789'&$expand=Log&$format=json

Note: the custom header is inserted as a name/value pair, hence the $filter parameter should be passed the custom header name and the value being searched for.
The API call fetches the message ID matching the custom header search condition; use it in the standard message processing monitor to search and troubleshoot.

Of course, it's up to every interface designer or organization to decide on this. But in my opinion it's more sensible to write a technical message ID of the application to the standard MPL header SAP_ApplicationID; for SAP senders, that would be an IDoc number or the XI message ID. After all, the search field in the monitor is the same as for CPI's own message and correlation IDs, and this way we don't mix too many different IDs (with different meanings) into this one field.

Instead, I usually use the custom MPL headers for interface/scenario-specific identifiers, like an order or invoice number. There, you have the advantage that you can specify individual names for the fields.
It is not possible to link iflows from the Manage Integration Artifacts tile of the monitoring perspective, but you can leverage the correlation ID in message monitoring, which lists all related messages (calls from the main flow to multiple flows connected via the ProcessDirect adapter).

Q: But my question is about the address we maintain in the adapter. For example, we have a dynamic address; can we see it somewhere in the logs?

A: You can see the routing if you enable trace and navigate through the MPL (text view as well), but there is a catch: if your iflows are big, SAP truncates these logs after a certain character limit. Another option: before the (dynamic) process call, log the header as a payload (since it is dynamic in your case) to see which particular ProcessDirect endpoint has been called.
Content Modifier:

1. Constant: any constant value that will be carried across the integration process.
2. Expression: values can be read using Camel simple expressions like ${header.<HEADER_NAME>} or ${property.<PROPERTY_NAME>}. Refer to the blog on Camel simple expressions to know more.
3. Global Variable: we can use the Write Variables step to create a variable at a certain point within the message processing sequence, to be used across integration flows of the Cloud Integration tenant. To consume a global variable (tenant scope), set the type to Global Variable and assign it to a header or property in the Content Modifier.
4. Header: pass a value from another header to the newly created header.
5. Local Variable: we can use the Write Variables step to create a variable at a certain point within the message processing sequence of the same integration flow. To consume a local variable (integration flow scope), set the type to Local Variable and assign it to a header or property in the Content Modifier.
6. Number Range: helps to insert unique sequence numbers as part of inbound or outbound messages. To consume number ranges, set the type to Number Range and assign it to a header or property in the Content Modifier.
7. Property: the source value is, depending on the type, a header name, a property name, a variable name or any other type.
8. XPath: set the value of an XPath to a header. Refer to the documentation for the XPath functions supported.

 Data Type: the Data Type column is used only for the types XPath and Expression; the data type can be any Java class. Refer to the documentation for supported data types.
 Value: placeholder for the source value; its meaning depends on the type (Header, Property, Local Variable, ...).
 Default: if you have selected Local Variable or Global Variable as the type, the value specified as Default is assigned to the header if the variable is not found at runtime.
SAP CPI – Message Monitoring – standard out-of-the-box features to use in your iflows

Introduction

With more customers now migrating, or thinking about migrating, their SAP PI/PO integrations to CPI, a few common questions I get from seasoned PI/PO developers and integration managers are:

 How do I monitor?
 How do I filter messages in the message monitor?
 How do I search?
 What should I do to make sure I migrate my interfaces right from a monitoring and operations perspective?

This post is an attempt to list the standard out-of-the-box features available in CPI that every developer should mandatorily use and leverage to make both their own and the operations team's lives easy.

Standard message headers

SAP CPI provides a list of standard message headers. While their usage is up to individual developers, it is essential that you use these message headers in your iflows. Let's take a look at them.

SAP_Sender

 This is a header to identify your sender system.
 If you are from a PI/PO background, think Sender Business System / Service.
 Example values: ERPCLNT100, HCMCLNT500 and so on for your on-premise SAP systems.
 The usage is left to you by SAP, but it is strongly recommended that:
o You have a list of allowed values in your landscape for both SAP and non-SAP systems.
o You externalize these values in your iflow when you define them.

SAP_Receiver

 This is a header to identify your receiving system.
 If you are from a PI/PO background, think Receiver Business System / Service.
 Example values: ERPCLNT100, HCMCLNT500 and so on for your on-premise SAP systems.
 The usage is left to you by SAP, but it is strongly recommended that:
o You have a list of allowed values in your landscape for both SAP and non-SAP systems.
o You externalize these values in your iflow when you define them.

SAP_MessageType

 This is a header to identify your message type.
 If you are from a PI/PO background, think Sender Interface / IDoc.
 Example values: ORDERS.ORDERS05, SalesOrder, PurchaseOrder, etc.
 The usage is left to you by SAP, but it is strongly recommended that you have a list of allowed values in your landscape for both SAP and non-SAP systems.

SAP_ApplicationID

 This is a header to identify your unique message at runtime.
 If you are from a PI/PO background, think IDoc number, file name, purchase order number, etc.

SAP_MessageProcessingLogCustomStatus

 This is a header to put a custom status on your iflow.
 SAP provides standard statuses for iflows (Completed, Escalated, Failed, etc.). Use this if you want to provide a custom status in your iflow beyond those.
 In my experience, this is not something I have found much use for.

How to use standard headers in your iflow

 Define a Content Modifier step, ideally as the first step of your iflow.
 Provide values for each of these headers in the Header section:

 SAP_Sender | Type: Constant | Value: {{SAP_Sender}} | Data Type: String | Examples: ERPCLNT500, HCMCLNT100, SalesforceCRM, Coupa, C4C_ServiceCloud
 SAP_Receiver | Type: Constant | Value: {{SAP_Receiver}} | Data Type: String | Examples: ERPCLNT500, HCMCLNT100, SalesforceCRM, Coupa, C4C_ServiceCloud
 SAP_MessageType | Type: Constant, or XPath for IDocs | Value: {{SAP_MessageType}} | Data Type: String | Examples: Orders, ServiceTicket
 SAP_ApplicationID | Type: XPath (taking the value from the payload) | Value: //DOCNUM | Data Type: Integer | Example: IDoc number

For SAP_MessageType, when you use an IDoc as a sender, I would recommend the following XPath, so that you get the IDoc as the SAP message type in CPI in the form MessageType.IDocType.Extension:

concat(//MESTYP, '.', //IDOCTYP, if (//CIMTYP) then concat('.', //CIMTYP) else ())

For example, a DEBMAS IDoc of basic type DEBMAS07 without an extension yields DEBMAS.DEBMAS07.

How to use standard headers in CPI message monitoring

Use the option "Use More Fields" to view these headers in a drop-down.
Custom message headers

So we have seen the standard SAP message headers, how they can be used to search and filter for messages, and how SAP_ApplicationID can be used to search for messages. BUT:

 What if I want to search for a message by more than one field?
 The IDoc number works for me, but what if I also want to search by PO number, delivery number, EDI interchange ID, and so on?
 Basically, I want CPI to index multiple identifiers and allow searching a message by any of them.
 Are you from an SAP PO background? Think User-Defined Message Search.
How to use custom headers in your iflow

While custom headers provide an additional way to log identifiers in CPI and search for them in the standard message monitor, the only way to log custom headers is a Groovy script.

 Assume you get 2 fields in your payload that you would like to set as your custom headers: IDocNumber and PONumber.
 They are in the input XML payload with the field names IDocNumber and PONumber.
 Extract these values and store them as properties.
 You can then use a Groovy script like the sketch below to log custom headers in CPI.
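A minimal sketch of such a script; the property names IDocNumber and PONumber are the ones assumed in the bullets above:

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def messageLog = messageLogFactory.getMessageLog(message)
    if (messageLog != null) {
        // Properties are assumed to have been extracted from the payload
        // by a preceding Content Modifier step.
        ["IDocNumber", "PONumber"].each { name ->
            def value = message.getProperty(name)
            if (value != null) {
                messageLog.addCustomHeaderProperty(name, value as String)
            }
        }
    }
    return message
}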
How to use custom message headers in CPI message monitoring

Once you have logged your custom headers, you can see them in your message monitor. You can also search for your messages with custom headers:

 In the message monitor, select Use More Fields.
 Provide the custom header name and value in the format "Name=Value", e.g. IDocNumber=2 in our example.

Final thoughts

Standard message headers

 Standard SAP message headers are mandatory. Please use them.
 Ensure you have an allowed list of SAP_Sender, SAP_Receiver and SAP_MessageType values in your landscape. This keeps your landscape clean and allows for easy monitoring.
 Use SAP_ApplicationID to allow searching by payload values.
 If you use IDocs in your landscape, use the XPath given above; you won't regret it.

Custom message headers

 Custom message headers are extremely powerful and will help with your message search in CPI. Ensure you use them!
 Custom message headers can only be set via a Groovy script. The Groovy in this post is an example; make sure you use a reusable Groovy script. Refer to "SAP CPI Custom Headers - Reusable Groovy Script".
 If you use EDI conversion in CPI for EDI to XML and want to search your EDI messages by EDI parameters, refer to "SAP CPI – Reusable Groovy Script to Log All EDI Headers".

SAP Cloud Integration - Message search made easy with custom headers

As members of the integration team, we frequently receive requests from both the sender and receiver teams to trace specific messages based on a key value. Searching for these individual messages within a large stack can be quite laborious.

By introducing custom headers, we can simplify this task. These headers allow us to store key fields associated with each message, which can then be displayed on the monitor page. This way, we can easily search for messages using the values stored in these custom headers, significantly reducing the effort required to locate a particular message in the heap of stacked messages.
To streamline our integration process, I strongly propose implementing custom headers as a solution.

To incorporate custom headers effectively, we can employ a brief Groovy script immediately after the component that receives the input file within the integration flow (iflow).
Scenario:

In the scenario of integrating SuccessFactors with a third-party (Ariba) system, where we receive input in
XML format, we can utilize the "person_id_external" field (employee number) as a key identifier. To
simplify message tracing for specific employees, we propose assigning this key field to a custom header
called EMPID.

By implementing this approach, we can populate the EMPID custom header with the employee number
value during the integration process. Subsequently, when there is a need to track a particular employee,
we can conveniently leverage the EMPID custom header on the monitoring screen. This enables us to
swiftly locate and trace the relevant messages associated with the desired employee, streamlining the
overall monitoring and troubleshooting process.
Groovy script:

I have used XmlSlurper to parse the input XML in this script; the script will change based on the input format.
import com.sap.gateway.ip.core.customdev.util.Message;

def Message processData(Message message) {
    def messageLog = messageLogFactory.getMessageLog(message);
    def body = message.getBody(java.lang.String) as String;

    if (messageLog != null) {
        // Parse the input XML and find every <person> element
        def doc = new XmlSlurper().parseText(body);
        String str = "";
        doc.'**'.findAll { it.name() == 'person' }.each {
            // Log each employee number as the EMPID custom header
            str = it.person_id_external.text();
            messageLog.addCustomHeaderProperty("EMPID", str);
        }
    }
    return message;
}
Monitoring page:

Once you have selected the artifact name, click on the "Use more fields" filter. Within this filter you will find the "Custom Header" option. Here you can input the custom header name and its corresponding value, which in this case is the employee you are searching for.

After entering the custom header name and value, press Enter. The monitoring screen then displays the message(s) associated with the specified employee, allowing you to easily locate the desired information.
Dynamically creating MPL custom headers without hardcoding in groovy script

In our work with CPI (Cloud Platform Integration) MPL (Message Processing Logs), we often encounter
the need to log specific values such as IDoc Numbers, PO Numbers, and Invoice Numbers as key-value
pairs in custom headers. These custom headers allow us to enhance monitoring capabilities.

Traditionally, we would create headers or properties in a content modifier and then read those headers
or properties in a custom Groovy script. However, this approach requires us to hardcode the property
names both in the custom header and the Groovy script. As new requirements arise, we find ourselves
repeatedly adding properties to the Groovy script, which can become cumbersome and error-prone.

To break out of this endless loop, I propose a solution:

1. Property naming convention:
o When creating properties for custom headers, start their names with "mpl-". For example:
 mpl-IDocNumber
 mpl-PONumber
 mpl-InvoiceNumber

2. Dynamic Groovy script (see the sketch below):
o Instead of hardcoding property names in the Groovy script, create a script that dynamically reads all properties starting with "mpl-" from your integration.
o After trimming the first 4 characters (i.e. "mpl-"), use the remaining part as the key for logging in the MPL custom header.
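A minimal Groovy sketch of such a script:

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def messageLog = messageLogFactory.getMessageLog(message)
    if (messageLog != null) {
        // Find every exchange property whose name starts with "mpl-",
        // strip the 4-character prefix and log the rest as a custom header.
        message.getProperties().findAll { k, v -> k.startsWith("mpl-") && v != null }
               .each { k, v ->
                   messageLog.addCustomHeaderProperty(k.substring(4), v as String)
               }
    }
    return message
}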

The output then shows each trimmed property name as a custom header key with its value.

Now, whenever you need to log custom header key-value pairs, you no longer have to hardcode any of the headers or properties within the Groovy code. Instead, you can use a single Groovy script for every interface in your tenant.

-------------------------------------------------------------------------------------------------------------------------------------

Hi, I have a requirement to modify this value during the runtime of the iflow, and it should only store the latest assigned value.

Example: I have the above script at the beginning of the flow, storing the value EMPID=2044298, and at the end of the iflow some logic modifies the value to EMPID=00004.

Currently, when using the API I can see two entries in the custom header API, with both values; however, I only need to see the second value (EMPID=00004).

How to use Transaction BD87 to reprocess failed IDocs

You might also like