Basics
We will generate the address and provide it to the sender. In other words, we give our address to the sender system; since the sender has our address, it can deliver the courier to us.
The endpoint /address will be appended to the CPI tenant URL after deployment, and that URL will be shared with the sender system.
Message Mapping –
Source – XSD
Target – WSDL (web service)
A WSDL can contain a number of operations; you specify which operation you want to call.
HTTP will accept any data: XML, JSON, plain text, etc.
We use the SOAP adapter only for SOAP web services; in all other cases we use HTTP.
The receiver/target system server address needs to be given in the adapter configuration on the receiver side.
We can also use the palette function XML Modifier to remove XML tags and namespaces.
Is there any other way to remove namespaces from payloads apart from XSLT mapping?
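One alternative is a short Groovy script. A minimal sketch, assuming the payload is plain XML and namespace prefixes occur only on elements (regex-based, so use with care):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def body = message.getBody(String) as String
    // Drop xmlns declarations, then element prefixes such as <ns0:...> and </ns0:...>.
    body = body.replaceAll(/\s+xmlns(:\w+)?="[^"]*"/, '')
               .replaceAll(/<(\/?)\w+:/, '<$1')
    message.setBody(body)
    return message
}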
Apart from ProcessDirect, are there any other options to call one iflow from another? Yes – HTTP or SOAP.
there are two modes: one is combine mode, the other one is enrich mode. When you use
combine mode, what happens is the original message and the lookup message obtained from
the external call are combined into a single enhanced payload.
Another difference: with Content Enricher, the connector is drawn from the receiver towards the Content Enricher; with Request Reply, the connector is drawn towards the receiver.
Request Reply (synchronous): HTTP, OData, JDBC
User Credentials
Public Key
Once you download this public key and send it to the SFTP team, they will assign a user to it.
The SFTP team will add the user to this public key; this is not our responsibility, it is shown just for your understanding.
Now we need to check the connectivity in CPI
The proxy type decides whether the connection is on-premise or cloud (internet), based on the option you select.
Cloud Connector:
It is not tied to the sender or receiver side: if CPI is initiating the pull/push request to an on-premise system (e.g. via SFTP, JDBC, OData), then we need to use SAP CC (Cloud Connector).
In case the source server pushes the data to CPI (e.g. HTTP, SOAP, IDoc), there is no need for SAP CC: we just generate a URL and share it with the source server team, and they consume our CPI URL, so in this case CC is not required.
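For illustration, a receiver channel to such an on-premise system would typically be configured like this (values are placeholders):

Proxy Type: On-Premise
Location ID: CC_LOC_01 (only needed if more than one Cloud Connector is attached to the subaccount)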
Create an SSH key in Manage Security -> Keystore -> Create SSH Key.
Once you have created the key, you will have the option to download the public key, and that public key is shared with the SFTP team.
The SFTP team will add a user to that public key.
Now you can test the connection from CPI to the SFTP server by selecting Authentication as Public Key.
Once you get a successful connection, you can configure it at the SFTP adapter level.
SAP application on-premise on the sender side: no need for a Cloud Connector on the sender side.
Mostly in exception handling we will put a Content Modifier to capture the error response.
Data issues: we need to connect with the functional team.
Network issues / external system downtime
We can specify any number of branches, with each branch having a condition.
If the incoming message satisfies a condition, it will be routed to the corresponding branch.
Aggregator: used when you are getting more than one incoming message from the source and need to combine them into a single message.
Simulate vs simulation
Display queue
Simulate
Trace will be enabled for only 10 minutes. Once enabled, we can see the message logs for up to 1 hour.
Manage Security
RETRY: Status retry is set if an error occurred during message processing, and a retry was automatically
started.
ESCALATED: During message processing an error occurred, and no retry has been triggered. For
synchronous messages, an error message is sent to the sender.
FAILED: Message processing failed, message hasn’t been delivered to receiver, and no retries are
possible. In other words: FAILED is a final status, and message processing ultimately has failed.
CANCELLED: Manual cancellation of entries in the JMS queue - MPL is set to status cancelled.
DISCARDED: For scheduler triggered integration flows, the MPL is shown on the worker node where the
message processing started first. For all subsequent message processing starts, the message status is set
to DISCARDED.
For example, assume that an integration flow is initiated by a Timer event (scheduler) and a worker
node goes into an out of memory state. The system starts the worker node again and synchronizes the
integration flow as soon as the node is active. The message is restarted in that case, and a new message
ID generated. The message with the original ID goes into status DISCARDED.
ABANDONED: Message processing was interrupted or log wasn't updated for an uncommonly long time.
The status can change in case processing is resumed.
, "Failed" indicates a complete inability to process the message, while "Escalated" suggests that the
message encountered an issue that needs further investigation but isn’t outright failed. Both statuses
are crucial for monitoring and maintaining the health of integrations in SAP CPI.
Failed
This status indicates that the message processing encountered a critical error that prevented it from
being processed successfully. Common reasons for a failure might include configuration errors, missing
mappings, or connectivity issues.
In SAP Cloud Platform Integration (CPI), the main difference between a tenant management node and a runtime node is that the tenant management node hosts the design and monitoring applications, where a developer creates, modifies, and deploys integration content, while the runtime node is where the deployed integration flows run and messages are actually processed.
In that we focus on Integration Suite, and Integration Suite in turn consists of 4 services:
SAP CPI (Cloud Integration)
SAP API Management
Open Connectors
Integration Advisor
Open Connectors – if there is no adapter provided by SAP and you want to connect CPI with a third-party system, then we go for Open Connectors.
Integration Advisor – when you have EDI requirements, then we go for Integration Advisor.
Property: suppose I have two integration flows (IF1, IF2), where IF2 is connected to IF1 via ProcessDirect. Properties in IF1 cannot be accessed in IF2, because the scope of a property is within the iflow (IF1), whereas headers created in IF1 can be accessed in IF2 because they have global scope.
Properties and headers are only available at runtime in that particular iflow. Unlike variables and data
stores, there's no local or global scope. Also, when the iflow's executed, they're gone; there's no
persistence.
If you store values in headers or properties in your iflow, those values will be gone when the iflow
finishes. If you need to store values longer than that, you can store them in a variable using the Write
Variables step. In other words, they're a way to persist data.
A global variable can be accessed by any iflow, but a local variable can only be accessed by the iflow that
wrote it.
You can see all the variables you've stored in the Operations view => Manage Stores => Variables.
Variables are similar to Data Stores, really. But variables store scalar values (i.e. one number, one
timestamp etc.) whereas Data Stores contain complete payloads (e.g. an XML or JSON document).
Q: What is the difference between a local variable and a global variable?
A: A local variable can be accessed via the same iflow only. A global variable can be accessed via different iflows.
Q: At the iflow's first run the variable is not created yet, but some initial/default value is needed for processing. How to handle such a chicken-and-egg situation?
A: Use a Content Modifier to read the variable and set a default value.
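A minimal sketch of that Content Modifier entry (all names are illustrative):

Action: Create
Name: lastRunDate (the exchange property to fill)
Source Type: Local Variable
Source Value: lastRunDate (the variable written by a Write Variables step at the end of the iflow)
Default: 1970-01-01T00:00:00 (used on the first run, when the variable does not exist yet)

On the first run the Default is used; at the end of the run the Write Variables step stores the current value, so subsequent runs read the real variable.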
-------------------------------------------------------------------------------------------------------------------------------
Data Store:
Q: After a Get or Select from the DS, what are the ways to delete the entry?
A: Use 'Delete on Completion' or a DS Delete by entry ID.
Q: When writing a list of records to the DS, if some record processing fails, will the DS operation be partially successful and partially failed?
A: Yes. In new iflows, 'Transaction Handling' is 'None' by default.
Q: How to select multiple entries from the DS, process each entry independently one by one, keep the successful ones, skip the failed ones, and further write to different DSs for success or failure?
A: Use a combination of DS Select, Splitter, Router, an exception subprocess, the 'Transaction Handling' setting, and 1 source DS + 2 target DSs to achieve this. This will be shown in the course lesson.
Looping Process Call: since in a single call we cannot pull the entire data from SuccessFactors, we go for a Looping Process Call. It will iterate/recall the same Local Integration Process multiple times, based on the count and the condition we provide in the Looping Process Call. You can select which Local Integration Process you want to call.
If you are querying SAP SuccessFactors EC using the CompoundEmployee API, how would you query all the records if the page size is set to 200 and there are a thousand records in EC?
Looping Process Call: Call the Local Integration Process till Condition specified is true.
For example, it can be used to iterate through a list of failed messages present in a datastore for retry.
Similarly, it can also be used for sending a small subset of data at a time, which in turn improves
performance by reducing the memory and processing time for each iteration.
A Looping Process Call involves invoking a subprocess multiple times within a single flow based on a
collection of items or until a certain condition is met.
For each call, you need to specify how many records you want to pull.
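As a sketch, the Looping Process Call configuration for the CompoundEmployee paging question above might look like this (the condition property shown is the one commonly used with the SuccessFactors SOAP adapter's server-side paging; verify the exact name against the adapter documentation):

Local Integration Process: LIP_QueryCompoundEmployee
Condition Expression: ${property.SAP_SuccessFactorsHasMoreRecords} contains 'true'
Max. Number of Iterations: 10

With a page size of 200 and a thousand records, the loop would run 5 times; the iteration cap just guards against endless loops.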
Fetching Data From One Iflow Via Process Direct And Invoking A Loop Process Call With Exception
handling
JMS: JMS queues bring a very critical capability to iflow design and that is the Guaranteed Delivery
pattern. Using queues, we can decouple the communication between the sender and receiver
applications. This can be implemented by developing an iflow that receives messages using any type of
sender adapter, confirms technically that messages were received, and then stores the messages
securely on the tenant in a JMS queue.
From the queue the message can then be processed with the frequency and logic most suitable for the
specific requirements and capabilities of the receiving application. Using queues for decoupling we make
sure that messages will be delivered to the receiver irrespective of capacity or current state of the target
application.
Can you please make a video of an OData query that returns thousands of records (e.g. 20,000) and writes them to an FTP server as a single CSV file? I heard OData pagination, page size, and a Looping Process Call can be used to retrieve very large record sets, but I don't know how it works. I would be very happy if you can solve this.
Request Reply vs content enricher :As you rightly point out, both steps make a synchronous call to an
external system. Where they differ, is in how the response is handled. In the case of Request-Reply,
the response from the external system replaces the current payload. This means that if you need both
the old payload and the response from the external system, you need to store the old payload in e.g. an
exchange property before making the external call.
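A minimal sketch of that "store the old payload" step, placed as a Groovy script right before the Request Reply (the property name is illustrative):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Keep a copy of the current payload; the Request Reply response will overwrite the body.
    message.setProperty("originalPayload", message.getBody(String) as String)
    return message
}

The same can be achieved without a script by using a Content Modifier exchange property of type Expression with the value ${in.body}.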
Headers and Properties can be used in any local integration process in an iflow.
If the information needs to be transferred to the receiver system, then 'Header' should be used in
Content Modifier. If the information is internal to Iflow, then 'Property' should be used. The property
lasts for the entire duration of an exchange but it is not transferred to a receiver
Similarly, if we are using JMS as a sender adapter and have put up a condition to end retries after a
certain retry count, in that case escalation end is the best choice.
JDBC:
Mandatory fields while configuring JDBC:
User name
Password
JDBC URL
First you need to install the JDBC drivers.
Data source creation: to connect to a database we need to create a data source under Manage JDBC Material -> JDBC Data Source.
Sender Side:
OData
Process Direct
SFTP
HTTPS
SuccessFactors
JMS
IDoc
Data Store
Receiver Side:
JDBC
JMS
SFTP
RFC
IDoc
SuccessFactors
HTTP
OData
Slack
Splunk
Workday
Twitter
Sugar CRM
Facebook
Coupa
Amazon DynamoDB
JMS sender adapter – mandatory configuration:
Queue Name
Retry Interval
Max Retry Interval
Receiver Side:
Mandatory configuration:
Queue Name
Number of Concurrent Processes: Enter the number of concurrent processes for each worker node. The recommended value depends on the number of worker nodes, the number of queues on the tenant, and the incoming load. Make sure to enter a value that is as small as possible (1-5), because JMS resources are limited.

Retry Handling:
Retry Interval (in min): Enter a value for the amount of time to wait before retrying message delivery.
Exponential Backoff: Select to double the retry interval after each unsuccessful retry.
Maximum Retry Interval (in min) (only configurable when Exponential Backoff is selected): Enter a value for the maximum amount of time to wait before retrying message delivery. The minimum value is 10 minutes; the default value is set to 60 minutes.

Dead-Letter Queue: If selected, the message is taken out of processing and marked as Blocked in the queue in the following situation: message processing has been stopped due to an out-of-memory error in the worker node, and the message has then been retried twice by the JMS sender adapter (each retry again leading to an out-of-memory error in the worker node). Processing large messages in scenarios with the JMS sender adapter can sometimes cause a worker node failure; node failure can happen if the JMS adapter repeatedly tries to process a failed (large) message. To avoid such a situation, select this option (switched on by default). In such cases, a lock entry is created, which you can view and release in the Monitoring application in the Message Locks tile under Managing Locks. When you release the lock, the system starts retrying the message again.
A dead-letter queue (DLQ) is a special type of message queue that temporarily stores messages that a software system cannot process due to errors. It stores erroneous messages until you process them to investigate the reason for the error.
For example, if your developer sets the maximum retry count to one, the system moves all unsuccessful
deliveries to the DLQ after a single attempt. Some failed deliveries may be caused by temporary network
overload or software issues. This sends many undelivered messages to the DLQ. To get the right balance,
developers optimize the maximum retry count to ensure the software performs enough retries before
moving messages to the DLQ.
OData is adapter-specific: to call an OData service we need to use the OData adapter.
Mandatory fields:
Address
Operation Details
Resource Path
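For illustration, a minimal OData receiver configuration might look like this (host, entity set, and query options are placeholders):

Address: https://<host>/odata/v2
Operation Details: Query (GET)
Resource Path: Products
Query Options: $filter=Price gt 100&$top=50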
To call SuccessFactors we need to use the SuccessFactors adapter, which is likewise adapter-specific.
ABAP language is used:
IDoc
RFC
Proxy
SFTP File
IDoc stands for Intermediate Document.
IDoc Number:
This however means that for each IDoc type that flows from S4H to CPI, a separate Type G connection will have to be created [/cxf/matmas or /cxf/debmas...] in SM59.
One could possibly mitigate this by having a generic routing iFlow [/cxf/idoc and thereby having only 1
SM59 connection for IDOCs to this CPI iFlow] and then making process direct calls to individual iFlows
processing different IDOC types, depending on field values in the control segment.
It would be ideal though, if SAP could build something on S4H, which would allow specifying a generic
path prefix [/cxf/idoc/<idoctype>] in SM59 connection [so that there is one connection, like in tRFC port
for IDOCs between ECC and PO ] and dynamically route the calls [?] to the iFlow endpoints [ /matmas or
/debmas... maintained in Connection of respective iFlows] from S4H.
Control record – IDoc type, message type, IDoc number, sender port, receiver port, receiver partner, sender partner, etc.
Data records
Direction 1 = outbound
Direction 2 = inbound
---------------------------------------------------------------------------------------------------------
1. System connection between S4H and CPI
2. Background configurations required
3. Configure the IDoc sender adapter
4. Search messages in message monitoring based on IDoc number
To be able to send messages from SAP S4Hana to SAP CPI, import the CPI certificates into S4Hana STRUST. You can download the CPI certificates either from the keystore or from any browser.
1. Log in to the S4Hana system --> Tcode STRUST (import all 3 CPI certificates) --> Add to Certificate List
2. Create logical systems for the S4H and CPI systems in Tcode BD54
SFTP:
Q: If we get an unknown host exception while performing the connectivity test, how can we resolve the issue?
A: The particular host is incorrect or not reachable; cross-check with the SFTP team, or check with the network team whether the Cloud Connector and SFTP IPs are whitelisted at the network level.
I understand the property scope is within the iflow and a header can be accessed outside the iflow. When we say "outside the iflow", does that mean outside the iflow but within the package, or are headers accessible everywhere throughout the tenant? Please clarify.
Answer: Declared headers are accessible outside your tenant. For instance, if you are using SOAP or HTTPS, you can get these headers in the header part of the SOAP or HTTP response. Along with the list of standard headers you will see these custom headers as well.
What is meant, is that if you use an HTTP-based adapter to call an external system, properties will never
be passed to that system, but headers might. The HTTP receiver adapter requires you to add headers to
an allow list, though, before they are passed to the external system.
Please note that while there's a Write Variables step, there's no Read Variables step. To fetch the value of a variable into a property or header, you use a Content Modifier with the type set to either Local Variable or Global Variable.
The scope of an element declared in the Header goes beyond the scope of the iflow. The scope of an element declared in the Property is only within the iflow, i.e. the property parameters are not handed over to the receiver.
A property is like a container: it is internal to a process and can't be sent to the receiver. But what is the scope of a property in an iflow?
The difference is that headers can be passed to another flow, but properties can only be used locally.
-------------------------------------------------------------------------------------------------------------------------------------------
- Process Call: a step in an integration flow that invokes a Local Integration Process.
You can use the ProcessDirect adapter to provide direct communication between two integration flows, reducing latency and network overhead.
Splitter should be used in conjunction with Gather (if a Split or Multicast step is used, the steps belonging to one sub-message get the same segment number; they can therefore be grouped together).
Multicast – Join + Gather
In an ideal scenario we will provide the endpoint to the source system, and it is their duty to decide how to call it.
Failed: Message processing failed, message hasn’t been delivered to receiver, and no retries are
possible. In other words: FAILED is a final status, and message processing ultimately has failed.
Retry: Status retry is set if an error occurred during message processing, and a retry was
automatically started.
Completed: Message has been delivered to receiver successfully.
Processing: Message is currently being processed.
Cancelled
Discarded
Abandoned
Escalated: During message processing an error occurred, and no retry has been triggered. For
synchronous messages, an error message is sent to the sender.
Time:
We can see the messages of the last 30 days.
While Message ID and Correlation ID are technical identifiers generated by CPI, we are left with just
Application ID to set a process/functional identifier like Purchase Order Number, Sales Order Number,
Customer Number, etc. This helps to search the MPL based on the business identifier for
monitoring/troubleshooting.
Process Direct
Correlation Id
The correlation ID is a bit more interesting for monitoring processes. In this example, we have a process that spans two CPI processes using the ProcessDirect adapter. When we search for the correlation ID, we can see both processes it is used for.
It would make sense to be able to set this correlation ID to the PI message ID; that way it is a lot easier to find which CPI process was started by which PI message, and you can just search on the PI message GUID.
We simply just need to set the header SAP_MplCorrelationId in the HTTP call to the SAP PI message ID,
and then we can search for the value in our CPI system.
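A minimal sketch of such a call (endpoint and GUID are placeholders):

POST https://<cpi-tenant>/http/orders
SAP_MplCorrelationId: 005056AB12CD1EE9B2E4A7F30E1C8D11
Content-Type: application/xml

CPI then records the supplied value as the MPL correlation ID, so the PI message GUID can be searched directly in the CPI monitor.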
However, in my SAP CPI journey, there were multiple instances where I had a need for more than one ID to use for the MPL search, e.g.:
- Application ID set to the Customer Number from an IDoc; however, I also require a search based on the IDoc number.
- Application ID set to the Order Number from a message sent over a message broker; however, I also require a search based on the Message ID from the message broker.
I achieved this use case by setting the additional ID as a custom header in the MPL. Let us see in this blog how to implement it.
Scenario
Let us see the implementation in a scenario where a DEBMAS IDoc is sent for Business Partner
Replication. We would set BP Number as Application ID and IDoc Number as Custom Header. Then we
see how to search MPL based on both these IDs.
IFLOW
Set Application ID
Use the Content Modifier step to set the below standard MPL headers.
Name – Type – Source Type – Value
SAP_Sender – Header – Constant – Sender System Name
SAP_Receiver – Header – Constant – Receiver System Name
SAP_MessageType – Header – Constant – Message Type
SAP_ApplicationID – Header – XPath – /DEBMAS07/IDOC/E1KNA1M/KUNNR (XPath to read the business identifier)
SAP_MessageProcessingLogCustomStatus – Property – Constant – Custom Status
Use a Groovy script to read the IDoc number extracted in the Content Modifier and set it as a custom header using the addCustomHeaderProperty method of the MessageLogFactory interface, as sketched below.
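A minimal sketch of such a script, assuming the IDoc number was stored in an exchange property named IDocNumber:

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Read the IDoc number extracted earlier in the Content Modifier.
    def idocNum = message.getProperty("IDocNumber")
    def messageLog = messageLogFactory.getMessageLog(message)
    if (messageLog != null && idocNum != null) {
        // Logged as a name/value pair, searchable later via the MPL OData API.
        messageLog.addCustomHeaderProperty("IDOCNUM", idocNum.toString())
    }
    return message
}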
This is how the MPL is for a BP 1000000001 processed with IDoc 0000000123456789.
Standard MPL Headers
This is a straightforward search in the SAP CPI Message Processing monitor. A search for ID "1000000001" yields the two messages processed for this BP.
The standard UI does not allow us to search messages based on the custom header identifier out of the box (OOTB). Hence we will use the MPL OData API to retrieve the message ID.
URL
https://{{cpi_tmn_host}}/itspaces/odata/api/v1/MessageProcessingLogCustomHeaderProperties?$filter=Name eq 'IDOCNUM' and Value eq '0000000123456789'&$expand=Log&$format=json
Note: Custom Header is inserted as a Name/Value pair, hence the $filter parameter should be passed
with the Custom Header Name and the value that we are searching for.
The API Call fetches the Message ID matching the custom header search condition. Use it in the standard
Message Processing monitor to search and troubleshoot.
Of course, it's up to every interface designer or organization to decide on this. But: In my opinion it's
more sensible to write a technical message ID of the application to the standard MPL header
"SAP_ApplicationID". In case of SAP senders, that would be an IDoc number or the XI message ID. After
all, the search field in the monitor is the same as for CPI's own message and correlation IDs. Also, this way we don't mix up too many different IDs (with different meanings) into this one field.
Instead, I usually use the custom MPL headers for interface/scenario specific identifiers, like an order or
invoice number, for example. There, you have the advantage that you can specify the individual names
of the fields.
It's not possible to link iflows from the Manage Integration Artifacts tile of the Monitoring perspective. But you can leverage the correlation ID in message monitoring, which lists all related messages (called from the main flow to multiple flows connected via the ProcessDirect adapter).
But my question is about the address which we maintain in the adapter. For example, if we have a dynamic address, can we see it somewhere in the logs?
You can see the routing if you enable trace and navigate through the MPL (text view also), but there is a catch: if your iflows are big, SAP will truncate these logs after a certain character limit.
Another option would be, before the (dynamic) process call, to log the header (which is dynamic in nature in your case) as a payload, to see which particular ProcessDirect endpoint has been called.
Content Modifier:
1. Constant: any constant value that will be carried across the integration process.
7. Property: the source value depends on the type: either a header name, a property name, a variable name, or any other type.
8. XPath: set the value of an XPath to a header. Refer to the documentation for the different XPath functions supported.
Data Type: the Data Type column is used only for the types XPath and Expression. The data type can belong to any Java class. Refer to the documentation for the supported data types.
Value: placeholder for the source value. Its meaning depends on the type (like Header, Property, Local Variable...).
Default: if you have selected Local Variable or Global Variable as Type, the value specified as Default will be assigned to the header value if the variable is not found at runtime.
SAP CPI – Message Monitoring – Standard Out of the Box Features to use in your Iflows
Introduction
With more customers now migrating, or thinking about migrating, their SAP PI/PO integrations to CPI, a few common questions I get from seasoned PI/PO developers and integration managers are:
How do I monitor?
How do I filter messages in Message Monitor?
How do I search?
What should I do to make sure I migrate my Interfaces right from a Monitoring and Operations
perspective?
This post is an attempt to list out the standard out-of-the-box features available in CPI that every developer "should" mandatorily use and leverage, to make both their and the operations team's lives easy.
SAP CPI provides a list of standard message headers. While their usage is up to individual developers, it is essential and imperative that you use these message headers in your iflows. Let's take a look at these standard message headers:
SAP_Sender
SAP_Receiver
SAP_MessageType
SAP_ApplicationID
SAP_MessageProcessingLogCustomStatus
Define a Content Modifier step – ideally the first step of your iflow – and provide values for each of these headers in the Header tab.
Name – Type – Value – Data Type – Example
SAP_Sender – Constant – {{SAP_Sender}} – String – ERPCLNT500, HCMCLNT100, SalesforceCRM, Coupa, C4C_ServiceCloud
SAP_Receiver – Constant – {{SAP_Receiver}} – String – ERPCLNT500, HCMCLNT100, SalesforceCRM, Coupa, C4C_ServiceCloud
SAP_MessageType – Constant or XPATH (for IDocs) – {{SAP_MessageType}} – String – Orders, ServiceTicket
SAP_ApplicationID – XPATH (taking value from payload) – //DOCNUM – Integer – Example for IDoc number
For SAP_MessageType, when you use an IDoc as a sender, I would recommend using an XPATH built with MessageType.IDocType.Extension, so that you can have the IDoc as the SAP message type in CPI.
Use the option "Use More Fields" to view these headers in a drop-down.
Custom Message Headers
So we have seen standard SAP message headers, how they can be used to search and filter for messages, and how SAP_ApplicationID in particular can be used to search for messages. BUT:
While custom headers provide an additional way to log headers in CPI and search for them in the standard message monitor, the only way to log custom headers is to use a Groovy script.
Assume you get 2 fields in your payload that you would like to set as your custom headers – IDocNumber and PONumber. They are in the input XML payload with field names IDocNumber and PONumber.
Extract these values and store them as properties.
You can then use a Groovy script like the sketch below to log custom headers in CPI.
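A minimal sketch of such a script, assuming the two values were stored as exchange properties named IDocNumber and PONumber:

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def messageLog = messageLogFactory.getMessageLog(message)
    if (messageLog != null) {
        // Log each extracted property as a searchable MPL custom header.
        ["IDocNumber", "PONumber"].each { name ->
            def value = message.getProperty(name)
            if (value != null) {
                messageLog.addCustomHeaderProperty(name, value.toString())
            }
        }
    }
    return message
}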
How to use Custom Message Headers in CPI Message Monitoring
Once you have logged your Custom Headers, you can then see them in your message monitor as below.
You can also search for your messages with Custom Headers as below
In the Message Monitor, select Use More Fields.
Provide the custom header name and value in the format "Name=Value".
E.g.: IdocNumber=2 in our example.
Final Thoughts
Custom Message Headers are extremely powerful and will help with your message search in CPI.
Ensure you use this!
Custom message headers can only be set via a Groovy script. The Groovy in this post is an example; ensure that you use a reusable Groovy script. Refer to SAP CPI Custom Headers - Reusable Groovy Script.
If you use EDI Conversion in CPI for EDI to XML and want to search your EDI Messages with EDI
Parameters, then refer SAP CPI – Reusable Groovy Script to Log All EDI Headers
SAP Cloud Integration - Message search made easy with custom headers
As members of the Integration team, we frequently encounter requests from both the sender and
receiver teams to trace specific messages based on a key value. Currently, searching for these individual
messages within a large stack can be quite laborious.
By introducing custom headers, we can simplify this task. These headers would allow us to store key
fields associated with each message, which can then be displayed on the monitor page. This way, we can
easily search for messages using the values stored in these custom headers, significantly reducing the
effort required to locate a particular message from the heap of stacked messages.
To streamline our integration process, I strongly propose implementing custom headers as a solution.
To incorporate custom headers effectively, we can employ a brief Groovy script immediately after the
component responsible for receiving the input file within the integration flow (iflow).
Scenario:
In the scenario of integrating SuccessFactors with a third-party (Ariba) system, where we receive input in
XML format, we can utilize the "person_id_external" field (employee number) as a key identifier. To
simplify message tracing for specific employees, we propose assigning this key field to a custom header
called EMPID.
By implementing this approach, we can populate the EMPID custom header with the employee number
value during the integration process. Subsequently, when there is a need to track a particular employee,
we can conveniently leverage the EMPID custom header on the monitoring screen. This enables us to
swiftly locate and trace the relevant messages associated with the desired employee, streamlining the
overall monitoring and troubleshooting process.
Groovy Script :
I have used XmlSlurper to parse the input XML in this script; the script will change based on the input format.
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def messageLog = messageLogFactory.getMessageLog(message)
    if (messageLog != null) {
        // Parse the XML payload and log every person_id_external value as the EMPID custom header.
        def root = new XmlSlurper().parseText(message.getBody(String) as String)
        root.'**'.findAll { it.name() == 'person_id_external' }.each {
            messageLog.addCustomHeaderProperty("EMPID", it.text())
        }
    }
    return message
}
Monitoring Page:
Once you have selected the Artifact name, proceed to click on the "Use more fields" filter. Within this
filter, you will find the "Custom Header" option. Here, you can input the Custom header name and its
corresponding value, which in this case would be the employee you are searching for.
After entering the custom header name and value, simply press the Enter key. The monitoring screen will then display the message(s) associated with the specified employee, allowing you to easily locate the desired information.
Dynamically creating MPL custom headers without hardcoding in groovy script
In our work with CPI (Cloud Platform Integration) MPL (Message Processing Logs), we often encounter
the need to log specific values such as IDoc Numbers, PO Numbers, and Invoice Numbers as key-value
pairs in custom headers. These custom headers allow us to enhance monitoring capabilities.
Traditionally, we would create headers or properties in a content modifier and then read those headers
or properties in a custom Groovy script. However, this approach requires us to hardcode the property
names both in the custom header and the Groovy script. As new requirements arise, we find ourselves
repeatedly adding properties to the Groovy script, which can become cumbersome and error-prone.
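A minimal sketch of the dynamic approach, assuming a naming convention where every exchange property whose name starts with the prefix "CH_" should become a custom header (the prefix and property names are illustrative):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    def messageLog = messageLogFactory.getMessageLog(message)
    if (messageLog != null) {
        message.getProperties().each { name, value ->
            if (name.startsWith("CH_") && value != null) {
                // Strip the prefix so e.g. CH_PONumber is logged as PONumber.
                messageLog.addCustomHeaderProperty(name.substring(3), value.toString())
            }
        }
    }
    return message
}

New requirements are then handled by adding another CH_-prefixed property in the Content Modifier, with no change to the script.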
-------------------------------------------------------------------------------------------------------------------------------------
Hi, I have a requirement to modify this value during the runtime of the iflow, and it should only store the latest assigned value.
Ex: I have the above script at the beginning of the flow, and it stores the value as EMPID=2044298; at the end of the iflow there is a piece of logic that modifies the value to EMPID=00004.
Currently, while using the API I can see two entries in the CustomHeader API, and I can see both values. However, I only require the second value (EMPID=00004).