The ProcessDirect Adapter
The ProcessDirect adapter lets you decompose a large integration flow into smaller flows that can be
owned and managed by multiple integration developers independently. This allows several people to
work on different parts of the same integration scenario simultaneously.
The ProcessDirect adapter is used for communication between integration artifacts (iFlows) deployed
on the same SAP Cloud Platform Integration tenant.
If you want to improve the performance of interrelated iFlows, the ProcessDirect adapter in SAP CPI is
the solution. It provides direct communication between two integration flows: unlike with the HTTP or
SOAP adapters, the communication between the two integration flows is not routed via the load
balancer.
Two integration flows, a producer integration flow and a consumer integration flow, can be directly
connected using the ProcessDirect adapter. As shown in the figure, the producer integration flow utilizes
a ProcessDirect receiver adapter to send a message to the consumer integration flow, which, in turn,
uses a ProcessDirect sender adapter to receive the message.
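The wiring is established purely by matching addresses: the producer's ProcessDirect receiver channel
and the consumer's ProcessDirect sender channel must use the same address value. A minimal sketch
(the address is illustrative):

Producer iFlow -> ProcessDirect receiver channel, Address: /demo/consumer
Consumer iFlow -> ProcessDirect sender channel, Address: /demo/consumer

At runtime the call stays inside the tenant; no load balancer or network hop is involved.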
Additional information
Before the development of the iFlows begins, it is necessary to bear in mind that several producer iFlows
can trigger one consumer iFlow using the ProcessDirect adapter.
Multiple producers can connect to a single consumer, but the reverse is not possible. The cardinality restriction is also
valid across integration projects. If two consumers with the same endpoint are deployed, then the start of the second
consumer fails.
While configuring the ProcessDirect adapter, please pay attention to the receiver address that you are
using to trigger the consumer iFlow from the producer iFlow.
Representation of the address configuration for the ProcessDirect adapter in both the consumer and
producer iFlows
Example
For demonstration purposes, I am going to create two producer iFlows, each containing a Content
Modifier step. The Content Modifier step creates a new exchange property and then prints it out in the
message body.
Producer #1
Producer #2
Here are the settings of the ProcessDirect adapter for two producer iFlows
Moving on to the consumer iFlow: to make it startable from the producers, the ProcessDirect adapter
must be set up as the entry point of the flow, with the matching address. Please refer to the screenshot
below.
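In plain text, the configuration used in this demo looks roughly like this (all names, addresses, and
property values are illustrative):

Producer #1: Content Modifier sets exchange property demoProperty = 'Producer 1';
             ProcessDirect receiver channel, Address: /demo/print
Producer #2: Content Modifier sets exchange property demoProperty = 'Producer 2';
             ProcessDirect receiver channel, Address: /demo/print
Consumer:    ProcessDirect sender channel, Address: /demo/print;
             Content Modifier writes ${property.demoProperty} into the message body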
After deploying the changes, I trigger the first producer's endpoint, and then the second one.
If I open the Monitoring tool, I can see two calls of the producer iFlows and, correspondingly, two calls
of the consumer iFlow.
I hope this makes sense.
Here, the source XML has to contain one field; depending on its value, we will call the corresponding
consumer flow:
Consumer flow 1: Orders
Consumer flow 2: Invoice
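One way to model this dispatch is a Router in the producer iFlow whose branches end in ProcessDirect
receiver channels with different addresses. A sketch, assuming a hypothetical MessageType element in
the source XML:

Branch 'Orders' condition (XPath): //MessageType = 'Orders' -> ProcessDirect Address: /consumer/orders
Branch 'Invoice' condition (XPath): //MessageType = 'Invoice' -> ProcessDirect Address: /consumer/invoice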
-------------------------------------------------------------------------------------------------------------------------------
Variables
Let us come back to the scenario we described earlier: we wish to store the last successful run date of a
particular iFlow. For this purpose, we can use a variable.
Creation of a variable
In our iFlow, we have to include a ‘Write Variables’ shape to do this. Using a Camel Simple expression as
shown below, we can store values in variables. Please note that we did not need to create the variable
in the Operations view first.
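For example, to capture the run timestamp, the Write Variables step can map a variable name to a
Camel Simple expression (the variable name is illustrative):

Name: lastSuccessfulRun
Value (expression): ${date:now:yyyy-MM-dd HH:mm:ss}

Every successful pass through this shape overwrites the variable with the current date and time.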
As we had discussed earlier, we may wish to store this value only for a particular path of execution
among many. In that case, we can place this shape in only that path.
To read the values stored in variables, all we need to do is reference the variables as shown below. This
can be done in Message Exchange Properties as well.
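A sketch of reading the value back in a Content Modifier (names are illustrative): add an Exchange
Property with

Name: lastRun
Source Type: Local Variable
Source Value: lastSuccessfulRun

and the stored date is then available in later steps as ${property.lastRun}.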
Scope/Visibility of Variables:
Variables have local scope by default. This means that they are visible and accessible to only one iFlow.
But what if we wanted to read, or possibly even modify, this value from another iFlow? In that case, we
will have to use a global variable. As the name suggests, the scope of this variable is not confined to just
one iFlow: it can be read and modified by any iFlow deployed on the tenant.
When can something like this be useful? As an example, suppose the value written by one IFlow is to be
used by another for its processing.
Since global variables are not limited to one iFlow, when we look at these variables under Manage
Stores, they have no value mentioned in the iFlow ID column.
To use a variable across multiple integration flows deployed on the same tenant, define it as a global
variable.
Number Ranges
Now we are going to hop on to the next topic: Number Ranges. We already know it’s a mode of data
persistence, but what exactly does it do and what makes it different?
Number ranges act as sequences which can be used for marking messages, among other applications.
Suppose that in a particular integration scenario, all messages to the destination system must have a
unique sequence number. This is where a number range can be of use.
Number ranges can be used as per requirement. If they are to be used only under certain conditions and
not in every execution, they can be referenced within the path matching those conditions to avoid
unnecessary wastage of numbers in sequence. This can be done with the use of Routers, for example.
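For example, within the routed branch a Content Modifier can draw the next value from a deployed
Number Range Object (the object name is illustrative):

Header Name: SequenceNumber
Source Type: Number Range
Source Value: OrderSequence

Each message passing through this step consumes the next number of the OrderSequence range.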
Coming back to variable scope for a moment: you can also use a variable to share data across different
steps of the same integration flow. In this case, define it as a local variable.
Global scope:
Select this option if you want the variable to be used in other integration flows deployed on the same
tenant.
Local Integration Process
1. Open the integration flow in edit mode. From the palette, choose Call > Local Call > Process Call
and place the Process Call in the integration flow where you want to initiate the local
integration process (LIP).
2. From the palette, choose Process > Local Integration Process and place it in the integration
editor.
3. Next, connect the Process Call to the Local Integration Process: open the Process Call and, in
the Processing tab, select the LIP.
-------------------------------------------------------------------------------------------------------------------------------------------
The Iterating Splitter splits a composite message into a series of messages without copying the
enveloping elements of the composite message.
The General Splitter splits a composite message comprising N messages into N individual messages,
each containing one message together with the enveloping elements of the composite message.
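A small illustration with a hypothetical composite message: given the input

<Orders><Order id="1"/><Order id="2"/></Orders>

a General Splitter splitting at /Orders/Order produces two messages that keep the envelope,

<Orders><Order id="1"/></Orders> and <Orders><Order id="2"/></Orders>

while an Iterating Splitter produces two messages without it:

<Order id="1"/> and <Order id="2"/>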
Idempotent Process Call
Execute a process call step to check whether an incoming message was already processed, and skip the
processing of this message.
The Idempotent Process Call detects if a message ID has already been successfully processed and stores
the status of the successful process in the idempotent repository. If there is duplicate execution with the
same message ID (for example if there’s a retry by the sender system), the called subprocess can either
be skipped or the message is marked as a duplicate. You can then decide how to handle the duplicate in
the subprocess.
The Idempotent Process Call is useful for modeling at-most-once or exactly-once handling in the
integration flow. For example, if you have a receiver system (let's say a third-party legacy system) that
can’t handle duplicate messages properly, you can call the receiver system from within an Idempotent
Process Call.
Message IDs stored in the idempotent repository are deleted by default after 90 days. After this
period, message IDs that were already stored in the database can no longer be detected as duplicates
during later processing.
The uniqueness check is bound to the specific flow step instance. That means:
Each instance of the idempotent process call is independent from other instances. If a message
is marked as done by one flow step instance, this doesn't influence other flow step instances
which use the same message ID.
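A sketch of a typical configuration (the header name is an assumption; any expression that stays stable
across retries works):

Message ID (expression): ${header.orderNumber}

If the sender retries the same order, the second message carries the identical orderNumber, is found in
the idempotent repository, and the subprocess is skipped or the message is flagged as a duplicate,
depending on the configured behavior.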
Looping Process Call
A Looping Process Call invokes a local integration process repeatedly, as long as a configured condition
evaluates to true or until the maximum number of iterations is reached.
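A sketch of a loop configuration, assuming a hypothetical property that the called subprocess updates
on each pass (for example when fetching paged data):

Condition Expression: ${property.remainingRecords} > 0
Max. Number of Iterations: 99

The local process would fetch the next page and decrease remainingRecords until nothing is left, at
which point the loop ends.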
---------------------------------------------------------------------------------------------------------------------------------------
Use the Simulation feature to test an integration flow and check the desired outcome even before
deployment.
Splitter
If a message contains multiple elements but each element needs to be processed in a different way, you
can use the Splitter pattern to break up the message into multiple individual messages according to the
number of elements.
Content-Based Router
Let's assume that you have an order process where the order can be handled by a specific inventory
system depending on the shipping address. Content-based routing forwards the message to the right
recipient, depending on the content of the message.
Implementation
For each receiver branch, you maintain a condition in the form of an XPath expression. The XPath
expression can be based either on the payload data or on the message header. During message
processing, Cloud Integration evaluates the condition and, if it is met, routes the message to the
respective receiver. If no receiver can be determined, Cloud Integration can proceed according to one
of the following variants: route the message to a default receiver, or end the message processing with
an error.
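A sketch of branch conditions for the inventory example (element names are hypothetical):

Branch 'Inventory EU' condition (XPath): //Order/ShippingAddress/Country = 'DE'
Branch 'Inventory US' condition (XPath): //Order/ShippingAddress/Country = 'US'
Default branch: the fallback receiver, or an error end event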
Content Enricher
You want to send an order to a supplier but you can't provide all the information that the receiver
system requires to process the order. For instance, the items only contain a product category code but
the main category name is missing. The Content Enricher reads data synchronously from an external
system, and appends the additional information to the original message before routing to the actual
receiver.
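An illustration with hypothetical payloads: the original item carries only a category code,

<Item><ProductCategory>471</ProductCategory></Item>

the enricher looks up the matching master data synchronously (for example via an OData call), and the
merged message routed to the receiver contains both code and name:

<Item><ProductCategory>471</ProductCategory><CategoryName>Office Supplies</CategoryName></Item>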
Message Filter
Let's assume that you want to send product information to an inventory system, but the inventory
system only handles a subset of the range of products, depending on the product category. You can use
the Message Filter pattern to remove any data from a channel that you aren't interested in. The
Message Filter is a specific type of the Message Router pattern that has only one single receiver channel.
Any incoming message is evaluated, and if it meets the criteria specified by the Message Filter, the
message is routed to the receiver, otherwise it's discarded.
Use Case
The incoming order contains multiple items. However, you're only interested in items of a specific
product category. Implement the content filter in such a way that only those few data items are
included in the message. In addition, you need to ensure that the header information, such as the
purchase order, is kept.
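A sketch of such a filter, assuming hypothetical element names: a filter step with the XPath

/Order/Items/Item[Category = 'Electronics']

keeps only the matching items. Because this replaces the whole message body, the order header (such
as the purchase order number) is typically saved into an exchange property beforehand and re-added
afterwards, for example with Content Modifier steps.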
Aggregator
You want to combine related individual messages so that they can be processed in bulk. Using an
Aggregator pattern, you can collect and store individual messages until a complete set of related
messages has been received. The aggregated message is then sent to the actual receiver.
Use Case
You want to collect individual product items related to the same order. The aggregated items are to be
sent as one single order with multiple items.
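A sketch of a typical Aggregator configuration for this use case (the XPath values are illustrative):

Correlation Expression (XPath): /Item/OrderNumber
Aggregation Algorithm: Combine in Sequence
Message Sequence Expression (XPath): /Item/ItemNumber
Last Message Condition (XPath): /Item/LastItem = 'true'
Completion Timeout: 60 min

Items sharing the same OrderNumber are collected into one bulk message and released once the last
item arrives or the timeout expires.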
Handle Exceptions
Implementation
To handle exceptions, add an exception subprocess to the integration flow.
Exceptions that occur during message processing are caught and handled by the logic implemented in
the exception subprocess. The exception handling can even distinguish between different error
situations in the exception subprocess and, according to the error category, send back a custom error
message to the sender application system.
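Inside the exception subprocess, a short Groovy script can read the caught exception and build such a
custom response. A minimal sketch (CamelExceptionCaught is the standard Camel property holding the
exception; returning an HTTP response code assumes an HTTP-based sender):

import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // The exception caught by the subprocess is exposed as an exchange property
    def ex = message.getProperty("CamelExceptionCaught")
    if (ex != null) {
        // Replace the raw stack trace with a readable error payload for the sender
        message.setBody("Processing failed: " + ex.getMessage())
        // Illustrative: report the failure as HTTP 500 to an HTTP sender
        message.setHeader("CamelHttpResponseCode", 500)
    }
    return message
}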
You cannot catch exceptions of a local integration process in the main integration process.
Let's assume that you've modularized your integration logic using subprocesses, and an
exception is raised in a subprocess.
Implementation
To learn how to handle such an exception, check out the integration flow Handle Errors - Local
Integration Process.
Reprocessing (retriggering) failed messages
------------------------------------------------------------------------------------------------------------------
Process Direct:
Consumer flows: depending on the message type, we can create N consumer flows, for example:
Consumer flow 1: Orders
Consumer flow 2: Invoice
At the partner profile level, you add the corresponding new message type.