Integration Patterns and Practices
Version 45.0, Spring ’19
@salesforcedocs
Last updated: February 27, 2019
© Copyright 2000–2019 salesforce.com, inc. All rights reserved. Salesforce is a registered trademark of salesforce.com, inc.,
as are other names and marks. Other marks appearing herein may be trademarks of their respective owners.
Pattern Template
Each integration pattern follows a consistent structure, which standardizes the information presented for each pattern and makes it easier to compare patterns.
Name
The pattern identifier that also indicates the type of integration contained in the pattern.
Context
The overall integration scenario that the pattern addresses. Context provides information about what users are trying to accomplish and
how the application will behave to support the requirements.
Problem
The scenario or problem (expressed as a question) that the pattern is designed to solve. When reviewing the patterns, read this section
to quickly understand if the pattern is appropriate for your integration scenario.
Forces
The constraints and circumstances that make the stated scenario difficult to solve.
Solution
The recommended way to solve the integration scenario.
Sketch
A UML sequence diagram that shows you how the solution addresses the scenario.
Results
Explains the details of how to apply the solution to your integration scenario and how it resolves the forces associated with that scenario.
This section also contains new challenges that can arise as a result of applying the pattern.
Sidebars
Additional sections related to the pattern that contain key technical issues, variations of the pattern, pattern-specific concerns, and so
on.
Example
An end-to-end scenario that describes how the design pattern is used in a real-world Salesforce scenario. The example explains the
integration goals and how to implement the pattern to achieve those goals.
Pattern Summary
The following table lists the integration patterns contained in this document.
List of Patterns

Remote Process Invocation—Request and Reply: Salesforce invokes a process on a remote system, waits for completion of that process, and then tracks state based on the response from the remote system.

Remote Process Invocation—Fire and Forget: Salesforce invokes a process in a remote system but doesn't wait for completion of the process. Instead, the remote process receives and acknowledges the request and then hands control back to Salesforce.

Batch Data Synchronization: Data stored in Lightning Platform is created or refreshed to reflect updates from an external system, and changes from Lightning Platform are sent to an external system. Updates in either direction are done in a batch manner.

Remote Call-In: Data stored in Lightning Platform is created, retrieved, updated, or deleted by a remote system.

UI Update Based on Data Changes: The Salesforce user interface must be automatically updated as a result of changes to Salesforce data.
Pattern Approach
The integration patterns in this document are classified into two categories:
• Data Integration—These patterns address the requirement to synchronize data that resides in two or more systems so that both
systems always contain timely and meaningful data. Data integration is often the simplest type of integration to implement, but
requires proper information management techniques to make the solution sustainable and cost-effective. Such techniques often
include aspects of Master Data Management (MDM), data governance, mastering, de-duplication, data flow design, and others.
• Process Integration—The patterns in this category address the need for a business process to leverage two or more applications to
complete its task. When you implement a solution for this type of integration, the triggering application has to call across process
boundaries to other applications. Usually, these patterns also include both orchestration (where one application is the central
“controller”) and choreography (where applications are multi-participants and there is no central “controller”). These types of
integrations often require complex design, testing, and exception handling. Also, such composite applications are typically
more demanding on the underlying systems because they often need to support long-running transactions and the ability to
report on and manage process state.
Choosing the best integration strategy for your system is not trivial. There are many aspects to take into consideration and many tools
that can be used, with some tools being more appropriate than others for certain tasks. Each pattern addresses specific critical areas
including the capabilities of each of the systems, volume of data, failure handling, and transactionality.
Aspect: Source/Target
Description: Specifies the requestor of the integration transaction along with the target(s) that provide the information. The technical capabilities of the source and target systems, coupled with the type and timing of the integration, may require an additional middleware or integration solution. See each pattern for additional details.

Note: An integration can require an external middleware or integration solution (for example, an Enterprise Service Bus) depending
on which aspects apply to your integration scenario.
Middleware Terms and Definitions
Event handling
Event handling is the receipt of an identifiable occurrence at a designated receiver (“handler”). The key processes involved in event handling include:
• Identifying where an event should be forwarded.
• Executing that forwarding action.
• Receiving a forwarded event.
• Taking some kind of appropriate action in response, such as writing to a log, triggering an error or recovery process, or sending an additional message.
Note that the event handler might ultimately forward the event to an event consumer.
Common uses of this feature with middleware can be extended to include the popular “publish/subscribe” or “pub/sub” capability. In a publish/subscribe scenario, the middleware routes requests or messages to active data-event subscribers from active data-event publishers. These consumers with active listeners can then retrieve the events as they are published.
In Salesforce integrations using middleware, the control of event handling is assumed by the middleware layer; it collects all relevant events (synchronous or asynchronous) and manages distribution to all endpoints, including Salesforce.
See http://searchsoa.techtarget.com/definition/event-handler.

Protocol conversion
Protocol conversion “is typically a software application that converts the standard or proprietary protocol of one device to the protocol suitable for another device to achieve interoperability. In the context of middleware, connectivity to a particular target system may be constrained by protocol. In such cases, the message format needs to be converted to or encapsulated within the format of the target system, where the payload can be extracted. This is also known as tunneling.”3
Salesforce doesn’t support native protocol conversion, so it’s assumed that any such requirements are met by either the middleware layer or the endpoint.

3. Gregor Hohpe and Bobby Woolf, Enterprise Integration Patterns (Boston: Addison-Wesley Professional, 2003).
Translation and transformation
Transformation is the ability to map one data format to another to ensure interoperability between the various systems being integrated. Typically, this entails reformatting messages en route to match the requirements of the sender or recipient. In more complex cases, one application can send a message in its own native format, and two or more other applications might each receive a copy of the message in their own native format.
Middleware translation and transformation tools often include the ability to create service facades for legacy or other non-standard endpoints; this allows those endpoints to appear to be service-addressable.
With Salesforce integrations, it’s assumed that any such requirements are met by either the middleware layer or the endpoint. Transformation of data can be coded in Apex, but we don’t recommend it due to maintenance and performance considerations.
See http://en.wikipedia.org/wiki/Message-oriented_middleware.
Queuing and buffering
Queuing and buffering generally rely on asynchronous message passing, as opposed to a request-response architecture. In asynchronous systems, message queues provide temporary storage when the destination program is busy or connectivity is compromised. In addition, most asynchronous middleware systems provide persistent storage to back up the message queue.
The key benefit of an asynchronous message process is that if the receiver application fails for any reason, the senders can continue unaffected; the sent messages simply accumulate in the message queue for later processing when the receiver restarts.
Salesforce provides only explicit queuing capability in the form of workflow-based outbound messaging. To provide true message queueing for other integration scenarios (including orchestration, process choreography, quality of service, and so on), a middleware solution is required.
See http://en.wikipedia.org/wiki/Message-oriented_middleware.
Synchronous transport protocols
Synchronous transport protocols refer to protocols that support activities wherein a “single thread in the caller sends the request message, blocks to wait for the reply message, and then processes the reply.... The request thread awaiting the response implies that there is only one outstanding request or that the reply channel for this request is private for this thread.”4

Asynchronous transport protocols
Asynchronous transport protocols refer to protocols supporting activities wherein “one thread in the caller sends the request message and sets up a callback for the reply. A separate thread listens for reply messages. When a reply message arrives, the reply thread invokes the appropriate callback, which reestablishes the caller’s context and processes the reply. This approach enables multiple outstanding requests to share a single reply thread.”5
Mediation routing
Mediation routing is the specification of a complex “flow” of messages from component to component. For example, many middleware-based solutions depend on a message queue system. While some implementations permit routing logic to be provided by the messaging layer itself, others depend on client applications to provide routing information or allow for a mix of both paradigms. In such complex cases, mediation (on the part of middleware) simplifies development, integration, and validation.
“Specifically, Mediators coordinate a group of objects so that they [do not] need to know how to coordinate with each other…. Then, each consumer could focus on processing a particular kind of message, and the coordinator [Mediator] could make sure the right message gets to the right consumer.”6

4. Gregor Hohpe and Bobby Woolf, Enterprise Integration Patterns (Boston: Addison-Wesley Professional, 2003).
5. Ibid.
6. Ibid.
Process choreography and service orchestration
Process choreography and service orchestration are each forms of “service composition” where any number of endpoints and capabilities are being coordinated.
The difference between choreography and service orchestration is:
• Choreography can be defined as “behavior resulting from a group of interacting individual entities with no central authority.”7
• Orchestration can be defined as “behavior resulting from a central conductor coordinating the behaviors of individual entities performing tasks independent of each other.”8
In addition, “an orchestration shows the complete behavior of each service whereas the choreography combines the interface behavior descriptions of each service.”9
Portions of business process choreographies can be built in Salesforce workflows or using Apex. We recommend that all complex orchestrations be implemented in the middleware layer because of Salesforce timeout values and governor limits (especially in solutions requiring true transaction handling).

7. “Choreography and Orchestration: A Software Perspective,” e-Zest, last accessed April 11, 2013, http://www.e-zest.net/blog/choreography-and-orchestration-a-software-perspective/.
8. Ibid.
9. “Orchestration vs. Choreography,” Stack Overflow, last accessed April 11, 2013, http://stackoverflow.com/questions/4127241/orchestration-vs-choreography.
Transactionality (encryption, signing, reliable delivery, transaction management)
Transactionality can be defined as the ability to support global transactions that encompass all necessary operations against each required resource. Transactionality implies the support of all four ACID (atomicity, consistency, isolation, durability) properties, where atomicity guarantees all-or-nothing outcomes for the unit of work (transaction).
While Salesforce is transactional within itself, it’s not able to participate in distributed transactions or transactions initiated outside of Salesforce. Therefore, it’s assumed that for solutions requiring complex, multi-system transactions, transactionality (and associated roll-back/compensation mechanisms) be implemented at the middleware layer.
See http://en.wikipedia.org/wiki/Distributed_transaction.
Routing
Routing can be defined as specifying the complex flow of messages from component to component. In modern services-based solutions, such message flows can be based on a number of criteria, including header, content type, rule, and priority.
With Salesforce integrations, it’s assumed that any such requirements are met by either the middleware layer or the endpoint. Message routing can be coded in Apex, but we don’t recommend it due to maintenance and performance considerations.
Extract, transform, and load
Extract, transform, and load (ETL) refers to a process that involves:
• Extracting data from the source systems. This typically involves data from a number of source systems, and both relational and non-relational structures.
• Transforming the data to fit operational needs, which can include data quality levels. The transform stage usually applies a series of rules or functions to the extracted data from the source to derive the data for loading into the end target(s).
• Loading the data into the target system. The target system can vary widely from database, operational data store, data mart, data warehouse, or other operational systems.
While not strictly necessary, most mature ETL tools provide a change data capture capability. This capability is where the tool identifies records in the source system that have changed since the last extract, thereby reducing the amount of record processing.
See http://en.wikipedia.org/wiki/Extract,_transform,_load and http://en.wikipedia.org/wiki/Change_data_capture.
DESIGN PATTERN CATALOG

CHAPTER 2 Remote Process Invocation—Request and Reply

Context
You use Salesforce to track leads, manage your pipeline, create opportunities, and capture order details that convert leads to customers.
However, the Salesforce system doesn’t contain or process orders. After the order details are captured in Salesforce, the order is created
in the remote system, which manages the order to conclusion.
When you implement this pattern, Salesforce calls the remote system to create the order and then waits for successful completion. If
successful, the remote system synchronously replies with the order status and order number. As part of the same transaction, Salesforce
updates the order number and status internally. The order number is used as a foreign key for subsequent updates to the remote system.
Problem
When an event occurs in Salesforce, how do you initiate a process in a remote system, pass the required information to that process,
receive a response from the remote system, and then use that response data to make updates within Salesforce?
Forces
Consider the following forces when applying solutions based on this pattern.
• Does the call to the remote system require Salesforce to wait for a response before continuing processing? Is the call to the remote
system a synchronous request-reply or an asynchronous request?
• If the call to the remote system is synchronous, does Salesforce have to process the response as part of the same transaction as the
initial call?
• Is the message size small or large?
• Is the integration based on the occurrence of a specific event, such as a button click in the Salesforce user interface, or DML-based
events?
Solution
The following table contains solutions to this integration problem.
Solution: A custom Visualforce page or button initiates an Apex HTTP callout in a synchronous manner.
Fit: Best
Comments: Salesforce enables you to invoke HTTP services using standard GET, POST, PUT, and DELETE methods. You can use several HTTP classes to integrate with RESTful services (see the callout sketch after this table). It’s also possible to integrate to SOAP-based services by manually constructing the SOAP message. The latter is not recommended because it’s possible for Salesforce to consume WSDLs to generate proxy classes.
A user-initiated action on a Visualforce page calls an Apex controller action that then executes this proxy Apex class to perform the remote call. Visualforce pages require customization of the Salesforce application.

Solution: A trigger that’s invoked from Salesforce data changes performs an Apex SOAP or HTTP callout in a synchronous manner.
Fit: Suboptimal
Comments: You can use Apex triggers to perform automation based on record data changes.
An Apex proxy class can be executed as the result of a DML operation by using an Apex trigger. However, all calls made from within the trigger context must execute asynchronously from the initiating event. Therefore, this solution isn’t recommended for this integration problem. This solution is better suited for the Remote Process Invocation—Fire and Forget pattern.

Solution: A batch Apex job performs an Apex SOAP or HTTP callout in a synchronous manner.
Fit: Suboptimal
Comments: You can make calls to a remote system from a batch job. This solution allows batch remote process execution and processing of the response from the remote system in Salesforce. However, a given batch has limits to the number of calls. For more information, see Governor Limits.
A given batch run can execute multiple transaction contexts (usually in intervals of 200 records). The governor limits are reset per transaction context.
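The following is a minimal sketch of the first (Best) solution row: a synchronous Apex HTTP callout to a hypothetical REST order service. The class name, endpoint, and response fields are illustrative assumptions, not part of the pattern itself; in practice the endpoint would typically be referenced through a Named Credential and registered as a Remote Site Setting.

    // Synchronous HTTP callout sketch. All names and the endpoint URL are placeholders.
    public with sharing class OrderCalloutService {

        public class OrderResult {
            public String orderNumber;
            public String status;
        }

        public class RemoteServiceException extends Exception {}

        // Creates the order in the remote system and returns its reply.
        public static OrderResult createOrder(Id opportunityId) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('https://remote.example.com/api/orders'); // assumed endpoint
            req.setMethod('POST');
            req.setHeader('Content-Type', 'application/json');
            req.setTimeout(120000); // maximum callout timeout, in milliseconds
            req.setBody(JSON.serialize(new Map<String, Object>{ 'opportunityId' => opportunityId }));

            HttpResponse res = new Http().send(req);
            if (res.getStatusCode() != 200) {
                throw new RemoteServiceException('Remote order service returned ' + res.getStatus());
            }
            return (OrderResult) JSON.deserialize(res.getBody(), OrderResult.class);
        }
    }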
Sketch
The following diagram illustrates a synchronous remote process invocation using Apex calls.
In this scenario:
1. You initiate an action on the Visualforce page (for example, clicking a button).
2. The browser performs an HTTP POST that in turn performs an action on the corresponding Apex controller.
3. The controller calls a previously generated Apex web service proxy class.
4. The proxy class performs the actual call to the remote web service.
5. The response from the remote system is returned to the Apex controller. The controller processes the response, updates data in
Salesforce as required, and re-renders the page.
In cases where the subsequent state must be tracked, the remote system returns a unique identifier that’s stored on the Salesforce record.
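The controller sketch below ties these steps together. It reuses the hypothetical OrderCalloutService from the earlier sketch in place of a WSDL-generated proxy class (step 3); the custom fields and class names are assumptions for illustration only.

    // Visualforce controller extension sketch for the flow above.
    public with sharing class OpportunityOrderController {

        private final Opportunity opp;

        public OpportunityOrderController(ApexPages.StandardController stdController) {
            this.opp = (Opportunity) stdController.getRecord();
        }

        // Bound to a button on the Visualforce page (step 1); runs the callout synchronously
        // (steps 2-4), then processes the reply and updates the record (step 5).
        public PageReference createOrder() {
            try {
                OrderCalloutService.OrderResult result = OrderCalloutService.createOrder(opp.Id);
                opp.Order_Number__c = result.orderNumber; // assumed field; remote key stored for state tracking
                opp.Order_Status__c = result.status;      // assumed field
                update opp;
            } catch (Exception e) {
                ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.ERROR, e.getMessage()));
            }
            return null; // re-render the current page
        }
    }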
Results
The application of the solutions related to this pattern allows for event-initiated remote process invocations in which Salesforce handles
the processing.
Calling Mechanisms
The calling mechanism depends on the solution chosen to implement this pattern.
Apex triggers: Used primarily for invocation of remote processes using Apex callouts from DML-initiated events. For more information about this calling mechanism, see the Remote Process Invocation—Fire and Forget pattern.

Apex batch classes: Used for invocation of remote processes in batch. For more information about this calling mechanism, see the Remote Process Invocation—Fire and Forget pattern.
Sidebars
Timeliness
Timeliness is of significant importance in this pattern. Usually:
• The request is typically invoked from the user interface, so the process must not keep the user waiting.
• Salesforce has a configurable timeout of up to 120 seconds for calls from Apex.
• Completion of the remote process is executed in a timely manner to conclude within the Salesforce timeout limit and within user
expectations.
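As a small illustration of the timeout noted above, the per-request value is set in milliseconds on the HttpRequest. This is a hedged snippet; the endpoint is a placeholder.

    // Anonymous Apex sketch: raising the callout timeout from the 10-second default
    // toward the 120-second maximum. Keep it well below what a waiting user will tolerate.
    HttpRequest req = new HttpRequest();
    req.setEndpoint('https://remote.example.com/api/orders'); // placeholder endpoint
    req.setMethod('GET');
    req.setTimeout(120000); // milliseconds
    HttpResponse res = new Http().send(req);
    System.debug(res.getStatusCode());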
Data Volumes
This pattern is used primarily for small volume, real-time activities, due to the small timeout values and maximum size of the request or
response for the Apex call solution. Do not use this pattern in batch processing activities in which the data payload is contained in the
message.
Endpoint Capability and Standards Support
The capability and standards support for the endpoint depends on the solution that you choose.
Apex HTTP callouts: The endpoint must be able to receive HTTP calls. Salesforce must be able to access the endpoint over the public Internet.
You can use Apex HTTP callouts to call RESTful services using the standard GET, POST, PUT, and DELETE methods.
State Management
When integrating systems, keys are important for ongoing state tracking. There are two options.
• Salesforce stores the remote system’s primary or unique surrogate key for the remote record.
• The remote system stores the Salesforce unique record ID or some other unique surrogate key.
There are specific considerations for handling integration keys, depending on which system contains the master record, as shown in the
following table.
Remote system: The call to the remote process returns the unique key from the application, and Salesforce stores that key value in a unique record field.

Governor Limits
The following governor limits are relevant to the Apex call solutions in this pattern:
• Total stack depth for any Apex invocation that recursively fires triggers due to insert, update, or delete statements: 16
• Total number of callouts (HTTP requests or web services calls) in a transaction: 100
• Maximum cumulative timeout for all callouts (HTTP requests or web services calls) in a transaction: 120 seconds
• Maximum number of methods with the future annotation allowed per Apex invocation: 50 (0 in batch and future contexts; 1 in queueable context)
• Maximum number of push notification method calls allowed per Apex transaction: 10
• Maximum number of push notifications that can be sent in each push notification method call: 2,000
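The callout-related limits above can be checked at run time with the Limits class. The following is a hedged sketch (class name and endpoint are placeholders) of guarding a per-record callout loop against the 100-callout ceiling.

    // Sketch: stop issuing callouts before hitting the per-transaction governor limit.
    public with sharing class CalloutLimitAwareClient {
        public static void callForEach(List<Id> recordIds) {
            for (Id recordId : recordIds) {
                if (Limits.getCallouts() >= Limits.getLimitCallouts()) {
                    // 100 callouts per transaction; defer the remainder (for example, to another job)
                    break;
                }
                HttpRequest req = new HttpRequest();
                req.setEndpoint('https://remote.example.com/api/records/' + recordId); // placeholder
                req.setMethod('GET');
                HttpResponse res = new Http().send(req);
                System.debug(recordId + ': ' + res.getStatusCode());
            }
        }
    }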
Middleware Capabilities
The following middleware properties are desirable for this pattern:
• Protocol conversion
• Mediation routing
• Routing
Example
A utility company uses Salesforce and has a separate system that contains customer billing information. They want to display the billing
history for a customer account without storing that data in Salesforce. They have an existing web service that returns a list of bills and
the details for a given account, but can’t display this data in a browser.
This requirement can be accomplished with the following approach.
1. Salesforce consumes the billing history service WSDL and generates an Apex proxy class.
2. A Visualforce page and custom controller execute the Apex proxy class, passing the account number as the unique identifier.
3. The custom controller parses the return values from the Apex callout, and the Visualforce page renders the bill to the user.
This example demonstrates the following:
• The state of the customer is tracked with an account number stored on the Salesforce account object.
• The reply message is processed by the caller.
CHAPTER 3 Remote Process Invocation—Fire and Forget
Context
You use Salesforce to track leads, manage your pipeline, create opportunities, and capture order details that convert leads to customers.
However, Salesforce isn’t the system that holds or processes orders. After the order details are captured in Salesforce, an order must be
created in the remote system, which manages the order through to its conclusion.
When you implement this pattern, Salesforce calls the remote system to create the order, but doesn’t wait for the call’s successful
completion. The remote system can optionally update Salesforce with the new order number and status in a separate transaction.
Problem
When an event occurs in Salesforce, how do you initiate a process in a remote system and pass the required information to that process
without waiting for a response from the remote system?
Forces
Consider the following forces when applying solutions based on this pattern.
• Does the call to the remote system require Salesforce to wait for a response before continuing processing? Is the call to the remote
system synchronous request-reply or asynchronous?
• If the call to the remote system is synchronous, does the response need to be processed by Salesforce as part of the same transaction
as the call?
• Is the message size small?
• Is the integration based on the occurrence of a specific event, such as a button click in the Salesforce user interface, or DML-based
events?
• Is guaranteed message delivery from Salesforce to the remote system a requirement?
• Is the remote system able to participate in a contract-first integration in which Salesforce specifies the contract? In some solution
variants (for example, outbound messaging), Salesforce specifies a contract that the remote system endpoint implements.
• Are declarative configuration methods preferred over custom Apex development? In this case, solutions such as outbound messaging
are preferred over Apex callouts.
Solution
The following table contains solutions to this integration problem.
Solution: Outbound messaging and callbacks
Fit: Best
Comments: Callbacks provide a way to mitigate the impacts of out-of-sequence messaging. In addition, they handle these scenarios.
• Idempotency—If an acknowledgment isn’t received in a timely fashion, outbound messaging performs retries. Multiple messages can be sent to the target system. Using a callback ensures that the data retrieved is at a specific point in time rather than when the message was sent.
• Retrieving more data—A single outbound message can send data only for a single object. A callback can be used to retrieve data from other related records, such as related lists associated with the parent object.
The outbound message provides a unique SessionId that you can use as an authentication token to authenticate and authorize a callback with either the SOAP API or the REST API. The system performing the callback isn’t required to separately authenticate to Salesforce. The standard methods of either API can then be used to perform the desired business functions.
A typical use of this variant is the scenario in which Salesforce sends an outbound message to a remote system to create a record. The callback updates the original Salesforce record with the unique key of the record created in the remote system.

Solution: A trigger that’s invoked from Salesforce data changes performs an Apex SOAP or HTTP asynchronous callout.
Fit: Suboptimal
Comments: You can use Apex triggers to perform automation based on record data changes.
An Apex proxy class can be executed as the result of a DML operation by using an Apex trigger. However, all calls made from within the trigger context must be executed asynchronously (a sketch using a future method follows this table).

Solution: A batch Apex job performs an Apex SOAP or HTTP asynchronous callout.
Fit: Suboptimal
Comments: Calls to a remote system can be performed from a batch job. This solution allows for batch remote process execution and for processing of the response from the remote system in Salesforce. However, there are limits to the number of calls for a given batch context. For more information, see the Salesforce Limits Quick Reference Guide.
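A minimal sketch of the trigger-driven asynchronous callout row above follows. Object, field, endpoint, and class names are placeholders; the callout runs in a separate @future transaction and its response is intentionally not processed, in keeping with the fire-and-forget approach.

    // Future-method callout invoked from a trigger: fire and forget.
    public with sharing class OrderSyncService {

        @future(callout=true)
        public static void sendOrders(Set<Id> opportunityIds) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('https://remote.example.com/api/orders'); // assumed endpoint
            req.setMethod('POST');
            req.setHeader('Content-Type', 'application/json');
            req.setBody(JSON.serialize(new List<Id>(opportunityIds)));
            new Http().send(req); // response ignored; the remote system acknowledges separately
        }
    }

    // In a separate trigger file: enqueue the asynchronous callout when the stage changes to Closed Won.
    trigger OpportunityOrderTrigger on Opportunity (after update) {
        Set<Id> toSend = new Set<Id>();
        for (Opportunity opp : Trigger.new) {
            Opportunity oldOpp = Trigger.oldMap.get(opp.Id);
            if (opp.StageName == 'Closed Won' && oldOpp.StageName != 'Closed Won') {
                toSend.add(opp.Id);
            }
        }
        if (!toSend.isEmpty()) {
            OrderSyncService.sendOrders(toSend);
        }
    }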
Sketch
The following diagram illustrates a call from Salesforce to a remote system in which create or update operations on a record trigger the
call.
In this scenario:
1. A DML update or insert occurs on a given set of records in Salesforce.
2. A Salesforce workflow rule triggers, based on a given set of conditions.
3. This workflow rule invokes a pre-configured outbound message that sends a SOAP-based message to a remote listener.
4. The remote listener receives the SOAP message, places the message on a local queue, and returns a positive acknowledgment to
Salesforce as part of the same transaction context.
5. The queuing application forwards the message to the remote application for processing.
6. Salesforce receives the acknowledgment and completes the request, but doesn’t wait for the remote application to process the
message.
7. Salesforce waits for an acknowledgment from the remote system for up to 10 seconds. After 10 seconds, Salesforce retries sending
the outbound message request for up to 24 hours.
In the case where the remote system must perform operations against Salesforce, you can implement an optional callback operation.
The outbound message sends a SessionId that you can use in the callback to authenticate and authorize an API or web service call into
Salesforce.
Results
The application of the solutions related to this pattern allows for:
• User interface–initiated remote process invocations in which the result of the transaction can be displayed to the end user
• DML event-initiated remote process invocations in which the result of the transaction can be processed by the calling process
Calling Mechanisms
The calling mechanism depends on the solution chosen to implement this pattern.
Workflow rules: Used only for the outbound messaging solution. Create and update DML events trigger the Salesforce workflow rules, which can then send a message to a remote system.

Apex triggers: Used primarily for invocation of remote processes, using Apex callouts from DML-initiated events.

Apex batch classes: Used for invocation of remote processes in batch mode.
Error Handling and Recovery

Outbound messaging:
• Error handling—Because this pattern is asynchronous, error handling is managed by the remote system. For outbound messaging, Salesforce initiates a retry operation if no positive acknowledgment is received within the timeout period, for up to 24 hours.
Error handling must be performed in the remote service because the message is effectively handed off to the remote system in a “fire-and-forget” manner.
• Recovery—Because this pattern is asynchronous, the system must initiate retries based on the service’s quality of service requirements. For outbound messaging, Salesforce initiates retries if no positive acknowledgment is received from the outbound listener within the timeout period, up to 24 hours. The retry interval increases exponentially over time, starting with 15-second intervals and ending with 60-minute intervals. Administrators must monitor this queue for any messages exceeding the 24-hour delivery period and retry manually, if necessary.
For custom Apex callouts, a custom retry mechanism must be built if the quality-of-service requirements warrant it.
Idempotent Design Considerations
Outbound messaging sends a unique ID per message, and this ID remains the same for any retries. The remote system can track duplicate messages based on this unique ID. The unique record ID for each record being updated is also sent, and can be used to prevent duplicate record creation.
Security Considerations
Any call to a remote system must maintain the confidentiality, integrity, and availability of the request. Different security considerations
apply, depending on the solution you choose.
Outbound messaging: For outbound messaging, one-way SSL is enabled by default. However, two-way SSL can be used together with the Salesforce outbound messaging certificate.
The following are some additional security considerations:
• Whitelist Salesforce server IP ranges for remote integration servers.
• Protect the remote system by implementing the appropriate firewall mechanisms.
Sidebars
Timeliness
Timeliness is less of a factor with the fire-and-forget pattern. Control is handed back to the client either immediately or after positive
acknowledgment of a successful hand-off in the remote system. With Salesforce outbound messaging, the acknowledgment must occur
within 24 hours; otherwise, the message expires.
Data Volumes
Data volume considerations depend on which solution you choose. For the limits of each solution, see the Salesforce Limits Quick Reference
Guide.
Endpoint Capability and Standards Support
The capability and standards support for the endpoint depends on the solution that you choose.
Apex HTTP callouts: The endpoint must be able to receive HTTP calls and be accessible over the public internet by Salesforce.
Apex HTTP callouts can be used to call RESTful services using the standard GET, POST, PUT, and DELETE methods.

Outbound message:
• The endpoint must be able to implement a listener that can receive SOAP messages in a predefined format sent from Salesforce.
• The remote listener must participate in a contract-first implementation, where the contract is supplied by Salesforce.
• Each outbound message has its own predefined WSDL.
State Management
When integrating systems, unique record identifiers are important for ongoing state tracking. For example, if a record is created in the
remote system, you have two options.
• Salesforce stores the remote system’s primary or unique surrogate key for the remote record.
• The remote system stores the Salesforce unique record ID or some other unique surrogate key.
The following table lists considerations for state management in this pattern.
Remote system: Salesforce must store a reference to the unique identifier in the remote system. Because the process is asynchronous, storing this unique identifier can’t be part of the original transaction.
Salesforce must provide a unique ID in the call to the remote process. The remote system must then call back to Salesforce to update the record in Salesforce with the remote system’s unique identifier, using the Salesforce unique ID.
The callback implies specific state handling in the remote application: it must retain the Salesforce unique identifier for that transaction, either until processing is complete or by storing it on the remote system’s record, so that it can be used in the callback.
Solution Considerations
Apex callouts: In certain cases, solutions prescribed by this pattern require implementing several complex integration scenarios best served using middleware or having Salesforce call a composite service. These scenarios include:
• Orchestration of business processes and rules involving complex flow logic
• Aggregation of calls and their results across calls to multiple systems
• Transformation of both inbound and outbound messages
• Maintaining transactional integrity across calls to multiple systems

Outbound messaging: Given the static, declarative nature of the outbound message, no complex integration scenarios, such as aggregation, orchestration, or transformation, can be performed in Salesforce. The remote system or middleware must handle these types of operations.
Governor Limits
Due to the multitenant nature of the Salesforce platform, there are limits to outbound callouts. Limits depend on the type of outbound
call and the timing of the call. There are no governor limits for outbound messaging. See the Salesforce Limits Quick Reference Guide.
Reliable Messaging
Reliable messaging attempts to resolve the issue of guaranteeing the delivery of a message to a remote system in which the individual
components are unreliable. The method of ensuring receipt of a message by the remote system depends on the solution you choose.
Outbound messaging: Outbound messaging provides a form of reliable messaging. If no positive acknowledgment is received from the remote system, the process retries for up to 24 hours. This process guarantees delivery only to the point of the remote listener.
In most implementations, the remote listener calls another remote service. Ideally, the invocation of this remote service through a reliable messaging system ensures full end-to-end guaranteed delivery. The positive acknowledgment to the Salesforce outbound message occurs after the remote listener has successfully placed its own message on its local queue.
Middleware Capabilities
The following middleware properties are desirable for this pattern:
• Protocol conversion
• Mediation routing
• Routing
Example
A telecommunications company wants to use Salesforce as a front end for creating accounts using the Lead to Opportunity process.
The creation of an order is initiated in Salesforce when the opportunity is closed and won, but the back-end ERP system is the data
master. The order must be saved to the Salesforce opportunity record, and the opportunity status changed to indicate that the order
was created.
The following constraints apply.
• The ERP system must be able to participate in a contract-first integration in which its service implements a Salesforce WSDL interface.
• No custom development in Salesforce.
• You don’t require immediate notification of the order number after the opportunity converts to an order.
This example is best implemented using Salesforce outbound messaging, but does require the implementation of a proxy service by
the remote system.
On the Salesforce side:
• Create a workflow rule to initiate the outbound message (for example, when the opportunity status changes to Closed Won).
• Create an outbound message that sends only the opportunity RecordId and a SessionId for a subsequent callback.
On the remote system side:
• Create a proxy service that can implement the Salesforce outbound message WSDL interface.
• The service receives one or more notifications indicating that the opportunity is to be converted to an order.
• The service transforms and places the message on a local message queue. On notification of receipt, the service replies with a positive
acknowledgment back to the Salesforce outbound message.
• The local message queue forwards the message to the back-end ERP system so that the order can be created.
• After the order is created, a separate thread calls back to Salesforce using the SessionId as the authentication token. The callback
updates the opportunity with the order number and status. You can do this callback using documented pattern solutions, such as
the Salesforce SOAP API, REST API, or an Apex web service.
This example demonstrates the following.
• Implementation of a remote process invoked asynchronously
• End-to-end guaranteed delivery
• Subsequent callback to Salesforce to update the state of the record
CHAPTER 4 Batch Data Synchronization
Context
You’re moving your CRM implementation to Salesforce and want to:
• Extract and transform accounts, contacts, and opportunities from the current CRM system and load the data into Salesforce (initial
data import).
• Extract, transform, and load customer billing data into Salesforce from a remote system on a weekly basis (ongoing).
• Extract customer activity information from Salesforce and import it into an on-premises data warehouse on a weekly basis (ongoing).
Problem
How do you import data into Salesforce and export data out of Salesforce, taking into consideration that these imports and exports can
interfere with end-user operations during business hours, and involve large amounts of data?
Forces
There are various forces to consider when applying solutions based on this pattern:
• Should the data be stored in Salesforce? If not, there are other integration options an architect can and should consider (mashups,
for example).
• If the data should be stored in Salesforce, should the data be refreshed in response to an event in the remote system?
• Should the data be refreshed on a scheduled basis?
• Does the data support primary business processes?
• Are there analytics (reporting) requirements that are impacted by the availability of this data in Salesforce?
Solution
The following table contains various solutions to this integration problem.
Solution: Remote call-in
Fit: Suboptimal
Data master: Remote system
Comments: It’s possible for a remote system to call into Salesforce by using one of the APIs and perform updates to data as they occur. However, this causes considerable ongoing traffic between the two systems.
It also requires that greater emphasis be placed on error handling and locking because this pattern has the potential to cause continual updates, which can impact performance for end users.

Solution: Remote process invocation
Fit: Suboptimal
Data master: Salesforce
Comments: It’s possible for Salesforce to call into a remote system and perform updates to data as they occur. However, this causes considerable ongoing traffic between the two systems.
It also requires that greater emphasis be placed on error handling and locking because this pattern has the potential to cause continual updates, which can impact performance for end users.
Sketch
The following diagram illustrates the sequence of events in this pattern, where the remote system is the data master.
The following diagram illustrates the sequence of events in this pattern, where Salesforce is the data master.
Results
You can integrate data that’s sourced externally with Salesforce under the following scenarios:
• External system is the data master—Salesforce is a consumer of data provided by a single source system or multiple systems. In this
scenario, it’s common to have a data warehouse or data mart that aggregates the data before the data is imported into Salesforce.
• Salesforce is the data master—Salesforce is the system of record for certain entities.
In a typical Salesforce integration scenario, the implementation team does one of the following:
• Implement change data capture on the source data set.
• Implement a set of supporting database structures, known as control tables, in an intermediate, on-premises database.
The ETL tool is then used to create programs that will:
1. Read a control table to determine the last run time of the job and extract any other control values needed.
2. Use the above control values as filters and query the source data set.
3. Apply predefined processing rules, including validation, enrichment, and so on.
4. Use available connectors/transformation capabilities of the ETL tool to create the destination data set.
5. Write the data set to Salesforce objects.
6. If processing is successful, update the control values in the control table.
7. If processing fails, update the control tables with values that enable a restart and exit.
Note: We recommend that you create the control tables and associated data structures in an environment that the ETL tool has
access to even if access to Salesforce isn’t available. This provides adequate levels of resilience. Salesforce should be treated as a
spoke in this process and the ETL infrastructure is the hub.
For an ETL tool to gain maximum benefit from data synchronization capabilities, consider the following:
• Chain and sequence the ETL jobs to provide a cohesive process.
• Use primary keys from both systems to match incoming data.
• Use specific API methods to extract only updated data.
• If importing child records in a master-detail or lookup relationship, group the imported data using its parent key at the source to avoid locking. For example, if you’re importing contact data, be sure to group the contact data by the parent account key so the maximum number of contacts for a single account can be loaded in one API call. Failure to group the imported data usually results in the first contact record being loaded and subsequent contact records for that account failing in the context of the API call.
• Any post-import processing, such as triggers, should only process data selectively.
• If your scenario involves large data volumes, follow the best practices in the white paper Best Practices for Deployments with Large
Data Volumes.
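One common way to make post-import trigger processing selective is sketched below, under the assumption of a hypothetical hierarchy custom setting (Integration_Settings__c with a Bypass_Automation__c checkbox) that is enabled for the ETL integration user during batch loads.

    // Trigger sketch: skip non-essential automation while the ETL user is loading batches.
    trigger ContactEnrichmentTrigger on Contact (before insert, before update) {
        Integration_Settings__c settings = Integration_Settings__c.getInstance(); // assumed custom setting
        if (settings != null && settings.Bypass_Automation__c == true) {
            return; // batch load in progress for this user; skip per-record enrichment
        }
        for (Contact c : Trigger.new) {
            // Selective, per-record enrichment logic goes here.
        }
    }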
Error Handling and Recovery
An error handling and recovery strategy must be considered as part of the overall solution. The best method depends on the solution
you choose.
Write to Salesforce:
• Error handling—Errors that occur during a write operation can result from a combination of factors in the application. The API calls return a result set that consists of the information listed below. This information should be used to retry the write operation (if necessary).
– Record identifying information
– Success/failure notification
– A collection of errors for each record

External master system: Errors should be handled in accordance with the best practices of the master system.
Security Considerations
Any call to a remote system must maintain the confidentiality, integrity, and availability of the request. Different security considerations
apply, depending on the solution you choose.
• A Lightning Platform license is required to allow authenticated API access to the Salesforce API.
Sidebars
Timeliness
Timeliness isn’t of significant importance in this pattern. However, care must be taken to design the interfaces so that all of the batch
processes complete in a designated batch window.
As with all batch-oriented operations, we strongly recommend that you take care to insulate the source and target systems during batch
processing windows. Loading batches during business hours might result in some contention, resulting in either a user's update failing,
or more significantly, a batch load (or partial batch load) failing.
For organizations that have global operations, it might not be feasible to run all batch processes at the same time because the system
might continually be in use. Data segmentation techniques using record types and other filtering criteria can be used to avoid data
contention in these cases.
State Management
You can implement state management by using surrogate keys between the two systems. If you need any type of transaction management
across Salesforce entities, we recommend that you use the Remote Call-In pattern using Apex.
Standard optimistic record locking occurs on the platform, and any updates made using the API require the user who is editing the record to refresh the record and initiate their transaction. In the context of the Salesforce API, optimistic locking refers to a process where:
• Salesforce doesn’t maintain the state of a record being edited by a specific user.
• Upon read, it records the time when the data was extracted.
• If the user updates the record and saves it, Salesforce checks to see if another user has updated the record in the interim.
• If the record has been updated, the system notifies the user that an update was made and the user should retrieve the latest version
of the record before proceeding with their updates.
Middleware Capabilities
The most effective external technologies used to implement this pattern are traditional ETL tools. It’s important that the middleware
tools chosen support the Salesforce Bulk API.
It’s helpful, but not critical, that the middleware tools support the getUpdated() function. This function provides the closest
implementation to standard change data capture capability on the Salesforce platform.
The following middleware properties are desirable for this pattern:
• Protocol conversion
• Routing
Example
A utility company uses a mainframe-based batch process that assigns prospects to individual sales reps and teams. This information
needs to be imported into Salesforce on a nightly basis.
The customer has decided to implement change data capture on the source tables using a commercially available ETL tool.
The solution works as follows:
• A cron-like scheduler executes a batch job that assigns prospects to users and teams.
• After the batch job runs and updates the data, the ETL tool recognizes these changes using change data capture. The ETL tool collates
the changes from the data store.
• The ETL connector uses the Salesforce SOAP API to load the changes into Salesforce.
CHAPTER 5 Remote Call-In
Context
You use Salesforce to track leads, manage your pipeline, create opportunities, and capture order details that convert leads to customers.
However, Salesforce isn’t the system that contains or processes orders. Orders are managed by an external (remote) system that needs
to update the order status in Salesforce as the order passes through its processing stages.
Problem
How does a remote system connect and authenticate with Salesforce and update existing records?
Forces
There are various forces to consider when applying solutions based on this pattern:
• Does the call to Salesforce require the remote process to wait for a response before continuing processing? Remote calls to Salesforce
are always synchronous request-reply, although the remote process can discard the response if it isn’t needed, simulating an
asynchronous call.
• What is the format of the message (for example, SOAP or REST, or both over HTTP)?
• Is the message size relatively small or large?
• In the case of a SOAP-capable remote system, is the remote system able to participate in a contract-first approach, where Salesforce
dictates the contract? This is required where our SOAP API is used, for which a predefined WSDL is supplied.
• Is transaction processing required?
• To what extent can you tolerate customization in the Salesforce application?
Solution
The following table contains various solutions to this integration problem.
Solution: Apex Web services
Fit: Suboptimal
Comments: Apex class methods can be exposed as Web service methods to external applications. This is an alternative to SOAP API, and is typically only used where the following additional requirements must be met:
• Full transactional support is required (for example, create an account, contact, and opportunity all in one transaction).
• Custom logic must be applied on the Salesforce side before committing.
The benefit of using an Apex Web service must be weighed against the additional code that needs to be maintained in Salesforce. (A sketch of such a service follows this table.)

Solution: Apex REST service
Fit: Suboptimal
Comments: An Apex class can be exposed as a REST resource mapped to specific URIs with an HTTP verb defined against it (for example, POST or GET).
Unlike SOAP, there is no need for the client to consume a service definition/contract (WSDL) and generate client stubs. The remote system requires only the ability to form an HTTP request and process the returned results (XML or JSON).

Solution: Bulk API
Fit: Optimal for bulk operations
Comments: Bulk API is based on REST principles, and is optimized for loading or deleting large sets of data. It has the same accessibility and security behavior as REST API.
Bulk API allows the client application to query, insert, update, upsert, or delete a large number of records asynchronously by submitting a number of batches, which are processed in the background by Salesforce. In contrast, SOAP API is optimized for real-time client applications that update small numbers of records at a time. Although SOAP API can also be used for processing large numbers of records, when the data sets contain hundreds of thousands of records, it becomes less practical; in those cases, Bulk API is the better choice.
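A minimal sketch of the Apex Web services row above follows: a custom SOAP service that creates an account, a contact, and an opportunity in one all-or-nothing transaction. The class name, parameters, and picklist values are illustrative assumptions; the WSDL generated from this class would be supplied to the remote caller.

    // Custom Apex SOAP web service: multi-object create in a single transaction.
    global with sharing class AccountOnboardingService {

        webservice static Id createAccountWithRelatedRecords(
                String accountName, String contactLastName, String opportunityName) {
            Account acct = new Account(Name = accountName);
            insert acct;

            insert new Contact(LastName = contactLastName, AccountId = acct.Id);
            insert new Opportunity(
                Name = opportunityName,
                AccountId = acct.Id,
                StageName = 'Prospecting',        // assumed stage value
                CloseDate = Date.today().addMonths(1)
            );
            // Any uncaught exception rolls back all three inserts, keeping the operation atomic.
            return acct.Id;
        }
    }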
Sketch
The following diagram illustrates the sequence of events when you implement this pattern using SOAP API. The sequence of events is
the same when using REST API.
Results
The application of the solutions related to this pattern allows for:
• Remote system to call the Salesforce APIs to query the database and execute single-object operations (create, update, delete, and
so on).
• Remote system to call custom-built Salesforce APIs (services) that can support multi-object transactional operations and custom
pre/post processing logic.
Calling Mechanisms
The calling mechanism depends on the solution chosen to implement this pattern.
REST API: The remote system has to authenticate before accessing any Apex REST service. The remote system can use OAuth 2.0 or username/password authentication. In either case, the client must set the authorization HTTP header with the appropriate value (an OAuth access token, or a session ID that can be acquired via a login call to SOAP API).
The remote system then generates REST invocations (HTTP requests) with the appropriate verbs and processes the results returned (JSON and XML data formats are supported).

Apex Web service: The remote system consumes the custom Apex Web service WSDL to generate client stubs that are in turn used to invoke the custom Apex Web service.

Apex REST service: As with REST API, the resource URI and applicable verbs are defined using the @RestResource, @HttpGet, and @HttpPost annotations (see the sketch after this table).

Bulk API: Bulk API is a REST-based API, so the same calling mechanisms as REST API apply.
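The following is a minimal sketch of the Apex REST service mechanism referenced above, in the spirit of this pattern’s context (a remote order system updating order status). The URI mapping, custom object, and field names are placeholders, not part of the pattern itself.

    // Apex REST resource: the remote system POSTs an order number and a new status.
    @RestResource(urlMapping='/orderstatus/*')
    global with sharing class OrderStatusRestService {

        @HttpPost
        global static String updateStatus(String orderNumber, String status) {
            // Order__c, Order_Number__c, and Status__c are assumed custom object and fields.
            List<Order__c> orders = [
                SELECT Id, Status__c
                FROM Order__c
                WHERE Order_Number__c = :orderNumber
                LIMIT 1
            ];
            if (orders.isEmpty()) {
                RestContext.response.statusCode = 404;
                return 'Order not found';
            }
            orders[0].Status__c = status;
            update orders;
            return 'OK';
        }
    }

With this sketch, an authenticated caller would POST JSON such as {"orderNumber": "12345", "status": "Shipped"} to /services/apexrest/orderstatus.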
Security Considerations

REST API: We recommend that the remote system establish an OAuth trust for authorization. REST calls can then be made on specific resources using HTTP verbs. It’s also possible to make REST calls with a valid session ID that might have been obtained by other means (for example, retrieved by calling SOAP API or provided via an outbound message).
We recommend that clients that call the REST API cache and reuse the session ID to maximize performance, rather than obtaining a new session ID for each call.

Apex Web service: The same security considerations as SOAP API apply.

Apex REST service: The same security considerations as REST API apply.
Sidebars
Timeliness
SOAP API and Apex Web service API are synchronous. The following timeouts apply:
• Session timeout — The session will time out if there’s no activity based on the Salesforce organization’s session timeout setting.
• Query timeout — Each SOQL query has an individual timeout limit of 120 seconds.
Data Volumes
Data volume considerations depend on which solution and communication type you choose.
Endpoint Capability and Standards Support

REST API: The remote system must be capable of implementing a REST client that invokes Salesforce-defined REST services and processes the XML or JSON results.

Apex Web service: The remote system must be capable of implementing a client that can invoke SOAP messages of a predefined format, as defined by Salesforce.
The remote system must participate in a code-first implementation, where the contract is supplied by Salesforce after the Apex Web service is implemented. Each Apex Web service has its own WSDL.

Apex REST service: The same endpoint considerations as REST API apply.
State Management
When integrating systems, keys are important for on-going state tracking, for example, if a record gets created in the remote system, in
order to support ongoing updates to that record. There are two options:
• Salesforce stores the remote system’s primary or unique surrogate key for the remote record.
• The remote system stores the Salesforce unique record ID or some other unique surrogate key.
There are specific considerations for handling integration keys in this synchronous pattern.
Remote system: In this scenario, Salesforce must store a reference to the remote system’s unique identifier. Because the process is synchronous, the key can be provided as part of the same transaction by using external ID fields, as sketched below.
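Because the key can be carried in the synchronous call itself, the remote system can address the Salesforce record directly through an external ID field. The sketch below assumes a hypothetical Remote_System_Id__c external ID field on Opportunity; the instance URL, token, key value, and field values are placeholders.

    # Sketch: the remote system upserts an Opportunity keyed by its own identifier,
    # stored in a hypothetical external ID field named Remote_System_Id__c.
    import requests

    INSTANCE_URL = "https://yourInstance.salesforce.com"    # placeholder
    ACCESS_TOKEN = "00D...exampleToken"                      # placeholder

    url = (INSTANCE_URL + "/services/data/v45.0/sobjects/"
           "Opportunity/Remote_System_Id__c/PMS-0042")       # the remote system's key
    payload = {
        "Name": "Printer usage - Acme",                      # placeholder field values
        "StageName": "Prospecting",
        "CloseDate": "2019-06-30",
        "Amount": 1200.00,
    }

    response = requests.patch(url, json=payload,
                              headers={"Authorization": "Bearer " + ACCESS_TOKEN})
    # 201 Created when no record carried that key yet, 204 No Content when it was updated.
    response.raise_for_status()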
Solution Considerations
SOAP API or REST API: SOAP API and REST API provide for simple transactions on objects. Complex integration scenarios, such as aggregation, orchestration, and transformation, can’t be performed in Salesforce. These scenarios must be handled by the remote system or middleware, with middleware as the preferred method.
Apex Web service or Apex REST service: Custom Web services can provide cross-object functionality, custom logic, and more complex transaction support. Use this solution with care, and always consider whether middleware is better suited for any transformation, orchestration, and error handling logic.
Governor Limits
Due to the multi-tenant nature of the Salesforce platform, there are limits when using the APIs.
SOAP API, REST API, and custom Apex APIs: The following limits apply.
• API request limits—Salesforce applies a limit on the number of API calls per 24-hour period. The limit is based on the Salesforce edition type and number of licenses. For example, Unlimited Edition provides 5,000 API requests per Salesforce or Lightning Platform license per 24 hours. For more information, see the Salesforce Limits Quick Reference Guide.
• API query cursor limits—A user can have up to 10 query cursors open at a time. If an additional cursor is opened, the oldest of the 10 is released, and a remote application that attempts to use the released cursor receives an error. This limit must be considered when integration user credentials are shared; middleware might need to execute requests across multiple users in a round-robin fashion (a sketch follows this list).
• Call limits—See the Data Volumes sidebar for create, update, and query limits.
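The round-robin approach mentioned in the query cursor bullet can be sketched as follows. The session objects are assumed to be already-authenticated clients (for example, instances of the session-reuse sketch shown earlier); the query endpoint shown is the standard REST query resource.

    # Sketch: spread query-heavy requests across several integration users so that
    # no single user exceeds its open-query-cursor limit.
    from itertools import cycle

    def make_query_runner(sessions):
        """sessions: authenticated client objects, one per integration user."""
        pool = cycle(sessions)                 # round-robin iterator over the users

        def run_query(soql):
            session = next(pool)               # each call uses the next user in the pool
            return session.request("GET", "/services/data/v45.0/query", params={"q": soql})

        return run_query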
Reliable Messaging
Reliable messaging attempts to guarantee the delivery of a message to a remote system when the individual components involved might be unreliable. The Salesforce SOAP API and REST API are synchronous and don’t provide explicit support for reliable messaging protocols such as WS-ReliableMessaging.
We recommend that the remote system implement a reliable messaging system to ensure that error and timeout scenarios are successfully
managed.
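On the remote side, one common way to manage error and timeout scenarios is a retry wrapper with exponential backoff around each call into Salesforce. The sketch below is illustrative only: retry counts, delays, and the choice of which failures to retry are assumptions, and true guaranteed delivery still requires durable queuing in the remote system or middleware.

    # Sketch: retry transient failures (network errors, timeouts, HTTP 5xx) with
    # exponential backoff. Durable delivery still requires queuing outside Salesforce.
    import time
    import requests

    def call_with_retry(send, max_attempts=4, base_delay=2.0):
        """send: a zero-argument callable that performs one HTTP call and returns a response."""
        for attempt in range(1, max_attempts + 1):
            try:
                response = send()
                if response.status_code < 500:          # success, or a non-retryable client error
                    return response
            except requests.exceptions.RequestException:
                pass                                    # network error or timeout: retry below
            if attempt < max_attempts:
                time.sleep(base_delay * (2 ** (attempt - 1)))
        raise RuntimeError("Call to Salesforce failed after %d attempts" % max_attempts)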
Middleware Capabilities
The following table highlights the desirable properties of a middleware system that participates in this pattern:
• Mediation routing
• Routing
Example
A printing supplies and services company uses Salesforce as a front-end to create and manage accounts and opportunities. Opportunities
on existing accounts are updated with printing usage statistics from the on-premises Printer Management System (PMS), which regularly
monitors printers on client sites. Upon creation of an opportunity, an outbound message is sent to the PMS to register the new opportunity.
The PMS stores the Salesforce ID (Salesforce is the opportunity record master).
The following constraints apply:
• The PMS is capable of participating in a contract-first integration, where Salesforce provides the contract and the PMS acts as a client
(consumer) of the Salesforce service (defined via the Enterprise or Partner WSDL).
This example is best implemented using the Salesforce SOAP API or REST API.
In Salesforce:
• Download the Enterprise or Partner WSDL and provide it to the remote system.
In the remote system:
• Create a client stub from the Enterprise or Partner WSDL.
• Log in to the API using the integration user’s credentials (or the opportunity owner who created the record, assuming the session
ID is provided in the initial outbound message).
• Call the update operation on the Salesforce record ID provided in the outbound message and pass in the relevant field updates
(usage statistics).
• This is a callback to Salesforce to update a record, aligned with the request/reply patterns covered earlier (a sketch of the REST variant follows).
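The following sketch illustrates the PMS-side callback using the REST variant (the example above notes that either SOAP API or REST API can be used). The opportunity ID and session ID are assumed to arrive in the outbound message, and the usage-statistics fields are hypothetical custom fields.

    # Sketch: the PMS updates the opportunity identified in the outbound message
    # with printing usage statistics (hypothetical custom fields).
    import requests

    INSTANCE_URL = "https://yourInstance.salesforce.com"    # placeholder
    session_id = "00D...sessionFromOutboundMessage"          # provided in the outbound message
    opportunity_id = "006000000000001AAA"                    # Salesforce record ID from the message

    url = INSTANCE_URL + "/services/data/v45.0/sobjects/Opportunity/" + opportunity_id
    usage_stats = {"Pages_Printed__c": 15230, "Toner_Level__c": 42}   # hypothetical fields

    response = requests.patch(url, json=usage_stats,
                              headers={"Authorization": "Bearer " + session_id})
    response.raise_for_status()    # 204 No Content on a successful update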
CHAPTER 6 UI Update Based on Data Changes
Context
You use Salesforce to manage customer cases. A customer service rep is on the phone with a customer working on a case. The customer
makes a payment, and the customer service rep needs to see a real-time update in Salesforce from the payment processing application,
indicating that the customer has successfully paid the order’s outstanding amount.
Problem
When an event occurs in Salesforce, how can the user be notified in the Salesforce user interface without having to refresh their screen
and potentially losing work?
Forces
There are various forces to consider when applying solutions based on this pattern:
• Does the data being acted on need to be stored in Salesforce?
• Can a custom user interface layer be built for viewing this data?
• Will the user have access to invoke the custom user interface?
Solution
The recommended solution to this integration problem is to use the Salesforce Streaming API. The solution comprises the following components (an illustrative protocol sketch follows the list):
• A PushTopic with a query definition that allows you to:
– Specify what events trigger an update
– Select what data to include in the notification
• A JavaScript-based implementation of the Bayeux protocol (currently CometD) that can be used by the user interface
• A Visualforce page
• A JavaScript library included as a static resource
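The prescribed client is a JavaScript CometD library running inside the Visualforce page. To make the underlying message exchange visible, the sketch below walks through the Bayeux handshake, subscribe, and long-poll connect steps from a generic client (written here in Python rather than JavaScript). The instance URL and access token are placeholders, and the PushTopic name PaymentReceived is a hypothetical topic matching the payment scenario above.

    # Sketch of the Bayeux (CometD) flow used by Streaming API subscribers:
    # handshake, subscribe to /topic/<PushTopicName>, then long-poll for events.
    import requests

    INSTANCE_URL = "https://yourInstance.salesforce.com"    # placeholder
    COMETD_URL = INSTANCE_URL + "/cometd/45.0/"
    HEADERS = {"Authorization": "Bearer 00D...exampleToken",  # placeholder token
               "Content-Type": "application/json"}

    def bayeux(messages):
        """POST a list of Bayeux messages and return the list of response messages."""
        return requests.post(COMETD_URL, json=messages, headers=HEADERS).json()

    # 1. Handshake: the server assigns a clientId for this subscriber.
    handshake = bayeux([{"channel": "/meta/handshake", "version": "1.0",
                         "supportedConnectionTypes": ["long-polling"]}])
    client_id = handshake[0]["clientId"]

    # 2. Subscribe to the PushTopic channel (hypothetical topic name).
    bayeux([{"channel": "/meta/subscribe", "clientId": client_id,
             "subscription": "/topic/PaymentReceived"}])

    # 3. Connect: each long poll returns when notifications are available.
    while True:
        for message in bayeux([{"channel": "/meta/connect", "clientId": client_id,
                                "connectionType": "long-polling"}]):
            if message.get("channel") == "/topic/PaymentReceived":
                print("Record change notification:", message["data"])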
Sketch
The following diagram illustrates how Streaming API can be implemented to stream notifications to the Salesforce user interface. These
notifications are triggered by record changes in Salesforce.
Results
Benefits
The application of the solution related to this pattern has the following benefits:
• Eliminates the need for writing custom polling mechanisms
• Eliminates the need for a user-initiated feedback loop
Unsupported Requirements
The solution has the following limitations:
• Delivery of notifications isn’t guaranteed.
• Order of notifications isn’t guaranteed.
• Notifications aren’t generated from record changes made by Bulk API.
Security Considerations
Standard Salesforce organization-level security is adhered to. It’s recommended you use the HTTPS protocol to connect to Streaming
API. See Security Considerations.
Sidebars
The optimal solution involves creating a custom user interface in Salesforce. Make sure that an appropriate user interface container (such as a Visualforce page) is available to render the custom user interface. Supported browsers are listed in the Streaming API documentation.
Example
A telecommunications company uses Salesforce to manage customer cases. The customer service managers want to be notified
automatically when a case is successfully closed by one of their customer service reps.
To implement the solution prescribed by this pattern, the customer should:
• Create a PushTopic that sends a notification when a case is saved with a Status of “Closed” and a Resolution of “Successful” (sketched after this list).
• Create a custom user interface available to customer service managers. This user interface subscribes to the PushTopic channel.
• Implement logic in the custom user interface that shows alerts generated by that manager’s customer service reps.
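The first step, creating the PushTopic, is sketched below using the REST API (it can also be created with anonymous Apex). The Resolution field is assumed to be a custom field, shown here as Resolution__c, and the instance URL and access token are placeholders.

    # Sketch: create the PushTopic that fires when a case is saved with
    # Status = 'Closed' and (hypothetical custom field) Resolution__c = 'Successful'.
    import requests

    INSTANCE_URL = "https://yourInstance.salesforce.com"    # placeholder
    ACCESS_TOKEN = "00D...exampleToken"                      # placeholder

    push_topic = {
        "Name": "ClosedSuccessfulCases",
        "ApiVersion": 45.0,
        "Query": ("SELECT Id, CaseNumber, OwnerId, Status FROM Case "
                  "WHERE Status = 'Closed' AND Resolution__c = 'Successful'"),
        "NotifyForOperationCreate": True,
        "NotifyForOperationUpdate": True,
        "NotifyForFields": "Referenced",     # notify when queried or filtered fields change
    }

    response = requests.post(INSTANCE_URL + "/services/data/v45.0/sobjects/PushTopic",
                             json=push_topic,
                             headers={"Authorization": "Bearer " + ACCESS_TOKEN})
    response.raise_for_status()
    print("PushTopic created:", response.json()["id"])

The managers’ custom user interface then subscribes to the /topic/ClosedSuccessfulCases channel, following the same subscription flow outlined in the Solution section.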
APPENDICES
APPENDIX A Resources—External
1. Hohpe, Gregor, and Bobby Woolf. Enterprise Integration Patterns. Boston: Addison-Wesley Professional, 2003.
2. Microsoft Corporation. Integration Patterns (Patterns & Practices). Redmond: Microsoft Press, 2004.
3. IBM Corporation. Application Integration Patterns. IBM Corporation, 2004.
4. “Synchronous and asynchronous processes,” IBM Corporation, last accessed March 18, 2013,
http://publib.boulder.ibm.com/infocenter/adiehelp/v5r1m1/index.jsp?topic=%2Fcom.ibm.etools.ctc.flow.doc%2Fconcepts%2Fcsynchf.html.
5. “Hub and Spoke [or] Zen and the Art of Message Broker Maintenance,” Enterprise Integration Patterns, last accessed March 18, 2013,
http://www.eaipatterns.com/ramblings/03_hubandspoke.html.
APPENDIX B Resources—Salesforce
Developer Documentation
• SOAP API Developer Guide
• REST API Developer Guide
• Salesforce Streaming API Developer’s Guide
• Bulk API Developer Guide
• Apex Developer Guide
• Salesforce Object Query Language (SOQL) Reference
• Salesforce Object Search Language (SOSL) Reference
• Salesforce Limits Quick Reference Guide
APPENDIX C Security Considerations
To be effective members of the enterprise portfolio, all applications must be created and integrated with relevant security mechanisms.
Modern IT strategies employ a combination of on-premises and cloud-based services.
While integrating cloud-to-cloud services typically focuses on Web services and associated authorization, connecting on-premises and
cloud services often introduces increased complexity. This section describes security tools, techniques, and Salesforce-specific
considerations.
Encryption
Some enterprises require selected transactions or data fields to be encrypted between a combination of on-premises and cloud-based
applications. If your organization must adhere to additional compliance requirements, you can implement alternatives, including:
• On-premises commercial encryption gateway services, including Salesforce’s own, CipherCloud, IBM DataPower, Computer Associates.
For each solution, the encryption engine or gateway is invoked at the transaction boundary, either to send and receive an encrypted payload or to encrypt or decrypt specific data fields, before the HTTP(S) request executes.
• Cloud-based options, such as Salesforce Shield Platform Encryption. Shield Platform Encryption gives your data a whole new layer
of security while preserving critical platform functionality. The data you select is encrypted at rest using an advanced key derivation
system. You can protect data more securely than ever before. Refer to the Salesforce online help for more information.
• Security/XML gateway—Inject WS-Security credentials (IBM WebSeal or Datapower, Layer7, TIBCO, and so on) into the transaction
stream itself. This approach requires no changes to application-level Web services or Web service invocations from Salesforce. You
can also reuse this approach across the Salesforce installation. However, it requires additional design, configuration, testing, and
maintenance to manage the appropriate WS-Security injection into the existing security gateway approach.
• Transport-level encryption—Encrypt the communication channel using two-way SSL and IP restrictions. While this approach doesn’t implement a WS-* protocol itself, it secures the communication channel between the on-premises applications and Salesforce without passing a username and password, and it requires no changes to Salesforce-generated classes. However, some on-premises Web services modifications might be required (at either the application itself or at the middleware/ESB layer). A brief client-side sketch appears after this list and the note that follows.
• Salesforce custom development—Add WS-Security headers to the outbound SOAP request via the WSDL2Apex utility. This generates
a Java-like Apex class from the WSDL file used to invoke the internal service. While this requires no changes to back-end Web services
or additional components in the DMZ, it does require:
– an increased build and test effort
– a relatively complex and manual process to hand-code the WS-Security attributes (including XML serialization within the Apex
code)
– a higher long-term maintenance effort
Note: The last option isn’t recommended due to its complexity and the risk that such integrations need periodic reviews
based on regular updates to Salesforce.
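To illustrate the client side of the transport-level encryption option, the sketch below shows a caller presenting its certificate to an endpoint that requires two-way SSL. The endpoint URL and file names are placeholders; issuing the certificates and configuring IP restrictions are handled outside this code.

    # Sketch: a client presenting its certificate over two-way SSL (mutual TLS).
    import requests

    response = requests.post(
        "https://integration.example.com/service",            # placeholder endpoint
        json={"ping": True},
        cert=("client_certificate.pem", "client_key.pem"),    # proves the client's identity
        verify="trusted_ca_bundle.pem",                        # verifies the server's identity
    )
    response.raise_for_status()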