
Application-based Unused Connectors

Agiloft CLM - Partner connector


The Agiloft CLM - Partner connector makes it easy to connect other
applications with Agiloft on the Boomi Enterprise Platform and to
automate a wide range of tasks from simple data export to complex inter-
application workflows.

Any changes to table structures in the Agiloft KB are immediately also available in the Agiloft CLM - Partner connector on the Boomi Enterprise Platform.

Visibility of and access to data in the Agiloft KB with the connector is policed by the permissions attached to the Agiloft credentials.

Objects in the Agiloft KB are referred to as tables. Each table contains one or more records, and each record contains one or more fields. Data is stored in fields. The connector may have limited access to table objects in Agiloft depending on the permissions settings for the Agiloft account used with the connector.

Prerequisites
The following prerequisites are needed in order to implement a connection
with an Agiloft KB from Integration:
 Knowledge Base name, domain URL, login credentials to access the
Agiloft KB.
 The login credentials must have permission to use the Agiloft API and the correct access permissions in Agiloft to perform all operations the process implements.

Agiloft CLM - Partner operation


All operations use JSON format and support the following actions:
 Get — Returns a JSON object that contains the content of all fields
from a specific record in the Agiloft KB.
 Query — Returns a list of records and the content of each record. A
Query can be based on a “Saved Search” that exists in Agiloft or can
be configured with filter parameters.
 Update — Updates an existing record in Agiloft with the provided
data.
 Delete — Deletes the specified record in Agiloft.
 Attach — Attaches a new file to the specified attachment field in
Agiloft.
 AttachRetrieve — Returns an attached file from the specified field. If
the field contains multiple files, then each file can be selected in the
request by its ID.
 AttachRemove — Deletes an attached file in the specified field. If
the field contains multiple versions of this file, then the version can
be selected in the request.
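For illustration, a Get request against a hypothetical contracts table might return a JSON document along these lines (the field names are invented for this sketch; the actual fields depend entirely on the table structure and permissions in your Agiloft KB):

{
  "id": 1042,
  "summary": "Master Services Agreement - Acme",
  "status": "Active",
  "date_created": "2024-03-15T10:22:00Z",
  "assigned_team": "Legal"
}
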
Aible AI – Partner connector
Aible is built from the ground up to deliver business impact by helping you
predict, optimize, and anticipate change. The result: sustained ROI. Fast.

Prerequisites
To implement a connection to your Aible project from Boomi Integration:
 Have your login and password to access Aible
 Uploading data:
o Know which tenant to use (if you have access to more than
one)
o Know which cloud account to store the data (if more than one
is configured)
o Know which folder to store the file
 Scoring data:
o Know your Aible project ID and tenant

Operation:
Request Profile
The request profile is an array of JSON objects with each object
representing a single input record to be scored. Aible will score each
record by appending a predicted value of the outcome modeled in the
Aible project.
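As a sketch, a request profile for two records might look like the following (the field names are hypothetical; the actual fields must match the data used to train the model in your Aible project):

[
  {
    "customer_id": "C-1001",
    "industry": "Retail",
    "annual_revenue": 250000
  },
  {
    "customer_id": "C-1002",
    "industry": "Manufacturing",
    "annual_revenue": 1750000
  }
]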

Response Profile
The response profile represents the output returned by Aible after
processing. Like the request profile, the response is also an array of JSON
objects with each object representing a single scored record. Each record
includes all of the fields sent in the Request Profile plus the following
output values:

Field - Format - Description
predict - Number - 0 or 1 representing the Aible prediction.
predict_score - Number - A decimal between 0.0 and 1.0 representing the prediction score.
driver [1-5] - Text - Names of the top 5 fields driving the variation in the prediction result.
impact [1-5] - Number - Decimal value with the impact variation of the respective drivers.

It also includes the following metadata fields used for troubleshooting:


Field - Format - Description
job_id - Number - Identifier of the scoring request.
model_name - Text - Name of the AI model that generated the prediction.
project_segment_id - Text - Identifier of the project subgroup.
scoring_warning_messages - Text - Normally blank.
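Continuing the sketch above, a single scored record in the response might look like the following (the values and the exact naming of the numbered driver/impact fields are illustrative only):

{
  "customer_id": "C-1001",
  "industry": "Retail",
  "annual_revenue": 250000,
  "predict": 1,
  "predict_score": 0.87,
  "driver_1": "annual_revenue",
  "impact_1": 0.42,
  "job_id": 58213,
  "model_name": "churn_model_v3",
  "project_segment_id": "seg-2",
  "scoring_warning_messages": ""
}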

Alleantia IoT - Partner connector


Alleantia ISC is an off-the-shelf Industrial IoT (IIoT) integration software
platform that can integrate in seconds any industrial machine and device
with any application and cloud platform, transforming machine data (‘IoT
Data’) into secure, ready-to-use valuable information to monitor, manage
and control production and operational processes in real time, enabling
digital transformation for industrial end-users and machine makers.

The Alleantia IoT - Partner Connector is a component that bridges the Alleantia ISC IoT Gateway and the Boomi Enterprise Platform, ensuring a fast and easy connection between machines and applications, to enable complete monitoring and control features for industrial devices, machines, plants and sensors. The connector provides access to:
 System information
 List of all devices connected to a specific ISC instance (e.g., device supplier, model, version…)
 List of all variables configured for a specific device (e.g., IDs, names, dataTypes, measurement units…)
 List of all configured variables values (near real time read from
devices)
 List of current active alarms
 List of alarm history
 List of raised events
 Write values on writable variables on devices

Benefits
 Fast, bidirectional integration of Industrial IoT device information with the Boomi Enterprise Platform.
 Thousands of ready-to-use machine drivers for fast connection.
 Wide flexibility in installation environments for Alleantia ISC –
physical or virtual architectures, centralized and distributed, on
hardware and OS of many kinds.
 Enterprise-grade IIoT solution with thousands of deployments and
supporting management platform.

Prerequisites
Alleantia IoT — Partner connector requires the following:
 Alleantia ISC up and running on target installation, with licensed
REST API module.
 Alleantia ISC Rest API plugin active on target installation.
 The locally deployed Atom must be able to reach the Alleantia ISC through the customer network, or may be installed on the same system where Alleantia ISC runs.

Test Connection troubleshooting


If the connection doesn't complete, there are three possible causes:
 Alleantia ISC software is not active
 Alleantia ISC host is not active / not connected to the network
 Alleantia ISC host is not reachable from Atom host.
In case of any problem, check if the Atom has visibility to the Alleantia ISC
host through the network of the Alleantia ISC instance, and also ensure
your firewall is not blocking communication on http port 80. Perform a
direct test on Alleantia ISC Rest API from Atom host using a browser and
the instructions in the Alleantia API documentation.

Alleantia IoT — Partner operation

Actions
The Alleantia IoT — Partner connector operation only supports actions that send and receive data to and/or from the Alleantia ISC Software. Certain operations require you to specify certain fields in the request necessary to complete the action. The Alleantia IoT — Partner connector supports the following actions:
 Alleantia_System_Information — Use this action to return Alleantia
system information.
 All_Devices_Information — Use this action to return a data list to
retrieve information on all devices configured in the ISC Software.
 Single_Device_Information — Use this action to return the
configuration information for a device configured in the system.
 All_Custom_Variables_Information — Use this action to return a list with the information on the variables configuration of the plant configured in the IoT Server.
 All_Variables_Information — Use this action to return a list with the information on the variables configuration of devices configured in the ISC Software.
 Single_Variable_Information — Use this action to return the
information on a variable configuration for a device configured on
the ISC Software.
 All_Alarms_Information — Use this action to return the information
on all alarms configured on the ISC Software.
 Single_Alarm_Information — Use this action to return the information
on an alarm configured on the ISC Software.
 All_Events_Information — Use this action to return information
relative to events that can be generated by the system.
 Single_Event_Information — Return information relative to the single
event that can be generated by the system.
 Get_Full_Telemetry — Use this action to return data lists to retrieve
information on all devices configured in the ISC Software.
 Get_Single_Device_Telemetry — Use this action to return a list of
actual values for the variables of a device configured in the ISC
Software.
 Get_Single_Variable_Telemetry — Use this action to return the actual
variable value for a device configured in the ISC Software.
 Get_Single_Custom_Variable_Telemetry — Use this action to return
the actual variable value for the plant configured in the ISC
Software.
 Get_Custom_Variable_Historical_Telemetry_Data — Use this action to
return the historical values of a plant variable configured in the
system for a specified time interval.
 Get_Variable_Historical_Telemetry_Data — Use this action to return
the historical values of a variable of a device configured in the ISC
Software for a specified time interval.
 Get_Full_Active_Alarms — Use this action to return a list of active
alarms in the ISC Software.
 Get_Alarms_History — Use this action to return the historical list of
the alarms in the ISC Software sorted by ascending time.
 Get_Full_Active_Events — Use this action to return a list of active
events in the ISC Software.
 Get_Events_History — Use this action to return the historical events
in the ISC Software for a specified time interval.
 Get_Single_Event_History — Use this action to return a single
historical event in the ISC Software for a specified time interval.
 Set_Variable_Value — Use this action to set the value of a writable variable for a device configured in the ISC Software.

 Device ID is a unique identifier only within a specific ISC system and is only valid for the system that has returned it. If the user subsequently modifies the IoT Server configuration, by disabling or removing the device, further calls to this API will no longer provide the device ID.
 Plant Variable ID is unique only within the specific IoT Server that
has returned it. If the user subsequently modifies the IoT Server
configuration, by disabling the device variable, further calls to this
API will no longer provide the id.
Document Properties
This connector has the following document property that can be set or
referenced in various step parameters.
Command String - A JSON string containing an array of values with their
device identifier and variable identifier.
Structure -
[
  {
    "devid": 5,
    "varid": 1,
    "value": "7"
  },
  {
    "devid": 5,
    "varid": 2,
    "value": "4"
  }
  …
]
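In this sketch, the command writes the value "7" to variable 1 and the value "4" to variable 2 of device 5; appending further objects to the array writes additional values in a single call.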

Amazon Redshift (Tech Preview) connector


The Amazon Redshift connector enables you to connect to an Amazon Redshift database cluster via the Amazon Redshift JDBC driver.

Note:
 This is an open source connector and you can access the code in
the Amazon Redshift connector repository.

The connector leverages the Amazon Simple Storage Service (S3) during the Insert and Upsert operations for temporary storage and retrieval of large volumes of data objects. The connector also offers the ability to utilize Amazon S3 data encryption options for protecting the sensitive data stored on disks in the Amazon S3 data centers.

Prerequisites
To use the Amazon Redshift Connector, you will require the following:
 Amazon Web Services (AWS) – Redshift cluster user account details
like username and password.
 AWS access key which includes access key ID and secret key details
for secured Amazon S3 connection. You can get the keys from the
AWS Management Console (My Security Credentials page). If you
don’t have an access key created already, create these keys in the
AWS Management Console. For more information,
see Understanding and getting your AWS credentials.
 Appropriate access rights to the Amazon S3 resource. You can verify
your rights in Amazon S3 by selecting a bucket, folder, or file and
clicking Properties > Permissions or by performing actions like
creating or deleting a bucket, adding and uploading files, etc.
 Redshift JDBC driver version 1.2.10.1009 deployed to the
appropriate Atom environment as a Custom Library Component.

Test Connection: In addition to the supplied URL, username, password, and access keys, you need to select the Amazon resource for which you want to check the connection.
 Redshift – Select to test the connection to Redshift database only.
 S3 – Select to test the connection to the Amazon S3 bucket.
 Both – Select to test the connection to both Redshift database and
Amazon S3 bucket.

The Amazon Redshift connector operations define how to interact with your Redshift cluster and represent a specific action (Insert, Update, Upsert, Upload and Delete) to be performed against one or many Redshift database objects.

Amazon S3 connector

The Amazon S3 connector makes it easy to work with Amazon Simple Storage Service by simplifying the interaction and adding convenience.

Features for this connector include:


 Dynamic properties that are backward compatible with existing
dynamic document properties.
 Three tracked properties.
 Enhanced process reporting that supports tracked properties.
Prerequisites
To implement a connection to your Amazon S3 account from Integration, have the following:
 Your AWS access key and AWS secret key.
 The appropriate access rights to the Amazon S3 resource.

Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
Bucket - The name of the bucket owned by the AWS account in which you
store objects.
File Key - The key name that identifies the object in the bucket.
Content-Type - Indicates the format of the content, such as string
(integer) or enumeration (value).

The Amazon S3 connection represents a single Amazon S3 account, including login credentials.

If you have multiple accounts or buckets, use a separate connection for each. Pair a single connection with different Amazon S3 operations to perform a unique action against an Amazon S3 account.

Amazon S3 operations:
Amazon S3 supports the following actions:
 Get — (inbound) retrieves a single file or a list of files. Use a single
parameter ID (the Amazon key for the file in the Amazon S3
system). You cannot GET files encrypted with a client-side
encryption nor can you use custom keys to encrypt files.
Simulate Amazon’s folder structure by concatenating the folder path with the final file name, with each folder name ending with a forward slash “/” (for example, invoices/2024/invoice-123.pdf). Use these objects:
o Get object retrieves a single file from the bucket.
o Get Object (Binary) retrieves a single file that is 50 MBs or
greater.
o Get Listing retrieves a list of files and folders in the specified bucket, but not the contents of a file. Each listing entry includes key, folderName, filename, isDirectory, lastModified, size, and bucketName (see the sketch after this list).
 Delete — (outbound) deletes data based on the ID for the Amazon
S3 object key.
 Create — (outbound) uploads the current document into the
Amazon S3 Cloud as object data.
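Purely as an illustration of the metadata a Get Listing entry carries (shown here as JSON for readability; the exact document format is determined by the profile you import), one entry might look like:

{
  "key": "invoices/2024/invoice-123.pdf",
  "folderName": "invoices/2024/",
  "filename": "invoice-123.pdf",
  "isDirectory": false,
  "lastModified": "2024-06-01T08:15:00Z",
  "size": 48213,
  "bucketName": "acme-billing"
}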

Amazon S3 REST connector


Amazon Simple Storage Service (S3) allows you to simply and securely
store and retrieve any amount of data from anywhere on the Web. Using
the Amazon S3 REST connector, you can use a process to upload, store,
retrieve, filter, and delete object data from an Amazon S3 bucket.
Benefits
The benefits of using the Amazon S3 REST connector in a process include the following:
 Improved security to protect your data using server-side encryption
with customer-provided encryption keys (SSE-C).
 The connection automatically retrieves the AWS region in which your bucket resides. You can manually specify the AWS region either in the operation or at the document level to override the automatically retrieved region.
 The connection is not directly tied to a bucket, giving you more
flexibility with how you store and manage your data.
Prerequisites
To use the connector, implement a connection to your AWS account from Integration, and to help ensure that browsing and the operations are successful, have the following information:
 Your AWS access key and secret key. You use the AWS Management
Console as the AWS account root user to manage access keys. When
you create an access key, AWS allows you to view and download
your secret key.
 The appropriate access rights and permissions to the Amazon S3
objects.
o To list objects inside a bucket, verify that you have
the s3:ListBucket permissions.
o To upload and download files, verify that you have read/write
permissions.
o To filter objects, verify that you
have s3:GetObject permissions for the object you are
filtering.
o To delete objects, verify that you
have s3:DeleteObject permissions.
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
 Bucket as the object type when browsing.
 Folder - (Upsert) The folder in the bucket where the object is
uploaded.
 File Name - (Upsert) The name of the object to upload.
 File Key
(Get, Upsert) The key name identifying the full path (folder and file
name) to the object in the bucket.
 Content-Type
(Get, Upsert) The format of the content, such as text/plain, application/octet-stream, string (integer), or enumeration (value).
 Version ID
(Get) The version ID of the bucket in which you store objects, if you
enable versioning for the bucket.
 Bucket Region - (All operations except Delete) The AWS Region in which your account resides.

The Amazon S3 REST connection represents and contains all of the information you need to connect and sign into a single AWS account.
The Amazon S3 REST operations use XML format and support the
following actions:
 Get — retrieve an object from Amazon S3.
 Upsert — upload and create a new object in Amazon S3 (a single
chunk or several chunks), or perform an update to an object, if the
object already exists.
 Select — filter the contents of Amazon S3 objects using a SQL
statement and retrieve just the subset of data you need.
 Delete — remove and delete an object from Amazon S3.
 Query — look up Amazon S3 objects based on specific search
criteria.

Retrying requests
To help ensure successful operation requests, the connector retries
requests based on the response status code from the server:
 Server error (5xx) — Indicates that you made a request, but the server cannot complete it. Retries follow a phased approach (intervals of 0, 10, 30, 60, 120, and 300 seconds) with a maximum of six retries.
 Redirection (3xx) — Indicates that you made a request, but further action is needed to process it. Requests retry only once.
 Client error (4xx) — Indicates there is an issue with the request, and
not with the server. There are no retries of the request.
 Server response (2xx) — Indicates that a request is successfully
received, understood, and accepted. If there are no errors in the
XML provided in the request, it is considered successful and does
not retry.
 Communication error — Indicates that there is an error communicating with the remote server. Request retries follow the same phased approach.
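As a concrete reading of the schedule above: a request that keeps failing with 5xx errors is retried immediately (0 seconds), then after 10, 30, 60, 120, and 300 seconds; if the sixth retry also fails, no further retries occur.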

Browsing and dynamic buckets


When browsing, the connector lists all of the available buckets to which
you have access, regardless of your region, in the Object Type drop-down
list. Choose the bucket from which you can:
 Retrieve and store the object
 Filter and retrieve the contents of objects
 Delete the object
 Query the object

Special characters and file names


You can use any characters, including special characters, when specifying the file name for an operation. However, there are some special characters to avoid (for example, the pipe character "|", the tilde character "~", and so on).

Amazon Selling Partner (Tech Preview) connector

The Amazon Selling Partner connector allows you to utilize several of the Selling Partner (SP-API) APIs to programmatically access data on orders, shipments, payments, and much more within a process.

Supported APIs
Currently, the Amazon Selling Partner connector supports the following
APIs:
 Catalog Items 2020-12-01
 Catalog Items 2022-04-01
 FBA Inbound Eligibility
 Feeds 2021-06-30
 Finances
 Fulfillment Inbound
 Fulfillment Outbound 2020-07-01
 Listings Items 2021-08-01
 Merchant Fulfillment
 Orders
 Reports 2021-06-30
 Sales
 Sellers
 Shipment Invoicing
 Shipping

Prerequisites
To utilize the SP-API in Integration, you need the following:
 An Amazon Professional Selling Account. You need to be registered
as a developer.
 An Atom deployed in either a hosted environment or on a local
machine.

Test Connection
You can test your connection settings before you use or save the
connection in a process. The Test Connection ensures that your specified
settings are correct, valid, and accessible.
The test connection is successful provided valid credentials are entered
and the private application has permission to get feeds. This is validated
with two requests:
 A request to the Selling Partner API authentication server, to
exchange the connection data for an access token. If this request is
successful, the authentication data provided is valid.
 A GET Feeds request to the Selling Partner API endpoint selected in
the connection.

The Amazon Selling Partner connector supports these actions:


 Execute
 Get
 Query
When you import an operation, you will see all the available APIs. After selecting an API, the Object Type drop-down displays that API's available operations.

Amazon SNS connector


The Amazon SNS connector makes it easy to work with the Amazon
Simple Notification Service by simplifying the interaction and adding
convenience.
The connector uses AWS Identity and Access Management (IAM) to authenticate users to Amazon SNS.

The connector supports Amazon SNS Mobile Push Notifications using the
EXECUTE operation, and by setting a dynamic document property to the
SNS-assigned endpoint ARN. Additionally, the connector supports Amazon
SNS First-In-First-Out (FIFO) topics in the CREATE operation by setting
the Message Deduplication Id and Message Group Id dynamic
document properties.

Prerequisites
To implement a connection to your Amazon SNS account from Integration, ensure you deploy an Atom on your local machine or hosted with Boomi, and have the following ready:
 Your AWS access key and AWS secret key.
 The name of and password to your HTTP proxy server if you plan to
use that for authentication.
 The project name, project ID, project number, and registration ID for
each device.

Document properties (output)


This connector has the following properties for output documents that you can
set or reference in various step parameters:
EndPoint ARN - (Execute) A mobile endpoint to which you send notification
messages.
Message Deduplication Id - (Create for FIFO topics) A token used for deduplication of a sent message; it is sent to the service. This value helps to avoid duplicate message delivery from the topic.
Message Group Id - (Create for FIFO topics) A tag specifying that a sent message belongs to a specific message group; it is sent to the service. Messages that belong to the same message group are processed in FIFO order. However, the platform can process messages in different message groups out of order.
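For example (with hypothetical values): two messages published with the Message Group Id orders-us are delivered in order relative to each other, while a message tagged orders-eu may be processed in parallel; reusing the same Message Deduplication Id within the service's deduplication window causes the duplicate to be discarded rather than redelivered.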

The Amazon SNS operation supports the following actions:


 Create — add messages to a topic. The maximum message size is
256 KB.
 Execute — publish notification messages to apps on mobile devices
that can appear as message alerts, badge updates, or sound alerts.
This option supports the following services:
o APNS — Apple Push Notification Service that sends push
notification messages to iOS and OS X apps.
o APNS – Sandbox — Apple Push Notification Service that sends
push notification messages to the APNS sandbox for
development use.
o ADM — Amazon Device Messaging that sends push
notification messages to Kindle Fire apps.
o GCM — Google Cloud Messaging for Android that sends push
notification messages to Android apps.
o BAIDU — the Chinese cloud service that sends push
notification messages to mobile devices.
o WNS — Windows Push Notification Services that sends push
notification messages and updates to Windows 8 (and later)
and Windows Phone 8.1 (and later) apps.
o MPNS — Microsoft Push Notification Service for Windows Phone that sends push notification messages to Windows Phone 7+ and Windows Phone 8.0 apps.

Anaplan connector
The Anaplan connector allows you to upload and insert new Anaplan
business modeling and financial planning data, and import and export
data to On Demand and SaaS-based applications. You can upload a file as
a single chunk, or in several chunks.
Anaplan allows you to perform data-intensive planning and modeling
across a variety of industries for more effective top-down and bottom-up
planning.

Prerequisites
To implement a connection to your Anaplan account from Integration and use the connector operations, do the following:
 Have a local Atom or an Atom Cloud.
 Have the URL for the Anaplan API service, and the Anaplan
username, password, and workspace ID.
 Have knowledge of the hierarchical and data dependency structure
of the Anaplan model. For example, the Opportunity list might
assume the pre-existence of the Sales Representative list.
 Have a single document in CSV format that uses comma or tab
delimiters. If you do not have a CSV file and need to create one, use
the Anaplan Export Tabular Multiple Column layout. For more
information, see the readme.zip (Anaplan Connector: How to
Publish) provided to Anaplan customers.
 Complete the steps in Anaplan so you can use the connector
operations.
Tracked properties
The Anaplan connector has the following tracked properties that you can
set or reference in various step parameters.
Model Id - The ID of the model used in the operation. This ID is available
when running the Import operation.
Import Id - The ID of the import used in the operation. This ID is available
when running the Import operation.
Export Id - The ID of the export that is used in the operation. This ID is available when running the Export operation.
Process Id - The ID of the process used in the operation. This ID is
available when running the Process operation.
Task Id - The ID of the task created when running the operation. This ID is
available after running the Import and Process operations.
Delete Id - The ID of the delete configuration being executed when
running the operation. This ID is available after running the Delete
operation.

The Anaplan operations use JSON format and support the following
actions:
 Import — bring data into Anaplan.
 Export — export data into a file.
 Upload — upload a file to Anaplan (as a single chunk or as several
chunks) so you can later import the file and bring it into Anaplan.
 Download — download a file that was previously uploaded to
Anaplan, or download a file after performing an export.
 Process — process several actions to run in a particular order. A
process can contain any combination of import, export, and delete
actions.
 Retrieve Dump File — retrieves errors during either an Import action or a Process action containing an import.
 Delete — remove and delete obsolete data.
 List — list models, imports, exports, and files.
Many of these actions require you to first complete steps in Anaplan. For
more information, see the action sections.
Aprimo connector

The Aprimo connector enables you to move data into and out of Aprimo
Marketing Studio On Demand.

Prerequisites
The Aprimo connector requires the following:
 An Aprimo Marketing Studio On Demand account with a user name
and password.
 Private and public keys for the Aprimo site’s base URL.
Connection tab
The following fields appear on the connection tab:
Aprimo Site’s Base URL - Aprimo’s Base URL from which to infer metadata and
gateway URLs.
Public Key - The user’s public key.
Private Key - The user’s private key for logging on, which is used instead of a password.

The Aprimo operation supports the following actions:


 Get — The inbound Get operation supplies an ID for the Aprimo
record that you want to retrieve.
 Query — After you select the inbound Query connector action and use the Import Wizard, you can use the Fields tab to select the fields the query returns. Some fields are in the format <field_name>/value. Select the nested value element to include it in the results and to process it successfully. If you do not select any fields, the results include all fields. The Aprimo connector supports field selection three levels deep in the structure. You can add filters that limit the records returned to match certain criteria.
 Create — The outbound operation creates records in the Aprimo
object defined in the operation. You can set the Batch Count field
on the Operation component to group requests.
 Delete — The outbound operation removes records in the Aprimo
object defined in the operation.
 Update — The outbound action updates records in the Aprimo object
defined in the operation. You can set the Batch Count field on
the Operation component to group requests.

Aria – Partner connector

The Aria – Partner connector enables you to connect to the Aria Subscription Billing Platform to move data into and out of the Aria application.

You can also use this connector to integrate the Aria platform with other On Demand or SaaS-based applications, such as Intacct, Salesforce, NetSuite, and so on. The Aria – Partner connector connects directly to the Aria application and browses the interfaces in real-time.

Connection tab
URL - Used to select the base URL for the Aria web service. The following
sandbox and production environments are available: Sandbox Future,
Sandbox Current and Production. For example, to select the connection
URL for the production environment, select Production.
Client No - The client number for logging into the Aria web service.
Authentication Key - The valid authentication key for logging into the
Aria services.
API Version - Used to select an Aria API version. All Aria web service API
versions are listed.

The Aria – Partner operation supports the following actions:


 Inbound — Query
 Outbound — Create, Execute, Update
Note: Contact your Aria representative to learn more about the API
requirements for Aria and how to access other resources in the user
community for field definitions.

Execute
The following services and objects are available when you use the
outbound Execute action:
 Other-Generate_Random_String
 Other-Validate_Payment_Information
 TransactionInvoice-Pre_Calculate_Invoice
 Other-Issue_Refund_To_Account
 Account-Send_Arc_Threshold_Email
These actions directly run the objects. The Aria database retrieves the
data, parses the data into an XML structure in the response profile, and
returns a document for each logical record.

Autotask connector

The Autotask connector enables you to easily integrate your Autotask application data, such as customers, contracts, and projects, with other systems, including CRM applications, accounting packages, and more.

Prerequisites
To implement a connection to your Autotask account from Integration, you need:
 An Atom installed in either a hosted environment or on a local
machine.
 An Autotask account.

Tracked properties
This connector has no predefined tracked properties.

Connection tab
Web Services Site - The URL to the Autotask web services. New connections, by default, set the URL to: https://webservices.autotask.net/atservices/1.6
CAUTION:
On December 31, 2021, Autotask deactivated version 1.5 of their API, and that version is no longer supported. For backward compatibility, the connector continues to support the legacy URL https://webservices.autotask.net/atservices/1.5. However, Boomi encourages you to update your existing connections and use version 1.6. After updating the URL to the 1.6 version, save the connection to use the new version in your process.
User - Your Autotask user name.
Password - The password for the user.

Autotask operation
The Autotask operation defines how to interact with your Autotask account, and represents a specific action (Query, Create, and so on) to be performed against a specific Autotask object (such as Contract, Account, and Invoice).
Boomi for SAP connector
The Boomi for SAP connector allows you to connect to Boomi for SAP core functionality via OpenAPIs. This connector retrieves the OpenAPI definition and generates an easy-to-use interface for you to seamlessly configure the operations available as part of Boomi for SAP.

Features
In this release, Boomi for SAP supports the following features:
 Table Services — Query SAP Tables, Views, and Joined Views/Tables
 BW Extractors — Get and Delete SAP BW Data
 Function Module Services — Create, Read, Update, and Delete SAP
data objects, trigger jobs, and much more.
 IDocs — Send IDocs to SAP
 Listen - Listen to and receive events based on SAP event
subscriptions
The following features can be utilized with the Boomi Web Server:
 Events — Receive Change Pointers, Business, Initialization, and
Custom Events in Boomi
 Table Service Batch Extracts — Receive batched, multi-threaded
extracts of Table Services in Boomi
 IDoc — Receive IDocs from SAP

Prerequisites
To use the connector and implement a connection, have the following:
 SAP UserID and Password
 SAP Application Server hostname, including the port, of the SAP instance that has Boomi for SAP installed.
 Boomi for SAP Core installed and configured on the SAP instance

Tracked properties
This connector has no predefined tracked properties.

The Boomi for SAP connector supports QUERY, FUNCTION, BW_GET, BW_DELETE, IDOC_SEND, ABAP_PROXY_SEND, and LISTEN operations. QUERY is used to connect to Boomi for SAP Core Table Services. FUNCTION is used to connect to Boomi for SAP Core Function Module Services. BW_GET and BW_DELETE are used to fetch and delete data, respectively, via configured Boomi for SAP Core BW Extractors. LISTEN is used to open a persistent connection to SAP and receive any event sent over a chosen EDA subscription. IDOC_SEND allows sending of IDocs to SAP, either single or bundled in the same XML. All operations support importing services that are configured in Boomi for SAP Core.

Boomi Flow Client connector

The Flow Client connector enables processes to request actions exposed in the Flow REST API.
The connector currently supports only the launch and invocation of flows:
 Public flows
 Flows with and without inputs

The Boomi Flow Client operation defines how to interact with Flow.
The operation sends a request for running a tangible action exposed in the
Flow REST API. The Flow tenant in which the action runs is specified in the
Boomi Flow Client connection.

Configuring a Flow Client operation to invoke a flow (no inputs)

Procedure
1. Create the Boomi Flow Client operation component.
2. In the operation, click Import.
The Flow Client Import Wizard opens.
3. Do one of the following:
o To browse for available object types in an Atom Cloud,
select Browse in Cloud.
o To browse for available object types in an Atom,
select Browse in Atom.
4. If you selected Browse in Atom, in the Atom list, select the Atom
in which to browse for available object types.
5. Select or create the Flow Client Connection for the flow user.
6. Click Next.
The wizard advances to the next screen.
7. In the Object Type list, select Run - Initialize and Invoke Flow.
8. Click Next.
The wizard advances to the next screen.
In that screen, Request Profile shows the name of the generated request profile. The profile name is Flow Client Run - Initialize and Invoke Flow EXECUTE Request, unless a profile with that name already exists in the account. In that case, a unique number is appended to the profile name — for example, Flow Client Run - Initialize and Invoke Flow EXECUTE Request 2.
9. Click Finish.
The wizard closes and creates the profile. In the
operation, Object and Request Profile update to reflect the object type
you selected in the wizard and the generated profile. You might want at
some point to change the name of the profile to one that is more
descriptive.
10. Optional: To enable processing to continue if the operation
returns an error, turn on Return Application Error Responses.
11. Save the operation.

Cross-source synchronization
Use Boomi DataHub to synchronize a domain’s master data across its source systems. Synchronization relies on the deployment of the following types of integration processes:
 Contributing source to Boomi DataHub — incremental synchronization processes that batch source record updates and route them to the repository on which a domain is hosted.
 Boomi DataHub to source — synchronization processes that route source record update requests, which originate in batches from the repository, to source systems.
The recommended execution sequence is as follows in cases where there exists a domain-wide source of record — that is, a source whose data across all fields is more trusted than data from all other sources:
1. Execute the contributing source-to-Boomi DataHub processes.
o Execute first the process that routes updates from the least trusted source.
o Execute last the process that routes updates from the source of record.
2. Execute the Boomi DataHub-to-source processes.
o Execute first the process that routes updates to the least trusted source.
o Execute last the process that routes updates to the source of record.
For example, consider a domain, Contacts, whose attached sources (all
contributing data) are Google Contacts, NetSuite and Salesforce, where
NetSuite is the source of record and Google Contacts is the least trusted
source. The recommended order of execution is as follows:
1. Source to Boomi DataHub
a. Google Contacts
b. Salesforce
c. NetSuite
2. Boomi DataHub to source
a. Google Contacts
b. Salesforce
c. NetSuite
These processes are typically executed as sub-processes of a regularly
scheduled master synchronization process.
Consider the following factors when determining the optimal frequency of
execution:
 anticipated volume of source record updates
 degree to which having timely master and source data is critical
 throughput and capacity of the relevant computing and storage
resources
Intervals of 5, 15 and 60 minutes are common.

Boomi DataHub connector


The Boomi DataHub connector integrates seamlessly with your Boomi DataHub repositories. It helps you build integrations for data synchronization and management faster and more easily.
It uses an HTTPS connection to make web service calls to your authenticated repository via XML request and response profiles. The low-code/no-code user interface simplifies the complexities of connecting to Boomi DataHub and setting up request headers, error handling, and HTTPS connections.
Prerequisites
You must have the following before using the connector:
 Boomi DataHub enabled on your account. Contact your account
representative for more information.
 A Hub repository
 A deployed model in the repository
 A deployed Boomi Atom to run connector operations

Getting started
To use the connector in your integration:
Step 1: Set up the connection. Link your Hub repository to the
connector by providing your username, authentication token, and the Hub
cloud that hosts your repository.
Step 2: Select the Action. The connector can perform several inbound
and outbound operations on your Hub repository.
Step 3: Set up the operation. Click the plus icon to import a new
operation and create the XML profile.

Connector operations
The connector’s operations align with the Boomi DataHub Repository
API operations. Compared to the legacy connector, some operation titles
have changed. The connector supports the following operations:
 Get Golden Record - retrieves a specified golden record.
 Get Quarantine Entry - retrieves a specified quarantine entry.
 Update Golden Records - (previously called Upsert in the legacy
version) creates, updates, and/or end-dates golden records. The
operation can quarantine source entities. It also allows you to send
your new or updated records to staging.
 Query Golden Records - retrieves active golden records.
 Query Quarantine Entries - retrieves quarantine records.
 Fetch Channel Updates - (previously called Query in the legacy
version) fetches batches of source record update requests.
 Match Entities - Lists match results for a batch of entities from a
contributing source.
Tracked properties
This connector has two tracked properties.
 Mdm Current Delivery Id - This property is available with the Fetch Channel Updates operation when using manual acknowledgment. Read Fetch Channel Updates for more information on manual acknowledgment.
 Query Total Count - This property is available with the Query
Golden Records and Query Quarantine Entries operations.

Creating a Boomi DataHub connection


You can create a connection for a single Boomi DataHub repository using
your credentials and Boomi Enterprise Platform permissions.

Authentication
You can authenticate to the repository using the My Hub Authentication
Token, which provides administrator privileges. The Boomi DataHub
connector does not support JSON Web Token (JWT) authentication.
Connecting to a repository
1. In the Integration build canvas > Connector menu, select Boomi
Data Hub (Tech Preview).
2. Click the plus icon next to Connection.
3. Select the Boomi Hub Cloud Name that hosts the repository you
want to connect to. Boomi maintains regional Hub Clouds worldwide.
To find the name of your cloud:
a. Navigate to Integration > Manage > Atom
Management and select the repository name in the list of
Atom Clouds.
b. Look at the Host Name for the cloud hosting the repository.
If you are using a custom cloud:
c. Navigate to Integration > Manage > Atom
Management and select the repository name from the list of
Atom Clouds.
d. Select Shared Web Server to view the Base URL for API
Request.
e. Enter the URL in the Custom Cloud field in the following format, appending /mdm to the path: https://<cloud address>[:port]/mdm/
4. Enter your Hub username as it appears on the Repository page. To
find your username, navigate to Boomi
DataHub > Repositories > select the repository > select
the Configure tab.
5. In the Configure tab, copy the My Hub Authentication Token.
6. In the Connection page of the Hub connector, select Click to
Set and paste it into the Token field.
7. Click Apply.
8. Click Save and Close.

Golden Records
A Golden Record is a single, reliable, and complete version of data
about an entity (like a customer, product, or organization). It's created by
combining, deduplicating, and cleansing data from multiple sources.
Think of it like having one perfect file that contains the most accurate and
up-to-date details about something, even if the information originally
came from many messy or conflicting files.
For example:
 If "John Doe" exists in two systems with different phone numbers,
the golden record will combine these details into a single, trusted
version after validating and resolving conflicts.
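To make the John Doe example concrete, here is a hedged sketch of the merge (rendered as JSON for readability, with invented field names; the connector itself exchanges XML request and response profiles):

{
  "source_A": { "name": "John Doe", "phone": "555-0101", "updated": "2024-01-10" },
  "source_B": { "name": "John Doe", "phone": "555-0199", "updated": "2024-05-02" },
  "golden_record": { "name": "John Doe", "phone": "555-0199" }
}

Which value survives into the golden record depends on the survivorship rules defined in the domain model, for example trusting the most recently updated or the most trusted source.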

Quarantine Entries
Quarantine Entries are data records that are flagged for review or
correction because they don’t meet quality standards, have missing
fields, or fail validation checks. These records are kept in a separate
"quarantine" area until someone reviews and fixes them.
For example:
 If a record for "Jane Smith" has an email but no phone number and
the phone number is mandatory, this record would be sent to
quarantine until the missing phone number is added.
Get Golden Record operation:
Retrieves a "golden record," which is a unified, accurate, and complete
view of a data entity (like a customer, product, or organization). These
records are typically created by consolidating and deduplicating data
from multiple sources.
Get Quarantine Entry operation:
Fetches entries that have been flagged for review or correction due to
data quality issues or mismatches. Quarantined entries are not part of
the golden records until validated or corrected.
Update Golden Records operation:
Modifies existing golden records with new or corrected information.
This operation ensures that the golden record stays up to date and
accurate.
Query Golden Records operation:
Searches or retrieves golden records based on specific criteria, such as
filtering by attributes or matching conditions.
Query Quarantine Entries operation:
Searches for entries in the quarantine repository that match specific
criteria, often to identify and resolve data quality issues.
Fetch Channel Updates operation:
Retrieves updates or changes made to data from a specific source
channel. This ensures that the golden record is synchronized with the
latest data from various input channels.
Match Entities operation:
Compares data from different records to determine if they represent
the same entity. This is part of the deduplication and consolidation
process that helps create a golden record.

Boomi Master Data Hub Listener connector

The Boomi Master Data Hub Listener connector is a listen-only connector used only in Integration Hub listener processes.
Hub listener processes perform data validation and enrichment on incoming batches of source entities contributed to a master data domain hosted on a Boomi DataHub repository Atom. A process data quality step in a Boomi DataHub domain model specifies a Hub listener process deployed to the Boomi Hub Cloud that hosts the repository to which the model is deployed. The specified process must end with a Return Documents step. The data quality step is applied while Boomi DataHub processes a batch, and it runs only if the specified process is deployed.

Boomi Master Data Hub Listener operation


The Boomi Master Data Hub Listener operation receives a batch of source
entities contributed to a domain hosted on a Hub repository Atom.
In addition, the operation receives batch metadata as the following
dynamic process properties:
 mdm_source_id — ID of the batch's contributing service.
 mdm_account_id — ID of the account under which the batch was
contributed.
 mdm_repository_id — system ID of the repository on which the
target domain is hosted.
 mdm_universe_id — system ID of the domain.
For each entity in a batch, the operation receives the source entity ID as
the dynamic document property mdm_source_entity_id. If an entity is an
update to an existing golden record, the operation receives the golden
record’s ID as the dynamic document property mdm_record_id. It
populates each root entity element’s id attribute with the Batch Entity
ID, accessible from the batch detail view on the Inbound Activity page
(Reporting > Inbound Activity). If an entity is an update to an existing
golden record, it populates the root entity element’s grid attribute with
the golden record’s ID.
You must configure a listener process Start step to run this operation.

Box connector

The Box connector enables government agencies and employees to safely and securely store, access, share, and collaborate on files, folders, and content across devices in a cloud-friendly and secure environment.

Prerequisites
 To implement a connection to your Box account from Integration, have your Box client ID and client secret provided when you created your application.
 UTF-8 encode input JSON documents to preserve special characters
and ensure the operations work properly.
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
 File ID — The ID of the file from Box.
 File Name — The name of the file from Box.

Box operation
The Box operation defines how to interact with your Box account and represents a specific action (Create, Get, Delete, Get Folder Items, Update, Upsert, and Search) to perform against a specific Box file, folder, or folder item.
Create a separate operation component for each action and object
combination that your integration requires.
The Box operations use JSON format and support the following actions:
 Inbound: Get, Get Folder Items, Search
 Outbound: Create, Delete, Update, Upsert
Important: UTF-8 encode input JSON documents to preserve special
characters and ensure the operations work properly.

Ceres Technology – Partner connector

Ceres Technology offers an AI platform that enables clients to forecast supply chain disruptions and delays, often months in advance.

This capability allows for better planning and procurement, helping you understand which suppliers are vulnerable, manage inventory more effectively, improve on-time deliveries, enhance supply resilience, and meet Service Level Agreements. Not only do we identify risks, but we also quantify the expected severity of any delays, that is, how long the delay is likely to last.

The Ceres Technology integration platform enables users to work in real-time with the data generated by our customers’ applications.
Benefits
By integrating the Ceres Technology Nostradamus AI with your supply
chain platform, you will enhance your ability to predict and proactively
respond to supply chain delays and improve resilience. By using the
Boomi Partner connector, you will dramatically reduce the time required to
integrate and transmit your data, which will virtually eliminate data errors
and reduce the time required to launch Nostradamus within your supply
chain ecosystem. The result will be enhanced procurement, improved
planning, and increased resilience.
Connector configuration
To configure the connector to communicate with Ceres Technology, set up
the following two components:
 Ceres Technology connection
 Ceres Technology operation

Prerequisites
To use the connector, request the URL and Ceres API Token from the Ceres Team.
CloudBlue Connect - Partner connector
The CloudBlue Connect platform enables the users to work in real-time
with the transactions generated between distributors, resellers, and
customers.
The CloudBlue Connect - Partner connector enables vendors and
distributors to seamlessly integrate their contract, listing, fulfillment, and
usage processes into their internal or line-of-business systems without
coding.

Benefits
By integrating CloudBlue Connect with your platform, you will drastically reduce the time required for various marketing operations and simultaneously eliminate possible errors. CloudBlue Connect enables users to automate your business processes and collaborations with your business partners.

Connection tab
The CloudBlue Connect connection uses access keys, which consist of an endpoint and a secret token key. Create these tokens using the Integrations section in the CloudBlue Connect portal. When you create the token, assign the proper permissions so that the integration will work. Once you’ve created the token, configure the connection in Integration using the endpoint and token.
Actions
The CloudBlue Connect Boomi Connector features various actions (also
called resources) that can be used to streamline and automate many
operations on the CloudBlue Connect platform.
Available Actions
The following list introduces actions that are managed by the connector:
 List Resource: This action is used to list your required resource file.
 Get Resource: Get the required resource file by using this
operation.
 Create Resource: Create a required resource file by using this
action.
 Update Resource File: This action is used to update resource files.
 Accept: This action is used to accept required resources.
 Reject: Reject a request by using this operation.
 Approve: Approve required requests by using this action.
 Inquire: Inquire for necessary data to process subscriptions.
 Pending: Switch requests to the pending state by using this action.
 Assign: This action assigns an agent to process Connect requests.
 Unassign: Used to remove an assigned agent from the request
processing.
 Close: This operation is used to close a record chunk within a usage
file.
 Regenerate: As the name implies, this action is used to regenerate
required resources.
 Bulk Close: This operation is used to close all usage records within
the usage file.
 Reprocess: Use this action to process the provided resource once
again.
 Submit: This operation is used to submit a usage file to your
business partner.
Operations
Each operation implements a specific task such as reading or creating an
object of CloudBlue Connect. The CloudBlue Connect connector operations
use JSON format and support the following actions:
 Inbound: Get, Search
 Outbound: Create, Delete, Update, Custom action (Approve, Pending, Assign, etc.)
Extension Objects
Depending on your selected action, the following objects (also called
resources) of the extension are available:
 Asset: This object represents a subscription on the Connect
platform.
 Request: Fulfillment request object (also called subscription
request).
 Tier Account: A Tier Account represents an account of your
customer or reseller.
 Tier Account Version: This represents a version for your tier
account object.
 Tier Account Request: A request object for a Tier Account.
 Tier Config Request: This is a Tier Configuration request (tier
request) object.
 Tier Config: A tier configuration object on the Connect platform.
 Subscription Request: Fulfillment request object (also called asset
request).
 Subscription Asset: Represents a subscription (asset) object on
the platform.
 Product: This represents a product object on the platform.
 Usage Report: As the name implies, this represents a usage report
object.
 Usage Record: Your provided record that is filed within the usage report file.
 Usage Chunk: A chunk of usage records that are filed within the
usage report file.
 Usage Reconciliation: Usage reconciliation data is used to
indicate successful usage file processing.
 Case: This represents a Helpdesk case object on the CloudBlue
Connect platform.

Coupa connector

The Coupa connector enables you to move data into and out of Coupa and
integrate any on-premise applications and legacy systems with Coupa.

Creating a Coupa security token


By creating a Coupa security token (API key), you can access Coupa in
Integration.
Procedure
1. Sign in to Coupa using the browser to create your API Key.
2. Go to Setup > Company Setup > API Keys.
3. Create a new API Key, if necessary.

Coupa operation
The Coupa operation defines how to interact with your Coupa account and
represents a specific action, such as Search and Create to perform against
a specific Coupa record type or object (Suppliers, Items, Accounts, Users,
and so on).

Eyer - Partner connector


The Eyer platform is an automated time-series observability platform that easily integrates with and supports different technologies through APIs.
Benefits
The Eyer - Partner connector requires an agent installed per
Atom/Molecule runtime environment (server/VM/Docker) to collect JMX
performance metrics for the Eyer anomaly detection engine. The Eyer -
Partner connector will integrate directly in Boomi processes via the
connector, giving you the power to act proactively on anomaly alerts from
Eyer. Alerts contain anomaly warnings and possible impacts relating to
Atom & Molecule performance like throughput, latency, queues, statuses,
and other metrics related to runtime processing.

Connection tab
The Eyer – Partner connection has the following fields that are required for
configuration.
Server: The base URL of Eyer with which you want to connect. Do not
change this field.
Eyer authentication key: By default, this field is set to CUSTOM. The
authentication key is obtained from Eyer.
Custom authentication key: Set your Eyer authentication key in this
field.

Eyer - Partner operation


The Eyer – Partner operation defines how to interact with your Eyer
account and represents a specific action (GET) to retrieve anomaly alerts
with set criteria and data.
Actions
The Eyer - Partner connector supports a single action (GET) with either
static or dynamic (set in the process) parameters for Query with
metrics and Query to date.

Fiix — Partner connector

The Fiix — Partner connector allows you to work with various objects
exposed through the Fiix API, meaning that you can access all
Computerized Maintenance Management System (CMMS) related data and
methods to work with assets, purchase orders, work orders, and more.

You can use the connector to seamlessly integrate Fiix CMMS with
intersecting systems so that you can access maintenance-specific data
(assets, work orders, purchase orders, and more).

Fiix — Partner connection


The Fiix — Partner connection represents a single Fiix CMMS account,
including API Access Keys.
If you have multiple accounts, use a separate connection for each account
and configure the URL accordingly. Pair a single connection with different
Fiix — Partner operations to perform unique actions on Fiix CMMS.
 Connection tab
 The Fiix — Partner connector uses API Access Keys for
authentication and authorization. You must provide valid details to
connect to Fiix CMMS successfully.
 URL
The URL for the Fiix Integration API.
 Application Key
The Application Key for the Fiix Integration API.
 Access Key
The Access Key for the Fiix Integration API.
 Secret Key
The Secret Key for the Fiix Integration API.
 Signature Method
Signature method to sign the API keys. Default is HmacSHA256.
 Signature Version
Signature method version.
 API Version
Fiix CMMS version.

Fiix — Partner operation


The Fiix — Partner operation defines how to interact with your Fiix CMMS.
The operation represents a specific action (Query, Create, Update, or
Delete) to be performed against a specific Fiix CMMS object.
Create a separate operation component for each action/object
combination that your integration requires.
The Fiix — Partner operations use JSON format and support the following
actions:
 Inbound: Query
 Outbound: Create, Update, Delete, Execute
Changelog
2024-09 (Version 60, Current)
 New Objects Added
o BusinessRole
o RCAAction
o RCACause
o RCAGrouping
o RCAGroupingAction
o RCAGroupingCause
o RCAProblem
o RegionUser
o RegionUserGroup
o UserCertification
o WorkOrderTaskFile
 Objects Amended
o Asset
o Project
o ScheduledMaintenance
o Stock
o StockHistory
o User
o WorkOrder
 Removed the logic that found and replaced the " character in code, because it prevented the connector from querying data that contains " within the query parameter.

Files.com - Partner connector

Files.com provides a cloud-based Managed File Transfer (MFT) automation platform. Over 6,000 companies trust Files.com to automate and secure their business-critical transfers.
We obsess about security, compliance, reliability, and performance so that
your critical business processes work every time. Easily manage any
transfer flow without writing scripts or code, and onboard workloads and
partners effortlessly.
We support standard file transfer protocols, such as FTP, SFTP, and AS2,
for communicating with external partners and we also provide native apps
for high performance file transfers.
As a fully Cloud-Native SaaS solution, there are no servers for you to buy
or maintain. There is no installation required, and high availability and
redundancy are built in.
Out-of-the-box integrations include Microsoft (Azure, SharePoint,
OneDrive, Active Directory & Office), AWS (S3 & SNS), Google (Cloud &
Drive), Box, Dropbox, Zapier, and dozens of others. Developers can
leverage our SDKs, API, and CLI App to build custom integrations.

Benefits:
 Secure File Transfers: Ensure data security and meet compliance
standards (SOC2, HIPAA, GDPR, CCPA).
 Data Sovereignty: Select from seven geographic storage regions
or integrate with preferred storage solutions like Amazon S3, Azure,
Google Cloud, and more.
 Enhanced Collaboration: Enable partners, customers, and
vendors to connect using their preferred standards.
 Faster Onboarding: SaaS-based, always-available platform
eliminates server setup time, speeding up partner connections.
 Simplified Management: Offers web-based administration and
RESTful APIs for seamless integration.
 Continuous Updates: Files.com handles upgrades and security,
freeing up IT resources.
Connector Configuration:
 Connection: Links to a Files.com account using an API key for
authentication; reusable across multiple steps. The Files.com
connection represents the Files.com user account that will be used
to perform the actions on the Files.com platform. The connection will
be constrained to the permissions of the Files.com user account and
will only be able to access items that the Files.com user account has
been given permission to access. If you have multiple Files.com user
accounts, use a separate connection for each.
 Actions: Perform operations like creating/managing files, folders,
share links, users, and groups.
o File Actions: List, upload, download, copy, move, delete files;
integrate files with Boomi processes.
o Folder Actions: List, create, move, copy, delete folders;
navigate folder structures.
o Share Links: Create/manage secure download links with
notifications and tracking.
o User/Group Management: Create, manage, and delete
users/groups.
 Operations: Create a separate operation component for each
action that you wish to perform.
 The Files.com connector provides the following actions for use in
your Boomi processes:
 Copy
 Create
 Delete
 Download
 List
 Move
 Show
 Update
 Upload
 Download a file*
 List folder items*
 Upload a file*
 *Download a file, List Folder items, and Upload a file are
deprecated but remain for backwards compatibility.
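To make the Share Links actions concrete: the Files.com REST API models share links as bundles. A minimal sketch of creating one, assuming a file at folder/report.pdf already exists in the account (field names follow the public Files.com API; check its documentation for the full set of options):

POST https://app.files.com/api/rest/v1/bundles.json
X-FilesAPI-Key: <your API key>
Content-Type: application/json

{
  "paths": ["folder/report.pdf"]
}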

Several additional actions exist under the Files.com - Partner connector. For the complete list, refer to the Files.com operation topic:

https://help.boomi.com/docs/Atomsphere/Integration/Connectors/r-atm-Files_operation_561f2419-bf24-4d87-b230-f2380007b425

Prerequisites
The Files.com connector requires an API Key, which represents the
Files.com user account that will be performing the connector’s actions.
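As a sketch of what a List action translates to, the request below lists a folder through the Files.com REST API using that key; the path inbound/invoices is a placeholder:

GET https://app.files.com/api/rest/v1/folders/inbound/invoices?per_page=100
X-FilesAPI-Key: <your API key>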

Changelog
2025-01 (Version 141, Current)
 The "Return Application Error Responses" behavior is now respected for operations.
2024-11 (Version 133)
 Fixed an upload operation issue causing timeouts before file delivery completion.
 Fixed a List operation issue where the date format was returned in Unix Epoch Time instead of UTC format (the response profile expects UTC).
2024-07 (Version 121)
 Added a Limit option to the List operation, allowing result limits.
 Fixed a defect in the deprecated List folder items operation where the Per Page option was ignored.
2024-06 (Version 119)
 Added 9 new actions: Copy (File, Folder), Create (Folder, Group, ShareLink, ShareLinkNotification, ShareLinkRecipient, User), Delete (File, Folder, Group, ShareLink, ShareLinkNotification, User), Download (File), List (Folder, Groups, ShareLinks, ShareLinkDownloads, ShareLinkNotifications, ShareLinkRecipients, ShareLinkRegistrations, Users), Move (File, Folder), Show (FileDetails, FolderDetails, GroupDetails, ShareLinkDetails, ShareLinkNotificationDetails, UserDetails), Update (Group, ShareLink, ShareLinkNotification, User), Upload (File).
 Deprecated 3 original operations (still available for backward compatibility).
 Improved connectivity, throughput, and auto-retry capabilities.
Google Ad Manager connector
The Google Ad Manager connector enables users to integrate their data
with Google Ad Manager.
Google Ad Manager is an open, public API that enables managing,
delivering, and measuring the performance of ads.

Connector configuration
To configure a connector to communicate with Google Ad Manager, set up
two components:
 Google Ad Manager connection
 Google Ad Manager operation
This design provides reusable components that contain connection
settings and operation settings. After building your connection and
operation, set up your connector within a process. When you configure the
Google Ad Manager connector correctly within your process, you can map to and from virtually any connected application using the Google Ad Manager connector to manage, deliver, and measure the performance of ads.

Prerequisites
To use the connector and implement a connection to your Google account
instance from Integration, verify that you have the following:
 A Google Ad Manager Small Business account, or a Google Ad
Manager Premium Solution account.
 Prepared for OAuth 2.0 authentication.
 Enabled API access to at least one network.
Google Ad Manager operation
The Google Ad Manager operation defines how to interact with your
Google Ad Manager account and represents a specific action, such as Query and Create, to perform against a specific Google Ad Manager service and object type.
The operation supports the Get, Query, Create, Update, and Execute actions.

Google Ad Manager changelog


2024-08 (Version 24.08.1, Current)
 Necessary runtime packages are now packaged with the connector.
 The connector has been upgraded to support API version v202405, excluding the createTargetingPresets and updateTargetingPresets operations from the TargetingPresets service.
 The Google Ad Manager API version v202305 has been sunsetted
and the connector has been upgraded to support API version
v202402.
2024-04 (Version 24.04.0)
 The connector is upgraded to support the API versions v202308 and
v202311.
 The Google Ad Manager API version v202305 has been deprecated.
2023-11
 Several potential security vulnerabilities (Improper Input Validation,
Information Exposure, Man-in-the-Middle (MitM), Directory Traversal,
and XML External Entity (XXE) Injection) were resolved to make the
connector more secure.
Google BigQuery connector
Use the Google BigQuery connector to stream and insert incremental
structured data into Google BigQuery.
After streaming and inserting your data, you can do the following:
 Run a job to query against the data in Google BigQuery.
 Retrieve the results of a query against the data from the Run Job
operation.
 Update a Google BigQuery table and view.
Google BigQuery is a fully-managed enterprise analytics data warehouse
used to store all of your data. You can run SQL queries to analyze
terabytes of data in seconds and petabytes in minutes to find useful
insights. The data is analyzed in real-time. You can easily load, process,
and make interactive visualizations of your data.
Connector configuration
To configure the connector to communicate with Google BigQuery, set up
two components:
 Google BigQuery connection. This connector uses the OAuth 2.0
protocol for authentication and authorization, and you can use
the JSON Web Token (JWT) Bearer Token and Authorization
Code grant types for client authentication.
 Google BigQuery operation (Streaming Insert, Run Job, Get Query
Results, Update, Upsert Data)
This design provides reusable components, which contain connection
settings and operation settings. After building your connection and
operation, set up your connector within a process. When you configure the
Google BigQuery connector correctly within your process, you can map to and
from virtually any connected application using the Google BigQuery
connector to:
 Stream and insert data into Google BigQuery
 Run a job to manipulate the data
 Run a SQL query
 Update a view

Prerequisites
To use the connector and implement a connection to your Google Cloud
Platform account from Integration, have the following information available:
 The Google Cloud Platform project ID used for billing. Make sure to
enable billing for your project. You can find your project ID by going
to the Google Cloud Platform console and selecting Manage all
projects from the projects list. The names and IDs for all the
projects you are a member of appear.
 When using the JWT Bearer Token grant type for client
authentication, have your signature key and identify your issuer and
subject.
note
The scope is automatically set as an extended JWT claim. Do not change
the extended claim and also ensure that the Scope field is empty. You
create the signature key using the Google Cloud Platform console and
import it into Integration. You find the issuer and subject in the Google Cloud Platform
console (Service account details page).
 When using the Authorization Code grant type for client
authentication, have your Google Cloud Platform client ID, the client
secret from when you created your project, and your scope.
note
Copy and paste the scope into the Scope field from the help (click
the Information button). Verify that you separate each entry in the scope
by a single space.
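To make the JWT Bearer Token grant concrete, the sketch below shows Google's service-account token exchange: a JWT signed with the service account's private key is posted to the token endpoint and traded for an access token. The issuer and subject values are placeholders, and the BigQuery scope shown is an assumption for illustration:

JWT claims (signed with the service account key):
{
  "iss": "service-account@your-project.iam.gserviceaccount.com",
  "sub": "service-account@your-project.iam.gserviceaccount.com",
  "scope": "https://www.googleapis.com/auth/bigquery",
  "aud": "https://oauth2.googleapis.com/token",
  "iat": 1700000000,
  "exp": 1700003600
}

Token request:
POST https://oauth2.googleapis.com/token
Content-Type: application/x-www-form-urlencoded

grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<signed JWT>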

Google BigQuery connection

The Google BigQuery connection represents and contains all of the information you need to connect to and sign in to a Google Cloud Platform account instance associated with your project.
If you have multiple organizations or sandbox instances, you need a
separate connection for each. You can pair a single connection with
different Google BigQuery operations to perform a unique action against a
Google Cloud Platform instance.
Connection tab
The Google BigQuery connector uses the OAuth 2.0 protocol for
authentication and authorization and supports the JSON Web Token (JWT)
Bearer Token and Authorization Code grant types for client
authentication.

Google BigQuery operation


The Google BigQuery operation defines how to interact with your Google
Cloud Platform account instance and represents a specific action to
perform against a specific Google BigQuery object.
Create a separate operation component for each action and object
combination that your integration requires.
The Google BigQuery operation supports the following actions:
 Streaming Insert — (outbound) streams and inserts your data into a
Google BigQuery table.
 Run Job — (outbound) runs a query against the data in Google
BigQuery.
 Get Query Results — (inbound) retrieves the results of a query job.
 Update — (outbound) updates information in a Google BigQuery
table and view.
 Upsert Data - (outbound) uploads a data file within the input
document and makes the file available to load data into a Google
BigQuery table.
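For orientation, the Streaming Insert action corresponds to BigQuery's tabledata.insertAll REST method. A minimal sketch of that underlying request, with placeholder project, dataset, table, and row values (the connector builds the call from your operation's JSON profile):

POST https://bigquery.googleapis.com/bigquery/v2/projects/<project>/datasets/<dataset>/tables/<table>/insertAll
Authorization: Bearer <access token>
Content-Type: application/json

{
  "kind": "bigquery#tableDataInsertAllRequest",
  "rows": [
    { "insertId": "row-1", "json": { "customer_id": 42, "amount": 19.99 } }
  ]
}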

Google BigQuery Connector Changelog (Summary)


 2025-01 (v25.01.1): Dataset ID supports dynamic properties for
Update actions (via JSON input for dynamic tables; overrides for
static tables).
 2024-08 (v24.08.1): Added agent attribution in the User-Agent
header for multiple actions.
 2024-06 (v24.06.1): Dataset ID dynamic for Streaming Insert and
Upsert; resumable API added for interrupted uploads.
 2024-05 (v24.05.2): Dynamic Location ID setting for Upsert
actions.
 2024-04 (v24.04.7): Browse functionality improved; support for
nested/repeated columns in JSON Upserts.
 2024-02 (v24.02.0): "Use Legacy SQL" unchecked by default in
Upsert imports.
 2024-01 (v24.01.0): Browse improved for profiles, including "any"
type objects.
 2023-11 (v23.11.0): Resolved DoS vulnerability; runtime packages
bundled; overridable fields added (e.g., Access Token).
 2023-10: Fixed array parsing issues; Project ID override enabled for
cross-project DML operations.

Google G Suite connector


The Google G Suite connector enables you to integrate with the Google G
Suite platform’s Contacts, Calendar, and Spreadsheets services.
You can use this connector to do the following:
 Create, edit, delete, and search for Google contacts and contact
groups.
 Create, edit, delete, and search for calendars and events in Google
Calendar.
 Read and modify the data in Google Spreadsheets.
You can use Google Data Extended Properties with Google Contacts and
Google Contact Groups by adding custom fields to the XML profile.

Processing special characters


To process special characters in the Google G Suite connector, the data
must be UTF-8 encoded. You can do this when running a local Atom on
Windows by doing one of the following:
 Adding the line -Dfile.encoding=utf-8 to the atom.vmoptions file.
 Converting special characters in a Data Process step
setting Character Encode to UTF-8.
 Converting the data source.

Google G Suite connection


The Google G Suite connection represents a single Google account
instance including sign in credentials.
Connection tab
Authentication Type - Identifies the secure sign in type as OAuth 2.0.
User Name - User name for the Google account.
Password - Password for the user’s Google account.
API Key - The code required for API access. For more information, see
Google’s description about how to use and acquire API Keys.

Procedure
1. Open the Google Data Contact or Contact Group request XML
profile.
2. Right-click the root element, contact or contact-group.
3. Select Add Child Element.
Selecting this field adds a new element to the end of the profile.
4. Select the new element and change the Element Name to your
custom field name plus _c.
For example, to add a custom field called Region, change the element name to Region_c.
5. Expand the Namespace Declarations section and click Add Namespace. Set the URI field to http://www.w3.org/2005/Atom.
6. Save and close the XML profile.

Google Sheets connector

With the Google Sheets connector, you can create, delete, get, query, and
update objects in Google Sheets.
Google Sheets is a web-based application that enables you to create,
update, and modify spreadsheets and share data live online. Independent
from the Google G Suite connector, this connector uses OAuth 2.0 for
authentication and retrieves information in JSON format.

Supported objects
The Google Sheets operations support the following objects:
 Spreadsheets
 Sheets
 RecordData
Prerequisites
The Google Sheets connector requires the following:
 A Google Client ID
 The Client Secret for the account

Google Sheets connection


The Google Sheets connection represents a single Google account
including sign in credentials.
If you have multiple accounts, use a separate connection for each account
and configure the URL accordingly. Pair a single connection with
different Google Sheets operations to perform unique actions on a Google
Sheets account.

Authentication Type - OAuth 2.0 is the default and cannot be changed.


note
For OAuth 2.0 authentication, you may need to specify in the external
application the Boomi callback or redirect URL. The callback URL
is https://platform.boomi.com/account/<your_account_ID>/oauth2/callback, where <your_account_ID> is your Integration account ID. You can also find your account ID in the Integration platform at Settings > Account Information and Setup > Account Information.
Client ID - Enter or paste your client ID from the Google Console.
Client Secret - Click Click to Set to open the Password window, enter
or paste your client secret, and click Apply. Enable pop-up windows in
your browser.
Access Token - Click Generate. When a new window opens asking you
to confirm offline access, click Allow. A second message indicates the
Authorization code was received.

Google Sheets operation


The Google Sheets operation defines how to interact with your Google
Sheets account and represents a specific action (Query, Create, and
Update) to perform against a specific Google Sheets object type such as
spreadsheet or sheet.
Create a separate operation component for each action and object
combination required for your integration.
The Google Sheets operations use JSON format and support the following
actions:
 Inbound: Get, Query
 Outbound: Create, Delete, Update
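To illustrate, an Update against a sheet's cell data corresponds to the Sheets API v4 spreadsheets.values.update method. A minimal sketch with a placeholder spreadsheet ID and range:

PUT https://sheets.googleapis.com/v4/spreadsheets/<spreadsheetId>/values/Sheet1!A1:B2?valueInputOption=USER_ENTERED
Authorization: Bearer <access token>
Content-Type: application/json

{
  "range": "Sheet1!A1:B2",
  "majorDimension": "ROWS",
  "values": [
    ["Name", "Amount"],
    ["Acme", "19.99"]
  ]
}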
Changelog
2024-01 (Version 24.01.0, Current)
 Necessary runtime packages are now packaged with the connector.
 Vulnerability issue of type Improper Authorization was resolved to
make the connector more secure.
 Vulnerability issue of type Improper Verification of Cryptographic Signature was resolved to make the connector more secure.
 Vulnerability issue of type Deserialization of Untrusted Data was
resolved to make the connector more secure.
2023-11
 Overridable fields now include Access Token parameters,
Authorization parameters, and Extended JWT Claims (if supported by
the connector).

Google Storage connector

The Google Storage connector enables you to leverage a powerful, simple, and cost-effective object storage service. Using this connector, you can
store and retrieve files and data from the Google Cloud Storage platform
from a process.
Google Cloud Storage allows world-wide storage and retrieval of any
amount of data at any time. You can use Google Cloud Storage for a
variety of scenarios including serving website content, storing data for
archival and disaster recovery, and distributing large data objects to users
through direct download. The Google Storage connector allows you to
easily incorporate the Storage Bucket service into your integration and
data flow orchestrations. Implementations of using Google Cloud Storage
include storing and serving music, running e-commerce operations,
providing financial web and mobile apps, and more.

Connector configuration
To configure a connector to communicate with Google Cloud Storage, set
up these two components:
 Google Storage connection. This connector uses the OAuth 2.0
protocol for authentication and authorization, and you can use
the JWT Bearer Token and Authorization Code grant types for client
authentication.
 Google Storage operation

Prerequisites
To use the connector and implement a connection to your Google Cloud
Storage account from Integration:
 Have your Google Cloud Storage client ID and client secret from
when you created your project available.
 Activate the Cloud Storage JSON API.
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
 Name — The name of the object.
 Bucket — The name of the bucket containing the object.
 Updated — The modification time of the object metadata.
 MD5 Hash — MD5 hash of the data; encoded using base64.
 Link — The media download link.

Google Storage operation


The Google Storage operation defines how to interact with your Google
Cloud Storage account and represents an action (Create, Get, Delete, and
List) that performs against a Google Cloud Storage object from your
bucket.
Create a separate operation component for each action/object
combination that your integration requires.
The Google Storage operations use JSON format and support the following
actions:
 Inbound: Get, List
 Outbound: Create, Delete
Before creating an operation component, make sure to activate the Cloud
Storage JSON API.
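For orientation, the List and Create actions correspond to the Cloud Storage JSON API's objects.list and objects.insert methods. Minimal sketches with a placeholder bucket and object name:

List objects:
GET https://storage.googleapis.com/storage/v1/b/<bucket>/o
Authorization: Bearer <access token>

Upload (create) an object:
POST https://storage.googleapis.com/upload/storage/v1/b/<bucket>/o?uploadType=media&name=reports/daily.csv
Authorization: Bearer <access token>
Content-Type: text/csv

<file contents>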

Changelog
2024-01 (Version 24.01.1, Current)
 Necessary runtime packages are now packaged with the connector.
2023-11
 Overridable fields now include Access Token parameters,
Authorization parameters, and Extended JWT Claims (if supported by
the connector).

Great Plains connector

The Great Plains connector enables you to move data into and out of the
Microsoft Great Plains (now called Microsoft Dynamics GP) business
management application.
Use the connector to Query and Upsert various types of records, including
Accounts, Customers, Sales Orders, Invoices, and more.
The connector uses an integration layer called eConnect to exchange data
with the Great Plains database. eConnect is a Microsoft API that facilitates
data exchange by performing business validation and ensures data
integrity. eConnect is implemented as a set of stored procedures installed on the Great Plains database server. Ensure you install eConnect
before you attempt to connect to Great Plains.

 Modules:
o Bank Reconciliation (BR)
o Company (CMP)
o Field Service (FS)
o Fixed Assets (FA)
o General Ledger (GL)
o Inventory (IV)
o Lockbox (LB)
o Project Accounting (PA)
o Purchase Order Processing (POP)
o Purchasing (PM)
o Receivables (RM)
o Sales Order Processing (SOP)
o Shared Master (SM)
o US Payroll (UPR)
Prerequisites
To implement a connection to Great Plains from Integration:
1. Install Great Plains and the database.
2. Set up a local Atom within your network to connect to the Great
Plains SQL Server and eConnect. You must install the Atom on
Windows. This connector does not run on Linux. Install the Atom on
the same server as eConnect and the Great Plains SQL Server.
3. Configure the connector with a Windows domain user that has
access to the Great Plains SQL Server database machine. A SQL
Server login must be created for this domain user and the database
user must be granted the DYNGRP role within
the DYNAMICS database.
4. Install the eConnect libraries.
a. The option to install eConnect is under the Additional
Products section of the Great Plains installer.
b. See the eConnect Documentation (see the linked topic).
c. You do not need to install the full eConnect Runtime module.
d. Install IKVM.NET to connect to the .NET libraries.
The eConnect interfaces do not support every object and interaction
available within Great Plains. If your integration scenario requires working
with objects from other modules or more sophisticated queries, you must
use the Database connector to connect to back-end database tables
directly. Consult your eConnect documentation for more information about
the table schemas. Use caution when accessing application database
tables directly to maintain data integrity.
If you encounter connectivity issues, check the container logs for more information about the error messages.
Tracked properties
This connector has no predefined tracked properties.

Great Plains operation


The Great Plains operation defines how to interact with your Great Plains
data and represents a specific action (Query and Upsert) to perform
against a specific Great Plains record type or object (Customers, Sales
Transactions, Purchase Orders, and so on).
Not all actions are available for every object. Create a separate operation
component for each action and object combination required for your
integration.
The Great Plains operation supports two types of actions:
 Get (Query) — Used to retrieve data from Great Plains.
 Send (Upsert) — Used to send data to Great Plains.

Hadoop connector
The Hadoop connector gets files from or sends files to data directories on
the Hadoop Distributed File System (HDFS) server(s) to which the Atom
has access.
HDFS is the primary distributed storage system used by Hadoop
applications. The Hadoop connector:
 Is built on top of the Apache Hadoop version 2.2.0 API library.
 Works with remote Hadoop cluster resources, version 2.2.0 and
higher.
 Works with Cloudera CDH, combining Apache Hadoop with other
open-source projects.
 Interacts with remote Hadoop clusters using Hadoop API libraries.
For information about configuring the native IO libraries, see the
linked topics.
 Does not open a connection to a remote Hadoop cluster name. In
addition, the connector does not listen to or accept connections to
Hadoop cluster nodes.
Hadoop MapReduce is a technique for working on large sets of data by
spreading multiple copies of the data across different machines that work
in parallel on small pieces. A Hadoop JobTracker keeps track of job runs,
schedules individual maps, monitors individual tasks, and works to
complete the entire batch. Typically, the MapReduce framework and the
HDFS run on the same set of nodes. To assist with your Java development
for an Atom, Integration provides an adapted sample of the MapReduce application
which contains a readme file, dependency analysis, sample shell scripts,
and run details.

To access the sample, right-click WordCount, save the WordCount.tar.gz file, and open it with a decompression utility such as WinZip.

Prerequisites
To implement a connection to your Hadoop Distributed File System
(HDFS), verify that you have the following:
 A local Atom.
 A stable and reliable connection to the Hadoop primary and cluster
nodes.
 A valid user ID that can read from and write to the HDFS.
 A native IO library installed. For more information, see the installation topics linked below.
Limitation
The connector does not support Kerberos authentication to get files from
or send files to data directories.

Hadoop operation
The Hadoop operation defines how to write files to and read files from a
Hadoop Distributed File System (HDFS).
The Hadoop operation supports the following actions:
 Create — Creates files.
 Delete — Deletes files.
 Execute — Used for all other procedural API calls. This action works only on local Atoms, not on an Atom Cloud.
 Get — Reads files.

Changelog
2024-11 (Version 24.11.0, Current)
 Vulnerability issue of type Deserialization of Untrusted Data was
resolved to make the Connector more secure.
2024-07 (Version 24.07.0)
 Necessary runtime packages are now packaged with the connector.
You can dynamically override and update the Hadoop File
Name and HDFS directory values set in the operation utilizing dynamic
operation properties.

Dynamic operation properties are document-level overrides you define


values for, and pass into the Connector step to modify and update the
default values set in the operation for the File Name and HDFS
directory fields. These values only affect the Hadoop connector's Create, Get, and Delete operations.

To use the Hadoop native IO libraries on Windows, configure a Windows Atom.
Procedure
1. Create a new folder under the Atom root folder and name it
userlib/hadoop/bin.
2. Extract the contents of the hadoop-bin-Win32.zip or hadoop-bin-
Win64.zip to the new folder.
3. Set the library path in bin/atom.vmoptions to: -
Dhadoop.home.dir=atom_installation_directory/userlib/hadoop -
Djava.library.path=atom_installation_directory/userlib/hadoop/bin.

To help verify and troubleshoot the Hadoop configuration and functionality


on the computer that runs the Atom, install and configure the standalone
Hadoop package. This step is optional.
Procedure
1. Download the Hadoop v.2.2.0 distribution from the developer
archive.
2. Extract the contents of the hadoop-2.2.0.tar.gz file to a local folder.
3. Configure the Atom to use the Hadoop native IO libraries. Follow the
configuration instructions links in Related Tasks.
4. Use one of the following commands to set the JAVA_HOME environment variable:
o Windows: set JAVA_HOME=C:/Boomi_AtomSphere/Atom/Atom_name/jre
o Linux: export JAVA_HOME=~/home/username/Boomi_AtomSphere/Atom/Atom_name/jre
5. Point a variable to the JRE. You can reuse a private JRE if you deploy
it with the Atom.
6. Run Hadoop sample commands.
By adding customized Hadoop configuration files from a network location,
you extend the functionality of the Hadoop connector and your integration
processes.
About this task
For local Atoms and Molecules, you can add customized Hadoop
configuration files as a configuration resource. For example, to support
scalability, high availability, and fault-tolerance, you have two HDFS
servers in your production environment (an active server and a standby
server). At any time, the standby server can become the active server. To
support this functionality, you add the dfs.nameservices=clustername
parameter to your hdfs-site.xml file and use hdfs-site.xml as a configuration resource.
Note: You can also add customized Hadoop configuration files from a JAR file. However, the configuration resource approach is the preferred method because you can easily make custom changes by simply changing the values in the files, and you do not have to redeploy the process and custom library. For more information, see the linked Adding customized Hadoop configuration files from a JAR file topic.

HubSpot CRM (Tech Preview) connector

The HubSpot CRM connector allows you to connect other systems or applications with HubSpot CRM's Customer Relationship Management (CRM) platform.
This connector enables seamless data synchronization between HubSpot CRM and other business systems, letting you create, retrieve, archive, update, and search the Contacts and Companies objects.

Prerequisites
To use the connector and implement a connection to HubSpot CRM from
Integration, have the following:
 Sign up: Sign up for a HubSpot CRM account.
 Login: Log in to the HubSpot CRM account using a username and password with sufficient permissions to Create, Update, Search, Retrieve, and Archive various types of records.
 Create the HubSpot CRM connector configuration.
o Update the redirect URL in HubSpot CRM if using OAuth 2.0, for example: https://platform.boomi.com/account/<your_account_ID>/oauth2/callback.

Supported objects
The following objects are supported by one or more of the HubSpot CRM
operation actions.
Object     Create  Update  Search  Retrieve  Archive
Contacts   Y       Y       Y       Y         Y
Company    Y       Y       Y       Y         Y

The HubSpot CRM connection represents a single HubSpot CRM account, including authentication credentials.
If you have multiple HubSpot CRM accounts, including sandbox accounts,
you will need a separate connection component for each. You can pair this
connection with different HubSpot CRM operations to perform unique
actions against a HubSpot CRM instance.
When configuring your processes, direct your HubSpot CRM connection to
a sandbox instance by using the base URL and access token URL provided
in the connection settings.

The HubSpot CRM connector uses a Private App Access token or OAuth 2.0
to authenticate.

Base URL - Enter the REST endpoint to connect to, for example https://api.hubapi.com for production.
Authentication Types - Private App Access Token and OAuth 2.0 are the
two authentication types available. Select either Private App Access token
or OAuth 2.0 to authenticate your HubSpot CRM account. The default
value is OAuth 2.0. To learn more about Private App, see the
topic Authentication methods on HubSpot.
note
User account creation for private apps is optional.
Private Access
If the user selects the Private App Access token, the user needs to add a
private access token key in the respective field to authenticate.

HubSpot CRM (Tech Preview) operation


The HubSpot CRM Connector defines how to interact with your HubSpot
CRM account and enables specific actions such
as Create, Retrieve, Update, Search and Archive. These actions are
performed against specific HubSpot CRM object types,
including Contacts and Companies.
 For each object required for your integration, create a separate
operation component.
 The set of supported operations may vary depending on the
selected object type.
 Supported Actions: Create, Retrieve, Update, Search and Archive.
The following fields are available for all operations during browse:
 API Version - The API version is available for selection in all the objects.
note
The connector currently supports only V3 of the HubSpot CRM API.
This setup ensures seamless integration with HubSpot CRM, offering
flexibility and precise control over HubSpot CRM data management.

Create Operation
The Create action is an outbound operation that uses JSON input to create
new HubSpot CRM objects. Upon successful execution, it returns a JSON
response containing details of the newly created object, including its
unique ID.
Key Features:
 Supports creating associations between objects, depending on the
object type, using the Boomi connector.
 The response for each execution contains the ID of the created
object in case of a successful API call or an error message in case of
API exceptions.
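Because the connector targets v3 of the HubSpot CRM API, a Create on the Contacts object corresponds to the request sketched below; the property values are illustrative, and the token is a Private App Access token:

POST https://api.hubapi.com/crm/v3/objects/contacts
Authorization: Bearer <private app access token>
Content-Type: application/json

{
  "properties": {
    "email": "jane.doe@example.com",
    "firstname": "Jane",
    "lastname": "Doe"
  }
}

A successful response echoes the new object's properties along with its unique ID.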
Archive Operation
The Archive action is an outbound operation used to archive the HubSpot
CRM objects to the recycle bin using the object's ID as input.
Key Details:
 The ID can be provided in two ways:
o As a connector parameter.
o Through an XML input profile.
 The ID field supports multiple entries, enabling the deletion of
multiple objects in a single XML request.
 The response is a JSON object indicating the success or failure of the
archive request for each ID provided.
Update Operation
The Update action is an outbound operation that takes a JSON document as input to modify information for an object in HubSpot CRM. The input document must include an ID field that specifies the record to be updated.
Users can clear the properties of any HubSpot CRM object by passing the property as empty.
A radio button field selects whether to use Email ID or ID. This property is part of the operation properties and cannot be overridden by Dynamic Operation Properties. The ID field contains the value of the Email ID or object ID, according to the radio button selection, and can be overridden using Dynamic Operation Properties.
Object-Specific Support:
 For Contacts, both ID and Email ID are supported.
 For other object types, only ID is supported.
Retrieve Operation
The Retrieve action is an inbound operation used to fetch a single object
from HubSpot CRM based on its ID or Email ID. The response contains the
details of the retrieved HubSpot CRM object.
A radio button field selects whether to use Email ID or ID. This property is part of the operation properties and cannot be overridden by Dynamic Operation Properties. The ID field contains the value of the Email ID or object ID, according to the radio button selection, and can be overridden using Dynamic Operation Properties. Retrieve operations return all the properties as well as the default associations of the object being retrieved.
Object-Specific Support:
 For Contacts, both ID and Email ID are supported.
 For other object types, only ID is supported.
If no matching record is found in HubSpot CRM, an empty document is
generated as the response, ensuring clear feedback in all scenarios.
Search Operation
The Search action is an outbound operation designed to search for specific HubSpot CRM objects based on complex criteria, enabling the retrieval of objects that meet specified parameters. This function supports targeted retrieval of HubSpot CRM objects, such as specific Contacts, Company, Deal, Ticket, and Goals records, that match defined filtering criteria.
The action uses cursor-based pagination to efficiently manage large result sets and supports filtering operators.
note
HubSpot CRM supports a maximum of 10,000 documents per response.
 Pagination approach via the usage of Maximum Documents field as
operation property.
o Use this to specify the Maximum number of documents to be
fetched.
o One record per response document.
o Value less than 1 will get all the records; by default, it is set to
-1.
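A minimal sketch of the underlying v3 search request for Contacts, showing one filter and the cursor (after) used by the pagination described above; property names and values are illustrative:

POST https://api.hubapi.com/crm/v3/objects/contacts/search
Authorization: Bearer <private app access token>
Content-Type: application/json

{
  "filterGroups": [
    { "filters": [ { "propertyName": "lastname", "operator": "EQ", "value": "Doe" } ] }
  ],
  "properties": ["email", "firstname", "lastname"],
  "limit": 100,
  "after": "0"
}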
HubSpot CRM (Tech Preview) changelog

2025-01 (Version 5, Current)


info
The existing Tech Preview connector will be retired soon and will no longer be available. This is the new HubSpot CRM (Tech Preview) connector.
HubSpot CRM (Tech Preview) connector enhanced with the
following features:
 The HubSpot CRM connector now allows users to connect using
a Private App Access Token.
 The connector now supports specifying the API version in the API
URL. Based on the selected version during browsing, users can
choose supported objects and call the corresponding APIs
seamlessly.
 The connector now fully supports the Companies and Contacts objects for Retrieve, Create, Update, Search, and Archive operations.
 This connector enables seamless integration with HubSpot CRM,
providing access to objects and records for comprehensive CRM
workflows.

Hypatos.ai - Partner connector


Partner connectors developed by our Tech Partners and published on the
Boomi platform provide seamless integration solutions for customers.
Customers will initiate support for these connectors.

Hypatos is a deep learning document automation hub for the enterprise.


Unlike rule-based solutions that perform data capturing, it collects,
understands, and enriches document data to automate higher value-
added processes for enterprise operations. Its deep learning foundation is
coupled with an intuitive, no-code interface (Hypatos Studio) to enable
human-in-the-loop participation when desired.

Using the Hypatos.ai – Partner connector, you can use an Integration process to upload or download files from Hypatos Studio and retrieve or
update processed documents. At the same time, you can create and use
an Integration process to ingest relevant master data (i.e., vendors, purchase orders, posting data) into Hypatos from your source system, to be
used during Master Data Matching and Attribute Prediction processes of
Hypatos Enrichment Service.
You can integrate on-premise and legacy solutions with Hypatos, and also
integrate with cloud-native SaaS solutions such as ERP (e.g., SAP, NetSuite) or Procure-to-Pay (e.g., Coupa) systems. Integration with the Hypatos.ai –
Partner connector consists of RESTful web service calls using a JSON
request and response made over an HTTPS connection.

Benefits
One of the most notable advantages of Hypatos is that its AI models can
be fully customized and trained automatically using your organization's
own transaction and document history. This ensures that the solution is
tailored to meet the specific needs of your teams, providing them with a
powerful tool that can streamline and automate various finance document
processing tasks.
Hypatos' AI models are capable of capturing all relevant data from
transactional documents, assigning general accounts, matching POs,
choosing workflow owners, and automatically transferring the processed
information to your accounting and workflow systems via the Hypatos.ai –
Partner Connector. By reducing the need for human intervention, Hypatos
can help your teams save valuable time and boost productivity,
particularly in high-volume business environments.

Prerequisites
To use the connector, implement a connection to your Hypatos Studio
account from Integration, and help ensure that the operations are
successful, have the following information:
 Your Hypatos API Client ID and Client Secret for OAuth 2.0
authorization
 The appropriate API access rights and permissions to your Hypatos
Studio projects
Tracked properties
This connector has no predefined tracked properties.

Hypatos.ai - Partner connection


The Hypatos.ai – Partner connection represents and contains all the
information that is needed to connect your account to the Boomi
integration.
If you have multiple Hypatos Studio accounts, use a separate connection
for each of them and configure their respective client credentials. You can
pair a single connection with different Hypatos.ai – Partner operations to
perform a unique action against a Hypatos Studio account.
Connection tab
The Hypatos.ai – Partner connection uses OAuth2.0 authorization protocol.
Before setting up the connection details, you should request your API
service account from the Hypatos service portal. Once created, you will
get a unique pair of client ID and client secret keys to be used for
authorization token retrieval.

Hypatos.ai - Partner operation


The Hypatos.ai - Partner operation defines how to interact with your
Hypatos Studio account or the Hypatos Enrichment Service database and
represents a specific action (GET, POST, DELETE) to perform against a
specific object type such as a Hypatos Studio file or processed document
or a master data record of Hypatos Enrichment Service database.
Create a separate action and operation component for each action/object
combination required for your integration.
Available Actions
The Hypatos.ai - Partner operation features various actions that can be
used to streamline and automate many operations on the Hypatos Studio
platform or Hypatos Enrichment Service database.
The following list describes the actions that are managed by the
connector:
 GET - This action is used to retrieve information regarding
processed documents or download an originally uploaded file from
Hypatos Studio.
 POST - This action is used to upload a file, trigger file processing in
Hypatos Studio, or update Studio documents metadata. It is also
used to create or update master data records in Hypatos Enrichment
Service database.
 DELETE - This action is used to remove master data records from
Hypatos Enrichment Service database.
Operations
Each operation implements a specific task such as reading, creating, or
updating an object of Hypatos Studio or Hypatos Enrichment Service.
The Hypatos.ai – Partner connector operations use JSON format and
support the following actions:
 INBOUND - GET
 OUTBOUND - POST, DELETE
Extension Objects
Depending on your selected action, the following object types (also called
resources) of the Boomi extension are available:
GET Action
 companiesGetById - Retrieve a specific Hypatos Studio company
and its details your account is currently assigned to.
 companiesList - Retrieve a list of Hypatos Studio companies your
account is currently assigned to.
 documentsGetById - Retrieve a single document from one of your
Hypatos Studio projects.
 documentsList - Retrieve a list of documents across your Hypatos
Studio projects.
 emailsContentGetByDocumentId - Retrieve the email body and
attachments that were sent to your Hypatos Studio account for a
specific document in your Hypatos Studio company.
 projectsGetById - Retrieve a specific project and its details for
your Hypatos Studio company.
 projectsList - Retrieve a list of projects and their details for your
Hypatos Studio company, based on provided search criteria.
 retrieveDocumentStates - Retrieve a list of states the given
document passed through during its processing in Hypatos Studio.
 retrieveFile - Download from Hypatos Studio the content of a file
with the given identifier.
 schemasGetByProjectId - Retrieve the configured schema and its
details for a particular project in Hypatos Studio.
POST Action
 documentUpdateExternalData - Update the external data of the
given document with information required from any of your external
systems integrating with Hypatos Studio.
 documentsUpdateExternalId - Update the given document in
Hypatos Studio with the external identifier provided from your
downstream system, once it is successfully transferred there.
 documentsUpdateTitle - Update the title of the given document in
Hypatos Studio.
 documentsUpdateTransfer - Update the status of the given
document upon successful or failed transfer from Hypatos Studio to
your downstream system.
 insertApprover - Create or update an Approver master data entity
in Hypatos Enrichment Service.
 insertCompany - Create or update a Company master data entity
in Hypatos Enrichment Service.
 insertCostCenter - Create or update a Cost Center master data
entity in Hypatos Enrichment Service.
 insertCustomer - Create or update a Customer master data entity
in Hypatos Enrichment Service.
 insertGLAccount - Create or update a GL Account master data
entity in Hypatos Enrichment Service.
 insertInvoice - Create or update an Invoice posting data entity in
Hypatos Enrichment Service.
 insertPurchaseOrder - Create or update a Purchase Order master
data entity in Hypatos Enrichment Service.
 insertSupplier - Create or update a Supplier master data entity in
Hypatos Enrichment Service.
 processFileIntoDocument - Initiate processing of an uploaded file
in Hypatos Studio.
 uploadFile - Upload a binary file to be processed in Hypatos Studio.
DELETE Action
 deleteApprover - Remove an Approver master data entity from
Hypatos Enrichment Service.
 deleteCompany - Remove a Company master data entity from
Hypatos Enrichment Service.
 deleteCostCenter - Remove a Cost Center master data entity from
Hypatos Enrichment Service.
 deleteCustomer - Remove a Customer master data entity from
Hypatos Enrichment Service.
 deleteGLAccount - Remove a GL Account master data entity from Hypatos Enrichment Service.
 deleteInvoice - Remove an Invoice master data entity from
Hypatos Enrichment Service.
 deletePurchaseOrder - Remove a Purchase Order master data
entity from Hypatos Enrichment Service.
 deleteSupplier - Remove a Supplier master data entity from Hypatos Enrichment Service.

IFS REST – Partner connector


The IFS REST – Partner connector simplifies communicating with IFS
Application 10 (Apps 10) or IFS Cloud instances, on-premise or in the
cloud, to create, read, update or delete data through IFS's REST APIs.
Using the connector, you can integrate Apps 10 or IFS Cloud with other
software applications and services wherever they reside. Utilizing the
connector, you can browse the IFS schema and set up filters, sorts, and
perform other common activities.
The IFS REST - Partner connector connects to the OData REST
APIs offered by IFS over an HTTP(S) connection. The connector abstracts
the HTTP settings (request headers, errors, etc.) to simplify usage.
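Because the endpoints are standard OData, the filters and sorts you configure in an operation translate into OData system query options on the request URL. A sketch of such a query, where the host, projection path, and ShopOrders entity set are hypothetical placeholders that depend on your IFS installation:

GET https://<ifs-host>/<projection-path>/ShopOrders?$filter=OrderNo eq '200136'&$orderby=OrderNo desc&$top=10
Authorization: Bearer <access token>
Accept: application/json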

Supported editions
The connector supports the following editions:
 IFS Applications 10 (update 8 or later)
 IFS Cloud
Prerequisites
To implement a connection to your IFS account from Integration and use
the connector operations, do the following:
 Have the URL and port, if required, for the IFS API endpoint, and a
valid username and password.
 Have a local Atom or an Atom in an Atom Cloud.
 Have knowledge of the hierarchical and data dependency structure
of the IFS API model.
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:

 Custom Header Support
o Use Dynamic Document Properties prefixed with inheader_ to set headers in OData requests (e.g., inheader_Content-Type).
o Supports Content-Type and Accept headers.
 Bound Entity Set Name
o Actions and Functions may require specifying the related entity set (e.g., releasing a ShopOrder).
o Use the Bound Entity Set Name property to define this.
 Bound Entity Set Parameters
o Define parameters for bound entity sets as a comma-separated string (e.g., OrderNo='200136',ReleaseNo='*',SequenceNo='*').
o Required for Update operations when using Extended Parameters.
 ETag
o Used to track and verify the version of a resource in OData.
 Extended Trailing Parameter
o Needed for Update operations involving document uploads to IFS.
o Specifies the field name within IFS that stores the document (e.g., FileData).
 Supplemental Entity Sets & Variables
o For complex multi-level operations, additional entity sets and variables can be appended as a single text string.

IFS FSM REST connector

The IFS FSM REST connector simplifies communicating with an IFS Field
Service Management (FSM) instance on-premise or in the cloud to create,
read, update, or delete data through FSM's REST APIs. Using the
connector, you can integrate FSM with other software applications and
services wherever they reside. The connector allows you to browse the
FSM schema, and guides you in setting up filters, sorts, and other
common activities.
The IFS FSM REST connector connects the OData REST APIs offered by
FSM over an HTTPS connection. The connector abstracts the HTTP settings
(request headers, errors, etc.) to simplify usage.

Prerequisites
To implement a connection to your IFS FSM account from and use the
connector operations, do the following:
 Have a local Atom or an Atom in an Atom Cloud.
 Have the URL and port, if required, for the IFS FSM ODATA API
endpoint, and a valid username and password.
 Have knowledge of the hierarchical and data dependency structure
of the IFS FSM model.
Supported editions
The connector requires IFS FSM 6, update 7 or greater.
Tracked properties
This connector has no predefined tracked properties.

The IFS FSM REST connection represents and contains all the information
that is needed to connect and log into a single IFS FSM account associated
with your workspace. If you have multiple accounts, you need a separate
connection for each.

IFS FSM REST operation


The IFS FSM REST operation defines how to interact with your IFS FSM
account and represents a specific action to perform against a specific IFS
FSM object.
Create a separate operation component for each action/object
combination that your integration requires.
The IFS FSM REST operations use JSON format and support the following
actions:
 Action — provides the ability to invoke a selected FSM Perform
Message.
 Create — creates a new record in IFS FSM.
 Delete — deletes an existing record in IFS FSM.
 Get — retrieves a specific record from IFS FSM based on its primary
key.
 Query — retrieves one or more records from IFS FSM, optionally
based on a filter.
 Update — updates an existing record in IFS FSM.

Innotas – Partner connector


The Innotas – Partner connector enables you to easily integrate your
Innotas On Demand Project Management application data (such as
customers, resources and projects) with other systems, including CRM
applications, accounting packages and more.

Connector configuration
To configure a connector to communicate with Innotas, set up two
components:
 Innotas – Partner connection — The connection represents your
Innotas account.
 Innotas – Partner operation — The operation represents an action
against a specific record type.
You will create one connection component per Innotas account with one or
more operations, one for each type of interaction required by your
Integration scenario.
Prerequisites
To implement a connection to Innotas from Boomi Integration, you must:
 Have an Innotas account.
 Obtain an API-only user from your Innotas representative.
Tracked properties
This connector has no predefined tracked properties.

Innotas – Partner connection


The Innotas – Partner connection represents a single Innotas account
including login information.
If you use multiple accounts, you need a separate connection for each.
You can pair the connection with different Innotas operations to perform a
unique action against an Innotas account.
Connection tab
WSDL URL: URL to the Innotas web services. For production, this should
always be: https://api1.innotas.com/services/MainService?wsdl
User: The Innotas API user name, which you obtain from your Innotas
representative. This is not your regular Innotas user name that you use to
sign in to the front end.
Password: Password for the user.

Innotas – Partner operation


The Innotas – Partner operation defines how to interact with your Service
Desk data and represents a specific action (Create, Delete, Get, and so
on) to perform against a specific Innotas record type or object (such as
Project, Account, Resource).
The Innotas – Partner operation supports the following actions:
 Create — Creates multiple records.
 Delete — Deletes multiple records.
 Get — Retrieves a single record by its ID; you can enter only one ID.
 Query — You can perform a query to search for data; you can use
filters (search parameters).
 Update — Updates multiple records.
 Upsert — Updates multiple records if they exist. If there are no
records, it creates them.
 Execute — Used for all other "procedural" API calls.
Not all actions are available for every object. Create a separate operation
for each action and object combination required for your integration. The
operation browses your object definitions in real-time to get any defined
custom fields.
Object Name: Object ID number
Account: 12
Allocated Role: 54
Issue: 6
Program: 3
Project Role: 47
Project: 4
Resource: 11
Task Role: 52
Task Schedule: 79
Task: 5
Team Member: 53
User: 23
Query
When using the Query action you can search for the same objects
available with the Create action, or you can use filters (search
parameters) that give you access to more objects.
After selecting the Query connector action and using the Import Wizard,
you can add query filters.
To enable filters, select Find entity by filter. Only fields included in the
query are listed when it generates the profile. The following filtering restrictions apply:
 Only supports the EQUALS (=) operator.
 Only supports the AND operand.
 Does not support nested conditions.
With filters enabled, you can filter the following objects:
Object Name: Object ID number
HR Issue: 97
Business Unit: 2
Contact: 19
Expense Entry: 71
Expense Manager: 22
Fixed Bid Invoice: 68
Fixed Bid Invoice Item: 69
Issue: 6
Org Role Demand: 138
Organization: 38
Portfolio: 114
Program: 3
Project: 4
Project Role: 47
Project Role Demand: 136
Project Role Demand by Portfolio: 139
Request: 96
Resource: 11
Resource Allocation by Project Role: 135
Resource Capacity: 137
Resource Schedule by Task Role: 134
Resource Time Summary: 148
Scoring Profile: 152
Summary Task: 13
Task: 5
Task Role: 52
Task Schedule: 79
Timesheet: 20
Timesheet Entry: 66
User: 23
On the Fields tab, you can clear the fields you do not want returned in the response. All fields are selected by default. To quickly select or clear all the fields for a given object, click the check box next to the object's name at the top of the field list.
Innotas supports regular fields and user-defined fields. User-defined fields have a <method> that always begins with UDF and a <methodId> consisting of a minimum of six digits.
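For example, a user-defined field might appear with <method>UDFCostCenter</method> and <methodId>100234</methodId>; both the name and the ID here are hypothetical, but they follow the UDF prefix and minimum six-digit rules above.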
Execute
The following objects are available only when you use the Execute action:
Object Name: Object ID number
Skill Set: 37
Timesheet: 20
The following categories of objects are available when you use the
Execute action:
Object Category - Objects
Request - createOrReplaceRequest, getRequestFields, getRequestTypes, submitRequest, submitRequests
Resource - allocateResourceTime, createOrReplaceResourceProfileSetting, getResourceProfileSetting
Timesheet - approveTimesheet, createOrReplaceTimesheetEntries, createOrReplaceTimesheets, deleteTimesheetEntries, getTimesheet, getTimesheetForUser, getTimesheetForUsers, getTimesheets, getTimesheetVariables, submitTimesheet, submitTimesheets, updateTimesheetEntries
Portfolio - createOrReplacePortfolio, getPortfolioFields, getPortfolioTypes
Skill Categories - getSkillCategories
Miscellaneous - getDeleteHistory, getUpdateHistory, getListValues
Options tab
Click Import Operation, then use the Import wizard to select the object
to integrate. The following table describes the default field definitions:
Object - Displays the object type that you selected in the Import
Operation wizard.
Tracking Direction - Select the document tracking direction for the
operation, either Input Documents or Output Documents. This setting
enables you to choose which document appears in Process
Reporting. Start steps always track output documents regardless of your
selection.
Intacct connector
The Intacct connector enables you to move data into and out of Intacct's
On-Demand Financial Application Suite.
As an Intacct user, you can use this connector to integrate any on-premise
applications and legacy systems with Intacct. You can also use this
connector to integrate Intacct with other On Demand or SaaS-based
applications such as Salesforce.
The Intacct connector is different from other connectors because it does
not connect to the application and browse the interfaces in real-time.
However, Integration provides the most up-to-date interface definitions for
you to browse and choose from to complete your necessary integrations.
If you customized any Intacct modules with additional fields, you might
need to manually add these fields to the standard interface using
Integration.
Integrating with Intacct consists of an XML request and response setup
over an HTTPS connection. This connection's technical details, such as
request headers, error handling, HTTPS connection, and so on, are all
abstracted to simplify the implementation.
Prerequisites
To implement a connection to your Intacct account from Integration,
complete the following:
 Confirm that your Intacct account is subscribed to the XML gateway
and has privileges to connect to the Intacct API. Your Intacct
representative can provide a Sender ID and password specific to the
integration API when you obtain a Sage Intacct Web Services
developer license.
 Know which Intacct URL to connect to. The default
is https://www.intacct.com/ia/xml/xmlgw.phtml, but confirm this for
your specific Intacct implementation.
 Understand from a business process perspective which interface(s)
to integrate. For example, you may want to upload sales orders into
Intacct from a flat file.
 Have access to the Intacct Web Services Developer Documentation
(see the linked topic) for interface reference and details of each
individual field within the interface.
Supported versions
The connector supports the Intacct 2.1 and 3.0 request and response
DTDs.
Changes in Intacct 3.0
The Delete action, which is new in 3.0, is handled as an Update in 2.1. The list of objects supported by version 3.0 of the operation differs from that of 2.1.
Intacct 3.0 profiles directly reference objects, such as:
 Intacct allocation CREATE Request
 Intacct deposit CREATE Request
 Intacct deposit UPDATE Request
 Intacct allocation DELETE Request
 Intacct allocation GET Response
 Intacct allocation QUERY Response
Tracked properties
This connector has no predefined tracked properties.
Intacct connection
The Intacct connection represents and contains all of the information to
connect and sign in to Intacct's On-Demand Financial Application Suite.
You need a separate connection for each Company ID.
Intacct operation
The Intacct operation defines how to interact with your Intacct company
and represents a specific action performed against a specific Intacct
record type or object (GL Account, Bill, GL Entry, Customer, and so on).
Not all actions are available for every object. Create a separate operation
component for each action and object combination required for your
integration.
The Intacct operation supports the following actions:
 Inbound: Get, Query, Read Report
 Outbound: Create, Delete, Update
Inverbis Analytics - Partner connector
The Inverbis Analytics — Partner connector enables users to upload their
process datasets from the Boomi platform into the Inverbis Analytics
platform. This integration enables users to explore and analyze their
processes in depth, uncovering detailed insights into how these processes
truly perform. By leveraging this connector, organizations can gain
valuable visibility into process behaviors, identify inefficiencies, and
discover opportunities for automation, driving continuous improvement
and more informed decision-making.
Benefits
The Inverbis solution extracts the digital footprint of a process from corporate systems to show how the process executes in real life.
 Visualize the different paths (variants) that have been followed
every time a process has been executed.
 Get the data you need to explore the behavior of the different paths, individually or in an aggregated manner.
 Filter by specific situations to get to their root causes.
 Compare executions and performance between regions, departments, sites, teams, etc.
 Define process KPIs and monitor them in real-time, complementing your traditional dashboards.
 Get insights into future behavior and performance.
Connector configuration
To configure a connector to communicate with Inverbis Analytics platform,
set up these two components:
 Inverbis Analytics — Partner connection. This connector uses the
OAuth 2.0 protocol for authentication and authorization.
 Inverbis Analytics — Partner operation.
This approach creates reusable components which contain connection
settings and operation settings. After building your connection and
operation, set up your Inverbis Analytics — Partner connector within a
process. When the process is defined properly, AtomSphere can map to
and from virtually any system using the Inverbis Analytics — Partner
connector to retrieve data from or send data to the Inverbis Analytics
platform.
Prerequisites
To use the Inverbis Analytics — Partner Connector, you need to have a
user account at the Inverbis Analytics platform and know the account
Email and Password.
Tracked properties
This connector has no predefined tracked properties.
Inverbis Analytics - Partner connection
The Inverbis Analytics — Partner connection represents a single Inverbis
Analytics account including login credentials. If you have multiple
accounts, use a separate connection for each account.
Inverbis Analytics - Partner operation
The Inverbis Analytics — Partner operation defines how to interact with
your Inverbis Analytics account. The operation represents a specific action
(Get, Register) to be performed. Create a separate operation component
for each action/object combination that your integration requires.
Get
Get is an inbound action to obtain information from the specified element.
The following operations are available:
 getUserOrganizations - Retrieves the user's organizations. You must specify the following in the options tab:
o Path tenant: The tenant in which the operation is executed.
o Query limit (optional): The page size for pagination.
o Query onlyActive (optional): Whether to include deleted organizations.
o Query search (optional): Filters by organization name.
o Query shared (optional): Whether to search for owned or shared organizations.
o Query start (optional): The page index for pagination.
 getProjectsByOrganization - Retrieves the projects available for the indicated organization. You must specify the following in the options tab:
o Path tenant: The tenant in which the operation is executed.
o Query organizationUid: The organization to check.
o Query limit (optional): The page size for pagination.
o Query includeArchived (optional): Whether to include archived projects.
o Query search (optional): Filters by project name.
o Query start (optional): The page index for pagination.
 getDatasetByProject - Retrieves the datasets available for the indicated project. You must specify the following in the options tab:
o Path tenant: The tenant in which the operation is executed.
o Query projectUid: The project to check.
o Query limit (optional): The page size for pagination.
o Query includeArchived (optional): Whether to include archived datasets.
o Query search (optional): Filters by dataset name.
o Query start (optional): The page index for pagination.
Register
Register is an outbound action to carry out the different steps required to
upload a dataset to the platform. The following operations are available:
 registerNewDataset: Registers a new dataset (see the sample payloads after this list). It receives as payload a JSON message with the following fields:
o description: A description for the dataset.
o name: The name given to the new dataset.
o projectUid: The ID of the project where the dataset will be stored.
In addition, you must specify the following in the options tab:
o Path tenant: The tenant in which the operation is executed.
 generateUploadToken: Generates an upload token that allows the dataset content to be updated. You must specify the following in the options tab:
o Path tenant: The tenant in which the operation is executed.
o Path datasetUid: The dataset for which to generate the token.
 uploadDataset: Uploads the dataset content to the Inverbis Analytics platform. You must specify the following in the options tab:
o Path tenant: The tenant in which the operation is executed.
o Path datasetUid: The dataset whose content is uploaded.
o Path token: The token that allows the dataset upload.
o Header fileName: The file name of the dataset file to upload. You must have a dataset prepared as a file with this file name to pass to the connector in this operation.
 DatasetConfigurationRegister: Adds a configuration to the dataset. It receives as payload a JSON message with the following fields:
o name: A name for the configuration.
o activityColumn: The column containing the activities.
o startTimeColumn: The column containing the start time.
o traceColumn: The column containing the trace identification.
o endTimeColumn: The column containing the end time.
o Columns: An array of dictionaries indicating the configuration of each column. Each dictionary has:
 name: The name of the column.
 type: The type of data.
 format: The format of the data.
 relevant: Indicates whether it is a relevant attribute.
You must specify the following in the options tab:
o Path datasetUid: The ID of the dataset.
o Path tenant: The tenant in which the operation is executed.
 postFinishUpload: Indicates that the dataset upload process is finished. You must specify the following in the options tab:
o Path datasetUid: The ID of the dataset.
o Path tenant: The tenant in which the operation is executed.
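As an illustration, a minimal registerNewDataset payload using the fields above might look like the following; the values shown are hypothetical:

{
  "name": "purchase-to-pay",
  "description": "Purchase-to-pay process extracted from the ERP",
  "projectUid": "a1b2c3d4"
}

A DatasetConfigurationRegister payload follows the same pattern, with the column names below standing in for whatever your dataset actually contains:

{
  "name": "default configuration",
  "traceColumn": "case_id",
  "activityColumn": "activity",
  "startTimeColumn": "start_time",
  "endTimeColumn": "end_time",
  "Columns": [
    { "name": "case_id", "type": "string", "format": "", "relevant": true },
    { "name": "start_time", "type": "datetime", "format": "yyyy-MM-dd HH:mm:ss", "relevant": false }
  ]
}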
JIRA REST connector
You can use the JIRA REST connector to move data into and out of the JIRA
application and integrate any on-premise applications and legacy
systems.
For example, you can integrate JIRA with other On Demand or SaaS-based
applications, such as Intacct, Salesforce, and NetSuite. The JIRA connector
connects directly to the JIRA application and browses the interfaces in real-
time. Integrating with JIRA consists of RESTful web service calls using JSON requests and responses over an HTTP or HTTPS connection. The connector abstracts the technical details of this connection, such as request headers, error handling, and HTTPS connection, to simplify the implementation.
Note: This connector replaces the older JIRA connector.
Connector configuration
To configure a connector to communicate with JIRA REST, set up two
components:
 JIRA REST connection
 JIRA REST operation
This approach creates reusable components that contain connection
settings (such as URL, User, Password), and operation settings (such as
Object, Action, data definitions). After building your connection and
operation, set up your connector within a process. When you define the
JIRA connector within a process, Integration can map to and from virtually any system
using the connector to retrieve data from or send data to the JIRA
application.
Supported editions
This connector supports JIRA 5.0 and later versions.
Tracked properties
This connector has no predefined tracked properties.
JIRA REST connection
The JIRA REST connection represents a single JIRA service, including sign-
in credentials.
If you have multiple organizations or sandbox instances, use a separate
connection for each. You can pair a single connection with different JIRA
REST operations to perform a unique action against a JIRA service.
If you are using an SSL connection:
 You might need to manually import a certificate.
 You must use a local Atom. Using an Atom Cloud creates an SSL
handshake failure because the connector does not select certificate
components from the Boomi Enterprise Platform.
JIRA REST operation
The JIRA REST operation defines how to interact with your JIRA service.
An operation represents a specific action, such as Create or Query,
performed against a JIRA object type.
You must create a separate operation component for each action and
object combination required for your integration.
The JIRA REST operation supports the following actions:
 Query (inbound)
 Create (outbound)
 Delete (outbound)
 Update (outbound)
You can avoid some operation errors by being aware of the following:
 [500] Internal Server Error — occurs when updating an Issue
Transition with an invalid transition ID.
 Attachment content is in Base64 format — the connector expects attachments to be in Base64 format. When inserting an attachment into an issue, supply the content in the attachBodyBase64 field (see the sketch after this list).
 Simple expression support — Project, Issue Attachment, Issue Link,
and Issue Transition support only simple expression queries.
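As a rough sketch, an input document for inserting an attachment carries the encoded file content in the attachBodyBase64 field; the value below is a truncated, hypothetical Base64 string, and any other fields should be taken from the request profile generated by the Import wizard:

{
  "attachBodyBase64": "iVBORw0KGgoAAAANSUhEUgAA..."
}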
Options tab
Click Import Operation, then use the Import wizard to select the object
you want to integrate. The following table describes the field definitions
for each action type:
Object - Displays the object type that you selected in the Import
Operation wizard.
Tracking Direction - Select the document tracking direction for the
operation, either Input Documents or Output Documents. This setting
enables you to choose which document appears in Process
Reporting. Start steps always track output documents regardless of your
selection.
Kanverse IDP - Partner connector
Built upon a SaaS-based Hyperautomation platform, Kanverse Intelligent
Document Processing (IDP) brings together cognitive computing and
automation in a standalone product to deliver unprecedented benefits to
customers.

It helps Boomi developers to take full advantage of the powerful Kanverse multi-stage AI engine for automated classification, cognitive data extraction, and a business rule framework for validation.
Benefits
Kanverse IDP brings a zero-touch experience to enterprise staff and has pioneered its approach to enhancing productivity for teams saddled with manual document processing. Kanverse digitizes document processing workflows for its customers across multiple industries, such as BFSI, healthcare, retail, e-commerce, logistics, manufacturing, and high tech.
Kanverse offers a suite of products that combines AI and OCR
technologies with automation and enables multi-channel ingestion,
automated classification, extraction with high accuracy, AI-based
analytics, validation against business rules, and automated filing of
documents.
Kanverse IDP reduces document processing cycle time from days and hours to seconds, optimizes operations cost by up to 80%, prevents fraud, and eliminates manual entry to deliver a zero-touch experience with up to 99.5% processing accuracy.
Connector configuration
To configure the connector to communicate with Kanverse IDP, set up two
components:
 Kanverse - Partner connection
 Kanverse - Partner operation
This design provides reusable components that contain connection and
operation settings. After building your connection and operation, set up
your connector within a process. When the process is defined properly,
Boomi Integration can map to and from virtually any system using the
Kanverse IDP - Partner connector to upload, store, retrieve, and delete
data from Kanverse IDP to manage your business workflows.
Prerequisites
To use the connector, subscribe to the Kanverse IDP environment to obtain a subscriber ID and authentication details, and ensure that the permissions are set correctly.
Tracked properties
This connector has no predefined tracked properties.
Kanverse IDP - Partner connection
The Kanverse IDP – Partner connection represents and contains all of the
information that is needed to connect your account to the Boomi
Enterprise Platform. If you have multiple Kanverse accounts, use a
separate connection for each and configure the token accordingly. You can
pair a single connection with different Kanverse IDP operations to perform
a unique action against a Kanverse account.
Kanverse IDP - Partner operation
The Kanverse IDP – Partner operation defines how to interact with your
Kanverse IDP account and represents a specific action (Send, Execute
(Upload)) to perform against a specific object type such as Mark Publish,
Send Error, or Execute (Upload).
Actions
Create a separate action and operation component for each action/object
combination required for your integration. The Kanverse IDP – Partner
operation features two different actions (also called resources) that can be
used to streamline and automate three different operations on the
Kanverse platform.
Available Actions
The following list introduces actions that are managed by the connector:
 Execute (Upload): This action is used to upload invoices,
insurance, PO, and other such documents to Kanverse.
 Send: This action is used to send an update to the appropriate
status of a document to Kanverse. The status can be either Error or
Publish.
Operations
Each operation implements a specific task such as creating or updating an
object of Kanverse.
The Send Error and Mark Publish operations of Kanverse IDP – Partner
connector use JSON format, while Execute (Upload) supports unstructured
formats, for example, PDF, images, Microsoft Excel, etc. These operations
support the following actions:
Execute (Upload) action
Options tab
Connector action: The Connector action corresponds to the action you want the operation to take in Kanverse. For this action, it is set to Execute (Upload).
Object: The object can be imported using the Import button. For the Execute (Upload) option, the object can only be set to Execute (Upload).
Request profile: The profile is set to Unstructured because this action accepts different types of profiles, such as Microsoft Excel, PDF, or images.
Response profile: This is a JSON profile returned by Kanverse.
Tracking direction: Select the document tracking direction for the operation, either Input Documents or Output Documents. This setting enables you to choose which Boomi documents appear in Process Reporting. Start steps always track output documents regardless of your selection.
Error behavior: Select Error Behavior if you want to be able to act on failed operations in your process instead of only reporting them on the Manage page.
Document Type: Set this value to whatever type of document you are uploading, for example, invoice or PO.
Folder: The folder on Kanverse into which you want to upload the file.
Enable Classification: Enter the value true to enable classification; otherwise, enter false.
File Name: The name of the file you are uploading.
Content Type: This determines what type of data the connector accepts. It is set to multipart/form-data for this action.
Dynamic Operation Properties
Some connectors enable you to define dynamic operation properties,
which are document-level overrides that you define values for, and pass
them into the Connector step to override the operation component's
default value. The Connector step dialog's Dynamic Operation Properties
tab allows you to provide static values or document-level values
(Document Property) for the parameters. You can add, edit, and delete the
parameters.
For the Kanverse IDP - Partner connection you can set dynamic operation
properties for the following properties:
 Document Type
 Folder
 Enable Classification
 File Name
For more information on setting dynamic operation properties, see the
following topic: Connector step dialog, Dynamic Operation Properties tab
Send action
Options tab
Connector action: The Connector action corresponds to the action you want the operation to take in Kanverse. For this action, it is set to Send.
Object: For the Send action, there are two different object types available: Send Error and Mark Publish. Use Send Error if the document fails any validation; you can then update it as errored in Kanverse. Use Mark Publish when the document uploads successfully and passes all validations; you can then mark it as published in Kanverse.
Request profile: The request profile is a JSON profile that the connector accepts. The profile varies according to the imported object.
Response profile: This is the profile returned by Kanverse.
Tracking direction: Select the document tracking direction for the operation, either Input Documents or Output Documents. This setting enables you to choose which Boomi documents appear in Process Reporting. Start steps always track output documents regardless of your selection.
Error behavior: Select Error Behavior if you want to be able to act on failed operations in your process instead of only reporting them on the Manage page.
Content Type: This determines what type of data the connector accepts. It is set to application/json for this action.
LaminarHub - Partner connector
LaminarHub AI is a one-stop shop AI-driven solution for your business and
IT operations teams to transform your integration transaction data into
operational insights. Find and resubmit transactions, monitor integration
runtimes through dashboards, as well as configure alerts and notifications
based on business rules. LaminarHub AI Search assists your users in searching with natural language to get trend analysis on any type of transaction. The LaminarHub - Partner connector enables users to connect
their Boomi integrations to the Laminar Hub with ease. You can send any
Boomi Integration data to the LaminarHub by using the connector in your
integrations.
Connector configuration
To configure a connector to communicate with LaminarHub, set up these
two components:
 LaminarHub - Partner connection
 LaminarHub - Partner operation
This design provides reusable components that contain connection and
operation settings. After building your connection and operation, set up
your LaminarHub - Partner connector within a process. When the process
is defined properly, Boomi Integration can get data from or send data to the LaminarHub AI from any other application.
Prerequisites
To use the connector, you will need:
 an active LaminarHub account
 LaminarHub hostname
 LaminarHub API key and Environment Key
Tracked properties
This connector has no predefined tracked properties.
LaminarHub - Partner connection
Connection tab
The LaminarHub - Partner connection has the following fields that are
required for configuration:
LaminarHub Host - The hostname for the instance of LaminarHub you
wish to connect to.
API Key - This is the API key generated for the LaminarHub environment
you are connecting to.
Environment Id - This is the environment Id of the LaminarHub
environment you are connecting to. Each customer can have multiple
environments. For example, Prod, Dev, Test, etc.
LaminarHub - Partner operation
The LaminarHub - Partner operations use JSON format and support the
following actions:
 Inbound: Get
 Outbound: Insert
Insert
Insert is an outbound action to send a transaction to the LaminarHub. A
JSON profile is created by using the Import Operation feature. If the Insert
operation is successful, the transaction is uploaded to LaminarHub and
the connector returns a JSON document with information about the status
of the insert.
Get
Get is an inbound action to download a transaction from the LaminarHub.
A JSON profile is created by using the Import Operation feature.
Filter Status - The Get operation does not accept any input parameters.
Instead, select the Filter Status setting that matches your use case:
 To Be Reprocessed – This selects transactions a user has marked
as needing to be reprocessed.
 To Be Resent – This selects transactions a user has marked as
needing to be resent.
Laserfiche - Partner connector
Leverage the Laserfiche - Partner connector to build integrations with
other enterprise platforms and implement custom end-to-end solutions.
Currently, the Laserfiche - Partner connector supports operations that let
you interact with your Laserfiche repository, which promotes easy access
to information inside and outside your business, all while maintaining
content compliance at all stages, from initial capture to final archival.
Benefits
The Laserfiche integration with Boomi is helpful for:
 Viewing a document or document metadata from Laserfiche and
viewing or downloading it in another system, like a CRM, ERP, or any
system with relevant documents stored within Laserfiche.
 Saving a document to Laserfiche or updating Laserfiche metadata
from another system. This action can be monitored to start a
Laserfiche business process.
 Gathering healthcare data. You can convert a Laserfiche document's
metadata into the HL7 data format before sending it to a healthcare
system requiring that standard.
 AP Processing. The Laserfiche solutions for automating invoice
processing can update necessary ERP and financial systems.
Prerequisites
To use the Laserfiche - Partner connector, implement a connection from
Laserfiche to Integration. To ensure the connection is successful, you must
have the following:
For Laserfiche Cloud
 A Laserfiche account with security permissions to create Service
Principal Users and access the developer console
 A Laserfiche repository
 A Laserfiche Service Principal Key
 A Laserfiche Base-64 Encoded Access Key
For Self-Hosted Laserfiche Installations
 A Laserfiche repository
 The base URL for your Laserfiche API, for
example, https://ServerName/LFRepositoryAPI
 A Laserfiche repository’s ID, that is, the name of your Laserfiche
repository.
 The username of an account that has the appropriate access rights
to the Laserfiche repository to make changes on behalf of Boomi.
 The password for the above account.
Supported versions
The Laserfiche - Partner connector supports all the actions included in
Laserfiche Repository API V1 build version
3a7a49ffc2db49aecc10091994a0292be398cd51_.20221214.3.
The connector works with both Laserfiche Cloud and self-hosted
installations.
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
 event - Used for the Get Audit Reasons operation. This property indicates the event that caused the audit reason: deleteEntry or exportDocument.
 nextLink - Used when a collection is returned. This property
contains a URL that allows for the retrieval of the next subset of the
requested collection.
 count - Used when a collection is returned. This property contains
the count of a collection of entities or a collection of entity
references.
Laserfiche - Partner connection
Connection tab
For Laserfiche Cloud
The Laserfiche Cloud connection uses a Service Principal Key and a Base-
64 Encoded Access Key.
Service Principal Key : Create the Service Principal Key by creating a
Boomi-specific Service Principal user in your Laserfiche Account
Administration. Grant this user the appropriate access rights in Laserfiche
to act on behalf of the Laserfiche connector. Open the Boomi Service
Principal user you created, and click Create Service Principal Keys.
Base-64 Encoded Access Key : Create the Base-64 Encoded Access Key
in the Laserfiche Developer Console. First, create a Boomi app by creating
a service application and selecting the Service Principal user as the
Service Account. Then, in the Boomi app you created, navigate to the
Authentication tab, and click Create key next to Access Keys. Finally,
download the key as a base-64 string and copy the key from there.
For Laserfiche Self-Hosted Installations
Base URL : The base URL for your Laserfiche API, for
example, https://ServerName/LFRepositoryAPI
Repository ID : The name of your Laserfiche repository
Username : The username of an account that has the appropriate access
rights to the Laserfiche repository to make changes on behalf of Boomi
Password : The password for the above account
Laserfiche - Partner operation
Actions
Not all actions are available for every object. Create a separate operation
component for each action/object combination that is required for your
integration.
The Laserfiche - Partner connector features various actions that can be
used to automate operations in a Laserfiche repository.
Create
The Laserfiche - Partner connector supports a variety of Create actions for
creating entries.
 Copy Entry Async
 Create or Copy Entry
 Import Document
Get/Read
The Laserfiche - Partner connector supports a variety of Get actions for
finding entries, entry metadata, and trustee information.
 Create Search Operation
 Create Simple Search Operation
 Get Audit Reasons
 Get Document Content Type
 Get Dynamic Field Values
 Get Entry
 Get Entry Listing
 Get Entry by Path
 Get Field Definition by ID
 Get Field Definitions
 Get Field Values
 Get Link Definition by ID
 Get Link Definitions
 Get Link Values from Entry
 Get Operation Status and Progress
 Get Repository List
Important
If multiple documents are provided as inputs to the Get Repository List
operation, it will output multiple documents, each containing a copy of the
repository list. As a best practice, ensure no documents are provided as
inputs to the Get Repository List operation.
 Get Search Context Hits
 Get Search Results
 Get Search Status
 Get Tag Definition by ID
 Get Tag Definitions
 Get Tags Assigned to Entry
 Get Template Definition by ID
 Get Template Definitions
 Get Template Field Definitions
 Get Template Field Definitions by Template Name
 Get Trustee Attribute Key Value Pairs
 Get Trustee Attribute Value by Key
Update
The Laserfiche - Partner connector supports Update actions for assigning
metadata to entries, moving entries, and renaming entries.
 Assign Entry Links
 Assign Field Values
 Assign Tags
 Move or Rename Entry
 Write Template Value to Entry
Delete
The Laserfiche - Partner connector supports Delete actions for deleting
entries and metadata.
 Delete Assigned Template
 Delete Document
 Delete Entry Info
 Delete Pages
Export
The Laserfiche - Partner connector supports Export actions for exporting
entries out of your Laserfiche repository.
 Export Document
 Export Document with Audit Reason
Cancel
The Laserfiche - Partner connector supports Cancel operations.
 Cancel Operation
 Cancel or Close a Search
Live Optics connector
The Live Optics connector enables you to connect to the Live Optics application and get the necessary data in JSON format using PAPIs and a Project ID.
Connector configuration
To configure a connector to communicate with Live Optics, set up two
components:
 Live Optics connection
 Live Optics operation
This design provides reusable components that contain connection
settings such as username and password. After building your
connection and connector operation, set up your connector within a
process. When the Live Optics connector is defined properly within your
process, Integration can map to and from virtually any system using the
connector to retrieve data.
Prerequisites
The following prerequisites are necessary to implement a connection to
your account from Integration:
 Live Optics API URL
 To implement a connection to your Live Optics account, make sure an Atom is either deployed on your local machine or hosted with Boomi Integration, and have the following information:
o A valid set of credentials with the following three-string
components used in Session Login
 Session Login ID: An ASCII string of characters that
uniquely identifies the user.
 Session Login Secret: A base-64 encoded string that
the user uses to authenticate themselves.
 Shared Secret: A second base-64 encoded string that
the user uses to extract the session string from the
Session Login.
 The Project ID: This is the ID of the project for which
the project details are to be rendered.
 Configure Integration with valid user credentials and necessary
permissions to perform the tasks necessary for the integration (such
as being able to invoke RFCs and PAPIs, manipulate specific
business objects, etc.).
Tracked properties
This connector has no predefined tracked properties.
Live Optics connection
Connection tab
URL: The URL of the server to connect to. This can be an http or https
address but must be URL encoded. The URL field for the connector helps
you to connect to the Live Optics portal. The connection supports the GET
operation. Project details can be fetched with the given project ID as input.
Login ID: An ASCII string of characters that uniquely identifies the user.
Login Secret: A base-64 encoded string used to authenticate the user.
Shared Secret: A second base-64 encoded string that is used to extract
the session string from the Session Login method response.
Live Optics operation
The Live Optics operation defines how to interact with your Live Optics
account and represents a specific action to perform against a specific
record.
The Live Optics operation supports the following actions:
 Get — Used to set up a connector operation associated with PAPI.
Lumesse TalentLink – Partner connector
The Lumesse TalentLink – Partner connector supports importing profile
definitions. When you create a Lumesse TalentLink connection, enter
certain configuration settings provided by Lumesse professional services,
such as Company Name, User, Password and API Keys to the relevant
bundles that you use. The connector uses the Bundle:ServiceName and
host to build the WSDL URL (using the
[Host]/[BundleName]/SOAP/[ServiceName]?WSDL pattern) to fetch the
WSDL and present the available operations. You can then select the
operation to call, and the connector builds the request and response
profiles based on the XSDs provided by the WSDL.
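For example, the Candidate service in the HRIS bundle is fetched from https://api.lumessetalentlink.com/HRIS/ESB/Candidate?wsdl, the first WSDL in the Supported editions list below.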
Create one Lumesse TalentLink – Partner connection component to use
with all bundles, with one or more Lumesse TalentLink operations, one for each
type of interaction required by your integration scenario.
This design provides reusable components, which contain connection
settings (such as user name and password) and operation settings (such
as Object, Action, data definitions). After building your connection and
operation, set up your connector within a process. When you properly
define the Lumesse TalentLink within your process, can map to and from
virtually any system using the connector to retrieve data from Lumesse
TalentLink.
Supported editions
Boomi supports the following services (WSDLs). For more information,
contact your Lumesse account manager.
 https://api.lumessetalentlink.com/HRIS/ESB/Candidate?wsdl
 https://api.lumessetalentlink.com/HRIS/ESB/Position?wsdl
 https://api.lumessetalentlink.com/HRIS/ESB/Queue?wsdl
 https://api.lumessetalentlink.com/HRIS/ESB/Document?wsdl
 https://api.lumessetalentlink.com/HRIS/ESB/LOV?wsdl
 https://api.lumessetalentlink.com/HRIS/ESB/LovHierarchy?wsdl
 https://api.lumessetalentlink.com/HRIS/ESB/ConfigurableFields?wsdl
 https://api.lumessetalentlink.com/UserAdmin/ESB/User?wsdl
 https://api.lumessetalentlink.com/UserAdmin/ESB/Role?wsdl
 https://api.lumessetalentlink.com/UserAdmin/ESB/Organization?wsdl
Tracked properties
This connector has no predefined tracked properties.
Lumesse TalentLink – Partner connection
Connection tab
Lumesse Host - The host which is part of the URL. The default
is: https://api.lumessetalentlink.com. See the Lumesse TalentLink – Partner
connector topic for a list of supported WSDLs.
Company Name - The company name to use when connecting to the
Lumesse TalentLink service.
User - The user name to use when connecting to the Lumesse TalentLink
service.
Environment - The Back Office (BO) or Front Office (FO).
Password - The password for the user connecting to the Lumesse
TalentLink service.
HRIS Bundle API Key - The API key for the Human Resources Information
Systems (HRIS) bundle.
User Bundle API Key - The API key for the User bundle.
Lumesse TalentLink – Partner operation
The Lumesse TalentLink – Partner operation defines how to interact with the Lumesse TalentLink service.
The operation represents a specific action (such as Execute) to perform
against a specific Lumesse TalentLink Bundle and ServiceName (for
example, User:User, User:Organization, User:Role or HRIS:LovHierarchy)
and the Lumesse TalentLink method. The Lumesse TalentLink service must
be available through the SOAP API. See the Lumesse TalentLink Connector
topic for a list of supported WSDLs.
You must create a separate Operation component for each
Bundle:ServiceName and method required for your integration.
The Lumesse TalentLink – Partner operation supports only the Execute
action.
Magento connector
The Magento connector makes it easy to work with the Magento
eCommerce website software by simplifying the interaction and adding
convenience.
Connector configuration
To configure a connector to communicate with Magento, set up two
components:
 Magento connection
 Magento operation
This approach creates reusable connection settings components. After
building your connection and connector operation, set up your connector
within a process. When you configure your connector within a process, Integration can send data sets to the Magento application.
Prerequisites
To implement a connection to your Magento account, deploy an Atom on your local machine or use one hosted with Boomi, and have the following information:
 Your Magento user name and password.
 The store base URL.
Supported editions and versions
The Magento connector uses the Magento 1.x REST API to support
application versions v2.1.1 and higher. The Magento 1.x REST API uses
JSON format and manages profiles in JSON.
You can set up the Magento connector to move data into and out of any
Magento platform (Community Edition, Enterprise Edition, and Magento
Enterprise Cloud Edition) using Magento application versions v2.1.1 and
later from any on-premise application and legacy system including other
On Demand or SaaS-based applications.
Tracked properties
This connector has no tracked properties.
Magento connection
Connection tab
If you use multiple installations, you need a separate connection
component for each.
Token-based authentication
The connector uses token-based authentication by taking the Username, Password, and User Type provided in the connection, sending a request to the Magento server to retrieve a token, and using the token to authenticate each operation request in your Magento store.
note
Boomi does not support the Integration token type.
Store Base URL - Enter the base URL assigned to your storefront, for example http://www.yourdomain.com or https://www.yourdomain.com.
note
Based on your Magento instances, you may find that you need to
use https for one instance, and http for another.
Username - Enter the administrator or customer user name for authentication. The user name you provide is used with the password to request an access token from Magento to establish a connection with your Magento store.
Password - Enter the Magento password for authentication. It uses the
password with the user name to request an access token from Magento to
establish a connection with your Magento store.
User Type - Select either Administrator or Customer. It does not
support Guest.
Magento operation
The Magento operation supports the following actions:
 Inbound: Configure the Magento operation to retrieve object
records from a Magento account. It requires unique parameters at
runtime to request data. See the topic about parameter values to
understand how to statically or dynamically define these values in a
process for runtime.
o Query — queries records for Products, Customers, Orders,
and Invoices. The Query action enables you to look up object
records based on specific search criteria. The Query action
returns zero to many documents from a single request based
on one or more filters. After you select the Query connector
action and use the Import Wizard, you can select fields and
add query filters. Query actions have only a response profile.
o Get — uses the Magento record ID to get records for Products,
Customers, Orders, and Invoices.
 Outbound: Configure the Magento operation to send data to a
Magento account. For all outbound operations, send request JSON
data into the connector that corresponds with the defined JSON
profile. Usually, request documents are the destination output of a
map. However, you can also set parameters on the outbound
connector's Parameter tab within the process to supply static or
dynamic request values.
o Create — creates records for Products, Customers, Orders,
and Invoices.
o Delete — deletes records for Products and Customers.
o Update — updates records for Products, Customers, Orders,
and Invoices.
o Upsert — upserts records for Products and Customers. To perform this action on Customer object types, complete these fields: firstname, lastname, email, website_id (see the sketch below).
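A minimal sketch of those required Customer fields is shown below; the values are hypothetical, and the exact envelope around them should be taken from the JSON profile generated by the Import Wizard:

{
  "firstname": "Jane",
  "lastname": "Doe",
  "email": "jane.doe@example.com",
  "website_id": 1
}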
Marketo — Partner connector
Use the Marketo — Partner connector to integrate any on-premise
applications and legacy systems. You can also use this connector to
integrate Marketo with other On Demand, SaaS or cloud-based
applications such as Intacct, Salesforce, NetSuite, etc. The Marketo —
Partner connects directly to the Marketo application and browses the
interfaces in real-time. Integrating with Marketo consists of SOAP web
service calls using XML request/response set up over an HTTPS
connection. The technical details of this connection such as request
headers, error handling, HTTPS connection, etc. are abstracted to simplify
the implementation.
Connector configuration
To configure a connector to communicate with Marketo, set up two
components:
 Marketo — Partner connection
 Marketo — Partner operation
This design provides reusable components, which contain connection
settings (URL, User ID, Encryption Key) and operation settings (Object,
Action, data definitions). When your connection and operation are built,
set up your connector within a process. After properly defining the
Marketo — Partner connector within your process, Integration can map to
and from virtually any system using the connector to retrieve data from or
send data to Marketo.
Prerequisites
To implement a connection to your Marketo account from Integration, you
must be able to access the Marketo Admin section to configure/review the
SOAP API setup. For more information, see the Marketo — Partner
connection topic.
Tracked properties
This connector has no predefined tracked properties.
Marketo — Partner connection
Connection tab
You must sign into your Marketo account at https://www.marketo.com, and
access the SOAP API Setup section on the Admin > Integration >
Web Services menu to review your connection settings. Ensure that
your Access Status is Active.
URL - The full URL path for the Marketo SOAP endpoint.
User ID - The static User ID identified in the SOAP API Setup section.
Encryption Key - The configurable encryption key identified in the SOAP
API Setup section.
API Version - The version of the SOAP API used to make calls to the
Marketo service.
See the topic about password encryption security to understand how to
securely store passwords in a Connection component.
Test Connection
You can test your connection settings before you use or save the
connection in a process.
Marketo — Partner operation
The Marketo — Partner operation supports the following actions:
 Inbound: Get, Query
You can configure the Marketo — Partner operation’s inbound actions to
retrieve document records from a Marketo account. It requires unique
parameters at run-time to request data. To understand how to statically or
dynamically define values in a process for run-time, review the topic about
parameter values.
 Outbound: Upsert
You can configure the Marketo operation’s outbound actions to send data
to a Marketo account. For Upsert operations, send request XML data into
the connector that corresponds with the defined XML profile. Most often,
request documents are the destination output of a map. However, you can
also set parameters on the outbound connector's Parameter tab within
the process to supply static or dynamic request values.
Marketo REST connector
The Marketo REST connector makes it easy to move data into and out of
Marketo’s marketing automation platform using the REST protocol.
For Simple Object Access Protocol (SOAP) to web services, use the
Marketo connector.
Connector configuration
To configure a connector to communicate with Marketo REST, set up two
components:
 Marketo REST connection
 Marketo REST operation
This design provides reusable components, which contain connection
settings and operation settings. After building your connection and
operation, set up your connector within a process. When the Marketo
REST connector is configured properly within your process, Integration can
map to and from virtually any system using the connector to retrieve data
from or send data to Marketo.
Prerequisites
To implement a connection with the Marketo REST connector so you can retrieve and generate endpoint and identity URLs, and retrieve the client ID and secret, you must have Administrator access to the Marketo Custom Service.
Supported objects
The following objects are supported by one or more of the Marketo REST
operation actions:
 Add Custom Activity to Lead
 Add Lead to List
 Companies
 Custom Objects
 Lead
 Lead Activity
 Lead Change
 List
 Opportunities
 Opportunity Roles
 Remove Lead from List
Tracked properties
There are no tracked properties for this connector.
Marketo REST connection
Connection tab
If you have multiple accounts, you need a separate connection for each.
You can pair a single connection with different Marketo operations to
perform a unique action against a Marketo account instance.
The Marketo REST connector supports OAuth 2.0 for authentication and
the connection automatically refreshes the authentication token.
Endpoint URL - Enter the Endpoint URL for the Marketo service. For
example, https://064-CCJ-768.mktorest.com/rest. You can find the
Endpoint URL on the Marketo Admin > Web Services panel in the REST API
section.
Identity URL - Enter the Identity URL for the Marketo account instance.
For example, https://064-CCJ-768.mktorest.com/identity. You can find the
Identity URL on the Marketo Admin > Web Services panel in the REST API
section.
Client ID - The client ID obtained in Marketo from a Custom Service to
authenticate with Marketo. You can find the Client ID in Marketo in
the Admin > LaunchPoint menu by selecting the custom service and
clicking View Details.
Client Secret - The client secret obtained in Marketo. Click Click to
Set to open the Password window, and enter or paste the alphanumeric
Client Secret associated with the Client ID. You can find the Client Secret
in Marketo in the Admin > LaunchPoint menu by selecting the custom
service and clicking View Details. When you are done, click Apply.
Test Connection
You can test your connection settings before you use or save the
connection in a process. The Test Connection ensures that your specified
settings are correct, valid, and accessible. If the test connection is
successful, you can save the connection. Otherwise, review and correct
any incorrect settings, then test again.
Marketo REST operation
The operation supports the following actions:
 Execute — uses the List ID field from the operation and Lead Id as
a parameter to add Leads to a list, remove Leads from a list, or add
a custom activity to a Lead.
note
You can also use Execute to delete custom objects having one
deduplication field, and also compound keys (multiple deduplication
fields). For example, you use the Message step and set the deduplication
fields (Country and State) for the records to delete. You use a Data
Process step to split the records, and the Map step to map the fields in
the JSON with the request profile of the Execute operation. Finally, when
you use the Execute operation, select the Is custom object? option when
browsing, and select the custom object type for the record that you want
to delete.
 Get — returns Leads and Lists by Id.
 Query — searches for Marketo REST objects (Companies, Leads,
Lists, Lead Changes, Lead Activities, Opportunities, and Opportunity
Roles) based on specific search criteria for fields and filters you
define.
o Query searches on ID, name, programName, workspaceName,
batchsize, or nextPageToken.
o To search for Leads, use only a single filter.
o To search for a list of Lead Activity by type, filter on activities
and sinceDatetime, the date and time to use as the beginning
of the search, in yyyy-MM-dd'T'HH:mm:ss'Z' format. The listId
and leadIds filters are optional.
o To search based on deduplication fields, you select the
deduplication field (for example, externalSalesPersonId) when
adding a filter expression and add the value of the filter (for
example, djones) on the Parameters tab.
 Upsert — Creates new Leads, Companies, Opportunities,
Opportunity Roles, or Custom Objects in Marketo. If the object
exists, updates it with the submitted values. The operation Upserts
the objects in bulk by sending multiple documents in batches of
300, which is Marketo's limit.
 Delete — Deletes objects, including Companies, Leads,
Opportunities, and Custom Objects from Marketo's Lead Database.
You provide the ID of the record to delete as a parameter. You
cannot use Upsert to delete Opportunity Roles.
 Opportunity Roles Compound Key Query — Further refine your search for Opportunity Roles by creating multiple filter expressions (composite filters). This action requires you to use the compound key to query, such as externalOpportunityId, LeadId, and Role (see the example after this list).
 Custom Objects Compound Key Query — Further refine your
search for Custom Objects by creating multiple filter expressions
(composite filters). This action requires you to have the following
permissions: Read-Only Custom Object, Read-Write Custom
Object.
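For example, a compound key query for Opportunity Roles might combine three filter expressions such as externalOpportunityId = "O-1001", LeadId = "42", and Role = "Technical Buyer"; the values shown are hypothetical and come from your own Marketo data.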
Marketo has two types of lists: static lists of fixed leads and smart lists
that are dynamic collections of leads based on characteristics used in
Smart Campaigns. You can access static lists only.
Partitioning can create confusion. When you create a Lead, Marketo stores
the Lead in a partition within a workspace. You can have more than one
partition in a workspace and more than one workspace. When you perform
a query operation, unless you specify a partition, Marketo looks within the
default partition for that object. If you then perform an Upsert operation,
you can get an error message saying that the object already exists: because Marketo looks across all partitions for Upsert, the object may be in another partition.
Microsoft Azure Blob Storage connector
The Microsoft Azure Blob Storage connector makes it easy to build integrations by simplifying access to your Azure Storage objects.
Connector configuration
To configure a connector to communicate with Azure, set up two
components:
 Microsoft Azure Blob Storage connection
 Microsoft Azure Blob Storage operation
This approach creates reusable components that contain connection
settings (such as URL, User, Password, Account) and operation settings
(such as container). After building your connection and operation, set up
your Microsoft Azure Blob Storage connector within a process. When you
define the process correctly, Integration can map to and from virtually any
system using the Microsoft Azure Blob Storage connector to retrieve data
from or send data to the Microsoft Azure Blob Storage application.
Prerequisites
To implement a connection to your Microsoft Azure account from
Integration, do the following:
 Have your storage account name and access key to access Microsoft
Azure Blob Storage.
 Deploy an Atom on your local machine or hosted with Integration.
 Check the size of your BLOB. The Azure SDK prevents processing
BLOBs larger than 64 MB on an Atom Cloud. If your BLOB is larger
than 64 MB, do your processing on a local Atom.
Document properties (input)
This connector has no predefined properties for input documents.
Document properties (output)
 Blob-Type: Identifies the type of blob:
o BLOCK_BLOB: Stores text and binary data in blocks.
o PAGE_BLOB: Stores random access files (e.g., VHDs for Azure
VMs).
o APPEND_BLOB: Optimized for append operations (e.g.,
logging).
 Cache-Control: Stores cache settings if specified during blob
creation.
 Content-Disposition: Stores content disposition value if specified.
 Content-Encoding: Stores encoding type if specified.
 Content-Language: Stores language setting if specified.
 Content-MD5: MD5 hash of the blob for integrity checks.
 Content-Type: Stores MIME type if specified.
 E-Tag: Unique identifier updated on write operations.
 Last-Modified: Timestamp of the last modification (yyyy-MM-
dd'T'HH:mm:ss.SSS'Z').
 Lease-Duration: Specifies if the lease is INFINITE or FIXED.
 Lease-State: Indicates state (AVAILABLE, LEASED, EXPIRING,
BREAKING, BROKEN).
 Lease-Status: Specifies if lease is LOCKED or UNLOCKED.
 Length: Size of the blob in bytes.
 User-defined Metadata: Custom metadata properties for the blob.
Microsoft Azure Blob Storage operation
The Microsoft Azure Blob Storage operation defines how to operate with
blob objects.
The operation supports the following actions:
 Delete — Deletes blobs, as well as snapshots, based on ID parameters. In Azure, blobs are marked for deletion and removed during garbage collection.
 Get — Reads blobs (the only available objects) by passing the name of the blob as an ID and returning the contents of the blob.
 Create — When you create blobs, you can assign a name to a blob using two methods:
o Use the Blob Name field.
o Use a dynamic document property. When setting the dynamic document property, create a property for the azureBlobName and set the parameter to a static value of the desired blob name (see the example below). If you do not enter a blob name as a dynamic document property, then the operation generates a name.
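For example, to name a blob with a dynamic document property, create a property for azureBlobName and set its parameter to a static value such as invoices/2024/report.pdf (a hypothetical name); the created blob is then stored under that name.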
Changelog
2024-11 (Version 24.11.0, Current)
 Updated the SDK to a new version.
 Necessary runtime packages are now packaged with the connector.
Microsoft Azure Data Lake Storage connector
The Microsoft Azure Data Lake Storage (ADLS) Gen2 connector enables you to access and interact with your ADLS Gen2 storage account using the ADLS Gen2 REST APIs.
This connector allows you to create and manage file systems, directories, and files in a storage account that has hierarchical namespace (HNS) enabled. Using this connector, you can Create, Get, List, Update, and Delete filesystems and/or paths (directories and files). It supports both structured and unstructured data stored inside the files or filesystems. The connection to the ADLS Gen2 storage is secured using the OAuth 2.0 authentication protocol.
Connector configuration
To configure the connector to communicate with ADLS Gen2 storage
account, set up the following components:
 Microsoft ADLS connector's connection — Represents your ADLS
Gen2 Storage account and login credentials.
 Microsoft ADLS connector's operation — Represents an action
against an object (Filesystem and/or Path).
This design provides reusable components which contain connection
settings and operation settings. After building your connection and
operation, set up your connector within a process. When the process is
defined properly, Boomi Integration can map to and from virtually any
system using the Microsoft ADLS connector to the required ADLS Gen2
storage account.
Prerequisites
The Microsoft ADLS connector requires the following:
 An ADLS Gen2 storage account. Refer to the Azure Documentation -
Create a Storage account.
 An application (client) registered in the Azure Active Directory
(Microsoft Azure Portal - App Registrations page). Once the
registration is complete, an Application (client) ID will be generated.
These client app credentials are used in the Microsoft ADLS
connector's connection settings for OAuth 2.0 authorization to the
ADLS Gen2 system. You can get the client app credentials and
OAuth access token in the Azure Portal App registration - Overview
page of the client app.
 The client app must be assigned a proper role with the required
permissions on the storage account and its contents. Refer to
the Azure Documentation - Manage ACL.
 An Atom/Molecule/Atom Cloud.
Connection tab
Endpoint Suffix - Displays the endpoint suffix for the ADLS service.
The default value is core.windows.net; modifying it is not recommended.
Storage Account
Enter the name of your ADLS Gen2 storage account.
Response Timeout
Provide the maximum time (in milliseconds) that the connector can wait to
receive an operation response from the ADLS Gen2 REST APIs.
Authentication Type
Displays the supported authentication type "OAuth 2.0".
Client ID
Provide the unique application ID of your client app assigned by the Azure
portal at the time of your app registration. You can get this ID from the
Azure Active Directory — Manage — App Registrations — client app
Overview page.
Client Secret
Click to open the Password window, enter or paste your client secret, and
click Apply.
Scope
Provide the URL that defines the ADLS Gen2 storage account access
permissions granted for the given client ID. In general, the .default scope is
displayed: https://storage.azure.com/.default.
Access Token URL
Provide the URL to retrieve the encrypted access token from the Azure
portal, to access the ADLS Gen2 storage account. This URL can be found
in the Azure AD (Endpoints page of the client app — OAuth 2.0 token
endpoint (v2)).
Microsoft ADLS operation
The Microsoft ADLS connector's operation defines how to interact with
your ADLS Gen2 Storage account. It represents a specific action (Create,
List, Get, Update, and Delete) to be performed against an ADLS Gen2
storage object (Filesystem and Path) within your account.
 Filesystem - The parent location that holds the files and folders.
 Path - The forward slash delimited (/) representation of the directory
structure containing the files.
Create a separate operation component for each action/object
combination that your integration requires.
The Microsoft ADLS connector operations support the following actions:
 Create -
o Create one or more new filesystems in a given ADLS Gen2
storage account.
o Create one or more new files in a given filesystem or a specific
directory within the filesystem.
 List -
o Retrieve a list of filesystems within a given ADLS Gen2 storage
account.
o Retrieve a list of paths (directory structure containing the files)
within a filesystem.
 Get - Read the content of a file in a filesystem.
 Update - Update the content of a file in a filesystem.
 Delete -
o Delete one or more filesystems in a given storage account.
o Delete the paths (directories/files) from a filesystem.
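For orientation, the actions above map onto the ADLS Gen2 REST APIs. A short
illustrative Python sketch of the Create (filesystem) and List (path) calls,
assuming the requests library, a bearer token obtained as described in the
Connection tab section, and placeholder account and filesystem names; the
connector issues the equivalent requests for you.

import requests

account = "mystorageaccount"                     # placeholder account name
base = f"https://{account}.dfs.core.windows.net"
headers = {"Authorization": "Bearer <ACCESS_TOKEN>"}  # placeholder token

# Create action with Create Type = FileSystem
requests.put(f"{base}/myfilesystem",
             params={"resource": "filesystem"},
             headers=headers).raise_for_status()

# List action with List Type = Path: list the directory structure
resp = requests.get(f"{base}/myfilesystem",
                    params={"resource": "filesystem", "recursive": "true"},
                    headers=headers)
resp.raise_for_status()
print(resp.json()["paths"])                      # directories and files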
Options tab
Click Import Operation, then use the Import wizard to select the object
to integrate. Based on the selected action, the following action specific
fields will appear in the Import wizard.
Create Type (Create) - Select the ADLS Gen2 storage object to be
created. The available options are FileSystem and Path.
List Type (List) - Select the ADLS Gen2 object type (Filesystem/Path) to
be listed.
Delete Type (Delete) - Select the ADLS Gen2 object type
(Filesystem/Path) to be deleted.
Microsoft Azure SQL Data Warehouse connector
The Microsoft Azure SQL Data Warehouse connector makes it easy to work
with Microsoft Azure SQL Data Warehouse.
This connector pre-configures the Database connector with the necessary
settings to simplify building relational queries against structured data
stored in a Microsoft Azure SQL Data Warehouse. As a result, information
on process reporting runs, the run history on the Start step, the process
log, and licensing information show the connection as Database rather
than Microsoft Azure SQL Data Warehouse.
Connector configuration
To configure a Boomi Integration connector to communicate with the
Azure SQL Data Warehouse, set up two components:
 Microsoft Azure SQL Data Warehouse connection
 Microsoft Azure SQL Data Warehouse operation
This approach creates reusable components that contain connection
settings (such as URL, User, Password, and Account) and operation
settings (such as Object, Action, and data definitions). After building your
connection and operation, set up your connector within a process. When
you define the process correctly, Integration can map to and from virtually
any system using this connector to retrieve data from or send data to the
Azure SQL Data Warehouse.
Prerequisites
The Microsoft Azure SQL Data Warehouse connector requires the following:
 Microsoft Azure Data Warehouse connection information, including
server host, IP address and port, user name, and password. The user
name and password configured on the database must have
sufficient permissions to access the objects required by the
integration. For more information, refer to the documentation for
your database.
 If you use local Atoms, put the Microsoft SQL Server Driver
appropriate for your runtime environment in the
<installation_directory>/userlib/database directory. You can do that in one of
two ways:
note
For more information about SQL Server drivers, compatibility, and
available downloads, see the Microsoft JDBC Driver for SQL Server support
matrix and Download Microsoft JDBC Driver for SQL Server pages from the
Microsoft help.
 Upload the JAR file into your account library (Setup > Account
Libraries), add that file to a Custom Library component, and
deploy it to the appropriate Atom, Molecule, Atom Cloud, or
environment.
For JAR files that support a connector, you create a Custom
Library component with a type of Connector and select a connector type
— in this case, Database. When you deploy the component, it deploys the
referenced JAR files to
the /<installation_directory>/userlib/database directory. Integration
creates the userlib/database directory if it does not exist.
 Load the JAR file manually into the appropriate directory of a local
Atom, Molecule, or private Atom Cloud. After you load the JAR files,
you must restart the Atom, Molecule, or private Atom Cloud. If the
userlib/database directory does not exist, you must create it.
 If you use an Atom Cloud to connect to the Azure SQL Data
Warehouse, it installs the SQL Server JDBC driver sqljdbc4.jar for
you.
The Microsoft Azure SQL Data Warehouse connector uses the same driver
as SQL Server. It does not include drivers for local Atoms. You can
download the drivers from Microsoft's Download Microsoft JDBC Driver
for SQL Server page.
Tracked properties
The Microsoft Azure SQL Data Warehouse connector has no tracked
properties.
Microsoft Azure SQL Data Warehouse operation
The Microsoft Azure SQL Data Warehouse operation defines how to
interact with the connection, including batching and commit options.
The Microsoft Azure SQL Data Warehouse profile contains the SQL
statement, which is referenced in the operation.
The Microsoft Azure SQL Data Warehouse operation supports two types of
actions:
 Get — Used to retrieve data from the Azure SQL Data Warehouse.
 Send — Used to send data to the Azure SQL Data Warehouse.
Microsoft Azure SQL Database connector
Prerequisites
The Microsoft Azure SQL Database connector requires the following:
 Microsoft Azure Database connection information, including server
host, IP address and port, user name, and password. The user name
and password configured on the database must have sufficient
permissions to access the objects required by the integration. For
more information, refer to the documentation for your database.
 If you use local Atoms, put the Microsoft SQL Server Driver
appropriate for your runtime environment in the
<installation_directory>/userlib/database directory. You can do that in one of
two ways:
note
For more information about SQL Server drivers, compatibility, and
available downloads, see the Microsoft JDBC Driver for SQL Server support
matrix and Download Microsoft JDBC Driver for SQL Server pages from the
Microsoft help.
 Upload the JAR file into your account library (Setup > Account
Libraries), add that file to a Custom Library component, and
deploy the component to the appropriate Atom, Molecule, Atom
Cloud, or environment.
For JAR files that support a connector, you create a Custom
Library component with a type of Connector and select a
connector type — in this case, Database. When you deploy the
component, it deploys the referenced JAR files to
the /<installation_directory>/userlib/database directory. If
the userlib/database directory does not exist, Integration creates it.
 Load the JAR file manually into the appropriate directory of a local
Atom, Molecule, or private Atom Cloud. After you load the JAR files, a
restart is required. If the userlib/database directory does not exist,
you must create it.
 If you use an Atom Cloud to connect to the Azure SQL Database, it
installs the SQL Server JDBC driver sqljdbc4.jar for you.
The Microsoft Azure SQL Database connector uses the same driver as SQL
Server. It does not include drivers for local Atoms. You can download the
drivers from Microsoft's Download Microsoft JDBC Driver for SQL Server
page.
Tracked properties
The Azure SQL Database connector has no tracked properties.
Connection tab
Database URL - Populates a read-only field when you specify other
connection values. Used for reference only.
Driver Type -
Azure SQL Database driver is the default, which you cannot change.
The required driver file must reside in
the /<installation_directory>/userlib/database directory.
User - Database user name.
Password - Database user password.
Host - Name or IP address of the database server.
Port - The port with which to connect to the database server.
Database Name - The name of your database.
Additional Options - Any additional options specified in your database
URL. Name and value pairs delimited by semicolons are often used, such
as ;instance=DB01
To connect to Microsoft Azure SQL Database as a Windows user, add, for
example: ;domain=<Your_Windows_Domain_Name>
If you are using SQL Server, connecting as a Windows user, and
connecting to a named instance, the order of the properties is important
for the JDBC URL. For example: ;instance=<value>;domain=<value>
Class Name - Shows the fully qualified Java class name of the Microsoft
SQL Server driver com.microsoft.sqlserver.jdbc.SQLServerDriver.
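For orientation, the assembled Database URL typically follows the standard
SQL Server JDBC form, for example (host, port, database, and option values
are hypothetical placeholders):
jdbc:sqlserver://dbhost:1433;databaseName=SalesDB;instance=DB01;domain=CORP
The exact option names depend on the driver in use, so confirm them against
the driver documentation referenced above.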
Microsoft Azure SQL Database operation
The Microsoft Azure SQL Database operation defines how to interact with
the connection, including batching and commit options.
The database profile contains the SQL statement, which the operation
references.
The Microsoft Azure SQL Database operation supports two types of
actions:
 Get: The action retrieves data from the Microsoft Azure SQL Database.
 Send: The action sends data to the Microsoft Azure SQL Database.
Microsoft Dynamics 365 for Customer Engagement connector
Use the Microsoft Dynamics 365 for Customer Engagement connector to
connect the Boomi Enterprise Platform to any Dynamics 365 Customer
Engagement service, including Dynamics 365 for CRM.
Microsoft Dynamics 365 for Customer Engagement allows you to bring
together all of the data about your customers and prospects so you can
manage the data in one place. You gain the insights that you need to
manage your daily activities, stay on top of your team's progress, and
make informed decisions to nurture your customers and build lifelong
relationships.
Using the Microsoft Dynamics 365 for Customer Engagement connector,
you can use a Boomi Integration process to create, update, and delete
accounts and opportunities, retrieve customer contact information, look
up potential prospects and lead opportunities, and more.
The Microsoft Dynamics 365 for Customer Engagement connector uses
the OData version 4.0 protocol.
Connector configuration
To configure a connector to communicate with a Microsoft Dynamics 365
for Customer Engagement connector, set up two components:
 Microsoft Dynamics 365 for Customer Engagement connection
 Microsoft Dynamics 365 for Customer Engagement operation
Limitations
The following known limitation affects this connector:
 This connector does not currently have built-in proxy support and
does not honor the Atom proxy settings. Until this is supported, if
you need to use the Atom proxy settings, Boomi recommends that you
use the HTTP Client connector to access the service.
Tracked properties
ETag - The entity tag from the OData protocol, used to identify specific
resource versions and to detect whether a resource has been modified since
the eTag was last retrieved.
Microsoft Dynamics 365 for Customer Engagement operation
The Microsoft Dynamics 365 for Customer Engagement operation defines
how to interact with the Microsoft Dynamics service's URL endpoint
defined in the connection component and represents an action (Create,
Delete, and so on) to perform against a specific Dynamics 365 entity
object.
The Microsoft Dynamics 365 for Customer Engagement operation supports
the following actions:
 Inbound: Get and Query, which are supported in all versions of
Dynamics 365. Requesting data requires unique parameters at
runtime. Review the topic about parameter values to understand
how to statically or dynamically define these values in a process for
runtime. The Get action retrieves data for an entity specified as the
resource with a unique identifier, and is available only for entity
objects. The Query action applies options to set criteria for the data
and to return specific entity properties, and is available for entity
objects.
 Outbound: Create, Delete, Update. Use the Create action to
generate and retrieve data from the Dynamics 365 for Customer
Engagement service. Use the Delete action to remove entities or
individual properties of entities. Use the Update action to upsert
objects. These actions are available for all entity objects in the
connector. For all outbound operations, send request JSON data to
the connector that corresponds with the defined JSON profile.
Typically, request documents are the destination output of a map.
You can also set parameters on the outbound
connector's Parameters tab within the process to supply static or
dynamic request values.
note
Not all actions are available for every object. You must create a separate
operation component for each action and entity object combination
required for your integration.
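As a point of reference, the Get and Query actions correspond to plain OData
version 4.0 requests against the Dynamics 365 Web API. An illustrative Python
sketch, assuming the requests library, a valid bearer token, and a
placeholder organization URL (the API version segment may differ for your
instance); the connector performs the equivalent calls for you.

import requests

org_url = "https://yourorg.crm.dynamics.com"   # placeholder organization URL
headers = {
    "Authorization": "Bearer <ACCESS_TOKEN>",  # placeholder token
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}

# Query: apply OData options to set criteria and select entity properties
resp = requests.get(f"{org_url}/api/data/v9.2/accounts",
                    params={"$select": "name", "$top": "3"},
                    headers=headers)
resp.raise_for_status()

# Get: retrieve a single entity by its unique identifier
account_id = resp.json()["value"][0]["accountid"]
one = requests.get(f"{org_url}/api/data/v9.2/accounts({account_id})",
                   headers=headers)
one.raise_for_status()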
Microsoft Dynamics AX connector
The Microsoft Dynamics AX connector enables moving data into and out of
the Microsoft Dynamics AX business management application.
The connector uses web services to connect to the Microsoft Dynamics AX
Application Integration Framework (AIF). The connector enables you to
query, add, modify, and delete various types of records including
Accounts, Customers, Sales Orders, Invoices, and more.
The connector uses Application Integration Framework (AIF) web services
to exchange documents between Microsoft Dynamics AX and Boomi
Integration using SOAP web services. You must install these web services
on an IIS server following the instructions in the AIF Installation Guide.
The IIS server acts as a gateway between the Integration connector and
the Microsoft Dynamics AX server.
Prerequisites
Before creating a Microsoft Dynamics AX connector, install and configure
AIF, set up a document exchange and generate the web services.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
WSDL URL - Application Integration Framework (AIF)
Fully qualified base URL of the web services that listen on the IIS server.
The URL does not include the service name, for
example: http://localhost/MicrosoftDynamicsAXAif50.
User - The Windows domain user with access to IIS-hosted web services
at the previously mentioned URL.
Password - Password for the user
Options tab
Click Import Operation, then use the Import wizard to select the object
with which you want to integrate.
Object - Displays the object type that you selected in the Import
Operation wizard.
Tracking Direction - Select the document tracking direction for the
operation, either Input Documents or Output Documents. This setting
enables you to choose which document appears in Process
Reporting. Start steps always track output documents regardless of your
selection.
note
If the tracking direction is read-only, the feature to change the direction is
either unavailable or the developer set the configuration to read-only. The
default value you see shows you which document appears in Process
Reporting.
Request Profile - The XML profile definition that represents the XML
structure that is being sent by the connector.
The XML profile built by the connector exposes the full SOAP envelope.
This allows you to set the Envelope\Header\CallContext\Company field,
which controls which company to run the request against. You must do
this if you have a single Microsoft Dynamics AX instance with multiple
companies, and you want to connect to different companies.
Return Application Error Responses - This setting controls whether an
application error prevents an operation from completing:
 If you clear the setting, the process stops and reports the error on
the Process Reporting page.
 If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.
WS Object - The name of the web service object.
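To make the CallContext idea concrete, the sketch below posts a hand-built
SOAP envelope with a Company header. This is an illustrative Python example
only: the service path, SOAP action, namespace, and body are placeholders,
the authoritative structure comes from the WSDL generated by your AIF
installation, and authentication is shown as HTTP Basic for brevity even
though AIF typically requires Windows (NTLM) authentication.

import requests

# Placeholder envelope; take the real namespaces and body from your AIF WSDL.
envelope = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Header>
    <CallContext xmlns="urn:example-placeholder">
      <Company>CEU</Company><!-- company to run the request against -->
    </CallContext>
  </s:Header>
  <s:Body><!-- service-specific request from the request profile --></s:Body>
</s:Envelope>"""

resp = requests.post(
    "http://localhost/MicrosoftDynamicsAXAif50/SalesOrderService.svc",
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "<ACTION_FROM_WSDL>"},   # placeholder action
    auth=(r"DOMAIN\user", "<PASSWORD>"),            # Windows domain user
)
resp.raise_for_status()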
Microsoft Dynamics CRM connector
The Microsoft Dynamics CRM connector enables you to move data into
and out of the Microsoft Dynamics CRM application.
The connector uses a web service that Microsoft provides to interact with
their CRM system. Use the connector to perform Query, Get, Add, Modify,
and Delete operations on various types of records.
Connector configuration
To configure a connector to communicate using Microsoft Dynamics CRM
connector, set up two components:
 Microsoft Dynamics CRM connection
 Microsoft Dynamics CRM operation
Create a single connection component for each Microsoft Dynamics CRM
organization. You can add multiple operations, one for each action.
Supported editions
Boomi supports the following editions:
 Online 2011 up to 2017 (v8.2.2.0160)
 On-Premise 2011/2013/2015/2016
 Online 4.0
 On-Premise 4.0
Prerequisites
To implement a connection to Microsoft Dynamics CRM from Integration,
you need:
 A user name and password with sufficient permissions to query, get,
add, modify, and delete various types of records.
 Java with strong encryption (Java Cryptography Extension or JCE)
support installed for on-premise 2011 and 2013. You can download
the files and instructions from Oracle.
Tracked properties
This connector has no predefined tracked properties.
Connector configuration
To configure a connector to communicate with Microsoft Dynamics AX, set
up two components:
 Microsoft Dynamics AX connection — that represents a specific
Microsoft Dynamics AX instance (IIS server instance that is listening
for AIF services). You create one connection for each Microsoft
Dynamics AX instance.
 Microsoft Dynamics AX operation — that represents an action
against a specific record type. Create one or more operations, one
for each type of interaction required by your integration scenario.
Supported editions
Boomi Integration supports the following editions:
 Microsoft Dynamics AX 2009
 Microsoft Dynamics AX 2012
Supported operations
The connector supports all operations that AIF exposes. Microsoft
Dynamics AX includes over 70 standard documents. Each document
supports a common business process. The AIF lets you customize these
standard documents or create your own documents for your individual
business processes.
For more information about the included documents with Microsoft
Dynamics AX, see Standard Axd Documents in the Microsoft Dynamics AX
SDK Help.
The web services authentication process is as follows:
1. The connector sends a request to the IIS that hosts the AIF services.
IIS retrieves the user credentials depending on the authentication
mechanism specified in the service configuration (for example,
NTLM). IIS then tries to map the security credentials onto a valid
domain user. By default, Microsoft Dynamics AX configures WCF to
use the basicHttpBinding binding with message security, so the
message SOAP header contains the user credentials. Finally, IIS
authenticates the user as valid in Active Directory.
2. IIS then passes the request to AIF, which performs further
authentication by verifying that the user:
a. Is a valid Microsoft Dynamics AX user
b. Has access to the service through the appropriate security key
3. After AIF determines that the user has access to the service, it
processes the message. At run time, standard AIF security ensures
that the user has access to the data exposed by the service.
Microsoft Dynamics AX operation
The Microsoft Dynamics AX operation defines how to interact with the AX
services.
The operation represents a specific action to perform against an object in
Microsoft Dynamics AX. The operation has one action: Execute.
Microsoft Dynamics CRM connection
The Microsoft Dynamics CRM connection represents a single Microsoft
Dynamics CRM organization including sign-in credentials.
If you have multiple organizations or sandbox instances, use a separate
connection for each. You can pair a single connection with different
Microsoft Dynamics CRM operations to perform a unique action against a
Microsoft Dynamics CRM organization.
Supported authentication and protocols
This connector supports the following authentication and protocols:
 Authentication for OCP-managed (Online Commerce Platform)
accounts, the subscription and billing platform used by Microsoft
Office 365.
 Windows Live ID, the single sign-on web service.
 Online federation authentication policies, but not Microsoft Active
Directory Federation Services (ADFS or AD FS).
Microsoft Dynamics CRM supported authentication models
Microsoft Dynamics CRM supports three authentication models:
 Windows Authentication.
 Claims-based Authentication for internal access to an untrusted
domain.
 Claims-based Authentication for external access to an untrusted
domain.
note
This connector does not support the communication protocols for Claims-
based Authentication.
This connector leverages Windows Authentication, which requires you to
install the Atom on a computer in a domain trusted by the CRM host.
Windows Authentication leverages SPNEGO and NTLM to authenticate with
the CRM installation. You can determine the URLs for access by navigating
to Developer Resources from Settings in the MS Dynamics CRM console.
Connection to an Internet Facing Deployment (IFD) requires using Claims-
based Authentication. Claims-based Authentication leverages Kerberos
for internal access between untrusted domains and a sign-in protocol for
external access to untrusted domains.
Connection tab
Version - The version of Microsoft Dynamics CRM to which you connect:
 Online 2011 up to 2017 (v8.2.2.0160)
 On-Premise 2011/2013/2015 (and for 2016)
 Online 4.0
 On-Premise 4.0
Server - The Microsoft Dynamics CRM Online host name. For on-premise
versions that are SSL protected, the server name can be in a format
like https://crm.boomi.com:8090/.
Port
For on-premise versions, the host port for the Microsoft Dynamics CRM
Server. (If you selected an online version, you can leave this blank.)
Organization Name
The unique organization name differs depending on the version:
 For Online 4.0 version — after you sign in, you can find the
organization name in Settings > Customization > Download
Web Service Description Files.
 For Online 2011, 2013, and 2015 versions — if you log in
as https://uniquename.crm.dynamics.com, then the organization
name is uniquename.
 For On-Premise 2011, 2013, and 2015 versions — if you log in
as http://hostname/uniquename/, then the organization name is the
uniquename.
User name - The user name differs depending on the version:
 For on-premise versions, enter the user name of the domain, such
as domain\user for signing into the Microsoft Dynamics CRM server.
 For online versions, enter the Microsoft account’s user name.
Password - The password for signing into the Microsoft Dynamics CRM
server. For online versions, enter the Microsoft account’s password.
Microsoft Dynamics CRM operation
The Microsoft Dynamics CRM operation defines how to interact with your
Microsoft Dynamics CRM account.
The operation represents a specific action (such as Query and Create) to
perform against a specific Microsoft Dynamics CRM object type. Not all
objects are available for each action. Create a separate operation
component for each action and object combination required for your
integration.
Get
The inbound Get action requires an entity type and ID.
The Get action does not return attributes with a null value. Once you set an
attribute value, it is returned in all subsequent Get actions to
match the profile.
The Get action does not support paging or batching.
Query
Boomi Integration supports the inbound Query action.
The query does not return attributes with a null value. Once you set an
attribute to a value, it is returned in all subsequent queries to match
the profile.
Entities can have attributes or references to other entities. The Microsoft
Dynamics CRM 2011 connector enables you to specify an arbitrary logic
tree for top-level attributes and reference attributes.
 You can query by attribute only for top-level attributes, such as account
name and contact anniversary.
 You can query by reference for entity references; their attributes are
ID and logicalName. The ID attribute is required in a query by
reference.
When you construct a query, entity reference attributes are nested in the
profile display. For example, the reference attributes for an account’s
createdby entity reference appears
as createdby/Id and createdby/logicalName.
The Query action does not support batching or relationship entities.
Create
The outbound Create action generates new records in the Microsoft
Dynamics CRM object as defined in the Create operation. Boomi Integration
supports the Create action for most object types, including custom types.
However, it cannot create all object types. See this list of default entities
that you can create. Batching is not supported for this action.
Update
Boomi Integration supports the outbound Update action. You must include
the primary ID of the updated record in the request.
The service does not support batched or paged Update requests.
note
When you create and update statuscodes, they have to match a
statecode, which you cannot create or update. For instance, associate
account.statuscode to account.statecode. Finding a status code that
matches a state code you can modify can be challenging.
Delete
The outbound Delete action deletes an existing record in the Microsoft
Dynamics CRM object defined in the Delete operation. It ignores the input
profile for a Delete operation. The service does not support bulk or paged
deletes. The Delete action does not support batching.
Microsoft Dynamics GP connector
Use the Microsoft Dynamics GP connector to move data into and out of the
Microsoft Dynamics GP business accounting application.
The connector enables you to query, add, modify, and delete various
types of records including Accounts, Customers, Sales Orders, Invoices,
and more.
The connector uses an integration layer called eConnect to exchange data
with the Great Plains database. eConnect is a Microsoft API that facilitates
the exchange of data by performing business validations and ensuring data
integrity. It is implemented as a set of database stored procedures installed
on the Great Plains database server. There is also an API component that
accesses the stored procedures for all actions except the Query action.
This connector invokes the API component installed as part of the
eConnect runtime. It is important that you install eConnect before you
attempt to connect to Great Plains.
Connector configuration
To configure a connector to communicate with Microsoft Dynamics GP, set
up two components:
 Microsoft Dynamics GP connection — The connection represents a
specific Great Plains company database. Create one connection for
each company database.
 Microsoft Dynamics GP operation — The operation represents an
action against a specific record type. Create one or more operations
for each type of interaction needed for your integration scenario.
Supported editions
Boomi supports the following Microsoft Dynamics GP editions:
 2010
 2013/2015/2016/2018+
Prerequisites
To implement a connection to Microsoft Dynamics GP from Integration:
1. Install Great Plains and the database.
2. Set up a local Atom within your network to connect to the Great
Plains SQL Server and eConnect. Boomi recommends that you install
the Atom on the same server as eConnect and the Great Plains SQL
Server to simplify the setup.
3. Configure the connector with a Windows domain user that has
access to the Great Plains SQL Server database machine. Create a
SQL Server login for this domain user and grant the database user
the DYNGRP role within the DYNAMICS database.
4. Install the eConnect libraries.
a. The option to install eConnect is under the Additional Products
section of the Great Plains installer. For more information, see
the eConnect Documentation.
b. You do not need to install the full eConnect Runtime module.
c. Install the eConnect Integration Service to connect to the
database.
The eConnect interfaces do not support every object and interaction
available within Great Plains. To work with objects from other modules or
more sophisticated queries, use the Database connector to connect to
back-end database tables directly. Consult your eConnect documentation
for more information about the table schemas. Use caution to maintain
data integrity when accessing an application database table directly.
Tracked properties
This connector has no predefined tracked properties.
Microsoft Dynamics GP connection
The Microsoft Dynamics GP connection contains details about how to
connect to the Microsoft Dynamics GP database (typically, a Microsoft SQL
Server database) as well as the eConnect service.
If you have multiple companies to integrate, you need a separate
connection for each.
Connection tab
Database Host Name: Host name or IP address of the Great Plains
database server. You can use localhost if you install the Atom on the same
server as eConnect.
Database Port: Database port to use, usually 1433 (SQL Server default).
Database Name: Name of the specific company database. Do not use
the name DYNAMICS, which Great Plains uses for system-level
information. You can find the name of the database in the Great Plains
user interface by going to maintenance SQL list and looking for the
database that corresponds to your company.
eConnect Host Name: Host name or IP address of the eConnect server.
You can use localhost if you install the Atom on the same server as
eConnect.
For processes that include the Microsoft Dynamics GP connection and that
you created or updated after the 6 March 2013 release, you can use
process extensions to extend the eConnect Host Name. The extensions
enable you to change the eConnect Host Name so that you can use the
same process to work with different Microsoft Dynamics GP instances. You
might do this if you use integration packs. This functionality is available in
the 2010 and newer versions.
Domain User: The Windows domain user with access to the Great Plains
SQL Server database machine. You must configure the user name in the
SQL Server database and grant them the DYNGRP security role. Enter in
the format domain\user.
Domain User Password: The Windows domain password for the user.
Great Plains Version: The supported versions are 2010 and
2013/2015/2016/2018+.
Microsoft Dynamics GP operation
The Microsoft Dynamics GP operation defines how to interact with Great
Plains data.
The operation represents an action (Create, Query, Execute, Upsert)
performed against a Great Plains record type or object (such as
Customers, Sales Transactions, Purchase Orders). Not all actions are
available for every object. Create a separate operation component for
each action and object combination required for your integration.
Query
The inbound Query action allows you to look up object records based on
specific search criteria. You supply static or dynamic request parameters
(filters) to control selecting the object records. A single Query request can
return zero or more object record documents. After selecting the Query
connector action and using the Import Wizard, you can add query filters.
Create
Use the outbound Create action to insert data into your Microsoft
Dynamics GP account.
Execute
For Microsoft Dynamics GP 2010 and 2013, the outbound Execute action
exposes the following eConnect web service functions:
 Create Entity
 Create Transaction Entity
 Delete Entity
 Delete Transaction Entity
 Dispose
 Get Entity
 Get Next GL Entry Journal Number
 Get Next IV Number
 Get Next PO Number
 Get Next PO Receipt Number
 Get Next RM Number
 Get Next SOP Number
 Get PM Next Voucher Number
 Roll Back Documents
 Update Entity
 Update Transaction Entity
Upsert
The outbound Upsert action creates a new record if one does not exist;
otherwise it updates the existing record.
Microsoft SQL Server BCP connector
Use the Microsoft SQL Server Bulk Copy Process (BCP) connector to
perform Bulk Copy operations on a SQL Server database.
Bulk copy operations are efficient import and export operations to load
data into or export data from a single SQL Server table. Use this connector
to quickly import or export a large quantity of data into or from a SQL
Server database. The Microsoft SQL Server BCP connector performs
imports and exports faster than the Database connector because it is
optimized to reduce overhead. The connector can bypass data integrity
checks when loading data and can import and export thousands of
documents that can contain thousands of records. The connector can
handle millions of records. Only one bulk copy process runs at a time
within the same process.
Connector configuration
To configure the connector to communicate with Microsoft SQL Server
BCP, set up two components:
 Microsoft SQL Server BCP connection. The connection provides the
connection credentials to the appropriate SQL Server instance.
 Microsoft SQL Server BCP operation. The operation represents an
action against a SQL Server database table.
Create one Microsoft SQL Server BCP connection component for each SQL
Server database, with one or more Microsoft SQL Server BCP
operations for each type of action that your integration requires.
Supported editions
The connector supports SQL Server versions 2000, 2005, 2008, 2012,
2014, 2016, and 2017.
Prerequisites
The Microsoft SQL Server BCP connector is available on Windows only and
requires the following:
 A user name and password with sufficient permissions to perform
the Bulk Copy operation on the target schema.
 An Atom configured for Microsoft SQL Server BCP browse operations.
Because the Microsoft SQL Server BCP connector is available only on
Windows, you must install the Atom on Windows.
 The bcp utility (bcp.exe) distributed with SQL Server Tools to
perform bulk copy operations from the command line. Verify that the
correct version of the bcp utility is installed:
o SQL Server 2000 uses bcp utility version 8.0.
o SQL Server 2005 uses bcp utility version 9.0.
o SQL Server 2008 and 2008 R2 use bcp utility version 10.0.
o SQL Server 2012 uses bcp utility version 11.0.
o SQL Server 2014 uses bcp utility version 12.0.
o SQL Server 2016 uses bcp utility version 13.0.
o SQL Server 2017 uses bcp utility version 14.0.
note
If you are running the connector's Atom from a machine other than the
one hosting the server, install a copy of the bcp utility on the host
machine.
Using trusted connections for browse operations
Enabling trusted connections for Microsoft SQL Server BCP browse
operations requires a deployed DLL file so that Java can use Windows
functionality to authenticate the user. The Microsoft SQL Server BCP
connector automatically deploys the file the first time it attempts a
browse operation using a trusted connection. You must add the path of the
deployed DLL to the Atom’s JVM configuration so that Java can locate the
file. Perform these steps the
first time you attempt a browse operation using a trusted connection with
the Microsoft SQL Server BCP connector. The configuration is then set for
subsequent attempts.
Tracked properties
This connector has no predefined tracked properties.
Microsoft SQL Server BCP connection
The Microsoft SQL Server BCP connection contains authentication details
for the Bulk Copy Program (bcp) utility and how to connect to the
appropriate SQL Server instance.
Connection tab
The Microsoft SQL Server BCP connection configures several bcp utility
arguments. The following fields appear on the Connection tab:
Server name - The name of the server hosting the target database. If
available, you can include the instance name in the format: serverName\
instanceName. This entry sets the bcp utility's -S [server_name\
instance_name] argument.
Use Trusted Connection - If selected, connects to SQL Server using
Windows Integrated Authentication and validates the account name and
password using the Windows principal token in the operating system. This
entry sets the bcp Utility's -T argument.
User Name
The user name to connect to the database. You must use the user name if
you do not use a trusted connection. You must have sufficient permissions
to perform the bulk copy operation on the target schema. This sets the
bcp utility's -U login-id argument.
 Exporting — To export the table or view, you must enable the
SELECT permission.
 Importing — To import the table, you must enable INSERT and
SELECT permissions. You need the ALTER TABLE permission when
using options that require data definition language (DDL)
operations, such as disabling constraints.
Password - The password for the user name; required if you do not use a
trusted connection. This entry sets the bcp utility's -P password argument.
Database Name - The name of the database that contains the target
data. This entry sets the bcp utility's -d database_name argument.
Schema Name Pattern - The name of the schema that contains the
target table. Enter the percent (%) sign to search all schemas. If blank, it
searches all objects without a schema.
SQL Server Version
Used to select a SQL Server version:
 2012 and newer
 2008
 2005
 2000
For details about the arguments, see bcp Utility on the Microsoft Ignite
website.
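For example, a hypothetical export run as the connector might drive it,
wrapped here in Python purely for illustration (table, server, database, and
credential values are placeholders):

import subprocess

# -c = character format, -S = server\instance, -d = database name.
subprocess.run(
    ["bcp", "dbo.Customers", "out", "customers.dat", "-c",
     "-S", r"dbserver\PROD", "-d", "SalesDB",
     "-U", "bcpuser", "-P", "<password>"],
    check=True,  # raise if the bcp utility exits with an error
)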
Test Connection
You can test your connection settings before you use or save the
connection in a process. The Test Connection ensures that your specified
settings are correct, valid, and accessible. If the test connection is
successful, you can save the connection. Otherwise, review and correct
any incorrect settings, then test again.
Attention
You can only run the Test Connection against containers associated with
your account (local Atoms, Molecules, and Atom Clouds).
Microsoft SQL Server BCP operation
The Microsoft SQL Server BCP inbound and outbound actions import data
into and export data from SQL Server.
Query
You can use the Microsoft SQL Server BCP operation's inbound Query
action to export data from SQL Server. For example, you can export an
entire database table or the results of a SQL query. The connector uses
JDBC to browse for a list of schemas and tables within a particular
database. The connector does not support query filtering or paging.
You can export a single table at a time. If you need data from multiple
tables, create a view on the server and export data from the view.
When exporting, there is no limit to the number of records you can include
in a document. The bcp utility saves all the documents to disk in a single
file that the Microsoft SQL Server BCP connector reads then passes as
documents in batches of the configured batch size, as determined by the
Number of Records per Batch setting.
Create
You can use the Microsoft SQL Server BCP operation's outbound Create
action to import data into SQL Server. A Create action imports data into a
new record inserted into the target SQL Server table.
The Microsoft SQL Server BCP connector processes documents in batches
of up to 5000 documents to ensure it does not exhaust critical system
resources. Each batch corresponds to a single bcp utility run. The bcp
utility run allows the commitment of records in batches. The number of
records processed before a commit is configurable through the Number of
Records Per Batch field. Each commit transaction serves as a checkpoint,
declaring those records as successful if the process fails at a later time.
 The connector processes documents in a batch of 5000 documents
per bcp utility run.
 By default, it imports all the records in a run as one batch.
 For large process runs, you can set the number of records to process
before committing the transaction in the Number of Records Per
Batch field, so it does not need to repeat the entire process if an
error occurs when inserting a record later in the process.
 If the transaction for any batch fails, it reverses insertions from the
current batch only. Batches already imported by committed
transactions are unaffected by a later failure.
 If an application error occurs, such as a primary key duplication, all
subsequent records in the run fail. The process also stops any
remaining records if it reaches the maximum number of allowed
data errors for the run.
 To get high levels of throughput, Boomi recommends that documents
contain anywhere from 1000 to 5000 records, which results in the
bcp utility processing from 5 to 25 million records per run.
With the Microsoft SQL Server BCP connector, you must set Create
parameters in a Map step or read in from a disk or a Message step. If you
configure parameters on the Connector step dialog's Parameters tab, it
reads the first record of the document only.
The Microsoft SQL Server BCP connector reads data in XML format.
Therefore, use an XML output profile to map input profiles. Alternatively,
you can read XML data from a Message step.
Data types
When you import XML data, the Microsoft SQL Server BCP connector
translates it to the string data type. The bcp utility translates string data
to the appropriate SQL Server column type.
When you export, the bcp utility translates data to character format. The
Microsoft SQL Server BCP connector then translates the data from
delimited text to XML character data.
SQL Server supports these advanced data types:
 varbinary(max)/image (BLOBs)
 varchar(max)/text (CLOBs)
 nvarchar(max)/ntext (NCLOBs)
 XML
Microsoft SQL Server BCP connector translates advanced data types to
character data during processing. Also, it processes BLOBs as Base64
character data that can be escaped and nested into a single field in an XML
document.
When used with the Microsoft SQL Server BCP connector, the bcp
utility offers these data formats:
 Character format, which is used to exchange data with other
applications and converts data to character format on import or export.
 Unicode support that enables using extended character sets. If your
files include special characters or extended character sets, use
Unicode to ensure correct translation.
Microsoft Teams connector
The Microsoft Teams connector allows you to connect to your Microsoft Teams
server and transfer data to and from it using the Microsoft
Graph REST APIs. The Microsoft Graph APIs offer the ability to access data
exposed as resources of Microsoft Cloud Services (e.g. Microsoft 365
services). The connection to the Microsoft Graph APIs is secured via the
OAuth 2.0 authentication framework. Through this connector, you can use a
Boomi Integration process to Create, Get, Manage, Execute, List, and
Delete the Microsoft Teams resources (e.g. Team, Channel, Chat message,
Team members).
note
The Microsoft Teams connector supports only on-premise
(local Atoms, Molecules, and Atom Clouds).
Connector configuration
To configure the connector to communicate with a Microsoft Teams server,
set up the following two components:
 Microsoft Teams connection - Represents your client application
account registered in the Microsoft Azure portal for accessing the
Microsoft Graph API.
 Microsoft Teams operation - Represents an action against a specific
object i.e. Microsoft Teams resource. You will create one or more
operations—one for each type of interaction required by your
integration scenario.
This design provides reusable components which contain connection
settings and operation settings. After building the connection and
operation, set up your connector within a process. When the process is
defined properly, Integration can map to and from virtually any system
using the Microsoft Teams connector to transfer the Microsoft Teams data.
Supported editions
Microsoft Teams versions supporting Microsoft Graph v1.0.
Prerequisites
To use the Microsoft Teams connector, you will require the following:
 A Microsoft Azure Active Directory tenant account with the privilege
to register new applications in the Microsoft Azure Portal.
 A client app authorized to receive access token to call the Microsoft
Graph APIs. For this, you must register a new application (client) in
the Microsoft Azure Portal and the application must be authorized by
either a user or an administrator for accessing the required
Microsoft Graph resources. The access token contains information
about your app and the permissions it has for the resources and
APIs available through Microsoft Graph. For more information, refer
to Microsoft Documentation - Register an application with the
Microsoft identity platform and Authentication and authorization
basics for Microsoft Graph.
 Client app identification details. You can find these details in
the Microsoft Azure - App registrations - your
application's Overview page.
o Client ID – A unique application (client) ID assigned to your
app by Microsoft Azure after the app registration.
o Client Secret - A password or a public/private key pair that
your app uses to authenticate with the Microsoft Azure.
Configured at the time of app registration.
o Tenant ID - Global Unique Identifier (GUID) assigned to a
tenant.
 Microsoft Graph API authentication endpoint details.
o API Service Endpoint - Authentication URL for the Microsoft
Graph API service endpoint.
Value: “https://login.microsoftonline.com/:tenant_id/oauth2/v2.
0/token”
o Scope - Use the Microsoft Graph URL suffixed with /.default.
Value: https://graph.microsoft.com/.default.
 Necessary access rights and permissions assigned to the registered
client app for using the Microsoft Graph APIs. The required API name
and permission type can be found within the respective operation.
Delegated authorization must be configured for the client app to
perform Get and Execute operation. To know about configuring API
permissions for the client app, refer to Microsoft Graph
Documentation – Manage app registration.
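To illustrate the pieces above working together, the following Python sketch
obtains an access token with the client-credentials flow and then calls a
Microsoft Graph endpoint. It assumes the requests library; the tenant ID,
client credentials, and team ID are placeholders, and the connector performs
the equivalent calls for you.

import requests

tenant_id = "<TENANT_ID>"                        # placeholder GUID
token = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    data={"grant_type": "client_credentials",
          "client_id": "<CLIENT_ID>",
          "client_secret": "<CLIENT_SECRET>",
          "scope": "https://graph.microsoft.com/.default"},
)
token.raise_for_status()
access_token = token.json()["access_token"]

# List the channels of a team (requires suitable Graph API permissions)
resp = requests.get(
    "https://graph.microsoft.com/v1.0/teams/<TEAM_ID>/channels",
    headers={"Authorization": f"Bearer {access_token}"},
)
resp.raise_for_status()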
Tracked properties
This connector has no predefined tracked properties.
Microsoft Teams operation
The Microsoft Teams operation defines how to interact with your Microsoft
Teams server and represents a specific action (Create, Get, Execute,
Manage, List, Delete) to be performed against one or many Microsoft Teams
resources. Create a separate operation component for each action
combination that your integration requires.
The Microsoft Teams connector supports the following actions:
 Inbound: Get, List
 Outbound: Create, Execute, Manage, Delete
MongoDB connector
The MongoDB connector allows you to perform single or bulk transfers of data
from/to a MongoDB server using the MongoDB Java APIs. The connector also
supports the Azure Cosmos DB MongoDB variant.
Through this connector, you can use a Boomi Integration process to GET,
QUERY, CREATE, UPDATE, UPSERT and DELETE the data stored in the
MongoDB database. The connector provides the ability to use any of
the MongoDB supported client authentication methods (such as Salted
Challenge Response Authentication Mechanism (SCRAM), X.509,
Lightweight Directory Access Protocol (LDAP) and KERBEROS) for a
secured database connection. Its flexible design enables using this
connector with MongoDB Sharded Cluster deployment.
It supports utilizing the "Mongo+SRV" feature for MongoDB clusters. The
use of SRV eliminates the requirement for every client to pass in a
complete set of state information for the cluster. It allows more flexible
deployment and the ability to change the servers in rotation without
reconfiguring clients, as sketched below.
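A minimal Python sketch of an SRV-based connection using the pymongo driver,
with placeholder host, credential, and database names (pymongo resolves the
SRV record, which requires the dnspython package):

import pymongo

# Hypothetical SRV connection string; the DNS SRV/TXT records supply the
# actual cluster members and replica-set options.
client = pymongo.MongoClient(
    "mongodb+srv://appuser:<password>@cluster0.example.net/"
    "salesdb?authSource=admin&retryWrites=true"
)
doc = client["salesdb"]["orders"].find_one({"status": "open"})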
Important
 This is an open source connector and you can access the code in
the MongoDB connector repository.
 The MongoDB connector is supported on both local and Cloud Atoms.
Cloud Atom support has not been tested for the KERBEROS and
Mongo+SRV functionalities.
Connector configuration
To configure the connector to communicate with MongoDB, set up the
following two components:
 MongoDB connection — The connection represents a specific
MongoDB server and its associated user authentication.
 MongoDB operation — The operation represents an action against a
specific record (document) type. You will create one or more
operations — one for each type of interaction required by your
integration scenario.
This design provides reusable components containing connection settings
and operation settings. After building the connection and operation, set up
your connector within a process. When the process is configured properly,
Integration can map to and from virtually any system using the MongoDB
connector to communicate with the MongoDB server.
Prerequisites
To use the MongoDB connector, you will require the following:
 Hostname and port number of the MongoDB server.
 Name of the MongoDB database.
 User authentication details required for the selected authentication
type.
o None — The minimal MongoDB server details such as
hostname, port number and MongoDB database name are
adequate to establish the connection.
o SCRAM-SHA-1 & SCRAM-SHA-256 — Requires username,
password and name of authentication database (authorized
MongoDB users are registered here at the time of user
creation).
o X.509 — Uses User certificate (.pfx/.p12 format) and
Certificate Authority (CA) certificate (.cer/.der format).
Valid X.509 certificates generated and signed by a single Certificate
Authority should be used. A self-signed CA certificate can also be used.
These certificates can be generated using the OpenSSL tool. Obtaining
and managing these certificates is beyond the scope of this
documentation. Contact the server Admin for further details.
o LDAP — Uses distinguished username as defined in the LDAP
directory server, password and authentication database name.
o KERBEROS — Uses client principal name, password, user
authentication database name, Kerberos Key Distribution
Center (KDC) host name, Realm name along with the following
configuration files.
 krb5.conf file — Kerberos client configuration file
contains the location of Kerberos Key Distribution Center
(KDC) and admin servers for the Kerberos realms. This
file is configured as part of the Kerberos setup. If the
KDC and Realm names are updated here, the KDC
Name and Realm Name fields in the Connection setup
are optional.
 keytab file — Stores long-term keys for one or more
principals/users in encrypted format.
note
If you are not aware of the Krb5 and/or keytab files’ location or do not
have one already, contact your Kerberos server Admin.
 gss-jaas.conf file — Consists of one or more entries,
each specifying which underlying authentication
technology should be used for an application or
applications. This file is mandatory, if the underlying
authentication mechanism uses Java driver (JAVA
Authentication and Authorization Service (JAAS) &
Generic Security Service Application Program Interface
(GSSAPI)) to perform Kerberos user authentication.
Sample format for gss-jaas.conf file is shown below.
com.sun.security.jgss.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="<PATH_OF_KEYTAB>"
    storeKey=false
    useTicketCache=false
    principal="<PRINCIPAL_NAME>"
    doNotPrompt=true
    debug=true;
};
Remember:
 All these files must be placed in a separate folder
in your local machine.
 In the gss-jaas.conf file, go to the
line keyTab="<PATH_OF_KEYTAB>" and add the
local directory path of the keytab file.
 In the connection setup, the local directory path of
Krb5.conf file and gss-jaas.conf file must be
specified in the Krb5 Config Path and JAAS
Config Path field respectively.
 Enterprise Edition of MongoDB for LDAP and Kerberos
Authentication.
 Connection String for Mongodb+SRV (mandatory if the Mongo+SRV
option is selected). The connection string contains the end point of
the DNS server hosting the SRV record. Optionally, you can pass
other required configuration like SSL, authentication type, etc. For
more details on SRV connection string, refer to the MongoDB
documentation - DNS Seedlist Connection Format.
Make sure the following settings are done at the DNS server end.
o SRV and TXT record for the cluster must be defined on the
DNS server. A single SRV record points to the members of the
replica set. The TXT record defines the options for the replica
set, specifically the database that will be used for
authorization and the name of the replica set. For details on
creating the SRV and TXT record, refer to the MongoDB
documentation - MongoDB Seed Lists.
o User credentials associated with the DNS server. These
credentials can be entered in the Username/Client
Principal and Password field. The database name can be
provided in the Database Name field.
o The DNS must be resolvable from the system when trying to
establish the connection.
Connector-specific container setting
 The following property allows the container (Atom, Molecule, or
Private Atom Cloud; the public Atom Cloud is not supported) owner to
control the amount of memory that the MongoDB connector uses while
retrieving and storing documents in memory before processing.
Using this property is helpful for a Cloud owner to ensure that a
single tenant or account does not overwhelm the container by
consuming a huge amount of heap memory simultaneously.
 com.boomi.connector.mongodb.maxDocumentSize - (Query and Get
operations) The property mongodb.maxDocumentSize indicates the
maximum size of a single document the server returns per request.
It represents the Maximum document Size in bytes.
 Based on the given container value, the document will be filtered for
the Query and Get operations.
note
The sorting functionality for the Query operation with the Azure Cosmos DB
MongoDB variant works only for fields that are indexed.
note
 A value less than 1 indicates no limit on the document size for the
Get and Query operations. This enables the Get and Query
operations to support the Azure Cosmos DB MongoDB variant
effectively. For Azure Cosmos DB, set the value of this property to
less than or equal to zero.
 Without this container property, the connector operates as is, with a
default value of 1 MB, and consequently does not support Get
and Query operations for the Azure Cosmos DB MongoDB variant. For
example, a hypothetical container property entry of
com.boomi.connector.mongodb.maxDocumentSize=4194304 would cap
returned documents at roughly 4 MB.
Supported editions
MongoDB versions from v2.6 to v6.x.
Tracked properties
This connector has no predefined tracked properties.
MongoDB operation
The MongoDB operations define how to interact with your MongoDB server
and represent a specific action (Get, Query, Create, Update, Upsert, and
Delete) to be performed against one or many MongoDB IDs (the unique ID
of a document stored in the MongoDB collections).
The connector supports the following BSON data types: Integer,
Boolean, String, Decimal128, Date, Double, Long, ObjectId (hexadecimal
value), Binary data, JavaScript, Timestamp, and Null.
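As a small illustration (not connector output), a document mixing several of these BSON types can be built with the MongoDB Java driver; the field names here are invented:

import java.util.Date;
import org.bson.Document;
import org.bson.types.ObjectId;

public class BsonTypesSketch {
    public static void main(String[] args) {
        Document doc = new Document("_id", new ObjectId()) // ObjectId
                .append("name", "widget")                  // String
                .append("qty", 42)                         // Integer
                .append("price", 19.99d)                   // Double
                .append("inStock", true)                   // Boolean
                .append("updatedAt", new Date())           // Date
                .append("notes", null);                    // Null
        System.out.println(doc.toJson());
    }
}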
Create a separate operation component for each action combination that
your integration requires. The MongoDB connector supports the following actions:
 Inbound: Get, Query
 Outbound: Create, Update, Upsert, Delete
Import certificates
For the MongoDB connector to use X.509 authentication, two types of
certificates (a user certificate and a CA certificate) must be imported.
About this task
For the CA certificate import, use the Add (+) icon next to the CA
Certificate field and follow steps 2 - 5. The CA certificate import does not
require a password.
Procedure
1. Navigate to the connector’s Connection tab and, in the User
Certificate field, click the Add (+) icon.
2. In the New Certificate section, provide the certificate name.
3. Click the Import button.
4. Choose the certificate file from your local system and specify the
user certificate password.
5. Click the Finish button. The certificate uploaded successfully
message is displayed.

MongoDB changelog
2024-07 (Version 9, Current)
 The connector has improved exception handling.
 Memory usage optimization for Get and Query operations.
 Enhancement to support Azure CosmosDB MongoDB variant for Get
and Query operation by addressing the default 1MB document size
limit restriction. Maximum document size can be controlled via new
container property
- com.boomi.connector.mongodb.maxDocumentSize.
 For Cosmos
DB, com.boomi.connector.mongodb.maxDocumentSize should be
set to less than 1; in this case, the document size limit restriction
is removed.
2024-01 (Version 8)
 Necessary runtime packages are now packaged with the connector.
 Vulnerability issue of type Denial of Service was resolved to make
the Connector more secure.
 The connector supports the latest driver version 6.0.
2023-11 (Version 7)
 The connector now successfully creates structured and unstructured
JSON profiles for MongoDB collections (including Blank Array(s))
without throwing an exception.
 The Query operation successfully executes without throwing an
Arithmetic Exception when the Batch size is set to 0.
NetSuite connector
NetSuite is a financial management solution that allows you to manage IT
costs, optimize accounting efficiency, and streamline order management
and procurement processes.
Using the NetSuite connector, you can use a Boomi Integration process to
insert, retrieve, look up, delete, and update data in your NetSuite account
(NetSuite On-Demand Application Suite, and NetSuite OneWorld Global
ERP). You can integrate on-premise and legacy solutions with NetSuite,
and also integrate with On-Demand and SaaS-based solutions such as
Salesforce and SAP. Integrating with NetSuite consists of web service calls
using an XML request and response made over an HTTPS connection.
SuiteTalk SOAP-based web services API
The NetSuite connector uses the SuiteTalk SOAP-based web services API to
move data into and out of your NetSuite account, and simplifies working
with the API by:
 Making it easy to work with custom fields. When using the Query
action, you can use NetSuite's search criteria for objects (including
multi-select customizations) by referencing custom fields. Query
filters for RecordRef or List fields
include CustomLists and Multiselect fields. You also have options
to disable mandatory custom field validation and system notes for
custom fields.
 Retrieving and providing the text values for list-type fields and
internal IDs when querying. For example, it is easier to add query
filters to retrieve a custom list of items from inventory.
 Streamlining the handling of multi-select and select references to
objects by using comma-delimited values. When
defining Parameter Values, you can enter a comma-delimited list
in the Static Value field.
 Improving performance when inserting or updating large sets of
documents by grouping them into batches.
Standard and Custom Record Types
The NetSuite connector exposes all standard and custom record types
available in your NetSuite account. You can browse the list and import the
records you want to work with to automatically generate the request and
response XML profiles, including all standard and custom fields (except
custom segments). However, if you later modify the record type in
NetSuite, such as adding or renaming a custom field, you must reimport
the record to update the connector operation. The reimport is required
because the operation component caches the NetSuite internal metadata
about the record type, such as NetSuite data types and custom field script
IDs, to facilitate the interaction.
When reading and writing, the NetSuite connector manipulates the raw
XML data for custom fields to simplify working with Integration profiles
and mapping. This can make the document data within Integration differ
slightly from what is shown in the NetSuite Web Services Usage Logs.
Specifically, during imports the NetSuite connector uses
NetSuite’s Description field as the element name and caches the custom
field script ID within the operation component. This produces a more
strongly-typed XML and avoids the need to use qualifiers in the XML
profile to access a specific custom field.
Here is an example snippet from NetSuite:

<ns2:customField xsi:type="ns2:StringCustomFieldRef"
internalId="CUSTENTITYMYCUSTOMFIELD123">
    <ns2:value>My Value</ns2:value>
</ns2:customField>

Here is the corresponding document data in Integration:

<ns2:My_Custom_Field_123 xsi:type="ns2:StringCustomFieldRef"
internalId="CUSTENTITYMYCUSTOMFIELD123">
    <ns2:value>My Value</ns2:value>
</ns2:My_Custom_Field_123>

Error Processing
The NetSuite connector handles processing errors in different ways
depending on the error type.
 If records in NetSuite are damaged, such as having a dropped bit, the
connector throws a process-level exception and processing stops.
 If a connection or authorization error occurs, Integration retries the
connector run based on the Number of Retries setting in the
connection.
 If a document error occurs, such as an invalid ID or damaged data in
a document, processing does not stop.
Connection Pooling
The NetSuite connector automatically manages connection pooling behind
the scenes to avoid exceeding the maximum number of allowed
concurrent connections when multiple processes attempt to call NetSuite.
For example, when two processes try to query NetSuite simultaneously,
the second process waits to make the call until the first process is
complete.
note
Connection pooling does not span across JVMs (Molecule nodes or Atom
Cloud forked executions). When scheduling NetSuite processes to
a Molecule or Atom Cloud, either stagger runs or run them sequentially.
Connector configuration
To configure a connector to communicate with NetSuite, set up two
components:
 NetSuite connection
 NetSuite operation
This approach creates reusable components that contain connection
settings (such as URL, User, Password, Account) and operation settings
(such as Object, Action, data definitions). After building your connection
and operation, set up your NetSuite connector step within a process. With
the profiles generated during the operation configuration, a Boomi
Integration process can map to and from virtually any system using the
NetSuite connector to retrieve data from or send data to the NetSuite
application.
Prerequisites
To implement a connection to your NetSuite account from Integration, do
the following:
 For user credentials authentication, have your user name, password,
NetSuite account number to access the NetSuite web service, and
application ID.
 For token-based authentication (recommended), have your NetSuite
account number, consumer key and secret, token Id and secret, and
application ID. In NetSuite, you must also enable token-based
authentication for your NetSuite account, create a custom role,
assign the role to the appropriate user, create a new integration
record for Boomi Integration, and create an access token. For
complete details to configure the NetSuite connection with token-
based authentication, see the How to configure the NetSuite
Connection with Token Based Authentication article in the Boomi
Community.
 For both user credentials and token-based authentication, confirm
that your NetSuite account includes privileges to connect to the
NetSuite API.
 Deploy an Atom on your local machine or hosted with Boomi
Integration.
 Know which interface(s) from a business process perspective you
are going to integrate. For example, you may want to upload sales
orders from a flat file into NetSuite.
Supported editions
NetSuite supports the six latest endpoints at any given time. Change
notes for the supported NetSuite editions are located in the NetSuite
change notes topic.
With 2023.2, the supported WSDL versions are:
 2023.2
 2023.1
 2022.2
 2022.1
 2021.2
 2021.1.
The following endpoints are available but not supported by NetSuite:
 2020.1
 2019.2
 2019.1
 2018.2
 2018.1
 2017.2
 2017.1
Updating these endpoints to one of the supported WSDL versions is
recommended.
NetSuite has sunset the 2016.1 and 2016.2 and older endpoints. If you are
connecting to these endpoints, upgrading to a supported WSDL
endpoint is imperative.
For more information, refer to NetSuite documentation.
note
When creating a NetSuite connection, you may not see the version of the
NetSuite API you are using or upgrading to in the Version field. For
example, you may upgrade your NetSuite application instance to NetSuite
2022.1 but be unable to select version 2022.1 in the connection. New and
existing integration processes that access previous API versions work with
the latest NetSuite interface version. You only need to use the 2022.1 API
to access version-specific features.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
NetSuite supports both user credentials and token-based authentication
(2017.1 and later versions). When you create a connection, you can use
either approach:
 When using user credentials, enter your user name, password,
account number, and application ID.
 When using token-based authentication, do not enter your user
name, password, and application ID. Instead, enter your account
number, consumer key and secret, and token ID and secret.
NetSuite operation
The NetSuite operation defines how to interact with your NetSuite
account.
The operation represents a specific action (Create, Delete, Get, and so on)
to perform against a specific NetSuite record type or object. Not all actions
are available for every object. You must create a separate operation
component for each action and object combination that your integration
requires.
The NetSuite operation supports the following actions:
 Inbound — Get and Query. The operation requires unique
parameters at runtime for all inbound operations to request data. To
learn how to statically or dynamically define values in a process,
read the Parameter Values help topic.
 Outbound — Create, Delete, Execute, Update, and Upsert. For all
outbound operations, send request XML data into the connector that
corresponds with the defined XML profile. Usually, request
documents are the destination output of a map. You can set
parameters on the outbound connector's Parameters tab within
the process to supply static or dynamic request values.
NetSuite OpenAir connector
The NetSuite OpenAir connector enables you to move data into and out of
NetSuite OpenAir services resource planning (SRP) software.
Use the NetSuite OpenAir connector to integrate any combination of on-
premise and cloud applications with NetSuite OpenAir. The connector is
prebuilt for Salesforce.com. You can run business processes automatically
by integrating NetSuite OpenAir with other applications across an
organization, including very large enterprise systems like SAP and Oracle
in a two-tier model. For example, your company needs to track the sale,
progress, and billing of a service engagement across CRM, PSA and ERP
applications. When Boomi Integration integrates these applications to
automate this process, the results are greater efficiencies, a reduction in
errors, and improved visibility into all aspects of customer engagement.
If you customize NetSuite OpenAir modules with additional fields, the
fields automatically appear when browsing the NetSuite OpenAir interface
with Boomi Integration.
Integrating with NetSuite OpenAir consists of web service calls utilizing
SOAP request and response setup over an HTTPS connection. The
technical details of this connection, such as request headers, error
handling, HTTPS connection, and so on, are abstracted to simplify your
implementation.
Connector configuration
To configure a connector to communicate with NetSuite OpenAir, set up
two components:
 NetSuite OpenAir connection
 NetSuite OpenAir operation
This design provides reusable components containing connection settings
(such as the Endpoint URL, User, Password, and Company) and operation
settings (such as Object, Action, and data definitions). After building your
connection and operation, set up your connector within a process. When
you have correctly defined the NetSuite OpenAir connector within your
process, Integration can map to and from virtually any system using the
connector to retrieve or send data to the NetSuite OpenAir application.
Supported editions
The NetSuite OpenAir connector supports Version 2013-05-16 of the
OpenAir SOAP API.
Prerequisites
To implement a connection to your NetSuite OpenAir account from
Integration, you must have a valid user name and password so that you
can sign into the NetSuite OpenAir service.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
OpenAir SOAP Endpoint URL - The endpoint URL for the NetSuite
OpenAir service. For example, https://<OpenAirHostName>.com/soap. If
you are using a sandbox, your URL might look something like
this: https://sandbox.openair.com/soap.
Company - The company code used to determine the instance for the
connection.
Login User Name - The user name used to sign in to the NetSuite
OpenAir service.
Login Password - The password associated with the sign-in user name.
API Namespace - The namespace required for API access.
API Key - The code required for API access.
NetSuite OpenAir operation
The NetSuite OpenAir operation defines how to interact with your NetSuite
OpenAir instance.
The operation represents a specific action (Create, Delete, Get, and so on)
performed against a specific NetSuite OpenAir object (such as Project,
Time Sheet, Proposal). Not all actions are available for every object.
Create a separate operation component for each action and object
combination required for your integration.
The NetSuite OpenAir operation supports the following actions:
 Inbound
o Get: looks up data based on a single internal ID for the
NetSuite OpenAir object that you want to retrieve. The Get
action retrieves a single record from the NetSuite OpenAir
object defined in the Get operation. The bottom half of the
operation component page contains configuration options to
choose the fields to return.
o Query: looks up object records based on specific search
criteria. Supply static or dynamic request parameters (filters)
to control the selected object records. A single Query request
can return zero or more object record documents. After
selecting the Query connector action and using the Import
Wizard, choose the fields to return and add filters to limit the
results. Filters have four operators: Equal To, Not Equal To,
Newer Than, and Older Than (for date fields).
 Outbound
o Create
o Delete
o Update
Options tab
Click Import Operation, then use the Import wizard to select the object
with which you want to integrate. The following table describes the default
field definitions for each action type:
Object - Displays the object type that you selected in the Import
Operation wizard.
Tracking Direction - Select the document tracking direction for the
operation, either Input Documents or Output Documents. This setting
enables you to choose which document appears in Process
Reporting. Start steps always track output documents regardless of your
selection.
note
If the tracking direction is read-only, the feature to change the direction is
either unavailable or the developer set the configuration to read-only. The
default value you see shows you which document appears in Process
Reporting.
Request Profile (Create, Delete, Update only) - The XML profile
definition that represents the XML structure being sent by the connector.
Response Profile (Create, Get, Query, and Update only) - The XML
profile definition that represents the XML structure received by
the connector.
Return Application Error Responses - This setting controls whether an
application error prevents an operation from completing:
 If you clear the setting, the process stops and reports the error on
the Process Reporting page.
 If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.
Okta connector
The Okta connector allows you to interact with an Okta account,
leveraging Okta's REST APIs to easily provision and manage users.
The Okta Identity Cloud provides secure identity management with single
sign-on, multi-factor authentication, life cycle management (provisioning),
and more. The Okta connector makes it easier for Boomi customers to
provision Okta users and groups and to manage group membership (add,
update, and delete users in groups).
note
This is an open source connector and you can access the code in
the Okta connector repository.
Connector configuration
To configure the connector to communicate with Okta APIs, set up two
components:
 Okta connection — Represents a specific Okta instance and its
associated user authentication details.
 Okta operation — Represents an action against a specific Okta
instance (object type).
Prerequisites
To connect with Okta's APIs, you must have an Okta account. First, sign up
and create an Okta developer account. Once you have the unique Okta
subdomain and API Token, you can connect to Okta APIs.
Tracked properties
This connector has no predefined tracked properties.

Connection tab
Okta URL (oktaUrl) - Enter the Okta URL domain base string.
For example, dev-8275164.okta.com.
Okta API Token (oktaApiKey) - Enter the API Token for the respective
Okta domain.

Okta operation
The Okta operation represents a specific action to be performed against a
specific Okta account.
The Okta connector supports the following actions:
 Inbound — GET, LIST.
 Outbound — CREATE, UPDATE, DELETE, ADD, REMOVE.

OP Database - Partner connector


The OP Database — Partner Connector allows you to process or retrieve
data from any JDBC-supported database using Database drivers. It
enables you to Insert, Get, Update, Upsert, Delete and Execute Stored
procedures on the data available in the database.
note
The documentation for this connector is provided by a partner.
Connector configuration
To configure the connector, set up the following two components:
 Connection: The connection contains all connection settings.
 Operation: Represents an action used to interact with the provider.
Prerequisites
The connector requires the following:
 Database connection information, including server host/IP address
and port, user name, and password. The user name and password
configured on the database must have sufficient permissions to
access the objects required by the integration scenario. For more
information, see your database documentation.
 A database and tables. The Database connector does not create
tables or other objects.
 An installed and configured JDBC driver if using the custom
database type.
 Atom running with Java 11
Supported Editions
The connector supports any JDBC-compliant or ODBC-compliant database
(tested with the Microsoft SQL Server, PostgreSQL, H2, IBM DB2, and
MariaDB databases).
The connector comes with a set of pre-defined databases with pre-loaded
configurations and drivers:
 Microsoft SQL Server
 PostgreSQL
 IBM DB2
 MariaDB
Please note that while the connector offers comprehensive functionality
with these databases, there may be limitations with certain databases or
drivers. For example, the MongoDB JDBC driver has limited support,
typically allowing only SELECT statements in the cloud environment.
The Atom Cloud uses high-level Java security policies, meaning some JDBC
drivers may not work.
Tracked Properties
This connector has no predefined tracked properties.
Connection tab
Video Type - Link
 Setting up pre-defined connections - https://drive.google.com/file/d/12-cD8HwYET4Hp71sTB9oI4csAvknXx2J/preview
 Setting up custom connections - https://drive.google.com/file/d/1w6I5Wmo9B5jiFZjjdl3osTHN3TDJbfJf/preview
 Database Type: The type of database that you want to connect to
o Postgres
o MySQL
o MSSQL
o Oracle
o MariaDB
o IBM DB2
o SAP HANA
o Custom (Not one of the pre-defined databases)
 Host: Name or IP address of the database server (Only available
with pre-defined databases)
 Port: The port that is used to connect to the database server (Only
available with pre-defined databases)
 Database Name: The name of your database (Only available with
pre-defined databases)
 Connection URL: Refers to the Database URL. It is mandatory to
include the database name in the connection URL (Only available
with custom databases)
 Class Name: The class name of the database driver to be sent to
the respective database (Only available with custom databases)
 Case Sensitive Character: The case-sensitive character for the
database (Only available with custom databases)
 Upsert Type: The type of upsert to use (Only available with custom
databases)
 Field Case: The case the database uses for non-case-sensitive
fields (Only available with custom databases)
 Get All Tables Query: A command used to retrieve all tables if the
information cannot be retrieved through the metadata (Only
available with custom databases)
 Get All Stored Procedures Query: A command used to retrieve
all stored procedures if the information cannot be retrieved through
the metadata (Only available with custom databases)
 User Name: The database user on whose behalf the connection is
made
 Password: The user's password
 Schema Name: The name of your schema (MariaDB does not use
schemas)
 Connection Timeout (ms): Enter the maximum time (in
milliseconds) that the connector can wait to complete the
connection handshake with the database
 Read Timeout (ms): Enter the maximum time (in milliseconds) that
the Molecule or Atom Cloud can spend attempting to read data from
the database
 Enable Connection Pooling : If selected, connection pooling is
enabled, allowing connections to be reused for future requests.
Connection pooling may improve performance by utilising resources
more efficiently. The default is false.
o Maximum Connections: The maximum number of
connections allowed in the pool. The default is -1, which
indicates an unlimited number of connections. You will receive
an error if you attempt to exceed the maximum (Connection
pooling must be enabled)
o Minimum Connections: Enter the minimum number of
connections allowed in the pool. The default is 0 (Connection
pooling must be enabled)
o Maximum Idle Time: Enter the maximum amount of time (in
seconds) that a connection can remain open in an idle state.
The default is 0 (Connection pooling must be enabled)
o When Exhausted Action: Indicate whether to wait for a
connection or immediately fail, when a connection pool is
exhausted (Connection pooling must be enabled)
o Maximum Wait Time (Optional): Enter the maximum
amount of time (in seconds) to wait for a connection from the
pool. The default is 0.
o Test Connection When Borrowing from Pool: Select this
option to verify if a connection is still valid while taking it from
the pool (Connection pooling must be enabled)
o Test Connection When Returning from Pool: Select this
option to test if a connection is still valid when returning it to
the pool (Connection pooling must be enabled)
o Test Idle Connections: Select this option to test if the idle
connections are still valid (Connection pooling must be
enabled)
 Validation Query: Enter a simple SQL statement that returns a single
row of data if the connection is valid, for example, SELECT 1
(Connection pooling must be enabled)
 Connection Properties: Use the Add Property button to add
additional connection properties, if required to establish a
connection to the database.

JDBC drivers
Driver Type | Group ID | Artifact ID | Version
Postgres | org.postgresql | postgresql | 42.7.4
MySQL | com.mysql | mysql-connector-j | 9.1.0
MSSQL | com.microsoft.sqlserver | mssql-jdbc | 12.8.1.jre11
Oracle | com.oracle.database.jdbc | ojdbc11 | 23.5.0.24.07
MariaDB | — | mariadb-java-client-3.4.1.jar | 3.4.1
MariaDB | org.mariadb.jdbc | mariadb-java-client | 3.5.0
IBM DB2 | com.ibm.db2 | jcc | 11.5.9.0
SAP HANA | com.sap.cloud.db.jdbc | ngdbc | 2.22.12

OP Database - Partner operation


The OP Database — Partner operations define how to interact with your
database and represent a specific action (INSERT, GET, UPDATE, UPSERT,
DELETE, and STORED PROCEDURE) to be performed against one or many
database objects. Additionally, there is a VERSION operation to see which
build you are using. Apart from the UPSERT and STORED PROCEDURE
operations, all the other operations can be classified as:
 Standard Operation: Uses prepared SQL statements entered by
the user (a minimal sketch of this pattern follows the list).
 Dynamic Operation: Uses the statement class to build dynamic
SQL queries at runtime based on the user’s input.
 Entity Operation: Uses the statement class to build a dynamic SQL
query at runtime based on the primary keys; this operation
is performed on a single entity.
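The sketch below illustrates the prepared-statement pattern that a Standard operation relies on. It is plain JDBC, not connector internals, and the URL, credentials, table, and column names are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class StandardInsertSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://db-host:5432/appdb";
        try (Connection conn = DriverManager.getConnection(url, "appUser", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO customers (id, name) VALUES (?, ?)")) {
            ps.setInt(1, 101);           // bind values instead of
            ps.setString(2, "Acme Co");  // concatenating them into the SQL
            ps.executeUpdate();
        }
    }
}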
OP Intelligence - Partner connector
Connector configuration
To configure the connector, set up the following two components:
 OP Intelligence connection: The connection contains all connection
settings.
 OP Intelligence operation: Represents an action used to interact with
the provider.
Prerequisites
The connector requires the following:
 Access to an LLM Provider
 Atom running with Java 11
Tracked Properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
 Request ID: The unique identifier for each request made through
the connector.
 Input Tokens: The request sent to the LLM API, containing the text
or data for processing.
 Output Tokens: The response received from the LLM API, which
includes the output text or data.
 Finish Reason: The reason why the request was completed, such
as completed, stopped, or error.
 Tool Call ID: A unique identifier for each call made to a specific tool
or function within the connector.
 Tool Call Name: The name of the tool or function that was
executed during the request.
 Response Fingerprint: A hash or unique identifier generated
based on the output tokens, useful for comparing responses.
 Reasoning: An explanation or summary of the logic that led to
generating the output tokens.
Connection tab
 URL: (Optional) Set if you want to override the default URL.
 API Key: The API key used to connect to the LLM provider.
 License Key: Used for feature licensing. No license is needed on the
Atom Cloud; for on-premises Atoms, no license is required for up to
1000 requests per hour.
 Default Chat Model: (Optional) Default chat model if not specified
in the operation.
 Default Embedding Model: (Optional) Default embedding model
if not specified in the operation.
 Connection Timeout: The maximum wait time, in milliseconds, to
establish a connection to a service.
 Read Timeout: The maximum wait time, in milliseconds, to return
all the data from a service.
 Connection Pooling Settings: If selected, connection pooling is
enabled, allowing connections to be reused for future requests.
Connection pools may improve performance by utilising resources
more efficiently.
Embeddings operation
The Embeddings operation generates vector representations
(embeddings) for input text, capturing semantic meaning for applications
like similarity search, clustering, and classification. The output is a JSON
object containing the embedding vector, along with metadata such as input
token usage. A small usage sketch follows the parameter list below. You
can configure this operation using the following parameters:
 Follow Redirects: A string specifying how redirects are handled.
 Input Type: A value indicating what format the input is expected to
be in.
 Model: An optional string that overrides the default LLM model
defined in the connection.
 Advanced Settings: If this parameter is set to true then you will
be able to set query parameters and request headers.
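As a minimal, connector-agnostic sketch of how returned embedding vectors are typically used for similarity search, the following compares two vectors with cosine similarity; the vector values are invented:

public class CosineSimilaritySketch {
    // Cosine similarity: values near 1.0 indicate semantically similar inputs.
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // Stand-ins for two embedding vectors returned by the operation.
        double[] textA = {0.12, -0.03, 0.88};
        double[] textB = {0.10, -0.01, 0.91};
        System.out.printf("similarity = %.4f%n", cosine(textA, textB));
    }
}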
OP WebSearch - Partner connector
The OP WebSearch — Partner connector offers a robust and scalable
solution for web search and data extraction, leveraging various APIs and
custom connectors to perform searches and retrieve data from multiple
sources. This connector is designed to facilitate seamless integration with
a range of search engines, allowing users to access a wealth of
information effortlessly.
note
The documentation for this connector is provided by a partner.
Connector configuration
To configure the connector, set up the following two components:
 OP Websearch - Partner connection: The connection contains all
connection settings.
 OP Websearch - Partner operation: Represents an action used to
interact with the provider.
Prerequisites
The connector requires the following:
 API key for the chosen provider (if necessary)
 Atom running with Java 11
Tracked properties
This connector has no predefined tracked properties.
Connection tab
 Provider: The search engine provider for the WebSearch. Available
options include:
o DuckDuckGo (Default, Free, SEARCH only)
o Google (SEARCH only)
o Bing (SEARCH only)
o Serp (SEARCH only)
o Tavily (BOTH)
o ScrapingBee (BOTH)
o Firecrawl (EXTRACT only)
 API Key: (Required for specific providers) Enter the API key to
connect to the chosen search provider. The API key is required for
the following providers:
o Google
o Bing
o Serp
o Tavily
o ScrapingBee
o Firecrawl
 Connection Timeout: The maximum wait time, in milliseconds, to
establish a connection to the search provider. A value less than or
equal to zero means that the process will wait indefinitely. The
default is -1.
 Read Timeout: The maximum wait time, in milliseconds, to
retrieve all data from the search provider. A value less than or equal
to zero means the process will wait indefinitely. The default is -1.
 Enable Connection Pooling: If selected, connection pooling is
enabled, allowing connections to be reused for future requests.
Connection pooling may improve performance by utilizing resources
more efficiently. The default is false.
 Max Simultaneous Requests: The maximum number of
simultaneous requests allowed in the connection at any time. The
default is 20. If connection pooling is enabled and this value is not
set or is -1 or 0, the default of 20 is used.
 Idle Timeout (seconds): The maximum amount of time, in
seconds, that a connection can remain idle before being closed. The
default is 30. This setting is applicable only if connection pooling is
enabled.
Search operation
The Search operation uses a query string to search for relevant articles.
The output of this operation is a JSON array of results that include the title,
the URL for the page, and a snippet of the article; a sketch of consuming
this output follows the parameter list. You can configure this
operation using the following parameters:
 Follow Redirects: A string specifying how redirects are handled
 Grouped Results: A boolean value indicating whether to group the
results by the title or not
 Empty If No Results: A boolean value indicating whether to return
an empty result if no results are found
 Input Type: A value indicating what format the input is expected to
be in
 Search Path: If the Input Type is JSON or XML, you will need to
specify the path where the input data is located
 Remove HTML Tags: A boolean indicating whether the HTML tags
should be removed
 Max Results: An optional integer defining the maximum number of
items returned by each request
 Language: An optional string specifying the language of the search
results
 Country: An optional string specifying the country to narrow down
the search results geographically
 Site: An optional string specifying a particular website or domain to
search within
 Engine: An optional string specifying the search engine to use for
processing the query (Required for Google's API)
 Advanced Settings: If this parameter is set to true, then you will
be able to set query parameters and request headers
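A minimal consumer sketch, assuming the result-array shape described above (the exact field names are assumptions) and using the Jackson library:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SearchResultsSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the operation's output document.
        String json = "[{\"title\":\"Example\","
                + "\"url\":\"https://example.com\","
                + "\"snippet\":\"An example result.\"}]";
        JsonNode results = new ObjectMapper().readTree(json);
        for (JsonNode result : results) {
            System.out.println(result.path("title").asText()
                    + " -> " + result.path("url").asText());
        }
    }
}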
Extract operation
The Extract operation uses a URL to return a JSON object containing the
title, URL and contents of the article the URL links to. You can configure
this operation using the following parameters:
 Follow Redirects: A string specifying how redirects are handled
 Empty If No Results: A boolean value indicating whether to return
an empty result if no results are found
 Input Type: A value indicating what format the input is expected to
be in
 Search Path: If the Input Type is JSON or XML, you will need to
specify the path where the input data is located
 Remove HTML Tags: A boolean indicating whether the HTML tags
should be removed
 Max Results: An optional integer defining the maximum number of
items returned by each request
 Engine: An optional string specifying the search engine to use for
processing the query (Required for Google's API)
 Advanced Settings: If this parameter is set to true, then you will
be able to set query parameters and request headers
OP Wikipedia - Partner connector
The OP Wikipedia — Partner Connector offers a robust and scalable
solution for searching and extracting data from Wikipedia by leveraging
the Wikipedia API. This connector facilitates seamless integration with
Wikipedia, allowing users to access a wealth of information effortlessly.
note
The documentation for this connector is provided by a partner.
Connector configuration
To configure the connector, set up the following two components:
 OP Wikipedia connection: The connection contains all connection
settings.
 OP Wikipedia operation: Represents an action used to interact with
the provider.
Prerequisites
The connector requires the following:
 Atom running with Java 11
Tracked properties
This connector has no predefined tracked properties.
Connection tab
 Connection Timeout: The maximum wait time, in milliseconds, to
establish a connection to the Wikipedia service. A value less than or
equal to zero means that the process will wait indefinitely until the
connection is made. The default is -1.
 Read Timeout: The maximum wait time, in milliseconds, to
retrieve all the data from the Wikipedia service. A value less than or
equal to zero means that the process will wait indefinitely until all
the data has been returned. The default is -1.
 Enable Connection Pooling: If selected, connection pooling is
enabled, allowing connections to be reused for future requests.
Connection pooling may improve performance by utilising resources
more efficiently. The default is false.
 Max Simultaneous Requests: The maximum number of
simultaneous requests allowed in the connection at any time. The
default is 20. If connection pooling is enabled and this value is not
set or is set to -1 or 0, the default of 20 is used.
 Idle Timeout (seconds): The maximum length of time, in seconds,
that a connection can remain open while idle. The default is 30. This
setting is applicable only if connection pooling is enabled.
Search operation
The Search operation uses a query string to search for relevant Wikipedia
pages. The output of this operation is a JSON array of results that include
the title, the URL for the page, a snippet of the article and a timestamp of
its last updated date. You can configure this operation using the following
parameters:
 Follow Redirects: A string specifying how redirects are handled.
 Grouped Results: A boolean value indicating whether to group the
results by the title or not.
 Empty If No Results: A boolean value indicating whether to return
an empty result if no results are found.
 Input Type: A value indicating what format the input is expected to
be in.
 Search Path: If the Input Type is JSON or XML, you will need to
specify the path where the input data is located.
 Remove HTML Tags: A boolean indicating whether the HTML tags
should be removed.
 Max Results: An optional integer defining the maximum number of
items returned by each request.
 Advanced Settings: If this parameter is set to true then you will
be able to set query parameters and request headers.
Extract operation
The Extract operation uses a URL to extract the contents of a Wikipedia
article. You can configure this operation using the following parameters:
 Follow Redirects: A string specifying how redirects are handled.
 Empty If No Results: A boolean value indicating whether to return
an empty result if no results are found.
 Input Type: A value indicating what format the input is expected to
be in.
 Search Path: If the Input Type is JSON or XML, you will need to
specify the path where the input data is located.
 Remove HTML Tags: A boolean indicating whether the HTML tags
should be removed.
 Max Results: An optional integer defining the maximum number of
items returned by each request.
 Advanced Settings: If this parameter is set to true then you will
be able to set query parameters and request headers.
Version operation
Outputs the build version so you can confirm that the version Boomi
reports matches the version actually in use.

OpenManage Enterprise connector


The OpenManage Enterprise connector allows you to connect to Dell
OpenManage Enterprise (OME) and use a process to query Alerts (based
on ID, SeverityType, and Timestamp filters) in Dell OME and receive
output data in JSON format.
The OpenManage Enterprise connector enables you to access data in your
OME system and share that data with other systems. The robust
filtering options let you selectively choose alerts based on specific criteria
(ID, SeverityType, Timestamp).
Connector configuration
To configure the connector to communicate with OME, set up two
components:
 OpenManage Enterprise connection
 OpenManage Enterprise operation
This design provides reusable components containing connection settings
and operation settings. After building your connection and operation, set
up your connector within a process. When the process is defined properly,
Integration can map to and from virtually any system using the
OpenManage Enterprise connector to communicate with DELL Open
Manage Enterprise (OME).
Tracked properties
This connector has no predefined tracked properties.

Connection tab
Username: Enter your user name to log into your OME account.
Password: Enter your password for your OME account.
IP Address: Enter the Hostname or IP address of the OME system.
Enable SSL (Optional): Select this if your OME instance supports SSL.

OpenManage Enterprise operation


The OpenManage Enterprise operations define how to interact with Dell
OME and represent a specific action (Query) to be performed against a
specific object.
Create a separate operation component for each action/object
combination that your integration requires. A REST sketch follows the
action list below.
The OpenManage Enterprise operations support the following action:
 Inbound: Query
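For orientation only, the sketch below shows the kind of REST call that underlies an alert query. The host, endpoint path, filter syntax, and severity value are assumptions based on the public OME REST API, not documented connector behavior, and authentication is omitted:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OmeAlertsSketch {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint and OData-style severity filter; ome-host is a placeholder.
        String url = "https://ome-host/api/AlertService/Alerts"
                + "?$filter=SeverityType%20eq%2016";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Accept", "application/json")
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON alert payload
    }
}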

Oracle CRM On Demand connector


The Oracle CRM On Demand connector enables you to move data into and
out of the Oracle CRM On Demand solutions, and the Oracle CRM On
Demand Industry Editions solutions.
This connector connects directly to your Oracle CRM On Demand
application and enables you to Get, Query, Create, Execute, Update, and
Delete various types of records.
note
The API documentation is available after you sign into Oracle CRM On
Demand. The documentation is located in Admin > Web Services
Administration. The Admin page displays generic and custom WSDLs.
Boomi Integration interacts with the custom WSDLs.
Connector configuration
To configure a connector to communicate with Oracle CRM On Demand,
set up two components:
 Oracle CRM On Demand connection
 Oracle CRM On Demand operation
This design provides reusable components, which contain connection
settings (such as URL, user name, password) and operation settings (such
as Object, Action, data definitions). After building your connection and
operation, set up your connector within a process. When you define the
Oracle CRM On Demand connector within your process, Integration can
map to and from virtually any system using the connector to retrieve data
from or send data to the Oracle CRM On Demand application.
Tracked properties
This connector has no predefined tracked properties.

Connection tab
Oracle CRM On Demand Host URL - The URL for the Oracle CRM On
Demand host serving the WSDL. For example: https://secure-
slsomxvga.crmondemand.com. To locate the correct URL, log in to Oracle
CRM On Demand, go to the Admin section, and find the host name for the
WSDL.
User Name - The user name for signing in to the host.
Password - The password for signing in to the host.
Is Staging - Used to indicate if this is a staging (testing) environment.

Oracle CRM On Demand operation


The Oracle CRM On Demand operation defines how to interact with your
Oracle CRM On Demand account.
Each operation represents a specific action performed against a specific
Oracle CRM On Demand object type (such as Core::Account,
Partner::Accreditation). Create a separate operation component for each
object required for your integration.
The following services and objects are available when you use the inbound
(Get or Query) actions and most outbound (Create, Delete, or Update)
actions:
Service Name - Object Names
Webservice_V2 (core edition) - Account, Activity, Asset, Book,
Campaign, Category, Contact, Group, Lead, Note, Opportunity, Product,
Service Request, Solution, Territory, and User. You can also access up to 15
custom objects, named Custom Object 01 through 15.
Webservice_V2 (partner edition) - Accreditation, Accreditation
Request, Application, Certification, Certification Request, Course, Course
Enrollment, Deal Registration, Exam, Exam Registration, Fund, MDF
Request, Partner, Price List, Price List Line Item, Special Pricing Request,
and Special Pricing Request Line Item.
Webservice_V2 (life sciences edition) - Allocation, Business Plan,
Contact Best Time, Contact State License, Inventory Audit Report,
Inventory Period, MedEd, Message Response, Messaging Plan, Messaging
Plan Item, Messaging Plan Item Relation, Modification Tracking, Objective,
Plan Account, Plan Contact, Plan Opportunity, Sample Disclaimer, Sample
Inventory, Sample Lot, Sample Transaction, Signature, and Transaction
Item.
Webservice_V2 (financial services edition) - Claim, Coverage,
Damage, Financial Account, Financial Account Holder, Financial Account
Holding, Financial Plan, Financial Product, Financial Transaction,
Household, Insurance Property, Involved Party, Policy, Policy Holder, and
Portfolio.
Webservice_V2 (automotive edition) - Dealer and Vehicle.

Oracle Database connector


The Oracle Database connector allows you to select, insert, update,
delete, and upsert records, and also allows you to call stored procedures in
an Oracle database.
The Oracle Database connector uses OJDBC to communicate to the
database and to manipulate the records. The connector provides the
option to insert the SQL statement manually, or dynamically build the SQL
Statement based on the input parameters. An OJDBC driver establishes a
connection to the database through the connection. You can interact with
the database, for example, and create a statement to execute SQL queries
against tables. The connector also supports leveraging the Oracle Wallet
feature for enhanced security.
The connector is supported in both local and cloud (public and
private) Atoms.
Connector configuration
To configure the connector to communicate with the Oracle database, set
up the following components:
 Oracle Database connection
 Oracle Database operation
This design provides reusable components which contain connection
settings and operation settings. After building your connection and
operation, set up your connector within a process. When the process is
defined properly, Integration can map to and from virtually any system
using the Oracle Database connector to work with the Oracle database.
note
This is an open source connector and you can access the code in
the Oracle Database connector repository.
Supported database and editions
The connector has been tested with Oracle Database 18c and Oracle JDBC
driver "ojdbc8.jar".
Limitations
 The Oracle Database connector does not work for databases hosted
on the OCI cloud.
o The SQL Query field does not auto-populate the SQL query
statement if more than one table is used in the SQL query.
 Named parameters ($param) are applicable only for the Get operation.
 The Dynamic Upsert operation does not support composite keys
(multiple primary keys) because this operation uses the DUAL table for
building the SQL query dynamically in the connector. Because the DUAL
table accommodates only one column, composite keys are not supported
in the Dynamic Upsert operation. A sketch of this MERGE-via-DUAL
pattern follows this list.
note
If a table has composite keys, you must select Standard Upsert as the
operation type and enter an INSERT query in the SQL Query text
box. The connector then decides whether to insert or update the record
based on the keys or the availability of the record in the table.
 StoredProcedure -
o Procedure overloading is not supported.
o Stored procedures with a Ref Cursor data type as an IN
parameter import the profile successfully instead of
throwing an exception, but input cannot be passed and the
parameter does not appear in the request profile.
o Oracle functions with OUT parameters are not supported.
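To make the single-key constraint concrete, here is a hedged sketch (with an assumed customers table and placeholder credentials) of the MERGE-via-DUAL pattern referred to above; note the USING clause selects exactly one key column from DUAL:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class DynamicUpsertSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@localhost:1521:xe";
        String merge =
                "MERGE INTO customers c "
              + "USING (SELECT ? AS id FROM DUAL) src "   // single key only
              + "ON (c.id = src.id) "
              + "WHEN MATCHED THEN UPDATE SET c.name = ? "
              + "WHEN NOT MATCHED THEN INSERT (id, name) VALUES (src.id, ?)";
        try (Connection conn = DriverManager.getConnection(url, "appUser", "secret");
             PreparedStatement ps = conn.prepareStatement(merge)) {
            ps.setInt(1, 101);
            ps.setString(2, "Acme Co");
            ps.setString(3, "Acme Co");
            ps.executeUpdate();
        }
    }
}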
Prerequisites
The Oracle Database connector requires the following:
 An installed and configured Oracle database.
 The OJDBC database driver. You can download the driver from the
following locations:
o https://www.oracle.com/uk/database/technologies/appdev/jdbc-downloads.html
o https://mvnrepository.com/artifact/com.oracle.jdbc/ojdbc8/18.3.0.0
 URL of the database instance.
 User name and password for a database account.
 An Atom installed locally to the database server or to another
machine within your network that can access the database.
 OJDBC driver JAR must be configured in the Boomi Custom Library.
To do this,
o In Boomi Integration, upload the driver JAR file to an account
by using the Manage Account Libraries page (Settings —
Account Libraries). For more details, see the topic Account
library management.
o Create a Custom Library Component with the Custom Library
Type as “Connector” and the Connector Type as Oracle
Database, and then add the uploaded driver JAR file.
o Deploy the custom library component to the Atom. For more
information, see the topic Working with custom libraries.
 To use the XMLType data type, the following JARs must be
deployed in the Custom Library:
o ons
o oraclepki
o osdt_cert
o osdt_core
o simplefan
o ucp
o xdb6
o xmlparserv2
 To use Oracle Wallet/TCPS, you require the following jars in the
Custom Library:
o oraclepki
o osdt_cert
o osdt_core
note
You can directly download these JARs from:
https://www.oracle.com/database/technologies/jdbc-ucp-122-downloads.html
Tracked properties
This connector has no predefined tracked properties.

Connection tab
Connection URL: Refers to your Oracle database URL, in the format
jdbc:oracle:thin:@hostname:portNumber:databaseName
For example,
jdbc:oracle:thin:@localhost:1521:xe

Class Name: Refers to the class name of the database driver used to
communicate with the respective database.
User Name: Refers to the database user on whose behalf the connection
is made.
Password: Refers to the user’s password.
Schema Name: Enter the database schema name to fetch the table
names for selection. You can also provide the schema name at the time of
operation import. If the schema value does not exist in either the
Connection or the Operation settings, the schema name is taken from
the user name.
note
If any existing integration process uses a different schema name
in the Connection and Import, make sure to reimport the profile.
Connection Timeout (ms): Enter the maximum time (in milliseconds)
that the connector can wait to complete the connection handshake with
the database.
Read Timeout: Enter the maximum time (in milliseconds) that the
connector can spend attempting to read data from the database. If the
value is left blank, the default value of 600 seconds is used. The read
timeout value entered in the Connection UI can be overridden by passing
a new value via the Message step.

Oracle Database operation


The Oracle Database operation represents an action against a specific
database table (Object Type).
Create a separate operation component for each action/object
combination that your integration requires. Apart from stored procedures,
all the operations can be classified into the following types:
 Dynamic Operation — utilizes the prepared statement class to
build the dynamic queries.
 Standard Operation — utilizes a prepared statement.

Supported Operations and Data Types


The connector builds the JSON schema automatically by identifying the
data types of each column in the table. The table names are selected in
the Object type drop-down. The table below provides the list of operations
and data types supported in the database-specific connectors.
Connector: Database V2 Connector (MySQL)
Supported Operations: INSERT, GET, UPDATE, UPSERT, DELETE, STORED PROCEDURE
Supported Data Types: INT, BIGINT, SMALLINT, TINYINT, CHAR, VARCHAR, NVARCHAR, NCHAR, LONGNVARCHAR (text), LONGVARCHAR (text), DATETIME, TIMESTAMP, BOOLEAN, DECIMAL, DOUBLE, FLOAT, CLOB, JSON, BLOB, LONGBLOB.

Connector: Database V2 Connector (PostgreSQL)
Supported Operations: Same as Database V2 Connector (MySQL).
Supported Data Types: INT, SMALLINT, BIGINT, CHAR, NCHAR, VARCHAR, VARCHAR(n), LONGNVARCHAR (text), LONGVARCHAR (text), DATETIME, TIMESTAMP, DECIMAL (Numeric), DOUBLE (Double Precision), BOOLEAN, CLOB, JSON, FLOAT, BYTEA (variable-length binary string).

Connector: Database V2 Connector (MS SQL)
Supported Operations: Same as Database V2 Connector (MySQL).
Supported Data Types: INT, BIGINT, SMALLINT, TINYINT, CHAR, VARCHAR, NVARCHAR, NCHAR, LONGNVARCHAR (NTEXT), LONGVARCHAR (TEXT), DATETIME, BOOLEAN, DECIMAL, DOUBLE PRECISION, FLOAT, CLOB, JSON, BLOB (Varbinary/Image).

Connector: Database V2 Connector (Oracle)
Supported Operations: Same as Database V2 Connector (MySQL).
Supported Data Types: INT, SMALLINT, DECIMAL, FLOAT, DOUBLE PRECISION/NUMBER, CHAR, NCHAR, VARCHAR, NVARCHAR2, LONGNVARCHAR (LONG), LONGVARCHAR (LONG), DATETIME, TIMESTAMP, CLOB, JSON, BLOB.

Connector: Oracle Database Connector
Supported Operations: INSERT, GET, UPDATE, UPSERT, DELETE, STORED PROCEDURE
Supported Data Types:
• INT, VARCHAR, DATE, TIME, TIMESTAMP, CLOB, VARRAY, BOOLEAN, JSON, CHAR, LONGVARCHAR, NVARCHAR, DECIMAL, NUMERIC, BIT, DOUBLE, FLOAT, REAL, BLOB, BINARY, VARBINARY - applicable for all operations.
• XMLType - applicable for Stored Procedure only.
• NESTED TABLE and VARRAY - applicable for Stored Procedure only.
• NESTED TABLE, OBJECT (supported data types: NUMBER, VARCHAR, DATE, TIMESTAMP, CHAR, NCHAR, DOUBLE, FLOAT, DECIMAL, NUMERIC, REAL, BLOB, INTEGER, SMALLINT). These data types are also applicable for the object type attribute used in the nested table.
• VARRAY (supported data types): NUMBER, VARCHAR, DATE, FLOAT, NCHAR, NVARCHAR, TIMESTAMP, LONGVARCHAR, INTEGER, TINYINT, SMALLINT, DOUBLE, DECIMAL, NUMERIC.
• VARRAY (supported user-defined data types): INTEGER, TINYINT, SMALLINT, DECIMAL, NUMERIC, BOOLEAN, NVARCHAR, BIGINT, DOUBLE, FLOAT, VARCHAR, CHAR, LONGVARCHAR, NCHAR, LONGNVARCHAR, REAL, BLOB, BINARY, LONGVARBINARY, VARBINARY, DATE, and TIMESTAMP.
Note: In Oracle Database, usage of the BOOLEAN data type within VARRAY and TABLE TYPE is not supported. Similarly, the BLOB data type is not supported in VARRAY type.
Supported TIMESTAMP formats: yyyy-mm-dd hh:mm:ss and yyyy-mm-dd hh:mm:ss.ff.
Supported DATE formats: dd-MMM-yyyy and dd/MMM/yyyy.
The following format is applicable for both the TIMESTAMP and DATE data types (all operations): YYYY-MM-DD HH:MI:SS:FF.
NESTED TABLE and VARRAY are also applicable for the INSERT, GET, UPDATE, and DELETE operations; note that this is supported only for a nested table of object type and a single record.

Connector: SAP HANA Database Connector
Supported Operations: INSERT, QUERY, UPDATE, UPSERT, DELETE, STORED PROCEDURE, BULK LOAD, BULK UNLOAD
Supported Data Types: INT, VARCHAR, NVARCHAR, NCHAR, CLOB, DATE, TIME, TIMESTAMP, BOOLEAN.

Changelog
2024-01 (Version 13, Current)
 Necessary runtime packages are now packaged with the connector.
 The connector correctly updates the database with Null when the input
is provided as null for a nullable field/column, and throws an error when
the input is null for a non-nullable field/column. (CONC-2667)
 The connector successfully retrieves values larger or smaller than the
INTEGER limits (-2147483648 to 2147483647) in NUMBER type
columns for the Oracle database without any error. (CONC-1793)
 The connector handles numeric data bigger than “2147483647” as a
decimal value.
 Test connections work with or without connection pooling
enabled for the Oracle Database step.
 A vulnerability issue of type Denial of Service was resolved to make
the connector more secure.

Oracle E-Business (EBS) connector


The Oracle E-Business (EBS) connector (classified as an Enterprise
connector) enables you to connect to the Oracle E-Business Suite. This
integration runs database stored procedure calls (Execute operation) and
queries (Query operation) using Java (JDBC).
Supported integrations
Attention
The Oracle E-Business (EBS) connector is not available for use by
customers who have not previously used it. Boomi encourages you to take
advantage of the functionality provided by the Oracle E-Business Suite V2
connector.
The connector supports the following integrations:
 Execute operations call Oracle PL/SQL stored procedures as defined
in the Oracle Integration Repository PL/SQL interfaces. PL/SQL
procedures are categorized by module, package, and procedure
name.
 Query operations run basic SQL SELECT queries for E-Business
objects for which you define the returned fields, specify nested filter
expressions, and return results sorted by one or more fields.
 The Oracle EBS connector supports stored procedures with the
collection types TABLE, ARRAY, and VARRAY as arguments and as
nested types for RAW and ANYDATA data types.
When you perform an import, it adds both the Execute and Query
operations using the metadata in the Oracle E-Business database schema.
Connector configuration
To configure a connector to communicate with the Oracle E-Business
Suite, set up two components:
 Oracle E-Business connection
 Oracle E-Business operation
This design provides reusable components that contain connection
settings (such as URL, user name, password) and operation settings (such
as Object, Action, data definitions). After building your connection and
operation, set up your connector within a process. When you define the
Oracle E-Business connector within your process, Integration can map to
and from virtually any system using the connector to retrieve or send data
to the Oracle E-Business Suite.
Supported editions
Boomi Integration supports Oracle E-Business Suite Version 11.
Important
If you need to integrate with Oracle E-Business Suite version 12 or later,
use the HTTP Client connector or Web Services SOAP Client connector. You
can use these connectors to integrate with Oracle E-Business Suite by
interfacing with the Oracle E-Business Suite Integrated SOA Gateway
(ISG).
Limitations
The following known limitations affect this connector:
 Boomi supports running PL/SQL stored procedure calls. However,
Boomi does not currently support function calls.
 Boomi does not support all Oracle data types. If a Query or Execute
stored procedure’s defined parameter or result set uses one of the
following data types, you receive an exception error:
o BLOB
o CLOB
o ROW
o SDO_GEOMETRY (Spatial)
o XML
Prerequisites
To implement a connection to the Oracle E-Business Suite from Integration:
 You must install an Atom, which you can install on a different
machine than the Oracle E-Business Suite. However, the Atom must
have network connectivity to the Oracle E-Business Suite machine.
 The user must have access to the Oracle E-Business Suite database
and objects in order to query, execute PL/SQL, and create and
delete temporary database objects.
 The Oracle E-Business connector requires a database account that
has CREATE privileges. See your Oracle E-Business documentation
or contact your Oracle E-Business administrator for assistance.
Implementation considerations
If you are unsure whether to use the Database connector or the Oracle E-
Business connector, consider the following:
 Are you querying data from Oracle?
o You can do many database queries using the Database
connector if you know the tables and elements to access and
the SQL queries to perform. The Oracle E-Business connector
enables you to query specific business objects and tables but
not write complex SQL queries such as joins. The Database
connector allows this.
 Are you writing data to Oracle?
o If you are writing data to Oracle, you can write to the Oracle
Interface tables in the E-Business schema using the Database
connector. You can then use the tools from within the E-
Business Suite to validate and load the data from the Interface
tables into the production tables. A common example is orders
that require approval or inventory checks before being
fulfilled.
o If you are writing data to Oracle and using the Oracle E-
Business connector, which leverages PL/SQL (API) procedures
as opposed to the ODBC/JDBC database drivers used by the
Database connector, you can avoid writing to the Interface
tables and leverage the logic built into the PL/SQL procedures
that write directly to the production tables.
o You can write directly to the production tables using the
Database connector. This process works well if you know
which tables and elements to update or insert. You must also
be familiar with the interdependencies between the tables in
the Oracle E-Business Suite.
The primary advantage of the Oracle E-Business connector is the logic and
validation in the PL/SQL API code that automatically performs the updates
to multiple underlying tables and maintains the interdependencies
between the associated tables.
Tracked properties
This connector has no predefined tracked properties.
Oracle E-Business (EBS) connection
The Oracle E-Business (EBS) connection represents a single Oracle E-
Business Suite instance, including sign-in credentials.
If you use multiple databases, you need a separate connection for each
database. You can pair the connection with different Oracle E-Business
operations to perform a unique action against an Oracle E-Business Suite
instance.
Connection pooling
When you enable Pooled Connection, it seeds the pool with one initial
connection. It creates new connections as needed up to the maximum of
32. The pool size, however, is determined by the minimum. If there is a
discrepancy between the minimum and maximum, it closes additional
connections above the minimum on return. As a result, the minimum
number of connections is set to equal the maximum. There are two dead
connection checks: a 10–minute check on the pool and a 30–minute check
that shuts down pools that have not been used within the last ten
minutes.
Default pool settings
 Total number of connections: 1
 Number of initial connections: 1
 Number of available connections: 1
 Minimum number of connections: 32
 Maximum number of connections: 32
 Maximum number of open sessions: 32
 Maximum wait time for a connection: 180 seconds
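The connector's pool internals are not published, but the defaults above roughly correspond to the following Oracle UCP configuration (UCP is the pooling library the related V2 connector lists as a dependency); the URL and credentials are hypothetical placeholders.

```java
import java.sql.Connection;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class PoolDefaultsSketch {
    public static void main(String[] args) throws Exception {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@ebs-db-host:1531/orcl"); // hypothetical instance
        pds.setUser("apps");                 // hypothetical credentials
        pds.setPassword("apps_password");
        pds.setInitialPoolSize(1);           // one initial (seed) connection
        pds.setMinPoolSize(32);              // minimum is set equal to the maximum,
        pds.setMaxPoolSize(32);              // as described above
        pds.setConnectionWaitTimeout(180);   // seconds to wait for a free connection
        try (Connection conn = pds.getConnection()) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}
```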
Connection tab
JDBC Connection String - (Optional) Use to specify the JDBC connection
string. If you leave this field blank, it uses the host, port, instance, and
sign-in credentials to connect to the Oracle E-Business Suite instance.
When specified, the string must start as follows: jdbc:oracle:thin:@. For example, jdbc:oracle:thin:@54.221.8.208:1531/orcl. In addition, you must specify the user name and password. (A sketch of how these fields compose the URL appears after the field descriptions below.)
Host - The host name to use if you do not specify the JDBC Connection
String.
Port - The port number to use if you do not specify the JDBC Connection
String.
Instance - The instance name to use if you do not specify the JDBC
Connection String.
User Name - The user name for signing into the JDBC or host. The user
must have the appropriate access to the Oracle E-Business objects.
Password - The password for signing in to the JDBC or host.
Pooled Connection - If selected, it pools connections.
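When the JDBC Connection String field is left blank, the connection presumably composes the thin-driver URL from the Host, Port, and Instance fields; a minimal sketch of that composition, using the example values from the JDBC Connection String description above:

```java
public class JdbcUrlSketch {
    public static void main(String[] args) {
        String host = "54.221.8.208"; // Host field
        int port = 1531;              // Port field
        String instance = "orcl";     // Instance field
        // Service-name form of the Oracle thin URL, matching the example above:
        String url = "jdbc:oracle:thin:@" + host + ":" + port + "/" + instance;
        System.out.println(url);      // jdbc:oracle:thin:@54.221.8.208:1531/orcl
    }
}
```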
Oracle E-Business (EBS) operation
The Oracle E-Business (EBS) connector supports the Execute and Query
operations that use JDBC to run database calls against the Oracle E-
Business Suite instance. The Execute action runs Oracle E-Business-
specific stored procedures contained within modules.
Oracle E-Business Suite V2 connector
The Oracle E-Business Suite (EBS) V2 connector (classified as an
Enterprise connector) allows you to connect to an Oracle EBS instance.
You can interact with Oracle EBS based on the applicable Interface types
defined in the Oracle Integration Repository, or establish a direct
connection to the Oracle Database via Oracle JDBC drivers.
Features
The Oracle E-Business Suite V2 connector supports the following features
to connect to Oracle EBS systems:
 Perform REST_Query and REST_Execute operations on the Oracle
EBS data through the REST APIs defined in the Oracle Integration
Repository.
 Perform PLSQL_Execute operations by calling Oracle PL/SQL stored
procedures.
 Query the required E-Business database tables by executing PLSQL
queries via PLSQL_Query.
 Perform LISTEN operations for XML Gateway and Business Event
Messages, and SEND operations for XML Gateway Messages.
 Establish secured database connections using the database connect
descriptor (DBC) file.
Through this connector, you can use a process to interface with different
types of Oracle EBS Interfaces. You can browse the list of available APIs for
a given interface type in your Oracle EBS Integration Repository, and
automatically generate the request and response profiles in XML format to
use in processes and maps.
Supported authentication types
The connector supports any of the following authentication types for
secured access of the REST services:
 HTTP Basic Authentication (Username and Password)
 Token-based Authentication (Username and a valid token, such as
Oracle EBS session ID)
 Oracle Database User and Password credential
 DBC file-based authentication
Supported interface types
The connector supports the following Interface types:
REST_EXECUTE and REST_QUERY operations
 PL/SQL
 Open Interface
 Java (Subtype: Java Bean Services)
Defined via
 Oracle EBS Integrated SOA Gateway - Responsible for allowing
service enablement.
 Oracle Integration Repository - Central Repository storing all
service-related business interfaces. It contains numerous interface
endpoints exposed by applications throughout the entire Oracle EBS
and enables you to easily discover and search interfaces, whether Oracle-native or custom.
The connector uses a Rest Service Locator API in the Integration
Repository to search for all deployed EBS REST APIs. The Rest Service
Locator is a sample Java API that consists of methods to retrieve details
about the deployed Oracle EBS’ REST APIs. For more information, see the
topic Setting up the REST Service Locator.
PLSQL_Execute and PLSQL_Query operations
 Execute PL/SQL stored procedures defined via Oracle Integration
Repository in the Oracle EBS are accessible.
 Basic PL/SQL SELECT queries.
LISTEN and SEND operations
 Listening XML Gateway Messages from ECX_OUTBOUND queue
 Sending XML Gateway Messages to ECX_INBOUND queue
 Listening Business Event Messages from WF_BPEL_Q queue
The connector supports the following operations using the supported
interface types:
 REST_EXECUTE operation invokes Oracle PL/SQL APIs (Native and Custom PL/SQL) deployed in the Oracle Integration Repository, to perform certain actions in Oracle EBS based on the request input (see the sketch after this list).
note
The native and custom APIs must be deployed in the Integration
Repository, to make the APIs available for use in the connector. For more
information, see the Prerequisites section in this topic.
 REST_QUERY operation invokes Open Interface and Java (Java Bean
Services) APIs to retrieve required EBS data based on the user
defined filters. To verify if your Open Interface/JAVA APIs support the
QUERY operation, see the Prerequisites section in this topic.
 PLSQL_EXECUTE operation calls Oracle PL/SQL stored procedures as defined in the Oracle Integration Repository PL/SQL interfaces. PL/SQL procedures are categorized by module, package, and procedure name. The Oracle EBS connector supports stored procedures with the collection types TABLE, ARRAY, and VARRAY as arguments.
 PLSQL_QUERY operation executes SQL SELECT queries against E-Business database tables; you define the returned fields, specify nested filter expressions, and return results sorted by one or more fields.
 LISTEN operation retrieves XML Gateway messages from the ECX_OUTBOUND queue and Business Event messages from the WF_BPEL_Q queue.
 SEND operation enqueues XML Gateway messages into the ECX_INBOUND queue.
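For orientation, a REST_EXECUTE call reduces to an authenticated HTTP request against an Integrated SOA Gateway REST endpoint. A minimal sketch follows; the endpoint path, alias (xx_order), operation (create_order), credentials, and payload are hypothetical placeholders, and the real request body follows the profile generated by the Import wizard.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EbsRestExecuteSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical ISG REST endpoint: the host, deployed alias, and
        // operation name all come from your own Integration Repository
        // deployment, not from this documentation.
        String url = "https://ebs.example.com:4443/webservices/rest/xx_order/create_order/";
        String credentials = Base64.getEncoder().encodeToString(
                "ebs_user:ebs_password".getBytes(StandardCharsets.UTF_8));
        String payload = "{ \"CREATE_ORDER_Input\": { } }"; // body follows the generated request profile
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + credentials)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + "\n" + response.body());
    }
}
```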
Connector configuration
To configure the connector to communicate with Oracle EBS, set up the
following two components:
 Oracle EBS connection — The connection represents a specific
Oracle EBS instance and its associated user authentication method.
 Oracle EBS operation — The operation represents an action against
a specific E-Business Integration scenario such as Execution of
PL/SQL interfaces to execute actions in Oracle EBS as per the
Request Input, Querying the Open Interface and Java APIs to retrieve
required Oracle EBS data, Execution of Oracle PL/SQL stored
procedures, Querying for required Oracle EBS data by executing
PL/SQL queries on the E-Business database tables. With LISTEN and
SEND operations, the connector allows you to integrate with Oracle
E-Business Suite for XML Gateway and Business Event Messages.
The LISTEN operation is for receiving XML Gateway and Business
Event messages from ECX_OUTBOUND Queue and WF_BPEL_Q
Queue respectively. The SEND operation is for sending XML Gateway
messages to ECX_INBOUND queue.
This design provides reusable components containing connection settings
(such as URL, username, password) and operation settings (such as
object, action, data definitions). After building the connection and
operation, set up your connector within a process. When the process is configured properly, Integration can map to and from virtually any system using the Oracle EBS V2 connector to communicate with Oracle EBS.
note
A separate connection is required for REST versus PL/SQL, LISTEN, and SEND. REST operations require only API details, whereas PL/SQL, LISTEN, and SEND require database details. Provide either DB schema details or use the DBC file option for the connection.
Supported editions
The connector supports the following editions:
 REST_Execute and REST_Query — Oracle EBS version 12.2.x or later
(with proper Integrated SOA Gateway and REST services framework
setup).
 PLSQL_Execute and PLSQL_Query — Oracle database 12c, 19c or
earlier (requires the JDBC driver for the Oracle DB version) with
Oracle EBS version 12.1.x and 12.2.x.
 JDBC Driver Version:
o Add OJDBC ojdbc8-19.9.0.0 and UCP ucp-19.9.0.0 version
dependency.
o Create Custom Library for ojdbc8-19.9.0.0 and ucp-
19.9.0.0 jars at the time of executing this process.
Prerequisites
To use the connector, you require the following:
REST_EXECUTE and REST_QUERY operation
 Oracle EBS instance with proper Integrated SOA Gateway and REST
service framework setup.
 REST URL of the Oracle EBS instance.
 Username and password for an Oracle EBS account.
 Security token (e.g. Oracle EBS session ID) provided by the Oracle
EBS server. This is required only if token-based authentication is
used.
 Rest Service Locator API (oracle.apps.fnd.rep.ws.service.EbsRestLocator) deployed in the Oracle Integration Repository. For more information, see the topic Setting up the REST Service Locator.
 Security Grants permission to access or execute the GET service
operation (getRestInterface) contained in the Rest Service Locator
API.
 Alias name of the Rest Service Locator API deployed in your Oracle
Integration Repository.
REST_EXECUTE operation
 Required PL/SQL APIs deployed in the Oracle Integration Repository.
 Security Grants permission to access or execute certain service
operations of the PL/SQL API. For access request, contact your
Oracle EBS Integration Repository Administrator.
note
Refer to the Oracle Help Topic on Using PL/SQL REST Services – Deploying
a PL/SQL REST Web Service.
 Custom PL/SQL APIs packaged and deployed in the Oracle
Integration Repository to make it available for use in the connector
(APPLICABLE ONLY if custom PL/SQL API to be used).
REST_QUERY operation
 Required Open Interface / Java (Java Bean Services) APIs deployed
and configured with GET HTTP method in the Oracle Integration
Repository, for use in QUERY operation.
 Security Grants permission to access or execute certain service
operations of the APIs. For access request, contact your Oracle EBS
Integration Repository Administrator.
PLSQL_EXECUTE, PLSQL_QUERY, LISTEN and SEND operations
 Oracle EBS Database — OJDBC connection string.
 Username and password for an Oracle EBS account.
 JDBC Driver Version:
o Add OJDBC ojdbc8-19.9.0.0 and UCP ucp-19.9.0.0 version
dependency.
o Create Custom Library for ojdbc8-19.9.0.0 and ucp-
19.9.0.0 jars at the time of executing this process.
note
If you have JDBC drivers for Oracle in the general custom directory, those drivers take precedence over the drivers in the connector directory. Therefore, either use the same version in the general directory or keep the Oracle-specific JDBC drivers in the connector's own directory.
Database connection using DBC via ebsdb.dbc file
 ebsdb.dbc file obtained from Oracle EBS environment and converted
as JAR file. For more information, see the topic Creating DBC jar file
"ebsdb.dbc". For information about the process to upload the JAR file
as custom library component, see the topic Working with custom
libraries.
Tracked properties
This connector has no predefined tracked properties.
Oracle E-Business Suite V2 operation
The Oracle EBS V2 operations define how to interact with your Oracle EBS
server and represent a specific action (REST_EXECUTE, REST_QUERY,
PLSQL_EXECUTE PLSQL_QUERY, LISTEN and SEND) to be performed
against one/many Oracle EBS object types.
Use the Import Wizard to provide the specific API to use from the list of
APIs deployed in your Oracle EBS Integration Repository and select the
required API service operation. The wizard automatically generates the
request and response XML profiles for the specified API, representing the
input and output parameters. Create a separate operation component for
each action combination required for your integration.
The Oracle EBS V2 connector supports the following actions:
 REST_Execute — Execution of PL/SQL APIs via REST.
 REST_Query — Execution of Open Interface and Java (Java Bean
Services) APIs via REST.
 PLSQL_Query — Query for required Oracle EBS data by executing
basic SQL SELECT queries on the E-Business database tables.
 PLSQL_Execute — Execution of Oracle PL/SQL stored procedures.
 LISTEN —
o XML Gateway: Dequeues ECXMSG message types from the
ECX_OUTBOUND queue.
o Business Event — Dequeues WF_EVENT_T message types from
the WF_BPEL_Q queue.
 SEND (XML Gateway) — Enqueues ECXMSG message types to the
ECX_INBOUND queue.
Oracle Fusion connector
The Oracle Fusion connector supports connectivity to Oracle Fusion/Oracle
ERP Cloud via Simple Object Access Protocol (SOAP) services and BASIC
authentication protocol.
The Oracle Fusion connector includes two actions, EXECUTE and QUERY.
The SOAP APIs include various types of operations and you can make a
SOAP call by executing EXECUTE or QUERY actions for any of the following
operation types from the connector:
 Via EXECUTE Operation:
o CREATE
o GET
o UPDATE
o DELETE
o MERGE
o CUSTOM
o PROCESS
o PROCESSCS
 Via QUERY Operation:
o FIND OR FINDFILTER
Features
The Oracle Fusion connector supports the following features:
 Discovery of SOAP APIs using the service name from a list of services, or the WSDL URL of a specific service.
 Import of a specific type of API based on the operation type selected in the UI.
 Execution of the imported operation using the generated profiles.
 Sending and receiving requests and responses.
Connector configuration
To configure the Oracle Fusion connector to communicate with SOAP APIs, set up the following components:
 Oracle Fusion connector connection
 Oracle Fusion connector operation
This design provides reusable components which contain connection
settings and operation settings. After building your connection and
operation, set up your connector within a process. When the process is
defined properly, Integration can map to and from virtually any system
using the Oracle Fusion connector to fetch the desired response or data by
authenticating through BASIC protocol.
Prerequisites
 Boomi Atom/Molecule/Cloud (Private or Public)
 All required SOAP services must be configured and active in the
Oracle Fusion environment.
 User must have necessary access permission to execute the
required SOAP services for the selected service operation. For
access request, contact your Oracle Fusion Administrator.
Supported editions
This connector supports APIs with active status; API versions change as enhancements are released. The latest version is in active status, and the previous version is marked as deprecated.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
The Oracle Fusion connection requires the following connection properties:
Fusion Endpoint URL - Refers to the host URL of the Oracle Fusion instance.
Security Type - Refers to the BASIC authentication protocol.
User Name - Refers to the Oracle Fusion user.
Password - Refers to the Oracle Fusion user password.
Connection Types (BASIC)
The Oracle Fusion connector supports the following protocol and
authentication type:
 Protocol
o HTTPS
 Authentication Type
o BASIC
Oracle Fusion operation
The Oracle Fusion connector operation imports and executes SOAP
services.
Create a separate operation component for each action/object
combination that your integration requires.
The Oracle Fusion connector operations use XML format and support the
following actions:
 EXECUTE
 QUERY
Changelog
2023-09 (Current)
 In the connection configuration, Service Name and Custom SOAP
WSDL Relative Path fields are mutually exclusive. Service Name
dropdown is applicable and displayed only when Service Operation
is not CUSTOM. Custom SOAP WSDL Relative Path is applicable and
displayed only when the Service Operation is CUSTOM.
 The connection extension allows overriding of the Fusion endpoint
URL without re-importing the profile.
 There was an issue in the Oracle Fusion Connector Import Wizard
where the Custom SOAP WSDL URL field threw an error when the
absolute path URL was entered. This has been fixed by renaming
the label as Custom SOAP WSDL Relative URL. Users must enter
the relative path URL to successfully execute the Customer Service
operation.
For example, instead of entering the absolute path URL https://fa-etan-dev8-saasfademo1.ds-fa.oraclepdemos.com/xmlpserver/services/PublicReportService?WSDL, enter the relative path URL xmlpserver/services/PublicReportService?WSDL.
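To make the relative-path requirement concrete, this small sketch derives the expected relative URL from the absolute WSDL URL in the example above:

```java
import java.net.URI;

public class WsdlRelativePathSketch {
    public static void main(String[] args) {
        // Absolute WSDL URL from the example above; the wizard expects the
        // portion after the host, without the leading slash.
        URI absolute = URI.create("https://fa-etan-dev8-saasfademo1.ds-fa.oraclepdemos.com"
                + "/xmlpserver/services/PublicReportService?WSDL");
        String relative = absolute.getPath().substring(1)
                + (absolute.getQuery() != null ? "?" + absolute.getQuery() : "");
        System.out.println(relative); // xmlpserver/services/PublicReportService?WSDL
    }
}
```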
Oracle SQL*Loader connector
The Oracle SQL*Loader connector enables you to perform bulk data
uploads to an Oracle database.
This connector performs bulk uploads faster than the Database connector. It uses the Oracle SQL*Loader utility, is optimized to reduce overhead, and can bypass data integrity checks when loading data. The connector can import thousands of documents, each of which may contain thousands of records, so it can handle millions of records overall.
Connector configuration
To configure a connector to communicate with the Oracle SQL*Loader, set
up two components:
 Oracle SQL*Loader connection — The connection represents the
connection credentials to the appropriate Oracle instance.
 Oracle SQL*Loader operation — The operation represents an action
against an Oracle database table.
Supported editions
This connector works with SQL*Loader 9.2 and later editions.
Prerequisites
The Oracle SQL*Loader connector requires the following:
 Oracle SQL*Loader 9.2 or later is installed.
 Oracle SQL*Loader must be installed on the server on which
the Atom is installed.
 Oracle database is installed.
 A user name and password with sufficient permissions to perform
the bulk data upload on the target schema.
 The database has been created, the table and columns have been
defined, and the sequences have been defined.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
The following fields appear on the Connection tab:
Server Name - The name of the server hosting the target database.
Port Number - The port number of the server hosting the target
database. The default port is 1521.
User Name - The user name that connects to the database.
Password - The password for the user name.
Database Name - The name of the database that contains the target
data.
Schema Name - The name of the schema that contains the target table.
Oracle SQL*Loader operation
The Oracle SQL*Loader connector supports only the Create action.
Create
Create is an outbound action that translates data from an XML profile into SQL data types and inserts the data as a new record in the target Oracle table.
Boomi Integration handles standard Oracle data types, such as dates and
times, numbers, and characters. Because the Oracle SQL*Loader
connector expects input data to be in XML format, you might need to map
the input data to XML before sending it to the connector.
The Oracle SQL*Loader supports advanced data types, including:
o BLOB — The Oracle SQL*Loader connector can handle BLOB
data only if it is the last column in the input data and the
database table. For more information,
see http://www.orafaq.com/wiki/BLOB.
o CLOB — For more information,
see http://www.orafaq.com/wiki/CLOB.
PowerSteering – Partner connector
The PowerSteering – Partner connector enables you to integrate with PowerSteering, a project and portfolio management solution that enables business leaders and PMOs to optimize project portfolios, balance capacity against demand, improve financial-based decision making, and align the progress of projects to strategy and objectives across large organizations.
You can use this connector to integrate PowerSteering with other ERP
and financial systems, such as SAP, Oracle, Microsoft Dynamics, Sage,
NetSuite, JDEdwards, Exact, and QuickBooks to allow complete end-to-end
project accounting.
Note: The documentation for this connector is provided by a Boomi
partner. The last update of the documentation is May 2018.
You must sign in to PowerSteering to view these resources:
 PowerSteering API Reference:
[your-powersteering-instance-url]/rest/documentation/v1/main
 PowerSteering Documentation:
[your-powersteering-instance-url]/help/help.jsp
Connector configuration
To configure the connector to communicate with PowerSteering, set up
two components:
 PowerSteering – Partner connection. This connector uses Basic
access authentication <username>:<password> for authentication
and authorization.
 PowerSteering – Partner operation (Get, Query)
This design provides reusable components that contain connection
settings and operation settings. After building your connection and
operation, set up your connector within a process. When you configure the
PowerSteering – Partner connector properly within your process,
Integration can map to and from virtually any system using the
PowerSteering – Partner connector to retrieve data from PowerSteering.
Supported editions
The connector supports the PowerSteering API v1 and product versions
18.1 and greater. For more information, see the linked resources.
Prerequisites
To use the connector and implement a connection to your PowerSteering
account from Integration, have the following:
 The URL for the PowerSteering API service
([your_powersteering_instance_url]/rest/).
 A PowerSteering user with Administrator permissions.
 The PowerSteering API v1.
Supported objects
The PowerSteering – Partner operation actions support the following objects; each object supports either the Get or the Query action:
 Project
 Projects
 Project Children
 Project Children ID
 Project Download
 Project Download All Works
 Organization
 Organizations
 Download Organizations
 Discussion
 Discussions
 Issue
 Issues
 Issue Item Thread
 Timesheet
 Timesheets
 User
 Current User
 Active Users
 Users
 Online Users
 Download Users
 Person Rates
 All Rate Tables
 Rate Codes
 Rate Table By ID
 Rate List by Rate Table
 Rates History
 Action Item By ID
 Action Items By Work ID
 Costs by Project ID
 Actual costs by Project ID
 Estimated costs by Project ID
 Template
 Template By Name
 Templates
 Portfolio
 Portfolio Work Items
 Group Members
 Group Members By Group ID
 Group
 Groups
 Task
 Tasks
 Tasks By Project ID
 Download Tasks
 Metric
 Metric Data
 Metrics
 Metric Template
 Metric Template By Name
 Metric Templates
 Resource Pool
 Resource Pool Managers
 Resource Pool Members
 Resource Pools
 Resource Assignment
 Resource Allocations
 Resource Demands
 Events
 Role
 Roles
 Reports
 Run Report
 Report Status
Tracked properties
This connector has no predefined tracked properties.
Connection tab
The PowerSteering – Partner connector uses Basic access
authentication <username>:<password> for authentication and
authorization. In addition, provide the URL to access the PowerSteering
API service endpoint, and the API Version.
The following fields appear on the Connection tab.
URL - The URL for the PowerSteering REST API service endpoint. Enter the
base path up to /rest. This URL is the same one used to sign in to the
PowerSteering web interface.
Username - The user name for logging in to the PowerSteering web
interface. You must have Administrator permissions.
Password - The password for signing in to the PowerSteering web
interface.
API Version - The PowerSteering REST API version, currently v1.
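For reference, Basic access authentication simply sends username:password Base64-encoded in the HTTP Authorization header, which is what the connection fields above amount to on the wire; a minimal sketch with placeholder credentials:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeaderSketch {
    public static void main(String[] args) {
        String username = "ps_admin";    // hypothetical administrator account
        String password = "ps_password"; // hypothetical password
        // Base64-encode "username:password" and prefix with "Basic ".
        String header = "Basic " + Base64.getEncoder().encodeToString(
                (username + ":" + password).getBytes(StandardCharsets.UTF_8));
        System.out.println(header);
    }
}
```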
PowerSteering – Partner operation
The PowerSteering – Partner operation defines how to interact with your
PowerSteering account and represents a specific action (Get and Query) to
perform against a specific PowerSteering resource.
Create a separate operation component for each action and object
combination that your integration requires.
The PowerSteering – Partner operation uses the XML format and supports
the following actions:
 Inbound: Get, Query
 Outbound: None
Pricefx - Partner connector
Together, Boomi Integration and the Pricefx - Partner connector enable
you to easily connect any customer data or application to Pricefx, which
enables the creation of a full suite of pricing management solutions.
The connector uses the REST API to communicate to Pricefx and to
manipulate the data and execute actions. The connector provides the
option to fetch the data by building a JSON request message manually, or
dynamically build the filter by query parameters. The connector also
provides the option to update or insert data to Pricefx by building a JSON
batch upsert or bulk data load request message, or by uploading an Excel
file via other Boomi connectors such as FTP connector or Disk connector,
etc. The connector allows you to implement data integration between your
applications more efficiently and quickly. You can work with various
objects exposed in the connector.
Connector configuration
To configure the connector to communicate with Pricefx, set up two
components:
 Pricefx - Partner connection
 Pricefx - Partner operation
This design provides reusable components which contain connection settings and operation settings. After building your connection and operation, set up your connector within the process. When the process is defined properly, Integration can map to and from virtually any system using the Pricefx connector to create, update, retrieve, and delete records from Pricefx.
Supported editions
Pricefx application version release “Collins” 5.0.0 and higher.
Prerequisites
To use the Pricefx connector, you will require the following:
 Pricefx partition - User account details like username and password.
 Pricefx Key - If you would like to use Pricefx API protocol version 2.
 Appropriate access rights to the Pricefx resource. You can verify your rights by browsing the data in Pricefx; if you can browse the data there, you can retrieve it through the connector.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
URL - The URL you use to access Pricefx in a browser, without the path. For example, if the URL is https://www.pricefx.eu/app/, enter just https://www.pricefx.eu.
Username - The username you use to log in to Pricefx in a browser.
Password - The password you use to log in to Pricefx in a browser.
Partition - The name of the partition you use to log in to Pricefx in a browser.
Use JWT Token - Enables the connector to use a JWT token instead of the username and password to invoke the Pricefx API. Using it is recommended for better performance on the Pricefx server. The token can be retrieved with the Get Token operation and expires in about 30 minutes. When using JWT token authentication, you must specify the token value. You can set the JWT token value using the JWT Token document property, which is required only if the Use JWT Token option is selected in the Pricefx - Partner connection.
Use OAuth Token in process - Enables the connector to access the Pricefx API with API protocol version 2. You must have your Pricefx key before using this option. The token can be retrieved with the Get Token operation and expires in about 60 minutes. When using OAuth token authentication, you must specify the token value. You can set the OAuth token value using the OAuth Access Token document property, which is required only if the Use OAuth Token in process option is selected in the Pricefx - Partner connection. You can choose either the JWT token or the OAuth token in the connection, but not both.
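A small sketch of the token choice described above; the mutual exclusivity and expiry behavior come from the field descriptions, while the bearer-style Authorization header is an assumption for illustration only.

```java
public class PricefxTokenChoiceSketch {
    // The two token options on the connection are mutually exclusive.
    // Both token values come from the connector's "Get Token" operation
    // and are short-lived (roughly 30 minutes for JWT, 60 for OAuth),
    // so a process should fetch a fresh token rather than reuse an old one.
    static String authorizationHeader(String jwtToken, String oauthToken) {
        if (jwtToken != null && oauthToken != null) {
            throw new IllegalStateException("Use either the JWT or the OAuth token, not both");
        }
        String token = (jwtToken != null) ? jwtToken : oauthToken;
        if (token == null) {
            throw new IllegalStateException("No token set; run the Get Token operation first");
        }
        return "Bearer " + token; // bearer-style header is an assumption here
    }

    public static void main(String[] args) {
        System.out.println(authorizationHeader("example-jwt-token", null));
    }
}
```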
Pricefx - Partner operation
The Pricefx - Partner operation defines how to interact with your Pricefx
partition and represents a specific action to perform against a Pricefx
object.
Create a separate operation component for each action/object
combination that your integration requires. All Pricefx - Partner connector
operations use JSON format and support the following actions:
 Query — Retrieve records from the Pricefx Partition based on a
filter.
 User Access Operation — Provide actions relevant to
authentication and authorization.
 PA - Truncate Data — Truncate PA data from Pricefx Partition
based on a filter in the request message.
 PA - Data Upload — Upload PA data in Excel file format to the
Pricefx Partition.
 PA - Data Copy — Copy data from master data to PA tables.
 PA - Dataload Calculation — Execute a calculation dataload in
Pricefx partition.
 Refresh Datamart — Refresh data from a specific Pricefx
Datamart.
 Get — Retrieve a specific record from Pricefx Partition based on its
primary key.
 Metadata — Retrieve metadata of a specific table from Pricefx
Partition.
 Execute Formula — Execute a logic in Pricefx partition.
 Quote Operations – Perform various actions on a Pricefx Quote.
 Product Image Operations – Perform various actions to a Pricefx
Product Image.
 Admin Operations – Perform various administrative actions to a
Pricefx Partition.
 Fetch — Retrieve records from Pricefx Partition based on a filter in
the request message.
 Delete — Delete existing records in Pricefx Partition based on a
filter in the request message.
 Delete by key — Delete existing record in Pricefx partition based
on the business key in the request message.
 Upsert — Create a new record or update an existing record in
Pricefx Partition.
 Update — Update an existing record in Pricefx Partition.
 Bulk Data Load — Upload data to Pricefx Partition.
 Create — Create a new record in Pricefx Partition.
For more information about creating a Pricefx operation, see the
topic Creating an Operation.
Options tab
Click Import Operation, then use the Import wizard to select the object
to integrate. When you configure an action, the following fields appear on
the Options tab.
Object - An object defines what you want to integrate or the action to
execute, which you select in the Import Wizard.
Object - Displays the object type that you selected in the Import
Operation wizard.
Tracking Direction - Select the document tracking direction for the
operation, either Input Documents or Output Documents. This setting
enables you to choose which document appears in Process
Reporting. Start steps always track output documents regardless of your
selection.
QuickBooks connector
The QuickBooks connector enables you to move data into and out of
QuickBooks financial software.
The connector connects directly to the QuickBooks company data file
(*.QBW) or online account. The connector enables you to query, add,
modify, and delete various types of records, including accounts,
customers, sales orders, and invoices.
The QuickBooks connector creates a folder in the ${atom_installation_directory}/work/quickbooks_host directory where it writes a temporary registry file. This file allows the connector to make a single modification in the Windows registry to set up the connection.
Supported editions and versions
Boomi supports the following editions and versions:
 Local QuickBooks Desktop Edition
o Pro, Premier, and Enterprise Editions
o General Business, Contractor, Manufacturing and Wholesale,
Nonprofit, Professional Services, and Retail editions
o Global, US, Canada, and UK Editions
o Connectivity to the 2002 edition and later
o QuickBooks 2002 (R1) and (R2), 2003 (prior to R7 and R7 and
above), 2004, 2005 (prior to R5 and R5 and above), 2006,
2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014 and later
(2017)
 QuickBooks Online Edition (This legacy edition is not available to
new users after the May 2014 release.)
o Simple Start and Plus Editions
o QB SDK versions 2.0–8.0 and 10.0–13.0. This corresponds to
QuickBooks 2002–2014.
 QuickBooks Online REST API
o Intuit’s QuickBooks Online edition supports V3 API version 7.
Unsupported editions
Boomi does not support the following editions:
 QuickBooks for Mac (all editions)
 QuickBooks Desktop Simple Start Edition
 QuickBooks Point of Sale
 QuickBooks Desktop Australia Editions
Prerequisites
General prerequisites:
 You must have QuickBooks administrator rights.
Prerequisites for Desktop editions:
 You must have a basic runtime deployed to the machine where you
install a QuickBooks client application.
 You must configure QuickBooks to authorize Integration to connect
to your company file. You can find instructions in the connection
configuration.
 The basic runtime must be a 32-bit runtime if you are using Windows. The Windows COM libraries used to establish the connection are 32-bit only.
 The basic runtime must run in desktop mode, rather than as a
Windows service. In addition, Windows users must always sign into
the machine.
 The Windows user who is running the basic runtime must have
network access with permissions to read and write the company
data file (*.QBW file) if the data file resides on a different machine
than the client.
 Allow the QuickBooks company file to use integrated applications by
going into the QuickBooks company file
(Edit > Preferences > Integrated Applications > Company
Preferences) and ensuring that Don't allow any applications to
access this company file is clear.
Steps to set up Desktop editions in a distributed or multiple-user
environment. If you have multiple users accessing a single QuickBooks
company file in a shared location or server:
1. Authorize the basic runtime to connect as the Admin user.
2. Install a QuickBooks client on the central server, even if not used by
users. The basic runtime needs certain libraries contained within the
client installation.
3. Sign into QuickBooks on the same machine where you install
the Runtime; it does the one-time authorization for the runtime to
connect to QuickBooks.
4. Create a separate Windows user to run the basic runtime. The basic
runtime must run as a desktop application under a specific Windows
user, which can be a Local user.
5. Ensure that the Windows user is always signed into the server. You
can lock the user session but not sign out. Signing out of the server
causes the basic runtime to stop running. If the user signs onto the
server twice, it causes two basic runtime instances to run. It can
result in conflicts as both basic runtimes try to access QuickBooks
simultaneously.
6. Prevent others from signing into QuickBooks as the Admin user,
which can cause conflicts.
7. Have users log into QuickBooks in multiple-user mode for normal
business. If a user logs in under single-user mode, the basic
runtime is prevented from connecting to the company file. Some
administrative functions can be performed only in single-user mode
so be aware that the integration stops during those times. When the
administrative work is complete, switch back to multiple-user mode.
The integration picks up where it left off.
8. Close the QuickBooks client on the server when users switch to
single-user mode.
Prerequisites for QuickBooks Online Edition:
 You must have a QuickBooks Online account.
 You must obtain an Application Connection Ticket to authorize
Integration to connect to your account. See the topic about
generating a QuickBooks Application Connection Ticket.
Prerequisites for the QuickBooks Online REST API 2013 Edition:
 You must have a QuickBooks Online account.
 You must create an Intuit developer account to access the
QuickBooks Online REST API’s service endpoint and authorize
Integration as a QuickBooks REST API application. See the help
topics Intuit Partner Platform, Authorizing Integration with
QuickBooks, and Adding a QuickBooks connection that uses the
Online REST API.
Tracked properties
This connector has no predefined tracked properties.
Settings tab
QuickBooks Edition - Choose the type of edition to connect to:
 Local QuickBooks Desktop Edition — If you are using this edition,
see the topic Authorizing Boomi Integration to connect to the
Desktop edition.
 QuickBooks Online Edition — If you are using this edition, see the
topic Generating an Application Connection Ticket.
 QuickBooks Online REST API 2013 — If you are using this edition,
see the topics Authorizing Boomi Integration with QuickBooks and
Adding a QuickBooks Online REST API connection.
Application Name (Desktop)
Used by QuickBooks to identify the client attempting to connect — that is,
Boomi Integration. If you do not specify the application, QuickBooks adds
one at run-time.
Company File Name (Full Path) (Desktop)
The full path and name of the company file (for example, C:\Program Files\
Intuit\My Company File.QBW). You can easily find the path from within
QuickBooks. Open a company file, press F2, then copy and paste the
entire value from Company File > Location.
If not set, the connector attempts to connect to the Open Session. If
QuickBooks is open, the connector connects to the company file with
which you are working. If QuickBooks is not open, you get a connection
error.
File Mode (Desktop)
Sets the way Boomi Integration opens the company file. Select the same
mode that your QuickBooks client uses:
 Single-User Mode
 Multi-User Mode
 Auto Detect (recommended)
The following table provides information about file modes:

Mode | Who started QuickBooks | Who can obtain access
Single-user | Integrated Application user | All other integrated applications = access; QuickBooks users on the same machine = no access
Multi-user | Integrated Application user | All other integrated applications = access; QuickBooks users on other machines = access
Single-user | QuickBooks user | QuickBooks user signed in; only one integrated application = access
Multi-user | QuickBooks user | QuickBooks users = access; integrated applications = access
Application Connection Ticket (Online)
QuickBooks generates an authentication key to permit Boomi Integration
to access your account. Click Get Connection Ticket. See the
instructions for generating the ticket.
Consumer Key (Online REST API)
A value used by the consumer to identify yourself to Intuit. Obtain this key
from your Intuit developer account on the Intuit Partner Platform site.
Consumer Secret (Online REST API)
A secret used by the consumer to establish ownership of the consumer
key. Obtain this secret from your Intuit developer account on the Intuit
Partner Platform site.
Generate (Online REST API)
Used to generate an access token, token secret, and token expiration
date. The Generate OAuth Access Tokens dialog opens with the
following fields:
 Link — A link to a page from Intuit where you can provide sign-in
credentials.
 PIN — A field to enter the PIN from Intuit.
Access Token (Online REST API)
A value used by the consumer to gain access to the protected resources
on behalf of the user instead of using the user's Intuit credentials.
Token Secret (Online REST API)
A secret used by the consumer to establish ownership of a given token.
Token Expiration Date (Online REST API)
The expiration date for the access token.
Company ID (Online REST API)
Your QuickBooks Online company ID. Intuit assigns the ID when you get
your PIN (sometimes called the Realm ID).
QuickBooks operation
The QuickBooks operation defines how to interact with the company file.
The operation represents a specific action to perform against a specific
QuickBooks record type or object, such as Purchase Order, Item Receipt,
Customer, and so on. Not all actions are available for every object. Create
a separate operation component for each action and object combination
needed for your integration.
The QuickBooks operation supports the following actions:
 Inbound: Get (Get/Read, Query)
The operation requires unique parameters at runtime to request data in all
inbound operations.
See the Parameter values topic to read about statically or dynamically
defining values in a process at runtime.
 Outbound: Send (Create or Add, Delete, Update or Modify)
For all outbound operations, send request XML data to the connector that corresponds to the defined XML profile. Usually, request documents are the destination output of a map; however, you can set parameters on the outbound connector's Parameters tab in the process to supply static or dynamic request values.
Adding a QuickBooks Online REST API connection
By adding a QuickBooks Online REST API connection, you authorize Boomi
Integration to access QuickBooks Online data.
Before you begin
To complete this task, you must have an authorized Boomi Integration as
a QuickBooks REST API Application and your OAuth Consumer Key and
OAuth Consumer Secret.
About this task
QuickBooks Online REST API users need to authorize (using OAuth)
Integration’s access to their QuickBooks Online data. You can do this by
creating a QuickBooks connection that uses the Online REST API.
Procedure
1. In the Boomi Integration QuickBooks connection component, select
the QuickBooks Online REST API 2013 edition.
2. Enter your OAuth Consumer Key.
3. In the Consumer Secret field, select Click to Set, enter your
Consumer Secret, and click OK.
4. In the Access Token field, click Generate.
The Generate OAuth Access Tokens dialog opens. The page referenced by
the link in this dialog opens automatically in a new browser window.
5. Optional: If the browser does not allow pop-ups, open the link
manually.
6. On the external application page, type your sign-in credentials.
7. Choose the QuickBooks company (boomi.com), and
click Authorize.
You receive a PIN and Company ID. Make a note of both because you need
them in the following steps.
8. Copy the PIN from the external application into the PIN field in
the Generate OAuth Access Tokens dialog.
9. Click Next.
The Generate OAuth Access Tokens dialog closes and populates
the Access Token, Token Secret, and Token Expiration Date fields.
10. Enter the Company ID.
11. Click Save.
QuickBooks Online connector
The QuickBooks Online connector enables you to move data into and out
of the QuickBooks Online financial software.
The connector connects directly to your Online account so you can create,
retrieve, return, update, and delete various types of objects such as
accounts, customers, departments, purchase orders, sales orders,
invoices, and more. You can also query the Trial Balance Report.
Connector configuration
To configure the connector to communicate with QuickBooks Online, set
up two components:
 QuickBooks connection. The connector uses the OAuth 2.0 protocol
for authentication and authorization.
 QuickBooks operation.
Prerequisites
To use the connector and implement a connection to your QuickBooks
Online account from Integration, have the following:
 A local Runtime or a runtime cloud.
 The base URL for the QuickBooks Online API service.
 The QuickBooks Online company ID (also called the Realm ID)
assigned by Intuit.
 QuickBooks administrator rights.
 A QuickBooks Online account.
 An Intuit developer account to access the QuickBooks Online REST
API’s service endpoint. For more information about creating an Intuit
developer account, see this topic.
Supported objects
The following objects are supported by one or more of the QuickBooks Online operation actions (Create, Get, Delete, Update, Query/Advanced Query, and Report):
 Account
 Attachable
 Bill
 Bill Payment
 Budget
 Class
 Company
 Company Currency
 Company Info
 Credit Memo
 Customer
 Department
 Deposit
 Employee
 Estimate
 Exchange Rate
 Invoice
 Item
 Journal Entry
 Journal Code
 Payment
 Payment Method
 Preferences
 Purchase
 Purchase Order
 Refund Receipt
 Sales Receipt
 Tax Agency
 Tax Code
 Tax Rate
 Tax Return
 Term
 Time Activity
 Transfer
 TrialBalance
 Vendor
 Vendor Credit
Tracked properties
This connector has no predefined tracked properties.
Connection tab
The QuickBooks Online connector uses the OAuth 2.0 protocol for
authentication and authorization. For more information, see the linked
Authentication and Authorization topic.
The following fields appear on the Connection tab.
Base URL - The URL for the QuickBooks Online API service endpoint. For
example, https://quickbooks.api.intuit.com.
Company ID - Your QuickBooks Online company ID. Intuit assigns this ID
to a company when a QuickBooks Online user creates a company and you
get your PIN. This ID is also known as the Realm ID.
OAuth 2.0 tab
note
For OAuth 2.0 authentication, you may need to specify in the external
application the Boomi callback or redirect URL. The callback URL
is https://platform.boomi.com/account/<your_account_ID>/oauth2/callbac
k where <your_account_ID> is your Integration account ID. You can also
find your account ID in the Integration platform at Settings > Account
Information and Setup > Account Information.
Client ID - You obtain the client ID from QuickBooks Online. You can get
your client ID from the Keys tab on the application profile created in the
developer portal.
Client Secret - The client secret obtained from QuickBooks Online.
Click Click to Set to open the Password window, enter or paste your
client secret, and click Apply. Enable pop-up windows in your browser.
You can get your Client Secret from the Keys tab on the application profile
created in the developer portal.
Access Token - The encrypted access token retrieved from QuickBooks
Online used to access protected resources. Click Generate. When a new
window opens asking you to confirm offline access, click Allow. A second
message indicates it received the Authorization code and generated it
successfully.
Reset - Click to clear the access token.
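Once the access token is generated, each API request presents it as a bearer credential. A minimal sketch of an equivalent raw read of the CompanyInfo object follows; the company ID and token are placeholders, and the v3 resource path is the standard QuickBooks Online API shape rather than anything specific to this connector.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QboGetSketch {
    public static void main(String[] args) throws Exception {
        // Base URL and company ID as configured on the Connection tab;
        // the access token is the OAuth 2.0 token generated above.
        String baseUrl = "https://quickbooks.api.intuit.com";
        String companyId = "1234567890";                        // hypothetical Realm ID
        String accessToken = System.getenv("QBO_ACCESS_TOKEN"); // placeholder source
        String url = baseUrl + "/v3/company/" + companyId + "/companyinfo/" + companyId;
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + accessToken)
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + "\n" + response.body());
    }
}
```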
QuickBooks Online operation
The QuickBooks Online operation defines how to interact with your
QuickBooks Online account and represents a specific action to perform
against a specific QuickBooks Online object.
Create a separate operation component for each action and object
combination that your integration requires.
The QuickBooks Online operations use XML format and support the
following actions:
 Inbound: Get, Query, Advanced Query.
 Outbound: Create, Delete, Update, Report.
Radius by Campus Management connector
The Radius by Campus Management connector (formerly, Hobsons
Radius) enables Boomi Integration users to integrate with Radius by
Campus Management, a best-in-class admissions and enrollment
management CRM.
Connector configuration
To configure a connector to communicate with Radius by Campus
Management, set up two components:
 Radius by Campus Management connection
 Radius by Campus Management operation
Connection tab
URL - The base URL for the Radius by Campus Management service. For
example: https://useastapi.hobsonsradius.com.
User Name - User name for logging into the Radius by Campus
Management service.
Password - The password for your Radius by Campus Management
service.
Radius by Campus Management operation
The Radius by Campus Management operation (formerly, Hobsons Radius)
defines how to interact with the Radius by Campus Management service
and represents a specific action (Query, Create, etc.) to perform against a
specific Radius by Campus Management service and object type
(Appointment, Contact, Invoice, etc.).
You must create a separate operation component for each object required
for your integration.
The operation supports the following actions:
 Inbound
o Get
o Query — Returns zero to many documents from a single Query
request based on zero or more filters. After selecting the
Query connector action and using the Import Wizard, you can
select fields and add query filters. The bottom half of the
Operation component page contains tabs on which you can
configure these options. See the linked topic about the
Operation component’s query definition tabs.
o Execute — Builds a single query filter by combining with the
OR operator the AND filters created from individual input
documents. Each input document is treated as a group of
attributes that are combined using the AND operator. You can
specify a relation operator for each attribute using one of
these values: contains (default), equals, startswith, endswith.
(See the sketch after this list.)
 Outbound
o Create
o Delete
o Update
o Upsert
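The following minimal Python sketch illustrates the Execute filter
semantics described above. It is not the connector's API; the field names
and the expression format are hypothetical.

def build_filter(documents):
    groups = []
    for doc in documents:
        # Each attribute defaults to the "contains" relation operator.
        clauses = [f"{field} {op or 'contains'} '{value}'"
                   for field, (op, value) in doc.items()]
        groups.append("(" + " AND ".join(clauses) + ")")
    return " OR ".join(groups)

# Two input documents become two AND groups joined by OR:
docs = [{"FirstName": ("startswith", "Jo"), "City": (None, "Boston")},
        {"Email": ("endswith", "@example.edu")}]
print(build_filter(docs))
# (FirstName startswith 'Jo' AND City contains 'Boston') OR (Email endswith '@example.edu')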

RightNow – Partner connector


info
Partner connectors developed by our Tech Partners and published on the
Boomi platform provide seamless integration solutions for customers.
Customers will initiate support for these connectors through the Boomi
Support Portal, where tickets will be triaged to the respective partner. The
partner or their designated third party is responsible for ongoing support,
including troubleshooting, bug fixes, and resolving related issues.
The RightNow – Partner connector enables you to move data into and out
of the Oracle RightNow CX CRM.
Important
As of the February 2021 release, the RightNow – Partner connector is no
longer available for use by customers who have not previously used it. If
you utilized this connector prior to that date, you can continue to do so.
Unsupported editions
The RightNow – Partner connector does not support RightNow versions from
November 2014 onward.
Prerequisites
To implement a connection to your RightNow account from Integration,
complete the following:
 Understand from a business process perspective which interface(s)
to integrate. For example, you may want to retrieve Incident records
from RightNow and then integrate them with a Salesforce custom
object.
 Have ready your user name, password, and host and site names to
access the RightNow Connect web service.
 Confirm that your RightNow account has privileges to connect to
the RightNow Connect API.
 Confirm that the proper Boomi IP range is added in Administrative
Items > System Configuration > Settings > (Select Appropriate
Interface) > Common > Security, where the appropriate interface is
either SEC_VALID_ADMIN_HOSTS or SEC_VALID_INTEG_HOSTS.
Contact your Boomi representative for the correct IP range to add.
Tracked properties
This connector has no predefined tracked properties.

Connection tab
URL - The URL for the RightNow service:
http://<rightnow_host_name>/cgi-bin/<site_name>.cfg/php/xml_api/soap_api.php
Configure the RightNow host name and site name (in angle brackets) to
connect to the proper instance.
User - The user name for logging into the RightNow service.
Password - The password for logging into the RightNow service.
To understand how passwords are securely stored in a connection
component, see the Password encryption security help topic.
Partner operation
The RightNow – Partner operation defines how to interact with your
RightNow CRM account.
For all inbound operations, unique parameters are required at run-time to
request data. Outbound operations send request XML data to the
connector that corresponds to the defined XML profile. Usually, request
documents are the destination output of a map. Review the topic
Parameter values to understand how to define these values statically or
dynamically within a process for run-time.

RutterKey Procore - Partner connector


info
Partner connectors developed by our Tech Partners and published on the
Boomi platform provide seamless integration solutions for customers.
Customers will initiate support for these connectors through the Boomi
Support Portal, where tickets will be triaged to the respective partner. The
partner or their designated third party is responsible for ongoing support,
including troubleshooting, bug fixes, and resolving related issues.
The RutterKey Procore - Partner connector simplifies communication with
Procore to create, read, update, or delete data through Procore's REST APIs.
Using this connector, you can integrate Procore with other software
applications and services wherever they reside. Utilizing the connector,
you can browse Procore API operations, set up filters, and perform other
common activities.
Connector Configuration
To configure a connector to communicate with RutterKey Procore - Partner
connector, set up two components:
 RutterKey Procore - Partner connection
 RutterKey Procore - Partner operation

Supported Environments
The connection supports the Procore Production and Sandbox environments.
Prerequisites
To implement a connection to your Procore account from Integration and
use the connector operations, do the following:
 Have the System API User configured in Procore
 Have a runtime cloud
 Have knowledge of the Procore Object Model
Tracked Properties
This connector has no predefined tracked properties.
Connection tab
To start using the RutterKey Procore - Partner Connector:
Create a connection in Boomi Integration and select the Procore
Connector in the Connector field.
Next, configure your connection. The Connector provides most of the
information you need.
Then select which Procore server to connect to - there's an option for a
custom server for advanced debugging. The Procore Company ID is an
optional field that pre-populates the Company ID field in all created
Connector Operations using this connection.
Connection Types
The connection only supports OAuth 2.0 authentication which can be
configured in the OAuth 2.0 tab.
RutterKey Procore - Partner operation
The RutterKey Procore - Partner connector imports operations from
Procore's 1,100+ API operations and provides dynamic documentation
within the connector relevant to the selected operation.
Create a separate action and operation component for each
action/object combination required for your integration.
Import an Operation
Once a connection has been created, create a Connector Operation using
the Procore connector.
 Click the Import button on the top-right of the window.
 Select the Runtime and Connection.
You can also filter the results here; filtering supports wildcards and is
case-insensitive. The connector formats all operation labels in the same
way for easy filtering and uniformity, with the following format: [Endpoint]
(v[API Version])
 Once the runtime, Connection, and any required filter have been
set, click Next and the Procore connector will return all the
operations from Procore that match the filter.
 Select the required operation and click Create.
The connector creates the operation, the required request and response
maps, and all required fields, along with the associated Procore API
documentation.

Saba TalentSpace connector


Use the Saba TalentSpace connector to move data into and out of Saba
TalentSpace and to integrate any on-premise applications and legacy
systems.
For example, you can integrate Saba TalentSpace with other On Demand
or SaaS-based applications, such as Salesforce or NetSuite. The Saba
TalentSpace connector connects directly to Saba TalentSpace and browses
the interfaces in real time. Integrating with Saba TalentSpace consists of
SOAP web service calls over an HTTPS connection. The technical details of
this connection, such as request handling, error handling, and the HTTPS
connection, are abstracted to simplify the implementation.
Connector configuration
To configure a connector to communicate with Saba TalentSpace, set up
two components:
 Saba TalentSpace connection
 Saba TalentSpace operation
This approach creates reusable components that contain connection
settings (such as URL, User, Password) and operation settings (such as
Object, Action, data definitions). After building your connection and
operation, set up your connector within a process. When you properly
define the Saba TalentSpace connector within a process, Integration can
map to and from virtually any system using the connector to retrieve data
from or send data to Saba TalentSpace.
Supported editions
This connector supports Saba TalentSpace version 11 and above.
Tracked properties
This connector has no tracked properties.

Connection tab
WSDL URL - The WSDL for Saba TalentSpace SOAP service.
SOAP Endpoint URL - The SOAP service URL for Saba TalentSpace. If you
leave this field blank, the endpoint URL from the WSDL is used.
Username - The username for the Saba TalentSpace account.
Password - The password for the Saba TalentSpace account.
Client SSL Certificate (Optional) - Used to select, edit, or remove a
private certificate for SSL authentication. Your private certificate is used to
verify your identity when sending requests to the endpoint. To set up a
client authentication, you must manually export your public certificate and
provide it to your endpoint owner.
Trust SSL Server Certificate (Optional) - Used to select, edit, or
remove a public certificate component for SSL server authentication.
Contact the owner of the endpoint to obtain its public certificate and then
import it into a certificate component.

Saba TalentSpace operation


The Saba TalentSpace operation defines how to interact with your Saba
TalentSpace and represents a specific action (Create, Query or Update) to
perform against a Saba TalentSpace object type.
The Saba TalentSpace operation supports the following actions:
 Inbound
o Query
 Outbound
o Create
o Update
o Delete
o Execute
You can avoid operation errors by being aware of the following:
 Query user operations support only simple expressions.
 Query operations can return a SOAP “no user record found” error. To
suppress this error, select Return Application Error Responses.
Sage 50/Peachtree connector
The Sage 50/Peachtree connector lets you move data into and out of the
Sage 50 financial application (formerly, “Peachtree Accounting”).
When connected directly to the Sage 50 data files, you can query, add,
modify, and delete various types of records, such as Address, PayInfo,
EmployerFields, Custom Fields, etc.
Connector configuration
To configure a connector to communicate with Sage 50, set up two
components:
 Sage 50/Peachtree connection
 Sage 50/Peachtree operation
This design provides reusable components, which contain connection
settings such as user name, password, etc. After you build a connection
and connector operation, set up your connector within a process. When
your connector is configured properly within a process, Integration can
map to and from virtually any system using the Sage 50/Peachtree
connector to retrieve data from or send data to the Sage 50 application.
Supported editions
The following editions of Sage 50 are supported:
 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016,
2018, 2022
 Single and multiple user installations
All editions (Quantum, Premium, etc.) are supported for a given year of
Sage 50.
Prerequisites
The Sage 50/Peachtree connector is supported on Windows only. The
connector runs only on local Basic runtimes — not in runtime clouds. To
implement a connection to Sage 50 from Integration, do the following:
 Install a local basic runtime on a server running Sage 50. In
client/server installations, install the basic runtime on a server. Also
install a copy of the Sage 50 client application on the server.
 Ensure the basic runtime is a 32-bit basic runtime. The Windows COM
libraries used to establish the connection support only 32-bit runtimes.
 Run the basic runtime as a desktop application rather than as a
Windows service. After installing the basic runtime, stop
the Runtime’s Windows service and disable it in the Windows
Services Management Console. The Windows user under whom
the basic runtime is running must always be logged in. For more
information, see To Start or Configure a basic runtime to Run as a
Desktop Application.
 Dedicate a desktop user session to Integration. Even though the
Sage 50 session stays open, do not use it.
 Use the identical user name and password in the connector settings
as in the Sage 50 product. Also use the same Sage 50 company file in
the connection settings to avoid access conflicts between the Sage 50
product and the connector.
Sage 50 is a desktop client-only application which does not support
multiple concurrent sessions. To integrate with multiple company files in
one Sage 50 installation, stop and then restart the basic runtime to initiate
the connection to the new company file.
Salesforce Marketing Cloud connector
The Salesforce Marketing Cloud (SFMC) connector enables you to interact
with the Salesforce Marketing Cloud platform through the Marketing Cloud APIs.
Salesforce Marketing Cloud (SFMC) is a marketing automation platform
that offers unique features to help users increase customer
engagement. The SFMC connector allows you to connect and move data
to and from the Marketing Cloud platform using the SFMC APIs. You can use
this connector to work with SFMC data, for example managing subscribers
and lists, importing subscribers, and scheduling triggered emails. The
connector provides a secure way of accessing the SFMC platform via OAuth
2.0 user authentication.
note
The connector is supported in basic runtime, runtime cluster, and Cloud
(Public and Private).
Connector configuration
To configure the connector to communicate with SFMC, set up the
following components:
 Salesforce Marketing Cloud connection
 Salesforce Marketing Cloud operation
This design provides reusable components which contain connection
settings and operation settings. After building your connection and
operation, set up your connector within a process. When the process is
defined properly, Integration can map to and from virtually any system
using the SFMC connector to the SFMC platform.
Prerequisites
The SFMC connector requires the following:
 a basic runtime deployed in either a hosted environment or on a
local machine.
 SFMC tenant account details — Client ID and Password.
 Web Services Description Language (WSDL) URL — Locate your
tenant’s SOAP Authentication Base URIs.
Format: https://YOUR_SUBDOMAIN.soap.marketingcloudapis.com/etframework.wsdl
 SOAP API URL — Locate your tenant’s endpoints, which contain your
subdomain, in the Marketing Cloud. Each tenant has a unique
subdomain, and when your subdomain is appended to the Marketing
Cloud APIs, it creates a unique tenant-specific endpoint for making API
requests. Example: https://YOUR_SUBDOMAIN.soap.marketingcloudapis.com/Service.asmx
For details about the Subdomain, refer to the Your Subdomain and Your
Tenant's Endpoints topic.
For more details on WSDL and SOAP API URL, refer to the WSDL and
Endpoint Links for the SOAP API topic.
Tracked properties
This connector has no predefined tracked properties.
Salesforce Marketing Cloud operation
Salesforce Marketing Cloud connector enables you to connect and interact
with the Salesforce Marketing Cloud (SFMC) platform through the
Marketing Cloud APIs.
Create a separate operation component for each action/object
combination that your integration requires.
The Salesforce Marketing Cloud connector's operations use SOAP requests
(except Retrieve and Delete). It supports the following actions:
 Create Subscriber — Create a new subscriber in the SFMC platform.
 Retrieve Subscriber — Fetches details of the existing subscriber(s) in
the SFMC platform.
 Update Subscriber — Update details of an existing subscriber in the
SFMC using the unique Subscriber Key input.
 Delete Subscriber — Delete subscriber(s) in the SFMC platform using
the unique Subscriber Key input.
 Create List — Create a new subscriber list in the SFMC platform.
 Retrieve List — Retrieve details of one or more subscriber lists in
the SFMC platform.
 Update List — Update an existing subscriber list in the SFMC
platform.
 Delete List — Delete one or more subscriber lists in the SFMC
platform using the unique List ID input.
 Send Email — Send a triggered email to any subscriber within a
subscriber list, in response to a subscriber action. This operation
requires the External Key of the email message and the Subscriber
Key used as a unique identifier in the SFMC platform.
 Import Subscriber — Import a list of subscribers onto the SFMC
platform via SFTP server.
SAP Business ByDesign connector
The SAP Business ByDesign connector enables you to connect to specific
solutions of the SAP Cloud platform.
The SAP Cloud platform solutions include SAP Business ByDesign
(formerly, SAP Cloud for Customer), including the SAP Cloud for Sales, SAP
Cloud for Service, and SAP Cloud for Social Engagement solutions. You can
get and send data by connecting to the services to which your installation
has access. Each service corresponds to an executable operation for a
particular object or to a custom service. This connector uses SAP Business
ByDesign’s WSIL registry to determine the URL for each service. All
services are protected by Basic authentication.
For more information, see the documentation provided for each SAP
Business ByDesign web service. To locate this documentation:
1. In SAP Business ByDesign, navigate to the Communication
Arrangement Overview.
2. Select a web service. In the lower table, select the row containing
the name of the web service.
3. Click the Display Documentation button.
Connector configuration
To configure a connector to communicate with SAP Business ByDesign, set
up two components:
 SAP Business ByDesign connection
 SAP Business ByDesign operation
This design provides reusable components, which contain connection
settings such as user name and password. After building your connection
and connector operation, set up your connector within a process. When
the SAP connector is defined properly within your process, Integration can
map to and from virtually any system using the connector to retrieve data
from or send data to SAP Business ByDesign.
Prerequisites
To implement a connection to your SAP Business ByDesign account from
Integration, you need your SAP user name and password.
Tracked properties
This connector has no predefined tracked properties.

Connection tab
SAP Host Name - Host name of the server hosting the SAP Business
ByDesign WSIL.
Username - User name used to connect to the SAP Business ByDesign
web services.
Password - Password used to connect to the SAP Business ByDesign web
services.

SAP Business ByDesign operation


The SAP Business ByDesign operation defines how to interact with the SAP
application.
Each connector action corresponds to a SAP Business ByDesign service.
The connector operation supports the following actions that are conducted
through SOAP-based web services:
 Inbound
o Get works the same as SAP’s Query service except that you
query by ID. For more information, see the SAP Business
ByDesign operation’s inbound actions topic.
o Query works the same as SAP’s Query service.
 Outbound
o Create works the same as SAP’s Create operation in Manage
service.
o Delete works the same as SAP’s Delete and Remove
operations in Manage service.
o Update works the same as SAP's Update operation in Manage
service and also provides No_Action operations.
o Upsert works the same as SAP’s Save operation in Manage
service.
o Execute is used for custom services.

SAP HANA Database connector


The SAP HANA Database connector allows you to interact with the SAP HANA
Database using Java Database Connectivity (JDBC) drivers. The connector
uses the SAP HANA JDBC driver version "ngdbc-2.19.16.jar".
You can use this connector in an Integration process to perform the
following operations: Insert, Query, Update, Upsert, or Delete data to/from
the SAP HANA database. Additionally, you can execute stored
procedures and perform bulk file transfers between the local and SAP HANA
file systems. The connector offers the option to enter the SQL statement
manually or to build it dynamically based on the input parameters. This
connector also supports inserting JSON documents with SAP HANA 2.0 SPS
01, SAP HANA 2.0 SPS 03, and SAP HANA 2.0 SPS 06.
As per SAP, SAP HANA Client versions are cross-version compatible, and
HANA Client/driver 2.19.16 is compatible with all HANA 2.0 SPS versions.
Connector configuration
To configure the connector to communicate with the SAP HANA Database,
set up the following components:
 SAP HANA Database connection - The connection represents a
specific database instance and its associated user authentication
details.
 SAP HANA Database operation - The operation represents an action
against a specific SAP HANA database table (Object Type). You will
create one or more operations, one for each type of interaction
required by your Integration scenario.
Prerequisites
The SAP HANA database connector requires the following:
 SAP HANA Database (On-premise or cloud).
 URL of the database instance.
 Username and password for a database account.
 a basic runtime installed locally to the database server or to another
machine within your network that can access the database.
Supported editions
The connector supports SAP HANA 2.0 SPS 01 and SAP HANA 2.0 SPS 03.
Tracked properties
This connector has no predefined tracked properties.

Connection tab
Connection URL - Enter the JDBC URL of the SAP HANA Database.
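For example, a SAP HANA JDBC URL typically takes the form
jdbc:sap://<host>:<port>, optionally followed by connection properties;
the host, port, and database name below are placeholders:
jdbc:sap://hana.example.com:30015/?databaseName=HDB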
User Name - Enter the username for connecting to the SAP HANA DB.
Password - Enter the user’s password.
Note: The above fields are mandatory for connecting to the SAP HANA
Database. When you use Test Connection, the connector validates only
these connection properties.
The following fields are applicable for file transfer (Bulk Load and Unload)
only.
File System User - Specify the username for accessing the SAP HANA
Database file system.
File System Host - Specify the host name of the SAP HANA Database file
server.
File System Port - Specify the secure shell (SSH) port number for the
SAP HANA Database file system.
File System Key File Path - Specify the private key or .pem file path to
authenticate with SAP HANA Database host.
File System Key File Password - Set the password of the private key
file, if the key file is password protected.

SAP HANA Database operation


The SAP HANA Database connector operations define how to interact with
your database and represent a specific action (INSERT, QUERY, UPDATE,
UPSERT, DELETE, STORED PROCEDURE, BULK LOAD, BULK UNLOAD) to be
performed against one/many database objects. Create a separate
operation component for each action combination that your integration
requires.
Insert, Query, Update, and Delete operations support the following SQL
statement options:
 Standard — Uses a prepared SQL statement entered by the user.
 Dynamic — Based on the user's input request, the connector uses the
Statement class to build dynamic SQL queries at runtime.
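For example, a Standard option statement might be a prepared statement
along the following lines, with the parameter values supplied from the
input document (the table and column names here are hypothetical):
INSERT INTO EMPLOYEE (ID, NAME) VALUES (?, ?)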
Create a separate operation component for each action/object
combination that your integration requires. The SAP HANA Database
connector supports the following actions:
 Inbound: QUERY, BULK LOAD
 Outbound: INSERT, UPDATE, UPSERT, DELETE, BULK UNLOAD
 Execution of STOREDPROCEDURE (IN, OUT and INOUT Parameters)

Changelog
2024-08 (Version 10, Current)
 Upgraded the connector SDK to version 2.18.2.
 The SAP HANA Database connector now supports the double colon (::)
notation when executing stored procedures.
o For example:
Table Name (stored procedure name) - STORED_PROCEDURE_NAME
Package Name - PACKAGE_NAME
Schema Name - SCHEMA_NAME
HANA full object name - "SCHEMA_NAME"."PACKAGE_NAME::STORED_PROCEDURE_NAME"
 A stored procedure with a SELECT COUNT() statement now returns a
valid value.
 The SAP HANA connector's default document tracking for the Query
operation has been changed from input to output, allowing users to
view the output document in process reporting.
2024-07 (Version 8)
 Supports decimal data type for the following operations: Insert,
Query, Update, Upsert, Delete, Bulk Load, and Bulk Unload.
 Supports decimal data type for Stored Procedure.
 Using the SAP HANA connector, users can perform Query operations
accessing multiple tables that contain decimal data types.
 Upgraded to the latest driver version 2.19.16 to support SPS 06
Revision 67.
2024-01 (Version 7)
 Necessary runtime packages are now packaged with the connector.
2023-09
 Dynamic insert of similar table names from different schemas is now
successful and you can create a profile with the columns from the
table specified in the URL without any duplicate column names.
Shopify connector
The Shopify connector enables you to integrate with Shopify, the
e-commerce platform for setting up and running online storefronts.
You can use the Shopify connector to create, delete, get, query, and
update objects in Shopify. You can integrate Shopify and an ERP (for
example, NetSuite) to synchronize your Shopify product catalog and
orders with your inventory so that it's always correct.
Connector configuration
To configure a connector to communicate with Shopify, set up these
components:
 Shopify connection
 Shopify operation

Prerequisites
To use the connector and implement a connection to your Shopify
storefront from the Boomi Enterprise Platform, have the following
information available:
 Check for TLS 1.2 support in your basic runtime. The Shopify
connector supports only TLS 1.2; support for TLS versions 1.0
and 1.1 has been removed to better protect your customer data. If
you do not enable TLS 1.2 support in your basic runtime, you will
see connection errors. For more information about enabling TLS 1.2
support, see the related Properties panel reference topic (Advanced
tab, HTTPS Protocols property) and the related Community article
about disabling TLS 1.0.
 Your storefront address provided by Shopify when you opened your
store. If you can't remember your shop name, you can find it by
logging into your Shopify admin (General Settings page, Store name
field). Also, when logged into Shopify admin, the URL should include
your store name.
 For OAuth 2.0 authentication, have your Shopify client ID and secret,
the authorization and access token URL, your scope, and generate
an access token. Verify that each entry in the scope is separated by
a comma.
 For Access token authentication, have your private app's Admin API
password which you generate from the Shopify admin of the store
you want to connect with your app.
 Shopify Access scope defines the permissions for Boomi processes,
impacting what data can be accessed or modified. New access
tokens need to be generated when the permissions associated with
the access scope change.
Supported objects
The following objects are supported by one or more of the Shopify
operation actions:
The supported actions are Advanced Query, Calculate, Cancel, Close,
Complete, Create, Delete, Execute, Get, Hold, Move, Open, Query, Release
Hold, Reschedule, and Update. Each of the following objects supports a
subset of these actions:
 Customer
 Custom Collection
 Fulfillment
 Fulfillment (Deprecated)*
 Fulfillment Order
 Inventory Item
 Inventory Level
 Location
 Order**
 Product
 Refund
 Transaction
 Variant (Product)
* Fulfillment (Deprecated) must be selected if you are using the Shopify
2022-04 API version. The recommendation is to upgrade to the Shopify
2023-01 API version for new Fulfillment updates.
** The Order API version is upgraded to 2023-07, and the 2020-07 version
is no longer supported. It is recommended to re-import the schemas to take
advantage of all the updates and new features and to adjust for changed or
retired features.

Connection tab
The Shopify connector supports the OAuth 2.0 protocol for authentication
and authorization.
In addition, to simplify the connection setup for those with a private
Shopify app within their store, the connector supports Access token
authentication.
Store Base URL
Enter the storefront address provided by Shopify when you opened your
store in the format https://<your_shop_name>.myshopify.com. If you
cannot remember your shop name, you can find it by logging into your
Shopify admin (General Settings page, Store name field). Also, when
logged into Shopify admin, the URL should include your store name.
note
In the URL, make sure to use all lowercase for http and https to ensure
the connection works.
Authentication Type
Select the authentication type for the connection.
 When you select OAuth 2.0, enter your credentials on the OAuth
2.0 tab.
 When you select Access token, enter your private app's Admin API
password in the Access token field.
Access token
Enter your private app's Admin API password, which you generate from
the Shopify admin of the store you want to connect with your app.

Shopify operation
The Shopify operation defines how to interact with your Shopify account.
The operation represents a specific action (Query, Create, Update, etc.) to
be performed against a specific Shopify object type (customer, fulfillment,
order, location, etc.). Create a separate operation component for each
action/object combination required for your integration.
To update your process to the latest supported version, re-import the
schemas. Once the schemas are re-imported, your process will be updated
to the latest version. Additionally, a new read-only field, "API Version," will
be added as a single-option button under the Operation property to
support the updated design of the Shopify API version update.
Note: The connector supports the API version listed in the Shopify
connector topic.
The Shopify operations use JSON format and support the following actions:
 Inbound: Advanced Query, Get, Query
 Outbound: Calculate, Cancel, Close, Complete, Create, Delete,
Execute, Open, Update, Hold, Move, Release Hold, Reschedule

Outbound Requests
These operations are outbound requests that perform an action on a
record on the Shopify platform. Input documents must contain an ID field
specifying the ID of the order on which the operation is to be executed.
Input documents for fulfillment operations must also contain the ID of the
fulfillment on which the operation will be executed (see the sample input
after this list).
 Calculate — calculates refunds for a line item and shipping or for a
partial amount of shipping.
 Cancel — cancels an order or a fulfillment or a fulfillment order.
 Close — marks a fulfillment order as incomplete.
 Complete — completes a fulfillment order.
 Open — marks the fulfillment order as open.
 Move — moves a fulfillment order to a new location.
 Hold — applies a fulfillment hold on an open fulfillment order.
 Release Hold — releases the fulfillment hold on a fulfillment order.
 Reschedule — reschedules the fulfill_at time of a scheduled
fulfillment order.
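As a minimal sketch, input documents along these lines would satisfy the
ID requirements described above. The field names follow common Shopify
REST API conventions, but the exact shape depends on your imported
profile, and the IDs are hypothetical:

# Cancel an order: the input document carries the order ID.
cancel_order_input = {"id": 450789469}

# Fulfillment operations also carry the ID of the fulfillment.
cancel_fulfillment_input = {
    "id": 450789469,             # order ID
    "fulfillment_id": 255858046  # fulfillment on which to execute
}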
Shopify changelog
Changelog
2024-11 (Version 24.11.2, Current)
 The Shopify connector has been updated to use the 2024-04 version
of the Shopify REST APIs. Additionally, the Shopify object request
and response profiles have also been updated to support this
change. Existing integrations will remain unchanged and continue to
use their original versions.
o APIs of all the objects were updated to use the 2024-04
version. Users must re-import the schemas to update their API
version to 2024-04. Backward compatibility is supported. If
users choose not to update to the latest version, no
modifications will be made.
o We have added or updated fields to Request/Response profiles
to support 2024-04 version, re-import the schemas to apply
the update. To learn more, see the topic Shopify 2024-11
updates.
2024-09 (Version 24.09.0)
 Addressed flagged Sonar issues.
2024-06 (Version 24.06.2)
 Necessary runtime packages are now packaged with the connector.
2024-02 (Version 24.02.0)
 User is able to set the deadline with the desired date and time for
Fulfillment Orders through the connector.
2023-11
 Overridable fields now include Access Token parameters,
Authorization parameters, and Extended JWT Claims (if supported by
the connector).
2023-10
 The order response payment_details was incorrectly being returned
as an array when using an imported Orders profile.
 The Fulfillment Order API Query action was corrected to not have any
selectable response fields, and all the following fields
except order_id were deprecated from the filtering
option: created_at_min, line_items, receipt, since_id,
tracking_numbers, tracking_urls, updated_at_max, updated_at_min.
 The Fulfillment Order APIs were updated to use the 2023-07 version.
2023-09
 We deleted the following deprecated fields from the
Request/Response JSON schemas:
o total_spent
o total_price_usd
o last_order_id
o last_order_name
o orders_count
o origin_location
o destination_location
Users must re-import the schemas to get the deletion updates reflected in
the latest 2023-07 version of the Order API.

Signifyd – Partner connector


info
Partner connectors developed by our Tech Partners and published on the
Boomi platform provide seamless integration solutions for customers.
Customers will initiate support for these connectors through the Boomi
Support Portal, where tickets will be triaged to the respective partner. The
partner or their designated third party is responsible for ongoing support,
including troubleshooting, bug fixes, and resolving related issues.
The Signifyd – Partner connector allows you to access the Signifyd system
through the Signifyd REST API and manage cases.
Important: As of December 2, 2020 the Signifyd – Partner connector is no
longer available for use by customers who have not previously used it. If
you utilized the Signifyd – Partner connector prior to this date, you can
continue to do so.
The connector uses JSON format and you can do the following:
 Create a case.
 Get a case, which retrieves details about individual cases and case
entries.
 Cancel a case.
Note: The documentation for this connector is provided by a partner. The
documentation was last updated in April 2018.
Connector configuration
To configure the connector to communicate with Signifyd, set up two
components:
 Signifyd – Partner connection. This connector uses HTTP basic
authentication.
 Signifyd – Partner operation (Create, Get, Update)
This design provides reusable components which contain connection
settings and operation settings. After building your connection and
operation, set up your connector within a process. When the Signifyd –
Partner connector is configured properly within your process, Integration
can map to and from Signifyd using the connection to retrieve data from or
send data to Signifyd and manage cases.
Supported editions
The connector supports the Signifyd REST API version 2. For more
information, see the linked resources.
Prerequisites
To use the connector and implement a connection to your Signifyd
account from Integration, have the following:
 A local basic runtime or a runtime cloud.
 The URL for the Signifyd API service.
 A Signifyd username, which is your API key.
Supported objects
The Signifyd – Partner connector supports the following objects:
 Cases (Create, Get, and Update actions)
 Entries (Get actions)
Tracked properties
This connector has no predefined tracked properties.
Signifyd – Partner connection
The Signifyd – Partner connection represents a single connection to the
Signifyd REST API made with a specific API key.
If you have multiple accounts, you need a separate connection for each.
Configure the URL accordingly.
Connection tab
URL
The base URL that is used to access case data through the Signifyd REST
API. By default, the URL is populated with the following URL:
https://api.signifyd.com/v2/

API_KEY - The API Key to authenticate the connection. You can find your
API Key by logging in to your account at https://app.signifyd.com and
reviewing the Settings page. If you need further assistance, contact
Signifyd Support at [email protected].
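As a minimal sketch of what this connection amounts to, the request below
uses HTTP basic authentication with the API key as the user name and a
blank password; the case ID is hypothetical:

import requests

# Basic auth: the Signifyd API key is the user name, the password is blank.
resp = requests.get("https://api.signifyd.com/v2/cases/123456",
                    auth=("YOUR_API_KEY", ""))
print(resp.status_code, resp.json())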

Signifyd – Partner operation


The Signifyd – Partner operation defines how to interact with your Signifyd
system and represents a specific action (Create, Get, Update) to perform
against a Signifyd case.
Create a separate operation component for each action/object
combination that your integration requires.
The Signifyd – Partner operations use JSON format and support the
following actions:
 Create — used to create a case in Signifyd.
 Get — used to retrieve details about a case and case entries in
Signifyd.
 Update — used to cancel a case in Signifyd.

Skyflow - Partner connector


info
Partner connectors developed by our Tech Partners and published on the
Boomi platform provide seamless integration solutions for customers.
Customers will initiate support for these connectors through the Boomi
Support Portal, where tickets will be triaged to the respective partner. The
partner or their designated third party is responsible for ongoing support,
including troubleshooting, bug fixes, and resolving related issues.
The Skyflow Boomi connector enables you to easily integrate with
Skyflow’s APIs to Isolate, Protect, and Govern access to sensitive data in a
globally distributed manner. This approach ensures that your backend
services do not have to handle sensitive data directly, thereby
significantly reducing your compliance scope and improving your security
posture by mitigating the sensitive data sprawl problem.
Skyflow is a data privacy vault that makes achieving data privacy,
security, compliance, and governance easy via a simple API. Designed
specifically for sensitive customer data (PII), Skyflow encrypts and stores
data in a zero-trust vault, and makes it accessible only via specific data
governance rules you create. Your data is isolated in your vault, which can
be located globally to meet data locality and residency requirements. You
can use Skyflow’s powerful tokenization capabilities to reduce your
compliance scope by replacing sensitive data with tokens, and you can
also use Skyflow to securely share sensitive information directly with third-
party services in tokenized or encrypted formats. Advanced data privacy
functionality built into Skyflow includes polymorphic encryption, data
masking, fine-grained access controls, auditability, and more.
Connector configuration
To configure the connector to communicate with the Skyflow API, set up
these components:
 Skyflow API connection
 Skyflow API operation
Prerequisites
To use the Skyflow Boomi Connector, you'll need the following:
 A Skyflow account
 Your Skyflow Vault URL and Vault ID
 A Skyflow API Key with the necessary permissions
Supported editions
The connector supports the following version of the Skyflow API: 33
Tracked properties
This connector has no predefined tracked properties.

Connection tab
Important
The documentation for this connector is provided by a Boomi partner. This
documentation was last updated in August 2024.
To create a connection to the Skyflow API from Integration:
1. Click Create to add a new connection.
2. Select the Skyflow - Partner connector from the list of available
connectors.
3. Enter your Skyflow Vault URL in the Server URL field.
4. Set the Authentication Type to CUSTOM.
5. Enter your Skyflow API Key in Customer Authentication
Credentials.
Skyflow - Partner operation
Important
The documentation for this connector is provided by a Boomi partner. This
documentation was last updated in August 2024.
The Skyflow - Partner connector supports the following operations:
 Insert record: Securely add new data to a Skyflow vault.
 Get record: Selectively retrieve data from a Skyflow vault.
 Update record: Modify existing data in a Skyflow vault while
maintaining an audit trail.
 Delete record: Securely remove data from a Skyflow vault.
 Tokenize: Replace sensitive data with non-sensitive tokens in a
Skyflow vault.
 Detokenize: Retrieve the original sensitive data from tokens in a
Skyflow vault.
Slack connector
The Slack connector simplifies integration with Slack, the cloud-based
collaboration tool, to help improve collaboration in your organization.
Connector configuration
To configure a connector to communicate with Slack, set up these two
components:
 Slack connection
 Slack operation
Prerequisites
To implement a connection to Slack from Boomi Integration, do the
following:
 Create a new Slack application at https://api.slack.com/apps/. Be
sure to select the Boomi workspace to create your Slack app.
 Get your Slack client ID and client secret.
 Make sure a basic runtime is deployed on your local machine or
hosted with Boomi Integration.
Tracked properties
There are no tracked properties in the Slack connector.
Connection tab
Complete the connection settings in the OAuth 2.0 tab.
Authentication Type - OAuth 2.0 is the default authentication and
cannot be changed.
OAuth 2.0 tab
Client ID - The Client ID obtained from Slack.
Client Secret - The client secret obtained from Slack.
Access Token - The encrypted access token retrieved from Slack that is
used to access protected resources. Click Generate to generate the access
and refresh tokens. When a new window opens asking you to confirm offline
access, click Accept. A second message indicates the access token was
received and successfully generated.

Slack operation
The Slack operation defines how to interact with your Slack account.
The operation represents a specific action (Query, Create, Update, etc.) to
be performed against a specific Slack record type or object type (files,
groups, channels, etc.). Create a separate operation component for each
action/object combination required for your integration.
The Slack operation supports the following actions:
 Get — retrieves channels info, files info, groups info, users info,
users presence
 Query — searches channels history, channels list, files, files list,
groups list, group history, users list, usergroups list, messages
 Create — creates channels, chat messages, groups, child groups,
usergroups
 Delete — deletes chats and files
 Update — updates chats and usergroups
 Execute — writes to the following endpoints:
o Channels — archive, invite, join, kick, leave, mark, rename,
setPurpose, setTopic, unarchive
o Files — revokePublicURL, sharedPublicURL, upload binary,
upload text
o Groups — archive, close, invite, kick, leave, mark, open,
rename, setPurpose, setTopic, unarchive
o Usergroups — disable and enable
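As a minimal sketch, a Create action for a chat message corresponds to a
Slack Web API call along these lines; the token and channel values are
placeholders:

import requests

# Post a message using the generated access token as a Bearer token.
resp = requests.post("https://slack.com/api/chat.postMessage",
                     headers={"Authorization": "Bearer xoxb-YOUR-TOKEN"},
                     json={"channel": "#general", "text": "Build finished"})
print(resp.json().get("ok"))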
Slack changelog
Changelog
2024-11 (Version 24.11.0, Current)
 Necessary runtime packages are packaged with the connector.
2023-11
 Overridable fields now include Access Token parameters,
Authorization parameters, and Extended JWT Claims (if supported by
the connector).

Snowflake connector
The Snowflake connector allows you to leverage the capabilities of a
Snowflake data warehouse: speed, usability, and flexibility.
This connector is a key part of the Integration process that makes it easy
to work with Snowflake, one of the fastest growing cloud data
management platforms.
Warning
The Snowflake connector is supported on the Boomi Local basic
runtime, runtime cluster and Private runtime cloud. The Snowflake
connector is not supported on the Public runtime cloud.
note
 This is an open source connector and you can access the code in
the Snowflake connector repository.
 When using the Snowflake connector in a Private runtime cloud,
make sure to update its Properties – Advanced configuration to
include the “Security Policy” as Low. For more details, refer to the
topic Security policies for Clouds and runtime clusters.
Connector configuration
To configure the connector to communicate with the Snowflake tenant, set
up the following components:
 Snowflake connection
 Snowflake operation
This approach provides reusable components, which contain connection
settings such as username, password, etc. After building your connection
and operation, set up your connector within a process. When you have
properly configured your connector within your process, Integration can
map to and from virtually any system using the Snowflake connector to
query, create, update and/or delete data.
Supported editions
The Snowflake connector is now packaged with the Snowflake JDBC driver
version 3.13.29.
Attention
If you encounter an issue such as the JDBC driver not being recognized,
try deploying the driver into the Integration Account Library. To do this:
1. Download a Snowflake JDBC driver. This is a JAR file (snowflake-jdbc-
#.#.#.jar, where #.#.# is the JDBC driver version number). To
download the driver, see the Downloading the Driver topic in the
Snowflake documentation. The Snowflake JDBC driver requires a 64-
bit Boomi runtime using Java 1.8 (or higher).
2. Deploy the JDBC driver by uploading the JAR file into the Integration
Account Library (Setup > Account Libraries). Then, create a Custom
Library component that references that file and set the Custom
Library Type to ‘Connector’ and the Connector Type to
‘Snowflake’. Finally, deploy the Custom Library to the
appropriate basic runtime or runtime cluster environment(s).
If you face any error in the driver deployment, manually copy the driver
(snowflake-jdbc-x.xx.x.jar) to the runtime userlib directory, restart the
runtime, and try again. Contact support, if required.
The uploaded or imported files are first passed through a virus scanner. If
a virus is detected, the upload or import will result in an error and the file
will be rejected. Contact Boomi Support if the error persists.
Limitations
The Snowflake connector has the following limitations:
 SnowSQL operation —
o When the batch size is greater than 1, the update count for each
statement cannot be retrieved. However, the Snowflake-specific
behavior can be used to return the query IDs of the statements
executed in the batch; the user can then track them in Snowflake.
o When the batch size is greater than 1 and a DQL statement
appears as the first statement in the SQL script, an
exception is thrown with error code
net.snowflake.client.jdbc.ErrorCode.UPDATE_FIRST_RESULT_NOT_UPDATE_COUNT.
This exception can be ignored, as it does not prevent execution of
the SQL script.
 Create operation with Snowflake defaults for empty
fields option — Batching is not supported.
 In the JSON schema data type mapping, only the data types are
mapped, not the format. Hence, for seamless execution of the
process, make sure to set the required format manually in the
request and/or response profile. For example,
o Float data type with two precision and two scale — In the
profile, set the format as ##.##.
o Date/Time, Date, Time data types — If the AUTO option is
selected to use a custom date, time, or date/time format,
make sure to manually set the required date and/or time
format in the profile.
note
To apply this data type mapping, perform the profile reimport in the
existing integration process. Otherwise, the existing profiles will continue
to use old data type mapping (mapping all Snowflake data type to Boomi
characters).
 All input request documents (Create, Update, Execute, and
SnowSQL) must not exceed 1 MB. Only one request can be provided
to the connector at a time. If required, use Bulk Operations to
handle large volumes of data.
Warning
For the Snowflake connector, the Query Text Size (i.e., SnowSQL) should
not exceed 500 KB.
Prerequisites
The following prerequisites are necessary to implement a connection to
your Snowflake account:
 a basic runtime deployed on your local machine or hosted with
Boomi Integration.
 Java environment variables on your system, such as JAVA_HOME,
CLASSPATH.
 Necessary authorization to access database.
 Connection URL of the Snowflake server (see the example below).
 One of the following authentication types:
o Username and Password
o Using Private Key and Passphrase
 Boomi Integration Username and Password with the necessary
credentials.
 Amazon S3 access, secret keys, bucket details with the following
permissions on the S3 bucket to access files in the folder (and sub
folders):
o s3:GetObject
o s3:GetObjectVersion
o s3:ListBucket
o s3:ListAllMyBuckets
note
 These fields are optional and applicable only when you are using the
Amazon S3 external storage.
 The additional s3:PutObject and s3:DeleteObject permissions are
required if you plan to unload files to the bucket or automatically
purge the files after loading them into a table.
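A Snowflake connection URL typically takes the JDBC form below; the
account identifier is a placeholder for your own:
jdbc:snowflake://myorg-myaccount.snowflakecomputing.com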
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
 SnowSQL – Refers to the SnowSQL script containing the SnowSQL
command you want to execute.
Snowflake operation
The Snowflake operation defines how to interact with the connection,
including transactions, batching, custom properties, etc.
The Snowflake connector supports the following actions:
 Inbound: Get, Query, Bulk Unload, Bulk Get, Copy Into Location
 Outbound: Create, Update, Delete, Bulk Load, Bulk Put, Copy Into
Table
 In/Outbound: Execute, SnowSQL

Changelog
2025-01 (Version 25, Current)
 Enhanced the connector to
support Database and Schema via operation properties, which
can be overridden using the Dynamic Operation Properties.
 Improved flexibility with Truncate functionality. If the
same Database and Schema combination is repeated for the
documents, truncation is performed only once per combination,
eliminating redundant truncations for each file.
2024-09 (Version 24)
 Enhanced the Snowflake connector so that the Bulk Unload
operation properties (S3 Bucket Name, AWS Region, and Internal
Stage Name) can be overridden via dynamic operation properties.
 Enhanced the Snowflake connector so that the Bulk Load operation
properties (S3 Bucket Name, AWS Region, Internal Stage Name, and
Internal Source File Path) can be overridden via dynamic operation
properties.
 Vulnerability issue of type Denial of Service was resolved to make
the connector more secure.
2024-08 (Version 22)
 Upgraded the connector SDK to version 2.18.2.
2024-07 (Version 21)
 The connector successfully supports the Create action for multiple
documents having different missing fields when the Input Options for
Missing Fields field is configured with the Snowflake defaults for
empty fields option and Batch Size is set to one.
 Vulnerability issue of type Denial of Service was resolved to make
the connector more secure.
 Fixed multiple critical and major code quality issues for the
Snowflake connector.
 The Number of SnowSQL Statements tooltip was updated to make it
more informative, while Batch Size was updated for getting query
results.
2024-06 (Version 20)
 Heap memory usage complies with acceptable standards, resulting
in successful process executions.
2024-04 (Version 19)
 Supports applying multiple filters on the same column, enabling the
querying of data within a specified range, based on the column. For
example, retrieving data between two dates.
 The default document Tracking Direction option for the Bulk Unload
operation is updated to Output Documents.
 The Snowflake connector supports the key pair authentication
mechanism using OpenSSL for generating public/private keys.
Backward compatibility is also ensured for older OpenSSL
versions, namely:
o OpenSSL 0.9.8h
o OpenSSL 1.1.1v
o OpenSSL v3.x
 The Snowflake connector is able to connect using the key pair
authentication mechanism with private/public keys generated using
the Python cryptography library's RSA functions within the
asymmetric module.
2024-02 (Version 18)
 When one of the documents (without batching) failed at the
SnowSQL operation, then all documents were being failed and
moved to the Catch path. Now, all documents are correctly labeled
as successful or failed.
 The Snowflake Connector Class code was not loading the JDBC
Driver class correctly with Java 1.8.
 The Snowflake Bulk Load operation was not accepting the Other
Format Options value for the CSV file format.
Stripe connector
The Stripe connector enables you to integrate with Stripe, a payment
processing platform that enables businesses to accept online
payments and handle global transactions.
You can use the Stripe connector to perform Create, Delete, Get, List,
Execute, Retrieve, Query, List Line Items, Update, and Update Lines actions
on objects in Stripe. You can integrate Stripe and an ERP (for example,
NetSuite) to synchronize online payment transactions with your financial
records in real time.
Connector configuration
To configure a connector to communicate with Stripe, set up these
components:
 Stripe connection
 Stripe operation
Supported editions
The connector supports the following version of the Stripe API: Version v1
Supported objects
The following parent objects are supported by one or more of the Get,
Delete, Create, Update, List, Query, and Retrieve actions:
 Customers
 Payment Links
 Prices
 Products
 Invoices
 Invoice Items
 Payment Intents
 Charges
 Upcoming Invoice
 Upcoming Invoice Lines
 Balance Transactions
The following objects are supported by one or more of the List Line Items
and Update Lines actions:
 Invoices
 Payment Links
The following Business Actions are supported by the Execute action:
 Reconcile Customer Balance for Payment Intent
 Verify Microdeposits for Payment Intent
 Authorize Increment for Payment Intent
 Capture Charges
 Capture Payment Intent
 Confirm Payment Intent
 Cancel Payment Intent
 Send Invoice
 Void Invoice
 Pay Invoice
 Finalize Invoice
 Mark Invoice as Uncollectible
Tracked properties
This connector has no predefined tracked properties.
Connection tab
The Stripe connector supports the use of API keys for authentication
and authorization.
Base URL - The address defaults to the Stripe API base
URL: https://api.stripe.com/v1.
API Key - Enter the unique API key, obtained from the Stripe
Dashboard at Developers > API Keys.
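As a minimal sketch of the authentication scheme described above, the
request below passes the API key with a call to the v1 base URL; the key
value is a placeholder, and Stripe also accepts the key as a Bearer token:

import requests

# Stripe accepts the secret API key as the basic-auth user name, no password.
resp = requests.get("https://api.stripe.com/v1/customers",
                    auth=("sk_test_YOUR_API_KEY", ""),
                    params={"limit": 3})
print(resp.json())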
Stripe operation
The Stripe operation defines how to interact with your Stripe account.
The operation represents a specific action to be performed against a
specific Stripe object type. Create a separate operation component for
each action/object combination required for your integration.
The Stripe operations use JSON format and support the following actions:
 Inbound: Get, List, Query, List Line Items, Retrieve.
 Outbound: Create, Update, Delete, Execute, Update Lines.
Stripe Objects supported: Customers, Prices, Products, Payment Links,
Invoices, Invoice Items, Payment Intents and Charges.

Changelog
2025-01 (Version 5, Current)
 New actions have been introduced for Balance Transactions and
Capture Charges objects.
 The Balance Transactions object now supports the Create request
and response profiles, and the Get and List actions. The Charges
object now supports the Capture Charges Execute action.
2024-11 (Version 3)
 This connector is now in General Availability (GA).
 New actions have been introduced for various objects, including
Execute, Retrieve, List Line Items, and Update Lines for Customers,
Products, Prices, Payment Links, Invoices, Invoice Items, Payment
Intents, and Charges. Additionally, a new Query action is now
available for Customers, Products, Prices, Payment Intents, Invoices,
and Charges.
2024-09 (Version 2)
 Enhanced Tech Preview connector
o When an action is selected for the Stripe connector, only the
supported object types for that action (e.g., Customers,
Charges, Invoices) will be displayed on the Import tab.
o Updated Request/Response Profiles for Customers, Products,
Prices, Payment Links, Payment Intents, Charges, Invoice
Items, and Invoices to resolve earlier profile configuration
mismatches and ensure smoother integration.
o Introduced the new List action for the objects Customers,
Products, Prices, Payment Links, Invoices, Invoice Items,
Payment Intents and Charges.
o Introduced new objects Invoices, Invoice Lines, Charges and
Payment Intents.
o Enhanced user experience by making the id parameter
available as a connector step parameter for the Get action
and as part of the request profile for the Update action.
2024-08 (Version 1)
 Initial Tech Preview release of the connector. To learn about support
levels of Tech Preview connectors, see the topic Feature Release
Stages.

SugarCRM REST connector


The SugarCRM connector enables you to move data into and out of any
edition of SugarCRM.
This connector makes it easy to work with SugarCRM by simplifying the
interaction. You can set up the SugarCRM connector to move data into and
out of any edition of SugarCRM from any on-premise application and legacy
system, including other On Demand or SaaS-based applications.
Connector Configuration
To configure a connector to communicate with SugarCRM, set up two
components:
 SugarCRM connection
 SugarCRM operation
This approach creates reusable connection settings components. After
building your connection and connector operation, set up your connector
within a process. When you have properly configured your connector with
a process, Integration can send data sets to SugarCRM.
Supported editions
The following editions are supported:
 All editions of SugarCRM (Professional, Enterprise, and Ultimate).
This connector was tested against SugarCRM version 9.0.0.
 SugarCRM API version 10.
Prerequisites
To implement a connection to your SugarCRM account from Integration
and use the connector operations, do the following:
 Have a SugarCRM account and OAuth 2.0 credentials.
 Have the URL for the SugarCRM REST-based service endpoint.
 Have a basic runtime deployed on-premise or in the cloud.
 Register your custom platform in each Sugar instance using the
Platform extension.
 Make sure you are running your basic runtime on a recent Java
version (JRE 1.8.0_91 or later) to help avoid connectivity issues.
Tracked Properties
This connector has no tracked properties.

Connection tab
The connector uses OAuth 2.0 Resource Owner Password Credentials
Grant flow type for username and password authentication.
Base URL - Enter the URL that identifies where the module is available
and its version, for
example https://dellboomiisvpr.sugarondemand.com/rest/v10/.
Client Id - The client ID obtained from SugarCRM (Admin > OAuth Keys).
Client Secret - The client secret obtained from SugarCRM.
Username - Enter the user name required to authenticate and log into
SugarCRM.
Password - Enter the password for your SugarCRM account.
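For reference, the following Python sketch shows roughly what the Resource Owner Password Credentials token request behind this connection looks like when issued directly against SugarCRM's documented v10 REST API. The base URL, credentials, and platform name are placeholder assumptions, not values from this document.

import requests

# Minimal sketch of the OAuth 2.0 Resource Owner Password Credentials
# grant; the endpoint path and field names follow SugarCRM's documented
# v10 REST API, and all values below are hypothetical.
base_url = "https://dellboomiisvpr.sugarondemand.com/rest/v10"
payload = {
    "grant_type": "password",
    "client_id": "my_client_id",        # from Admin > OAuth Keys
    "client_secret": "my_client_secret",
    "username": "integration_user",
    "password": "secret",
    "platform": "custom_platform",      # the platform registered in each Sugar instance
}
response = requests.post(f"{base_url}/oauth2/token", json=payload, timeout=30)
response.raise_for_status()
print(response.json()["access_token"])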

SugarCRM REST operation


The SugarCRM operation defines how to interact with your SugarCRM
account and represents a specific action performed against a specific
SugarCRM record type or object.
Create a separate operation component for each action/object
combination required for your integration.
The SugarCRM operation supports the following actions for Accounts,
Contacts, Leads, Opportunities, and Cases:
 Inbound: Get, Query
You can configure the SugarCRM operation to retrieve object records from
a SugarCRM account. For all inbound operations, unique parameters are
required at runtime to request data. Review the topic about Parameter
values to understand how to statically or dynamically define these values
in a process for runtime.
 Outbound: Create, Delete, Update
You can configure the SugarCRM operation to send data to a SugarCRM
account. For all outbound operations, send request JSON data into the
connector that corresponds with the defined JSON profile. Usually, request
documents are the destination output of a map. However, you can also set
parameters on the outbound connector's Parameter tab within the process
to supply static or dynamic request values.
Taleo connector
The Taleo connector enables you to move data into and out of the Taleo
Business Edition application.
Use the Taleo connector to integrate any on-premise applications and
legacy systems and integrate Taleo with other On Demand or SaaS-based
applications such as Intacct, Salesforce, NetSuite, etc.
The Taleo connector connects directly to the Taleo application and browses
the interfaces in real-time. If any of your Taleo objects are customized with
additional fields, these fields automatically appear when browsing the
Taleo interface with Boomi Integration.
Integrating with Taleo consists of web service calls using XML
request/response setup over an HTTPS connection. The technical details of
this connection, such as request headers, error handling, HTTPS
connection, etc. are abstracted to simplify the implementation.
Connector configuration
To configure a connector to communicate with Taleo, set up two
components:
 Taleo connection
 Taleo operation
This design provides reusable components, which contain connection
settings (such as URL, User, Password) and operation settings (such as
Object, Action, data definitions). After building your connection and
operation, set up your connector within a process. When you have
properly defined the Taleo connector within your process, Integration can
map to and from virtually any system using the connector to retrieve data
from or send data to Taleo.
Supported editions
The Taleo Business Edition is supported.
Prerequisites
Either create a new user within Taleo or use an existing user to access the
Taleo API with the Taleo connector. Because Integration connectivity is
subject to the same access control restrictions as users, as defined within
Taleo, validate the controls you need the API to use and define a user with
the appropriate controls.
Tracked properties
This connector has no predefined tracked properties.

Connection tab
Organization Code - The company code of the Taleo customer.
Login User Name - The user name for logging into the Taleo instance.
Login Password - The password for logging into the Taleo instance.

Taleo operation
The Taleo operation supports the following actions:
 Inbound: Get, Query
You can configure the Taleo operation’s inbound actions to retrieve
document records from a Taleo account. Unique parameters are required
at run-time to request data for all inbound operations. To understand how
to statically or dynamically define values in a process for run-time, review
the topic about parameter values.
 Outbound: Create, Update, Upsert, Delete
You can configure the Taleo operation’s outbound actions to send data to a
Taleo account. For all outbound operations, send request XML data into
the connector that corresponds with the defined XML profile. Most often,
request documents are the destination output of a map. However, you can
also set parameters on the outbound connector's Parameter tab within the
process to supply static or dynamic request values.
Taleo REST connector
Use the Taleo REST connector to move data into and out of the Taleo
Business Edition application by using its public REST API.
As a Taleo user, you can use the Taleo REST connector to integrate any on-
premise applications and legacy systems. You can also use this connector
to integrate Taleo Business Edition with other On Demand or SaaS-based
applications such as Intacct, Salesforce, NetSuite, etc.
Connector configuration
To configure a connector to communicate with Taleo Business Edition
through its REST API, set up two components:
 Taleo REST connection
 Taleo REST operation
Supported editions
This connector supports version 12.5 and future versions of the Taleo
Business Edition REST API.
Prerequisites
Either create a new user within Taleo, or use an existing user to access the
Taleo REST API with this connector. Boomi Integration connectivity is
subject to the same access control restrictions as defined within Taleo.
Validate the controls you will need the REST API to use and define a user
with the appropriate controls.
Tracked properties
This connector has no predefined tracked properties.

Connection tab
If you use multiple accounts, you need a separate connection component
for each account. You can pair this connection with different Taleo
operations to perform a unique action against a Taleo instance.
Organization Code - The unique Taleo Business Edition company
instance, referred to as a company code.
Login User Name - The unique user name of a Taleo Business Edition
user. The user must be an administrator.
Login Password - The user’s password.

Taleo REST operation


The Taleo REST operation defines how to interact with your Taleo account
instance and represents a specific action (Query, Create, etc.) performed
against a specific Taleo object type (Requisition, Candidate, Employee,
etc.).
Not all objects are available for each action. Create a separate operation
component for each action/object combination required for your
integration.
note
Contact your Taleo representative to learn more about the API
requirements for Taleo and how to access other resources in the user
community for field definitions.
The Taleo REST operation supports the following actions:
 Inbound: Get, Query
Configure the Taleo REST operation to retrieve object records from a Taleo
account. For all inbound operations, unique parameters are required at
runtime to request data. Review the linked topic about parameter values
to understand how to statically or dynamically define these values in a
process for runtime.
To use the Get or Query action with the Status object, add to the
connector’s Parameters tab the entityCode and ID in this
format: entityCode/ID, for example: CAND/5002. You cannot use the
entityType with the Status object.
 Outbound: Create, Update, Upsert, Delete
Configure the Taleo REST operation to send data to a Taleo account. For all
outbound operations, send request XML data into the connector that
corresponds with the defined XML profile. Request documents are typically
the destination output of a map. However, you can also set parameters on
the outbound connector's Parameter tab within the process to supply
static or dynamic request values.
Fields and objects
The Taleo REST operation supports custom fields. The Taleo REST
operation does not support the following objects:
 parseresume
 resumetocandidate
Taleo’s XML-based REST API does not support some characters. When
defining custom field external names within Taleo, names can contain
only:
 lowercase (a-z)
 uppercase (A-Z)
 numbers (0–9)
Custom field names cannot start with a number.
Attachments
You can use the Get, Query, Create, Delete, and Update actions with
attachment objects. These characteristics apply to the different actions:
 Get — the response profile is set to “Unstructured”. The output
documents consist of the contents of the attachment. Provide the ID
in the following format: <entityID>:<attachmentID>.
 Delete — provide the ID in the following
format: <entityID>:<attachmentID>
 Create and Update — the request profile is set to “Unstructured”.
Each input document consists of the content of the attachment that
is being created or updated, and must have the following dynamic
document properties set on it (see the sketch after this list):
o attachmentId — For the Create action, set to the ID of the
entity that the attachment is being attached to. For the
Update action, set to the attachment ID. The format
is <entityID>:<attachmentID>.
o fileName — The name of the attachment.
o contentType — The content type of the attachment, for
example “application/pdf” if you are attaching a PDF.
o (Optional) description — A description of the file.
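The following minimal Python sketch assembles these dynamic document properties for a hypothetical entity and attachment. The property names come from the list above, while the IDs and file details are placeholder assumptions.

# Minimal sketch of preparing dynamic document properties for Create
# and Update against a Taleo attachment object; the IDs and file name
# are hypothetical.
entity_id = "5002"
attachment_id = "77"

update_properties = {
    "attachmentId": f"{entity_id}:{attachment_id}",  # Update uses entityID:attachmentID
    "fileName": "resume.pdf",
    "contentType": "application/pdf",   # e.g. application/pdf for a PDF
    "description": "Candidate resume",  # optional
}
# Create attaches to an entity, so only the entity ID is supplied.
create_properties = dict(update_properties, attachmentId=entity_id)

print(update_properties)
print(create_properties)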
Options tab
Click Import Operation, then use the Import wizard to select the object
with which you want to integrate.
The following table describes the default field definitions for each action
type:
Object - Displays the object type that you selected in the Import
Operation wizard.
Tracking Direction - Select the document tracking direction for the
operation, either Input Documents or Output Documents. This setting
enables you to choose which document appears in Process
Reporting. Start steps always track output documents regardless of your
selection.
note
If the tracking direction is read-only, the feature to change the direction is
either unavailable or the developer set the configuration to read-only. The
default value you see shows you which document appears in Process
Reporting.
Request/Response Profile - The XML profile definition that represents
the XML structure that is being sent or received by the connector. When
you use the Get, Create, or Update action with an attachment object, the
request or response profile is set to “Unstructured”.
Return Application Error Responses - This setting controls whether an
application error prevents an operation from completing:
 If you clear the setting, the process stops and reports the error on
the Process Reporting page.
 If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.
Teqtron Agile PLM — Partner connector
info
Partner connectors developed by our Tech Partners and published on the
Boomi platform provide seamless integration solutions for customers.
Customers will initiate support for these connectors through the Boomi
Support Portal, where tickets will be triaged to the respective partner. The
partner or their designated third party is responsible for ongoing support,
including troubleshooting, bug fixes, and resolving related issues.
The Teqtron Agile PLM connector enables users to access their Oracle
Agile PLM system, which accelerates innovation and new product
introductions by efficiently managing items, parts, products, documents,
requirements, engineering change orders, and quality workflows across
supply chains while integrating seamlessly with computer-aided design
(CAD) systems.
The connector performs these actions by accessing Application Interface
Services (AIS) and custom web services. The connector can also be used
to access AXML/PDX files placed in Oracle Agile PLM’s SFTP server through
event-based or scheduled Agile Content Service (ACS). ACS is an event-
driven, XML-based publishing service that makes the product record
available to a wide variety of business applications and users, both
internally and across the global manufacturing network.
The Teqtron Agile PLM connector enables you to access AIS and custom
web services as well as get and send data from a Boomi process.
Connector configuration
To configure a connector to communicate with the Oracle Agile PLM
system, set up these two components:
 Teqtron Agile PLM connection. This connector supports HTTP and
HTTPS for authentication and encryption.
 Teqtron Agile PLM operation.
This approach creates reusable components which contain connection
settings and operation settings. After building your connection and
operation, set up your Teqtron Agile PLM connector within a process. When
the process is defined properly, Integration can map to and from the Agile
PLM system using the Teqtron Agile PLM connector to retrieve data from or
send data to the Agile PLM system.
Prerequisites
To implement a connection to your Agile PLM system from Boomi
Integration, complete the following prerequisites:
 Have the host, port, username, and password to access your Oracle
Agile PLM system.
 If you plan to use custom web services, have the custom web
service URL extension.
 If you plan to use the SFTP component of the Oracle Agile PLM
system, have the SFTP port, username, and password available to
set up a connection.
Supported editions
Release 1.0 of the Teqtron Agile PLM connector supports the following
versions of Agile PLM:
 Agile 9.3.4+
Supported objects
The Teqtron Agile PLM AIS operation supports the following list of objects:
 BusinessObject
 Search
 Attachment
 Folder
 PGC
 Table
 AdminMetadata
 Collaboration
 Report
 DocPublishing
 UserProfile
 Project
 PC
Tracked properties and dynamic properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
FileNameSftp - The name of the file to be sent to the Agile PLM
system’s SFTP server for the Send operation. For the Get, Get and move,
and Get and delete actions, the connector sets this property to the name
of the retrieved file.

Connection tab
To connect to multiple Agile systems, use a separate connection for each
system and configure credentials accordingly. Pair a single connection with
different available operations to perform desired actions on the Agile PLM
system.
The Teqtron Agile PLM connection covers the connection to the Agile PLM
system, as well as to the SFTP server allocated for the Agile PLM system.
These settings are common, and are accessible by all operations built into
the connector. The Agile PLM connection settings are primarily used to
connect with the Agile PLM system’s Application Interface Service
operations. The SFTP connection settings are used by the Get and Send
operations on the SFTP server allocated for the Agile PLM system.
WSDL Url - Oracle Agile PLM custom PXs WSDL URL extension.
Version - Oracle Agile PLM application version.
Protocol Type - Client security protocol (HTTP or HTTPS).
Host - Oracle Agile PLM application host name.
Port - Oracle Agile PLM application port.
Username - Oracle Agile PLM application username.
Password - Oracle Agile PLM application password.
Trust SSL Certificate - Trust the application’s SSL certificate when
using the HTTPS protocol type.
SFTP Port - Agile Content Service SFTP destination port.
SFTP Username - Agile Content Service SFTP destination username.
SFTP Password - Agile Content Service SFTP destination password.

Teqtron Agile PLM – Partner operation


The Teqtron Agile PLM Connector operations define how to interact with
your Agile PLM application.
The operations represent specific actions of connecting to Agile PLM web
services as well as Agile PLM application’s SFTP server. Create a separate
component for each action/object combination that your integration
requires.
The Teqtron Agile PLM connector supports the following operations:
 AIS: Application Interface Services, uses XML format
 CustomWebServices: access custom web services, uses XML format
 Get: get files from the Agile PLM SFTP server, uses AXML or PDX
format
 Send: send files to the Agile PLM SFTP server, uses any file
extension
Thru MFT – Partner connector
info
Partner connectors developed by our Tech Partners and published on the
Boomi platform provide seamless integration solutions for customers.
Customers will initiate support for these connectors through the Boomi
Support Portal, where tickets will be triaged to the respective partner. The
partner or their designated third party is responsible for ongoing support,
including troubleshooting, bug fixes, and resolving related issues.
The Thru MFT (Managed File Transfer) - Partner Connector provides out-of-
the-box file exchange for your Boomi processes in conjunction with Thru’s
MFT service. By connecting a Boomi process to Thru’s cloud-native MFT
service, you can take your file transfer to a new level with unparalleled
monitoring and control across all lines of business.
Important
The documentation for this connector is provided by a Boomi partner. This
documentation was last updated in July 2024.
Easily design, build, and integrate high volume file transfers into any
process with Thru MFT:
 Instantly scalable, cloud-native file exchange platform where you
pay for what you need.
 Reliable file exchange between partners and processes with audits,
alerts and replay.
 Partner self-management with onboarding, subscriptions, reusable
endpoints, and support.
 A centralized system of record to mitigate risks.
 Real-time, end-to-end processing dashboards to monitor and control
all file transfers.
 Separation of data processing in processes from file transfers.
 Unified connector for file pickup, drop off and exception handling.
Thru's MFT - Partner connector for Boomi enables a Boomi process to call
Thru for data channel operations. Thru's MFT connector is an
operation-based connector, meaning you need to configure a specific
operation for the connector to perform within a Boomi process.
Thru is a cloud-based Managed File Transfer as a Service (MFTaaS) that
enables enterprises and their partners to share files based on a postal
service model. Both Enterprise and Partner organizations simply “Pickup”
and/or “Dropoff” files in Thru via a self-managed file exchange portal.
Internal teams, business units, or systems can also use Thru flows to
interact with one another’s endpoints with the ability to subscribe to
file exchange “Flows”.
This powerful combination of industry-leading file exchange and
integration platform solutions enables enterprises to “drag and drop”
complex file exchange processes into an enterprise business ecosystem
with ease.
Thru MFT is built on three primary components:
 Organizations – File exchange process participants.
 Endpoints – File exchange source and target locations.
 Flows – File exchange workflow process.
Once granted access by the Enterprise, Organizations self-manage their
users and reusable Endpoints. Endpoints are used to subscribe to the
Flows (file exchange processes) that the Enterprise has exposed to the
Organization.
To configure the connector to communicate with Thru MFT, set up three
components:
 Object - GET (pick up a file from Thru), CREATE (drop off a file).
 Connection - The API URL, Site URL, and Site Key used to create a
Thru connection.
 Operation - Defines the type of operation that can be performed on
the Thru connector. The Flow Secret field is located on the
operation, or it can be left blank and defined in a Set Properties
step.
This approach allows reusable connection settings and operations. After
building the connection and operation, set up and test your Thru
connector within a process, then deploy the process in a package when
ready. When the process is defined properly, Boomi Integration can map
to and from virtually any system using the Thru MFT - Partner connector
to communicate with Thru MFT.
Supported editions
The connector supports the following Thru MFT API version:
https://www.thruinc.com/integration-api/
Prerequisites
To use the connector and implement a connection to your Thru portal from
Boomi Integration, do the following:
 Have a Thru MFT account with the necessary Flow Endpoint
credentials. See the Thru MFT documentation guide to learn more.
 Ensure the Thru MFT portal is accessible to Boomi Integration.
Supported objects
The following objects are supported by one or more of the Thru MFT -
Partner connector operations:
 FLOW_PICKUP (Binary output)
 FILE_METADATA (JSON output)
 DROP_OFF (Binary Input)
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
FileName - The name of the file from Flow_Pickup.
FileCode - The FileCode from Flow_Pickup.

Connection tab
Important
The documentation for this connector is provided by a Boomi partner. This
documentation was last updated in July 2024.
The Thru MFT – Partner connector uses the following information to
connect to Thru MFT.

This information is created when a Thru iPaaS Connector endpoint is
added to FlowStudio in Thru. The endpoint becomes a Flow Endpoint and
the connection details are available from the Connections tab.
API Url - The Thru MFT API URL.
Storage Repository - The Storage URL (Storage is also referred to as
'Site').
Site Key - The Storage secret key.
Secret Key - The secret key for the flow to access a Thru MFT Transport.

Thru MFT – Partner operation


The Thru MFT - Partner operation defines how to interact with your Thru
MFT instance and which flow to use with connection parameters.
Important
The documentation for this connector is provided by a Boomi partner. This
documentation was last updated in July 2024.
The operation represents a specific action, such as Create and Get to be
performed against a specific Flow for a file, metadata or exception. The
Thru MFT - Partner operation uses binary and JSON formats. The following
actions are supported:
 Get - Inbound
 Create - Outbound
Toad Intelligence Central connector
Connection tab
TIC Server Name - The name or IP address of the server where Toad
Intelligence Central is installed.

Port - The port number to connect to the Toad Intelligence Central MySQL
database. Use the default of 3566 unless you modified it during the Toad
Intelligence Central installation.
Database Name - The name of the Toad Intelligence Central database
where data sets are created and retrieved for this connection.
User - The name of the Toad Intelligence Central account to use when
connecting to the server.
Password - The password for the specified Toad Intelligence Central
account.
Batch Count - The number of records to commit to the Toad Intelligence
Central database in a batch. Use the default of 500 unless you are tuning
the performance of this Toad Intelligence Central connection.

Toad Intelligence Central operation
The Toad Intelligence Central operation defines how to interact with your
Toad Intelligence Central data repository, and supports the outbound
Execute action to create or update a Toad Intelligence Central data set.
To complete creating a new Toad Intelligence Central operation,
click Import Operation to initialize the profiles.

Twilio connector
Use the Twilio connector to query metadata about a Twilio account, phone
numbers, calls, text messages, and recordings, initiate outbound calls,
and send text (SMS) messages, in a Boomi Integration process.
note
The component of the Twilio API that provides support for handling
incoming calls through TwiML instructions is accessible to a Boomi
Integration process through the Web Services Client connector.
Connection configuration
To configure a connector to communicate with Twilio, set up two
components:
 Twilio connection
 Twilio operation
Prerequisites
To implement a connection to Twilio from Integration, you must:
 Have a Twilio account.
 Establish credentials for use with the connector.
Supported objects
The following objects are supported by one or more of the Twilio operation
actions:
*SMSMessage is deprecated and is no longer maintained or updated. It is
expected to be removed in future versions of the Twilio API.
**Message is the replacement for SMSMessage and is the recommended
option to get all the latest features and updates.
Supported editions
The Twilio connector supports Version 2010-04-01 of the Twilio REST API.
Unsupported editions
The Twilio connector does not support Version 2008-08-01 of the Twilio
REST API.
Tracked properties
This connector has no predefined tracked properties.

Connection tab
Account SID - Enter the unique Twilio Account SID, obtained from Twilio.
Auth Token - Enter the Authentication token, obtained from Twilio.

Twilio operation
The Twilio operation defines how to interact with your Twilio account and
represents a specific action (Query, Create, etc.) to be performed against
a specific Twilio object type.
Create a separate operation component for each action/object
combination required for your integration. Not all objects are available for
each action.
The following list provides some generalities and specifics:
 For all inbound operations, unique parameters are required at run-
time to request data.
 For all outbound operations, send request XML data into the
connector that corresponds with the defined XML profile. Request
documents are typically the destination output of a map. Some
outbound actions on particular objects result in the manipulation of
Twilio account data and in the execution of a tangible action.
 Review the Parameter values topic to understand how to statically
or dynamically define values in a process for run-time.
 Twilio specifies phone numbers in E.164 format, for
example +13115556789. A raw caller ID string in a response means
Twilio was unable to normalize the incoming caller ID to E.164.
 Twilio specifies timestamps as GMT in RFC 2822 format. For
example, 6:13:42 PM PDT on September 16, 2012, is specified
as Mon, 17 Sep 2012 01:13:42 +0000 (see the sketch after this list).
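The following minimal Python sketch handles the two Twilio formats described above. The sample values are hypothetical, and the E.164 check is a generic regular expression rather than anything Twilio-specific.

import re
from email.utils import parsedate_to_datetime

# Parse an RFC 2822 timestamp like those Twilio returns.
timestamp = parsedate_to_datetime("Mon, 17 Sep 2012 01:13:42 +0000")
print(timestamp.isoformat())  # 2012-09-17T01:13:42+00:00

def is_e164(number: str) -> bool:
    # E.164: a leading + followed by up to 15 digits.
    return re.fullmatch(r"\+[1-9]\d{1,14}", number) is not None

print(is_e164("+13115556789"))   # True
print(is_e164("raw caller id"))  # False: Twilio could not normalize the caller ID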

UBIX Advanced Analytics – Partner connector
Use the UBIX Advanced Analytics - Partner connector as a gateway to send
critical enterprise data from your Boomi-connected systems into a big
data store called DataSpace. The UBIX platform allows you to quickly
build advanced analytics workflows and cloud-based applications,
leveraging no-code ModelSpace and SolutionSpace App Builder
capabilities to transform data into valuable assets by delivering
real-time insights and data-driven decision making to your organization,
transforming how you do business.

Connector configuration
To configure the connector to communicate with the UBIX platform, set up
two components:
 UBIX Advanced Analytics - Partner connection
 UBIX Advanced Analytics - Partner operation
Prerequisites
 To implement a connection to your UBIX account from Integration,
have your UBIX Base URL, App Key, and App Secret, and ensure the
credentials carry the required permissions. The UBIX Base URL, App
Key, and App Secret are obtained during UBIX sign-up.
 Prior to transfer, data needs to be in JSON format.

UBIX Advanced Analytics - Partner operation
The UBIX Advanced Analytics - Partner operation defines how to interact
with your UBIX account and represents a specific data transfer action;
the default is SEND.
The UBIX Advanced Analytics - Partner operation supports the following
action:
 SEND - The default action supported by the UBIX - Partner
connector. This action transfers your data into your UBIX DataSpace
cloud storage for access to a full suite of ready-made data analytics
and artificial intelligence business tools. Datasets transferred to
UBIX must use JSON format, as shown in the sketch below.
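As an illustration of the JSON requirement, this minimal Python sketch converts flat CSV data into a JSON document before it reaches the connector. The field names and values are hypothetical.

import csv
import io
import json

# Convert flat rows into the JSON format the SEND action requires;
# the columns and values here are hypothetical.
csv_data = "region,revenue\nEMEA,1200\nAPAC,950\n"
rows = list(csv.DictReader(io.StringIO(csv_data)))
document = json.dumps(rows, indent=2)
print(document)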

UKG Pro connector
Use the UKG Pro connector to easily browse, query, and get data out of
UKG Pro and integrate that data with other back-office systems.
The UKG Pro connector connects directly to the UKG Pro application and
browses the interfaces in real time. Integrating with UKG Pro consists of
web service calls utilizing XML request/response set up over an HTTPS
connection. The technical details of this connection, such as request
headers, error handling, and HTTPS connection, are abstracted to simplify
your implementation.
Connector configuration
To configure a connector to communicate with UKG Pro, set up two
components:
 UKG Pro connection
 UKG Pro operation
This connector provides reusable components, which contain connection
settings (such as URL, User, and Password) and operation settings (such
as Object, Action, and data definitions). After building your connection
and operation, set up your connector within a process. When the UKG Pro
connector is configured properly within your process, Integration can map
to and from virtually any system using the UKG Pro connector to retrieve
data from or send data to the UKG Pro application.

Prerequisites
To implement a connection to your UKG Pro account from Integration:
 Have both your login and password to access the UKG Pro web
service.
 Confirm that your UKG Pro account has privileges to connect to the
UKG Pro API. Your UKG Pro representative can confirm your
privileges by providing you with a Sender ID and password specific
to the integration API.
 Replace your current password in the UKG Pro connector's
connection settings with a combination of your password and a
security token. This replacement is needed due to a UKG Pro
security change.
Connection tab
Service Root URL - Enter the base URL of your UKG Pro service, such
as https://service101.ukgpro.com/api/v1/.
Authentication Type - Select the type of authentication to use:
 None
 Basic — This authentication requires a username and password.
Username - Enter the user name required to authenticate UKG Pro.
Password - Enter the password of the account on UKG Pro.
User API Key - Enter or paste your User API key. You can retrieve the User
API and Customer API keys from your UKG Pro administrator account by
selecting System Configuration > Security > Web Services.
Customer API Key - Enter or paste your Customer API key.

UKG Pro operation
The UKG Pro operation defines how to interact with your UKG Pro account
and represents a specific action performed against a specific UKG Pro
record type or object.
Create a separate operation component for each action/object
combination required for your integration. The UKG Pro operation browses
local and cloud attachments to generate and import profiles for the
following actions:
 Get — retrieves data from UKG Pro and returns it to the process.
 Query — searches linked entities as well as entity types. When you
select fields to query, the field names are appended to the URL. As a
result, the browser’s URL length limit can be exceeded, which
generates a 502 server error. If this happens, unselect all fields.
Ultipro (UKG) — An object with a composite primary key must use the
connector’s Query action; objects with a single primary key must use the
Get action. Click Import Operation to select the object(s) to
integrate. The Import wizard uses the connector browser and imports the
request and response profiles for the selected action.

Options tab
The following table describes the default field definitions for each action
type:
Object - An object defines what you want to integrate or the action to run,
which you select in the Import Operation wizard.
Return Application Error Responses - This setting controls whether an
application error prevents an operation from completing:
 If you clear the setting, the process stops and reports the error on
the Process Reporting page.
 If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.
$skip - Skip the designated number of records and return the remaining
records. For example, entering 25 skips the first 25 records and returns
records 26 and higher.
$top - Retrieve the designated number of top records. You can combine
$top with $skip. For example, setting $skip to 25 retrieves records 26 and
up; setting $top to 5 then returns records 26 to 30, as the sketch below
illustrates.
Veeva connector
Veeva is a cloud-based customer relationship management platform for
the life sciences and pharmaceutical industries. The Veeva connector
allows you to create and interact with objects in the Veeva Vault in various
ways.

note

This is an open source connector and you can access the code in
the Veeva connector repository.

Connector configuration
To configure a connector to communicate using Veeva, set up two
components:
 Veeva connection
 Veeva operation
This design provides reusable components that contain connection and
operation settings. After building your connection and operation, set up
your connector within a process. When the process is defined properly,
Integration can map to and from virtually any system using the Veeva
connector.
Prerequisites
To implement a connection to Veeva from Integration, you need the
following:
 A Veeva account with a sign-in and password.
 A basic runtime deployed in either a hosted environment or on a
local machine.

Tracked properties
Response Headers - This group tracked property captures the response
header from the service and returns it alongside the output document for
the Create / Upsert Vault, Update Vault, Query, and Execute operations.

Connection tab
Authentication Type - Select either User Credentials or OAuth 2.0 to
authenticate your Veeva account. The default value is User Credentials.
User Credentials
 If the Vault Authentication API burst limit is exceeded while
requesting the Session ID, the connector waits 60 seconds before
retrying once when you use the User Credentials authentication
type.

Vault Subdomain/DNS - Enter the URL of your Vault instance. The
default value is: {vault_subdomain}.veevavault.com. See the Veeva
Developer Docs to learn more.
note
The Veeva Developer Docs use the terms Vault DNS and Vault
Subdomain interchangeably. While using this connector, Boomi advises
you to use the same input for both Vault DNS and Vault Subdomain to
avoid issues.
API Version - The Veeva REST API version. The default is v23.3, the
current GA version.
User - Enter your Vault user name.
Password - Enter your Vault user password.
Session timeout (minutes) - Enter the Session Duration (in minutes)
configured in your Vault by your Vault admin. This value must be at least
10 minutes and no longer than 480 minutes.
Client ID (Optional) - Enter your header name, if any. If you input a
value, it is concatenated after Boomi_ and trimmed before being used in
the X-VaultAPI-ClientID header.

note
The following restrictions apply to Client ID: a valid Client ID must be an
alphanumeric string with a maximum of 100 characters. A Client ID can
be mixed-case, and the only special characters allowed are periods,
underscores, and hyphens. If an API request includes an invalid Client ID,
the value appears as invalid_client_id in the API Usage Log. The sketch
below illustrates these rules.
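This minimal Python sketch encodes the Client ID restrictions described in the note as a regular expression; the sample values are hypothetical, and the pattern is an interpretation of the rules above rather than Veeva's own validation code.

import re

# Alphanumeric plus periods, underscores, and hyphens; 1 to 100 characters.
CLIENT_ID_PATTERN = re.compile(r"^[A-Za-z0-9._-]{1,100}$")

def is_valid_client_id(client_id: str) -> bool:
    return CLIENT_ID_PATTERN.match(client_id) is not None

print(is_valid_client_id("MyCompany.Integration-01"))  # True
print(is_valid_client_id("bad client id!"))            # False: would log as invalid_client_id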

OAuth2.0/Open ID Connect profile ID - The ID of your Veeva Vault
OAuth2.0 / Open ID Connect profile.
Authorization Server Client Application ID (Optional) - The ID of the
client application at the external Authorization Server Provider.
OAuth 2.0
Veeva requires you to use an external Authorization Server Provider. For
more information, see the resource Configuring OAuth 2.0 / OpenID
Connect Profiles.
The following fields are displayed when you select OAuth 2.0 to
authenticate your Veeva session.
Client ID - The external authorization service client application ID.
Client Secret - The external authorization service client application
secret.
Authorization Token URL - The external Authorization Server
Authorization Token URL.
Scope - The external authorization service client application scope. The
default is openid offline_access.
Access Token URL - The external Authorization Server Access Token
URL.
Access Token - The encrypted access token retrieved from the external
Authentication Server to access protected resources. Click to generate the
access and refresh tokens. When a new window opens asking you to
confirm offline access, click Accept. A second message indicates the
access token was received and successfully generated.

Test Connection

You can test your connection settings before you use or save the
connection in a process. The Test Connection ensures that your specified
settings are correct, valid, and accessible. If the test connection is
successful, you can save the connection. Otherwise, review and correct
any incorrect settings, then test again.

Attention

You can only run the Test Connection against containers associated with
your account (local basic runtimes, runtime clusters, and runtime clouds).

URL logging

Every time the connector executes a request to the service, the URL is
logged. The table below details where the logs for different executions are
stored.

Requests execution - Logger
Operations - Document
Test Connection - Container
Browser - Document
Retries - Container

The following is an example of a log stored in the container for
Authentication requests:
Jan 21, 2025 10:31:27 AM ART INFO
[com.boomi.connector.veeva.VeevaConnection testConnection] GET :
https://example.veevavault.com/api/v23.3/metadata/vobjects

By default, the runtime prints buffered debug messages whenever a log
message at the WARNING level or higher is written. See the
topic Changing the default for logging debug messages to change
this setting; you must have the Runtime Management privilege to perform
this task.

Workday connector
Use the Workday connector to connect to the Workday Financial
Management and Human Capital Management applications.
Connector configuration
To configure a connector to communicate with Workday, set up two
components:
 Workday connection
 Workday operation
This design provides reusable components, which contain connection
settings (such as URL, user name, password) and operation settings (such
as Object, Action, data definitions). After building your connection and
operation, set up your connector within a process. When you have
properly defined the Workday connector within your process, Integration
can map to and from virtually any system using the connector to retrieve
data from or send data to the Workday application.

Connection tab

URL - The URL for the Workday service, such as https://<workday host
name>.workday.com/ccx/service/<tenant name>/. The trailing ‘/’ is
required. Configure the bracketed Workday host name and tenant name to
successfully connect to the correct instance.

Version - The web service version including point releases, such as 23.2.
You do not need to enter ‘v’ because Integration inserts it for you.

Username - The user name for logging into the Workday service with the
format username@tenantname.

Password - The password for logging into the Workday service.

Workday operation
The Workday operation defines how to interact with your Workday account
and represents a specific action to be performed against a specific
Workday object type (Adjust_Time_Off, Add_Dependent, Add_Stock_Grant,
etc.).
The Workday connector operation supports the Execute action. You must
create a separate operation component for each object required for your
integration.
When importing profiles, date and dateTime fields must use one of the
two supported formats: date yyyy-MM-ddZZ and dateTime
yyyy-MM-dd'T'HH:mm:ss.SSSZZ, as in the sketch below.
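The following minimal Python sketch produces values in the two supported formats. Workday's patterns are Java-style, so this maps them onto Python equivalents; the sample timestamp is hypothetical.

from datetime import datetime, timezone

# A fixed, hypothetical timestamp for illustration.
now = datetime(2024, 5, 1, 13, 45, 30, 123000, tzinfo=timezone.utc)

# date yyyy-MM-ddZZ: calendar date plus a colon-separated UTC offset.
offset = now.strftime("%z")
date_value = now.strftime("%Y-%m-%d") + offset[:3] + ":" + offset[3:]

# dateTime yyyy-MM-dd'T'HH:mm:ss.SSSZZ: isoformat with millisecond precision.
datetime_value = now.isoformat(timespec="milliseconds")

print(date_value)      # 2024-05-01+00:00
print(datetime_value)  # 2024-05-01T13:45:30.123+00:00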

note
During an import, if you choose to filter by Workday operation, filters must
be entered exactly as they appear in Workday.

Options tab

When you create a new operation or want to change the service specified,
use the Options tab. Click Import Operation to use the Import wizard
and select the object you want to integrate with. An import is always
required to change the Service Name, Custom Service Name, and to
re-import objects. Otherwise, the component XML is not updated.
The following fields appear on the Options tab:

Connector Action - The Connector Action field is set to Execute and is
not configurable.
Object - An object defines what you want to integrate or the action to run,
which you select in the Import Operation wizard.
Tracking Direction - Select the document tracking direction for the
operation, either Input Documents or Output Documents. This setting
enables you to choose which document appears in Process Reporting.
Start steps always track output documents regardless of your selection.
note
If the tracking direction is read-only, the feature to change the direction is
either unavailable or the developer set the configuration to read-only. The
default value you see shows you which document appears in Process
Reporting.
Request/Response Profile - The XML profile definition that represents
the structure sent or received by the connector.
Return Application Error Responses - This setting controls whether an
application error prevents an operation from completing:
 If you clear the setting, the process stops and reports the error on
the Process Reporting page.
 If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.
Return All Pages - If selected, the connector makes repeated calls to the
endpoint so that all pages are returned as separate documents. If
cleared, the maximum number of documents returned is the default of
100.
note
Paging works only when Response_Filter is present in the request and
Response_Results is present in the response.

Service Name - The name of the Workday service you want to interact
with. For new operations or to change the service, click Import and select
the Workday service you want to use.

note
You must select Custom to specify a Workday service that is available to
your account, but not displayed in the drop-down list. The Service
Name field always takes precedence over Custom Service Name.

The Workday connector supports the following services:

 Custom

 Absence Management

 Academic Advising

 Academic Foundation

 Admissions

 Benefits Administration

 Campus Engagement

 Cash Management

 Compensation

 Compensation Review

 Delegated Authentication

 Dynamic Document Generation

 External Integrations

 Financial Aid

 Financial Management

 Human Resources

 Identity Management

 Integrations

 Inventory

 Notification

 Payroll

 Payroll CAN

 Payroll FRA

 Payroll GBR

 Payroll Interface

 Performance Management
 Professional Services Automation

 Recruiting

 Resource Management

 Revenue Management

 Settlement Services

 Staffing

 Student Finance

 Student Records

 Student Recruiting

 Talent

 Tenant Data Translation

 Time Tracking

 Workday Connect

 Workday Planning

Custom Service Name - The name of the custom Workday service you
want to interact with. Use this field to specify a Workday service that is
not included in Service Name. For new operations or to change the
service, click Import, set the Service Name to Custom, and enter a
valid Workday service. The Workday service should match exactly how it
appears in Workday (name and format).

Workday Prism Analytics connector
The Workday Prism Analytics connector allows Workday customers to
bring together data from any source in order to prepare, analyze, and
securely share it within your organization, enabling business insights
that drive better business decisions and outcomes.
With the ability to bring external data sources into Workday Prism
Analytics and govern them along with existing Workday Prism Analytics
data, the business can generate financial and HR insights that can be
shared across your organization.
Through this connector, you can use a Boomi Integration process to
perform operations such as Get, Create (Dataset/Table and Bucket),
Upload, Complete Bucket, and Import.

Connector configuration
Using the Workday Prism Analytics connector, you can use an Integration
process to upload large data files to a Workday Prism Analytics table. To
improve performance and efficiency, the connector breaks up the files into
smaller chunks, as shown in the sketch below.
To configure the connector to communicate with Workday Prism Analytics,
set up two components:
 Workday Prism Analytics connection
 Workday Prism Analytics operation
This design provides reusable components containing connection settings
and operation settings. After building the connection and operation, set up
your connector within a process. When the process is configured properly,
Integration can map to and from any system using the Workday Prism
Analytics connector to create a table, create a bucket to load files into a
table, upload files to a table, and complete the bucket. The connector
manages all the steps to load big data into Workday Prism Analytics
tables.
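To make the chunking idea concrete, here is a minimal Python sketch of a chunk-and-compress pattern similar to what the connector performs before upload. The 256 MB ceiling comes from the Prism Data API limit mentioned later in this section; the chunk size, sample data, and function are illustrative assumptions, not the connector's actual code.

import gzip
import io

MAX_CHUNK_BYTES = 10 * 1024 * 1024  # illustrative uncompressed size per chunk

def chunk_and_compress(lines, header):
    # Accumulate lines until the chunk limit, then emit a gzip chunk;
    # each chunk repeats the header row so it is independently loadable.
    buffer, size = [header], len(header)
    for line in lines:
        if size + len(line) > MAX_CHUNK_BYTES and len(buffer) > 1:
            yield gzip.compress("".join(buffer).encode("utf-8"))
            buffer, size = [header], len(header)
        buffer.append(line)
        size += len(line)
    yield gzip.compress("".join(buffer).encode("utf-8"))

sample = io.StringIO("id,name\n1,a\n2,b\n")
header = sample.readline()
for chunk in chunk_and_compress(sample, header):
    print(len(chunk), "compressed bytes")  # each must stay under the 256 MB API limit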

note

New to the connector is support for the Workday Table object in the Data
Catalog, added in the Workday 33 release.

Prerequisites
To use the connector and implement a connection to Workday Prism
Analytics from Integration, have the following information and access
rights:
 API Service Endpoint — The URL to your Workday REST API
service endpoint. You can find this endpoint in Workday using the
View API Client task.
 Workday Client ID and Secret — The client ID and secret are
generated for you in Workday when you register the API client in
your tenant using the Register API Client for Integrations task.
 Refresh Token — You can generate a refresh token in Workday
when you register the API client in your tenant using the Register
API Client for Integrations task.
 Have the appropriate access rights and permissions to create
Workday Prism Analytics datasets and tables using the Workday
user interface. The API uploads are performed with this user.
Supported editions
The Workday Prism Analytics connector supports all available editions of
the Workday services.
Tracked properties
This connector has no predefined tracked properties. See the
topic Adding tracked fields to a connector operation to learn how to
add a custom tracked field.

Connection tab
To establish a connection to Workday, you need your API service endpoint,
your client ID and secret, and your refresh token.
note
Have the appropriate access rights and permissions to create Workday
Prism datasets and tables using the Workday user interface. The API
uploads are performed with this user.
API Service Endpoint - Enter the URL for the Workday REST API service
endpoint. For example, the URL to your data center and production
service may
be https://wd2-impl-services1.workday.com/ccx/api/v1/{tenant_id}. The
URL is analyzed to determine and set the base host API and tenant
account. You can find the endpoint in Workday using the View API Client
task.
Client ID - The client ID obtained from Workday to request authorized
access to client data. The Client ID is generated in Workday while
registering the API client in the tenant using the Register API Client for
Integrations task.
Client Secret - The client secret obtained from Workday. Enter the
alphanumeric client secret that is associated with the Client ID. The Client
Secret is generated in Workday while registering the API client in the
tenant using the Register API Client for Integrations task.
Refresh Token - Enter the refresh token allowing access to Workday. You
can generate a refresh token in Workday when you register the API client
in your tenant using the Register API Client for Integrations task.
Workday Prism Analytics operation
The Workday Prism Analytics operation defines how to interact with your
Workday account and represents a specific action to be performed: Get,
Create (Dataset/Table and Bucket), Upload, Complete Bucket, and Import.
Create a separate operation component for each action/object
combination that your integration requires. The Workday Prism Analytics
operations support the following actions:
 Inbound: Get
 Outbound: Create (Table and Bucket), Upload, Complete Bucket, and
Import.

Options tab
Click Import Operation, then use the Import wizard to select the object
with which you want to integrate. When you configure an action, the
following fields appear on the Options tab.
Connector Action - Determines whether the connector performs an
inbound or outbound operation. Depending on how you create the
operation component, the action type is either configurable or
non-configurable from the drop-down list.
Object - Defines the object with which you want to integrate and which is
selected in the Import Wizard.
 The Object type for Create Table is “Dataset”.
 The Object types for Create Bucket are:
o “List of Datasets”, available if “Use existing schema” is
selected during Import.
o “Dynamic Dataset”, if “Use existing schema” is cleared during
Import.

Request Profile - Select or add an XML profile component that


represents the structure sent by the connector.

Response Profile - Select or add an XML profile component that


represents the structure received by the connector.

Tracking Direction - Select the document tracking direction for the operation, either Input Documents or Output Documents. This setting enables you to choose which document appears in Process Reporting. Start steps always track output documents regardless of your selection.

note
If the tracking direction is read-only, the feature to change the direction is
either unavailable or the developer set the configuration to read-only. The
default value you see shows you which document appears in Process
Reporting.

Return Application Error Responses - This setting controls whether an application error prevents an operation from completing:

 If you clear the setting, the process stops and reports the error on
the Process Reporting page.

 If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.

Automatically generate bucket name (Create) - (Optional, and Bucket only) Select to automatically generate the bucket name. The bucket name must be unique across all datasets in the catalog.

Header lines to ignore (Upload) - (Optional) Enter the number of header rows to ignore from the file being uploaded.

Maximum chunk size (Upload) - Enter the maximum size (in megabytes, without compression) for each chunk to upload. The maximum upload limit for a single compressed file that the Prism Data API supports is 256 MB.

Wait for Process to Complete (Complete Bucket) - (Optional) Select to have the process wait for the service to finish processing the bucket and return the final state. When successful, the bucket is assigned to the table.

Wait Timeout (seconds) (Complete Bucket) - (Optional) Enter the maximum number of seconds to wait for the service to finish processing the bucket. This timeout is used when the Wait for Process to Complete check box is selected.

Wait for Process to Complete (Import) - Select to have the process wait for the service to finish processing the bucket and return the final state. When successful, the bucket is assigned to the table.

Wait Timeout (seconds) (Import) - Enter the maximum number of seconds to wait for the service to finish processing the bucket. This timeout is used when the Wait for Process to Complete check box is selected.

Header lines to ignore (Import) - Enter the number of header rows to ignore in the file being uploaded. The default value is 1.

Maximum chunk size (Import) - Enter the maximum size (in megabytes, without compression) for each chunk to upload. The maximum upload limit for a single compressed file that the Prism Data API supports is 256 MB.

Automatically generate bucket name (Import) - (Required) Select to automatically generate the bucket name. The bucket name cannot be blank and must be unique across all tables in the catalog. You can override the generated bucket name with a profile field. If you do not select this option, the connector throws an error.

Get

The Get operation is used to obtain information about a bucket, such as the bucket name, the associated dataset, and the status of the bucket, by specifying the Bucket ID as an input parameter in the configuration of the Connector step.
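
For reference, a Get response might look like the following minimal sketch. The field shapes are assumptions based on the bucket examples later in this topic, not a verified contract, and the ID values are illustrative:

{
  "id": "b818f1f5fbf0017544cf2700c0393b15",
  "name": "BucketForEmployeeData",
  "state": {
    "descriptor": "Success"
  },
  "targetDataset": {
    "id": "fdd7dd26156610006a12d4fd1ea300ce"
  }
}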

Create

The Create operation is used to do the following:

 Create a table.

 Create a bucket.

Creating a table

When you need to load data into Workday Prism Analytics, the first step is
to use the Create action to create a new empty table. Before you begin,
verify that you have the appropriate access rights and permissions to
create datasets and tables using the Workday user interface and
documentation at doc.workday.com. The JSON input document for the
table only requires the name, but you can optionally provide a label and
description. After creating the table, you can reuse it for the buckets
containing the CSV files that you want to upload to the table.

JSON format to create table

{
  "name": "name of the table",
  "displayName": "display name of the table",
  "fields": [
    {
      "ordinal": 1,
      "name": "Column Name",
      "description": "Description of the column",
      "precision": 255,
      "scale": 0,
      "type": {
        "id": "fdd7dd26156610006a12d4fd1ea300ce",
        "descriptor": "Text"
      }
    },
    {
      "ordinal": 2,
      "name": "Column Name",
      "description": "Description of the column",
      "precision": 255,
      "scale": 0,
      "type": {
        "id": "fdd7dd26156610006a12d4fd1ea300ce",
        "descriptor": "Text"
      }
    }
  ]
}

note

The table name and column names of the table should be unique.

Creating a bucket

After creating the dataset, your next step is to use the Create action to
create the bucket, which is a temporary folder for the CSV files that you
want to upload to the dataset. When creating the bucket, you select either
a specific dataset or a Dynamic Dataset (you provide the dataset using a
document property) when browsing. Specify options for how the bucket
name is generated, how fields in the file to upload are enclosed and
delimited, and the number of lines to ignore in the file to upload. The JSON
input document for the bucket requires the name, some fields to define
the schema of the files to upload, and the dataset.

When browsing to create a bucket, you can select the Use existing schema option to retrieve the schema fields from the dataset that has already been uploaded and use that schema in the new bucket you are creating. If the dataset does not have an uploaded schema, or you do not select this option, you must define the schema in an input document as shown below.

JSON format to create Bucket


{
  "name": "BucketForDYNAMIC_DATASET_{1}",
  "targetDataset": {
    "id": "b818f1f5fbf0017544cf2700c0393b15"
  },
  "operation": {
    "id": "Operation_Type=Replace"
  },
  "schema": {
    "parseOptions": {
      "fieldsDelimitedBy": ",",
      "fieldsEnclosedBy": "\"",
      "headerLinesToIgnore": 1,
      "charset": {
        "id": "Encoding=UTF-8"
      },
      "type": {
        "id": "Schema_File_Type=Delimited"
      }
    },
    "fields": [
      {
        "ordinal": 1,
        "name": "Employee",
        "description": "Employee",
        "precision": 255,
        "scale": 0,
        "type": {
          "id": "fdd7dd26156610006a12d4fd1ea300ce",
          "descriptor": "Text"
        }
      },
      {
        "ordinal": 8,
        "name": "Annual_Salary",
        "description": "Annual Salary",
        "precision": 18,
        "scale": 2,
        "type": {
          "id": "32e3fa0dd9ea1000072bac410415127a",
          "descriptor": "Numeric"
        }
      }
    ],
    "schemaVersion": {
      "id": "Schema_Version=1.0"
    }
  }
}

note

Buckets expire after 24 hours, and after that period they can no longer be
used to upload files. If you do not complete the entire data loading
process (create the dataset/table, create the bucket, upload files into the
dataset/table, complete the bucket) in the 24-hour period, you must start
over by creating a new bucket.

Upload

The Upload operation is used to upload the CSV files into the bucket after
it is created. Use the Set Properties step and provide two required
document properties:

 Filename — the name of the CSV file that you want to upload into
the bucket.

 Bucket ID — the destination bucket where the file is uploaded.

You can also provide an optional document property for the number of
header lines to ignore. If set, the document property overrides the
operation setting.
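
For example, to load a single CSV file, the Set Properties step might assign values like the following (the file name and ID source are illustrative):

Filename: employee_compensation.csv
Bucket ID: the bucket ID returned by the Create (Bucket) operation
Header lines to ignore: 1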

Complete Bucket

The Complete Bucket operation is used to initiate the data transfer from
bucket to dataset once the CSV files are uploaded successfully into the
bucket. The input JSON for Complete Bucket provides the Bucket ID, and
the output is a JSON document containing the Bucket ID and the state of
the bucket (either Success or Failed). The Bucket ID should be added as an
input parameter in the Connector step. When successful, a "Success"
status is received stating that the bucket was completed. The Object Type
is Bucket.
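
As a rough sketch, assuming the same field shapes as the bucket examples in this topic, the exchange looks like this:

Input:

{
  "id": "b818f1f5fbf0017544cf2700c0393b15"
}

Output:

{
  "id": "b818f1f5fbf0017544cf2700c0393b15",
  "state": "Success"
}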

Import

The Import operation is used to create a bucket and upload files into the dataset of a pre-existing table. Upon execution, this operation:

 Creates a bucket for the existing table.

 Uploads the file to the bucket.

 Completes the bucket.

Workday Prism Analytics changelog

Changelog

2025-03 (Version 3, Current)

 Necessary runtime packages are now packaged with the connector.

 The Workday Prism Analytics v2 connector has been renamed to Workday Prism Analytics connector.

Zuora connector
Use the Zuora connector to connect to Zuora’s Z-Commerce Platform to
access your Z-Billing and Z-Payments data.

As a Zuora user, you can use the Integration Zuora connector to integrate
any on-premise applications and legacy systems, and to integrate Zuora
with other On Demand or SaaS-based applications such as Salesforce,
NetSuite, etc.

Important

When developing new processes, use the Zuora REST connector, which is
an update to this Zuora legacy connector. All legacy SOAP calls are
available in REST.

Connector configuration

To configure a connector to communicate with Zuora, set up these components:

 Zuora connection

 Zuora operation

This design provides reusable components, which contain connection settings (such as web service site, user name, password) and operation
settings (such as object, action, data definitions). After building your
connection and operation, set up your connector within a process. When
you have properly defined the Zuora connector within your process,
Integration can map to and from virtually any system using the connector
to retrieve data from or send data to the Zuora application.
The linked Zuora API documentation contains valuable information about
common scenarios, such as Customer Account creation, Payment creation,
and Subscription creation.

Supported editions

The Zuora connector supports v.39, v.59, and v.69 of the WSDL. New
objects and operations are available in version 69.

In WSDL 68 and earlier versions, the Zuora SOAP API treated many date
fields as dateTime fields. In WSDL 69 and later versions, Zuora treats
those fields only as date fields and no longer accepts dateTime values.

note

Before downloading and using WSDL 69 or later, Zuora cautions that you
should check if your SOAP integration passes or receives dateTime values
to or from any of these fields.

The WSDL is common across all Zuora instances or tenants and does not
contain custom fields configured in your specific tenant. However, you can
extract and populate custom fields. See the Zuora operation and Zuora
operation’s inbound actions topics for more information.

Prerequisites

The Zuora connector requires the following:

 Your API-enabled Zuora tenant. Contact your Zuora representative for information.

 A Zuora user name with password, configured with the API Access
permission.

Tracked properties

This connector has no predefined tracked properties.

Connection tab

Web Service Site - The Zuora web service site. The default for
Production is https://www.zuora.com/apps. For the API sandbox,
use https://apisandbox.zuora.com/apps.

User Name - The Zuora tenant user name for logging into the web
service.

Password - The Zuora tenant password for logging into the web service.

WSDL version - The version of the Web Services Description Language to use.

Zuora operation

The Zuora operation defines how to interact with your Zuora account and represents a specific action (Query, Create, etc.) performed against a specific Zuora object type (such as Account, Invoice, or Payment).

Create a separate Operation component for each object required for your
integration.

Get

The inbound Get action, available in versions 59 and 69, returns these objects:

 The Amend Response object, which generates a profile to map the Amend Response document. This object matches the response to a specific profile.

 The Invoice Split Execute object, which works in conjunction with several objects:

a. Create the individual InvoiceSplit object.

b. Create an InvoiceSplitItem object for each split invoice you want to create from the original invoice.

c. Execute the InvoiceSplit object to split the original invoice into multiple invoices.

Query

The inbound Query action returns zero-to-many object record documents from a single request. You can add query filters, located on the bottom half of the Operation component page, to limit results and speed up processing. Filters combine logical operators (AND and OR) with comparison operators (Equal To, Not Equal To, Greater Than Or Equal, Greater Than, Less Than Or Equal, Less Than, Like, Is Null, Is Not Null). Because SQL cannot test for null values using comparison operators, query for fields that have no values using Is Null (does not have) or Is Not Null (does have).
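
For example, a filter that returns only active accounts that carry a balance might combine two expressions as follows (the field names are illustrative):

(Account.Status Equal To "Active") AND (Account.Balance Greater Than 0)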

Query supports the following object types:

 Account Full

 Billing Preview Run (WSDL version 69)

 Bill Run (WSDL version 69)

 Invoice Full

 Invoice Split Full (WSDL versions 59 and 69)

 Payment Full

 Account (WSDL versions 59 and 69)

 Accounting Code (WSDL versions 59 and 69)

 Amendment

 Communication Profile

 Contact

 Credit Balance Adjustment

 Feature (WSDL version 69)

 Invoice

 Invoice Adjustment

 InvoiceFile (WSDL version 69)

 Invoice Item

 Invoice Item Adjustment

 Invoice Payment

 Invoice Payment Refund

 Invoice Split (WSDL versions 59 and 69)

 Invoice Split Item (WSDL versions 59 and 69)

 Payment

 Payment Method

 Payment Method Snapshot (WSDL version 69)

 Product

 Product Feature (WSDL version 69)

 Product Rate Plan

 Product Rate Plan Charge

 Product Rate Plan Charge Tier

 Rate Plan

 Rate Plan Charge

 Rate Plan Charge Tier

 Refund

 Subscription

 Subscription Product Feature (WSDL version 69)

 Taxation Item

 Unit Of Measure (WSDL version 69)

 Usage

Object types that have the word “Full” in their name (Account Full, Invoice
Full, Invoice Split Full, and Payment Full) are custom Integration object
types. When you query these objects, the connector performs some joins
to build complex objects. For example, the regular Account object type
has a BillToId field that contains a Contact Id field. When you use the
Account Full object type, BillToId is replaced by BillToContact that has all
the Contact fields. When you query the Account Full object type,
Integration performs a join based on the Contact Id and then populates
the BillToContact fields.

Querying custom and mutually-exclusive fields

The Zuora API WSDL is static and does not have account-specific customizations. Therefore, you cannot import custom fields in the automatically generated request and response profiles. To extract custom fields using the Query action, enter the custom API field names in the Extra Query Fields field on the Zuora operation. Use only top-level fields, separating API field names with commas. Add custom fields as elements on the Query action’s response profile, add each API field name to the Element Name field, and set the Zuora namespace. You cannot filter by custom fields.

The following is an example:

 The custom field’s API field name is NS__BillAddrIntegrationId__c.

 The XML element is <NS__BillAddrIntegrationId__c>1234</NS__BillAddrIntegrationId__c>.

 Add an element to the response profile named NS__BillAddrIntegrationId__c whose data type is Character, then set the Zuora namespace.

note

An alternative to extracting custom fields using Query is to use the Execute action’s export function to extract related objects in a single query.

The Zuora API prevents including mutually-exclusive fields on some objects when querying data from Zuora. To extract one of these fields in a query, add the API field name to the Extra Query Fields field on the Zuora operation, separating API field names with commas; only records containing the specified field are returned.

RatePlanCharge, a commonly used object, has the following mutually exclusive fields: OveragePrice, Price, IncludedUnits, DiscountAmount, and DiscountPercentage. Specify the desired field in the Extra Query Fields field on the operation to return records containing that field.
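
For example, to return the Price field for rate plan charges together with the custom field from the earlier example, the Extra Query Fields value might look like this:

Price,NS__BillAddrIntegrationId__c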

Create

The outbound Create action creates new records in the Zuora object
defined in the operation.

The Zuora API WSDL is static and does not contain account-specific
customizations. You cannot import custom fields in the automatically-
generated request profile. If you use custom fields, add them manually as
elements on the Create action’s request profile, add their API field name
to the Element Name field, then configure and set the Zuora namespace.

Create supports the following object types:

 Account

 Accounting Code (WSDL versions 59 and 69)

 Amendment

 Amendment Request (WSDL versions 59 and 69)

 Billing Preview Run

 Contact

 Credit Balance Adjustment

 Invoice (WSDL versions 59 and 69)

 Invoice Adjustment

 Invoice Generator (version 39 WSDL only)

 Invoice Item Adjustment

 Invoice Payment

 Invoice Split (WSDL versions 59 and 69)

 Invoice Split Item (WSDL versions 59 and 69)


 Payment

 Payment Method

 Product (WSDL version 69 only)

 Product Rate Plan (WSDL version 69 only)

 Product Rate Plan Charge (WSDL version 69 only)

 Rate Plan (WSDL version 69 only)

 Refund

 Subscribe

 Taxation Item

 Unit Of Measure (WSDL version 69 only)

 Usage

Using the Create action for bulk import

Use these objects in the Create action to leverage Zuora's bulk import API:

 Import – Payment — Used to import payments and apply them to invoices in Zuora. The invoices must exist and the invoice number is a required input field.

 Import – Quote — Used to import a payment toward a quoted sale. The quote is converted to an invoice and the payment specified in the import is applied to the invoice created. This object requires the Zuora/Salesforce integration because looking up a quote is a Salesforce feature. To enable that integration, Z-Force 360 must be installed in the Salesforce account. The quote must exist in Salesforce and the quote number is a required input field.

 Import – Usage — Used to import usage records for subscription services in Zuora.

When you use the Zuora connector to import high volumes of payment,
quote, or usage data, follow these rules:

 Import only one object type (payment, quote, or usage) at a time into Zuora.

 Keep input batches to a maximum of 10,000 records in each document for all three types of imports. If an input document contains 10,001 records, there are two output documents: one containing 10,000 records and one containing one record. This is an inefficient way to use this connector.

 Bring the records from the source system into a Boomi Integration
process first. The incoming records can be in any format but must
be mapped to an XML profile by using the Map step.

 Add static fields to the XML profile. Each object has eight to 10 static
fields that are valid for all clients. The fields are added when you
click the Import button on the Zuora connector’s operation
component.

 Import custom fields by manually adding the fields to the XML profile:

o Make sure that the XML profile’s Options tab > Respect Min Occurs check box is on.

o Use the Zuora ID for the custom field’s name.

o Mark each custom field as required. On the XML profile’s Data Elements tab, set the element’s Min Occurs field to “1”.

o Do not set a namespace (including the Zuora namespace) for the custom field.

 Use the yyyy-MM-dd'T'HH:mm:ss format for Payment and Quote objects’ date fields. For Usage objects, use the MM/dd/yyyy format for start dates and end dates.

 Use the XML profile in a Map step. The Map step ensures that the
first record contains placeholder elements for columns it might not
have data for. This ensures that data is not truncated from
subsequent records.

 Send the XML data from the Map step to the Zuora connector. The
Zuora connector converts the XML data to CSV, as required by
Zuora.

note

Zuora changes affect importing payments and quotes. In the request profile, the external_unique_key field has been renamed order_id. The response profile has new fields: amount_applied_to_invoice and amount_applied_to_credit_balance. To get these changes into Zuora operation request and response profiles created before the 6 March 2013 release, use the Zuora Import Wizard to browse and select the object. You must also update previously created maps and static data to include the changed and new fields.

Delete

The inbound Delete action removes records from the Zuora object as
defined in the operation. You must supply the Zuora ID for the record.

The Zuora operation’s Delete action supports the following object types:

 Account

 Amendment (WSDL versions 59 and 69)

 Contact

 Invoice (WSDL version 69 only)

 Invoice Adjustment (WSDL version 69 only)

 Invoice Item Adjustment (WSDL version 69 only)

 Payment

 Payment Method (WSDL version 69 only)

 Product (WSDL version 69 only)

 Product Rate Plan (WSDL version 69 only)

 Product Rate Plan Charge (WSDL version 69 only)

 Refund (WSDL version 69 only)

 Subscription

 Unit Of Measure (WSDL version 69 only)

 Usage (WSDL version 39)

Execute

The outbound Execute action uses the Zuora Export ZOQL (Zuora Object
Query Language) to create exports with the Z-Commerce API. Zuora
Export ZOQL is similar to Z-Commerce API ZOQL, with a few differences.
The biggest difference is that with Zuora Export ZOQL you query a data
source, not a Z-Commerce API object.

The Export API enables you to extract fields from a parent object in a
single query instead of having to perform subsequent connector calls to
retrieve related information. For example, in one query you can retrieve
an Invoice, its InvoiceItems, the associated Product Rate Plan Charges, the
Account, and the account’s Bill To Contact. The Export API is highly
recommended for use with any transaction record types.

The Zuora Export ZOQL query is constructed from the Zuora Operation
component. (See the Zuora Operation topic for information about the
fields that are used by the connector’s Execute action: Fields to Select,
Table to Select From, Where Clause, Grouping Fields, Order By Field and
Sort By.) The Zuora connector creates an export record, queries the Zuora
database, retrieves the data, parses the data into an XML structure in the
response profile and returns a document for each logical record. The data
is returned in a flattened structure unless you use the Operation
component’s Grouping Fields field to enter field(s) to group details under
their header. For example, you can group invoice line items under their
invoice header.

note

To use the Execute action’s Export function, you should be familiar with Zuora Export ZOQL (Zuora Object Query Language), the query language used to create exports with the Zuora SOAP API. You can read about Export ZOQL, data sources, and pre-joined objects at Zuora’s Knowledge Center.
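
As an illustrative sketch, an Execute operation that exports invoice line items grouped under their invoice header might use settings like these (the table and field names follow the examples in this topic; the Status filter is an assumption to verify against your tenant’s data sources):

Fields to Select: Invoice.InvoiceNumber,InvoiceItem.Id,InvoiceItem.ChargeAmount
Table to Select From: InvoiceItem
Where Clause: Invoice.Status='Posted'
Grouping Fields: Invoice.InvoiceNumber
Order By Field: Invoice.InvoiceNumber

The connector builds the Export ZOQL query from these settings and returns one XML document per invoice, with its line items grouped beneath the invoice header.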

Update

The outbound Update action updates existing records in the Zuora object
defined in the operation.

The Zuora API WSDL is static and does not contain tenant-specific
customizations. Therefore, custom fields cannot be imported in the
automatically generated request profile. If you are using custom fields,
manually add them as elements on the Update action’s request profile.
Add their API field name to the Element Name field and configure and set
the Zuora namespace.

Update supports the following object types:

 Account

 Accounting Code (WSDL versions 59 and 69)

 Contact

 Credit Balance Adjustment

 Invoice

 Invoice Adjustment

 Invoice Item (Removed in WSDL version 69)

 Invoice Item Adjustment

 Invoice Payment (Removed in WSDL version 69)

 Invoice Payment Refund

 Payment

 Payment Method

 Product

 Product Rate Plan

 Product Rate Plan Charge

 Product Rate Plan Charge Tier (Removed in WSDL version 69)

 Rate Plan Charge (WSDL version 69)

 Refund

 Subscription

 Unit Of Measure (WSDL version 69 only)

 Usage

Options tab

Select a connector action, then after filling in the fields, use the Import
Wizard to select the object with which you want to integrate. The following
table describes the default field definitions for each action type:

Object - Displays the object type that you selected in the Import
Operation wizard.

Tracking Direction - Select the document tracking direction for the operation, either Input Documents or Output Documents. This setting enables you to choose which document appears in Process Reporting. Start steps always track output documents regardless of your selection.

note

If the tracking direction is read-only, the feature to change the direction is either unavailable or the developer set the configuration to read-only. The default value you see shows you which document appears in Process Reporting.

Request Profile - Select or add an XML profile component that represents the structure sent by the connector.

Response Profile - Select or add an XML profile component that represents the structure received by the connector.

Return Application Error Responses - This setting controls whether an application error prevents an operation from completing:
 If you clear the setting, the process stops and reports the error on
the Process Reporting page.

 If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.

Import Operation Timeout (Create only, and only when using Import objects) - Sets the maximum time (in minutes) to wait for import file processing. The default is 10 minutes.

Fields to Select (Execute only) - Enter the fields to be selected. Reference fields by their table and field name separated by a period. Separate field names with commas. For example, Invoice.InvoiceNumber,InvoiceItem.Id. For more information, including other fields used by the Execute action, see the linked Zuora Operation’s Outbound Actions topic.

Table to Select From (Execute only) - Enter the table from which to select fields. For example, InvoiceItem.

Where Clause (Execute only) - Enter a WHERE clause to be used when searching. Reference the field by its table and field name separated by a period. For example, Invoice.InvoiceNumber.

You can insert dynamic filter values into a WHERE clause using the request profile. The special syntax to use is: Table.FieldName='${arbitraryFilterName}'. For example, Invoice.InvoiceNumber='${invoiceNumber}'.

Grouping Fields (Execute only) - Data is returned in a flattened structure unless you enter one or more fields to group the results. Grouping fields tell the Zuora connector to group the data to make mapping easier, for example to group invoices by invoice number, row, item, quantity, etc. Reference fields by their table and field name separated by a period. Separate field names with a comma. For example, Invoice.InvoiceNumber.

Order By Field (Execute only) - Enter a field to order by when executing the ZOQL (Zuora Object Query Language) query. Reference the field by its table and field name separated by a period. For example, Invoice.InvoiceNumber.

Sort Order (Execute only) - Select ascending or descending order for the results of the Order By Field.

Export Operation Timeout (Execute only) - Sets the maximum time (in minutes) to wait for export file preparation. The default is 10 minutes.

Extra Query Fields (Query only) - Enter additional fields to query that are not included in the import results, such as custom fields and mutually exclusive fields. Only top-level fields can be used. Use the API field names and separate them with a comma. For more information, see the linked Zuora Operation’s Inbound Actions topic.

Zuora REST connector

The Zuora REST connector enables you to connect to Zuora’s automated operations, such as Z-Billing, Z-Payments, and Z-Tax.

Connector configuration

To configure the connector to communicate with Zuora, set up two components:

 Zuora REST connection

 Zuora REST operation

This design provides reusable components containing connection settings and operation settings (such as Object, Action, data definitions). After
building your connection and operation, set up your connector within a
process. When the Zuora REST connector is configured properly within
your process, Integration can map to and from virtually any system using
the connector to retrieve data from or send data to the Zuora
environment. The linked Zuora API documentation contains valuable
information about common scenarios, such as Customer Account creation,
Payment creation, and Subscription creation.

Prerequisites

 To use the connector and implement a connection to Zuora from Integration, have the following:

o Base URL

o Access Token URL

o API Access Key ID

o API Secret Access Key

o Entity ID or Entity Name for Zuora Multi-entity


 UTF-8 encode input JSON documents to preserve special characters
and ensure the operations work properly.

Supported editions

The connector supports the Zuora v1 API major version.

Supported objects

The following objects are supported by one or more of the Zuora REST
operation actions. The connector does not currently support the
POST_Attachments, POST_MassUpdater, POST_Usage, and
Object_POSTImport operations.

note

You can add any field that you want to an input profile. By doing so, you
can also include Zuora platform custom fields. All fields, including custom
fields, are sent to the related Zuora endpoint.
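
For instance, a Create request body for an Account object could pass standard fields and a tenant-specific custom field side by side. The sketch below is illustrative, not a complete Account payload; Zuora custom field API names end in __c:

{
  "name": "ACME Corporation",
  "currency": "USD",
  "Region__c": "EMEA"
}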

Each object supports a subset of the Create, Delete, Execute, Get, Query, and Update actions.

 AcceptUserAccess

 Account

 AccountingCode

 AccountingPeriod

 AccountSummary

 Actionamend

 Actioncreate

 Actiondelete

 Actionexecute

 Actiongenerate

 Actionquery

 ActionqueryMore

 Actionsubscribe

 Actionupdate

 ActivateAccountingCode

 AllAccountingCodes

 AllAccountingPeriods

 AllOrders

 AllSummaryJournalEntries

 AmendmentsByKey

 AmendmentsBySubscriptionID

 ApplyCreditMemo

 ApplyPayment

 Attachments, AttachmentsList

 BasicSummaryJournalEntry

 BatchUpdateDebitMemos

 BatchUpdateInvoices

 BillingDocuments

 BillingPreview

 BillingPreviewRun

 BreakdownCreditMemoByOrder

 BreakdownInvoiceByOrder

 CalloutHistory

 CancelAuthorization

 CancelCreditMemo

 CancelDebitMemo

 CancelPayment

 CancelRefund

 CancelSubscription

 Catalog

 CloseAccountingPeriod

 CM_TaxationItems

 Connections

 createToken

 CreateAndRetrieveExport

Tracked properties

This connector has no predefined tracked properties.

Connection tab

The Zuora REST connector uses your Client ID and Client Secret to
authenticate you.
Authentication Type - OAuth 2.0 is the default and the only type
available.

Base URL - Enter the REST endpoint to connect to, for example https://rest.zuora.com for U.S. production.

Zuora REST operation

The Zuora REST operation defines how to interact with your Zuora account
and represents a specific action (Get, Query, etc.) performed against a
specific Zuora object type (Accounts, Invoice Payments, Product Features,
etc.).

Create a separate operation component for each object required for your
integration. The set of operations changes for each operation type. The
operation supports the following actions:

 Get — takes the ID of the element as input and returns a JSON document with the element that matches the Response Profile. Some operations return binary documents.

 Query — returns JSON documents that match the response profile. When the query retrieves information, it uses the default Zuora page size of 10 and continues fetching until all records are returned.

 Create — creates objects in Zuora.

 Delete — deletes objects in Zuora.

 Execute — updates objects in Zuora.

Important

UTF-8 encode input JSON documents to preserve special characters and ensure the operations work properly.

CreateAndRetrieveExport object

The CreateAndRetrieveExport object, available when browsing the Execute operation, is a convenient way to simplify and streamline your process so
you don't have to create two separate operations (one for the creation of
the data source export, and another to access the exported data).
Typically, the entire creation and export cycle is performed together in a
single process, so the two actions are combined in the Execute operation.

When using this object, the request profile contains a header parameter,
X-Zuora-WSDL-Version. This can be used, for example, when you need to
make a request to a specific WSDL version. To do so, you add a field
named X-Zuora-WSDL-Version in your input document with the specific
WSDL version. Then, the connector adds a header with the same name in
the request executed by the operation to ensure that the API version you
are connecting to and the WSDL version specified in the input document
are the same. In this way, you can take advantage of certain operations
and objects supported in specific versions.
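
For example, an input document that pins the export to a specific WSDL version might look like the following sketch; apart from X-Zuora-WSDL-Version, which is described above, the field names and values are assumptions:

{
  "X-Zuora-WSDL-Version": "91",
  "Query": "select Id, AccountNumber, Name from Account"
}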

Options tab

Select a connector action and use the Import Wizard to select the object
with which you want to integrate. The following table describes the fields
for each action type:

Object - Displays the object type that you selected in the Import
Operation wizard.

Tracking Direction - Select the document tracking direction for the operation, either Input Documents or Output Documents. This setting enables you to choose which document appears in Process Reporting. Start steps always track output documents regardless of your selection.

note

If the tracking direction is read-only, the feature to change the direction is either unavailable or the developer set the configuration to read-only. The default value you see shows you which document appears in Process Reporting.

Request Profile - Select or add an XML profile component that represents the structure sent by the connector.

Response Profile - Select or add an XML profile component that represents the structure received by the connector.

Return Application Error Responses - This setting controls whether an application error prevents an operation from completing:

 If you clear the setting, the process stops and reports the error on
the Process Reporting page.

 If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.

Zuora REST changelog

Changelog

2024-11 (Version 24.11.0, Current)

 A vulnerability of type Denial of Service was resolved to make the connector more secure.

2024-07 (Version 24.07.0)

 Necessary runtime packages are now packaged with the connector.
