Connectors
Objects in the Agiloft KB are referred to as tables. Each table contains one
or more records, and each record contains one or more fields. Data is stored in
fields. The connector may have limited access to table objects in Agiloft
depending on the permissions settings for the Agiloft account used with
the connector.
Prerequisites
The following prerequisites are needed to implement a connection
with an Agiloft KB from Integration:
The Knowledge Base name, domain URL, and login credentials to access the
Agiloft KB.
The login credentials must have permission to use the Agiloft API
and the correct access permissions in Agiloft to perform all
operations the process implements.
Aible connector
Prerequisites
To implement a connection to your Aible project from Boomi Integration:
Have your login and password to access Aible
Uploading data:
o Know which tenant to use (if you have access to more than one)
o Know which cloud account to store the data in (if more than one is configured)
o Know which folder to store the file in
Scoring data:
o Know your Aible project ID and tenant
Operation
Request Profile
The request profile is an array of JSON objects with each object
representing a single input record to be scored. Aible will score each
record by appending a predicted value of the outcome modeled in the
Aible project.
Response Profile
The response profile represents the output returned by Aible after
processing. Like the request profile, the response is also an array of JSON
objects with each object representing a single scored record. Each record
includes all of the fields sent in the Request Profile plus the following
output values:
Field - Format - Description
predict - Number - 0 or 1 representing the Aible prediction.
predict_score - Number - A decimal between 0.0 and 1.0 representing the prediction score.
driver [1-5] - Text - Names of the top 5 fields driving the variation in the prediction result.
impact [1-5] - Number - Decimal value with the impact variation of the respective drivers.
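To make the profiles concrete, here is a minimal sketch of one input record and its scored counterpart. All field names other than the Aible outputs are hypothetical, and the exact spelling of the driver and impact output fields may differ in your generated profile:

    # Hypothetical input record from the request profile (an array of such objects).
    request_record = {
        "customer_id": "C-1001",      # hypothetical business fields
        "tenure_months": 14,
        "monthly_spend": 89.50,
    }

    # The matching record in the response profile: every input field is echoed
    # back, with the Aible outputs appended.
    response_record = {
        "customer_id": "C-1001",
        "tenure_months": 14,
        "monthly_spend": 89.50,
        "predict": 1,                 # 0 or 1, the Aible prediction
        "predict_score": 0.87,        # decimal between 0.0 and 1.0
        "driver1": "tenure_months",   # a top field driving the prediction
        "impact1": 0.42,              # impact of the respective driver
    }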
Alleantia IoT — Partner connector
Benefits
Fast, bidirectional integration of Industrial IoT device information
with the Boomi platform.
Leverages thousands of ready-to-use machine drivers for fast
connection.
Wide flexibility in installation environments for Alleantia ISC –
physical or virtual architectures, centralized and distributed, on
hardware and OS of many kinds.
Enterprise-grade IIoT solution with thousands of deployments and
supporting management platform.
Prerequisites
Alleantia IoT — Partner connector requires the following:
Alleantia ISC up and running on the target installation, with a licensed
REST API module.
Alleantia ISC Rest API plugin active on target installation.
The locally deployed Atom must be able to reach Alleantia ISC through
the customer network, or it may be installed on the same system where
Alleantia ISC runs.
Actions
The Alleantia IoT — Partner connector operation only supports actions
that send and receive data to and from the Alleantia ISC software.
Certain operations require you to specify fields in the request that are
necessary to complete the action. The Alleantia IoT — Partner connector
supports the following actions:
Alleantia_System_Information — Use this action to return Alleantia
system information.
All_Devices_Information — Use this action to return a data list to
retrieve information on all devices configured in the ISC Software.
Single_Device_Information — Use this action to return the
configuration information for a device configured in the system.
All_Custom_Variables_Information — Use this action to return a list
with the information on the variables configuration of the plant
configured in the ISC Software.
All_Variables_Information — Use this action to return a list with the
information on variables configuration of devices configured in the
ISC Software.
Single_Variable_Information — Use this action to return the
information on a variable configuration for a device configured on
the ISC Software.
All_Alarms_Information — Use this action to return the information
on all alarms configured on the ISC Software.
Single_Alarm_Information — Use this action to return the information
on an alarm configured on the ISC Software.
All_Events_Information — Use this action to return information
relative to events that can be generated by the system.
Single_Event_Information — Return information relative to the single
event that can be generated by the system.
Get_Full_Telemetry — Use this action to return data lists to retrieve
information on all devices configured in the ISC Software.
Get_Single_Device_Telemetry — Use this action to return a list of
actual values for the variables of a device configured in the ISC
Software.
Get_Single_Variable_Telemetry — Use this action to return the actual
variable value for a device configured in the ISC Software.
Get_Single_Custom_Variable_Telemetry — Use this action to return
the actual variable value for the plant configured in the ISC
Software.
Get_Custom_Variable_Historical_Telemetry_Data — Use this action to
return the historical values of a plant variable configured in the
system for a specified time interval.
Get_Variable_Historical_Telemetry_Data — Use this action to return
the historical values of a variable of a device configured in the ISC
Software for a specified time interval.
Get_Full_Active_Alarms — Use this action to return a list of active
alarms in the ISC Software.
Get_Alarms_History — Use this action to return the historical list of
the alarms in the ISC Software sorted by ascending time.
Get_Full_Active_Events — Use this action to return a list of active
events in the ISC Software.
Get_Events_History — Use this action to return the historical events
in the ISC Software for a specified time interval.
Get_Single_Event_History — Use this action to return a single
historical event in the ISC Software for a specified time interval.
Set_Variable_Value — Use this action to set the value of a writable
variable for a device configured in the ISC Software.
Amazon Redshift connector
Note:
This is an open source connector and you can access the code in
the Amazon Redshift connector repository.
The connector leverages Amazon Simple Storage Service (S3) during
the Insert and Upsert operations for temporary storage and retrieval of
large volumes of data objects. The connector also offers the ability to
use Amazon S3 data encryption options to protect the sensitive
data stored on disks in the Amazon S3 data centers.
Prerequisites
To use the Amazon Redshift connector, you need the following:
Amazon Web Services (AWS) Redshift cluster user account details,
such as username and password.
AWS access key which includes access key ID and secret key details
for secured Amazon S3 connection. You can get the keys from the
AWS Management Console (My Security Credentials page). If you
don’t have an access key created already, create these keys in the
AWS Management Console. For more information,
see Understanding and getting your AWS credentials.
Appropriate access rights to the Amazon S3 resource. You can verify
your rights in Amazon S3 by selecting a bucket, folder, or file and
clicking Properties > Permissions or by performing actions like
creating or deleting a bucket, adding and uploading files, etc.
Redshift JDBC driver version 1.2.10.1009 deployed to the
appropriate Atom environment as a Custom Library Component.
Amazon S3 connector
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
Bucket - The name of the bucket owned by the AWS account in which you
store objects.
File Key - The key name that identifies the object in the bucket.
Content-Type - Indicates the format of the content, such as string
(integer) or enumeration (value).
Amazon S3 operations
Amazon S3 supports the following actions:
Get — (inbound) retrieves a single file or a list of files. Use a single
parameter, ID (the Amazon key for the file in the Amazon S3
system). You cannot get files encrypted with client-side
encryption, nor can you use custom keys to encrypt files.
Simulate Amazon's folder structure by concatenating the folder path with
the final file name, with each folder ending in a forward slash "/" (for
example, folder1/subfolder/file.txt). Use these objects:
o Get Object retrieves a single file from the bucket.
o Get Object (Binary) retrieves a single file that is 50 MB or
greater.
o Get Listing retrieves a list of files and folders in the specified
bucket, but not the contents of a file. The listing includes key,
folderName, filename, isDirectory, lastModified, size, and
bucketName.
Delete — (outbound) deletes data based on the ID for the Amazon
S3 object key.
Create — (outbound) uploads the current document into the
Amazon S3 Cloud as object data.
Retrying requests
To help ensure successful operation requests, the connector retries
requests based on the response status code from the server:
Server error (5xx) — Indicates that you made a request, but the
server cannot complete it. Requests are retried in a
phased approach (intervals of 0, 10, 30, 60, 120, and 300 seconds)
with a maximum of six retries.
Redirection (3xx) — Indicates that you made a request, but further
action is needed to process it. Requests are retried only once.
Client error (4xx) — Indicates there is an issue with the request, and
not with the server. The request is not retried.
Success (2xx) — Indicates that a request is successfully
received, understood, and accepted. If there are no errors in the
XML provided in the request, it is considered successful and is not
retried.
Communication error — Indicates that there is an error
communicating with the remote server. Requests are retried in the
phased approach.
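The retry policy above can be summarized with a short Python sketch (this is an illustration of the documented behavior, not the connector's actual implementation; the URL and payload are placeholders):

    import time
    import requests

    RETRY_INTERVALS = [0, 10, 30, 60, 120, 300]  # phased approach, max six attempts

    def send_with_retries(url, payload):
        response = None
        for attempt, delay in enumerate(RETRY_INTERVALS):
            time.sleep(delay)
            try:
                response = requests.post(url, json=payload)
            except requests.ConnectionError:
                continue                      # communication error: phased retry
            status = response.status_code
            if 200 <= status < 300:
                return response               # success: no retry
            if 300 <= status < 400 and attempt >= 1:
                return response               # 3xx: retried only once
            if 400 <= status < 500:
                return response               # 4xx: client error, no retries
            # 5xx (or first 3xx): retry on the next phased interval
        return response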
Amazon Selling Partner connector
The Amazon Selling Partner connector allows you to use several of the
Selling Partner API (SP-API) APIs to programmatically access data on
orders, shipments, payments, and much more within a process.
Supported APIs
Currently, the Amazon Selling Partner connector supports the following
APIs:
Catalog Items 2020-12-01
Catalog Items 2022-04-01
FBA Inbound Eligibility
Feeds 2021-06-30
Finances
Fulfillment Inbound
Fulfillment Outbound 2020-07-01
Listings Items 2021-08-01
Merchant Fulfillment
Orders
Reports 2021-06-30
Sales
Sellers
Shipment Invoicing
Shipping
Prerequisites
To utilize the SP-API in Integration, you need the following:
An Amazon Professional Selling Account. You need to be registered
as a developer.
An Atom deployed in either a hosted environment or on a local
machine.
Test Connection
You can test your connection settings before you use or save the
connection in a process. The Test Connection ensures that your specified
settings are correct, valid, and accessible.
The test connection is successful provided valid credentials are entered
and the private application has permission to get feeds. This is validated
with two requests:
A request to the Selling Partner API authentication server, to
exchange the connection data for an access token. If this request is
successful, the authentication data provided is valid.
A GET Feeds request to the Selling Partner API endpoint selected in
the connection.
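Conceptually, the two validation requests resemble the following Python sketch. The Login with Amazon token endpoint is standard; the regional SP-API host, the feed type, and all credential values are placeholders, and request signing details are omitted:

    import requests

    # Request 1: exchange the connection data for an access token (Login with Amazon).
    token_response = requests.post(
        "https://api.amazon.com/auth/o2/token",
        data={
            "grant_type": "refresh_token",
            "refresh_token": "<refresh-token>",
            "client_id": "<client-id>",
            "client_secret": "<client-secret>",
        },
    )
    access_token = token_response.json()["access_token"]

    # Request 2: a GET Feeds call against the SP-API endpoint selected in the connection.
    feeds_response = requests.get(
        "https://sellingpartnerapi-na.amazon.com/feeds/2021-06-30/feeds",
        headers={"x-amz-access-token": access_token},
        params={"feedTypes": "<feed-type>"},   # placeholder feed type filter
    )
    print(feeds_response.status_code)          # 2xx indicates a successful test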
Amazon SNS connector
The connector supports Amazon SNS Mobile Push Notifications using the
EXECUTE operation, and by setting a dynamic document property to the
SNS-assigned endpoint ARN. Additionally, the connector supports Amazon
SNS First-In-First-Out (FIFO) topics in the CREATE operation by setting
the Message Deduplication Id and Message Group Id dynamic
document properties.
Prerequisites
To implement a connection to your Amazon SNS account from Boomi
Integration, ensure you deploy an Atom on your local machine or hosted
with Boomi, and have the following ready:
Your AWS access key and AWS secret key.
The name of and password to your HTTP proxy server if you plan to
use that for authentication.
The project name, project ID, project number, and registration ID for
each device.
Anaplan connector
The Anaplan connector allows you to upload and insert new Anaplan
business modeling and financial planning data, and import and export
data to On Demand and SaaS-based applications. You can upload a file as
a single chunk, or in several chunks.
Anaplan allows you to perform data-intensive planning and modeling
across a variety of industries for more effective top-down and bottom-up
planning.
Prerequisites
To implement a connection to your Anaplan account from Boomi
Integration and use the connector operations, do the following:
Have a local Atom or an Atom Cloud.
Have the URL for the Anaplan API service, and the Anaplan
username, password, and workspace ID.
Have knowledge of the hierarchical and data dependency structure
of the Anaplan model. For example, the Opportunity list might
assume the pre-existence of the Sales Representative list.
Have a single document in CSV format that uses comma or tab
delimiters. If you do not have a CSV file and need to create one, use
the Anaplan Export Tabular Multiple Column layout. For more
information, see the readme.zip (Anaplan Connector: How to
Publish) provided to Anaplan customers.
Complete the steps in Anaplan so you can use the connector
operations.
Tracked properties
The Anaplan connector has the following tracked properties that you can
set or reference in various step parameters.
Model Id - The ID of the model used in the operation. This ID is available
when running the Import operation.
Import Id - The ID of the import used in the operation. This ID is available
when running the Import operation.
Export Id - The ID of the export that is used in the operation. This ID is
available when running the Export operation.
Process Id - The ID of the process used in the operation. This ID is
available when running the Process operation.
Task Id - The ID of the task created when running the operation. This ID is
available after running the Import and Process operations.
Delete Id - The ID of the delete configuration being executed when
running the operation. This ID is available after running the Delete
operation.
The Anaplan operations use JSON format and support the following
actions:
Import — bring data into Anaplan.
Export — export data into a file.
Upload — upload a file to Anaplan (as a single chunk or as several
chunks) so you can later import the file and bring it into Anaplan.
Download — download a file that was previously uploaded to
Anaplan, or download a file after performing an export.
Process — process several actions to run in a particular order. A
process can contain any combination of import, export, and delete
actions.
Retrieve Dump File — retrieves errors that occur during either an Import action
or a Process action containing an import.
Delete — remove and delete obsolete data.
List — list models, imports, exports, and files.
Many of these actions require you to first complete steps in Anaplan. For
more information, see the action sections.
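As an illustration of the Upload and Import actions working together, here is a rough Python sketch against Anaplan's v2 REST API. The IDs, token, and chunk size are placeholders, and the connector performs the equivalent calls for you:

    import requests

    BASE = "https://api.anaplan.com/2/0"
    HEADERS = {"Authorization": "AnaplanAuthToken <token>"}   # placeholder auth
    ws, model, file_id, imp = "<workspace>", "<model>", "<file>", "<import>"

    # Upload a CSV file in several chunks.
    with open("data.csv", "rb") as f:
        data = f.read()
    chunk_size = 10 * 1024 * 1024
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    file_url = f"{BASE}/workspaces/{ws}/models/{model}/files/{file_id}"
    requests.post(file_url, headers=HEADERS, json={"chunkCount": len(chunks)})
    for n, chunk in enumerate(chunks):
        requests.put(f"{file_url}/chunks/{n}", headers=HEADERS, data=chunk)

    # Trigger the Import action that consumes the uploaded file.
    task = requests.post(
        f"{BASE}/workspaces/{ws}/models/{model}/imports/{imp}/tasks",
        headers=HEADERS,
        json={"localeName": "en_US"},
    )
    print(task.json())   # the response carries the Task Id tracked property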
Aprimo connector
The Aprimo connector enables you to move data into and out of Aprimo
Marketing Studio On Demand.
Prerequisites
The Aprimo connector requires the following:
An Aprimo Marketing Studio On Demand account with a user name
and password.
Private and public keys for the Aprimo site’s base URL.
Connection tab
The following fields appear on the connection tab:
Aprimo Site’s Base URL - Aprimo’s Base URL from which to infer metadata and
gateway URLs.
Public Key - The user’s public key.
Private Key - The user’s private key for logging on which is used instead of a
password.
Aria – Partner connector
You can also use this connector to integrate the Aria platform with other On
Demand or SaaS-based applications, such as Intacct, Salesforce, NetSuite,
and so on. The Aria – Partner connector connects directly to the Aria
application and browses the interfaces in real-time.
Connection tab
URL - Used to select the base URL for the Aria web service. The following
sandbox and production environments are available: Sandbox Future,
Sandbox Current and Production. For example, to select the connection
URL for the production environment, select Production.
Client No - The client number for logging into the Aria web service.
Authentication Key - The valid authentication key for logging into the
Aria services.
API Version - Used to select an Aria API version. All Aria web service API
versions are listed.
Execute
The following services and objects are available when you use the
outbound Execute action:
Other-Generate_Random_String
Other-Validate_Payment_Information
TransactionInvoice-Pre_Calculate_Invoice
Other-Issue_Refund_To_Account
Account-Send_Arc_Threshold_Email
These actions directly run the objects. The Aria database retrieves the
data, parses the data into an XML structure in the response profile, and
returns a document for each logical record.
Autotask connector
Prerequisites
To implement a connection to your Autotask account from Boomi Integration, you need:
An Atom installed in either a hosted environment or on a local
machine.
An Autotask account.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
Web Services Site - The URL to the Autotask web services. New
connections, by default, set the URL
to: https://webservices.autotask.net/atservices/1.6
CAUTION:
On December 31, 2021, Autotask deactivated version 1.5 of their API, and
that version is no longer supported. For backward compatibility, the connector
continues to support the legacy
URL https://webservices.autotask.net/atservices/1.5. However, Boomi encourages
you to update your existing connections and use version 1.6. After
updating the URL to the 1.6 version, save the connection to use the new
version in your process.
User - Your Autotask user name.
Password - The password for the user.
Autotask operation
The Autotask operation defines how to interact with your Autotask
account, and represents a specific action (Query, Create, and so on) to be
performed against a specific Autotask object (such as Contract, Account,
and Invoice).
Boomi for SAP connector
The Boomi for SAP connector allows you to connect to Boomi for SAP core
functionality via OpenAPIs. This connector retrieves the OpenAPI definition
and generates an easy to use interface for you to seamlessly configure
the operations available as part of Boomi for SAP.
Features
In this release, Boomi for SAP supports the following features:
Table Services — Query SAP Tables, Views, and Joined Views/Tables
BW Extractors — Get and Delete SAP BW Data
Function Module Services — Create, Read, Update, and Delete SAP
data objects, trigger jobs, and much more.
IDocs — Send IDocs to SAP
Listen — Listen to and receive events based on SAP event
subscriptions
The following features can be utilized with the Boomi Web Server:
Events — Receive Change Pointers, Business, Initialization, and
Custom Events in Boomi
Table Service Batch Extracts — Receive batched, multi-threaded
extracts of Table Services in Boomi
IDoc — Receive IDocs from SAP
Prerequisites
To use the connector and implement a connection, have the following:
SAP UserID and Password
SAP Application Server hostname, including the port of the SAP instance
that has Boomi for SAP installed
Boomi for SAP Core installed and configured on the SAP instance
Tracked properties
This connector has no predefined tracked properties.
The Boomi Flow Client operation defines how to interact with Flow.
The operation sends a request to run an action exposed in the
Flow REST API. The Flow tenant in which the action runs is specified in the
Boomi Flow Client connection.
Procedure
1. Create the Boomi Flow Client operation component.
2. In the operation, click Import.
The Flow Client Import Wizard opens.
3. Do one of the following:
o To browse for available object types in an Atom Cloud,
select Browse in Cloud.
o To browse for available object types in an Atom,
select Browse in Atom.
4. If you selected Browse in Atom, in the Atom list, select the Atom
in which to browse for available object types.
5. Select or create the Flow Client Connection for the flow user.
6. Click Next.
The wizard advances to the next screen.
7. In the Object Type list, select Run - Initialize and Invoke Flow.
8. Click Next.
The wizard advances to the next screen.
In that screen, Request Profile shows the name of the generated request
profile. The profile name is Flow Client Run - Initialize and Invoke Flow
EXECUTE Request, unless a profile with that name already exists in the
account. In that case, a unique number is appended to the profile name —
for example, Flow Client Run - Initialize and Invoke Flow EXECUTE Request
2.
9. Click Finish.
The wizard closes and creates the profile. In the
operation, Object and Request Profile update to reflect the object type
you selected in the wizard and the generated profile. You might want at
some point to change the name of the profile to one that is more
descriptive.
10. Optional: To enable processing to continue if the operation
returns an error, turn on Return Application Error Responses.
11. Save the operation.
Cross-source synchronization
Use Boomi DataHub to synchronize a domain’s master data across its source
systems. Synchronization relies on the deployment of the following types of
integration processes:
Contributing source to Hub — incremental synchronization processes
that batch source record updates and route them to the repository
on which a domain is hosted.
Hub to source — synchronization processes that route source record
update requests, which originate in batches from the repository, to
source systems.
The recommended execution sequence is as follows in cases where there
exists a domain-wide source of record — that is, a source whose data
across all fields is more trusted than data from all other sources:
1. Execute the contributing source-to-Hub processes.
o Execute first the process that routes updates from the least
trusted source.
o Execute last the process that routes updates from the source
of record.
2. Execute the Hub-to-source processes.
o Execute first the process that routes updates to the least
trusted source.
o Execute last the process that routes updates to the source of
record.
For example, consider a domain, Contacts, whose attached sources (all
contributing data) are Google Contacts, NetSuite and Salesforce, where
NetSuite is the source of record and Google Contacts is the least trusted
source. The recommended order of execution is as follows:
1. Source to Boomi DataHub
a. Google Contacts
b. Salesforce
c. NetSuite
2. Boomi DataHub to source
a. Google Contacts
b. Salesforce
c. NetSuite
These processes are typically executed as sub-processes of a regularly
scheduled master synchronization process.
Consider the following factors when determining the optimal frequency of
execution:
anticipated volume of source record updates
degree to which having timely master and source data is critical
throughput and capacity of the relevant computing and storage
resources
Intervals of 5, 15 and 60 minutes are common.
Getting started
To use the connector in your integration:
Step 1: Set up the connection. Link your Hub repository to the
connector by providing your username, authentication token, and the Hub
cloud that hosts your repository.
Step 2: Select the Action. The connector can perform several inbound
and outbound operations on your Hub repository.
Step 3: Set up the operation. Click the plus icon to import a new
operation and create the XML profile.
Connector operations
The connector’s operations align with the Boomi DataHub Repository
API operations. Compared to the legacy connector, some operation titles
have changed. The connector supports the following operations:
Get Golden Record - retrieves a specified golden record.
Get Quarantine Entry - retrieves a specified quarantine entry.
Update Golden Records - (previously called Upsert in the legacy
version) creates, updates, and/or end-dates golden records. The
operation can quarantine source entities. It also allows you to send
your new or updated records to staging.
Query Golden Records - retrieves active golden records.
Query Quarantine Entries - retrieves quarantine records.
Fetch Channel Updates - (previously called Query in the legacy
version) fetches batches of source record update requests.
Match Entities - Lists match results for a batch of entities from a
contributing source.
Tracked properties
This connector has two tracked properties.
Mdm Current Delivery Id - This property is available with the
Fetch Channel Updates operation when using manual
acknowledgment. Read Fetch Channel Updates for more information
on manual acknowledgement.
Query Total Count - This property is available with the Query
Golden Records and Query Quarantine Entries operations.
Authentication
You can authenticate to the repository using the My Hub Authentication
Token, which provides administrator privileges. The Boomi DataHub
connector does not support JSON Web Token (JWT) authentication.
Connecting to a repository
1. In the Integration build canvas > Connector menu, select Boomi
Data Hub (Tech Preview).
2. Click the plus icon next to Connection.
3. Select the Boomi Hub Cloud Name that hosts the repository you
want to connect to. Boomi maintains regional Hub Clouds worldwide.
To find the name of your cloud:
a. Navigate to Integration > Manage > Atom
Management and select the repository name in the list of
Atom Clouds.
b. Look at the Host Name for the cloud hosting the repository.
If you are using a custom cloud:
c. Navigate to Integration > Manage > Atom
Management and select the repository name from the list of
Atom Clouds.
d. Select Shared Web Server to view the Base URL for API
Request.
e. Enter the URL in the Custom Cloud field in the following
format, appending /mdm to the path: https://<cloud
address>[:port]/mdm/
4. Enter your Hub username as it appears on the Repository page. To
find your username, navigate to Boomi
DataHub > Repositories > select the repository > select
the Configure tab.
5. In the Configure tab, copy the My Hub Authentication Token.
6. In the Connection page of the Hub connector, select Click to
Set and paste it into the Token field.
7. Click Apply.
8. Click Save and Close.
Golden Records
A Golden Record is a single, reliable, and complete version of data
about an entity (like a customer, product, or organization). It's created by
combining, deduplicating, and cleansing data from multiple sources.
Think of it like having one perfect file that contains the most accurate and
up-to-date details about something, even if the information originally
came from many messy or conflicting files.
For example:
If "John Doe" exists in two systems with different phone numbers,
the golden record will combine these details into a single, trusted
version after validating and resolving conflicts.
Quarantine Entries
Quarantine Entries are data records that are flagged for review or
correction because they don’t meet quality standards, have missing
fields, or fail validation checks. These records are kept in a separate
"quarantine" area until someone reviews and fixes them.
For example:
If a record for "Jane Smith" has an email but no phone number and
the phone number is mandatory, this record would be sent to
quarantine until the missing phone number is added.
Get Golden Record operation:
Retrieves a "golden record," which is a unified, accurate, and complete
view of a data entity (like a customer, product, or organization). These
records are typically created by consolidating and deduplicating data
from multiple sources.
Get Quarantine Entry operation:
Fetches entries that have been flagged for review or correction due to
data quality issues or mismatches. Quarantined entries are not part of
the golden records until validated or corrected.
Update Golden Records operation:
Modifies existing golden records with new or corrected information.
This operation ensures that the golden record stays up to date and
accurate.
Query Golden Records operation:
Searches or retrieves golden records based on specific criteria, such as
filtering by attributes or matching conditions.
Query Quarantine Entries operation:
Searches for entries in the quarantine repository that match specific
criteria, often to identify and resolve data quality issues.
Fetch Channel Updates operation:
Retrieves updates or changes made to data from a specific source
channel. This ensures that the golden record is synchronized with the
latest data from various input channels.
Match Entities operation:
Compares data from different records to determine if they represent
the same entity. This is part of the deduplication and consolidation
process that helps create a golden record.
Box connector
Prerequisites
To implement a connection to your Box account from Boomi Integration, have your
Box client ID and client secret provided when you created your
application.
UTF-8 encode input JSON documents to preserve special characters
and ensure the operations work properly.
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
File ID — The ID of the file from Box.
File Name — The name of the file from Box.
Box operation
The Box operations defines how to interact with your Box account and
represents a specific action (Create, Get, Delete, Get Folder Items,
Update, Upsert, and Search) perform against a specific Box file, folder, or
folder item.
Create a separate operation component for each action and object
combination that your integration requires.
The Box operations use JSON format and support the following actions:
Inbound: Get, Get Folder Items, Search
Outbound: Create, Delete, Update, Upsert
Important: UTF-8 encode input JSON documents to preserve special
characters and ensure the operations work properly.
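For example, a minimal Python sketch of preparing a UTF-8 encoded JSON input document (the payload shape is illustrative only):

    import json

    # Illustrative document containing a non-ASCII character.
    document = {"name": "Méline's folder", "parent": {"id": "0"}}

    # Serialize without ASCII-escaping, then UTF-8 encode so special
    # characters such as "é" survive the round trip.
    payload = json.dumps(document, ensure_ascii=False).encode("utf-8")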
Ceres – Partner connector
This capability allows for better planning and procurement, helping you
understand which suppliers are vulnerable, manage inventory more
effectively, improve on-time deliveries, enhance supply resilience, and
meet Service Level Agreements. Not only do we identify risks, but we also
quantify the expected severity of any delays, that is, how long the delay is
likely to last.
Prerequisites
To use the connector, request the URL and Ceres API Token from the Ceres
Team.
CloudBlue Connect - Partner connector
The CloudBlue Connect platform enables users to work in real time
with the transactions generated between distributors, resellers, and
customers.
The CloudBlue Connect - Partner connector enables vendors and
distributors to seamlessly integrate their contract, listing, fulfillment, and
usage processes into their internal or line-of-business systems without
coding.
Benefits
By integrating CloudBlue Connect with your platform, you will drastically
reduce the time for various marketing operations and simultaneously
eliminate possible errors. CloudBlue Connect enables you to
automate your business and collaborations with your
business partners.
Connection tab
The CloudBlue Connect connection uses access keys, which consist of an
endpoint and a secret token key. Create these tokens using the
Integrations section in the CloudBlue Connect portal. When you create the
token, assign the proper permissions so that the integration will work. Once
you’ve created the token, configure the connection in Boomi Integration using the
endpoint and token.
Actions
The CloudBlue Connect Boomi Connector features various actions (also
called resources) that can be used to streamline and automate many
operations on the CloudBlue Connect platform.
Available Actions
The following list introduces actions that are managed by the connector:
List Resource: This action is used to list your required resource file.
Get Resource: Get the required resource file by using this
operation.
Create Resource: Create a required resource file by using this
action.
Update Resource File: This action is used to update resource files.
Accept: This action is used to accept required resources.
Reject: Reject a request by using this operation.
Approve: Approve required requests by using this action.
Inquire: Inquire for necessary data to process subscriptions.
Pending: Switch requests to the pending state by using this action.
Assign: This action assigns an agent to process Connect requests.
Unassign: Used to remove an assigned agent from the request
processing.
Close: This operation is used to close a record chunk within a usage
file.
Regenerate: As the name implies, this action is used to regenerate
required resources.
Bulk Close: This operation is used to close all usage records within
the usage file.
Reprocess: Use this action to process the provided resource once
again.
Submit: This operation is used to submit a usage file to your
business partner.
Operations
Each operation implements a specific task such as reading or creating an
object of CloudBlue Connect. The CloudBlue Connect connector operations
use JSON format and support the following actions:
Inbound: Get, Search
Outbound: Create, Delete, Update, Custom action (Approve,
Pending, Assign, etc.)
Extension Objects
Depending on your selected action, the following objects (also called
resources) of the extension are available:
Asset: This object represents a subscription on the Connect
platform.
Request: Fulfillment request object (also called subscription
request).
Tier Account: A Tier Account represents an account of your
customer or reseller.
Tier Account Version: This represents a version for your tier
account object.
Tier Account Request: A request object for a Tier Account.
Tier Config Request: This is a Tier Configuration request (tier
request) object.
Tier Config: A tier configuration object on the Connect platform.
Subscription Request: Fulfillment request object (also called asset
request).
Subscription Asset: Represents a subscription (asset) object on
the platform.
Product: This represents a product object on the platform.
Usage Report: As the name implies, this represents a usage report
object.
Usage Record: Your provided record is filed within the usage report
file.
Usage Chunk: A chunk of usage records that are filed within the
usage report file.
Usage Reconciliation: Usage reconciliation data is used to
indicate successful usage file processing.
Case: This represents a Helpdesk case object on the CloudBlue
Connect platform.
Coupa connector
The Coupa connector enables you to move data into and out of Coupa and
integrate any on-premise applications and legacy systems with Coupa.
Coupa operation
The Coupa operation defines how to interact with your Coupa account and
represents a specific action, such as Search and Create, to perform against
a specific Coupa record type or object (Suppliers, Items, Accounts, Users,
and so on).
Benefits
The Eyer - Partner connector requires an agent installed per
Atom/Molecule runtime environment (server/VM/Docker) to collect JMX
performance metrics for the Eyer anomaly detection engine. The Eyer -
Partner connector will integrate directly in Boomi processes via the
connector, giving you the power to act proactively on anomaly alerts from
Eyer. Alerts contain anomaly warnings and possible impacts relating to
Atom and Molecule performance, such as throughput, latency, queues, statuses,
and other metrics related to runtime processing.
Connection tab
The Eyer – Partner connection has the following fields that are required for
configuration.
Server: The base URL of Eyer with which you want to connect. Do not
change this field.
Eyer authentication key: By default, this field is set to CUSTOM. The
authentication key is obtained from Eyer.
Custom authentication key: Set your Eyer authentication key in this
field.
Fiix — Partner connector
The Fiix — Partner connector allows you to work with various objects
exposed through the Fiix API, meaning that you can access all
Computerized Maintenance Management System (CMMS) related data and
methods to work with assets, purchase orders, work orders, and more.
You can use the connector to seamlessly integrate Fiix CMMS with
intersecting systems so that you can access maintenance-specific data
(assets, work orders, purchase orders, and more).
Files.com connector
Benefits
Secure File Transfers: Ensure data security and meet compliance
standards (SOC2, HIPAA, GDPR, CCPA).
Data Sovereignty: Select from seven geographic storage regions
or integrate with preferred storage solutions like Amazon S3, Azure,
Google Cloud, and more.
Enhanced Collaboration: Enable partners, customers, and
vendors to connect using their preferred standards.
Faster Onboarding: SaaS-based, always-available platform
eliminates server setup time, speeding up partner connections.
Simplified Management: Offers web-based administration and
RESTful APIs for seamless integration.
Continuous Updates: Files.com handles upgrades and security,
freeing up IT resources.
Connector configuration
Connection: Links to a Files.com account using an API key for
authentication; reusable across multiple steps. The Files.com
connection represents the Files.com user account that will be used
to perform the actions on the Files.com platform. The connection will
be constrained to the permissions of the Files.com user account and
will only be able to access items that the Files.com user account has
been given permission to access. If you have multiple Files.com user
accounts, use a separate connection for each.
Actions: Perform operations like creating/managing files, folders,
share links, users, and groups.
o File Actions: List, upload, download, copy, move, delete files;
integrate files with Boomi processes.
o Folder Actions: List, create, move, copy, delete folders;
navigate folder structures.
o Share Links: Create/manage secure download links with
notifications and tracking.
o User/Group Management: Create, manage, and delete
users/groups.
Operations: Create a separate operation component for each
action that you wish to perform.
The Files.com connector provides the following actions for use in
your Boomi processes:
Copy
Create
Delete
Download
List
Move
Show
Update
Upload
Download a file*
List folder items*
Upload a file*
*Download a file, List folder items, and Upload a file are
deprecated but remain for backward compatibility.
Prerequisites
The Files.com connector requires an API Key, which represents the
Files.com user account that will be performing the connector’s actions.
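As a sketch of what the API key represents, the following Python snippet calls the Files.com REST API directly with a key (the folder path and key value are placeholders); the connector performs equivalent authenticated calls on your behalf:

    import requests

    # List the contents of a folder as the user the API key represents.
    response = requests.get(
        "https://app.files.com/api/rest/v1/folders/<path>",   # placeholder path
        headers={"X-FilesAPI-Key": "<your-api-key>"},
    )
    for item in response.json():
        print(item.get("path"), item.get("type"))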
Changelog
Version 141 (2025-01): The Return Application Error Responses behavior is
now respected for operations.
Version 133 (2024-11): Fixed an upload operation issue causing timeouts
before file delivery completion. Fixed a List operation issue where the date
format was returned in Unix Epoch Time instead of UTC format (the
response profile expects UTC).
Version 121 (2024-07): Added a Limit option to the List operation, allowing
result limits. Fixed a defect in the deprecated List folder items operation
where the Per Page option was ignored.
Version 119 (2024-06): Added 9 new actions: Copy (File, Folder); Create
(Folder, Group, ShareLink, ShareLinkNotification, ShareLinkRecipient,
User); Delete (File, Folder, Group, ShareLink, ShareLinkNotification, User);
Download (File); List (Folder, Groups, ShareLinks, ShareLinkDownloads,
ShareLinkNotifications, ShareLinkRecipients, ShareLinkRegistrations,
Users); Move (File, Folder); Show (FileDetails, FolderDetails, GroupDetails,
ShareLinkDetails, ShareLinkNotificationDetails, UserDetails); Update
(Group, ShareLink, ShareLinkNotification, User); Upload (File). Deprecated
3 original operations (still available for backward compatibility). Improved
connectivity, throughput, and auto-retry capabilities.
Google Ad Manager connector
The Google Ad Manager connector enables users to integrate their data
with Google Ad Manager.
Google Ad Manager is an open, public API that enables managing,
delivering, and measuring the performance of ads.
Connector configuration
To configure a connector to communicate with Google Ad Manager, set up
two components:
Google Ad Manager connection
Google Ad Manager operation
This design provides reusable components that contain connection
settings and operation settings. After building your connection and
operation, set up your connector within a process. When you configure the
Google Ad Manager connector correctly within your process, Integration can map to
and from virtually any connected application using the Google Ad Manager
connector to manage, deliver, and measure the performance of ads.
Prerequisites
To use the connector and implement a connection to your Google account
instance from Boomi Integration, verify that you have the following:
A Google Ad Manager Small Business account, or a Google Ad
Manager Premium Solution account.
Prepared for OAuth 2.0 authentication.
Enabled API access to at least one network.
Google Ad Manager operation
The Google Ad Manager operation defines how to interact with your
Google Ad Manager account and represents a specific action, such as
Query and Create, to perform against a specific Google Ad Manager
service and object type.
The operation supports the Get, Query, Create, Update, and Execute actions.
Prerequisites
To use the connector and implement a connection to your Google Cloud
Platform account from Boomi Integration, have the following information available:
The Google Cloud Platform project ID used for billing. Make sure to
enable billing for your project. You can find your project ID by going
to the Google Cloud Platform console and selecting Manage all
projects from the projects list. The names and IDs for all the
projects you are a member of appear.
When using the JWT Bearer Token grant type for client
authentication, have your signature key and identify your issuer and
subject.
note
The scope is automatically set as an extended JWT claim. Do not change
the extended claim, and ensure that the Scope field is empty. You
create the signature key using the Google Cloud Platform console and
import it into Boomi Integration. You find the issuer and subject in the Google Cloud Platform
console (Service account details page). A rough sketch of the claim set involved
appears after these prerequisites.
When using the Authorization Code grant type for client
authentication, have your Google Cloud Platform client ID, the client
secret from when you created your project, and your scope.
note
Copy and paste the scope into the Scope field from the help (click
the Information button). Verify that you separate each entry in the scope
by a single space.
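For reference, a rough Python sketch of the claim set used by the JWT Bearer Token grant. The values are placeholders; Integration assembles and signs the claim set for you, which is why the Scope field must stay empty:

    import time

    now = int(time.time())
    claims = {
        "iss": "<service-account>@<project-id>.iam.gserviceaccount.com",  # issuer
        "sub": "<service-account>@<project-id>.iam.gserviceaccount.com",  # subject
        "aud": "https://oauth2.googleapis.com/token",
        "scope": "<scope>",        # set automatically as an extended JWT claim
        "iat": now,
        "exp": now + 3600,
    }
    # The claim set is signed with the signature key (RS256) and exchanged
    # for an access token at the aud endpoint.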
Procedure
1. Open the Google Data Contact or Contact Group request XML
profile.
2. Right-click the root element, contact or contact-group.
3. Select Add Child Element.
Selecting this field adds a new element to the end of the profile.
4. Select the new element and change the Element Name to your
custom field name plus _c.
For example, for a hypothetical custom field named age, change the element name to age_c.
5. Expand the Namespace Declarations section and click Add
Namespace. Set the URI field to http://www.w3.org/2005/Atom.
6. Save and close the XML profile.
With the Google Sheets connector, you can create, delete, get, query, and
update objects in Google Sheets.
Google Sheets is a web-based application that enables you to create,
update, and modify spreadsheets and share data live online. Independent
from the Google G Suite connector, this connector uses OAuth 2.0 for
authentication and retrieves information in JSON format.
Supported objects
The Google Sheets operations support the following objects:
Spreadsheets
Sheets
RecordData
Prerequisites
The Google Sheets connector requires the following:
A Google Client ID
The Client Secret for the account
Connector configuration
To configure a connector to communicate with Google Cloud Storage, set
up these two components:
Google Storage connection. This connector uses the OAuth 2.0
protocol for authentication and authorization, and you can use
the JWT Bearer Token and Authorization Code grant types for client
authentication.
Google Storage operation
Prerequisites
To use the connector and implement a connection to your Google Cloud
Storage account from Boomi Integration:
Have your Google Cloud Storage client ID and client secret from
when you created your project available.
Activate the Cloud Storage JSON API.
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
Name — The name of the object.
Bucket — The name of the bucket containing the object.
Updated — The modification time of the object metadata.
MD5 Hash — The MD5 hash of the data, encoded using base64.
Link — The media download link.
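For example, to cross-check the MD5 Hash tracked property against a document's bytes, compute the base64-encoded MD5 digest:

    import base64
    import hashlib

    data = b"example object content"   # stand-in for the document data
    md5_b64 = base64.b64encode(hashlib.md5(data).digest()).decode("ascii")
    # Compare md5_b64 with the value of the MD5 Hash tracked property.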
Changelog
2024-01 (Version 24.01.1, Current)
Necessary runtime packages are now packaged with the connector.
2023-11
Overridable fields now include Access Token parameters,
Authorization parameters, and Extended JWT Claims (if supported by
the connector).
Great Plains connector
The Great Plains connector enables you to move data into and out of the
Microsoft Great Plains (now called Microsoft Dynamics GP) business
management application.
Use the connector to Query and Upsert various types of records, including
Accounts, Customers, Sales Orders, Invoices, and more.
The connector uses an integration layer called eConnect to exchange data
with the Great Plains database. eConnect is a Microsoft API that facilitates
data exchange by performing business validation and ensuring data
integrity. It is implemented as a set of stored procedures
installed on the Great Plains database server. Ensure you install eConnect
before you attempt to connect to Great Plains.
Modules:
o Bank Reconciliation (BR)
o Company (CMP)
o Field Service (FS)
o Fixed Assets (FA)
o General Ledger (GL)
o Inventory (IV)
o Lockbox (LB)
o Project Accounting (PA)
o Purchase Order Processing (POP)
o Purchasing (PM)
o Receivables (RM)
o Sales Order Processing (SOP)
o Shared Master (SM)
o US Payroll (UPR)
Prerequisites
To implement a connection to Great Plains from Integration:
1. Install Great Plains and the database.
2. Set up a local Atom within your network to connect to the Great
Plains SQL Server and eConnect. You must install the Atom on
Windows. This connector does not run on Linux. Install the Atom on
the same server as eConnect and the Great Plains SQL Server.
3. Configure the connector with a Windows domain user that has
access to the Great Plains SQL Server database machine. A SQL
Server login must be created for this domain user and the database
user must be granted the DYNGRP role within
the DYNAMICS database.
4. Install the eConnect libraries.
a. The option to install eConnect is under the Additional
Products section of the Great Plains installer.
b. See the eConnect Documentation (see the linked topic).
c. You do not need to install the full eConnect Runtime module.
d. Install IKVM.NET to connect to the .NET libraries.
The eConnect interfaces do not support every object and interaction
available within Great Plains. If your integration scenario requires working
with objects from other modules or more sophisticated queries, you must
use the Database connector to connect to back-end database tables
directly. Consult your eConnect documentation for more information about
the table schemas. Use caution when accessing application database
tables directly to maintain data integrity.
If you encounter connectivity issues, check the container logs for
information about the error messages.
Tracked properties
This connector has no predefined tracked properties.
Hadoop connector
The Hadoop connector gets files from or sends files to data directories on
the Hadoop Distributed File System (HDFS) server(s) to which the Atom
has access.
HDFS is the primary distributed storage system used by Hadoop
applications. The Hadoop connector:
Is built on top of the Apache Hadoop version 2.2.0 API library.
Works with remote Hadoop cluster resources, version 2.2.0 and
higher.
Works with Cloudera CDH, combining Apache Hadoop with other
open-source projects.
Interacts with remote Hadoop clusters using Hadoop API libraries.
For information about configuring the native IO libraries, see the
linked topics.
Does not open a connection to a remote Hadoop cluster name
node. In addition, the connector does not listen for or accept connections
from Hadoop cluster nodes.
Hadoop MapReduce is a technique for working on large sets of data by
spreading multiple copies of the data across different machines that work
in parallel on small pieces. A Hadoop JobTracker keeps track of job runs,
schedules individual maps, monitors individual tasks, and works to
complete the entire batch. Typically, the MapReduce framework and the
HDFS run on the same set of nodes. To assist with your Java development
for an Atom, Boomi provides an adapted sample of the MapReduce application,
which contains a readme file, dependency analysis, sample shell scripts,
and run details.
Prerequisites
To implement a connection to your Hadoop Distributed File System
(HDFS), verify that you have the following:
A local Atom.
A stable and reliable connection to the Hadoop primary and cluster
nodes.
A valid user ID that can read from and write to the HDFS.
A native IO library installed. For more information, see the installation
topics linked below.
Limitation
The connector does not support Kerberos authentication to get files from
or send files to data directories.
Hadoop operation
The Hadoop operation defines how to write files to and read files from a
Hadoop Distributed File System (HDFS).
The Hadoop operation supports the following actions:
Create — Creates files.
Delete — Deletes files.
Execute — Used for all other procedural API calls. This
operation works only on local Atoms, not on an Atom Cloud.
Get — Reads files.
Changelog
2024-11 (Version 24.11.0, Current)
A vulnerability of type Deserialization of Untrusted Data was
resolved to make the connector more secure.
2024-07 (Version 24.07.0)
Necessary runtime packages are now packaged with the connector.
You can dynamically override and update the Hadoop File
Name and HDFS directory values set in the operation by using dynamic
operation properties.
HubSpot CRM connector
Prerequisites
To use the connector and implement a connection to HubSpot CRM from
Integration, have the following:
Sign up: Sign up for a HubSpot CRM account.
Log in: Log in to your HubSpot CRM account using a username and
password with sufficient permissions
to Create, Update, Search, Retrieve, and Archive various types
of records.
Create the HubSpot CRM connector configuration.
o Update the Redirect URL in HubSpot CRM if using OAuth 2.0, for
example: https://platform.boomi.com/account/<your_account_ID>/oauth2/callback.
Supported objects
The following objects are supported by one or more of the HubSpot CRM
operation actions.
Object - Create - Update - Search - Retrieve - Archive
Contacts - Y - Y - Y - Y - Y
Company - Y - Y - Y - Y - Y
The HubSpot CRM connector uses a Private App Access token or OAuth 2.0
to authenticate.
Create Operation
The Create action is an outbound operation that uses JSON input to create
new HubSpot CRM objects. Upon successful execution, it returns a JSON
response containing details of the newly created object, including its
unique ID.
Key Features:
Supports creating associations between objects, depending on the
object type, using the Boomi connector.
The response for each execution contains the ID of the created
object for a successful API call, or an error message for
API exceptions.
Archive Operation
The Archive action is an outbound operation used to archive the HubSpot
CRM objects to the recycle bin using the object's ID as input.
Key Details:
The ID can be provided in two ways:
o As a connector parameter.
o Through an XML input profile.
The ID field supports multiple entries, enabling the deletion of
multiple objects in a single XML request.
The response is a JSON object indicating the success or failure of the
archive request for each ID provided.
Update Operation
The Update action is an outbound operation that takes a JSON document
as input to modify information for an object in HubSpot CRM. The
input document must include an ID field that specifies the record to be
updated.
Users can clear the properties of any HubSpot CRM object by passing the
property as empty.
A radio button field selects whether to use Email ID or ID. This property is
part of the operation properties and cannot be overridden by Dynamic
Operation Properties. The ID field contains the value of the Email ID or
object ID, depending on the radio button selected, and can be overridden using
Dynamic Operation Properties.
Object-Specific Support:
For Contacts, both ID and Email ID are supported.
For other object types, only ID is supported.
Retrieve Operation
The Retrieve action is an inbound operation used to fetch a single object
from HubSpot CRM based on its ID or Email ID. The response contains the
details of the retrieved HubSpot CRM object.
A radio button field selects whether to use Email ID or ID. This property is
part of the operation properties and cannot be overridden by Dynamic
Operation Properties. The ID field contains the value of the Email ID or
object ID, depending on the radio button selected, and can be overridden using
Dynamic Operation Properties. Retrieve operations return all the properties
as well as the default associations of the object being retrieved.
Object-Specific Support:
For Contacts, both ID and Email ID are supported.
For other object types, only ID is supported.
If no matching record is found in HubSpot CRM, an empty document is
generated as the response, ensuring clear feedback in all scenarios.
Search Operation
The Search action is an outbound operation designed to search for specific
HubSpot CRM objects based on complex criteria, enabling the retrieval of
objects that meet specified parameters. This function supports targeted
retrieval of HubSpot CRM objects — such as specific Contacts, Company,
Deal, Ticket, and Goals — that match defined filtering criteria.
The action uses cursor-based pagination to efficiently manage large result
sets and supports a set of operators for filtering.
note
The maximum number of documents supported in a response by HubSpot
CRM is 10,000.
Pagination is handled via the Maximum Documents field, an operation
property.
o Use this to specify the maximum number of documents to be
fetched.
o One record is returned per response document.
o A value less than 1 gets all the records; by default, it is set to
-1.
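A rough Python sketch of the cursor-based pagination pattern, shown against HubSpot's public CRM search endpoint (the token, object type, and filter values are placeholders; the connector drives this loop for you):

    import requests

    URL = "https://api.hubapi.com/crm/v3/objects/contacts/search"
    HEADERS = {"Authorization": "Bearer <private-app-access-token>"}

    body = {
        "filterGroups": [{"filters": [
            {"propertyName": "lifecyclestage", "operator": "EQ", "value": "customer"}
        ]}],
        "limit": 100,
    }

    results, after = [], None
    while True:
        if after:
            body["after"] = after                   # cursor for the next page
        page = requests.post(URL, headers=HEADERS, json=body).json()
        results.extend(page.get("results", []))
        after = page.get("paging", {}).get("next", {}).get("after")
        if not after:
            break    # no further pages (searches are capped at 10,000 results)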
Hypatos.ai – Partner connector
Benefits
One of the most notable advantages of Hypatos is that its AI models can
be fully customized and trained automatically using your organization's
own transaction and document history. This ensures that the solution is
tailored to meet the specific needs of your teams, providing them with a
powerful tool that can streamline and automate various finance document
processing tasks.
Hypatos' AI models are capable of capturing all relevant data from
transactional documents, assigning general accounts, matching POs,
choosing workflow owners, and automatically transferring the processed
information to your accounting and workflow systems via the Hypatos.ai –
Partner Connector. By reducing the need for human intervention, Hypatos
can help your teams save valuable time and boost productivity,
particularly in high-volume business environments.
Prerequisites
To use the connector, implement a connection to your Hypatos Studio
account from Integration, and help ensure that the operations are
successful, have the following information:
Your Hypatos API Client ID and Client Secret for OAuth 2.0
authorization
The appropriate API access rights and permissions to your Hypatos
Studio projects
Tracked properties
This connector has no predefined tracked properties.
IFS connector
Supported editions
The connector supports the following editions:
IFS Applications 10 (update 8 or later)
IFS Cloud
Prerequisites
To implement a connection to your IFS account from Integration and use
the connector operations, do the following:
Have the URL and port, if required, for the IFS API endpoint, and a
valid username and password.
Have a local Atom or an Atom in an Atom Cloud.
Have knowledge of the hierarchical and data dependency structure
of the IFS API model.
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
IFS FSM REST connector
The IFS FSM REST connector simplifies communicating with an IFS Field
Service Management (FSM) instance on-premise or in the cloud to create,
read, update, or delete data through FSM's REST APIs. Using the
connector, you can integrate FSM with other software applications and
services wherever they reside. The connector allows you to browse the
FSM schema, and guides you in setting up filters, sorts, and other
common activities.
The IFS FSM REST connector connects to the OData REST APIs offered by
FSM over an HTTPS connection. The connector abstracts the HTTP settings
(request headers, errors, and so on) to simplify usage.
Prerequisites
To implement a connection to your IFS FSM account from Integration and
use the connector operations, do the following:
Have a local Atom or an Atom in an Atom Cloud.
Have the URL and port, if required, for the IFS FSM ODATA API
endpoint, and a valid username and password.
Have knowledge of the hierarchical and data dependency structure
of the IFS FSM model.
Supported editions
The connector requires IFS FSM 6, update 7 or greater.
Tracked properties
This connector has no predefined tracked properties.
The IFS FSM REST connection represents and contains all the information
that is needed to connect and log into a single IFS FSM account associated
with your workspace. If you have multiple accounts, you need a separate
connection for each.
Connector configuration
To configure a connector to communicate with Innotas, set up two
components:
Innotas – Partner connection — The connection represents your
Innotas account.
Innotas – Partner operation — The operation represents an action
against a specific record type.
You will create one connection component per Innotas account with one or
more operations, one for each type of interaction required by your
Integration scenario.
Prerequisites
To implement a connection to Innotas from Boomi Integration, you must:
Have an Innotas account.
Obtain an API-only user from your Innotas representative.
Tracked properties
This connector has no predefined tracked properties.
Intacct connector
The Intacct connector enables you to move data into and out of Intacct's
On-Demand Financial Application Suite.
As an Intacct user, you can use this connector to integrate any on-premise
applications and legacy systems with Intacct. You can also use this
connector to integrate Intacct with other On Demand or SaaS-based
applications such as Salesforce.
The Intacct connector is different from other connectors because it does
not connect to the application and browse the interfaces in real-time.
However, Integration provides the most up-to-date interface definitions for
you to browse and choose from to complete your necessary integrations.
If you customized any Intacct modules with additional fields, you might
need to manually add these fields to the standard interface using
Integration.
Integrating with Intacct consists of an XML request and response setup
over an HTTPS connection. This connection's technical details, such as
request headers, error handling, HTTPS connection, and so on, are all
abstracted to simplify the implementation.
Prerequisites
To implement a connection to your Intacct account from Integration,
complete the following:
Confirm that your Intacct account is subscribed to the XML gateway
and has privileges to connect to the Intacct API. Your Intacct
representative can provide a Sender ID and password specific to the
integration API when you obtain a Sage Intacct Web Services
developer license.
Know which Intacct URL to connect to. The default
is https://www.intacct.com/ia/xml/xmlgw.phtml, but confirm this for
your specific Intacct implementation.
Understand from a business process perspective which interface(s)
to integrate. For example, you may want to upload sales orders into
Intacct from a flat file.
Have access to the Intacct Web Services Developer Documentation
(see the linked topic) for interface reference and details of each
individual field within the interface.
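To make the XML request and response exchange concrete, here is a minimal
Python sketch that posts a request envelope to the default gateway URL
above. The envelope layout follows Intacct's documented gateway DTD (a
control block with the Sender ID, an operation block with the company
login); all credentials are placeholders, and the connector builds and
parses this XML for you.

import requests

GATEWAY = "https://www.intacct.com/ia/xml/xmlgw.phtml"

# A 3.0-style request envelope with placeholder credentials.
request_xml = """<?xml version="1.0" encoding="UTF-8"?>
<request>
  <control>
    <senderid>SENDER_ID</senderid>
    <password>SENDER_PASSWORD</password>
    <controlid>req-001</controlid>
    <uniqueid>false</uniqueid>
    <dtdversion>3.0</dtdversion>
  </control>
  <operation>
    <authentication>
      <login>
        <userid>USER_ID</userid>
        <companyid>COMPANY_ID</companyid>
        <password>USER_PASSWORD</password>
      </login>
    </authentication>
    <content>
      <function controlid="f1"><getAPISession/></function>
    </content>
  </operation>
</request>"""

response = requests.post(GATEWAY, data=request_xml,
                         headers={"Content-Type": "x-intacct-xml-request"})
print(response.text)  # XML response containing status and result payload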
Supported versions
The connector supports the Intacct 2.1 and 3.0 request and response
DTDs.
Changes in Intacct 3.0
The Delete action, which is new in 3.0, is handled as an Update in 2.1. The
list of objects supported in version 3.0 of the operation differs from 2.1.
Intacct 3.0 profiles directly reference objects, such as:
Intacct allocation CREATE Request
Intacct deposit CREATE Request
Intacct deposit UPDATE Request
Intacct allocation DELETE Request
Intacct allocation GET Response
Intacct allocation QUERY Response
Tracked properties
This connector has no predefined tracked properties.
Intacct connection
The Intacct connection represents and contains all of the information to
connect and sign in to Intacct's On-Demand Financial Application Suite.
You need a separate connection for each Company ID.
Intacct operation
The Intacct operation defines how to interact with your Intacct company
and represents a specific action performed against a specific Intacct
record type or object (GL Account, Bill, GL Entry, Customer, and so on).
Not all actions are available for every object. Create a separate operation
component for each action and object combination required for your
integration.
The Intacct operation supports the following actions:
Inbound: Get, Query, Read Report
Outbound: Create, Delete, Update
Inverbis Analytics - Partner connector
The Inverbis Analytics — Partner connector enables users to upload their
process datasets from the Boomi platform into the Inverbis Analytics
platform. This integration enables users to explore and analyze their
processes in depth, uncovering detailed insights into how these processes
truly perform. By leveraging this connector, organizations can gain
valuable visibility into process behaviors, identify inefficiencies, and
discover opportunities for automation, driving continuous improvement
and more informed decision-making.
Benefits
The Inverbis solution extracts the digital footprint of a process from
corporate systems to show how it executes in real life.
Visualize the different paths (variants) that have been followed
every time a process has been executed.
Get the data you need to explore the behavior of the different paths,
individually or in an aggregated manner.
Filter by specific situations to get to their root causes.
Compare executions and performance between regions,
departments, sites, teams, etc.
Define process KPIs and monitor them in real time, complementing
your traditional dashboards.
Get insights into future behavior and performance.
Connector configuration
To configure a connector to communicate with Inverbis Analytics platform,
set up these two components:
Inverbis Analytics — Partner connection. This connector uses the
OAuth 2.0 protocol for authentication and authorization.
Inverbis Analytics — Partner operation.
This approach creates reusable components which contain connection
settings and operation settings. After building your connection and
operation, set up your Inverbis Analytics — Partner connector within a
process. When the process is defined properly, AtomSphere can map to
and from virtually any system using the Inverbis Analytics — Partner
connector to retrieve data from or send data to the Inverbis Analytics
platform.
Prerequisites
To use the Inverbis Analytics — Partner connector, you need a user
account on the Inverbis Analytics platform and the account Email and
Password.
Tracked properties
This connector has no predefined tracked properties.
You can use the JIRA REST connector to move data into and out of the JIRA
application and integrate any on-premise applications and legacy
systems.
For example, you can integrate JIRA with other On Demand or SaaS-based
applications, such as Intacct, Salesforce, and NetSuite. The JIRA connector
connects directly to the JIRA application and browses the interfaces in real-
time. Integrating with JIRA consists of RESTful web service calls using JSON
request and response set up over an HTTP or HTTPS connection. It
abstracts the technical details of this connection, such as request headers,
error handling, and HTTPS connection, to simplify the implementation.
Note: This connector replaces the older JIRA connector.
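For a sense of what the connector abstracts away, the following minimal
sketch issues one of the JSON-over-HTTPS calls described above, retrieving
a single issue through JIRA's documented REST API v2. The host,
credentials, and issue key are placeholders.

import requests

base_url = "https://jira.example.com"     # placeholder JIRA host
auth = ("jira_user", "jira_password")      # basic authentication

# GET /rest/api/2/issue/{issueKey} returns the issue as JSON.
resp = requests.get(f"{base_url}/rest/api/2/issue/PROJ-123", auth=auth)
resp.raise_for_status()
issue = resp.json()
print(issue["fields"]["summary"], issue["fields"]["status"]["name"])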
Connector configuration
To configure a connector to communicate with JIRA REST, set up two
components:
JIRA REST connection
JIRA REST operation
This approach creates reusable components that contain connection
settings (such as URL, User, Password), and operation settings (such as
Object, Action, data definitions). After building your connection and
operation, set up your connector within a process. When you define the
JIRA connector within a process, Integration can map to and from virtually
any system using the connector to retrieve data from or send data to the
JIRA application.
Supported editions
This connector supports JIRA 5.0 and later versions.
Tracked properties
This connector has no predefined tracked properties.
JIRA REST connection
The JIRA REST connection represents a single JIRA service, including sign-
in credentials.
Benefits
Kanverse IDP brings together a zero-touch experience for the enterprise
staff and has pioneered its approach to enhance productivity for teams
saddled with manual document processing. Kanverse digitizes document
processing workflows for its customers across multiple industries,
including BFSI, healthcare, retail, e-commerce, logistics, manufacturing,
and high tech.
Kanverse offers a suite of products that combines AI and OCR
technologies with automation and enables multi-channel ingestion,
automated classification, extraction with high accuracy, AI-based
analytics, validation against business rules, and automated filing of
documents.
Kanverse IDP reduces document processing cycle time from days and
hours to seconds, optimizes operations cost by up to 80%, prevents fraud,
and eliminates manual entry to deliver a zero-touch experience with up to
99.5% processing accuracy.
Connector configuration
To configure the connector to communicate with Kanverse IDP, set up two
components:
Kanverse - Partner connection
Kanverse - Partner operation
This design provides reusable components that contain connection and
operation settings. After building your connection and operation, set up
your connector within a process. When the process is defined properly,
Boomi Integration can map to and from virtually any system using the
Kanverse IDP - Partner connector to upload, store, retrieve, and delete
data from Kanverse IDP to manage your business workflows.
Prerequisites
To use the connector, subscribe to the Kanverse IDP environment to
obtain a subscriber ID and authentication details, and ensure that the
permissions are set correctly.
Tracked properties
This connector has no predefined tracked properties.
The Kanverse IDP – Partner connection represents and contains all of the
information that is needed to connect your account to the Boomi
Enterprise Platform. If you have multiple Kanverse accounts, use a
separate connection for each and configure the token accordingly. You can
pair a single connection with different Kanverse IDP operations to perform
a unique action against a Kanverse account.
The Kanverse IDP – Partner operation defines how to interact with your
Kanverse IDP account and represents a specific action (Send, Execute
(Upload)) to perform against a specific object type such as Mark Publish,
Send Error, or Execute (Upload).
Actions
Create a separate action and operation component for each action/object
combination required for your integration. The Kanverse IDP – Partner
operation features two different actions (also called resources) that can be
used to streamline and automate three different operations on the
Kanverse platform.
Available Actions
The following list introduces actions that are managed by the connector:
Execute (Upload): This action is used to upload invoices,
insurance, PO, and other such documents to Kanverse.
Send: This action is used to send an update to the appropriate
status of a document to Kanverse. The status can be either Error or
Publish.
Operations
Each operation implements a specific task, such as creating or updating
an object in Kanverse.
The Send Error and Mark Publish operations of Kanverse IDP – Partner
connector use JSON format, while Execute (Upload) supports unstructured
formats, for example, PDF, images, Microsoft Excel, etc. These operations
support the following actions:
Execute (Upload) action
Options tab
Connector action : The Connector action corresponds to the action you
want the operation to take in Kanverse. For this action, it is set
to Execute (Upload).
Object : The object can be imported using the Import button. For the
Execute (Upload) action, the object can only be set to Execute (Upload).
Request profile : The profile is set to Unstructured since this action
accepts different types of profiles, such as Microsoft Excel, PDF, or images.
Response profile : This is a JSON profile returned by Kanverse.
Tracking direction : Select the document tracking direction for the
operation, either Input Documents or Output Documents. This setting
enables you to choose which Boomi documents appear in Process
Reporting. Start steps always track output documents regardless of your
selection.
Error behavior : Select Error Behavior if you want to be able to act on
failed operations in your process instead of only reporting them on the
Manage page.
Document Type : Set this value to whatever type of document you are
uploading, for example, invoice or PO.
Folder : The folder on Kanverse into which you want to upload the file.
Enable Classification : Enter the value true to enable classification,
otherwise enter false.
File Name : The name of the file you are uploading.
Content Type : This determines what type of data the connector accepts.
It is set to multipart/form-data for this action.
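The following hypothetical Python sketch shows the shape of a
multipart/form-data upload carrying the operation fields above. The
endpoint path, header name, and form field names are illustrative
assumptions only, not the Kanverse API contract; the connector constructs
the real request for you.

import requests

url = "https://kanverse.example.com/api/upload"   # hypothetical endpoint
headers = {"X-Subscriber-Id": "<SUBSCRIBER_ID>"}  # hypothetical auth header

# The document travels as a multipart file part; operation fields ride
# along as form fields (names below are assumptions for illustration).
files = {"file": ("invoice-001.pdf", open("invoice-001.pdf", "rb"),
                  "application/pdf")}
data = {
    "documentType": "INVOICE",       # Document Type
    "folder": "AP-Inbox",            # Folder
    "enableClassification": "true",  # Enable Classification
}
resp = requests.post(url, headers=headers, files=files, data=data)
print(resp.json())  # JSON response profile returned by Kanverse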
Dynamic Operation Properties
Some connectors enable you to define dynamic operation properties,
which are document-level overrides that you define values for, and pass
them into the Connector step to override the operation component's
default value. The Connector step dialog's Dynamic Operation Properties
tab allows you to provide static values or document-level values
(Document Property) for the parameters. You can add, edit, and delete the
parameters.
For the Kanverse IDP - Partner connection you can set dynamic operation
properties for the following properties:
Document Type
Folder
Enable Classification
File Name
For more information on setting dynamic operation properties, see the
following topic: Connector step dialog, Dynamic Operation Properties tab
Send action
Options tab
Connector action : The Connector action corresponds to the action you
want the operation to take in Kanverse. For this action, it is set to Send.
Object : For the Send action, there are two different object types available:
Send Error and Mark Publish. Use Send Error when a document fails
any validation; you can then update it as errored in Kanverse. Use Mark
Publish when a document uploads successfully and passes all
validations; you can then mark it as published in Kanverse.
Request profile : The request profile is a JSON profile that the connector
accepts. The profile varies according to the imported object.
Response profile : This is the profile returned by Kanverse.
Tracking direction : Select the document tracking direction for the
operation, either Input Documents or Output Documents. This setting
enables you to choose which Boomi documents appear in Process
Reporting. Start steps always track output documents regardless of your
selection.
Error behavior : Select Error Behavior if you want to be able to act on
failed operations in your process instead of only reporting them on the
Manage page.
Content Type : This determines what type of data the connector accepts. It
is set to application/json for this action.
LaminarHub - Partner connector
The LaminarHub - Partner operations use JSON format and support the
following actions:
Inbound: Get
Outbound: Insert
Insert
Insert is an outbound action to send a transaction to the LaminarHub. A
JSON profile is created by using the Import Operation feature. If the Insert
operation is successful, the transaction is uploaded to LaminarHub and
the connector returns a JSON document with information about the status
of the insert.
Get
Get is an inbound action to download a transaction from the LaminarHub.
A JSON profile is created by using the Import Operation feature.
Filter Status - The Get operation does not accept any input parameters.
Instead, select the Filter Status setting that matches your use case:
To Be Reprocessed – This selects transactions a user has marked
as needing to be reprocessed.
To Be Resent – This selects transactions a user has marked as
needing to be resent.
Benefits
The Laserfiche integration with Boomi is helpful for:
Viewing a document or document metadata from Laserfiche and
viewing or downloading it in another system, like a CRM, ERP, or any
system with relevant documents stored within Laserfiche.
Saving a document to Laserfiche or updating Laserfiche metadata
from another system. This action can be monitored to start a
Laserfiche business process.
Gathering healthcare data. You can convert a Laserfiche document's
metadata into the HL7 data format before sending it to a healthcare
system requiring that standard.
AP Processing. The Laserfiche solutions for automating invoice
processing can update necessary ERP and financial systems.
Prerequisites
To use the Laserfiche - Partner connector, implement a connection from
Laserfiche to Integration. To ensure the connection is successful, you must
have the following:
For Laserfiche Cloud
A Laserfiche account with security permissions to create Service
Principal Users and access the developer console
A Laserfiche repository (Learn more)
A Laserfiche Service Principal Key
A Laserfiche Base-64 Encoded Access Key
For Self-Hosted Laserfiche Installations
A Laserfiche repository (Learn more)
The base URL for your Laserfiche API, for
example, https://ServerName/LFRepositoryAPI
A Laserfiche repository’s ID, that is, the name of your Laserfiche
repository.
The username of an account that has the appropriate access rights
to the Laserfiche repository to make changes on behalf of Boomi.
The password for the above account.
Supported versions
The Laserfiche - Partner connector supports all the actions included in
Laserfiche Repository API V1 build version
3a7a49ffc2db49aecc10091994a0292be398cd51_.20221214.3.
The connector works with both Laserfiche Cloud and self-hosted
installations.
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
event - Used for the Get Audit Reasons operation. This property
indicates the event that caused the audit
reason: deleteEntry or exportDocument.
nextLink - Used when a collection is returned. This property
contains a URL that allows for the retrieval of the next subset of the
requested collection.
count - Used when a collection is returned. This property contains
the count of a collection of entities or a collection of entity
references.
Connection tab
For Laserfiche Cloud
The Laserfiche Cloud connection uses a Service Principal Key and a Base-
64 Encoded Access Key.
Service Principal Key : Create the Service Principal Key by creating a
Boomi-specific Service Principal user in your Laserfiche Account
Administration. Grant this user the appropriate access rights in Laserfiche
to act on behalf of the Laserfiche connector. Open the Boomi Service
Principal user you created, and click Create Service Principal Keys.
Base-64 Encoded Access Key : Create the Base-64 Encoded Access Key
in the Laserfiche Developer Console. First, create a Boomi app by creating
a service application and selecting the Service Principal user as the
Service Account. Then, in the Boomi app you created, navigate to the
Authentication tab, and click Create key next to Access Keys. Finally,
download the key as a base-64 string and copy the key from there.
For Laserfiche Self-Hosted Installations
Base URL : The base URL for your Laserfiche API, for
example, https://ServerName/LFRepositoryAPI
Repository ID : The name of your Laserfiche repository
Username : The username of an account that has the appropriate access
rights to the Laserfiche repository to make changes on behalf of Boomi
Password : The password for the above account
Actions
Not all actions are available for every object. Create a separate operation
component for each action/object combination that is required for your
integration.
The Laserfiche - Partner connector features various actions that can be
used to automate operations in a Laserfiche repository.
Create
The Laserfiche - Partner connector supports a variety of Create actions for
creating entries.
Copy Entry Async
Create or Copy Entry
Import Document
Get/Read
The Laserfiche - Partner connector supports a variety of Get actions for
finding entries, entry metadata, and trustee information.
Create Search Operation
Create Simple Search Operation
Get Audit Reasons
Get Document Content Type
Get Dynamic Field Values
Get Entry
Get Entry Listing
Get Entry by Path
Get Field Definition by ID
Get Field Definitions
Get Field Values
Get Link Definition by ID
Get Link Definitions
Get Link Values from Entry
Get Operation Status and Progress
Get Repository List
Important
If multiple documents are provided as inputs to the Get Repository List
operation, it will output multiple documents, each containing a copy of the
repository list. As a best practice, ensure no documents are provided as
inputs to the Get Repository List operation.
Get Search Context Hits
Get Search Results
Get Search Status
Get Tag Definition by ID
Get Tag Definitions
Get Tags Assigned to Entry
Get Template Definition by ID
Get Template Definitions
Get Template Field Definitions
Get Template Field Definitions by Template Name
Get Trustee Attribute Key Value Pairs
Get Trustee Attribute Value by Key
Update
The Laserfiche - Partner connector supports Update actions for assigning
metadata to entries, moving entries, and renaming entries.
Assign Entry Links
Assign Field Values
Assign Tags
Move or Rename Entry
Write Template Value to Entry
Delete
The Laserfiche - Partner connector supports Delete actions for deleting
entries and metadata.
Delete Assigned Template
Delete Document
Delete Entry Info
Delete Pages
Export
The Laserfiche - Partner connector supports Export actions for exporting
entries out of your Laserfiche repository.
Export Document
Export Document with Audit Reason
Cancel
The Laserfiche - Partner connector supports Cancel operations.
Cancel Operation
Cancel or Close a Search
Live Optics connector
The Live Optics connector enables you to connect to the Live Optics
application and get the necessary data in JSON format using PAPIs and a
Project ID.
Connector configuration
To configure a connector to communicate with Live Optics, set up two
components:
Live Optics connection
Live Optics operation
This design provides reusable components that contain connection
settings such as username and password. After building your
connection and connector operation, set up your connector within a
process. When the Live Optics connector is defined properly within your
process, Integration can map to and from virtually any system using the
connector to retrieve data.
Prerequisites
The following prerequisites are necessary to implement a connection to
your account from Integration:
Live Optics API URL
To implement a connection to your Live Optics account, make sure
an Atom is either deployed on your local machine or hosted with
Boomi Integration, and have the following information:
o A valid set of credentials with the following three string
components used in Session Login:
Session Login ID: An ASCII string of characters that
uniquely identifies the user.
Session Login Secret: A base-64 encoded string that
the user uses to authenticate themselves.
Shared Secret: A second base-64 encoded string that
the user uses to extract the session string from the
Session Login.
The Project ID: The ID of the project for which
the project details are to be rendered.
Configure Integration with valid user credentials and the necessary
permissions to perform the tasks required by the integration (such
as invoking RFCs and PAPIs, manipulating specific business
objects, etc.).
Tracked properties
This connector has no predefined tracked properties.
Connection tab
URL: The URL of the server to connect to. This can be an http or https
address but must be URL encoded. The URL field for the connector helps
you to connect to the Live Optics portal. The connection supports the GET
operation. Project details can be fetched with the given project ID as input.
Login ID: An ASCII string of characters that uniquely identifies the user.
Login Secret: A base-64 encoded string used to authenticate the user.
Shared Secret: A second base-64 encoded string that is used to extract
the session string from the Session Login method response.
Connection tab
Lumesse Host - The host which is part of the URL. The default
is: https://api.lumessetalentlink.com. See the Lumesse TalentLink – Partner
connector topic for a list of supported WSDLs.
Company Name - The company name to use when connecting to the
Lumesse TalentLink service.
User - The user name to use when connecting to the Lumesse TalentLink
service.
Environment - The Back Office (BO) or Front Office (FO).
Password - The password for the user connecting to the Lumesse
TalentLink service.
HRIS Bundle API Key - The API key for the Human Resources Information
Systems (HRIS) bundle.
User Bundle API Key - The API key for the User bundle.
Magento connector
The Magento connector makes it easy to work with the Magento
eCommerce website software by simplifying the interaction and adding
convenience.
Connector configuration
To configure a connector to communicate with Magento, set up two
components:
Magento connection
Magento operation
This approach creates reusable connection settings components. After
building your connection and connector operation, set up your connector
within a process. When you configure your connector within a process,
Integration can send data sets to the Magento application.
Prerequisites
To implement a connection to your Magento account, deploy an Atom on
your local machine or use one hosted with Boomi, and have the following
information:
Your Magento user name and password.
The store base URL.
Supported editions and versions
The Magento connector uses the Magento 1.x REST API to support
application versions v2.1.1 and higher. The Magento 1.x REST API uses
JSON format and manages profiles in JSON.
You can set up the Magento connector to move data into and out of any
Magento platform (Community Edition, Enterprise Edition, and Magento
Enterprise Cloud Edition) using Magento application versions v2.1.1 and
later from any on-premise application and legacy system including other
On Demand or SaaS-based applications.
Tracked properties
This connector has no tracked properties.
Connection tab
If you use multiple installations, you need a separate connection
component for each.
Token-based authentication
The connector uses token-based authentication: it takes
the Username, Password, and User Type provided in the connection,
sends a request to the Magento server to retrieve a token, and uses the
token to authenticate each operation request in your Magento store.
note
Boomi does not support the Integration token type.
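As a sketch of what this token exchange looks like, the following Python
example assumes Magento's standard token endpoints
(V1/integration/admin/token for Administrator, V1/integration/customer/token
for Customer); the base URL and credentials are placeholders, and the
connector performs this exchange for you.

import requests

base_url = "https://www.yourdomain.com"
payload = {"username": "magento_admin", "password": "magento_password"}

# The endpoint returns the access token as a JSON-encoded string.
token = requests.post(f"{base_url}/rest/V1/integration/admin/token",
                      json=payload).json()

# The token then authenticates each subsequent operation request.
orders = requests.get(f"{base_url}/rest/V1/orders?searchCriteria[pageSize]=10",
                      headers={"Authorization": f"Bearer {token}"})
print(orders.status_code)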
Store Base URL - Enter the base URL assigned to your storefront, for
example, http://www.yourdomain.com or https://www.yourdomain.com.
note
Based on your Magento instances, you may find that you need to
use https for one instance, and http for another.
Username - Enter the administrator or customer user name for
authentication. The user name you provide is used with the password to
request an access token from Magento to establish a connection with your
Magento store.
Password - Enter the Magento password for authentication. The
password is used with the user name to request an access token from
Magento to establish a connection with your Magento store.
User Type - Select either Administrator or Customer. The connector
does not support Guest.
Connector configuration
To configure a connector to communicate with Marketo, set up two
components:
Marketo — Partner connection
Marketo — Partner operation
This design provides reusable components, which contain connection
settings (URL, User ID, Encryption Key) and operation settings (Object,
Action, data definitions). When your connection and operation are built,
set up your connector within a process. After properly defining the
Marketo — Partner connector within your process, Integration can map to
and from virtually any system using the connector to retrieve data from or
send data to Marketo.
Prerequisites
To implement a connection to your Marketo account from Integration, you
must be able to access the Marketo Admin section to configure/review the
SOAP API setup. For more information, see the Marketo — Partner
connection topic.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
You must sign into your Marketo account at https://www.marketo.com, and
access the SOAP API Setup section on the Admin > Integration >
Web Services menu to review your connection settings. Ensure that
your Access Status is Active.
URL - The full URL path for the Marketo SOAP endpoint.
User ID - The static User ID identified in the SOAP API Setup section.
Encryption Key - The configurable encryption key identified in the SOAP
API Setup section.
API Version - The version of the SOAP API used to make calls to the
Marketo service.
See the topic about password encryption security to understand how to
securely store passwords in a Connection component.
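For context, Marketo's SOAP API authenticates each call with a signature
derived from the Encryption Key. The sketch below assumes the documented
scheme, an HMAC-SHA1 of the request timestamp concatenated with the User
ID, keyed with the Encryption Key; the connector computes this for every
call, and the values shown are placeholders.

import hashlib, hmac
from datetime import datetime, timezone

user_id = "demo_user@example.com"   # placeholder User ID
encryption_key = "ENCRYPTION_KEY"   # placeholder Encryption Key

# Signature = HMAC-SHA1(timestamp + user_id) keyed with the encryption key.
timestamp = datetime.now(timezone.utc).isoformat()
signature = hmac.new(encryption_key.encode(),
                     (timestamp + user_id).encode(),
                     hashlib.sha1).hexdigest()

# timestamp and signature are placed in the SOAP AuthenticationHeader.
print(timestamp, signature)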
Test Connection
You can test your connection settings before you use or save the
connection in a process.
Prerequisites
To implement a connection with the Marketo REST connector, so that you
can retrieve and generate the Endpoint and Identity URLs and retrieve the
client ID and secret, you must have Administrator access to the Marketo
Custom Service.
Supported objects
The following objects are supported by one or more of the Marketo REST
operation actions:
Add Custom Activity to Lead
Add Lead to List
Companies
Custom Objects
Lead
Lead Activity
Lead Change
List
Opportunities
Opportunity Roles
Remove Lead from List
Tracked properties
There are no tracked properties for this connector.
Connection tab
If you have multiple accounts, you need a separate connection for each.
You can pair a single connection with different Marketo operations to
perform a unique action against a Marketo account instance.
The Marketo REST connector supports OAuth 2.0 for authentication and
the connection automatically refreshes the authentication token.
Endpoint URL - Enter the Endpoint URL for the Marketo service. For
example, https://064-CCJ-768.mktorest.com/rest. You can find the
Endpoint URL on the Marketo Admin > Web Services panel in the REST API
section.
Identity URL - Enter the Identity URL for the Marketo account instance.
For example, https://064-CCJ-768.mktorest.com/identity. You can find the
Identity URL on the Marketo Admin > Web Services panel in the REST API
section.
Client ID - The client ID obtained in Marketo from a Custom Service to
authenticate with Marketo. You can find the Client ID in Marketo in
the Admin > LaunchPoint menu by selecting the custom service and
clicking View Details.
Client Secret - The client secret obtained in Marketo. Click Click to
Set to open the Password window, and enter or paste the alphanumeric
Client Secret associated with the Client ID. You can find the Client Secret
in Marketo in the Admin > LaunchPoint menu by selecting the custom
service and clicking View Details. When you are done, click Apply.
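The token refresh that the connection automates follows Marketo's
documented OAuth 2.0 client-credentials grant against the Identity URL. A
minimal sketch, with placeholder instance and credentials:

import requests

identity_url = "https://064-CCJ-768.mktorest.com/identity"
params = {
    "grant_type": "client_credentials",
    "client_id": "<CLIENT_ID>",
    "client_secret": "<CLIENT_SECRET>",
}
token = requests.get(f"{identity_url}/oauth/token", params=params).json()

# The bearer token authorizes REST calls against the Endpoint URL
# until it expires, after which a new token is requested.
endpoint_url = "https://064-CCJ-768.mktorest.com/rest"
lists = requests.get(f"{endpoint_url}/v1/lists.json",
                     headers={"Authorization": f"Bearer {token['access_token']}"})
print(lists.json())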
Test Connection
You can test your connection settings before you use or save the
connection in a process. The Test Connection ensures that your specified
settings are correct, valid, and accessible. If the test connection is
successful, you can save the connection. Otherwise, review and correct
any incorrect settings, then test again.
The operation supports the following actions:
Execute — uses the List ID field from the operation and Lead Id as
a parameter to add Leads to a list, remove Leads from a list, or add
a custom activity to a Lead.
note
You can also use Execute to delete custom objects that have a single
deduplication field or a compound key (multiple deduplication
fields). For example, you use the Message step and set the deduplication
fields (Country and State) for the records to delete. You use a Data
Process step to split the records, and the Map step to map the fields in
the JSON with the request profile of the Execute operation. Finally, when
you use the Execute operation, select the Is custom object? option when
browsing, and select the custom object type for the record that you want
to delete.
Get — returns Leads and Lists by Id.
Query — searches for Marketo REST objects (Companies, Leads,
Lists, Lead Changes, Lead Activities, Opportunities, and Opportunity
Roles) based on specific search criteria for fields and filters you
define.
o Query searches on ID, name, programName, workspaceName,
batchsize, or nextPageToken.
o To search for Leads, use only a single filter.
o To search for a list of Lead Activity by type, filter on activities
and sinceDatetime, the date and time to use as the beginning
of the search, in yyyy-MM-dd'T'HH:mm:ss'Z' format. The listId
and leadIds filters are optional.
o To search based on deduplication fields, you select the
deduplication field (for example, externalSalesPersonId) when
adding a filter expression and add the value of the filter (for
example, djones) on the Parameters tab.
Upsert — Creates new Leads, Companies, Opportunities,
Opportunity Roles, or Custom Objects in Marketo. If the object
exists, updates it with the submitted values. The operation Upserts
the objects in bulk by sending multiple documents in batches of
300, which is Marketo's limit.
Delete — Deletes objects, including Companies, Leads,
Opportunities, and Custom Objects from Marketo's Lead Database.
You provide the ID of the record to delete as a parameter. You
cannot use Upsert to delete Opportunity Roles.
Opportunity Roles Compound Key Query — Further refine your
search for Opportunity Roles by creating multiple filter expressions
(composite filters). It requires you to use the compound key to
query, such as externalOpportunityId, LeadId, and Role.
Custom Objects Compound Key Query — Further refine your
search for Custom Objects by creating multiple filter expressions
(composite filters). This action requires you to have the following
permissions: Read-Only Custom Object, Read-Write Custom
Object.
Marketo has two types of lists: static lists of fixed leads and smart lists
that are dynamic collections of leads based on characteristics used in
Smart Campaigns. You can access static lists only.
Partitioning can create confusion. When you create a Lead, Marketo stores
the Lead in a partition within a workspace. You can have more than one
partition in a workspace and more than one workspace. When you perform
a query operation, unless you specify a partition, Marketo looks within the
default partition for that object. If you then perform an Upsert operation,
you can get an error message saying that the object already exists:
because Marketo looks across all partitions, the object may be in another
location.
Prerequisites
To implement a connection to your Microsoft Azure account from
Integration, do the following:
Have your storage account name and access key to access Microsoft
Azure Blob Storage.
Deploy an Atom on your local machine or hosted with Integration.
Check the size of your BLOB. The Azure SDK prevents processing
BLOBs larger than 64 MB on an Atom Cloud. If your BLOB is larger
than 64 MB, do your processing on a local Atom.
Document properties (input)
This connector has no predefined properties for input documents.
Document properties (output)
Lease-Duration: Specifies if lease is INFINITE or FIXED.
Lease-State: Indicates state (AVAILABLE, LEASED, EXPIRING,
BREAKING, BROKEN).
Lease-Status: Specifies if lease is LOCKED or UNLOCKED.
Length: Size of the blob in bytes.
User-defined Metadata: Custom metadata properties for the blob.
Microsoft Azure Blob Storage operation
The Microsoft Azure Blob Storage operation defines how to operate with
blob objects.
The operation supports the following actions:
Delete — Delete blobs, as well as snapshots, based on ID parameters.
In Azure, deleted blobs are marked for deletion and removed during
garbage collection.
Get — Read blobs (the only available objects) by passing the name
of the blob as an ID and returning the contents of the blob.
Create — When you import blobs, you can assign a name to a blob
utilizing two methods:
o Use the Blob Name field.
o Use a dynamic document property. When setting the dynamic
document property, create a property for
the azureBlobName and set the parameter to a static value of
the desired blob name. If you do not enter a blob name as a
dynamic document property, then the operation generates a
name.
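For orientation, the following sketch shows what the Create action amounts
to, using the Azure SDK for Python to upload content under an explicit
blob name (the counterpart of the Blob Name field or the azureBlobName
dynamic document property). The account, key, container, and blob name are
placeholders.

from azure.storage.blob import BlobServiceClient

conn_str = (
    "DefaultEndpointsProtocol=https;AccountName=<ACCOUNT>;"
    "AccountKey=<ACCESS_KEY>;EndpointSuffix=core.windows.net"
)
service = BlobServiceClient.from_connection_string(conn_str)

# Upload under an explicit name; without one, the connector generates a name.
blob = service.get_blob_client(container="documents",
                               blob="orders/order-42.json")
with open("order-42.json", "rb") as data:
    blob.upload_blob(data, overwrite=True)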
Changelog
2024-11 (Version 24.11.0, Current)
Updated the SDK to a new version.
Necessary runtime packages are now packaged with the connector.
Connector configuration
To configure the connector to communicate with ADLS Gen2 storage
account, set up the following components:
Microsoft ADLS connector's connection — Represents your ADLS
Gen2 Storage account and login credentials.
Microsoft ADLS connector's operation — Represents an action
against an object (Filesystem and/or Path).
This design provides reusable components which contain connection
settings and operation settings. After building your connection and
operation, set up your connector within a process. When the process is
defined properly, Boomi Integration can map to and from virtually any
system using the Microsoft ADLS connector to the required ADLS Gen2
storage account.
Prerequisites
The Microsoft ADLS connector requires the following:
An ADLS Gen2 storage account. Refer to the Azure Documentation -
Create a Storage account.
An application (client) registered in the Azure Active Directory
(Microsoft Azure Portal - App Registrations page). Once the
registration is complete, an Application (client) ID will be generated.
These client app credentials are used in the Microsoft ADLS
connector's connection settings for OAuth 2.0 authorization to the
ADLS Gen2 system. You can get the client app credentials and
OAuth access token on the Azure Portal App registration - Overview
page of the client app.
The client app must be assigned the proper role to have the required
permissions on the storage account and its contents. Refer to
the Azure Documentation - Manage ACL.
An Atom/Molecule/Atom Cloud.
Connection tab
Endpoint Suffix - Displays the endpoint suffix for the ADLS service. The
default value is core.windows.net; modifying it is not recommended.
Storage Account - Enter the name of your ADLS Gen2 storage account.
Response Timeout - Provide the maximum time (in milliseconds) that the
connector can wait to receive an operation response from the ADLS Gen2
REST APIs.
Authentication Type - Displays the supported authentication type, OAuth
2.0.
Client ID - Provide the unique application ID of your client app assigned
by the Azure portal at the time of your app registration. You can get this
ID from the Azure Active Directory — Manage — App Registrations — client
app Overview page.
Client Secret - Click to open the Password window, enter or paste your
client secret, and click Apply.
Scope - Provide the URL that defines the ADLS Gen2 storage account
access permissions granted for the given client ID. In general, the .default
scope is displayed: https://storage.azure.com/.default.
Access Token URL - Provide the URL used to retrieve the encrypted
access token from the Azure portal to access the ADLS Gen2 storage
account. You can find this URL in the Azure AD (Endpoints page of the
client app — OAuth 2.0 token endpoint (v2)).
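Behind these settings is a standard OAuth 2.0 client-credentials exchange.
The sketch below assumes the Azure AD v2.0 token endpoint shown on the
client app's Endpoints page; the tenant, client ID, secret, storage
account, and filesystem are placeholders.

import requests

token_url = "https://login.microsoftonline.com/<TENANT_ID>/oauth2/v2.0/token"
form = {
    "grant_type": "client_credentials",
    "client_id": "<CLIENT_ID>",
    "client_secret": "<CLIENT_SECRET>",
    "scope": "https://storage.azure.com/.default",
}
token = requests.post(token_url, data=form).json()["access_token"]

# The bearer token then authorizes ADLS Gen2 REST calls, e.g. listing paths:
resp = requests.get(
    "https://<STORAGE_ACCOUNT>.dfs.core.windows.net/<FILESYSTEM>"
    "?resource=filesystem&recursive=false",
    headers={"Authorization": f"Bearer {token}",
             "x-ms-version": "2021-06-08"},
)
print(resp.json())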
Microsoft Azure SQL Data Warehouse
The Microsoft Azure SQL Data Warehouse connector makes it easy to work
with Microsoft Azure SQL Warehouse.
This connector pre-configures the Database connector with the necessary
settings to simplify building relational queries against structured data
stored in a Microsoft Azure SQL Data Warehouse. As a result, information
on process reporting runs, the run history on the Start step, the process
log, and licensing information shows the connection as Database rather
than Microsoft Azure SQL Data Warehouse.
Connector configuration
To configure a Boomi Integration connector to communicate with the
Azure SQL Data Warehouse, set up two components:
Microsoft Azure SQL Data Warehouse connection
Microsoft Azure SQL Data Warehouse operation
This approach creates reusable components that contain connection
settings (such as URL, User, Password, and Account) and operation
settings (such as Object, Action, and data definitions). After building your
connection and operation, set up your connector within a process. When
you define the process correctly, Integration can map to and from virtually
any system using this connector to retrieve data from or send data to the
Azure SQL Data Warehouse.
Prerequisites
The Microsoft Azure SQL Data Warehouse connector requires the following:
Microsoft Azure Data Warehouse connection information, including
server host, IP address and port, user name, and password. The user
name and password configured on the database must have
sufficient permissions to access the objects required by the
integration. For more information, refer to the documentation for
your database.
If you use local Atoms, put the Microsoft SQL Server Driver
appropriate for your runtime environment in the
<installation_directory>/userlib/database directory. You can do that in
one of two ways:
note
For more information about SQL Server drivers, compatibility, and
available downloads, see the Microsoft JDBC Driver for SQL Server support
matrix and Download Microsoft JDBC Driver for SQL Server pages from the
Microsoft help.
Upload the JAR file into your account library (Setup > Account
Libraries), add that file to a Custom Library component, and
deploy it to the appropriate Atom, Molecule, Atom Cloud, or
environment.
For JAR files that support a connector, you create a Custom
Library component with a type of Connector and select a connector type
— in this case, Database. When you deploy the component, it deploys the
referenced JAR files to
the /<installation_directory>/userlib/database directory. Integration
creates the userlib/database directory if it does not exist.
Load the JAR file manually into the appropriate directory of a local
Atom, Molecule, or private Atom Cloud. After you load the JAR files,
you must restart the Atom, Molecule, or private Atom Cloud. If the
userlib/database directory does not exist, you must create it.
If you use an Atom Cloud to connect to the Azure SQL Data
Warehouse, the Atom Cloud installs the SQL Server JDBC
driver sqljdbc4.jar for you.
The Microsoft Azure SQL Data Warehouse connector uses the same driver
as SQL Server. It does not include drivers for local Atoms. You can
download the drivers from the Microsoft Download Microsoft JDBC Driver
for SQL Server page.
Tracked properties
The Microsoft Azure SQL Data Warehouse connector has no tracked
properties.
Prerequisites
The Microsoft Azure SQL Database connector requires the following:
Microsoft Azure Database connection information, including server
host, IP address and port, user name, and password. The user name
and password configured on the database must have sufficient
permissions to access the objects required by the integration. For
more information, refer to the documentation for your database.
If you use local Atoms, put the Microsoft SQL Server Driver
appropriate for your runtime environment in the
<installation_directory>/userlib/database directory. You can do that in
one of two ways:
note
For more information about SQL Server drivers, compatibility, and
available downloads, see the Microsoft JDBC Driver for SQL Server support
matrix and Download Microsoft JDBC Driver for SQL Server pages from the
Microsoft help.
Upload the JAR file into your account library (Setup > Account
Libraries), add that file to a Custom Library component, and
deploy the component to the appropriate Atom, Molecule, Atom
Cloud, or environment.
For JAR files that support a connector, you create a Custom
Library component with a type of Connector and select a
connector type — in this case, Database. When you deploy the
component, it deploys the referenced JAR files to
the /<installation_directory>/userlib/database directory. Integration
creates the userlib/database directory if it does not exist.
Load the JAR file manually into the appropriate directory of a local
Atom, Molecule, or private Atom Cloud. After you load the JAR files, a
restart is required. If the userlib/database directory does not exist,
you must create it.
If you use an Atom Cloud to connect to the Azure SQL Database, the
Atom Cloud installs the SQL Server JDBC driver sqljdbc4.jar for you.
The Microsoft Azure SQL Database connector uses the same driver as SQL
Server. It does not include drivers for local Atoms. You can download the
drivers from the Microsoft Download Microsoft JDBC Driver for SQL Server
page.
Tracked properties
The Azure SQL Database connector has no tracked properties.
Connection tab
Database URL - Populates a read-only field when you specify other
connection values. Used for reference only.
Driver Type - The Azure SQL Database driver is the default, which you
cannot change. The required driver file must reside in
the /<installation_directory>/userlib/database directory.
User - Database user name.
Password - Database user password.
Host - Name or IP address of the database server.
Port - The port with which to connect to the database server.
Database Name - The name of your database.
Additional Options - Any additional options specified in your database
URL. Name and value pairs delimited by semicolons are often used, such
as ;instance=DB01.
To connect to Microsoft Azure SQL Database as a Windows user, use, for
example: ;domain=<Your_Windows_Domain_Name>
If you are using SQL Server, connecting as a Windows user, and
connecting to a named instance, the order of the properties is important
in the JDBC URL. For example: ;instance=<value>;domain=<value>
Class Name - Shows the fully qualified Java class name of the Microsoft
SQL Server driver com.microsoft.sqlserver.jdbc.SQLServerDriver.
Limitations
The following known limitation affects this connector:
This connector does not currently have built-in proxy support and
does not honor the Atom proxy settings. Until this is supported, if
you need to use the Atom proxy settings, Boomi recommends that you
use the HTTP Client connector to access the service.
Tracked properties
ETag - The entity tag from the OData protocol, used to identify specific
resource versions and to detect whether a resource has been modified
since the ETag was last retrieved.
The Microsoft Dynamics 365 for Customer Engagement operation defines
how to interact with the Microsoft Dynamics service's URL endpoint
defined in the connection component and represents an action (Create,
Delete, and so on) to perform against a specific Dynamics 365 entity
object.
The Microsoft Dynamics 365 for Customer Engagement operation supports
the following actions:
Inbound: Get and Query, which are supported in all versions of
Dynamics 365. To request data, it requires unique parameters at
run-time. Review the topic about parameter values to understand
how to statically or dynamically define these values in a process for
runtime. The Get action retrieves data for an entity specified as the
resource with a unique identifier, and is available only for entity
objects. The Query action applies options to set criteria for the data
and to return specific entity properties, and is available for entity
objects.
Outbound: Create, Delete, Update. Use the Create action to
generate and retrieve data from the Dynamics 365 for Customer
Engagement service. Use the Delete action to remove entities or
individual properties of entities. Use the Update action to Upsert
objects. These actions are available for all entity objects in the
connector. For all outbound operations, send request JSON data to
the connector that corresponds with the defined JSON profile.
Typically, request documents are the destination output of a map.
You can also set parameters on the outbound
connector's Parameters tab within the process to supply static or
dynamic request values.
note
Not all actions are available for every object. You must create a separate
operation component for each action and entity object combination
required for your integration.
Prerequisites
Before creating a Microsoft Dynamics AX connector, install and configure
AIF, set up a document exchange and generate the web services.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
WSDL URL - The fully qualified base URL of the Application Integration
Framework (AIF) web services that listen on the IIS server. The URL does
not include the service name, for
example: http://localhost/MicrosoftDynamicsAXAif50.
User - The Windows domain user with access to IIS-hosted web services
at the previously mentioned URL.
Password - The password for the user.
Options tab
Click Import Operation, then use the Import wizard to select the object
with which you want to integrate.
Object - Displays the object type that you selected in the Import
Operation wizard.
Tracking Direction - Select the document tracking direction for the
operation, either Input Documents or Output Documents. This setting
enables you to choose which document appears in Process
Reporting. Start steps always track output documents regardless of your
selection.
note
If the tracking direction is read-only, the feature to change the direction is
either unavailable or the developer set the configuration to read-only. The
default value you see shows you which document appears in Process
Reporting.
Request Profile - The XML profile definition that represents the XML
structure that is being sent by the connector.
The XML profile built by the connector exposes the full SOAP envelope.
This allows you to set the Envelope\Header\CallContext\Company field,
which controls the company against which the request runs. You must do
this if you have a single Microsoft Dynamics AX instance with multiple
companies and you want to connect to different companies.
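For illustration, here is the rough shape of such an envelope with the
CallContext header populated. The namespace shown matches the AX 2012
shared data contracts and is an assumption; verify it against the WSDL
generated for your instance.

# Illustrative SOAP envelope fragment; the namespace is an assumption,
# verify against your generated WSDL before use.
ENVELOPE = """<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Header>
    <CallContext xmlns="http://schemas.microsoft.com/dynamics/2010/01/datacontracts">
      <Company>COMPANY_CODE</Company>
    </CallContext>
  </s:Header>
  <s:Body>
    <!-- service-specific request payload -->
  </s:Body>
</s:Envelope>"""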
Return Application Error Responses - This setting controls whether an
application error prevents an operation from completing:
If you clear the setting, the process stops and reports the error on
the Process Reporting page.
If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.
WS Object - The name of the web service object.
Microsoft Dynamics CRM connector
The Microsoft Dynamics CRM connector enables you to move data into
and out of the Microsoft Dynamics CRM application.
The connector uses a web service that Microsoft provides to interact with
their CRM system. Use the connector to perform Query, Get, Add, Modify,
and Delete operations on various types of records.
Connector configuration
To configure a connector to communicate using Microsoft Dynamics CRM
connector, set up two components:
Microsoft Dynamics CRM connection
Microsoft Dynamics CRM operation
Create a single connection component for each Microsoft Dynamics CRM
organization. You can add multiple operations, one for each action.
Supported editions
Boomi supports the following editions:
Online 2011 up to 2017 (v8.2.2.0160)
On-Premise 2011/2013/2015/2016
Online 4.0
On-Premise 4.0
Prerequisites
To implement a connection to Microsoft Dynamics CRM from Integration,
you need:
A user name and password with sufficient permissions to query, get,
add, modify, and delete various types of records.
Java with strong encryption (Java Cryptography Extension or JCE)
support installed for on-premise 2011 and 2013. You can download
the files and instructions from Oracle.
Tracked properties
This connector has no predefined tracked properties.
Connector configuration
To configure a connector to communicate with Microsoft Dynamics AX, set
up two components:
Microsoft Dynamics AX connection — that represents a specific
Microsoft Dynamics AX instance (IIS server instance that is listening
for AIF services). You create one connection for each Microsoft
Dynamics AX instance.
Microsoft Dynamics AX operation — that represents an action
against a specific record type. Create one or more operations, one
for each type of interaction required by your integration scenario.
Supported editions
Boomi Integration supports the following editions:
Microsoft Dynamics AX 2009
Microsoft Dynamics AX 2012
Supported operations
The connector supports all operations that AIF exposes. Microsoft
Dynamics AX includes over 70 standard documents. Each document
supports a common business process. The AIF lets you customize these
standard documents or create your own documents for your individual
business processes.
For more information about the included documents with Microsoft
Dynamics AX, see Standard Axd Documents in the Microsoft Dynamics AX
SDK Help.
Prerequisites
Before creating a Microsoft Dynamics AX connector, install and configure
AIF, set up a document exchange and generate the web services.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
WSDL URL - The fully qualified base URL of the Application Integration
Framework (AIF) web services that listen on the IIS server. The URL does
not include the service name, for
example: http://localhost/MicrosoftDynamicsAXAif50.
User - The Windows domain user with access to IIS-hosted web services
at the previously mentioned URL.
Password - The password for the user.
The web services authentication process is as follows:
1. The connector sends a request to the IIS that hosts the AIF services.
IIS retrieves the user credentials depending on the authentication
mechanism specified in the service configuration (for example,
NTLM). IIS then tries to map the security credentials onto a valid
domain user. By default, Microsoft Dynamics AX configures WCF to
use the basicHttpBinding binding with message security, so the
message SOAP header contains the user credentials. Finally, IIS
authenticates the user as valid in Active Directory.
2. IIS then passes the request to AIF, which performs further
authentication by verifying that the user:
a. Is a valid Microsoft Dynamics AX user
b. Has access to the service through the appropriate security key
3. After AIF determines that the user has access to the service, it
processes the message. At run time, standard AIF security ensures
that the user has access to the data exposed by the service.
Microsoft Dynamics AX operation
The Microsoft Dynamics AX operation defines how to interact with the AX
services.
The operation represents a specific action to perform against an object in
Microsoft Dynamics AX. The operation has one action: Execute.
Microsoft Dynamics CRM connection
The Microsoft Dynamics CRM connection represents a single Microsoft
Dynamics CRM organization including sign-in credentials.
If you have multiple organizations or sandbox instances, use a separate
connection for each. You can pair a single connection with different
Microsoft Dynamics CRM operations to perform a unique action against a
Microsoft Dynamics CRM organization.
Supported authentication and protocols
This connector supports the following authentication and protocols:
Authentication for OCP-managed (Online Commerce Platform)
accounts, the subscription and billing platform used by Microsoft
Office 365.
Windows Live ID, the single sign-on web service.
Online federation authentication policies, but not Microsoft Active
Directory Federation Services (ADFS or AD FS).
Microsoft Dynamics CRM supported authentication models
Microsoft Dynamics CRM supports three authentication models:
Windows Authentication.
Claims-based Authentication for internal access to an untrusted
domain.
Claims-based Authentication for external access to an untrusted
domain.
note
This connector does not support the communication protocols for Claims-
based Authentication.
This connector leverages Windows Authentication, which requires you to
install the Atom on a computer in a domain trusted by the CRM host.
Windows Authentication leverages SPNEGO and NTLM to authenticate with
the CRM installation. You can determine the URLs for access by navigating
to Developer Resources from Settings in the MS Dynamics CRM console.
Connection to an Internet Facing Deployment (IFD) requires using Claims-
based Authentication. Claims-based Authentication leverages Kerberos
for internal access between untrusted domains and a sign-in protocol for
external access to untrusted domains.
Connection tab
Version - The version of Microsoft Dynamics CRM to which you connect:
Online 2011 up to 2017 (v8.2.2.0160)
On-Premise 2011/2013/2015 (and for 2016)
Online 4.0
On-Premise 4.0
Server - The Microsoft Dynamics CRM Online host name. For on-premise
versions that are SSL protected, the server name can be in a format
like https://crm.boomi.com:8090/.
Port
For on-premise versions, the host port for the Microsoft Dynamics CRM
Server. (If you selected an online version, you can leave this blank.)
Organization Name
The unique organization name differs depending on the version:
For Online 4.0 version — after you sign in, you can find the
organization name in Settings > Customization > Download
Web Service Description Files.
For Online 2011, 2013, and 2015 versions — if you log in
as https://uniquename.crm.dynamics.com, then the organization
name is uniquename.
For On-Premise 2011, 2013, and 2015 versions — if you log in
as http://hostname/uniquename/, then the organization name is the
uniquename.
User name - The user name differs depending on the version:
For on-premise versions, enter the user name of the domain, such
as domain\user for signing into the Microsoft Dynamics CRM server.
For online versions, enter the Microsoft account’s user name.
Password - The password for signing into the Microsoft Dynamics CRM
server. For online versions, enter the Microsoft account’s password.
com.sun.security.jgss.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="<PATH_OF_KEYTAB>"
    storeKey=false
    useTicketCache=false
    principal="<PRINCIPAL_NAME>"
    doNotPrompt=true
    debug=true;
};
Remember:
All these files must be placed in a separate folder on your local machine.
In the gss-jaas.conf file, go to the line keyTab="<PATH_OF_KEYTAB>" and add the local directory path of the keytab file.
In the connection setup, specify the local directory paths of the Krb5.conf and gss-jaas.conf files in the Krb5 Config Path and JAAS Config Path fields, respectively.
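For orientation, a standard JVM is pointed at these two files through the java.security.krb5.conf and java.security.auth.login.config system properties; the connector's Krb5 Config Path and JAAS Config Path fields serve that purpose for you. A minimal Java sketch under that assumption (class name and paths are illustrative placeholders):

public class KerberosConfigSketch {
    public static void main(String[] args) {
        // Path to the krb5.conf file (realm and KDC definitions); placeholder path
        System.setProperty("java.security.krb5.conf", "/opt/kerberos/krb5.conf");
        // Path to the gss-jaas.conf file shown above; placeholder path
        System.setProperty("java.security.auth.login.config", "/opt/kerberos/gss-jaas.conf");
        // Optional: verbose Kerberos debugging, matching debug=true in the JAAS entry
        System.setProperty("sun.security.krb5.debug", "true");
    }
}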
Enterprise Edition of MongoDB for LDAP and Kerberos
Authentication.
Connection String for Mongodb+SRV (mandatory if the Mongo+SRV option is selected). The connection string contains the endpoint of the DNS server hosting the SRV record. Optionally, you can pass other required configuration such as SSL and authentication type; see the example connection string after this list. For more details on the SRV connection string, refer to the MongoDB documentation - DNS Seedlist Connection Format.
Make sure the following settings are configured on the DNS server:
o SRV and TXT record for the cluster must be defined on the
DNS server. A single SRV record points to the members of the
replica set. The TXT record defines the options for the replica
set, specifically the database that will be used for
authorization and the name of the replica set. For details on
creating the SRV and TXT record, refer to the MongoDB
documentation - MongoDB Seed Lists.
o User credentials associated with the DNS server. These
credentials can be entered in the Username/Client
Principal and Password field. The database name can be
provided in the Database Name field.
o The DNS must be resolvable from the system when establishing the connection.
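As an illustration, a DNS seedlist connection string in the format described by the MongoDB documentation (the host name and options here are placeholders):

mongodb+srv://server.example.com/?authSource=admin&ssl=true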
Connector-specific container setting
The following property allows the container owner (Atom, Molecule, or Private Atom Cloud; the public Atom Cloud is not supported) to control the amount of memory that the MongoDB connector uses while retrieving and storing documents in memory before processing. Using this property helps a Cloud owner ensure that a single tenant or account does not overwhelm the container by consuming a huge amount of heap memory simultaneously.
com.boomi.connector.mongodb.maxDocumentSize - (Query and Get operations) The property indicates the maximum size, in bytes, of a single document the server returns per request. Based on the given container value, documents are filtered for the Query and Get operations.
note
The sorting functionality for the Query operation with the Azure CosmosDB MongoDB variant works only for indexed fields.
note
A value less than 1 indicates no limit on the document size for the Get and Query operations. This enables the Get and Query operations to effectively support the Azure CosmosDB MongoDB variant. For Azure Cosmos DB, set the value of this property to less than or equal to zero.
Without this container property, the connector operates as-is with a default value of 1 MB, and consequently does not support Get and Query operations for the Azure CosmosDB MongoDB variant.
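As an illustration, assuming the property is supplied as a standard container property on the runtime, a 4 MB cap for Get and Query and the Cosmos DB no-limit setting would look like this (values are examples):

com.boomi.connector.mongodb.maxDocumentSize=4194304
com.boomi.connector.mongodb.maxDocumentSize=0

The first line caps returned documents at 4,194,304 bytes (4 MB); the second, per the note above, removes the limit for the Azure CosmosDB MongoDB variant.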
Supported editions
MongoDB versions from v2.6 to v6.x.
Tracked properties
This connector has no predefined tracked properties.
MongoDB operation
The MongoDB operations define how to interact with your MongoDB server and represent a specific action (Get, Query, Create, Update, Upsert, and Delete) to be performed against one or many MongoDB IDs (the unique ID of a document stored in the MongoDB collections).
The connector supports using the following BSON datatypes: Integer,
Boolean, String, Decimal128, Date, Double, Long, ObjectId (hexadecimal
value), Binary data, JavaScript, Timestamp, and Null.
Create a separate operation component for each action combination that your integration requires. The MongoDB connector supports the following actions:
Inbound: Get, Query
Outbound: Create, Update, Upsert, Delete
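For illustration, a hypothetical document expressed in MongoDB extended JSON that combines several of the supported BSON types (all field names and values are invented):

{
  "_id": { "$oid": "507f1f77bcf86cd799439011" },
  "name": "Sample record",
  "active": true,
  "quantity": 42,
  "price": { "$numberDecimal": "19.99" },
  "updatedAt": { "$date": "2024-01-15T10:00:00Z" },
  "notes": null
}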
Import certificates
For the MongoDB connector to use X.509 authentication, you must import two types of certificates (a user certificate and a CA certificate).
About this task
For the CA certificate import, use the Add (+) icon next to the CA Certificate field and follow steps 2-5. The CA certificate import does not require a password.
Procedure
1. Navigate to the connector's Connection tab and click the Add (+) icon in the User Certificate field.
2. In the New Certificate section, provide the certificate name.
3. Click the Import button.
4. Choose the certificate file from your local system and specify the
user certificate password.
5. Click the Finish button. The certificate uploaded successfully message is displayed.
MongoDB changelog
Changelog
2024-07 (Version 9, Current)
The connector has improved exception handling.
Memory usage optimization for Get and Query operations.
Enhancement to support Azure CosmosDB MongoDB variant for Get
and Query operation by addressing the default 1MB document size
limit restriction. Maximum document size can be controlled via new
container property
- com.boomi.connector.mongodb.maxDocumentSize.
For Cosmos DB, com.boomi.connector.mongodb.maxDocumentSize should be set to less than 1; in this case, the document size limit restriction is removed.
2024-01 (Version 8)
Necessary runtime packages are now packaged with the connector.
A vulnerability issue of type Denial of Service was resolved to make the connector more secure.
The connector supports the latest driver version 6.0.
2023-11 (Version 7)
The connector now successfully creates structured and unstructured
JSON profiles for MongoDB collections (including Blank Array(s))
without throwing an exception.
The Query operation successfully executes without throwing an
Arithmetic Exception when the Batch size is set to 0.
NetSuite connector
NetSuite is a financial management solution allowing you to manage IT
costs, optimize accounting efficiency, streamline order management and
procurement processes.
Using the NetSuite connector, you can use a Boomi Integration process to
insert, retrieve, look up, delete, and update data in your NetSuite account
(NetSuite On-Demand Application Suite, and NetSuite OneWorld Global
ERP). You can integrate on-premise and legacy solutions with NetSuite,
and also integrate with On-Demand and SaaS-based solutions such as
Salesforce and SAP. Integrating with NetSuite consists of web service calls
using an XML request and response made over an HTTPS connection.
SuiteTalk SOAP-based web services API
The NetSuite connector uses the SuiteTalk SOAP-based web services API to
move data into and out of your NetSuite account, and simplifies working
with the API by:
Making it easy to work with custom fields. When using the Query
action, you can use NetSuite's search criteria for objects (including
multi-select customizations) by referencing custom fields. Query
filters for RecordRef or List fields
include CustomLists and Multiselect fields. You also have options
to disable mandatory custom field validation and system notes for
custom fields.
Retrieving and providing the text values for list-type fields and
internal IDs when querying. For example, it is easier to add query
filters to retrieve a custom list of items from inventory.
Streamlining the handling of multi-select and select references to
objects by using comma-delimited values. When
defining Parameter Values, you can enter a comma-delimited list
in the Static Value field.
Improving performance when inserting or updating large sets of
documents by grouping them into batches.
Standard and Custom Record Types
The NetSuite connector exposes all standard and custom record types
available in your NetSuite account. You can browse the list and import the
records you want to work with to automatically generate the request and
response XML profiles, including all standard and custom fields (except
custom segments). However, if you later modify the record type in
NetSuite, such as adding or renaming a custom field, you must reimport
the record to update the connector operation. The reimport is required
because the operation component caches the NetSuite internal metadata
about the record type, such as NetSuite data types and custom field script
IDs, to facilitate the interaction.
When reading and writing, the NetSuite connector manipulates the raw
XML data for custom fields to simplify working with Integration profiles
and mapping. It can make the document data within Integration differ
slightly when viewed within the NetSuite Web Services Usage Logs.
Specifically, during imports the NetSuite connector uses
NetSuite’s Description field as the element name and caches the custom
field script ID within the operation component. This produces a more
strongly-typed XML and avoids the need to use qualifiers in the XML
profile to access a specific custom field.
Here is an example snippet from NetSuite:
<ns2:customField xsi:type="ns2:StringCustomFieldRef"
internalId="CUSTENTITYMYCUSTOMFIELD123">
<ns2:value>My Value</ns2:value>
</ns2:customField>
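By contrast, within Integration the same value might appear under an element named after the custom field's Description, for example (hypothetical; the actual element name depends on the Description configured in NetSuite):

<MyCustomField>My Value</MyCustomField>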
Error Processing
The NetSuite connector handles processing errors in different ways
depending on the error type.
If records in NetSuite are damaged, such as having a dropped bit, the connector raises a process-level exception and processing stops.
If a connection or authorization error occurs, Integration retries the
connector run based on the Number of Retries setting in the
connection.
If a document error occurs, such as an invalid ID or damaged data in
a document, processing does not stop.
Connection Pooling
The NetSuite connector automatically manages connection pooling behind
the scenes to avoid exceeding the maximum number of allowed
concurrent connections when multiple processes attempt to call NetSuite.
For example, when two processes try to query NetSuite simultaneously,
the second process waits to make the call until the first process is
complete.
note
Connection pooling does not span across JVMs (Molecule nodes or Atom
Cloud forked executions). When scheduling NetSuite processes to
a Molecule or Atom Cloud, either stagger runs or run them sequentially.
Connector configuration
To configure a connector to communicate with NetSuite, set up two
components:
NetSuite connection
NetSuite operation
This approach creates reusable components that contain connection
settings (such as URL, User, Password, Account) and operation settings
(such as Object, Action, data definitions). After building your connection
and operation, set up your NetSuite connector step within a process. With
the profiles generated during the operation configuration, a Boomi
Integration process can map to and from virtually any system using the
NetSuite connector to retrieve data from or send data to the NetSuite
application.
Prerequisites
To implement a connection to your NetSuite account from Integration, do
the following:
For user credentials authentication, have your user name, password,
NetSuite account number to access the NetSuite web service, and
application ID.
For token-based authentication (recommended), have your NetSuite
account number, consumer key and secret, token Id and secret, and
application ID. In NetSuite, you must also enable token-based
authentication for your NetSuite account, create a custom role,
assign the role to the appropriate user, create a new integration
record for Boomi Integration, and create an access token. For
complete details to configure the NetSuite connection with token-
based authentication, see the How to configure the NetSuite
Connection with Token Based Authentication article in the Boomi
Community.
For both user credentials and token-based authentication, confirm
that your NetSuite account includes privileges to connect to the
NetSuite API.
Deploy an Atom on your local machine or hosted with Boomi
Integration.
Know which interface(s) from a business process perspective you
are going to integrate. For example, you may want to upload sales
orders from a flat file into NetSuite.
Supported editions
NetSuite supports the six latest endpoints at any given time. Change notes for the supported NetSuite editions are located in the NetSuite change notes topic.
With 2023.2, the supported WSDL versions are:
2023.2
2023.1
2022.2
2022.1
2021.2
2021.1
The following endpoints are available but not supported by NetSuite:
2020.1
2019.2
2019.1
2018.2
2018.1
2017.2
2017.1
Updating these endpoints to one of the supported WSDL versions is
recommended.
NetSuite has sunset the 2016.1 and 2016.2 and older endpoints. If you are connecting to these endpoints, upgrading to the supported WSDL endpoints is imperative.
For more information, refer to NetSuite documentation.
note
When creating a NetSuite connection, you may not see the version of the NetSuite API you are using or upgrading to in the Version field. For example, you may upgrade your NetSuite application instance to NetSuite 2022.1 but be unable to select version 2022.1 in the connection. New and existing integration processes that access previous API versions work with the latest NetSuite interface version. You only need to use the 2022.1 API to access version-specific features.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
NetSuite supports both user credentials and token-based authentication
(2017.1 and later versions). When you create a connection, you can use
either approach:
When using user credentials, enter your user name, password,
account number, and application ID.
When using token-based authentication, do not enter your user
name, password, and application ID. Instead, enter your account
number, consumer key and secret, and token ID and secret.
NetSuite operation
The NetSuite operation defines how to interact with your NetSuite
account.
The operation represents a specific action (Create, Delete, Get, and so on)
to perform against a specific NetSuite record type or object. Not all actions
are available for every object. You must create a separate operation
component for each action and object combination that your integration
requires.
The NetSuite operation supports the following actions:
Inbound — Get and Query. The operation requires unique
parameters at runtime for all inbound operations to request data. To
learn how to statically or dynamically define values in a process,
read the Parameter Values help topic.
Outbound — Create, Delete, Execute, Update, and Upsert. For all
outbound operations, send request XML data into the connector that
corresponds with the defined XML profile. Usually, request
documents are the destination output of a map. You can set
parameters on the outbound connector's Parameters tab within
the process to supply static or dynamic request values.
NetSuite OpenAir connector
The NetSuite OpenAir connector enables you to move data into and out of
NetSuite OpenAir services resource planning (SRP) software.
Use the NetSuite OpenAir connector to integrate any combination of on-
premise and cloud applications with NetSuite OpenAir. The connector is
prebuilt for Salesforce.com. You can run business processes automatically
by integrating NetSuite OpenAir with other applications across an
organization, including very large enterprise systems like SAP and Oracle
in a two-tier model. For example, your company needs to track the sale,
progress, and billing of a service engagement across CRM, PSA and ERP
applications. When Boomi Integration integrates these applications to
automate this process, the results are greater efficiencies, a reduction in
errors, and improved visibility into all aspects of customer engagement.
If you customize NetSuite OpenAir modules with additional fields, the
fields automatically appear when browsing the NetSuite OpenAir interface
with Boomi Integration.
Integrating with NetSuite OpenAir consists of web service calls utilizing
SOAP request and response setup over an HTTPS connection. The
technical details of this connection, such as request headers, error
handling, HTTPS connection, and so on, are abstracted to simplify your
implementation.
Connector configuration
To configure a connector to communicate with NetSuite OpenAir, set up
two components:
NetSuite OpenAir connection
NetSuite OpenAir operation
This design provides reusable components containing connection settings
(such as the Endpoint URL, User, Password, and Company) and operation
settings (such as Object, Action, and data definitions). After building your
connection and operation, set up your connector within a process. When
you have correctly defined the NetSuite OpenAir connector within your
process, Integration can map to and from virtually any system using the
connector to retrieve or send data to the NetSuite OpenAir application.
Supported editions
The NetSuite OpenAir connector supports Version 2013-05-16 of the
OpenAir SOAP API.
Prerequisites
To implement a connection to your NetSuite OpenAir account from
Integration, you must have a valid user name and password so that you
can sign into the NetSuite OpenAir service.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
OpenAir SOAP Endpoint URL - The endpoint URL for the NetSuite
OpenAir service. For example, https://<OpenAirHostName>.com/soap. If
you are using a sandbox, your URL might look something like
this: https://sandbox.openair.com/soap.
Company
The company code used to determine the instance for the connection.
Login User Name - The user name used to sign in to the NetSuite
OpenAir service.
Login Password - The password associated with the sign-in user name.
API Namespace - The namespace required for API access.
API Key - The code required for API access.
NetSuite OpenAir operation
The NetSuite OpenAir operation defines how to interact with your NetSuite
OpenAir instance.
The operation represents a specific action (Create, Delete, Get, and so on)
performed against a specific NetSuite OpenAir object (such as Project,
Time Sheet, Proposal). Not all actions are available for every object.
Create a separate operation component for each action and object
combination required for your integration.
The NetSuite OpenAir operation supports the following actions:
Inbound
o Get: looks up data based on a single internal ID for the
NetSuite OpenAir object that you want to retrieve. The Get
action retrieves a single record from the NetSuite OpenAir
object defined in the Get operation. The bottom half of the
operation component page contains configuration options to
choose the fields to return.
o Query: looks up object records based on specific search
criteria. Supply static or dynamic request parameters (filters)
to control the selected object records. A single Query request
can return zero or more object record documents. After selecting the Query connector action and using the Import Wizard, choose the fields to return and add filters to limit the results. Filters have four operators: Equal To, Not Equal To, Newer Than, and Older Than (for date fields).
Outbound
o Create
o Delete
o Update
Options tab
Click Import Operation, then use the Import wizard to select the object
with which you want to integrate. The following table describes the default
field definitions for each action type:
Object - Displays the object type that you selected in the Import
Operation wizard.
Tracking Direction - Select the document tracking direction for the
operation, either Input Documents or Output Documents. This setting
enables you to choose which document appears in Process
Reporting. Start steps always track output documents regardless of your
selection.
note
If the tracking direction is read-only, the feature to change the direction is
either unavailable or the developer set the configuration to read-only. The
default value you see shows you which document appears in Process
Reporting.
Request Profile (Create, Delete, Update only)
The XML profile definition that represents the XML structure that is being
sent by the connector.
Response Profile (Create, Get, Query, and Update only)
The XML profile definition that represents the XML structure received by
the connector.
Return Application Error Responses - This setting controls whether an
application error prevents an operation from completing:
If you clear the setting, the process stops and reports the error on
the Process Reporting page.
If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.
Okta connector
The Okta connector allows you to interact with an Okta account, leveraging Okta's REST APIs to easily provision and manage users.
The Okta Identity Cloud provides secure identity management with single
sign-on, multi-factor authentication, life cycle management (provisioning),
and more. The Okta connector makes it easier for Boomi customers to provision and manage Okta users and groups (adding, updating, and deleting users in groups).
note
This is an open source connector and you can access the code in
the Okta connector repository.
Connector configuration
To configure the connector to communicate with Okta APIs, set up two
components:
Okta connection — Represents a specific Okta instance and its
associated user authentication details.
Okta operation — Represents an action against a specific Okta
instance (object type).
Prerequisites
To connect with Okta's APIs, you must have an Okta account. First, sign up and create an Okta developer account. Once you have the unique Okta subdomain and API Token, you can connect to the Okta APIs.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
Okta URL (oktaUrl) - Enter the base Okta domain URL. For example, dev-8275164.okta.com.
Okta API Token (oktaApiKey) - Enter the API Token for the respective Okta domain.
Okta operation
The Okta operation represents a specific action to be performed against a specific Okta account.
The Okta connector supports the following actions:
Inbound — GET, LIST.
Outbound — CREATE, UPDATE, DELETE, ADD, REMOVE.
JDBC drivers
Driver Type | Group ID | Artifact ID | Version
Postgres | org.postgresql | postgresql | 42.7.4
MySQL | com.mysql | mysql-connector-j | 9.1.0
MSSQL | com.microsoft.sqlserver | mssql-jdbc | 12.8.1.jre11
Oracle | com.oracle.database.jdbc | ojdbc11 | 23.5.0.24.07
MariaDB | org.mariadb.jdbc | mariadb-java-client | 3.4.1
MariaDB | org.mariadb.jdbc | mariadb-java-client | 3.5.0
IBM DB2 | com.ibm.db2 | jcc | 11.5.9.0
SAP HANA | com.sap.cloud.db.jdbc | ngdbc | 2.22.12
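These Group ID/Artifact ID/Version values are Maven coordinates. For example, the Postgres driver from the table expressed as a Maven dependency (illustrative; any build tool that resolves Maven coordinates works equally well):

<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.7.4</version>
</dependency>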
Connection tab
Username: Enter your user name to log into your OME account.
Password: Enter your password for your OME account.
IP Address: Enter the Hostname or IP address of the OME system.
Enable SSL (Optional): Select this if your OME instance supports SSL.
Connection tab
Oracle CRM On Demand Host URL - The URL for the Oracle CRM On Demand host serving the WSDL. For example: https://secure-slsomxvga.crmondemand.com. To locate the correct URL, log in to Oracle CRM On Demand, go to the Admin section, and find the host name for the WSDL.
User Name - The user name for signing in to the host.
Password - The password for signing in to the host.
Is Staging - Used to indicate if this is a staging (testing) environment.
Connection tab
Connection URL: Refers to your Oracle database URL. Format — jdbc:oracle:thin:@hostname:portNumber:databaseName
For example,
jdbc:oracle:thin:@localhost:1521:xe
(See the connection sketch after these fields.)
Class Name: Refers to the class name of the database driver used to communicate with the respective database.
User Name: Refers to the database user on whose behalf the connection
is made.
Password: Refers to the user’s password.
Schema Name: Enter the database schema name to fetch the table names for selection. You can also provide the schema name at the time of operation import. If the schema value is not provided in either the Connection or the Operation settings, the schema name is taken from the user name.
note
If any existing integration process uses a different schema name in the Connection and Import, make sure to reimport the profile.
Connection Timeout (ms): Enter the maximum time (in milliseconds)
that the connector can wait to complete the connection handshake with
the database.
Read Timeout: Enter the maximum time (in milliseconds) that the connector can spend attempting to read data from the database. If the value is left blank, the default value of 600s is used. The Read Timeout value entered in the connection UI can be overridden by passing a new value via the Message step.
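For orientation, the fields above map onto a standard JDBC thin-driver connection. A minimal Java sketch under that assumption (the driver class is Oracle's documented thin driver; host, port, SID, and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class OracleConnectionSketch {
    public static void main(String[] args) throws Exception {
        // Class Name field: the Oracle thin JDBC driver class
        Class.forName("oracle.jdbc.OracleDriver");
        // Connection URL field: jdbc:oracle:thin:@hostname:portNumber:databaseName
        String url = "jdbc:oracle:thin:@localhost:1521:xe";
        // User Name and Password fields (placeholder credentials)
        try (Connection conn = DriverManager.getConnection(url, "dbuser", "dbpassword")) {
            System.out.println("Connected to: " + conn.getMetaData().getDatabaseProductName());
        }
    }
}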
Changelog
2024-01 (Version 13, Current)
Necessary runtime packages are now packaged with the connector.
The connector correctly writes a database NULL when the input is provided as null for a nullable field/column, and throws an error when the input is null for a non-nullable field/column. (CONC-2667)
The connector successfully retrieves values greater or less than the INTEGER limits (-2147483648 to 2147483647) in NUMBER type columns for the Oracle database without any error. (CONC-1793)
The connector handles numeric data bigger than "2147483647" as a decimal value.
Test connections work successfully with and without connection pooling enabled for the Oracle Database step.
A vulnerability issue of type Denial of Service was resolved to make the connector more secure.
Connection tab
The following fields appear on the Connection tab:
Server Name - The name of the server hosting the target database.
Port Number - The port number of the server hosting the target
database. The default port is 1521.
User Name - The user name that connects to the database.
Password - The password for the user name.
Database Name - The name of the database that contains the target
data.
Schema Name - The name of the schema that contains the target data.
Prerequisites
To use the Pricefx connector, you will require the following:
Pricefx partition - User account details like username and password.
Pricefx Key - If you would like to use Pricefx API protocol version 2.
Appropriate access rights to the Pricefx resource. You can verify your rights by browsing the data in Pricefx; if you can browse the data in Pricefx, you can retrieve it through the connector.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
URL - Refers to the URL you use to access Pricefx in a browser. The path is not required. For example, if the URL is https://www.pricefx.eu/app/, you would just need to input https://www.pricefx.eu.
Username - Refers to the username that you used to login to Pricefx via
browser.
Password - Refers to the password that you used to login to Pricefx via
browser.
Partition - Refers to the name of the partition that you used to login to
Pricefx via browser.
Use JWT Token - Enables the connector to use a JWT token instead of Username and Password to invoke the Pricefx API. Using it is recommended for better performance on the Pricefx server. The token can be retrieved from the Get Token operation and expires in about 30 minutes. When using JWT token authentication, you must specify the token value. You can set the JWT token value using the JWT Token document property, which is required only if the Use JWT Token option is selected in the Pricefx - Partner connection. The token should be retrieved by the Get Token operation.
Use OAuth Token in process - Enables the connector to access the Pricefx API with API protocol version 2. You must have your Pricefx key before using this option. The token can be retrieved from the Get Token operation and expires in about 60 minutes. When using OAuth token authentication, you must specify the token value. You can set the OAuth token value using the OAuth Access Token document property, which is required only if the Use OAuth Token in process option is selected in the Pricefx - Partner connection. The token should be retrieved by the Get Token operation. You can choose either the JWT token or the OAuth token in the connection.
Pricefx - Partner operation
The Pricefx - Partner operation defines how to interact with your Pricefx
partition and represents a specific action to perform against a Pricefx
object.
Create a separate operation component for each action/object
combination that your integration requires. All Pricefx - Partner connector
operations use JSON format and support the following actions:
Query — Retrieve records from the Pricefx Partition based on a
filter.
User Access Operation — Provide actions relevant to
authentication and authorization.
PA - Truncate Data — Truncate PA data from Pricefx Partition
based on a filter in the request message.
PA - Data Upload — Upload PA data in Excel file format to the
Pricefx Partition.
PA - Data Copy — Copy data from master data to PA tables.
PA - Dataload Calculation — Execute a calculation dataload in
Pricefx partition.
Refresh Datamart — Refresh data from a specific Pricefx
Datamart.
Get — Retrieve a specific record from Pricefx Partition based on its
primary key.
Metadata — Retrieve metadata of a specific table from Pricefx
Partition.
Execute Formula — Execute a logic in Pricefx partition.
Quote Operations – Perform various actions on a Pricefx Quote.
Product Image Operations – Perform various actions to a Pricefx
Product Image.
Admin Operations – Perform various administrative actions to a
Pricefx Partition.
Fetch — Retrieve records from Pricefx Partition based on a filter in
the request message.
Delete — Delete existing records in Pricefx Partition based on a
filter in the request message.
Delete by key — Delete existing record in Pricefx partition based
on the business key in the request message.
Upsert — Create a new record or update an existing record in
Pricefx Partition.
Update — Update an existing record in Pricefx Partition.
Bulk Data Load — Upload data to Pricefx Partition.
Create — Create a new record in Pricefx Partition.
For more information about creating a Pricefx operation, see the
topic Creating an Operation.
Options tab
Click Import Operation, then use the Import wizard to select the object
to integrate. When you configure an action, the following fields appear on
the Options tab.
Object - An object defines what you want to integrate or the action to
execute, which you select in the Import Wizard.
Object - Displays the object type that you selected in the Import
Operation wizard.
Tracking Direction - Select the document tracking direction for the
operation, either Input Documents or Output Documents. This setting
enables you to choose which document appears in Process
Reporting. Start steps always track output documents regardless of your
selection.
QuickBooks connector
The QuickBooks connector enables you to move data into and out of
QuickBooks financial software.
The connector connects directly to the QuickBooks company data file
(*.QBW) or online account. The connector enables you to query, add,
modify, and delete various types of records, including accounts,
customers, sales orders, and invoices.
The QuickBooks connector creates a folder in the ${atom_installation_directory}/work/quickbooks_host directory where it writes a temporary registry file. This file allows the connector to make a single modification in the Windows registry to set up the connection.
Supported editions and versions
Boomi supports the following editions and versions:
Local QuickBooks Desktop Edition
o Pro, Premier, and Enterprise Editions
o General Business, Contractor, Manufacturing and Wholesale,
Nonprofit, Professional Services, and Retail editions
o Global, US, Canada, and UK Editions
o Connectivity to the 2002 edition and later
o QuickBooks 2002 (R1) and (R2), 2003 (prior to R7 and R7 and
above), 2004, 2005 (prior to R5 and R5 and above), 2006,
2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014 and later
(2017)
QuickBooks Online Edition (This legacy edition is not available to
new users after the May 2014 release.)
o Simple Start and Plus Editions
o QB SDK versions 2.0–8.0 and 10.0–13.0. This corresponds to
QuickBooks 2002–2014.
QuickBooks Online REST API
o Intuit’s QuickBooks Online edition supports V3 API version 7.
Unsupported editions
Boomi does not support the following editions:
QuickBooks for Mac (all editions)
QuickBooks Desktop Simple Start Edition
QuickBooks Point of Sale
QuickBooks Desktop Australia Editions
Prerequisites
General prerequisites:
You must have QuickBooks administrator rights.
Prerequisites for Desktop editions:
You must have a basic runtime deployed to the machine where you
install a QuickBooks client application.
You must configure QuickBooks to authorize Integration to connect
to your company file. You can find instructions in the connection
configuration.
The basic runtime must be a 32-bit runtime if you are using Windows. The Windows COM libraries used to establish the connection support only 32-bit.
The basic runtime must run in desktop mode, rather than as a
Windows service. In addition, Windows users must always sign into
the machine.
The Windows user who is running the basic runtime must have
network access with permissions to read and write the company
data file (*.QBW file) if the data file resides on a different machine
than the client.
Allow the QuickBooks company file to use integrated applications by
going into the QuickBooks company file
(Edit > Preferences > Integrated Applications > Company
Preferences) and ensuring that Don't allow any applications to
access this company file is clear.
Steps to set up Desktop editions in a distributed or multiple-user
environment. If you have multiple users accessing a single QuickBooks
company file in a shared location or server:
1. Authorize the basic runtime to connect as the Admin user.
2. Install a QuickBooks client on the central server, even if not used by
users. The basic runtime needs certain libraries contained within the
client installation.
3. Sign into QuickBooks on the same machine where you install
the Runtime; it does the one-time authorization for the runtime to
connect to QuickBooks.
4. Create a separate Windows user to run the basic runtime. The basic
runtime must run as a desktop application under a specific Windows
user, which can be a Local user.
5. Ensure that the Windows user is always signed into the server. You
can lock the user session but not sign out. Signing out of the server
causes the basic runtime to stop running. If the user signs onto the
server twice, it causes two basic runtime instances to run. It can
result in conflicts as both basic runtimes try to access QuickBooks
simultaneously.
6. Prevent others from signing into QuickBooks as the Admin user,
which can cause conflicts.
7. Have users log into QuickBooks in multiple-user mode for normal
business. If a user logs in under single-user mode, the basic
runtime is prevented from connecting to the company file. Some
administrative functions can be performed only in single-user mode
so be aware that the integration stops during those times. When the
administrative work is complete, switch back to multiple-user mode.
The integration picks up where it left off.
8. Close the QuickBooks client on the server when users switch to
single-user mode.
Prerequisites for QuickBooks Online Edition:
You must have a QuickBooks Online account.
You must obtain an Application Connection Ticket to authorize
Integration to connect to your account. See the topic about
generating a QuickBooks Application Connection Ticket.
Prerequisites for the QuickBooks Online REST API 2013 Edition:
You must have a QuickBooks Online account.
You must create an Intuit developer account to access the
QuickBooks Online REST API’s service endpoint and authorize
Integration as a QuickBooks REST API application. See the help
topics Intuit Partner Platform, Authorizing Integration with
QuickBooks, and Adding a QuickBooks connection that uses the
Online REST API.
Tracked properties
This connector has no predefined tracked properties.
Settings tab
QuickBooks Edition - Choose the type of edition to connect to:
Local QuickBooks Desktop Edition — If you are using this edition,
see the topic Authorizing Boomi Integration to connect to the
Desktop edition.
QuickBooks Online Edition — If you are using this edition, see the
topic Generating an Application Connection Ticket.
QuickBooks Online REST API 2013 — If you are using this edition,
see the topics Authorizing Boomi Integration with QuickBooks and
Adding a QuickBooks Online REST API connection.
Application Name (Desktop)
Used by QuickBooks to identify the client attempting to connect — that is,
Boomi Integration. If you do not specify the application, QuickBooks adds
one at run-time.
Company File Name (Full Path) (Desktop)
The full path and name of the company file (for example, C:\Program Files\Intuit\My Company File.QBW). You can easily find the path from within QuickBooks. Open a company file, press F2, then copy and paste the entire value from Company File > Location.
If not set, the connector attempts to connect to the Open Session. If
QuickBooks is open, the connector connects to the company file with
which you are working. If QuickBooks is not open, you get a connection
error.
File Mode (Desktop)
Sets the way Boomi Integration opens the company file. Select the same
mode that your QuickBooks client uses:
Single-User Mode
Multi-User Mode
Auto Detect (recommended)
The following table provides information about file modes:
Who started QuickBooks | Mode | Who can obtain access
Integrated Application | Single-user | All other integrated applications = access. QuickBooks users on the same machine = no access.
Integrated Application | Multi-user | All other integrated applications = access. QuickBooks users on other machines = access.
QuickBooks User | Single-user | QuickBooks user signed in. Only one integrated application = access.
QuickBooks User | Multi-user | QuickBooks users = access. Integrated applications = access.
Application Connection Ticket (Online)
QuickBooks generates an authentication key to permit Boomi Integration
to access your account. Click Get Connection Ticket. See the
instructions for generating the ticket.
Consumer Key (Online REST API)
A value used by the consumer to identify yourself to Intuit. Obtain this key
from your Intuit developer account on the Intuit Partner Platform site.
Consumer Secret (Online REST API)
A secret used by the consumer to establish ownership of the consumer
key. Obtain this secret from your Intuit developer account on the Intuit
Partner Platform site.
Generate (Online REST API)
Used to generate an access token, token secret, and token expiration
date. The Generate OAuth Access Tokens dialog opens with the
following fields:
Link — A link to a page from Intuit where you can provide sign-in
credentials.
PIN — A field to enter the PIN from Intuit.
Access Token (Online REST API)
A value used by the consumer to gain access to the protected resources
on behalf of the user instead of using the user's Intuit credentials.
Token Secret (Online REST API)
A secret used by the consumer to establish ownership of a given token.
Token Expiration Date (Online REST API)
The expiration date for the access token.
Company ID (Online REST API)
Your QuickBooks Online company ID. Intuit assigns the ID when you get
your PIN (sometimes called the Realm ID).
QuickBooks operation
The QuickBooks operation defines how to interact with the company file.
The operation represents a specific action to perform against a specific
QuickBooks record type or object, such as Purchase Order, Item Receipt,
Customer, and so on. Not all actions are available for every object. Create
a separate operation component for each action and object combination
needed for your integration.
The QuickBooks operation supports the following actions:
Inbound: Get (Get/Read, Query)
The operation requires unique parameters at runtime to request data in all
inbound operations.
See the Parameter values topic to read about statically or dynamically
defining values in a process at runtime.
Outbound: Send (Create or Add, Delete, Update or Modify)
For all outbound operations send request XML data to the connector that
corresponds with the defined XML profile. Usually, request documents are
the destination output of a map, however you can set parameters on the
outbound connector's Parameters tab in the process to supply static or
dynamic request values.
Adding a QuickBooks Online REST API connection
By adding a QuickBooks Online REST API connection, you authorize Boomi
Integration to access QuickBooks Online data.
Before you begin
To complete this task, you must have an authorized Boomi Integration as
a QuickBooks REST API Application and your OAuth Consumer Key and
OAuth Consumer Secret.
About this task
QuickBooks Online REST API users need to authorize (using OAuth)
Integration’s access to their QuickBooks Online data. You can do this by
creating a QuickBooks connection that uses the Online REST API.
Procedure
1. In the Boomi Integration QuickBooks connection component, select
the QuickBooks Online REST API 2013 edition.
2. Enter your OAuth Consumer Key.
3. In the Consumer Secret field, select Click to Set, enter your
Consumer Secret, and click OK.
4. In the Access Token field, click Generate.
The Generate OAuth Access Tokens dialog opens. The page referenced by
the link in this dialog opens automatically in a new browser window.
5. Optional: If the browser does not allow pop-ups, open the link
manually.
6. On the external application page, type your sign-in credentials.
7. Choose the QuickBooks company (boomi.com), and
click Authorize.
You receive a PIN and Company ID. Make a note of both because you need
them in the following steps.
8. Copy the PIN from the external application into the PIN field in
the Generate OAuth Access Tokens dialog.
9. Click Next.
The Generate OAuth Access Tokens dialog closes and populates
the Access Token, Token Secret, and Token Expiration Date fields.
10. Enter the Company ID.
11. Click Save.
Connection tab
URL - The base URL for the Radius by Campus Management service. For
example: https://useastapi.hobsonsradius.com.
User Name - User name for logging into the Radius by Campus
Management service.
Password - The password for your Radius by Campus Management
service.
Connection tab
URL - The URL for the RightNow service:
http://<rightnow_host_name>/cgi-bin/<site_name>.cfg/php/xml_api/soap_api.php
Replace the RightNow host name and site name (in angle brackets) to connect to the proper instance.
User - The user name for logging into the RightNow service.
Password - The password for logging into the RightNow service.
To understand how passwords are securely stored in a connection
component, see the Password encryption security help topic.
Partner operation
The RightNow – Partner operation defines how to interact with your
RightNow CRM account.
For all inbound operations, unique parameters are required at run-time to
request data. Outbound operations send request XML data to the
connector that corresponds to the defined XML profile. Usually, request
documents are the destination output of a map. Review the topic
Parameter values to understand how to define these values statically or
dynamically within a process for run-time.
Supported Environments
The connection supports connections to the Procore Production and Sandbox environments.
Prerequisites
To implement a connection to your Procore account from Integration and
use the connector operations, do the following:
Have the System API User configured in Procore
Have a runtime cloud
Have knowledge of the Procore Object Model
Tracked Properties
This connector has no predefined tracked properties.
Connection tab
To start using the RutterKey Procore - Partner Connector:
Create a connection in Boomi Integration and select the Procore
Connector in the Connector field.
Next, configure your connection. The Connector provides most of the
information you need.
Then select which Procore server to connect to - there's an option for a
custom server for advanced debugging. The Procore Company ID is an
optional field that pre-populates the Company ID field in all created
Connector Operations using this connection.
Connection Types
The connection only supports OAuth 2.0 authentication which can be
configured in the OAuth 2.0 tab.
RutterKey Procore - Partner operation
The RutterKey Procore - Partner connector imports operations from Procore's 1100+ operations and provides dynamic documentation within the connector relevant to the selected operation.
You can create a separate action and operation component for each action/object combination required for your integration.
Import an Operation
Once a connection has been created, create a Connector Operation using
the Procore connector.
Click the Import button on the top-right of the window.
Select the Runtime and Connection.
You can also filter the results here; this supports wildcards and is case-insensitive. The connector formats all operation labels in the same way for easy filtering and uniformity, with the following format: [Endpoint] (v[API Version])
Once the basic runtime, Connection, and any required filter have been set, click Next and the Procore connector will return all the operations from Procore that match the filter.
Select the required operation and click Create.
The connector creates the operation and the required request and response maps, with all required fields present, along with the associated Procore API documentation.
Connection tab
WSDL URL - The WSDL for Saba TalentSpace SOAP service.
SOAP Endpoint URL - The SOAP service URL for Saba TalentSpace. If you
leave this field blank, the endpoint URL from the WSDL is used.
Username - The username for the Saba TalentSpace account.
Password - The password for the Saba TalentSpace account.
Client SSL Certificate (Optional) - Used to select, edit, or remove a
private certificate for SSL authentication. Your private certificate is used to
verify your identity when sending requests to the endpoint. To set up a
client authentication, you must manually export your public certificate and
provide it to your endpoint owner.
Trust SSL Server Certificate (Optional) - Used to select, edit, or
remove a public certificate component for SSL server authentication.
Contact the owner of the endpoint to obtain its public certificate and then
import it into a certificate component.
Connection tab
SAP Host Name - Host name of the server hosting the SAP Business
ByDesign WSIL.
Username - User name used to connect to the SAP Business ByDesign
web services.
Password - Password used to connect to the SAP Business ByDesign web
services.
Connection tab
Connection URL - Enter the JDBC URL of the SAP HANA Database.
User Name - Enter the username for connecting to the SAP HANA DB.
Password - Enter the user’s password.
Note: The above fields are mandatory for connecting to the SAP HANA
Database. When you use Test Connection, the connector validates only
these connection properties.
The following fields are applicable for file transfer (Bulk Load and Unload)
only.
File System User - Specify the username for accessing the SAP HANA
Database file system.
File System Host - Specify the host name of the SAP HANA Database file
server.
File System Port - Specify the secure shell (SSH) port number for the
SAP HANA Database file system.
File System Key File Path - Specify the private key or .pem file path to authenticate with the SAP HANA Database host.
File System Key File Password - Set the password of the private key
file, if the key file is password protected.
Changelog
2024-08 (Version 10, Current)
Upgraded the connector SDK to version 2.18.2.
The SAP HANA Database connector now supports the double colon (::) notation when executing stored procedures.
o For example:
Table Name (stored procedure name) - STORED_PROCEDURE_NAME
Package Name - PACKAGE_NAME
Schema Name - SCHEMA_NAME
HANA full object name - "SCHEMA_NAME"."PACKAGE_NAME::STORED_PROCEDURE_NAME"
Prerequisites
To use the connector and implement a connection to your Shopify
storefront from the Boomi Enterprise Platform, have the following
information available:
Check for TLS 1.2 support in your basic runtime. The Shopify
connector only supports TLS 1.2, and support for TLS versions 1.0 and 1.1 has been removed to better protect your customer data. If you do not enable TLS 1.2 support in your basic runtime, you will
see connection errors. For more information about enabling TLS 1.2
support, see the related Properties panel reference topic (Advanced
tab, HTTPS Protocols property) and the related Community article
about disabling TLS 1.0.
Your storefront address provided by Shopify when you opened your
store. If you can't remember your shop name, you can find it by
logging into your Shopify admin (General Settings page, Store name
field). Also, when logged into Shopify admin, the URL should include
your store name.
For OAuth 2.0 authentication, have your Shopify client ID and secret, the authorization and access token URL, and your scope, and generate an access token. Verify that each entry in the scope is separated by a comma (for example, read_orders,write_orders).
For Access token authentication, have your private app's Admin API
password which you generate from the Shopify admin of the store
you want to connect with your app.
Shopify Access scope defines the permissions for Boomi processes,
impacting what data can be accessed or modified. New access
tokens need to be generated when the permissions associated with
the access scope change.
Supported objects
The following objects are supported by one or more of the Shopify
operation actions:
Actions: Advanced Query, Calculate, Cancel, Close, Complete, Create, Delete, Execute, Get, Hold, Move, Open, Query, Release Hold, Reschedule, Update

Objects: Customer, Custom Collection, Fulfillment, Fulfillment (Deprecated)*, Fulfillment Order, Inventory Item, Inventory Level, Location, Order**, Product, Refund, Transaction, Variant (Product)

Not all actions are available for every object.
* Fulfillment (Deprecated) must be selected if you are using the Shopify
2022-04 API version. The recommendation is to upgrade to the Shopify
2023-01 API version for new Fulfillment updates.
** The Order API version is upgraded to 2023-07 and the 2020-07 version is no longer supported. It is recommended to re-import the schemas to take advantage of all the updates and new features and to adjust for changed or retired features.
Connection tab
The Shopify connector supports the OAuth 2.0 protocol for authentication
and authorization.
In addition, to simplify the connection setup for those with a private
Shopify app within their store, the connector supports Access token
authentication.
Store Base URL
Enter the storefront address provided by Shopify when you opened your
store in the format https://<your_shop_name>.myshopify.com. If you
cannot remember your shop name, you can find it by logging into your
Shopify admin (General Settings page, Store name field). Also, when
logged into Shopify admin, the URL should include your store name.
note
In the URL, make sure to use all lowercase for http and https to ensure the connection works.
Authentication Type
Select the authentication type for the connection.
When you select OAuth 2.0, enter your credentials on the OAuth
2.0 tab.
When you select Access token, enter your private app's Admin API
password in the Access token field.
Access token
Enter your private app's Admin API password, which you generate from
the Shopify admin of the store you want to connect with your app.
Shopify operation
The Shopify operation defines how to interact with your Shopify account.
The operation represents a specific action (Query, Create, Update, etc.) to
be performed against a specific Shopify object type (customer, fulfillment,
order, location, etc.). Create a separate operation component for each
action/object combination required for your integration.
To update your process to the latest supported version, re-import the
schemas. Once the schemas are re-imported, your process will be updated
to the latest version. Additionally, a new read-only field, "API Version," will
be added as a single-option button under the Operation property to
support the updated design of the Shopify API version update.
Note: The connector supports the API version listed in the Shopify
connector topic.
The Shopify operations use JSON format and support the following actions:
Inbound: Advanced Query, Get, Query
Outbound: Calculate, Cancel, Close, Complete, Create, Delete,
Execute, Open, Update, Hold, Move, Release Hold, Reschedule
Outbound Requests
These operations are outbound requests that perform an action on a record on the Shopify platform. Input documents must contain an ID field specifying the ID of the order on which the operation is to be executed. Input documents for fulfillment operations must also contain the ID of the fulfillment on which the operation will be executed; a hypothetical example follows the list below.
Calculate — calculates refunds for a line item and shipping or for a
partial amount of shipping.
Cancel — cancels an order or a fulfillment or a fulfillment order.
Close — marks a fulfillment order as incomplete.
Complete — completes a fulfillment order.
Open — marks the fulfillment order as open.
Move — moves a fulfillment order to a new location.
Hold — applies a fulfillment hold on an open fulfillment order.
Release Hold — releases the fulfillment hold on a fulfillment order.
Reschedule — reschedules the fulfill_at time of a scheduled
fulfillment order.
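For example, a hypothetical input document for a fulfillment-level action such as Cancel might look like the following (field names and ID values are invented for illustration; consult the imported request profile for the exact structure):

{
  "id": 450789469,
  "fulfillment_id": 255858046
}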
Shopify changelog
Changelog
2024-11 (Version 24.11.2, Current)
The Shopify connector has been updated to use the 2024-04 version of the Shopify REST APIs. Additionally, the Shopify object request
and response profiles have also been updated to support this
change. Existing integrations will remain unchanged and continue to
use their original versions.
o APIs of all the objects were updated to use the 2024-04
version. Users must re-import the schemas to update their API
version to 2024-04. Backward compatibility is supported. If
users choose not to update to the latest version, no
modifications will be made.
o We have added or updated fields to Request/Response profiles
to support 2024-04 version, re-import the schemas to apply
the update. To learn more, see the topic Shopify 2024-11
updates.
2024-09 (Version 24.09.0)
Addressed flagged Sonar issues.
2024-06 (Version 24.06.2)
Necessary runtime packages are now packaged with the connector.
2024-02 (Version 24.02.0)
Users can set the deadline with the desired date and time for Fulfillment Orders through the connector.
2023-11
Overridable fields now include Access Token parameters,
Authorization parameters, and Extended JWT Claims (if supported by
the connector).
2023-10
The order response payment_details was incorrectly being returned
as an array when using an imported Orders profile.
The Fulfillment Order API Query action was corrected to not have any
selectable response fields, and all the following fields
except order_id were deprecated from the filtering
option: created_at_min, line_items, receipt, since_id,
tracking_numbers, tracking_urls, updated_at_max, updated_at_min.
The Fulfillment Order APIs were updated to use the 2023-07 version.
2023-09
We deleted the following deprecated fields from the
Request/Response JSON schemas:
o total_spent
o total_price_usd
o last_order_id
o last_order_name
o orders_count
o origin_location
o destination_location
Users must re-import the schemas to get the deletion updates reflected in
the latest 2023-07 version of the Order API.
API_KEY - The API Key to authenticate the connection. You can find your
API Key by logging in to your account at https://app.signifyd.com and
reviewing the Settings page. If you need further assistance, contact
Signifyd Support at [email protected].
Connection tab
Important
The documentation for this connector is provided by a Boomi partner. This
documentation was last updated in August 2024.
To create a connection from the Skyflow API to Integration:
1. Click Create to add a new connection.
2. Select the Skyflow - Partner connector from the list of available
connectors.
3. Enter your Skyflow Vault URL in the Server URL field.
4. Set the Authentication Type to CUSTOM.
5. Enter your Skyflow API Key in Customer Authentication
Credentials.
Skyflow - Partner operation
Important
The documentation for this connector is provided by a Boomi partner. This
documentation was last updated in August 2024.
The Skyflow - Partner connector supports the following operations:
Insert record: Securely add new data to a Skyflow vault.
Get record: Selectively retrieve data from a Skyflow vault.
Update record: Modify existing data in a Skyflow vault while
maintaining an audit trail.
Delete record: Securely remove data from a Skyflow vault.
Tokenize: Replace sensitive data with non-sensitive tokens in a
Skyflow vault.
Detokenize: Retrieve the original sensitive data from tokens in a
Skyflow vault.
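As a hedged sketch of an Insert record input (not the connector's exact
imported profile), the records/fields envelope below follows Skyflow's
public REST API, and the table-style field names (name, ssn) are
hypothetical:

{
  "records": [
    {
      "fields": {
        "name": "Jane Doe",
        "ssn": "123-45-6789"
      }
    }
  ]
}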
Slack connector
The Slack connector simplifies integration with Slack, the cloud-based
collaboration tool, to help improve collaboration in your organization.
Connector configuration
To configure a connector to communicate with Slack, set up these two
components:
Slack connection
Slack operation
Prerequisites
To implement a connection to Slack from Boomi Integration, do the
following:
Create a new Slack application at https://api.slack.com/apps/. Be
sure to select the Boomi workspace to create your Slack app.
Get your Slack client ID and client secret.
Make sure a basic runtime is deployed on your local machine or
hosted with Boomi Integration.
Tracked properties
There are no tracked properties in the Slack connector.
Connection tab
Complete the connection settings in the OAuth 2.0 tab.
Authentication Type - OAuth 2.0 is the default authentication and
cannot be changed.
OAuth 2.0 tab
Client ID - The Client ID obtained from Slack.
Client Secret - The client secret obtained from Slack.
Access Token - The encrypted access token retrieved from Slack that is
used to access protected resources. Click to generate the access and
refresh tokens. When a new window opens asking you to confirm offline
access, click Accept. A second message indicates the access token was
received and successfully generated.
Slack operation
The Slack operation defines how to interact with your Slack account.
The operation represents a specific action (Query, Create, Update, etc.) to
be performed against a specific Slack record type or object type (files,
groups, channels, etc.). Create a separate operation component for each
action/object combination required for your integration.
The Slack operation supports the following actions:
Get — retrieves channels info, files info, groups info, users info,
users presence
Query — searches channels history, channels list, files, files list,
groups list, group history, users list, usergroups list, messages
Create — creates channels, chat messages, groups, child groups,
usergroups
Delete — deletes chats and files
Update — updates chats and usergroups
Execute — writes to the following endpoints:
o Channels — archive, invite, join, kick, leave, mark, rename,
setPurpose, setTopic, unarchive
o Files — revokePublicURL, sharedPublicURL, upload binary,
upload text
o Groups — archive, close, invite, kick, leave, mark, open,
rename, setPurpose, setTopic, unarchive
o Usergroups — disable and enable
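For instance, a Create action for a chat message corresponds to Slack's
chat.postMessage endpoint, which accepts a channel and message text. A
minimal sketch of the JSON input might look like the following; the
channel ID and text are placeholders, and the exact shape depends on
the profile you import:

{
  "channel": "C0123456789",
  "text": "Build finished successfully"
}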
Slack changelog
Changelog
2024-11 (Version 24.11.0, Current)
Necessary runtime packages are packaged with the connector.
2023-11
Overridable fields now include Access Token parameters,
Authorization parameters, and Extended JWT Claims (if supported by
the connector).
Snowflake connector
The Snowflake connector allows you to leverage the capabilities of a
Snowflake data warehouse: speed, usability, and flexibility.
This connector is a key part of the Integration process that makes it easy
to work with Snowflake, one of the fastest growing cloud data
management platforms.
Warning
The Snowflake connector is supported on the Boomi Local basic
runtime, runtime cluster and Private runtime cloud. The Snowflake
connector is not supported on the Public runtime cloud.
note
This is an open source connector and you can access the code in
the Snowflake connector repository.
When using the Snowflake connector in a Private runtime cloud,
make sure to update its Properties – Advanced configuration to set
the Security Policy to Low. For more details, refer to the topic
Security policies for Clouds and runtime clusters.
Connector configuration
To configure the connector to communicate with the Snowflake tenant, set
up the following components:
Snowflake connection
Snowflake operation
This approach provides reusable components, which contain connection
settings such as username, password, etc. After building your connection
and operation, set up your connector within a process. When you have
properly configured your connector within your process, Integration can
map to and from virtually any system using the Snowflake connector to
query, create, update and/or delete data.
Supported editions
The Snowflake connector is now packaged with the Snowflake JDBC driver
version 3.13.29.
Attention
If you encounter an issue such as the JDBC driver not being recognized,
try deploying the driver into the Integration Account Library. To do this:
1. Download a Snowflake JDBC driver. This is a JAR file (snowflake-jdbc-
#.#.#.jar, where #.#.# is the JDBC driver version number). To
download the driver, see the Downloading the Driver topic in the
Snowflake documentation. The Snowflake JDBC driver requires a 64-
bit Boomi runtime using Java 1.8 (or higher).
2. Deploy the JDBC driver by uploading the JAR file into the Integration
Account Library (Setup > Account Libraries). Then, create a Custom
Library component that references that file, set the Custom Library
Type to Connector, and set the Connector Type to Snowflake. Finally,
deploy the Custom Library to the appropriate basic runtime or
runtime cluster environment(s).
If you face any error during driver deployment, manually copy the driver
(snowflake-jdbc-x.xx.x.jar) to the runtime userlib directory, restart the
runtime, and try again. Contact support if required.
The uploaded or imported files are first passed through a virus scanner. If
any virus is detected, the upload or import results in an error and the file
is rejected. Contact Boomi Support if the error persists.
Limitations
The Snowflake connector has the following limitations:
SnowSQL operation —
o When the batch size is greater than 1, the update count for each
statement cannot be retrieved. However, the Snowflake-specific
behavior can be used to return the query IDs of the statements
executed in the batch, which the user can then track back in
Snowflake.
o When the batch size is greater than 1 and a DQL statement
appears first in the SQL script, an exception is thrown with error
code net.snowflake.client.jdbc.ErrorCode.UPDATE_FIRST_RESULT_NOT_UPDATE_COUNT.
This exception can be ignored because it does not prevent the
execution of the SQL script.
Create operation with Snowflake defaults for empty
fields option — Batching is not supported.
In the JSON schema data type mapping, only the data types are
mapped, not the format. Hence, for seamless execution of the
process, make sure to set the required format manually in the
request and/or response profile. For example:
o Float data type with two precision and two scale — in the
profile, set the format as ##.##.
o Date/Time, Date, and Time data types — if the AUTO option is
selected to use a custom date, time, or date/time format,
make sure to set the required format manually in the profile.
note
To apply this data type mapping, re-import the profile in the existing
integration process. Otherwise, the existing profiles will continue to use
the old data type mapping (which maps all Snowflake data types to Boomi
characters).
All input request documents (Create, Update, Execute, and
SnowSQL) must not exceed 1 MB. Only one request can be provided
to the connector at a time. If required, use Bulk Operations to
handle large volumes of data.
Warning
For the Snowflake connector, the Query Text Size (i.e., SnowSQL) should
not exceed 500 KB.
Prerequisites
The following prerequisites are necessary to implement a connection to
your Snowflake account:
A basic runtime deployed on your local machine or hosted with
Boomi Integration.
Java environment variables on your system, such as JAVA_HOME,
CLASSPATH.
Necessary authorization to access the database.
Connection URL of the Snowflake server.
One of the following authentication types:
o Username and Password
o Private Key and Passphrase
Boomi Integration Username and Password with the necessary
credentials.
Amazon S3 access key, secret key, and bucket details with the
following permissions on the S3 bucket to access files in the folder
(and subfolders):
o s3:GetObject
o s3:GetObjectVersion
o s3:ListBucket
o s3:ListAllMyBuckets
note
These fields are optional and applicable only when you are using the
Amazon S3 external storage.
The additional s3:PutObject and s3:DeleteObject permissions are
required if you plan to unload files to the bucket or automatically
purge the files after loading them into a table.
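As a sketch only, an IAM policy granting the read-side permissions above
on a hypothetical bucket named my-snowflake-stage might look like the
following; add s3:PutObject and s3:DeleteObject to the first statement if
you plan to unload or purge files:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": "arn:aws:s3:::my-snowflake-stage/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-snowflake-stage"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    }
  ]
}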
Tracked properties
This connector has the following tracked properties that you can set or
reference in various step parameters:
SnowSQL – Refers to the SnowSQL script containing the SnowSQL
command you want to execute.
Snowflake operation
The Snowflake operation defines how to interact with the connection,
including transactions, batching, custom properties, etc.
The Snowflake connector supports the following actions:
Inbound: Get, Query, Bulk Unload, Bulk Get, Copy Into Location
Outbound: Create, Update, Delete, Bulk Load, Bulk Put, Copy Into
Table
In/Outbound: Execute, SnowSQL
Changelog
2025-01 (Version 25, Current)
Enhanced the connector to
support Database and Schema via operation properties, which
can be overridden using the Dynamic Operation Properties.
Improved flexibility with Truncate functionality. If the
same Database and Schema combination is repeated for the
documents, truncation is performed only once per combination,
eliminating redundant truncations for each file.
2024-09 (Version 24)
Enhanced the Snowflake connector so that the Bulk Unload
operation properties (S3 Bucket Name, AWS Region, and Internal
Stage Name) can be overridden via dynamic operation properties.
Enhanced the Snowflake connector so that the Bulk Load operation
properties (S3 Bucket Name, AWS Region, Internal Stage Name, and
Internal Source File Path) can be overridden via dynamic operation
properties.
A Denial of Service vulnerability was resolved to make the connector
more secure.
2024-08 (Version 22)
Upgraded the connector SDK to version 2.18.2.
2024-07 (Version 21)
The connector now supports the Create action for multiple
documents with different missing fields when the Input Options for
Missing Fields field is configured with the Snowflake defaults for
empty fields option and Batch Size is set to one.
A Denial of Service vulnerability was resolved to make the connector
more secure.
Fixed multiple critical and major code quality issues for the
Snowflake connector.
The Number of SnowSQL Statements tooltip was updated to make it
more informative, and the Batch Size tooltip was updated for getting
query results.
2024-06 (Version 20)
Heap memory usage complies with acceptable standards, resulting
in successful execution processes.
2024-04 (Version 19)
Supports applying multiple filters on the same column, enabling the
querying of data within a specified range, based on the column. For
example, retrieving data between two dates.
The document Tracking Direction option for Bulk Unload now
defaults to Output Documents.
The Snowflake connector supports the key pair authentication
mechanism using OpenSSL for generating public/private keys.
Backward compatibility is also ensured to support older OpenSSL
versions, namely:
o OpenSSL 0.9.8h
o OpenSSL 1.1.1v
o OpenSSL v3.x.
The Snowflake connector can connect using the key pair
authentication mechanism with private/public keys generated via
the Python cryptography library's asymmetric RSA functions.
2024-02 (Version 18)
When one of the documents (without batching) failed at the
SnowSQL operation, all documents were marked as failed and
moved to the Catch path. Now, all documents are correctly labeled
as successful or failed.
The Snowflake Connector Class code was not loading the JDBC
Driver class correctly with Java 1.8.
The Snowflake Bulk Load operation was not accepting the Other
Format Options value for the CSV file format.
Stripe connector
The Stripe connector enables you to integrate with Stripe, a payment
processing platform that enables businesses to accept online
payments and handle global transactions.
You can use the Stripe connector to Create, Delete, Get, List, Execute,
Retrieve, Query, List Line Items, Update and Update Lines objects in
Stripe. You can integrate Stripe and an ERP (for example, NetSuite) to
synchronize online payment transactions with your financial records in
real-time.
Connector configuration
To configure a connector to communicate with Stripe, set up these
components:
Stripe connection
Stripe operation
Supported editions
The connector supports the following version of the Stripe API: v1.
Supported objects
The following parent objects are supported by one or more of the Get,
Delete, Create, Update, List, Query, and Retrieve actions:
Customers
Payment Links
Prices
Products
Invoices
Invoice Items
Payment Intents
Charges
Upcoming Invoice
Upcoming Invoice Lines
Balance Transactions
The following parent objects are supported by the List Line Items and
Update Lines actions:
Invoices
Payment Links
The following Business Actions are supported by the Execute action:
Reconcile Customer Balance for Payment Intent
Verify Microdeposits for Payment Intent
Authorize Increment for Payment Intent
Capture Charges
Capture Payment Intent
Confirm Payment Intent
Cancel Payment Intent
Send Invoice
Void Invoice
Pay Invoice
Finalize Invoice
Mark Invoice as Uncollectible
Tracked properties
This connector has no predefined tracked properties.
Connection tab
The Stripe connector supports the usage of API keys for authentication
and authorization.
Description
Base URL - The address is defaulted to the Stripe API Base
URL : https://api.stripe.com/v1.
API Key
Enter the unique API key, obtained from the Stripe Dashboard under
Developers > API Keys.
Stripe operation
The Stripe operation defines how to interact with your Stripe account.
The operation represents a specific action to be performed against a
specific Stripe object type. Create a separate operation component for
each action/object combination required for your integration.
The Stripe operations use JSON format and support the following actions:
Inbound: Get, List, Query, List Line Items, Retrieve.
Outbound: Create, Update, Delete, Execute, Update Lines.
Stripe Objects supported: Customers, Prices, Products, Payment Links,
Invoices, Invoice Items, Payment Intents and Charges.
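For example, a Create request for the Customers object might carry a
body like the following sketch; email, name, and description are standard
Stripe customer fields, but verify the exact shape against the imported
request profile:

{
  "email": "[email protected]",
  "name": "Jenny Rosen",
  "description": "Customer created from a Boomi process"
}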
Changelog
2025-01 (Version 5, Current)
New actions have been introduced for Balance Transactions and
Capture Charges objects.
The Balance Transactions object now supports the Get and List
actions, with corresponding request and response profiles. The
Charges object now supports the Capture Charges Execute action.
2024-11 (Version 3)
This connector is now in General Availability (GA).
New actions have been introduced for various objects, including
Execute, Retrieve, List Line Items, and Update Lines for Customers,
Products, Prices, Payment Links, Invoices, Invoice Items, Payment
Intents, and Charges. Additionally, a new Query action is now
available for Customers, Products, Prices, Payment Intents, Invoices,
and Charges.
2024-09 (Version 2)
Enhanced Tech Preview connector
o When an action is selected for the Stripe connector, only the
supported object types for that action (e.g., Customers,
Charges, Invoices) are displayed on the Import tab.
o Updated Request/Response Profiles for Customers, Products,
Prices, Payment Links, Payment Intents, Charges, Invoice
Items, and Invoices to resolve earlier profile configuration
mismatches and ensure smoother integration.
o Introduced the new List action for the objects Customers,
Products, Prices, Payment Links, Invoices, Invoice Items,
Payment Intents and Charges.
o Introduced new objects Invoices, Invoice Lines, Charges and
Payment Intents.
o Enhanced user experience by making the id parameter
available as a connector step parameter for the Get action
and as part of the request profile for the Update action.
2024-08 (Version 1)
Initial Tech Preview release of the connector. To learn about support
levels of Tech Preview connectors, see the topic Feature Release
Stages.
Connection tab
The connector uses OAuth 2.0 Resource Owner Password Credentials
Grant flow type for username and password authentication.
Base URL - Enter the URL that identifies where the module is available
and its version, for
example https://dellboomiisvpr.sugarondemand.com/rest/v10/.
Client Id - The client ID obtained from SugarCRM (Admin > OAuth Keys).
Client Secret - The client secret obtained from SugarCRM.
Username - Enter the user name required to authenticate and log into
SugarCRM.
Password - Enter the password for your SugarCRM account.
Taleo connector
The Taleo connector enables you to move data into and out of the Taleo
Business Edition application.
Use the Taleo connector to integrate any on-premise applications and
legacy systems and integrate Taleo with other On Demand or SaaS-based
applications such as Intacct, Salesforce, NetSuite, etc.
The Taleo connector connects directly to the Taleo application and browses
the interfaces in real-time. If any of your Taleo objects are customized with
additional fields, these fields automatically appear when browsing the
Taleo interface with Boomi Integration.
Integrating with Taleo consists of web service calls using XML
request/response setup over an HTTPS connection. The technical details of
this connection, such as request headers, error handling, HTTPS
connection, etc. are abstracted to simplify the implementation.
Connector configuration
To configure a connector to communicate with Taleo, set up two
components:
Taleo connection
Taleo operation
This design provides reusable components, which contain connection
settings (such as URL, User, Password) and operation settings (such as
Object, Action, data definitions). After building your connection and
operation, set up your connector within a process. When you have
properly defined the Taleo connector within your process, Integration can
map to and from virtually any system using the connector to retrieve data
from or send data to Taleo.
Supported editions
The Taleo Business Edition is supported.
Prerequisites
Either create a new user within Taleo or use an existing user to access the
Taleo API with the Taleo connector. Because Integration connectivity is
subject to the same access control restrictions as users, as defined within
Taleo, validate the controls you need the API to use and define a user with
the appropriate controls.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
Organization Code - The company code of the Taleo customer.
Login User Name - The user name for logging into the Taleo instance.
Login Password - The password for logging into the Taleo instance.
Connection tab
If you use multiple accounts, you need a separate connection component
for each account. You can pair this connection with different Taleo
operations to perform a unique action against a Taleo instance.
Organization Code - The unique Taleo Business Edition company
instance, referred to as a company code.
Login User Name - The unique user name of a Taleo Business Edition
user. The user must be an administrator.
Login Password - The user’s password.
Connection tab
To connect to multiple Agile PLM systems, use a separate connection for
each system and configure credentials accordingly. Pair a single
connection with different available operations to perform the desired
actions on the Agile PLM system.
The Teqtron Agile PLM connection involves a connection to the Agile PLM
system, as well as to the SFTP server allocated for the Agile PLM system.
These settings are common and accessible to all operations built into
the connector. The Agile PLM connection settings are primarily used to
connect with the Agile PLM system's application interface service
operations. The SFTP connection settings are used by the Get and Send
operations on the SFTP server allocated for the Agile PLM system.
WSDL Url - Oracle Agile PLM Custom PXs WSDL URL extension
Version - Oracle Agile PLM application version
Protocol Type - Client security protocols
Host - Oracle Agile PLM application host name
Port - Oracle Agile PLM application port
Username - Oracle Agile PLM application username
Password - Oracle Agile PLM application password
Trust SSL Certificate - Trust the application SSL certificate when the
HTTPS protocol type is used
SFTP Port - Agile Content Service SFTP destination port
SFTP Username - Agile Content Service SFTP destination username
SFTP Password - Agile Content Service SFTP destination password
Connection tab
Important
The documentation for this connector is provided by a Boomi partner. This
documentation was last updated in July 2024.
The Thru MFT – Partner connector uses the following information to
connect to Thru MFT.
Port - The port number to connect to the Toad Intelligence Central MySQL
database. Use the default of 3566 unless you modified it during the Toad
Intelligence Central installation.
User - The name of the Toad Intelligence Central account to use when
connecting to the server.
Twilio connector
Use the Twilio connector to query metadata about a Twilio account, phone
numbers, calls, text messages, and recordings; initiate outbound calls;
and send text (SMS) messages in a Boomi Integration process.
note
The component of the Twilio API that provides support for handling
incoming calls through TwiML instructions is accessible to a Boomi
Integration process through the Web Services Client connector.
Connection configuration
To configure a connector to communicate with Twilio, set up two
components:
Twilio connection
Twilio operation
Prerequisites
To implement a connection to Twilio from Integration, you must:
Have a Twilio account.
Establish credentials for use with the connector.
Supported objects
The following objects are supported by one or more of the Twilio operation
actions:
*SMSMessage is deprecated and is no longer maintained or updated. It is
expected to be removed in future versions of the Twilio API.
**Message is the replacement for SMSMessage and is the recommended
option to get all the latest features and updates.
Supported editions
The Twilio connector supports Version 2010-04-01 of the Twilio REST API.
Unsupported editions
The Twilio connector does not support Version 2008-08-01 of the Twilio
REST API.
Tracked properties
This connector has no predefined tracked properties.
Connection tab
Account SID - Enter the unique Twilio Account SID, obtained from Twilio.
Auth Token - Enter the Authentication token, obtained from Twilio.
Twilio operation
The Twilio operation defines how to interact with your Twilio account and
represents a specific action (Query, Create, etc.) to be performed against
a specific Twilio object type.
Create a separate operation component for each action/object
combination required for your integration. Not all objects are available for
each action.
The following list provides some generalities and specifics:
For all inbound operations, unique parameters are required at run-
time to request data.
For all outbound operations, send request XML data into the
connector that corresponds with the defined XML profile. Request
documents are typically the destination output of a map. Some
outbound actions on particular objects result in the manipulation of
Twilio account data and in the execution of a tangible action.
Review the Parameter values topic to understand how to statically
or dynamically define values in a process for run-time.
Twilio specifies phone numbers in E.164 format, for
example +13115556789. A raw caller ID string in a response means
Twilio was unable to normalize the incoming caller ID to E.164.
Twilio specifies timestamps as GMT in RFC 2822 format. For
example, 6:13:42 PM PDT on September 16, 2012, is specified
as Mon, 17 Sep 2012 01:13:42 +0000.
Connector configuration
The UKG Pro connector connects directly to the UKG Pro application and
browses the interfaces in real time. Integrating with UKG Pro consists of
web service calls utilizing XML request/response set up over an HTTPS
connection. The technical details of this connection, such as request
headers, error handling, and HTTPS connection, are abstracted to simplify
your implementation.
Prerequisites
Have both your login and password to access the UKG Pro web
service.
Confirm that your UKG Pro account has privileges to connect to the
UKG Pro API. Your UKG Pro representative can confirm your
privileges by providing you with a Sender ID and password specific
to the integration API.
Connection tab
Service Root URL - Enter the base URL of your UKG Pro service, such
as https://service101.ukgpro.com/api/v1/.
User API Key - Enter or paste your User API key. You can retrieve the User
API and Customer API keys from your UKG Pro administrator account by
selecting System Configuration > Security > Web Services.
Get — retrieves data from UKG Pro and returns it to the process.
Ultipro (UKG) — An object having a composite primary key must use the
QUERY operation of the connector. Objects with a single primary key must
use the GET operation. Click Import Operation to select the object(s) to
integrate. The Import wizard uses the connector browser and imports the
request and response profiles for the selected action.
Options tab
The following table describes the default field definitions for each action
type:
Object - An object defines what you want to integrate or the action to run,
which you select in the Import Operation wizard.
If you clear the setting, the process stops and reports the error on
the Process Reporting page.
If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.
$skip - Skip the designated number of records and return the remaining
records. For example, entering 25 skips the first 25 records and returns
records 26 and higher.
$top - Retrieve the designated number of top records. You can combine
$top with $skip. For example, setting $skip to 25 retrieves records 26 and
up; setting $top to 5 then retrieves records 26 through 30.
Veeva connector
Veeva is a cloud-based customer relationship management platform for
the life sciences and pharmaceutical industries. The Veeva connector
allows you to create and interact with objects in the Veeva Vault in various
ways.
note
This is an open source connector and you can access the code in
the Veeva connector repository.
Connector configuration
Veeva connection
Veeva operation
Prerequisites
Tracked properties
Connection tab
User Credentials
If the Vault Authentication API burst limit is exceeded while
requesting the Session ID, the connector waits 60 seconds before
retrying once when you use the User Credentials authentication
type.
note
The Veeva Developer Docs use the terms Vault DNS and Vault
Subdomain interchangeably. While using this connector, Boomi advises
you to use the same input for both Vault DNS and Vault Subdomain to
avoid issues.
API Version - The Veeva REST API version. The default is v23.3, the
current GA version.
Client ID (Optional) - Enter your header name, if any. If you provide a
value, it is concatenated after Boomi_. This value is trimmed before being
used in the X-VaultAPI-ClientID header.
OAuth 2.0
Access Token - The encrypted access token retrieved from the external
Authentication Server to access protected resources. Click to generate the
access and refresh tokens. When a new window opens asking you to
confirm offline access, click Accept. A second message indicates the
access token was received and successfully generated.
Test Connection
You can test your connection settings before you use or save the
connection in a process. The Test Connection ensures that your specified
settings are correct, valid, and accessible. If the test connection is
successful, you can save the connection. Otherwise, review and correct
any incorrect settings, then test again.
Attention
You can only run the Test Connection against containers associated with
your account (local basic runtimes, runtime clusters, and runtime clouds).
URL logging
Every time the connector executes a request to the service, the URL is
logged. The table below details where the logs for different executions are
stored.
Requests execution - Logger
Operations - Document
Test Connection - Container
Browser - Document
Retries - Container
Workday connector
Connector configuration
Workday connection
Workday operation
Connection tab
URL - The URL for the Workday service, such as https://<workday host
name>.workday.com/ccx/service/<tenant name>/. The trailing ‘/’ is
required. Configure the bracketed Workday host name and tenant name to
successfully connect to the correct instance.
Version - The web service version including point releases, such as 23.2.
You do not need to enter ‘v’ because Integration inserts it for you.
Username - The user name for logging into the Workday service with the
format username@tenantname.
Workday operation
The Workday operation defines how to interact with your Workday account
and represents a specific action to be performed against a specific
Workday object type (Adjust_Time_Off, Add_Dependent, Add_Stock_Grant,
etc.).
The Workday connector operation supports the Execute action. You must
create a separate operation component for each object required for your
integration.
When importing profiles, date and dateTime fields must use one of the
two supported formats: date yyyy-MM-ddZZ and dateTime yyyy-MM-
dd'T'HH:mm:ss.SSSZZ.
Options tab
When you create a new operation or want to change the service specified,
use the Options tab. Click Import Operation to use the Import wizard
and select the object you want to integrate with. An import is always
required to change the Service Name, Custom Service Name, and to
re-import objects. Otherwise, the component XML is not updated.
The following fields appear on the Options tab:
Object - An object defines what you want to integrate or the action to run,
which you select in the Import Operation wizard.
note
If you clear the setting, the process stops and reports the error on
the Process Reporting page.
If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.
Return All Pages - If selected, the connector makes repeated calls to the
endpoint so that all pages are returned as separate documents.
note
Service Name - The name of the Workday service you want to interact
with. For new operations or to change the service, click Import and select
the Workday service you want to use.
note
You must select Custom to specify a Workday service that is available to
your account, but not displayed in the drop-down list. The Service
Name field always takes precedence over Custom Service Name.
Custom
Absence Management
Academic Advising
Academic Foundation
Admissions
Benefits Administration
Campus Engagement
Cash Management
Compensation
Compensation Review
Delegated Authentication
External Integrations
Financial Aid
Financial Management
Human Resources
Identity Management
Integrations
Inventory
Notification
Payroll
Payroll CAN
Payroll FRA
Payroll GBR
Payroll Interface
Performance Management
Professional Services Automation
Recruiting
Resource Management
Revenue Management
Settlement Services
Staffing
Student Finance
Student Records
Student Recruiting
Talent
Time Tracking
Workday Connect
Workday Planning
With the ability to bring external data sources into Workday Prism
Analytics and govern them along with existing Workday Prism Analytics
data, your business can generate financial and HR insights that can be
shared across the organization.
Through this connector, you can use a Boomi Integration process to
perform operations such as Get, Create (Dataset/Table and Bucket),
Upload, Complete Bucket, and Import.
Connector configuration
Using the Workday Prism Analytics connector, you can use an Integration
process to upload large data files to a Workday Prism Analytics table. To
improve performance and efficiency, the connector breaks up the files into
smaller chunks.
note
New to the connector is support for the Workday Table object in the Data
Catalog, added in the Workday 33 release.
Prerequisites
Supported editions
Tracked properties
Connection tab
note
API Service Endpoint - Enter the URL for the Workday REST API service
endpoint. For example, the URL to your data center and production
service may
be https://wd2-impl-services1.workday.com/ccx/api/v1/{tenant_id}. The
URL is analyzed to determine and set the base host API and tenant
account. You can find the endpoint in Workday using the View API Client
task.
Client Secret - The client secret obtained from Workday. Enter the
alphanumeric client secret that is associated with the Client ID. The Client
Secret is generated in Workday while registering the API client in the
tenant using the Register API Client for Integrations task.
Refresh Token - Enter the refresh token allowing access to Workday. You
can generate a refresh token in Workday when you register the API client
in your tenant using the Register API Client for Integration task.
Workday Prism Analytics operation
The Workday Prism Analytics operation defines how to interact with your
Workday account and represents a specific action to be performed: Get,
Create (Dataset/Table and Bucket), Upload, Complete Bucket, or Import.
Inbound: Get
Options tab
Click Import Operation, then use the Import wizard to select the object
with which you want to integrate. When you configure an action, the
following fields appear on the Options tab.
Object - Defines the object with which you want to integrate and which is
selected in the Import Wizard.
note
If the tracking direction is read-only, the feature to change the direction is
either unavailable or the developer set the configuration to read-only. The
default value you see shows you which document appears in Process
Reporting.
If you clear the setting, the process stops and reports the error on
the Process Reporting page.
If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.
Get
Create
Create a table.
Create a bucket.
Creating a table
When you need to load data into Workday Prism Analytics, the first step is
to use the Create action to create a new empty table. Before you begin,
verify that you have the appropriate access rights and permissions to
create datasets and tables using the Workday user interface and
documentation at doc.workday.com. The JSON input document for the
table only requires the name, but you can optionally provide a label and
description. After creating the table, you can reuse it for the buckets
containing the CSV files that you want to upload to the table.
{
  "name": "name of the table",
  "displayName": "display name of the table",
  "fields": [
    {
      "ordinal": 1,
      "name": "Column Name 1",
      "description": "Description of the column",
      "precision": 255,
      "scale": 0,
      "type": {
        "id": "fdd7dd26156610006a12d4fd1ea300ce",
        "descriptor": "Text"
      }
    },
    {
      "ordinal": 2,
      "name": "Column Name 2",
      "description": "Description of the column",
      "precision": 255,
      "scale": 0,
      "type": {
        "id": "fdd7dd26156610006a12d4fd1ea300ce",
        "descriptor": "Text"
      }
    }
  ]
}
note
The table name and column names of the table should be unique.
Creating a bucket
After creating the dataset, your next step is to use the Create action to
create the bucket, which is a temporary folder for the CSV files that you
want to upload to the dataset. When creating the bucket, you select either
a specific dataset or a Dynamic Dataset (you provide the dataset using a
document property) when browsing. Specify options for how the bucket
name is generated, how fields in the file to upload are enclosed and
delimited, and the number of lines to ignore in the file to upload. The JSON
input document for the bucket requires the name, some fields to define
the schema of the files to upload, and the dataset.
When browsing to create a bucket, you can select the Use existing
schema option to retrieve the schema fields from a dataset that has
already been uploaded and use that schema in the new bucket you are
creating. If the dataset does not have an uploaded schema, or you do not
select this option, you must define the schema in an input document as
shown below.
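The following is a hedged sketch of such a bucket input document. The
targetDataset and parseOptions names are taken from Workday's public
Prism Analytics API rather than from this connector's imported profile,
and all values are placeholders; the fields array reuses the same shape
as the table example above:

{
  "name": "bucket_for_q1_upload",
  "targetDataset": {
    "id": "replace-with-dataset-id"
  },
  "schema": {
    "parseOptions": {
      "fieldsDelimitedBy": ",",
      "fieldsEnclosedBy": "\"",
      "headerLinesToIgnore": 1
    },
    "fields": [
      {
        "ordinal": 1,
        "name": "Column Name 1",
        "precision": 255,
        "scale": 0,
        "type": {
          "id": "fdd7dd26156610006a12d4fd1ea300ce",
          "descriptor": "Text"
        }
      }
    ]
  }
}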
note
Buckets expire after 24 hours, and after that period they can no longer be
used to upload files. If you do not complete the entire data loading
process (create the dataset/table, create the bucket, upload files into the
dataset/table, complete the bucket) in the 24-hour period, you must start
over by creating a new bucket.
Upload
The Upload operation is used to upload the CSV files into the bucket after
it is created. Use the Set Properties step and provide two required
document properties:
Filename — the name of the CSV file that you want to upload into
the bucket.
Bucket ID — the ID of the bucket into which the file is uploaded.
You can also provide an optional document property for the number of
header lines to ignore. If set, the document property overrides the
operation setting.
Complete Bucket
The Complete Bucket operation is used to initiate the data transfer from
bucket to dataset once the CSV files are uploaded successfully into the
bucket. The input JSON for Complete Bucket provides the Bucket ID, and
the output is a JSON document containing the Bucket ID and the state of
the bucket (either Success or Failed). The Bucket ID should be added as an
input parameter in the Connector step. When successful, a "Success"
status is received stating that the bucket was completed. The Object Type
is Bucket.
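As a sketch, assuming the profile uses id and state as the key names (an
assumption; this topic only names the values they carry), the Complete
Bucket input and output might look like:

Input:
{
  "id": "replace-with-bucket-id"
}

Output:
{
  "id": "replace-with-bucket-id",
  "state": "Success"
}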
Import
The Import operation is used to create a bucket and upload the files into
the dataset for a pre-existing table. Upon execution, this operation
creates the bucket, uploads the files into it, and completes the bucket.
Changelog
Zuora connector
Use the Zuora connector to connect to Zuora’s Z-Commerce Platform to
access your Z-Billing and Z-Payments data.
As a Zuora user, you can use the Integration Zuora connector to integrate
any on-premise applications and legacy systems, and to integrate Zuora
with other On Demand or SaaS-based applications such as Salesforce,
NetSuite, etc.
Important
When developing new processes, use the Zuora REST connector, which is
an update to this Zuora legacy connector. All legacy SOAP calls are
available in REST.
Connector configuration
Zuora connection
Zuora operation
Supported editions
The Zuora connector supports v.39, v.59, and v.69 of the WSDL. New
objects and operations are available in version 69.
In WSDL 68 and earlier versions, the Zuora SOAP API treated many date
fields as dateTime fields. In WSDL 69 and later versions, Zuora treats
those fields only as date fields and no longer accepts dateTime values.
note
Before downloading and using WSDL 69 or later, Zuora cautions that you
should check if your SOAP integration passes or receives dateTime values
to or from any of these fields.
The WSDL is common across all Zuora instances or tenants and does not
contain custom fields configured in your specific tenant. However, you can
extract and populate custom fields. See the Zuora operation and Zuora
operation’s inbound actions topics for more information.
Prerequisites
A Zuora user name with password, configured with the API Access
permission.
Tracked properties
Connection tab
Web Service Site - The Zuora web service site. The default for
Production is https://www.zuora.com/apps. For the API sandbox,
use https://apisandbox.zuora.com/apps.
User Name - The Zuora tenant user name for logging into the web
service.
Password - The Zuora tenant password for logging into the web service.
Zuora operation
The Zuora operation defines how to interact with your Zuora account and
represents a specific action (Query, Create, etc.) performed against a
specific Zuora object type (such as Account, Invoice, Payment).
Create a separate Operation component for each object required for your
integration.
Get
The inbound Get action, available in versions 59 and 69, returns these
objects:
Query
Account Full
Invoice Full
Amendment
Communication Profile
Contact
Invoice
Invoice Adjustment
Invoice Item
Invoice Payment
Payment
Payment Method
Product
Rate Plan
Refund
Subscription
Taxation Item
Usage
Object types that have the word “Full” in their name (Account Full, Invoice
Full, Invoice Split Full, and Payment Full) are custom Integration object
types. When you query these objects, the connector performs some joins
to build complex objects. For example, the regular Account object type
has a BillToId field that contains a Contact Id field. When you use the
Account Full object type, BillToId is replaced by BillToContact that has all
the Contact fields. When you query the Account Full object type,
Integration performs a join based on the Contact Id and then populates
the BillToContact fields.
The Zuora API WSDL is static and does not have account-specific
customizations. Therefore, you cannot import custom fields in the
automatically generated request and response profiles. To extract custom
fields using the Query action, enter the custom API field names in the
Extra Query Fields field on the Zuora operation. Use only top-level fields,
separating API field names with commas. Add custom fields as elements
on the Query action's response profile. Then add the API field name to
the Element Name field, and configure and set the Zuora namespace. You
cannot filter by custom fields.
Create
The outbound Create action creates new records in the Zuora object
defined in the operation.
The Zuora API WSDL is static and does not contain account-specific
customizations. You cannot import custom fields in the automatically-
generated request profile. If you use custom fields, add them manually as
elements on the Create action’s request profile, add their API field name
to the Element Name field, then configure and set the Zuora namespace.
Account
Amendment
Contact
Invoice Adjustment
Invoice Payment
Payment Method
Refund
Subscribe
Taxation Item
Usage
Use these objects in the Create action to leverage Zuora's bulk import API.
When you use the Zuora connector to import high volumes of payment,
quote, or usage data, follow these rules:
quote, or usage data, follow these rules:
Bring the records from the source system into a Boomi Integration
process first. The incoming records can be in any format but must
be mapped to an XML profile by using the Map step.
Add static fields to the XML profile. Each object has eight to 10 static
fields that are valid for all clients. The fields are added when you
click the Import button on the Zuora connector’s operation
component.
o Make sure that the XML profile’s Options tab > Respect
Min Occurs check box is on.
Use the XML profile in a Map step. The Map step ensures that the
first record contains placeholder elements for columns it might not
have data for. This ensures that data is not truncated from
subsequent records.
Send the XML data from the Map step to the Zuora connector. The
Zuora connector converts the XML data to CSV, as required by
Zuora.
Delete
The outbound Delete action removes records from the Zuora object as
defined in the operation. You must supply the Zuora ID for the record.
The Zuora operation’s Delete action supports the following object types:
Account
Contact
Payment
Subscription
Execute
The outbound Execute action uses the Zuora Export ZOQL (Zuora Object
Query Language) to create exports with the Z-Commerce API. Zuora
Export ZOQL is similar to Z-Commerce API ZOQL, with a few differences.
The biggest difference is that with Zuora Export ZOQL you query a data
source, not a Z-Commerce API object.
The Export API enables you to extract fields from a parent object in a
single query instead of having to perform subsequent connector calls to
retrieve related information. For example, in one query you can retrieve
an Invoice, its InvoiceItems, the associated Product Rate Plan Charges, the
Account, and the account’s Bill To Contact. The Export API is highly
recommended for use with any transaction record types.
The Zuora Export ZOQL query is constructed from the Zuora Operation
component. (See the Zuora Operation topic for information about the
fields that are used by the connector’s Execute action: Fields to Select,
Table to Select From, Where Clause, Grouping Fields, Order By Field and
Sort By.) The Zuora connector creates an export record, queries the Zuora
database, retrieves the data, parses the data into an XML structure in the
response profile and returns a document for each logical record. The data
is returned in a flattened structure unless you use the Operation
component’s Grouping Fields field to enter field(s) to group details under
their header. For example, you can group invoice line items under their
invoice header.
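To make the field roles concrete, the following sketch shows a
hypothetical set of operation field values, rendered as JSON purely for
readability; the keys mirror the operation fields named above and the
values are examples only:

{
  "Fields to Select": "Invoice.InvoiceNumber, InvoiceItem.ChargeAmount, Account.Name",
  "Table to Select From": "InvoiceItem",
  "Where Clause": "Invoice.Status='Posted'",
  "Grouping Fields": "Invoice.InvoiceNumber",
  "Order By Field": "Invoice.InvoiceNumber",
  "Sort By": "ASC"
}

With Grouping Fields set this way, invoice line items are grouped under
their invoice header instead of being returned in a flattened structure.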
note
To use the Execute action's Export function, you should be familiar with
Zuora Export ZOQL (Zuora Object Query Language), the query language
used to create exports with the Zuora SOAP API. You can read about
Export ZOQL, data sources, and pre-joined objects at Zuora's Knowledge
Center.
Update
The outbound Update action updates existing records in the Zuora object
defined in the operation.
The Zuora API WSDL is static and does not contain tenant-specific
customizations. Therefore, custom fields cannot be imported in the
automatically generated request profile. If you are using custom fields,
manually add them as elements on the Update action’s request profile.
Add their API field name to the Element Name field and configure and set
the Zuora namespace.
Account
Contact
Invoice
Invoice Adjustment
Payment Method
Product
Refund
Subscription
Usage
Options tab
Select a connector action, then after filling in the fields, use the Import
Wizard to select the object with which you want to integrate. The following
table describes the default field definitions for each action type:
Object - Displays the object type that you selected in the Import
Operation wizard.
note
If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.
Import Operation Timeout (Create only, and only when using Import
objects)
Sets the maximum time (in minutes) to wait for import file processing. The
default is 10 minutes.
You can insert dynamic filter values into a WHERE clause using the
request profile. The special syntax to use is:
Table.FieldName='${arbitraryFilterName}'. For example,
Invoice.InvoiceNumber='${invoiceNumber}'.
Connector configuration
Prerequisites
o Base URL
Supported editions
Supported objects
The following objects are supported by one or more of the Zuora REST
operation actions. The connector does not currently support the
POST_Attachments, POST_MassUpdater, POST_Usage, and
Object_POSTImport operations.
note
You can add any field that you want to an input profile. By doing so, you
can also include Zuora platform custom fields. All fields, including custom
fields, are sent to the related Zuora endpoint.
AcceptUserAccess
Account
AccountingCode
AccountingPeriod
AccountSummary
Actionamend
Actioncreate
Actiondelete
Actionexecute
Actiongenerate
Actionquery
ActionqueryMore
Actionsubscribe
Actionupdate
ActivateAccountingCode
AllAccountingCodes
AllAccountingPeriods
AllOrders
AllSummaryJournalEntries
AmendmentsByKey
AmendmentsBySubscriptionID
ApplyCreditMemo
ApplyPayment
Attachments, AttachmentsList
BasicSummaryJournalEntry
BatchUpdateDebitMemos
BatchUpdateInvoices
BillingDocuments
BillingPreview
BillingPreviewRun
BreakdownCreditMemoByOrder
BreakdownInvoiceByOrder
CalloutHistory
CancelAuthorization
CancelCreditMemo
CancelDebitMemo
CancelPayment
CancelRefund
CancelSubscription
Catalog
CloseAccountingPeriod
CM_TaxationItems
Connections
createToken
CreateAndRetrieveExport
Tracked properties
Connection tab
The Zuora REST connector uses your Client ID and Client Secret to
authenticate you.
Authentication Type - OAuth 2.0 is the default and the only type
available.
Zuora REST operation
The Zuora REST operation defines how to interact with your Zuora account
and represents a specific action (Get, Query, etc.) performed against a
specific Zuora object type (Accounts, Invoice Payments, Product Features,
etc.).
Create a separate operation component for each object required for your
integration. The set of operations changes for each operation type. The
operation supports the following actions:
Important
CreateAndRetrieveExport object
When using this object, the request profile contains a header parameter,
X-Zuora-WSDL-Version. This can be used, for example, when you need to
make a request to a specific WSDL version. To do so, you add a field
named X-Zuora-WSDL-Version in your input document with the specific
WSDL version. Then, the connector adds a header with the same name in
the request executed by the operation to ensure that the API version you
are connecting to and the WSDL version specified in the input document
are the same. In this way, you can take advantage of certain operations
and objects supported in specific versions.
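For example, an input document that pins the export to a specific WSDL
version might begin like the following sketch, where 107 is a placeholder
version number and the remaining export fields are omitted:

{
  "X-Zuora-WSDL-Version": "107"
}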
Options tab
Select a connector action and use the Import Wizard to select the object
with which you want to integrate. The following table describes the fields
for each action type:
Object - Displays the object type that you selected in the Import
Operation wizard.
note
If you clear the setting, the process stops and reports the error on
the Process Reporting page.
If you select the setting, processing continues and passes the error
response to the next component processed as the connection
output.
Changelog