SnapLogic API
Introduction
SnapLogic Pipelines are foundational to orchestrating data and application integration workflows, bridging the
gap between diverse internal and external business systems. When these integrations are exposed as APIs
(Application Programming Interfaces), they unlock significant value, facilitating seamless data exchanges and
interactions not just within an organization but also with external partners and customers.
The "SnapLogic Designer" facilitates a low-code, graphical approach for integrating Snaps, designed to abstract
the execution of specific technical tasks, into a cohesive integration pipeline, enabling individuals with minimal
technical expertise to create data integration workflows. However, to achieve optimal business outcomes and
adhere to industry standards, especially within the context of APIs, developers within the SnapLogic platform
must apply diligent oversight during the pipeline build process, with a specific focus on:
Ignoring these considerations may lead to negative effects for the business, for example financial repercussions
from failed data delivery to target systems, data loss or inconsistencies during high demand periods, delays in
project delivery by development teams, and reduced organizational agility due to a lack of internal
standardization. It is therefore crucial for SnapLogic developers & architects to follow best practice guidelines
and govern the API integration process effectively to ensure successful business outcomes within the platform.
This document provides a comprehensive breakdown of the best practices related to API Design & Development within SnapLogic.
Fundamentals
SnapLogic Pipelines
SnapLogic Tasks
A SnapLogic Task creates a wrapper around a pipeline to enable simple, controlled and repeatable execution of that pipeline from anywhere. In the context of API development, the task type will be either a Triggered or an Ultra Task, which allows the Pipeline to be invoked via an HTTP(S) endpoint.
To ensure efficient data handling and reduce latency between the interconnected services, it is important to
understand how SnapLogic facilitates the movement of data between each Snap through something called
“in-memory data buffering”. Specifically, each Snap employs a dedicated ring buffer between the connected
input & output views to temporarily store data during the processing phase. For example, in a Pipeline
containing three connected Snaps, there would be two distinct ring buffers in operation, each corresponding to
the data exchanged between each specific Snap within the Pipeline.
Each Ring Buffer stores up to 1024 documents, blocking on the 1025th document. A
document is the key data structure that passes through pipelines, therefore it is important
to ensure that Snaps used within the Pipeline are processing the documents efficiently to
avoid performance bottlenecks within the Pipeline. Latency experienced within Snaps
interfacing with external endpoints creates an inherent performance degradation to both
upstream and downstream systems.
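The blocking behaviour described above can be illustrated with a short sketch. This is a minimal model using a fixed-capacity queue, not SnapLogic's actual implementation; the Snap functions are hypothetical stand-ins.

Python
import queue
import threading

BUFFER_CAPACITY = 1024  # per the ring buffer limit described above

# Bounded buffer standing in for the ring buffer between two connected Snaps.
buffer = queue.Queue(maxsize=BUFFER_CAPACITY)

def process(doc):
    pass  # placeholder for downstream Snap logic; if slow, it back-pressures upstream

def upstream_snap(documents):
    for doc in documents:
        buffer.put(doc)  # blocks on the 1025th in-flight document until a slot frees
    buffer.put(None)  # sentinel: no more documents

def downstream_snap():
    while True:
        doc = buffer.get()
        if doc is None:
            break
        process(doc)

docs = ({"id": i} for i in range(10000))
producer = threading.Thread(target=upstream_snap, args=(docs,))
producer.start()
downstream_snap()
producer.join()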
When working with API Pipelines, the typical scenario involves receiving either a payload body or query
parameters from an API consumer. The Pipeline then orchestrates the implementation of the API, executing
necessary steps and returning a response back to the consumer. Both Error handling & auditing of data can be
implemented within the pipeline design, ensuring a reliable and seamless integration experience.
Design Guidelines
The following words are used to signify the requirements within the best practice guidelines:
● Must: this is required.
● Should: this is recommended; any deviation must be agreed with the Centre of Excellence (CoE) during Peer Review in the API Design phase.
● Could: this is optional.
An API typically consists of the following components. Each of these components can be accessed at runtime
directly within the Pipeline using the approaches described.
Path Parameters
There are several benefits to using path parameters. First, they make it easy to identify the resource that is being requested. For example, in the following URL, the path parameter "customer_id" identifies the specific customer that the API is looking to operate on:
GET /customers/{customer_id}
Second, parameters can be combined with the resource path to perform additional operations on the results of a request. For example, in the following URL, the query parameter "status" can be used to filter the results to only include customers with a specific status:
GET /customers?status=active
Finally, path parameters can also be used to create nested resources. For example, in the following URL, the
path parameter "customer_id" can be used to access the orders for a specific customer:
GET /customers/{customer_id}/orders
● When using path parameters, it is important to follow best practices. First, path parameters should be named in a way that is clear and concise. Second, path parameters should be placed in the URL in a consistent order. Third, path parameter values should be URL-encoded.
● Depending on the position of the resource within the URL & SnapLogic task type (Triggered or Ultra),
the resource should be accessed using either of the following expressions.
HTTP Methods
HTTP methods are commands sent by an API consumer to an API server over the Hypertext Transfer Protocol
(HTTP). They specify the action that the API consumer wants the server to perform on a specific resource. The
most common HTTP methods are GET, POST, PUT, and DELETE, each of which has a specific purpose:
● GET: Retrieves a representation of the specified resource. This is the most common HTTP method and
is used to retrieve database records, images, and other resources from an API.
● POST: Submits data to the API to create or update a resource. This method is often used to submit
payload body data to an API.
● PUT: Replaces the specified resource with the data provided in the request body. This method is often
used to update existing resources.
● PATCH: Partially modifies the specified resource based on the data provided in the request body. This
method is often used to update specific attributes or fields of an existing resource without requiring
the client to send the entire representation of the resource.
● DELETE: Deletes the specified resource. This method is used to delete files, directories, and other
resources from a server.
To retrieve or reference the HTTP method used within a given API request within a Pipeline, either of the
following expressions must be used, depending on the task type.
Request Body
● The request body of an API typically contains a collection of attributes relating to the API request. This may be a JSON object containing key-value pairs of customer information. The request body must be retrieved within the Pipeline using either of the following expressions, depending on the task type.
● To verify that the content of the request body conforms to the schema expected by the API implementation, a sufficient level of validation may be performed. For more information, please refer to the “Payload Validation” section of the document.
Parsing JSON
A document input view may be used when accepting JSON within the payload body. The payload content may then be parsed with an expression such as:
JSON.parse($content)
Parsing XML
A binary input view must be used when accepting XML within the payload body.
When parsing both XML & JSON within the same API, a binary input view must be used.
Headers
API headers play a crucial role in HTTP requests and responses. They carry metadata that defines how the API should behave or provides information about the request/response. While most of this information is
typically handled by API policies within the API gateway, some information can be useful to reference within
the SnapLogic pipeline to serve different purposes.
Request Headers
● Content-Type: Specifies the media type of the resource or data being sent or expected in the response.
For example, application/json for JSON data, application/xml for XML data.
● Date: Provides the date and time the message was sent. This can be useful for caching mechanisms and
to ensure smooth communication between the consumer and downstream applications by aligning
time-sensitive operations.
● User-Agent: Identifies the API consumer software making the request to the API.
● Host: Specifies the server domain name and (optionally) the TCP port number on which the server is
listening.
● Correlation-ID (Custom Header): A unique identifier value to track and correlate a single API request
across different systems and services throughout the lifecycle of a transaction. This concept is
particularly useful in distributed systems and microservices architectures, where a single operation or
transaction might span multiple endpoint or database interactions within a single SnapLogic Pipeline.
For more information, please refer to the “Auditing API Requests” section.
Request Headers must be referenced within the Pipeline by using the following expression, depending on the
task type.
Response Headers
Status Code: Indicates the result of the request (e.g., 200 OK, 404 Not Found).
To specify the appropriate HTTP status code in the response for Ultra Tasks, the Mapper Snap should be used.
A Mapper Snap should be used at the beginning of a pipeline to resolve all components of an API request. This strategy enhances reusability, maintainability, and consistency within the API development.
Placing a Mapper Snap at the head of the pipeline ensures that all subsequent Snaps have access to the resolved API components. This eliminates the need to repeat the mapping process in multiple Snaps, reducing the overall complexity of the pipeline.
By centralizing the API resolution logic in a single Mapper Snap, you enhance the reusability of your pipeline. Any changes or updates to the API can be easily made in the Mapper Snap, ensuring that the entire pipeline remains consistent and up-to-date.
Adopting a consistent approach with every API ensures that your pipelines follow a standardized pattern. This makes it easier to understand, troubleshoot, and maintain multiple pipelines, especially when working with a team of API developers.
● When writing Mapper expressions, both Triggered and Ultra task contexts should be considered to ensure flexibility, as a pipeline may need to be converted to an Ultra Task in the future. Writing expressions that support both task types ensures future-proof compatibility.
The following expressions should be used within the Mapper Snap as a basis for resolving each API component.
Query Parameters (Triggered Task): $query
Query Parameters (Ultra Task): $query
A Router Snap should be used to manage incoming requests in a SnapLogic pipeline, routing them based on method and resource.
Routing Criteria:
● Method: The Router Snap can route messages based on the HTTP method specified in the request.
● Resource: Additionally, it can consider the resource path (or URI) to determine the appropriate route.
Consider a REST API for managing customer information, where requests can be of different HTTP methods
(GET, POST, PUT, DELETE) and target different resources (e.g., "/customers" or "/customers/{id}").
● Define a route for GET requests to the "/customers" resource that should be directed to a downstream
"Get Customers" Pipeline.
● Create another route for POST requests to the "/customers" resource that should invoke a "Create
Customer" Pipeline.
● Similarly, configure routes for PUT and DELETE requests to update or delete individual customer
records.
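A minimal sketch of the equivalent routing table in Python follows; the method and resource values would come from the Mapper Snap described earlier, and the patterns and pipeline names are illustrative, not SnapLogic configuration syntax.

Python
import re

# Hypothetical routing table mirroring the Router Snap configuration above:
# (HTTP method, resource pattern) -> downstream child pipeline.
ROUTES = [
    ("GET",    re.compile(r"^/customers$"),       "Get Customers"),
    ("POST",   re.compile(r"^/customers$"),       "Create Customer"),
    ("PUT",    re.compile(r"^/customers/[^/]+$"), "Update Customer"),
    ("DELETE", re.compile(r"^/customers/[^/]+$"), "Delete Customer"),
]

def route(method: str, path: str) -> str:
    for verb, pattern, pipeline in ROUTES:
        if method == verb and pattern.match(path):
            return pipeline
    # No match: this is where a 404/405 error branch would apply.
    raise LookupError(f"No route for {method} {path}")

print(route("GET", "/customers"))     # -> Get Customers
print(route("PUT", "/customers/42"))  # -> Update Customer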
Reusability: Method-specific logic encapsulated in child pipelines can be reused across pipelines and APIs. For instance, if there's a need to perform repetitive tasks relating to an API, the logic can be implemented once in a child pipeline and then reused across all the endpoints that require it.
Improved Maintainability: Separating method-specific logic into child pipelines enhances the maintainability
of the pipeline. Each child pipeline becomes a self-contained unit, making it easier to understand, modify, and
debug the code. Developers can focus on a specific method implementation without being distracted by
unrelated logic.
Modular and Scalable Architecture: A modular architecture based on child pipelines enables scalability. It
becomes easier to add support for new HTTP methods or modify existing implementations without affecting
other parts of the code. Child pipelines can be easily added, removed, or rearranged to accommodate
changing requirements.
Consistent Error Handling: Encapsulating method-specific logic within child pipelines allows for consistent
error handling. Each child pipeline can have its own error handling mechanism, ensuring that errors are
handled gracefully and consistently across all HTTP methods. This improves the overall reliability and user
experience of the API.
Streamlined Testing: Testing becomes more manageable when method-specific logic is encapsulated in child
pipelines. Each child pipeline can be tested independently, ensuring that the API behaves as expected for each
HTTP method. This approach reduces the testing effort and improves the overall quality of the API.
● In addition to the Router Snap, the HTTP Router Snap may be used to route requests based on the method used within the API request. This Snap is generated automatically when using the “Design/Specification first approach” to API design.
API Pipelines can be developed within SnapLogic to support both synchronous and asynchronous processing,
depending on the processing requirements of the upstream application/API consumer. In synchronous
processing, the API consumer sends a request to the API and waits for a response before continuing further
upstream processing. This approach is suitable for simple tasks that have immediate process dependencies
relative to the API Consumer's request.
In asynchronous processing, the API consumer sends a request to the API provider and continues processing without waiting for a response. The API provider then provides a mechanism for the API consumer to request feedback regarding the status of the request by making additional API calls. This approach is suitable for tasks that do not require an immediate response or that can be processed in parallel.
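From the API consumer's perspective, this pattern often looks like the sketch below: submit the request, receive an identifier immediately, and poll a status resource. The endpoint paths and payload fields here are illustrative assumptions, not values prescribed by SnapLogic.

Python
import time
import requests  # third-party HTTP client, assumed available

BASE = "https://api.example.com"  # hypothetical API base URL

# 1. Submit the work; the API returns 202 Accepted with a tracking id.
resp = requests.post(f"{BASE}/orders", json={"items": [{"sku": "A1", "qty": 2}]})
assert resp.status_code == 202
job_id = resp.json()["jobId"]

# 2. Continue other work, polling for completion instead of blocking.
while True:
    status = requests.get(f"{BASE}/orders/status/{job_id}").json()
    if status["state"] in ("COMPLETED", "FAILED"):
        break
    time.sleep(2)  # back off between polls

print(status["state"])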
● When designing an API in SnapLogic, the communication style that is most appropriate for the intended
use of the API must be considered.
● The style of the API must be clearly communicated to the API consumer. This should be done in the API
documentation. By providing clear documentation and examples, API providers can help API consumers
to use the API effectively and efficiently.
The example Pipeline below illustrates the use of synchronous API design in SnapLogic. It consists of several
Snaps connected in a linear sequence. Each Snap performs a specific task, such as preparing data sent by the
API consumer, transforming the data and retrieving the data from a target system, SQL Server. The Pipeline is
designed in a way that each Snap's execution depends on the successful completion of the previous Snap. The
HTTP response is returned to the API consumer once the “Prepare Response” snap has finished processing.
● Simple Request-Response Interactions: For operations where an API Consumer expects an immediate
result or feedback. Examples include querying a database for specific data, retrieving file contents, or
performing calculations.
● Linear Integration Workflows: In scenarios where integration tasks need to be executed in a specific order and each subsequent task depends on the results of the previous one, implementing sequential logic and control flow.
● Low-Latency Operations: For operations that are inherently quick and where the overhead of
managing asynchronous operations might outweigh the benefits.
● CRUD Operations: Creating, reading, updating, and deleting resources in many web applications are
often handled synchronously, as these operations are typically fast and the user is waiting for an
immediate response to continue their workflow.
2. Asynchronous API design enhances scalability for long-running processes. By separating the API response from the Pipeline execution, SnapLogic can process multiple Pipelines concurrently without impacting the performance of the upstream API consumer.
3. Asynchronous API design provides greater flexibility in designing Pipelines. It allows developers to create complex Pipelines with multiple branches and parallel processing, without worrying about the impact on the API response time.
When developing asynchronous APIs, the API request payload from the API consumer should be placed on a queue/topic within the Messaging Layer (e.g., JMS, Kafka). The configuration of the queue/topic allows the message/payload to be received by multiple consumers subscribed to the queue/topic.
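A minimal sketch of this hand-off step, assuming a Kafka-based messaging layer and the confluent-kafka Python client (the topic name and payload fields are illustrative):

Python
import json
from confluent_kafka import Producer  # assumes the confluent-kafka package

producer = Producer({"bootstrap.servers": "localhost:9092"})

def enqueue_request(job_id: str, payload: dict) -> None:
    # Place the API request payload on the topic; subscribed consumers
    # (e.g., the processing Pipeline) receive it independently.
    producer.produce(
        topic="orders.requests",  # hypothetical topic name
        key=job_id,
        value=json.dumps(payload).encode("utf-8"),
    )
    producer.flush()  # ensure delivery before returning 202 to the API consumer

enqueue_request("job-123", {"items": [{"sku": "A1", "qty": 2}]})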
● Handling Long-Running Operations: Asynchronous APIs are ideal for operations that take a significant amount of time to complete, such as file-based operations or processing large data sets. They allow the main execution thread to remain responsive, improving user experience in applications.
● Improving Application Responsiveness: For UI based interactions that involve API requests, keeping
the UI thread unblocked is crucial to maintain a smooth user experience. Asynchronous APIs allow
time-consuming operations to be handled in the background, ensuring the UI remains responsive to
user inputs.
● Scalable Web Applications: For applications expecting high traffic, asynchronous APIs can help handle
multiple requests concurrently, improving throughput and reducing latency compared to a synchronous
model where each request might block a thread until completion.
● Microservices and Distributed Systems: In distributed systems where services might depend on
responses from other services, asynchronous communication can help prevent bottlenecks and
improve resilience. It allows services to issue requests and continue processing until the responses are
received, rather than waiting idly.
● Efficient Resource Utilization: Asynchronous APIs often lead to more efficient use of resources since
threads are not blocked waiting for operations to complete.
Payload Validation
Validating incoming request payloads is an important part of effective API design. This can be achieved in a
number of different ways in SnapLogic.
The JSON Validator policy allows for the validation of an API request payload against a predefined JSON
schema allowing specific constraints to be applied on various JSON structures, such as arrays and strings,
minimizing the risk posed by content-level attacks. The policy gets applied and is executed within the API
Gateway, prior to the execution of the Pipeline.
● The JSON Validator policy should be used as the default approach for API payload validation.
● The JSON schema must be a valid schema - https://json-schema.org/draft/2019-09/schema
● JSON schemas should be managed and versioned.
The Data Validator Snap provides functionality to apply constraints to specific fields contained within an API payload and to handle cases where the constraints are not met, allowing you to return the appropriate details within the API error response. Values can be validated against the following constraint types:
- Regex Pattern
- Minimum (numeric)
- Maximum (numeric)
- MinLength (string length)
- MaxLength (string length)
- Required (true/false)
- Field Type (string, number, object, array, boolean, date, time, date-time, local-date-time)
The Data Validator Snap can be used to handle complex validations that may not be possible with the JSON Validator policy.
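As a sketch of the validation idea, the snippet below applies a JSON Schema with Data Validator-style constraints (required fields, regex pattern, numeric bounds, string lengths) using Python's jsonschema package. The schema itself is a hypothetical example; within SnapLogic this check would run in the JSON Validator policy or the Data Validator Snap.

Python
from jsonschema import ValidationError, validate  # assumes jsonschema is installed

customer_schema = {
    "$schema": "https://json-schema.org/draft/2019-09/schema",
    "type": "object",
    "required": ["customer_id", "status"],
    "properties": {
        "customer_id": {"type": "string", "pattern": "^CUST-[0-9]{6}$"},
        "status": {"type": "string", "enum": ["active", "inactive"]},
        "credit_limit": {"type": "number", "minimum": 0, "maximum": 100000},
        "name": {"type": "string", "minLength": 1, "maxLength": 120},
    },
}

def validate_payload(payload: dict) -> list:
    """Return validation errors to surface in the API error response."""
    try:
        validate(instance=payload, schema=customer_schema)
        return []
    except ValidationError as err:
        return [err.message]

print(validate_payload({"customer_id": "CUST-000042", "status": "active"}))  # []
print(validate_payload({"status": "unknown"}))  # one error message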
Performance Optimization
Parallel Processing
Parallel/concurrent processing should be used within the Pipeline Design to improve performance of
large/complex API workloads. This can be achieved through either the use of a Parent/Child Pipeline
relationship using the Pipeline Execute Snap and/or parallel processing of Snaps within the Pipeline design.
In the below example, the API Pipeline will perform both the SQL Server & PostgreSQL execute operations in
parallel.
● After routing the HTTP request, add a Copy Snap. The Copy Snap duplicates its input document(s) to
multiple output views, enabling parallel processing. Configure the Copy Snap to have “n” output views,
as you'll be working with multiple operations.
● From the Copy Snap, connect each output to the Snap that performs the desired operation. These
database operations will now execute in parallel. Each Snap will wait for its respective operation to
complete, but they do not wait on each other. This setup is ideal for non-blocking operations, where
one operation's execution time does not affect the other.
● If the results from each operation need to be combined before proceeding, use a Union Snap after the
target operation Snaps. This will allow you to merge the results into a single data stream for further
processing.
● After merging (if necessary), you can add further processing Snaps or direct the output to another
endpoint, such as sending a notification, writing to a file, or updating another database.
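An illustrative sketch of this fan-out/fan-in shape (Copy, parallel branches, Union) using Python threads; the two query functions are hypothetical stand-ins for the SQL Server and PostgreSQL Execute Snaps.

Python
from concurrent.futures import ThreadPoolExecutor

def query_sql_server(request: dict) -> dict:
    return {"source": "sqlserver", "rows": []}   # stand-in for SQL Server - Execute

def query_postgresql(request: dict) -> dict:
    return {"source": "postgresql", "rows": []}  # stand-in for PostgreSQL - Execute

def handle_request(request: dict) -> list:
    # "Copy": the same input document feeds both branches.
    with ThreadPoolExecutor(max_workers=2) as pool:
        sql_future = pool.submit(query_sql_server, request)
        pg_future = pool.submit(query_postgresql, request)
        # "Union": merge both branch outputs into a single stream.
        return [sql_future.result(), pg_future.result()]

print(handle_request({"customer_id": "CUST-000042"}))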
Best Practices
● The Copy Snap must be used to create parallel branches within your pipeline. This snap can duplicate
incoming data streams, allowing subsequent snaps to process these streams independently and in
parallel.
● The operations in each branch should have roughly balanced workloads to maximize the efficiency of
parallel processing.
● The pipeline branches should be designed to perform operations that are independent of each other.
Dependencies across branches can serialize the process, negating the benefits of parallelism.
● Potential bottlenecks should be identified where data processing might converge and cause delays.
● The Router Snap can be used to direct specific data streams to different branches based on data
content or other criteria, facilitating more effective parallel processing.
● Join Snaps can be used to merge data streams from parallel branches providing that the join conditions
are optimized to prevent delays in data processing.
● The removal of unnecessary data fields early in the pipeline should be carried out to reduce the volume
of data processed in parallel branches, enhancing overall pipeline efficiency.
When dealing with complex integration scenarios within an API where multiple operations are required against
the same logical steps, organizing these operations into a Child Pipeline and using the Pipeline Execute Snap's
“pool size” feature for parallel processing can significantly enhance performance and efficiency of the API.
This approach not only organizes the integration logic better but also exploits parallelism to execute similar
tasks concurrently.
In the example below, an API consumer sends a POST request to the /Orders endpoint where the payload body
contains a collection of 4 orders. The Parent Pipeline that receives the HTTP request is configured to process
each Order within a Child Pipeline using a Pool Size of 4. Each order is processed in Parallel, improving the
overall performance of the endpoint.
● Insert a Pipeline Execute Snap: In your main (parent) pipeline, add a Pipeline Execute Snap where you
want to perform the parallel operations.
● Configure the Pipeline Execute Snap: Set the Pipeline Execute Snap to call your Child Pipeline. Pass any
required parameters to the Child Pipeline through the configuration settings.
● Adjust the Pool Size: In the Pipeline Execute Snap settings, find the property for specifying the pool size.
Increase this number to a value greater than 1 to enable parallel processing. The optimal pool size
depends on the complexity of the operations and the resources available within the Snaplex
environment.
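The pool-size behaviour can be sketched as bounded concurrency over the incoming collection; process_order below is a hypothetical stand-in for the Child Pipeline invoked by the Pipeline Execute Snap.

Python
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 4  # mirrors the Pipeline Execute Snap's Pool Size setting

def process_order(order: dict) -> dict:
    # Stand-in for the Child Pipeline that handles a single order.
    return {"order_id": order["order_id"], "status": "PROCESSED"}

orders = [{"order_id": i} for i in range(1, 5)]  # the 4 orders in the POST body

# At most POOL_SIZE child executions run concurrently; extra work queues up.
with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
    results = list(pool.map(process_order, orders))

print(results)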
Best Practices
● The pool size for the Pipeline Execute Snap must be sized based on the workload and Snaplex capacity.
Avoid setting the pool size too high as it may lead to resource contention within the Snaplex.
● Child pipelines should be generic and reusable across multiple parent pipelines. Pass dynamic
parameters to the child pipeline to customize its behavior for different use cases.
● For operations that can be batched, records should be grouped before sending them to the child
pipeline. This reduces the overhead of pipeline execution and can significantly improve throughput.
● Workloads should be distributed evenly across multiple child pipeline executions to prevent
overloading a single pipeline instance or external service endpoint.
● Error handling should be implemented within child pipelines to ensure that errors in one instance do not impact the overall process. Capture and log errors for analysis without stopping the entire batch of parallel executions.
● Fallback mechanisms, such as retries or alternative processing routes in case of repeated failures, can be implemented within the Pipeline Design.
● Scalability tests should be conducted to understand how your pipeline behaves under high load
conditions. This helps in identifying the maximum load each pipeline can handle before needing
adjustments.
Pagination
Pagination in API development is a method used to divide large sets of data into manageable, smaller subsets,
called "pages." This method is essential in enhancing the performance of APIs, especially when dealing with
extensive databases or resources.
SnapLogic recommends using Page or Offset Pagination. This is the most common type of pagination, where
data is divided into pages based on a specified "offset" (the starting point) and "pageSize" (the number of
items per page). The following pagination metadata should be used within the API pipeline design.
Pagination Metadata
JSON
{
  "data": [
    // Array of data objects/items
  ],
  "pagination": {
    "currentPage": 1,
    "pageSize": 10,
    "totalPages": 5,
    "totalItems": 50,
    "links": {
      "first": "/api/items?page=1",
      "last": "/api/items?page=5",
      "prev": null,
      "next": "/api/items?page=2"
    }
  }
}
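A sketch of how this envelope might be computed, assuming page and pageSize arrive as query parameters and the total item count is known; the path and field names follow the example above.

Python
import math

def build_pagination(page: int, page_size: int, total_items: int,
                     base: str = "/api/items") -> dict:
    total_pages = max(1, math.ceil(total_items / page_size))
    page = min(max(1, page), total_pages)  # clamp out-of-range page requests
    return {
        "currentPage": page,
        "pageSize": page_size,
        "totalPages": total_pages,
        "totalItems": total_items,
        "links": {
            "first": f"{base}?page=1",
            "last": f"{base}?page={total_pages}",
            "prev": f"{base}?page={page - 1}" if page > 1 else None,
            "next": f"{base}?page={page + 1}" if page < total_pages else None,
        },
    }

print(build_pagination(page=1, page_size=10, total_items=50))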
Approach
Example Pipeline
Step 2 - Retrieve data from Target Database table using the calculated offset value
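The calculated offset referenced here is typically derived from the requested page number and page size; a minimal sketch of that calculation:

Python
def calculate_offset(page: int, page_size: int) -> int:
    # Page numbers are 1-based: page 1 starts at offset 0.
    return (page - 1) * page_size

assert calculate_offset(page=1, page_size=10) == 0
assert calculate_offset(page=3, page_size=10) == 20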
● Default and Maximum Limits should be specified. Define reasonable default values for page size (limit)
to ensure a good user experience, including a maximum limit to prevent abuse and server overload.
● Pagination metadata must be included within the API along with the paginated results, including
metadata such as total count of items, current page number, next/previous page URLs, etc. This helps
clients understand the context of the returned data and navigate through the paginated results more
easily.
● Sorting & filtering options should be added along with pagination parameters. This enhances flexibility
and usability, especially when dealing with large datasets.
● The implemented pagination functionality should be tested under various scenarios, including large
datasets and high concurrent loads, to ensure that it performs well and scales effectively.
● The pagination functionality should be designed in a way that is consistent across all APIs and can be reused easily.
Snowflake
SELECT column1, column2, ...
FROM your_table
ORDER BY column1 -- Optional: specify the column(s) by which to order the results
LIMIT <page_size> OFFSET <offset_value>;

SQL Server
SELECT column1, column2, ...
FROM your_table
ORDER BY column1 -- Required when using OFFSET/FETCH in SQL Server
OFFSET <offset_value> ROWS FETCH NEXT <page_size> ROWS ONLY;

MySQL
SELECT column1, column2, ...
FROM your_table
ORDER BY column1 -- Optional: specify the column(s) by which to order the results
LIMIT <offset_value>, <page_size>;

PostgreSQL
SELECT column1, column2, ...
FROM your_table
ORDER BY column1 -- Optional: specify the column(s) by which to order the results
OFFSET <offset_value> ROWS FETCH NEXT <page_size> ROWS ONLY;

Oracle
SELECT column1, column2, ...
FROM your_table
ORDER BY column1 -- Optional: specify the column(s) by which to order the results
OFFSET <offset_value> ROWS FETCH NEXT <page_size> ROWS ONLY;

Redshift
SELECT column1, column2, ...
FROM your_table
ORDER BY column1 -- Optional: specify the column(s) by which to order the results
LIMIT <page_size> OFFSET <offset_value>;
Filtering
Filtering is a crucial aspect of API design, allowing API consumers to narrow down the results returned by API
endpoints based on specific criteria. Please see Pagination Metadata for details on how to derive the filter field
from the API query parameter for use within underlying database or application queries.
● A standardized approach should be used for filtering across all endpoints. Choose clear and intuitive
parameter names (e.g., "filter") and document their usage in the API documentation.
● API consumers should be allowed to specify multiple filters to support more complex filtering requirements. For example, "filter=field1:value1,field2:value2" should filter results based on multiple criteria (see the parsing sketch after this list).
● A standardized syntax should be used for specifying filter conditions. Common conventions include
using ":" or "=" for equality comparisons, ">" and "<" for range comparisons, and keywords like "in" for
inclusion filters.
● Filtering should be integrated with pagination to enable clients to retrieve filtered subsets of large
result sets efficiently. Ensure that filtering is applied before pagination to maintain consistency in the
ordering of paginated results.
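A sketch of parsing the multi-filter convention above into a structure usable by a downstream database or application query; only ':' equality is handled here, and range or 'in' operators would extend it.

Python
def parse_filters(filter_param: str) -> dict:
    """Parse 'field1:value1,field2:value2' into {'field1': 'value1', ...}."""
    filters = {}
    for clause in filter_param.split(","):
        field, _, value = clause.partition(":")
        if field and value:
            filters[field.strip()] = value.strip()
    return filters

print(parse_filters("status:active,country:GB"))
# {'status': 'active', 'country': 'GB'}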
Sorting
Sorting is an important aspect of API development & design, especially when dealing with endpoints that
return collections of resources. This can be achieved by adding a query parameter to the API. API consumers
can provide their desired sort field and sort order using this parameter. Please see Pagination Metadata for
details on how to derive the sort field from the API query parameter for use within underlying database or
application queries.
● A standardized approach should be used for sorting across all API endpoints. Choose a clear and
intuitive parameter name (e.g., "sort") and document its usage in the API documentation.
● API consumers should be allowed to specify multiple fields for sorting to support more complex sorting requirements. For example, "sort=field1,-field2" should sort by field1 in ascending order and then by field2 in descending order (see the parsing sketch after this list).
● A default sorting order should be defined to ensure consistent behavior when API consumers do not
explicitly specify a sorting order in the request. Consider using a meaningful default based on the
nature of the data or business requirements.
● Sorting should be integrated with pagination to enable clients to retrieve sorted subsets of large result
sets efficiently. Ensure that sorting is applied before pagination to maintain consistency in the ordering
of paginated results.
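A sketch of translating the sort convention above into ORDER BY terms; a leading '-' marks descending order. In real use, field names should be checked against an allow-list before being interpolated into SQL.

Python
def parse_sort(sort_param: str, default: str = "created_at") -> str:
    """Turn 'field1,-field2' into 'field1 ASC, field2 DESC'."""
    fields = [f.strip() for f in sort_param.split(",") if f.strip()] or [default]
    terms = []
    for field in fields:
        if field.startswith("-"):
            terms.append(f"{field[1:]} DESC")  # descending
        else:
            terms.append(f"{field} ASC")       # ascending (default)
    return ", ".join(terms)

print(parse_sort("field1,-field2"))  # field1 ASC, field2 DESC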
Error Handling
Status Codes
● Appropriate HTTP error status codes should be used within Pipeline error handling to effectively communicate the result back to the API consumer. Implementation-specific HTTP error codes are as follows:
● 404 NOT FOUND: Resource not found. Use when the API tries to access a resource that either the API does not provide support for or that doesn't exist.
● 405 METHOD NOT ALLOWED: The requested method is not supported for the requested resource. Use after routing the HTTP request within the Pipeline, either using the Router or HTTP Router Snap.
● 500 INTERNAL SERVER ERROR: The general catch-all error when the server side throws an exception. Use when none of the above HTTP error codes are applicable.
● Retry mechanisms should be configured within Snap settings to address transient network errors
effectively.
● Error simulations should be conducted during the development and testing phases to validate the
efficiency of error handling mechanisms.
Auditing API Requests
Auditing API requests is important for several reasons:
● Monitoring and Detection: Auditing helps monitor access and usage of the API, allowing for the detection of unauthorized or malicious activities. By keeping detailed logs of who accessed what data and when, you can quickly identify and respond to security breaches.
● Accountability: With proper auditing, each action within the API Pipeline/s can be traced back to an
individual user and system endpoint, providing a sense of accountability among users and developers.
● Regulatory Requirements: Many industries are subject to regulations that mandate the logging and
auditing of access to data and systems. This can include healthcare (HIPAA), finance (SOX, GDPR for
personal data protection in Europe), and others. Auditing ensures that the API and its data usage
comply with these legal and regulatory requirements.
● Audit Trails: Auditing creates a detailed trail of data access and movement, which can be critical for
compliance audits. These trails can demonstrate that the organization adheres to required standards
and policies.
● Error Tracking: Auditing can include error information and exceptions that occur during API Pipeline
operations, providing essential information for debugging and troubleshooting issues.
● Performance Analysis: By auditing API calls and their response times, organizations can identify
performance bottlenecks within the Pipeline and optimize the system's efficiency.
Operational Insights
● Usage & Consumption Patterns: Auditing can provide granular insights into how the API is being used,
which endpoints are most popular, and peak usage times. This information is valuable for capacity
planning, development, and optimizing resource allocation.
● API Consumer Experience: Analysis of audit logs can help identify API endpoints with slow response
times or frequent errors, highlighting areas for improvement in the user experience.
To implement effective auditing within the API Pipeline layer, the following best practices should be
considered:
● A common reusable auditing pattern should be used within the Pipeline to ensure consistency across projects.
● Audit logging should be selective. Log only the information necessary for the purposes above to avoid excessive data collection that can overwhelm storage and downstream analysis systems.
● The targets where audit logs are stored must be secured, with access tightly controlled to prevent tampering or unauthorized access.
● External tools should be used for real-time monitoring and alerting based on audit logs to quickly
identify and respond to potential issues.
Auditing Pattern
The following auditing pattern implements audit steps at each critical step of the Pipeline using the “Map
Audit Fields” mapper. See Mapping Audit Fields for more information regarding the metadata that may be
captured as part of the audit step. The penultimate “Union” snap consolidates each audit record and writes
the records to an external storage Audit database table.
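As a sketch of what each “Map Audit Fields” step might emit (the field names are illustrative assumptions; see Mapping Audit Fields for the metadata actually captured):

Python
import uuid
from datetime import datetime, timezone

def map_audit_fields(stage: str, correlation_id: str, status: str,
                     detail: str = "") -> dict:
    # One audit record per critical pipeline step; the Union step
    # consolidates these before the final write to the audit table.
    return {
        "audit_id": str(uuid.uuid4()),
        "correlation_id": correlation_id,
        "stage": stage,        # e.g., "request_received", "db_write"
        "status": status,      # e.g., "SUCCESS", "ERROR"
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

records = [
    map_audit_fields("request_received", "corr-123", "SUCCESS"),
    map_audit_fields("db_write", "corr-123", "SUCCESS"),
]
# write_to_audit_table(records)  # stand-in for the external database write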
Correlation ID
A “Correlation ID” is a unique
identifier used to track and correlate
requests and responses across
distributed systems within an API
ecosystem. The identifier is typically
passed as part of the HTTP request
header by the API consumer and is
propagated through all services and
components involved in processing
the request.
Services & endpoints referenced within API Pipeline can be designed to recognize this header, log it with any
related processing actions, and include it in any outgoing requests to other services to ensure the entire
transaction chain can be correlated.
The specific header name used for the CorrelationID can vary based on the organization's or application's
conventions, but common names include:
● X-Correlation-ID
● X-Request-ID
● Correlation-ID
● Request-ID
A Correlation ID sent within a request to an API Pipeline can be extracted by using the mechanisms described
in the Headers section of the document.
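A sketch of the consumer-side convention, assuming the X-Correlation-ID header name: reuse an inbound id when present, otherwise generate one, and propagate it on every outgoing call. In SnapLogic, the equivalent would be mapping the inbound header in the initial Mapper Snap and including it in downstream Snap configurations.

Python
import uuid
import requests  # third-party HTTP client, assumed available

HEADER = "X-Correlation-ID"  # one of the common names listed above

def get_or_create_correlation_id(inbound_headers: dict) -> str:
    # Reuse the caller's id so the whole transaction chain correlates;
    # mint a new one only if the caller didn't supply one.
    return inbound_headers.get(HEADER) or str(uuid.uuid4())

def call_downstream(url: str, payload: dict, correlation_id: str):
    # Propagate the same id on every outgoing request, and log it
    # alongside any related processing actions for traceability.
    return requests.post(url, json=payload, headers={HEADER: correlation_id})

cid = get_or_create_correlation_id({})  # no inbound id supplied: generate one
print(cid)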