Create an Amazon SageMaker inference endpoint (Generally available)

PUT /_inference/{task_type}/{amazonsagemaker_inference_id}

Create an inference endpoint to perform an inference task with the amazon_sagemaker service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are text_embedding, completion, chat_completion, sparse_embedding, or rerank.

  • amazonsagemaker_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    A time duration. The special values -1 and 0 are also accepted.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
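
    For example, a chunking_settings object that uses the recursive strategy with the predefined markdown separators might look like the following (the max_chunk_size value is illustrative):

      "chunking_settings": {
        "strategy": "recursive",
        "max_chunk_size": 200,
        "separator_group": "markdown"
      }
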
  • service string Required

    Value is amazon_sagemaker.

  • service_settings object Required
    • access_key string Required

      A valid AWS access key that has permissions to use Amazon SageMaker and access to models for invoking requests.

    • endpoint_name string Required

      The name of the SageMaker endpoint.

      External documentation
    • api string Required

      Values are openai or elastic.

    • region string Required

      The region that your endpoint or Amazon Resource Name (ARN) is deployed in. The list of available regions per model can be found in the Amazon SageMaker documentation.

      External documentation
    • secret_key string Required

      A valid AWS secret key that is paired with the access_key. For information about creating and managing access and secret keys, refer to the AWS documentation.

      External documentation
    • target_model string

      The model ID when calling a multi-model endpoint.

      External documentation
    • target_container_hostname string

      The container to directly invoke when calling a multi-container endpoint.

      External documentation
    • inference_component_name string

      The inference component to directly invoke when calling a multi-component endpoint.

      External documentation
    • batch_size number

      The maximum number of inputs in each batch. This value is used by inference ingestion pipelines when processing semantic values. It correlates to the number of times the SageMaker endpoint is invoked (one per batch of input).

      Default value is 256.

    • dimensions number

      The number of dimensions returned by the text embedding models. If this value is not provided, it is determined by invoking the endpoint for the text_embedding task.
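
    Putting these settings together, a minimal service_settings object for an OpenAI-compatible SageMaker endpoint might look like the following (the endpoint name, region, and credentials are placeholders):

      "service_settings": {
        "access_key": "<aws-access-key>",
        "secret_key": "<aws-secret-key>",
        "endpoint_name": "my-embedding-endpoint",
        "api": "openai",
        "region": "us-east-1"
      }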

  • task_settings object
    • custom_attributes string

      The AWS custom attributes passed verbatim through to the model running in the SageMaker Endpoint. Values will be returned in the X-elastic-sagemaker-custom-attributes header.

      External documentation
    • enable_explanations string

      The optional JMESPath expression used to override the EnableExplanations provided during endpoint creation.

      External documentation
    • inference_id string

      The inference ID that is added to the captured data when data capture is enabled in the endpoint.

      External documentation
    • session_id string

      The stateful session identifier for a new or existing session. New sessions will be returned in the X-elastic-sagemaker-new-session-id header. Closed sessions will be returned in the X-elastic-sagemaker-closed-session-id header.

      External documentation
    • target_variant string

      Specifies the variant when running with multi-variant endpoints.

      External documentation
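
    None of the task settings are required. For example, a request that targets a specific production variant and passes custom attributes through to the model might include the following (the values are illustrative):

      "task_settings": {
        "target_variant": "variant-1",
        "custom_attributes": "trace-id=abc123"
      }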

Responses

  • 200 application/json
    • chunking_settings object

      Chunking configuration object

      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type.

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference ID.

    • task_type string Required

      Values are text_embedding, completion, chat_completion, sparse_embedding, or rerank.
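
    A successful response echoes the stored endpoint configuration. A sketch of the response shape with illustrative values:

      {
        "inference_id": "my-sagemaker-embeddings",
        "task_type": "text_embedding",
        "service": "amazon_sagemaker",
        "service_settings": {
          "endpoint_name": "my-embedding-endpoint",
          "api": "openai",
          "region": "us-east-1",
          "batch_size": 256
        },
        "chunking_settings": {
          "strategy": "sentence",
          "max_chunk_size": 250,
          "sentence_overlap": 1
        }
      }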

PUT /_inference/{task_type}/{amazonsagemaker_inference_id}
curl \
 --request PUT 'http://api.example.com/_inference/{task_type}/{amazonsagemaker_inference_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"chunking_settings":{"max_chunk_size":250,"overlap":100,"sentence_overlap":1,"separator_group":"string","separators":["string"],"strategy":"sentence"},"service":"amazon_sagemaker","service_settings":{"access_key":"string","endpoint_name":"string","api":"openai","region":"string","secret_key":"string","target_model":"string","target_container_hostname":"string","inference_component_name":"string","batch_size":256,"dimensions":42.0},"task_settings":{"custom_attributes":"string","enable_explanations":"string","inference_id":"string","session_id":"string","target_variant":"string"}}'