Compact and aligned text (CAT)

The compact and aligned text (CAT) APIs are intended only for human consumption using the Kibana console or command line; they are not intended for use by applications. For application consumption, use a corresponding JSON API. All the CAT commands accept a help query string parameter that lists the headers and information they provide, and the /_cat command alone lists all the available commands.
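
For example, you can list every CAT command, then ask a specific CAT API (the ML datafeeds endpoint is used here purely as an illustration) to describe its columns with the help parameter:

GET /_cat

GET /_cat/ml/datafeeds?help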

Get CAT help Generally available

GET /_cat

Get help for the CAT APIs.

Responses

  • 200 application/json
GET /_cat
curl \
 --request GET 'http://api.example.com/_cat' \
 --header "Authorization: $API_KEY"

Get datafeeds Generally available

GET /_cat/ml/datafeeds/{datafeed_id}

All methods and paths for this operation:

GET /_cat/ml/datafeeds

GET /_cat/ml/datafeeds/{datafeed_id}

Get configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • datafeed_id string Required

    A numerical character string that uniquely identifies the datafeed.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no datafeeds that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.
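
For example, a request that combines these parameters to select a few columns, sort by state, and display times in seconds (a sketch built from the column names documented above):

GET _cat/ml/datafeeds?v=true&h=id,state,buckets.count,search.time&s=state&time=s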

Responses

  • 200 application/json
    • id string

      The datafeed identifier.

    • state string

      Values are started, stopped, starting, or stopping.

    • assignment_explanation string

      For started datafeeds only, contains messages relating to the selection of a node.

    • buckets.count string

      The number of buckets processed.

    • search.count string

      The number of searches run by the datafeed.

    • search.time string

      The total time the datafeed spent searching, in milliseconds.

    • search.bucket_avg string

      The average search time per bucket, in milliseconds.

    • search.exp_avg_hour string

      The exponential average search time per hour, in milliseconds.

    • node.id string

      The unique identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.name string

      The name of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.ephemeral_id string

      The ephemeral identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.address string

      The network address of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

GET /_cat/ml/datafeeds/{datafeed_id}

Console:
GET _cat/ml/datafeeds?v=true&format=json

Python:
resp = client.cat.ml_datafeeds(
    v=True,
    format="json",
)

JavaScript:
const response = await client.cat.mlDatafeeds({
  v: "true",
  format: "json",
});

Ruby:
response = client.cat.ml_datafeeds(
  v: "true",
  format: "json"
)

PHP:
$resp = $client->cat()->mlDatafeeds([
    "v" => "true",
    "format" => "json",
]);

curl:
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/datafeeds?v=true&format=json"

Java:
client.cat().mlDatafeeds();
Response examples (200)
A successful response from `GET _cat/ml/datafeeds?v=true&format=json`.
[
  {
    "id": "datafeed-high_sum_total_sales",
    "state": "stopped",
    "buckets.count": "743",
    "search.count": "7"
  },
  {
    "id": "datafeed-low_request_rate",
    "state": "stopped",
    "buckets.count": "1457",
    "search.count": "3"
  },
  {
    "id": "datafeed-response_code_rates",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  },
  {
    "id": "datafeed-url_scanning",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  }
]

Get all connectors Beta

GET /_connector

Get information about all connectors.

Query parameters

  • from number

    Starting offset (default: 0)

  • size number

    Specifies the maximum number of results to return.

  • index_name string | array[string]

    A comma-separated list of connector index names to fetch connector documents for

  • connector_name string | array[string]

    A comma-separated list of connector names to fetch connector documents for

  • service_type string | array[string]

    A comma-separated list of connector service types to fetch connector documents for

  • include_deleted boolean

    A flag to indicate whether soft-deleted connectors should also be returned.

  • query string

    A wildcard query string that filters connectors by matching name, description, or index name.
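
For example, a sketch of a request combining several of these filters (the service type and query values are illustrative):

GET _connector?from=0&size=20&service_type=sharepoint_online&query=sales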

Responses

  • 200 application/json
    • count number Required
    • results array[object] Required
      • api_key_id string
      • api_key_secret_id string
      • configuration object Required
      • custom_scheduling object Required
        • * object Additional properties
          • configuration_overrides object Required
            • max_crawl_depth number
            • sitemap_discovery_disabled boolean
            • domain_allowlist array[string]
            • sitemap_urls array[string]
            • seed_urls array[string]
          • enabled boolean Required
          • interval string Required
          • last_synced string
          • name string Required
      • deleted boolean Required
      • description string
      • features object
        • document_level_security object
          • enabled boolean Required
        • incremental_sync object
          • enabled boolean Required
        • native_connector_api_keys object
          • enabled boolean Required
        • sync_rules object
          • advanced object
            • enabled boolean Required
          • basic object
            • enabled boolean Required
      • filtering array[object] Required
        • active object Required
          • advanced_snippet object Required
          • rules array[object] Required
          • validation object Required
        • domain string
        • draft object Required
          • advanced_snippet object Required
          • rules array[object] Required
          • validation object Required
      • id string
      • index_name string | null
      • is_native boolean Required
      • language string
      • last_access_control_sync_error string
      • last_access_control_sync_scheduled_at string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      • last_access_control_sync_status string

        Values are canceling, canceled, completed, error, in_progress, pending, or suspended.

      • last_deleted_document_count number
      • last_incremental_sync_scheduled_at string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      • last_indexed_document_count number
      • last_seen string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      • last_sync_error string
      • last_sync_scheduled_at string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      • last_sync_status string

        Values are canceling, canceled, completed, error, in_progress, pending, or suspended.

      • last_synced string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      • name string
      • pipeline object
        • extract_binary_content boolean Required
        • name string Required
        • reduce_whitespace boolean Required
        • run_ml_inference boolean Required
      • scheduling object Required
        • access_control object
          • enabled boolean Required
          • interval string Required

            The interval is expressed using the crontab syntax

        • full object
          • enabled boolean Required
          • interval string Required

            The interval is expressed using the crontab syntax

        • incremental object
          • enabled boolean Required
          • interval string Required

            The interval is expressed using the crontab syntax

      • service_type string
      • status string Required

        Values are created, needs_configuration, configured, connected, or error.

      • sync_cursor object
      • sync_now boolean Required

Console:
GET _connector

Python:
resp = client.connector.list()

JavaScript:
const response = await client.connector.list();

Ruby:
response = client.connector.list

PHP:
$resp = $client->connector()->list();

curl:
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector"

Java:
client.connector().list(l -> l);

Create a connector Beta

POST /_connector

Connectors are Elasticsearch integrations that bring content from third-party data sources; they can be deployed on Elastic Cloud or hosted on your own infrastructure. Elastic managed connectors (Native connectors) are a managed service on Elastic Cloud. Self-managed connectors (Connector clients) are self-managed on your infrastructure.

application/json

Body

  • description string
  • index_name string
  • is_native boolean
  • language string
  • name string
  • service_type string

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

    • id string Required
POST /_connector
curl \
 --request POST 'http://api.example.com/_connector' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"description":"string","index_name":"string","is_native":true,"language":"string","name":"string","service_type":"string"}'
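
A concrete request might look like the following sketch (the name, description, index name, and service type values are illustrative; sharepoint_online is the service type used elsewhere in these examples):

POST _connector
{
  "index_name": "search-sharepoint",
  "name": "My SharePoint connector",
  "description": "Syncs content from our SharePoint site",
  "service_type": "sharepoint_online"
}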

Update the connector configuration Beta

PUT /_connector/{connector_id}/_configuration

Update the configuration field in the connector document.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_configuration

Console:
PUT _connector/my-spo-connector/_configuration
{
    "values": {
        "tenant_id": "my-tenant-id",
        "tenant_name": "my-sharepoint-site",
        "client_id": "foo",
        "secret_value": "bar",
        "site_collections": "*"
    }
}

Python:
resp = client.connector.update_configuration(
    connector_id="my-spo-connector",
    values={
        "tenant_id": "my-tenant-id",
        "tenant_name": "my-sharepoint-site",
        "client_id": "foo",
        "secret_value": "bar",
        "site_collections": "*"
    },
)

JavaScript:
const response = await client.connector.updateConfiguration({
  connector_id: "my-spo-connector",
  values: {
    tenant_id: "my-tenant-id",
    tenant_name: "my-sharepoint-site",
    client_id: "foo",
    secret_value: "bar",
    site_collections: "*",
  },
});

Ruby:
response = client.connector.update_configuration(
  connector_id: "my-spo-connector",
  body: {
    "values": {
      "tenant_id": "my-tenant-id",
      "tenant_name": "my-sharepoint-site",
      "client_id": "foo",
      "secret_value": "bar",
      "site_collections": "*"
    }
  }
)

PHP:
$resp = $client->connector()->updateConfiguration([
    "connector_id" => "my-spo-connector",
    "body" => [
        "values" => [
            "tenant_id" => "my-tenant-id",
            "tenant_name" => "my-sharepoint-site",
            "client_id" => "foo",
            "secret_value" => "bar",
            "site_collections" => "*",
        ],
    ],
]);

curl:
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"values":{"tenant_id":"my-tenant-id","tenant_name":"my-sharepoint-site","client_id":"foo","secret_value":"bar","site_collections":"*"}}' "$ELASTICSEARCH_URL/_connector/my-spo-connector/_configuration"

Java:
client.connector().updateConfiguration(u -> u
    .connectorId("my-spo-connector")
    .values(Map.of("tenant_id", JsonData.fromJson("\"my-tenant-id\""),"tenant_name", JsonData.fromJson("\"my-sharepoint-site\""),"secret_value", JsonData.fromJson("\"bar\""),"client_id", JsonData.fromJson("\"foo\""),"site_collections", JsonData.fromJson("\"*\"")))
);

Another request example, which updates a single configuration value:
{
    "values": {
        "secret_value": "foo-bar"
    }
}
Response examples (200)
{
  "result": "updated"
}




Update the connector filtering Beta

PUT /_connector/{connector_id}/_filtering

Update the draft filtering configuration of a connector and mark the draft validation state as edited. The filtering draft is activated once validated by the running Elastic connector service. The filtering property is used to configure sync rules (both basic and advanced) for a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • filtering array[object]
    • active object Required
      • advanced_snippet object Required
        • created_at string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        • updated_at string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        • value object Required
      • rules array[object] Required
        • created_at string
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • id string Required
        • order number Required
        • policy string Required

          Values are exclude or include.

        • rule string Required

          Values are contains, ends_with, equals, regex, starts_with, >, or <.

        • updated_at string
        • value string Required
      • validation object Required
        • errors array[object] Required
          • ids array[string] Required
          • messages array[string] Required
        • state string Required

          Values are edited, invalid, or valid.

    • domain string
    • draft object Required
      • advanced_snippet object Required
        • created_at string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        • updated_at string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        • value object Required
      • rules array[object] Required
        • created_at string
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • id string Required
        • order number Required
        • policy string Required

          Values are exclude or include.

        • rule string Required

          Values are contains, ends_with, equals, regex, starts_with, >, or <.

        • updated_at string
        • value string Required
      • validation object Required
        • errors array[object] Required
          • ids array[string] Required
          • messages array[string] Required
        • state string Required

          Values are edited, invalid, or valid.

  • rules array[object]
    • created_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • order number Required
    • policy string Required

      Values are exclude or include.

    • rule string Required

      Values are contains, ends_with, equals, regex, starts_with, >, or <.

    • updated_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • value string Required
  • advanced_snippet object
    • created_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • updated_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • value object Required

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_filtering

Console:
PUT _connector/my-g-drive-connector/_filtering
{
    "rules": [
         {
            "field": "file_extension",
            "id": "exclude-txt-files",
            "order": 0,
            "policy": "exclude",
            "rule": "equals",
            "value": "txt"
        },
        {
            "field": "_",
            "id": "DEFAULT",
            "order": 1,
            "policy": "include",
            "rule": "regex",
            "value": ".*"
        }
    ]
}

Python:
resp = client.connector.update_filtering(
    connector_id="my-g-drive-connector",
    rules=[
        {
            "field": "file_extension",
            "id": "exclude-txt-files",
            "order": 0,
            "policy": "exclude",
            "rule": "equals",
            "value": "txt"
        },
        {
            "field": "_",
            "id": "DEFAULT",
            "order": 1,
            "policy": "include",
            "rule": "regex",
            "value": ".*"
        }
    ],
)

JavaScript:
const response = await client.connector.updateFiltering({
  connector_id: "my-g-drive-connector",
  rules: [
    {
      field: "file_extension",
      id: "exclude-txt-files",
      order: 0,
      policy: "exclude",
      rule: "equals",
      value: "txt",
    },
    {
      field: "_",
      id: "DEFAULT",
      order: 1,
      policy: "include",
      rule: "regex",
      value: ".*",
    },
  ],
});

Ruby:
response = client.connector.update_filtering(
  connector_id: "my-g-drive-connector",
  body: {
    "rules": [
      {
        "field": "file_extension",
        "id": "exclude-txt-files",
        "order": 0,
        "policy": "exclude",
        "rule": "equals",
        "value": "txt"
      },
      {
        "field": "_",
        "id": "DEFAULT",
        "order": 1,
        "policy": "include",
        "rule": "regex",
        "value": ".*"
      }
    ]
  }
)

PHP:
$resp = $client->connector()->updateFiltering([
    "connector_id" => "my-g-drive-connector",
    "body" => [
        "rules" => array(
            [
                "field" => "file_extension",
                "id" => "exclude-txt-files",
                "order" => 0,
                "policy" => "exclude",
                "rule" => "equals",
                "value" => "txt",
            ],
            [
                "field" => "_",
                "id" => "DEFAULT",
                "order" => 1,
                "policy" => "include",
                "rule" => "regex",
                "value" => ".*",
            ],
        ),
    ],
]);

curl:
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"rules":[{"field":"file_extension","id":"exclude-txt-files","order":0,"policy":"exclude","rule":"equals","value":"txt"},{"field":"_","id":"DEFAULT","order":1,"policy":"include","rule":"regex","value":".*"}]}' "$ELASTICSEARCH_URL/_connector/my-g-drive-connector/_filtering"

Java:
client.connector().updateFiltering(u -> u
    .connectorId("my-g-drive-connector")
    .rules(List.of(FilteringRule.of(f -> f
            .field("file_extension")
            .id("exclude-txt-files")
            .order(0)
            .policy(FilteringPolicy.Exclude)
            .rule(FilteringRuleRule.Equals)
            .value("txt")),FilteringRule.of(f -> f
            .field("_")
            .id("DEFAULT")
            .order(1)
            .policy(FilteringPolicy.Include)
            .rule(FilteringRuleRule.Regex)
            .value(".*"))))
);
Request examples
{
    "advanced_snippet": {
        "value": [{
            "tables": [
                "users",
                "orders"
            ],
            "query": "SELECT users.id AS id, orders.order_id AS order_id FROM users JOIN orders ON users.id = orders.user_id"
        }]
    }
}
Response examples (200)
{
  "result": "updated"
}

Update the connector pipeline Beta

PUT /_connector/{connector_id}/_pipeline

Update the pipeline field in the connector document. When you create a new connector, the configuration of an ingest pipeline is populated with default settings.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • pipeline object Required
    • extract_binary_content boolean Required
    • name string Required
    • reduce_whitespace boolean Required
    • run_ml_inference boolean Required

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_pipeline

Console:
PUT _connector/my-connector/_pipeline
{
    "pipeline": {
        "extract_binary_content": true,
        "name": "my-connector-pipeline",
        "reduce_whitespace": true,
        "run_ml_inference": true
    }
}

Python:
resp = client.connector.update_pipeline(
    connector_id="my-connector",
    pipeline={
        "extract_binary_content": True,
        "name": "my-connector-pipeline",
        "reduce_whitespace": True,
        "run_ml_inference": True
    },
)

JavaScript:
const response = await client.connector.updatePipeline({
  connector_id: "my-connector",
  pipeline: {
    extract_binary_content: true,
    name: "my-connector-pipeline",
    reduce_whitespace: true,
    run_ml_inference: true,
  },
});

Ruby:
response = client.connector.update_pipeline(
  connector_id: "my-connector",
  body: {
    "pipeline": {
      "extract_binary_content": true,
      "name": "my-connector-pipeline",
      "reduce_whitespace": true,
      "run_ml_inference": true
    }
  }
)

PHP:
$resp = $client->connector()->updatePipeline([
    "connector_id" => "my-connector",
    "body" => [
        "pipeline" => [
            "extract_binary_content" => true,
            "name" => "my-connector-pipeline",
            "reduce_whitespace" => true,
            "run_ml_inference" => true,
        ],
    ],
]);

curl:
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"pipeline":{"extract_binary_content":true,"name":"my-connector-pipeline","reduce_whitespace":true,"run_ml_inference":true}}' "$ELASTICSEARCH_URL/_connector/my-connector/_pipeline"

Java:
client.connector().updatePipeline(u -> u
    .connectorId("my-connector")
    .pipeline(p -> p
        .extractBinaryContent(true)
        .name("my-connector-pipeline")
        .reduceWhitespace(true)
        .runMlInference(true)
    )
);
Response examples (200)
{
  "result": "updated"
}




Update the connector service type Beta

PUT /_connector/{connector_id}/_service_type

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • service_type string Required

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_service_type

Console:
PUT _connector/my-connector/_service_type
{
    "service_type": "sharepoint_online"
}

Python:
resp = client.connector.update_service_type(
    connector_id="my-connector",
    service_type="sharepoint_online",
)

JavaScript:
const response = await client.connector.updateServiceType({
  connector_id: "my-connector",
  service_type: "sharepoint_online",
});

Ruby:
response = client.connector.update_service_type(
  connector_id: "my-connector",
  body: {
    "service_type": "sharepoint_online"
  }
)

PHP:
$resp = $client->connector()->updateServiceType([
    "connector_id" => "my-connector",
    "body" => [
        "service_type" => "sharepoint_online",
    ],
]);

curl:
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service_type":"sharepoint_online"}' "$ELASTICSEARCH_URL/_connector/my-connector/_service_type"

Java:
client.connector().updateServiceType(u -> u
    .connectorId("my-connector")
    .serviceType("sharepoint_online")
);
Response examples (200)
{
  "result": "updated"
}

Update the connector status Technical preview

PUT /_connector/{connector_id}/_status

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • status string Required

    Values are created, needs_configuration, configured, connected, or error.

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_status

Console:
PUT _connector/my-connector/_status
{
    "status": "needs_configuration"
}

Python:
resp = client.connector.update_status(
    connector_id="my-connector",
    status="needs_configuration",
)

JavaScript:
const response = await client.connector.updateStatus({
  connector_id: "my-connector",
  status: "needs_configuration",
});

Ruby:
response = client.connector.update_status(
  connector_id: "my-connector",
  body: {
    "status": "needs_configuration"
  }
)

PHP:
$resp = $client->connector()->updateStatus([
    "connector_id" => "my-connector",
    "body" => [
        "status" => "needs_configuration",
    ],
]);

curl:
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"status":"needs_configuration"}' "$ELASTICSEARCH_URL/_connector/my-connector/_status"

Java:
client.connector().updateStatus(u -> u
    .connectorId("my-connector")
    .status(ConnectorStatus.NeedsConfiguration)
);
Response examples (200)
{
  "result": "updated"
}

Update data stream lifecycles Generally available

PUT /_data_stream/{name}/_lifecycle

Update the data stream lifecycle of the specified data streams.


Path parameters

  • name string | array[string] Required

    Comma-separated list of data streams used to limit the request. Supports wildcards (*). To target all data streams use * or _all.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body

  • data_retention string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • downsampling object
    • rounds array[object] Required

      The list of downsampling rounds to execute as part of this downsampling configuration

      • after string Required

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • config object Required
        • fixed_interval string Required

          A date histogram interval. Similar to Duration with additional units: w (week), M (month), q (quarter) and y (year)

  • enabled boolean

    If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

    Default value is true.
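
For example, a sketch that sets a retention period but leaves the lifecycle disabled for now, combining the body fields documented above:

PUT _data_stream/my-data-stream/_lifecycle
{
  "data_retention": "7d",
  "enabled": false
}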

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_data_stream/{name}/_lifecycle

Console:
PUT _data_stream/my-data-stream/_lifecycle
{
  "data_retention": "7d"
}

Python:
resp = client.indices.put_data_lifecycle(
    name="my-data-stream",
    data_retention="7d",
)

JavaScript:
const response = await client.indices.putDataLifecycle({
  name: "my-data-stream",
  data_retention: "7d",
});

Ruby:
response = client.indices.put_data_lifecycle(
  name: "my-data-stream",
  body: {
    "data_retention": "7d"
  }
)

PHP:
$resp = $client->indices()->putDataLifecycle([
    "name" => "my-data-stream",
    "body" => [
        "data_retention" => "7d",
    ],
]);

curl:
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"data_retention":"7d"}' "$ELASTICSEARCH_URL/_data_stream/my-data-stream/_lifecycle"

Java:
client.indices().putDataLifecycle(p -> p
    .dataRetention(d -> d
        .time("7d")
    )
    .name("my-data-stream")
);
Request examples
This example configures two downsampling rounds.
{
    "downsampling": [
      {
        "after": "1d",
        "fixed_interval": "10m"
      },
      {
        "after": "7d",
        "fixed_interval": "1d"
      }
    ]
}
Response examples (200)
A successful response for configuring a data stream lifecycle.
{
  "acknowledged": true
}

Get EQL search results Generally available

POST /{index}/_eql/search

All methods and paths for this operation:

GET /{index}/_eql/search

POST /{index}/_eql/search

Returns search results for an Event Query Language (EQL) query. EQL assumes each document in a data stream or index corresponds to an event.


Path parameters

  • index string | array[string] Required

    The name of the index to scope the operation

Query parameters

  • allow_no_indices boolean

    If true, wildcard indices expressions that resolve into no concrete indices are ignored. (This includes the _all string and requests where no indices have been specified.)

  • allow_partial_search_results boolean

    If true, returns partial results if there are shard failures. If false, returns an error with no partial results.

  • allow_partial_sequence_results boolean

    If true, sequence queries will return partial results in case of shard failures. If false, they will return no results at all. This flag has effect only if allow_partial_search_results is true.

  • expand_wildcards string | array[string]

    Whether to expand wildcard expression to concrete indices that are open, closed or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ccs_minimize_roundtrips boolean

    Indicates whether network round-trips should be minimized as part of cross-cluster search request execution.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • keep_alive string

    Period for which the search and its results are stored on the cluster.

    Values are -1 or 0.

  • keep_on_completion boolean

    If true, the search and its results are stored on the cluster.

  • wait_for_completion_timeout string

    Timeout duration to wait for the request to finish. Defaults to no timeout, meaning the request waits for complete search results.

    Values are -1 or 0.
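
For example, a sketch of an async-style request: it returns within two seconds if the search completes, and otherwise keeps running, storing its results for later retrieval via the search ID in the response:

GET /my-data-stream/_eql/search?wait_for_completion_timeout=2s&keep_on_completion=true&keep_alive=1d
{
  "query": """
    process where process.name == "cmd.exe"
  """
}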

application/json

Body Required

  • query string Required

    EQL query you wish to run.

  • case_sensitive boolean
  • event_category_field string

    Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

  • tiebreaker_field string

    Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

  • timestamp_field string

    Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

  • fetch_size number
  • filter object | array[object]

    Query, written in Query DSL, used to filter the events on which the EQL query runs. Accepts a single Elasticsearch Query DSL (Domain Specific Language) object, or an array of them.

  • keep_alive string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • keep_on_completion boolean
  • wait_for_completion_timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • allow_partial_search_results boolean

    Allow query execution also in case of shard failures. If true, the query will keep running and will return results based on the available shards. For sequences, the behavior can be further refined using allow_partial_sequence_results.

    Default value is true.

  • allow_partial_sequence_results boolean

    This flag applies only to sequences and has effect only if allow_partial_search_results=true. If true, the sequence query will return results based on the available shards, ignoring the others. If false, the sequence query will return successfully, but will always have empty results.

    Default value is false.

  • size number
  • fields object | array[object]

    Array of wildcard (*) patterns. The response returns values for field names matching these patterns in the fields property of each hit. Each entry is either a field name or a reference to a field with formatting instructions on how to return the value:

    • field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • format string

      The format in which the values are returned.

    • include_unmapped boolean
  • result_position string

    Values are tail or head.

  • runtime_mappings object
    • * object Additional properties
      • fields object

        For type composite

        • * object Additional properties
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • target_field string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • target_index string
      • script object
        • source string | object
        • id string
        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          • * object Additional properties
        • lang string

          Values are painless, expression, mustache, or java.

        • options object
          • * string Additional properties
      • type string Required

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • max_samples_per_key number

    By default, the response of a sample query contains up to 10 samples, with one sample per unique set of join keys. Use the size parameter to get a smaller or larger set of samples. To retrieve more than one sample per set of join keys, use the max_samples_per_key parameter. Pipes are not supported for sample queries.

    Default value is 1.
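
The following sketch combines several of these body fields: a Query DSL filter restricting the time range, a fields list returning selected values, and a size limit (the field names are illustrative):

GET /my-data-stream/_eql/search
{
  "query": """
    process where process.name == "regsvr32.exe"
  """,
  "filter": {
    "range": {
      "@timestamp": {
        "gte": "now-1d/d"
      }
    }
  },
  "fields": [
    "event.category",
    { "field": "@timestamp", "format": "epoch_millis" }
  ],
  "size": 50
}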

Responses

  • 200 application/json
    • id string
    • is_partial boolean

      If true, the response does not contain complete search results.

    • is_running boolean

      If true, the search request is still executing.

    • took number

      Milliseconds it took Elasticsearch to run the request.

    • timed_out boolean

      If true, the request timed out before completion.

    • hits object Required
      • total object
        • relation string Required

          Values are eq or gte.

        • value number Required
      • events array[object]

        Contains events matching the query. Each object represents a matching event.

        • _index string Required
        • _id string Required
        • _source object Required

          Original JSON body passed for the event at index time.

        • missing boolean

          Set to true for events in a timespan-constrained sequence that do not meet a given condition.

        • fields object
          • * array[object] Additional properties
      • sequences array[object]

        Contains event sequences matching the query. Each object represents a matching sequence. This parameter is only returned for EQL queries containing a sequence.

        • events array[object] Required

          Contains events matching the query. Each object represents a matching event.

          • _index string Required
          • _id string Required
          • _source object Required

            Original JSON body passed for the event at index time.

          • missing boolean

            Set to true for events in a timespan-constrained sequence that do not meet a given condition.

          • fields object
        • join_keys array[object]

          Shared field values used to constrain matches in the sequence. These are defined using the by keyword in the EQL query syntax.

    • shard_failures array[object]

      Contains information about shard failures (if any), in case allow_partial_search_results=true.

      • index string
      • node string
      • reason object Required

        Cause and details about a request failure. This class defines the properties common to all error types; additional details that depend on the error type are also provided.

        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types; additional details that depend on the error type are also provided.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types; additional details that depend on the error type are also provided.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types; additional details that depend on the error type are also provided.

      • shard number Required
      • status string

Console:
GET /my-data-stream/_eql/search
{
  "query": """
    process where (process.name == "cmd.exe" and process.pid != 2013)
  """
}

Python:
resp = client.eql.search(
    index="my-data-stream",
    query="\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  ",
)

JavaScript:
const response = await client.eql.search({
  index: "my-data-stream",
  query:
    '\n    process where (process.name == "cmd.exe" and process.pid != 2013)\n  ',
});

Ruby:
response = client.eql.search(
  index: "my-data-stream",
  body: {
    "query": "\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  "
  }
)

PHP:
$resp = $client->eql()->search([
    "index" => "my-data-stream",
    "body" => [
        "query" => "\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  ",
    ],
]);

curl:
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":"\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  "}' "$ELASTICSEARCH_URL/my-data-stream/_eql/search"

Java:
client.eql().search(s -> s
    .index("my-data-stream")
    .query(" process where (process.name == \"cmd.exe\" and process.pid != 2013) ")
);
Request examples
Run `GET /my-data-stream/_eql/search` to search for events that have a `process.name` of `cmd.exe` and a `process.pid` other than `2013`.
{
  "query": """
    process where (process.name == "cmd.exe" and process.pid != 2013)
  """
}
Run `GET /my-data-stream/_eql/search` to search for a sequence of events. The sequence starts with an event with an `event.category` of `file`, a `file.name` of `cmd.exe`, and a `process.pid` other than `2013`. It is followed by an event with an `event.category` of `process` and a `process.executable` that contains the substring `regsvr32`. These events must also share the same `process.pid` value.
{
  "query": """
    sequence by process.pid
      [ file where file.name == "cmd.exe" and process.pid != 2013 ]
      [ process where stringContains(process.executable, "regsvr32") ]
  """
}
Response examples (200)
{
  "is_partial": false,
  "is_running": false,
  "took": 6,
  "timed_out": false,
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "sequences": [
      {
        "join_keys": [
          2012
        ],
        "events": [
          {
            "_index": ".ds-my-data-stream-2099.12.07-000001",
            "_id": "AtOJ4UjUBAAx3XR5kcCM",
            "_source": {
              "@timestamp": "2099-12-06T11:04:07.000Z",
              "event": {
                "category": "file",
                "id": "dGCHwoeS",
                "sequence": 2
              },
              "file": {
                "accessed": "2099-12-07T11:07:08.000Z",
                "name": "cmd.exe",
                "path": "C:\\Windows\\System32\\cmd.exe",
                "type": "file",
                "size": 16384
              },
              "process": {
                "pid": 2012,
                "name": "cmd.exe",
                "executable": "C:\\Windows\\System32\\cmd.exe"
              }
            }
          },
          {
            "_index": ".ds-my-data-stream-2099.12.07-000001",
            "_id": "OQmfCaduce8zoHT93o4H",
            "_source": {
              "@timestamp": "2099-12-07T11:07:09.000Z",
              "event": {
                "category": "process",
                "id": "aR3NWVOs",
                "sequence": 4
              },
              "process": {
                "pid": 2012,
                "name": "regsvr32.exe",
                "command_line": "regsvr32.exe  /s /u /i:https://...RegSvr32.sct scrobj.dll",
                "executable": "C:\\Windows\\System32\\regsvr32.exe"
              }
            }
          }
        ]
      }
    ]
  }
}

Index

Index APIs enable you to manage individual indices, index settings, aliases, mappings, and index templates.

Delete indices Generally available

DELETE /{index}

Deleting an index deletes its documents, shards, and metadata. It does not delete related Kibana components, such as data views, visualizations, or dashboards.

You cannot delete the current write index of a data stream. To delete the index, you must roll over the data stream so a new write index is created. You can then use the delete index API to delete the previous write index.
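
For example, a sketch of that workflow (the backing index name follows the .ds-* pattern seen in the EQL response example above and is illustrative):

POST /my-data-stream/_rollover

DELETE /.ds-my-data-stream-2099.12.07-000001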

Required authorization

  • Index privileges: delete_index

Path parameters

  • index string | array[string] Required

    Comma-separated list of indices to delete. You cannot specify index aliases. By default, this parameter does not support wildcards (*) or _all. To use wildcards or _all, set the action.destructive_requires_name cluster setting to false.
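
For example, a sketch of relaxing that safeguard through the cluster settings API (use with care, since it permits destructive wildcard deletes):

PUT _cluster/settings
{
  "persistent": {
    "action.destructive_requires_name": false
  }
}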

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • _shards object
      • failed number Required
      • successful number Required
      • total number Required
      • failures array[object]
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

          • type string Required

            The type of error

          • reason
          • stack_trace string

            The server stack trace. Present only if the error_trace=true parameter was sent with the request.

          • caused_by object

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

          • root_cause array[object]
          • suppressed array[object]
        • shard number Required
        • status string
      • skipped number
DELETE /books
resp = client.indices.delete(
    index="books",
)
const response = await client.indices.delete({
  index: "books",
});
response = client.indices.delete(
  index: "books"
)
$resp = $client->indices()->delete([
    "index" => "books",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/books"
client.indices().delete(d -> d
    .index("books")
);

Check indices Generally available

HEAD /{index}

Check if one or more indices, index aliases, or data streams exist.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases. Supports wildcards (*).

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flat_settings boolean

    If true, returns settings in flat format.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • include_defaults boolean

    If true, return all default settings in the response.

  • local boolean

    If true, the request retrieves information from the local node only.

Responses

  • 200 application/json
HEAD my-data-stream
resp = client.indices.exists(
    index="my-data-stream",
)
const response = await client.indices.exists({
  index: "my-data-stream",
});
response = client.indices.exists(
  index: "my-data-stream"
)
$resp = $client->indices()->exists([
    "index" => "my-data-stream",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-data-stream"
client.indices().exists(e -> e
    .index("my-data-stream")
);
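As a usage sketch: the HEAD request returns 200 if all targets exist and 404 otherwise, and the Python client exposes this as a truthy or falsy response object (assuming that client behavior; the target name is illustrative):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

# True when the HEAD request returns 200, False on 404.
if client.indices.exists(index="my-data-stream"):
    print("target exists")
else:
    print("target does not exist")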

Create or update an alias Generally available

POST /{index}/_aliases/{name}

All methods and paths for this operation:

PUT /{index}/_alias/{name}

POST /{index}/_alias/{name}
PUT /{index}/_aliases/{name}
POST /{index}/_aliases/{name}

Adds a data stream or index to an alias.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams or indices to add. Supports wildcards (*). Wildcard patterns that match both data streams and indices return an error.

  • name string Required

    Alias to update. If the alias doesn’t exist, the request creates it. Index alias names support date math.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body

  • filter object

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • index_routing string
  • is_write_index boolean

    If true, sets the write index or data stream for the alias. If an alias points to multiple indices or data streams and is_write_index isn’t set, the alias rejects write requests. If an index alias points to one index and is_write_index isn’t set, the index automatically acts as the write index. Data stream aliases don’t automatically set a write data stream, even if the alias points to one data stream. A minimal sketch of marking a write index follows this parameter list.

  • routing string
  • search_routing string
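
A minimal sketch of marking a write index through this API with the Python client (the index and alias names are illustrative):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

# Add my-index-000002 to my-alias and mark it as the write index,
# so write requests sent to the alias are routed to it.
client.indices.put_alias(
    index="my-index-000002",
    name="my-alias",
    is_write_index=True,
)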

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "my-data-stream",
        "alias": "my-alias"
      }
    }
  ]
}
resp = client.indices.update_aliases(
    actions=[
        {
            "add": {
                "index": "my-data-stream",
                "alias": "my-alias"
            }
        }
    ],
)
const response = await client.indices.updateAliases({
  actions: [
    {
      add: {
        index: "my-data-stream",
        alias: "my-alias",
      },
    },
  ],
});
response = client.indices.update_aliases(
  body: {
    "actions": [
      {
        "add": {
          "index": "my-data-stream",
          "alias": "my-alias"
        }
      }
    ]
  }
)
$resp = $client->indices()->updateAliases([
    "body" => [
        "actions" => array(
            [
                "add" => [
                    "index" => "my-data-stream",
                    "alias" => "my-alias",
                ],
            ],
        ),
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"actions":[{"add":{"index":"my-data-stream","alias":"my-alias"}}]}' "$ELASTICSEARCH_URL/_aliases"
client.indices().updateAliases(u -> u
    .actions(a -> a
        .add(ad -> ad
            .alias("my-alias")
            .index("my-data-stream")
        )
    )
);
Request example
{
  "actions": [
    {
      "add": {
        "index": "my-data-stream",
        "alias": "my-alias"
      }
    }
  ]
}

Get aliases Generally available

GET /{index}/_alias/{name}

All methods and paths for this operation:

GET /_alias

GET /_alias/{name}
GET /{index}/_alias
GET /{index}/_alias/{name}

Retrieves information for one or more data stream or index aliases.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams or indices used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

  • name string | array[string] Required

    Comma-separated list of aliases to retrieve. Supports wildcards (*). To retrieve all aliases, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    • * object Additional properties
      • aliases object Required
        • * object Additional properties
          • filter object

            An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

            External documentation
          • index_routing string

            Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

          • is_write_index boolean

            If true, the index is the write index for the alias.

            Default value is false.

          • routing string

            Value used to route indexing and search operations to a specific shard.

          • search_routing string

            Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

          • is_hidden boolean Generally available

            If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

            Default value is false.

GET _alias
resp = client.indices.get_alias()
const response = await client.indices.getAlias();
response = client.indices.get_alias
$resp = $client->indices()->getAlias();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_alias"
client.indices().getAlias(g -> g);

Check aliases Generally available

HEAD /{index}/_alias/{name}

All methods and paths for this operation:

HEAD /_alias/{name}

HEAD /{index}/_alias/{name}

Check if one or more data stream or index aliases exist.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams or indices used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

  • name string | array[string] Required

    Comma-separated list of aliases to check. Supports wildcards (*).

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing data stream or index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
HEAD _alias/my-alias
resp = client.indices.exists_alias(
    name="my-alias",
)
const response = await client.indices.existsAlias({
  name: "my-alias",
});
response = client.indices.exists_alias(
  name: "my-alias"
)
$resp = $client->indices()->existsAlias([
    "name" => "my-alias",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_alias/my-alias"
client.indices().existsAlias(e -> e
    .name("my-alias")
);

Get mapping definitions Generally available

GET /{index}/_mapping

All methods and paths for this operation:

GET /_mapping

GET /{index}/_mapping

For data streams, the API retrieves mappings for the stream’s backing indices.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • local boolean Deprecated

    If true, the request retrieves information from the local node only.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    • * object Additional properties
      • item object
        • all_field object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          • enabled boolean Required
        • index_field object
          • enabled boolean Required
        • _meta object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          • required boolean Required
        • _size object
          • enabled boolean Required
        • _source object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
          • mode string

            Values are disabled, stored, or synthetic.

        • runtime object
          • * object Additional properties
            • fields object

              For type composite

              • * object Additional properties
            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

            • input_field string

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_field string

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_index string
            • script object
              • source
              • id string
              • params object

                Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              • lang
              • options object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          • enabled boolean Required
      • mappings object Required
        • all_field object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          • enabled boolean Required
        • index_field object
          • enabled boolean Required
        • _meta object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          • required boolean Required
        • _size object
          • enabled boolean Required
        • _source object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
          • mode string

            Values are disabled, stored, or synthetic.

        • runtime object
          • * object Additional properties
            • fields object

              For type composite

              • * object Additional properties
            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

            • input_field string

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_field string

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_index string
            • script object
              • source
              • id string
              • params object

                Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              • lang
              • options object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          • enabled boolean Required
GET /books/_mapping
resp = client.indices.get_mapping(
    index="books",
)
const response = await client.indices.getMapping({
  index: "books",
});
response = client.indices.get_mapping(
  index: "books"
)
$resp = $client->indices()->getMapping([
    "index" => "books",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/books/_mapping"
client.indices().getMapping(g -> g
    .index("books")
);
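As a usage sketch for reading the response (assuming the books index from the example above; the field names under properties depend on your mapping):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.indices.get_mapping(index="books")

# The response is keyed by concrete index name; each entry holds a
# "mappings" object whose "properties" describe the fields.
properties = resp["books"]["mappings"].get("properties", {})
for field, definition in properties.items():
    print(field, definition.get("type", "object"))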

Inference

Inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

Create an inference endpoint Generally available

PUT /_inference/{task_type}/{inference_id}

All methods and paths for this operation:

PUT /_inference/{inference_id}

PUT /_inference/{task_type}/{inference_id}

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

The following integrations are available through the inference API. You can find the available task types next to the integration name:

  • AlibabaCloud AI Search (completion, rerank, sparse_embedding, text_embedding)
  • Amazon Bedrock (completion, text_embedding)
  • Anthropic (completion)
  • Azure AI Studio (completion, rerank, text_embedding)
  • Azure OpenAI (completion, text_embedding)
  • Cohere (completion, rerank, text_embedding)
  • DeepSeek (completion, chat_completion)
  • Elasticsearch (rerank, sparse_embedding, text_embedding - this service is for built-in models and models uploaded through Eland)
  • ELSER (sparse_embedding)
  • Google AI Studio (completion, text_embedding)
  • Google Vertex AI (rerank, text_embedding)
  • Hugging Face (chat_completion, completion, rerank, text_embedding)
  • Mistral (chat_completion, completion, text_embedding)
  • OpenAI (chat_completion, completion, text_embedding)
  • VoyageAI (text_embedding, rerank)
  • Watsonx inference integration (text_embedding)
  • JinaAI (text_embedding, rerank)

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string Required

    The task type. Refer to the integration list in the API description for the available task types.

    Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

  • inference_id string Required

    The inference Id

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body Required

  • chunking_settings object

    Chunking configuration object

    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    The service type

  • service_settings object Required
  • task_settings object

Responses

  • 200 application/json

    Represents an inference endpoint as returned by the GET API

    • chunking_settings object

      Chunking configuration object

      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference Id

    • task_type string Required

      Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

PUT /_inference/{task_type}/{inference_id}
PUT _inference/rerank/my-rerank-model
{
 "service": "cohere",
 "service_settings": {
   "model_id": "rerank-english-v3.0",
   "api_key": "{{COHERE_API_KEY}}"
 },
 "chunking_settings": {
   "strategy": "recursive",
   "max_chunk_size": 200,
   "separator_group": "markdown"
 }
}
resp = client.inference.put(
    task_type="rerank",
    inference_id="my-rerank-model",
    inference_config={
        "service": "cohere",
        "service_settings": {
            "model_id": "rerank-english-v3.0",
            "api_key": "{{COHERE_API_KEY}}"
        }
    },
)
const response = await client.inference.put({
  task_type: "rerank",
  inference_id: "my-rerank-model",
  inference_config: {
    service: "cohere",
    service_settings: {
      model_id: "rerank-english-v3.0",
      api_key: "{{COHERE_API_KEY}}",
    },
  },
});
response = client.inference.put(
  task_type: "rerank",
  inference_id: "my-rerank-model",
  body: {
    "service": "cohere",
    "service_settings": {
      "model_id": "rerank-english-v3.0",
      "api_key": "{{COHERE_API_KEY}}"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "rerank",
    "inference_id" => "my-rerank-model",
    "body" => [
        "service" => "cohere",
        "service_settings" => [
            "model_id" => "rerank-english-v3.0",
            "api_key" => "{{COHERE_API_KEY}}",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"cohere","service_settings":{"model_id":"rerank-english-v3.0","api_key":"{{COHERE_API_KEY}}"}}' "$ELASTICSEARCH_URL/_inference/rerank/my-rerank-model"
client.inference().put(p -> p
    .inferenceId("my-rerank-model")
    .taskType(TaskType.Rerank)
    .inferenceConfig(i -> i
        .service("cohere")
        .serviceSettings(JsonData.fromJson("{\"model_id\":\"rerank-english-v3.0\",\"api_key\":\"{{COHERE_API_KEY}}\"}"))
    )
);
Request example
An example body for a `PUT _inference/rerank/my-rerank-model` request.
{
 "service": "cohere",
 "service_settings": {
   "model_id": "rerank-english-v3.0",
   "api_key": "{{COHERE_API_KEY}}"
 },
 "chunking_settings": {
   "strategy": "recursive",
   "max_chunk_size": 200,
   "separator_group": "markdown"
 }
}
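For comparison, a hedged sketch of a word-strategy chunking configuration with the Python client (the endpoint name, service choice, and service settings are illustrative; per the limits described above, overlap must not exceed half of max_chunk_size):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.inference.put(
    task_type="text_embedding",
    inference_id="my-word-chunked-endpoint",  # illustrative name
    inference_config={
        "service": "elasticsearch",  # illustrative service choice
        "service_settings": {
            "model_id": ".multilingual-e5-small",  # assumes this built-in model
            "num_allocations": 1,
            "num_threads": 1,
        },
        "chunking_settings": {
            "strategy": "word",
            "max_chunk_size": 150,
            "overlap": 50,  # at most half of max_chunk_size
        },
    },
)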

Create an Azure OpenAI inference endpoint Generally available

PUT /_inference/{task_type}/{azureopenai_inference_id}

Create an inference endpoint to perform an inference task with the azureopenai service.

The lists of chat completion and embeddings models that you can choose from in your deployment can be found in the Azure models documentation.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API.

    Values are completion or text_embedding.

  • azureopenai_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    Value is azureopenai.

  • service_settings object Required
    • api_key string

      A valid API key for your Azure OpenAI account. You must specify either api_key or entra_id. If you do not provide either or you provide both, you will receive an error when you try to create your model.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • api_version string Required

      The Azure API version ID to use. It is recommended to use the latest supported non-preview version.

    • deployment_id string Required

      The deployment name of your deployed models. Your Azure OpenAI deployments can be found though the Azure OpenAI Studio portal that is linked to your subscription.

      External documentation
    • entra_id string

      A valid Microsoft Entra token. You must specify either api_key or entra_id. If you do not provide either or you provide both, you will receive an error when you try to create your model.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
    • resource_name string Required

      The name of your Azure OpenAI resource. You can find this from the list of resources in the Azure Portal for your subscription.

      External documentation
  • task_settings object
    • user string

      For a completion or text_embedding task, specify the user issuing the request. This information can be used for abuse detection.

Responses

  • 200 application/json
    • chunking_settings object

      Chunking configuration object

      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference Id

    • task_type string Required

      Values are text_embedding or completion.

PUT /_inference/{task_type}/{azureopenai_inference_id}
PUT _inference/text_embedding/azure_openai_embeddings
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "Api-Key",
        "resource_name": "Resource-name",
        "deployment_id": "Deployment-id",
        "api_version": "2024-02-01"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_openai_embeddings",
    inference_config={
        "service": "azureopenai",
        "service_settings": {
            "api_key": "Api-Key",
            "resource_name": "Resource-name",
            "deployment_id": "Deployment-id",
            "api_version": "2024-02-01"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "azure_openai_embeddings",
  inference_config: {
    service: "azureopenai",
    service_settings: {
      api_key: "Api-Key",
      resource_name: "Resource-name",
      deployment_id: "Deployment-id",
      api_version: "2024-02-01",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "azure_openai_embeddings",
  body: {
    "service": "azureopenai",
    "service_settings": {
      "api_key": "Api-Key",
      "resource_name": "Resource-name",
      "deployment_id": "Deployment-id",
      "api_version": "2024-02-01"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "azure_openai_embeddings",
    "body" => [
        "service" => "azureopenai",
        "service_settings" => [
            "api_key" => "Api-Key",
            "resource_name" => "Resource-name",
            "deployment_id" => "Deployment-id",
            "api_version" => "2024-02-01",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"azureopenai","service_settings":{"api_key":"Api-Key","resource_name":"Resource-name","deployment_id":"Deployment-id","api_version":"2024-02-01"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/azure_openai_embeddings"
client.inference().put(p -> p
    .inferenceId("azure_openai_embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("azureopenai")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Api-Key\",\"resource_name\":\"Resource-name\",\"deployment_id\":\"Deployment-id\",\"api_version\":\"2024-02-01\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/azure_openai_embeddings` to create an inference endpoint that performs a `text_embedding` task. You do not specify a model, as it is defined already in the Azure OpenAI deployment.
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "Api-Key",
        "resource_name": "Resource-name",
        "deployment_id": "Deployment-id",
        "api_version": "2024-02-01"
    }
}
Run `PUT _inference/completion/azure_openai_completion` to create an inference endpoint that performs a `completion` task.
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "Api-Key",
        "resource_name": "Resource-name",
        "deployment_id": "Deployment-id",
        "api_version": "2024-02-01"
    }
}
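The service also accepts a Microsoft Entra token in place of an API key (see entra_id above; exactly one of the two must be provided). A sketch with the Python client, using placeholder credentials:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_openai_embeddings",
    inference_config={
        "service": "azureopenai",
        "service_settings": {
            "entra_id": "Entra-Token",  # placeholder token, used instead of api_key
            "resource_name": "Resource-name",
            "deployment_id": "Deployment-id",
            "api_version": "2024-02-01",
        },
    },
)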

Create an ELSER inference endpoint Deprecated Generally available

PUT /_inference/{task_type}/{elser_inference_id}

Create an inference endpoint to perform an inference task with the elser service. You can also deploy ELSER by using the Elasticsearch inference integration.


Your Elasticsearch deployment contains a preconfigured ELSER inference endpoint; you only need to create an endpoint using the API if you want to customize the settings.

The API request will automatically download and deploy the ELSER model if it isn't already downloaded.


You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value.

After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
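A sketch of that verification with the Python client (assumes ELSER v2's built-in model ID .elser_model_2 and the documented response layout of the get trained models statistics API):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection details

stats = client.ml.get_trained_models_stats(model_id=".elser_model_2")
for model in stats["trained_model_stats"]:
    deployment = model.get("deployment_stats")
    if deployment is None:
        continue  # model not deployed yet
    allocation = deployment["allocation_status"]
    if (
        allocation["state"] == "fully_allocated"
        and allocation["allocation_count"] == allocation["target_allocation_count"]
    ):
        print(model["model_id"], "is fully allocated and ready")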

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Value is sparse_embedding.

  • elser_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    Value is elser.

  • service_settings object Required
    • adaptive_allocations object
      • enabled boolean

        Turn on adaptive_allocations.

        Default value is false.

      • max_number_of_allocations number

        The maximum number of allocations to scale to. If set, it must be greater than or equal to min_number_of_allocations.

      • min_number_of_allocations number

        The minimum number of allocations to scale to. If set, it must be greater than or equal to 0. If not defined, the deployment scales to 0.

    • num_allocations number Required

      The total number of allocations this model is assigned across machine learning nodes. Increasing this value generally increases the throughput. If adaptive allocations is enabled, do not set this value because it's automatically set.

    • num_threads number Required

      The number of threads used by each model allocation during inference. Increasing this value generally increases the speed per inference request. The inference process is a compute-bound process; the number of threads must not exceed the number of available allocated processors per node. The value must be a power of 2. The maximum value is 32.


      If you want to optimize your ELSER endpoint for ingest, set the number of threads to 1. If you want to optimize your ELSER endpoint for search, set the number of threads to greater than 1.

Responses

  • 200 application/json
    • chunking_settings object

      Chunking configuration object

      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference Id

    • task_type string Required

      Value is sparse_embedding.

PUT /_inference/{task_type}/{elser_inference_id}
PUT _inference/sparse_embedding/my-elser-model
{
    "service": "elser",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1
    }
}
resp = client.inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-model",
    inference_config={
        "service": "elser",
        "service_settings": {
            "num_allocations": 1,
            "num_threads": 1
        }
    },
)
const response = await client.inference.put({
  task_type: "sparse_embedding",
  inference_id: "my-elser-model",
  inference_config: {
    service: "elser",
    service_settings: {
      num_allocations: 1,
      num_threads: 1,
    },
  },
});
response = client.inference.put(
  task_type: "sparse_embedding",
  inference_id: "my-elser-model",
  body: {
    "service": "elser",
    "service_settings": {
      "num_allocations": 1,
      "num_threads": 1
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "sparse_embedding",
    "inference_id" => "my-elser-model",
    "body" => [
        "service" => "elser",
        "service_settings" => [
            "num_allocations" => 1,
            "num_threads" => 1,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"elser","service_settings":{"num_allocations":1,"num_threads":1}}' "$ELASTICSEARCH_URL/_inference/sparse_embedding/my-elser-model"
client.inference().put(p -> p
    .inferenceId("my-elser-model")
    .taskType(TaskType.SparseEmbedding)
    .inferenceConfig(i -> i
        .service("elser")
        .serviceSettings(JsonData.fromJson("{\"num_allocations\":1,\"num_threads\":1}"))
    )
);
Request examples
Run `PUT _inference/sparse_embedding/my-elser-model` to create an inference endpoint that performs a `sparse_embedding` task. The request will automatically download the ELSER model if it isn't already downloaded and then deploy the model.
{
    "service": "elser",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1
    }
}
Run `PUT _inference/sparse_embedding/my-elser-model` to create an inference endpoint that performs a `sparse_embedding` task with adaptive allocations. When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load.
{
    "service": "elser",
    "service_settings": {
        "adaptive_allocations": {
            "enabled": true,
            "min_number_of_allocations": 3,
            "max_number_of_allocations": 10
        },
        "num_threads": 1
    }
}
Response examples (200)
A successful response when creating an ELSER inference endpoint.
{
  "inference_id": "my-elser-model",
  "task_type": "sparse_embedding",
  "service": "elser",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  },
  "task_settings": {}
}

Create a Google AI Studio inference endpoint Generally available

PUT /_inference/{task_type}/{googleaistudio_inference_id}

Create an inference endpoint to perform an inference task with the googleaistudio service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion or text_embedding.

  • googleaistudio_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation. (A hedged sketch using the recursive strategy follows the request example below.)

      Default value is sentence.

      External documentation
  • service string Required

    Value is googleaistudio.

  • service_settings object Required
    • api_key string Required

      A valid API key of your Google Gemini account.

    • model_id string Required

      The name of the model to use for the inference task. Refer to the Google documentation for the list of supported models.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120

Responses

  • 200 application/json
    • chunking_settings object

      Chunking configuration object

      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference ID

    • task_type string Required

      Values are text_embedding or completion.

PUT /_inference/{task_type}/{googleaistudio_inference_id}
PUT _inference/completion/google_ai_studio_completion
{
    "service": "googleaistudio",
    "service_settings": {
        "api_key": "api-key",
        "model_id": "model-id"
    }
}
resp = client.inference.put(
    task_type="completion",
    inference_id="google_ai_studio_completion",
    inference_config={
        "service": "googleaistudio",
        "service_settings": {
            "api_key": "api-key",
            "model_id": "model-id"
        }
    },
)
const response = await client.inference.put({
  task_type: "completion",
  inference_id: "google_ai_studio_completion",
  inference_config: {
    service: "googleaistudio",
    service_settings: {
      api_key: "api-key",
      model_id: "model-id",
    },
  },
});
response = client.inference.put(
  task_type: "completion",
  inference_id: "google_ai_studio_completion",
  body: {
    "service": "googleaistudio",
    "service_settings": {
      "api_key": "api-key",
      "model_id": "model-id"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "completion",
    "inference_id" => "google_ai_studio_completion",
    "body" => [
        "service" => "googleaistudio",
        "service_settings" => [
            "api_key" => "api-key",
            "model_id" => "model-id",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"googleaistudio","service_settings":{"api_key":"api-key","model_id":"model-id"}}' "$ELASTICSEARCH_URL/_inference/completion/google_ai_studio_completion"
client.inference().put(p -> p
    .inferenceId("google_ai_studio_completion")
    .taskType(TaskType.Completion)
    .inferenceConfig(i -> i
        .service("googleaistudio")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"api-key\",\"model_id\":\"model-id\"}"))
    )
);
Request example
Run `PUT _inference/completion/google_ai_studio_completion` to create an inference endpoint to perform a `completion` task type.
{
    "service": "googleaistudio",
    "service_settings": {
        "api_key": "api-key",
        "model_id": "model-id"
    }
}
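
None of the examples above exercise chunking_settings. A hedged sketch, assuming placeholder credentials and a hypothetical endpoint name, that creates a `text_embedding` endpoint with the recursive strategy; per the body documentation, recursive requires max_chunk_size plus either separators or separator_group:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")  # placeholder connection details

resp = client.inference.put(
    task_type="text_embedding",
    inference_id="google_ai_studio_embeddings",  # hypothetical endpoint name
    inference_config={
        "service": "googleaistudio",
        "service_settings": {
            "api_key": "api-key",    # placeholder, as in the example above
            "model_id": "model-id",  # placeholder, as in the example above
        },
        # The recursive strategy must set max_chunk_size and either
        # separators or separator_group; here the predefined markdown
        # separator list is selected.
        "chunking_settings": {
            "strategy": "recursive",
            "max_chunk_size": 200,
            "separator_group": "markdown",
        },
    },
)
print(resp)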

Create a Google Vertex AI inference endpoint Generally available

PUT /_inference/{task_type}/{googlevertexai_inference_id}

Create an inference endpoint to perform an inference task with the googlevertexai service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are rerank, text_embedding, completion, or chat_completion.

  • googlevertexai_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    Value is googlevertexai.

  • service_settings object Required
    • location string Required

      The name of the location to use for the inference task. Refer to the Google documentation for the list of supported locations.

      External documentation
    • model_id string Required

      The name of the model to use for the inference task. Refer to the Google documentation for the list of supported models.

      External documentation
    • project_id string Required

      The name of the project to use for the inference task.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
    • service_account_json string Required

      A valid service account in JSON format for the Google Vertex AI API.

  • task_settings object
    • auto_truncate boolean

      For a text_embedding task, truncate inputs longer than the maximum token length automatically.

    • top_n number

      For a rerank task, the number of the top N documents that should be returned. (A hedged sketch using these task settings follows the request examples below.)

Responses

  • 200 application/json
    • chunking_settings object

      Chunking configuration object

      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference ID

    • task_type string Required

      Values are text_embedding or rerank.

PUT /_inference/{task_type}/{googlevertexai_inference_id}
PUT _inference/text_embedding/google_vertex_ai_embeddings
{
    "service": "googlevertexai",
    "service_settings": {
        "service_account_json": "service-account-json",
        "model_id": "model-id",
        "location": "location",
        "project_id": "project-id"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="google_vertex_ai_embeddingss",
    inference_config={
        "service": "googlevertexai",
        "service_settings": {
            "service_account_json": "service-account-json",
            "model_id": "model-id",
            "location": "location",
            "project_id": "project-id"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "google_vertex_ai_embeddingss",
  inference_config: {
    service: "googlevertexai",
    service_settings: {
      service_account_json: "service-account-json",
      model_id: "model-id",
      location: "location",
      project_id: "project-id",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "google_vertex_ai_embeddingss",
  body: {
    "service": "googlevertexai",
    "service_settings": {
      "service_account_json": "service-account-json",
      "model_id": "model-id",
      "location": "location",
      "project_id": "project-id"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "google_vertex_ai_embeddingss",
    "body" => [
        "service" => "googlevertexai",
        "service_settings" => [
            "service_account_json" => "service-account-json",
            "model_id" => "model-id",
            "location" => "location",
            "project_id" => "project-id",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"googlevertexai","service_settings":{"service_account_json":"service-account-json","model_id":"model-id","location":"location","project_id":"project-id"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/google_vertex_ai_embeddings"
client.inference().put(p -> p
    .inferenceId("google_vertex_ai_embeddingss")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("googlevertexai")
        .serviceSettings(JsonData.fromJson("{\"service_account_json\":\"service-account-json\",\"model_id\":\"model-id\",\"location\":\"location\",\"project_id\":\"project-id\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/google_vertex_ai_embeddings` to create an inference endpoint to perform a `text_embedding` task type.
{
    "service": "googlevertexai",
    "service_settings": {
        "service_account_json": "service-account-json",
        "model_id": "model-id",
        "location": "location",
        "project_id": "project-id"
    }
}
Run `PUT _inference/rerank/google_vertex_ai_rerank` to create an inference endpoint to perform a `rerank` task type.
{
    "service": "googlevertexai",
    "service_settings": {
        "service_account_json": "service-account-json",
        "project_id": "project-id"
    }
}
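
The task_settings documented above do not appear in the request examples. A minimal sketch, reusing the same placeholder service settings, that sets auto_truncate on a `text_embedding` endpoint and top_n on a `rerank` endpoint:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")  # placeholder connection details

# text_embedding endpoint that truncates over-long inputs automatically.
client.inference.put(
    task_type="text_embedding",
    inference_id="google_vertex_ai_embeddings",
    inference_config={
        "service": "googlevertexai",
        "service_settings": {
            "service_account_json": "service-account-json",
            "model_id": "model-id",
            "location": "location",
            "project_id": "project-id",
        },
        "task_settings": {"auto_truncate": True},
    },
)

# rerank endpoint that returns only the top 3 documents.
client.inference.put(
    task_type="rerank",
    inference_id="google_vertex_ai_rerank",
    inference_config={
        "service": "googlevertexai",
        "service_settings": {
            "service_account_json": "service-account-json",
            "project_id": "project-id",
        },
        "task_settings": {"top_n": 3},
    },
)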

Create a Watsonx inference endpoint Generally available

PUT /_inference/{task_type}/{watsonx_inference_id}

Create an inference endpoint to perform an inference task with the watsonxai service. You need an IBM Cloud Databases for Elasticsearch deployment to use the watsonxai inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are text_embedding, chat_completion, or completion.

  • watsonx_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • service string Required

    Value is watsonxai.

  • service_settings object Required
    • api_key string Required

      A valid API key for your Watsonx account. You can find your existing Watsonx API keys, or create a new one, on the API keys page.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key. (A hedged rotation sketch follows the request example below.)

      External documentation
    • api_version string Required

      A version parameter that takes a version date in the format of YYYY-MM-DD. For the active version dates, refer to the Watsonx documentation.

      External documentation
    • model_id string Required

      The name of the model to use for the inference task. Refer to the IBM Embedding Models section in the Watsonx documentation for the list of available text embedding models, and to the IBM library of foundation models in Watsonx.ai.

      External documentation
    • project_id string Required

      The identifier of the IBM Cloud project to use for the inference task.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
    • url string Required

      The URL of the inference endpoint that you created on Watsonx.

Responses

  • 200 application/json
    • chunking_settings object

      Chunking configuration object

      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference ID

    • task_type string Required

      Values are text_embedding, chat_completion, or completion.

PUT /_inference/{task_type}/{watsonx_inference_id}
PUT _inference/text_embedding/watsonx-embeddings
{
  "service": "watsonxai",
  "service_settings": {
      "api_key": "Watsonx-API-Key", 
      "url": "Wastonx-URL", 
      "model_id": "ibm/slate-30m-english-rtrvr",
      "project_id": "IBM-Cloud-ID", 
      "api_version": "2024-03-14"
  }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="watsonx-embeddings",
    inference_config={
        "service": "watsonxai",
        "service_settings": {
            "api_key": "Watsonx-API-Key",
            "url": "Wastonx-URL",
            "model_id": "ibm/slate-30m-english-rtrvr",
            "project_id": "IBM-Cloud-ID",
            "api_version": "2024-03-14"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "watsonx-embeddings",
  inference_config: {
    service: "watsonxai",
    service_settings: {
      api_key: "Watsonx-API-Key",
      url: "Wastonx-URL",
      model_id: "ibm/slate-30m-english-rtrvr",
      project_id: "IBM-Cloud-ID",
      api_version: "2024-03-14",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "watsonx-embeddings",
  body: {
    "service": "watsonxai",
    "service_settings": {
      "api_key": "Watsonx-API-Key",
      "url": "Wastonx-URL",
      "model_id": "ibm/slate-30m-english-rtrvr",
      "project_id": "IBM-Cloud-ID",
      "api_version": "2024-03-14"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "watsonx-embeddings",
    "body" => [
        "service" => "watsonxai",
        "service_settings" => [
            "api_key" => "Watsonx-API-Key",
            "url" => "Wastonx-URL",
            "model_id" => "ibm/slate-30m-english-rtrvr",
            "project_id" => "IBM-Cloud-ID",
            "api_version" => "2024-03-14",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"watsonxai","service_settings":{"api_key":"Watsonx-API-Key","url":"Watsonx-URL","model_id":"ibm/slate-30m-english-rtrvr","project_id":"IBM-Cloud-ID","api_version":"2024-03-14"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/watsonx-embeddings"
client.inference().put(p -> p
    .inferenceId("watsonx-embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("watsonxai")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Watsonx-API-Key\",\"url\":\"Watsonx-URL\",\"model_id\":\"ibm/slate-30m-english-rtrvr\",\"project_id\":\"IBM-Cloud-ID\",\"api_version\":\"2024-03-14\"}"))
    )
);
Request example
Run `PUT _inference/text_embedding/watsonx-embeddings` to create a Watsonx inference endpoint that performs a text embedding task.
{
  "service": "watsonxai",
  "service_settings": {
      "api_key": "Watsonx-API-Key", 
      "url": "Wastonx-URL", 
      "model_id": "ibm/slate-30m-english-rtrvr",
      "project_id": "IBM-Cloud-ID", 
      "api_version": "2024-03-14"
  }
}
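
As noted in the API key description above, the stored key cannot be changed in place: you delete the endpoint and recreate it under the same name with the new key. A minimal rotation sketch, with placeholder values throughout:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")  # placeholder connection details

# Remove the existing endpoint; its stored Watsonx API key cannot be updated.
client.inference.delete(inference_id="watsonx-embeddings")

# Recreate it under the same name with the rotated key.
client.inference.put(
    task_type="text_embedding",
    inference_id="watsonx-embeddings",
    inference_config={
        "service": "watsonxai",
        "service_settings": {
            "api_key": "NEW-Watsonx-API-Key",  # the rotated key (placeholder)
            "url": "Watsonx-URL",
            "model_id": "ibm/slate-30m-english-rtrvr",
            "project_id": "IBM-Cloud-ID",
            "api_version": "2024-03-14",
        },
    },
)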

Logstash

Logstash APIs enable you to manage pipelines that are used by Logstash Central Management.

Learn more about centralized pipeline management

Machine learning anomaly detection

Delete a datafeed Generally available

DELETE /_ml/datafeeds/{datafeed_id}

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • datafeed_id string Required

    A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.

Query parameters

  • force boolean

    Use to forcefully delete a started datafeed; this method is quicker than stopping and deleting the datafeed. (A client sketch follows the response example below.)

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ml/datafeeds/{datafeed_id}
DELETE _ml/datafeeds/datafeed-total-requests
resp = client.ml.delete_datafeed(
    datafeed_id="datafeed-total-requests",
)
const response = await client.ml.deleteDatafeed({
  datafeed_id: "datafeed-total-requests",
});
response = client.ml.delete_datafeed(
  datafeed_id: "datafeed-total-requests"
)
$resp = $client->ml()->deleteDatafeed([
    "datafeed_id" => "datafeed-total-requests",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-total-requests"
client.ml().deleteDatafeed(d -> d
    .datafeedId("datafeed-total-requests")
);
Response examples (200)
A successful response when deleting a datafeed.
{
  "acknowledged": true
}
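
The force query parameter is also exposed by the clients; a minimal Python sketch (placeholder connection details):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")  # placeholder connection details

# Forcefully delete a datafeed that is still started, skipping the
# separate stop step; returns {"acknowledged": true} on success.
resp = client.ml.delete_datafeed(
    datafeed_id="datafeed-total-requests",
    force=True,
)
print(resp)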

Delete a filter Generally available

DELETE /_ml/filters/{filter_id}

If an anomaly detection job references the filter, you cannot delete the filter. You must update or delete the job before you can delete the filter.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • filter_id string Required

    A string that uniquely identifies a filter.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ml/filters/{filter_id}
DELETE _ml/filters/safe_domains
resp = client.ml.delete_filter(
    filter_id="safe_domains",
)
const response = await client.ml.deleteFilter({
  filter_id: "safe_domains",
});
response = client.ml.delete_filter(
  filter_id: "safe_domains"
)
$resp = $client->ml()->deleteFilter([
    "filter_id" => "safe_domains",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/filters/safe_domains"
client.ml().deleteFilter(d -> d
    .filterId("safe_domains")
);
Response examples (200)
A successful response when deleting a filter.
{
  "acknowledged": true
}
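
Because a filter that is still referenced by a job cannot be deleted, the call can fail; a hedged sketch that surfaces that case, assuming the ApiError exception class exported by the Python client:

from elasticsearch import Elasticsearch, ApiError

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")  # placeholder connection details

try:
    client.ml.delete_filter(filter_id="safe_domains")
except ApiError as err:
    # Usually means an anomaly detection job still references the filter;
    # update or delete that job, then retry the deletion.
    print(f"Could not delete filter: {err}")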

Open anomaly detection jobs Generally available

POST /_ml/anomaly_detectors/{job_id}/_open

An anomaly detection job must be opened to be ready to receive and analyze data. It can be opened and closed multiple times throughout its lifecycle. When you open a new job, it starts with an empty model. When you open an existing job, the most recent model state is automatically loaded. The job is ready to resume its analysis from where it left off, once new data is received.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

Query parameters

  • timeout string

    Controls the time to wait until a job has opened.

    Values are -1 or 0.

application/json

Body

  • timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

Responses

  • 200 application/json
    • opened boolean Required
    • node string Required
POST /_ml/anomaly_detectors/{job_id}/_open
POST /_ml/anomaly_detectors/job-01/_open
{
  "timeout": "35m"
}
resp = client.ml.open_job(
    job_id="job-01",
    timeout="35m",
)
const response = await client.ml.openJob({
  job_id: "job-01",
  timeout: "35m",
});
response = client.ml.open_job(
  job_id: "job-01",
  body: {
    "timeout": "35m"
  }
)
$resp = $client->ml()->openJob([
    "job_id" => "job-01",
    "body" => [
        "timeout" => "35m",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"timeout":"35m"}' "$ELASTICSEARCH_URL/_ml/anomaly_detectors/job-01/_open"
client.ml().openJob(o -> o
    .jobId("job-01")
    .timeout(t -> t
        .time("35m")
    )
);
Request example
A request to open anomaly detection jobs. The timeout specifies that the request waits up to 35 minutes for the job to open.
{
  "timeout": "35m"
}
Response examples (200)
A successful response when opening an anomaly detection job.
{
  "opened": true,
  "node": "node-1"
}

Start datafeeds Generally available

POST /_ml/datafeeds/{datafeed_id}/_start

A datafeed must be started in order to retrieve data from Elasticsearch. A datafeed can be started and stopped multiple times throughout its lifecycle.

Before you can start a datafeed, the anomaly detection job must be open. Otherwise, an error occurs.

If you restart a stopped datafeed, it continues processing input data from the next millisecond after it was stopped. If new data was indexed for that exact millisecond between stopping and starting, it will be ignored.

When Elasticsearch security features are enabled, your datafeed remembers which roles the last user to create or update it had at the time of creation or update and runs the query using those same roles. If you provided secondary authorization headers when you created or updated the datafeed, those credentials are used instead.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • datafeed_id string Required

    A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.

Query parameters

  • end string | number

    The time that the datafeed should end, which can be specified by using one of the following formats:

    • ISO 8601 format with milliseconds, for example 2017-01-22T06:00:00.000Z
    • ISO 8601 format without milliseconds, for example 2017-01-22T06:00:00+00:00
    • Milliseconds since the epoch, for example 1485061200000

    Date-time arguments using either of the ISO 8601 formats must have a time zone designator, where Z is accepted as an abbreviation for UTC time. When a URL is expected (for example, in browsers), the + used in time zone designators must be encoded as %2B. The end time value is exclusive. If you do not specify an end time, the datafeed runs continuously. (A hedged sketch showing both time formats follows the request example below.)

  • start string | number

    The time that the datafeed should begin, which can be specified by using the same formats as the end parameter. This value is inclusive. If you do not specify a start time and the datafeed is associated with a new anomaly detection job, the analysis starts from the earliest time for which data is available. If you restart a stopped datafeed and specify a start value that is earlier than the timestamp of the latest processed record, the datafeed continues from 1 millisecond after the timestamp of the latest processed record.

  • timeout string

    Specifies the amount of time to wait until a datafeed starts.

    Values are -1 or 0.

application/json

Body

  • end string | number

    A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

  • start string | number

    A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

  • timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

Responses

  • 200 application/json
    • node string | array[string] Required

    • started boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_ml/datafeeds/{datafeed_id}/_start
POST _ml/datafeeds/datafeed-low_request_rate/_start
{
  "start": "2019-04-07T18:22:16Z"
}
resp = client.ml.start_datafeed(
    datafeed_id="datafeed-low_request_rate",
    start="2019-04-07T18:22:16Z",
)
const response = await client.ml.startDatafeed({
  datafeed_id: "datafeed-low_request_rate",
  start: "2019-04-07T18:22:16Z",
});
response = client.ml.start_datafeed(
  datafeed_id: "datafeed-low_request_rate",
  body: {
    "start": "2019-04-07T18:22:16Z"
  }
)
$resp = $client->ml()->startDatafeed([
    "datafeed_id" => "datafeed-low_request_rate",
    "body" => [
        "start" => "2019-04-07T18:22:16Z",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"start":"2019-04-07T18:22:16Z"}' "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-low_request_rate/_start"
client.ml().startDatafeed(s -> s
    .datafeedId("datafeed-low_request_rate")
    .start(DateTime.of("2019-04-07T18:22:16Z"))
);
Request example
An example body for a `POST _ml/datafeeds/datafeed-low_request_rate/_start` request.
{
  "start": "2019-04-07T18:22:16Z"
}
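
As described under the query parameters, start is inclusive and end is exclusive, and both accept ISO 8601 strings or epoch milliseconds. A minimal sketch showing one of each (the end value is an arbitrary placeholder):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")  # placeholder connection details

# ISO 8601 start (inclusive) and epoch-milliseconds end (exclusive);
# omitting end would make the datafeed run continuously.
resp = client.ml.start_datafeed(
    datafeed_id="datafeed-low_request_rate",
    start="2019-04-07T18:22:16Z",
    end=1554669736000,  # placeholder epoch-millisecond timestamp
)
print(resp)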

Create a data frame analytics job Generally available

PUT /_ml/data_frame/analytics/{id}

This API creates a data frame analytics job that performs an analysis on the source indices and stores the outcome in a destination index. By default, the query used in the source configuration is {"match_all": {}}.

If the destination index does not exist, it is created automatically when you start the job.

If you supply only a subset of the regression or classification parameters, hyperparameter optimization occurs. It determines a value for each of the undefined parameters.
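
For example, a regression analysis that sets only dependent_variable leaves every other hyperparameter to the optimizer. A hedged sketch through the Python client; the job id and index names are hypothetical (source and dest identify the job's input and output indices):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")  # placeholder connection details

# Only dependent_variable is supplied, so every undefined regression
# hyperparameter is chosen by hyperparameter optimization.
client.ml.put_data_frame_analytics(
    id="house-price-regression",                # hypothetical job id
    source={"index": "house-prices"},           # hypothetical source index
    dest={"index": "house-price-predictions"},  # created when the job starts
    analysis={
        "regression": {
            "dependent_variable": "price"
        }
    },
)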

Required authorization

  • Index privileges: create_index,index,manage,read,view_index_metadata
  • Cluster privileges: manage_ml

Path parameters

  • id string Required

    Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.

application/json

Body Required

  • allow_lazy_start boolean

    Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node. If set to false and a machine learning node with capacity to run the job cannot be immediately found, the API returns an error. If set to true, the API does not return an error; the job waits in the starting state until sufficient machine learning node capacity is available. This behavior is also affected by the cluster-wide xpack.ml.max_lazy_ml_nodes setting.

    Default value is false.

  • analysis object Required
    • classification object
      • alpha number

        Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

      • dependent_variable string Required

        Defines which field of the document is to be predicted. It must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model will be generated for it. It is also known as the continuous target variable. For classification analysis, the data type of the field must be numeric (integer, short, long, byte), categorical (ip or keyword), or boolean. There must be no more than 30 different values in this field. For regression analysis, the data type of the field must be numeric.

      • downsample_factor number

        Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

      • early_stopping_enabled boolean

        Advanced configuration option. Specifies whether the training process should finish if it is not finding any better performing models. If disabled, the training process can take significantly longer and the chance of finding a better performing model is small.

        Default value is true.

      • eta number

        Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

      • eta_growth_rate_per_tree number

        Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

      • feature_bag_fraction number

        Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.

      • feature_processors array[object]

        Advanced configuration option. A collection of feature preprocessors that modify one or more included fields. The analysis uses the resulting one or more features instead of the original document field. However, these features are ephemeral; they are not stored in the destination index. Multiple feature_processors entries can refer to the same document fields. Automatic categorical feature encoding still occurs for the fields that are unprocessed by a custom processor or that have categorical values. Use this property only if you want to override the automatic feature encoding of the specified fields.

        • frequency_encoding object
          • feature_name string Required
          • field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • frequency_map object Required

            The resulting frequency map for the field value. If the field value is missing from the frequency_map, the resulting value is 0.

        • multi_encoding object
          • processors array[number] Required

            The ordered array of custom processors to execute. It must contain more than one processor.

        • n_gram_encoding object
          • feature_prefix string

            The feature name prefix. Defaults to ngram__.

          • field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • length number

            Specifies the length of the n-gram substring. Defaults to 50. Must be greater than 0.

          • n_grams array[number] Required

            Specifies which n-grams to gather. It's an array of integer values where the minimum value is 1 and the maximum value is 5.

          • start number

            Specifies the zero-indexed start of the n-gram substring. Negative values are allowed for encoding n-grams of string suffixes. Defaults to 0.

          • custom boolean
        • one_hot_encoding object
          • field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • hot_map string Required

            The one-hot map mapping the field value to the column name.

        • target_mean_encoding object
          • default_value number Required

            The default value if the field value is not found in the target_map.

          • feature_name string Required
          • field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • target_map object Required

            The map from field values to target means.

      • gamma number

        Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

      • lambda number

        Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

      • max_optimization_rounds_per_hyperparameter number

        Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

      • max_trees number

        Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

      • num_top_feature_importance_values number

        Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, no feature importance calculation occurs.

        Default value is 0.

      • prediction_field_name string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • randomize_seed number

        Defines the seed for the random generator that is used to pick training data. By default, it is randomly generated. Set it to a specific value to use the same training data each time you start a job (assuming other related parameters such as source and analyzed_fields are the same).

      • soft_tree_depth_limit number

        Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

      • soft_tree_depth_tolerance number

        Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

      • training_percent string | number

      • class_assignment_objective string
      • num_top_classes number

        Defines the number of categories for which the predicted probabilities are reported. It must be non-negative or -1. If it is -1 or greater than the total number of categories, probabilities are reported for all categories; if you have a large number of categories, there could be a significant effect on the size of your destination index. NOTE: To use the AUC ROC evaluation method, num_top_classes must be set to -1 or a value greater than or equal to the total number of categories.

        Default value is 2.

    • outlier_detection object
      • compute_feature_influence boolean

        Specifies whether the feature influence calculation is enabled.

        Default value is true.

      • feature_influence_threshold number

        The minimum outlier score that a document needs to have in order to calculate its feature influence score. Value range: 0-1.

        Default value is 0.1.

      • method string

        The method that outlier detection uses. Available methods are lof, ldof, distance_kth_nn, distance_knn, and ensemble. The default value is ensemble, which means that outlier detection uses an ensemble of different methods and normalizes and combines their individual outlier scores to obtain the overall outlier score.

        Default value is ensemble.

      • n_neighbors number

        Defines the value for how many nearest neighbors each method of outlier detection uses to calculate its outlier score. When the value is not set, different values are used for different ensemble members. This default behavior helps improve the diversity in the ensemble; only override it if you are confident that the value you choose is appropriate for the data set.

      • outlier_fraction number

        The proportion of the data set that is assumed to be outlying prior to outlier detection. For example, 0.05 means it is assumed that 5% of values are real outliers and 95% are inliers.

      • standardization_enabled boolean

        If true, the following operation is performed on the columns before computing outlier scores: (x_i - mean(x_i)) / sd(x_i).

        Default value is true.

    • regression object
      • alpha number

        Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

      • dependent_variable string Required

        Defines which field of the document is to be predicted. It must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model will be generated for it. It is also known as the continuous target variable. For classification analysis, the data type of the field must be numeric (integer, short, long, byte), categorical (ip or keyword), or boolean. There must be no more than 30 different values in this field. For regression analysis, the data type of the field must be numeric.

      • downsample_factor number

        Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

      • early_stopping_enabled boolean

        Advanced configuration option. Specifies whether the training process should finish if it is not finding any better performing models. If disabled, the training process can take significantly longer and the chance of finding a better performing model is small.

        Default value is true.

      • eta number

        Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

      • eta_growth_rate_per_tree number

        Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

      • feature_bag_fraction number

        Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.

      • feature_processors array[object]

        Advanced configuration option. A collection of feature preprocessors that modify one or more included fields. The analysis uses the resulting one or more features instead of the original document field. However, these features are ephemeral; they are not stored in the destination index. Multiple feature_processors entries can refer to the same document fields. Automatic categorical feature encoding still occurs for the fields that are unprocessed by a custom processor or that have categorical values. Use this property only if you want to override the automatic feature encoding of the specified fields.

        • frequency_encoding object
          • feature_name string Required
          • field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • frequency_map object Required

            The resulting frequency map for the field value. If the field value is missing from the frequency_map, the resulting value is 0.

        • multi_encoding object
          • processors array[number] Required

            The ordered array of custom processors to execute. It must contain more than one processor.

        • n_gram_encoding object
          • feature_prefix string

            The feature name prefix. Defaults to ngram__.

          • field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • length number

            Specifies the length of the n-gram substring. Defaults to 50. Must be greater than 0.

          • n_grams array[number] Required

            Specifies which n-grams to gather. It's an array of integer values where the minimum value is 1 and the maximum value is 5.

          • start number

            Specifies the zero-indexed start of the n-gram substring. Negative values are allowed for encoding n-grams of string suffixes. Defaults to 0.

          • custom boolean
        • one_hot_encoding object
          Hide one_hot_encoding attributes Show one_hot_encoding attributes object
          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • hot_map string Required

            The one hot map mapping the field value with the column name.

        • target_mean_encoding object
          Hide target_mean_encoding attributes Show target_mean_encoding attributes object
          • default_value number Required

            The default value if field value is not found in the target_map.

          • feature_name string Required
          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • target_map object Required

            The field value to target mean transition map.

      • gamma number

        Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

      • lambda number

        Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

      • max_optimization_rounds_per_hyperparameter number

        Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

      • max_trees number

        Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

      • num_top_feature_importance_values number

        Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, no feature importance calculation occurs.

        Default value is 0.

      • prediction_field_name string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • randomize_seed number

        Defines the seed for the random generator that is used to pick training data. By default, it is randomly generated. Set it to a specific value to use the same training data each time you start a job (assuming other related parameters such as source and analyzed_fields are the same).

      • soft_tree_depth_limit number

        Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

      • soft_tree_depth_tolerance number

        Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

      • training_percent string | number

        Defines the percentage of eligible documents that are used for training.

      • loss_function string

        The loss function used during regression. Available options are mse (mean squared error), msle (mean squared logarithmic error), and huber (Pseudo-Huber loss).

        Default value is mse.

      • loss_function_parameter number

        A positive number that is used as a parameter to the loss_function.

  • analyzed_fields object
    • includes array[string]

      An array of strings that defines the fields that will be included in the analysis.

    • excludes array[string]

      An array of strings that defines the fields that will be excluded from the analysis. You do not need to add fields with unsupported data types to excludes; these fields are excluded from the analysis automatically.

  • description string

    A description of the job.

  • dest object Required
    • index string Required
    • results_field string

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

  • max_num_threads number

    The maximum number of threads to be used by the analysis. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself.

    Default value is 1.

  • _meta object
    • * object Additional properties
  • model_memory_limit string

    The approximate maximum amount of memory resources that are permitted for analytical processing. If your elasticsearch.yml file contains an xpack.ml.max_model_memory_limit setting, an error occurs when you try to create data frame analytics jobs that have model_memory_limit values greater than that setting.

    Default value is 1gb.

  • source object Required
    • index string | array[string] Required
    • runtime_mappings object
      • * object Additional properties
        • fields object

          For type composite

          • * object Additional properties
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • fetch_fields array[object]

          For type lookup

          • field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • format string
        • format string

          A custom format for date type runtime fields.

        • input_field string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • target_field string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • target_index string
        • script object
          • source string | object
          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            • * object Additional properties
          • lang string

            Values are painless, expression, mustache, or java.

          • options object
            • * string Additional properties
        • type string Required

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • _source object
      • includes array[string]

        An array of strings that defines the fields that will be included in the analysis.

      • excludes array[string]

        An array of strings that defines the fields that will be excluded from the analysis. You do not need to add fields with unsupported data types to excludes; these fields are excluded from the analysis automatically.

    • query object

      The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {}}.

      Query DSL
  • headers object
  • version string
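
The feature_processors option above is easiest to see in a full request. The following is a minimal sketch rather than a definitive recipe: the job ID, index names, and field names are hypothetical, and it applies only the n_gram_encoding processor documented in this list.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Classification job that overrides the automatic encoding of "product_name"
# with 1-gram and 2-gram features taken from its first 10 characters.
resp = client.ml.put_data_frame_analytics(
    id="product-returns-classification",      # hypothetical job ID
    source={"index": ["product-orders"]},     # hypothetical source index
    dest={"index": "product-orders-predictions"},
    analysis={
        "classification": {
            "dependent_variable": "was_returned",
            "training_percent": 75,
            "feature_processors": [
                {
                    "n_gram_encoding": {
                        "field": "product_name",
                        "feature_prefix": "name_ngram",
                        "n_grams": [1, 2],  # gather 1-grams and 2-grams
                        "length": 10,       # length of the encoded substring
                        "start": 0          # zero-indexed substring start
                    }
                }
            ]
        }
    },
)
print(resp["id"])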

Responses

  • 200 application/json
    • authorization object
      • api_key object
        • id string Required

          The identifier for the API key.

        • name string Required

          The name of the API key.

      • roles array[string]

        If a user ID was used for the most recent update to the job, its roles at the time of the update are listed in the response.

      • service_account string

        If a service account was used for the most recent update to the job, the account name is listed in the response.

    • allow_lazy_start boolean Required
    • analysis object Required
      • classification object
        • alpha number

          Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

        • dependent_variable string Required

          Defines which field of the document is to be predicted. It must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model will be generated for it. It is also known as the continuous target variable. For classification analysis, the data type of the field must be numeric (integer, short, long, byte), categorical (ip or keyword), or boolean. There must be no more than 30 different values in this field. For regression analysis, the data type of the field must be numeric.

        • downsample_factor number

          Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

        • early_stopping_enabled boolean

          Advanced configuration option. Specifies whether the training process should finish if it is not finding any better performing models. If disabled, the training process can take significantly longer and the chance of finding a better performing model is small.

          Default value is true.

        • eta number

          Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

        • eta_growth_rate_per_tree number

          Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

        • feature_bag_fraction number

          Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.

        • feature_processors array[object]

          Advanced configuration option. A collection of feature preprocessors that modify one or more included fields. The analysis uses the resulting one or more features instead of the original document field. However, these features are ephemeral; they are not stored in the destination index. Multiple feature_processors entries can refer to the same document fields. Automatic categorical feature encoding still occurs for the fields that are unprocessed by a custom processor or that have categorical values. Use this property only if you want to override the automatic feature encoding of the specified fields.

          • frequency_encoding object
          • multi_encoding object
          • n_gram_encoding object
          • one_hot_encoding object
          • target_mean_encoding object
        • gamma number

          Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

        • lambda number

          Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

        • max_optimization_rounds_per_hyperparameter number

          Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

        • max_trees number

          Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

        • num_top_feature_importance_values number

          Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, no feature importance calculation occurs.

          Default value is 0.

        • prediction_field_name string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • randomize_seed number

          Defines the seed for the random generator that is used to pick training data. By default, it is randomly generated. Set it to a specific value to use the same training data each time you start a job (assuming other related parameters such as source and analyzed_fields are the same).

        • soft_tree_depth_limit number

          Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

        • soft_tree_depth_tolerance number

          Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

        • training_percent string | number

          Defines the percentage of eligible documents that are used for training.

        • class_assignment_objective string

          Defines the objective to optimize when assigning class labels. Values are maximize_accuracy or maximize_minimum_recall.

        • num_top_classes number

          Defines the number of categories for which the predicted probabilities are reported. It must be non-negative or -1. If it is -1 or greater than the total number of categories, probabilities are reported for all categories; if you have a large number of categories, there could be a significant effect on the size of your destination index. NOTE: To use the AUC ROC evaluation method, num_top_classes must be set to -1 or a value greater than or equal to the total number of categories.

          Default value is 2.

      • outlier_detection object
        • compute_feature_influence boolean

          Specifies whether the feature influence calculation is enabled.

          Default value is true.

        • feature_influence_threshold number

          The minimum outlier score that a document needs to have in order to calculate its feature influence score. Value range: 0-1.

          Default value is 0.1.

        • method string

          The method that outlier detection uses. Available methods are lof, ldof, distance_kth_nn, distance_knn, and ensemble. The default value is ensemble, which means that outlier detection uses an ensemble of different methods and normalises and combines their individual outlier scores to obtain the overall outlier score.

          Default value is ensemble.

        • n_neighbors number

          Defines the value for how many nearest neighbors each method of outlier detection uses to calculate its outlier score. When the value is not set, different values are used for different ensemble members. This default behavior helps improve the diversity in the ensemble; only override it if you are confident that the value you choose is appropriate for the data set.

        • outlier_fraction number

          The proportion of the data set that is assumed to be outlying prior to outlier detection. For example, 0.05 means it is assumed that 5% of values are real outliers and 95% are inliers.

        • standardization_enabled boolean

          If true, the following operation is performed on the columns before computing outlier scores: (x_i - mean(x_i)) / sd(x_i).

          Default value is true.

      • regression object
        • alpha number

          Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

        • dependent_variable string Required

          Defines which field of the document is to be predicted. It must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model will be generated for it. It is also known as the continuous target variable. For classification analysis, the data type of the field must be numeric (integer, short, long, byte), categorical (ip or keyword), or boolean. There must be no more than 30 different values in this field. For regression analysis, the data type of the field must be numeric.

        • downsample_factor number

          Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

        • early_stopping_enabled boolean

          Advanced configuration option. Specifies whether the training process should finish if it is not finding any better performing models. If disabled, the training process can take significantly longer and the chance of finding a better performing model is small.

          Default value is true.

        • eta number

          Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

        • eta_growth_rate_per_tree number

          Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

        • feature_bag_fraction number

          Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.

        • feature_processors array[object]

          Advanced configuration option. A collection of feature preprocessors that modify one or more included fields. The analysis uses the resulting one or more features instead of the original document field. However, these features are ephemeral; they are not stored in the destination index. Multiple feature_processors entries can refer to the same document fields. Automatic categorical feature encoding still occurs for the fields that are unprocessed by a custom processor or that have categorical values. Use this property only if you want to override the automatic feature encoding of the specified fields.

          • frequency_encoding object
          • multi_encoding object
          • n_gram_encoding object
          • one_hot_encoding object
          • target_mean_encoding object
        • gamma number

          Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

        • lambda number

          Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

        • max_optimization_rounds_per_hyperparameter number

          Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

        • max_trees number

          Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

        • num_top_feature_importance_values number

          Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, no feature importance calculation occurs.

          Default value is 0.

        • prediction_field_name string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • randomize_seed number

          Defines the seed for the random generator that is used to pick training data. By default, it is randomly generated. Set it to a specific value to use the same training data each time you start a job (assuming other related parameters such as source and analyzed_fields are the same).

        • soft_tree_depth_limit number

          Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

        • soft_tree_depth_tolerance number

          Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

        • training_percent string | number

          Defines the percentage of eligible documents that are used for training.

        • loss_function string

          The loss function used during regression. Available options are mse (mean squared error), msle (mean squared logarithmic error), and huber (Pseudo-Huber loss).

          Default value is mse.

        • loss_function_parameter number

          A positive number that is used as a parameter to the loss_function.

    • analyzed_fields object
      • includes array[string]

        An array of strings that defines the fields that will be included in the analysis.

      • excludes array[string]

        An array of strings that defines the fields that will be excluded from the analysis. You do not need to add fields with unsupported data types to excludes; these fields are excluded from the analysis automatically.

    • create_time number

      The time when the job was created, in milliseconds since the epoch.

    • description string
    • dest object Required
      • index string Required
      • results_field string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • max_num_threads number Required
    • _meta object
      • * object Additional properties
    • model_memory_limit string Required
    • source object Required
      • index string | array[string] Required
      • runtime_mappings object
        • * object Additional properties
          • fields object

            For type composite

            • * object Additional properties
              • type string Required

                Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

          • fetch_fields array[object]

            For type lookup

            • field string Required

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • format string
          • format string

            A custom format for date type runtime fields.

          • input_field string

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • target_field string

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • target_index string
          • script object
            • id string
            • params object

              Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              • * object Additional properties
            • lang string

              Values are painless, expression, mustache, or java.

            • options object
              • * string Additional properties
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • _source object
        • includes array[string]

          An array of strings that defines the fields that will be included in the analysis.

        • excludes array[string]

          An array of strings that defines the fields that will be excluded from the analysis. You do not need to add fields with unsupported data types to excludes; these fields are excluded from the analysis automatically.

      • query object

        The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {}}.

        Query DSL
    • version string Required
PUT /_ml/data_frame/analytics/{id}
PUT _ml/data_frame/analytics/model-flight-delays-pre
{
  "source": {
    "index": [
      "kibana_sample_data_flights"
    ],
    "query": {
      "range": {
        "DistanceKilometers": {
          "gt": 0
        }
      }
    },
    "_source": {
      "includes": [],
      "excludes": [
        "FlightDelay",
        "FlightDelayType"
      ]
    }
  },
  "dest": {
    "index": "df-flight-delays",
    "results_field": "ml-results"
  },
  "analysis": {
  "regression": {
    "dependent_variable": "FlightDelayMin",
    "training_percent": 90
    }
  },
  "analyzed_fields": {
    "includes": [],
    "excludes": [
      "FlightNum"
    ]
  },
  "model_memory_limit": "100mb"
}
resp = client.ml.put_data_frame_analytics(
    id="model-flight-delays-pre",
    source={
        "index": [
            "kibana_sample_data_flights"
        ],
        "query": {
            "range": {
                "DistanceKilometers": {
                    "gt": 0
                }
            }
        },
        "_source": {
            "includes": [],
            "excludes": [
                "FlightDelay",
                "FlightDelayType"
            ]
        }
    },
    dest={
        "index": "df-flight-delays",
        "results_field": "ml-results"
    },
    analysis={
        "regression": {
            "dependent_variable": "FlightDelayMin",
            "training_percent": 90
        }
    },
    analyzed_fields={
        "includes": [],
        "excludes": [
            "FlightNum"
        ]
    },
    model_memory_limit="100mb",
)
const response = await client.ml.putDataFrameAnalytics({
  id: "model-flight-delays-pre",
  source: {
    index: ["kibana_sample_data_flights"],
    query: {
      range: {
        DistanceKilometers: {
          gt: 0,
        },
      },
    },
    _source: {
      includes: [],
      excludes: ["FlightDelay", "FlightDelayType"],
    },
  },
  dest: {
    index: "df-flight-delays",
    results_field: "ml-results",
  },
  analysis: {
    regression: {
      dependent_variable: "FlightDelayMin",
      training_percent: 90,
    },
  },
  analyzed_fields: {
    includes: [],
    excludes: ["FlightNum"],
  },
  model_memory_limit: "100mb",
});
response = client.ml.put_data_frame_analytics(
  id: "model-flight-delays-pre",
  body: {
    "source": {
      "index": [
        "kibana_sample_data_flights"
      ],
      "query": {
        "range": {
          "DistanceKilometers": {
            "gt": 0
          }
        }
      },
      "_source": {
        "includes": [],
        "excludes": [
          "FlightDelay",
          "FlightDelayType"
        ]
      }
    },
    "dest": {
      "index": "df-flight-delays",
      "results_field": "ml-results"
    },
    "analysis": {
      "regression": {
        "dependent_variable": "FlightDelayMin",
        "training_percent": 90
      }
    },
    "analyzed_fields": {
      "includes": [],
      "excludes": [
        "FlightNum"
      ]
    },
    "model_memory_limit": "100mb"
  }
)
$resp = $client->ml()->putDataFrameAnalytics([
    "id" => "model-flight-delays-pre",
    "body" => [
        "source" => [
            "index" => array(
                "kibana_sample_data_flights",
            ),
            "query" => [
                "range" => [
                    "DistanceKilometers" => [
                        "gt" => 0,
                    ],
                ],
            ],
            "_source" => [
                "includes" => array(
                ),
                "excludes" => array(
                    "FlightDelay",
                    "FlightDelayType",
                ),
            ],
        ],
        "dest" => [
            "index" => "df-flight-delays",
            "results_field" => "ml-results",
        ],
        "analysis" => [
            "regression" => [
                "dependent_variable" => "FlightDelayMin",
                "training_percent" => 90,
            ],
        ],
        "analyzed_fields" => [
            "includes" => array(
            ),
            "excludes" => array(
                "FlightNum",
            ),
        ],
        "model_memory_limit" => "100mb",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"source":{"index":["kibana_sample_data_flights"],"query":{"range":{"DistanceKilometers":{"gt":0}}},"_source":{"includes":[],"excludes":["FlightDelay","FlightDelayType"]}},"dest":{"index":"df-flight-delays","results_field":"ml-results"},"analysis":{"regression":{"dependent_variable":"FlightDelayMin","training_percent":90}},"analyzed_fields":{"includes":[],"excludes":["FlightNum"]},"model_memory_limit":"100mb"}' "$ELASTICSEARCH_URL/_ml/data_frame/analytics/model-flight-delays-pre"
client.ml().putDataFrameAnalytics(p -> p
    .analysis(a -> a
        .regression(r -> r
            .dependentVariable("FlightDelayMin")
            .trainingPercent("90")
        )
    )
    .analyzedFields(an -> an
        .excludes("FlightNum")
    )
    .dest(d -> d
        .index("df-flight-delays")
        .resultsField("ml-results")
    )
    .id("model-flight-delays-pre")
    .modelMemoryLimit("100mb")
    .source(s -> s
        .index("kibana_sample_data_flights")
        .query(q -> q
            .range(r -> r
                .untyped(u -> u
                    .field("DistanceKilometers")
                    .gt(JsonData.fromJson("0"))
                )
            )
        )
        .source(so -> so
            .excludes(List.of("FlightDelay","FlightDelayType"))
        )
    )
);
Request example
An example body for a `PUT _ml/data_frame/analytics/model-flight-delays-pre` request.
{
  "source": {
    "index": [
      "kibana_sample_data_flights"
    ],
    "query": {
      "range": {
        "DistanceKilometers": {
          "gt": 0
        }
      }
    },
    "_source": {
      "includes": [],
      "excludes": [
        "FlightDelay",
        "FlightDelayType"
      ]
    }
  },
  "dest": {
    "index": "df-flight-delays",
    "results_field": "ml-results"
  },
  "analysis": {
  "regression": {
    "dependent_variable": "FlightDelayMin",
    "training_percent": 90
    }
  },
  "analyzed_fields": {
    "includes": [],
    "excludes": [
      "FlightNum"
    ]
  },
  "model_memory_limit": "100mb"
}
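
The example above configures a regression analysis. The other analysis types are configured the same way. As a minimal sketch (the job ID and index names are hypothetical), an outlier_detection job sets its parameters directly on the analysis object, and the 200 response echoes documented fields such as id, create_time, and version:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Outlier detection has no dependent_variable; all of its parameters are
# optional and are shown here purely for illustration.
resp = client.ml.put_data_frame_analytics(
    id="ecommerce-outliers",                  # hypothetical job ID
    source={"index": ["ecommerce-orders"]},   # hypothetical source index
    dest={"index": "ecommerce-orders-outliers"},
    analysis={
        "outlier_detection": {
            "method": "distance_knn",         # instead of the default ensemble
            "n_neighbors": 5,
            "compute_feature_influence": True,
            "feature_influence_threshold": 0.2
        }
    },
    model_memory_limit="50mb",
)
print(resp["id"], resp["create_time"], resp["version"])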

































Create a trained model Generally available

PUT /_ml/trained_models/{model_id}

Enables you to supply a trained model that is not created by data frame analytics.
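
As a minimal sketch of what this API enables (the model ID, field names, and values are hypothetical), the following supplies a tiny hand-crafted regression tree. The definition, description, and inference_config fields are documented below; input lists the field names the model expects.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# A toy regression tree: if feature "f1" < 0.5, predict 1.0; otherwise 2.0.
resp = client.ml.put_trained_model(
    model_id="toy-regression-tree",        # hypothetical model ID
    description="Toy hand-crafted regression tree",
    input={"field_names": ["f1"]},         # the fields the model expects
    inference_config={"regression": {}},
    definition={
        "trained_model": {
            "tree": {
                "feature_names": ["f1"],
                "target_type": "regression",
                "tree_structure": [
                    # node 0 splits on feature index 0 ("f1") at threshold 0.5
                    {"node_index": 0, "split_feature": 0, "threshold": 0.5,
                     "default_left": True, "left_child": 1, "right_child": 2},
                    {"node_index": 1, "leaf_value": 1.0},  # f1 < 0.5
                    {"node_index": 2, "leaf_value": 2.0}   # f1 >= 0.5
                ]
            }
        }
    },
)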

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • model_id string Required

    The unique identifier of the trained model.

Query parameters

  • defer_definition_decompression boolean Generally available

    If set to true and a compressed_definition is provided, the request defers definition decompression and skips relevant validations.

  • wait_for_completion boolean Generally available

    Whether to wait for all child operations (e.g. model download) to complete.

application/json

Body Required

  • compressed_definition string

    The compressed (GZipped and Base64 encoded) inference definition of the model. If compressed_definition is specified, then definition cannot be specified.

  • definition object
    • preprocessors array[object]

      Collection of preprocessors

      • frequency_encoding object
        • field string Required
        • feature_name string Required
        • frequency_map object Required
          • * number Additional properties
      • one_hot_encoding object
        • field string Required
        • hot_map object Required
          • * string Additional properties
      • target_mean_encoding object
        • field string Required
        • feature_name string Required
        • target_map object Required
          • * number Additional properties
        • default_value number Required
    • trained_model object Required
      • tree object
        • classification_labels array[string]
        • feature_names array[string] Required
        • target_type string
        • tree_structure array[object] Required
          • decision_type string
          • default_left boolean
          • leaf_value number
          • left_child number
          • node_index number Required
          • right_child number
          • split_feature number
          • split_gain number
          • threshold number
      • tree_node object
        • decision_type string
        • default_left boolean
        • leaf_value number
        • left_child number
        • node_index number Required
        • right_child number
        • split_feature number
        • split_gain number
        • threshold number
      • ensemble object
        • aggregate_output object
          • logistic_regression object
            • weights number Required
          • weighted_sum object
            • weights number Required
          • weighted_mode object
            • weights number Required
          • exponent object
            • weights number Required
        • classification_labels array[string]
        • feature_names array[string]
        • target_type string
        • trained_models array[object] Required
  • description string

    A human-readable description of the inference trained model.

  • inference_config object

    Inference configuration provided when storing the model config

    • regression object
      • results_field string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • num_top_feature_importance_values number

        Specifies the maximum number of feature importance values per document.

        Default value is 0.

    • classification object
      • num_top_classes number

        Specifies the number of top class predictions to return. Defaults to 0.

      • num_top_feature_importance_values number

        Specifies the maximum number of feature importance values per document.

        Default value is 0.

      • prediction_field_type string

        Specifies the type of the predicted field to write. Acceptable values are: string, number, boolean. When boolean is provided, 1.0 is transformed to true and 0.0 to false.

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • top_classes_results_field string

        Specifies the field to which the top classes are written. Defaults to top_classes.

    • text_classification object

      Text classification configuration options

      • num_top_classes number

        Specifies the number of top class predictions to return. Defaults to 0.

      • tokenization object

        Tokenization options stored in inference configuration

        • bert object

          BERT and MPNet tokenization configuration options
          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • bert_ja object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • mpnet object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • roberta object

          RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

          • add_prefix_space boolean

            Should the tokenizer prefix input with a space character

            Default value is false.

        • xlm_roberta object
          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • classification_labels array[string]

        Classification labels to apply other than the stored labels. Must have the same dimensions as the default configured labels.

      • vocabulary object
        • index string Required
    • zero_shot_classification object

      Zero shot classification configuration options

      • tokenization object

        Tokenization options stored in inference configuration
        • bert object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • bert_ja object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • mpnet object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • roberta object

          RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

          • add_prefix_space boolean

            Should the tokenizer prefix input with a space character

            Default value is false.

        • xlm_roberta object
          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

      • hypothesis_template string

        Hypothesis template used when tokenizing labels for prediction

        Default value is "This example is {}.".

      • classification_labels array[string] Required

        The zero shot classification labels indicating entailment, neutral, and contradiction. Must contain exactly and only entailment, neutral, and contradiction.

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • multi_label boolean

        Indicates if more than one true label exists.

        Default value is false.

      • labels array[string]

        The labels to predict.

    • fill_mask object

      Fill mask inference options

      • mask_token string

        The string/token which will be removed from incoming documents and replaced with the inference prediction(s). In a response, this field contains the mask token for the specified model/tokenizer. Each model and tokenizer has a predefined mask token which cannot be changed. Thus, it is recommended not to set this value in requests. However, if this field is present in a request, its value must match the predefined value for that model/tokenizer, otherwise the request will fail.

      • num_top_classes number

        Specifies the number of top class predictions to return. Defaults to 0.

      • tokenization object

        Tokenization options stored in inference configuration

        • bert object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • bert_ja object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • mpnet object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • roberta object

          RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

          • add_prefix_space boolean

            Should the tokenizer prefix input with a space character

            Default value is false.

        • xlm_roberta object

          XLM-RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • vocabulary object Required
        • index string Required
    • learning_to_rank object
      • default_params object
        • * object Additional properties
      • feature_extractors array[object]
      • num_top_feature_importance_values number Required
    • ner object

      Named entity recognition options

      • tokenization object

        Tokenization options stored in inference configuration

        • bert object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • bert_ja object

          BERT tokenization configuration options for Japanese text

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • mpnet object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • roberta object

          RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

          • add_prefix_space boolean

            Should the tokenizer prefix input with a space character

            Default value is false.

        • xlm_roberta object

          XLM-RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • classification_labels array[string]

        The token classification labels. Must be IOB-formatted tags.

      • vocabulary object
        • index string Required
    • pass_through object

      Pass through configuration options

      • tokenization object

        Tokenization options stored in inference configuration

        • bert object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • bert_ja object

          BERT tokenization configuration options for Japanese text

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • mpnet object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • roberta object

          RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

          • add_prefix_space boolean

            Should the tokenizer prefix input with a space character

            Default value is false.

        • xlm_roberta object

          XLM-RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • vocabulary object
        • index string Required
    • text_embedding object

      Text embedding inference options

      • embedding_size number

        The number of dimensions in the embedding output

      • tokenization object

        Tokenization options stored in inference configuration

        • bert object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • bert_ja object

          BERT tokenization configuration options for Japanese text

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • mpnet object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • roberta object

          RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

          • add_prefix_space boolean

            Should the tokenizer prefix input with a space character

            Default value is false.

        • xlm_roberta object

          XLM-RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • vocabulary object Required
        • index string Required
    • text_expansion object

      Text expansion inference options

      • tokenization object

        Tokenization options stored in inference configuration

        • bert object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • bert_ja object

          BERT tokenization configuration options for Japanese text

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • mpnet object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • roberta object

          RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

          • add_prefix_space boolean

            Should the tokenizer prefix input with a space character

            Default value is false.

        • xlm_roberta object

          XLM-RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • vocabulary object Required
        • index string Required
    • question_answering object

      Question answering inference options

      • num_top_classes number

        Specifies the number of top class predictions to return. Defaults to 0.

      • tokenization object

        Tokenization options stored in inference configuration

        • bert object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • bert_ja object

          BERT tokenization configuration options for Japanese text

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • mpnet object

          BERT and MPNet tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

        • roberta object

          RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

          • add_prefix_space boolean

            Should the tokenizer prefix input with a space character

            Default value is false.

        • xlm_roberta object

          XLM-RoBERTa tokenization configuration options

          • do_lower_case boolean

            Should the tokenizer lower case the text

            Default value is false.

          • max_sequence_length number

            Maximum input sequence length for the model

            Default value is 512.

          • span number

            Tokenization spanning options. Special value of -1 indicates no spanning takes place

            Default value is -1.

          • truncate string

            Values are first, second, or none.

          • with_special_tokens boolean

            Is tokenization completed with special tokens

            Default value is true.

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • max_answer_length number

        The maximum answer length to consider

  • input object
    • field_names string | array[string] Required
  • metadata object

    An object map that contains metadata about the model.

  • model_type string

    Values are tree_ensemble, lang_ident, or pytorch.

  • model_size_bytes number

    The estimated memory usage in bytes to keep the trained model in memory. This property is supported only if defer_definition_decompression is true or the model definition is not supplied.

  • platform_architecture string

    The platform architecture (if applicable) of the trained model. If the model only works on one platform because it is heavily optimized for a particular processor architecture and OS combination, this field specifies which. The format of the string must match the platform identifiers used by Elasticsearch: one of linux-x86_64, linux-aarch64, darwin-x86_64, darwin-aarch64, or windows-x86_64. For portable models (those that work independently of processor architecture or OS features), leave this field unset.

  • tags array[string]

    An array of tags to organize the model.

  • prefix_strings object
    • ingest string

      String prepended to input at ingest
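
The request properties above combine into a single JSON body. As a minimal sketch only, the following example creates a PyTorch model configured for zero-shot classification; the model ID my-zero-shot-model, the input field text_field, and the tag are placeholder values, and only a small subset of the options documented above is shown.

// Placeholder model ID and field name; adjust to your own model.
PUT _ml/trained_models/my-zero-shot-model
{
  "model_type": "pytorch",
  "input": {
    "field_names": ["text_field"]
  },
  "inference_config": {
    "zero_shot_classification": {
      "classification_labels": ["entailment", "neutral", "contradiction"],
      "hypothesis_template": "This example is {}.",
      "multi_label": false,
      "tokenization": {
        "bert": {
          "do_lower_case": false,
          "max_sequence_length": 512,
          "truncate": "first",
          "with_special_tokens": true
        }
      }
    }
  },
  "tags": ["zero-shot"]
}

Note that, per the parameter descriptions above, classification_labels must contain exactly and only entailment, neutral, and contradiction; the optional labels array lists the labels to predict.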

Responses

  • 200 application/json
    • model_id string Required
    • model_type string

      Values are tree_ensemble, lang_ident, or pytorch.

    • tags array[string] Required

      A comma-delimited string of tags. A trained model can have many tags, or none.

    • version string
    • compressed_definition string
    • created_by string

      Information on the creator of the trained model.

    • create_time string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • default_field_map object

      Any field map described in the inference configuration takes precedence.

      • * string Additional properties
    • description string

      The free-text description of the trained model.

    • estimated_heap_memory_usage_bytes number

      The estimated heap usage in bytes to keep the trained model in memory.

    • estimated_operations number

      The estimated number of operations to use the trained model.

    • fully_defined boolean

      True if the full model definition is present.

    • inference_config object

      Inference configuration provided when storing the model config

      • regression object
        • results_field string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • num_top_feature_importance_values number

          Specifies the maximum number of feature importance values per document.

          Default value is 0.

      • classification object
        • num_top_classes number

          Specifies the number of top class predictions to return. Defaults to 0.

        • num_top_feature_importance_values number

          Specifies the maximum number of feature importance values per document.

          Default value is 0.

        • prediction_field_type string

          Specifies the type of the predicted field to write. Acceptable values are: string, number, boolean. When boolean is provided, 1.0 is transformed to true and 0.0 to false.

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • top_classes_results_field string

          Specifies the field to which the top classes are written. Defaults to top_classes.

      • text_classification object

        Text classification configuration options

        • num_top_classes number

          Specifies the number of top class predictions to return. Defaults to 0.

        • tokenization object

          Tokenization options stored in inference configuration

          • bert object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • bert_ja object

            BERT tokenization configuration options for Japanese text

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • mpnet object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • roberta object

            RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

            • add_prefix_space boolean

              Should the tokenizer prefix input with a space character

              Default value is false.

          • xlm_roberta object

            XLM-RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • classification_labels array[string]

          Classification labels to apply other than the stored labels. Must have the same dimensions as the default configured labels.

        • vocabulary object
          • index string Required
      • zero_shot_classification object

        Zero shot classification configuration options

        • tokenization object

          Tokenization options stored in inference configuration

          • bert object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • bert_ja object

            BERT tokenization configuration options for Japanese text

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • mpnet object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • roberta object

            RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

            • add_prefix_space boolean

              Should the tokenizer prefix input with a space character

              Default value is false.

          • xlm_roberta object

            XLM-RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

        • hypothesis_template string

          Hypothesis template used when tokenizing labels for prediction

          Default value is "This example is {}.".

        • classification_labels array[string] Required

          The zero-shot classification labels indicating entailment, neutral, and contradiction. Must contain exactly and only entailment, neutral, and contradiction.

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • multi_label boolean

          Indicates if more than one true label exists.

          Default value is false.

        • labels array[string]

          The labels to predict.

      • fill_mask object

        Fill mask inference options

        • mask_token string

          The string/token which will be removed from incoming documents and replaced with the inference prediction(s). In a response, this field contains the mask token for the specified model/tokenizer. Each model and tokenizer has a predefined mask token which cannot be changed. Thus, it is recommended not to set this value in requests. However, if this field is present in a request, its value must match the predefined value for that model/tokenizer, otherwise the request will fail.

        • num_top_classes number

          Specifies the number of top class predictions to return. Defaults to 0.

        • tokenization object

          Tokenization options stored in inference configuration

          • bert object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • bert_ja object

            BERT tokenization configuration options for Japanese text

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • mpnet object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • roberta object

            RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

            • add_prefix_space boolean

              Should the tokenizer prefix input with a space character

              Default value is false.

          • xlm_roberta object

            XLM-RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • vocabulary object Required
          • index string Required
      • learning_to_rank object
        • default_params object
          • * object Additional properties
        • feature_extractors array[object]
        • num_top_feature_importance_values number Required
      • ner object

        Named entity recognition options

        • tokenization object

          Tokenization options stored in inference configuration

          • bert object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • bert_ja object

            BERT tokenization configuration options for Japanese text

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • mpnet object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • roberta object

            RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

            • add_prefix_space boolean

              Should the tokenizer prefix input with a space character

              Default value is false.

          • xlm_roberta object

            XLM-RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • classification_labels array[string]

          The token classification labels. Must be IOB-formatted tags.

        • vocabulary object
          • index string Required
      • pass_through object

        Pass through configuration options

        • tokenization object

          Tokenization options stored in inference configuration

          • bert object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • bert_ja object

            BERT tokenization configuration options for Japanese text

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • mpnet object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • roberta object

            RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

            • add_prefix_space boolean

              Should the tokenizer prefix input with a space character

              Default value is false.

          • xlm_roberta object

            XLM-RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • vocabulary object
          • index string Required
      • text_embedding object

        Text embedding inference options

        • embedding_size number

          The number of dimensions in the embedding output

        • tokenization object

          Tokenization options stored in inference configuration

          • bert object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • bert_ja object

            BERT tokenization configuration options for Japanese text

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • mpnet object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • roberta object

            RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

            • add_prefix_space boolean

              Should the tokenizer prefix input with a space character

              Default value is false.

          • xlm_roberta object

            XLM-RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • vocabulary object Required
          • index string Required
      • text_expansion object

        Text expansion inference options

        • tokenization object

          Tokenization options stored in inference configuration

          • bert object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • bert_ja object

            Japanese BERT tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • mpnet object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • roberta object

            RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

            • add_prefix_space boolean

              Should the tokenizer prefix input with a space character

              Default value is false.

          • xlm_roberta object

            XLM-RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • vocabulary object Required
          • index string Required
      • question_answering object

        Question answering inference options

        • num_top_classes number

          Specifies the number of top class predictions to return. Defaults to 0.

        • tokenization object

          Tokenization options stored in inference configuration

          • bert object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • bert_ja object

            Japanese BERT tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • mpnet object

            BERT and MPNet tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

          • roberta object

            RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

            • add_prefix_space boolean

              Should the tokenizer prefix input with a space character

              Default value is false.

          • xlm_roberta object

            XLM-RoBERTa tokenization configuration options

            • do_lower_case boolean

              Should the tokenizer lower case the text

              Default value is false.

            • max_sequence_length number

              Maximum input sequence length for the model

              Default value is 512.

            • span number

              Tokenization spanning options. Special value of -1 indicates no spanning takes place

              Default value is -1.

            • truncate string

              Values are first, second, or none.

            • with_special_tokens boolean

              Is tokenization completed with special tokens

              Default value is true.

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • max_answer_length number

          The maximum answer length to consider

    • input object Required
      • field_names array[string] Required

        An array of input field names for the model.

    • license_level string

      The license level of the trained model.

    • metadata object
      • model_aliases array[string]
      • feature_importance_baseline object

        An object that contains the baseline for feature importance values. For regression analysis, it is a single value. For classification analysis, there is a value for each class.

        • * string Additional properties
      • hyperparameters array[object]

        List of the available hyperparameters that were optimized during the fine_parameter_tuning phase, as well as those specified by the user.

        • absolute_importance number

          A positive number showing how much the parameter influences the variation of the loss function. Reported only for hyperparameters whose values were not specified by the user but tuned during hyperparameter optimization.

        • name string Required
        • relative_importance number

          A number between 0 and 1 showing the proportion of influence on the variation of the loss function among all tuned hyperparameters. Reported only for hyperparameters whose values were not specified by the user but tuned during hyperparameter optimization.

        • supplied boolean Required

          Indicates if the hyperparameter is specified by the user (true) or optimized (false).

        • value number Required

          The value of the hyperparameter, either optimized or specified by the user.

      • total_feature_importance array[object]

        An array of the total feature importance for each feature used from the training data set. This array of objects is returned if data frame analytics trained the model and the request includes total_feature_importance in the include request parameter.

        • feature_name string Required
        • importance array[object] Required

          A collection of feature importance statistics related to the training data set for this particular feature.

          • mean_magnitude number Required

            The average magnitude of this feature across all the training data. This value is the average of the absolute values of the importance for this feature.

          • max number Required

            The maximum importance value across all the training data for this feature.

          • min number Required

            The minimum importance value across all the training data for this feature.

        • classes array[object] Required

          If the trained model is a classification model, feature importance statistics are gathered per target class value.

          • class_name string Required
          • importance array[object] Required

            A collection of feature importance statistics related to the training data set for this particular feature.

    • model_size_bytes number | string

    • model_package object
      • create_time number

        The creation time of the model package, in milliseconds since the epoch.

      • description string
      • inference_config object
        • * object Additional properties
      • metadata object
        • * object Additional properties
      • minimum_version string
      • model_repository string
      • model_type string
      • packaged_model_id string Required
      • platform_architecture string
      • prefix_strings object
        • ingest string

          String prepended to input at ingest

      • size number | string

      • sha256 string
      • tags array[string]
      • vocabulary_file string
    • location object
      • index object Required
        • name string Required
    • platform_architecture string
    • prefix_strings object
      • ingest string

        String prepended to input at ingest
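
The span and truncate options shown for each tokenizer above work together: truncate controls which part of an over-long input is kept (first, second, or none), while a non-negative span splits the input into overlapping sub-sequences, with the span value giving the number of tokens shared between consecutive windows; spanning generally requires truncate to be none. A minimal sketch of such a tokenization block in Python follows; the field names match the schema above, but the specific values are illustrative assumptions, not recommendations:

# Hypothetical tokenization settings for long documents.
tokenization = {
    "bert": {
        "do_lower_case": False,
        "max_sequence_length": 512,
        "truncate": "none",  # spanning is only meaningful without truncation
        "span": 128,         # tokens of overlap between consecutive sub-sequences
        "with_special_tokens": True,
    }
}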

PUT /_ml/trained_models/{model_id}
curl \
 --request PUT 'http://api.example.com/_ml/trained_models/{model_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"compressed_definition":"string","definition":{"preprocessors":[{"frequency_encoding":{"field":"string","feature_name":"string","frequency_map":{"additionalProperty1":42.0,"additionalProperty2":42.0}},"one_hot_encoding":{"field":"string","hot_map":{"additionalProperty1":"string","additionalProperty2":"string"}},"target_mean_encoding":{"field":"string","feature_name":"string","target_map":{"additionalProperty1":42.0,"additionalProperty2":42.0},"default_value":42.0}}],"trained_model":{"tree":{"classification_labels":["string"],"feature_names":["string"],"target_type":"string","tree_structure":[{"decision_type":"string","default_left":true,"leaf_value":42.0,"left_child":42.0,"node_index":42.0,"right_child":42.0,"split_feature":42.0,"split_gain":42.0,"threshold":42.0}]},"tree_node":{"decision_type":"string","default_left":true,"leaf_value":42.0,"left_child":42.0,"node_index":42.0,"right_child":42.0,"split_feature":42.0,"split_gain":42.0,"threshold":42.0},"ensemble":{"aggregate_output":{"logistic_regression":{"weights":42.0},"weighted_sum":{"weights":42.0},"weighted_mode":{"weights":42.0},"exponent":{"weights":42.0}},"classification_labels":["string"],"feature_names":["string"],"target_type":"string","trained_models":[{}]}}},"description":"string",
  "inference_config":{"regression":{"results_field":"string","num_top_feature_importance_values":0},"classification":{"num_top_classes":42.0,"num_top_feature_importance_values":0,"prediction_field_type":"string","results_field":"string","top_classes_results_field":"string"},
  "text_classification":{"num_top_classes":42.0,"tokenization":{"bert":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"bert_ja":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"mpnet":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true,"add_prefix_space":false},"xlm_roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true}},"results_field":"string","classification_labels":["string"],"vocabulary":{"index":"string"}},
  "zero_shot_classification":{"tokenization":{"bert":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"bert_ja":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"mpnet":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true,"add_prefix_space":false},"xlm_roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true}},"hypothesis_template":"\"This example is {}.\"","classification_labels":["string"],"results_field":"string","multi_label":false,"labels":["string"]},
  "fill_mask":{"mask_token":"string","num_top_classes":42.0,"tokenization":{"bert":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"bert_ja":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"mpnet":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true,"add_prefix_space":false},"xlm_roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true}},"results_field":"string","vocabulary":{"index":"string"}},
  "learning_to_rank":{"default_params":{"additionalProperty1":{},"additionalProperty2":{}},"feature_extractors":[{}],"num_top_feature_importance_values":42.0},
  "ner":{"tokenization":{"bert":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"bert_ja":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"mpnet":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true,"add_prefix_space":false},"xlm_roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true}},"results_field":"string","classification_labels":["string"],"vocabulary":{"index":"string"}},
  "pass_through":{"tokenization":{"bert":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"bert_ja":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"mpnet":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true,"add_prefix_space":false},"xlm_roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true}},"results_field":"string","vocabulary":{"index":"string"}},
  "text_embedding":{"embedding_size":42.0,"tokenization":{"bert":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"bert_ja":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"mpnet":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true,"add_prefix_space":false},"xlm_roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true}},"results_field":"string","vocabulary":{"index":"string"}},
  "text_expansion":{"tokenization":{"bert":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"bert_ja":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"mpnet":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true,"add_prefix_space":false},"xlm_roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true}},"results_field":"string","vocabulary":{"index":"string"}},
  "question_answering":{"num_top_classes":42.0,"tokenization":{"bert":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"bert_ja":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"mpnet":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true},"roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true,"add_prefix_space":false},"xlm_roberta":{"do_lower_case":false,"max_sequence_length":512,"span":-1,"truncate":"first","with_special_tokens":true}},"results_field":"string","max_answer_length":42.0}},
  "input":{"field_names":"string"},"metadata":{},"model_type":"tree_ensemble","model_size_bytes":42.0,"platform_architecture":"string","tags":["string"],"prefix_strings":{"ingest":"string","search":"string"}}'
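
The same request can be issued from application code through a language client. Below is a minimal sketch with the Python client (elasticsearch-py 8.x assumed); the model ID, embedding size, and vocabulary index are hypothetical placeholders, and a real PyTorch model also needs its definition parts and vocabulary uploaded separately:

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")

# Register a PyTorch text_embedding model configuration.
# Every value below is a placeholder mirroring the request body above.
resp = client.ml.put_trained_model(
    model_id="my-text-embedding-model",  # hypothetical model ID
    model_type="pytorch",
    inference_config={
        "text_embedding": {
            "embedding_size": 384,
            "tokenization": {
                "bert": {"max_sequence_length": 512, "truncate": "first"}
            },
            "vocabulary": {"index": "my-vocab-index"},  # hypothetical index
        }
    },
    input={"field_names": ["text_field"]},
)
print(resp["model_id"])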
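
The hyperparameters and total_feature_importance metadata documented above are only returned when explicitly requested through the include request parameter when reading a model back. A short sketch reusing the client above, with a hypothetical data frame analytics model ID:

resp = client.ml.get_trained_models(
    model_id="my-dfa-regression-model",  # hypothetical model ID
    include="total_feature_importance",  # "hyperparameters" works the same way
)
for config in resp["trained_model_configs"]:
    for feature in config.get("metadata", {}).get("total_feature_importance", []):
        # each importance entry carries mean_magnitude, max, and min
        print(feature["feature_name"], feature["importance"])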

Delete a query ruleset Generally available

DELETE /_query_rules/{ruleset_id}

Remove a query ruleset and its associated data. This is a destructive action that is not recoverable.

Required authorization

  • Cluster privileges: manage_search_query_rules

Path parameters

  • ruleset_id string Required

    The unique identifier of the query ruleset to delete

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_query_rules/{ruleset_id}

DELETE _query_rules/my-ruleset/

resp = client.query_rules.delete_ruleset(
    ruleset_id="my-ruleset",
)

const response = await client.queryRules.deleteRuleset({
  ruleset_id: "my-ruleset",
});

response = client.query_rules.delete_ruleset(
  ruleset_id: "my-ruleset"
)

$resp = $client->queryRules()->deleteRuleset([
    "ruleset_id" => "my-ruleset",
]);

curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_query_rules/my-ruleset/"

client.queryRules().deleteRuleset(d -> d
    .rulesetId("my-ruleset")
);
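
As the response schema indicates, a successful deletion always reports acknowledged: true; a missing ruleset surfaces as an error rather than a false acknowledgement. A minimal sketch of handling both outcomes with the Python client (connection details are placeholders):

from elasticsearch import Elasticsearch, NotFoundError

client = Elasticsearch("https://localhost:9200", api_key="...")

try:
    resp = client.query_rules.delete_ruleset(ruleset_id="my-ruleset")
    print(resp["acknowledged"])  # True on success
except NotFoundError:
    # The ruleset did not exist, so there was nothing to delete.
    print("ruleset not found")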