Elasticsearch API

Base URL
http://api.example.com

Elasticsearch provides REST APIs that are used by the UI components and can be called directly to configure and access Elasticsearch features.

Documentation source and versions

This documentation is derived from the 8.19 branch of the elasticsearch-specification repository. It is provided under the Attribution-NonCommercial-NoDerivatives 4.0 International license. This documentation contains work-in-progress information for future Elastic Stack releases.

Last updated on Oct 6, 2025.

This API is provided under the Apache 2.0 license.

Authentication

The API accepts three different authentication methods:

API key auth (http_api_key)

Elasticsearch APIs support key-based authentication. You must create an API key and use the encoded value in the request header. For example:

curl -X GET "${ES_URL}/_cat/indices?v=true" \
  -H "Authorization: ApiKey ${API_KEY}"

To get API keys, use the /_security/api_key APIs.
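
For example, a minimal sketch with the Python client (an elasticsearch-py 8.x client is assumed; ES_URL and PASSWORD are placeholders for your deployment URL and credentials):

from elasticsearch import Elasticsearch

# Create an API key while authenticated with basic credentials.
admin = Elasticsearch(ES_URL, basic_auth=("elastic", PASSWORD))
key = admin.security.create_api_key(name="my-api-key")

# The "encoded" value is what goes into the "Authorization: ApiKey ..." header.
client = Elasticsearch(ES_URL, api_key=key["encoded"])
print(client.cat.indices(v=True))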

Basic auth (http)

Basic auth tokens are constructed with the Basic keyword, followed by a space, followed by a base64-encoded string of your username and password separated by a colon (username:password).

Example: send an Authorization: Basic aGVsbG86aGVsbG8= HTTP header with your requests to authenticate with the API.
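
As an illustration, the header value can be reproduced with the Python standard library alone (hello:hello is the throwaway credential that yields the example value above):

import base64

username, password = "hello", "hello"
token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
headers = {"Authorization": f"Basic {token}"}  # "Basic aGVsbG86aGVsbG8="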

Bearer auth (http)

Elasticsearch APIs support the use of bearer tokens in the Authorization HTTP header to authenticate with the API. For examples, refer to Token-based authentication services.
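
For example, a minimal sketch with the Python client (assuming elasticsearch-py 8.x; ES_URL and ACCESS_TOKEN are placeholders, the token being one obtained from a token-based authentication service):

from elasticsearch import Elasticsearch

# Sends "Authorization: Bearer <ACCESS_TOKEN>" with every request.
client = Elasticsearch(ES_URL, bearer_auth=ACCESS_TOKEN)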


Get an autoscaling policy Generally available; Added in 7.11.0

GET /_autoscaling/policy/{name}

NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.


Path parameters

  • name string Required

    The name of the autoscaling policy.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.


Responses

  • 200 application/json
    • roles array[string] Required
    • deciders object Required

      Decider settings.

      • * object Additional properties
GET /_autoscaling/policy/{name}
GET /_autoscaling/policy/my_autoscaling_policy
resp = client.autoscaling.get_autoscaling_policy(
    name="my_autoscaling_policy",
)
const response = await client.autoscaling.getAutoscalingPolicy({
  name: "my_autoscaling_policy",
});
response = client.autoscaling.get_autoscaling_policy(
  name: "my_autoscaling_policy"
)
$resp = $client->autoscaling()->getAutoscalingPolicy([
    "name" => "my_autoscaling_policy",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_autoscaling/policy/my_autoscaling_policy"
client.autoscaling().getAutoscalingPolicy(g -> g
    .name("my_autoscaling_policy")
);
Response examples (200)
This may be a response to `GET /_autoscaling/policy/my_autoscaling_policy`.
{
   "roles": <roles>,
   "deciders": <deciders>
}








Get the autoscaling capacity Generally available; Added in 7.11.0

GET /_autoscaling/capacity

NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.

This API gets the current autoscaling capacity based on the configured autoscaling policy. It returns information to size the cluster appropriately to the current workload.

The required_capacity is calculated as the maximum of the required_capacity results of all individual deciders that are enabled for the policy.

The operator should verify that the current_nodes match the operator’s knowledge of the cluster to avoid making autoscaling decisions based on stale or incomplete information.

The response contains decider-specific information you can use to diagnose how and why autoscaling determined a certain capacity was required. This information is provided for diagnosis only. Do not use this information to make autoscaling decisions.
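
As an illustrative sketch only (assuming an authenticated Python client, as in the examples below, and a policy named my_autoscaling_policy), the relationship between the deciders and the overall required_capacity can be inspected like this:

resp = client.autoscaling.get_autoscaling_capacity()
policy = resp["policies"]["my_autoscaling_policy"]  # assumed policy name

# required_capacity is the maximum required_capacity across the enabled deciders.
for name, decider in policy["deciders"].items():
    print(name, decider["required_capacity"])
print("overall:", policy["required_capacity"])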


Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.


Responses

  • 200 application/json
    • policies object Required
      • * object Additional properties
        • required_capacity object Required
          • node object Required
          • total object Required
        • current_capacity object Required
          • node object Required
          • total object Required
        • current_nodes array[object] Required
          • name string Required
        • deciders object Required
          • * object Additional properties
            • required_capacity object Required
            • reason_summary string
            • reason_details object
GET /_autoscaling/capacity
resp = client.autoscaling.get_autoscaling_capacity()
const response = await client.autoscaling.getAutoscalingCapacity();
response = client.autoscaling.get_autoscaling_capacity
$resp = $client->autoscaling()->getAutoscalingCapacity();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_autoscaling/capacity"
client.autoscaling().getAutoscalingCapacity(g -> g);
Response examples (200)
This may be a response to `GET /_autoscaling/capacity`.
{
  "policies": {}
}

Behavioral analytics





Create a behavioral analytics collection Technical preview; Added in 8.8.0

PUT /_application/analytics/{name}

Path parameters

  • name string Required

    The name of the analytics collection to be created or updated.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • name string Required

      The name of the analytics collection created or updated

PUT /_application/analytics/{name}
PUT _application/analytics/my_analytics_collection
resp = client.search_application.put_behavioral_analytics(
    name="my_analytics_collection",
)
const response = await client.searchApplication.putBehavioralAnalytics({
  name: "my_analytics_collection",
});
response = client.search_application.put_behavioral_analytics(
  name: "my_analytics_collection"
)
$resp = $client->searchApplication()->putBehavioralAnalytics([
    "name" => "my_analytics_collection",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/analytics/my_analytics_collection"
client.searchApplication().putBehavioralAnalytics(p -> p
    .name("my_analytics_collection")
);




Create a behavioral analytics collection event Technical preview

POST /_application/analytics/{collection_name}/event/{event_type}

Path parameters

  • collection_name string Required

    The name of the behavioral analytics collection.

  • event_type string

    The analytics event type.

    Values are page_view, search, or search_click.

Query parameters

  • debug boolean

    Whether the response should include more details.

application/json

Body Required

object

Responses

  • 200 application/json
    • accepted boolean Required
    • event object
POST /_application/analytics/{collection_name}/event/{event_type}
POST _application/analytics/my_analytics_collection/event/search_click
{
  "session": {
    "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
  },
  "user": {
    "id": "5f26f01a-bbee-4202-9298-81261067abbd"
  },
  "search":{
    "query": "search term",
    "results": {
      "items": [
        {
          "document": {
            "id": "123",
            "index": "products"
          }
        }
      ],
      "total_results": 10
    },
    "sort": {
      "name": "relevance"
    },
    "search_application": "website"
  },
  "document":{
    "id": "123",
    "index": "products"
  }
}
resp = client.search_application.post_behavioral_analytics_event(
    collection_name="my_analytics_collection",
    event_type="search_click",
    payload={
        "session": {
            "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
        },
        "user": {
            "id": "5f26f01a-bbee-4202-9298-81261067abbd"
        },
        "search": {
            "query": "search term",
            "results": {
                "items": [
                    {
                        "document": {
                            "id": "123",
                            "index": "products"
                        }
                    }
                ],
                "total_results": 10
            },
            "sort": {
                "name": "relevance"
            },
            "search_application": "website"
        },
        "document": {
            "id": "123",
            "index": "products"
        }
    },
)
const response = await client.searchApplication.postBehavioralAnalyticsEvent({
  collection_name: "my_analytics_collection",
  event_type: "search_click",
  payload: {
    session: {
      id: "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9",
    },
    user: {
      id: "5f26f01a-bbee-4202-9298-81261067abbd",
    },
    search: {
      query: "search term",
      results: {
        items: [
          {
            document: {
              id: "123",
              index: "products",
            },
          },
        ],
        total_results: 10,
      },
      sort: {
        name: "relevance",
      },
      search_application: "website",
    },
    document: {
      id: "123",
      index: "products",
    },
  },
});
response = client.search_application.post_behavioral_analytics_event(
  collection_name: "my_analytics_collection",
  event_type: "search_click",
  body: {
    "session": {
      "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
    },
    "user": {
      "id": "5f26f01a-bbee-4202-9298-81261067abbd"
    },
    "search": {
      "query": "search term",
      "results": {
        "items": [
          {
            "document": {
              "id": "123",
              "index": "products"
            }
          }
        ],
        "total_results": 10
      },
      "sort": {
        "name": "relevance"
      },
      "search_application": "website"
    },
    "document": {
      "id": "123",
      "index": "products"
    }
  }
)
$resp = $client->searchApplication()->postBehavioralAnalyticsEvent([
    "collection_name" => "my_analytics_collection",
    "event_type" => "search_click",
    "body" => [
        "session" => [
            "id" => "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9",
        ],
        "user" => [
            "id" => "5f26f01a-bbee-4202-9298-81261067abbd",
        ],
        "search" => [
            "query" => "search term",
            "results" => [
                "items" => array(
                    [
                        "document" => [
                            "id" => "123",
                            "index" => "products",
                        ],
                    ],
                ),
                "total_results" => 10,
            ],
            "sort" => [
                "name" => "relevance",
            ],
            "search_application" => "website",
        ],
        "document" => [
            "id" => "123",
            "index" => "products",
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"session":{"id":"1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"},"user":{"id":"5f26f01a-bbee-4202-9298-81261067abbd"},"search":{"query":"search term","results":{"items":[{"document":{"id":"123","index":"products"}}],"total_results":10},"sort":{"name":"relevance"},"search_application":"website"},"document":{"id":"123","index":"products"}}' "$ELASTICSEARCH_URL/_application/analytics/my_analytics_collection/event/search_click"
client.searchApplication().postBehavioralAnalyticsEvent(p -> p
    .collectionName("my_analytics_collection")
    .eventType(EventType.SearchClick)
    .payload(JsonData.fromJson("{\"session\":{\"id\":\"1797ca95-91c9-4e2e-b1bd-9c38e6f386a9\"},\"user\":{\"id\":\"5f26f01a-bbee-4202-9298-81261067abbd\"},\"search\":{\"query\":\"search term\",\"results\":{\"items\":[{\"document\":{\"id\":\"123\",\"index\":\"products\"}}],\"total_results\":10},\"sort\":{\"name\":\"relevance\"},\"search_application\":\"website\"},\"document\":{\"id\":\"123\",\"index\":\"products\"}}"))
);
Request example
Run `POST _application/analytics/my_analytics_collection/event/search_click` to send a `search_click` event to an analytics collection called `my_analytics_collection`.
{
  "session": {
    "id": "1797ca95-91c9-4e2e-b1bd-9c38e6f386a9"
  },
  "user": {
    "id": "5f26f01a-bbee-4202-9298-81261067abbd"
  },
  "search":{
    "query": "search term",
    "results": {
      "items": [
        {
          "document": {
            "id": "123",
            "index": "products"
          }
        }
      ],
      "total_results": 10
    },
    "sort": {
      "name": "relevance"
    },
    "search_application": "website"
  },
  "document":{
    "id": "123",
    "index": "products"
  }
}

Get aliases Generally available

GET /_cat/aliases/{name}

All methods and paths for this operation:

GET /_cat/aliases

GET /_cat/aliases/{name}

Get the cluster's index aliases, including filter and routing information. This API does not return data stream aliases.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.
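
For example, an application would call the get alias API instead; a minimal sketch with the Python client (the alias name is illustrative and client is assumed to be an authenticated elasticsearch-py client):

# Structured alternative to the cat output.
resp = client.indices.get_alias(name="alias1")
for index, data in resp.items():
    print(index, data["aliases"])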

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • name string | array[string]

    A comma-separated list of aliases to retrieve. Supports wildcards (*). To retrieve all aliases, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards (see the example after this parameter list).

    Supported values include:

    • alias (or a): The name of the alias.
    • index (or i, idx): The name of the index the alias points to.
    • filter (or f, fi): The filter applied to the alias.
    • routing.index (or ri, routingIndex): Index routing value for the alias.
    • routing.search (or rs, routingSearch): Search routing value for the alias.
    • is_write_index (or w, isWriteIndex): Indicates if the index is the write index for the alias.

    Values are alias, a, index, i, idx, filter, f, fi, routing.index, ri, routingIndex, routing.search, rs, routingSearch, is_write_index, w, or isWriteIndex.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
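
For example, a sketch that combines h and s with the Python client (the column selection and sort order are illustrative):

# Show only selected columns, sorted by alias name in descending order.
resp = client.cat.aliases(
    h="alias,index,is_write_index",
    s="alias:desc",
    v=True,
    format="json",
)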

Responses

  • 200 application/json
    • alias string

      alias name

    • index string

      index alias points to

    • filter string

      filter

    • routing.index string

      index routing

    • is_write_index string

      write index

GET /_cat/aliases/{name}
GET _cat/aliases?format=json&v=true
resp = client.cat.aliases(
    format="json",
    v=True,
)
const response = await client.cat.aliases({
  format: "json",
  v: "true",
});
response = client.cat.aliases(
  format: "json",
  v: "true"
)
$resp = $client->cat()->aliases([
    "format" => "json",
    "v" => "true",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/aliases?format=json&v=true"
client.cat().aliases();
Response examples (200)
A successful response from `GET _cat/aliases?format=json&v=true`. This response shows that `alias2` has a configured filter, and `alias3` and `alias4` have routing configurations.
[
  {
    "alias": "alias1",
    "index": "test1",
    "filter": "-",
    "routing.index": "-",
    "routing.search": "-",
    "is_write_index": "true"
  },
  {
    "alias": "alias1",
    "index": "test1",
    "filter": "*",
    "routing.index": "-",
    "routing.search": "-",
    "is_write_index": "true"
  },
  {
    "alias": "alias3",
    "index": "test1",
    "filter": "-",
    "routing.index": "1",
    "routing.search": "1",
    "is_write_index": "true"
  },
  {
    "alias": "alias4",
    "index": "test1",
    "filter": "-",
    "routing.index": "2",
    "routing.search": "1,2",
    "is_write_index": "true"
  }
]




Get component templates Generally available; Added in 5.1.0

GET /_cat/component_templates/{name}

All methods and paths for this operation:

GET /_cat/component_templates

GET /_cat/component_templates/{name}

Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.
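
For example, an application would call the get component template API instead; a minimal sketch with the Python client (the template pattern is illustrative):

resp = client.cluster.get_component_template(name="my-template-*")
for tmpl in resp["component_templates"]:
    print(tmpl["name"])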

Required authorization

  • Cluster privileges: monitor

Path parameters

  • name string Required

    The name of the component template. It accepts wildcard expressions. If it is omitted, all component templates are returned.

Query parameters

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • name (or n): The name of the component template.
    • version (or v): The version number of the component template.
    • alias_count (or a): The number of aliases in the component template.
    • mapping_count (or m): The number of mappings in the component template.
    • settings_count (or s): The number of settings in the component template.
    • metadata_count (or me): The number of metadata entries in the component template.
    • included_in (or i): The index templates that include this component template.

    Values are name, n, version, v, alias_count, a, mapping_count, m, settings_count, s, metadata_count, me, included_in, or i.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    The period to wait for a connection to the master node.


Responses

  • 200 application/json
    • name string Required
    • version string | null Required

    • alias_count string Required
    • mapping_count string Required
    • settings_count string Required
    • metadata_count string Required
    • included_in string Required
GET /_cat/component_templates/{name}
GET _cat/component_templates/my-template-*?v=true&s=name&format=json
resp = client.cat.component_templates(
    name="my-template-*",
    v=True,
    s="name",
    format="json",
)
const response = await client.cat.componentTemplates({
  name: "my-template-*",
  v: "true",
  s: "name",
  format: "json",
});
response = client.cat.component_templates(
  name: "my-template-*",
  v: "true",
  s: "name",
  format: "json"
)
$resp = $client->cat()->componentTemplates([
    "name" => "my-template-*",
    "v" => "true",
    "s" => "name",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/component_templates/my-template-*?v=true&s=name&format=json"
client.cat().componentTemplates();
Response examples (200)
A successful response from `GET _cat/component_templates/my-template-*?v=true&s=name&format=json`.
[
  {
    "name": "my-template-1",
    "version": "null",
    "alias_count": "0",
    "mapping_count": "0",
    "settings_count": "1",
    "metadata_count": "0",
    "included_in": "[my-index-template]"
  },
  {
    "name": "my-template-2",
    "version": null,
    "alias_count": "0",
    "mapping_count": "3",
    "settings_count": "0",
    "metadata_count": "0",
    "included_in": "[my-index-template]"
  }
]

Get a document count Generally available

GET /_cat/count/{index}

All methods and paths for this operation:

GET /_cat/count

GET /_cat/count/{index}

Get quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.
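
For example, an application would call the count API instead; a minimal sketch with the Python client (the index name is illustrative):

resp = client.count(index="my-index-000001")
print(resp["count"])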

Required authorization

  • Index privileges: read

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. It supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • epoch (or t, time): The Unix epoch time in seconds since 1970-01-01 00:00:00.
    • timestamp (or ts, hms, hhmmss): The current time in HH:MM:SS format.
    • count (or dc, docs.count, docsCount): The document count in the cluster or index.

    Values are epoch, t, time, timestamp, ts, hms, hhmmss, count, dc, docs.count, or docsCount.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • epoch number | string

      seconds since 1970-01-01 00:00:00

    • timestamp string

      time in HH:MM:SS

    • count string

      the document count

GET /_cat/count/{index}
GET /_cat/count/my-index-000001?v=true&format=json
resp = client.cat.count(
    index="my-index-000001",
    v=True,
    format="json",
)
const response = await client.cat.count({
  index: "my-index-000001",
  v: "true",
  format: "json",
});
response = client.cat.count(
  index: "my-index-000001",
  v: "true",
  format: "json"
)
$resp = $client->cat()->count([
    "index" => "my-index-000001",
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/count/my-index-000001?v=true&format=json"
client.cat().count();
Response examples (200)
A successful response from `GET /_cat/count/my-index-000001?v=true&format=json`. It retrieves the document count for the `my-index-000001` data stream or index.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "120"
  }
]
A successful response from `GET /_cat/count?v=true&format=json`. It retrieves the document count for all data streams and indices in the cluster.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "121"
  }
]

Get field data cache information Generally available

GET /_cat/fielddata/{fields}

All methods and paths for this operation:

GET /_cat/fielddata

GET /_cat/fielddata/{fields}

Get the amount of heap memory currently used by the field data cache on every data node in the cluster.

IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes stats API.
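
For example, an application would read the same numbers from the nodes stats API; a rough sketch with the Python client (the body field name is illustrative):

resp = client.nodes.stats(metric="indices", index_metric="fielddata", fields="body")
for node_id, node in resp["nodes"].items():
    print(node["name"], node["indices"]["fielddata"]["memory_size_in_bytes"])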

Required authorization

  • Cluster privileges: monitor

Path parameters

  • fields string | array[string] Required

    Comma-separated list of fields used to limit returned information. To retrieve all fields, omit this parameter.

Query parameters

  • fields string | array[string]

    Comma-separated list of fields used to limit returned information.

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • id: The node ID.
    • host (or h): The host name of the node.
    • ip: The IP address of the node.
    • node (or n): The node name.
    • field (or f): The field name.
    • size (or s): The field data usage.

    Values are id, host, h, ip, node, n, field, f, size, or s.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • id string

      node id

    • host string

      host name

    • ip string

      ip address

    • node string

      node name

    • field string

      field name

    • size string

      field data usage

GET /_cat/fielddata/{fields}
GET /_cat/fielddata?v=true&fields=body&format=json
resp = client.cat.fielddata(
    v=True,
    fields="body",
    format="json",
)
const response = await client.cat.fielddata({
  v: "true",
  fields: "body",
  format: "json",
});
response = client.cat.fielddata(
  v: "true",
  fields: "body",
  format: "json"
)
$resp = $client->cat()->fielddata([
    "v" => "true",
    "fields" => "body",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/fielddata?v=true&fields=body&format=json"
client.cat().fielddata();
Response examples (200)
A successful response from `GET /_cat/fielddata?v=true&fields=body&format=json`. You can specify an individual field in the request body or URL path. This example retrieves heap memory size information for the `body` field.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  }
]
A successful response from `GET /_cat/fielddata/body,soul?v=true&format=json`. You can specify a comma-separated list of fields in the request body or URL path. This example retrieves heap memory size information for the `body` and `soul` fields. To get information for all fields, run `GET /_cat/fielddata?v=true`.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "1127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  },
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "soul",
    "size": "480b"
  }
]

Get the cluster health status Generally available

GET /_cat/health

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the cluster health API.

This API is often used to check malfunctioning clusters. To help you track cluster health alongside log files and alerting systems, the API returns timestamps in two formats: HH:MM:SS, which is human-readable but includes no date information; and Unix epoch time, which is machine-sortable and includes date information. The latter format is useful for cluster recoveries that take multiple days.

You can use the cat health API to verify cluster health across multiple nodes. You can also use the API to track the recovery of a large cluster over a longer period of time.
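
For example, an application would call the cluster health API instead; a minimal sketch with the Python client:

resp = client.cluster.health()
print(resp["status"], resp["active_shards_percent_as_number"])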

Required authorization

  • Cluster privileges: monitor

Query parameters

  • ts boolean

    If true, returns HH:MM:SS and Unix epoch timestamps.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • epoch number | string

      seconds since 1970-01-01 00:00:00

    • timestamp string

      time in HH:MM:SS

    • cluster string

      cluster name

    • status string

      health status

    • node.total string

      total number of nodes

    • node.data string

      number of nodes that can store data

    • shards string

      total number of shards

    • pri string

      number of primary shards

    • relo string

      number of relocating shards

    • init string

      number of initializing shards

    • unassign.pri string

      number of unassigned primary shards

    • unassign string

      number of unassigned shards

    • pending_tasks string

      number of pending tasks

    • max_task_wait_time string

      wait time of longest task pending

    • active_shards_percent string

      percentage of active shards

GET /_cat/health
GET /_cat/health?v=true&format=json
resp = client.cat.health(
    v=True,
    format="json",
)
const response = await client.cat.health({
  v: "true",
  format: "json",
});
response = client.cat.health(
  v: "true",
  format: "json"
)
$resp = $client->cat()->health([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/health?v=true&format=json"
client.cat().health();
Response examples (200)
A successful response from `GET /_cat/health?v=true&format=json`. By default, it returns `HH:MM:SS` and Unix epoch timestamps.
[
  {
    "epoch": "1475871424",
    "timestamp": "16:17:04",
    "cluster": "elasticsearch",
    "status": "green",
    "node.total": "1",
    "node.data": "1",
    "shards": "1",
    "pri": "1",
    "relo": "0",
    "init": "0",
    "unassign": "0",
    "unassign.pri": "0",
    "pending_tasks": "0",
    "max_task_wait_time": "-",
    "active_shards_percent": "100.0%"
  }
]




Get index information Generally available

GET /_cat/indices/{index}

All methods and paths for this operation:

GET /_cat/indices

GET /_cat/indices/{index}

Get high-level information about indices in a cluster, including backing indices for data streams.

Use this request to get the following information for each index in a cluster:

  • shard count
  • document count
  • deleted document count
  • primary store size
  • total store size of all shards, including shard replicas

These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs.

CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.
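
For example, a sketch of the application-facing alternatives with the Python client (the index names are illustrative):

# Accurate document count, excluding hidden nested documents.
print(client.count(index="my-index-000001")["count"])

# Structured per-index metrics from the index stats API.
stats = client.indices.stats(index="my-index-*")
print(stats["_all"]["primaries"]["docs"]["count"])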

Required authorization

  • Index privileges: monitor
  • Cluster privileges: monitor

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • health string

    The health status used to limit returned indices. By default, the response includes indices of any health status.

    Supported values include:

    • green (or GREEN): All shards are assigned.
    • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
    • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
    • unknown
    • unavailable

    Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

  • include_unloaded_segments boolean

    If true, the response includes information from segments that are not loaded into memory.

  • pri boolean

    If true, the response only includes information from primary shards.

  • master_timeout string

    Period to wait for a connection to the master node.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • health string

      current health status

    • status string

      open/close status

    • index string

      index name

    • uuid string

      index uuid

    • pri string

      number of primary shards

    • rep string

      number of replica shards

    • docs.count string | null

      available docs

    • docs.deleted string | null

      deleted docs

    • creation.date string

      index creation date (millisecond value)

    • creation.date.string string

      index creation date (as string)

    • store.size string | null

      store size of primaries & replicas

    • pri.store.size string | null

      store size of primaries

    • dataset.size string | null

      total size of dataset (including the cache for partially mounted indices)

    • completion.size string

      size of completion

    • pri.completion.size string

      size of completion

    • fielddata.memory_size string

      used fielddata cache

    • pri.fielddata.memory_size string

      used fielddata cache

    • fielddata.evictions string

      fielddata evictions

    • pri.fielddata.evictions string

      fielddata evictions

    • query_cache.memory_size string

      used query cache

    • pri.query_cache.memory_size string

      used query cache

    • query_cache.evictions string

      query cache evictions

    • pri.query_cache.evictions string

      query cache evictions

    • request_cache.memory_size string

      used request cache

    • pri.request_cache.memory_size string

      used request cache

    • request_cache.evictions string

      request cache evictions

    • pri.request_cache.evictions string

      request cache evictions

    • request_cache.hit_count string

      request cache hit count

    • pri.request_cache.hit_count string

      request cache hit count

    • request_cache.miss_count string

      request cache miss count

    • pri.request_cache.miss_count string

      request cache miss count

    • flush.total string

      number of flushes

    • pri.flush.total string

      number of flushes

    • flush.total_time string

      time spent in flush

    • pri.flush.total_time string

      time spent in flush

    • get.current string

      number of current get ops

    • pri.get.current string

      number of current get ops

    • get.time string

      time spent in get

    • pri.get.time string

      time spent in get

    • get.total string

      number of get ops

    • pri.get.total string

      number of get ops

    • get.exists_time string

      time spent in successful gets

    • pri.get.exists_time string

      time spent in successful gets

    • get.exists_total string

      number of successful gets

    • pri.get.exists_total string

      number of successful gets

    • get.missing_time string

      time spent in failed gets

    • pri.get.missing_time string

      time spent in failed gets

    • get.missing_total string

      number of failed gets

    • pri.get.missing_total string

      number of failed gets

    • indexing.delete_current string

      number of current deletions

    • pri.indexing.delete_current string

      number of current deletions

    • indexing.delete_time string

      time spent in deletions

    • pri.indexing.delete_time string

      time spent in deletions

    • indexing.delete_total string

      number of delete ops

    • pri.indexing.delete_total string

      number of delete ops

    • indexing.index_current string

      number of current indexing ops

    • pri.indexing.index_current string

      number of current indexing ops

    • indexing.index_time string

      time spent in indexing

    • pri.indexing.index_time string

      time spent in indexing

    • indexing.index_total string

      number of indexing ops

    • pri.indexing.index_total string

      number of indexing ops

    • indexing.index_failed string

      number of failed indexing ops

    • pri.indexing.index_failed string

      number of failed indexing ops

    • merges.current string

      number of current merges

    • pri.merges.current string

      number of current merges

    • merges.current_docs string

      number of current merging docs

    • pri.merges.current_docs string

      number of current merging docs

    • merges.current_size string

      size of current merges

    • pri.merges.current_size string

      size of current merges

    • merges.total string

      number of completed merge ops

    • pri.merges.total string

      number of completed merge ops

    • merges.total_docs string

      docs merged

    • pri.merges.total_docs string

      docs merged

    • merges.total_size string

      size merged

    • pri.merges.total_size string

      size merged

    • merges.total_time string

      time spent in merges

    • pri.merges.total_time string

      time spent in merges

    • refresh.total string

      total refreshes

    • pri.refresh.total string

      total refreshes

    • refresh.time string

      time spent in refreshes

    • pri.refresh.time string

      time spent in refreshes

    • refresh.external_total string

      total external refreshes

    • pri.refresh.external_total string

      total external refreshes

    • refresh.external_time string

      time spent in external refreshes

    • pri.refresh.external_time string

      time spent in external refreshes

    • refresh.listeners string

      number of pending refresh listeners

    • pri.refresh.listeners string

      number of pending refresh listeners

    • search.fetch_current string

      current fetch phase ops

    • pri.search.fetch_current string

      current fetch phase ops

    • search.fetch_time string

      time spent in fetch phase

    • pri.search.fetch_time string

      time spent in fetch phase

    • search.fetch_total string

      total fetch ops

    • pri.search.fetch_total string

      total fetch ops

    • search.open_contexts string

      open search contexts

    • pri.search.open_contexts string

      open search contexts

    • search.query_current string

      current query phase ops

    • pri.search.query_current string

      current query phase ops

    • search.query_time string

      time spent in query phase

    • pri.search.query_time string

      time spent in query phase

    • search.query_total string

      total query phase ops

    • pri.search.query_total string

      total query phase ops

    • search.scroll_current string

      open scroll contexts

    • pri.search.scroll_current string

      open scroll contexts

    • search.scroll_time string

      time scroll contexts held open

    • pri.search.scroll_time string

      time scroll contexts held open

    • search.scroll_total string

      completed scroll contexts

    • pri.search.scroll_total string

      completed scroll contexts

    • segments.count string

      number of segments

    • pri.segments.count string

      number of segments

    • segments.memory string

      memory used by segments

    • pri.segments.memory string

      memory used by segments

    • segments.index_writer_memory string

      memory used by index writer

    • pri.segments.index_writer_memory string

      memory used by index writer

    • segments.version_map_memory string

      memory used by version map

    • pri.segments.version_map_memory string

      memory used by version map

    • segments.fixed_bitset_memory string

      memory used by fixed bit sets for nested object field types and export type filters for types referred in _parent fields

    • pri.segments.fixed_bitset_memory string

      memory used by fixed bit sets for nested object field types and export type filters for types referred in _parent fields

    • warmer.current string

      current warmer ops

    • pri.warmer.current string

      current warmer ops

    • warmer.total string

      total warmer ops

    • pri.warmer.total string

      total warmer ops

    • warmer.total_time string

      time spent in warmers

    • pri.warmer.total_time string

      time spent in warmers

    • suggest.current string

      number of current suggest ops

    • pri.suggest.current string

      number of current suggest ops

    • suggest.time string

      time spent in suggest

    • pri.suggest.time string

      time spent in suggest

    • suggest.total string

      number of suggest ops

    • pri.suggest.total string

      number of suggest ops

    • memory.total string

      total used memory

    • pri.memory.total string

      total used memory

    • search.throttled string

      indicates if the index is search throttled

    • bulk.total_operations string

      number of bulk shard ops

    • pri.bulk.total_operations string

      number of bulk shard ops

    • bulk.total_time string

      time spent in shard bulk

    • pri.bulk.total_time string

      time spent in shard bulk

    • bulk.total_size_in_bytes string

      total size in bytes of shard bulk

    • pri.bulk.total_size_in_bytes string

      total size in bytes of shard bulk

    • bulk.avg_time string

      average time spent in shard bulk

    • pri.bulk.avg_time string

      average time spent in shard bulk

    • bulk.avg_size_in_bytes string

      average size in bytes of shard bulk

    • pri.bulk.avg_size_in_bytes string

      average size in bytes of shard bulk

GET /_cat/indices/{index}
GET /_cat/indices/my-index-*?v=true&s=index&format=json
resp = client.cat.indices(
    index="my-index-*",
    v=True,
    s="index",
    format="json",
)
const response = await client.cat.indices({
  index: "my-index-*",
  v: "true",
  s: "index",
  format: "json",
});
response = client.cat.indices(
  index: "my-index-*",
  v: "true",
  s: "index",
  format: "json"
)
$resp = $client->cat()->indices([
    "index" => "my-index-*",
    "v" => "true",
    "s" => "index",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/indices/my-index-*?v=true&s=index&format=json"
client.cat().indices();
Response examples (200)
A successful response from `GET /_cat/indices/my-index-*?v=true&s=index&format=json`.
[
  {
    "health": "yellow",
    "status": "open",
    "index": "my-index-000001",
    "uuid": "u8FNjxh8Rfy_awN11oDKYQ",
    "pri": "1",
    "rep": "1",
    "docs.count": "1200",
    "docs.deleted": "0",
    "store.size": "88.1kb",
    "pri.store.size": "88.1kb",
    "dataset.size": "88.1kb"
  },
  {
    "health": "green",
    "status": "open",
    "index": "my-index-000002",
    "uuid": "nYFWZEO7TUiOjLQXBaYJpA ",
    "pri": "1",
    "rep": "0",
    "docs.count": "0",
    "docs.deleted": "0",
    "store.size": "260b",
    "pri.store.size": "260b",
    "dataset.size": "260b"
  }
]

Get master node information Generally available

GET /_cat/master

Get information about the master node, including the ID, bound IP address, and name.

IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
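
For example, an application would resolve the elected master through the nodes info API; a rough sketch with the Python client:

# "_master" is a node filter that selects the currently elected master node.
resp = client.nodes.info(node_id="_master")
for node_id, node in resp["nodes"].items():
    print(node_id, node["name"], node["ip"])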

Required authorization

  • Cluster privileges: monitor

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.


Responses

  • 200 application/json
    • id string

      node id

    • host string

      host name

    • ip string

      ip address

    • node string

      node name

GET /_cat/master
GET /_cat/master?v=true&format=json
resp = client.cat.master(
    v=True,
    format="json",
)
const response = await client.cat.master({
  v: "true",
  format: "json",
});
response = client.cat.master(
  v: "true",
  format: "json"
)
$resp = $client->cat()->master([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/master?v=true&format=json"
client.cat().master();
Response examples (200)
A successful response from `GET /_cat/master?v=true&format=json`.
[
  {
    "id": "YzWoH_2BT-6UjVGDyPdqYg",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "YzWoH_2"
  }
]

Get data frame analytics jobs Generally available; Added in 7.7.0

GET /_cat/ml/data_frame/analytics/{id}

All methods and paths for this operation:

GET /_cat/ml/data_frame/analytics

GET /_cat/ml/data_frame/analytics/{id}

Get configuration and usage information about data frame analytics jobs.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.
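
For example, an application would call the get data frame analytics statistics API instead; a minimal sketch with the Python client (the job ID is illustrative):

resp = client.ml.get_data_frame_analytics_stats(id="classifier_job_1")
for job in resp["data_frame_analytics"]:
    print(job["id"], job["state"])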

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • id string Required

    The ID of the data frame analytics to fetch

Query parameters

  • allow_no_match boolean

    Whether to ignore a wildcard expression that matches no configurations. (This includes the _all string or when no configurations have been specified.)

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • assignment_explanation (or ae): Contains messages relating to the selection of a node.
    • create_time (or ct, createTime): The time when the data frame analytics job was created.
    • description (or d): A description of a job.
    • dest_index (or di, destIndex): Name of the destination index.
    • failure_reason (or fr, failureReason): Contains messages about the reason why a data frame analytics job failed.
    • id: Identifier for the data frame analytics job.
    • model_memory_limit (or mml, modelMemoryLimit): The approximate maximum amount of memory resources that are permitted for the data frame analytics job.
    • node.address (or na, nodeAddress): The network address of the node that the data frame analytics job is assigned to.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that the data frame analytics job is assigned to.
    • node.id (or ni, nodeId): The unique identifier of the node that the data frame analytics job is assigned to.
    • node.name (or nn, nodeName): The name of the node that the data frame analytics job is assigned to.
    • progress (or p): The progress report of the data frame analytics job by phase.
    • source_index (or si, sourceIndex): Name of the source index.
    • state (or s): Current state of the data frame analytics job.
    • type (or t): The type of analysis that the data frame analytics job performs.
    • version (or v): The Elasticsearch version number in which the data frame analytics job was created.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): Contains messages relating to the selection of a node.
    • create_time (or ct, createTime): The time when the data frame analytics job was created.
    • description (or d): A description of a job.
    • dest_index (or di, destIndex): Name of the destination index.
    • failure_reason (or fr, failureReason): Contains messages about the reason why a data frame analytics job failed.
    • id: Identifier for the data frame analytics job.
    • model_memory_limit (or mml, modelMemoryLimit): The approximate maximum amount of memory resources that are permitted for the data frame analytics job.
    • node.address (or na, nodeAddress): The network address of the node that the data frame analytics job is assigned to.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that the data frame analytics job is assigned to.
    • node.id (or ni, nodeId): The unique identifier of the node that the data frame analytics job is assigned to.
    • node.name (or nn, nodeName): The name of the node that the data frame analytics job is assigned to.
    • progress (or p): The progress report of the data frame analytics job by phase.
    • source_index (or si, sourceIndex): Name of the source index.
    • state (or s): Current state of the data frame analytics job.
    • type (or t): The type of analysis that the data frame analytics job performs.
    • version (or v): The Elasticsearch version number in which the data frame analytics job was created.

Responses

  • 200 application/json
    • id string

      The identifier for the job.

    • type string

      The type of analysis that the job performs.

    • create_time string

      The time when the job was created.

    • version string

      The version of Elasticsearch when the job was created.

    • source_index string

      The name of the source index.

    • dest_index string

      The name of the destination index.

    • description string

      A description of the job.

    • model_memory_limit string

      The approximate maximum amount of memory resources that are permitted for the job.

    • state string

      The current status of the job.

    • failure_reason string

      Messages about the reason why the job failed.

    • progress string

      The progress report for the job by phase.

    • assignment_explanation string

      Messages related to the selection of a node.

    • node.id string

      The unique identifier of the assigned node.

    • node.name string

      The name of the assigned node.

    • node.ephemeral_id string

      The ephemeral identifier of the assigned node.

    • node.address string

      The network address of the assigned node.

GET /_cat/ml/data_frame/analytics/{id}
GET _cat/ml/data_frame/analytics?v=true&format=json
resp = client.cat.ml_data_frame_analytics(
    v=True,
    format="json",
)
const response = await client.cat.mlDataFrameAnalytics({
  v: "true",
  format: "json",
});
response = client.cat.ml_data_frame_analytics(
  v: "true",
  format: "json"
)
$resp = $client->cat()->mlDataFrameAnalytics([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/data_frame/analytics?v=true&format=json"
client.cat().mlDataFrameAnalytics();
Response examples (200)
A successful response from `GET _cat/ml/data_frame/analytics?v=true&format=json`.
[
  {
    "id": "classifier_job_1",
    "type": "classification",
    "create_time": "2020-02-12T11:49:09.594Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_2",
    "type": "classification",
    "create_time": "2020-02-12T11:49:14.479Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_3",
    "type": "classification",
    "create_time": "2020-02-12T11:49:16.928Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_4",
    "type": "classification",
    "create_time": "2020-02-12T11:49:19.127Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_5",
    "type": "classification",
    "create_time": "2020-02-12T11:49:21.349Z",
    "state": "stopped"
  }
]

Get datafeeds Generally available; Added in 7.7.0

GET /_cat/ml/datafeeds/{datafeed_id}

All methods and paths for this operation:

GET /_cat/ml/datafeeds

GET /_cat/ml/datafeeds/{datafeed_id}

Get configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.
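
For example, an application would call the get datafeed statistics API instead; a minimal sketch with the Python client (the datafeed ID is illustrative):

resp = client.ml.get_datafeed_stats(datafeed_id="datafeed-example")
for datafeed in resp["datafeeds"]:
    print(datafeed["datafeed_id"], datafeed["state"])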

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • datafeed_id string Required

    A numerical character string that uniquely identifies the datafeed.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no datafeeds that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      The datafeed identifier.

    • state string

      The status of the datafeed.

      Values are started, stopped, starting, or stopping.

    • assignment_explanation string

      For started datafeeds only, contains messages relating to the selection of a node.

    • buckets.count string

      The number of buckets processed.

    • search.count string

      The number of searches run by the datafeed.

    • search.time string

      The total time the datafeed spent searching, in milliseconds.

    • search.bucket_avg string

      The average search time per bucket, in milliseconds.

    • search.exp_avg_hour string

      The exponential average search time per hour, in milliseconds.

    • node.id string

      The unique identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.name string

      The name of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.ephemeral_id string

      The ephemeral identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.address string

      The network address of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

GET /_cat/ml/datafeeds/{datafeed_id}
GET _cat/ml/datafeeds?v=true&format=json
resp = client.cat.ml_datafeeds(
    v=True,
    format="json",
)
const response = await client.cat.mlDatafeeds({
  v: "true",
  format: "json",
});
response = client.cat.ml_datafeeds(
  v: "true",
  format: "json"
)
$resp = $client->cat()->mlDatafeeds([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/datafeeds?v=true&format=json"
client.cat().mlDatafeeds();
Response examples (200)
A successful response from `GET _cat/ml/datafeeds?v=true&format=json`.
[
  {
    "id": "datafeed-high_sum_total_sales",
    "state": "stopped",
    "buckets.count": "743",
    "search.count": "7"
  },
  {
    "id": "datafeed-low_request_rate",
    "state": "stopped",
    "buckets.count": "1457",
    "search.count": "3"
  },
  {
    "id": "datafeed-response_code_rates",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  },
  {
    "id": "datafeed-url_scanning",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  }
]
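
As a rough illustration of the allow_no_match and column options described above, the sketch below uses the Python client to request the state, bucket, and search-count columns for datafeeds matching a wildcard pattern, tolerating the case where nothing matches. It assumes the client forwards these query parameters as keyword arguments; the connection details and datafeed pattern are placeholders.

from elasticsearch import Elasticsearch

# Hypothetical connection details; substitute your own endpoint and API key.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# With allow_no_match=True an empty array is returned when the wildcard
# matches nothing; with False the API would answer with a 404 instead.
resp = client.cat.ml_datafeeds(
    datafeed_id="datafeed-low_*",
    h="id,state,buckets.count,search.count",
    allow_no_match=True,
    format="json",
)
print(resp.body)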












Get node information Generally available

GET /_cat/nodes

Get information about the nodes in a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • full_id boolean | string

    If true, return the full node ID. If false, return the shortened node ID.

  • include_unloaded_segments boolean

    If true, the response includes information from segments that are not loaded into memory.

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • build (or b): The Elasticsearch build hash. For example: 5c03844.
    • completion.size (or cs, completionSize): The size of completion. For example: 0b.
    • cpu: The percentage of recent system CPU used.
    • disk.avail (or d, disk, diskAvail): The available disk space. For example: 198.4gb.
    • disk.total (or dt, diskTotal): The total disk space. For example: 458.3gb.
    • disk.used (or du, diskUsed): The used disk space. For example: 259.8gb.
    • disk.used_percent (or dup, diskUsedPercent): The percentage of disk space used.
    • fielddata.evictions (or fe, fielddataEvictions): The number of fielddata cache evictions.
    • fielddata.memory_size (or fm, fielddataMemory): The fielddata cache memory used. For example: 0b.
    • file_desc.current (or fdc, fileDescriptorCurrent): The number of file descriptors used.
    • file_desc.max (or fdm, fileDescriptorMax): The maximum number of file descriptors.
    • file_desc.percent (or fdp, fileDescriptorPercent): The percentage of file descriptors used.
    • flush.total (or ft, flushTotal): The number of flushes.
    • flush.total_time (or ftt, flushTotalTime): The amount of time spent in flush.
    • get.current (or gc, getCurrent): The number of current get operations.
    • get.exists_time (or geti, getExistsTime): The time spent in successful get operations. For example: 14ms.
    • get.exists_total (or geto, getExistsTotal): The number of successful get operations.
    • get.missing_time (or gmti, getMissingTime): The time spent in failed get operations. For example: 0s.
    • get.missing_total (or gmto, getMissingTotal): The number of failed get operations.
    • get.time (or gti, getTime): The amount of time spent in get operations. For example: 14ms.
    • get.total (or gto, getTotal): The number of get operations.
    • heap.current (or hc, heapCurrent): The used heap size. For example: 311.2mb.
    • heap.max (or hm, heapMax): The total heap size. For example: 4gb.
    • heap.percent (or hp, heapPercent): The used percentage of total allocated Elasticsearch JVM heap. This value reflects only the Elasticsearch process running within the operating system and is the most direct indicator of its JVM, heap, or memory resource performance.
    • http_address (or http): The bound HTTP address.
    • id (or nodeId): The identifier for the node.
    • indexing.delete_current (or idc, indexingDeleteCurrent): The number of current deletion operations.
    • indexing.delete_time (or idti, indexingDeleteTime): The time spent in deletion operations. For example: 2ms.
    • indexing.delete_total (or idto, indexingDeleteTotal): The number of deletion operations.
    • indexing.index_current (or iic, indexingIndexCurrent): The number of current indexing operations.
    • indexing.index_failed (or iif, indexingIndexFailed): The number of failed indexing operations.
    • indexing.index_failed_due_to_version_conflict (or iifvc, indexingIndexFailedDueToVersionConflict): The number of indexing operations that failed due to version conflict.
    • indexing.index_time (or iiti, indexingIndexTime): The time spent in indexing operations. For example: 134ms.
    • indexing.index_total (or iito, indexingIndexTotal): The number of indexing operations.
    • ip (or i): The IP address.
    • jdk (or j): The Java version. For example: 1.8.0.
    • load_1m (or l): The most recent load average. For example: 0.22.
    • load_5m (or l): The load average for the last five minutes. For example: 0.78.
    • load_15m (or l): The load average for the last fifteen minutes. For example: 1.24.
    • mappings.total_count (or mtc, mappingsTotalCount): The number of mappings, including runtime and object fields.
    • mappings.total_estimated_overhead_in_bytes (or mteo, mappingsTotalEstimatedOverheadInBytes): The estimated heap overhead, in bytes, of mappings on this node, which allows for 1KiB of heap for every mapped field.
    • master (or m): Indicates whether the node is the elected master node. Returned values include * (elected master) and - (not elected master).
    • merges.current (or mc, mergesCurrent): The number of current merge operations.
    • merges.current_docs (or mcd, mergesCurrentDocs): The number of current merging documents.
    • merges.current_size (or mcs, mergesCurrentSize): The size of current merges. For example: 0b.
    • merges.total (or mt, mergesTotal): The number of completed merge operations.
    • merges.total_docs (or mtd, mergesTotalDocs): The number of merged documents.
    • merges.total_size (or mts, mergesTotalSize): The total size of merges. For example: 0b.
    • merges.total_time (or mtt, mergesTotalTime): The time spent merging documents. For example: 0s.
    • name (or n): The node name.
    • node.role (or r, role, nodeRole): The roles of the node. Returned values include c (cold node), d (data node), f (frozen node), h (hot node), i (ingest node), l (machine learning node), m (master-eligible node), r (remote cluster client node), s (content node), t (transform node), v (voting-only node), w (warm node), and - (coordinating node only). For example, dim indicates a master-eligible data and ingest node.
    • pid (or p): The process identifier.
    • port (or po): The bound transport port number.
    • query_cache.memory_size (or qcm, queryCacheMemory): The used query cache memory. For example: 0b.
    • query_cache.evictions (or qce, queryCacheEvictions): The number of query cache evictions.
    • query_cache.hit_count (or qchc, queryCacheHitCount): The query cache hit count.
    • query_cache.miss_count (or qcmc, queryCacheMissCount): The query cache miss count.
    • ram.current (or rc, ramCurrent): The used total memory. For example: 513.4mb.
    • ram.max (or rm, ramMax): The total memory. For example: 2.9gb.
    • ram.percent (or rp, ramPercent): The used percentage of the total operating system memory. This reflects all processes running on the operating system instead of only Elasticsearch and is not guaranteed to correlate to its performance.
    • refresh.total (or rto, refreshTotal): The number of refresh operations.
    • refresh.time (or rti, refreshTime): The time spent in refresh operations. For example: 91ms.
    • request_cache.memory_size (or rcm, requestCacheMemory): The used request cache memory. For example: 0b.
    • request_cache.evictions (or rce, requestCacheEvictions): The number of request cache evictions.
    • request_cache.hit_count (or rchc, requestCacheHitCount): The request cache hit count.
    • request_cache.miss_count (or rcmc, requestCacheMissCount): The request cache miss count.
    • script.compilations (or scrcc, scriptCompilations): The number of total script compilations.
    • script.cache_evictions (or scrce, scriptCacheEvictions): The number of total compiled scripts evicted from cache.
    • search.fetch_current (or sfc, searchFetchCurrent): The number of current fetch phase operations.
    • search.fetch_time (or sfti, searchFetchTime): The time spent in fetch phase. For example: 37ms.
    • search.fetch_total (or sfto, searchFetchTotal): The number of fetch operations.
    • search.open_contexts (or so, searchOpenContexts): The number of open search contexts.
    • search.query_current (or sqc, searchQueryCurrent): The number of current query phase operations.
    • search.query_time (or sqti, searchQueryTime): The time spent in query phase. For example: 43ms.
    • search.query_total (or sqto, searchQueryTotal): The number of query operations.
    • search.scroll_current (or scc, searchScrollCurrent): The number of open scroll contexts.
    • search.scroll_time (or scti, searchScrollTime): The amount of time scroll contexts were held open. For example: 2m.
    • search.scroll_total (or scto, searchScrollTotal): The number of completed scroll contexts.
    • segments.count (or sc, segmentsCount): The number of segments.
    • segments.fixed_bitset_memory (or sfbm, fixedBitsetMemory): The memory used by fixed bit sets for nested object field types and type filters for types referred in join fields. For example: 1.0kb.
    • segments.index_writer_memory (or siwm, segmentsIndexWriterMemory): The memory used by the index writer. For example: 18mb.
    • segments.memory (or sm, segmentsMemory): The memory used by segments. For example: 1.4kb.
    • segments.version_map_memory (or svmm, segmentsVersionMapMemory): The memory used by the version map. For example: 1.0kb.
    • shard_stats.total_count (or sstc, shards, shardStatsTotalCount): The number of shards assigned.
    • suggest.current (or suc, suggestCurrent): The number of current suggest operations.
    • suggest.time (or suti, suggestTime): The time spent in suggest operations.
    • suggest.total (or suto, suggestTotal): The number of suggest operations.
    • uptime (or u): The amount of node uptime. For example: 17.3m.
    • version (or v): The Elasticsearch version. For example: 9.0.0.
  • s string | array[string]

    A comma-separated list of column names or aliases that determines the sort order. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • master_timeout string

    The period to wait for a connection to the master node.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      The unique node identifier.

    • pid string

      The process identifier.

    • ip string

      The IP address.

    • port string

      The bound transport port.

    • http_address string

      The bound HTTP address.

    • version string

      The Elasticsearch version.

    • flavor string

      The Elasticsearch distribution flavor.

    • type string

      The Elasticsearch distribution type.

    • build string

      The Elasticsearch build hash.

    • jdk string

      The Java version.

    • disk.total number | string

      The total disk space.

    • disk.used number | string

      The used disk space.

    • disk.avail number | string

      The available disk space.

    • disk.used_percent string | number

      The used disk space percentage.

    • heap.current string

      The used heap.

    • heap.percent string | number

      The used heap ratio.

    • heap.max string

      The maximum configured heap.

    • ram.current string

      The used machine memory.

    • ram.percent string | number

      The used machine memory ratio.

    • ram.max string

      The total machine memory.

    • file_desc.current string

      The used file descriptors.

    • file_desc.percent string | number

      The used file descriptor ratio.

    • file_desc.max string

      The maximum number of file descriptors.

    • cpu string

      The recent system CPU usage as a percentage.

    • load_1m string

      The load average for the most recent minute.

    • load_5m string

      The load average for the last five minutes.

    • load_15m string

      The load average for the last fifteen minutes.

    • uptime string

      The node uptime.

    • node.role string

      The roles of the node. Returned values include c (cold node), d (data node), f (frozen node), h (hot node), i (ingest node), l (machine learning node), m (master-eligible node), r (remote cluster client node), s (content node), t (transform node), v (voting-only node), w (warm node), and - (coordinating node only).

    • master string

      Indicates whether the node is the elected master node. Returned values include * (elected master) and - (not elected master).

    • name string

      The node name.

    • completion.size string

      The size of completion.

    • fielddata.memory_size string

      The used fielddata cache.

    • fielddata.evictions string

      The fielddata evictions.

    • query_cache.memory_size string

      The used query cache.

    • query_cache.evictions string

      The query cache evictions.

    • query_cache.hit_count string

      The query cache hit counts.

    • query_cache.miss_count string

      The query cache miss counts.

    • request_cache.memory_size string

      The used request cache.

    • request_cache.evictions string

      The request cache evictions.

    • request_cache.hit_count string

      The request cache hit counts.

    • request_cache.miss_count string

      The request cache miss counts.

    • flush.total string

      The number of flushes.

    • flush.total_time string

      The time spent in flush.

    • get.current string

      The number of current get ops.

    • get.time string

      The time spent in get.

    • get.total string

      The number of get ops.

    • get.exists_time string

      The time spent in successful gets.

    • get.exists_total string

      The number of successful get operations.

    • get.missing_time string

      The time spent in failed gets.

    • get.missing_total string

      The number of failed gets.

    • indexing.delete_current string

      The number of current deletions.

    • indexing.delete_time string

      The time spent in deletions.

    • indexing.delete_total string

      The number of delete operations.

    • indexing.index_current string

      The number of current indexing operations.

    • indexing.index_time string

      The time spent in indexing.

    • indexing.index_total string

      The number of indexing operations.

    • indexing.index_failed string

      The number of failed indexing operations.

    • merges.current string

      The number of current merges.

    • merges.current_docs string

      The number of current merging docs.

    • merges.current_size string

      The size of current merges.

    • merges.total string

      The number of completed merge operations.

    • merges.total_docs string

      The docs merged.

    • merges.total_size string

      The size merged.

    • merges.total_time string

      The time spent in merges.

    • refresh.total string

      The total refreshes.

    • refresh.time string

      The time spent in refreshes.

    • refresh.external_total string

      The total external refreshes.

    • refresh.external_time string

      The time spent in external refreshes.

    • refresh.listeners string

      The number of pending refresh listeners.

    • script.compilations string

      The total script compilations.

    • script.cache_evictions string

      The total compiled scripts evicted from the cache.

    • script.compilation_limit_triggered string

      The script cache compilation limit triggered.

    • search.fetch_current string

      The current fetch phase operations.

    • search.fetch_time string

      The time spent in fetch phase.

    • search.fetch_total string

      The total fetch operations.

    • search.open_contexts string

      The open search contexts.

    • search.query_current string

      The current query phase operations.

    • search.query_time string

      The time spent in query phase.

    • search.query_total string

      The total query phase operations.

    • search.scroll_current string

      The open scroll contexts.

    • search.scroll_time string

      The time scroll contexts held open.

    • search.scroll_total string

      The completed scroll contexts.

    • segments.count string

      The number of segments.

    • segments.memory string

      The memory used by segments.

    • segments.index_writer_memory string

      The memory used by the index writer.

    • segments.version_map_memory string

      The memory used by the version map.

    • segments.fixed_bitset_memory string

      The memory used by fixed bit sets for nested object field types and type filters for types referred in join fields.

    • suggest.current string

      The number of current suggest operations.

    • suggest.time string

      The time spent in suggest.

    • suggest.total string

      The number of suggest operations.

    • bulk.total_operations string

      The number of bulk shard operations.

    • bulk.total_time string

      The time spent in shard bulk.

    • bulk.total_size_in_bytes string

      The total size in bytes of shard bulk.

    • bulk.avg_time string

      The average time spent in shard bulk.

    • bulk.avg_size_in_bytes string

      The average size in bytes of shard bulk.

GET /_cat/nodes
GET /_cat/nodes?v=true&h=id,ip,port,v,m&format=json
resp = client.cat.nodes(
    v=True,
    h="id,ip,port,v,m",
    format="json",
)
const response = await client.cat.nodes({
  v: "true",
  h: "id,ip,port,v,m",
  format: "json",
});
response = client.cat.nodes(
  v: "true",
  h: "id,ip,port,v,m",
  format: "json"
)
$resp = $client->cat()->nodes([
    "v" => "true",
    "h" => "id,ip,port,v,m",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/nodes?v=true&h=id,ip,port,v,m&format=json"
client.cat().nodes();
Response examples (200)
A successful response from `GET /_cat/nodes?v=true&format=json`. The `ip`, `heap.percent`, `ram.percent`, `cpu`, and `load_*` columns provide the IP addresses and performance information of each node. The `node.role`, `master`, and `name` columns provide information useful for monitoring an entire cluster, particularly large ones.
[
  {
    "ip": "127.0.0.1",
    "heap.percent": "65",
    "ram.percent": "99",
    "cpu": "42",
    "load_1m": "3.07",
    "load_5m": null,
    "load_15m": null,
    "node.role": "cdfhilmrstw",
    "master": "*",
    "name": "mJw06l1"
  }
]
A successful response from `GET /_cat/nodes?v=true&h=id,ip,port,v,m&format=json`. It returns the `id`, `ip`, `port`, `v` (version), and `m` (master) columns.
[
  {
    "id": "veJR",
    "ip": "127.0.0.1",
    "port": "59938",
    "v": "9.0.0",
    "m": "*"
  }
]
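
Because h accepts simple wildcards and s accepts an :asc or :desc suffix, the node listing can be trimmed to resource columns and ordered by load. Here is a minimal sketch with the Python client, assuming it forwards h and s as keyword arguments the same way it does v and format; the connection details are placeholders.

from elasticsearch import Elasticsearch

# Hypothetical connection details; substitute your own endpoint and API key.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# Show name, role, heap and RAM usage, CPU, and load, busiest nodes first.
resp = client.cat.nodes(
    h="name,node.role,heap.percent,ram.percent,cpu,load_1m",
    s="cpu:desc",
    format="json",
)
print(resp.body)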

Get pending task information Generally available

GET /_cat/pending_tasks

Get information about cluster-level changes that have not yet taken effect. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the pending cluster tasks API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • insertOrder string

      The task insertion order.

    • timeInQueue string

      Indicates how long the task has been in queue.

    • priority string

      The task priority.

    • source string

      The task source.

GET /_cat/pending_tasks
GET /_cat/pending_tasks?v=true&h=insertOrder,timeInQueue,priority,source&format=json
resp = client.cat.pending_tasks(
    v=True,
    h="insertOrder,timeInQueue,priority,source",
    format="json",
)
const response = await client.cat.pendingTasks({
  v: "true",
  h: "insertOrder,timeInQueue,priority,source",
  format: "json",
});
response = client.cat.pending_tasks(
  v: "true",
  h: "insertOrder,timeInQueue,priority,source",
  format: "json"
)
$resp = $client->cat()->pendingTasks([
    "v" => "true",
    "h" => "insertOrder,timeInQueue,priority,source",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/pending_tasks?v=true&h=insertOrder,timeInQueue,priority,source&format=json"
client.cat().pendingTasks();
Response examples (200)
A successful response from `GET /_cat/pending_tasks?v=true&h=insertOrder,timeInQueue,priority,source&format=json`.
[
  { "insertOrder": "1685", "timeInQueue": "855ms", "priority": "HIGH", "source": "update-mapping [foo][t]"},
    { "insertOrder": "1686", "timeInQueue": "843ms", "priority": "HIGH", "source": "update-mapping [foo][t]"},
    { "insertOrder": "1693", "timeInQueue": "753ms", "priority": "HIGH", "source": "refresh-mapping [foo][[t]]"},
    { "insertOrder": "1688", "timeInQueue": "816ms", "priority": "HIGH", "source": "update-mapping [foo][t]"},
    { "insertOrder": "1689", "timeInQueue": "802ms", "priority": "HIGH", "source": "update-mapping [foo][t]"},
    { "insertOrder": "1690", "timeInQueue": "787ms", "priority": "HIGH", "source": "update-mapping [foo][t]"},
    { "insertOrder": "1691", "timeInQueue": "773ms", "priority": "HIGH", "source": "update-mapping [foo][t]"}
]
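
For a quick view of whether cluster-state updates are backing up, the listing can be limited to a few columns and sorted. A minimal sketch with the Python client, assuming it forwards h and s as keyword arguments; the connection details are placeholders, and note that sorting is lexical on the rendered column values.

from elasticsearch import Elasticsearch

# Hypothetical connection details; substitute your own endpoint and API key.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

resp = client.cat.pending_tasks(
    h="insertOrder,timeInQueue,priority,source",
    s="priority",  # lexical sort on the priority column
    format="json",
)
for task in resp.body:
    print(task["priority"], task["timeInQueue"], task["source"])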




Get shard recovery information Generally available

GET /_cat/recovery/{index}

All methods and paths for this operation:

GET /_cat/recovery

GET /_cat/recovery/{index}

Get information about ongoing and completed shard recoveries. Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or syncing a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing. For data streams, the API returns information about the stream’s backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index recovery API.

Required authorization

  • Index privileges: monitor
  • Cluster privileges: monitor

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • active_only boolean

    If true, the response only includes ongoing shard recoveries.

  • detailed boolean

    If true, the response includes detailed information about shard recoveries.

  • index string | array[string]

    Comma-separated list or wildcard expression of index names to limit the returned information

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • index (or i, idx): The name of the index.
    • shard (or s, sh): The name of the shard.
    • time (or t, ti, primaryOrReplica): The recovery time elapsed.
    • type: The type of recovery, from a peer or a snapshot.
    • stage (or st): The stage of the recovery. Returned values are: INIT, INDEX: recovery of Lucene files, either reusing local ones or copying new ones, VERIFY_INDEX: potentially running check index, TRANSLOG: starting up the engine, replaying the translog, FINALIZE: performing final task after all translog ops have been done, DONE
    • source_host (or shost): The host address the index is moving from.
    • source_node (or snode): The node name the index is moving from.
    • target_host (or thost): The host address the index is moving to.
    • target_node (or tnode): The node name the index is moving to.
    • repository (or rep): The name of the repository being used. If not relevant, 'n/a'.
    • snapshot (or snap): The name of the snapshot being used. If not relevant, 'n/a'.
    • files (or f): The total number of files to recover.
    • files_recovered (or fr): The number of files currently recovered.
    • files_percent (or fp): The percentage of files currently recovered.
    • files_total (or tf): The total number of files.
    • bytes (or b): The total number of bytes to recover.
    • bytes_recovered (or br): Total number of bytes currently recovered.
    • bytes_percent (or bp): The percentage of bytes currently recovered.
    • bytes_total (or tb): The total number of bytes.
    • translog_ops (or to): The total number of translog ops to recover.
    • translog_ops_recovered (or tor): The total number of translog ops currently recovered.
    • translog_ops_percent (or top): The percentage of translog ops currently recovered.
    • start_time (or start): The start time of the recovery operation.
    • start_time_millis (or start_millis): The start time of the recovery operation in epoch milliseconds.
    • stop_time (or stop): The end time of the recovery operation. If ongoing, '1970-01-01T00:00:00.000Z'.
    • stop_time_millis (or stop_millis): The end time of the recovery operation in epoch milliseconds. If ongoing, '0'.

    Values are index, i, idx, shard, s, sh, time, t, ti, primaryOrReplica, type, stage, st, source_host, shost, source_node, snode, target_host, thost, target_node, tnode, repository, snapshot, snap, files, f, files_recovered, fr, files_percent, fp, files_total, tf, bytes, b, bytes_recovered, br, bytes_percent, bp, bytes_total, tb, translog_ops, to, translog_ops_recovered, tor, translog_ops_percent, top, start_time, start, start_time_millis, start_millis, stop_time, stop, stop_time_millis, or stop_millis.

  • s string | array[string]

    A comma-separated list of column names or aliases that determines the sort order. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • index string

      The index name.

    • shard string

      The shard name.

    • start_time string | number

      The recovery start time.

    • start_time_millis number

      The recovery start time, in epoch milliseconds.

    • stop_time string | number

      The recovery stop time.

    • stop_time_millis number

      The recovery stop time, in epoch milliseconds.

    • time string

      The recovery time.

      External documentation
    • type string

      The recovery type.

    • stage string

      The recovery stage.

    • source_host string

      The source host.

    • source_node string

      The source node name.

    • target_host string

      The target host.

    • target_node string

      The target node name.

    • repository string

      The repository name.

    • snapshot string

      The snapshot name.

    • files string

      The number of files to recover.

    • files_recovered string

      The files recovered.

    • files_percent string | number

      The ratio of files recovered.

    • files_total string

      The total number of files.

    • bytes string

      The number of bytes to recover.

    • bytes_recovered string

      The bytes recovered.

    • bytes_percent string | number

      The ratio of bytes recovered.

    • bytes_total string

      The total number of bytes.

    • translog_ops string

      The number of translog operations to recover.

    • translog_ops_recovered string

      The translog operations recovered.

    • translog_ops_percent string | number

      The ratio of translog operations recovered.

GET /_cat/recovery/{index}
GET _cat/recovery?v=true&format=json
resp = client.cat.recovery(
    v=True,
    format="json",
)
const response = await client.cat.recovery({
  v: "true",
  format: "json",
});
response = client.cat.recovery(
  v: "true",
  format: "json"
)
$resp = $client->cat()->recovery([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/recovery?v=true&format=json"
client.cat().recovery();
Response examples (200)
A successful response from `GET _cat/recovery?v=true&format=json`. In this example, the source and target nodes are the same because the recovery type is `store`, meaning they were read from local storage on node start.
[
  {
    "index": "my-index-000001 ",
    "shard": "0",
    "time": "13ms",
    "type": "store",
    "stage": "done",
    "source_host": "n/a",
    "source_node": "n/a",
    "target_host": "127.0.0.1",
    "target_node": "node-0",
    "repository": "n/a",
    "snapshot": "n/a",
    "files": "0",
    "files_recovered": "0",
    "files_percent": "100.0%",
    "files_total": "13",
    "bytes": "0b",
    "bytes_recovered": "0b",
    "bytes_percent": "100.0%",
    "bytes_total": "9928b",
    "translog_ops": "0",
    "translog_ops_recovered": "0",
    "translog_ops_percent": "100.0%"
  }
]
A successful response from `GET _cat/recovery?v=true&h=i,s,t,ty,st,shost,thost,f,fp,b,bp&format=json`. You can retrieve information about an ongoing recovery for example when you increase the replica count of an index and bring another node online to host the replicas. In this example, the recovery type is `peer`, meaning the shard recovered from another node. The `files` and `bytes` are real-time measurements.
[
  {
    "i": "my-index-000001",
    "s": "0",
    "t": "1252ms",
    "ty": "peer",
    "st": "done",
    "shost": "192.168.1.1",
    "thost": "192.168.1.1",
    "f": "0",
    "fp": "100.0%",
    "b": "0b",
    "bp": "100.0%",
  }
]
A successful response from `GET _cat/recovery?v=true&h=i,s,t,ty,st,rep,snap,f,fp,b,bp&format=json`. You can restore backups of an index using the snapshot and restore API. You can use the cat recovery API to get information about a snapshot recovery.
[
  {
    "i": "my-index-000001",
    "s": "0",
    "t": "1978ms",
    "ty": "snapshot",
    "st": "done",
    "rep": "my-repo",
    "snap": "snap-1",
    "f": "79",
    "fp": "8.0%",
    "b": "12086",
    "bp": "9.0%"
  }
]
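
To watch only recoveries that are still running, the active_only flag can be combined with the index path parameter and a reduced column set. A rough sketch with the Python client, assuming it forwards these query parameters as keyword arguments; the index name and connection details are placeholders.

from elasticsearch import Elasticsearch

# Hypothetical connection details; substitute your own endpoint and API key.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# Only ongoing recoveries of my-index-000001, with progress columns.
resp = client.cat.recovery(
    index="my-index-000001",
    active_only=True,
    h="index,shard,time,type,stage,files_percent,bytes_percent",
    format="json",
)
print(resp.body)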

Get snapshot repository information Generally available; Added in 2.1.0

GET /_cat/repositories

Get a list of snapshot repositories for a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot repository API.

Required authorization

  • Cluster privileges: monitor_snapshot

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      The unique repository identifier.

    • type string

      The repository type.

GET /_cat/repositories
GET /_cat/repositories?v=true&format=json
resp = client.cat.repositories(
    v=True,
    format="json",
)
const response = await client.cat.repositories({
  v: "true",
  format: "json",
});
response = client.cat.repositories(
  v: "true",
  format: "json"
)
$resp = $client->cat()->repositories([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/repositories?v=true&format=json"
client.cat().repositories();
Response examples (200)
A successful response from `GET /_cat/repositories?v=true&format=json`.
[
  {
    "id": "repo1",
    "type": "fs"
  },
  {
    "id": "repo2",
    "type": "s3"
  }
]
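
A short sketch of the same call through the Python client, sorting the listing by repository type and then by identifier. It assumes the client forwards s as a keyword argument; the connection details are placeholders.

from elasticsearch import Elasticsearch

# Hypothetical connection details; substitute your own endpoint and API key.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

resp = client.cat.repositories(
    s="type,id",  # sort by repository type, then by identifier
    format="json",
)
for repo in resp.body:
    print(repo["type"], repo["id"])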

Get segment information Generally available

GET /_cat/segments/{index}

All methods and paths for this operation:

GET /_cat/segments

GET /_cat/segments/{index}

Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index segments API.

Required authorization

  • Index privileges: monitor
  • Cluster privileges: monitor

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • index (or i, idx): The name of the index.
    • shard (or s, sh): The name of the shard.
    • prirep (or p, pr, primaryOrReplica): The shard type. Returned values are 'primary' or 'replica'.
    • ip: IP address of the segment’s shard, such as '127.0.1.1'.
    • segment: The name of the segment, such as '_0'. The segment name is derived from the segment generation and used internally to create file names in the directory of the shard.
    • generation: Generation number, such as '0'. Elasticsearch increments this generation number for each segment written. Elasticsearch then uses this number to derive the segment name.
    • docs.count: The number of documents as reported by Lucene. This excludes deleted documents and counts any nested documents separately from their parents. It also excludes documents which were indexed recently and do not yet belong to a segment.
    • docs.deleted: The number of deleted documents as reported by Lucene, which may be higher or lower than the number of delete operations you have performed. This number excludes deletes that were performed recently and do not yet belong to a segment. Deleted documents are cleaned up by the automatic merge process if it makes sense to do so. Also, Elasticsearch creates extra deleted documents to internally track the recent history of operations on a shard.
    • size: The disk space used by the segment, such as '50kb'.
    • size.memory: The bytes of segment data stored in memory for efficient search, such as '1264'. A value of '-1' indicates Elasticsearch was unable to compute this number.
    • committed: If 'true', the segment is synced to disk. Segments that are synced can survive a hard reboot. If 'false', the data from uncommitted segments is also stored in the transaction log so that Elasticsearch is able to replay changes on the next start.
    • searchable: If 'true', the segment is searchable. If 'false', the segment has most likely been written to disk but needs a refresh to be searchable.
    • version: The version of Lucene used to write the segment.
    • compound: If 'true', the segment is stored in a compound file. This means Lucene merged all files from the segment in a single file to save file descriptors.
    • id: The ID of the node, such as 'k0zy'.

    Values are index, i, idx, shard, s, sh, prirep, p, pr, primaryOrReplica, ip, segment, generation, docs.count, docs.deleted, size, size.memory, committed, searchable, version, compound, or id.

  • s string | array[string]

    A comma-separated list of column names or aliases that determines the sort order. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • index string

      The index name.

    • shard string

      The shard name.

    • prirep string

      The shard type: primary or replica.

    • ip string

      The IP address of the node where it lives.

    • id string

      The unique identifier of the node where it lives.

    • segment string

      The segment name, which is derived from the segment generation and used internally to create file names in the directory of the shard.

    • generation string

      The segment generation number. Elasticsearch increments this generation number for each segment written then uses this number to derive the segment name.

    • docs.count string

      The number of documents in the segment. This excludes deleted documents and counts any nested documents separately from their parents. It also excludes documents which were indexed recently and do not yet belong to a segment.

    • docs.deleted string

      The number of deleted documents in the segment, which might be higher or lower than the number of delete operations you have performed. This number excludes deletes that were performed recently and do not yet belong to a segment. Deleted documents are cleaned up by the automatic merge process if it makes sense to do so. Also, Elasticsearch creates extra deleted documents to internally track the recent history of operations on a shard.

    • size number | string

      The segment size in bytes.

    • size.memory number | string

      The segment memory in bytes. A value of -1 indicates Elasticsearch was unable to compute this number.

    • committed string

      If true, the segment is synced to disk. Segments that are synced can survive a hard reboot. If false, the data from uncommitted segments is also stored in the transaction log so that Elasticsearch is able to replay changes on the next start.

    • searchable string

      If true, the segment is searchable. If false, the segment has most likely been written to disk but needs a refresh to be searchable.

    • version string

      The version of Lucene used to write the segment.

    • compound string

      If true, the segment is stored in a compound file. This means Lucene merged all files from the segment in a single file to save file descriptors.

GET /_cat/segments/{index}
GET /_cat/segments?v=true&format=json
resp = client.cat.segments(
    v=True,
    format="json",
)
const response = await client.cat.segments({
  v: "true",
  format: "json",
});
response = client.cat.segments(
  v: "true",
  format: "json"
)
$resp = $client->cat()->segments([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/segments?v=true&format=json"
client.cat().segments();
Response examples (200)
A successful response from `GET /_cat/segments?v=true&format=json`.
[
  {
    "index": "test",
    "shard": "0",
    "prirep": "p",
    "ip": "127.0.0.1",
    "segment": "_0",
    "generation": "0",
    "docs.count": "1",
    "docs.deleted": "0",
    "size": "3kb",
    "size.memory": "0",
    "committed": "false",
    "searchable": "true",
    "version": "9.12.0",
    "compound": "true"
  },
  {
    "index": "test1",
    "shard": "0",
    "prirep": "p",
    "ip": "127.0.0.1",
    "segment": "_0",
    "generation": "0",
    "docs.count": "1",
    "docs.deleted": "0",
    "size": "3kb",
    "size.memory": "0",
    "committed": "false",
    "searchable": "true",
    "version": "9.12.0",
    "compound": "true"
  }
]
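
The same information can be narrowed to a single index and a handful of columns, which keeps the output manageable on shards with many segments. A minimal sketch with the Python client, assuming it forwards the index path parameter plus h and s as keyword arguments; the index name and connection details are placeholders.

from elasticsearch import Elasticsearch

# Hypothetical connection details; substitute your own endpoint and API key.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

resp = client.cat.segments(
    index="my-index-000001",
    h="index,shard,prirep,segment,docs.count,size,committed,searchable",
    s="size:desc",
    format="json",
)
print(resp.body)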

Get shard information Generally available

GET /_cat/shards/{index}

All methods and paths for this operation:

GET /_cat/shards

GET /_cat/shards/{index}

Get information about the shards in a cluster. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.

Required authorization

  • Index privileges: monitor
  • Cluster privileges: monitor

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

    Supported values include:

    • completion.size (or cs, completionSize): Size of completion. For example: 0b.
    • dataset.size: Disk space used by the shard’s dataset, which may or may not be the size on disk, but includes space used by the shard on object storage. Reported as a size value for example: 5kb.
    • dense_vector.value_count (or dvc, denseVectorCount): Number of indexed dense vectors.
    • docs (or d, dc): Number of documents in shard, for example: 25.
    • fielddata.evictions (or fe, fielddataEvictions): Fielddata cache evictions, for example: 0.
    • fielddata.memory_size (or fm, fielddataMemory): Used fielddata cache memory, for example: 0b.
    • flush.total (or ft, flushTotal): Number of flushes, for example: 1.
    • flush.total_time (or ftt, flushTotalTime): Time spent in flush, for example: 1.
    • get.current (or gc, getCurrent): Number of current get operations, for example: 0.
    • get.exists_time (or geti, getExistsTime): Time spent in successful gets, for example: 14ms.
    • get.exists_total (or geto, getExistsTotal): Number of successful get operations, for example: 2.
    • get.missing_time (or gmti, getMissingTime): Time spent in failed gets, for example: 0s.
    • get.missing_total (or gmto, getMissingTotal): Number of failed get operations, for example: 1.
    • get.time (or gti, getTime): Time spent in get, for example: 14ms.
    • get.total (or gto, getTotal): Number of get operations, for example: 2.
    • id: ID of the node, for example: k0zy.
    • index (or i, idx): Name of the index.
    • indexing.delete_current (or idc, indexingDeleteCurrent): Number of current deletion operations, for example: 0.
    • indexing.delete_time (or idti, indexingDeleteTime): Time spent in deletions, for example: 2ms.
    • indexing.delete_total (or idto, indexingDeleteTotal): Number of deletion operations, for example: 2.
    • indexing.index_current (or iic, indexingIndexCurrent): Number of current indexing operations, for example: 0.
    • indexing.index_failed_due_to_version_conflict (or iifvc, indexingIndexFailedDueToVersionConflict): Number of failed indexing operations due to version conflict, for example: 0.
    • indexing.index_failed (or iif, indexingIndexFailed): Number of failed indexing operations, for example: 0.
    • indexing.index_time (or iiti, indexingIndexTime): Time spent in indexing, for example: 134ms.
    • indexing.index_total (or iito, indexingIndexTotal): Number of indexing operations, for example: 1.
    • ip: IP address of the node, for example: 127.0.1.1.
    • merges.current (or mc, mergesCurrent): Number of current merge operations, for example: 0.
    • merges.current_docs (or mcd, mergesCurrentDocs): Number of current merging documents, for example: 0.
    • merges.current_size (or mcs, mergesCurrentSize): Size of current merges, for example: 0b.
    • merges.total (or mt, mergesTotal): Number of completed merge operations, for example: 0.
    • merges.total_docs (or mtd, mergesTotalDocs): Number of merged documents, for example: 0.
    • merges.total_size (or mts, mergesTotalSize): Total size of merges, for example: 0b.
    • merges.total_time (or mtt, mergesTotalTime): Time spent merging documents, for example: 0s.
    • node (or n): Node name, for example: I8hydUG.
    • prirep (or p, pr, primaryOrReplica): Shard type. Returned values are primary or replica.
    • query_cache.evictions (or qce, queryCacheEvictions): Query cache evictions, for example: 0.
    • query_cache.memory_size (or qcm, queryCacheMemory): Used query cache memory, for example: 0b.
    • recoverysource.type (or rs): Type of recovery source.
    • refresh.time (or rti, refreshTime): Time spent in refreshes, for example: 91ms.
    • refresh.total (or rto, refreshTotal): Number of refreshes, for example: 16.
    • search.fetch_current (or sfc, searchFetchCurrent): Current fetch phase operations, for example: 0.
    • search.fetch_time (or sfti, searchFetchTime): Time spent in fetch phase, for example: 37ms.
    • search.fetch_total (or sfto, searchFetchTotal): Number of fetch operations, for example: 7.
    • search.open_contexts (or so, searchOpenContexts): Open search contexts, for example: 0.
    • search.query_current (or sqc, searchQueryCurrent): Current query phase operations, for example: 0.
    • search.query_time (or sqti, searchQueryTime): Time spent in query phase, for example: 43ms.
    • search.query_total (or sqto, searchQueryTotal): Number of query operations, for example: 9.
    • search.scroll_current (or scc, searchScrollCurrent): Open scroll contexts, for example: 2.
    • search.scroll_time (or scti, searchScrollTime): Time scroll contexts held open, for example: 2m.
    • search.scroll_total (or scto, searchScrollTotal): Completed scroll contexts, for example: 1.
    • segments.count (or sc, segmentsCount): Number of segments, for example: 4.
    • segments.fixed_bitset_memory (or sfbm, fixedBitsetMemory): Memory used by fixed bit sets for nested object field types and type filters for types referred in join fields, for example: 1.0kb.
    • segments.index_writer_memory (or siwm, segmentsIndexWriterMemory): Memory used by index writer, for example: 18mb.
    • segments.memory (or sm, segmentsMemory): Memory used by segments, for example: 1.4kb.
    • segments.version_map_memory (or svmm, segmentsVersionMapMemory): Memory used by version map, for example: 1.0kb.
    • seq_no.global_checkpoint (or sqg, globalCheckpoint): Global checkpoint.
    • seq_no.local_checkpoint (or sql, localCheckpoint): Local checkpoint.
    • seq_no.max (or sqm, maxSeqNo): Maximum sequence number.
    • shard (or s, sh): Name of the shard.
    • sparse_vector.value_count (or svc, sparseVectorCount): Number of indexed sparse vectors.
    • state (or st): State of the shard. Returned values are:
      • INITIALIZING: The shard is recovering from a peer shard or gateway.
      • RELOCATING: The shard is relocating.
      • STARTED: The shard has started.
      • UNASSIGNED: The shard is not assigned to any node.
    • store (or sto): Disk space used by the shard, for example: 5kb.
    • suggest.current (or suc, suggestCurrent): Number of current suggest operations, for example: 0.
    • suggest.time (or suti, suggestTime): Time spent in suggest, for example: 0.
    • suggest.total (or suto, suggestTotal): Number of suggest operations, for example: 0.
    • sync_id: Sync ID of the shard.
    • unassigned.at (or ua): Time at which the shard became unassigned in Coordinated Universal Time (UTC).
    • unassigned.details (or ud): Details about why the shard became unassigned. This does not explain why the shard is currently unassigned. To understand why a shard is not assigned, use the Cluster allocation explain API.
    • unassigned.for (or uf): Time at which the shard was requested to be unassigned in Coordinated Universal Time (UTC).
    • unassigned.reason (or ur): Indicates the reason for the last change to the state of this unassigned shard. This does not explain why the shard is currently unassigned. To understand why a shard is not assigned, use the Cluster allocation explain API. Returned values include:

      • ALLOCATION_FAILED: Unassigned as a result of a failed allocation of the shard.
      • CLUSTER_RECOVERED: Unassigned as a result of a full cluster recovery.
      • DANGLING_INDEX_IMPORTED: Unassigned as a result of importing a dangling index.
      • EXISTING_INDEX_RESTORED: Unassigned as a result of restoring into a closed index.
      • FORCED_EMPTY_PRIMARY: The shard’s allocation was last modified by forcing an empty primary using the Cluster reroute API.
      • INDEX_CLOSED: Unassigned because the index was closed.
      • INDEX_CREATED: Unassigned as a result of an API creation of an index.
      • INDEX_REOPENED: Unassigned as a result of opening a closed index.
      • MANUAL_ALLOCATION: The shard’s allocation was last modified by the Cluster reroute API.
      • NEW_INDEX_RESTORED: Unassigned as a result of restoring into a new index.
      • NODE_LEFT: Unassigned as a result of the node hosting it leaving the cluster.
      • NODE_RESTARTING: Similar to NODE_LEFT, except that the node was registered as restarting using the Node shutdown API.
      • PRIMARY_FAILED: The shard was initializing as a replica, but the primary shard failed before the initialization completed.
      • REALLOCATED_REPLICA: A better replica location is identified and causes the existing replica allocation to be cancelled.
      • REINITIALIZED: When a shard moves from started back to initializing.
      • REPLICA_ADDED: Unassigned as a result of explicit addition of a replica.
      • REROUTE_CANCELLED: Unassigned as a result of explicit cancel reroute command.
  • s string | array[string]

    A comma-separated list of column names or aliases that determines the sort order. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • master_timeout string

    The period to wait for a connection to the master node.

    External documentation

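The column and sort options above can be combined to focus on problem shards. Here is a minimal sketch, assuming the Python client exposes cat.shards in the same way as the other cat endpoints in this document and forwards h and s as keyword arguments; the connection details are placeholders, and sorting on the state column is lexical.

from elasticsearch import Elasticsearch

# Hypothetical connection details; substitute your own endpoint and API key.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# List shard state and unassignment details, grouping states together.
resp = client.cat.shards(
    h="index,shard,prirep,state,node,unassigned.reason,unassigned.at",
    s="state",
    format="json",
)
for shard in resp.body:
    if shard["state"] == "UNASSIGNED":
        print(shard["index"], shard["shard"], shard["unassigned.reason"])
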
Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • index string

      The index name.

    • shard string

      The shard name.

    • prirep string

      The shard type: primary or replica.

    • state string

      The shard state. Returned values include: INITIALIZING: The shard is recovering from a peer shard or gateway. RELOCATING: The shard is relocating. STARTED: The shard has started. UNASSIGNED: The shard is not assigned to any node.

    • docs string | null

      The number of documents in the shard.

    • store string | null

      The disk space used by the shard.

    • dataset string | null

      The total size of the dataset (including the cache for partially mounted indices).

    • ip string | null

      The IP address of the node.

    • id string

      The unique identifier for the node.

    • node string | null

      The name of the node.

    • sync_id string

      The sync identifier.

    • unassigned.reason string

      The reason for the last change to the state of an unassigned shard. It does not explain why the shard is currently unassigned; use the cluster allocation explain API for that information. Returned values include: ALLOCATION_FAILED: Unassigned as a result of a failed allocation of the shard. CLUSTER_RECOVERED: Unassigned as a result of a full cluster recovery. DANGLING_INDEX_IMPORTED: Unassigned as a result of importing a dangling index. EXISTING_INDEX_RESTORED: Unassigned as a result of restoring into a closed index. FORCED_EMPTY_PRIMARY: The shard’s allocation was last modified by forcing an empty primary using the cluster reroute API. INDEX_CLOSED: Unassigned because the index was closed. INDEX_CREATED: Unassigned as a result of an API creation of an index. INDEX_REOPENED: Unassigned as a result of opening a closed index. MANUAL_ALLOCATION: The shard’s allocation was last modified by the cluster reroute API. NEW_INDEX_RESTORED: Unassigned as a result of restoring into a new index. NODE_LEFT: Unassigned as a result of the node hosting it leaving the cluster. NODE_RESTARTING: Similar to NODE_LEFT, except that the node was registered as restarting using the node shutdown API. PRIMARY_FAILED: The shard was initializing as a replica, but the primary shard failed before the initialization completed. REALLOCATED_REPLICA: A better replica location is identified and causes the existing replica allocation to be cancelled. REINITIALIZED: When a shard moves from started back to initializing. REPLICA_ADDED: Unassigned as a result of explicit addition of a replica. REROUTE_CANCELLED: Unassigned as a result of explicit cancel reroute command.

    • unassigned.at string

      The time at which the shard became unassigned in Coordinated Universal Time (UTC).

    • unassigned.for string

      The time at which the shard was requested to be unassigned in Coordinated Universal Time (UTC).

    • unassigned.details string

      Additional details as to why the shard became unassigned. It does not explain why the shard is not assigned; use the cluster allocation explain API for that information.

    • recoverysource.type string

      The type of recovery source.

    • completion.size string

      The size of completion.

    • fielddata.memory_size string

      The used fielddata cache memory.

    • fielddata.evictions string

      The fielddata cache evictions.

    • query_cache.memory_size string

      The used query cache memory.

    • query_cache.evictions string

      The query cache evictions.

    • flush.total string

      The number of flushes.

    • flush.total_time string

      The time spent in flush.

    • get.current string

      The number of current get operations.

    • get.time string

      The time spent in get operations.

    • get.total string

      The number of get operations.

    • get.exists_time string

      The time spent in successful get operations.

    • get.exists_total string

      The number of successful get operations.

    • get.missing_time string

      The time spent in failed get operations.

    • get.missing_total string

      The number of failed get operations.

    • indexing.delete_current string

      The number of current deletion operations.

    • indexing.delete_time string

      The time spent in deletion operations.

    • indexing.delete_total string

      The number of delete operations.

    • indexing.index_current string

      The number of current indexing operations.

    • indexing.index_time string

      The time spent in indexing operations.

    • indexing.index_total string

      The number of indexing operations.

    • indexing.index_failed string

      The number of failed indexing operations.

    • merges.current string

      The number of current merge operations.

    • merges.current_docs string

      The number of current merging documents.

    • merges.current_size string

      The size of current merge operations.

    • merges.total string

      The number of completed merge operations.

    • merges.total_docs string

      The number of merged documents.

    • merges.total_size string

      The size of current merges.

    • merges.total_time string

      The time spent merging documents.

    • refresh.total string

      The total number of refreshes.

    • refresh.time string

      The time spent in refreshes.

    • refresh.external_total string

      The total number of external refreshes.

    • refresh.external_time string

      The time spent in external refreshes.

    • refresh.listeners string

      The number of pending refresh listeners.

    • search.fetch_current string

      The current fetch phase operations.

    • search.fetch_time string

      The time spent in fetch phase.

    • search.fetch_total string

      The total number of fetch operations.

    • search.open_contexts string

      The number of open search contexts.

    • search.query_current string

      The current query phase operations.

    • search.query_time string

      The time spent in query phase.

    • search.query_total string

      The total number of query phase operations.

    • search.scroll_current string

      The open scroll contexts.

    • search.scroll_time string

      The time scroll contexts were held open.

    • search.scroll_total string

      The number of completed scroll contexts.

    • segments.count string

      The number of segments.

    • segments.memory string

      The memory used by segments.

    • segments.index_writer_memory string

      The memory used by the index writer.

    • segments.version_map_memory string

      The memory used by the version map.

    • segments.fixed_bitset_memory string

      The memory used by fixed bit sets for nested object field types and type filters for types referred to in _parent fields.

    • seq_no.max string

      The maximum sequence number.

    • seq_no.local_checkpoint string

      The local checkpoint.

    • seq_no.global_checkpoint string

      The global checkpoint.

    • warmer.current string

      The number of current warmer operations.

    • warmer.total string

      The total number of warmer operations.

    • warmer.total_time string

      The time spent in warmer operations.

    • path.data string

      The shard data path.

    • path.state string

      The shard state path.

    • bulk.total_operations string

      The number of bulk shard operations.

    • bulk.total_time string

      The time spent in shard bulk operations.

    • bulk.total_size_in_bytes string

      The total size in bytes of shard bulk operations.

    • bulk.avg_time string

      The average time spent in shard bulk operations.

    • bulk.avg_size_in_bytes string

      The average size in bytes of shard bulk operations.

GET /_cat/shards/{index}
GET _cat/shards?format=json
resp = client.cat.shards(
    format="json",
)
const response = await client.cat.shards({
  format: "json",
});
response = client.cat.shards(
  format: "json"
)
$resp = $client->cat()->shards([
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/shards?format=json"
client.cat().shards();
Response examples (200)
A successful response from `GET _cat/shards?format=json`.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  }
]
A successful response from `GET _cat/shards/my-index-*?format=json`. It returns information for any data streams or indices beginning with `my-index-`.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  }
]
A successful response from `GET _cat/shards?format=json`. The `RELOCATING` value in the `state` column indicates the index shard is relocating.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "RELOCATING",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA -> -> 192.168.56.30 bGG90GE"
  }
]
A successful response from `GET _cat/shards?format=json`. Before a shard is available for use, it goes through an `INITIALIZING` state. You can use the cat shards API to see which shards are initializing.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "docs": "3014",
    "store": "31.1mb",
    "dataset": "249b",
    "ip": "192.168.56.10",
    "node": "H5dfFeA"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "INITIALIZING",
    "docs": "0",
    "store": "14.3mb",
    "dataset": "249b",
    "ip": "192.168.56.30",
    "node": "bGG90GE"
  }
]
A successful response from `GET _cat/shards?h=index,shard,prirep,state,unassigned.reason&format=json`. It includes the `unassigned.reason` column, which indicates why a shard is unassigned.
[
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "p",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.10 H5dfFeA"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.30 bGG90GE"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "STARTED",
    "unassigned.reason": "3014 31.1mb 192.168.56.20 I8hydUG"
  },
  {
    "index": "my-index-000001",
    "shard": "0",
    "prirep": "r",
    "state": "UNASSIGNED",
    "unassigned.reason": "ALLOCATION_FAILED"
  }
]
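
As an illustrative sketch of the `h` and `s` query parameters described above, the following Python snippet uses the elasticsearch-py client shown in the examples to request only the shard-identification and `unassigned.reason` columns and sort the rows by state. The endpoint URL and API key are placeholders you would replace.
from elasticsearch import Elasticsearch

# Placeholder connection details; substitute your own endpoint and API key.
client = Elasticsearch("http://api.example.com", api_key="YOUR_API_KEY")

resp = client.cat.shards(
    h=["index", "shard", "prirep", "state", "unassigned.reason"],  # column selection
    s="state",          # ascending sort; use "state:desc" to reverse
    format="json",
)
for row in resp.body:
    # unassigned.reason may be empty for shards that are currently assigned
    print(row["index"], row["shard"], row["prirep"], row["state"], row.get("unassigned.reason"))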








Get index template information Generally available; Added in 5.2.0

GET /_cat/templates/{name}

All methods and paths for this operation:

GET /_cat/templates

GET /_cat/templates/{name}

Get information about the index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.

Required authorization

  • Cluster privileges: monitor

Path parameters

  • name string Required

    The name of the template to return. Accepts wildcard expressions. If omitted, all templates are returned.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases, the coordinating node sends requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • name string

      The template name.

    • index_patterns string

      The template index patterns.

    • order string

      The template application order or priority number.

    • version string | null

      The template version.

    • composed_of string

      The component templates that comprise the index template.

GET /_cat/templates/{name}
GET _cat/templates/my-template-*?v=true&s=name&format=json
resp = client.cat.templates(
    name="my-template-*",
    v=True,
    s="name",
    format="json",
)
const response = await client.cat.templates({
  name: "my-template-*",
  v: "true",
  s: "name",
  format: "json",
});
response = client.cat.templates(
  name: "my-template-*",
  v: "true",
  s: "name",
  format: "json"
)
$resp = $client->cat()->templates([
    "name" => "my-template-*",
    "v" => "true",
    "s" => "name",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/templates/my-template-*?v=true&s=name&format=json"
client.cat().templates();
Response examples (200)
A successful response from `GET _cat/templates/my-template-*?v=true&s=name&format=json`.
[
  {
    "name": "my-template-0",
    "index_patterns": "[te*]",
    "order": "500",
    "version": null,
    "composed_of": "[]"
  },
  {
    "name": "my-template-1",
    "index_patterns": "[tea*]",
    "order": "501",
    "version": null,
    "composed_of": "[]"
  },
  {
    "name": "my-template-2",
    "index_patterns": "[teak*]",
    "order": "502",
    "version": "7",
    "composed_of": "[]"
  }
]
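
The `h` and `s` parameters work here as well. The following is a minimal Python sketch (assuming the same pre-configured `client` as in the cat shards sketch earlier) that lists matching templates with a reduced set of columns, sorted by application order:
# Assumes `client` is the elasticsearch-py client configured in the earlier sketch.
resp = client.cat.templates(
    name="my-template-*",                               # wildcard, as in the example above
    h=["name", "index_patterns", "order", "version"],   # column selection
    s="order",                                          # sort by application order
    format="json",
)
for tmpl in resp.body:
    print(tmpl["name"], tmpl["index_patterns"], tmpl["order"], tmpl["version"])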

Get thread pool statistics Generally available

GET /_cat/thread_pool/{thread_pool_patterns}

All methods and paths for this operation:

GET /_cat/thread_pool

GET /_cat/thread_pool/{thread_pool_patterns}

Get thread pool statistics for each node in a cluster. Returned information includes all built-in thread pools and custom thread pools. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Required authorization

  • Cluster privileges: monitor

Path parameters

  • thread_pool_patterns string | array[string] Required

    A comma-separated list of thread pool names used to limit the request. Accepts wildcard expressions.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

    Supported values include:

    • active (or a): Number of active threads in the current thread pool.
    • completed (or c): Number of tasks completed by the thread pool executor.
    • core (or cr): Configured core number of active threads allowed in the current thread pool.
    • ephemeral_id (or eid): Ephemeral node ID.
    • host (or h): Hostname for the current node.
    • ip (or i): IP address for the current node.
    • keep_alive (or k): Configured keep alive time for threads.
    • largest (or l): Highest number of active threads in the current thread pool.
    • max (or mx): Configured maximum number of active threads allowed in the current thread pool.
    • name: Name of the thread pool, such as analyze or generic.
    • node_id (or id): ID of the node, such as k0zy.
    • node_name: Node name, such as I8hydUG.
    • pid (or p): Process ID of the running node.
    • pool_size (or psz): Number of threads in the current thread pool.
    • port (or po): Bound transport port for the current node.
    • queue (or q): Number of tasks in the queue for the current thread pool.
    • queue_size (or qs): Maximum number of tasks permitted in the queue for the current thread pool.
    • rejected (or r): Number of tasks rejected by the thread pool executor.
    • size (or sz): Configured fixed number of active threads allowed in the current thread pool.
    • type (or t): Type of thread pool. Returned values are fixed, fixed_auto_queue_size, direct, or scaling.

    Values are active, a, completed, c, core, cr, ephemeral_id, eid, host, h, ip, i, keep_alive, k, largest, l, max, mx, name, node_id, id, node_name, pid, p, pool_size, psz, port, po, queue, q, queue_size, qs, rejected, r, size, sz, type, or t.

  • s string | array[string]

    A comma-separated list of column names or aliases that determines the sort order. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases, the coordinating node sends requests for further information to each selected node.

  • master_timeout string

    The period to wait for a connection to the master node.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • node_name string

      The node name.

    • node_id string

      The persistent node identifier.

    • ephemeral_node_id string

      The ephemeral node identifier.

    • pid string

      The process identifier.

    • host string

      The host name for the current node.

    • ip string

      The IP address for the current node.

    • port string

      The bound transport port for the current node.

    • name string

      The thread pool name.

    • type string

      The thread pool type. Returned values include fixed, fixed_auto_queue_size, direct, and scaling.

    • active string

      The number of active threads in the current thread pool.

    • pool_size string

      The number of threads in the current thread pool.

    • queue string

      The number of tasks currently in queue.

    • queue_size string

      The maximum number of tasks permitted in the queue.

    • rejected string

      The number of rejected tasks.

    • largest string

      The highest number of active threads in the current thread pool.

    • completed string

      The number of completed tasks.

    • core string | null

      The core number of active threads allowed in a scaling thread pool.

    • max string | null

      The maximum number of active threads allowed in a scaling thread pool.

    • size string | null

      The number of active threads allowed in a fixed thread pool.

    • keep_alive string | null

      The thread keep alive time.

GET /_cat/thread_pool/{thread_pool_patterns}
GET /_cat/thread_pool?format=json
resp = client.cat.thread_pool(
    format="json",
)
const response = await client.cat.threadPool({
  format: "json",
});
response = client.cat.thread_pool(
  format: "json"
)
$resp = $client->cat()->threadPool([
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/thread_pool?format=json"
client.cat().threadPool();
Response examples (200)
A successful response from `GET /_cat/thread_pool?format=json`.
[
  {
    "node_name": "node-0",
    "name": "analyze",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  },
  {
    "node_name": "node-0",
    "name": "fetch_shard_started",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  },
  {
    "node_name": "node-0",
    "name": "fetch_shard_store",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  },
  {
    "node_name": "node-0",
    "name": "flush",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  },
  {
    "node_name": "node-0",
    "name": "write",
    "active": "0",
    "queue": "0",
    "rejected": "0"
  }
]
A successful response from `GET /_cat/thread_pool/generic?v=true&h=id,name,active,rejected,completed&format=json`. It returns the `id`, `name`, `active`, `rejected`, and `completed` columns. It also limits returned information to the generic thread pool.
[
  {
    "id": "0EWUhXeBQtaVGlexUeVwMg",
    "name": "generic",
    "active": "0",
    "rejected": "0",
    "completed": "70"
  }
]
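
A minimal Python sketch (assuming the same pre-configured `client` as in the earlier cat examples) reproducing the second example above, which restricts the output to the generic thread pool and a handful of columns:
# Assumes `client` is the elasticsearch-py client configured in the earlier sketch.
resp = client.cat.thread_pool(
    thread_pool_patterns="generic",                       # limit to the generic pool
    h=["id", "name", "active", "rejected", "completed"],  # column selection
    format="json",
)
for pool in resp.body:
    print(pool["name"], pool["active"], pool["rejected"], pool["completed"])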

Get transform information Generally available; Added in 7.7.0

GET /_cat/transforms/{transform_id}

All methods and paths for this operation:

GET /_cat/transforms

GET /_cat/transforms/{transform_id}

Get configuration and usage information about transforms.

CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.

Required authorization

  • Cluster privileges: monitor_transform

Path parameters

  • transform_id string Required

    A transform identifier or a wildcard expression. If you do not specify one of these options, the API returns information for all transforms.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, it returns an empty transforms array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches.

  • from number

    Skips the specified number of transforms.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices, use the syntax "remote_name:index_name". If any indices are in remote clusters then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices, use the syntax "remote_name:index_name". If any indices are in remote clusters then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

  • size number

    The maximum number of transforms to obtain.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      The transform identifier.

    • state string

      The status of the transform. Returned values include: aborting: The transform is aborting. failed: The transform failed. For more information about the failure, check the reason field. indexing: The transform is actively processing data and creating new documents. started: The transform is running but not actively indexing data. stopped: The transform is stopped. stopping: The transform is stopping.

    • checkpoint string

      The sequence number for the checkpoint.

    • documents_processed string

      The number of documents that have been processed from the source index of the transform.

    • checkpoint_progress string | null

      The progress of the next checkpoint that is currently in progress.

    • last_search_time string | null

      The timestamp of the last search in the source indices. This field is shown only if the transform is running.

    • changes_last_detection_time string | null

      The timestamp when changes were last detected in the source indices.

    • create_time string

      The time the transform was created.

    • version string

      The version of Elasticsearch that existed on the node when the transform was created.

    • source_index string

      The source indices for the transform.

    • dest_index string

      The destination index for the transform.

    • pipeline string

      The unique identifier for the ingest pipeline.

    • description string

      The description of the transform.

    • transform_type string

      The type of transform: batch or continuous.

    • frequency string

      The interval between checks for changes in the source indices when the transform is running continuously.

    • max_page_search_size string

      The initial page size that is used for the composite aggregation for each checkpoint.

    • docs_per_second string

      The number of input documents per second.

    • reason string

      If a transform has a failed state, these details describe the reason for failure.

    • search_total string

      The total number of search operations on the source index for the transform.

    • search_failure string

      The total number of search failures.

    • search_time string

      The total amount of search time, in milliseconds.

    • index_total string

      The total number of index operations done by the transform.

    • index_failure string

      The total number of indexing failures.

    • index_time string

      The total time spent indexing documents, in milliseconds.

    • documents_indexed string

      The number of documents that have been indexed into the destination index for the transform.

    • delete_time string

      The total time spent deleting documents, in milliseconds.

    • documents_deleted string

      The number of documents deleted from the destination index due to the retention policy for the transform.

    • trigger_count string

      The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • pages_processed string

      The number of search or bulk index operations processed. Documents are processed in batches instead of individually.

    • processing_time string

      The total time spent processing results, in milliseconds.

    • checkpoint_duration_time_exp_avg string

      The exponential moving average of the duration of the checkpoint, in milliseconds.

    • indexed_documents_exp_avg string

      The exponential moving average of the number of new documents that have been indexed.

    • processed_documents_exp_avg string

      The exponential moving average of the number of documents that have been processed.

GET /_cat/transforms/{transform_id}
GET /_cat/transforms?v=true&format=json
resp = client.cat.transforms(
    v=True,
    format="json",
)
const response = await client.cat.transforms({
  v: "true",
  format: "json",
});
response = client.cat.transforms(
  v: "true",
  format: "json"
)
$resp = $client->cat()->transforms([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/transforms?v=true&format=json"
client.cat().transforms();
Response examples (200)
A successful response from `GET /_cat/transforms?v=true&format=json`.
[
  {
    "id" : "ecommerce_transform",
    "state" : "started",
    "checkpoint" : "1",
    "documents_processed" : "705",
    "checkpoint_progress" : "100.00",
    "changes_last_detection_time" : null
  }
]
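
A minimal Python sketch (assuming the same pre-configured `client` as in the earlier examples) that combines the `from`, `size`, and `allow_no_match` parameters to page through transforms. The wildcard, page size, and column list are arbitrary illustration values:
# Assumes `client` is the elasticsearch-py client configured in the earlier sketch.
page_size = 10
offset = 0
while True:
    resp = client.cat.transforms(
        transform_id="*",      # wildcard expression
        allow_no_match=True,   # return an empty list instead of a 404 when nothing matches
        from_=offset,          # the `from` query parameter is exposed as `from_` in Python
        size=page_size,
        h=["id", "state", "checkpoint", "documents_processed"],
        format="json",
    )
    rows = resp.body
    for t in rows:
        print(t["id"], t["state"], t["checkpoint"], t["documents_processed"])
    if len(rows) < page_size:
        break
    offset += page_size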

Cluster





Update voting configuration exclusions Generally available; Added in 7.0.0

POST /_cluster/voting_config_exclusions

Update the cluster voting config exclusions by node IDs or node names. By default, if there are more than three master-eligible nodes in the cluster and you remove fewer than half of the master-eligible nodes in the cluster at once, the voting configuration automatically shrinks. If you want to shrink the voting configuration to contain fewer than three nodes or to remove half or more of the master-eligible nodes in the cluster at once, use this API to remove departing nodes from the voting configuration manually. The API adds an entry for each specified node to the cluster’s voting configuration exclusions list. It then waits until the cluster has reconfigured its voting configuration to exclude the specified nodes.

Clusters should have no voting configuration exclusions in normal operation. Once the excluded nodes have stopped, clear the voting configuration exclusions with DELETE /_cluster/voting_config_exclusions. This API waits for the nodes to be fully removed from the cluster before it returns. If your cluster has voting configuration exclusions for nodes that you no longer intend to remove, use DELETE /_cluster/voting_config_exclusions?wait_for_removal=false to clear the voting configuration exclusions without waiting for the nodes to leave the cluster.

A response to POST /_cluster/voting_config_exclusions with an HTTP status code of 200 OK guarantees that the node has been removed from the voting configuration and will not be reinstated until the voting configuration exclusions are cleared by calling DELETE /_cluster/voting_config_exclusions. If the call to POST /_cluster/voting_config_exclusions fails or returns a response with an HTTP status code other than 200 OK then the node may not have been removed from the voting configuration. In that case, you may safely retry the call.

NOTE: Voting exclusions are required only when you remove at least half of the master-eligible nodes from a cluster in a short time period. They are not required when removing master-ineligible nodes or when removing fewer than half of the master-eligible nodes.

External documentation

Query parameters

  • node_names string | array[string]

    A comma-separated list of the names of the nodes to exclude from the voting configuration. If specified, you may not also specify node_ids.

  • node_ids string | array[string]

    A comma-separated list of the persistent ids of the nodes to exclude from the voting configuration. If specified, you may not also specify node_names.

  • master_timeout string

    Period to wait for a connection to the master node.

    External documentation
  • timeout string

    When adding a voting configuration exclusion, the API waits for the specified nodes to be excluded from the voting configuration before returning. If the timeout expires before the appropriate condition is satisfied, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
POST /_cluster/voting_config_exclusions
curl \
 --request POST 'http://api.example.com/_cluster/voting_config_exclusions' \
 --header "Authorization: $API_KEY"












Get the cluster health status Generally available; Added in 1.3.0

GET /_cluster/health/{index}

All methods and paths for this operation:

GET /_cluster/health

GET /_cluster/health/{index}

Get the health status of the cluster. You can also use the API to get the health status of only specified data streams and indices. For data streams, the API retrieves the health status of the stream’s backing indices.

The cluster health status is: green, yellow or red. On the shard level, a red status indicates that the specific shard is not allocated in the cluster. Yellow means that the primary shard is allocated but replicas are not. Green means that all shards are allocated. The index level status is controlled by the worst shard status.

One of the main benefits of the API is the ability to wait until the cluster reaches a certain high watermark health level. The cluster status is controlled by the worst index status.

Required authorization

  • Cluster privileges: monitor,manage

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and index aliases used to limit the request. Wildcard expressions (*) are supported. To target all data streams and indices in a cluster, omit this parameter or use _all or *.

Query parameters

  • expand_wildcards string | array[string]

    Whether to expand wildcard expressions to concrete indices that are open, closed, or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • level string

    Can be one of cluster, indices, or shards. Controls the level of detail of the health information returned.

    Values are cluster, indices, or shards.

  • local boolean

    If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • wait_for_active_shards number | string

    A number controlling how many active shards to wait for, all to wait for all shards in the cluster to be active, or 0 to not wait.

    Values are all or index-setting.

  • wait_for_events string

    Can be one of immediate, urgent, high, normal, low, languid. Wait until all currently queued events with the given priority are processed.

    Values are immediate, urgent, high, normal, low, or languid.

  • wait_for_nodes string | number

    The request waits until the specified number N of nodes is available. It also accepts >=N, <=N, >N and <N. Alternatively, it is possible to use ge(N), le(N), gt(N) and lt(N) notation.

  • wait_for_no_initializing_shards boolean

    A boolean value which controls whether to wait (until the timeout provided) for the cluster to have no shard initializations. Defaults to false, which means it will not wait for initializing shards.

  • wait_for_no_relocating_shards boolean

    A boolean value which controls whether to wait (until the timeout provided) for the cluster to have no shard relocations. Defaults to false, which means it will not wait for relocating shards.

  • wait_for_status string

    One of green, yellow or red. Will wait (until the timeout provided) until the status of the cluster changes to the one provided or better, i.e. green > yellow > red. By default, will not wait for any status.

    Supported values include:

    • green (or GREEN): All shards are assigned.
    • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
    • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
    • unknown
    • unavailable

    Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • active_primary_shards number Required

      The number of active primary shards.

    • active_shards number Required

      The total number of active primary and replica shards.

    • active_shards_percent string

      The ratio of active shards in the cluster expressed as a string formatted percentage.

    • active_shards_percent_as_number number Required

      The ratio of active shards in the cluster expressed as a percentage.

    • cluster_name string Required

      The name of the cluster.

    • delayed_unassigned_shards number Required

      The number of shards whose allocation has been delayed by the timeout settings.

    • indices object
      Hide indices attribute Show indices attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • active_primary_shards number Required
        • active_shards number Required
        • initializing_shards number Required
        • number_of_replicas number Required
        • number_of_shards number Required
        • relocating_shards number Required
        • shards object
          Hide shards attribute Show shards attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • active_shards number Required
            • initializing_shards number Required
            • primary_active boolean Required
            • relocating_shards number Required
            • status string Required

              Supported values include:

              • green (or GREEN): All shards are assigned.
              • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
              • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
              • unknown
              • unavailable

              Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

            • unassigned_shards number Required
            • unassigned_primary_shards number Required
        • status string Required

          Supported values include:

          • green (or GREEN): All shards are assigned.
          • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
          • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
          • unknown
          • unavailable

          Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

        • unassigned_shards number Required
        • unassigned_primary_shards number Required
    • initializing_shards number Required

      The number of shards that are under initialization.

    • number_of_data_nodes number Required

      The number of nodes that are dedicated data nodes.

    • number_of_in_flight_fetch number Required

      The number of unfinished fetches.

    • number_of_nodes number Required

      The number of nodes within the cluster.

    • number_of_pending_tasks number Required

      The number of cluster-level changes that have not yet been executed.

    • relocating_shards number Required

      The number of shards that are under relocation.

    • status string Required

      Supported values include:

      • green (or GREEN): All shards are assigned.
      • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
      • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
      • unknown
      • unavailable

      Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

    • task_max_waiting_in_queue string

      The time that the earliest initiated task has been waiting to be performed.

      External documentation
    • task_max_waiting_in_queue_millis number

      The time, in milliseconds, that the earliest initiated task has been waiting to be performed.

    • timed_out boolean Required

      If false, the response was returned within the period of time specified by the timeout parameter (30s by default).

    • unassigned_primary_shards number Required

      The number of primary shards that are not allocated.

    • unassigned_shards number Required

      The number of shards that are not allocated.

GET /_cluster/health/{index}
GET _cluster/health
resp = client.cluster.health()
const response = await client.cluster.health();
response = client.cluster.health
$resp = $client->cluster()->health();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/health"
client.cluster().health(h -> h);
Response examples (200)
A successful response from `GET _cluster/health`. It is the health status of a quiet single node cluster with a single index with one shard and one replica.
{
  "cluster_name" : "testcluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1,
  "active_shards" : 1,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 50.0
}
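
A minimal Python sketch of the wait-for-status use case described above (assuming the same pre-configured `client` as in the earlier examples): block until the cluster is at least yellow, or give up after the supplied timeout.
# Assumes `client` is the elasticsearch-py client configured in the earlier sketch.
health = client.cluster.health(
    wait_for_status="yellow",   # returns as soon as the status is yellow or better
    timeout="30s",              # otherwise give up and report timed_out=true
)
if health["timed_out"]:
    print("cluster did not reach yellow within 30s; current status:", health["status"])
else:
    print("status:", health["status"],
          "active shards %:", health["active_shards_percent_as_number"])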

Get cluster info Generally available; Added in 8.9.0

GET /_info/{target}

Returns basic information about the cluster.

Path parameters

  • target string | array[string]

    Limits the information returned to the specific target. Supports a comma-separated list, such as http,ingest.

    Supported values include: _all, http, ingest, thread_pool, script

    Values are _all, http, ingest, thread_pool, or script.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • cluster_name string Required
    • http object
      Hide http attributes Show http attributes object
      • current_open number

        Current number of open HTTP connections for the node.

      • total_opened number

        Total number of HTTP connections opened for the node.

      • clients array[object]

        Information on current and recently-closed HTTP client connections. Clients that have been closed longer than the http.client_stats.closed_channels.max_age setting will not be represented here.

        Hide clients attributes Show clients attributes object
        • id number

          Unique ID for the HTTP client.

        • agent string

          Reported agent for the HTTP client. If unavailable, this property is not included in the response.

        • local_address string

          Local address for the HTTP connection.

        • remote_address string

          Remote address for the HTTP connection.

        • last_uri string

          The URI of the client’s most recent request.

        • opened_time_millis number

          Time at which the client opened the connection.

        • closed_time_millis number

          Time at which the client closed the connection if the connection is closed.

        • last_request_time_millis number

          Time of the most recent request from this client.

        • request_count number

          Number of requests from this client.

        • request_size_bytes number

          Cumulative size in bytes of all requests from this client.

        • x_opaque_id string

          Value from the client’s x-opaque-id HTTP header. If unavailable, this property is not included in the response.

      • routes object Required Generally available; Added in 8.12.0

        Detailed HTTP stats broken down by route

        Hide routes attribute Show routes attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • requests object Required
          • responses object Required
    • ingest object
      Hide ingest attributes Show ingest attributes object
      • pipelines object

        Contains statistics about ingest pipelines for the node.

        Hide pipelines attribute Show pipelines attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • count number Required

            Total number of documents ingested during the lifetime of this node.

          • current number Required

            Total number of documents currently being ingested.

          • failed number Required

            Total number of failed ingest operations during the lifetime of this node.

          • processors array[object] Required

            Total number of ingest processors.

          • ingested_as_first_pipeline_in_bytes number Required Generally available; Added in 8.15.0

            Total number of bytes of all documents ingested by the pipeline. This field is only present on pipelines which are the first to process a document. Thus, it is not present on pipelines which only serve as a final pipeline after a default pipeline, a pipeline run after a reroute processor, or pipelines in pipeline processors.

          • produced_as_first_pipeline_in_bytes number Required Generally available; Added in 8.15.0

            Total number of bytes of all documents produced by the pipeline. This field is only present on pipelines which are the first to process a document. Thus, it is not present on pipelines which only serve as a final pipeline after a default pipeline, a pipeline run after a reroute processor, or pipelines in pipeline processors. In situations where there are subsequent pipelines, the value represents the size of the document after all pipelines have run.

      • total object

        Contains statistics about ingest operations for the node.

        Hide total attributes Show total attributes object
        • count number Required

          Total number of documents ingested during the lifetime of this node.

        • current number Required

          Total number of documents currently being ingested.

        • failed number Required

          Total number of failed ingest operations during the lifetime of this node.

    • thread_pool object
      Hide thread_pool attribute Show thread_pool attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • active number

          Number of active threads in the thread pool.

        • completed number

          Number of tasks completed by the thread pool executor.

        • largest number

          Highest number of active threads in the thread pool.

        • queue number

          Number of tasks in queue for the thread pool.

        • rejected number

          Number of tasks rejected by the thread pool executor.

        • threads number

          Number of threads in the thread pool.

    • script object
      Hide script attributes Show script attributes object
      • cache_evictions number

        Total number of times the script cache has evicted old data.

      • compilations number

        Total number of inline script compilations performed by the node.

      • compilations_history object

        Contains this recent history of script compilations.

        Hide compilations_history attribute Show compilations_history attribute object
        • * number Additional properties
      • compilation_limit_triggered number

        Total number of times the script compilation circuit breaker has limited inline script compilations.

      • contexts array[object]
        Hide contexts attributes Show contexts attributes object
        • context string
        • compilations number
        • cache_evictions number
        • compilation_limit_triggered number
GET /_info/{target}
GET /_info/_all
resp = client.cluster.info(
    target="_all",
)
const response = await client.cluster.info({
  target: "_all",
});
response = client.cluster.info(
  target: "_all"
)
$resp = $client->cluster()->info([
    "target" => "_all",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_info/_all"
client.cluster().info(i -> i
    .target("_all")
);




Get remote cluster information Generally available; Added in 6.1.0

GET /_remote/info

Get information about configured remote clusters. The API returns connection and endpoint information keyed by the configured remote cluster alias.


This API returns information that reflects current state on the local cluster. The connected field does not necessarily reflect whether a remote cluster is down or unavailable, only whether there is currently an open connection to it. Elasticsearch does not spontaneously try to reconnect to a disconnected remote cluster. To trigger a reconnection, attempt a cross-cluster search or an ES|QL cross-cluster search, or try the /_resolve/cluster endpoint.

Required authorization

  • Cluster privileges: monitor
External documentation

Responses

  • 200 application/json
GET /_remote/info
GET /_remote/info
resp = client.cluster.remote_info()
const response = await client.cluster.remoteInfo();
response = client.cluster.remote_info
$resp = $client->cluster()->remoteInfo();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_remote/info"
client.cluster().remoteInfo();
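
For illustration, a minimal Python sketch, assuming an elasticsearch-py Elasticsearch instance named client and a hypothetical remote alias my_remote, that checks the connected flag and triggers a reconnection attempt via the resolve cluster endpoint:

# Sketch: inspect the connection state of a configured remote cluster and,
# if the connection is currently closed, nudge Elasticsearch to reconnect.
# "my_remote" is a hypothetical remote cluster alias.
info = client.cluster.remote_info()
remote = info.get("my_remote", {})
if not remote.get("connected", False):
    # Hitting the resolve cluster endpoint is one way to trigger a
    # reconnection attempt; a cross-cluster search would also work.
    client.indices.resolve_cluster(name="my_remote:*")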




Get the cluster state Generally available; Added in 1.3.0

GET /_cluster/state/{metric}/{index}

All methods and paths for this operation:

GET /_cluster/state

GET /_cluster/state/{metric}
GET /_cluster/state/{metric}/{index}

Get comprehensive information about the state of the cluster.

The cluster state is an internal data structure which keeps track of a variety of information needed by every node, including the identity and attributes of the other nodes in the cluster; cluster-wide settings; index metadata, including the mapping and settings for each index; and the location and status of every shard copy in the cluster.

The elected master node ensures that every node in the cluster has a copy of the same cluster state. This API lets you retrieve a representation of this internal state for debugging or diagnostic purposes. You may need to consult the Elasticsearch source code to determine the precise meaning of the response.

By default the API will route requests to the elected master node since this node is the authoritative source of cluster states. You can also retrieve the cluster state held on the node handling the API request by adding the ?local=true query parameter.

Elasticsearch may need to expend significant effort to compute a response to this API in larger clusters, and the response may comprise a very large quantity of data. If you use this API repeatedly, your cluster may become unstable.

WARNING: The response is a representation of an internal data structure. Its format is not subject to the same compatibility guarantees as other more stable APIs and may change from version to version. Do not query this API using external monitoring tools. Instead, obtain the information you require using other more stable cluster APIs.

Required authorization

  • Cluster privileges: monitor,manage

Path parameters

  • metric string | array[string] Required

    Limit the information returned to the specified metrics

  • index string | array[string] Required

    A comma-separated list of index names; use _all or empty string to perform the operation on all indices

Query parameters

  • allow_no_indices boolean

    Whether to ignore a wildcard indices expression that resolves into no concrete indices. (This includes the _all string or when no indices have been specified.)

  • expand_wildcards string | array[string]

    Whether to expand wildcard expression to concrete indices that are open, closed or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flat_settings boolean

    Return settings in flat format (default: false)

  • ignore_unavailable boolean

    Whether specified concrete indices should be ignored when unavailable (missing or closed)

  • local boolean

    Return local information, do not retrieve the state from master node (default: false)

  • master_timeout string

    Specify timeout for connection to master

    External documentation
  • wait_for_metadata_version number

    Wait for the metadata version to be equal or greater than the specified metadata version

  • wait_for_timeout string

    The maximum time to wait for wait_for_metadata_version before timing out

    External documentation

Responses

  • 200 application/json
GET /_cluster/state/{metric}/{index}
GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config
resp = client.cluster.state(
    filter_path="metadata.cluster_coordination.last_committed_config",
)
const response = await client.cluster.state({
  filter_path: "metadata.cluster_coordination.last_committed_config",
});
response = client.cluster.state(
  filter_path: "metadata.cluster_coordination.last_committed_config"
)
$resp = $client->cluster()->state([
    "filter_path" => "metadata.cluster_coordination.last_committed_config",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config"

Get cluster statistics Generally available; Added in 1.3.0

GET /_cluster/stats/nodes/{node_id}

All methods and paths for this operation:

GET /_cluster/stats

GET /_cluster/stats/nodes/{node_id}

Get basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, os, jvm versions, memory usage, cpu and installed plugins).

Required authorization

  • Cluster privileges: monitor

Path parameters

  • node_id string | array[string] Required

    Comma-separated list of node filters used to limit returned information. Defaults to all nodes in the cluster.

Query parameters

  • include_remotes boolean

    Include remote cluster data into the response

  • timeout string

    Period to wait for each node to respond. If a node does not respond before its timeout expires, the response does not include its stats. However, timed out nodes are included in the response’s _nodes.failed property. Defaults to no timeout.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object

      Contains statistics about the number of nodes selected by the request’s node filters.

      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        Hide failures attributes Show failures attributes object
        • type string Required

          The type of error

        • reason
        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by
        • root_cause array[object]
        • suppressed array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required

      Name of the cluster, based on the cluster name setting.

    • cluster_uuid string Required

      Unique identifier for the cluster.

    • indices object Required

      Contains statistics about indices with shards assigned to selected nodes.

      Hide indices attributes Show indices attributes object
      • analysis object

        Contains statistics about analyzers and analyzer components used in selected nodes.

        Hide analysis attributes Show analysis attributes object
        • analyzer_types array[object] Required

          Contains statistics about analyzer types used in selected nodes.

        • built_in_analyzers array[object] Required

          Contains statistics about built-in analyzers used in selected nodes.

        • built_in_char_filters array[object] Required

          Contains statistics about built-in character filters used in selected nodes.

        • built_in_filters array[object] Required

          Contains statistics about built-in token filters used in selected nodes.

        • built_in_tokenizers array[object] Required

          Contains statistics about built-in tokenizers used in selected nodes.

        • char_filter_types array[object] Required

          Contains statistics about character filter types used in selected nodes.

        • filter_types array[object] Required

          Contains statistics about token filter types used in selected nodes.

        • tokenizer_types array[object] Required

          Contains statistics about tokenizer types used in selected nodes.

        • synonyms object Required

          Contains statistics about synonyms types used in selected nodes.

      • completion object Required

        Contains statistics about memory used for completion in selected nodes.

        Hide completion attributes Show completion attributes object
        • size_in_bytes number Required

          Total amount, in bytes, of memory used for completion across all shards assigned to selected nodes.

        • fields object
      • count number Required

        Total number of indices with shards assigned to selected nodes.

      • docs object Required

        Contains counts for documents in selected nodes.

        Hide docs attributes Show docs attributes object
        • count number Required

          Total number of non-deleted documents across all primary shards assigned to selected nodes. This number is based on documents in Lucene segments and may include documents from nested fields.

        • deleted number

          Total number of deleted documents across all primary shards assigned to selected nodes. This number is based on documents in Lucene segments. Elasticsearch reclaims the disk space of deleted Lucene documents when a segment is merged.

        • total_size_in_bytes number Required

          Total size, in bytes, of all documents in these statistics. This value may be more reliable than store_stats.size_in_bytes for estimating the index size.

      • fielddata object Required

        Contains statistics about the field data cache of selected nodes.

        Hide fielddata attributes Show fielddata attributes object
        • evictions number
        • memory_size_in_bytes number Required
        • fields object
      • query_cache object Required

        Contains statistics about the query cache of selected nodes.

        Hide query_cache attributes Show query_cache attributes object
        • cache_count number Required

          Total number of entries added to the query cache across all shards assigned to selected nodes. This number includes current and evicted entries.

        • cache_size number Required

          Total number of entries currently in the query cache across all shards assigned to selected nodes.

        • evictions number Required

          Total number of query cache evictions across all shards assigned to selected nodes.

        • hit_count number Required

          Total count of query cache hits across all shards assigned to selected nodes.

        • memory_size_in_bytes number Required

          Total amount, in bytes, of memory used for the query cache across all shards assigned to selected nodes.

        • miss_count number Required

          Total count of query cache misses across all shards assigned to selected nodes.

        • total_count number Required

          Total count of hits and misses in the query cache across all shards assigned to selected nodes.

      • segments object Required

        Contains statistics about segments in selected nodes.

        Hide segments attributes Show segments attributes object
        • count number Required

          Total number of segments across all shards assigned to selected nodes.

        • doc_values_memory_in_bytes number Required

          Total amount, in bytes, of memory used for doc values across all shards assigned to selected nodes.

        • file_sizes object Required

          This object is not populated by the cluster stats API. To get information on segment files, use the node stats API.

        • fixed_bit_set_memory_in_bytes number Required

          Total amount of memory, in bytes, used by fixed bit sets across all shards assigned to selected nodes.

        • index_writer_memory_in_bytes number Required

          Total amount, in bytes, of memory used by all index writers across all shards assigned to selected nodes.

        • max_unsafe_auto_id_timestamp number Required

          Unix timestamp, in milliseconds, of the most recently retried indexing request.

        • memory_in_bytes number Required

          Total amount, in bytes, of memory used for segments across all shards assigned to selected nodes.

        • norms_memory_in_bytes number Required

          Total amount, in bytes, of memory used for normalization factors across all shards assigned to selected nodes.

        • points_memory_in_bytes number Required

          Total amount, in bytes, of memory used for points across all shards assigned to selected nodes.

        • stored_fields_memory_in_bytes number Required

          Total amount, in bytes, of memory used for stored fields across all shards assigned to selected nodes.

        • terms_memory_in_bytes number Required

          Total amount, in bytes, of memory used for terms across all shards assigned to selected nodes.

        • term_vectors_memory_in_bytes number Required

          Total amount, in bytes, of memory used for term vectors across all shards assigned to selected nodes.

        • version_map_memory_in_bytes number Required

          Total amount, in bytes, of memory used by all version maps across all shards assigned to selected nodes.

      • shards object Required

        Contains statistics about indices with shards assigned to selected nodes.

        Hide shards attributes Show shards attributes object
        • primaries number

          Number of primary shards assigned to selected nodes.

        • replication number

          Ratio of replica shards to primary shards across all selected nodes.

        • total number

          Total number of shards assigned to selected nodes.

      • store object Required

        Contains statistics about the size of shards assigned to selected nodes.

        Hide store attributes Show store attributes object
        • size_in_bytes number Required

          Total size, in bytes, of all shards assigned to selected nodes.

        • reserved_in_bytes number Required

          A prediction, in bytes, of how much larger the shard stores will eventually grow due to ongoing peer recoveries, restoring snapshots, and similar activities.

        • total_data_set_size_in_bytes number

          Total data set size, in bytes, of all shards assigned to selected nodes. This includes the size of shards not stored fully on the nodes, such as the cache for partially mounted indices.

      • mappings object

        Contains statistics about field mappings in selected nodes.

        Hide mappings attributes Show mappings attributes object
        • field_types array[object] Required

          Contains statistics about field data types used in selected nodes.

        • runtime_field_types array[object] Required

          Contains statistics about runtime field data types used in selected nodes.

        • total_field_count number

          Total number of fields in all non-system indices.

        • total_deduplicated_field_count number

          Total number of fields in all non-system indices, accounting for mapping deduplication.

        • total_deduplicated_mapping_size_in_bytes number

          Total size of all mappings, in bytes, after deduplication and compression.

        • source_modes object Required

          Source mode usage count.

      • versions array[object]

        Contains statistics about analyzers and analyzer components used in selected nodes.

        Hide versions attributes Show versions attributes object
        • index_count number Required
        • primary_shard_count number Required
        • total_primary_bytes number Required
        • total_primary_size
        • version
      • dense_vector object Required

        Contains statistics about indexed dense vector

        Hide dense_vector attribute Show dense_vector attribute object
        • value_count number Required
      • sparse_vector object Required

        Contains statistics about indexed sparse vector

        Hide sparse_vector attribute Show sparse_vector attribute object
        • value_count number Required
    • nodes object Required

      Contains statistics about nodes selected by the request’s node filters.

      Hide nodes attributes Show nodes attributes object
      • count object Required

        Contains counts for nodes selected by the request’s node filters.

        Hide count attributes Show count attributes object
        • total number Required
        • coordinating_only number
        • data number
        • data_cold number
        • data_content number
        • data_frozen number Generally available; Added in 7.13.0
        • data_hot number
        • data_warm number
        • index number
        • ingest number
        • master number
        • ml number
        • remote_cluster_client number
        • transform number
        • voting_only number
      • discovery_types object Required

        Contains statistics about the discovery types used by selected nodes.

        Hide discovery_types attribute Show discovery_types attribute object
        • * number Additional properties
      • fs object Required

        Contains statistics about file stores used by selected nodes.

        Hide fs attributes Show fs attributes object
        • path string
        • mount string
        • type string
        • available_in_bytes number

          Total number of bytes available to the JVM in file stores across all selected nodes. Depending on operating system or process-level restrictions, this number may be less than nodes.fs.free_in_bytes. This is the actual amount of free disk space the selected Elasticsearch nodes can use.

        • free_in_bytes number

          Total number of unallocated bytes in file stores across all selected nodes.

        • total_in_bytes number

          Total size, in bytes, of all file stores across all selected nodes.

        • low_watermark_free_space_in_bytes number
        • high_watermark_free_space_in_bytes number
        • flood_stage_free_space_in_bytes number
        • frozen_flood_stage_free_space_in_bytes number
      • indexing_pressure object Required
      • ingest object Required
        Hide ingest attributes Show ingest attributes object
        • number_of_pipelines number Required
        • processor_stats object Required
      • jvm object Required

        Contains statistics about the Java Virtual Machines (JVMs) used by selected nodes.

        Hide jvm attributes Show jvm attributes object
        • threads number Required

          Number of active threads in use by JVM across all selected nodes.

        • versions array[object] Required

          Contains statistics about the JVM versions used by selected nodes.

      • network_types object Required

        Contains statistics about the transport and HTTP networks used by selected nodes.

        Hide network_types attributes Show network_types attributes object
        • http_types object Required

          Contains statistics about the HTTP network types used by selected nodes.

        • transport_types object Required

          Contains statistics about the transport network types used by selected nodes.

      • os object Required

        Contains statistics about the operating systems used by selected nodes.

        Hide os attributes Show os attributes object
        • allocated_processors number Required

          Number of processors used to calculate thread pool size across all selected nodes. This number can be set with the processors setting of a node and defaults to the number of processors reported by the operating system. In both cases, this number will never be larger than 32.

        • architectures array[object]

          Contains statistics about processor architectures (for example, x86_64 or aarch64) used by selected nodes.

        • available_processors number Required

          Number of processors available to JVM across all selected nodes.

        • names array[object] Required

          Contains statistics about operating systems used by selected nodes.

        • pretty_names array[object] Required

          Contains statistics about operating systems used by selected nodes.

      • packaging_types array[object] Required

        Contains statistics about Elasticsearch distributions installed on selected nodes.

        Hide packaging_types attributes Show packaging_types attributes object
        • count number Required

          Number of selected nodes using the distribution flavor and file type.

        • flavor string Required

          Type of Elasticsearch distribution. This is always default.

        • type string Required

          File type (such as tar or zip) used for the distribution package.

      • plugins array[object] Required

        Contains statistics about installed plugins and modules by selected nodes. If no plugins or modules are installed, this array is empty.

        Hide plugins attributes Show plugins attributes object
        • classname string Required
        • description string Required
        • elasticsearch_version
        • extended_plugins array[string] Required
        • has_native_controller boolean Required
        • java_version
        • name
        • version
        • licensed boolean Required
      • process object Required

        Contains statistics about processes used by selected nodes.

      • versions array[string] Required

        Array of Elasticsearch versions used on selected nodes.

    • repositories object Required

      Contains stats on repository feature usage exposed in cluster stats for telemetry.

      Hide repositories attribute Show repositories attribute object
      • * object Additional properties
        Hide * attribute Show * attribute object
        • * number Additional properties
    • snapshots object Required

      Contains statistics about cluster snapshots.

      Hide snapshots attributes Show snapshots attributes object
      • current_counts object Required
        Hide current_counts attributes Show current_counts attributes object
        • snapshots number Required

          Snapshots currently in progress

        • shard_snapshots number Required

          Incomplete shard snapshots

        • snapshot_deletions number Required

          Snapshots deletions in progress

        • concurrent_operations number Required

          Sum of snapshots and snapshot_deletions

        • cleanups number Required

          Cleanups in progress, not counted in concurrent_operations as they are not concurrent

      • repositories object Required
        Hide repositories attribute Show repositories attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required
    • status string

      Health status of the cluster, based on the state of its primary and replica shards.

      Supported values include:

      • green (or GREEN): All shards are assigned.
      • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
      • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
      • unknown
      • unavailable

      Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

    • timestamp number Required

      Unix timestamp, in milliseconds, for the last time the cluster statistics were refreshed.

    • ccs object Required

      Cross-cluster stats

      Hide ccs attributes Show ccs attributes object
      • clusters object

        Contains remote cluster settings and metrics collected from them. The keys are cluster names, and the values are per-cluster data. Only present if include_remotes option is set to true.

        Hide clusters attribute Show clusters attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • cluster_uuid string Required

            The UUID of the remote cluster.

          • mode string Required

            The connection mode used to communicate with the remote cluster.

          • skip_unavailable boolean Required

            The skip_unavailable setting used for this remote cluster.

          • transport.compress string Required

            Transport compression setting used for this remote cluster.

          • version array[string] Required

            The list of Elasticsearch versions used by the nodes on the remote cluster.

          • nodes_count number Required

            The total count of nodes in the remote cluster.

          • shards_count number Required

            The total number of shards in the remote cluster.

          • indices_count number Required

            The total number of indices in the remote cluster.

          • indices_total_size_in_bytes number Required

            Total data set size, in bytes, of all shards assigned to selected nodes.

          • indices_total_size string

            Total data set size of all shards assigned to selected nodes, as a human-readable string.

          • max_heap_in_bytes number Required

            Maximum amount of memory, in bytes, available for use by the heap across the nodes of the remote cluster.

          • max_heap string

            Maximum amount of memory available for use by the heap across the nodes of the remote cluster, as a human-readable string.

          • mem_total_in_bytes number Required

            Total amount, in bytes, of physical memory across the nodes of the remote cluster.

          • mem_total string

            Total amount of physical memory across the nodes of the remote cluster, as a human-readable string.

      • _esql object

        Information about ES|QL cross-cluster query usage.

        Hide _esql attributes Show _esql attributes object
        • total number Required

          The total number of cross-cluster search requests that have been executed by the cluster.

        • success number Required

          The total number of cross-cluster search requests that have been successfully executed by the cluster.

        • skipped number Required

          The total number of cross-cluster search requests (successful or failed) that had at least one remote cluster skipped.

        • remotes_per_search_max number Required

          The maximum number of remote clusters that were queried in a single cross-cluster search request.

        • remotes_per_search_avg number Required

          The average number of remote clusters that were queried in a single cross-cluster search request.

        • failure_reasons object Required

          Statistics about the reasons for cross-cluster search request failures. The keys are the failure reason names and the values are the number of requests that failed for that reason.

        • features object Required

          The keys are the names of the search features, and the values are the number of requests that used that feature. A single request can use more than one feature (e.g. both async and wildcard).

        • clients object Required

          Statistics about the clients that executed cross-cluster search requests. The keys are the names of the clients, and the values are the number of requests that were executed by that client. Only known clients (such as kibana or elasticsearch) are counted.

        • clusters object Required

          Statistics about the clusters that were queried in cross-cluster search requests. The keys are cluster names, and the values are per-cluster telemetry data. This also includes the local cluster itself, which uses the name (local).

GET /_cluster/stats/nodes/{node_id}
GET _cluster/stats?human&filter_path=indices.mappings.total_deduplicated_mapping_size*
resp = client.cluster.stats(
    human=True,
    filter_path="indices.mappings.total_deduplicated_mapping_size*",
)
const response = await client.cluster.stats({
  human: "true",
  filter_path: "indices.mappings.total_deduplicated_mapping_size*",
});
response = client.cluster.stats(
  human: "true",
  filter_path: "indices.mappings.total_deduplicated_mapping_size*"
)
$resp = $client->cluster()->stats([
    "human" => "true",
    "filter_path" => "indices.mappings.total_deduplicated_mapping_size*",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/stats?human&filter_path=indices.mappings.total_deduplicated_mapping_size*"

Ping the cluster Generally available

HEAD /

Get information about whether the cluster is running.

Responses

  • 200 application/json
HEAD /
curl \
 --request HEAD 'http://api.example.com/' \
 --header "Authorization: $API_KEY"

Clear the archived repositories metering Technical preview; Added in 7.16.0

DELETE /_nodes/{node_id}/_repositories_metering/{max_archive_version}

Clear the archived repositories metering information in the cluster.

Required authorization

  • Cluster privileges: monitor,manage

Path parameters

  • node_id string | array[string] Required

    Comma-separated list of node IDs or names used to limit returned information.

  • max_archive_version number Required

    Specifies the maximum archive_version to be cleared from the archive.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object

      Contains statistics about the number of nodes selected by the request’s node filters.

      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        Hide failures attributes Show failures attributes object
        • type string Required

          The type of error

        • reason
        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by
        • root_cause array[object]
        • suppressed array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required

      Name of the cluster. Based on the cluster.name setting.

    • nodes object Required

      Contains repositories metering information for the nodes selected by the request.

      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • repository_name string Required

          Repository name.

        • repository_type string Required

          Repository type.

        • repository_location object Required

          Represents a unique location within the repository.

          Hide repository_location attributes Show repository_location attributes object
          • base_path string Required
          • container string

            Container name (Azure)

          • bucket string

            Bucket name (GCP, S3)

        • repository_ephemeral_id string Required

          An identifier that changes every time the repository is updated.

        • Time unit for milliseconds

        • Time unit for milliseconds

        • archived boolean Required

          A flag that indicates whether this object has been archived. When a repository is closed or updated, the repository metering information is archived and kept for a certain period of time. This allows retrieving the repository metering information of previous repository instantiations.

        • cluster_version number

          The cluster state version when this object was archived. This field can be used as a logical timestamp to delete all the archived metrics up to an observed version. It is only present for archived repository metering information objects. Its main purpose is to avoid possible race conditions during repository metering information deletions, i.e. deleting archived repositories metering information that has not been observed yet.

        • request_counts object Required

          An object with the number of requests performed against the repository, grouped by request type.

          Hide request_counts attributes Show request_counts attributes object
          • GetBlobProperties number

            Number of Get Blob Properties requests (Azure)

          • GetBlob number

            Number of Get Blob requests (Azure)

          • ListBlobs number

            Number of List Blobs requests (Azure)

          • PutBlob number

            Number of Put Blob requests (Azure)

          • PutBlock number

            Number of Put Block (Azure)

          • PutBlockList number

            Number of Put Block List requests

          • GetObject number

            Number of get object requests (GCP, S3)

          • ListObjects number

            Number of list objects requests (GCP, S3)

          • InsertObject number

            Number of insert object requests, including simple, multipart, and resumable uploads. Resumable uploads can perform multiple HTTP requests to insert a single object, but they are counted as a single request since they are billed as an individual operation. (GCP)

          • PutObject number

            Number of PutObject requests (S3)

          • PutMultipartObject number

            Number of Multipart requests, including CreateMultipartUpload, UploadPart and CompleteMultipartUpload requests (S3)

DELETE /_nodes/{node_id}/_repositories_metering/{max_archive_version}
curl \
 --request DELETE 'http://api.example.com/_nodes/{node_id}/_repositories_metering/{max_archive_version}' \
 --header "Authorization: $API_KEY"

Get cluster repositories metering Technical preview; Added in 7.16.0

GET /_nodes/{node_id}/_repositories_metering

Get repositories metering information for a cluster. This API exposes monotonically non-decreasing counters, and clients are expected to durably store the information needed to compute aggregations over a period of time. Additionally, the information exposed by this API is volatile, meaning that it will not be present after node restarts.

Required authorization

  • Cluster privileges: monitor,manage

Path parameters

  • node_id string | array[string] Required

    Comma-separated list of node IDs or names used to limit returned information. For more information about the nodes selective options, refer to the node specification documentation.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object

      Contains statistics about the number of nodes selected by the request’s node filters.

      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        Hide failures attributes Show failures attributes object
        • type string Required

          The type of error

        • reason
        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by
        • root_cause array[object]
        • suppressed array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required

      Name of the cluster. Based on the cluster.name setting.

    • nodes object Required

      Contains repositories metering information for the nodes selected by the request.

      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • repository_name string Required

          Repository name.

        • repository_type string Required

          Repository type.

        • repository_location object Required

          Represents a unique location within the repository.

          Hide repository_location attributes Show repository_location attributes object
          • base_path string Required
          • container string

            Container name (Azure)

          • bucket string

            Bucket name (GCP, S3)

        • repository_ephemeral_id string Required

          An identifier that changes every time the repository is updated.

        • Time unit for milliseconds

        • Time unit for milliseconds

        • archived boolean Required

          A flag that indicates whether this object has been archived. When a repository is closed or updated, the repository metering information is archived and kept for a certain period of time. This allows retrieving the repository metering information of previous repository instantiations.

        • cluster_version number

          The cluster state version when this object was archived. This field can be used as a logical timestamp to delete all the archived metrics up to an observed version. It is only present for archived repository metering information objects. Its main purpose is to avoid possible race conditions during repository metering information deletions, i.e. deleting archived repositories metering information that has not been observed yet.

        • request_counts object Required

          An object with the number of requests performed against the repository, grouped by request type.

          Hide request_counts attributes Show request_counts attributes object
          • GetBlobProperties number

            Number of Get Blob Properties requests (Azure)

          • GetBlob number

            Number of Get Blob requests (Azure)

          • ListBlobs number

            Number of List Blobs requests (Azure)

          • PutBlob number

            Number of Put Blob requests (Azure)

          • PutBlock number

            Number of Put Block (Azure)

          • PutBlockList number

            Number of Put Block List requests

          • GetObject number

            Number of get object requests (GCP, S3)

          • ListObjects number

            Number of list objects requests (GCP, S3)

          • InsertObject number

            Number of insert object requests, including simple, multipart, and resumable uploads. Resumable uploads can perform multiple HTTP requests to insert a single object, but they are counted as a single request since they are billed as an individual operation. (GCP)

          • PutObject number

            Number of PutObject requests (S3)

          • PutMultipartObject number

            Number of Multipart requests, including CreateMultipartUpload, UploadPart and CompleteMultipartUpload requests (S3)

GET /_nodes/{node_id}/_repositories_metering
curl \
 --request GET 'http://api.example.com/_nodes/{node_id}/_repositories_metering' \
 --header "Authorization: $API_KEY"

Get the hot threads for nodes Generally available

GET /_nodes/{node_id}/hot_threads

All methods and paths for this operation:

GET /_nodes/hot_threads

GET /_nodes/{node_id}/hot_threads

Get a breakdown of the hot threads on each selected node in the cluster. The output is plain text with a breakdown of the top hot threads for each node.

Required authorization

  • Cluster privileges: monitor,manage

Path parameters

  • node_id string | array[string] Required

    List of node IDs or names used to limit returned information.

Query parameters

  • ignore_idle_threads boolean

    If true, known idle threads (e.g. waiting in a socket select, or to get a task from an empty queue) are filtered out.

  • interval string

    The interval to do the second sampling of threads.

    External documentation
  • snapshots number

    Number of samples of thread stacktrace.

  • threads number

    Specifies the number of hot threads to provide information for.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • type string

    The type to sample.

    Values are cpu, wait, block, gpu, or mem.

  • sort string

    The sort order for 'cpu' type (default: total)

    Values are cpu, wait, block, gpu, or mem.

Responses

  • 200 application/json
GET /_nodes/{node_id}/hot_threads
GET /_nodes/hot_threads
resp = client.nodes.hot_threads()
const response = await client.nodes.hotThreads();
response = client.nodes.hot_threads
$resp = $client->nodes()->hotThreads();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_nodes/hot_threads"
client.nodes().hotThreads(h -> h);
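
A minimal Python sketch that narrows the sampling, assuming an Elasticsearch client instance named client; the parameter values are illustrative:

# Sketch: sample the five busiest CPU threads per node, waiting 500ms between
# samples. The response is plain text rather than JSON.
text = client.nodes.hot_threads(
    threads=5,
    interval="500ms",
    type="cpu",
    ignore_idle_threads=True,
)
print(text)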




Reload the keystore on nodes in the cluster Generally available; Added in 6.5.0

POST /_nodes/{node_id}/reload_secure_settings

All methods and paths for this operation:

POST /_nodes/reload_secure_settings

POST /_nodes/{node_id}/reload_secure_settings

Secure settings are stored in an on-disk keystore. Some of these settings are reloadable; that is, you can change them on disk and reload them without restarting any nodes in the cluster. When you have updated reloadable secure settings in your keystore, you can use this API to reload those settings on each node.

When the Elasticsearch keystore is password protected and not simply obfuscated, you must provide the password for the keystore when you reload the secure settings. Reloading the settings for the whole cluster assumes that the keystores for all nodes are protected with the same password; this method is allowed only when inter-node communications are encrypted. Alternatively, you can reload the secure settings on each node by locally accessing the API and passing the node-specific Elasticsearch keystore password.

Path parameters

  • node_id string | array[string] Required

    The names of particular nodes in the cluster to target.

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
application/json

Body

  • secure_settings_password string

    The password for the Elasticsearch keystore.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object

      Contains statistics about the number of nodes selected by the request’s node filters.

      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        Hide failures attributes Show failures attributes object
        • type string Required

          The type of error

        • reason
        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by
        • root_cause array[object]
        • suppressed array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • name string Required
        • reload_exception object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

          Hide reload_exception attributes Show reload_exception attributes object
          • type string Required

            The type of error

          • reason
          • stack_trace string

            The server stack trace. Present only if the error_trace=true parameter was sent with the request.

          • root_cause array[object]
          • suppressed array[object]
POST /_nodes/{node_id}/reload_secure_settings
POST _nodes/reload_secure_settings
{
  "secure_settings_password": "keystore-password"
}
resp = client.nodes.reload_secure_settings(
    secure_settings_password="keystore-password",
)
const response = await client.nodes.reloadSecureSettings({
  secure_settings_password: "keystore-password",
});
response = client.nodes.reload_secure_settings(
  body: {
    "secure_settings_password": "keystore-password"
  }
)
$resp = $client->nodes()->reloadSecureSettings([
    "body" => [
        "secure_settings_password" => "keystore-password",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"secure_settings_password":"keystore-password"}' "$ELASTICSEARCH_URL/_nodes/reload_secure_settings"
client.nodes().reloadSecureSettings(r -> r
    .secureSettingsPassword("keystore-password")
);
Request example
Run `POST _nodes/reload_secure_settings` to reload the keystore on nodes in the cluster.
{
  "secure_settings_password": "keystore-password"
}
Response examples (200)
A successful response when reloading keystore on nodes in your cluster.
{
  "_nodes": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "cluster_name": "my_cluster",
  "nodes": {
    "pQHNt5rXTTWNvUgOrdynKg": {
      "name": "node-0"
    }
  }
}
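
When keystores are protected with node-specific passwords, the reload can be targeted at a single node, as noted above. A minimal Python sketch, assuming an Elasticsearch client instance named client that connects to that node; the node name and password are placeholders:

# Sketch: reload secure settings on one named node using that node's keystore
# password. "node-0" and "keystore-password" are placeholders.
resp = client.nodes.reload_secure_settings(
    node_id="node-0",
    secure_settings_password="keystore-password",
)
for name, node in resp["nodes"].items():
    print(name, node.get("reload_exception"))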

Get node statistics Generally available

GET /_nodes/{node_id}/stats/{metric}/{index_metric}

All methods and paths for this operation:

GET /_nodes/stats

GET /_nodes/stats/{metric}
GET /_nodes/{node_id}/stats
GET /_nodes/{node_id}/stats/{metric}
GET /_nodes/stats/{metric}/{index_metric}
GET /_nodes/{node_id}/stats/{metric}/{index_metric}

Get statistics for nodes in a cluster. By default, all stats are returned. You can limit the returned information by using metrics.

Required authorization

  • Cluster privileges: monitor,manage

Path parameters

  • node_id string | array[string] Required

    Comma-separated list of node IDs or names used to limit returned information.

  • metric string | array[string] Required

    Limit the information returned to the specified metrics

  • index_metric string | array[string] Required

    Limit the information returned for the indices metric to the specific index metrics. It can be used only if the indices (or all) metric is specified.

Query parameters

  • completion_fields string | array[string]

    Comma-separated list or wildcard expressions of fields to include in fielddata and suggest statistics.

  • fielddata_fields string | array[string]

    Comma-separated list or wildcard expressions of fields to include in fielddata statistics.

  • fields string | array[string]

    Comma-separated list or wildcard expressions of fields to include in the statistics.

  • groups boolean

    Comma-separated list of search groups to include in the search statistics.

  • include_segment_file_sizes boolean

    If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested).

  • level string

    Indicates whether statistics are aggregated at the cluster, index, or shard level.

    Values are cluster, indices, or shards.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • types array[string]

    A comma-separated list of document types for the indexing index metric.

  • include_unloaded_segments boolean

    If true, the response includes information from segments that are not loaded into memory.
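
For example, a minimal Python sketch, assuming an Elasticsearch client instance named client, that limits the response to the jvm and fs metric groups:

# Sketch: fetch only the jvm and fs metric groups for all nodes and report
# each node's current heap usage in bytes.
stats = client.nodes.stats(metric="jvm,fs")
for node_id, node in stats["nodes"].items():
    print(node_id, node["jvm"]["mem"]["heap_used_in_bytes"])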

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object

      Contains statistics about the number of nodes selected by the request’s node filters.

      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        Hide failures attributes Show failures attributes object
        • type string Required

          The type of error

        • reason
        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by
        • root_cause array[object]
        • suppressed array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string
    • nodes object Required
      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • adaptive_selection object

          Statistics about adaptive replica selection.

          Hide adaptive_selection attribute Show adaptive_selection attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • avg_queue_size number

              The exponentially weighted moving average queue size of search requests on the keyed node.

            • avg_response_time_ns number

              The exponentially weighted moving average response time, in nanoseconds, of search requests on the keyed node.

            • avg_service_time_ns number

              The exponentially weighted moving average service time, in nanoseconds, of search requests on the keyed node.

            • outgoing_searches number

              The number of outstanding search requests to the keyed node from the node these stats are for.

            • rank string

              The rank of this node; used for shard selection when routing search requests.

        • breakers object

          Statistics about the field data circuit breaker.

          Hide breakers attribute Show breakers attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • estimated_size string

              Estimated memory used for the operation.

            • estimated_size_in_bytes number

              Estimated memory used, in bytes, for the operation.

            • limit_size string

              Memory limit for the circuit breaker.

            • limit_size_in_bytes number

              Memory limit, in bytes, for the circuit breaker.

            • overhead number

              A constant that all estimates for the circuit breaker are multiplied with to calculate a final estimate.

            • tripped number

              Total number of times the circuit breaker has been triggered and prevented an out of memory error.

        • fs object

          File system information, data path, free disk space, read/write stats.

          Hide fs attributes Show fs attributes object
          • data array[object]

            List of all file stores.

          • timestamp number

            Last time the file stores statistics were refreshed. Recorded in milliseconds since the Unix Epoch.

        • host string

          Network host for the node, based on the network host setting.

        • http object

          HTTP connection information.

          Hide http attributes Show http attributes object
          • current_open number

            Current number of open HTTP connections for the node.

          • total_opened number

            Total number of HTTP connections opened for the node.

          • clients array[object]

            Information on current and recently-closed HTTP client connections. Clients that have been closed longer than the http.client_stats.closed_channels.max_age setting will not be represented here.

          • routes object Required Generally available; Added in 8.12.0

            Detailed HTTP stats broken down by route

        • ingest object

          Statistics about ingest preprocessing.

          Hide ingest attribute Show ingest attribute object
          • pipelines object

            Contains statistics about ingest pipelines for the node.

        • ip string | array[string]

          IP address and port for the node.

        • jvm object

          JVM stats, memory pool information, garbage collection, buffer pools, number of loaded/unloaded classes.

          Hide jvm attributes Show jvm attributes object
          • buffer_pools object

            Contains statistics about JVM buffer pools for the node.

          • timestamp number

            Last time JVM statistics were refreshed.

          • uptime string

            Human-readable JVM uptime. Only returned if the human query parameter is true.

          • uptime_in_millis number

            JVM uptime in milliseconds.

        • name string

          Human-readable identifier for the node. Based on the node name setting.

        • os object

          Operating system stats, load average, mem, swap.

          Hide os attribute Show os attribute object
          • timestamp number
        • process object

          Process statistics, memory consumption, cpu usage, open file descriptors.

          Hide process attributes Show process attributes object
          • open_file_descriptors number

            Number of open file descriptors associated with the current process, or -1 if not supported.

          • max_file_descriptors number

            Maximum number of file descriptors allowed on the system, or -1 if not supported.

          • timestamp number

            Last time the statistics were refreshed. Recorded in milliseconds since the Unix Epoch.

        • roles array[string]

          Roles assigned to the node.

          Values are master, data, data_cold, data_content, data_frozen, data_hot, data_warm, client, ingest, ml, voting_only, transform, remote_cluster_client, or coordinating_only.

        • script object

          Contains script statistics for the node.

          Hide script attributes Show script attributes object
          • cache_evictions number

            Total number of times the script cache has evicted old data.

          • compilations number

            Total number of inline script compilations performed by the node.

          • compilations_history object

            Contains the recent history of script compilations.

          • compilation_limit_triggered number

            Total number of times the script compilation circuit breaker has limited inline script compilations.

          • contexts array[object]
        • script_cache object
        • thread_pool object

          Statistics about each thread pool, including current size, queue and rejected tasks.

          Hide thread_pool attribute Show thread_pool attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • active number

              Number of active threads in the thread pool.

            • completed number

              Number of tasks completed by the thread pool executor.

            • largest number

              Highest number of active threads in the thread pool.

            • queue number

              Number of tasks in queue for the thread pool.

            • rejected number

              Number of tasks rejected by the thread pool executor.

            • threads number

              Number of threads in the thread pool.

        • timestamp number
        • transport object

          Transport statistics about sent and received bytes in cluster communication.

          Hide transport attributes Show transport attributes object
          • inbound_handling_time_histogram array[object]

            The distribution of the time spent handling each inbound message on a transport thread, represented as a histogram.

          • outbound_handling_time_histogram array[object]

            The distribution of the time spent sending each outbound transport message on a transport thread, represented as a histogram.

          • rx_count number

            Total number of RX (receive) packets received by the node during internal cluster communication.

          • rx_size string

            Size of RX packets received by the node during internal cluster communication.

          • rx_size_in_bytes number

            Size, in bytes, of RX packets received by the node during internal cluster communication.

          • server_open number

            Current number of inbound TCP connections used for internal communication between nodes.

          • tx_count number

            Total number of TX (transmit) packets sent by the node during internal cluster communication.

          • tx_size string

            Size of TX packets sent by the node during internal cluster communication.

          • tx_size_in_bytes number

            Size, in bytes, of TX packets sent by the node during internal cluster communication.

          • total_outbound_connections number

            The cumulative number of outbound transport connections that this node has opened since it started. Each transport connection may comprise multiple TCP connections but is only counted once in this statistic. Transport connections are typically long-lived so this statistic should remain constant in a stable cluster.

        • transport_address string

          Host and port for the transport layer, used for internal communication between nodes in a cluster.

        • attributes object

          Contains a list of attributes for the node.

          Hide attributes attribute Show attributes attribute object
          • * string Additional properties
        • discovery object

          Contains node discovery statistics for the node.

          Hide discovery attribute Show discovery attribute object
          • cluster_state_update object

            Contains low-level statistics about how long various activities took during cluster state updates while the node was the elected master. Omitted if the node is not master-eligible. Every field whose name ends in _time within this object is also represented as a raw number of milliseconds in a field whose name ends in _time_millis. The human-readable fields with a _time suffix are only returned if requested with the ?human=true query parameter.

        • indexing_pressure object

          Contains indexing pressure statistics for the node.

        • indices object

          Indices stats about size, document count, indexing and deletion times, search times, field cache size, merges and flushes.

          Hide indices attribute Show indices attribute object
          • shards object Generally available; Added in 7.15.0
GET /_nodes/{node_id}/stats/{metric}/{index_metric}
GET _nodes/stats/process?filter_path=**.max_file_descriptors
resp = client.nodes.stats(
    metric="process",
    filter_path="**.max_file_descriptors",
)
const response = await client.nodes.stats({
  metric: "process",
  filter_path: "**.max_file_descriptors",
});
response = client.nodes.stats(
  metric: "process",
  filter_path: "**.max_file_descriptors"
)
$resp = $client->nodes()->stats([
    "metric" => "process",
    "filter_path" => "**.max_file_descriptors",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_nodes/stats/process?filter_path=**.max_file_descriptors"

Get feature usage information Generally available; Added in 6.0.0

GET /_nodes/{node_id}/usage/{metric}

All methods and paths for this operation:

GET /_nodes/usage

GET /_nodes/usage/{metric}
GET /_nodes/{node_id}/usage
GET /_nodes/{node_id}/usage/{metric}

Required authorization

  • Cluster privileges: monitor,manage

Path parameters

  • node_id string | array[string] Required

    A comma-separated list of node IDs or names used to limit the returned information. Use _local to return information from the node you're connecting to, or leave it empty to get information from all nodes.

  • metric string | array[string] Required

    Limits the information returned to the specific metrics. A comma-separated list of the following options: _all, rest_actions.

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object

      Contains statistics about the number of nodes selected by the request’s node filters.

      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]

        Cause and details about a request failure. This object defines the properties common to all error types; additional details that depend on the error type are also provided.

        Hide failures attributes Show failures attributes object
        • type string Required

          The type of error

        • reason
        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by
        • root_cause array[object]
        • suppressed array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • rest_actions object Required
          Hide rest_actions attribute Show rest_actions attribute object
          • * number Additional properties
        • since number Required

          Time unit for milliseconds

        • timestamp number Required

          Time unit for milliseconds

        • aggregations object Required
          Hide aggregations attribute Show aggregations attribute object
          • * object Additional properties
GET /_nodes/{node_id}/usage/{metric}
GET _nodes/usage
resp = client.nodes.usage()
const response = await client.nodes.usage();
response = client.nodes.usage
$resp = $client->nodes()->usage();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_nodes/usage"
client.nodes().usage(u -> u);
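To limit the response to a single metric, pass it in the path or, with the Python client, as the metric argument. A minimal sketch:
# Report only REST action usage counts for all nodes.
resp = client.nodes.usage(metric="rest_actions")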

Cluster - Health

Get the cluster health Generally available; Added in 8.7.0

GET /_health_report/{feature}

All methods and paths for this operation:

GET /_health_report

GET /_health_report/{feature}

Get a report with the health status of an Elasticsearch cluster. The report contains a list of indicators that compose Elasticsearch functionality.

Each indicator has a health status of green, unknown, yellow, or red. The indicator provides an explanation and metadata describing the reason for its current health status.

The cluster’s status is controlled by the worst indicator status.

If an indicator’s status is non-green, the indicator result may include a list of impacts that detail the functionality negatively affected by the health issue. Each impact carries a severity level, the affected area of the system, and a simple description of the impact on the system.

Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system. The root cause and remediation steps are encapsulated in a diagnosis. A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem.

NOTE: The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently. When setting up automated polling of the API for health status, set verbose to false to disable the more expensive analysis logic.
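For example, a polling sketch with the Python client that disables the expensive analysis; the shards_availability indicator name used below is one of the indicators documented in the response attributes of this section:
# Poll the overall status cheaply by disabling the verbose root cause analysis.
resp = client.health_report(verbose=False)
print(resp["status"])

# Optionally scope the report to a single indicator.
resp = client.health_report(feature="shards_availability", verbose=False)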

Path parameters

  • feature string | array[string] Required

    A feature of the cluster, as returned by the top-level health report API.

Query parameters

  • timeout string

    Explicit operation timeout.

    External documentation
  • verbose boolean

    Opt in for more information about the health of the system.

  • size number

    Limit the number of affected resources the health report API returns.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • cluster_name string Required
    • indicators object Required
      Hide indicators attributes Show indicators attributes object
      • master_is_stable object

        MASTER_IS_STABLE

        Hide master_is_stable attributes Show master_is_stable attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • shards_availability object

        SHARDS_AVAILABILITY

        Hide shards_availability attributes Show shards_availability attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • disk object

        DISK

        Hide disk attributes Show disk attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • repository_integrity object

        REPOSITORY_INTEGRITY

        Hide repository_integrity attributes Show repository_integrity attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • data_stream_lifecycle object

        DATA_STREAM_LIFECYCLE

        Hide data_stream_lifecycle attributes Show data_stream_lifecycle attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • ilm object

        ILM

        Hide ilm attributes Show ilm attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • slm object

        SLM

        Hide slm attributes Show slm attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • shards_capacity object

        SHARDS_CAPACITY

        Hide shards_capacity attributes Show shards_capacity attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
    • status string

      Values are green, yellow, red, unknown, or unavailable.

GET /_health_report/{feature}
GET _health_report
resp = client.health_report()
const response = await client.healthReport();
response = client.health_report
$resp = $client->healthReport();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_health_report"
client.healthReport(h -> h);









Create or update a connector Beta; Added in 8.12.0

PUT /_connector/{connector_id}

All methods and paths for this operation:

PUT /_connector

PUT /_connector/{connector_id}

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be created or updated. ID is auto-generated if not provided.

application/json

Body

  • description string
  • index_name string
  • is_native boolean
  • language string
  • name string
  • service_type string

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

    • id string Required
PUT /_connector/{connector_id}
PUT _connector/my-connector
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "service_type": "google_drive"
}
resp = client.connector.put(
    connector_id="my-connector",
    index_name="search-google-drive",
    name="My Connector",
    service_type="google_drive",
)
const response = await client.connector.put({
  connector_id: "my-connector",
  index_name: "search-google-drive",
  name: "My Connector",
  service_type: "google_drive",
});
response = client.connector.put(
  connector_id: "my-connector",
  body: {
    "index_name": "search-google-drive",
    "name": "My Connector",
    "service_type": "google_drive"
  }
)
$resp = $client->connector()->put([
    "connector_id" => "my-connector",
    "body" => [
        "index_name" => "search-google-drive",
        "name" => "My Connector",
        "service_type" => "google_drive",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index_name":"search-google-drive","name":"My Connector","service_type":"google_drive"}' "$ELASTICSEARCH_URL/_connector/my-connector"
client.connector().put(p -> p
    .connectorId("my-connector")
    .indexName("search-google-drive")
    .name("My Connector")
    .serviceType("google_drive")
);
Request examples
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "service_type": "google_drive"
}
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "description": "My Connector to sync data to Elastic index from Google Drive",
  "service_type": "google_drive",
  "language": "english"
}
Response examples (200)
{
  "result": "created",
  "id": "my-connector"
}

Delete a connector Beta; Added in 8.12.0

DELETE /_connector/{connector_id}

Removes a connector and associated sync jobs. This is a destructive action that is not recoverable. NOTE: This action doesn’t delete any API keys, ingest pipelines, or data indices associated with the connector. These need to be removed manually.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be deleted

Query parameters

  • delete_sync_jobs boolean

    A flag indicating if associated sync jobs should be also removed. Defaults to false.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_connector/{connector_id}
DELETE _connector/my-connector-id?delete_sync_jobs=true
resp = client.connector.delete(
    connector_id="my-connector-id",
    delete_sync_jobs=True,
)
const response = await client.connector.delete({
  connector_id: "my-connector-id",
  delete_sync_jobs: true,
});
response = client.connector.delete(
  connector_id: "my-connector-id",
  delete_sync_jobs: true
)
$resp = $client->connector()->delete([
    "connector_id" => "my-connector-id",
    "delete_sync_jobs" => true,
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/my-connector-id?delete_sync_jobs=true"
client.connector().delete(d -> d
    .connectorId("my-connector-id")
    .deleteSyncJobs(true)
);
Response examples (200)
{
    "acknowledged": true
}

Get all connectors Beta; Added in 8.12.0

GET /_connector

Get information about all connectors.

Query parameters

  • from number

    Starting offset (default: 0)

  • size number

    The maximum number of results to return.

  • index_name string | array[string]

    A comma-separated list of connector index names to fetch connector documents for

  • connector_name string | array[string]

    A comma-separated list of connector names to fetch connector documents for

  • service_type string | array[string]

    A comma-separated list of connector service types to fetch connector documents for

  • query string

    A wildcard query string that filters connectors with matching name, description or index name

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • results array[object] Required
      Hide results attributes Show results attributes object
      • api_key_id string
      • api_key_secret_id string
      • configuration object Required
        Hide configuration attribute Show configuration attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • category string
          • depends_on array[object] Required
          • label string Required
          • options array[object] Required
          • order number
          • placeholder string
          • required boolean Required
          • sensitive boolean Required
          • tooltip
          • ui_restrictions array[string]
          • validations array[object]
          • value object Required
      • custom_scheduling object Required
        Hide custom_scheduling attribute Show custom_scheduling attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • enabled boolean Required
          • interval string Required
          • name string Required
      • description string
      • features object
        Hide features attributes Show features attributes object
        • document_level_security object

          Indicates whether document-level security is enabled.

        • incremental_sync object

          Indicates whether incremental syncs are enabled.

        • native_connector_api_keys object

          Indicates whether managed connector API keys are enabled.

        • sync_rules object
      • filtering array[object] Required
        Hide filtering attributes Show filtering attributes object
        • active object Required
        • domain string
        • draft object Required
      • id string
      • index_name string | null

      • is_native boolean Required
      • language string
      • last_access_control_sync_error string
      • last_access_control_sync_scheduled_at string | number

        One of:
      • last_access_control_sync_status string

        Values are canceling, canceled, completed, error, in_progress, pending, or suspended.

      • last_deleted_document_count number
      • last_incremental_sync_scheduled_at string | number

        One of:
      • last_indexed_document_count number
      • last_seen string | number

        One of:
      • last_sync_error string
      • last_sync_scheduled_at string | number

        One of:
      • last_sync_status string

        Values are canceling, canceled, completed, error, in_progress, pending, or suspended.

      • last_synced string | number

        One of:
      • name string
      • pipeline object
        Hide pipeline attributes Show pipeline attributes object
        • extract_binary_content boolean Required
        • name string Required
        • reduce_whitespace boolean Required
        • run_ml_inference boolean Required
      • scheduling object Required
        Hide scheduling attributes Show scheduling attributes object
        • access_control object
        • full object
        • incremental object
      • service_type string
      • status string Required

        Values are created, needs_configuration, configured, connected, or error.

      • sync_cursor object
      • sync_now boolean Required
GET /_connector
GET _connector
resp = client.connector.list()
const response = await client.connector.list();
response = client.connector.list
$resp = $client->connector()->list();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector"
client.connector().list(l -> l);
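The query parameters above can be combined to page through and filter the list. A minimal sketch with the Python client; the filter values are illustrative:
resp = client.connector.list(
    from_=0,
    size=10,
    service_type="google_drive",
    query="drive",
)
print(resp["count"])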

Create a connector Beta; Added in 8.12.0

POST /_connector

Connectors are Elasticsearch integrations that bring content from third-party data sources; they can be deployed on Elastic Cloud or hosted on your own infrastructure. Elastic managed connectors (Native connectors) are a managed service on Elastic Cloud. Self-managed connectors (Connector clients) run on your own infrastructure.

application/json

Body

  • description string
  • index_name string
  • is_native boolean
  • language string
  • name string
  • service_type string

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

    • id string Required
POST /_connector
curl \
 --request POST 'http://api.example.com/_connector' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"description":"string","index_name":"string","is_native":true,"language":"string","name":"string","service_type":"string"}'




Check in a connector sync job Technical preview

PUT /_connector/_sync_job/{connector_sync_job_id}/_check_in

Check in a connector sync job and set the last_seen field to the current time before updating it in the internal index.

To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job to be checked in.

Responses

  • 200 application/json
PUT /_connector/_sync_job/{connector_sync_job_id}/_check_in
PUT _connector/_sync_job/my-connector-sync-job/_check_in
resp = client.connector.sync_job_check_in(
    connector_sync_job_id="my-connector-sync-job",
)
const response = await client.connector.syncJobCheckIn({
  connector_sync_job_id: "my-connector-sync-job",
});
response = client.connector.sync_job_check_in(
  connector_sync_job_id: "my-connector-sync-job"
)
$resp = $client->connector()->syncJobCheckIn([
    "connector_sync_job_id" => "my-connector-sync-job",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job/_check_in"
client.connector().syncJobCheckIn(s -> s
    .connectorSyncJobId("my-connector-sync-job")
);

Claim a connector sync job Technical preview

PUT /_connector/_sync_job/{connector_sync_job_id}/_claim

This action updates the job status to in_progress and sets the last_seen and started_at timestamps to the current time. Additionally, it can set the sync_cursor property for the sync job.

This API is not intended for direct connector management by users. It supports the implementation of services that utilize the connector protocol to communicate with Elasticsearch.

To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job.

application/json

Body Required

  • sync_cursor object

    The cursor object from the last incremental sync job. This should reference the sync_cursor field in the connector state for which the job runs.

  • worker_hostname string Required

    The host name of the current system that will run the job.

Responses

  • 200 application/json
PUT /_connector/_sync_job/{connector_sync_job_id}/_claim
PUT _connector/_sync_job/my-connector-sync-job-id/_claim
{
  "worker_hostname": "some-machine"
}
resp = client.connector.sync_job_claim(
    connector_sync_job_id="my-connector-sync-job-id",
    worker_hostname="some-machine",
)
const response = await client.connector.syncJobClaim({
  connector_sync_job_id: "my-connector-sync-job-id",
  worker_hostname: "some-machine",
});
response = client.connector.sync_job_claim(
  connector_sync_job_id: "my-connector-sync-job-id",
  body: {
    "worker_hostname": "some-machine"
  }
)
$resp = $client->connector()->syncJobClaim([
    "connector_sync_job_id" => "my-connector-sync-job-id",
    "body" => [
        "worker_hostname" => "some-machine",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"worker_hostname":"some-machine"}' "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job-id/_claim"
client.connector().syncJobClaim(s -> s
    .connectorSyncJobId("my-connector-sync-job-id")
    .workerHostname("some-machine")
);
Request example
An example body for a `PUT _connector/_sync_job/my-connector-sync-job-id/_claim` request.
{
  "worker_hostname": "some-machine"
}




Delete a connector sync job Beta; Added in 8.12.0

DELETE /_connector/_sync_job/{connector_sync_job_id}

Remove a connector sync job and its associated data. This is a destructive action that is not recoverable.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job to be deleted

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_connector/_sync_job/{connector_sync_job_id}
DELETE _connector/_sync_job/my-connector-sync-job-id
resp = client.connector.sync_job_delete(
    connector_sync_job_id="my-connector-sync-job-id",
)
const response = await client.connector.syncJobDelete({
  connector_sync_job_id: "my-connector-sync-job-id",
});
response = client.connector.sync_job_delete(
  connector_sync_job_id: "my-connector-sync-job-id"
)
$resp = $client->connector()->syncJobDelete([
    "connector_sync_job_id" => "my-connector-sync-job-id",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job-id"
client.connector().syncJobDelete(s -> s
    .connectorSyncJobId("my-connector-sync-job-id")
);
Response examples (200)
{
  "acknowledged": true
}








Create a connector sync job Beta; Added in 8.12.0

POST /_connector/_sync_job

Create a connector sync job document in the internal index and initialize its counters and timestamps with default values.

application/json

Body Required

  • id string Required

    The id of the associated connector

  • job_type string

    Values are full, incremental, or access_control.

  • trigger_method string

    Values are on_demand or scheduled.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • id string Required
POST /_connector/_sync_job
POST _connector/_sync_job
{
  "id": "connector-id",
  "job_type": "full",
  "trigger_method": "on_demand"
}
resp = client.connector.sync_job_post(
    id="connector-id",
    job_type="full",
    trigger_method="on_demand",
)
const response = await client.connector.syncJobPost({
  id: "connector-id",
  job_type: "full",
  trigger_method: "on_demand",
});
response = client.connector.sync_job_post(
  body: {
    "id": "connector-id",
    "job_type": "full",
    "trigger_method": "on_demand"
  }
)
$resp = $client->connector()->syncJobPost([
    "body" => [
        "id" => "connector-id",
        "job_type" => "full",
        "trigger_method" => "on_demand",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"id":"connector-id","job_type":"full","trigger_method":"on_demand"}' "$ELASTICSEARCH_URL/_connector/_sync_job"
client.connector().syncJobPost(s -> s
    .id("connector-id")
    .jobType(SyncJobType.Full)
    .triggerMethod(SyncJobTriggerMethod.OnDemand)
);
Request example
{
  "id": "connector-id",
  "job_type": "full",
  "trigger_method": "on_demand"
}

Set the connector sync job stats Technical preview

PUT /_connector/_sync_job/{connector_sync_job_id}/_stats

Stats include: deleted_document_count, indexed_document_count, indexed_document_volume, and total_document_count. You can also update last_seen. This API is mainly used by the connector service for updating sync job information.

To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job.

application/json

Body Required

  • deleted_document_count number Required

    The number of documents the sync job deleted.

  • indexed_document_count number Required

    The number of documents the sync job indexed.

  • indexed_document_volume number Required

    The total size of the data (in MiB) the sync job indexed.

  • last_seen string

    The timestamp to use in the last_seen property for the connector sync job.

    External documentation
  • metadata object

    The connector-specific metadata.

    Hide metadata attribute Show metadata attribute object
    • * object Additional properties
  • total_document_count number

    The total number of documents in the target index after the sync job finished.

Responses

  • 200 application/json
PUT /_connector/_sync_job/{connector_sync_job_id}/_stats
PUT _connector/_sync_job/my-connector-sync-job/_stats
{
    "deleted_document_count": 10,
    "indexed_document_count": 20,
    "indexed_document_volume": 1000,
    "total_document_count": 2000,
    "last_seen": "2023-01-02T10:00:00Z"
}
resp = client.connector.sync_job_update_stats(
    connector_sync_job_id="my-connector-sync-job",
    deleted_document_count=10,
    indexed_document_count=20,
    indexed_document_volume=1000,
    total_document_count=2000,
    last_seen="2023-01-02T10:00:00Z",
)
const response = await client.connector.syncJobUpdateStats({
  connector_sync_job_id: "my-connector-sync-job",
  deleted_document_count: 10,
  indexed_document_count: 20,
  indexed_document_volume: 1000,
  total_document_count: 2000,
  last_seen: "2023-01-02T10:00:00Z",
});
response = client.connector.sync_job_update_stats(
  connector_sync_job_id: "my-connector-sync-job",
  body: {
    "deleted_document_count": 10,
    "indexed_document_count": 20,
    "indexed_document_volume": 1000,
    "total_document_count": 2000,
    "last_seen": "2023-01-02T10:00:00Z"
  }
)
$resp = $client->connector()->syncJobUpdateStats([
    "connector_sync_job_id" => "my-connector-sync-job",
    "body" => [
        "deleted_document_count" => 10,
        "indexed_document_count" => 20,
        "indexed_document_volume" => 1000,
        "total_document_count" => 2000,
        "last_seen" => "2023-01-02T10:00:00Z",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"deleted_document_count":10,"indexed_document_count":20,"indexed_document_volume":1000,"total_document_count":2000,"last_seen":"2023-01-02T10:00:00Z"}' "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job/_stats"
client.connector().syncJobUpdateStats(s -> s
    .connectorSyncJobId("my-connector-sync-job")
    .deletedDocumentCount(10L)
    .indexedDocumentCount(20L)
    .indexedDocumentVolume(1000L)
    .lastSeen(l -> l
        .time("2023-01-02T10:00:00Z")
    )
    .totalDocumentCount(2000)
);
Request example
An example body for a `PUT _connector/_sync_job/my-connector-sync-job/_stats` request.
{
    "deleted_document_count": 10,
    "indexed_document_count": 20,
    "indexed_document_volume": 1000,
    "total_document_count": 2000,
    "last_seen": "2023-01-02T10:00:00Z"
}

Activate the connector draft filter Technical preview; Added in 8.12.0

PUT /_connector/{connector_id}/_filtering/_activate

Activates the valid draft filtering for a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_filtering/_activate
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_filtering/_activate' \
 --header "Authorization: $API_KEY"

Update the connector API key ID Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_api_key_id

Update the api_key_id and api_key_secret_id fields of a connector. You can specify the ID of the API key used for authorization and the ID of the connector secret where the API key is stored. The connector secret ID is required only for Elastic managed (native) connectors. Self-managed connectors (connector clients) do not use this field.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • api_key_id string
  • api_key_secret_id string

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_api_key_id
PUT _connector/my-connector/_api_key_id
{
    "api_key_id": "my-api-key-id",
    "api_key_secret_id": "my-connector-secret-id"
}
resp = client.connector.update_api_key_id(
    connector_id="my-connector",
    api_key_id="my-api-key-id",
    api_key_secret_id="my-connector-secret-id",
)
const response = await client.connector.updateApiKeyId({
  connector_id: "my-connector",
  api_key_id: "my-api-key-id",
  api_key_secret_id: "my-connector-secret-id",
});
response = client.connector.update_api_key_id(
  connector_id: "my-connector",
  body: {
    "api_key_id": "my-api-key-id",
    "api_key_secret_id": "my-connector-secret-id"
  }
)
$resp = $client->connector()->updateApiKeyId([
    "connector_id" => "my-connector",
    "body" => [
        "api_key_id" => "my-api-key-id",
        "api_key_secret_id" => "my-connector-secret-id",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"api_key_id":"my-api-key-id","api_key_secret_id":"my-connector-secret-id"}' "$ELASTICSEARCH_URL/_connector/my-connector/_api_key_id"
client.connector().updateApiKeyId(u -> u
    .apiKeyId("my-api-key-id")
    .apiKeySecretId("my-connector-secret-id")
    .connectorId("my-connector")
);
Request example
{
    "api_key_id": "my-api-key-id",
    "api_key_secret_id": "my-connector-secret-id"
}
Response examples (200)
{
  "result": "updated"
}

Update the connector configuration Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_configuration

Update the configuration field in the connector document.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_configuration
PUT _connector/my-spo-connector/_configuration
{
    "values": {
        "tenant_id": "my-tenant-id",
        "tenant_name": "my-sharepoint-site",
        "client_id": "foo",
        "secret_value": "bar",
        "site_collections": "*"
    }
}
resp = client.connector.update_configuration(
    connector_id="my-spo-connector",
    values={
        "tenant_id": "my-tenant-id",
        "tenant_name": "my-sharepoint-site",
        "client_id": "foo",
        "secret_value": "bar",
        "site_collections": "*"
    },
)
const response = await client.connector.updateConfiguration({
  connector_id: "my-spo-connector",
  values: {
    tenant_id: "my-tenant-id",
    tenant_name: "my-sharepoint-site",
    client_id: "foo",
    secret_value: "bar",
    site_collections: "*",
  },
});
response = client.connector.update_configuration(
  connector_id: "my-spo-connector",
  body: {
    "values": {
      "tenant_id": "my-tenant-id",
      "tenant_name": "my-sharepoint-site",
      "client_id": "foo",
      "secret_value": "bar",
      "site_collections": "*"
    }
  }
)
$resp = $client->connector()->updateConfiguration([
    "connector_id" => "my-spo-connector",
    "body" => [
        "values" => [
            "tenant_id" => "my-tenant-id",
            "tenant_name" => "my-sharepoint-site",
            "client_id" => "foo",
            "secret_value" => "bar",
            "site_collections" => "*",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"values":{"tenant_id":"my-tenant-id","tenant_name":"my-sharepoint-site","client_id":"foo","secret_value":"bar","site_collections":"*"}}' "$ELASTICSEARCH_URL/_connector/my-spo-connector/_configuration"
client.connector().updateConfiguration(u -> u
    .connectorId("my-spo-connector")
    .values(Map.of("tenant_id", JsonData.fromJson("\"my-tenant-id\""),"tenant_name", JsonData.fromJson("\"my-sharepoint-site\""),"secret_value", JsonData.fromJson("\"bar\""),"client_id", JsonData.fromJson("\"foo\""),"site_collections", JsonData.fromJson("\"*\"")))
);
Request examples
{
    "values": {
        "tenant_id": "my-tenant-id",
        "tenant_name": "my-sharepoint-site",
        "client_id": "foo",
        "secret_value": "bar",
        "site_collections": "*"
    }
}
{
    "values": {
        "secret_value": "foo-bar"
    }
}
Response examples (200)
{
  "result": "updated"
}

Update the connector error field Technical preview; Added in 8.12.0

PUT /_connector/{connector_id}/_error

Set the error field for the connector. If the error provided in the request body is non-null, the connector’s status is updated to error. Otherwise, if the error is reset to null, the connector status is updated to connected.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • error string | null Required

    One of:

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_error
PUT _connector/my-connector/_error
{
    "error": "Houston, we have a problem!"
}
resp = client.connector.update_error(
    connector_id="my-connector",
    error="Houston, we have a problem!",
)
const response = await client.connector.updateError({
  connector_id: "my-connector",
  error: "Houston, we have a problem!",
});
response = client.connector.update_error(
  connector_id: "my-connector",
  body: {
    "error": "Houston, we have a problem!"
  }
)
$resp = $client->connector()->updateError([
    "connector_id" => "my-connector",
    "body" => [
        "error" => "Houston, we have a problem!",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"error":"Houston, we have a problem!"}' "$ELASTICSEARCH_URL/_connector/my-connector/_error"
client.connector().updateError(u -> u
    .connectorId("my-connector")
    .error("Houston, we have a problem!")
);
Request example
{
    "error": "Houston, we have a problem!"
}
Response examples (200)
{
  "result": "updated"
}

Update the connector features Technical preview

PUT /_connector/{connector_id}/_features

Update the connector features in the connector document. This API can be used to control the following aspects of a connector:

  • document-level security
  • incremental syncs
  • advanced sync rules
  • basic sync rules

Normally, the running connector service automatically manages these features. However, you can use this API to override the default behavior.

To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated.

application/json

Body Required

  • features object Required
    Hide features attributes Show features attributes object
    • document_level_security object

      Indicates whether document-level security is enabled.

      Hide document_level_security attribute Show document_level_security attribute object
      • enabled boolean Required
    • incremental_sync object

      Indicates whether incremental syncs are enabled.

      Hide incremental_sync attribute Show incremental_sync attribute object
      • enabled boolean Required
    • native_connector_api_keys object

      Indicates whether managed connector API keys are enabled.

      Hide native_connector_api_keys attribute Show native_connector_api_keys attribute object
      • enabled boolean Required
    • sync_rules object
      Hide sync_rules attributes Show sync_rules attributes object
      • advanced object

        Indicates whether advanced sync rules are enabled.

        Hide advanced attribute Show advanced attribute object
        • enabled boolean Required
      • basic object

        Indicates whether basic sync rules are enabled.

        Hide basic attribute Show basic attribute object
        • enabled boolean Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_features
PUT _connector/my-connector/_features
{
  "features": {
    "document_level_security": {
      "enabled": true
    },
    "incremental_sync": {
      "enabled": true
    },
    "sync_rules": {
      "advanced": {
        "enabled": false
      },
      "basic": {
        "enabled": true
      }
    }
  }
}
resp = client.connector.update_features(
    connector_id="my-connector",
    features={
        "document_level_security": {
            "enabled": True
        },
        "incremental_sync": {
            "enabled": True
        },
        "sync_rules": {
            "advanced": {
                "enabled": False
            },
            "basic": {
                "enabled": True
            }
        }
    },
)
const response = await client.connector.updateFeatures({
  connector_id: "my-connector",
  features: {
    document_level_security: {
      enabled: true,
    },
    incremental_sync: {
      enabled: true,
    },
    sync_rules: {
      advanced: {
        enabled: false,
      },
      basic: {
        enabled: true,
      },
    },
  },
});
response = client.connector.update_features(
  connector_id: "my-connector",
  body: {
    "features": {
      "document_level_security": {
        "enabled": true
      },
      "incremental_sync": {
        "enabled": true
      },
      "sync_rules": {
        "advanced": {
          "enabled": false
        },
        "basic": {
          "enabled": true
        }
      }
    }
  }
)
$resp = $client->connector()->updateFeatures([
    "connector_id" => "my-connector",
    "body" => [
        "features" => [
            "document_level_security" => [
                "enabled" => true,
            ],
            "incremental_sync" => [
                "enabled" => true,
            ],
            "sync_rules" => [
                "advanced" => [
                    "enabled" => false,
                ],
                "basic" => [
                    "enabled" => true,
                ],
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"features":{"document_level_security":{"enabled":true},"incremental_sync":{"enabled":true},"sync_rules":{"advanced":{"enabled":false},"basic":{"enabled":true}}}}' "$ELASTICSEARCH_URL/_connector/my-connector/_features"
client.connector().updateFeatures(u -> u
    .connectorId("my-connector")
    .features(f -> f
        .documentLevelSecurity(d -> d
            .enabled(true)
        )
        .incrementalSync(i -> i
            .enabled(true)
        )
        .syncRules(s -> s
            .advanced(a -> a
                .enabled(false)
            )
            .basic(b -> b
                .enabled(true)
            )
        )
    )
);
Request examples
{
  "features": {
    "document_level_security": {
      "enabled": true
    },
    "incremental_sync": {
      "enabled": true
    },
    "sync_rules": {
      "advanced": {
        "enabled": false
      },
      "basic": {
        "enabled": true
      }
    }
  }
}
{
  "features": {
    "document_level_security": {
      "enabled": true
    }
  }
}
Response examples (200)
{
  "result": "updated"
}

Update the connector filtering Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_filtering

Update the draft filtering configuration of a connector and mark the draft validation state as edited. The filtering draft is activated once validated by the running Elastic connector service. The filtering property is used to configure sync rules (both basic and advanced) for a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • filtering array[object]
    Hide filtering attributes Show filtering attributes object
    • active object Required
      Hide active attributes Show active attributes object
      • advanced_snippet object Required
        Hide advanced_snippet attribute Show advanced_snippet attribute object
        • value object Required
      • rules array[object] Required
        Hide rules attributes Show rules attributes object
        • created_at
        • field
        • id
        • order number Required
        • policy
        • rule
        • updated_at
        • value string Required
      • validation object Required
        Hide validation attribute Show validation attribute object
        • errors array[object] Required
    • domain string
    • draft object Required
      Hide draft attributes Show draft attributes object
      • advanced_snippet object Required
        Hide advanced_snippet attribute Show advanced_snippet attribute object
        • value object Required
      • rules array[object] Required
        Hide rules attributes Show rules attributes object
        • created_at
        • field
        • id
        • order number Required
        • policy
        • rule
        • updated_at
        • value string Required
      • validation object Required
        Hide validation attribute Show validation attribute object
        • errors array[object] Required
  • rules array[object]
    Hide rules attributes Show rules attributes object
    • created_at string | number

      One of:
    • field string Required

      Path to the field or an array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • order number Required
    • policy string Required

      Values are exclude or include.

    • rule string Required

      Values are contains, ends_with, equals, regex, starts_with, >, or <.

    • updated_at string | number

      One of:
    • value string Required
  • advanced_snippet object
    Hide advanced_snippet attributes Show advanced_snippet attributes object

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_filtering
PUT _connector/my-g-drive-connector/_filtering
{
    "rules": [
         {
            "field": "file_extension",
            "id": "exclude-txt-files",
            "order": 0,
            "policy": "exclude",
            "rule": "equals",
            "value": "txt"
        },
        {
            "field": "_",
            "id": "DEFAULT",
            "order": 1,
            "policy": "include",
            "rule": "regex",
            "value": ".*"
        }
    ]
}
resp = client.connector.update_filtering(
    connector_id="my-g-drive-connector",
    rules=[
        {
            "field": "file_extension",
            "id": "exclude-txt-files",
            "order": 0,
            "policy": "exclude",
            "rule": "equals",
            "value": "txt"
        },
        {
            "field": "_",
            "id": "DEFAULT",
            "order": 1,
            "policy": "include",
            "rule": "regex",
            "value": ".*"
        }
    ],
)
const response = await client.connector.updateFiltering({
  connector_id: "my-g-drive-connector",
  rules: [
    {
      field: "file_extension",
      id: "exclude-txt-files",
      order: 0,
      policy: "exclude",
      rule: "equals",
      value: "txt",
    },
    {
      field: "_",
      id: "DEFAULT",
      order: 1,
      policy: "include",
      rule: "regex",
      value: ".*",
    },
  ],
});
response = client.connector.update_filtering(
  connector_id: "my-g-drive-connector",
  body: {
    "rules": [
      {
        "field": "file_extension",
        "id": "exclude-txt-files",
        "order": 0,
        "policy": "exclude",
        "rule": "equals",
        "value": "txt"
      },
      {
        "field": "_",
        "id": "DEFAULT",
        "order": 1,
        "policy": "include",
        "rule": "regex",
        "value": ".*"
      }
    ]
  }
)
$resp = $client->connector()->updateFiltering([
    "connector_id" => "my-g-drive-connector",
    "body" => [
        "rules" => array(
            [
                "field" => "file_extension",
                "id" => "exclude-txt-files",
                "order" => 0,
                "policy" => "exclude",
                "rule" => "equals",
                "value" => "txt",
            ],
            [
                "field" => "_",
                "id" => "DEFAULT",
                "order" => 1,
                "policy" => "include",
                "rule" => "regex",
                "value" => ".*",
            ],
        ),
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"rules":[{"field":"file_extension","id":"exclude-txt-files","order":0,"policy":"exclude","rule":"equals","value":"txt"},{"field":"_","id":"DEFAULT","order":1,"policy":"include","rule":"regex","value":".*"}]}' "$ELASTICSEARCH_URL/_connector/my-g-drive-connector/_filtering"
client.connector().updateFiltering(u -> u
    .connectorId("my-g-drive-connector")
    .rules(List.of(FilteringRule.of(f -> f
            .field("file_extension")
            .id("exclude-txt-files")
            .order(0)
            .policy(FilteringPolicy.Exclude)
            .rule(FilteringRuleRule.Equals)
            .value("txt")),FilteringRule.of(f -> f
            .field("_")
            .id("DEFAULT")
            .order(1)
            .policy(FilteringPolicy.Include)
            .rule(FilteringRuleRule.Regex)
            .value(".*"))))
);
Request examples
{
    "rules": [
         {
            "field": "file_extension",
            "id": "exclude-txt-files",
            "order": 0,
            "policy": "exclude",
            "rule": "equals",
            "value": "txt"
        },
        {
            "field": "_",
            "id": "DEFAULT",
            "order": 1,
            "policy": "include",
            "rule": "regex",
            "value": ".*"
        }
    ]
}
{
    "advanced_snippet": {
        "value": [{
            "tables": [
                "users",
                "orders"
            ],
            "query": "SELECT users.id AS id, orders.order_id AS order_id FROM users JOIN orders ON users.id = orders.user_id"
        }]
    }
}
Response examples (200)
{
  "result": "updated"
}








Update the connector name and description Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_name

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • name string
  • description string

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_name
PUT _connector/my-connector/_name
{
    "name": "Custom connector",
    "description": "This is my customized connector"
}
resp = client.connector.update_name(
    connector_id="my-connector",
    name="Custom connector",
    description="This is my customized connector",
)
const response = await client.connector.updateName({
  connector_id: "my-connector",
  name: "Custom connector",
  description: "This is my customized connector",
});
response = client.connector.update_name(
  connector_id: "my-connector",
  body: {
    "name": "Custom connector",
    "description": "This is my customized connector"
  }
)
$resp = $client->connector()->updateName([
    "connector_id" => "my-connector",
    "body" => [
        "name" => "Custom connector",
        "description" => "This is my customized connector",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"name":"Custom connector","description":"This is my customized connector"}' "$ELASTICSEARCH_URL/_connector/my-connector/_name"
client.connector().updateName(u -> u
    .connectorId("my-connector")
    .description("This is my customized connector")
    .name("Custom connector")
);
Request example
{
    "name": "Custom connector",
    "description": "This is my customized connector"
}
Response examples (200)
{
  "result": "updated"
}

Update the connector is_native flag Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_native

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • is_native boolean Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_native
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_native' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"is_native":true}'

Update the connector pipeline Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_pipeline

When you create a new connector, the configuration of an ingest pipeline is populated with default settings.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • pipeline object Required
    Hide pipeline attributes Show pipeline attributes object
    • extract_binary_content boolean Required
    • name string Required
    • reduce_whitespace boolean Required
    • run_ml_inference boolean Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_pipeline
PUT _connector/my-connector/_pipeline
{
    "pipeline": {
        "extract_binary_content": true,
        "name": "my-connector-pipeline",
        "reduce_whitespace": true,
        "run_ml_inference": true
    }
}
resp = client.connector.update_pipeline(
    connector_id="my-connector",
    pipeline={
        "extract_binary_content": True,
        "name": "my-connector-pipeline",
        "reduce_whitespace": True,
        "run_ml_inference": True
    },
)
const response = await client.connector.updatePipeline({
  connector_id: "my-connector",
  pipeline: {
    extract_binary_content: true,
    name: "my-connector-pipeline",
    reduce_whitespace: true,
    run_ml_inference: true,
  },
});
response = client.connector.update_pipeline(
  connector_id: "my-connector",
  body: {
    "pipeline": {
      "extract_binary_content": true,
      "name": "my-connector-pipeline",
      "reduce_whitespace": true,
      "run_ml_inference": true
    }
  }
)
$resp = $client->connector()->updatePipeline([
    "connector_id" => "my-connector",
    "body" => [
        "pipeline" => [
            "extract_binary_content" => true,
            "name" => "my-connector-pipeline",
            "reduce_whitespace" => true,
            "run_ml_inference" => true,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"pipeline":{"extract_binary_content":true,"name":"my-connector-pipeline","reduce_whitespace":true,"run_ml_inference":true}}' "$ELASTICSEARCH_URL/_connector/my-connector/_pipeline"
client.connector().updatePipeline(u -> u
    .connectorId("my-connector")
    .pipeline(p -> p
        .extractBinaryContent(true)
        .name("my-connector-pipeline")
        .reduceWhitespace(true)
        .runMlInference(true)
    )
);
Request example
{
    "pipeline": {
        "extract_binary_content": true,
        "name": "my-connector-pipeline",
        "reduce_whitespace": true,
        "run_ml_inference": true
    }
}
Response examples (200)
{
  "result": "updated"
}

Update the connector scheduling Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_scheduling

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • scheduling object Required
    Hide scheduling attributes Show scheduling attributes object
    • access_control object
      Hide access_control attributes Show access_control attributes object
      • enabled boolean Required
      • interval string Required

        The interval is expressed using the crontab syntax

    • full object
      Hide full attributes Show full attributes object
      • enabled boolean Required
      • interval string Required

        The interval is expressed using the crontab syntax

    • incremental object
      Hide incremental attributes Show incremental attributes object
      • enabled boolean Required
      • interval string Required

        The interval is expressed using the crontab syntax

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_scheduling
PUT _connector/my-connector/_scheduling
{
    "scheduling": {
        "access_control": {
            "enabled": true,
            "interval": "0 10 0 * * ?"
        },
        "full": {
            "enabled": true,
            "interval": "0 20 0 * * ?"
        },
        "incremental": {
            "enabled": false,
            "interval": "0 30 0 * * ?"
        }
    }
}
resp = client.connector.update_scheduling(
    connector_id="my-connector",
    scheduling={
        "access_control": {
            "enabled": True,
            "interval": "0 10 0 * * ?"
        },
        "full": {
            "enabled": True,
            "interval": "0 20 0 * * ?"
        },
        "incremental": {
            "enabled": False,
            "interval": "0 30 0 * * ?"
        }
    },
)
const response = await client.connector.updateScheduling({
  connector_id: "my-connector",
  scheduling: {
    access_control: {
      enabled: true,
      interval: "0 10 0 * * ?",
    },
    full: {
      enabled: true,
      interval: "0 20 0 * * ?",
    },
    incremental: {
      enabled: false,
      interval: "0 30 0 * * ?",
    },
  },
});
response = client.connector.update_scheduling(
  connector_id: "my-connector",
  body: {
    "scheduling": {
      "access_control": {
        "enabled": true,
        "interval": "0 10 0 * * ?"
      },
      "full": {
        "enabled": true,
        "interval": "0 20 0 * * ?"
      },
      "incremental": {
        "enabled": false,
        "interval": "0 30 0 * * ?"
      }
    }
  }
)
$resp = $client->connector()->updateScheduling([
    "connector_id" => "my-connector",
    "body" => [
        "scheduling" => [
            "access_control" => [
                "enabled" => true,
                "interval" => "0 10 0 * * ?",
            ],
            "full" => [
                "enabled" => true,
                "interval" => "0 20 0 * * ?",
            ],
            "incremental" => [
                "enabled" => false,
                "interval" => "0 30 0 * * ?",
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"scheduling":{"access_control":{"enabled":true,"interval":"0 10 0 * * ?"},"full":{"enabled":true,"interval":"0 20 0 * * ?"},"incremental":{"enabled":false,"interval":"0 30 0 * * ?"}}}' "$ELASTICSEARCH_URL/_connector/my-connector/_scheduling"
client.connector().updateScheduling(u -> u
    .connectorId("my-connector")
    .scheduling(s -> s
        .accessControl(a -> a
            .enabled(true)
            .interval("0 10 0 * * ?")
        )
        .full(f -> f
            .enabled(true)
            .interval("0 20 0 * * ?")
        )
        .incremental(i -> i
            .enabled(false)
            .interval("0 30 0 * * ?")
        )
    )
);
Request examples
{
    "scheduling": {
        "access_control": {
            "enabled": true,
            "interval": "0 10 0 * * ?"
        },
        "full": {
            "enabled": true,
            "interval": "0 20 0 * * ?"
        },
        "incremental": {
            "enabled": false,
            "interval": "0 30 0 * * ?"
        }
    }
}
{
    "scheduling": {
        "full": {
            "enabled": true,
            "interval": "0 10 0 * * ?"
        }
    }
}
Response examples (200)
{
  "result": "updated"
}
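
The interval values above are crontab-style expressions with a leading seconds field. A small annotated sketch of a scheduling payload, assuming the six-field seconds/minutes/hours/day-of-month/month/day-of-week form used in the examples:

scheduling = {
    "full": {
        "enabled": True,
        # seconds minutes hours day-of-month month day-of-week
        # "0 30 0 * * ?" -> every day at 00:30:00 (under the assumed field order)
        "interval": "0 30 0 * * ?",
    }
}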

Update the connector service type Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_service_type

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • service_type string Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_service_type
PUT _connector/my-connector/_service_type
{
    "service_type": "sharepoint_online"
}
resp = client.connector.update_service_type(
    connector_id="my-connector",
    service_type="sharepoint_online",
)
const response = await client.connector.updateServiceType({
  connector_id: "my-connector",
  service_type: "sharepoint_online",
});
response = client.connector.update_service_type(
  connector_id: "my-connector",
  body: {
    "service_type": "sharepoint_online"
  }
)
$resp = $client->connector()->updateServiceType([
    "connector_id" => "my-connector",
    "body" => [
        "service_type" => "sharepoint_online",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service_type":"sharepoint_online"}' "$ELASTICSEARCH_URL/_connector/my-connector/_service_type"
client.connector().updateServiceType(u -> u
    .connectorId("my-connector")
    .serviceType("sharepoint_online")
);
Request example
{
    "service_type": "sharepoint_online"
}
Response examples (200)
{
  "result": "updated"
}




Cross-cluster replication

Get auto-follow patterns Generally available; Added in 6.5.0

GET /_ccr/auto_follow/{name}

All methods and paths for this operation:

GET /_ccr/auto_follow

GET /_ccr/auto_follow/{name}

Get cross-cluster replication auto-follow patterns.

Required authorization

  • Cluster privileges: manage_ccr
External documentation

Path parameters

  • name string Required

    The auto-follow pattern collection that you want to retrieve. If you do not specify a name, the API returns information for all collections.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • patterns array[object] Required
      Hide patterns attributes Show patterns attributes object
      • name string Required
      • pattern object Required
        Hide pattern attributes Show pattern attributes object
        • active boolean Required
        • remote_cluster string Required

          The remote cluster containing the leader indices to match against.

        • follow_index_pattern string

          The name of the follower index.

        • leader_index_patterns array[string] Required

          An array of simple index patterns to match against indices in the remote cluster specified by the remote_cluster field.

        • leader_index_exclusion_patterns array[string] Required

          An array of simple index patterns that can be used to exclude indices from being auto-followed.

        • max_outstanding_read_requests number Required

          The maximum number of outstanding read requests from the remote cluster.

GET /_ccr/auto_follow/{name}
GET /_ccr/auto_follow/my_auto_follow_pattern
resp = client.ccr.get_auto_follow_pattern(
    name="my_auto_follow_pattern",
)
const response = await client.ccr.getAutoFollowPattern({
  name: "my_auto_follow_pattern",
});
response = client.ccr.get_auto_follow_pattern(
  name: "my_auto_follow_pattern"
)
$resp = $client->ccr()->getAutoFollowPattern([
    "name" => "my_auto_follow_pattern",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ccr/auto_follow/my_auto_follow_pattern"
client.ccr().getAutoFollowPattern(g -> g
    .name("my_auto_follow_pattern")
);
Response examples (200)
A successful response from `GET /_ccr/auto_follow/my_auto_follow_pattern`, which gets auto-follow patterns.
{
  "patterns": [
    {
      "name": "my_auto_follow_pattern",
      "pattern": {
        "active": true,
        "remote_cluster" : "remote_cluster",
        "leader_index_patterns" :
        [
          "leader_index*"
        ],
        "leader_index_exclusion_patterns":
        [
          "leader_index_001"
        ],
        "follow_index_pattern" : "{{leader_index}}-follower"
      }
    }
  ]
}

Create or update auto-follow patterns Generally available; Added in 6.5.0

PUT /_ccr/auto_follow/{name}

Create a collection of cross-cluster replication auto-follow patterns for a remote cluster. Newly created indices on the remote cluster that match any of the patterns are automatically configured as follower indices. Indices on the remote cluster that were created before the auto-follow pattern was created will not be auto-followed even if they match the pattern.

This API can also be used to update auto-follow patterns. NOTE: Follower indices that were configured automatically before updating an auto-follow pattern will remain unchanged even if they do not match against the new patterns.

External documentation

Path parameters

  • name string Required

    The name of the collection of auto-follow patterns.

Query parameters

application/json

Body Required

  • remote_cluster string Required

    The remote cluster containing the leader indices to match against.

  • follow_index_pattern string

    The name of the follower index. The template {{leader_index}} can be used to derive the name of the follower index from the name of the leader index. For example, with a follow_index_pattern of {{leader_index}}-follower, a leader index named leader_index is followed by an index named leader_index-follower. When following a data stream, use {{leader_index}}; CCR does not support changes to the names of a follower data stream’s backing indices.

  • leader_index_patterns array[string]

    An array of simple index patterns to match against indices in the remote cluster specified by the remote_cluster field.

  • leader_index_exclusion_patterns array[string]

    An array of simple index patterns that can be used to exclude indices from being auto-followed. Indices in the remote cluster whose names are matching one or more leader_index_patterns and one or more leader_index_exclusion_patterns won’t be followed.

  • max_outstanding_read_requests number

    The maximum number of outstanding read requests from the remote cluster.

    Default value is 12.

  • settings object

    Settings to override from the leader index. Note that certain settings cannot be overridden (for example, index.number_of_shards).

    Hide settings attribute Show settings attribute object
    • * object Additional properties
  • max_outstanding_write_requests number

    The maximum number of outstanding write requests on the follower.

    Default value is 9.

  • read_poll_timeout string

    The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index. When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics. Then the follower will immediately attempt to read from the leader again.

    External documentation
  • max_read_request_operation_count number

    The maximum number of operations to pull per read from the remote cluster.

    Default value is 5120.

  • max_read_request_size number | string

    The maximum size in bytes per read of a batch of operations pulled from the remote cluster.

  • max_retry_delay string

    The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when retrying.

    External documentation
  • max_write_buffer_count number

    The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit.

    Default value is 2147483647.

  • max_write_buffer_size number | string

    The maximum total bytes of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the total bytes of queued operations goes below the limit.

  • max_write_request_operation_count number

    The maximum number of operations per bulk write request executed on the follower.

    Default value is 5120.

  • max_write_request_size number | string

    The maximum total bytes of operations per bulk write request executed on the follower.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_ccr/auto_follow/{name}
PUT /_ccr/auto_follow/my_auto_follow_pattern
{
  "remote_cluster" : "remote_cluster",
  "leader_index_patterns" :
  [
    "leader_index*"
  ],
  "follow_index_pattern" : "{{leader_index}}-follower",
  "settings": {
    "index.number_of_replicas": 0
  },
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
resp = client.ccr.put_auto_follow_pattern(
    name="my_auto_follow_pattern",
    remote_cluster="remote_cluster",
    leader_index_patterns=[
        "leader_index*"
    ],
    follow_index_pattern="{{leader_index}}-follower",
    settings={
        "index.number_of_replicas": 0
    },
    max_read_request_operation_count=1024,
    max_outstanding_read_requests=16,
    max_read_request_size="1024k",
    max_write_request_operation_count=32768,
    max_write_request_size="16k",
    max_outstanding_write_requests=8,
    max_write_buffer_count=512,
    max_write_buffer_size="512k",
    max_retry_delay="10s",
    read_poll_timeout="30s",
)
const response = await client.ccr.putAutoFollowPattern({
  name: "my_auto_follow_pattern",
  remote_cluster: "remote_cluster",
  leader_index_patterns: ["leader_index*"],
  follow_index_pattern: "{{leader_index}}-follower",
  settings: {
    "index.number_of_replicas": 0,
  },
  max_read_request_operation_count: 1024,
  max_outstanding_read_requests: 16,
  max_read_request_size: "1024k",
  max_write_request_operation_count: 32768,
  max_write_request_size: "16k",
  max_outstanding_write_requests: 8,
  max_write_buffer_count: 512,
  max_write_buffer_size: "512k",
  max_retry_delay: "10s",
  read_poll_timeout: "30s",
});
response = client.ccr.put_auto_follow_pattern(
  name: "my_auto_follow_pattern",
  body: {
    "remote_cluster": "remote_cluster",
    "leader_index_patterns": [
      "leader_index*"
    ],
    "follow_index_pattern": "{{leader_index}}-follower",
    "settings": {
      "index.number_of_replicas": 0
    },
    "max_read_request_operation_count": 1024,
    "max_outstanding_read_requests": 16,
    "max_read_request_size": "1024k",
    "max_write_request_operation_count": 32768,
    "max_write_request_size": "16k",
    "max_outstanding_write_requests": 8,
    "max_write_buffer_count": 512,
    "max_write_buffer_size": "512k",
    "max_retry_delay": "10s",
    "read_poll_timeout": "30s"
  }
)
$resp = $client->ccr()->putAutoFollowPattern([
    "name" => "my_auto_follow_pattern",
    "body" => [
        "remote_cluster" => "remote_cluster",
        "leader_index_patterns" => array(
            "leader_index*",
        ),
        "follow_index_pattern" => "{{leader_index}}-follower",
        "settings" => [
            "index.number_of_replicas" => 0,
        ],
        "max_read_request_operation_count" => 1024,
        "max_outstanding_read_requests" => 16,
        "max_read_request_size" => "1024k",
        "max_write_request_operation_count" => 32768,
        "max_write_request_size" => "16k",
        "max_outstanding_write_requests" => 8,
        "max_write_buffer_count" => 512,
        "max_write_buffer_size" => "512k",
        "max_retry_delay" => "10s",
        "read_poll_timeout" => "30s",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"remote_cluster":"remote_cluster","leader_index_patterns":["leader_index*"],"follow_index_pattern":"{{leader_index}}-follower","settings":{"index.number_of_replicas":0},"max_read_request_operation_count":1024,"max_outstanding_read_requests":16,"max_read_request_size":"1024k","max_write_request_operation_count":32768,"max_write_request_size":"16k","max_outstanding_write_requests":8,"max_write_buffer_count":512,"max_write_buffer_size":"512k","max_retry_delay":"10s","read_poll_timeout":"30s"}' "$ELASTICSEARCH_URL/_ccr/auto_follow/my_auto_follow_pattern"
client.ccr().putAutoFollowPattern(p -> p
    .followIndexPattern("{{leader_index}}-follower")
    .leaderIndexPatterns("leader_index*")
    .maxOutstandingReadRequests(16)
    .maxOutstandingWriteRequests(8)
    .maxReadRequestOperationCount(1024)
    .maxReadRequestSize("1024k")
    .maxRetryDelay(m -> m
        .time("10s")
    )
    .maxWriteBufferCount(512)
    .maxWriteBufferSize("512k")
    .maxWriteRequestOperationCount(32768)
    .maxWriteRequestSize("16k")
    .name("my_auto_follow_pattern")
    .readPollTimeout(r -> r
        .time("30s")
    )
    .remoteCluster("remote_cluster")
    .settings("index.number_of_replicas", JsonData.fromJson("0"))
);
Request example
Run `PUT /_ccr/auto_follow/my_auto_follow_pattern` to create an auto-follow pattern.
{
  "remote_cluster" : "remote_cluster",
  "leader_index_patterns" :
  [
    "leader_index*"
  ],
  "follow_index_pattern" : "{{leader_index}}-follower",
  "settings": {
    "index.number_of_replicas": 0
  },
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
Response examples (200)
A successful response for creating an auto-follow pattern.
{
  "acknowledged": true
}

Delete auto-follow patterns Generally available; Added in 6.5.0

DELETE /_ccr/auto_follow/{name}

Delete a collection of cross-cluster replication auto-follow patterns.

Required authorization

  • Cluster privileges: manage_ccr
External documentation

Path parameters

  • name string Required

    The auto-follow pattern collection to delete.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ccr/auto_follow/{name}
DELETE /_ccr/auto_follow/my_auto_follow_pattern
resp = client.ccr.delete_auto_follow_pattern(
    name="my_auto_follow_pattern",
)
const response = await client.ccr.deleteAutoFollowPattern({
  name: "my_auto_follow_pattern",
});
response = client.ccr.delete_auto_follow_pattern(
  name: "my_auto_follow_pattern"
)
$resp = $client->ccr()->deleteAutoFollowPattern([
    "name" => "my_auto_follow_pattern",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ccr/auto_follow/my_auto_follow_pattern"
client.ccr().deleteAutoFollowPattern(d -> d
    .name("my_auto_follow_pattern")
);
Response examples (200)
A successful response from `DELETE /_ccr/auto_follow/my_auto_follow_pattern`, which deletes an auto-follow pattern.
{
  "acknowledged" : true
}

Create a follower Generally available; Added in 6.5.0

PUT /{index}/_ccr/follow

Create a cross-cluster replication follower index that follows a specific leader index. When the API returns, the follower index exists and cross-cluster replication starts replicating operations from the leader index to the follower index.

Path parameters

  • index string Required

    The name of the follower index.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    External documentation
  • wait_for_active_shards number | string

    The number of active shard copies to wait for before responding. By default, the API does not wait for any shards to be active. A shard must be restored from the leader index before it is active. Restoring a follower shard requires transferring all the remote Lucene segment files to the follower index.

    Values are all or index-setting.

application/json

Body Required

  • data_stream_name string

    If the leader index is part of a data stream, the name to which the local data stream for the followed index should be renamed.

  • leader_index string Required

    The name of the index in the leader cluster to follow.

  • max_outstanding_read_requests number

    The maximum number of outstanding read requests from the remote cluster.

  • max_outstanding_write_requests number

    The maximum number of outstanding write requests on the follower.

  • max_read_request_operation_count number

    The maximum number of operations to pull per read from the remote cluster.

  • max_read_request_size number | string

    The maximum size in bytes per read of a batch of operations pulled from the remote cluster.

  • max_retry_delay string

    The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when retrying.

    External documentation
  • max_write_buffer_count number

    The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit.

  • max_write_buffer_size number | string

    The maximum total bytes of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the total bytes of queued operations goes below the limit.

  • max_write_request_operation_count number

    The maximum number of operations per bulk write request executed on the follower.

  • max_write_request_size number | string

    The maximum total bytes of operations per bulk write request executed on the follower.

  • read_poll_timeout string

    The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index. When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics. Then the follower will immediately attempt to read from the leader again.

    External documentation
  • remote_cluster string Required

    The remote cluster containing the leader index.

  • settings object

    Settings to override from the leader index.

    Index settings

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • follow_index_created boolean Required
    • follow_index_shards_acked boolean Required
    • index_following_started boolean Required
PUT /{index}/_ccr/follow
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index",
  "settings": {
    "index.number_of_replicas": 0
  },
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
resp = client.ccr.follow(
    index="follower_index",
    wait_for_active_shards="1",
    remote_cluster="remote_cluster",
    leader_index="leader_index",
    settings={
        "index.number_of_replicas": 0
    },
    max_read_request_operation_count=1024,
    max_outstanding_read_requests=16,
    max_read_request_size="1024k",
    max_write_request_operation_count=32768,
    max_write_request_size="16k",
    max_outstanding_write_requests=8,
    max_write_buffer_count=512,
    max_write_buffer_size="512k",
    max_retry_delay="10s",
    read_poll_timeout="30s",
)
const response = await client.ccr.follow({
  index: "follower_index",
  wait_for_active_shards: 1,
  remote_cluster: "remote_cluster",
  leader_index: "leader_index",
  settings: {
    "index.number_of_replicas": 0,
  },
  max_read_request_operation_count: 1024,
  max_outstanding_read_requests: 16,
  max_read_request_size: "1024k",
  max_write_request_operation_count: 32768,
  max_write_request_size: "16k",
  max_outstanding_write_requests: 8,
  max_write_buffer_count: 512,
  max_write_buffer_size: "512k",
  max_retry_delay: "10s",
  read_poll_timeout: "30s",
});
response = client.ccr.follow(
  index: "follower_index",
  wait_for_active_shards: "1",
  body: {
    "remote_cluster": "remote_cluster",
    "leader_index": "leader_index",
    "settings": {
      "index.number_of_replicas": 0
    },
    "max_read_request_operation_count": 1024,
    "max_outstanding_read_requests": 16,
    "max_read_request_size": "1024k",
    "max_write_request_operation_count": 32768,
    "max_write_request_size": "16k",
    "max_outstanding_write_requests": 8,
    "max_write_buffer_count": 512,
    "max_write_buffer_size": "512k",
    "max_retry_delay": "10s",
    "read_poll_timeout": "30s"
  }
)
$resp = $client->ccr()->follow([
    "index" => "follower_index",
    "wait_for_active_shards" => "1",
    "body" => [
        "remote_cluster" => "remote_cluster",
        "leader_index" => "leader_index",
        "settings" => [
            "index.number_of_replicas" => 0,
        ],
        "max_read_request_operation_count" => 1024,
        "max_outstanding_read_requests" => 16,
        "max_read_request_size" => "1024k",
        "max_write_request_operation_count" => 32768,
        "max_write_request_size" => "16k",
        "max_outstanding_write_requests" => 8,
        "max_write_buffer_count" => 512,
        "max_write_buffer_size" => "512k",
        "max_retry_delay" => "10s",
        "read_poll_timeout" => "30s",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"remote_cluster":"remote_cluster","leader_index":"leader_index","settings":{"index.number_of_replicas":0},"max_read_request_operation_count":1024,"max_outstanding_read_requests":16,"max_read_request_size":"1024k","max_write_request_operation_count":32768,"max_write_request_size":"16k","max_outstanding_write_requests":8,"max_write_buffer_count":512,"max_write_buffer_size":"512k","max_retry_delay":"10s","read_poll_timeout":"30s"}' "$ELASTICSEARCH_URL/follower_index/_ccr/follow?wait_for_active_shards=1"
client.ccr().follow(f -> f
    .index("follower_index")
    .leaderIndex("leader_index")
    .maxOutstandingReadRequests(16L)
    .maxOutstandingWriteRequests(8)
    .maxReadRequestOperationCount(1024)
    .maxReadRequestSize("1024k")
    .maxRetryDelay(m -> m
        .time("10s")
    )
    .maxWriteBufferCount(512)
    .maxWriteBufferSize("512k")
    .maxWriteRequestOperationCount(32768)
    .maxWriteRequestSize("16k")
    .readPollTimeout(r -> r
        .time("30s")
    )
    .remoteCluster("remote_cluster")
    .settings(s -> s
        .otherSettings("index.number_of_replicas", JsonData.fromJson("0"))
    )
    .waitForActiveShards(w -> w
        .count(1)
    )
);
Request example
Run `PUT /follower_index/_ccr/follow?wait_for_active_shards=1` to create a follower index named `follower_index`.
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index",
  "settings": {
    "index.number_of_replicas": 0
  },
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
Response examples (200)
A successful response from `PUT /follower_index/_ccr/follow?wait_for_active_shards=1`.
{
  "follow_index_created" : true,
  "follow_index_shards_acked" : true,
  "index_following_started" : true
}

Get follower information Generally available; Added in 6.7.0

GET /{index}/_ccr/info

Get information about all cross-cluster replication follower indices. For example, the results include follower index names, leader index names, replication options, and whether the follower indices are active or paused.

Required authorization

  • Cluster privileges: monitor
External documentation

Path parameters

  • index string | array[string] Required

    A comma-delimited list of follower index patterns.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • follower_indices array[object] Required
      Hide follower_indices attributes Show follower_indices attributes object
      • follower_index string Required

        The name of the follower index.

      • leader_index string Required

        The name of the index in the leader cluster that is followed.

      • parameters object

        An object that encapsulates cross-cluster replication parameters. If the follower index's status is paused, this object is omitted.

        Hide parameters attributes Show parameters attributes object
        • max_outstanding_read_requests number

          The maximum number of outstanding read requests from the remote cluster.

        • max_outstanding_write_requests number

          The maximum number of outstanding write requests on the follower.

        • max_read_request_operation_count number

          The maximum number of operations to pull per read from the remote cluster.

        • max_read_request_size
        • max_retry_delay string

          The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when retrying.

        • max_write_buffer_count number

          The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit.

        • max_write_buffer_size
        • max_write_request_operation_count number

          The maximum number of operations per bulk write request executed on the follower.

        • max_write_request_size
        • read_poll_timeout string

          The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index. When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics. Then the follower will immediately attempt to read from the leader again.

      • remote_cluster string Required

        The remote cluster that contains the leader index.

      • status string Required

        The status of the index following: active or paused.

        Values are active or paused.

GET /{index}/_ccr/info
GET /follower_index/_ccr/info
resp = client.ccr.follow_info(
    index="follower_index",
)
const response = await client.ccr.followInfo({
  index: "follower_index",
});
response = client.ccr.follow_info(
  index: "follower_index"
)
$resp = $client->ccr()->followInfo([
    "index" => "follower_index",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/follower_index/_ccr/info"
client.ccr().followInfo(f -> f
    .index("follower_index")
);
Response examples (200)
A successful response from `GET /follower_index/_ccr/info` when the follower index is active.
{
  "follower_indices": [
    {
      "follower_index": "follower_index",
      "remote_cluster": "remote_cluster",
      "leader_index": "leader_index",
      "status": "active",
      "parameters": {
        "max_read_request_operation_count": 5120,
        "max_read_request_size": "32mb",
        "max_outstanding_read_requests": 12,
        "max_write_request_operation_count": 5120,
        "max_write_request_size": "9223372036854775807b",
        "max_outstanding_write_requests": 9,
        "max_write_buffer_count": 2147483647,
        "max_write_buffer_size": "512mb",
        "max_retry_delay": "500ms",
        "read_poll_timeout": "1m"
      }
    }
  ]
}
A successful response from `GET /follower_index/_ccr/info` when the follower index is paused.
{
  "follower_indices": [
    {
      "follower_index": "follower_index",
      "remote_cluster": "remote_cluster",
      "leader_index": "leader_index",
      "status": "paused"
    }
  ]
}




Forget a follower Generally available; Added in 6.7.0

POST /{index}/_ccr/forget_follower

Remove the cross-cluster replication follower retention leases from the leader.

A following index takes out retention leases on its leader index. These leases are used to increase the likelihood that the shards of the leader index retain the history of operations that the shards of the following index need to run replication. When a follower index is converted to a regular index by the unfollow API (either by directly calling the API or by index lifecycle management tasks), these leases are removed. However, removal of the leases can fail, for example when the remote cluster containing the leader index is unavailable. While the leases will eventually expire on their own, their extended existence can cause the leader index to hold more history than necessary and prevent index lifecycle management from performing some operations on the leader index. This API exists to enable manually removing the leases when the unfollow API is unable to do so.

NOTE: This API does not stop replication by a following index. If you use this API with a follower index that is still actively following, the following index will add back retention leases on the leader. The only purpose of this API is to handle the case of failure to remove the following retention leases after the unfollow API is invoked.

External documentation

Path parameters

  • index string Required

    The name of the leader index for which the specified follower retention leases should be removed.

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
application/json

Body Required

  • follower_cluster string
  • follower_index string
  • follower_index_uuid string
  • leader_remote_cluster string

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • _shards object Required
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
POST /{index}/_ccr/forget_follower
POST /<leader_index>/_ccr/forget_follower
{
  "follower_cluster" : "<follower_cluster>",
  "follower_index" : "<follower_index>",
  "follower_index_uuid" : "<follower_index_uuid>",
  "leader_remote_cluster" : "<leader_remote_cluster>"
}
resp = client.ccr.forget_follower(
    index="<leader_index>",
    follower_cluster="<follower_cluster>",
    follower_index="<follower_index>",
    follower_index_uuid="<follower_index_uuid>",
    leader_remote_cluster="<leader_remote_cluster>",
)
const response = await client.ccr.forgetFollower({
  index: "<leader_index>",
  follower_cluster: "<follower_cluster>",
  follower_index: "<follower_index>",
  follower_index_uuid: "<follower_index_uuid>",
  leader_remote_cluster: "<leader_remote_cluster>",
});
response = client.ccr.forget_follower(
  index: "<leader_index>",
  body: {
    "follower_cluster": "<follower_cluster>",
    "follower_index": "<follower_index>",
    "follower_index_uuid": "<follower_index_uuid>",
    "leader_remote_cluster": "<leader_remote_cluster>"
  }
)
$resp = $client->ccr()->forgetFollower([
    "index" => "<leader_index>",
    "body" => [
        "follower_cluster" => "<follower_cluster>",
        "follower_index" => "<follower_index>",
        "follower_index_uuid" => "<follower_index_uuid>",
        "leader_remote_cluster" => "<leader_remote_cluster>",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"follower_cluster":"<follower_cluster>","follower_index":"<follower_index>","follower_index_uuid":"<follower_index_uuid>","leader_remote_cluster":"<leader_remote_cluster>"}' "$ELASTICSEARCH_URL/<leader_index>/_ccr/forget_follower"
client.ccr().forgetFollower(f -> f
    .followerCluster("<follower_cluster>")
    .followerIndex("<follower_index>")
    .followerIndexUuid("<follower_index_uuid>")
    .index("<leader_index>")
    .leaderRemoteCluster("<leader_remote_cluster>")
);
Request example
Run `POST /<leader_index>/_ccr/forget_follower`.
{
  "follower_cluster" : "<follower_cluster>",
  "follower_index" : "<follower_index>",
  "follower_index_uuid" : "<follower_index_uuid>",
  "leader_remote_cluster" : "<leader_remote_cluster>"
}
Response examples (200)
A successful response for removing the follower retention leases from the leader index.
{
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0,
    "failures" : [ ]
  }
}
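
The values in this example are placeholders. A minimal Python sketch of a concrete call, assuming separate clients for the follower and leader clusters and that the follower index UUID is read from its index.uuid setting; the cluster and index names are illustrative:

# Look up the follower index UUID on the follower cluster.
settings = follower_client.indices.get_settings(index="follower_index")
follower_uuid = settings["follower_index"]["settings"]["index"]["uuid"]

# Remove the follower's retention leases on the leader cluster.
resp = leader_client.ccr.forget_follower(
    index="leader_index",
    follower_cluster="follower_cluster",
    follower_index="follower_index",
    follower_index_uuid=follower_uuid,
    leader_remote_cluster="leader_remote_cluster",
)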

Pause an auto-follow pattern Generally available; Added in 7.5.0

POST /_ccr/auto_follow/{name}/pause

Pause a cross-cluster replication auto-follow pattern. When the API returns, the auto-follow pattern is inactive. New indices that are created on the remote cluster and match the auto-follow patterns are ignored.

You can resume auto-following with the resume auto-follow pattern API. When it resumes, the auto-follow pattern is active again and automatically configures follower indices for newly created indices on the remote cluster that match its patterns. Remote indices that were created while the pattern was paused will also be followed, unless they have been deleted or closed in the interim.

Required authorization

  • Cluster privileges: manage_ccr
External documentation

Path parameters

  • name string Required

    The name of the auto-follow pattern to pause.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_ccr/auto_follow/{name}/pause
POST /_ccr/auto_follow/my_auto_follow_pattern/pause
resp = client.ccr.pause_auto_follow_pattern(
    name="my_auto_follow_pattern",
)
const response = await client.ccr.pauseAutoFollowPattern({
  name: "my_auto_follow_pattern",
});
response = client.ccr.pause_auto_follow_pattern(
  name: "my_auto_follow_pattern"
)
$resp = $client->ccr()->pauseAutoFollowPattern([
    "name" => "my_auto_follow_pattern",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ccr/auto_follow/my_auto_follow_pattern/pause"
client.ccr().pauseAutoFollowPattern(p -> p
    .name("my_auto_follow_pattern")
);
Response examples (200)
A successful response from `POST /_ccr/auto_follow/my_auto_follow_pattern/pause`, which pauses an auto-follow pattern.
{
  "acknowledged" : true
}

Pause a follower Generally available; Added in 6.5.0

POST /{index}/_ccr/pause_follow

Pause a cross-cluster replication follower index. The follower index will not fetch any additional operations from the leader index. You can resume following with the resume follower API. You can pause and resume a follower index to change the configuration of the following task.

Required authorization

  • Cluster privileges: manage_ccr

Path parameters

  • index string Required

    The name of the follower index.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /{index}/_ccr/pause_follow
POST /follower_index/_ccr/pause_follow
resp = client.ccr.pause_follow(
    index="follower_index",
)
const response = await client.ccr.pauseFollow({
  index: "follower_index",
});
response = client.ccr.pause_follow(
  index: "follower_index"
)
$resp = $client->ccr()->pauseFollow([
    "index" => "follower_index",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/follower_index/_ccr/pause_follow"
client.ccr().pauseFollow(p -> p
    .index("follower_index")
);
Response examples (200)
A successful response from `POST /follower_index/_ccr/pause_follow`, which pauses a follower index.
{
  "acknowledged" : true
}

Resume an auto-follow pattern Generally available; Added in 7.5.0

POST /_ccr/auto_follow/{name}/resume

Resume a cross-cluster replication auto-follow pattern that was paused. The auto-follow pattern will resume configuring following indices for newly created indices that match its patterns on the remote cluster. Remote indices created while the pattern was paused will also be followed unless they have been deleted or closed in the interim.

Required authorization

  • Cluster privileges: manage_ccr
External documentation

Path parameters

  • name string Required

    The name of the auto-follow pattern to resume.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_ccr/auto_follow/{name}/resume
POST /_ccr/auto_follow/my_auto_follow_pattern/resume
resp = client.ccr.resume_auto_follow_pattern(
    name="my_auto_follow_pattern",
)
const response = await client.ccr.resumeAutoFollowPattern({
  name: "my_auto_follow_pattern",
});
response = client.ccr.resume_auto_follow_pattern(
  name: "my_auto_follow_pattern"
)
$resp = $client->ccr()->resumeAutoFollowPattern([
    "name" => "my_auto_follow_pattern",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ccr/auto_follow/my_auto_follow_pattern/resume"
client.ccr().resumeAutoFollowPattern(r -> r
    .name("my_auto_follow_pattern")
);
Response examples (200)
A successful response from `POST /_ccr/auto_follow/my_auto_follow_pattern/resume`, which resumes an auto-follow pattern.
{
  "acknowledged" : true
}

Resume a follower Generally available; Added in 6.5.0

POST /{index}/_ccr/resume_follow

Resume a cross-cluster replication follower index that was paused. The follower index could have been paused with the pause follower API, or it could have been paused automatically because the following task hit failures that could not be retried. When this API returns, the follower index will resume fetching operations from the leader index.

External documentation

Path parameters

  • index string Required

    The name of the follow index to resume following.

Query parameters

application/json

Body

  • max_outstanding_read_requests number
  • max_outstanding_write_requests number
  • max_read_request_operation_count number
  • max_read_request_size string
  • max_retry_delay string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    External documentation
  • max_write_buffer_count number
  • max_write_buffer_size string
  • max_write_request_operation_count number
  • max_write_request_size string
  • read_poll_timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /{index}/_ccr/resume_follow
POST /follower_index/_ccr/resume_follow
{
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
resp = client.ccr.resume_follow(
    index="follower_index",
    max_read_request_operation_count=1024,
    max_outstanding_read_requests=16,
    max_read_request_size="1024k",
    max_write_request_operation_count=32768,
    max_write_request_size="16k",
    max_outstanding_write_requests=8,
    max_write_buffer_count=512,
    max_write_buffer_size="512k",
    max_retry_delay="10s",
    read_poll_timeout="30s",
)
const response = await client.ccr.resumeFollow({
  index: "follower_index",
  max_read_request_operation_count: 1024,
  max_outstanding_read_requests: 16,
  max_read_request_size: "1024k",
  max_write_request_operation_count: 32768,
  max_write_request_size: "16k",
  max_outstanding_write_requests: 8,
  max_write_buffer_count: 512,
  max_write_buffer_size: "512k",
  max_retry_delay: "10s",
  read_poll_timeout: "30s",
});
response = client.ccr.resume_follow(
  index: "follower_index",
  body: {
    "max_read_request_operation_count": 1024,
    "max_outstanding_read_requests": 16,
    "max_read_request_size": "1024k",
    "max_write_request_operation_count": 32768,
    "max_write_request_size": "16k",
    "max_outstanding_write_requests": 8,
    "max_write_buffer_count": 512,
    "max_write_buffer_size": "512k",
    "max_retry_delay": "10s",
    "read_poll_timeout": "30s"
  }
)
$resp = $client->ccr()->resumeFollow([
    "index" => "follower_index",
    "body" => [
        "max_read_request_operation_count" => 1024,
        "max_outstanding_read_requests" => 16,
        "max_read_request_size" => "1024k",
        "max_write_request_operation_count" => 32768,
        "max_write_request_size" => "16k",
        "max_outstanding_write_requests" => 8,
        "max_write_buffer_count" => 512,
        "max_write_buffer_size" => "512k",
        "max_retry_delay" => "10s",
        "read_poll_timeout" => "30s",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"max_read_request_operation_count":1024,"max_outstanding_read_requests":16,"max_read_request_size":"1024k","max_write_request_operation_count":32768,"max_write_request_size":"16k","max_outstanding_write_requests":8,"max_write_buffer_count":512,"max_write_buffer_size":"512k","max_retry_delay":"10s","read_poll_timeout":"30s"}' "$ELASTICSEARCH_URL/follower_index/_ccr/resume_follow"
client.ccr().resumeFollow(r -> r
    .index("follower_index")
    .maxOutstandingReadRequests(16L)
    .maxOutstandingWriteRequests(8L)
    .maxReadRequestOperationCount(1024L)
    .maxReadRequestSize("1024k")
    .maxRetryDelay(m -> m
        .time("10s")
    )
    .maxWriteBufferCount(512L)
    .maxWriteBufferSize("512k")
    .maxWriteRequestOperationCount(32768L)
    .maxWriteRequestSize("16k")
    .readPollTimeout(re -> re
        .time("30s")
    )
);
Request example
Run `POST /follower_index/_ccr/resume_follow` to resume the follower index.
{
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
Response examples (200)
A successful response from resuming a follower index.
{
  "acknowledged" : true
}
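
Because the resume follower API accepts new parameter values, pausing and then resuming a follower is how you change the configuration of an existing following task. A minimal Python sketch using the client methods shown in the pause and resume sections above:

# Pause the following task, then resume it with an adjusted setting.
client.ccr.pause_follow(index="follower_index")
client.ccr.resume_follow(
    index="follower_index",
    max_outstanding_read_requests=16,
)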

Get cross-cluster replication stats Generally available; Added in 6.5.0

GET /_ccr/stats

This API returns stats about auto-following and the same shard-level stats as the get follower stats API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    External documentation
  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • auto_follow_stats object Required

      Statistics for the auto-follow coordinator.

      Hide auto_follow_stats attributes Show auto_follow_stats attributes object
      • auto_followed_clusters array[object] Required
        Hide auto_followed_clusters attributes Show auto_followed_clusters attributes object
        • cluster_name string Required
        • last_seen_metadata_version number Required
        • time_since_last_check_millis
      • number_of_failed_follow_indices number Required

        The number of indices that the auto-follow coordinator failed to automatically follow. The causes of recent failures are captured in the logs of the elected master node and in the auto_follow_stats.recent_auto_follow_errors field.

      • number_of_failed_remote_cluster_state_requests number Required

        The number of times that the auto-follow coordinator failed to retrieve the cluster state from a remote cluster registered in a collection of auto-follow patterns.

      • number_of_successful_follow_indices number Required

        The number of indices that the auto-follow coordinator successfully followed.

      • recent_auto_follow_errors array[object] Required

        An array of objects representing failures by the auto-follow coordinator.

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide recent_auto_follow_errors attributes Show recent_auto_follow_errors attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

    • follow_stats object Required

      Shard-level statistics for follower indices.

      Hide follow_stats attribute Show follow_stats attribute object
      • indices array[object] Required
        Hide indices attributes Show indices attributes object
        • index string Required

          The name of the follower index.

        • shards array[object] Required

          An array of shard-level following task statistics.

GET /_ccr/stats
GET /_ccr/stats
resp = client.ccr.stats()
const response = await client.ccr.stats();
response = client.ccr.stats
$resp = $client->ccr()->stats();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ccr/stats"
client.ccr().stats(s -> s);
Response examples (200)
A successful response from `GET /_ccr/stats` that returns cross-cluster replication stats.
{
  "auto_follow_stats" : {
    "number_of_failed_follow_indices" : 0,
    "number_of_failed_remote_cluster_state_requests" : 0,
    "number_of_successful_follow_indices" : 1,
    "recent_auto_follow_errors" : [],
    "auto_followed_clusters" : []
  },
  "follow_stats" : {
    "indices" : [
      {
        "index" : "follower_index",
        "total_global_checkpoint_lag" : 256,
        "shards" : [
          {
            "remote_cluster" : "remote_cluster",
            "leader_index" : "leader_index",
            "follower_index" : "follower_index",
            "shard_id" : 0,
            "leader_global_checkpoint" : 1024,
            "leader_max_seq_no" : 1536,
            "follower_global_checkpoint" : 768,
            "follower_max_seq_no" : 896,
            "last_requested_seq_no" : 897,
            "outstanding_read_requests" : 8,
            "outstanding_write_requests" : 2,
            "write_buffer_operation_count" : 64,
            "follower_mapping_version" : 4,
            "follower_settings_version" : 2,
            "follower_aliases_version" : 8,
            "total_read_time_millis" : 32768,
            "total_read_remote_exec_time_millis" : 16384,
            "successful_read_requests" : 32,
            "failed_read_requests" : 0,
            "operations_read" : 896,
            "bytes_read" : 32768,
            "total_write_time_millis" : 16384,
            "write_buffer_size_in_bytes" : 1536,
            "successful_write_requests" : 16,
            "failed_write_requests" : 0,
            "operations_written" : 832,
            "read_exceptions" : [ ],
            "time_since_last_read_millis" : 8
          }
        ]
      }
    ]
  }
}

Unfollow an index Generally available; Added in 6.5.0

POST /{index}/_ccr/unfollow

Convert a cross-cluster replication follower index to a regular index. The API stops the following task associated with a follower index and removes index metadata and settings associated with cross-cluster replication. The follower index must be paused and closed before you call the unfollow API.


Currently cross-cluster replication does not support converting an existing regular index to a follower index. Converting a follower index to a regular index is an irreversible operation.
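
Because the follower index must be paused and closed first, unfollowing is usually a three-step sequence. A minimal sketch with the Python client, assuming a follower index named follower_index and placeholder connection details:

from elasticsearch import Elasticsearch

# Placeholder URL and API key; substitute your own cluster details.
client = Elasticsearch("https://localhost:9200", api_key="<api-key>")

# 1. Stop the following task.
client.ccr.pause_follow(index="follower_index")
# 2. Close the follower index.
client.indices.close(index="follower_index")
# 3. Convert it to a regular index.
client.ccr.unfollow(index="follower_index")
# Optionally reopen the index for normal use.
client.indices.open(index="follower_index")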

Required authorization

  • Index privileges: manage_follow_index
External documentation

Path parameters

  • index string Required

    The name of the follower index.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /{index}/_ccr/unfollow
POST /follower_index/_ccr/unfollow
resp = client.ccr.unfollow(
    index="follower_index",
)
const response = await client.ccr.unfollow({
  index: "follower_index",
});
response = client.ccr.unfollow(
  index: "follower_index"
)
$resp = $client->ccr()->unfollow([
    "index" => "follower_index",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/follower_index/_ccr/unfollow"
client.ccr().unfollow(u -> u
    .index("follower_index")
);
Response examples (200)
A successful response from `POST /follower_index/_ccr/unfollow`.
{
  "acknowledged" : true
}

Get data streams Generally available; Added in 7.9.0

GET /_data_stream/{name}

All methods and paths for this operation:

GET /_data_stream

GET /_data_stream/{name}

Get information about one or more data streams.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • name string | array[string]

    Comma-separated list of data stream names used to limit the request. Wildcard (*) expressions are supported. If omitted, all data streams are returned.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • include_defaults boolean Generally available; Added in 8.11.0

    If true, returns all relevant default configurations for the index template.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • verbose boolean

    Whether the maximum timestamp for each data stream should be calculated and returned.
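
For example, a wildcard name can be combined with the expand_wildcards and verbose options in a single request. A hedged sketch with the Python client, assuming the parameter names mirror the query parameters above and that logs-* is an illustrative pattern:

resp = client.indices.get_data_stream(
    name="logs-*",
    expand_wildcards="open,hidden",
    verbose=True,
)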

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • _meta object

        Custom metadata for the stream, copied from the _meta object of the stream’s matching index template. If empty, the response omits this property.

        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • allow_custom_routing boolean

        If true, the data stream allows custom routing on write requests.

      • failure_store object

        Information about failure store backing indices

        Hide failure_store attributes Show failure_store attributes object
        • enabled boolean Required
        • indices array[object] Required
        • rollover_on_write boolean Required
      • generation number Required

        Current generation for the data stream. This number acts as a cumulative count of the stream’s rollovers, starting at 1.

      • hidden boolean Required

        If true, the data stream is hidden.

      • ilm_policy string

        Name of the current ILM lifecycle policy in the stream’s matching index template. This lifecycle policy is set in the index.lifecycle.name setting. If the template does not include a lifecycle policy, this property is not included in the response. NOTE: A data stream’s backing indices may be assigned different lifecycle policies. To retrieve the lifecycle policy for individual backing indices, use the get index settings API.

      • next_generation_managed_by string Required

        Name of the lifecycle system that'll manage the next generation of the data stream.

        Values are Index Lifecycle Management, Data stream lifecycle, or Unmanaged.

      • prefer_ilm boolean Required

        Indicates if ILM should take precedence over DSL in case both are configured to manage this data stream.

      • indices array[object] Required

        Array of objects containing information about the data stream’s backing indices. The last item in this array contains information about the stream’s current write index.

        Hide indices attributes Show indices attributes object
        • index_name string Required

          Name of the backing index.

        • index_uuid string Required

          Universally unique identifier (UUID) for the index.

        • ilm_policy string

          Name of the current ILM lifecycle policy configured for this backing index.

        • managed_by string

          Name of the lifecycle system that's currently managing this backing index.

          Values are Index Lifecycle Management, Data stream lifecycle, or Unmanaged.

        • prefer_ilm boolean

          Indicates if ILM should take precedence over DSL in case both are configured to manage this index.

      • lifecycle object

        Contains the configuration for the data stream lifecycle of this data stream.

        Hide lifecycle attribute Show lifecycle attribute object
        • enabled boolean

          If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

          Default value is true.

      • name string Required

        Name of the data stream.

      • replicated boolean

        If true, the data stream is created and managed by cross-cluster replication and the local cluster cannot write into this data stream or change its mappings.

      • rollover_on_write boolean Required

        If true, the next write to this data stream will trigger a rollover first and the document will be indexed in the new backing index. If the rollover fails the indexing request will fail too.

      • settings object Required

        The settings specific to this data stream that will take precedence over the settings in the matching index template.

        Index settings
      • status string Required

        Health status of the data stream. This health status is based on the state of the primary and replica shards of the stream’s backing indices.

        Supported values include:

        • green (or GREEN): All shards are assigned.
        • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
        • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
        • unknown
        • unavailable

        Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

      • system boolean Generally available; Added in 7.10.0

        If true, the data stream is created and managed by an Elastic stack component and cannot be modified through normal user interaction.

      • template string Required

        Name of the index template used to create the data stream’s backing indices. The template’s index pattern must match the name of this data stream.

      • timestamp_field object Required

        Information about the @timestamp field in the data stream.

        Hide timestamp_field attribute Show timestamp_field attribute object
        • name string Required

          Name of the timestamp field for the data stream, which must be @timestamp. The @timestamp field must be included in every document indexed to the data stream.

GET /_data_stream/{name}
GET _data_stream/my-data-stream
resp = client.indices.get_data_stream(
    name="my-data-stream",
)
const response = await client.indices.getDataStream({
  name: "my-data-stream",
});
response = client.indices.get_data_stream(
  name: "my-data-stream"
)
$resp = $client->indices()->getDataStream([
    "name" => "my-data-stream",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/my-data-stream"
client.indices().getDataStream(g -> g
    .name("my-data-stream")
);
Response examples (200)
A successful response for retrieving information about a data stream.
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-2099.03.07-000001",
          "index_uuid": "xCEhwsp8Tey0-FLNFYVwSg",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        },
        {
          "index_name": ".ds-my-data-stream-2099.03.08-000002",
          "index_uuid": "PA_JquKGSiKcAKBA8DJ5gw",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        }
      ],
      "generation": 2,
      "_meta": {
        "my-meta-field": "foo"
      },
      "status": "GREEN",
      "next_generation_managed_by": "Index Lifecycle Management",
      "prefer_ilm": true,
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false,
      "rollover_on_write": false
    },
    {
      "name": "my-data-stream-two",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-two-2099.03.08-000001",
          "index_uuid": "3liBu2SYS5axasRt6fUIpA",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        }
      ],
      "generation": 1,
      "_meta": {
        "my-meta-field": "foo"
      },
      "status": "YELLOW",
      "next_generation_managed_by": "Index Lifecycle Management",
      "prefer_ilm": true,
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false,
      "rollover_on_write": false
    }
  ]
}




Delete data streams Generally available; Added in 7.9.0

DELETE /_data_stream/{name}

Deletes one or more data streams and their backing indices.

Required authorization

  • Index privileges: delete_index

Path parameters

  • name string | array[string] Required

    Comma-separated list of data streams to delete. Wildcard (*) expressions are supported.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_data_stream/{name}
DELETE _data_stream/my-data-stream
resp = client.indices.delete_data_stream(
    name="my-data-stream",
)
const response = await client.indices.deleteDataStream({
  name: "my-data-stream",
});
response = client.indices.delete_data_stream(
  name: "my-data-stream"
)
$resp = $client->indices()->deleteDataStream([
    "name" => "my-data-stream",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/my-data-stream"
client.indices().deleteDataStream(d -> d
    .name("my-data-stream")
);




Get data stream lifecycles Generally available; Added in 8.11.0

GET /_data_stream/{name}/_lifecycle

Get the data stream lifecycle configuration of one or more data streams.

External documentation

Path parameters

  • name string | array[string] Required

    Comma-separated list of data streams to limit the request. Supports wildcards (*). To target all data streams, omit this parameter or use * or _all.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • include_defaults boolean

    If true, return all default settings in the response.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • name string Required
      • lifecycle object

        Data stream lifecycle with rollover can be used to display the configuration including the default rollover conditions, if asked.

        Hide lifecycle attribute Show lifecycle attribute object
        • enabled boolean

          If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

          Default value is true.

GET /_data_stream/{name}/_lifecycle
GET /_data_stream/{name}/_lifecycle?human&pretty
resp = client.indices.get_data_lifecycle(
    name="{name}",
    human=True,
    pretty=True,
)
const response = await client.indices.getDataLifecycle({
  name: "{name}",
  human: "true",
  pretty: "true",
});
response = client.indices.get_data_lifecycle(
  name: "{name}",
  human: "true",
  pretty: "true"
)
$resp = $client->indices()->getDataLifecycle([
    "name" => "{name}",
    "human" => "true",
    "pretty" => "true",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/%7Bname%7D/_lifecycle?human&pretty"
Response examples (200)
A successful response from `GET /_data_stream/{name}/_lifecycle?human&pretty`.
{
  "data_streams": [
    {
      "name": "my-data-stream-1",
      "lifecycle": {
        "enabled": true,
        "data_retention": "7d"
      }
    },
    {
      "name": "my-data-stream-2",
      "lifecycle": {
        "enabled": true,
        "data_retention": "7d"
      }
    }
  ]
}




Downsample an index Technical preview; Added in 8.5.0

POST /{index}/_downsample/{target_index}

Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min, max, sum, value_count and avg) for each metric field grouped by a configured time interval. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index.

NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index. The source index must be read only (index.blocks.write: true).
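
Because the source index must be write-blocked, a downsample run typically takes two calls. A minimal sketch with the Python client, assuming an illustrative index named my-time-series-index and an hourly interval:

# Make the source index read only (sets index.blocks.write: true).
client.indices.add_block(index="my-time-series-index", block="write")
# Downsample the index into hourly buckets.
client.indices.downsample(
    index="my-time-series-index",
    target_index="my-downsampled-time-series-index",
    config={"fixed_interval": "1h"},
)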

Path parameters

  • index string Required

    Name of the time series index to downsample.

  • target_index string Required

    Name of the index to create.

application/json

Body Required

  • fixed_interval string Required

    The interval at which to aggregate the original time series index.

Responses

  • 200 application/json
POST /{index}/_downsample/{target_index}
POST /my-time-series-index/_downsample/my-downsampled-time-series-index
{
  "fixed_interval": "1d"
}
resp = client.indices.downsample(
    index="my-time-series-index",
    target_index="my-downsampled-time-series-index",
    config={
        "fixed_interval": "1d"
    },
)
const response = await client.indices.downsample({
  index: "my-time-series-index",
  target_index: "my-downsampled-time-series-index",
  config: {
    fixed_interval: "1d",
  },
});
response = client.indices.downsample(
  index: "my-time-series-index",
  target_index: "my-downsampled-time-series-index",
  body: {
    "fixed_interval": "1d"
  }
)
$resp = $client->indices()->downsample([
    "index" => "my-time-series-index",
    "target_index" => "my-downsampled-time-series-index",
    "body" => [
        "fixed_interval" => "1d",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"fixed_interval":"1d"}' "$ELASTICSEARCH_URL/my-time-series-index/_downsample/my-downsampled-time-series-index"
client.indices().downsample(d -> d
    .index("my-time-series-index")
    .targetIndex("my-downsampled-time-series-index")
    .config(c -> c
        .fixedInterval(f -> f
            .time("1d")
        )
    )
);
Request example
{
  "fixed_interval": "1d"
}




Get data stream lifecycle stats Generally available; Added in 8.12.0

GET /_lifecycle/stats

Get statistics about the data streams that are managed by a data stream lifecycle.

Required authorization

  • Cluster privileges: monitor
External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • data_stream_count number Required

      The count of data streams currently being managed by the data stream lifecycle.

    • data_streams array[object] Required

      Information about the data streams that are managed by the data stream lifecycle.

      Hide data_streams attributes Show data_streams attributes object
      • backing_indices_in_error number Required

        The count of the backing indices for the data stream that have encountered an error.

      • backing_indices_in_total number Required

        The count of the backing indices for the data stream.

      • name string Required

        The name of the data stream.

    • last_run_duration_in_millis number

      Time unit for milliseconds

    • time_between_starts_in_millis number

      Time unit for milliseconds

GET /_lifecycle/stats
GET _lifecycle/stats?human&pretty
resp = client.indices.get_data_lifecycle_stats(
    human=True,
    pretty=True,
)
const response = await client.indices.getDataLifecycleStats({
  human: "true",
  pretty: "true",
});
response = client.indices.get_data_lifecycle_stats(
  human: "true",
  pretty: "true"
)
$resp = $client->indices()->getDataLifecycleStats([
    "human" => "true",
    "pretty" => "true",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_lifecycle/stats?human&pretty"
client.indices().getDataLifecycleStats();
Response examples (200)
A successful response for `GET _lifecycle/stats?human&pretty`
{
  "last_run_duration_in_millis": 2,
  "last_run_duration": "2ms",
  "time_between_starts_in_millis": 9998,
  "time_between_starts": "9.99s",
  "data_streams_count": 2,
  "data_streams": [
    {
      "name": "my-data-stream",
      "backing_indices_in_total": 2,
      "backing_indices_in_error": 0
    },
    {
      "name": "my-other-stream",
      "backing_indices_in_total": 2,
      "backing_indices_in_error": 1
    }
  ]
}

Get data stream settings Generally available

GET /_data_stream/{name}/_settings

Get setting information for one or more data streams.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • name string | array[string] Required

    A comma-separated list of data streams or data stream patterns. Supports wildcards (*).

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • name string Required

        The name of the data stream.

      • settings object Required

        The settings specific to this data stream

        Index settings
      • effective_settings object Required

        The settings specific to this data stream merged with the settings from its template. These effective_settings are the settings that will be used when a new index is created for this data stream.

        Index settings
GET /_data_stream/{name}/_settings
curl \
 --request GET 'http://api.example.com/_data_stream/{name}/_settings' \
 --header "Authorization: $API_KEY"
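A hedged Python sketch using the client's generic transport call, assuming a data stream named my-data-stream; newer client releases may also expose a dedicated helper for this endpoint:

resp = client.perform_request(
    "GET",
    "/_data_stream/my-data-stream/_settings",
    headers={"accept": "application/json"},
)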
Response examples (200)
This is a response to `GET /_data_stream/my-data-stream/_settings` for a data stream, `my-data-stream`, that has two settings set. The `effective_settings` field shows additional settings that are pulled from its template.
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "settings": {
        "index": {
          "lifecycle": {
            "name": "new-test-policy"
          },
          "number_of_shards": "11"
        }
      },
      "effective_settings": {
        "index": {
          "lifecycle": {
            "name": "new-test-policy"
          },
          "mode": "standard",
          "number_of_shards": "11",
          "number_of_replicas": "0"
        }
      }
    }
  ]
}




Convert an index alias to a data stream Generally available; Added in 7.9.0

POST /_data_stream/_migrate/{name}

Converts an index alias to a data stream. You must have a matching index template that is data stream enabled. The alias must meet the following criteria:

  • The alias must have a write index.
  • All indices for the alias must have a @timestamp field mapping of a date or date_nanos field type.
  • The alias must not have any filters.
  • The alias must not use custom routing.

If successful, the request removes the alias and creates a data stream with the same name. The indices for the alias become hidden backing indices for the stream. The write index for the alias becomes the write index for the stream.

Required authorization

  • Index privileges: manage

Path parameters

  • name string Required

    Name of the index alias to convert to a data stream.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_data_stream/_migrate/{name}
POST _data_stream/_migrate/my-time-series-data
resp = client.indices.migrate_to_data_stream(
    name="my-time-series-data",
)
const response = await client.indices.migrateToDataStream({
  name: "my-time-series-data",
});
response = client.indices.migrate_to_data_stream(
  name: "my-time-series-data"
)
$resp = $client->indices()->migrateToDataStream([
    "name" => "my-time-series-data",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/_migrate/my-time-series-data"
client.indices().migrateToDataStream(m -> m
    .name("my-time-series-data")
);








Bulk index or delete documents Generally available

PUT /{index}/_bulk

All methods and paths for this operation:

POST /_bulk

PUT /_bulk
POST /{index}/_bulk
PUT /{index}/_bulk

Perform multiple index, create, delete, and update actions in a single request. This reduces overhead and can greatly increase indexing speed.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:

  • To use the create action, you must have the create_doc, create, index, or write index privilege. Data streams support only the create action.
  • To use the index action, you must have the create, index, or write index privilege.
  • To use the delete action, you must have the delete or write index privilege.
  • To use the update action, you must have the index or write index privilege.
  • To automatically create a data stream or index with a bulk API request, you must have the auto_configure, create_index, or manage index privilege.
  • To make the result of a bulk operation visible to search using the refresh parameter, you must have the maintenance or manage index privilege.

Automatic data stream creation requires a matching index template with data stream enabled.

The actions are specified in the request body using a newline delimited JSON (NDJSON) structure:

action_and_meta_data\n
optional_source\n
action_and_meta_data\n
optional_source\n
....
action_and_meta_data\n
optional_source\n

The index and create actions expect a source on the next line and have the same semantics as the op_type parameter in the standard index API. A create action fails if a document with the same ID already exists in the target. An index action adds or replaces a document as necessary.

NOTE: Data streams support only the create action. To update or delete a document in a data stream, you must target the backing index containing the document.

An update action expects the partial document, upsert, or script and its options to be specified on the next line.

A delete action does not expect a source on the next line and has the same semantics as the standard delete API.

NOTE: The final line of data must end with a newline character (\n). Each newline character may be preceded by a carriage return (\r). When sending NDJSON data to the _bulk endpoint, use a Content-Type header of application/json or application/x-ndjson. Because this format uses literal newline characters (\n) as delimiters, make sure that the JSON actions and sources are not pretty printed.

If you provide a target in the request path, it is used for any actions that don't explicitly specify an _index argument.

A note on the format: the idea here is to make processing as fast as possible. As some of the actions are redirected to other shards on other nodes, only action_meta_data is parsed on the receiving node side.

Client libraries using this protocol should strive to do something similar on the client side and reduce buffering as much as possible.

There is no "correct" number of actions to perform in a single bulk request. Experiment with different settings to find the optimal size for your particular workload. Note that Elasticsearch limits the maximum size of an HTTP request to 100mb by default, so clients must ensure that no request exceeds this size. It is not possible to index a single document that exceeds the size limit, so you must pre-process any such documents into smaller pieces before sending them to Elasticsearch. For instance, split documents into pages or chapters before indexing them, or store raw binary data in a system outside Elasticsearch and replace the raw data with a link to the external system in the documents that you send to Elasticsearch.

Client support for bulk requests

Some of the officially supported clients provide helpers to assist with bulk requests and reindexing:

  • Go: Check out esutil.BulkIndexer
  • Perl: Check out Search::Elasticsearch::Client::5_0::Bulk and Search::Elasticsearch::Client::5_0::Scroll
  • Python: Check out elasticsearch.helpers.*
  • JavaScript: Check out client.helpers.*
  • .NET: Check out BulkAllObservable
  • PHP: Check out bulk indexing.
  • Ruby: Check out Elasticsearch::Helpers::BulkHelper

Submitting bulk requests with cURL

If you're providing text file input to curl, you must use the --data-binary flag instead of plain -d. The latter doesn't preserve newlines. For example:

$ cat requests
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
$ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@requests"; echo
{"took":7, "errors": false, "items":[{"index":{"_index":"test","_id":"1","_version":1,"result":"created","forced_refresh":false}}]}

Optimistic concurrency control

Each index and delete action within a bulk API call may include the if_seq_no and if_primary_term parameters in their respective action and meta data lines. The if_seq_no and if_primary_term parameters control how operations are run, based on the last modification to existing documents. See Optimistic concurrency control for more details.
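
A hedged sketch of a bulk index action guarded by if_seq_no and if_primary_term with the Python client; the sequence number and primary term values are placeholders you would take from a previous response:

resp = client.bulk(
    operations=[
        # The action metadata carries the concurrency-control parameters.
        {"index": {"_index": "test", "_id": "1", "if_seq_no": 3, "if_primary_term": 1}},
        {"field1": "updated value"},
    ],
)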

Versioning

Each bulk item can include the version value using the version field. It automatically follows the behavior of the index or delete operation based on the _version mapping. It also supports the version_type.

Routing

Each bulk item can include the routing value using the routing field. It automatically follows the behavior of the index or delete operation based on the _routing mapping.

NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.
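
A hedged sketch of a per-item routing value in the action metadata, using the Python client with an illustrative routing key:

resp = client.bulk(
    operations=[
        # routing in the action metadata controls which shard receives the document.
        {"index": {"_index": "test", "_id": "1", "routing": "user-1"}},
        {"field1": "value1"},
    ],
)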

Wait for active shards

When making bulk calls, you can set the wait_for_active_shards parameter to require a minimum number of shard copies to be active before starting to process the bulk request.

Refresh

Control when the changes made by this request are visible to search.

NOTE: Only the shards that receive the bulk request will be affected by refresh. Imagine a _bulk?refresh=wait_for request with three documents in it that happen to be routed to different shards in an index with five shards. The request will only wait for those three shards to refresh. The other two shards that make up the index do not participate in the _bulk request at all.

Path parameters

  • index string Required

    The name of the data stream, index, or index alias to perform bulk actions on.

Query parameters

  • include_source_on_error boolean

    If true, the document source is included in the error message in case of parsing errors.

  • list_executed_pipelines boolean

    If true, the response will include the ingest pipelines that were run for each index or create.

  • pipeline string

    The pipeline identifier to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, wait for a refresh to make this operation visible to search. If false, do nothing with refreshes. Valid values: true, false, wait_for.

    Values are true, false, or wait_for.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or contains a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • timeout string

    The period each action waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards. The default is 1m (one minute), which guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur.

    External documentation
  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default is 1, which waits for each primary shard to be active.

    Values are all or index-setting.

  • require_alias boolean

    If true, the request's actions must target an index alias.

  • require_data_stream boolean

    If true, the request's actions must target a data stream (existing or to be created).

application/json

Body object Required

One of:
  • index object

    Index the specified document. If the document exists, it replaces the document and increments the version. The following line must contain the source data to be indexed.

    Hide index attributes Show index attributes object
    • if_primary_term number
    • dynamic_templates object

      A map from the full name of fields to the name of dynamic templates. It defaults to an empty map. If a name matches a dynamic template, that template will be applied regardless of other match predicates defined in the template. If a field is already defined in the mapping, then this parameter won't be used.

    • pipeline string

      The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

    • require_alias boolean

      If true, the request's actions must target an index alias.

      Default value is false.

  • create object

    Index the specified document if it does not already exist. The following line must contain the source data to be indexed.

    Hide create attributes Show create attributes object
    • if_primary_term number
    • dynamic_templates object

      A map from the full name of fields to the name of dynamic templates. It defaults to an empty map. If a name matches a dynamic template, that template will be applied regardless of other match predicates defined in the template. If a field is already defined in the mapping, then this parameter won't be used.

    • pipeline string

      The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

    • require_alias boolean

      If true, the request's actions must target an index alias.

      Default value is false.

  • update object

    Perform a partial document update. The following line must contain the partial document and update options.

    Hide update attributes Show update attributes object
    • _id string

      The document ID.

    • _index string

      The name of the index or index alias to perform the action on.

    • routing string

      A custom value used to route operations to a specific shard.

    • if_primary_term number
    • if_seq_no number
    • version number
    • version_type string

      Supported values include:

      • internal: Use internal versioning that starts at 1 and increments with each update or delete.
      • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
      • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
      • force: This option is deprecated because it can cause primary and replica shards to diverge.

      Values are internal, external, external_gte, or force.

    • require_alias boolean

      If true, the request's actions must target an index alias.

      Default value is false.

    • retry_on_conflict number

      The number of times an update should be retried in the case of a version conflict.

  • delete object

    Remove the specified document from the index.

    Hide delete attributes Show delete attributes object
    • _id string

      The document ID.

    • _index string

      The name of the index or index alias to perform the action on.

    • routing string

      A custom value used to route operations to a specific shard.

    • if_primary_term number
    • if_seq_no number
    • version number
    • version_type string

      Supported values include:

      • internal: Use internal versioning that starts at 1 and increments with each update or delete.
      • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
      • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
      • force: This option is deprecated because it can cause primary and replica shards to diverge.

      Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • errors boolean Required

      If true, one or more of the operations in the bulk request did not complete successfully.

    • items array[object] Required

      The result of each operation in the bulk request, in the order they were submitted.

      Hide items attribute Show items attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • _id string | null

          The document ID associated with the operation.

        • _index string Required

          The name of the index associated with the operation. If the operation targeted a data stream, this is the backing index into which the document was written.

        • status number Required

          The HTTP status code returned for the operation.

        • failure_store string

          Values are not_applicable_or_unknown, used, not_enabled, or failed.

        • error object

          Additional information about the failed operation. The property is returned only for failed operations.

          Hide error attributes Show error attributes object
          • type string Required

            The type of error

          • reason
          • stack_trace string

            The server stack trace. Present only if the error_trace=true parameter was sent with the request.

          • root_cause array[object]
          • suppressed array[object]
        • _primary_term number

          The primary term assigned to the document for the operation. This property is returned only for successful operations.

        • result string

          The result of the operation. Successful values are created, deleted, and updated.

        • _seq_no number

          The sequence number assigned to the document for the operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

        • _shards object

          Shard information for the operation.

          Hide _shards attribute Show _shards attribute object
          • failures array[object]
        • _version number

          The document version associated with the operation. The document version is incremented each time the document is updated. This property is returned only for successful actions.

        • forced_refresh boolean
        • get object
          Hide get attributes Show get attributes object
          • fields object
          • found boolean Required
          • _primary_term number
          • _source object
    • took number Required

      The length of time, in milliseconds, it took to process the bulk request.

    • ingest_took number
PUT /{index}/_bulk
POST _bulk
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_id" : "2" } }
{ "create" : { "_index" : "test", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_index" : "test"} }
{ "doc" : {"field2" : "value2"} }
resp = client.bulk(
    operations=[
        {
            "index": {
                "_index": "test",
                "_id": "1"
            }
        },
        {
            "field1": "value1"
        },
        {
            "delete": {
                "_index": "test",
                "_id": "2"
            }
        },
        {
            "create": {
                "_index": "test",
                "_id": "3"
            }
        },
        {
            "field1": "value3"
        },
        {
            "update": {
                "_id": "1",
                "_index": "test"
            }
        },
        {
            "doc": {
                "field2": "value2"
            }
        }
    ],
)
const response = await client.bulk({
  operations: [
    {
      index: {
        _index: "test",
        _id: "1",
      },
    },
    {
      field1: "value1",
    },
    {
      delete: {
        _index: "test",
        _id: "2",
      },
    },
    {
      create: {
        _index: "test",
        _id: "3",
      },
    },
    {
      field1: "value3",
    },
    {
      update: {
        _id: "1",
        _index: "test",
      },
    },
    {
      doc: {
        field2: "value2",
      },
    },
  ],
});
response = client.bulk(
  body: [
    {
      "index": {
        "_index": "test",
        "_id": "1"
      }
    },
    {
      "field1": "value1"
    },
    {
      "delete": {
        "_index": "test",
        "_id": "2"
      }
    },
    {
      "create": {
        "_index": "test",
        "_id": "3"
      }
    },
    {
      "field1": "value3"
    },
    {
      "update": {
        "_id": "1",
        "_index": "test"
      }
    },
    {
      "doc": {
        "field2": "value2"
      }
    }
  ]
)
$resp = $client->bulk([
    "body" => array(
        [
            "index" => [
                "_index" => "test",
                "_id" => "1",
            ],
        ],
        [
            "field1" => "value1",
        ],
        [
            "delete" => [
                "_index" => "test",
                "_id" => "2",
            ],
        ],
        [
            "create" => [
                "_index" => "test",
                "_id" => "3",
            ],
        ],
        [
            "field1" => "value3",
        ],
        [
            "update" => [
                "_id" => "1",
                "_index" => "test",
            ],
        ],
        [
            "doc" => [
                "field2" => "value2",
            ],
        ],
    ),
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '[{"index":{"_index":"test","_id":"1"}},{"field1":"value1"},{"delete":{"_index":"test","_id":"2"}},{"create":{"_index":"test","_id":"3"}},{"field1":"value3"},{"update":{"_id":"1","_index":"test"}},{"doc":{"field2":"value2"}}]' "$ELASTICSEARCH_URL/_bulk"
Request examples
Run `POST _bulk` to perform multiple operations.
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_id" : "2" } }
{ "create" : { "_index" : "test", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_index" : "test"} }
{ "doc" : {"field2" : "value2"} }
When you run `POST _bulk` and use the `update` action, you can use `retry_on_conflict` as a field in the action itself (not in the extra payload line) to specify how many times an update should be retried in the case of a version conflict.
{ "update" : {"_id" : "1", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"} }
{ "update" : { "_id" : "0", "_index" : "index1", "retry_on_conflict" : 3} }
{ "script" : { "source": "ctx._source.counter += params.param1", "lang" : "painless", "params" : {"param1" : 1}}, "upsert" : {"counter" : 1}}
{ "update" : {"_id" : "2", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"}, "doc_as_upsert" : true }
{ "update" : {"_id" : "3", "_index" : "index1", "_source" : true} }
{ "doc" : {"field" : "value"} }
{ "update" : {"_id" : "4", "_index" : "index1"} }
{ "doc" : {"field" : "value"}, "_source": true}
To return only information about failed operations, run `POST /_bulk?filter_path=items.*.error`.
{ "update": {"_id": "5", "_index": "index1"} }
{ "doc": {"my_field": "foo"} }
{ "update": {"_id": "6", "_index": "index1"} }
{ "doc": {"my_field": "foo"} }
{ "create": {"_id": "7", "_index": "index1"} }
{ "my_field": "foo" }
Run `POST /_bulk` to perform a bulk request that consists of index and create actions with the `dynamic_templates` parameter. The bulk request creates two new fields, `work_location` and `home_location`, with type `geo_point` according to the `dynamic_templates` parameter. However, the `raw_location` field is created using default dynamic mapping rules; in this case it becomes a text field because it is supplied as a string in the JSON document.
{ "index" : { "_index" : "my_index", "_id" : "1", "dynamic_templates": {"work_location": "geo_point"}} }
{ "field" : "value1", "work_location": "41.12,-71.34", "raw_location": "41.12,-71.34"}
{ "create" : { "_index" : "my_index", "_id" : "2", "dynamic_templates": {"home_location": "geo_point"}} }
{ "field" : "value2", "home_location": "41.12,-71.34"}
Response examples (200)
{
   "took": 30,
   "errors": false,
   "items": [
      {
         "index": {
            "_index": "test",
            "_id": "1",
            "_version": 1,
            "result": "created",
            "_shards": {
               "total": 2,
               "successful": 1,
               "failed": 0
            },
            "status": 201,
            "_seq_no" : 0,
            "_primary_term": 1
         }
      },
      {
         "delete": {
            "_index": "test",
            "_id": "2",
            "_version": 1,
            "result": "not_found",
            "_shards": {
               "total": 2,
               "successful": 1,
               "failed": 0
            },
            "status": 404,
            "_seq_no" : 1,
            "_primary_term" : 2
         }
      },
      {
         "create": {
            "_index": "test",
            "_id": "3",
            "_version": 1,
            "result": "created",
            "_shards": {
               "total": 2,
               "successful": 1,
               "failed": 0
            },
            "status": 201,
            "_seq_no" : 2,
            "_primary_term" : 3
         }
      },
      {
         "update": {
            "_index": "test",
            "_id": "1",
            "_version": 2,
            "result": "updated",
            "_shards": {
                "total": 2,
                "successful": 1,
                "failed": 0
            },
            "status": 200,
            "_seq_no" : 3,
            "_primary_term" : 4
         }
      }
   ]
}
If you run `POST /_bulk` with operations that update non-existent documents, the operations cannot complete successfully. The API returns a response with an `errors` property value `true`. The response also includes an error object for any failed operations. The error object contains additional information about the failure, such as the error type and reason.
{
  "took": 486,
  "errors": true,
  "items": [
    {
      "update": {
        "_index": "index1",
        "_id": "5",
        "status": 404,
        "error": {
          "type": "document_missing_exception",
          "reason": "[5]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    },
    {
      "update": {
        "_index": "index1",
        "_id": "6",
        "status": 404,
        "error": {
          "type": "document_missing_exception",
          "reason": "[6]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    },
    {
      "create": {
        "_index": "index1",
        "_id": "7",
        "_version": 1,
        "result": "created",
        "_shards": {
          "total": 2,
          "successful": 1,
          "failed": 0
        },
        "_seq_no": 0,
        "_primary_term": 1,
        "status": 201
      }
    }
  ]
}
An example response from `POST /_bulk?filter_path=items.*.error`, which returns only information about failed operations.
{
  "items": [
    {
      "update": {
        "error": {
          "type": "document_missing_exception",
          "reason": "[5]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    },
    {
      "update": {
        "error": {
          "type": "document_missing_exception",
          "reason": "[6]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    }
  ]
}

Create a new document in the index Generally available; Added in 5.0.0

POST /{index}/_create/{id}

All methods and paths for this operation:

PUT /{index}/_create/{id}

POST /{index}/_create/{id}

You can index a new JSON document with the /<target>/_doc/ or /<target>/_create/<_id> APIs. Using _create guarantees that the document is indexed only if it does not already exist. It returns a 409 response when a document with the same ID already exists in the index. To update an existing document, you must use the /<target>/_doc/ API.
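
Because a conflict is an expected outcome of _create, clients typically catch the 409 rather than treat it as a hard failure. A minimal sketch with the Python client; the index name, document, and exception import path are assumptions that may vary by client version:

from elasticsearch import ConflictError

try:
    client.create(index="my-index", id="1", document={"message": "first write"})
except ConflictError:
    # 409: a document with this ID already exists; update it via /<target>/_doc/ if needed.
    pass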

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:

  • To add a document using the PUT /<target>/_create/<_id> or POST /<target>/_create/<_id> request formats, you must have the create_doc, create, index, or write index privilege.
  • To automatically create a data stream or index with this API request, you must have the auto_configure, create_index, or manage index privilege.

Automatic data stream creation requires a matching index template with data stream enabled.

Automatically create data streams and indices

If the request's target doesn't exist and matches an index template with a data_stream definition, the index operation automatically creates the data stream.

If the target doesn't exist and doesn't match a data stream template, the operation automatically creates the index and applies any matching index templates.

NOTE: Elasticsearch includes several built-in index templates. To avoid naming collisions with these templates, refer to index pattern documentation.

If no mapping exists, the index operation creates a dynamic mapping. By default, new fields and objects are automatically added to the mapping if needed.

Automatic index creation is controlled by the action.auto_create_index setting. If it is true, any index can be created automatically. You can modify this setting to explicitly allow or block automatic creation of indices that match specified patterns, or set it to false to turn off automatic index creation entirely. Specify a comma-separated list of patterns you want to allow, or prefix each pattern with + or - to indicate whether it should be allowed or blocked. When a list is specified, the default behavior is to disallow.

NOTE: The action.auto_create_index setting affects the automatic creation of indices only. It does not affect the creation of data streams.
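
As an illustration of the pattern syntax (the patterns and index names below are hypothetical), the setting can be changed dynamically with the cluster update settings API, for example through the Python client, where `client` is assumed to be an elasticsearch.Elasticsearch instance:

# Allow auto-creation for my-index-* and logs-*, block test-*;
# patterns are evaluated in order and anything unmatched is disallowed.
client.cluster.put_settings(
    persistent={"action.auto_create_index": "+my-index-*,+logs-*,-test-*"}
)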

Routing

By default, shard placement — or routing — is controlled by using a hash of the document's ID value. For more explicit control, the value fed into the hash function used by the router can be directly specified on a per-operation basis using the routing parameter.

When setting up explicit mapping, you can also use the _routing field to direct the index operation to extract the routing value from the document itself. This does come at the (very minimal) cost of an additional document parsing pass. If the _routing mapping is defined and set to be required, the index operation will fail if no routing value is provided or extracted.

NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.
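
A short sketch of per-operation routing with the Python client (the index name, document ID, and user1 routing value are illustrative; `client` is assumed to be an elasticsearch.Elasticsearch instance):

# Route the document by an explicit value instead of a hash of its ID.
client.create(
    index="my-index-000001",
    id="1",
    routing="user1",
    document={"user": {"id": "user1"}, "message": "routed write"},
)

# The same routing value must be supplied to read the document back.
client.get(index="my-index-000001", id="1", routing="user1")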

Distributed

The index operation is directed to the primary shard based on its route and performed on the actual node containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas.

Active shards

To improve the resiliency of writes to the system, indexing operations can be configured to wait for a certain number of active shard copies before proceeding with the operation. If the requisite number of active shard copies are not available, then the write operation must wait and retry, until either the requisite shard copies have started or a timeout occurs. By default, write operations only wait for the primary shards to be active before proceeding (that is to say wait_for_active_shards is 1). This default can be overridden in the index settings dynamically by setting index.write.wait_for_active_shards. To alter this behavior per operation, use the wait_for_active_shards request parameter.

Valid values are all or any positive integer up to the total number of configured copies per shard in the index (which is number_of_replicas+1). Specifying a negative value or a number greater than the number of shard copies will throw an error.

For example, suppose you have a cluster of three nodes, A, B, and C, and you create an index index with the number of replicas set to 3 (resulting in 4 shard copies, one more copy than there are nodes). If you attempt an indexing operation, by default the operation will only ensure the primary copy of each shard is available before proceeding. This means that even if B and C went down and A hosted the primary shard copies, the indexing operation would still proceed with only one copy of the data. If wait_for_active_shards is set on the request to 3 (and all three nodes are up), the indexing operation will require 3 active shard copies before proceeding. This requirement should be met because there are 3 active nodes in the cluster, each one holding a copy of the shard. However, if you set wait_for_active_shards to all (or to 4, which is the same in this situation), the indexing operation will not proceed as you do not have all 4 copies of each shard active in the index. The operation will time out unless a new node is brought up in the cluster to host the fourth copy of the shard.

It is important to note that this setting greatly reduces the chances of the write operation not writing to the requisite number of shard copies, but it does not completely eliminate the possibility, because this check occurs before the write operation starts. After the write operation is underway, it is still possible for replication to fail on any number of shard copies but still succeed on the primary. The _shards section of the API response reveals the number of shard copies on which replication succeeded and failed.
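
Both ways of setting the value are sketched below with the Python client; the index name and the value 2 are illustrative, and `client` is assumed to be an elasticsearch.Elasticsearch instance:

# Change the index-level default dynamically.
client.indices.put_settings(
    index="my-index-000001",
    settings={"index.write.wait_for_active_shards": "2"},
)

# Or override it for a single write: wait for 2 active copies or time out.
client.create(
    index="my-index-000001",
    id="2",
    wait_for_active_shards="2",
    document={"message": "resilient write"},
)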

Required authorization

  • Index privileges: create
External documentation

Path parameters

  • index string Required

    The name of the data stream or index to target. If the target doesn't exist and matches the name or wildcard (*) pattern of an index template with a data_stream definition, this request creates the data stream. If the target doesn't exist and doesn’t match a data stream template, this request creates the index.

  • id string Required

    A unique identifier for the document. To automatically generate a document ID, use the POST /<target>/_doc/ request format.

Query parameters

  • include_source_on_error boolean

If true, the document source is included in the error message if there is a parsing error.

  • pipeline string

    The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes.

    Values are true, false, or wait_for.

  • require_alias boolean

    If true, the destination must be an index alias.

  • require_data_stream boolean

    If true, the request's actions must target a data stream (existing or to be created).

  • routing string

    A custom value that is used to route operations to a specific shard.

  • timeout string

The period the request waits for the following operations: automatic index creation, dynamic mapping updates, waiting for active shards. Elasticsearch waits for at least the specified timeout period before failing.

    This parameter is useful for situations where the primary shard assigned to perform the operation might not be available when the operation runs. Some reasons for this might be that the primary shard is currently recovering from a gateway or undergoing relocation. By default, the operation will wait on the primary shard to become available for at least 1 minute before failing and responding with an error. The actual wait time could be longer, particularly when multiple waits occur.

    External documentation
  • version number

    The explicit version number for concurrency control. It must be a non-negative long number.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.

    Values are all or index-setting.

application/json

Body Required

object object

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _id string Required

      The unique identifier for the added document.

    • _index string Required

      The name of the index the document was added to.

    • _primary_term number

      The primary term assigned to the document for the indexing operation.

    • result string Required

      The result of the indexing operation: created or updated.

      Values are created, updated, deleted, not_found, or noop.

    • _seq_no number

      The sequence number assigned to the document for the indexing operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

    • _shards object Required

      Information about the replication process of the operation.

      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • _version number Required

      The document version, which is incremented each time the document is updated.

    • failure_store string

      The role of the failure store in this document response

      Values are not_applicable_or_unknown, used, not_enabled, or failed.

    • forced_refresh boolean
POST /{index}/_create/{id}
PUT my-index-000001/_create/1
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
resp = client.create(
    index="my-index-000001",
    id="1",
    document={
        "@timestamp": "2099-11-15T13:12:00",
        "message": "GET /search HTTP/1.1 200 1070000",
        "user": {
            "id": "kimchy"
        }
    },
)
const response = await client.create({
  index: "my-index-000001",
  id: 1,
  document: {
    "@timestamp": "2099-11-15T13:12:00",
    message: "GET /search HTTP/1.1 200 1070000",
    user: {
      id: "kimchy",
    },
  },
});
response = client.create(
  index: "my-index-000001",
  id: "1",
  body: {
    "@timestamp": "2099-11-15T13:12:00",
    "message": "GET /search HTTP/1.1 200 1070000",
    "user": {
      "id": "kimchy"
    }
  }
)
$resp = $client->create([
    "index" => "my-index-000001",
    "id" => "1",
    "body" => [
        "@timestamp" => "2099-11-15T13:12:00",
        "message" => "GET /search HTTP/1.1 200 1070000",
        "user" => [
            "id" => "kimchy",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"@timestamp":"2099-11-15T13:12:00","message":"GET /search HTTP/1.1 200 1070000","user":{"id":"kimchy"}}' "$ELASTICSEARCH_URL/my-index-000001/_create/1"
client.create(c -> c
    .id("1")
    .index("my-index-000001")
    .document(JsonData.fromJson("{\"@timestamp\":\"2099-11-15T13:12:00\",\"message\":\"GET /search HTTP/1.1 200 1070000\",\"user\":{\"id\":\"kimchy\"}}"))
);
Request example
Run `PUT my-index-000001/_create/1` to index a document into the `my-index-000001` index if no document with that ID exists.
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
Response examples (200)
A successful response from `PUT my-index-000001/_create/1` which indexes a document.
{
   "_index": "my-index-000001",
   "_id": "1",
   "_version": 1,
   "result": "created",
   "_shards": {
     "total": 1,
     "successful": 1,
     "failed": 0
   },
   "_seq_no": 0,
   "_primary_term": 1
}

Get a document by its ID Generally available

GET /{index}/_doc/{id}

Get a document and its source or stored fields from an index.

By default, this API is realtime and is not affected by the refresh rate of the index (when data will become visible for search). In the case where stored fields are requested with the stored_fields parameter and the document has been updated but is not yet refreshed, the API will have to parse and analyze the source to extract the stored fields. To turn off realtime behavior, set the realtime parameter to false.

Source filtering

By default, the API returns the contents of the _source field unless you have used the stored_fields parameter or the _source field is turned off. You can turn off _source retrieval by using the _source parameter:

GET my-index-000001/_doc/0?_source=false

If you only need one or two fields from the _source, use the _source_includes or _source_excludes parameters to include or filter out particular fields. This can be helpful with large documents where partial retrieval can save on network overhead. Both parameters take a comma-separated list of fields or wildcard expressions. For example:

GET my-index-000001/_doc/0?_source_includes=*.id&_source_excludes=entities

If you only want to specify includes, you can use a shorter notation:

GET my-index-000001/_doc/0?_source=*.id

Routing

If routing is used during indexing, the routing value also needs to be specified to retrieve a document. For example:

GET my-index-000001/_doc/2?routing=user1

This request gets the document with ID 2, but it is routed based on the user. The document is not fetched if the correct routing is not specified.

Distributed

The GET operation is hashed into a specific shard ID. It is then redirected to one of the replicas within that shard ID and returns the result. The replicas are the primary shard and its replicas within that shard ID group. This means that the more replicas you have, the better your GET scaling will be.

Versioning support

You can use the version parameter to retrieve the document only if its current version is equal to the specified one.

Internally, Elasticsearch has marked the old document as deleted and added an entirely new document. The old version of the document doesn't disappear immediately, although you won't be able to access it. Elasticsearch cleans up deleted documents in the background as you continue to index more data.
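
For example, a hedged sketch with the Python client (values are illustrative; `client` is assumed to be an elasticsearch.Elasticsearch instance): the get raises a conflict error if the document's current internal version is not 2.

from elasticsearch.exceptions import ConflictError

try:
    doc = client.get(index="my-index-000001", id="1", version=2)
except ConflictError:
    # The stored document carries a different version.
    print("version mismatch, document not returned")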

Required authorization

  • Index privileges: read

Path parameters

  • index string Required

    The name of the index that contains the document.

  • id string Required

    A unique document identifier.

Query parameters

  • preference string

    The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

    If it is set to _local, the operation will prefer to be run on a local allocated shard when possible. If it is set to a custom value, the value is used to guarantee that the same shards will be used for the same custom value. This can help with "jumping values" when hitting different shards in different refresh states. A sample value can be something like the web session ID or the user name.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

A comma-separated list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. Only leaf fields can be retrieved with the stored_fields option. Object fields can't be returned; if specified, the request fails.

  • version number

    The version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _index string Required

      The name of the index the document belongs to.

    • fields object

      If the stored_fields parameter is set to true and found is true, it contains the document fields stored in the index.

      Hide fields attribute Show fields attribute object
      • * object Additional properties
    • _ignored array[string]
    • found boolean Required

      Indicates whether the document exists.

    • _id string Required

      The unique identifier for the document.

    • _primary_term number

      The primary term assigned to the document for the indexing operation.

    • _routing string

      The explicit routing, if set.

    • _seq_no number

      The sequence number assigned to the document for the indexing operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

    • _source object

      If found is true, it contains the document data formatted in JSON. If the _source parameter is set to false or the stored_fields parameter is set to true, it is excluded.

    • _version number

The document version, which is incremented each time the document is updated.

GET /{index}/_doc/{id}
GET my-index-000001/_doc/1?stored_fields=tags,counter
resp = client.get(
    index="my-index-000001",
    id="1",
    stored_fields="tags,counter",
)
const response = await client.get({
  index: "my-index-000001",
  id: 1,
  stored_fields: "tags,counter",
});
response = client.get(
  index: "my-index-000001",
  id: "1",
  stored_fields: "tags,counter"
)
$resp = $client->get([
    "index" => "my-index-000001",
    "id" => "1",
    "stored_fields" => "tags,counter",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_doc/1?stored_fields=tags,counter"
Response examples (200)
A successful response from `GET my-index-000001/_doc/0`. It retrieves the JSON document with the `_id` 0 from the `my-index-000001` index.
{
  "_index": "my-index-000001",
  "_id": "0",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "found": true,
  "_source": {
    "@timestamp": "2099-11-15T14:12:12",
    "http": {
      "request": {
        "method": "get"
      },
      "response": {
        "status_code": 200,
        "bytes": 1070000
      },
      "version": "1.1"
    },
    "source": {
      "ip": "127.0.0.1"
    },
    "message": "GET /search HTTP/1.1 200 1070000",
    "user": {
      "id": "kimchy"
    }
  }
}
A successful response from `GET my-index-000001/_doc/1?stored_fields=tags,counter`, which retrieves a set of stored fields. Field values fetched from the document itself are always returned as an array. Any requested fields that are not stored (such as the counter field in this example) are ignored.
{
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "_seq_no" : 22,
  "_primary_term" : 1,
  "found": true,
  "fields": {
      "tags": [
        "production"
      ]
  }
}
A successful response from `GET my-index-000001/_doc/2?routing=user1&stored_fields=tags,counter`, which retrieves the `_routing` metadata field.
{
  "_index": "my-index-000001",
  "_id": "2",
  "_version": 1,
  "_seq_no" : 13,
  "_primary_term" : 1,
  "_routing": "user1",
  "found": true,
  "fields": {
      "tags": [
        "env2"
      ]
  }
}

Create or update a document in an index Generally available

POST /{index}/_doc/{id}

All methods and paths for this operation:

POST /{index}/_doc

PUT /{index}/_doc/{id}
POST /{index}/_doc/{id}

Add a JSON document to the specified data stream or index and make it searchable. If the target is an index and the document already exists, the request updates the document and increments its version.

NOTE: You cannot use this API to send update requests for existing documents in a data stream.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:

  • To add or overwrite a document using the PUT /<target>/_doc/<_id> request format, you must have the create, index, or write index privilege.
  • To add a document using the POST /<target>/_doc/ request format, you must have the create_doc, create, index, or write index privilege.
  • To automatically create a data stream or index with this API request, you must have the auto_configure, create_index, or manage index privilege.

Automatic data stream creation requires a matching index template with data stream enabled.

NOTE: Replica shards might not all be started when an indexing operation returns successfully. By default, only the primary is required. Set wait_for_active_shards to change this default behavior.

Automatically create data streams and indices

If the request's target doesn't exist and matches an index template with a data_stream definition, the index operation automatically creates the data stream.

If the target doesn't exist and doesn't match a data stream template, the operation automatically creates the index and applies any matching index templates.

NOTE: Elasticsearch includes several built-in index templates. To avoid naming collisions with these templates, refer to index pattern documentation.

If no mapping exists, the index operation creates a dynamic mapping. By default, new fields and objects are automatically added to the mapping if needed.

Automatic index creation is controlled by the action.auto_create_index setting. If it is true, any index can be created automatically. You can modify this setting to explicitly allow or block automatic creation of indices that match specified patterns, or set it to false to turn off automatic index creation entirely. Specify a comma-separated list of patterns you want to allow, or prefix each pattern with + or - to indicate whether it should be allowed or blocked. When a list is specified, the default behavior is to disallow.

NOTE: The action.auto_create_index setting affects the automatic creation of indices only. It does not affect the creation of data streams.

Optimistic concurrency control

Index operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the if_seq_no and if_primary_term parameters. If a mismatch is detected, the operation will result in a VersionConflictException and a status code of 409.
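
A minimal sketch of a conditional write with the Python client (index name and field values are illustrative; `client` is assumed to be an elasticsearch.Elasticsearch instance): the document is read first to obtain its current sequence number and primary term, and the write succeeds only if they still match.

from elasticsearch.exceptions import ConflictError

current = client.get(index="my-index-000001", id="1")

try:
    client.index(
        index="my-index-000001",
        id="1",
        if_seq_no=current["_seq_no"],
        if_primary_term=current["_primary_term"],
        document={"user": {"id": "kimchy"}, "message": "conditional update"},
    )
except ConflictError:
    # Another writer modified the document after it was read; re-read and retry.
    print("version conflict")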

Routing

By default, shard placement — or routing — is controlled by using a hash of the document's ID value. For more explicit control, the value fed into the hash function used by the router can be directly specified on a per-operation basis using the routing parameter.

When setting up explicit mapping, you can also use the _routing field to direct the index operation to extract the routing value from the document itself. This does come at the (very minimal) cost of an additional document parsing pass. If the _routing mapping is defined and set to be required, the index operation will fail if no routing value is provided or extracted.

NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.

Distributed

The index operation is directed to the primary shard based on its route and performed on the actual node containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas.

Active shards

To improve the resiliency of writes to the system, indexing operations can be configured to wait for a certain number of active shard copies before proceeding with the operation. If the requisite number of active shard copies are not available, then the write operation must wait and retry, until either the requisite shard copies have started or a timeout occurs. By default, write operations only wait for the primary shards to be active before proceeding (that is to say wait_for_active_shards is 1). This default can be overridden in the index settings dynamically by setting index.write.wait_for_active_shards. To alter this behavior per operation, use the wait_for_active_shards request parameter.

Valid values are all or any positive integer up to the total number of configured copies per shard in the index (which is number_of_replicas+1). Specifying a negative value or a number greater than the number of shard copies will throw an error.

For example, suppose you have a cluster of three nodes, A, B, and C, and you create an index index with the number of replicas set to 3 (resulting in 4 shard copies, one more copy than there are nodes). If you attempt an indexing operation, by default the operation will only ensure the primary copy of each shard is available before proceeding. This means that even if B and C went down and A hosted the primary shard copies, the indexing operation would still proceed with only one copy of the data. If wait_for_active_shards is set on the request to 3 (and all three nodes are up), the indexing operation will require 3 active shard copies before proceeding. This requirement should be met because there are 3 active nodes in the cluster, each one holding a copy of the shard. However, if you set wait_for_active_shards to all (or to 4, which is the same in this situation), the indexing operation will not proceed as you do not have all 4 copies of each shard active in the index. The operation will time out unless a new node is brought up in the cluster to host the fourth copy of the shard.

It is important to note that this setting greatly reduces the chances of the write operation not writing to the requisite number of shard copies, but it does not completely eliminate the possibility, because this check occurs before the write operation starts. After the write operation is underway, it is still possible for replication to fail on any number of shard copies but still succeed on the primary. The _shards section of the API response reveals the number of shard copies on which replication succeeded and failed.

No operation (noop) updates

When updating a document by using this API, a new version of the document is always created even if the document hasn't changed. If this isn't acceptable, use the _update API with detect_noop set to true. The detect_noop option isn't available on this API because it doesn't fetch the old source and isn't able to compare it against the new source.

There isn't a definitive rule for when noop updates aren't acceptable. It's a combination of lots of factors like how frequently your data source sends updates that are actually noops and how many queries per second Elasticsearch runs on the shard receiving the updates.
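
If noop detection matters for your workload, here is a sketch of the alternative mentioned above, using the update API through the Python client (field values are illustrative; `client` is assumed to be an elasticsearch.Elasticsearch instance):

# detect_noop is true by default on the _update API; shown explicitly here.
resp = client.update(
    index="my-index-000001",
    id="1",
    doc={"message": "GET /search HTTP/1.1 200 1070000"},
    detect_noop=True,
)
print(resp["result"])  # "noop" when the merged document is unchanged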

Versioning

Each indexed document is given a version number. By default, internal versioning is used that starts at 1 and increments with each update, deletes included. Optionally, the version number can be set to an external value (for example, if maintained in a database). To enable this functionality, version_type should be set to external. The value provided must be a numeric, long value greater than or equal to 0, and less than around 9.2e+18.

NOTE: Versioning is completely real time, and is not affected by the near real time aspects of search operations. If no version is provided, the operation runs without any version checks.

When using the external version type, the system checks to see if the version number passed to the index request is greater than the version of the currently stored document. If true, the document will be indexed and the new version number used. If the value provided is less than or equal to the stored document's version number, a version conflict will occur and the index operation will fail. For example:

PUT my-index-000001/_doc/1?version=2&version_type=external
{
  "user": {
    "id": "elkbee"
  }
}

In this example, the operation will succeed since the supplied version of 2 is higher than the current document version of 1.
If the document was already updated and its version was set to 2 or higher, the indexing command will fail and result in a conflict (409 HTTP status code).

A nice side effect is that there is no need to maintain strict ordering of async indexing operations run as a result of changes to a source database, as long as version numbers from the source database are used.
Even the simple case of updating the Elasticsearch index using data from a database is simplified if external versioning is used, as only the latest version will be used if the index operations arrive out of order.
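
The same flow, including the 409 handling, sketched with the Python client (values are illustrative; `client` is assumed to be an elasticsearch.Elasticsearch instance):

from elasticsearch.exceptions import ConflictError

try:
    client.index(
        index="my-index-000001",
        id="1",
        version=2,
        version_type="external",
        document={"user": {"id": "elkbee"}},
    )
except ConflictError:
    # The stored document already carries external version 2 or higher.
    print("stale external version, skipping out-of-order write")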

Required authorization

  • Index privileges: index
External documentation

Path parameters

  • index string Required

    The name of the data stream or index to target. If the target doesn't exist and matches the name or wildcard (*) pattern of an index template with a data_stream definition, this request creates the data stream. If the target doesn't exist and doesn't match a data stream template, this request creates the index. You can check for existing targets with the resolve index API.

  • id string Required

    A unique identifier for the document. To automatically generate a document ID, use the POST /<target>/_doc/ request format and omit this parameter.

Query parameters

  • if_primary_term number

    Only perform the operation if the document has this primary term.

  • if_seq_no number

    Only perform the operation if the document has this sequence number.

  • include_source_on_error boolean

If true, the document source is included in the error message if there is a parsing error.

  • op_type string

Set to create to only index the document if it does not already exist (put if absent). If a document with the specified _id already exists, the indexing operation will fail. The behavior is the same as using the <index>/_create endpoint. If a document ID is specified, this parameter defaults to index. Otherwise, it defaults to create. If the request targets a data stream, an op_type of create is required.

    Supported values include:

    • index: Overwrite any documents that already exist.
    • create: Only index documents that do not already exist.

    Values are index or create.

  • pipeline string

    The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured it will always run, regardless of the value of this parameter.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes.

    Values are true, false, or wait_for.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • timeout string

    The period the request waits for the following operations: automatic index creation, dynamic mapping updates, waiting for active shards.

    This parameter is useful for situations where the primary shard assigned to perform the operation might not be available when the operation runs. Some reasons for this might be that the primary shard is currently recovering from a gateway or undergoing relocation. By default, the operation will wait on the primary shard to become available for at least 1 minute before failing and responding with an error. The actual wait time could be longer, particularly when multiple waits occur.

    External documentation
  • version number

    An explicit version number for concurrency control. It must be a non-negative long number.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.

    Values are all or index-setting.

  • require_alias boolean

    If true, the destination must be an index alias.

  • require_data_stream boolean

    If true, the request's actions must target a data stream (existing or to be created).

application/json

Body Required

object object

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _id string Required

      The unique identifier for the added document.

    • _index string Required

      The name of the index the document was added to.

    • _primary_term number

      The primary term assigned to the document for the indexing operation.

    • result string Required

      The result of the indexing operation: created or updated.

      Values are created, updated, deleted, not_found, or noop.

    • _seq_no number

      The sequence number assigned to the document for the indexing operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

    • _shards object Required

      Information about the replication process of the operation.

      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • _version number Required

      The document version, which is incremented each time the document is updated.

    • failure_store string

      The role of the failure store in this document response

      Values are not_applicable_or_unknown, used, not_enabled, or failed.

    • forced_refresh boolean
POST /{index}/_doc/{id}
POST my-index-000001/_doc/
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
resp = client.index(
    index="my-index-000001",
    document={
        "@timestamp": "2099-11-15T13:12:00",
        "message": "GET /search HTTP/1.1 200 1070000",
        "user": {
            "id": "kimchy"
        }
    },
)
const response = await client.index({
  index: "my-index-000001",
  document: {
    "@timestamp": "2099-11-15T13:12:00",
    message: "GET /search HTTP/1.1 200 1070000",
    user: {
      id: "kimchy",
    },
  },
});
response = client.index(
  index: "my-index-000001",
  body: {
    "@timestamp": "2099-11-15T13:12:00",
    "message": "GET /search HTTP/1.1 200 1070000",
    "user": {
      "id": "kimchy"
    }
  }
)
$resp = $client->index([
    "index" => "my-index-000001",
    "body" => [
        "@timestamp" => "2099-11-15T13:12:00",
        "message" => "GET /search HTTP/1.1 200 1070000",
        "user" => [
            "id" => "kimchy",
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"@timestamp":"2099-11-15T13:12:00","message":"GET /search HTTP/1.1 200 1070000","user":{"id":"kimchy"}}' "$ELASTICSEARCH_URL/my-index-000001/_doc/"
client.index(i -> i
    .index("my-index-000001")
    .document(JsonData.fromJson("{\"@timestamp\":\"2099-11-15T13:12:00\",\"message\":\"GET /search HTTP/1.1 200 1070000\",\"user\":{\"id\":\"kimchy\"}}"))
);
Request examples
Run `POST my-index-000001/_doc/` to index a document. When you use the `POST /<target>/_doc/` request format, the `op_type` is automatically set to `create` and the index operation generates a unique ID for the document.
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
Run `PUT my-index-000001/_doc/1` to insert a JSON document into the `my-index-000001` index with an `_id` of 1.
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
Response examples (200)
A successful response from `POST my-index-000001/_doc/`, which contains an automated document ID.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "W0tpsmIBdwcYyG50zbta",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "result": "created"
}
A successful response from `PUT my-index-000001/_doc/1`.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "result": "created"
}

Delete a document Generally available

DELETE /{index}/_doc/{id}

Remove a JSON document from the specified index.

NOTE: You cannot send deletion requests directly to a data stream. To delete a document in a data stream, you must target the backing index containing the document.

Optimistic concurrency control

Delete operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the if_seq_no and if_primary_term parameters. If a mismatch is detected, the operation will result in a VersionConflictException and a status code of 409.

Versioning

Each document indexed is versioned. When deleting a document, the version can be specified to make sure the relevant document you are trying to delete is actually being deleted and it has not changed in the meantime. Every write operation run on a document, deletes included, causes its version to be incremented. The version number of a deleted document remains available for a short time after deletion to allow for control of concurrent operations. The length of time for which a deleted document's version remains available is determined by the index.gc_deletes index setting.
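
The retention window can be tuned per index with the update index settings API; the 120s value below is purely illustrative, and `client` is assumed to be an elasticsearch.Elasticsearch instance:

# Keep deleted-document versions available for two minutes.
client.indices.put_settings(
    index="my-index-000001",
    settings={"index.gc_deletes": "120s"},
)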

Routing

If routing is used during indexing, the routing value also needs to be specified to delete a document.

If the _routing mapping is set to required and no routing value is specified, the delete API throws a RoutingMissingException and rejects the request.

For example:

DELETE /my-index-000001/_doc/1?routing=shard-1

This request deletes the document with ID 1, but it is routed based on the shard-1 routing value. The document is not deleted if the correct routing is not specified.

Distributed

The delete operation gets hashed into a specific shard ID. It then gets redirected into the primary shard within that ID group and replicated (if needed) to shard replicas within that ID group.

Required authorization

  • Index privileges: delete

Path parameters

  • index string Required

    The name of the target index.

  • id string Required

    A unique identifier for the document.

Query parameters

  • if_primary_term number

    Only perform the operation if the document has this primary term.

  • if_seq_no number

    Only perform the operation if the document has this sequence number.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes.

    Values are true, false, or wait_for.

  • routing string

    A custom value used to route operations to a specific shard.

  • timeout string

    The period to wait for active shards.

    This parameter is useful for situations where the primary shard assigned to perform the delete operation might not be available when the delete operation runs. Some reasons for this might be that the primary shard is currently recovering from a store or undergoing relocation. By default, the delete operation will wait on the primary shard to become available for up to 1 minute before failing and responding with an error.

    External documentation
  • version number

    An explicit version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

  • wait_for_active_shards number | string

    The minimum number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.

    Values are all or index-setting.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _id string Required

      The unique identifier for the added document.

    • _index string Required

      The name of the index the document was added to.

    • _primary_term number

      The primary term assigned to the document for the indexing operation.

    • result string Required

      The result of the indexing operation: created or updated.

      Values are created, updated, deleted, not_found, or noop.

    • _seq_no number

      The sequence number assigned to the document for the indexing operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

    • _shards object Required

      Information about the replication process of the operation.

      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • _version number Required

      The document version, which is incremented each time the document is updated.

    • failure_store string

      The role of the failure store in this document response

      Values are not_applicable_or_unknown, used, not_enabled, or failed.

    • forced_refresh boolean
DELETE /{index}/_doc/{id}
DELETE /my-index-000001/_doc/1
resp = client.delete(
    index="my-index-000001",
    id="1",
)
const response = await client.delete({
  index: "my-index-000001",
  id: 1,
});
response = client.delete(
  index: "my-index-000001",
  id: "1"
)
$resp = $client->delete([
    "index" => "my-index-000001",
    "id" => "1",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_doc/1"
client.delete(d -> d
    .id("1")
    .index("my-index-000001")
);
Response examples (200)
A successful response from `DELETE /my-index-000001/_doc/1`, which deletes the JSON document 1 from the `my-index-000001` index.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 2,
  "_primary_term": 1,
  "_seq_no": 5,
  "result": "deleted"
}

Throttle a delete by query operation Generally available; Added in 6.5.0

POST /_delete_by_query/{task_id}/_rethrottle

Change the number of requests per second for a particular delete by query operation. Rethrottling that speeds up the query takes effect immediately but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.

Path parameters

  • task_id string | number Required

    The ID for the task.

Query parameters

  • requests_per_second number

    The throttle for this request in sub-requests per second. To disable throttling, set it to -1.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • node_failures array[object]

      Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      Hide node_failures attributes Show node_failures attributes object
      • type string Required

        The type of error

      • reason string | null

        A human-readable explanation of the error, in English.

      • stack_trace string

        The server stack trace. Present only if the error_trace=true parameter was sent with the request.

      • caused_by object

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • root_cause array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • suppressed array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

    • task_failures array[object]
      Hide task_failures attributes Show task_failures attributes object
      • task_id number Required
      • node_id string Required
      • status string Required
      • reason object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide reason attributes Show reason attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

    • nodes object

      Task information grouped by node, if group_by was set to node (the default).

      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • name string
        • transport_address string
        • host string
        • ip string
        • roles array[string]
        • attributes object
          Hide attributes attribute Show attributes attribute object
          • * string Additional properties
        • tasks object Required
          Hide tasks attribute Show tasks attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • action string Required
            • cancelled boolean
            • cancellable boolean Required
            • description string

              Human readable text that identifies the particular request that the task is performing. For example, it might identify the search request being performed by a search task. Other kinds of tasks have different descriptions, like _reindex which has the source and the destination, or _bulk which just has the number of requests and the destination indices. Many requests will have only an empty description because more detailed information about the request is not easily available or particularly helpful in identifying the request.

            • headers object Required
              Hide headers attribute Show headers attribute object
              • * string Additional properties
            • id number Required
            • node string Required
            • running_time string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • running_time_in_nanos
            • start_time_in_millis
            • status object

              The internal status of the task, which varies from task to task. The format also varies. While the goal is to keep the status for a particular task consistent from version to version, this is not always possible because sometimes the implementation changes. Fields might be removed from the status for a particular request so any parsing you do of the status might break in minor releases.

            • type string Required
            • parent_task_id
    • tasks array[object] | object

      Either a flat list of tasks if group_by was set to none, or grouped by parents if group_by was set to parents.

      One of:

      Either a flat list of tasks if group_by was set to none, or grouped by parents if group_by was set to parents.

POST /_delete_by_query/{task_id}/_rethrottle
POST _delete_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1
resp = client.delete_by_query_rethrottle(
    task_id="r1A2WoRbTwKZ516z6NEs5A:36619",
    requests_per_second="-1",
)
const response = await client.deleteByQueryRethrottle({
  task_id: "r1A2WoRbTwKZ516z6NEs5A:36619",
  requests_per_second: "-1",
});
response = client.delete_by_query_rethrottle(
  task_id: "r1A2WoRbTwKZ516z6NEs5A:36619",
  requests_per_second: "-1"
)
$resp = $client->deleteByQueryRethrottle([
    "task_id" => "r1A2WoRbTwKZ516z6NEs5A:36619",
    "requests_per_second" => "-1",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_delete_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1"
client.deleteByQueryRethrottle(d -> d
    .requestsPerSecond(-1.0F)
    .taskId("r1A2WoRbTwKZ516z6NEs5A:36619")
);

Get a document's source Generally available

GET /{index}/_source/{id}

Get the source of a document. For example:

GET my-index-000001/_source/1

You can use the source filtering parameters to control which parts of the _source are returned:

GET my-index-000001/_source/1/?_source_includes=*.id&_source_excludes=entities

Required authorization

  • Index privileges: read
External documentation

Path parameters

  • index string Required

    The name of the index that contains the document.

  • id string Required

    A unique document identifier.

Query parameters

  • preference string

    The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude in the response.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response.

  • version number

    The version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
GET /{index}/_source/{id}
GET my-index-000001/_source/1
resp = client.get_source(
    index="my-index-000001",
    id="1",
)
const response = await client.getSource({
  index: "my-index-000001",
  id: 1,
});
response = client.get_source(
  index: "my-index-000001",
  id: "1"
)
$resp = $client->getSource([
    "index" => "my-index-000001",
    "id" => "1",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_source/1"
client.getSource(g -> g
    .id("1")
    .index("my-index-000001")
);

Check for a document source Generally available; Added in 5.4.0

HEAD /{index}/_source/{id}

Check whether a document source exists in an index. For example:

HEAD my-index-000001/_source/1

A document's source is not available if it is disabled in the mapping.
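
As a sketch of that condition (hypothetical index name; same Python client as the examples below), an index whose mapping disables _source still stores documents, but their source cannot be retrieved or checked:

client.indices.create(
    index="my-index-without-source",            # hypothetical index
    mappings={"_source": {"enabled": False}},   # disable the _source field
)
client.index(index="my-index-without-source", id="1", document={"message": "hello"})
client.exists_source(index="my-index-without-source", id="1")  # -> False (HEAD returns 404)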

Required authorization

  • Index privileges: read
External documentation

Path parameters

  • index string Required

    A comma-separated list of data streams, indices, and aliases. It supports wildcards (*).

  • id string Required

    A unique identifier for the document.

Query parameters

  • preference string

    The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude in the response.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response.

  • version number

    The version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
HEAD /{index}/_source/{id}
HEAD my-index-000001/_source/1
resp = client.exists_source(
    index="my-index-000001",
    id="1",
)
const response = await client.existsSource({
  index: "my-index-000001",
  id: 1,
});
response = client.exists_source(
  index: "my-index-000001",
  id: "1"
)
$resp = $client->existsSource([
    "index" => "my-index-000001",
    "id" => "1",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_source/1"
client.existsSource(e -> e
    .id("1")
    .index("my-index-000001")
);




Get multiple term vectors Generally available

POST /{index}/_mtermvectors

All methods and paths for this operation:

GET /_mtermvectors
POST /_mtermvectors
GET /{index}/_mtermvectors
POST /{index}/_mtermvectors

Get multiple term vectors with a single request. You can specify existing documents by index and ID or provide artificial documents in the body of the request. You can specify the index in the request body or request URI. The response contains a docs array with all the fetched termvectors. Each element has the structure provided by the termvectors API.

Artificial documents

You can also use mtermvectors to generate term vectors for artificial documents provided in the body of the request. The mapping used is determined by the specified _index.
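
For instance, a minimal Python sketch of the artificial-document form (same client as the examples below; the index name and text mirror the request example at the end of this section):

resp = client.mtermvectors(
    docs=[
        {
            "_index": "my-index-000001",           # the mapping of this index is used
            "doc": {"message": "test test test"}   # artificial document, not stored in the index
        }
    ],
)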

Required authorization

  • Index privileges: read

Path parameters

  • index string Required

    The name of the index that contains the documents.

Query parameters

  • ids array[string]

    A comma-separated list of document IDs. You must either define ids as a parameter or set "ids" or "docs" in the request body.

  • fields string | array[string]

    A comma-separated list or wildcard expressions of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.

  • field_statistics boolean

    If true, the response includes the document count, sum of document frequencies, and sum of total term frequencies.

  • offsets boolean

    If true, the response includes term offsets.

  • payloads boolean

    If true, the response includes term payloads.

  • positions boolean

    If true, the response includes term positions.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • routing string

    A custom value used to route operations to a specific shard.

  • term_statistics boolean

    If true, the response includes term frequency and document frequency.

  • version number

    If true, returns the document version as part of a hit.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

application/json

Body

  • docs array[object]

    An array of existing or artificial documents.

    Hide docs attributes Show docs attributes object
    • _id string

      The ID of the document.

    • _index string

      The index of the document.

    • doc object

      An artificial document (a document not present in the index) for which you want to retrieve term vectors.

    • fields string | array[string]

      Comma-separated list or wildcard expressions of fields to include in the statistics. Used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.

    • field_statistics boolean

      If true, the response includes the document count, sum of document frequencies, and sum of total term frequencies.

      Default value is true.

    • filter object

      Filter terms based on their tf-idf scores.

      Hide filter attributes Show filter attributes object
      • max_doc_freq number

        Ignore words which occur in more than this many docs. Defaults to unbounded.

      • max_num_terms number

        The maximum number of terms that must be returned per field.

        Default value is 25.

      • max_term_freq number

        Ignore words with more than this frequency in the source doc. It defaults to unbounded.

      • max_word_length number

        The maximum word length above which words will be ignored. Defaults to unbounded.

        Default value is 0.

      • min_doc_freq number

        Ignore terms which do not occur in at least this many docs.

        Default value is 1.

      • min_term_freq number

        Ignore words with less than this frequency in the source doc.

        Default value is 1.

      • min_word_length number

        The minimum word length below which words will be ignored.

        Default value is 0.

    • offsets boolean

      If true, the response includes term offsets.

      Default value is true.

    • payloads boolean

      If true, the response includes term payloads.

      Default value is true.

    • positions boolean

      If true, the response includes term positions.

      Default value is true.

    • routing string

      Custom value used to route operations to a specific shard.

    • term_statistics boolean

      If true, the response includes term frequency and document frequency.

      Default value is false.

    • version number

      If true, returns the document version as part of a hit.

    • version_type string

      Specific version type.

      Supported values include:

      • internal: Use internal versioning that starts at 1 and increments with each update or delete.
      • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
      • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
      • force: This option is deprecated because it can cause primary and replica shards to diverge.

      Values are internal, external, external_gte, or force.

  • ids array[string]

    A simplified syntax to specify documents by their ID if they're in the same index.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • docs array[object] Required
      Hide docs attributes Show docs attributes object
      • _id string
      • _index string Required
      • _version number
      • took number
      • found boolean
      • term_vectors object
        Hide term_vectors attribute Show term_vectors attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • field_statistics object
          • terms object Required
            Hide terms attribute Show terms attribute object
            • * object Additional properties
      • error object

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide error attributes Show error attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

POST /{index}/_mtermvectors
POST /my-index-000001/_mtermvectors
{
  "docs": [
      {
        "_id": "2",
        "fields": [
            "message"
        ],
        "term_statistics": true
      },
      {
        "_id": "1"
      }
  ]
}
resp = client.mtermvectors(
    index="my-index-000001",
    docs=[
        {
            "_id": "2",
            "fields": [
                "message"
            ],
            "term_statistics": True
        },
        {
            "_id": "1"
        }
    ],
)
const response = await client.mtermvectors({
  index: "my-index-000001",
  docs: [
    {
      _id: "2",
      fields: ["message"],
      term_statistics: true,
    },
    {
      _id: "1",
    },
  ],
});
response = client.mtermvectors(
  index: "my-index-000001",
  body: {
    "docs": [
      {
        "_id": "2",
        "fields": [
          "message"
        ],
        "term_statistics": true
      },
      {
        "_id": "1"
      }
    ]
  }
)
$resp = $client->mtermvectors([
    "index" => "my-index-000001",
    "body" => [
        "docs" => array(
            [
                "_id" => "2",
                "fields" => array(
                    "message",
                ),
                "term_statistics" => true,
            ],
            [
                "_id" => "1",
            ],
        ),
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"docs":[{"_id":"2","fields":["message"],"term_statistics":true},{"_id":"1"}]}' "$ELASTICSEARCH_URL/my-index-000001/_mtermvectors"
client.mtermvectors(m -> m
    .docs(List.of(MultiTermVectorsOperation.of(mu -> mu
            .id("2")
            .fields("message")
            .termStatistics(true)),MultiTermVectorsOperation.of(mu -> mu
            .id("1"))))
    .index("my-index-000001")
);
Request examples
Run `POST /my-index-000001/_mtermvectors`. When you specify an index in the request URI, the index does not need to be specified for each document in the request body.
{
  "docs": [
      {
        "_id": "2",
        "fields": [
            "message"
        ],
        "term_statistics": true
      },
      {
        "_id": "1"
      }
  ]
}
Run `POST /my-index-000001/_mtermvectors`. If all requested documents are in the same index and the parameters are the same, you can use a simplified syntax.
{
  "ids": [ "1", "2" ],
  "fields": [
    "message"
  ],
  "term_statistics": true
}
Run `POST /_mtermvectors` to generate term vectors for artificial documents provided in the body of the request. The mapping used is determined by the specified `_index`.
{
  "docs": [
      {
        "_index": "my-index-000001",
        "doc" : {
            "message" : "test test test"
        }
      },
      {
        "_index": "my-index-000001",
        "doc" : {
          "message" : "Another test ..."
        }
      }
  ]
}




Throttle a reindex operation Generally available; Added in 2.4.0

POST /_reindex/{task_id}/_rethrottle

Change the number of requests per second for a particular reindex operation. For example:

POST _reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1

Rethrottling that speeds up the query takes effect immediately. Rethrottling that slows down the query will take effect after completing the current batch. This behavior prevents scroll timeouts.

Path parameters

  • task_id string Required

    The task identifier, which can be found by using the tasks API.

Query parameters

  • requests_per_second number

    The throttle for this request in sub-requests per second. It can be either -1 to turn off throttling or any decimal number like 1.7 or 12 to throttle to that level.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • nodes object Required
POST /_reindex/{task_id}/_rethrottle
POST _reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1
resp = client.reindex_rethrottle(
    task_id="r1A2WoRbTwKZ516z6NEs5A:36619",
    requests_per_second="-1",
)
const response = await client.reindexRethrottle({
  task_id: "r1A2WoRbTwKZ516z6NEs5A:36619",
  requests_per_second: "-1",
});
response = client.reindex_rethrottle(
  task_id: "r1A2WoRbTwKZ516z6NEs5A:36619",
  requests_per_second: "-1"
)
$resp = $client->reindexRethrottle([
    "task_id" => "r1A2WoRbTwKZ516z6NEs5A:36619",
    "requests_per_second" => "-1",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1"
client.reindexRethrottle(r -> r
    .requestsPerSecond(-1.0F)
    .taskId("r1A2WoRbTwKZ516z6NEs5A:36619")
);

Get term vector information Generally available

POST /{index}/_termvectors/{id}

All methods and paths for this operation:

GET /{index}/_termvectors
POST /{index}/_termvectors
GET /{index}/_termvectors/{id}
POST /{index}/_termvectors/{id}

Get information and statistics about terms in the fields of a particular document.

You can retrieve term vectors for documents stored in the index or for artificial documents passed in the body of the request. You can specify the fields you are interested in through the fields parameter or by adding the fields to the request body. For example:

GET /my-index-000001/_termvectors/1?fields=message

Fields can be specified using wildcards, similar to the multi match query.

Term vectors are real-time by default, not near real-time. This can be changed by setting the realtime parameter to false.

You can request three types of values: term information, term statistics, and field statistics. By default, all term information and field statistics are returned for all fields, but term statistics are excluded.

Term information

  • term frequency in the field (always returned)
  • term positions (positions: true)
  • start and end offsets (offsets: true)
  • term payloads (payloads: true), as base64 encoded bytes

If the requested information wasn't stored in the index, it will be computed on the fly if possible. Additionally, term vectors can be computed for documents that do not exist in the index but are instead provided by the user.


Start and end offsets assume UTF-16 encoding is being used. If you want to use these offsets in order to get the original text that produced this token, you should make sure that the string you are taking a sub-string of is also encoded using UTF-16.
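
To make the UTF-16 caveat concrete, a short Python sketch (illustrative text and offsets, not part of the original examples): slice the UTF-16 encoding of the field value rather than the Python string when mapping offsets back to text.

field_value = "caf\u00e9 \U0001F600 test"   # sample text containing a non-BMP character
start_offset, end_offset = 8, 12            # hypothetical offsets returned for the token "test"

# Offsets count UTF-16 code units, so slice a UTF-16 encoding (2 bytes per code unit).
utf16 = field_value.encode("utf-16-le")
token = utf16[start_offset * 2 : end_offset * 2].decode("utf-16-le")
print(token)  # -> test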

Behaviour

The term and field statistics are not accurate. Deleted documents are not taken into account. The information is only retrieved for the shard the requested document resides in. The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context. By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected. Use routing only to hit a particular shard.

Required authorization

  • Index privileges: read

Path parameters

  • index string Required

    The name of the index that contains the document.

  • id string Required

    A unique identifier for the document.

Query parameters

  • fields string | array[string]

    A comma-separated list or wildcard expressions of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.

  • field_statistics boolean

    If true, the response includes:

    • The document count (how many documents contain this field).
    • The sum of document frequencies (the sum of document frequencies for all terms in this field).
    • The sum of total term frequencies (the sum of total term frequencies of each term in this field).
  • offsets boolean

    If true, the response includes term offsets.

  • payloads boolean

    If true, the response includes term payloads.

  • positions boolean

    If true, the response includes term positions.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • term_statistics boolean

    If true, the response includes:

    • The total term frequency (how often a term occurs in all documents).
    • The document frequency (the number of documents containing the current term).

    By default these values are not returned since term statistics can have a serious performance impact.

  • version number

    If true, returns the document version as part of a hit.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

application/json

Body

  • doc object

    An artificial document (a document not present in the index) for which you want to retrieve term vectors.

  • filter object

    Filter terms based on their tf-idf scores. This could be useful in order to find out a good characteristic vector of a document. This feature works in a similar manner to the second phase of the More Like This Query.

    Hide filter attributes Show filter attributes object
    • max_doc_freq number

      Ignore words which occur in more than this many docs. Defaults to unbounded.

    • max_num_terms number

      The maximum number of terms that must be returned per field.

      Default value is 25.

    • max_term_freq number

      Ignore words with more than this frequency in the source doc. It defaults to unbounded.

    • max_word_length number

      The maximum word length above which words will be ignored. Defaults to unbounded.

      Default value is 0.

    • min_doc_freq number

      Ignore terms which do not occur in at least this many docs.

      Default value is 1.

    • min_term_freq number

      Ignore words with less than this frequency in the source doc.

      Default value is 1.

    • min_word_length number

      The minimum word length below which words will be ignored.

      Default value is 0.

  • per_field_analyzer object

    Override the default per-field analyzer. This is useful in order to generate term vectors in any fashion, especially when using artificial documents. When providing an analyzer for a field that already stores term vectors, the term vectors will be regenerated.

    Hide per_field_analyzer attribute Show per_field_analyzer attribute object
    • * string Additional properties
  • fields array[string]

    A list of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.

  • field_statistics boolean

    If true, the response includes:

    • The document count (how many documents contain this field).
    • The sum of document frequencies (the sum of document frequencies for all terms in this field).
    • The sum of total term frequencies (the sum of total term frequencies of each term in this field).

    Default value is true.

  • offsets boolean

    If true, the response includes term offsets.

    Default value is true.

  • payloads boolean

    If true, the response includes term payloads.

    Default value is true.

  • positions boolean

    If true, the response includes term positions.

    Default value is true.

  • term_statistics boolean

    If true, the response includes:

    • The total term frequency (how often a term occurs in all documents).
    • The document frequency (the number of documents containing the current term).

    By default these values are not returned since term statistics can have a serious performance impact.

    Default value is false.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • version number

    If true, returns the document version as part of a hit.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • found boolean Required
    • _id string
    • _index string Required
    • term_vectors object
      Hide term_vectors attribute Show term_vectors attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • field_statistics object
          Hide field_statistics attributes Show field_statistics attributes object
          • doc_count number Required
          • sum_doc_freq number Required
          • sum_ttf number Required
        • terms object Required
          Hide terms attribute Show terms attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • doc_freq number
            • score number
            • term_freq number Required
            • tokens array[object]
            • ttf number
    • took number Required
    • _version number Required
POST /{index}/_termvectors/{id}
GET /my-index-000001/_termvectors/1
{
  "fields" : ["text"],
  "offsets" : true,
  "payloads" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
resp = client.termvectors(
    index="my-index-000001",
    id="1",
    fields=[
        "text"
    ],
    offsets=True,
    payloads=True,
    positions=True,
    term_statistics=True,
    field_statistics=True,
)
const response = await client.termvectors({
  index: "my-index-000001",
  id: 1,
  fields: ["text"],
  offsets: true,
  payloads: true,
  positions: true,
  term_statistics: true,
  field_statistics: true,
});
response = client.termvectors(
  index: "my-index-000001",
  id: "1",
  body: {
    "fields": [
      "text"
    ],
    "offsets": true,
    "payloads": true,
    "positions": true,
    "term_statistics": true,
    "field_statistics": true
  }
)
$resp = $client->termvectors([
    "index" => "my-index-000001",
    "id" => "1",
    "body" => [
        "fields" => array(
            "text",
        ),
        "offsets" => true,
        "payloads" => true,
        "positions" => true,
        "term_statistics" => true,
        "field_statistics" => true,
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"fields":["text"],"offsets":true,"payloads":true,"positions":true,"term_statistics":true,"field_statistics":true}' "$ELASTICSEARCH_URL/my-index-000001/_termvectors/1"
client.termvectors(t -> t
    .fieldStatistics(true)
    .fields("text")
    .id("1")
    .index("my-index-000001")
    .offsets(true)
    .payloads(true)
    .positions(true)
    .termStatistics(true)
);
Request examples
Run `GET /my-index-000001/_termvectors/1` to return all information and statistics for field `text` in document 1.
{
  "fields" : ["text"],
  "offsets" : true,
  "payloads" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
Run `GET /my-index-000001/_termvectors/1` to set per-field analyzers. A different analyzer than the one at the field may be provided by using the `per_field_analyzer` parameter.
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "test test test"
  },
  "fields": ["fullname"],
  "per_field_analyzer" : {
    "fullname": "keyword"
  }
}
Run `GET /imdb/_termvectors` to filter the terms returned based on their tf-idf scores. It returns the three most "interesting" keywords from the artificial document having the given "plot" field value. Notice that the keyword "Tony" or any stop words are not part of the response, as their tf-idf must be too low.
{
  "doc": {
    "plot": "When wealthy industrialist Tony Stark is forced to build an armored suit after a life-threatening incident, he ultimately decides to use its technology to fight against evil."
  },
  "term_statistics": true,
  "field_statistics": true,
  "positions": false,
  "offsets": false,
  "filter": {
    "max_num_terms": 3,
    "min_term_freq": 1,
    "min_doc_freq": 1
  }
}
Run `GET /my-index-000001/_termvectors/1`. Term vectors which are not explicitly stored in the index are automatically computed on the fly. This request returns all information and statistics for the fields in document 1, even though the terms haven't been explicitly stored in the index. Note that for the field text, the terms are not regenerated.
{
  "fields" : ["text", "some_field_without_term_vectors"],
  "offsets" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
Run `GET /my-index-000001/_termvectors`. Term vectors can be generated for artificial documents, that is for documents not present in the index. If dynamic mapping is turned on (default), the document fields not in the original mapping will be dynamically created.
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "test test test"
  }
}
Response examples (200)
A successful response from `GET /my-index-000001/_termvectors/1`.
{
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "found": true,
  "took": 6,
  "term_vectors": {
    "text": {
      "field_statistics": {
        "sum_doc_freq": 4,
        "doc_count": 2,
        "sum_ttf": 6
      },
      "terms": {
        "test": {
          "doc_freq": 2,
          "ttf": 4,
          "term_freq": 3,
          "tokens": [
            {
              "position": 0,
              "start_offset": 0,
              "end_offset": 4,
              "payload": "d29yZA=="
            },
            {
              "position": 1,
              "start_offset": 5,
              "end_offset": 9,
              "payload": "d29yZA=="
            },
            {
              "position": 2,
              "start_offset": 10,
              "end_offset": 14,
              "payload": "d29yZA=="
            }
          ]
        }
      }
    }
  }
}
A successful response from `GET /my-index-000001/_termvectors` with `per_field_analyzer` in the request body.
{
  "_index": "my-index-000001",
  "_version": 0,
  "found": true,
  "took": 6,
  "term_vectors": {
    "fullname": {
      "field_statistics": {
          "sum_doc_freq": 2,
          "doc_count": 4,
          "sum_ttf": 4
      },
      "terms": {
          "John Doe": {
            "term_freq": 1,
            "tokens": [
                {
                  "position": 0,
                  "start_offset": 0,
                  "end_offset": 8
                }
            ]
          }
      }
    }
  }
}
A successful response from `GET /my-index-000001/_termvectors` with a `filter` in the request body.
{
  "_index": "imdb",
  "_version": 0,
  "found": true,
  "term_vectors": {
      "plot": {
        "field_statistics": {
            "sum_doc_freq": 3384269,
            "doc_count": 176214,
            "sum_ttf": 3753460
        },
        "terms": {
            "armored": {
              "doc_freq": 27,
              "ttf": 27,
              "term_freq": 1,
              "score": 9.74725
            },
            "industrialist": {
              "doc_freq": 88,
              "ttf": 88,
              "term_freq": 1,
              "score": 8.590818
            },
            "stark": {
              "doc_freq": 44,
              "ttf": 47,
              "term_freq": 1,
              "score": 9.272792
            }
        }
      }
  }
}




Update documents Generally available; Added in 2.4.0

POST /{index}/_update_by_query

Updates documents that match the specified query. If no query is specified, performs an update on every document in the data stream or index without modifying the source, which is useful for picking up mapping changes.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:

  • read
  • index or write

You can specify the query criteria in the request URI or the request body using the same syntax as the search API.

When you submit an update by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and updates matching documents using internal versioning. When the versions match, the document is updated and the version number is incremented. If a document changes between the time that the snapshot is taken and the update operation is processed, it results in a version conflict and the operation fails. You can opt to count version conflicts instead of halting and returning by setting conflicts to proceed. Note that if you opt to count version conflicts, the operation could attempt to update more documents from the source than max_docs until it has successfully updated max_docs documents or it has gone through every document in the source query.

NOTE: Documents with a version equal to 0 cannot be updated using update by query because internal versioning does not support 0 as a valid version number.

While processing an update by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents. A bulk update request is performed for each batch of matching documents. Any query or update failures cause the update by query request to fail and the failures are shown in the response. Any update requests that completed successfully still stick; they are not rolled back.

Throttling update requests

To control the rate at which update by query issues batches of update operations, you can set requests_per_second to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set requests_per_second to -1 to turn off throttling.

Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by the requests_per_second and the time spent writing. By default the batch size is 1000, so if requests_per_second is set to 500:

target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds

Since the batch is issued as a single _bulk request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".
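
To make the arithmetic concrete, a small Python sketch of the padding calculation above, plus how the throttle is passed with the client (values taken from the example; requests_per_second is the query parameter documented below):

batch_size = 1000                 # default scroll batch size
requests_per_second = 500
write_time = 0.5                  # seconds spent on the _bulk request (example value)

target_time = batch_size / requests_per_second   # 2.0 seconds
wait_time = target_time - write_time             # 1.5 seconds of padding before the next batch

resp = client.update_by_query(
    index="my-index-000001",
    requests_per_second=requests_per_second,     # or -1 to turn throttling off
    query={"match_all": {}},                     # illustrative query
)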

Slicing

Update by query supports sliced scroll to parallelize the update process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.

Setting slices to auto chooses a reasonable number for most data streams and indices. This setting will use one slice per shard, up to a certain limit. If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards.

Adding slices to _update_by_query just automates the manual process of creating sub-requests, which means it has some quirks:

  • You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
  • Fetching the status of the task for the request with slices only contains the status of completed slices.
  • These sub-requests are individually addressable for things like cancellation and rethrottling.
  • Rethrottling the request with slices will rethrottle the unfinished sub-request proportionally.
  • Canceling the request with slices will cancel each sub-request.
  • Due to the nature of slices each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
  • Parameters like requests_per_second and max_docs on a request with slices are distributed proportionally to each sub-request. Combine that with the point above about distribution being uneven and you should conclude that using max_docs with slices might not result in exactly max_docs documents being updated.
  • Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time.

If you're slicing manually or otherwise tuning automatic slicing, keep in mind that:

  • Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many slices hurts performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.
  • Update performance scales linearly across available resources with the number of slices.

Whether query or update performance dominates the runtime depends on the documents being reindexed and cluster resources.
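
As a sketch of automatic slicing with the Python client (slices and wait_for_completion are the query parameters documented below; the index is illustrative):

resp = client.update_by_query(
    index="my-index-000001",
    slices="auto",              # one slice per shard, up to a limit
    conflicts="proceed",
    wait_for_completion=False,  # run as a task instead of blocking
)
task_id = resp["task"]          # use with the tasks API to monitor or cancel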

Update the document source

Update by query supports scripts to update the document source. As with the update API, you can set ctx.op to change the operation that is performed.

Set ctx.op = "noop" if your script decides that it doesn't have to make any changes. The update by query operation skips updating the document and increments the noop counter.

Set ctx.op = "delete" if your script decides that the document should be deleted. The update by query operation deletes the document and increments the deleted counter.

Update by query supports only index, noop, and delete. Setting ctx.op to anything else is an error. Setting any other field in ctx is an error. This API enables you to only modify the source of matching documents; you cannot move them.
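
For example, an illustrative Python sketch of a ctx.op script (same client as the examples below; the flagged and stale field names are purely illustrative):

resp = client.update_by_query(
    index="my-index-000001",
    conflicts="proceed",
    query={"term": {"user.id": "kimchy"}},
    script={
        "lang": "painless",
        "source": """
          if (ctx._source.flagged == true) {
            ctx.op = 'noop';           // counted in the noops counter
          } else if (ctx._source.stale == true) {
            ctx.op = 'delete';         // counted in the deleted counter
          } else {
            ctx._source.flagged = true;
          }
        """
    },
)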

Required authorization

  • Index privileges: read, write

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • analyzer string

    The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • conflicts string

    The preferred behavior when update by query hits version conflicts: abort or proceed.

    Supported values include:

    • abort: Stop reindexing if there are conflicts.
    • proceed: Continue reindexing even if there are conflicts.

    Values are abort or proceed.

  • default_operator string

    The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • from number

    Skips the specified number of documents.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • max_docs number

    The maximum number of documents to process. It defaults to all documents. When set to a value less than or equal to scroll_size, a scroll will not be used to retrieve the results for the operation.

  • pipeline string

    The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured it will always run, regardless of the value of this parameter.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • q string

    A query in the Lucene query string syntax.

  • refresh boolean

    If true, Elasticsearch refreshes affected shards to make the operation visible to search after the request completes. This is different than the update API's refresh parameter, which causes just the shard that received the request to be refreshed.

  • request_cache boolean

    If true, the request cache is used for this request. It defaults to the index-level setting.

  • requests_per_second number

    The throttle for this request in sub-requests per second.

  • routing string

    A custom value used to route operations to a specific shard.

  • scroll string

    The period to retain the search context for scrolling.

    External documentation
  • scroll_size number

    The size of the scroll request that powers the operation.

  • search_timeout string

    An explicit timeout for each search request. By default, there is no timeout.

    External documentation
  • search_type string

    The type of the search operation. Available options include query_then_fetch and dfs_query_then_fetch.

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • slices number | string

    The number of slices this task should be divided into.

    Value is auto.

  • sort array[string]

    A comma-separated list of <field>:<direction> pairs.

  • stats array[string]

    The specific tag of the request for logging and statistical purposes.

  • terminate_after number

    The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.

    IMPORTANT: Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.

  • timeout string

    The period each update request waits for the following operations: dynamic mapping updates, waiting for active shards. By default, it is one minute. This guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur.

    External documentation
  • version boolean

    If true, returns the document version as part of a hit.

  • version_type boolean

    Should the document increment the version number (internal) on hit or not (reindex)?

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The timeout parameter controls how long each write request waits for unavailable shards to become available. Both work exactly the way they work in the bulk API.

    Values are all or index-setting.

  • wait_for_completion boolean

    If true, the request blocks until the operation is complete. If false, Elasticsearch performs some preflight checks, launches the request, and returns a task ID that you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at .tasks/task/${taskId}.

application/json

Body

  • max_docs number

    The maximum number of documents to update.

  • query object

    The documents to update using the Query DSL.

    External documentation
  • script object

    The script to run to update the document source or metadata when updating.

    Hide script attributes Show script attributes object
    • source string

      The script source.

    • id string

      The id for a stored script.

    • params object

      Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

      Hide params attribute Show params attribute object
      • * object Additional properties
    • lang string

      Specifies the language the script is written in.

      Supported values include:

      • painless: Painless scripting language, purpose-built for Elasticsearch.
      • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
      • mustache: Mustache templated, used for templates.
      • java: Expert Java API
      Any of:

      Specifies the language the script is written in.

      Supported values include:

      • painless: Painless scripting language, purpose-built for Elasticsearch.
      • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
      • mustache: Mustache templated, used for templates.
      • java: Expert Java API

      Values are painless, expression, mustache, or java.

    • options object
      Hide options attribute Show options attribute object
      • * string Additional properties
  • slice object

    Slice the request manually using the provided slice ID and total number of slices.

    Hide slice attributes Show slice attributes object
    • field string

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • max number Required
  • conflicts string

    The preferred behavior when update by query hits version conflicts: abort or proceed.

    Supported values include:

    • abort: Stop reindexing if there are conflicts.
    • proceed: Continue reindexing even if there are conflicts.

    Values are abort or proceed.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • batches number

      The number of scroll responses pulled back by the update by query.

    • failures array[object]

      Array of failures if there were any unrecoverable errors during the process. If this is non-empty then the request ended because of those failures. Update by query is implemented using batches. Any failure causes the entire process to end, but all failures in the current batch are collected into the array. You can use the conflicts option to prevent reindex from ending when version conflicts occur.

      Hide failures attributes Show failures attributes object
      • cause object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide cause attributes Show cause attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • id string Required
      • index string Required
      • status number Required
    • noops number

      The number of documents that were ignored because the script used for the update by query returned a noop value for ctx.op.

    • deleted number

      The number of documents that were successfully deleted.

    • requests_per_second number

      The number of requests per second effectively run during the update by query.

    • retries object

      The number of retries attempted by update by query. bulk is the number of bulk actions retried. search is the number of search actions retried.

      Hide retries attributes Show retries attributes object
      • bulk number Required

        The number of bulk actions retried.

      • search number Required

        The number of search actions retried.

    • task string | number

    • timed_out boolean

      If true, some requests timed out during the update by query.

    • took number

      Time unit for milliseconds

    • total number

      The number of documents that were successfully processed.

    • updated number

      The number of documents that were successfully updated.

    • version_conflicts number

      The number of version conflicts that the update by query hit.

    • throttled string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      External documentation
    • throttled_millis number

      Time unit for milliseconds

    • throttled_until string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      External documentation
    • throttled_until_millis number

      Time unit for milliseconds

POST /{index}/_update_by_query
POST my-index-000001/_update_by_query?conflicts=proceed
{
  "query": { 
    "term": {
      "user.id": "kimchy"
    }
  }
}
resp = client.update_by_query(
    index="my-index-000001",
    conflicts="proceed",
    query={
        "term": {
            "user.id": "kimchy"
        }
    },
)
const response = await client.updateByQuery({
  index: "my-index-000001",
  conflicts: "proceed",
  query: {
    term: {
      "user.id": "kimchy",
    },
  },
});
response = client.update_by_query(
  index: "my-index-000001",
  conflicts: "proceed",
  body: {
    "query": {
      "term": {
        "user.id": "kimchy"
      }
    }
  }
)
$resp = $client->updateByQuery([
    "index" => "my-index-000001",
    "conflicts" => "proceed",
    "body" => [
        "query" => [
            "term" => [
                "user.id" => "kimchy",
            ],
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":{"term":{"user.id":"kimchy"}}}' "$ELASTICSEARCH_URL/my-index-000001/_update_by_query?conflicts=proceed"
client.updateByQuery(u -> u
    .conflicts(Conflicts.Proceed)
    .index("my-index-000001")
    .query(q -> q
        .term(t -> t
            .field("user.id")
            .value(FieldValue.of("kimchy"))
        )
    )
);
Request examples
Run `POST my-index-000001/_update_by_query?conflicts=proceed` to update documents that match a query.
{
  "query": { 
    "term": {
      "user.id": "kimchy"
    }
  }
}
Run `POST my-index-000001/_update_by_query` with a script to update the document source. It increments the `count` field for all documents with a `user.id` of `kimchy` in `my-index-000001`.
{
  "script": {
    "source": "ctx._source.count++",
    "lang": "painless"
  },
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  }
}
Run `POST my-index-000001/_update_by_query` to slice an update by query manually. Provide a slice ID and total number of slices to each request.
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}
Run `POST my-index-000001/_update_by_query?refresh&slices=5` to use automatic slicing. It automatically parallelizes using sliced scroll to slice on `_id`.
{
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}




Enrich





Create an enrich policy Generally available; Added in 7.5.0

PUT /_enrich/policy/{name}

Creates an enrich policy.

Path parameters

  • name string Required

    Name of the enrich policy to create or update.

Query parameters

application/json

Body Required

  • geo_match object Additional properties

    Matches enrich data to incoming documents based on a geo_shape query.

    Hide geo_match attributes Show geo_match attributes object
    • enrich_fields string | array[string] Required
    • indices string | array[string] Required
    • match_field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • name string
    • elasticsearch_version string
  • match object Additional properties

    Matches enrich data to incoming documents based on a term query.

    Hide match attributes Show match attributes object
    • enrich_fields string | array[string] Required
    • indices string | array[string] Required
    • match_field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • name string
    • elasticsearch_version string
  • range object Additional properties

    Matches a number, date, or IP address in incoming documents to a range in the enrich index based on a term query.

    Hide range attributes Show range attributes object
    • enrich_fields string | array[string] Required
    • indices string | array[string] Required
    • match_field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • name string
    • elasticsearch_version string

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_enrich/policy/{name}
PUT /_enrich/policy/postal_policy
{
  "geo_match": {
    "indices": "postal_codes",
    "match_field": "location",
    "enrich_fields": [ "location", "postal_code" ]
  }
}
resp = client.enrich.put_policy(
    name="postal_policy",
    geo_match={
        "indices": "postal_codes",
        "match_field": "location",
        "enrich_fields": [
            "location",
            "postal_code"
        ]
    },
)
const response = await client.enrich.putPolicy({
  name: "postal_policy",
  geo_match: {
    indices: "postal_codes",
    match_field: "location",
    enrich_fields: ["location", "postal_code"],
  },
});
response = client.enrich.put_policy(
  name: "postal_policy",
  body: {
    "geo_match": {
      "indices": "postal_codes",
      "match_field": "location",
      "enrich_fields": [
        "location",
        "postal_code"
      ]
    }
  }
)
$resp = $client->enrich()->putPolicy([
    "name" => "postal_policy",
    "body" => [
        "geo_match" => [
            "indices" => "postal_codes",
            "match_field" => "location",
            "enrich_fields" => array(
                "location",
                "postal_code",
            ),
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"geo_match":{"indices":"postal_codes","match_field":"location","enrich_fields":["location","postal_code"]}}' "$ELASTICSEARCH_URL/_enrich/policy/postal_policy"
client.enrich().putPolicy(p -> p
    .geoMatch(g -> g
        .enrichFields(List.of("location","postal_code"))
        .indices("postal_codes")
        .matchField("location")
    )
    .name("postal_policy")
);
Request example
An example body for a `PUT /_enrich/policy/postal_policy` request.
{
  "geo_match": {
    "indices": "postal_codes",
    "match_field": "location",
    "enrich_fields": [ "location", "postal_code" ]
  }
}
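The body also accepts a `match` policy type for term-based matching. A minimal Python sketch that creates one; the `users` source index and its field names are illustrative assumptions.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # placeholder connection

# Term-based enrich policy: incoming documents are matched against the assumed
# `users` source index on `email`, and the listed enrich_fields are copied in.
resp = client.enrich.put_policy(
    name="users-policy",
    match={
        "indices": "users",
        "match_field": "email",
        "enrich_fields": ["first_name", "last_name", "city"],
    },
)
print(resp["acknowledged"])  # always true on success, per the response schema above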

Delete an enrich policy Generally available; Added in 7.5.0

DELETE /_enrich/policy/{name}

Deletes an existing enrich policy and its enrich index.

Path parameters

  • name string Required

    Enrich policy to delete.

Query parameters

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_enrich/policy/{name}
DELETE /_enrich/policy/my-policy
resp = client.enrich.delete_policy(
    name="my-policy",
)
const response = await client.enrich.deletePolicy({
  name: "my-policy",
});
response = client.enrich.delete_policy(
  name: "my-policy"
)
$resp = $client->enrich()->deletePolicy([
    "name" => "my-policy",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/policy/my-policy"
client.enrich().deletePolicy(d -> d
    .name("my-policy")
);

Run an enrich policy Generally available; Added in 7.5.0

PUT /_enrich/policy/{name}/_execute

Create the enrich index for an existing enrich policy.

Path parameters

  • name string Required

    Enrich policy to execute.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    External documentation
  • wait_for_completion boolean

    If true, the request blocks other enrich policy execution requests until complete.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • status object
      Hide status attributes Show status attributes object
      • phase string Required

        Values are SCHEDULED, RUNNING, COMPLETE, FAILED, or CANCELLED.

      • step string
    • task string | number

PUT /_enrich/policy/{name}/_execute
PUT /_enrich/policy/my-policy/_execute?wait_for_completion=false
resp = client.enrich.execute_policy(
    name="my-policy",
    wait_for_completion=False,
)
const response = await client.enrich.executePolicy({
  name: "my-policy",
  wait_for_completion: "false",
});
response = client.enrich.execute_policy(
  name: "my-policy",
  wait_for_completion: "false"
)
$resp = $client->enrich()->executePolicy([
    "name" => "my-policy",
    "wait_for_completion" => "false",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/policy/my-policy/_execute?wait_for_completion=false"
client.enrich().executePolicy(e -> e
    .name("my-policy")
    .waitForCompletion(false)
);
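When `wait_for_completion` is false, the call returns before the enrich index is built and the response carries a `task` reference rather than a completed `status`. A minimal Python sketch of that flow, reusing the policy name from the example above.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # placeholder connection

# Start building the enrich index without blocking other policy executions.
resp = client.enrich.execute_policy(
    name="my-policy",
    wait_for_completion=False,
)
# Per the response schema above, a non-blocking execution returns a task
# identifier that can be tracked instead of a completed status object.
print(resp.get("task"))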




EQL

Event Query Language (EQL) is a query language for event-based time series data, such as logs, metrics, and traces.

Learn more about EQL search

Get async EQL search results Generally available; Added in 7.9.0

GET /_eql/search/{id}

Get the current status and available results for an async EQL search or a stored synchronous EQL search.

Path parameters

  • id string Required

    Identifier for the search.

Query parameters

  • keep_alive string

    Period for which the search and its results are stored on the cluster. Defaults to the keep_alive value set by the search’s EQL search API request.

    External documentation
  • wait_for_completion_timeout string

    Timeout duration to wait for the request to finish. Defaults to no timeout, meaning the request waits for complete search results.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      Identifier for the search.

    • is_partial boolean

      If true, the response does not contain complete search results.

    • is_running boolean

      If true, the search request is still executing.

    • took number

      Time in milliseconds.

    • timed_out boolean

      If true, the request timed out before completion.

    • hits object Required

      Contains matching events and sequences. Also contains related metadata.

      Hide hits attributes Show hits attributes object
      • total object

        Metadata about the number of matching events or sequences.

        Hide total attributes Show total attributes object
        • relation string Required

          Supported values include:

          • eq: Accurate
          • gte: Lower bound, including returned events or sequences

          Values are eq or gte.

        • value number Required
      • events array[object]

        Contains events matching the query. Each object represents a matching event.

        Hide events attributes Show events attributes object
        • _index string Required

          Name of the index containing the event.

        • _id string Required

          Unique identifier for the event. This ID is only unique within the index.

        • _source object Required

          Original JSON body passed for the event at index time.

        • missing boolean

          Set to true for events in a timespan-constrained sequence that do not meet a given condition.

        • fields object
          Hide fields attribute Show fields attribute object
          • * array[object] Additional properties
      • sequences array[object]

        Contains event sequences matching the query. Each object represents a matching sequence. This parameter is only returned for EQL queries containing a sequence.

        Hide sequences attributes Show sequences attributes object
        • events array[object] Required

          Contains events matching the query. Each object represents a matching event.

        • join_keys array[object]

          Shared field values used to constrain matches in the sequence. These are defined using the by keyword in the EQL query syntax.

    • shard_failures array[object]

      Contains information about shard failures (if any) when allow_partial_search_results=true.

      Hide shard_failures attributes Show shard_failures attributes object
      • index string
      • node string
      • reason object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide reason attributes Show reason attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • shard number
      • status string
      • primary boolean
GET /_eql/search/{id}
GET /_eql/search/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?wait_for_completion_timeout=2s
resp = client.eql.get(
    id="FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
    wait_for_completion_timeout="2s",
)
const response = await client.eql.get({
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
  wait_for_completion_timeout: "2s",
});
response = client.eql.get(
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
  wait_for_completion_timeout: "2s"
)
$resp = $client->eql()->get([
    "id" => "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
    "wait_for_completion_timeout" => "2s",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_eql/search/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?wait_for_completion_timeout=2s"
client.eql().get(g -> g
    .id("FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=")
    .waitForCompletionTimeout(w -> w
        .time("2s")
    )
);
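A hedged Python sketch of the full async round trip: submit an EQL search that is kept on the cluster, then poll this endpoint until `is_running` is false. The index and query reuse the EQL search examples later in this section; the polling interval is arbitrary.
import time
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # placeholder connection

# Submit the search and return immediately; keep the results on the cluster.
submit = client.eql.search(
    index="my-data-stream",
    query='process where process.name == "cmd.exe"',
    wait_for_completion_timeout="0s",
    keep_on_completion=True,
    keep_alive="1d",
)

search_id = submit.get("id")
if search_id is None:
    result = submit  # the search finished within the initial timeout
else:
    while True:
        # Each poll waits up to 2s for the search to finish before returning.
        result = client.eql.get(id=search_id, wait_for_completion_timeout="2s")
        if not result.get("is_running"):
            break
        time.sleep(1)

print(result["hits"].get("total"))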








Get EQL search results Generally available; Added in 7.9.0

POST /{index}/_eql/search

All methods and paths for this operation:

GET /{index}/_eql/search

POST /{index}/_eql/search

Returns search results for an Event Query Language (EQL) query. EQL assumes each document in a data stream or index corresponds to an event.

External documentation

Path parameters

  • index string | array[string] Required

    The name of the index to scope the operation

Query parameters

  • allow_no_indices boolean

    Whether to ignore a wildcard indices expression that resolves into no concrete indices. (This includes the _all string or when no indices have been specified.)

  • allow_partial_search_results boolean

    If true, returns partial results if there are shard failures. If false, returns an error with no partial results.

  • allow_partial_sequence_results boolean

    If true, sequence queries will return partial results in case of shard failures. If false, they will return no results at all. This flag has effect only if allow_partial_search_results is true.

  • expand_wildcards string | array[string]

    Whether to expand wildcard expressions to concrete indices that are open, closed, or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ccs_minimize_roundtrips boolean

    Indicates whether network round-trips should be minimized as part of cross-cluster search request execution.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • keep_alive string

    Period for which the search and its results are stored on the cluster.

    External documentation
  • keep_on_completion boolean

    If true, the search and its results are stored on the cluster.

  • wait_for_completion_timeout string

    Timeout duration to wait for the request to finish. Defaults to no timeout, meaning the request waits for complete search results.

    External documentation
application/json

Body Required

  • query string Required

    EQL query you wish to run.

  • case_sensitive boolean
  • event_category_field string

    Field containing the event classification, such as process, file, or network.

  • tiebreaker_field string

    Field used to sort hits with the same timestamp in ascending order

  • timestamp_field string

    Field containing the event timestamp. Defaults to "@timestamp".

  • fetch_size number

    Maximum number of events to search at a time for sequence queries.

  • filter object | array[object]

    Query, written in Query DSL, used to filter the events on which the EQL query runs.

    One of:

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • keep_alive string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    External documentation
  • keep_on_completion boolean
  • wait_for_completion_timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    External documentation
  • allow_partial_search_results boolean

    Allow query execution also in case of shard failures. If true, the query will keep running and will return results based on the available shards. For sequences, the behavior can be further refined using allow_partial_sequence_results

    Default value is true.

  • allow_partial_sequence_results boolean

    This flag applies only to sequences and has effect only if allow_partial_search_results=true. If true, the sequence query will return results based on the available shards, ignoring the others. If false, the sequence query will return successfully, but will always have empty results.

    Default value is false.

  • size number

    For basic queries, the maximum number of matching events to return. Defaults to 10.

  • fields object | array[object]

    Array of wildcard (*) patterns. The response returns values for field names matching these patterns in the fields property of each hit.

    One of:

    A reference to a field with formatting instructions on how to return the value

    Hide attributes Show attributes
    • field string Required

      A wildcard pattern. The request returns values for field names matching this pattern.

    • format string

      The format in which the values are returned.

    • include_unmapped boolean
  • result_position string

    Supported values include:

    • tail: Return the most recent matches, similar to the Unix tail command.
    • head: Return the earliest matches, similar to the Unix head command.

    Values are tail or head.

  • runtime_mappings object
    Hide runtime_mappings attribute Show runtime_mappings attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        For type lookup

      • target_field string

        For type lookup

      • target_index string

        For type lookup

      • script object

        Painless script executed at query time.

        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang
        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Field type, which can be boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • max_samples_per_key number

    By default, the response of a sample query contains up to 10 samples, with one sample per unique set of join keys. Use the size parameter to get a smaller or larger set of samples. To retrieve more than one sample per set of join keys, use the max_samples_per_key parameter. Pipes are not supported for sample queries.

    Default value is 1.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      Identifier for the search.

    • is_partial boolean

      If true, the response does not contain complete search results.

    • is_running boolean

      If true, the search request is still executing.

    • took number

      Time in milliseconds.

    • timed_out boolean

      If true, the request timed out before completion.

    • hits object Required

      Contains matching events and sequences. Also contains related metadata.

      Hide hits attributes Show hits attributes object
      • total object

        Metadata about the number of matching events or sequences.

        Hide total attributes Show total attributes object
        • relation string Required

          Supported values include:

          • eq: Accurate
          • gte: Lower bound, including returned events or sequences

          Values are eq or gte.

        • value number Required
      • events array[object]

        Contains events matching the query. Each object represents a matching event.

        Hide events attributes Show events attributes object
        • _index string Required

          Name of the index containing the event.

        • _id string Required

          Unique identifier for the event. This ID is only unique within the index.

        • _source object Required

          Original JSON body passed for the event at index time.

        • missing boolean

          Set to true for events in a timespan-constrained sequence that do not meet a given condition.

        • fields object
          Hide fields attribute Show fields attribute object
          • * array[object] Additional properties
      • sequences array[object]

        Contains event sequences matching the query. Each object represents a matching sequence. This parameter is only returned for EQL queries containing a sequence.

        Hide sequences attributes Show sequences attributes object
        • events array[object] Required

          Contains events matching the query. Each object represents a matching event.

        • join_keys array[object]

          Shared field values used to constrain matches in the sequence. These are defined using the by keyword in the EQL query syntax.

    • shard_failures array[object]

      Contains information about shard failures (if any) when allow_partial_search_results=true.

      Hide shard_failures attributes Show shard_failures attributes object
      • index string
      • node string
      • reason object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide reason attributes Show reason attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • shard number
      • status string
      • primary boolean
POST /{index}/_eql/search
GET /my-data-stream/_eql/search
{
  "query": """
    process where (process.name == "cmd.exe" and process.pid != 2013)
  """
}
resp = client.eql.search(
    index="my-data-stream",
    query="\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  ",
)
const response = await client.eql.search({
  index: "my-data-stream",
  query:
    '\n    process where (process.name == "cmd.exe" and process.pid != 2013)\n  ',
});
response = client.eql.search(
  index: "my-data-stream",
  body: {
    "query": "\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  "
  }
)
$resp = $client->eql()->search([
    "index" => "my-data-stream",
    "body" => [
        "query" => "\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  ",
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":"\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  "}' "$ELASTICSEARCH_URL/my-data-stream/_eql/search"
client.eql().search(s -> s
    .index("my-data-stream")
    .query(" process where (process.name == \"cmd.exe\" and process.pid != 2013) ")
);
Request examples
Run `GET /my-data-stream/_eql/search` to search for events that have a `process.name` of `cmd.exe` and a `process.pid` other than `2013`.
{
  "query": """
    process where (process.name == "cmd.exe" and process.pid != 2013)
  """
}
Run `GET /my-data-stream/_eql/search` to search for a sequence of events. The sequence starts with an event with an `event.category` of `file`, a `file.name` of `cmd.exe`, and a `process.pid` other than `2013`. It is followed by an event with an `event.category` of `process` and a `process.executable` that contains the substring `regsvr32`. These events must also share the same `process.pid` value.
{
  "query": """
    sequence by process.pid
      [ file where file.name == "cmd.exe" and process.pid != 2013 ]
      [ process where stringContains(process.executable, "regsvr32") ]
  """
}
Response examples (200)
{
  "is_partial": false,
  "is_running": false,
  "took": 6,
  "timed_out": false,
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "sequences": [
      {
        "join_keys": [
          2012
        ],
        "events": [
          {
            "_index": ".ds-my-data-stream-2099.12.07-000001",
            "_id": "AtOJ4UjUBAAx3XR5kcCM",
            "_source": {
              "@timestamp": "2099-12-06T11:04:07.000Z",
              "event": {
                "category": "file",
                "id": "dGCHwoeS",
                "sequence": 2
              },
              "file": {
                "accessed": "2099-12-07T11:07:08.000Z",
                "name": "cmd.exe",
                "path": "C:\\Windows\\System32\\cmd.exe",
                "type": "file",
                "size": 16384
              },
              "process": {
                "pid": 2012,
                "name": "cmd.exe",
                "executable": "C:\\Windows\\System32\\cmd.exe"
              }
            }
          },
          {
            "_index": ".ds-my-data-stream-2099.12.07-000001",
            "_id": "OQmfCaduce8zoHT93o4H",
            "_source": {
              "@timestamp": "2099-12-07T11:07:09.000Z",
              "event": {
                "category": "process",
                "id": "aR3NWVOs",
                "sequence": 4
              },
              "process": {
                "pid": 2012,
                "name": "regsvr32.exe",
                "command_line": "regsvr32.exe  /s /u /i:https://...RegSvr32.sct scrobj.dll",
                "executable": "C:\\Windows\\System32\\regsvr32.exe"
              }
            }
          }
        ]
      }
    ]
  }
}
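Several of the body parameters documented above (`filter`, `fields`, `size`, `result_position`) can be combined in a single request. A hedged Python sketch; the timestamp range and wildcard field patterns are illustrative assumptions.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # placeholder connection

resp = client.eql.search(
    index="my-data-stream",
    query='process where process.name == "cmd.exe"',
    # Query DSL filter narrowing the events the EQL query runs on (assumed range).
    filter={"range": {"@timestamp": {"gte": "now-1d/d"}}},
    # Return formatted values for matching patterns in each hit's `fields` property.
    fields=[
        {"field": "@timestamp", "format": "strict_date_optional_time"},
        {"field": "process.*", "include_unmapped": True},
    ],
    size=20,                 # maximum number of matching events for a basic query
    result_position="head",  # earliest matches first
)

for event in resp["hits"].get("events", []):
    print(event["_id"], event["_source"].get("process", {}).get("name"))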

Run an async ES|QL query Generally available; Added in 8.13.0

POST /_query/async

Asynchronously run an ES|QL (Elasticsearch query language) query, monitor its progress, and retrieve results when they become available.

The API accepts the same parameters and request body as the synchronous query API, along with additional async related properties.

Required authorization

  • Index privileges: read
External documentation

Query parameters

  • allow_partial_results boolean

    If true, partial results will be returned if there are shard failures, but the query can continue to execute on other clusters and shards. If false, the query will fail if there are any failures.

    To override the default behavior, you can set the esql.query.allow_partial_results cluster setting to false.

  • delimiter string

    The character to use between values within a CSV row. It is valid only for the CSV format.

  • drop_null_columns boolean

    Indicates whether columns that are entirely null will be removed from the columns and values portion of the results. If true, the response will include an extra section under the name all_columns which has the name of all the columns.

  • format string

    A short version of the Accept header, e.g. json, yaml.

    csv, tsv, and txt formats will return results in a tabular format, excluding other metadata fields from the response.

    For async requests, nothing will be returned if the async query doesn't finish within the timeout. The query ID and running status are available in the X-Elasticsearch-Async-Id and X-Elasticsearch-Async-Is-Running HTTP headers of the response, respectively.

    Values are csv, json, tsv, txt, yaml, cbor, smile, or arrow.

application/json

Body Required

  • columnar boolean

    By default, ES|QL returns results as rows. For example, FROM returns each individual document as one row. For the JSON, YAML, CBOR and smile formats, ES|QL can return the results in a columnar fashion where one row represents all the values of a certain column in the results.

  • filter object

    Specify a Query DSL query in the filter parameter to filter the set of documents that an ES|QL query runs on.

    External documentation
  • locale string
  • params array[number | string | boolean | null | object]

    To avoid any attempt at hacking or code injection, pass the values in a separate list of parameters rather than embedding them in the query string. Use question mark placeholders (?) in the query string for each of the parameters.

    A field value.

  • profile boolean

    If provided and true the response will include an extra profile object with information on how the query was executed. This information is for human debugging and its format can change at any time but it can give some insight into the performance of each part of the query.

  • query string Required

    The ES|QL query API accepts an ES|QL query string in the query parameter, runs it, and returns the results.

  • tables object

    Tables to use with the LOOKUP operation. The top level key is the table name and the next level key is the column name.

    Hide tables attribute Show tables attribute object
  • include_ccs_metadata boolean

    When set to true and performing a cross-cluster query, the response will include an extra _clusters object with information about the clusters that participated in the search along with info such as shards count.

    Default value is false.

  • wait_for_completion_timeout string

    The period to wait for the request to finish. By default, the request waits for 1 second for the query results. If the query completes during this period, results are returned. Otherwise, a query ID is returned that can later be used to retrieve the results.

    External documentation
  • keep_alive string

    The period for which the query and its results are stored in the cluster. The default period is five days. When this period expires, the query and its results are deleted, even if the query is still ongoing. If the keep_on_completion parameter is false, Elasticsearch only stores async queries that do not complete within the period set by the wait_for_completion_timeout parameter, regardless of this value.

    External documentation
  • keep_on_completion boolean

    Indicates whether the query and its results are stored in the cluster. If false, the query and its results are stored in the cluster only if the request does not complete during the period set by the wait_for_completion_timeout parameter.

    Default value is false.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number

      Time in milliseconds.

    • is_partial boolean
    • all_columns array[object]
      Hide all_columns attributes Show all_columns attributes object
      • name string Required
      • type string Required
    • columns array[object] Required
      Hide columns attributes Show columns attributes object
      • name string Required
      • type string Required
    • values array[array] Required
    • _clusters object

      Cross-cluster search information. Present if include_ccs_metadata was true in the request and a cross-cluster search was performed.

      Hide _clusters attributes Show _clusters attributes object
      • total number Required
      • successful number Required
      • running number Required
      • skipped number Required
      • partial number Required
      • failed number Required
      • details object Required
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • _shards object
    • profile object

      Profiling information. Present if profile was true in the request. The contents of this field are currently unstable.

POST /_query/async
POST /_query/async
{
  "query": """
    FROM library,remote-*:library
    | EVAL year = DATE_TRUNC(1 YEARS, release_date)
    | STATS MAX(page_count) BY year
    | SORT year
    | LIMIT 5
  """,
  "wait_for_completion_timeout": "2s",
  "include_ccs_metadata": true
}
resp = client.esql.async_query(
    query="\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
    wait_for_completion_timeout="2s",
    include_ccs_metadata=True,
)
const response = await client.esql.asyncQuery({
  query:
    "\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
  wait_for_completion_timeout: "2s",
  include_ccs_metadata: true,
});
response = client.esql.async_query(
  body: {
    "query": "\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
    "wait_for_completion_timeout": "2s",
    "include_ccs_metadata": true
  }
)
$resp = $client->esql()->asyncQuery([
    "body" => [
        "query" => "\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
        "wait_for_completion_timeout" => "2s",
        "include_ccs_metadata" => true,
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":"\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ","wait_for_completion_timeout":"2s","include_ccs_metadata":true}' "$ELASTICSEARCH_URL/_query/async"
Request example
{
  "query": """
    FROM library,remote-*:library
    | EVAL year = DATE_TRUNC(1 YEARS, release_date)
    | STATS MAX(page_count) BY year
    | SORT year
    | LIMIT 5
  """,
  "wait_for_completion_timeout": "2s",
  "include_ccs_metadata": true
}
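Once submitted, the query identifier can be used to fetch results later through the companion get-results endpoint (GET /_query/async/{id}). A hedged Python sketch, assuming the client exposes that endpoint as esql.async_query_get; the query body reuses the request example above.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # placeholder connection

submit = client.esql.async_query(
    query="""
    FROM library,remote-*:library
    | EVAL year = DATE_TRUNC(1 YEARS, release_date)
    | STATS MAX(page_count) BY year
    | SORT year
    | LIMIT 5
    """,
    wait_for_completion_timeout="2s",
    include_ccs_metadata=True,
)

# The query ID is exposed in the X-Elasticsearch-Async-Id response header when
# the query is still running (and, for the JSON format, as an `id` body field).
query_id = submit.meta.headers.get("X-Elasticsearch-Async-Id") or submit.get("id")

if query_id:
    # Assumed helper for GET /_query/async/{id}; waits up to 30s for results.
    result = client.esql.async_query_get(id=query_id, wait_for_completion_timeout="30s")
else:
    result = submit  # the query finished within the initial 2s timeout

print(result["columns"], result["values"])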




Delete an async ES|QL query Generally available; Added in 8.13.0

DELETE /_query/async/{id}

If the query is still running, it is cancelled. Otherwise, the stored results are deleted.

If the Elasticsearch security features are enabled, only the following users can use this API to delete a query:

  • The authenticated user that submitted the original query request
  • Users with the cancel_task cluster privilege
External documentation

Path parameters

  • id string Required

    The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with the keep_on_completion parameter set to true.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_query/async/{id}
DELETE /_query/async/FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=
resp = client.esql.async_query_delete(
    id="FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=",
)
const response = await client.esql.asyncQueryDelete({
  id: "FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=",
});
response = client.esql.async_query_delete(
  id: "FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI="
)
$resp = $client->esql()->asyncQueryDelete([
    "id" => "FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_query/async/FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI="

Stop async ES|QL query Generally available; Added in 8.18.0

POST /_query/async/{id}/stop

This API interrupts the query execution and returns the results so far. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can stop it.

External documentation

Path parameters

  • id string Required

    The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with the keep_on_completion parameter set to true.

Query parameters

  • drop_null_columns boolean

    Indicates whether columns that are entirely null will be removed from the columns and values portion of the results. If true, the response will include an extra section under the name all_columns which has the name of all the columns.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number

      Time in milliseconds.

    • is_partial boolean
    • all_columns array[object]
      Hide all_columns attributes Show all_columns attributes object
      • name string Required
      • type string Required
    • columns array[object] Required
      Hide columns attributes Show columns attributes object
      • name string Required
      • type string Required
    • values array[array] Required
    • _clusters object

      Cross-cluster search information. Present if include_ccs_metadata was true in the request and a cross-cluster search was performed.

      Hide _clusters attributes Show _clusters attributes object
      • total number Required
      • successful number Required
      • running number Required
      • skipped number Required
      • partial number Required
      • failed number Required
      • details object Required
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • _shards object
    • profile object

      Profiling information. Present if profile was true in the request. The contents of this field are currently unstable.

POST /_query/async/{id}/stop
POST /_query/async/FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=/stop
resp = client.esql.async_query_stop(
    id="FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=",
)
const response = await client.esql.asyncQueryStop({
  id: "FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=",
});
response = client.esql.async_query_stop(
  id: "FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM="
)
$resp = $client->esql()->asyncQueryStop([
    "id" => "FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_query/async/FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=/stop"

Run an ES|QL query Generally available; Added in 8.11.0

POST /_query

Get search results for an ES|QL (Elasticsearch query language) query.

External documentation

Query parameters

  • format string

    A short version of the Accept header, e.g. json, yaml.

    csv, tsv, and txt formats will return results in a tabular format, excluding other metadata fields from the response.

    Values are csv, json, tsv, txt, yaml, cbor, smile, or arrow.

  • delimiter string

    The character to use between values within a CSV row. Only valid for the CSV format.

  • drop_null_columns boolean

    Indicates whether columns that are entirely null are removed from the columns and values portion of the results. Defaults to false. If true, the response includes an extra section under the name all_columns which has the name of all the columns.

  • allow_partial_results boolean

    If true, partial results will be returned if there are shard failures, but the query can continue to execute on other clusters and shards. If false, the query will fail if there are any failures.

    To override the default behavior, you can set the esql.query.allow_partial_results cluster setting to false.

application/json

Body Required

  • columnar boolean

    By default, ES|QL returns results as rows. For example, FROM returns each individual document as one row. For the JSON, YAML, CBOR and smile formats, ES|QL can return the results in a columnar fashion where one row represents all the values of a certain column in the results.

  • filter object

    Specify a Query DSL query in the filter parameter to filter the set of documents that an ES|QL query runs on.

    External documentation
  • locale string
  • params array[number | string | boolean | null | object]

    To avoid any attempt at hacking or code injection, pass the values in a separate list of parameters rather than embedding them in the query string. Use question mark placeholders (?) in the query string for each of the parameters, as shown in the parameterized query sketch after the request example below.

    A field value.

  • profile boolean

    If provided and true the response will include an extra profile object with information on how the query was executed. This information is for human debugging and its format can change at any time but it can give some insight into the performance of each part of the query.

  • query string Required

    The ES|QL query API accepts an ES|QL query string in the query parameter, runs it, and returns the results.

  • tables object

    Tables to use with the LOOKUP operation. The top level key is the table name and the next level key is the column name.

    Hide tables attribute Show tables attribute object
  • include_ccs_metadata boolean

    When set to true and performing a cross-cluster query, the response will include an extra _clusters object with information about the clusters that participated in the search along with info such as shards count.

    Default value is false.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number

      Time in milliseconds.

    • is_partial boolean
    • all_columns array[object]
      Hide all_columns attributes Show all_columns attributes object
      • name string Required
      • type string Required
    • columns array[object] Required
      Hide columns attributes Show columns attributes object
      • name string Required
      • type string Required
    • values array[array] Required
    • _clusters object

      Cross-cluster search information. Present if include_ccs_metadata was true in the request and a cross-cluster search was performed.

      Hide _clusters attributes Show _clusters attributes object
      • total number Required
      • successful number Required
      • running number Required
      • skipped number Required
      • partial number Required
      • failed number Required
      • details object Required
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • _shards object
    • profile object

      Profiling information. Present if profile was true in the request. The contents of this field are currently unstable.

POST /_query
POST /_query
{
  "query": """
    FROM library,remote-*:library
    | EVAL year = DATE_TRUNC(1 YEARS, release_date)
    | STATS MAX(page_count) BY year
    | SORT year
    | LIMIT 5
  """,
  "include_ccs_metadata": true
}
resp = client.esql.query(
    query="\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
    include_ccs_metadata=True,
)
const response = await client.esql.query({
  query:
    "\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
  include_ccs_metadata: true,
});
response = client.esql.query(
  body: {
    "query": "\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
    "include_ccs_metadata": true
  }
)
$resp = $client->esql()->query([
    "body" => [
        "query" => "\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ",
        "include_ccs_metadata" => true,
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":"\n    FROM library,remote-*:library\n    | EVAL year = DATE_TRUNC(1 YEARS, release_date)\n    | STATS MAX(page_count) BY year\n    | SORT year\n    | LIMIT 5\n  ","include_ccs_metadata":true}' "$ELASTICSEARCH_URL/_query"
client.esql().query(q -> q
    .includeCcsMetadata(true)
    .query(" FROM library,remote-*:library | EVAL year = DATE_TRUNC(1 YEARS, release_date) | STATS MAX(page_count) BY year | SORT year | LIMIT 5 ")
);
Request example
Run `POST /_query` to get results for an ES|QL query.
{
  "query": """
    FROM library,remote-*:library
    | EVAL year = DATE_TRUNC(1 YEARS, release_date)
    | STATS MAX(page_count) BY year
    | SORT year
    | LIMIT 5
  """,
  "include_ccs_metadata": true
}
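As noted for the `params` body field above, values can be passed as a separate list and referenced with question mark placeholders. A hedged Python sketch, assuming the client exposes `params` as a keyword argument; the `library` index comes from the example above and its field names are illustrative.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # placeholder connection

# Each `?` placeholder is bound, in order, to an entry in `params`, keeping
# user-supplied values out of the query text itself.
resp = client.esql.query(
    query="""
    FROM library
    | WHERE page_count > ? AND author == ?
    | SORT page_count DESC
    | LIMIT 5
    """,
    params=[300, "Frank Herbert"],  # assumed field values for illustration
)

for row in resp["values"]:
    print(row)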

Features

The feature APIs enable you to introspect and manage features provided by Elasticsearch and Elasticsearch plugins.

Get the features Generally available; Added in 7.12.0

GET /_features

Get a list of features that can be included in snapshots using the feature_states field when creating a snapshot. You can use this API to determine which feature states to include when taking a snapshot. By default, all feature states are included in a snapshot if that snapshot includes the global state, or none if it does not.

A feature state includes one or more system indices necessary for a given feature to function. In order to ensure data integrity, all system indices that comprise a feature state are snapshotted and restored together.

The features listed by this API are a combination of built-in features and features defined by plugins. In order for a feature state to be listed in this API and recognized as a valid feature state by the create snapshot API, the plugin that defines that feature must be installed on the master node.

External documentation

Query parameters

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • features array[object] Required
      Hide features attributes Show features attributes object
      • name string Required
      • description string Required
GET /_features
GET _features
resp = client.features.get_features()
const response = await client.features.getFeatures();
response = client.features.get_features
$resp = $client->features()->getFeatures();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_features"
client.features().getFeatures(g -> g);
Response examples (200)
A successful response for retrieving a list of feature states that can be included when taking a snapshot.
{
  "features": [
    {
      "name": "tasks",
      "description": "Manages task results"
    },
    {
      "name": "kibana",
      "description": "Manages Kibana configuration and reports"
    }
  ]
}
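A hedged Python sketch of the workflow this description outlines: list the available feature states, then pass a selection to the create snapshot API's `feature_states` field. The repository name is an assumption and must already be registered.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # placeholder connection

# Discover which feature states can be included in a snapshot.
features = client.features.get_features()
print([f["name"] for f in features["features"]])

# Snapshot only the "tasks" feature state (assumed pre-registered repository).
resp = client.snapshot.create(
    repository="my_repository",
    snapshot="feature-state-snapshot",
    include_global_state=False,
    feature_states=["tasks"],
    wait_for_completion=True,
)
print(resp["snapshot"]["state"])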

Reset the features Technical preview; Added in 7.12.0

POST /_features/_reset

Clear all of the state information stored in system indices by Elasticsearch features, including the security and machine learning indices.

WARNING: Intended for development and testing use only. Do not reset features on a production cluster.

Return a cluster to the same state as a new installation by resetting the feature state for all Elasticsearch features. This deletes all state information stored in system indices.

The response code is HTTP 200 if the state is successfully reset for all features. It is HTTP 500 if the reset operation failed for any feature.

Note that select features might provide a way to reset particular system indices. Using this API resets all features, both those that are built-in and implemented as plugins.

To list the features that will be affected, use the get features API.

IMPORTANT: The features installed on the node you submit this request to are the features that will be reset. Run on the master node if you have any doubts about which plugins are installed on individual nodes.

Query parameters

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • features array[object] Required
      Hide features attributes Show features attributes object
      • name string Required
      • description string Required
POST /_features/_reset
POST /_features/_reset
resp = client.features.reset_features()
const response = await client.features.resetFeatures();
response = client.features.reset_features
$resp = $client->features()->resetFeatures();
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_features/_reset"
client.features().resetFeatures(r -> r);
Response examples (200)
A successful response for clearing state information stored in system indices by Elasticsearch features.
{
  "features" : [
    {
      "feature_name" : "security",
      "status" : "SUCCESS"
    },
    {
      "feature_name" : "tasks",
      "status" : "SUCCESS"
    }
  ]
}

Fleet









Run a Fleet search Technical preview; Added in 7.16.0

POST /{index}/_fleet/_fleet_search

All methods and paths for this operation:

GET /{index}/_fleet/_fleet_search

POST /{index}/_fleet/_fleet_search

The purpose of the Fleet search API is to provide a search API where the search is run only after the provided checkpoint has been processed and is visible for searches inside of Elasticsearch.

Required authorization

  • Index privileges: read

Path parameters

  • index string Required

    A single target to search. If the target is an index alias, it must resolve to a single index.

Query parameters

  • allow_no_indices boolean
  • analyzer string
  • analyze_wildcard boolean
  • batched_reduce_size number
  • ccs_minimize_roundtrips boolean
  • default_operator string

    Values are and, AND, or, or OR.

  • df string
  • docvalue_fields string | array[string]
  • expand_wildcards string | array[string]

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • explain boolean
  • ignore_throttled boolean
  • ignore_unavailable boolean
  • lenient boolean
  • max_concurrent_shard_requests number
  • min_compatible_shard_node string
  • preference string
  • pre_filter_shard_size number
  • request_cache boolean
  • routing string
  • scroll string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    External documentation
  • search_type string

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • stats array[string]
  • stored_fields string | array[string]
  • suggest_field string

    Specifies which field to use for suggestions.

  • suggest_mode string

    Supported values include:

    • missing: Only generate suggestions for terms that are not in the shard.
    • popular: Only suggest terms that occur in more docs on the shard than the original term.
    • always: Suggest any matching suggestions based on terms in the suggest text.

    Values are missing, popular, or always.

  • suggest_size number
  • suggest_text string

    The source text for which the suggestions should be returned.

  • terminate_after number
  • timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    External documentation
  • track_total_hits boolean | number

    Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits.

  • track_scores boolean
  • typed_keys boolean
  • rest_total_hits_as_int boolean
  • version boolean
  • _source boolean | string | array[string]

    Defines how to fetch a source. Fetching can be disabled entirely, or the source can be filtered. Used as a query parameter along with the _source_includes and _source_excludes parameters.

  • _source_excludes string | array[string]
  • _source_includes string | array[string]
  • seq_no_primary_term boolean
  • q string
  • size number
  • from number
  • sort string | array[string]
  • wait_for_checkpoints array[number]

    A comma separated list of checkpoints. When configured, the search API will only be executed on a shard after the relevant checkpoint has become visible for search. Defaults to an empty list which will cause Elasticsearch to immediately execute the search.

  • allow_partial_search_results boolean

    If true, returns partial results if there are shard request timeouts or shard failures. If false, returns an error with no partial results. Defaults to the configured cluster setting search.default_allow_partial_results which is true by default.

application/json

Body

  • aggregations object
  • collapse object
    External documentation
  • explain boolean

    If true, returns detailed information about score computation as part of a hit.

    Default value is false.

  • ext object

    Configuration of search extensions defined by Elasticsearch plugins.

    Hide ext attribute Show ext attribute object
    • * object Additional properties
  • from number

    Starting document offset. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.

    Default value is 0.

  • highlight object
    Hide highlight attributes Show highlight attributes object
    • type string

      Supported values include:

      • plain: The plain highlighter uses the standard Lucene highlighter.
      • fvh: The fvh highlighter uses the Lucene Fast Vector highlighter.
      • unified: The unified highlighter uses the Lucene Unified Highlighter.

      Values are plain, fvh, or unified.

    • boundary_chars string

      A string that contains each boundary character.

      Default value is .,!? \t\n.

    • boundary_max_scan number

      How far to scan for boundary characters.

      Default value is 20.

    • boundary_scanner string

      Specifies how to break the highlighted fragments: chars, sentence, or word. Only valid for the unified and fvh highlighters. Defaults to sentence for the unified highlighter. Defaults to chars for the fvh highlighter.

      Supported values include:

      • chars: Use the characters specified by boundary_chars as highlighting boundaries. The boundary_max_scan setting controls how far to scan for boundary characters. Only valid for the fvh highlighter.
      • sentence: Break highlighted fragments at the next sentence boundary, as determined by Java’s BreakIterator. You can specify the locale to use with boundary_scanner_locale. When used with the unified highlighter, the sentence scanner splits sentences bigger than fragment_size at the first word boundary next to fragment_size. You can set fragment_size to 0 to never split any sentence.
      • word: Break highlighted fragments at the next word boundary, as determined by Java’s BreakIterator. You can specify the locale to use with boundary_scanner_locale.

      Values are chars, sentence, or word.

    • boundary_scanner_locale string

      Controls which locale is used to search for sentence and word boundaries. This parameter takes a form of a language tag, for example: "en-US", "fr-FR", "ja-JP".

      Default value is Locale.ROOT.

    • force_source boolean Deprecated
    • fragmenter string

      Specifies how text should be broken up in highlight snippets: simple or span. Only valid for the plain highlighter.

      Values are simple or span.

    • fragment_size number

      The size of the highlighted fragment in characters.

      Default value is 100.

    • highlight_filter boolean
    • highlight_query object

      Highlight matches for a query other than the search query. This is especially useful if you use a rescore query because those are not taken into account by highlighting by default.

      External documentation
    • max_fragment_length number
    • max_analyzed_offset number

      If set to a non-negative value, highlighting stops at this defined maximum limit. The rest of the text is not processed, thus not highlighted, and no error is returned. The max_analyzed_offset query setting does not override the index.highlight.max_analyzed_offset setting, which prevails when it is set to a lower value than the query setting.

    • no_match_size number

      The amount of text you want to return from the beginning of the field if there are no matching fragments to highlight.

      Default value is 0.

    • number_of_fragments number

      The maximum number of fragments to return. If the number of fragments is set to 0, no fragments are returned. Instead, the entire field contents are highlighted and returned. This can be handy when you need to highlight short texts such as a title or address, but fragmentation is not required. If number_of_fragments is 0, fragment_size is ignored.

      Default value is 5.

    • options object
      Hide options attribute Show options attribute object
      • * object Additional properties
    • order string

      Sorts highlighted fragments by score when set to score. By default, fragments will be output in the order they appear in the field (order: none). Setting this option to score will output the most relevant fragments first. Each highlighter applies its own logic to compute relevancy scores.

      Value is score.

    • phrase_limit number

      Controls the number of matching phrases in a document that are considered. Prevents the fvh highlighter from analyzing too many phrases and consuming too much memory. When using matched_fields, phrase_limit phrases per matched field are considered. Raising the limit increases query time and consumes more memory. Only supported by the fvh highlighter.

      Default value is 256.

    • post_tags array[string]

      Use in conjunction with pre_tags to define the HTML tags to use for the highlighted text. By default, highlighted text is wrapped in <em> and </em> tags.

    • pre_tags array[string]

      Use in conjunction with post_tags to define the HTML tags to use for the highlighted text. By default, highlighted text is wrapped in <em> and </em> tags.

    • require_field_match boolean

      By default, only fields that contain a query match are highlighted. Set to false to highlight all fields.

      Default value is true.

    • tags_schema string

      Set to styled to use the built-in tag schema.

      Value is styled.

    • encoder string

      Values are default or html.

    • fields object Required
  • track_total_hits boolean | number

    Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits.

  • indices_boost array[object]

    Boosts the _score of documents from specified indices.

    Hide indices_boost attribute Show indices_boost attribute object
    • * number Additional properties
  • docvalue_fields array[object]

    Array of wildcard (*) patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response.

    A reference to a field with formatting instructions on how to return the value

    Hide docvalue_fields attributes Show docvalue_fields attributes object
    • field string Required

      A wildcard pattern. The request returns values for field names matching this pattern.

    • format string

      The format in which the values are returned.

    • include_unmapped boolean
  • min_score number

    Minimum _score for matching documents. Documents with a lower _score are not included in search results and results collected by aggregations.

  • post_filter object

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • profile boolean
  • query object

    Defines the search definition using the Query DSL.

    External documentation
  • rescore object | array[object]

    One of:
    Hide attributes Show attributes
    • window_size number
    • query object
      Hide query attributes Show query attributes object
      • rescore_query object Required

        The query to use for rescoring. This query is only run on the Top-K results returned by the query and post_filter phases.

      • query_weight number

        Relative importance of the original query versus the rescore query.

        Default value is 1.

      • rescore_query_weight number

        Relative importance of the rescore query versus the original query.

        Default value is 1.

      • score_mode string

        Determines how scores are combined.

        Supported values include:

        • avg: Average the original score and the rescore query score.
        • max: Take the max of original score and the rescore query score.
        • min: Take the min of the original score and the rescore query score.
        • multiply: Multiply the original score by the rescore query score. Useful for function query rescores.
        • total: Add the original score and the rescore query score.

        Values are avg, max, min, multiply, or total.

    • learning_to_rank object
      Hide learning_to_rank attributes Show learning_to_rank attributes object
      • model_id string Required

        The unique identifier of the trained model uploaded to Elasticsearch

      • params object

        Named parameters to be passed to the query templates used for feature

        Hide params attribute Show params attribute object
        • * object Additional properties
  • script_fields object

    Retrieve a script evaluation (based on different fields) for each hit.

    Hide script_fields attribute Show script_fields attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • script object Required
        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string

          Specifies the language the script is written in.

          Supported values include:

          • painless: Painless scripting language, purpose-built for Elasticsearch.
          • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
          • mustache: Mustache templating language, used for templates.
          • java: Expert Java API.

          Values are painless, expression, mustache, or java.

        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • ignore_failure boolean
  • search_after array[number | string | boolean | null | object]

    A field value.

  • size number

    The number of hits to return. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter (see the paging sketch after the request example below).

    Default value is 10.

  • slice object
    Hide slice attributes Show slice attributes object
    • field string

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • max number Required
  • sort string | object | array[string | object]

    One of:

    Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

  • _source boolean | object

    Indicates which source fields are returned for matching documents. These fields are returned in the hits._source property of the search response.

  • fields array[object]

    Array of wildcard (*) patterns. The request returns values for field names matching these patterns in the hits.fields property of the response.

    A reference to a field with formatting instructions on how to return the value

    Hide fields attributes Show fields attributes object
    • field string Required

      A wildcard pattern. The request returns values for field names matching this pattern.

    • format string

      The format in which the values are returned.

    • include_unmapped boolean
  • suggest object
    Hide suggest attribute Show suggest attribute object
    • text string

      Global suggest text, to avoid repetition when the same text is used in several suggesters

  • terminate_after number

    Maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. Defaults to 0, which does not terminate query execution early.

    Default value is 0.

  • timeout string

    Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.

  • track_scores boolean

    If true, calculate and return document scores, even if the scores are not used for sorting.

    Default value is false.

  • version boolean

    If true, returns document version as part of a hit.

    Default value is false.

  • seq_no_primary_term boolean

    If true, returns sequence number and primary term of the last modification of each hit. See Optimistic concurrency control.

  • stored_fields string | array[string]

    List of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. You can pass _source: true to return both source fields and stored fields in the search response.

  • pit object

    Limits the search to a point in time (PIT). If you provide a PIT, you cannot specify an <index> in the request path.

    Hide pit attributes Show pit attributes object
    • id string Required
    • keep_alive string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      External documentation
  • runtime_mappings object

    Defines one or more runtime fields in the search request. These fields take precedence over mapped fields with the same name.

    Hide runtime_mappings attribute Show runtime_mappings attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        For type lookup

      • target_field string

        For type lookup

      • target_index string

        For type lookup

      • script object

        Painless script executed at query time.

        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang
        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Field type, which can be: boolean, composite, date, double, geo_point, ip, keyword, long, or lookup.

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • stats array[string]

    Stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number Required
    • timed_out boolean Required
    • _shards object Required
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • hits object Required
      Hide hits attributes Show hits attributes object
      • total object | number

        Total hit count information, present only if track_total_hits wasn't false in the search request.

        One of:
        Hide attributes Show attributes
        • relation string Required

          Supported values include:

          • eq: Accurate
          • gte: Lower bound, including returned events or sequences

          Values are eq or gte.

        • value number Required
      • hits array[object] Required
        Hide hits attributes Show hits attributes object
        • _index string Required
        • _id string
        • _score number | string | null

        • _explanation object
        • fields object
          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • highlight object
          Hide highlight attribute Show highlight attribute object
          • * array[string] Additional properties
        • inner_hits object
          Hide inner_hits attribute Show inner_hits attribute object
          • * object Additional properties
        • matched_queries array[string] | object

        • _nested object
        • _ignored array[string]
        • ignored_field_values object
          Hide ignored_field_values attribute Show ignored_field_values attribute object
          • * array[number | string | boolean | null | object] Additional properties
        • _shard string
        • _node string
        • _routing string
        • _source object
        • _rank number
        • _seq_no number
        • _primary_term number
        • _version number
        • sort array[number | string | boolean | null | object]
      • max_score number | string | null

    • aggregations object
    • _clusters object
      Hide _clusters attributes Show _clusters attributes object
      • skipped number Required
      • successful number Required
      • total number Required
      • running number Required
      • partial number Required
      • failed number Required
      • details object
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • timed_out boolean Required
          • _shards object
          • failures array[object]
    • fields object
      Hide fields attribute Show fields attribute object
      • * object Additional properties
    • max_score number
    • num_reduce_phases number
    • profile object
      Hide profile attribute Show profile attribute object
      • shards array[object] Required
        Hide shards attributes Show shards attributes object
        • aggregations array[object] Required
        • cluster string Required
        • dfs object
        • fetch object
        • id string Required
        • index string Required
        • node_id string Required
        • searches array[object] Required
        • shard_id number Required
    • pit_id string
    • _scroll_id string
    • suggest object
      Hide suggest attribute Show suggest attribute object
      • * array[object] Additional properties
        One of:
        Hide attributes Show attributes
        • length number Required
        • offset number Required
        • text string Required
        • options
    • terminated_early boolean
POST /{index}/_fleet/_fleet_search
curl \
 --request POST 'http://api.example.com/{index}/_fleet/_fleet_search' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"aggregations":{},"collapse":{},"explain":false,"ext":{"additionalProperty1":{},"additionalProperty2":{}},"from":0,"highlight":{"type":"plain","boundary_chars":".,!? \\t\\n","boundary_max_scan":20,"boundary_scanner":"chars","boundary_scanner_locale":"Locale.ROOT","force_source":true,"fragmenter":"simple","fragment_size":100,"highlight_filter":true,"highlight_query":{},"max_fragment_length":42.0,"max_analyzed_offset":42.0,"no_match_size":0,"number_of_fragments":5,"options":{"additionalProperty1":{},"additionalProperty2":{}},"order":"score","phrase_limit":256,"post_tags":["string"],"pre_tags":["string"],"require_field_match":true,"tags_schema":"styled","encoder":"default","fields":{}},"track_total_hits":true,"indices_boost":[{"additionalProperty1":42.0,"additionalProperty2":42.0}],"docvalue_fields":[{"field":"string","format":"string","include_unmapped":true}],"min_score":42.0,"post_filter":{},"profile":true,"query":{},"rescore":{"window_size":42.0,"query":{"rescore_query":{},"query_weight":1,"rescore_query_weight":1,"score_mode":"avg"},"learning_to_rank":{"model_id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}}}},"script_fields":{"additionalProperty1":{"script":{"source":"string","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"lang":"painless","options":{"additionalProperty1":"string","additionalProperty2":"string"}},"ignore_failure":true},"additionalProperty2":{"script":{"source":"string","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"lang":"painless","options":{"additionalProperty1":"string","additionalProperty2":"string"}},"ignore_failure":true}},"search_after":[42.0],"size":10,"slice":{"field":"string","id":"string","max":42.0},"sort":"string","_source":true,"fields":[{"field":"string","format":"string","include_unmapped":true}],"suggest":{"text":"string"},"terminate_after":0,"timeout":"string","track_scores":false,"version":false,"seq_no_primary_term":true,"stored_fields":"string","pit":{"id":"string","keep_alive":"string"},"runtime_mappings":{"additionalProperty1":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"source":"string","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"},"additionalProperty2":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"source":"string","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"}},"stats":["string"]}'
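
The from/size ceiling, the search_after parameter, and the wait_for_checkpoints query parameter described above combine into a simple paging loop. The following is a minimal sketch, assuming the 8.x elasticsearch-py client; the endpoint, API key, index name, and checkpoint value are placeholders rather than values taken from this page.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="<API_KEY>")

# First page: wait for a (placeholder) checkpoint, then fetch 500 hits in _doc order.
page = client.fleet.search(
    index=".fleet-agents",
    wait_for_checkpoints=[42],   # search runs only after this checkpoint is visible
    query={"match_all": {}},
    sort=["_doc"],               # a deterministic sort lets search_after resume reliably
    size=500,
    track_total_hits=False,      # skip the exact hit count for cheaper paging
)

while page["hits"]["hits"]:
    # ... process page["hits"]["hits"] here ...
    last_sort = page["hits"]["hits"][-1]["sort"]
    # from/size stops at 10,000 hits; search_after pages past that ceiling.
    page = client.fleet.search(
        index=".fleet-agents",
        query={"match_all": {}},
        sort=["_doc"],
        size=500,
        search_after=last_sort,
    )

Sorting on _doc gives the cheapest stable order; switch to a field sort if callers need ranked pages.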

Explore graph analytics Generally available

POST /{index}/_graph/explore

All methods and paths for this operation:

GET /{index}/_graph/explore

POST /{index}/_graph/explore

Extract and summarize information about the documents and terms in an Elasticsearch data stream or index. The easiest way to understand the behavior of this API is to use the Graph UI to explore connections. An initial request to the _explore API contains a seed query that identifies the documents of interest and specifies the fields that define the vertices and connections you want to include in the graph. Subsequent requests enable you to spider out from one or more vertices of interest. You can exclude vertices that have already been returned.

External documentation

Path parameters

  • index string | array[string] Required

    Name of the index.

Query parameters

  • routing string

    Custom value used to route operations to a specific shard.

  • timeout string

    Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.

    External documentation
application/json

Body

  • connections object

    Specifies one or more fields from which you want to extract terms that are associated with the specified vertices.

    Hide connections attributes Show connections attributes object
    • connections object

      Specifies one or more fields from which you want to extract terms that are associated with the specified vertices.

    • query object Required

      An optional guiding query that constrains the Graph API as it explores connected terms.

      External documentation
    • vertices array[object] Required

      Contains the fields you are interested in.

      Hide vertices attributes Show vertices attributes object
      • exclude array[string]

        Prevents the specified terms from being included in the results.

      • field string Required

        Identifies a field in the documents of interest.

      • include array[object]

        Identifies the terms of interest that form the starting points from which you want to spider out.

        Hide include attributes Show include attributes object
        • boost number Required
        • term string Required
      • min_doc_count number

        Specifies how many documents must contain a pair of terms before it is considered to be a useful connection. This setting acts as a certainty threshold.

        Default value is 3.

      • shard_min_doc_count number

        Controls how many documents on a particular shard have to contain a pair of terms before the connection is returned for global consideration.

        Default value is 2.

      • size number

        Specifies the maximum number of vertex terms returned for each field.

        Default value is 5.

  • controls object

    Direct the Graph API how to build the graph.

    Hide controls attributes Show controls attributes object
    • sample_diversity object

      To avoid the top-matching documents sample being dominated by a single source of results, it is sometimes necessary to request diversity in the sample. You can do this by selecting a single-value field and setting a maximum number of documents per value for that field.

      Hide sample_diversity attributes Show sample_diversity attributes object
      • field string Required

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • max_docs_per_value number Required
    • sample_size number

      Each hop considers a sample of the best-matching documents on each shard. Using samples improves the speed of execution and keeps exploration focused on meaningfully-connected terms. Very small values (less than 50) might not provide sufficient weight-of-evidence to identify significant connections between terms. Very large sample sizes can dilute the quality of the results and increase execution times.

      Default value is 100.

    • timeout string

      The length of time in milliseconds after which exploration will be halted and the results gathered so far are returned. This timeout is honored on a best-effort basis. Execution might overrun this timeout if, for example, a long pause is encountered while FieldData is loaded for a field.

      External documentation
    • use_significance boolean Required

      Filters associated terms so only those that are significantly associated with your query are included.

  • query object

    A seed query that identifies the documents of interest. Can be any valid Elasticsearch query.

    External documentation
  • vertices array[object]

    Specifies one or more fields that contain the terms you want to include in the graph as vertices.

    Hide vertices attributes Show vertices attributes object
    • exclude array[string]

      Prevents the specified terms from being included in the results.

    • field string Required

      Identifies a field in the documents of interest.

    • include array[object]

      Identifies the terms of interest that form the starting points from which you want to spider out.

      Hide include attributes Show include attributes object
      • boost number Required
      • term string Required
    • min_doc_count number

      Specifies how many documents must contain a pair of terms before it is considered to be a useful connection. This setting acts as a certainty threshold.

      Default value is 3.

    • shard_min_doc_count number

      Controls how many documents on a particular shard have to contain a pair of terms before the connection is returned for global consideration.

      Default value is 2.

    • size number

      Specifies the maximum number of vertex terms returned for each field.

      Default value is 5.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • connections array[object] Required
      Hide connections attributes Show connections attributes object
      • doc_count number Required
      • source number Required
      • target number Required
      • weight number Required
    • failures array[object] Required
      Hide failures attributes Show failures attributes object
      • index string
      • node string
      • reason object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide reason attributes Show reason attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • shard number
      • status string
      • primary boolean
    • timed_out boolean Required
    • took number Required
    • vertices array[object] Required
      Hide vertices attributes Show vertices attributes object
      • depth number Required
      • field string Required

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • term string Required
      • weight number Required
POST /{index}/_graph/explore
POST clicklogs/_graph/explore
{
  "query": {
    "match": {
      "query.raw": "midi"
    }
  },
  "vertices": [
    {
      "field": "product"
    }
  ],
  "connections": {
    "vertices": [
      {
        "field": "query.raw"
      }
    ]
  }
}
resp = client.graph.explore(
    index="clicklogs",
    query={
        "match": {
            "query.raw": "midi"
        }
    },
    vertices=[
        {
            "field": "product"
        }
    ],
    connections={
        "vertices": [
            {
                "field": "query.raw"
            }
        ]
    },
)
const response = await client.graph.explore({
  index: "clicklogs",
  query: {
    match: {
      "query.raw": "midi",
    },
  },
  vertices: [
    {
      field: "product",
    },
  ],
  connections: {
    vertices: [
      {
        field: "query.raw",
      },
    ],
  },
});
response = client.graph.explore(
  index: "clicklogs",
  body: {
    "query": {
      "match": {
        "query.raw": "midi"
      }
    },
    "vertices": [
      {
        "field": "product"
      }
    ],
    "connections": {
      "vertices": [
        {
          "field": "query.raw"
        }
      ]
    }
  }
)
$resp = $client->graph()->explore([
    "index" => "clicklogs",
    "body" => [
        "query" => [
            "match" => [
                "query.raw" => "midi",
            ],
        ],
        "vertices" => array(
            [
                "field" => "product",
            ],
        ),
        "connections" => [
            "vertices" => array(
                [
                    "field" => "query.raw",
                ],
            ),
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":{"match":{"query.raw":"midi"}},"vertices":[{"field":"product"}],"connections":{"vertices":[{"field":"query.raw"}]}}' "$ELASTICSEARCH_URL/clicklogs/_graph/explore"
client.graph().explore(e -> e
    .connections(c -> c
        .vertices(v -> v
            .field("query.raw")
        )
    )
    .index("clicklogs")
    .query(q -> q
        .match(m -> m
            .field("query.raw")
            .query(FieldValue.of("midi"))
        )
    )
    .vertices(v -> v
        .field("product")
    )
);
Request example
Run `POST clicklogs/_graph/explore` for a basic exploration. An initial graph explore query typically begins with a query to identify strongly related terms. Seed the exploration with a query: this example searches `clicklogs` for people who searched for the term `midi`. Identify the vertices to include in the graph: this example looks for product codes that are significantly associated with searches for `midi`. Find the connections: this example looks for other search terms that led people to click on the products associated with searches for `midi`.
{
  "query": {
    "match": {
      "query.raw": "midi"
    }
  },
  "vertices": [
    {
      "field": "product"
    }
  ],
  "connections": {
    "vertices": [
      {
        "field": "query.raw"
      }
    ]
  }
}
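
As noted above, subsequent requests can spider out from vertices returned by the first hop and exclude terms you have already seen, while the controls object shapes sampling and significance. The following is a follow-up sketch against the same clicklogs example, assuming the 8.x elasticsearch-py client; the included product term, the category.raw diversity field, and the control values are illustrative placeholders.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="<API_KEY>")

resp = client.graph.explore(
    index="clicklogs",
    query={"match": {"query.raw": "midi"}},      # keep the original seed query
    controls={
        "use_significance": True,                # keep only significantly associated terms
        "sample_size": 2000,                     # documents sampled per shard for this hop
        "sample_diversity": {                    # stop one source dominating the sample
            "field": "category.raw",             # placeholder single-value field
            "max_docs_per_value": 500,
        },
    },
    vertices=[
        {
            "field": "product",
            # Spider out from a product term returned by the first request (placeholder term).
            "include": [{"term": "1854873", "boost": 1}],
        }
    ],
    connections={
        "vertices": [
            {
                "field": "query.raw",
                "exclude": ["midi"],             # drop the seed term we already know about
            }
        ]
    },
)

for vertex in resp["vertices"]:
    print(vertex["field"], vertex["term"], vertex["weight"])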

Get component templates Generally available; Added in 7.8.0

GET /_component_template/{name}

All methods and paths for this operation:

GET /_component_template

GET /_component_template/{name}

Get information about component templates.

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    Comma-separated list of component template names used to limit the request. Wildcard (*) expressions are supported.

Query parameters

  • flat_settings boolean

    If true, returns settings in flat format.

  • settings_filter string | array[string]

    Filter out results, for example to filter out sensitive information. Supports wildcards or full settings keys

  • include_defaults boolean Generally available; Added in 8.11.0

    Return all default configurations for the component template (default: false)

  • local boolean

    If true, the request retrieves information from the local node only. If false, information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • component_templates array[object] Required
      Hide component_templates attributes Show component_templates attributes object
      • name string Required
      • component_template object Required
        Hide component_template attributes Show component_template attributes object
        • template object Required
        • version number
        • _meta object
        • deprecated boolean
GET /_component_template/{name}
GET /_component_template/template_1
resp = client.cluster.get_component_template(
    name="template_1",
)
const response = await client.cluster.getComponentTemplate({
  name: "template_1",
});
response = client.cluster.get_component_template(
  name: "template_1"
)
$resp = $client->cluster()->getComponentTemplate([
    "name" => "template_1",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_component_template/template_1"
client.cluster().getComponentTemplate(g -> g
    .name("template_1")
);
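
Because the name path parameter accepts wildcard expressions, a single call can inspect a whole family of templates. A small sketch, assuming the 8.x elasticsearch-py client; the template_* pattern is a placeholder.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="<API_KEY>")

resp = client.cluster.get_component_template(
    name="template_*",      # wildcard expressions are supported
    flat_settings=True,     # return settings as flat, dotted keys
)

for entry in resp["component_templates"]:
    print(entry["name"], entry["component_template"].get("version"))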

Create or update a component template Generally available; Added in 7.8.0

POST /_component_template/{name}

All methods and paths for this operation:

PUT /_component_template/{name}

POST /_component_template/{name}

Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.

An index template can be composed of multiple component templates. To use a component template, specify it in an index template’s composed_of list. Component templates are only applied to new data streams and indices as part of a matching index template.

Settings and mappings specified directly in the index template or the create index request override any settings or mappings specified in a component template.

Component templates are only used during index creation. For data streams, this includes data stream creation and the creation of a stream’s backing indices. Changes to component templates do not affect existing indices, including a stream’s backing indices.

You can use C-style /* */ block comments in component templates. You can include comments anywhere in the request body except before the opening curly bracket.

Applying component templates

You cannot directly apply a component template to a data stream or index. To be applied, a component template must be included in an index template's composed_of list.

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    Name of the component template to create. Elasticsearch includes the following built-in component templates: logs-mappings; logs-settings; metrics-mappings; metrics-settings; synthetics-mapping; synthetics-settings. Elastic Agent uses these templates to configure backing indices for its data streams. If you use Elastic Agent and want to overwrite one of these templates, set the version for your replacement template higher than the current version. If you don’t use Elastic Agent and want to disable all built-in component and index templates, set stack.templates.enabled to false using the cluster update settings API.

Query parameters

  • create boolean

    If true, this request cannot replace or update existing component templates.

  • cause string

    User-defined reason for creating the component template.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
application/json

Body Required

  • template object Required

    The template to be applied which includes mappings, settings, or aliases configuration.

    Hide template attributes Show template attributes object
    • aliases object
      Hide aliases attribute Show aliases attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • filter object

          Query used to limit documents the alias can access.

          External documentation
        • index_routing string

          Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

        • is_hidden boolean

          If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

          Default value is false.

        • is_write_index boolean

          If true, the index is the write index for the alias.

          Default value is false.

        • routing string

          Value used to route indexing and search operations to a specific shard.

        • search_routing string

          Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

    • mappings object
      Hide mappings attributes Show mappings attributes object
      • all_field object
        Hide all_field attributes Show all_field attributes object
        • analyzer string Required
        • enabled boolean Required
        • omit_norms boolean Required
        • search_analyzer string Required
        • similarity string Required
        • store boolean Required
        • store_term_vector_offsets boolean Required
        • store_term_vector_payloads boolean Required
        • store_term_vector_positions boolean Required
        • store_term_vectors boolean Required
      • date_detection boolean
      • dynamic string

        Values are strict, runtime, true, or false.

      • dynamic_date_formats array[string]
      • dynamic_templates array[object]
      • _field_names object
        Hide _field_names attribute Show _field_names attribute object
        • enabled boolean Required
      • index_field object
        Hide index_field attribute Show index_field attribute object
        • enabled boolean Required
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • numeric_detection boolean
      • properties object
      • _routing object
        Hide _routing attribute Show _routing attribute object
        • required boolean Required
      • _size object
        Hide _size attribute Show _size attribute object
        • enabled boolean Required
      • _source object
        Hide _source attributes Show _source attributes object
        • compress boolean
        • compress_threshold string
        • enabled boolean
        • excludes array[string]
        • includes array[string]
      • runtime object
        Hide runtime attribute Show runtime attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

          • fetch_fields array[object]

            For type lookup

          • format string

            A custom format for date type runtime fields.

      • enabled boolean
      • subobjects string

        Values are true or false.

      • _data_stream_timestamp object
        Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
        • enabled boolean Required
    • settings object
      Index settings
    • defaults object

      Default settings, included when the request's include_defaults is true.

      Index settings
    • data_stream string
    • lifecycle object

      Data stream lifecycle applicable if this is a data stream.

      Hide lifecycle attributes Show lifecycle attributes object
      • data_retention string

        If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

        External documentation
      • downsampling object

        The downsampling configuration to execute for the managed backing index after rollover.

        Hide downsampling attribute Show downsampling attribute object
        • rounds array[object] Required

          The list of downsampling rounds to execute as part of this downsampling configuration

      • enabled boolean

        If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

        Default value is true.

  • version number

    Version number used to manage component templates externally. This number isn't automatically generated or incremented by Elasticsearch. To unset a version, replace the template without specifying a version.

  • _meta object

    Optional user metadata about the component template. It may have any contents. This map is not automatically generated by Elasticsearch. This information is stored in the cluster state, so keeping it short is preferable. To unset _meta, replace the template without specifying this information.

    Hide _meta attribute Show _meta attribute object
    • * object Additional properties
  • deprecated boolean

    Marks this component template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_component_template/{name}
PUT _component_template/template_1
{
  "template": {
    "settings": {
      "number_of_shards": 1
    },
    "mappings": {
      "_source": {
        "enabled": false
      },
      "properties": {
        "host_name": {
          "type": "keyword"
        },
        "created_at": {
          "type": "date",
          "format": "EEE MMM dd HH:mm:ss Z yyyy"
        }
      }
    }
  }
}
resp = client.cluster.put_component_template(
    name="template_1",
    template={
        "settings": {
            "number_of_shards": 1
        },
        "mappings": {
            "_source": {
                "enabled": False
            },
            "properties": {
                "host_name": {
                    "type": "keyword"
                },
                "created_at": {
                    "type": "date",
                    "format": "EEE MMM dd HH:mm:ss Z yyyy"
                }
            }
        }
    },
)
const response = await client.cluster.putComponentTemplate({
  name: "template_1",
  template: {
    settings: {
      number_of_shards: 1,
    },
    mappings: {
      _source: {
        enabled: false,
      },
      properties: {
        host_name: {
          type: "keyword",
        },
        created_at: {
          type: "date",
          format: "EEE MMM dd HH:mm:ss Z yyyy",
        },
      },
    },
  },
});
response = client.cluster.put_component_template(
  name: "template_1",
  body: {
    "template": {
      "settings": {
        "number_of_shards": 1
      },
      "mappings": {
        "_source": {
          "enabled": false
        },
        "properties": {
          "host_name": {
            "type": "keyword"
          },
          "created_at": {
            "type": "date",
            "format": "EEE MMM dd HH:mm:ss Z yyyy"
          }
        }
      }
    }
  }
)
$resp = $client->cluster()->putComponentTemplate([
    "name" => "template_1",
    "body" => [
        "template" => [
            "settings" => [
                "number_of_shards" => 1,
            ],
            "mappings" => [
                "_source" => [
                    "enabled" => false,
                ],
                "properties" => [
                    "host_name" => [
                        "type" => "keyword",
                    ],
                    "created_at" => [
                        "type" => "date",
                        "format" => "EEE MMM dd HH:mm:ss Z yyyy",
                    ],
                ],
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"template":{"settings":{"number_of_shards":1},"mappings":{"_source":{"enabled":false},"properties":{"host_name":{"type":"keyword"},"created_at":{"type":"date","format":"EEE MMM dd HH:mm:ss Z yyyy"}}}}}' "$ELASTICSEARCH_URL/_component_template/template_1"
Request examples
{
  "template": {
    "settings": {
      "number_of_shards": 1
    },
    "mappings": {
      "_source": {
        "enabled": false
      },
      "properties": {
        "host_name": {
          "type": "keyword"
        },
        "created_at": {
          "type": "date",
          "format": "EEE MMM dd HH:mm:ss Z yyyy"
        }
      }
    }
  }
}
You can include index aliases in a component template. During index creation, the `{index}` placeholder in the alias name will be replaced with the actual index name that the template gets applied to.
{
  "template": {
    "settings": {
      "number_of_shards": 1
    },
    "aliases": {
      "alias1": {},
      "alias2": {
        "filter": {
          "term": {
            "user.id": "kimchy"
          }
        },
        "routing": "shard-1"
      },
      "{index}-alias": {}
    }
  }
}
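
A component template only takes effect once an index template lists it in composed_of and a new index matches that template. The following is a sketch of that wiring, assuming the 8.x elasticsearch-py client; the index template name and pattern are placeholders, while template_1 refers to the component template created above.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="<API_KEY>")

client.indices.put_index_template(
    name="my-logs-template",        # placeholder index template name
    index_patterns=["my-logs-*"],   # new indices matching this pattern use the template
    composed_of=["template_1"],     # pull in the component template defined above
    priority=200,
)

# Only indices created from now on pick up the component template's mappings and
# settings; existing indices and backing indices are left unchanged.
client.indices.create(index="my-logs-000001")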

Delete component templates Generally available; Added in 7.8.0

DELETE /_component_template/{name}

Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string | array[string] Required

    Comma-separated list or wildcard expression of component template names used to limit the request.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_component_template/{name}
DELETE _component_template/template_1
resp = client.cluster.delete_component_template(
    name="template_1",
)
const response = await client.cluster.deleteComponentTemplate({
  name: "template_1",
});
response = client.cluster.delete_component_template(
  name: "template_1"
)
$resp = $client->cluster()->deleteComponentTemplate([
    "name" => "template_1",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_component_template/template_1"
client.cluster().deleteComponentTemplate(d -> d
    .name("template_1")
);

Check component templates Generally available; Added in 7.8.0

HEAD /_component_template/{name}

Returns information about whether a particular component template exists.

Path parameters

  • name string | array[string] Required

    Comma-separated list of component template names used to limit the request. Wildcard (*) expressions are supported.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • local boolean

    If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.

Responses

  • 200 application/json
HEAD /_component_template/{name}
curl \
 --request HEAD 'http://api.example.com/_component_template/{name}' \
 --header "Authorization: $API_KEY"
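
Since this endpoint only reports a status code (200 if the template exists, 404 if it does not), client libraries typically surface it as a boolean. A minimal sketch, assuming the 8.x elasticsearch-py client and the template_1 template from the earlier examples.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="<API_KEY>")

# HEAD /_component_template/{name} answers 200 or 404; the client response is truthy or falsy.
if client.cluster.exists_component_template(name="template_1"):
    print("template_1 exists")
else:
    print("template_1 is missing")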




Delete a dangling index Generally available; Added in 7.9.0

DELETE /_dangling/{index_uuid}

If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling. For example, this can happen if you delete more than cluster.indices.tombstones.size indices while an Elasticsearch node is offline.

Required authorization

  • Cluster privileges: manage

Path parameters

  • index_uuid string Required

    The UUID of the index to delete. Use the get dangling indices API to find the UUID.

Query parameters

  • accept_data_loss boolean Required

    This parameter must be set to true to acknowledge that it will no longer be possible to recover data from the dangling index.

  • master_timeout string

    Specify timeout for connection to master

    External documentation
  • timeout string

    Explicit operation timeout

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_dangling/{index_uuid}
DELETE /_dangling/<index-uuid>?accept_data_loss=true
resp = client.dangling_indices.delete_dangling_index(
    index_uuid="<index-uuid>",
    accept_data_loss=True,
)
const response = await client.danglingIndices.deleteDanglingIndex({
  index_uuid: "<index-uuid>",
  accept_data_loss: "true",
});
response = client.dangling_indices.delete_dangling_index(
  index_uuid: "<index-uuid>",
  accept_data_loss: "true"
)
$resp = $client->danglingIndices()->deleteDanglingIndex([
    "index_uuid" => "<index-uuid>",
    "accept_data_loss" => "true",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_dangling/<index-uuid>?accept_data_loss=true"
client.danglingIndices().deleteDanglingIndex(d -> d
    .acceptDataLoss(true)
    .indexUuid("<index-uuid>")
);
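
To find the UUID this endpoint needs, list the dangling indices first. A sketch assuming the 8.x elasticsearch-py client and its list dangling indices helper; the UUID shown is a placeholder copied from the listing output.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="<API_KEY>")

# List dangling indices to discover their UUIDs.
listing = client.dangling_indices.list_dangling_indices()
for dangling in listing["dangling_indices"]:
    print(dangling["index_uuid"], dangling["index_name"])

# Deleting is irreversible, so accept_data_loss must be passed explicitly.
client.dangling_indices.delete_dangling_index(
    index_uuid="<index-uuid>",   # UUID copied from the listing above
    accept_data_loss=True,
)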

Clone an index Generally available; Added in 7.4.0

POST /{index}/_clone/{target}

All methods and paths for this operation:

PUT /{index}/_clone/{target}

POST /{index}/_clone/{target}

Clone an existing index into a new index. Each original primary shard is cloned into a new primary shard in the new index.

IMPORTANT: Elasticsearch does not apply index templates to the resulting index. The API also does not copy index metadata from the original index. Index metadata includes aliases, index lifecycle management phase definitions, and cross-cluster replication (CCR) follower information. For example, if you clone a CCR follower index, the resulting clone will not be a follower index.

The clone API copies most index settings from the source index to the resulting index, with the exception of index.number_of_replicas and index.auto_expand_replicas. To set the number of replicas in the resulting index, configure these settings in the clone request.

Cloning works as follows:

  • First, it creates a new target index with the same definition as the source index.
  • Then it hard-links segments from the source index into the target index. If the file system does not support hard-linking, all segments are copied into the new index, which is a much more time-consuming process.
  • Finally, it recovers the target index as though it were a closed index which had just been re-opened.

IMPORTANT: Indices can only be cloned if they meet the following requirements:

  • The index must be marked as read-only and have a cluster health status of green.
  • The target index must not exist.
  • The source index must have the same number of primary shards as the target index.
  • The node handling the clone process must have sufficient free disk space to accommodate a second copy of the existing index.

The current write index on a data stream cannot be cloned. In order to clone the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be cloned.

NOTE: Mappings cannot be specified in the _clone request. The mappings of the source index will be used for the target index.

Monitor the cloning process

The cloning process can be monitored with the cat recovery API, or you can use the cluster health API to wait until all primary shards have been allocated by setting the wait_for_status parameter to yellow.

The _clone API returns as soon as the target index has been added to the cluster state, before any shards have been allocated. At this point, all shards are in the state unassigned. If, for any reason, the target index can't be allocated, its primary shard will remain unassigned until it can be allocated on that node.

Once the primary shard is allocated, it moves to state initializing, and the clone process begins. When the clone operation completes, the shard will become active. At that point, Elasticsearch will try to allocate any replicas and may decide to relocate the primary shard to another node.
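
Putting the requirements and monitoring steps above together: block writes on the source, clone, then wait for the primaries of the clone to become active. A workflow sketch, assuming the 8.x elasticsearch-py client; the index names and replica count are placeholders.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="<API_KEY>")

# The source index must be read-only before it can be cloned.
client.indices.add_block(index="my_source_index", block="write")

client.indices.clone(
    index="my_source_index",
    target="my_target_index",
    settings={"index.number_of_replicas": 1},   # replica settings are not copied, so set them here
)

# The clone API returns before shards are allocated; wait until the primaries are active.
client.cluster.health(index="my_target_index", wait_for_status="yellow")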

Wait for active shards

Because the clone operation creates a new index to clone the shards to, the wait for active shards setting on index creation applies to the clone index action as well.

Required authorization

  • Index privileges: manage

Path parameters

  • index string Required

    Name of the source index to clone.

  • target string Required

    Name of the target index to create.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

application/json

Body

  • aliases object

    Aliases for the resulting index.

    Hide aliases attribute Show aliases attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • filter object

        Query used to limit documents the alias can access.

        External documentation
      • index_routing string

        Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

      • is_hidden boolean

        If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

        Default value is false.

      • is_write_index boolean

        If true, the index is the write index for the alias.

        Default value is false.

      • routing string

        Value used to route indexing and search operations to a specific shard.

      • search_routing string

        Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

  • settings object

    Configuration options for the target index.

    Hide settings attribute Show settings attribute object
    • * object Additional properties

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • index string Required
    • shards_acknowledged boolean Required
POST /{index}/_clone/{target}
POST /my_source_index/_clone/my_target_index
{
  "settings": {
    "index.number_of_shards": 5
  },
  "aliases": {
    "my_search_indices": {}
  }
}
resp = client.indices.clone(
    index="my_source_index",
    target="my_target_index",
    settings={
        "index.number_of_shards": 5
    },
    aliases={
        "my_search_indices": {}
    },
)
const response = await client.indices.clone({
  index: "my_source_index",
  target: "my_target_index",
  settings: {
    "index.number_of_shards": 5,
  },
  aliases: {
    my_search_indices: {},
  },
});
response = client.indices.clone(
  index: "my_source_index",
  target: "my_target_index",
  body: {
    "settings": {
      "index.number_of_shards": 5
    },
    "aliases": {
      "my_search_indices": {}
    }
  }
)
$resp = $client->indices()->clone([
    "index" => "my_source_index",
    "target" => "my_target_index",
    "body" => [
        "settings" => [
            "index.number_of_shards" => 5,
        ],
        "aliases" => [
            "my_search_indices" => new ArrayObject([]),
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"settings":{"index.number_of_shards":5},"aliases":{"my_search_indices":{}}}' "$ELASTICSEARCH_URL/my_source_index/_clone/my_target_index"
client.indices().clone(c -> c
    .aliases("my_search_indices", a -> a)
    .index("my_source_index")
    .settings("index.number_of_shards", JsonData.fromJson("5"))
    .target("my_target_index")
);
Request example
Clone `my_source_index` into a new index called `my_target_index` with `POST /my_source_index/_clone/my_target_index`. The API accepts `settings` and `aliases` parameters for the target index.
{
  "settings": {
    "index.number_of_shards": 5
  },
  "aliases": {
    "my_search_indices": {}
  }
}

Close an index Generally available

POST /{index}/_close

A closed index is blocked for read or write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. Closed indices do not have to maintain internal data structures for indexing or searching documents, which results in a smaller overhead on the cluster.

When opening or closing an index, the master node is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened and closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.

You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behavior can be turned off using the ignore_unavailable=true parameter.

By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.

Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.
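
For illustration only, a Python sketch of adjusting the two cluster settings mentioned above; whether you want either change depends on your environment, and the values shown are assumptions, not recommendations.

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection

# Allow wildcard expressions such as "logs-*" in open/close requests.
client.cluster.put_settings(persistent={"action.destructive_requires_name": False})

# Alternatively, disallow closing indices altogether.
client.cluster.put_settings(persistent={"cluster.indices.close.enable": False})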

Required authorization

  • Index privileges: manage

Path parameters

  • index string | array[string] Required

    Comma-separated list or wildcard expression of index names used to limit the request.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • indices object Required
      Hide indices attribute Show indices attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • closed boolean Required
        • shards object
          Hide shards attribute Show shards attribute object
          • * object Additional properties
            Hide * attribute Show * attribute object
            • failures array[object] Required
    • shards_acknowledged boolean Required
POST /{index}/_close
POST /my-index-000001/_close
resp = client.indices.close(
    index="my-index-000001",
)
const response = await client.indices.close({
  index: "my-index-000001",
});
response = client.indices.close(
  index: "my-index-000001"
)
$resp = $client->indices()->close([
    "index" => "my-index-000001",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_close"
client.indices().close(c -> c
    .index("my-index-000001")
);
Response examples (200)
A successful response for closing an index.
{
  "acknowledged": true,
  "shards_acknowledged": true,
  "indices": {
    "my-index-000001": {
      "closed": true
    }
  }
}

Get index information Generally available

GET /{index}

Get information about one or more indices. For data streams, the API returns information about the stream’s backing indices.

Required authorization

  • Index privileges: view_index_metadata,manage

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and index aliases used to limit the request. Wildcard expressions (*) are supported.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flat_settings boolean

    If true, returns settings in flat format.

  • ignore_unavailable boolean

    If false, requests that target a missing index return an error.

  • include_defaults boolean

    If true, return all default settings in the response.

  • local boolean

    If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • features string | array[string] Generally available; Added in 8.1.0

    Return only information on specified index features

    Supported values include: aliases, mappings, settings

    Values are aliases, mappings, or settings.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object
      Hide * attributes Show * attributes object
      • aliases object
        Hide aliases attribute Show aliases attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • filter object

            Query used to limit documents the alias can access.

            External documentation
          • index_routing string

            Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

          • is_hidden boolean

            If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

            Default value is false.

          • is_write_index boolean

            If true, the index is the write index for the alias.

            Default value is false.

          • routing string

            Value used to route indexing and search operations to a specific shard.

          • search_routing string

            Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

      • mappings object
        Hide mappings attributes Show mappings attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
      • settings object
        Index settings
      • defaults object

        Default settings, included when the request's include_defaults is true.

        Index settings
      • data_stream string
      • lifecycle object

        Data stream lifecycle applicable if this is a data stream.

        Hide lifecycle attributes Show lifecycle attributes object
        • data_retention string

          If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

          External documentation
        • downsampling object

          The downsampling configuration to execute for the managed backing index after rollover.

          Hide downsampling attribute Show downsampling attribute object
          • rounds array[object] Required

            The list of downsampling rounds to execute as part of this downsampling configuration

        • enabled boolean

          If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

          Default value is true.

GET /{index}
GET /my-index-000001
resp = client.indices.get(
    index="my-index-000001",
)
const response = await client.indices.get({
  index: "my-index-000001",
});
response = client.indices.get(
  index: "my-index-000001"
)
$resp = $client->indices()->get([
    "index" => "my-index-000001",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001"
client.indices().get(g -> g
    .index("my-index-000001")
);
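
As a brief illustration of the features query parameter described above, a hedged Python sketch that limits the response to the settings and mappings sections (placeholder connection details; requires a client and cluster that support the parameter, which was added in 8.1.0):

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection

# Return only the settings and mappings sections for the index.
resp = client.indices.get(index="my-index-000001", features=["settings", "mappings"])
print(list(resp["my-index-000001"].keys()))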

Create an index Generally available

PUT /{index}

You can use the create index API to add a new index to an Elasticsearch cluster. When creating an index, you can specify the following:

  • Settings for the index.
  • Mappings for fields in the index.
  • Index aliases

Wait for active shards

By default, index creation will only return a response to the client when the primary copies of each shard have been started, or the request times out. The index creation response will indicate what happened. For example, acknowledged indicates whether the index was successfully created in the cluster, while shards_acknowledged indicates whether the requisite number of shard copies were started for each shard in the index before timing out. Note that it is still possible for either acknowledged or shards_acknowledged to be false, but for the index creation to be successful. These values simply indicate whether the operation completed before the timeout. If acknowledged is false, the request timed out before the cluster state was updated with the newly created index, but it probably will be created sometime soon. If shards_acknowledged is false, then the request timed out before the requisite number of shards were started (by default just the primaries), even if the cluster state was successfully updated to reflect the newly created index (that is to say, acknowledged is true).

You can change the default of only waiting for the primary shards to start through the index setting index.write.wait_for_active_shards. Note that changing this setting will also affect the wait_for_active_shards value on all subsequent write operations.
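
To make the relationship between acknowledged and shards_acknowledged concrete, here is a minimal Python sketch (placeholder connection details; the index name and shard counts are illustrative) that waits for all shard copies and inspects both flags:

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection

resp = client.indices.create(
    index="my-index-000001",
    settings={"number_of_shards": 3, "number_of_replicas": 2},
    wait_for_active_shards="all",  # wait for primaries and replicas, not just primaries
    timeout="30s",
)

# True once the cluster state contains the new index.
print(resp["acknowledged"])
# True only if the requested shard copies started before the timeout.
print(resp["shards_acknowledged"])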

Required authorization

  • Index privileges: create_index,manage

Path parameters

  • index string Required

    Name of the index you wish to create.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

application/json

Body

  • aliases object

    Aliases for the index.

    Hide aliases attribute Show aliases attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • filter object

        Query used to limit documents the alias can access.

        External documentation
      • index_routing string

        Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

      • is_hidden boolean

        If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

        Default value is false.

      • is_write_index boolean

        If true, the index is the write index for the alias.

        Default value is false.

      • routing string

        Value used to route indexing and search operations to a specific shard.

      • search_routing string

        Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

  • mappings object

    Mapping for fields in the index. If specified, this mapping can include:

    • Field names
    • Field data types
    • Mapping parameters
    Hide mappings attributes Show mappings attributes object
    • all_field object
      Hide all_field attributes Show all_field attributes object
      • analyzer string Required
      • enabled boolean Required
      • omit_norms boolean Required
      • search_analyzer string Required
      • similarity string Required
      • store boolean Required
      • store_term_vector_offsets boolean Required
      • store_term_vector_payloads boolean Required
      • store_term_vector_positions boolean Required
      • store_term_vectors boolean Required
    • date_detection boolean
    • dynamic string

      Values are strict, runtime, true, or false.

    • dynamic_date_formats array[string]
    • dynamic_templates array[object]
    • _field_names object
      Hide _field_names attribute Show _field_names attribute object
      • enabled boolean Required
    • index_field object
      Hide index_field attribute Show index_field attribute object
      • enabled boolean Required
    • _meta object
      Hide _meta attribute Show _meta attribute object
      • * object Additional properties
    • numeric_detection boolean
    • properties object
    • _routing object
      Hide _routing attribute Show _routing attribute object
      • required boolean Required
    • _size object
      Hide _size attribute Show _size attribute object
      • enabled boolean Required
    • _source object
      Hide _source attributes Show _source attributes object
      • compress boolean
      • compress_threshold string
      • enabled boolean
      • excludes array[string]
      • includes array[string]
      • mode string

        Supported values include:

        • disabled
        • stored
        • synthetic: Instead of storing source documents on disk exactly as you send them, Elasticsearch can reconstruct source content on the fly upon retrieval.

        Values are disabled, stored, or synthetic.

    • runtime object
      Hide runtime attribute Show runtime attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field
          • format string
        • format string

          A custom format for date type runtime fields.

        • input_field string

          For type lookup

        • target_field string

          For type lookup

        • target_index string

          For type lookup

        • script object

          Painless script executed at query time.

          Hide script attributes Show script attributes object
          • source string

            The script source.

          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          • options object
        • type string Required

          Field type, which can be: boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • enabled boolean
    • subobjects string

      Values are true or false.

    • _data_stream_timestamp object
      Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
      • enabled boolean Required
  • settings object

    Configuration options for the index.

    Index settings

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • index string Required
    • shards_acknowledged boolean Required
    • acknowledged boolean Required
PUT /{index}
PUT /my-index-000001
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 2
  }
}
resp = client.indices.create(
    index="my-index-000001",
    settings={
        "number_of_shards": 3,
        "number_of_replicas": 2
    },
)
const response = await client.indices.create({
  index: "my-index-000001",
  settings: {
    number_of_shards: 3,
    number_of_replicas: 2,
  },
});
response = client.indices.create(
  index: "my-index-000001",
  body: {
    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 2
    }
  }
)
$resp = $client->indices()->create([
    "index" => "my-index-000001",
    "body" => [
        "settings" => [
            "number_of_shards" => 3,
            "number_of_replicas" => 2,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"settings":{"number_of_shards":3,"number_of_replicas":2}}' "$ELASTICSEARCH_URL/my-index-000001"
client.indices().create(c -> c
    .index("my-index-000001")
    .settings(s -> s
        .numberOfShards("3")
        .numberOfReplicas("2")
    )
);
Request examples
This request specifies the `number_of_shards` and `number_of_replicas`.
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 2
  }
}
You can provide mapping definitions in the create index API requests.
{
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "properties": {
      "field1": { "type": "text" }
    }
  }
}
You can provide index aliases in create index API requests. Index alias names also support date math.
{
  "aliases": {
    "alias_1": {},
    "alias_2": {
      "filter": {
        "term": {
          "user.id": "kimchy"
        }
      },
      "routing": "shard-1"
    }
  }
}

Delete indices Generally available

DELETE /{index}

Deleting an index deletes its documents, shards, and metadata. It does not delete related Kibana components, such as data views, visualizations, or dashboards.

You cannot delete the current write index of a data stream. To delete the index, you must roll over the data stream so a new write index is created. You can then use the delete index API to delete the previous write index.
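
A hedged Python sketch of the rollover-then-delete workflow for a data stream's previous write index (placeholder connection details; the data stream name reuses examples elsewhere in this document, and the backing index name shown in the comment is hypothetical):

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection

# Roll the data stream over so a new write index is created.
rollover = client.indices.rollover(alias="my-data-stream")
old_write_index = rollover["old_index"]  # e.g. ".ds-my-data-stream-2099.03.07-000001" (hypothetical)

# The previous write index can now be deleted.
client.indices.delete(index=old_write_index)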

Required authorization

  • Index privileges: delete_index

Path parameters

  • index string | array[string] Required

    Comma-separated list of indices to delete. You cannot specify index aliases. By default, this parameter does not support wildcards (*) or _all. To use wildcards or _all, set the action.destructive_requires_name cluster setting to false.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • _shards object
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index
        • node string
        • reason
        • shard number
        • status string
        • primary boolean
      • skipped number
DELETE /{index}
DELETE /books
resp = client.indices.delete(
    index="books",
)
const response = await client.indices.delete({
  index: "books",
});
response = client.indices.delete(
  index: "books"
)
$resp = $client->indices()->delete([
    "index" => "books",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/books"
client.indices().delete(d -> d
    .index("books")
);

Check indices Generally available

HEAD /{index}

Check if one or more indices, index aliases, or data streams exist.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases. Supports wildcards (*).

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flat_settings boolean

    If true, returns settings in flat format.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • include_defaults boolean

    If true, return all default settings in the response.

  • local boolean

    If true, the request retrieves information from the local node only.

Responses

  • 200 application/json
HEAD /{index}
HEAD my-data-stream
resp = client.indices.exists(
    index="my-data-stream",
)
const response = await client.indices.exists({
  index: "my-data-stream",
});
response = client.indices.exists(
  index: "my-data-stream"
)
$resp = $client->indices()->exists([
    "index" => "my-data-stream",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-data-stream"
client.indices().exists(e -> e
    .index("my-data-stream")
);
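
Since a HEAD request returns only a status code (200 if the target exists, 404 otherwise), the Python client's response can be treated as a boolean; a small sketch with placeholder connection details:

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection

# The exists call does not raise on 404; the response is truthy only on 200.
if client.indices.exists(index="my-data-stream"):
    print("my-data-stream exists")
else:
    print("my-data-stream is missing")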

Create or update an alias Generally available

POST /{index}/_aliases/{name}

All methods and paths for this operation:

PUT /{index}/_alias/{name}

POST /{index}/_alias/{name}
PUT /{index}/_aliases/{name}
POST /{index}/_aliases/{name}

Adds a data stream or index to an alias.

Aliases

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams or indices to add. Supports wildcards (*). Wildcard patterns that match both data streams and indices return an error.

  • name string Required

    Alias to update. If the alias doesn’t exist, the request creates it. Index alias names support date math.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
application/json

Body

  • filter object

    Query used to limit documents the alias can access.

    External documentation
  • index_routing string

    Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations. Data stream aliases don’t support this parameter.

  • is_write_index boolean

    If true, sets the write index or data stream for the alias. If an alias points to multiple indices or data streams and is_write_index isn’t set, the alias rejects write requests. If an index alias points to one index and is_write_index isn’t set, the index automatically acts as the write index. Data stream aliases don’t automatically set a write data stream, even if the alias points to one data stream.

  • routing string

    Value used to route indexing and search operations to a specific shard. Data stream aliases don’t support this parameter.

  • search_routing string

    Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations. Data stream aliases don’t support this parameter.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /{index}/_aliases/{name}
POST /my-index-2099.05.06-000001/_alias/my-alias
{
  "filter": {
    "bool": {
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-1d/d",
              "lt": "now/d"
            }
          }
        },
        {
          "term": {
            "user.id": "kimchy"
          }
        }
      ]
    }
  }
}
resp = client.indices.put_alias(
    index="my-index-2099.05.06-000001",
    name="my-alias",
    filter={
        "bool": {
            "filter": [
                {
                    "range": {
                        "@timestamp": {
                            "gte": "now-1d/d",
                            "lt": "now/d"
                        }
                    }
                },
                {
                    "term": {
                        "user.id": "kimchy"
                    }
                }
            ]
        }
    },
)
const response = await client.indices.putAlias({
  index: "my-index-2099.05.06-000001",
  name: "my-alias",
  filter: {
    bool: {
      filter: [
        {
          range: {
            "@timestamp": {
              gte: "now-1d/d",
              lt: "now/d",
            },
          },
        },
        {
          term: {
            "user.id": "kimchy",
          },
        },
      ],
    },
  },
});
response = client.indices.put_alias(
  index: "my-index-2099.05.06-000001",
  name: "my-alias",
  body: {
    "filter": {
      "bool": {
        "filter": [
          {
            "range": {
              "@timestamp": {
                "gte": "now-1d/d",
                "lt": "now/d"
              }
            }
          },
          {
            "term": {
              "user.id": "kimchy"
            }
          }
        ]
      }
    }
  }
)
$resp = $client->indices()->putAlias([
    "index" => "my-index-2099.05.06-000001",
    "name" => "my-alias",
    "body" => [
        "filter" => [
            "bool" => [
                "filter" => array(
                    [
                        "range" => [
                            "@timestamp" => [
                                "gte" => "now-1d/d",
                                "lt" => "now/d",
                            ],
                        ],
                    ],
                    [
                        "term" => [
                            "user.id" => "kimchy",
                        ],
                    ],
                ),
            ],
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"filter":{"bool":{"filter":[{"range":{"@timestamp":{"gte":"now-1d/d","lt":"now/d"}}},{"term":{"user.id":"kimchy"}}]}}}' "$ELASTICSEARCH_URL/my-index-2099.05.06-000001/_alias/my-alias"
client.indices().updateAliases(u -> u
    .actions(a -> a
        .add(ad -> ad
            .alias("my-alias")
            .index("my-data-stream")
        )
    )
);
Request examples
The filter option uses Query DSL to limit the documents an alias can access.
{
  "filter": {
    "bool": {
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-1d/d",
              "lt": "now/d"
            }
          }
        },
        {
          "term": {
            "user.id": "kimchy"
          }
        }
      ]
    }
  }
}
You can use is_write_index to specify a write index or data stream for an alias. Elasticsearch routes any write requests for the alias to this index or data stream.
{
  "is_write_index": true
}
Use the routing option to route requests for an alias to a specific shard.
{
  "routing": "1"
}
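
The is_write_index example above could be issued through the Python client as follows; this is a sketch, with the index and alias names taken from the earlier examples and placeholder connection details:

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection

# Mark this index as the write index for the alias, so write requests
# sent to "my-alias" are routed to it.
client.indices.put_alias(
    index="my-index-2099.05.06-000001",
    name="my-alias",
    is_write_index=True,
)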

Delete an alias Generally available

DELETE /{index}/_aliases/{name}

All methods and paths for this operation:

DELETE /{index}/_alias/{name}

DELETE /{index}/_aliases/{name}

Removes a data stream or index from an alias.

Required authorization

  • Index privileges: manage

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams or indices used to limit the request. Supports wildcards (*).

  • name string | array[string] Required

    Comma-separated list of aliases to remove. Supports wildcards (*). To remove all aliases, use * or _all.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • errors boolean
DELETE /{index}/_aliases/{name}
DELETE my-data-stream/_alias/my-alias
resp = client.indices.delete_alias(
    index="my-data-stream",
    name="my-alias",
)
const response = await client.indices.deleteAlias({
  index: "my-data-stream",
  name: "my-alias",
});
response = client.indices.delete_alias(
  index: "my-data-stream",
  name: "my-alias"
)
$resp = $client->indices()->deleteAlias([
    "index" => "my-data-stream",
    "name" => "my-alias",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-data-stream/_alias/my-alias"
client.indices().deleteAlias(d -> d
    .index("my-data-stream")
    .name("my-alias")
);

Delete data stream lifecycles Generally available; Added in 8.11.0

DELETE /_data_stream/{name}/_lifecycle

Removes the data stream lifecycle from a data stream, rendering it not managed by the data stream lifecycle.

External documentation

Path parameters

  • name string | array[string] Required

    A comma-separated list of data streams for which the data stream lifecycle will be deleted. Use * to target all data streams.

Query parameters

  • expand_wildcards string | array[string]

    Whether wildcard expressions should get expanded to open or closed indices (default: open)

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_data_stream/{name}/_lifecycle
DELETE _data_stream/my-data-stream/_lifecycle
resp = client.indices.delete_data_lifecycle(
    name="my-data-stream",
)
const response = await client.indices.deleteDataLifecycle({
  name: "my-data-stream",
});
response = client.indices.delete_data_lifecycle(
  name: "my-data-stream"
)
$resp = $client->indices()->deleteDataLifecycle([
    "name" => "my-data-stream",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/my-data-stream/_lifecycle"
client.indices().deleteDataLifecycle(d -> d
    .name("my-data-stream")
);
Response examples (200)
A successful response for deleting a data stream lifecycle.
{
  "acknowledged": true
}

Get index templates Generally available; Added in 7.9.0

GET /_index_template/{name}

All methods and paths for this operation:

GET /_index_template

GET /_index_template/{name}

Get information about one or more index templates.

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    Comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported.

Query parameters

  • local boolean

    If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.

  • flat_settings boolean

    If true, returns settings in flat format.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • include_defaults boolean Generally available; Added in 8.11.0

    If true, returns all relevant default configurations for the index template.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • index_templates array[object] Required
      Hide index_templates attributes Show index_templates attributes object
      • name string Required
      • index_template object Required
        Hide index_template attributes Show index_template attributes object
        • index_patterns string | array[string] Required

          Wildcard (*) expressions used to match the names of indices and data streams to which the template applies.

        • composed_of array[string] Required

          An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence.

        • template object

          Template to be applied. It may optionally include an aliases, mappings, or settings configuration.

        • version number

          Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch.

        • priority number

          Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch.

        • _meta object

          Optional user metadata about the index template. May have any contents. This map is not automatically generated by Elasticsearch.

        • allow_auto_create boolean
        • data_stream object

          If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with a data_stream object.

        • deprecated boolean Generally available; Added in 8.12.0

          Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.

        • ignore_missing_component_templates string | array[string]

          A list of component template names that are allowed to be absent.

GET /_index_template/{name}
GET _index_template/*?filter_path=index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream
resp = client.indices.get_index_template(
    name="*",
    filter_path="index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream",
)
const response = await client.indices.getIndexTemplate({
  name: "*",
  filter_path:
    "index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream",
});
response = client.indices.get_index_template(
  name: "*",
  filter_path: "index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream"
)
$resp = $client->indices()->getIndexTemplate([
    "name" => "*",
    "filter_path" => "index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_index_template/*?filter_path=index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream"

Create or update an index template Generally available; Added in 7.9.0

POST /_index_template/{name}

All methods and paths for this operation:

PUT /_index_template/{name}

POST /_index_template/{name}

Index templates define settings, mappings, and aliases that can be applied automatically to new indices.

Elasticsearch applies templates to new indices based on a wildcard pattern that matches the index name. Index templates are applied during data stream or index creation. For data streams, these settings and mappings are applied when the stream's backing indices are created. Settings and mappings specified in a create index API request override any settings or mappings specified in an index template. Changes to index templates do not affect existing indices, including the existing backing indices of a data stream.

You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.

Multiple matching templates

If multiple index templates match the name of a new index or data stream, the template with the highest priority is used.

Multiple templates with overlapping index patterns at the same priority are not allowed; an error is thrown when you attempt to create a template whose pattern matches an existing index template of identical priority.

Composing aliases, mappings, and settings

When multiple component templates are specified in the composed_of field for an index template, they are merged in the order specified, meaning that later component templates override earlier component templates. Any mappings, settings, or aliases from the parent index template are merged in next. Finally, any configuration on the index request itself is merged. Mapping definitions are merged recursively, which means that later mapping components can introduce new field mappings and update the mapping configuration. If a field mapping is already contained in an earlier component, its definition will be completely overwritten by the later one. This recursive merging strategy applies not only to field mappings, but also root options like dynamic_templates and meta. If an earlier component contains a dynamic_templates block, then by default new dynamic_templates entries are appended onto the end. If an entry already exists with the same key, then it is overwritten by the new definition.
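
To illustrate the composition and priority rules described above, here is a hedged Python sketch that creates two component templates and an index template that composes them; all names, patterns, and values are illustrative, and the connection details are placeholders:

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection

# Two component templates; the later one listed in composed_of wins on conflicts.
client.cluster.put_component_template(
    name="base-settings",
    template={"settings": {"number_of_shards": 1}},
)
client.cluster.put_component_template(
    name="base-mappings",
    template={"mappings": {"properties": {"@timestamp": {"type": "date"}}}},
)

# The index template merges the component templates in order, then its own
# template section; the highest-priority matching template is the one applied.
client.indices.put_index_template(
    name="logs-template",
    index_patterns=["logs-*"],
    composed_of=["base-settings", "base-mappings"],
    priority=10,
    template={"settings": {"number_of_replicas": 1}},
)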

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    Index or template name

Query parameters

  • create boolean

    If true, this request cannot replace or update existing index templates.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • cause string

    User defined reason for creating/updating the index template

application/json

Body Required

  • index_patterns string | array[string]

    Wildcard (*) expressions used to match the names of indices and data streams to which the template applies.

  • composed_of array[string]

    An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence.

  • template object

    Template to be applied. It may optionally include an aliases, mappings, or settings configuration.

    Hide template attributes Show template attributes object
    • aliases object

      Aliases to add. If the index template includes a data_stream object, these are data stream aliases. Otherwise, these are index aliases. Data stream aliases ignore the index_routing, routing, and search_routing options.

      Hide aliases attribute Show aliases attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • filter object

          Query used to limit documents the alias can access.

          External documentation
        • index_routing string

          Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

        • is_hidden boolean

          If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

          Default value is false.

        • is_write_index boolean

          If true, the index is the write index for the alias.

          Default value is false.

        • routing string

          Value used to route indexing and search operations to a specific shard.

        • search_routing string

          Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

    • mappings object

      Mapping for fields in the index. If specified, this mapping can include field names, field data types, and mapping parameters.

      Hide mappings attributes Show mappings attributes object
      • all_field object
        Hide all_field attributes Show all_field attributes object
        • analyzer string Required
        • enabled boolean Required
        • omit_norms boolean Required
        • search_analyzer string Required
        • similarity string Required
        • store boolean Required
        • store_term_vector_offsets boolean Required
        • store_term_vector_payloads boolean Required
        • store_term_vector_positions boolean Required
        • store_term_vectors boolean Required
      • date_detection boolean
      • dynamic string

        Values are strict, runtime, true, or false.

      • dynamic_date_formats array[string]
      • dynamic_templates array[object]
      • _field_names object
        Hide _field_names attribute Show _field_names attribute object
        • enabled boolean Required
      • index_field object
        Hide index_field attribute Show index_field attribute object
        • enabled boolean Required
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • numeric_detection boolean
      • properties object
      • _routing object
        Hide _routing attribute Show _routing attribute object
        • required boolean Required
      • _size object
        Hide _size attribute Show _size attribute object
        • enabled boolean Required
      • _source object
        Hide _source attributes Show _source attributes object
        • compress boolean
        • compress_threshold string
        • enabled boolean
        • excludes array[string]
        • includes array[string]
      • runtime object
        Hide runtime attribute Show runtime attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

          • fetch_fields array[object]

            For type lookup

          • format string

            A custom format for date type runtime fields.

      • enabled boolean
      • subobjects string

        Values are true or false.

      • _data_stream_timestamp object
        Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
        • enabled boolean Required
    • settings object

      Configuration options for the index.

      Index settings
    • lifecycle object

      If included, the data stream is managed by the data stream lifecycle, and this object contains its configuration.

      Hide lifecycle attributes Show lifecycle attributes object
      • data_retention string

        If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

        External documentation
      • downsampling object

        The downsampling configuration to execute for the managed backing index after rollover.

        Hide downsampling attribute Show downsampling attribute object
        • rounds array[object] Required

          The list of downsampling rounds to execute as part of this downsampling configuration

      • enabled boolean

        If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

        Default value is true.

  • data_stream object

    If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with a data_stream object.

    Hide data_stream attributes Show data_stream attributes object
    • hidden boolean
    • allow_custom_routing boolean
  • priority number

    Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch.

  • version number

    Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch. External systems can use these version numbers to simplify template management. To unset a version, replace the template without specifying one.

  • _meta object

    Optional user metadata about the index template. It may have any contents. It is not automatically generated or used by Elasticsearch. This user-defined object is stored in the cluster state, so keeping it short is preferable. To unset the metadata, replace the template without specifying it.

    Hide _meta attribute Show _meta attribute object
    • * object Additional properties
  • allow_auto_create boolean

    This setting overrides the value of the action.auto_create_index cluster setting. If set to true in a template, then indices can be automatically created using that template even if auto-creation of indices is disabled via action.auto_create_index. If set to false, then indices or data streams matching the template must always be explicitly created, and may never be automatically created.

  • ignore_missing_component_templates array[string]

    The configuration option ignore_missing_component_templates can be used when an index template references a component template that might not exist.

  • deprecated boolean

    Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_index_template/{name}
PUT /_index_template/template_1
{
  "index_patterns" : ["template*"],
  "priority" : 1,
  "template": {
    "settings" : {
      "number_of_shards" : 2
    }
  }
}
resp = client.indices.put_index_template(
    name="template_1",
    index_patterns=[
        "template*"
    ],
    priority=1,
    template={
        "settings": {
            "number_of_shards": 2
        }
    },
)
const response = await client.indices.putIndexTemplate({
  name: "template_1",
  index_patterns: ["template*"],
  priority: 1,
  template: {
    settings: {
      number_of_shards: 2,
    },
  },
});
response = client.indices.put_index_template(
  name: "template_1",
  body: {
    "index_patterns": [
      "template*"
    ],
    "priority": 1,
    "template": {
      "settings": {
        "number_of_shards": 2
      }
    }
  }
)
$resp = $client->indices()->putIndexTemplate([
    "name" => "template_1",
    "body" => [
        "index_patterns" => array(
            "template*",
        ),
        "priority" => 1,
        "template" => [
            "settings" => [
                "number_of_shards" => 2,
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index_patterns":["template*"],"priority":1,"template":{"settings":{"number_of_shards":2}}}' "$ELASTICSEARCH_URL/_index_template/template_1"
client.indices().putIndexTemplate(p -> p
    .indexPatterns("template*")
    .name("template_1")
    .priority(1L)
    .template(t -> t
        .settings(s -> s
            .numberOfShards("2")
        )
    )
);
Request examples
{
  "index_patterns" : ["template*"],
  "priority" : 1,
  "template": {
    "settings" : {
      "number_of_shards" : 2
    }
  }
}
You can include index aliases in an index template. During index creation, the `{index}` placeholder in the alias name will be replaced with the actual index name that the template gets applied to.
{
  "index_patterns": [
    "template*"
  ],
  "template": {
    "settings": {
      "number_of_shards": 1
    },
    "aliases": {
      "alias1": {},
      "alias2": {
        "filter": {
          "term": {
            "user.id": "kimchy"
          }
        },
        "routing": "shard-1"
      },
      "{index}-alias": {}
    }
  }
}
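
As the data_stream body attribute notes, data streams require a matching index template that contains a data_stream object. The following is a minimal sketch using the Python client, with hypothetical template and pattern names; an empty data_stream object is enough to mark the template as a data stream template.

# Assumes: from elasticsearch import Elasticsearch; client = Elasticsearch(ES_URL, api_key=API_KEY)
resp = client.indices.put_index_template(
    name="logs-template",        # hypothetical template name
    index_patterns=["logs-*"],   # hypothetical index pattern
    data_stream={},              # empty object: matching names create data streams, not plain indices
    priority=200,
)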




Check index templates Generally available

HEAD /_index_template/{name}

Check whether index templates exist.

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    Comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported.

Query parameters

  • local boolean

    If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.

  • flat_settings boolean

    If true, returns settings in flat format.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
HEAD /_index_template/{name}
curl \
 --request HEAD 'http://api.example.com/_index_template/{name}' \
 --header "Authorization: $API_KEY"

Get legacy index templates Deprecated Generally available

GET /_template/{name}

All methods and paths for this operation:

GET /_template

GET /_template/{name}

Get information about one or more index templates.

IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

Required authorization

  • Cluster privileges: manage_index_templates
External documentation

Path parameters

  • name string | array[string] Required

    Comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported. To return all index templates, omit this parameter or use a value of _all or *.

Query parameters

  • flat_settings boolean

    If true, returns settings in flat format.

  • local boolean

    If true, the request retrieves information from the local node only.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • aliases object Required
        Hide aliases attribute Show aliases attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • filter object

            Query used to limit documents the alias can access.

            External documentation
          • index_routing string

            Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

          • is_hidden boolean

            If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

            Default value is false.

          • is_write_index boolean

            If true, the index is the write index for the alias.

            Default value is false.

          • routing string

            Value used to route indexing and search operations to a specific shard.

          • search_routing string

            Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

      • index_patterns array[string] Required
      • mappings object Required
        Hide mappings attributes Show mappings attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
      • order number Required
      • settings object Required
        Hide settings attribute Show settings attribute object
        • * object Additional properties
      • version number
GET /_template/{name}
GET /_template/.monitoring-*
resp = client.indices.get_template(
    name=".monitoring-*",
)
const response = await client.indices.getTemplate({
  name: ".monitoring-*",
});
response = client.indices.get_template(
  name: ".monitoring-*"
)
$resp = $client->indices()->getTemplate([
    "name" => ".monitoring-*",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_template/.monitoring-*"
client.indices().getTemplate(g -> g
    .name(".monitoring-*")
);

Create or update a legacy index template Deprecated Generally available

POST /_template/{name}

All methods and paths for this operation:

PUT /_template/{name}

POST /_template/{name}

Index templates define settings, mappings, and aliases that can be applied automatically to new indices. Elasticsearch applies templates to new indices based on an index pattern that matches the index name.

IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

Composable templates always take precedence over legacy templates. If no composable template matches a new index, matching legacy templates are applied according to their order.

Index templates are only applied during index creation. Changes to index templates do not affect existing indices. Settings and mappings specified in create index API requests override any settings or mappings specified in an index template.

You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.

Indices matching multiple templates

Multiple index templates can potentially match an index. In this case, both the settings and mappings are merged into the final configuration of the index. The order of the merging can be controlled using the order parameter, with lower order values being applied first and higher order values overriding them. NOTE: Multiple matching templates with the same order value will result in a non-deterministic merging order.
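
As a concrete illustration of the order-based merging described above, here is a minimal sketch using the Python client with hypothetical template names: the template with the lower order value is applied first, and the higher-order template overrides it where the two overlap.

# Assumes: from elasticsearch import Elasticsearch; client = Elasticsearch(ES_URL, api_key=API_KEY)
# Applied first because of its lower order value.
client.indices.put_template(
    name="base-template",            # hypothetical name
    index_patterns=["log-*"],
    order=0,
    settings={"number_of_shards": 1, "number_of_replicas": 1},
)
# Merged later, so its number_of_shards setting wins for matching indices.
client.indices.put_template(
    name="override-template",        # hypothetical name
    index_patterns=["log-*"],
    order=1,
    settings={"number_of_shards": 2},
)

Under these two templates, a new index named log-2099.01.01 would be created with number_of_shards: 2 and number_of_replicas: 1, assuming no composable template matches it (composable templates always take precedence).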

Required authorization

  • Cluster privileges: manage_index_templates, manage
External documentation

Path parameters

  • name string Required

    The name of the template

Query parameters

  • create boolean

    If true, this request cannot replace or update existing index templates.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • order number

    Order in which Elasticsearch applies this template if index matches multiple templates.

    Templates with lower 'order' values are merged first. Templates with higher 'order' values are merged later, overriding templates with lower values.

  • cause string

    User-defined reason for creating or updating the index template.

application/json

Body Required

  • aliases object

    Aliases for the index.

    Hide aliases attribute Show aliases attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • filter object

        Query used to limit documents the alias can access.

        External documentation
      • index_routing string

        Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

      • is_hidden boolean

        If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

        Default value is false.

      • is_write_index boolean

        If true, the index is the write index for the alias.

        Default value is false.

      • routing string

        Value used to route indexing and search operations to a specific shard.

      • search_routing string

        Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

  • index_patterns string | array[string]

    Array of wildcard expressions used to match the names of indices during creation.

  • mappings object

    Mapping for fields in the index.

    Hide mappings attributes Show mappings attributes object
    • all_field object
      Hide all_field attributes Show all_field attributes object
      • analyzer string Required
      • enabled boolean Required
      • omit_norms boolean Required
      • search_analyzer string Required
      • similarity string Required
      • store boolean Required
      • store_term_vector_offsets boolean Required
      • store_term_vector_payloads boolean Required
      • store_term_vector_positions boolean Required
      • store_term_vectors boolean Required
    • date_detection boolean
    • dynamic string

      Values are strict, runtime, true, or false.

    • dynamic_date_formats array[string]
    • dynamic_templates array[object]
    • _field_names object
      Hide _field_names attribute Show _field_names attribute object
      • enabled boolean Required
    • index_field object
      Hide index_field attribute Show index_field attribute object
      • enabled boolean Required
    • _meta object
      Hide _meta attribute Show _meta attribute object
      • * object Additional properties
    • numeric_detection boolean
    • properties object
    • _routing object
      Hide _routing attribute Show _routing attribute object
      • required boolean Required
    • _size object
      Hide _size attribute Show _size attribute object
      • enabled boolean Required
    • _source object
      Hide _source attributes Show _source attributes object
      • compress boolean
      • compress_threshold string
      • enabled boolean
      • excludes array[string]
      • includes array[string]
      • mode string

        Supported values include:

        • disabled
        • stored
        • synthetic: Instead of storing source documents on disk exactly as you send them, Elasticsearch can reconstruct source content on the fly upon retrieval.

        Values are disabled, stored, or synthetic.

    • runtime object
      Hide runtime attribute Show runtime attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field
          • format string
        • format string

          A custom format for date type runtime fields.

        • input_field string

          For type lookup

        • target_field string

          For type lookup

        • target_index string

          For type lookup

        • script object

          Painless script executed at query time.

          Hide script attributes Show script attributes object
          • source string

            The script source.

          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          • options object
        • type string Required

          Field type, which can be: boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • enabled boolean
    • subobjects string

      Values are true or false.

    • _data_stream_timestamp object
      Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
      • enabled boolean Required
  • order number

    Order in which Elasticsearch applies this template if index matches multiple templates.

    Templates with lower 'order' values are merged first. Templates with higher 'order' values are merged later, overriding templates with lower values.

  • settings object

    Configuration options for the index.

    Index settings
  • version number

    Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch. To unset a version, replace the template without specifying one.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_template/{name}
PUT _template/template_1
{
  "index_patterns": [
    "te*",
    "bar*"
  ],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "_source": {
      "enabled": false
    },
    "properties": {
      "host_name": {
        "type": "keyword"
      },
      "created_at": {
        "type": "date",
        "format": "EEE MMM dd HH:mm:ss Z yyyy"
      }
    }
  }
}
resp = client.indices.put_template(
    name="template_1",
    index_patterns=[
        "te*",
        "bar*"
    ],
    settings={
        "number_of_shards": 1
    },
    mappings={
        "_source": {
            "enabled": False
        },
        "properties": {
            "host_name": {
                "type": "keyword"
            },
            "created_at": {
                "type": "date",
                "format": "EEE MMM dd HH:mm:ss Z yyyy"
            }
        }
    },
)
const response = await client.indices.putTemplate({
  name: "template_1",
  index_patterns: ["te*", "bar*"],
  settings: {
    number_of_shards: 1,
  },
  mappings: {
    _source: {
      enabled: false,
    },
    properties: {
      host_name: {
        type: "keyword",
      },
      created_at: {
        type: "date",
        format: "EEE MMM dd HH:mm:ss Z yyyy",
      },
    },
  },
});
response = client.indices.put_template(
  name: "template_1",
  body: {
    "index_patterns": [
      "te*",
      "bar*"
    ],
    "settings": {
      "number_of_shards": 1
    },
    "mappings": {
      "_source": {
        "enabled": false
      },
      "properties": {
        "host_name": {
          "type": "keyword"
        },
        "created_at": {
          "type": "date",
          "format": "EEE MMM dd HH:mm:ss Z yyyy"
        }
      }
    }
  }
)
$resp = $client->indices()->putTemplate([
    "name" => "template_1",
    "body" => [
        "index_patterns" => array(
            "te*",
            "bar*",
        ),
        "settings" => [
            "number_of_shards" => 1,
        ],
        "mappings" => [
            "_source" => [
                "enabled" => false,
            ],
            "properties" => [
                "host_name" => [
                    "type" => "keyword",
                ],
                "created_at" => [
                    "type" => "date",
                    "format" => "EEE MMM dd HH:mm:ss Z yyyy",
                ],
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index_patterns":["te*","bar*"],"settings":{"number_of_shards":1},"mappings":{"_source":{"enabled":false},"properties":{"host_name":{"type":"keyword"},"created_at":{"type":"date","format":"EEE MMM dd HH:mm:ss Z yyyy"}}}}' "$ELASTICSEARCH_URL/_template/template_1"
client.indices().putTemplate(p -> p
    .indexPatterns(List.of("te*","bar*"))
    .mappings(m -> m
        .source(s -> s
            .enabled(false)
        )
    )
    .name("template_1")
    .settings(s -> s
        .numberOfShards("1")
    )
);
Request examples
{
  "index_patterns": [
    "te*",
    "bar*"
  ],
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "_source": {
      "enabled": false
    },
    "properties": {
      "host_name": {
        "type": "keyword"
      },
      "created_at": {
        "type": "date",
        "format": "EEE MMM dd HH:mm:ss Z yyyy"
      }
    }
  }
}
You can include index aliases in an index template. During index creation, the `{index}` placeholder in the alias name will be replaced with the actual index name that the template gets applied to.
{
  "index_patterns": [
    "te*"
  ],
  "settings": {
    "number_of_shards": 1
  },
  "aliases": {
    "alias1": {},
    "alias2": {
      "filter": {
        "term": {
          "user.id": "kimchy"
        }
      },
      "routing": "shard-1"
    },
    "{index}-alias": {}
  }
}

Delete a legacy index template Deprecated Generally available

DELETE /_template/{name}

IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    The name of the legacy index template to delete. Wildcard (*) expressions are supported.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_template/{name}
DELETE _template/.cloud-hot-warm-allocation-0
resp = client.indices.delete_template(
    name=".cloud-hot-warm-allocation-0",
)
const response = await client.indices.deleteTemplate({
  name: ".cloud-hot-warm-allocation-0",
});
response = client.indices.delete_template(
  name: ".cloud-hot-warm-allocation-0"
)
$resp = $client->indices()->deleteTemplate([
    "name" => ".cloud-hot-warm-allocation-0",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_template/.cloud-hot-warm-allocation-0"
client.indices().deleteTemplate(d -> d
    .name(".cloud-hot-warm-allocation-0")
);

Check existence of index templates Generally available

HEAD /_template/{name}

Get information about whether index templates exist. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.

IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

Required authorization

  • Cluster privileges: manage_index_templates
External documentation

Path parameters

  • name string | array[string] Required

    A comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported.

Query parameters

  • flat_settings boolean

    Indicates whether to use a flat format for the response.

  • local boolean

    Indicates whether to get information from the local node only.

  • master_timeout string

    The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never time out, set it to -1.

    External documentation

Responses

  • 200 application/json
HEAD /_template/{name}
HEAD /_template/template_1
resp = client.indices.exists_template(
    name="template_1",
)
const response = await client.indices.existsTemplate({
  name: "template_1",
});
response = client.indices.exists_template(
  name: "template_1"
)
$resp = $client->indices()->existsTemplate([
    "name" => "template_1",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_template/template_1"
client.indices().existsTemplate(e -> e
    .name("template_1")
);




Get aliases Generally available

GET /{index}/_alias/{name}

All methods and paths for this operation:

GET /_alias

GET /_alias/{name}
GET /{index}/_alias
GET /{index}/_alias/{name}

Retrieves information for one or more data stream or index aliases.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams or indices used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

  • name string | array[string] Required

    Comma-separated list of aliases to retrieve. Supports wildcards (*). To retrieve all aliases, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • local boolean Deprecated

    If true, the request retrieves information from the local node only.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attribute Show * attribute object
      • aliases object Required
        Hide aliases attribute Show aliases attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • filter object

            Query used to limit documents the alias can access.

            External documentation
          • index_routing string

            Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

          • is_write_index boolean

            If true, the index is the write index for the alias.

            Default value is false.

          • routing string

            Value used to route indexing and search operations to a specific shard.

          • search_routing string

            Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

          • is_hidden boolean Generally available; Added in 7.16.0

            If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

            Default value is false.

GET /{index}/_alias/{name}
GET _alias
resp = client.indices.get_alias()
const response = await client.indices.getAlias();
response = client.indices.get_alias
$resp = $client->indices()->getAlias();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_alias"
client.indices().getAlias(g -> g);
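
The GET _alias example above returns every alias in the cluster. The index and name path parameters can narrow the result; the sketch below uses the Python client with hypothetical index and alias patterns.

# Assumes: from elasticsearch import Elasticsearch; client = Elasticsearch(ES_URL, api_key=API_KEY)
# Only aliases whose names start with "logs" and that point at indices matching "my-index-*".
resp = client.indices.get_alias(
    index="my-index-*",   # hypothetical index pattern
    name="logs*",         # hypothetical alias pattern
)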

Check aliases Generally available

HEAD /{index}/_alias/{name}

All methods and paths for this operation:

HEAD /_alias/{name}

HEAD /{index}/_alias/{name}

Check if one or more data stream or index aliases exist.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams or indices used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

  • name string | array[string] Required

    Comma-separated list of aliases to check. Supports wildcards (*).

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, requests that include a missing data stream or index in the target indices or data streams return an error.

  • local boolean Deprecated

    If true, the request retrieves information from the local node only.

Responses

  • 200 application/json
HEAD /{index}/_alias/{name}
HEAD _alias/my-alias
resp = client.indices.exists_alias(
    name="my-alias",
)
const response = await client.indices.existsAlias({
  name: "my-alias",
});
response = client.indices.exists_alias(
  name: "my-alias"
)
$resp = $client->indices()->existsAlias([
    "name" => "my-alias",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_alias/my-alias"
client.indices().existsAlias(e -> e
    .name("my-alias")
);

Get field usage stats Technical preview; Added in 7.15.0

GET /{index}/_field_usage_stats

Get field usage information for each shard and field of an index. Field usage statistics are automatically captured when queries are running on a cluster. A shard-level search request that accesses a given field, even if multiple times during that request, is counted as a single use.

The response body reports the per-shard usage count of the data structures that back the fields in the index. A given request will increment each count by a maximum value of 1, even if the request accesses the same field multiple times.
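
Because the response reports every field by default, the fields query parameter documented below can be used to keep it focused. A minimal sketch with the Python client, using hypothetical index and field names and assuming the client exposes that query parameter directly:

# Assumes: from elasticsearch import Elasticsearch; client = Elasticsearch(ES_URL, api_key=API_KEY)
resp = client.indices.field_usage_stats(
    index="my-index-000001",     # hypothetical index name
    fields="message*,user.id",   # restrict the statistics to matching fields
)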

Required authorization

  • Index privileges: manage

Path parameters

  • index string | array[string] Required

    Comma-separated list or wildcard expression of index names used to limit the request.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • fields string | array[string]

    Comma-separated list or wildcard expressions of fields to include in the statistics.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • _shards object Required
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details, which depend on the error type, are also provided.

        • shard number
        • status string
        • primary boolean
      • skipped number
GET /{index}/_field_usage_stats
GET /my-index-000001/_field_usage_stats
resp = client.indices.field_usage_stats(
    index="my-index-000001",
)
const response = await client.indices.fieldUsageStats({
  index: "my-index-000001",
});
response = client.indices.field_usage_stats(
  index: "my-index-000001"
)
$resp = $client->indices()->fieldUsageStats([
    "index" => "my-index-000001",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_field_usage_stats"
client.indices().fieldUsageStats(f -> f
    .index("my-index-000001")
);
Response examples (200)
An abbreviated response from `GET /my-index-000001/_field_usage_stats`. The `all_fields` object reports the sums of the usage counts for all fields in the index (on the listed shard).
{
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "my-index-000001": {
    "shards": [
      {
        "tracking_id": "MpOl0QlTQ4SYYhEe6KgJoQ",
        "tracking_started_at_millis": 1625558985010,
        "routing": {
          "state": "STARTED",
          "primary": true,
          "node": "gA6KeeVzQkGURFCUyV-e8Q",
          "relocating_node": null
        },
        "stats": {
          "all_fields": {
            "any": "6",
            "inverted_index": {
              "terms": 1,
              "postings": 1,
              "proximity": 1,
              "positions": 0,
              "term_frequencies": 1,
              "offsets": 0,
              "payloads": 0
            },
            "stored_fields": 2,
            "doc_values": 1,
            "points": 0,
            "norms": 1,
            "term_vectors": 0,
            "knn_vectors": 0
          },
          "fields": {
            "_id": {
              "any": 1,
              "inverted_index": {
                "terms": 1,
                "postings": 1,
                "proximity": 1,
                "positions": 0,
                "term_frequencies": 1,
                "offsets": 0,
                "payloads": 0
              },
              "stored_fields": 1,
              "doc_values": 0,
              "points": 0,
              "norms": 0,
              "term_vectors": 0,
              "knn_vectors": 0
            },
            "_source": {},
            "context": {},
            "message.keyword": {}
          }
        }
      }
    ]
  }
}

Flush data streams or indices Generally available

GET /{index}/_flush

All methods and paths for this operation:

POST /_flush

GET /_flush
POST /{index}/_flush
GET /{index}/_flush

Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.

After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.

It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
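
To illustrate the last point, the sketch below (Python client, hypothetical index name) indexes a document and then calls the flush API; a successful flush response indicates that the document has been committed to the Lucene index rather than held only in the transaction log.

# Assumes: from elasticsearch import Elasticsearch; client = Elasticsearch(ES_URL, api_key=API_KEY)
client.index(index="my-index-000001", document={"message": "hello"})   # hypothetical document
resp = client.indices.flush(
    index="my-index-000001",
    wait_if_ongoing=True,   # block instead of erroring if another flush is already running
)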

Required authorization

  • Index privileges: maintenance

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases to flush. Supports wildcards (*). To flush all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • force boolean

    If true, the request forces a flush even if there are no changes to commit to the index.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • wait_if_ongoing boolean

    If true, the flush operation blocks until it can run when another flush operation is already running. If false, Elasticsearch returns an error if you request a flush when another flush operation is running.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • _shards object
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details, which depend on the error type, are also provided.

        • shard number
        • status string
        • primary boolean
      • skipped number
GET /{index}/_flush
POST /_flush
resp = client.indices.flush()
const response = await client.indices.flush();
response = client.indices.flush
$resp = $client->indices()->flush();
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_flush"
client.indices().flush(f -> f);

Force a merge Generally available; Added in 2.1.0

POST /{index}/_forcemerge

All methods and paths for this operation:

POST /_forcemerge

POST /{index}/_forcemerge

Perform the force merge operation on the shards of one or more indices. For data streams, the API forces a merge on the shards of the stream's backing indices.

Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents. Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.

WARNING: We recommend force merging only a read-only index (meaning the index is no longer receiving writes). When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone". These soft-deleted documents are automatically cleaned up during regular segment merges. But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges. So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance. If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can't be backed up incrementally.

Blocks during a force merge

Calls to this API block until the merge is complete (unless the request contains wait_for_completion=false). If the client connection is lost before completion then the force merge process will continue in the background. Any new requests to force merge the same indices will also block until the ongoing force merge is complete.

Running force merge asynchronously

If the request contains wait_for_completion=false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to get the status of the task. However, you cannot cancel this task because the force merge task is not cancelable. Elasticsearch creates a record of this task as a document at _tasks/<task_id>. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.
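
A minimal sketch of this asynchronous flow with the Python client, using a hypothetical index name: start the merge with wait_for_completion=false, then poll the returned task ID.

# Assumes: from elasticsearch import Elasticsearch; client = Elasticsearch(ES_URL, api_key=API_KEY)
resp = client.indices.forcemerge(
    index="my-index-000001",      # hypothetical index name
    max_num_segments=1,
    wait_for_completion=False,    # return a task instead of blocking until the merge completes
)
task_id = resp["task"]
status = client.tasks.get(task_id=task_id)   # corresponds to GET _tasks/<task_id>
print(status["completed"])

Once the task has completed and you no longer need its record, delete the task document as noted above so Elasticsearch can reclaim the space.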

Force merging multiple indices

You can force merge multiple indices with a single request by targeting:

  • One or more data streams that contain multiple backing indices
  • Multiple indices
  • One or more aliases
  • All data streams and indices in a cluster

Each targeted shard is force-merged separately using the force_merge thread pool. By default each node only has a single force_merge thread, which means that the shards on that node are force-merged one at a time. If you expand the force_merge thread pool on a node, it will force merge its shards in parallel.

Force merging temporarily increases the storage used by the shard being merged: if the max_num_segments parameter is set to 1, the shard may need free space of up to triple its size in order to rewrite all segments into a new one.

Data streams and time-based indices

Force-merging is useful for managing a data stream's older backing indices and other time-based indices, particularly after a rollover. In these cases, each index only receives indexing traffic for a certain period of time. Once an index receives no more writes, its shards can be force-merged to a single segment. This can be a good idea because single-segment shards can sometimes use simpler and more efficient data structures to perform searches. For example:

POST /.ds-my-data-stream-2099.03.07-000001/_forcemerge?max_num_segments=1

Required authorization

  • Index privileges: maintenance
External documentation

Path parameters

  • index string | array[string] Required

    A comma-separated list of index names; use _all or an empty string to perform the operation on all indices.

Query parameters

  • allow_no_indices boolean

    Whether to ignore the request if a wildcard indices expression resolves to no concrete indices. (This includes the _all string or when no indices have been specified.)

  • expand_wildcards string | array[string]

    Whether to expand wildcard expression to concrete indices that are open, closed or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flush boolean

    Specify whether the index should be flushed after performing the operation (default: true)

  • ignore_unavailable boolean

    Whether specified concrete indices should be ignored when unavailable (missing or closed)

  • max_num_segments number

    The number of segments the index should be merged into (default: dynamic)

  • only_expunge_deletes boolean

    Specify whether the operation should only expunge deleted documents

  • wait_for_completion boolean

    Should the request wait until the force merge is completed.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _shards object
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index
        • node string
        • reason
        • shard number
        • status string
        • primary boolean
      • skipped number
    • task string

      Contains the task ID returned when wait_for_completion=false. You can use the task ID to get the status of the task at _tasks/<task_id>.

POST /{index}/_forcemerge
POST my-index-000001/_forcemerge
resp = client.indices.forcemerge(
    index="my-index-000001",
)
const response = await client.indices.forcemerge({
  index: "my-index-000001",
});
response = client.indices.forcemerge(
  index: "my-index-000001"
)
$resp = $client->indices()->forcemerge([
    "index" => "my-index-000001",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_forcemerge"
client.indices().forcemerge(f -> f
    .index("my-index-000001")
);

Get mapping definitions Generally available

GET /{index}/_mapping/field/{fields}

All methods and paths for this operation:

GET /_mapping/field/{fields}

GET /{index}/_mapping/field/{fields}

Retrieves mapping definitions for one or more fields. For data streams, the API retrieves field mappings for the stream’s backing indices.

This API is useful if you don't need a complete mapping or if an index mapping contains a large number of fields.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

  • fields string | array[string] Required

    Comma-separated list or wildcard expression of fields used to limit returned information. Supports wildcards (*).

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • include_defaults boolean

    If true, return all default settings in the response.

  • local boolean

    If true, the request retrieves information from the local node only.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attribute Show * attribute object
      • mappings object Required
        Hide mappings attribute Show mappings attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • full_name string Required
          • mapping object Required
GET /{index}/_mapping/field/{fields}
GET publications/_mapping/field/title
resp = client.indices.get_field_mapping(
    index="publications",
    fields="title",
)
const response = await client.indices.getFieldMapping({
  index: "publications",
  fields: "title",
});
response = client.indices.get_field_mapping(
  index: "publications",
  fields: "title"
)
$resp = $client->indices()->getFieldMapping([
    "index" => "publications",
    "fields" => "title",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/publications/_mapping/field/title"
client.indices().getFieldMapping(g -> g
    .fields("title")
    .index("publications")
);
Response examples (200)
A successful response from `GET publications/_mapping/field/title`, which returns the mapping of a field called `title`.
{
   "publications": {
      "mappings": {
          "title": {
             "full_name": "title",
             "mapping": {
                "title": {
                   "type": "text"
                }
             }
          }
       }
   }
}
A successful response from `GET publications/_mapping/field/author.id,abstract,name`, which requests the mappings of several fields as a comma-separated list.
{
   "publications": {
      "mappings": {
        "author.id": {
           "full_name": "author.id",
           "mapping": {
              "id": {
                 "type": "text"
              }
           }
        },
        "abstract": {
           "full_name": "abstract",
           "mapping": {
              "abstract": {
                 "type": "text"
              }
           }
        }
     }
   }
}
A successful response from `GET publications/_mapping/field/a*`. The get field mapping API also supports wildcard notation, as used here.
{
   "publications": {
      "mappings": {
         "author.name": {
            "full_name": "author.name",
            "mapping": {
               "name": {
                 "type": "text"
               }
            }
         },
         "abstract": {
            "full_name": "abstract",
            "mapping": {
               "abstract": {
                  "type": "text"
               }
            }
         },
         "author.id": {
            "full_name": "author.id",
            "mapping": {
               "id": {
                  "type": "text"
               }
            }
         }
      }
   }
}

Get mapping definitions Generally available

GET /{index}/_mapping

All methods and paths for this operation:

GET /_mapping

GET /{index}/_mapping

For data streams, the API retrieves mappings for the stream’s backing indices.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • local boolean Deprecated

    If true, the request retrieves information from the local node only.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • item object
        Hide item attributes Show item attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
      • mappings object Required
        Hide mappings attributes Show mappings attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
GET /{index}/_mapping
GET /books/_mapping
resp = client.indices.get_mapping(
    index="books",
)
const response = await client.indices.getMapping({
  index: "books",
});
response = client.indices.get_mapping(
  index: "books"
)
$resp = $client->indices()->getMapping([
    "index" => "books",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/books/_mapping"
client.indices().getMapping(g -> g
    .index("books")
);

Update field mappings Generally available

POST /{index}/_mapping

All methods and paths for this operation:

PUT /{index}/_mapping

POST /{index}/_mapping

Add new fields to an existing data stream or index. You can also use this API to change the search settings of existing fields and add new properties to existing object fields. For data streams, these changes are applied to all backing indices by default.

Add multi-fields to an existing field

Multi-fields let you index the same field in different ways. You can use this API to update the fields mapping parameter and enable multi-fields for an existing field. WARNING: If an index (or data stream) contains documents when you add a multi-field, those documents will not have values for the new multi-field. You can populate the new multi-field with the update by query API.
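
As a minimal sketch of that workflow in the Python client (the index name my-index-000001 and the text field city are assumptions for illustration): the first call adds a keyword multi-field under fields, and the update by query call then rewrites existing documents in place so they gain values for the new multi-field.

# Assumed names: "my-index-000001" and the existing text field "city" are examples only.
resp = client.indices.put_mapping(
    index="my-index-000001",
    properties={
        "city": {
            "type": "text",
            "fields": {
                "raw": {"type": "keyword"}  # new multi-field
            }
        }
    },
)

# Populate the new multi-field for documents indexed before the mapping change.
resp = client.update_by_query(
    index="my-index-000001",
    conflicts="proceed",
)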

Change supported mapping parameters for an existing field

The documentation for each mapping parameter indicates whether you can update it for an existing field using this API. For example, you can use the update mapping API to update the ignore_above parameter.
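
For example, a hedged sketch of updating ignore_above on an existing keyword field; the index name, field name, and limit are assumptions:

resp = client.indices.put_mapping(
    index="my-index-000001",
    properties={
        "tags": {
            "type": "keyword",      # the field type itself is unchanged
            "ignore_above": 256     # updated mapping parameter
        }
    },
)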

Change the mapping of an existing field

Except for supported mapping parameters, you can't change the mapping or field type of an existing field. Changing an existing field could invalidate data that's already indexed.

If you need to change the mapping of a field in a data stream's backing indices, refer to documentation about modifying data streams. If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.
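
A minimal sketch of that reindex path, assuming hypothetical source and destination indices (my-old-index, my-new-index) and a field whose type needs to change:

# Create a new index with the corrected mapping (names and types are assumptions).
resp = client.indices.create(
    index="my-new-index",
    mappings={
        "properties": {
            "user_id": {"type": "keyword"}  # previously mapped with a different type
        }
    },
)

# Copy the existing documents into the new index.
resp = client.reindex(
    source={"index": "my-old-index"},
    dest={"index": "my-new-index"},
)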

Rename a field

Renaming a field would invalidate data already indexed under the old field name. Instead, add an alias field to create an alternate field name.
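
For instance, instead of renaming an existing user_identifier field you could add an alias field that points at it; the field and index names here are assumptions:

resp = client.indices.put_mapping(
    index="my-index-000001",
    properties={
        "user_id": {
            "type": "alias",
            "path": "user_identifier"  # existing field the alias resolves to
        }
    },
)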

Required authorization

  • Index privileges: manage
External documentation

Path parameters

  • index string | array[string] Required

    A comma-separated list of index names the mapping should be added to (supports wildcards); use _all or omit to add the mapping on all indices.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • write_index_only boolean

    If true, the mappings are applied only to the current write index for the target.

application/json

Body Required

  • date_detection boolean

    Controls whether dynamic date detection is enabled.

  • dynamic string

    Controls whether new fields are added dynamically.

    Values are strict, runtime, true, or false.

  • dynamic_date_formats array[string]

    If date detection is enabled, new string fields are checked against 'dynamic_date_formats'; if a value matches, a new date field is added instead of a string field.

  • dynamic_templates array[object]

    Specify dynamic templates for the mapping.

  • _field_names object

    Control whether field names are enabled for the index.

    Hide _field_names attribute Show _field_names attribute object
    • enabled boolean Required
  • _meta object

    A mapping type can have custom meta data associated with it. These are not used at all by Elasticsearch, but can be used to store application-specific metadata.

    Hide _meta attribute Show _meta attribute object
    • * object Additional properties
  • numeric_detection boolean

    Automatically map strings into numeric data types for all fields.

    Default value is false.

  • properties object

    Mapping for a field. For new fields, this mapping can include:

    • Field name
    • Field data type
    • Mapping parameters
  • _routing object

    Enable making a routing value required on indexed documents.

    Hide _routing attribute Show _routing attribute object
    • required boolean Required
  • _source object

    Control whether the _source field is enabled on the index.

    Hide _source attributes Show _source attributes object
    • compress boolean
    • compress_threshold string
    • enabled boolean
    • excludes array[string]
    • includes array[string]
    • mode string

      Supported values include:

      • disabled
      • stored
      • synthetic: Instead of storing source documents on disk exactly as you send them, Elasticsearch can reconstruct source content on the fly upon retrieval.

      Values are disabled, stored, or synthetic.

  • runtime object

    Mapping of runtime fields for the index.

    Hide runtime attribute Show runtime attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        For type lookup

      • target_field string

        For type lookup

      • target_index string

        For type lookup

      • script object

        Painless script executed at query time.

        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang
        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Field type, which can be: boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • _shards object
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index
        • node string
        • reason
        • shard number
        • status string
        • primary boolean
      • skipped number
POST /{index}/_mapping
PUT /my-index-000001/_mapping
{
  "properties": {
    "user": {
      "properties": {
        "name": {
          "type": "keyword"
        }
      }
    }
  }
}
resp = client.indices.put_mapping(
    index="my-index-000001",
    properties={
        "user": {
            "properties": {
                "name": {
                    "type": "keyword"
                }
            }
        }
    },
)
const response = await client.indices.putMapping({
  index: "my-index-000001",
  properties: {
    user: {
      properties: {
        name: {
          type: "keyword",
        },
      },
    },
  },
});
response = client.indices.put_mapping(
  index: "my-index-000001",
  body: {
    "properties": {
      "user": {
        "properties": {
          "name": {
            "type": "keyword"
          }
        }
      }
    }
  }
)
$resp = $client->indices()->putMapping([
    "index" => "my-index-000001",
    "body" => [
        "properties" => [
            "user" => [
                "properties" => [
                    "name" => [
                        "type" => "keyword",
                    ],
                ],
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"properties":{"user":{"properties":{"name":{"type":"keyword"}}}}}' "$ELASTICSEARCH_URL/my-index-000001/_mapping"
client.indices().putMapping(p -> p
    .index("my-index-000001")
    .properties("user", pr -> pr
        .object(o -> o
            .properties("name", pro -> pro
                .keyword(k -> k)
            )
        )
    )
);
Request example
The update mapping API can be applied to multiple data streams or indices with a single request. For example, run `PUT /my-index-000001,my-index-000002/_mapping` to update mappings for the `my-index-000001` and `my-index-000002` indices at the same time.
{
  "properties": {
    "user": {
      "properties": {
        "name": {
          "type": "keyword"
        }
      }
    }
  }
}
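
In the Python client, the same multi-target request can be expressed by passing a comma-separated list of index names (the names below are the ones used in the example above):

resp = client.indices.put_mapping(
    index="my-index-000001,my-index-000002",
    properties={
        "user": {
            "properties": {
                "name": {"type": "keyword"}
            }
        }
    },
)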

Get index settings Generally available

GET /{index}/_settings/{name}

All methods and paths for this operation:

GET /_settings

GET /_settings/{name}
GET /{index}/_settings
GET /{index}/_settings/{name}

Get setting information for one or more indices. For data streams, it returns setting information for the stream's backing indices.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

  • name string | array[string] Required

    Comma-separated list or wildcard expression of settings to retrieve.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flat_settings boolean

    If true, returns settings in flat format.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • include_defaults boolean

    If true, return all default settings in the response.

  • local boolean

    If true, the request retrieves information from the local node only. If false, information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object
      Hide * attributes Show * attributes object
      • aliases object
        Hide aliases attribute Show aliases attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • filter object

            Query used to limit documents the alias can access.

            External documentation
          • index_routing string

            Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

          • is_hidden boolean

            If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

            Default value is false.

          • is_write_index boolean

            If true, the index is the write index for the alias.

            Default value is false.

          • routing string

            Value used to route indexing and search operations to a specific shard.

          • search_routing string

            Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

      • mappings object
        Hide mappings attributes Show mappings attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
      • settings object
        Index settings
      • defaults object

        Default settings, included when the request's include_defaults parameter is true.

        Index settings
      • data_stream string
      • lifecycle object

        Data stream lifecycle applicable if this is a data stream.

        Hide lifecycle attributes Show lifecycle attributes object
        • data_retention string

          If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

          External documentation
        • downsampling object

          The downsampling configuration to execute for the managed backing index after rollover.

          Hide downsampling attribute Show downsampling attribute object
          • rounds array[object] Required

            The list of downsampling rounds to execute as part of this downsampling configuration

        • enabled boolean

          If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

          Default value is true.

GET /{index}/_settings/{name}
GET _all/_settings?expand_wildcards=all&filter_path=*.settings.index.*.slowlog
resp = client.indices.get_settings(
    index="_all",
    expand_wildcards="all",
    filter_path="*.settings.index.*.slowlog",
)
const response = await client.indices.getSettings({
  index: "_all",
  expand_wildcards: "all",
  filter_path: "*.settings.index.*.slowlog",
});
response = client.indices.get_settings(
  index: "_all",
  expand_wildcards: "all",
  filter_path: "*.settings.index.*.slowlog"
)
$resp = $client->indices()->getSettings([
    "index" => "_all",
    "expand_wildcards" => "all",
    "filter_path" => "*.settings.index.*.slowlog",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_all/_settings?expand_wildcards=all&filter_path=*.settings.index.*.slowlog"

Open a closed index Generally available

POST /{index}/_open

For data streams, the API opens any closed backing indices.

A closed index is blocked for read/write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. This allows closed indices to not have to maintain internal data structures for indexing or searching documents, resulting in a smaller overhead on the cluster.

When opening or closing an index, the master is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened or closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.

You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behavior can be turned off by using the ignore_unavailable=true parameter.

By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.
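
As an illustrative sketch, that setting can be changed with the cluster update settings API; whether to make it persistent or transient is up to you:

# Allow wildcard expressions such as _all or * for open/close operations.
resp = client.cluster.put_settings(
    persistent={"action.destructive_requires_name": False},
)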

Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.

Because opening or closing an index allocates its shards, the wait_for_active_shards setting on index creation applies to the _open and _close index actions as well.

Required authorization

  • Index privileges: manage

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). By default, you must explicitly name the indices you are using to limit the request. To limit a request using _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. You can update this setting in the elasticsearch.yml file or using the cluster update settings API.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • shards_acknowledged boolean Required
POST /{index}/_open
POST /.ds-my-data-stream-2099.03.07-000001/_open/
resp = client.indices.open(
    index=".ds-my-data-stream-2099.03.07-000001",
)
const response = await client.indices.open({
  index: ".ds-my-data-stream-2099.03.07-000001",
});
response = client.indices.open(
  index: ".ds-my-data-stream-2099.03.07-000001"
)
$resp = $client->indices()->open([
    "index" => ".ds-my-data-stream-2099.03.07-000001",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/.ds-my-data-stream-2099.03.07-000001/_open/"
client.indices().open(o -> o
    .index(".ds-my-data-stream-2099.03.07-000001")
);
Response examples (200)
A successful response for opening an index.
{
  "acknowledged" : true,
  "shards_acknowledged" : true
}

Update index settings Generally available

PUT /{index}/_settings

All methods and paths for this operation:

PUT /_settings

PUT /{index}/_settings

Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default.

To revert a setting to the default value, use a null value. The list of per-index settings that can be updated dynamically on live indices can be found in index settings documentation. To preserve existing settings from being updated, set the preserve_existing parameter to true.
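
A small Python sketch of both behaviours, using an assumed index name: the first call resets refresh_interval to its default by sending null, and the second applies number_of_replicas only where no explicit value is already set.

# Revert a setting to its default by sending null (None in Python).
resp = client.indices.put_settings(
    index="my-index-000001",
    settings={"index": {"refresh_interval": None}},
)

# Leave any explicitly configured settings untouched.
resp = client.indices.put_settings(
    index="my-index-000001",
    settings={"index": {"number_of_replicas": 1}},
    preserve_existing=True,
)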

There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example:

{
  "number_of_replicas": 1
}

Or you can use an index setting object:

{
  "index": {
    "number_of_replicas": 1
  }
}

Or you can use dot notation:

{
  "index.number_of_replicas": 1
}

Or you can embed any of the aforementioned options in a settings object. For example:

{
  "settings": {
    "index": {
      "number_of_replicas": 1
    }
  }
}

NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
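
A hedged sketch of the close, update, reopen sequence for a plain index (not a data stream write index), using an assumed index name and the analyzer from the request example shown later in this section:

# Analyzers can only be defined on a closed index.
client.indices.close(index="my-index-000001")

resp = client.indices.put_settings(
    index="my-index-000001",
    settings={
        "analysis": {
            "analyzer": {
                "content": {"type": "custom", "tokenizer": "whitespace"}
            }
        }
    },
)

client.indices.open(index="my-index-000001")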

Required authorization

  • Index privileges: manage
External documentation

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flat_settings boolean

    If true, returns settings in flat format.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • preserve_existing boolean

    If true, existing index settings remain unchanged.

  • reopen boolean

    Whether to close and reopen the index to apply non-dynamic settings. If set to true the indices to which the settings are being applied will be closed temporarily and then reopened in order to apply the changes.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
application/json

Body Required

object object
Index settings

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /{index}/_settings
PUT /my-index-000001/_settings
{
  "index" : {
    "number_of_replicas" : 2
  }
}
resp = client.indices.put_settings(
    index="my-index-000001",
    settings={
        "index": {
            "number_of_replicas": 2
        }
    },
)
const response = await client.indices.putSettings({
  index: "my-index-000001",
  settings: {
    index: {
      number_of_replicas: 2,
    },
  },
});
response = client.indices.put_settings(
  index: "my-index-000001",
  body: {
    "index": {
      "number_of_replicas": 2
    }
  }
)
$resp = $client->indices()->putSettings([
    "index" => "my-index-000001",
    "body" => [
        "index" => [
            "number_of_replicas" => 2,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index":{"number_of_replicas":2}}' "$ELASTICSEARCH_URL/my-index-000001/_settings"
client.indices().putSettings(p -> p
    .index("my-index-000001")
    .settings(s -> s
        .index(i -> i
            .numberOfReplicas("2")
        )
    )
);
Request examples
{
  "index" : {
    "number_of_replicas" : 2
  }
}
To revert a setting to the default value, use `null`.
{
  "index" : {
    "refresh_interval" : null
  }
}
To add an analyzer, you must close the index (`POST /my-index-000001/_close`), define the analyzer, then reopen the index (`POST /my-index-000001/_open`).
{
  "analysis": {
    "analyzer": {
      "content": {
        "type": "custom",
        "tokenizer": "whitespace"
      }
    }
  }
}








Reload search analyzers Generally available; Added in 7.3.0

POST /{index}/_reload_search_analyzers

All methods and paths for this operation:

GET /{index}/_reload_search_analyzers

POST /{index}/_reload_search_analyzers

Reload an index's search analyzers and their resources. For data streams, the API reloads search analyzers and resources for the stream's backing indices.

IMPORTANT: After reloading the search analyzers you should clear the request cache to make sure it doesn't contain responses derived from the previous versions of the analyzer.
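
For instance, a reload followed by a request-cache clear might look like this in the Python client (index name assumed):

resp = client.indices.reload_search_analyzers(index="my-index-000001")

# Drop cached responses that may still reflect the previous analyzer.
client.indices.clear_cache(index="my-index-000001", request=True)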

You can use the reload search analyzers API to pick up changes to synonym files used in the synonym_graph or synonym token filter of a search analyzer. To be eligible, the token filter must have an updateable flag of true and only be used in search analyzers.

NOTE: This API does not perform a reload for each shard of an index. Instead, it performs a reload for each node containing index shards. As a result, the total shard count returned by the API can differ from the number of index shards. Because reloading affects every node with an index shard, it is important to update the synonym file on every data node in the cluster, including nodes that don't contain a shard replica, before using this API. This ensures the synonym file is updated everywhere in the cluster in case shards are relocated in the future.

Required authorization

  • Index privileges: manage
External documentation

Path parameters

  • index string | array[string] Required

    A comma-separated list of index names to reload analyzers for

Query parameters

  • allow_no_indices boolean

    Whether to ignore the request if a wildcard indices expression resolves to no concrete indices. (This includes the _all string or when no indices have been specified.)

  • expand_wildcards string | array[string]

    Whether to expand wildcard expression to concrete indices that are open, closed or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    Whether specified concrete indices should be ignored when unavailable (missing or closed)

  • resource string

    Changed resource to reload analyzers from if applicable

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • reload_details array[object] Required
      Hide reload_details attributes Show reload_details attributes object
      • index string Required
      • reloaded_analyzers array[string] Required
      • reloaded_node_ids array[string] Required
    • _shards object Required
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
POST /{index}/_reload_search_analyzers
POST /my-index-000001/_reload_search_analyzers
resp = client.indices.reload_search_analyzers(
    index="my-index-000001",
)
const response = await client.indices.reloadSearchAnalyzers({
  index: "my-index-000001",
});
response = client.indices.reload_search_analyzers(
  index: "my-index-000001"
)
$resp = $client->indices()->reloadSearchAnalyzers([
    "index" => "my-index-000001",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_reload_search_analyzers"
Response examples (200)
A successful response when reloading the search analyzers of an index. This may be a response to `POST /my-index-000001/_reload_search_analyzers`.
{
  "_shards": <shards>,
  "reload_details": [
    {
      "index": "my-index-000001",
      "reloaded_analyzers": <reloaded analyzer names>,
      "reloaded_node_ids": <node ids>
    }
  ]
}

Resolve the cluster Generally available; Added in 8.13.0

GET /_resolve/cluster/{name}

All methods and paths for this operation:

GET /_resolve/cluster

GET /_resolve/cluster/{name}

Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.

This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.

You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.

For each cluster in the index expression, information is returned about:

  • Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
  • Whether each remote cluster is configured with skip_unavailable as true or false.
  • Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
  • Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
  • Cluster version information, including the Elasticsearch server version.

For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*. Each cluster returns information about whether it has any indices, aliases or data streams that match my-index-*.

Note on backwards compatibility

The ability to query without an index expression was added in version 8.18, so when querying remote clusters older than that, the local cluster will send the index expression dummy* to those remote clusters. Thus, if an error occurs, you may see a reference to that index expression even though you didn't request it. If it causes a problem, you can instead include an index expression like *:* to bypass the issue.

You may want to exclude a cluster or index from a search when:

  • A remote cluster is not currently connected and is configured with skip_unavailable=false. Running a cross-cluster search under those conditions will cause the entire search to fail.
  • A cluster has no matching indices, aliases or data streams for the index expression (or your user does not have permissions to search them). For example, suppose your index expression is logs*,remote1:logs* and the remote1 cluster has no indices, aliases or data streams that match logs*. In that case, that cluster will return no results if you include it in a cross-cluster search.
  • The index expression (combined with any query parameters you specify) will likely cause an exception to be thrown when you do the search. In these cases, the "error" field in the _resolve/cluster response will be present. (This is also where security/permission errors will be shown.)
  • A remote cluster is an older version that does not support the feature you want to use in your search.

Test availability of remote clusters

The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not. The remote cluster may be available, while the local cluster is not currently connected to it.

You can use the _resolve/cluster API to attempt to reconnect to remote clusters. For example with GET _resolve/cluster or GET _resolve/cluster/*:*. The connected field in the response will indicate whether it was successful. If a connection was (re-)established, this will also cause the remote/info endpoint to now indicate a connected status.
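
A minimal Python sketch of that check, under the assumption that you want to probe every configured remote cluster: resolve them all, inspect the connected field, and then confirm the status reported by the remote/info endpoint.

# Attempt to (re-)connect to every configured remote cluster.
resp = client.indices.resolve_cluster(name="*:*")
for cluster, info in resp.body.items():
    print(cluster, "connected:", info.get("connected"))

# The remote/info endpoint should now also report a connected status.
remote_info = client.cluster.remote_info()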

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • name string | array[string] Required

    A comma-separated list of names or index patterns for the indices, aliases, and data streams to resolve. Resources on remote clusters can be specified using the <cluster>:<name> syntax. Index and cluster exclusions (e.g., -cluster1:*) are also supported. If no index expression is specified, information about all remote clusters configured on the local cluster is returned without doing any index matching.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_throttled boolean Deprecated

    If true, concrete, expanded, or aliased indices are ignored when frozen. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.

  • timeout string

    The maximum time to wait for remote clusters to respond. If a remote cluster does not respond within this timeout period, the API response will show the cluster as not connected and include an error message that the request timed out.

    The default timeout is unset and the query can take as long as the networking layer is configured to wait for remote clusters that are not responding (typically 30 seconds).

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties

      Provides information about each cluster request relevant to doing a cross-cluster search.

      Hide * attributes Show * attributes object
      • connected boolean Required

        Whether the remote cluster is connected to the local (querying) cluster.

      • skip_unavailable boolean Required

        The skip_unavailable setting for a remote cluster.

      • matching_indices boolean

        Whether the index expression provided in the request matches any indices, aliases or data streams on the cluster.

      • error string

        Provides error messages that are likely to occur if you do a search with this index expression on the specified cluster (for example, lack of security privileges to query an index).

      • version object

        Provides version information about the cluster.

        Hide version attributes Show version attributes object
        • build_flavor string Required
        • minimum_index_compatibility_version string Required
        • minimum_wire_compatibility_version string Required
        • number string Required
GET /_resolve/cluster/{name}
GET /_resolve/cluster/not-present,clust*:my-index*,oldcluster:*?ignore_unavailable=false&timeout=5s
resp = client.indices.resolve_cluster(
    name="not-present,clust*:my-index*,oldcluster:*",
    ignore_unavailable=False,
    timeout="5s",
)
const response = await client.indices.resolveCluster({
  name: "not-present,clust*:my-index*,oldcluster:*",
  ignore_unavailable: "false",
  timeout: "5s",
});
response = client.indices.resolve_cluster(
  name: "not-present,clust*:my-index*,oldcluster:*",
  ignore_unavailable: "false",
  timeout: "5s"
)
$resp = $client->indices()->resolveCluster([
    "name" => "not-present,clust*:my-index*,oldcluster:*",
    "ignore_unavailable" => "false",
    "timeout" => "5s",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_resolve/cluster/not-present,clust*:my-index*,oldcluster:*?ignore_unavailable=false&timeout=5s"
client.indices().resolveCluster(r -> r
    .ignoreUnavailable(false)
    .name(List.of("not-present","clust*:my-index*","oldcluster:*"))
    .timeout(t -> t
        .time("5s")
    )
);
Response examples (200)
A successful response from `GET /_resolve/cluster/my-index*,clust*:my-index*`. Each cluster has its own response section. The cluster you sent the request to is labelled as "(local)".
{
  "(local)": {
    "connected": true,
    "skip_unavailable": false,
    "matching_indices": true,
    "version": {
      "number": "8.13.0",
      "build_flavor": "default",
      "minimum_wire_compatibility_version": "7.17.0",
      "minimum_index_compatibility_version": "7.0.0"
    }
  },
  "cluster_one": {
    "connected": true,
    "skip_unavailable": true,
    "matching_indices": true,
    "version": {
      "number": "8.13.0",
      "build_flavor": "default",
      "minimum_wire_compatibility_version": "7.17.0",
      "minimum_index_compatibility_version": "7.0.0"
    }
  },
  "cluster_two": {
    "connected": true,
    "skip_unavailable": false,
    "matching_indices": true,
    "version": {
      "number": "8.13.0",
      "build_flavor": "default",
      "minimum_wire_compatibility_version": "7.17.0",
      "minimum_index_compatibility_version": "7.0.0"
    }
  }
}
A successful response from `GET /_resolve/cluster/not-present,clust*:my-index*,oldcluster:*?ignore_unavailable=false&timeout=5s`. This type of request can be used to identify potential problems with your cross-cluster search. Note also that a `timeout` of 5 seconds is sent, which sets the maximum time the query will wait for remote clusters to respond. The local cluster has no index called `not_present`. Searching with `ignore_unavailable=false` would return a "no such index" error. The `cluster_one` remote cluster has no indices that match the pattern `my-index*`. There may be no indices that match the pattern or the index could be closed. The `cluster_two` remote cluster is not connected (the attempt to connect failed). Since this cluster is marked as `skip_unavailable=false`, you should probably exclude this cluster from the search by adding `-cluster_two:*` to the search index expression. For `cluster_three`, the error message indicates that this remote cluster did not respond within the 5-second timeout window specified, so it is also marked as not connected. The `oldcluster` remote cluster shows that it has matching indices, but no version information is included. This indicates that the cluster version predates the introduction of the `_resolve/cluster` API, so you may want to exclude it from your cross-cluster search.
{
  "(local)": {
    "connected": true,
    "skip_unavailable": false,
    "error": "no such index [not_present]"
  },
  "cluster_one": {
    "connected": true,
    "skip_unavailable": true,
    "matching_indices": false,
    "version": {
      "number": "8.13.0",
      "build_flavor": "default",
      "minimum_wire_compatibility_version": "7.17.0",
      "minimum_index_compatibility_version": "7.0.0"
    }
  },
  "cluster_two": {
    "connected": false,
    "skip_unavailable": false
  },
  "cluster_three": {
    "connected": false,
    "skip_unavailable": false,
    "error": "Request timed out before receiving a response from the remote cluster"
  },
  "oldcluster": {
    "connected": true,
    "skip_unavailable": false,
    "matching_indices": true
  }
}








Get index segments Generally available

GET /{index}/_segments

All methods and paths for this operation:

GET /_segments

GET /{index}/_segments

Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream's backing indices.

Required authorization

  • Index privileges: monitor

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • verbose boolean

    If true, the request returns a verbose response.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • indices object Required
      Hide indices attribute Show indices attribute object
      • * object Additional properties
        Hide * attribute Show * attribute object
        • shards object Required
    • _shards object Required
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
GET /{index}/_segments
GET /my-index-000001/_segments
resp = client.indices.segments(
    index="my-index-000001",
)
const response = await client.indices.segments({
  index: "my-index-000001",
});
response = client.indices.segments(
  index: "my-index-000001"
)
$resp = $client->indices()->segments([
    "index" => "my-index-000001",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_segments"
client.indices().segments(s -> s
    .index("my-index-000001")
);
Response examples (200)
This may be a response to `GET /my-index-000001/_segments`. It contains the `_shards` summary and, for each index, low-level segment details per shard.
{
  "_shards": <shards>,
  "indices": {
    "my-index-000001": {
      "shards": <per-shard segment details>
    }
  }
}




Shrink an index Generally available; Added in 5.0.0

POST /{index}/_shrink/{target}

All methods and paths for this operation:

PUT /{index}/_shrink/{target}

POST /{index}/_shrink/{target}

Shrink an index into a new index with fewer primary shards.

Before you can shrink an index:

  • The index must be read-only.
  • A copy of every shard in the index must reside on the same node.
  • The index must have a green health status.

To make shard allocation easier, we recommend you also remove the index's replica shards. You can later re-add replica shards as part of the shrink operation.
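
A hedged preparation sketch: the source index and node name used here (my_source_index, shrink_node_name) are placeholders. It removes replicas, relocates a copy of every shard to a single node, and blocks writes, as required before calling the shrink API shown below.

resp = client.indices.put_settings(
    index="my_source_index",
    settings={
        "index.number_of_replicas": 0,                                  # drop replicas to ease allocation
        "index.routing.allocation.require._name": "shrink_node_name",  # relocate a copy of each shard to this node
        "index.blocks.write": True,                                     # make the index read-only for writes
    },
)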

The requested number of primary shards in the target index must be a factor of the number of shards in the source index. For example, an index with 8 primary shards can be shrunk into 4, 2, or 1 primary shards, and an index with 15 primary shards can be shrunk into 5, 3, or 1. If the number of shards in the index is a prime number, it can only be shrunk into a single primary shard. Before shrinking, a (primary or replica) copy of every shard in the index must be present on the same node.

The current write index on a data stream cannot be shrunk. In order to shrink the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be shrunk.
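
For a data stream, that rollover is a single call; the stream name here is an assumption:

# Create a new write index so the previous one can be shrunk.
resp = client.indices.rollover(alias="my-data-stream")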

A shrink operation:

  • Creates a new target index with the same definition as the source index, but with a smaller number of primary shards.
  • Hard-links segments from the source index into the target index. If the file system does not support hard-linking, then all segments are copied into the new index, which is a much more time consuming process. Also if using multiple data paths, shards on different data paths require a full copy of segment files if they are not on the same disk since hardlinks do not work across disks.
  • Recovers the target index as though it were a closed index which had just been re-opened. Recovers shards to the index.routing.allocation.initial_recovery._id index setting.

IMPORTANT: Indices can only be shrunk if they satisfy the following requirements:

  • The target index must not exist.
  • The source index must have more primary shards than the target index.
  • The number of primary shards in the target index must be a factor of the number of primary shards in the source index.
  • The index must not contain more than 2,147,483,519 documents in total across all shards that will be shrunk into a single shard on the target index as this is the maximum number of docs that can fit into a single shard.
  • The node handling the shrink process must have sufficient free disk space to accommodate a second copy of the existing index.

Required authorization

  • Index privileges: manage

Path parameters

  • index string Required

    Name of the source index to shrink.

  • target string Required

    Name of the target index to create.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

application/json

Body

  • aliases object

    The key is the alias name. Index alias names support date math.

    Hide aliases attribute Show aliases attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • filter object

        Query used to limit documents the alias can access.

        External documentation
      • index_routing string

        Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

      • is_hidden boolean

        If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

        Default value is false.

      • is_write_index boolean

        If true, the index is the write index for the alias.

        Default value is false.

      • routing string

        Value used to route indexing and search operations to a specific shard.

      • search_routing string

        Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

  • settings object

    Configuration options for the target index.

    Hide settings attribute Show settings attribute object
    • * object Additional properties

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • shards_acknowledged boolean Required
    • index string Required
POST /{index}/_shrink/{target}
POST /my_source_index/_shrink/my_target_index
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}
resp = client.indices.shrink(
    index="my_source_index",
    target="my_target_index",
    settings={
        "index.routing.allocation.require._name": None,
        "index.blocks.write": None
    },
)
const response = await client.indices.shrink({
  index: "my_source_index",
  target: "my_target_index",
  settings: {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null,
  },
});
response = client.indices.shrink(
  index: "my_source_index",
  target: "my_target_index",
  body: {
    "settings": {
      "index.routing.allocation.require._name": nil,
      "index.blocks.write": nil
    }
  }
)
$resp = $client->indices()->shrink([
    "index" => "my_source_index",
    "target" => "my_target_index",
    "body" => [
        "settings" => [
            "index.routing.allocation.require._name" => null,
            "index.blocks.write" => null,
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"settings":{"index.routing.allocation.require._name":null,"index.blocks.write":null}}' "$ELASTICSEARCH_URL/my_source_index/_shrink/my_target_index"
client.indices().shrink(s -> s
    .index("my_source_index")
    .settings(Map.of("index.blocks.write", JsonData.fromJson("null"),"index.routing.allocation.require._name", JsonData.fromJson("null")))
    .target("my_target_index")
);
Request example
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}

Simulate an index Generally available; Added in 7.9.0

POST /_index_template/_simulate_index/{name}

Get the index configuration that would be applied to the specified index from an existing index template.

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    Name of the index to simulate

Query parameters

  • create boolean

    If true, the index template that is optionally defined in the request body is only dry-run added if it is new; if false, it can also replace an existing template of the same name.

  • cause string

    User-defined reason for dry-run creating the new template for simulation purposes.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • include_defaults boolean Generally available; Added in 8.11.0

    If true, returns all relevant default configurations for the index template.

application/json

Body

  • index_patterns string | array[string] Required

    Array of wildcard (*) expressions used to match the names of data streams and indices during creation.

  • composed_of array[string] Required

    An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence.

  • template object

    Template to be applied. It may optionally include an aliases, mappings, or settings configuration.

    Hide template attributes Show template attributes object
    • aliases object

      Aliases to add. If the index template includes a data_stream object, these are data stream aliases. Otherwise, these are index aliases. Data stream aliases ignore the index_routing, routing, and search_routing options.

      Hide aliases attribute Show aliases attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • filter object

          Query used to limit documents the alias can access.

          External documentation
        • index_routing string

          Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

        • is_hidden boolean

          If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

          Default value is false.

        • is_write_index boolean

          If true, the index is the write index for the alias.

          Default value is false.

        • routing string

          Value used to route indexing and search operations to a specific shard.

        • search_routing string

          Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

    • mappings object

      Mapping for fields in the index. If specified, this mapping can include field names, field data types, and mapping parameters.

      Hide mappings attributes Show mappings attributes object
      • all_field object
        Hide all_field attributes Show all_field attributes object
        • analyzer string Required
        • enabled boolean Required
        • omit_norms boolean Required
        • search_analyzer string Required
        • similarity string Required
        • store boolean Required
        • store_term_vector_offsets boolean Required
        • store_term_vector_payloads boolean Required
        • store_term_vector_positions boolean Required
        • store_term_vectors boolean Required
      • date_detection boolean
      • dynamic string

        Values are strict, runtime, true, or false.

      • dynamic_date_formats array[string]
      • dynamic_templates array[object]
      • _field_names object
        Hide _field_names attribute Show _field_names attribute object
        • enabled boolean Required
      • index_field object
        Hide index_field attribute Show index_field attribute object
        • enabled boolean Required
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • numeric_detection boolean
      • properties object
      • _routing object
        Hide _routing attribute Show _routing attribute object
        • required boolean Required
      • _size object
        Hide _size attribute Show _size attribute object
        • enabled boolean Required
      • _source object
        Hide _source attributes Show _source attributes object
        • compress boolean
        • compress_threshold string
        • enabled boolean
        • excludes array[string]
        • includes array[string]
      • runtime object
        Hide runtime attribute Show runtime attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

          • fetch_fields array[object]

            For type lookup

          • format string

            A custom format for date type runtime fields.

      • enabled boolean
      • subobjects string

        Values are true or false.

      • _data_stream_timestamp object
        Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
        • enabled boolean Required
    • settings object

      Configuration options for the index.

      Index settings
    • lifecycle object

      Data stream lifecycle with rollover can be used to display the configuration including the default rollover conditions, if asked.

      Hide lifecycle attributes Show lifecycle attributes object
      • data_retention string

        If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

      • downsampling object

        The downsampling configuration to execute for the managed backing index after rollover.

      • enabled boolean

        If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

        Default value is true.

      • rollover object

        The conditions which will trigger the rollover of a backing index as configured by the cluster setting cluster.lifecycle.default.rollover. This property is an implementation detail and it will only be retrieved when the query param include_defaults is set to true. The contents of this field are subject to change.

  • version number

    Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch.

  • priority number

    Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch.

  • _meta object

    Optional user metadata about the index template. May have any contents. This map is not automatically generated by Elasticsearch.

    Hide _meta attribute Show _meta attribute object
    • * object Additional properties
  • allow_auto_create boolean
  • data_stream object

    If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with a data_stream object.

    Hide data_stream attributes Show data_stream attributes object
    • hidden boolean

      If true, the data stream is hidden.

      Default value is false.

    • allow_custom_routing boolean

      If true, the data stream supports custom routing.

      Default value is false.

  • deprecated boolean Generally available; Added in 8.12.0

    Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.

  • ignore_missing_component_templates string | array[string]

    A list of component template names that are allowed to be absent.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • overlapping array[object]
      Hide overlapping attributes Show overlapping attributes object
      • name string Required
      • index_patterns array[string] Required
    • template object Required
      Hide template attributes Show template attributes object
      • aliases object Required
        Hide aliases attribute Show aliases attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • filter object

            Query used to limit documents the alias can access.

          • index_routing string

            Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

          • is_hidden boolean

            If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

            Default value is false.

          • is_write_index boolean

            If true, the index is the write index for the alias.

            Default value is false.

          • routing string

            Value used to route indexing and search operations to a specific shard.

          • search_routing string

            Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

      • mappings object Required
        Hide mappings attributes Show mappings attributes object
        • all_field object
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
        • index_field object
        • _meta object
        • numeric_detection boolean
        • properties object
        • _routing object
        • _size object
        • _source object
        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
      • settings object Required
        Index settings
POST /_index_template/_simulate_index/{name}
POST /_index_template/_simulate_index/my-index-000001
resp = client.indices.simulate_index_template(
    name="my-index-000001",
    index_template=None,
)
const response = await client.indices.simulateIndexTemplate({
  name: "my-index-000001",
  index_template: null,
});
response = client.indices.simulate_index_template(
  name: "my-index-000001"
)
$resp = $client->indices()->simulateIndexTemplate([
    "name" => "my-index-000001",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_index_template/_simulate_index/my-index-000001"
client.indices().simulateIndexTemplate(s -> s
    .name("my-index-000001")
);
Response examples (200)
A successful response from `POST /_index_template/_simulate_index/my-index-000001`.
{
  "template" : {
    "settings" : {
      "index" : {
        "number_of_shards" : "2",
        "number_of_replicas" : "0",
        "routing" : {
          "allocation" : {
            "include" : {
              "_tier_preference" : "data_content"
            }
          }
        }
      }
    },
    "mappings" : {
      "properties" : {
        "@timestamp" : {
          "type" : "date"
        }
      }
    },
    "aliases" : { }
  },
  "overlapping" : [
    {
      "name" : "template_1",
      "index_patterns" : [
        "my-index-*"
      ]
    }
  ]
}
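
To also see the relevant default configurations in the simulation result, pass the include_defaults query parameter. A minimal Python sketch, assuming your client version exposes this parameter (it was added in 8.11.0):

resp = client.indices.simulate_index_template(
    name="my-index-000001",
    include_defaults=True,  # include relevant default configurations in the response
)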




Split an index Generally available; Added in 6.1.0

POST /{index}/_split/{target}

All methods and paths for this operation:

PUT /{index}/_split/{target}

POST /{index}/_split/{target}

Split an index into a new index with more primary shards.

Before you can split an index:

  • The index must be read-only.

  • The cluster health status must be green.

You can make an index read-only with the following request using the add index block API:

PUT /my_source_index/_block/write
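
The equivalent call with the Python client, as a minimal sketch using the add index block API:

# add a write block so my_source_index becomes read-only before it is split
resp = client.indices.add_block(index="my_source_index", block="write")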

The current write index on a data stream cannot be split. In order to split the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be split.

The number of times the index can be split (and the number of shards that each original shard can be split into) is determined by the index.number_of_routing_shards setting. The number of routing shards specifies the hashing space that is used internally to distribute documents across shards with consistent hashing. For instance, a 5 shard index with number_of_routing_shards set to 30 (5 x 2 x 3) could be split by a factor of 2 or 3.
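
For example, a minimal Python sketch that creates a 5-shard source index with a routing shard space of 30, makes it read-only, and then splits it by a factor of 2 (index names are placeholders):

# a 5-shard index with 30 routing shards can later be split into 10, 15, or 30 shards
client.indices.create(
    index="my_source_index",
    settings={
        "index.number_of_shards": 5,
        "index.number_of_routing_shards": 30,
    },
)
# block writes so the index is read-only, as required before splitting
client.indices.add_block(index="my_source_index", block="write")
# split by a factor of 2: 5 primary shards become 10
client.indices.split(
    index="my_source_index",
    target="my_target_index",
    settings={"index.number_of_shards": 10},
)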

A split operation:

  • Creates a new target index with the same definition as the source index, but with a larger number of primary shards.
  • Hard-links segments from the source index into the target index. If the file system doesn't support hard-linking, all segments are copied into the new index, which is a much more time consuming process.
  • Hashes all documents again, after low level files are created, to delete documents that belong to a different shard.
  • Recovers the target index as though it were a closed index which had just been re-opened.

IMPORTANT: Indices can only be split if they satisfy the following requirements:

  • The target index must not exist.
  • The source index must have fewer primary shards than the target index.
  • The number of primary shards in the target index must be a multiple of the number of primary shards in the source index.
  • The node handling the split process must have sufficient free disk space to accommodate a second copy of the existing index.

Required authorization

  • Index privileges: manage

Path parameters

  • index string Required

    Name of the source index to split.

  • target string Required

    Name of the target index to create.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

application/json

Body

  • aliases object

    Aliases for the resulting index.

    Hide aliases attribute Show aliases attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • filter object

        Query used to limit documents the alias can access.

        External documentation
      • index_routing string

        Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

      • is_hidden boolean

        If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

        Default value is false.

      • is_write_index boolean

        If true, the index is the write index for the alias.

        Default value is false.

      • routing string

        Value used to route indexing and search operations to a specific shard.

      • search_routing string

        Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

  • settings object

    Configuration options for the target index.

    Hide settings attribute Show settings attribute object
    • * object Additional properties

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • shards_acknowledged boolean Required
    • index string Required
POST /{index}/_split/{target}
POST /my-index-000001/_split/split-my-index-000001
{
  "settings": {
    "index.number_of_shards": 2
  }
}
resp = client.indices.split(
    index="my-index-000001",
    target="split-my-index-000001",
    settings={
        "index.number_of_shards": 2
    },
)
const response = await client.indices.split({
  index: "my-index-000001",
  target: "split-my-index-000001",
  settings: {
    "index.number_of_shards": 2,
  },
});
response = client.indices.split(
  index: "my-index-000001",
  target: "split-my-index-000001",
  body: {
    "settings": {
      "index.number_of_shards": 2
    }
  }
)
$resp = $client->indices()->split([
    "index" => "my-index-000001",
    "target" => "split-my-index-000001",
    "body" => [
        "settings" => [
            "index.number_of_shards" => 2,
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"settings":{"index.number_of_shards":2}}' "$ELASTICSEARCH_URL/my-index-000001/_split/split-my-index-000001"
client.indices().split(s -> s
    .index("my-index-000001")
    .settings("index.number_of_shards", JsonData.fromJson("2"))
    .target("split-my-index-000001")
);
Request example
Split an existing index into a new index with more primary shards.
{
  "settings": {
    "index.number_of_shards": 2
  }
}




Unfreeze an index Deprecated Generally available; Added in 6.6.0

POST /{index}/_unfreeze

When a frozen index is unfrozen, the index goes through the normal recovery process and becomes writeable again.

Required authorization

  • Index privileges: manage

Path parameters

  • index string Required

    Identifier for the index.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • wait_for_active_shards string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • shards_acknowledged boolean Required
POST /{index}/_unfreeze
curl \
 --request POST 'http://api.example.com/{index}/_unfreeze' \
 --header "Authorization: $API_KEY"

Create or update an alias Generally available; Added in 1.3.0

POST /_aliases

Adds a data stream or index to an alias.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
application/json

Body Required

  • actions array[object]

    Actions to perform.

    Hide actions attributes Show actions attributes object
    • add object

      Adds a data stream or index to an alias. If the alias doesn’t exist, the add action creates it.

      Hide add attributes Show add attributes object
      • alias string

        Alias for the action. Index alias names support date math.

      • aliases string | array[string]

        Aliases for the action. Index alias names support date math.

      • filter object

        Query used to limit documents the alias can access.

        External documentation
      • index string

        Data stream or index for the action. Supports wildcards (*).

      • indices string | array[string]

        Data streams or indices for the action. Supports wildcards (*).

      • index_routing string

        Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations. Data stream aliases don’t support this parameter.

      • is_hidden boolean

        If true, the alias is hidden.

        Default value is false.

      • is_write_index boolean

        If true, sets the write index or data stream for the alias.

      • routing string

        Value used to route indexing and search operations to a specific shard. Data stream aliases don’t support this parameter.

      • search_routing string

        Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations. Data stream aliases don’t support this parameter.

      • must_exist boolean

        If true, the alias must exist to perform the action.

        Default value is false.

    • remove object

      Removes a data stream or index from an alias.

      Hide remove attributes Show remove attributes object
      • alias string

        Alias for the action. Index alias names support date math.

      • aliases string | array[string]

        Aliases for the action. Index alias names support date math.

      • index string

        Data stream or index for the action. Supports wildcards (*).

      • indices string | array[string]

        Data streams or indices for the action. Supports wildcards (*).

      • must_exist boolean

        If true, the alias must exist to perform the action.

        Default value is false.

    • remove_index object

      Deletes an index. You cannot use this action on aliases or data streams.

      Hide remove_index attributes Show remove_index attributes object
      • index string

        Data stream or index for the action. Supports wildcards (*).

      • indices string | array[string]

        Data streams or indices for the action. Supports wildcards (*).

      • must_exist boolean

        If true, the alias must exist to perform the action.

        Default value is false.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_aliases
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "logs-nginx.access-prod",
        "alias": "logs"
      }
    }
  ]
}
resp = client.indices.update_aliases(
    actions=[
        {
            "add": {
                "index": "logs-nginx.access-prod",
                "alias": "logs"
            }
        }
    ],
)
const response = await client.indices.updateAliases({
  actions: [
    {
      add: {
        index: "logs-nginx.access-prod",
        alias: "logs",
      },
    },
  ],
});
response = client.indices.update_aliases(
  body: {
    "actions": [
      {
        "add": {
          "index": "logs-nginx.access-prod",
          "alias": "logs"
        }
      }
    ]
  }
)
$resp = $client->indices()->updateAliases([
    "body" => [
        "actions" => array(
            [
                "add" => [
                    "index" => "logs-nginx.access-prod",
                    "alias" => "logs",
                ],
            ],
        ),
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"actions":[{"add":{"index":"logs-nginx.access-prod","alias":"logs"}}]}' "$ELASTICSEARCH_URL/_aliases"
client.indices().updateAliases(u -> u
    .actions(a -> a
        .add(ad -> ad
            .alias("logs")
            .index("logs-nginx.access-prod")
        )
    )
);
Request example
An example body for a `POST _aliases` request.
{
  "actions": [
    {
      "add": {
        "index": "logs-nginx.access-prod",
        "alias": "logs"
      }
    }
  ]
}
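
All actions in a single request are applied atomically, so a common pattern is to move an alias from an old index to a new one in one call. A minimal Python sketch with placeholder index names:

# atomically point the `logs` alias at the new index
resp = client.indices.update_aliases(
    actions=[
        {"remove": {"index": "logs-nginx.access-prod-old", "alias": "logs"}},
        {"add": {"index": "logs-nginx.access-prod-new", "alias": "logs"}},
    ],
)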

Validate a query Generally available; Added in 1.3.0

POST /{index}/_validate/query

All methods and paths for this operation:

GET /_validate/query

POST /_validate/query
GET /{index}/_validate/query
POST /{index}/_validate/query

Validates a query without running it.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases to search. Supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • all_shards boolean

    If true, the validation is executed on all shards instead of one random shard per index.

  • analyzer string

    Analyzer to use for the query string. This parameter can only be used when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed.

  • default_operator string

    The default operator for query string query: AND or OR.

    Values are and, AND, or, or OR.

  • df string

    Field to use as default where no field prefix is given in the query string. This parameter can only be used when the q query string parameter is specified.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • explain boolean

    If true, the response returns detailed information if an error has occurred.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored.

  • rewrite boolean

    If true, returns a more detailed explanation showing the actual Lucene query that will be executed.

  • q string

    Query in the Lucene query string syntax.

application/json

Body

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • explanations array[object]
      Hide explanations attributes Show explanations attributes object
      • error string
      • explanation string
      • index string Required
      • valid boolean Required
    • _shards object
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • valid boolean Required
    • error string
POST /{index}/_validate/query
GET my-index-000001/_validate/query?q=user.id:kimchy
resp = client.indices.validate_query(
    index="my-index-000001",
    q="user.id:kimchy",
)
const response = await client.indices.validateQuery({
  index: "my-index-000001",
  q: "user.id:kimchy",
});
response = client.indices.validate_query(
  index: "my-index-000001",
  q: "user.id:kimchy"
)
$resp = $client->indices()->validateQuery([
    "index" => "my-index-000001",
    "q" => "user.id:kimchy",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_validate/query?q=user.id:kimchy"
client.indices().validateQuery(v -> v
    .index("my-index-000001")
    .q("user.id:kimchy")
);
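
Queries can also be validated from the request body, and the rewrite parameter returns the Lucene query that would actually run. A minimal Python sketch, assuming a query_string query over the same field:

# validate a body query on every shard and return the rewritten Lucene query
resp = client.indices.validate_query(
    index="my-index-000001",
    query={"query_string": {"query": "user.id:kimchy"}},
    rewrite=True,
    all_shards=True,
)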

Get lifecycle policies Generally available; Added in 6.6.0

GET /_ilm/policy/{policy}

All methods and paths for this operation:

GET /_ilm/policy

GET /_ilm/policy/{policy}

Required authorization

  • Cluster privileges: manage_ilm,read_ilm

Path parameters

  • policy string Required

    Identifier for the policy.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • modified_date string | number

        One of:
      • policy object Required
        Hide policy attributes Show policy attributes object
        • phases object Required
        • _meta object

          Arbitrary metadata that is not automatically generated or used by Elasticsearch.

          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
      • version number Required
GET /_ilm/policy/{policy}
GET _ilm/policy/my_policy
resp = client.ilm.get_lifecycle(
    name="my_policy",
)
const response = await client.ilm.getLifecycle({
  name: "my_policy",
});
response = client.ilm.get_lifecycle(
  policy: "my_policy"
)
$resp = $client->ilm()->getLifecycle([
    "policy" => "my_policy",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ilm/policy/my_policy"
client.ilm().getLifecycle(g -> g
    .name("my_policy")
);
Response examples (200)
A successful response when retrieving a lifecycle policy.
{
  "my_policy": {
    "version": 1,
    "modified_date": 82392349,
    "policy": {
      "phases": {
        "warm": {
          "min_age": "10d",
          "actions": {
            "forcemerge": {
              "max_num_segments": 1
            }
          }
        },
        "delete": {
          "min_age": "30d",
          "actions": {
            "delete": {
              "delete_searchable_snapshot": true
            }
          }
        }
      }
    },
    "in_use_by" : {
      "indices" : [],
      "data_streams" : [],
      "composable_templates" : []
    }
  }
}

Create or update a lifecycle policy Generally available; Added in 6.6.0

PUT /_ilm/policy/{policy}

If the specified policy exists, it is replaced and the policy version is incremented.

NOTE: Only the latest version of the policy is stored; you cannot revert to previous versions.

Required authorization

  • Index privileges: manage
  • Cluster privileges: manage_ilm
External documentation

Path parameters

  • policy string Required

    Identifier for the policy.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
application/json

Body

  • policy object
    Hide policy attributes Show policy attributes object
    • phases object Required
      Hide phases attributes Show phases attributes object
      • cold object
      • delete object
      • frozen object
      • hot object
      • warm object
    • _meta object

      Arbitrary metadata that is not automatically generated or used by Elasticsearch.

      Hide _meta attribute Show _meta attribute object
      • * object Additional properties

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_ilm/policy/{policy}
PUT _ilm/policy/my_policy
{
  "policy": {
    "_meta": {
      "description": "used for nginx log",
      "project": {
        "name": "myProject",
        "department": "myDepartment"
      }
    },
    "phases": {
      "warm": {
        "min_age": "10d",
        "actions": {
          "forcemerge": {
            "max_num_segments": 1
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
resp = client.ilm.put_lifecycle(
    name="my_policy",
    policy={
        "_meta": {
            "description": "used for nginx log",
            "project": {
                "name": "myProject",
                "department": "myDepartment"
            }
        },
        "phases": {
            "warm": {
                "min_age": "10d",
                "actions": {
                    "forcemerge": {
                        "max_num_segments": 1
                    }
                }
            },
            "delete": {
                "min_age": "30d",
                "actions": {
                    "delete": {}
                }
            }
        }
    },
)
const response = await client.ilm.putLifecycle({
  name: "my_policy",
  policy: {
    _meta: {
      description: "used for nginx log",
      project: {
        name: "myProject",
        department: "myDepartment",
      },
    },
    phases: {
      warm: {
        min_age: "10d",
        actions: {
          forcemerge: {
            max_num_segments: 1,
          },
        },
      },
      delete: {
        min_age: "30d",
        actions: {
          delete: {},
        },
      },
    },
  },
});
response = client.ilm.put_lifecycle(
  policy: "my_policy",
  body: {
    "policy": {
      "_meta": {
        "description": "used for nginx log",
        "project": {
          "name": "myProject",
          "department": "myDepartment"
        }
      },
      "phases": {
        "warm": {
          "min_age": "10d",
          "actions": {
            "forcemerge": {
              "max_num_segments": 1
            }
          }
        },
        "delete": {
          "min_age": "30d",
          "actions": {
            "delete": {}
          }
        }
      }
    }
  }
)
$resp = $client->ilm()->putLifecycle([
    "policy" => "my_policy",
    "body" => [
        "policy" => [
            "_meta" => [
                "description" => "used for nginx log",
                "project" => [
                    "name" => "myProject",
                    "department" => "myDepartment",
                ],
            ],
            "phases" => [
                "warm" => [
                    "min_age" => "10d",
                    "actions" => [
                        "forcemerge" => [
                            "max_num_segments" => 1,
                        ],
                    ],
                ],
                "delete" => [
                    "min_age" => "30d",
                    "actions" => [
                        "delete" => new ArrayObject([]),
                    ],
                ],
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"policy":{"_meta":{"description":"used for nginx log","project":{"name":"myProject","department":"myDepartment"}},"phases":{"warm":{"min_age":"10d","actions":{"forcemerge":{"max_num_segments":1}}},"delete":{"min_age":"30d","actions":{"delete":{}}}}}}' "$ELASTICSEARCH_URL/_ilm/policy/my_policy"
client.ilm().putLifecycle(p -> p
    .name("my_policy")
    .policy(po -> po
        .phases(ph -> ph
            .delete(d -> d
                .actions(a -> a
                    .delete(de -> de)
                )
                .minAge(m -> m
                    .time("30d")
                )
            )
            .warm(w -> w
                .actions(a -> a
                    .forcemerge(f -> f
                        .maxNumSegments(1)
                    )
                )
                .minAge(m -> m
                    .time("10d")
                )
            )
        )
        .meta(Map.of("description", JsonData.fromJson("\"used for nginx log\""),"project", JsonData.fromJson("{\"name\":\"myProject\",\"department\":\"myDepartment\"}")))
    )
);
Request example
Run `PUT _ilm/policy/my_policy` to create a new policy with arbitrary metadata.
{
  "policy": {
    "_meta": {
      "description": "used for nginx log",
      "project": {
        "name": "myProject",
        "department": "myDepartment"
      }
    },
    "phases": {
      "warm": {
        "min_age": "10d",
        "actions": {
          "forcemerge": {
            "max_num_segments": 1
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
Response examples (200)
A successful response when creating a new lifecycle policy.
{
  "acknowledged": true
}




Explain the lifecycle state Generally available; Added in 6.6.0

GET /{index}/_ilm/explain

Get the current lifecycle status for one or more indices. For data streams, the API retrieves the current lifecycle status for the stream's backing indices.

The response indicates when the index entered each lifecycle state, provides the definition of the running phase, and information about any failures.

Required authorization

  • Index privileges: view_index_metadata,manage_ilm

Path parameters

  • index string Required

    Comma-separated list of data streams, indices, and aliases to target. Supports wildcards (*). To target all data streams and indices, use * or _all.

Query parameters

  • only_errors boolean

    Filters the returned indices to only indices that are managed by ILM and are in an error state, either due to encountering an error while executing the policy or due to attempting to use a policy that does not exist.

  • only_managed boolean

    Filters the returned indices to only indices that are managed by ILM.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • indices object Required
GET /{index}/_ilm/explain
GET .ds-timeseries-*/_ilm/explain
resp = client.ilm.explain_lifecycle(
    index=".ds-timeseries-*",
)
const response = await client.ilm.explainLifecycle({
  index: ".ds-timeseries-*",
});
response = client.ilm.explain_lifecycle(
  index: ".ds-timeseries-*"
)
$resp = $client->ilm()->explainLifecycle([
    "index" => ".ds-timeseries-*",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/.ds-timeseries-*/_ilm/explain"
client.ilm().explainLifecycle(e -> e
    .index(".ds-timeseries-*")
);
Response examples (200)
A successful response when retrieving the current ILM status for an index.
{
  "indices": {
    "my-index-000001": {
      "index": "my-index-000001",
      "index_creation_date_millis": 1538475653281,
      "index_creation_date": "2018-10-15T13:45:21.981Z",
      "time_since_index_creation": "15s",
      "managed": true,
      "policy": "my_policy",
      "lifecycle_date_millis": 1538475653281,
      "lifecycle_date": "2018-10-15T13:45:21.981Z",
      "age": "15s",
      "phase": "new",
      "phase_time_millis": 1538475653317,
      "phase_time": "2018-10-15T13:45:22.577Z",
      "action": "complete"
      "action_time_millis": 1538475653317,
      "action_time": "2018-10-15T13:45:22.577Z",
      "step": "complete",
      "step_time_millis": 1538475653317,
      "step_time": "2018-10-15T13:45:22.577Z"
    }
  }
}
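
To inspect only the ILM-managed indices that are currently in an error state, combine the only_managed and only_errors filters. A minimal Python sketch:

# list only managed indices whose ILM execution is in an error state
resp = client.ilm.explain_lifecycle(index="*", only_managed=True, only_errors=True)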

Get the ILM status Generally available; Added in 6.6.0

GET /_ilm/status

Get the current index lifecycle management status.

Required authorization

  • Cluster privileges: read_ilm

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • operation_mode string Required

      Values are RUNNING, STOPPING, or STOPPED.

GET /_ilm/status
GET _ilm/status
resp = client.ilm.get_status()
const response = await client.ilm.getStatus();
response = client.ilm.get_status
$resp = $client->ilm()->getStatus();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ilm/status"
client.ilm().getStatus();
Response examples (200)
A successful response when retrieving the current ILM status.
{
  "operation_mode": "RUNNING"
}

Migrate to data tiers routing Generally available; Added in 7.14.0

POST /_ilm/migrate_to_data_tiers

Switch the indices, ILM policies, and legacy, composable, and component templates from using custom node attributes and attribute-based allocation filters to using data tiers. Optionally, delete one legacy index template. Using node roles enables ILM to automatically move the indices between data tiers.

Migrating away from custom node attribute routing can be performed manually. This API provides an automated way of performing three out of the four manual steps listed in the migration guide:

  1. Stop setting the custom hot attribute on new indices.
  2. Remove custom allocation settings from existing ILM policies.
  3. Replace custom allocation settings from existing indices with the corresponding tier preference.

ILM must be stopped before performing the migration. Use the stop ILM and get ILM status APIs to wait until the reported operation mode is STOPPED.
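
For example, a minimal Python sketch that stops ILM, waits for the STOPPED operation mode, and then simulates the migration as a dry run; it assumes the client exposes the dry_run parameter and uses a placeholder node attribute name:

import time

# stop ILM and wait until the reported operation mode is STOPPED
client.ilm.stop()
while client.ilm.get_status()["operation_mode"] != "STOPPED":
    time.sleep(1)

# simulate the migration first to see which indices, policies, and templates would change
resp = client.ilm.migrate_to_data_tiers(
    dry_run=True,
    node_attribute="custom_attribute_name",
)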

External documentation

Query parameters

  • dry_run boolean

    If true, simulates the migration from node attributes based allocation filters to data tiers, but does not perform the migration. This provides a way to retrieve the indices and ILM policies that need to be migrated.

application/json

Body

  • legacy_template_to_delete string
  • node_attribute string

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • dry_run boolean Required
    • removed_legacy_template string Required

      The name of the legacy index template that was deleted. This information is missing if no legacy index templates were deleted.

    • migrated_ilm_policies array[string] Required

      The ILM policies that were updated.

    • migrated_indices string | array[string] Required

      The indices that were migrated to tier preference routing.

    • migrated_legacy_templates array[string] Required

      The legacy index templates that were updated to not contain custom routing settings for the provided data attribute.

    • migrated_composable_templates array[string] Required

      The composable index templates that were updated to not contain custom routing settings for the provided data attribute.

    • migrated_component_templates array[string] Required

      The component templates that were updated to not contain custom routing settings for the provided data attribute.

POST /_ilm/migrate_to_data_tiers
POST /_ilm/migrate_to_data_tiers
{
  "legacy_template_to_delete": "global-template",
  "node_attribute": "custom_attribute_name"
}
resp = client.ilm.migrate_to_data_tiers(
    legacy_template_to_delete="global-template",
    node_attribute="custom_attribute_name",
)
const response = await client.ilm.migrateToDataTiers({
  legacy_template_to_delete: "global-template",
  node_attribute: "custom_attribute_name",
});
response = client.ilm.migrate_to_data_tiers(
  body: {
    "legacy_template_to_delete": "global-template",
    "node_attribute": "custom_attribute_name"
  }
)
$resp = $client->ilm()->migrateToDataTiers([
    "body" => [
        "legacy_template_to_delete" => "global-template",
        "node_attribute" => "custom_attribute_name",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"legacy_template_to_delete":"global-template","node_attribute":"custom_attribute_name"}' "$ELASTICSEARCH_URL/_ilm/migrate_to_data_tiers"
client.ilm().migrateToDataTiers(m -> m
    .legacyTemplateToDelete("global-template")
    .nodeAttribute("custom_attribute_name")
);
Request example
Run `POST /_ilm/migrate_to_data_tiers` to migrate the indices, ILM policies, legacy templates, composable, and component templates away from defining custom allocation filtering using the `custom_attribute_name` node attribute. It also deletes the legacy template with name `global-template` if it exists in the system.
{
  "legacy_template_to_delete": "global-template",
  "node_attribute": "custom_attribute_name"
}
Response examples (200)
A successful response when migrating indices, ILMs, and templates from custom node attributes to data tiers.
{
  "dry_run": false,
  "removed_legacy_template":"global-template",
  "migrated_ilm_policies":["policy_with_allocate_action"],
  "migrated_indices":["warm-index-to-migrate-000001"],
  "migrated_legacy_templates":["a-legacy-template"],
  "migrated_composable_templates":["a-composable-template"],
  "migrated_component_templates":["a-component-template"]
}




Remove policies from an index Generally available; Added in 6.6.0

POST /{index}/_ilm/remove

Remove the assigned lifecycle policies from an index or a data stream's backing indices. It also stops managing the indices.

Required authorization

  • Index privileges: manage_ilm

Path parameters

  • index string Required

    The name of the index to remove policy on

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • failed_indexes array[string] Required
    • has_failures boolean Required
POST /{index}/_ilm/remove
POST logs-my_app-default/_ilm/remove
resp = client.ilm.remove_policy(
    index="logs-my_app-default",
)
const response = await client.ilm.removePolicy({
  index: "logs-my_app-default",
});
response = client.ilm.remove_policy(
  index: "logs-my_app-default"
)
$resp = $client->ilm()->removePolicy([
    "index" => "logs-my_app-default",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/logs-my_app-default/_ilm/remove"
client.ilm().removePolicy(r -> r
    .index("logs-my_app-default")
);
Response examples (200)
A successful response when removing a lifecycle policy from an index.
{
  "has_failures" : false,
  "failed_indexes" : []
}








Stop the ILM plugin Generally available; Added in 6.6.0

POST /_ilm/stop

Halt all lifecycle management operations and stop the index lifecycle management plugin. This is useful when you are performing maintenance on the cluster and need to prevent ILM from performing any actions on your indices.

The API returns as soon as the stop request has been acknowledged, but the plugin might continue to run until in-progress operations complete and the plugin can be safely stopped. Use the get ILM status API to check whether ILM is running.

Required authorization

  • Cluster privileges: manage_ilm

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_ilm/stop
POST _ilm/stop
resp = client.ilm.stop()
const response = await client.ilm.stop();
response = client.ilm.stop
$resp = $client->ilm()->stop();
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ilm/stop"
client.ilm().stop(s -> s);
Response examples (200)
A successful response when stopping the ILM plugin.
{
  "acknowledged": true
}

Inference

Inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

Perform chat completion inference Generally available; Added in 8.18.0

POST /_inference/chat_completion/{inference_id}/_stream

The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the chat_completion task type for openai and elastic inference services.

NOTE: The chat_completion task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the openai, hugging_face or the elastic service, use the Chat completion inference API.

Path parameters

  • inference_id string Required

    The inference Id

Query parameters

application/json

Body Required

  • messages array[object] Required

    A list of objects representing the conversation. Requests should generally only add new messages from the user (role user). The other message roles (assistant, system, or tool) should generally only be copied from the response to a previous completion request, such that the messages array is built up throughout a conversation.

    An object representing part of the conversation.

    Hide messages attributes Show messages attributes object
    • content string | array[object]

      The content of the message.

      String example:

      {
         "content": "Some string"
      }
      

      Object example:

      {
        "content": [
            {
             "text": "Some text",
             "type": "text"
            }
         ]
      }
      
    • role string Required

      The role of the message author. Valid values are user, assistant, system, and tool.

    • tool_call_id string

      Only for tool role messages. The tool call that this message is responding to.

    • tool_calls array[object]

      Only for assistant role messages. The tool calls generated by the model. If it's specified, the content field is optional. Example:

      {
        "tool_calls": [
            {
                "id": "call_KcAjWtAww20AihPHphUh46Gd",
                "type": "function",
                "function": {
                    "name": "get_current_weather",
                    "arguments": "{\"location\":\"Boston, MA\"}"
                }
            }
        ]
      }
      

      A tool call generated by the model.

      Hide tool_calls attributes Show tool_calls attributes object
      • id string Required

        The identifier of the tool call.

      • function object Required

        The function that the model called.

        Hide function attributes Show function attributes object
        • arguments string Required

          The arguments to call the function with in JSON format.

        • name string Required

          The name of the function to call.

      • type string Required

        The type of the tool call.

  • model string

    The ID of the model to use.

  • max_completion_tokens number

    The upper bound limit for the number of tokens that can be generated for a completion request.

  • stop array[string]

    A sequence of strings to control when the model should stop generating additional tokens.

  • temperature number

    The sampling temperature to use.

  • tool_choice string | object

    Controls which tool is called by the model. String representation: One of auto, none, or required. auto allows the model to choose between calling tools and generating a message. none causes the model to not call any tools. required forces the model to call one or more tools. Example (object representation):

    {
      "tool_choice": {
          "type": "function",
          "function": {
              "name": "get_current_weather"
          }
      }
    }
    
  • tools array[object]

    A list of tools that the model can call. Example:

    {
      "tools": [
          {
              "type": "function",
              "function": {
                  "name": "get_price_of_item",
                  "description": "Get the current price of an item",
                  "parameters": {
                      "type": "object",
                      "properties": {
                          "item": {
                              "id": "12345"
                          },
                          "unit": {
                              "type": "currency"
                          }
                      }
                  }
              }
          }
      ]
    }
    

    A list of tools that the model can call.

    Hide tools attributes Show tools attributes object
    • type string Required

      The type of tool.

    • function object Required

      The function definition.

      Hide function attributes Show function attributes object
      • description string

        A description of what the function does. This is used by the model to choose when and how to call the function.

      • name string Required

        The name of the function.

      • parameters object

        The parameters the function accepts. This should be formatted as a JSON object.

      • strict boolean

        Whether to enable schema adherence when generating the function call.

  • top_p number

    Nucleus sampling, an alternative to sampling with temperature.

Responses

  • 200 application/json
POST /_inference/chat_completion/{inference_id}/_stream
POST _inference/chat_completion/openai-completion/_stream
{
  "model": "gpt-4o",
  "messages": [
      {
          "role": "user",
          "content": "What is Elastic?"
      }
  ]
}
resp = client.inference.chat_completion_unified(
    inference_id="openai-completion",
    chat_completion_request={
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": "What is Elastic?"
            }
        ]
    },
)
const response = await client.inference.chatCompletionUnified({
  inference_id: "openai-completion",
  chat_completion_request: {
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        content: "What is Elastic?",
      },
    ],
  },
});
response = client.inference.chat_completion_unified(
  inference_id: "openai-completion",
  body: {
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "What is Elastic?"
      }
    ]
  }
)
$resp = $client->inference()->chatCompletionUnified([
    "inference_id" => "openai-completion",
    "body" => [
        "model" => "gpt-4o",
        "messages" => array(
            [
                "role" => "user",
                "content" => "What is Elastic?",
            ],
        ),
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"model":"gpt-4o","messages":[{"role":"user","content":"What is Elastic?"}]}' "$ELASTICSEARCH_URL/_inference/chat_completion/openai-completion/_stream"
client.inference().chatCompletionUnified(c -> c
    .inferenceId("openai-completion")
    .chatCompletionRequest(ch -> ch
        .messages(m -> m
            .content(co -> co
                .string("What is Elastic?")
            )
            .role("user")
        )
        .model("gpt-4o")
    )
);
Request examples
Run `POST _inference/chat_completion/openai-completion/_stream` to perform a chat completion on the example question with streaming.
{
  "model": "gpt-4o",
  "messages": [
      {
          "role": "user",
          "content": "What is Elastic?"
      }
  ]
}
Run `POST _inference/chat_completion/openai-completion/_stream` to perform a chat completion using an Assistant message with `tool_calls`.
{
  "messages": [
      {
          "role": "assistant",
          "content": "Let's find out what the weather is",
          "tool_calls": [ 
              {
                  "id": "call_KcAjWtAww20AihPHphUh46Gd",
                  "type": "function",
                  "function": {
                      "name": "get_current_weather",
                      "arguments": "{\"location\":\"Boston, MA\"}"
                  }
              }
          ]
      },
      { 
          "role": "tool",
          "content": "The weather is cold",
          "tool_call_id": "call_KcAjWtAww20AihPHphUh46Gd"
      }
  ]
}
Run `POST _inference/chat_completion/openai-completion/_stream` to perform a chat completion using a User message with `tools` and `tool_choice`.
{
  "messages": [
      {
          "role": "user",
          "content": [
              {
                  "type": "text",
                  "text": "What's the price of a scarf?"
              }
          ]
      }
  ],
  "tools": [
      {
          "type": "function",
          "function": {
              "name": "get_current_price",
              "description": "Get the current price of a item",
              "parameters": {
                  "type": "object",
                  "properties": {
                      "item": {
                          "id": "123"
                      }
                  }
              }
          }
      }
  ],
  "tool_choice": {
      "type": "function",
      "function": {
          "name": "get_current_price"
      }
  }
}
Response examples (200)
A successful streamed response when performing a chat completion task.
event: message
data: {"chat_completion":{"id":"chatcmpl-Ae0TWsy2VPnSfBbv5UztnSdYUMFP3","choices":[{"delta":{"content":"","role":"assistant"},"index":0}],"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk"}}

event: message
data: {"chat_completion":{"id":"chatcmpl-Ae0TWsy2VPnSfBbv5UztnSdYUMFP3","choices":[{"delta":{"content":Elastic"},"index":0}],"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk"}}

event: message
data: {"chat_completion":{"id":"chatcmpl-Ae0TWsy2VPnSfBbv5UztnSdYUMFP3","choices":[{"delta":{"content":" is"},"index":0}],"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk"}}

(...)

event: message
data: {"chat_completion":{"id":"chatcmpl-Ae0TWsy2VPnSfBbv5UztnSdYUMFP3","choices":[],"model":"gpt-4o-2024-08-06","object":"chat.completion.chunk","usage":{"completion_tokens":28,"prompt_tokens":16,"total_tokens":44}}} 

event: message
data: [DONE]
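
The stream arrives as server-sent events in the shape shown above. A minimal sketch of consuming it with the Python requests library (the URL, headers, and body mirror the curl example; the field paths follow the response example, and the parsing below is an illustration rather than a prescribed client):

import json
import os

import requests

url = f"{os.environ['ELASTICSEARCH_URL']}/_inference/chat_completion/openai-completion/_stream"
headers = {
    "Authorization": f"ApiKey {os.environ['ELASTIC_API_KEY']}",
    "Content-Type": "application/json",
}
body = {"model": "gpt-4o", "messages": [{"role": "user", "content": "What is Elastic?"}]}

with requests.post(url, headers=headers, json=body, stream=True) as resp:
    for line in resp.iter_lines():
        # Server-sent events: only the `data:` lines carry payloads.
        if not line or not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":
            break
        chunk = json.loads(payload)
        # Deltas live under chat_completion.choices[n].delta.content, as in
        # the response example above.
        for choice in chunk.get("chat_completion", {}).get("choices", []):
            print(choice.get("delta", {}).get("content", ""), end="")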








Create an inference endpoint Generally available; Added in 8.11.0

PUT /_inference/{task_type}/{inference_id}

All methods and paths for this operation:

PUT /_inference/{inference_id}

PUT /_inference/{task_type}/{inference_id}

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs with these models, or if you want to use non-NLP models, use the machine learning trained model APIs.

The following integrations are available through the inference API. You can find the available task types next to the integration name:

  • AlibabaCloud AI Search (completion, rerank, sparse_embedding, text_embedding)
  • Amazon Bedrock (completion, text_embedding)
  • Amazon SageMaker (chat_completion, completion, rerank, sparse_embedding, text_embedding)
  • Anthropic (completion)
  • Azure AI Studio (completion, text_embedding)
  • Azure OpenAI (completion, text_embedding)
  • Cohere (completion, rerank, text_embedding)
  • DeepSeek (chat_completion, completion)
  • Elasticsearch (rerank, sparse_embedding, text_embedding - this service is for built-in models and models uploaded through Eland)
  • ELSER (sparse_embedding)
  • Google AI Studio (completion, text_embedding)
  • Google Vertex AI (chat_completion, completion, rerank, text_embedding)
  • Hugging Face (chat_completion, completion, rerank, text_embedding)
  • JinaAI (rerank, text_embedding)
  • Llama (chat_completion, completion, text_embedding)
  • Mistral (chat_completion, completion, text_embedding)
  • OpenAI (chat_completion, completion, text_embedding)
  • VoyageAI (rerank, text_embedding)
  • Watsonx inference integration (text_embedding)

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string Required

    The task type. Refer to the integration list in the API description for the available task types.

    Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

  • inference_id string Required

    The inference Id

Query parameters

application/json

Body Required

  • chunking_settings object

    Chunking configuration object. A short sketch of typical values follows this parameter list.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The service type

  • service_settings object Required

    Settings specific to the service

  • task_settings object

    Task settings specific to the service and task type
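
A sketch of chunking_settings values that stay within the documented ranges (illustrative only, not recommendations):

# Word strategy: 10-300 words per chunk; overlap must not exceed half of
# max_chunk_size.
chunking_settings = {
    "strategy": "word",
    "max_chunk_size": 200,
    "overlap": 100,
}

# Sentence strategy: sentence_overlap can only be 0 or 1.
chunking_settings = {
    "strategy": "sentence",
    "max_chunk_size": 250,
    "sentence_overlap": 1,
}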

Responses

  • 200 application/json
    Hide response attributes Show response attributes object

    Represents an inference endpoint as returned by the GET API

    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

PUT /_inference/{task_type}/{inference_id}
PUT _inference/rerank/my-rerank-model
{
 "service": "cohere",
 "service_settings": {
   "model_id": "rerank-english-v3.0",
   "api_key": "{{COHERE_API_KEY}}"
 }
}
resp = client.inference.put(
    task_type="rerank",
    inference_id="my-rerank-model",
    inference_config={
        "service": "cohere",
        "service_settings": {
            "model_id": "rerank-english-v3.0",
            "api_key": "{{COHERE_API_KEY}}"
        }
    },
)
const response = await client.inference.put({
  task_type: "rerank",
  inference_id: "my-rerank-model",
  inference_config: {
    service: "cohere",
    service_settings: {
      model_id: "rerank-english-v3.0",
      api_key: "{{COHERE_API_KEY}}",
    },
  },
});
response = client.inference.put(
  task_type: "rerank",
  inference_id: "my-rerank-model",
  body: {
    "service": "cohere",
    "service_settings": {
      "model_id": "rerank-english-v3.0",
      "api_key": "{{COHERE_API_KEY}}"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "rerank",
    "inference_id" => "my-rerank-model",
    "body" => [
        "service" => "cohere",
        "service_settings" => [
            "model_id" => "rerank-english-v3.0",
            "api_key" => "{{COHERE_API_KEY}}",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"cohere","service_settings":{"model_id":"rerank-english-v3.0","api_key":"{{COHERE_API_KEY}}"}}' "$ELASTICSEARCH_URL/_inference/rerank/my-rerank-model"
client.inference().put(p -> p
    .inferenceId("my-rerank-model")
    .taskType(TaskType.Rerank)
    .inferenceConfig(i -> i
        .service("cohere")
        .serviceSettings(JsonData.fromJson("{\"model_id\":\"rerank-english-v3.0\",\"api_key\":\"{{COHERE_API_KEY}}\"}"))
    )
);
Request example
An example body for a `PUT _inference/rerank/my-rerank-model` request.
{
 "service": "cohere",
 "service_settings": {
   "model_id": "rerank-english-v3.0",
   "api_key": "{{COHERE_API_KEY}}"
 }
}

Perform inference on the service Generally available; Added in 8.11.0

POST /_inference/{task_type}/{inference_id}

All methods and paths for this operation:

POST /_inference/{inference_id}

POST /_inference/{task_type}/{inference_id}

This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API.

For details about using this API with a service, such as Amazon Bedrock, Anthropic, or HuggingFace, refer to the service-specific documentation.


The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs with these models, or if you want to use non-NLP models, use the machine learning trained model APIs.

Required authorization

  • Cluster privileges: monitor_inference

Path parameters

  • task_type string Required

    The type of inference task that the model performs.

    Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

  • inference_id string Required

    The unique identifier for the inference endpoint.

Query parameters

application/json

Body

  • query string

    The query input, which is required only for the rerank task. It is not required for other tasks.

  • input string | array[string] Required

    The text on which you want to perform the inference task. It can be a single string or an array.


    Inference endpoints for the completion task type currently only support a single string as input.

  • input_type string

    Specifies the input data type for the text embedding model. The input_type parameter only applies to Inference Endpoints with the text_embedding task type. Possible values include:

    • SEARCH
    • INGEST
    • CLASSIFICATION
    • CLUSTERING

    Not all services support all values. Unsupported values will trigger a validation exception. Accepted values depend on the configured inference service; refer to the relevant service-specific documentation for more information.


    The input_type parameter specified on the root level of the request body will take precedence over the input_type parameter specified in task_settings.

  • task_settings object

    Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • text_embedding_bytes array[object]

      The text embedding result object for byte representation

      Hide text_embedding_bytes attribute Show text_embedding_bytes attribute object
      • embedding array[number] Required

        Text Embedding results containing bytes are represented as Dense Vectors of bytes.

    • text_embedding_bits array[object]

      The text embedding result object for bit representation

      Hide text_embedding_bits attribute Show text_embedding_bits attribute object
      • embedding array[number] Required

        Text Embedding results containing bits are represented as Dense Vectors of bits.

    • text_embedding array[object]

      The text embedding result object

      Hide text_embedding attribute Show text_embedding attribute object
      • embedding array[number] Required

        Text Embedding results are represented as Dense Vectors of floats.

    • sparse_embedding array[object]
      Hide sparse_embedding attribute Show sparse_embedding attribute object
      • embedding object Required

        Sparse Embedding tokens are represented as a dictionary of string to double.

        Hide embedding attribute Show embedding attribute object
        • * number Additional properties
    • completion array[object]

      The completion result object

      Hide completion attribute Show completion attribute object
      • result string Required
    • rerank array[object]

      The rerank result object representing a single ranked document:

      • index: the original index of the document in the request
      • relevance_score: the relevance score of the document relative to the query
      • text: optional, the text of the document, if requested

      Hide rerank attributes Show rerank attributes object
      • index number Required
      • relevance_score number Required
      • text string
POST /_inference/{task_type}/{inference_id}
curl \
 --request POST 'http://api.example.com/_inference/{task_type}/{inference_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"query":"string","input":"string","input_type":"string","task_settings":{}}'




Create an AlibabaCloud AI Search inference endpoint Generally available; Added in 8.16.0

PUT /_inference/{task_type}/{alibabacloud_inference_id}

Create an inference endpoint to perform an inference task with the alibabacloud-ai-search service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion, rerank, sparse_embedding, or text_embedding.

  • alibabacloud_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, alibabacloud-ai-search.

    Value is alibabacloud-ai-search.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the alibabacloud-ai-search service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key for the AlibabaCloud AI Search API.

    • host string Required

      The name of the host address used for the inference task. You can find the host address in the API keys section of the documentation.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from AlibabaCloud AI Search. By default, the alibabacloud-ai-search service sets the number of requests allowed per minute to 1000.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

    • service_id string Required

      The name of the model service to use for the inference task. The following service IDs are available for the completion task:

      • ops-qwen-turbo
      • qwen-turbo
      • qwen-plus
      • qwen-max
      • qwen-max-longcontext

      The following service ID is available for the rerank task:

      • ops-bge-reranker-larger

      The following service ID is available for the sparse_embedding task:

      • ops-text-sparse-embedding-001

      The following service IDs are available for the text_embedding task:

      • ops-text-embedding-001
      • ops-text-embedding-zh-001
      • ops-text-embedding-en-001
      • ops-text-embedding-002

    • workspace string Required

      The name of the workspace used for the inference task.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attributes Show task_settings attributes object
    • input_type string

      For a sparse_embedding or text_embedding task, specify the type of input passed to the model. Valid values are:

      • ingest for storing document embeddings in a vector database.
      • search for storing embeddings of search queries run against a vector database to find relevant documents.
    • return_token boolean

      For a sparse_embedding task, it affects whether the token name will be returned in the response. It defaults to false, which means only the token ID will be returned in the response.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding, rerank, completion, or sparse_embedding.

PUT /_inference/{task_type}/{alibabacloud_inference_id}
PUT _inference/completion/alibabacloud_ai_search_completion
{
    "service": "alibabacloud-ai-search",
    "service_settings": {
        "host" : "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
        "api_key": "AlibabaCloud-API-Key",
        "service_id": "ops-qwen-turbo",
        "workspace" : "default"
    }
}
resp = client.inference.put(
    task_type="completion",
    inference_id="alibabacloud_ai_search_completion",
    inference_config={
        "service": "alibabacloud-ai-search",
        "service_settings": {
            "host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
            "api_key": "AlibabaCloud-API-Key",
            "service_id": "ops-qwen-turbo",
            "workspace": "default"
        }
    },
)
const response = await client.inference.put({
  task_type: "completion",
  inference_id: "alibabacloud_ai_search_completion",
  inference_config: {
    service: "alibabacloud-ai-search",
    service_settings: {
      host: "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
      api_key: "AlibabaCloud-API-Key",
      service_id: "ops-qwen-turbo",
      workspace: "default",
    },
  },
});
response = client.inference.put(
  task_type: "completion",
  inference_id: "alibabacloud_ai_search_completion",
  body: {
    "service": "alibabacloud-ai-search",
    "service_settings": {
      "host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
      "api_key": "AlibabaCloud-API-Key",
      "service_id": "ops-qwen-turbo",
      "workspace": "default"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "completion",
    "inference_id" => "alibabacloud_ai_search_completion",
    "body" => [
        "service" => "alibabacloud-ai-search",
        "service_settings" => [
            "host" => "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
            "api_key" => "AlibabaCloud-API-Key",
            "service_id" => "ops-qwen-turbo",
            "workspace" => "default",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"alibabacloud-ai-search","service_settings":{"host":"default-j01.platform-cn-shanghai.opensearch.aliyuncs.com","api_key":"AlibabaCloud-API-Key","service_id":"ops-qwen-turbo","workspace":"default"}}' "$ELASTICSEARCH_URL/_inference/completion/alibabacloud_ai_search_completion"
client.inference().put(p -> p
    .inferenceId("alibabacloud_ai_search_completion")
    .taskType(TaskType.Completion)
    .inferenceConfig(i -> i
        .service("alibabacloud-ai-search")
        .serviceSettings(JsonData.fromJson("{\"host\":\"default-j01.platform-cn-shanghai.opensearch.aliyuncs.com\",\"api_key\":\"AlibabaCloud-API-Key\",\"service_id\":\"ops-qwen-turbo\",\"workspace\":\"default\"}"))
    )
);
Request examples
Run `PUT _inference/completion/alibabacloud_ai_search_completion` to create an inference endpoint that performs a completion task.
{
    "service": "alibabacloud-ai-search",
    "service_settings": {
        "host" : "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
        "api_key": "AlibabaCloud-API-Key",
        "service_id": "ops-qwen-turbo",
        "workspace" : "default"
    }
}
Run `PUT _inference/rerank/alibabacloud_ai_search_rerank` to create an inference endpoint that performs a rerank task.
{
    "service": "alibabacloud-ai-search",
    "service_settings": {
        "api_key": "AlibabaCloud-API-Key",
        "service_id": "ops-bge-reranker-larger",
        "host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
        "workspace": "default"
    }
}
Run `PUT _inference/sparse_embedding/alibabacloud_ai_search_sparse` to create an inference endpoint that performs a sparse embedding task.
{
    "service": "alibabacloud-ai-search",
    "service_settings": {
        "api_key": "AlibabaCloud-API-Key",
        "service_id": "ops-text-sparse-embedding-001",
        "host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
        "workspace": "default"
    }
}
Run `PUT _inference/text_embedding/alibabacloud_ai_search_embeddings` to create an inference endpoint that performs a text embedding task.
{
    "service": "alibabacloud-ai-search",
    "service_settings": {
        "api_key": "AlibabaCloud-API-Key",
        "service_id": "ops-text-embedding-001",
        "host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
        "workspace": "default"
    }
}
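
The request examples above set only service_settings. A sketch of the same sparse_embedding endpoint with the task_settings described earlier (input_type and return_token); the values are illustrative:

resp = client.inference.put(
    task_type="sparse_embedding",
    inference_id="alibabacloud_ai_search_sparse",
    inference_config={
        "service": "alibabacloud-ai-search",
        "service_settings": {
            "api_key": "AlibabaCloud-API-Key",
            "service_id": "ops-text-sparse-embedding-001",
            "host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
            "workspace": "default"
        },
        "task_settings": {
            "input_type": "ingest",  # or "search" for query-time embeddings
            "return_token": True     # return token names instead of only token IDs
        }
    },
)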

Create an Amazon Bedrock inference endpoint Generally available; Added in 8.12.0

PUT /_inference/{task_type}/{amazonbedrock_inference_id}

Create an inference endpoint to perform an inference task with the amazonbedrock service.


You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.
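
For example, rotating AWS credentials means deleting the endpoint and recreating it under the same inference ID with the new key pair. A sketch with the Python client (the delete call is assumed to mirror the other client methods shown in this document; all key values are placeholders):

# Remove the endpoint that holds the old key pair, then recreate it with the
# same ID and the updated keys.
client.inference.delete(
    task_type="text_embedding",
    inference_id="amazon_bedrock_embeddings",
)
client.inference.put(
    task_type="text_embedding",
    inference_id="amazon_bedrock_embeddings",
    inference_config={
        "service": "amazonbedrock",
        "service_settings": {
            "access_key": "new-AWS-access-key",
            "secret_key": "new-AWS-secret-key",
            "region": "us-east-1",
            "provider": "amazontitan",
            "model": "amazon.titan-embed-text-v2:0"
        }
    },
)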

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion or text_embedding.

  • amazonbedrock_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, amazonbedrock.

    Value is amazonbedrock.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the amazonbedrock service.

    Hide service_settings attributes Show service_settings attributes object
    • access_key string Required

      A valid AWS access key that has permissions to use Amazon Bedrock and access to models for inference requests.

    • model string Required

      The base model ID or the ARN of a custom model based on a foundational model. The base model IDs can be found in the Amazon Bedrock documentation. Note that the model ID must be available for the provider chosen and your IAM user must have access to the model.

      External documentation
    • provider string

      The model provider for your deployment. Note that some providers may support only certain task types. Supported providers include:

      • amazontitan - available for text_embedding and completion task types
      • anthropic - available for completion task type only
      • ai21labs - available for completion task type only
      • cohere - available for text_embedding and completion task types
      • meta - available for completion task type only
      • mistral - available for completion task type only
    • region string Required

      The region that your model or ARN is deployed in. The list of available regions per model can be found in the Amazon Bedrock documentation.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Amazon Bedrock. By default, the amazonbedrock service sets the number of requests allowed per minute to 240.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

    • secret_key string Required

      A valid AWS secret key that is paired with the access_key. For information about creating and managing access and secret keys, refer to the AWS documentation.

      External documentation
  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attributes Show task_settings attributes object
    • max_new_tokens number

      For a completion task, it sets the maximum number for the output tokens to be generated.

      Default value is 64.

    • temperature number

      For a completion task, it is a number between 0.0 and 1.0 that controls the apparent creativity of the results. At temperature 0.0 the model is most deterministic, at temperature 1.0 most random. It should not be used if top_p or top_k is specified.

    • top_k number

      For a completion task, it limits samples to the top-K most likely words, balancing coherence and variability. It is only available for anthropic, cohere, and mistral providers. It is an alternative to temperature; it should not be used if temperature is specified.

    • top_p number

      For a completion task, it is a number in the range of 0.0 to 1.0, to eliminate low-probability tokens. Top-p uses nucleus sampling to select top tokens whose sum of likelihoods does not exceed a certain value, ensuring both variety and coherence. It is an alternative to temperature; it should not be used if temperature is specified.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding or completion.

PUT /_inference/{task_type}/{amazonbedrock_inference_id}
PUT _inference/text_embedding/amazon_bedrock_embeddings
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-embed-text-v2:0"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="amazon_bedrock_embeddings",
    inference_config={
        "service": "amazonbedrock",
        "service_settings": {
            "access_key": "AWS-access-key",
            "secret_key": "AWS-secret-key",
            "region": "us-east-1",
            "provider": "amazontitan",
            "model": "amazon.titan-embed-text-v2:0"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "amazon_bedrock_embeddings",
  inference_config: {
    service: "amazonbedrock",
    service_settings: {
      access_key: "AWS-access-key",
      secret_key: "AWS-secret-key",
      region: "us-east-1",
      provider: "amazontitan",
      model: "amazon.titan-embed-text-v2:0",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "amazon_bedrock_embeddings",
  body: {
    "service": "amazonbedrock",
    "service_settings": {
      "access_key": "AWS-access-key",
      "secret_key": "AWS-secret-key",
      "region": "us-east-1",
      "provider": "amazontitan",
      "model": "amazon.titan-embed-text-v2:0"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "amazon_bedrock_embeddings",
    "body" => [
        "service" => "amazonbedrock",
        "service_settings" => [
            "access_key" => "AWS-access-key",
            "secret_key" => "AWS-secret-key",
            "region" => "us-east-1",
            "provider" => "amazontitan",
            "model" => "amazon.titan-embed-text-v2:0",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"amazonbedrock","service_settings":{"access_key":"AWS-access-key","secret_key":"AWS-secret-key","region":"us-east-1","provider":"amazontitan","model":"amazon.titan-embed-text-v2:0"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/amazon_bedrock_embeddings"
client.inference().put(p -> p
    .inferenceId("amazon_bedrock_embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("amazonbedrock")
        .serviceSettings(JsonData.fromJson("{\"access_key\":\"AWS-access-key\",\"secret_key\":\"AWS-secret-key\",\"region\":\"us-east-1\",\"provider\":\"amazontitan\",\"model\":\"amazon.titan-embed-text-v2:0\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/amazon_bedrock_embeddings` to create an inference endpoint that performs a text embedding task.
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-embed-text-v2:0"
    }
}
Run `PUT _inference/completion/amazon_bedrock_completion` to create an inference endpoint to perform a completion task.
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-text-premier-v1:0"
    }
}
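
The completion example above omits task_settings. A sketch of the generation settings described earlier (values are illustrative; per the parameter descriptions, temperature should not be combined with top_p or top_k):

# Illustrative task_settings for an amazonbedrock completion endpoint.
task_settings = {
    "max_new_tokens": 256,  # default is 64
    "temperature": 0.3      # 0.0 (most deterministic) to 1.0 (most random)
}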

Create an Amazon SageMaker inference endpoint Generally available; Added in 8.19.0

PUT /_inference/{task_type}/{amazonsagemaker_inference_id}

Create an inference endpoint to perform an inference task with the amazon_sagemaker service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are text_embedding, completion, chat_completion, sparse_embedding, or rerank.

  • amazonsagemaker_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, amazon_sagemaker.

    Value is amazon_sagemaker.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the amazon_sagemaker service and the service_settings.api value that you specified.

    Hide service_settings attributes Show service_settings attributes object
    • access_key string Required

      A valid AWS access key that has permissions to use Amazon SageMaker and access to models for invoking requests.

    • endpoint_name string Required

      The name of the SageMaker endpoint.

      External documentation
    • api string Required

      The API format to use when calling SageMaker. Elasticsearch will convert the POST _inference request to this data format when invoking the SageMaker endpoint.

      Values are openai or elastic.

    • region string Required

      The region that your endpoint or Amazon Resource Name (ARN) is deployed in. The list of available regions per model can be found in the Amazon SageMaker documentation.

      External documentation
    • secret_key string Required

      A valid AWS secret key that is paired with the access_key. For information about creating and managing access and secret keys, refer to the AWS documentation.

      External documentation
    • target_model string

      The model ID when calling a multi-model endpoint.

      External documentation
    • target_container_hostname string

      The container to directly invoke when calling a multi-container endpoint.

      External documentation
    • inference_component_name string

      The inference component to directly invoke when calling a multi-component endpoint.

      External documentation
    • batch_size number

      The maximum number of inputs in each batch. This value is used by inference ingestion pipelines when processing semantic values. It correlates to the number of times the SageMaker endpoint is invoked (one per batch of input).

      Default value is 256.

    • dimensions number

      The number of dimensions returned by the text embedding models. If this value is not provided, it is guessed by invoking the endpoint for the text_embedding task.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type and the service_settings.api value that you specified.

    Hide task_settings attributes Show task_settings attributes object
    • custom_attributes string

      The AWS custom attributes passed verbatim through to the model running in the SageMaker Endpoint. Values will be returned in the X-elastic-sagemaker-custom-attributes header.

      External documentation
    • enable_explanations string

      The optional JMESPath expression used to override the EnableExplanations provided during endpoint creation.

      External documentation
    • inference_id string

      The capture data ID when enabled in the endpoint.

      External documentation
    • session_id string

      The stateful session identifier for a new or existing session. New sessions will be returned in the X-elastic-sagemaker-new-session-id header. Closed sessions will be returned in the X-elastic-sagemaker-closed-session-id header.

      External documentation
    • target_variant string

      Specifies the variant when running with multi-variant Endpoints.

      External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding, completion, chat_completion, sparse_embedding, or rerank.

PUT /_inference/{task_type}/{amazonsagemaker_inference_id}
PUT _inference/text_embedding/amazon_sagemaker_embeddings
{
    "service": "amazon_sagemaker",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "api": "elastic",
        "endpoint_name": "my-endpoint",
        "dimensions": 384,
        "element_type": "float"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="amazon_sagemaker_embeddings",
    inference_config={
        "service": "amazon_sagemaker",
        "service_settings": {
            "access_key": "AWS-access-key",
            "secret_key": "AWS-secret-key",
            "region": "us-east-1",
            "api": "elastic",
            "endpoint_name": "my-endpoint",
            "dimensions": 384,
            "element_type": "float"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "amazon_sagemaker_embeddings",
  inference_config: {
    service: "amazon_sagemaker",
    service_settings: {
      access_key: "AWS-access-key",
      secret_key: "AWS-secret-key",
      region: "us-east-1",
      api: "elastic",
      endpoint_name: "my-endpoint",
      dimensions: 384,
      element_type: "float",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "amazon_sagemaker_embeddings",
  body: {
    "service": "amazon_sagemaker",
    "service_settings": {
      "access_key": "AWS-access-key",
      "secret_key": "AWS-secret-key",
      "region": "us-east-1",
      "api": "elastic",
      "endpoint_name": "my-endpoint",
      "dimensions": 384,
      "element_type": "float"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "amazon_sagemaker_embeddings",
    "body" => [
        "service" => "amazon_sagemaker",
        "service_settings" => [
            "access_key" => "AWS-access-key",
            "secret_key" => "AWS-secret-key",
            "region" => "us-east-1",
            "api" => "elastic",
            "endpoint_name" => "my-endpoint",
            "dimensions" => 384,
            "element_type" => "float",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"amazon_sagemaker","service_settings":{"access_key":"AWS-access-key","secret_key":"AWS-secret-key","region":"us-east-1","api":"elastic","endpoint_name":"my-endpoint","dimensions":384,"element_type":"float"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/amazon_sagemaker_embeddings"
Request examples
Run `PUT _inference/text_embedding/amazon_sagemaker_embeddings` to create an inference endpoint that performs a text embedding task.
{
    "service": "amazon_sagemaker",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "api": "elastic",
        "endpoint_name": "my-endpoint",
        "dimensions": 384,
        "element_type": "float"
    }
}
Run `PUT _inference/completion/amazon_sagemaker_completion` to create an inference endpoint that performs a completion task.
{
    "service": "amazon_sagemaker",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "api": "elastic",
        "endpoint_name": "my-endpoint"
    }
}
Run `PUT _inference/chat_completion/amazon_sagemaker_chat_completion` to create an inference endpoint that performs a chat completion task.
{
    "service": "amazon_sagemaker",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "api": "elastic",
        "endpoint_name": "my-endpoint"
    }
}
Run `PUT _inference/sparse_embedding/amazon_sagemaker_sparse_embedding` to create an inference endpoint that performs a sparse embedding task.
{
    "service": "amazon_sagemaker",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "api": "elastic",
        "endpoint_name": "my-endpoint"
    }
}
Run `PUT _inference/rerank/amazon_sagemaker_rerank` to create an inference endpoint that performs a rerank task.
{
    "service": "amazon_sagemaker",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "api": "elastic",
        "endpoint_name": "my-endpoint"
    }
}

Create an Anthropic inference endpoint Generally available; Added in 8.16.0

PUT /_inference/{task_type}/{anthropic_inference_id}

Create an inference endpoint to perform an inference task with the anthropic service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The task type. The only valid task type for the model to perform is completion.

    Value is completion.

  • anthropic_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, anthropic.

    Value is anthropic.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the anthropic service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key for the Anthropic API.

    • model_id string Required

      The name of the model to use for the inference task. Refer to the Anthropic documentation for the list of supported models.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Anthropic. By default, the anthropic service sets the number of requests allowed per minute to 50.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attributes Show task_settings attributes object
    • max_tokens number Required

      For a completion task, it is the maximum number of tokens to generate before stopping.

    • temperature number

      For a completion task, it is the amount of randomness injected into the response. For more details about the supported range, refer to Anthropic documentation.

      External documentation
    • top_k number

      For a completion task, it specifies to only sample from the top K options for each subsequent token. It is recommended for advanced use cases only. You usually only need to use temperature.

    • top_p number

      For a completion task, it specifies to use Anthropic's nucleus sampling. In nucleus sampling, Anthropic computes the cumulative distribution over all the options for each subsequent token in decreasing probability order and cuts it off once it reaches the specified probability. You should either alter temperature or top_p, but not both. It is recommended for advanced use cases only. You usually only need to use temperature.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Value is completion.

PUT /_inference/{task_type}/{anthropic_inference_id}
PUT _inference/completion/anthropic_completion
{
    "service": "anthropic",
    "service_settings": {
        "api_key": "Anthropic-Api-Key",
        "model_id": "Model-ID"
    },
    "task_settings": {
        "max_tokens": 1024
    }
}
resp = client.inference.put(
    task_type="completion",
    inference_id="anthropic_completion",
    inference_config={
        "service": "anthropic",
        "service_settings": {
            "api_key": "Anthropic-Api-Key",
            "model_id": "Model-ID"
        },
        "task_settings": {
            "max_tokens": 1024
        }
    },
)
const response = await client.inference.put({
  task_type: "completion",
  inference_id: "anthropic_completion",
  inference_config: {
    service: "anthropic",
    service_settings: {
      api_key: "Anthropic-Api-Key",
      model_id: "Model-ID",
    },
    task_settings: {
      max_tokens: 1024,
    },
  },
});
response = client.inference.put(
  task_type: "completion",
  inference_id: "anthropic_completion",
  body: {
    "service": "anthropic",
    "service_settings": {
      "api_key": "Anthropic-Api-Key",
      "model_id": "Model-ID"
    },
    "task_settings": {
      "max_tokens": 1024
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "completion",
    "inference_id" => "anthropic_completion",
    "body" => [
        "service" => "anthropic",
        "service_settings" => [
            "api_key" => "Anthropic-Api-Key",
            "model_id" => "Model-ID",
        ],
        "task_settings" => [
            "max_tokens" => 1024,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"anthropic","service_settings":{"api_key":"Anthropic-Api-Key","model_id":"Model-ID"},"task_settings":{"max_tokens":1024}}' "$ELASTICSEARCH_URL/_inference/completion/anthropic_completion"
client.inference().put(p -> p
    .inferenceId("anthropic_completion")
    .taskType(TaskType.Completion)
    .inferenceConfig(i -> i
        .service("anthropic")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Anthropic-Api-Key\",\"model_id\":\"Model-ID\"}"))
        .taskSettings(JsonData.fromJson("{\"max_tokens\":1024}"))
    )
);
Request example
Run `PUT _inference/completion/anthropic_completion` to create an inference endpoint that performs a completion task.
{
    "service": "anthropic",
    "service_settings": {
        "api_key": "Anthropic-Api-Key",
        "model_id": "Model-ID"
    },
    "task_settings": {
        "max_tokens": 1024
    }
}
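The following is a minimal, unofficial sketch that also sets the optional temperature task setting, using the Python client as in the examples above; the endpoint name, API key, and model ID are placeholders.
# Illustrative only: "anthropic_completion_sampling" is a hypothetical endpoint name,
# and the API key and model ID are placeholders. temperature is optional and should
# not be combined with top_p.
resp = client.inference.put(
    task_type="completion",
    inference_id="anthropic_completion_sampling",
    inference_config={
        "service": "anthropic",
        "service_settings": {
            "api_key": "Anthropic-Api-Key",
            "model_id": "Model-ID"
        },
        "task_settings": {
            "max_tokens": 1024,
            "temperature": 0.7
        }
    },
)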

Create an Azure AI studio inference endpoint Generally available; Added in 8.14.0

PUT /_inference/{task_type}/{azureaistudio_inference_id}

Create an inference endpoint to perform an inference task with the azureaistudio service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion or text_embedding.

  • azureaistudio_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, azureaistudio.

    Value is azureaistudio.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the azureaistudio service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key of your Azure AI Studio model deployment. This key can be found on the overview page for your deployment in the management section of your Azure AI Studio account.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • endpoint_type string Required

      The type of endpoint that is available for deployment through Azure AI Studio: token or realtime. The token endpoint type is for "pay as you go" endpoints that are billed per token. The realtime endpoint type is for "real-time" endpoints that are billed per hour of usage.

      External documentation
    • target string Required

      The target URL of your Azure AI Studio model deployment. This can be found on the overview page for your deployment in the management section of your Azure AI Studio account.

    • provider string Required

      The model provider for your deployment. Note that some providers may support only certain task types. Supported providers include:

      • cohere - available for text_embedding and completion task types
      • databricks - available for completion task type only
      • meta - available for completion task type only
      • microsoft_phi - available for completion task type only
      • mistral - available for completion task type only
      • openai - available for text_embedding and completion task types
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Azure AI Studio. By default, the azureaistudio service sets the number of requests allowed per minute to 240.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attributes Show task_settings attributes object
    • do_sample number

      For a completion task, instruct the inference process to perform sampling. It has no effect unless temperature or top_p is specified.

    • max_new_tokens number

      For a completion task, provide a hint for the maximum number of output tokens to be generated.

      Default value is 64.

    • temperature number

      For a completion task, control the apparent creativity of generated completions with a sampling temperature. It must be a number in the range of 0.0 to 2.0. It should not be used if top_p is specified.

    • top_p number

      For a completion task, make the model consider the results of the tokens with nucleus sampling probability. It is an alternative value to temperature and must be a number in the range of 0.0 to 2.0. It should not be used if temperature is specified.

    • user string

      For a text_embedding task, specify the user issuing the request. This information can be used for abuse detection.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding or completion.

PUT /_inference/{task_type}/{azureaistudio_inference_id}
PUT _inference/text_embedding/azure_ai_studio_embeddings
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "Azure-AI-Studio-API-key",
        "target": "Target-Uri",
        "provider": "openai",
        "endpoint_type": "token"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_ai_studio_embeddings",
    inference_config={
        "service": "azureaistudio",
        "service_settings": {
            "api_key": "Azure-AI-Studio-API-key",
            "target": "Target-Uri",
            "provider": "openai",
            "endpoint_type": "token"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "azure_ai_studio_embeddings",
  inference_config: {
    service: "azureaistudio",
    service_settings: {
      api_key: "Azure-AI-Studio-API-key",
      target: "Target-Uri",
      provider: "openai",
      endpoint_type: "token",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "azure_ai_studio_embeddings",
  body: {
    "service": "azureaistudio",
    "service_settings": {
      "api_key": "Azure-AI-Studio-API-key",
      "target": "Target-Uri",
      "provider": "openai",
      "endpoint_type": "token"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "azure_ai_studio_embeddings",
    "body" => [
        "service" => "azureaistudio",
        "service_settings" => [
            "api_key" => "Azure-AI-Studio-API-key",
            "target" => "Target-Uri",
            "provider" => "openai",
            "endpoint_type" => "token",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"azureaistudio","service_settings":{"api_key":"Azure-AI-Studio-API-key","target":"Target-Uri","provider":"openai","endpoint_type":"token"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/azure_ai_studio_embeddings"
client.inference().put(p -> p
    .inferenceId("azure_ai_studio_embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("azureaistudio")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Azure-AI-Studio-API-key\",\"target\":\"Target-Uri\",\"provider\":\"openai\",\"endpoint_type\":\"token\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/azure_ai_studio_embeddings` to create an inference endpoint that performs a text_embedding task. Note that you do not specify a model here, as it is defined already in the Azure AI Studio deployment.
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "Azure-AI-Studio-API-key",
        "target": "Target-Uri",
        "provider": "openai",
        "endpoint_type": "token"
    }
}
Run `PUT _inference/completion/azure_ai_studio_completion` to create an inference endpoint that performs a completion task.
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "Azure-AI-Studio-API-key",
        "target": "Target-URI",
        "provider": "databricks",
        "endpoint_type": "realtime"
    }
}
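As an unofficial sketch, the following Python request (mirroring the client examples above) creates a completion endpoint that also sets the optional max_new_tokens and temperature task settings; the endpoint name, API key, and target URI are placeholders.
# Illustrative only: "azure_ai_studio_completion_tuned" is a hypothetical endpoint name,
# and the API key and target URI are placeholders. temperature should not be combined
# with top_p.
resp = client.inference.put(
    task_type="completion",
    inference_id="azure_ai_studio_completion_tuned",
    inference_config={
        "service": "azureaistudio",
        "service_settings": {
            "api_key": "Azure-AI-Studio-API-key",
            "target": "Target-URI",
            "provider": "openai",
            "endpoint_type": "token"
        },
        "task_settings": {
            "max_new_tokens": 256,
            "temperature": 0.7
        }
    },
)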

Create an Azure OpenAI inference endpoint Generally available; Added in 8.14.0

PUT /_inference/{task_type}/{azureopenai_inference_id}

Create an inference endpoint to perform an inference task with the azureopenai service.

The lists of chat completion and embeddings models that you can choose from in your Azure OpenAI deployment can be found in the Azure models documentation.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API.

    Values are completion or text_embedding.

  • azureopenai_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, azureopenai.

    Value is azureopenai.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the azureopenai service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string

      A valid API key for your Azure OpenAI account. You must specify either api_key or entra_id. If you do not provide either or you provide both, you will receive an error when you try to create your model.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • api_version string Required

      The Azure API version ID to use. It is recommended to use the latest supported non-preview version.

    • deployment_id string Required

      The deployment name of your deployed models. Your Azure OpenAI deployments can be found through the Azure OpenAI Studio portal that is linked to your subscription.

      External documentation
    • entra_id string

      A valid Microsoft Entra token. You must specify either api_key or entra_id. If you do not provide either or you provide both, you will receive an error when you try to create your model.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Azure. The azureopenai service sets a default number of requests allowed per minute depending on the task type. For text_embedding, it is set to 1440. For completion, it is set to 120.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

    • resource_name string Required

      The name of your Azure OpenAI resource. You can find this from the list of resources in the Azure Portal for your subscription.

      External documentation
  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attribute Show task_settings attribute object
    • user string

      For a completion or text_embedding task, specify the user issuing the request. This information can be used for abuse detection.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding or completion.

PUT /_inference/{task_type}/{azureopenai_inference_id}
PUT _inference/text_embedding/azure_openai_embeddings
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "Api-Key",
        "resource_name": "Resource-name",
        "deployment_id": "Deployment-id",
        "api_version": "2024-02-01"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_openai_embeddings",
    inference_config={
        "service": "azureopenai",
        "service_settings": {
            "api_key": "Api-Key",
            "resource_name": "Resource-name",
            "deployment_id": "Deployment-id",
            "api_version": "2024-02-01"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "azure_openai_embeddings",
  inference_config: {
    service: "azureopenai",
    service_settings: {
      api_key: "Api-Key",
      resource_name: "Resource-name",
      deployment_id: "Deployment-id",
      api_version: "2024-02-01",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "azure_openai_embeddings",
  body: {
    "service": "azureopenai",
    "service_settings": {
      "api_key": "Api-Key",
      "resource_name": "Resource-name",
      "deployment_id": "Deployment-id",
      "api_version": "2024-02-01"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "azure_openai_embeddings",
    "body" => [
        "service" => "azureopenai",
        "service_settings" => [
            "api_key" => "Api-Key",
            "resource_name" => "Resource-name",
            "deployment_id" => "Deployment-id",
            "api_version" => "2024-02-01",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"azureopenai","service_settings":{"api_key":"Api-Key","resource_name":"Resource-name","deployment_id":"Deployment-id","api_version":"2024-02-01"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/azure_openai_embeddings"
client.inference().put(p -> p
    .inferenceId("azure_openai_embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("azureopenai")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Api-Key\",\"resource_name\":\"Resource-name\",\"deployment_id\":\"Deployment-id\",\"api_version\":\"2024-02-01\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/azure_openai_embeddings` to create an inference endpoint that performs a `text_embedding` task. You do not specify a model, as it is defined already in the Azure OpenAI deployment.
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "Api-Key",
        "resource_name": "Resource-name",
        "deployment_id": "Deployment-id",
        "api_version": "2024-02-01"
    }
}
Run `PUT _inference/completion/azure_openai_completion` to create an inference endpoint that performs a `completion` task.
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "Api-Key",
        "resource_name": "Resource-name",
        "deployment_id": "Deployment-id",
        "api_version": "2024-02-01"
    }
}
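The sketch below (unofficial, using the Python client as in the examples above) shows how the optional rate_limit service setting and the user task setting could be supplied; the endpoint name, key, resource, deployment, and user value are placeholders.
# Illustrative only: names and credentials are placeholders.
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_openai_embeddings_limited",
    inference_config={
        "service": "azureopenai",
        "service_settings": {
            "api_key": "Api-Key",
            "resource_name": "Resource-name",
            "deployment_id": "Deployment-id",
            "api_version": "2024-02-01",
            # Optional: lower the default 1440 requests per minute for text_embedding.
            "rate_limit": {
                "requests_per_minute": 720
            }
        },
        "task_settings": {
            # Optional: identify the user issuing the request for abuse detection.
            "user": "example-user-id"
        }
    },
)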

Create a Cohere inference endpoint Generally available; Added in 8.13.0

PUT /_inference/{task_type}/{cohere_inference_id}

Create an inference endpoint to perform an inference task with the cohere service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion, rerank, or text_embedding.

  • cohere_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, cohere.

    Value is cohere.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the cohere service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key for your Cohere account. You can find or create your Cohere API keys on the Cohere API key settings page.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • embedding_type string

      For a text_embedding task, the types of embeddings you want to get back. Use binary for binary embeddings, which are encoded as bytes with signed int8 precision. Use bit for binary embeddings, which are encoded as bytes with signed int8 precision (this is a synonym of binary). Use byte for signed int8 embeddings (this is a synonym of int8). Use float for the default float embeddings. Use int8 for signed int8 embeddings.

      Values are binary, bit, byte, float, or int8.

    • model_id string

      For a completion, rerank, or text_embedding task, the name of the model to use for the inference task.

      The default value for a text embedding task is embed-english-v2.0.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Cohere. By default, the cohere service sets the number of requests allowed per minute to 10000.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

    • similarity string

      The similarity measure. If the embedding_type is float, the default value is dot_product. If the embedding_type is int8 or byte, the default value is cosine.

      Values are cosine, dot_product, or l2_norm.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attributes Show task_settings attributes object
    • input_type string

      For a text_embedding task, the type of input passed to the model. Valid values are:

      • classification: Use it for embeddings passed through a text classifier.
      • clustering: Use it for the embeddings run through a clustering algorithm.
      • ingest: Use it for storing document embeddings in a vector database.
      • search: Use it for storing embeddings of search queries run against a vector database to find relevant documents.

      IMPORTANT: The input_type field is required when using embedding models v3 and higher.

      Values are classification, clustering, ingest, or search.

    • return_documents boolean

      For a rerank task, return doc text within the results.

    • top_n number

      For a rerank task, the number of most relevant documents to return. It defaults to the number of the documents. If this inference endpoint is used in a text_similarity_reranker retriever query and top_n is set, it must be greater than or equal to rank_window_size in the query.

    • truncate string

      For a text_embedding task, the method to handle inputs longer than the maximum token length. Valid values are:

      • END: When the input exceeds the maximum input token length, the end of the input is discarded.
      • NONE: When the input exceeds the maximum input token length, an error is returned.
      • START: When the input exceeds the maximum input token length, the start of the input is discarded.

      Values are END, NONE, or START.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding, rerank, or completion.

PUT /_inference/{task_type}/{cohere_inference_id}
PUT _inference/text_embedding/cohere-embeddings
{
    "service": "cohere",
    "service_settings": {
        "api_key": "Cohere-Api-key",
        "model_id": "embed-english-light-v3.0",
        "embedding_type": "byte"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="cohere-embeddings",
    inference_config={
        "service": "cohere",
        "service_settings": {
            "api_key": "Cohere-Api-key",
            "model_id": "embed-english-light-v3.0",
            "embedding_type": "byte"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "cohere-embeddings",
  inference_config: {
    service: "cohere",
    service_settings: {
      api_key: "Cohere-Api-key",
      model_id: "embed-english-light-v3.0",
      embedding_type: "byte",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "cohere-embeddings",
  body: {
    "service": "cohere",
    "service_settings": {
      "api_key": "Cohere-Api-key",
      "model_id": "embed-english-light-v3.0",
      "embedding_type": "byte"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "cohere-embeddings",
    "body" => [
        "service" => "cohere",
        "service_settings" => [
            "api_key" => "Cohere-Api-key",
            "model_id" => "embed-english-light-v3.0",
            "embedding_type" => "byte",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"cohere","service_settings":{"api_key":"Cohere-Api-key","model_id":"embed-english-light-v3.0","embedding_type":"byte"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/cohere-embeddings"
client.inference().put(p -> p
    .inferenceId("cohere-embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("cohere")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Cohere-Api-key\",\"model_id\":\"embed-english-light-v3.0\",\"embedding_type\":\"byte\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/cohere-embeddings` to create an inference endpoint that performs a text embedding task.
{
    "service": "cohere",
    "service_settings": {
        "api_key": "Cohere-Api-key",
        "model_id": "embed-english-light-v3.0",
        "embedding_type": "byte"
    }
}
Run `PUT _inference/rerank/cohere-rerank` to create an inference endpoint that performs a rerank task.
{
    "service": "cohere",
    "service_settings": {
        "api_key": "Cohere-API-key",
        "model_id": "rerank-english-v3.0"
    },
    "task_settings": {
        "top_n": 10,
        "return_documents": true
    }
}
Run `PUT _inference/completion/cohere-completion` to create an inference endpoint that performs a completion task.
{
    "service": "cohere",
    "service_settings": {
        "api_key": "Cohere-API-key",
        "model_id": "command-a-03-2025"
    }
}
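As an unofficial sketch using the Python client (as in the examples above), the following creates a text_embedding endpoint that also supplies chunking_settings and the input_type and truncate task settings; the endpoint name, API key, and model ID are placeholders.
# Illustrative only: names and credentials are placeholders. input_type is required
# for v3 and higher embedding models; overlap must not exceed half of max_chunk_size.
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="cohere-embeddings-ingest",
    inference_config={
        "service": "cohere",
        "service_settings": {
            "api_key": "Cohere-Api-key",
            "model_id": "embed-english-v3.0",
            "embedding_type": "byte"
        },
        "chunking_settings": {
            "strategy": "word",
            "max_chunk_size": 120,
            "overlap": 50
        },
        "task_settings": {
            "input_type": "ingest",
            "truncate": "END"
        }
    },
)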

Create a custom inference endpoint Generally available; Added in 8.19.0

PUT /_inference/{task_type}/{custom_inference_id}

The custom service gives more control over how to interact with external inference services that aren't explicitly supported through dedicated integrations. It lets you define the headers, url, query parameters, request body, and secrets.

The custom service supports template replacement, which enables you to define a template that is replaced with the value associated with that key. Templates are portions of a string that start with ${ and end with }. The parameters secret_parameters and task_settings are checked for keys for template replacement. Template replacement is supported in the request, headers, url, and query_parameters. If the definition (key) is not found for a template, an error message is returned.

In the case of an endpoint definition like the following:

PUT _inference/text_embedding/test-text-embedding
{
  "service": "custom",
  "service_settings": {
     "secret_parameters": {
          "api_key": "<some api key>"
     },
     "url": "...endpoints.huggingface.cloud/v1/embeddings",
     "headers": {
         "Authorization": "Bearer ${api_key}",
         "Content-Type": "application/json"
     },
     "request": "{\"input\": ${input}}",
     "response": {
         "json_parser": {
             "text_embeddings":"$.data[*].embedding[*]"
         }
     }
  }
}

To replace ${api_key}, the secret_parameters and task_settings are checked for a key named api_key.


Templates should not be surrounded by quotes.

Pre-defined templates:

  • ${input} refers to the array of input strings that comes from the input field of the subsequent inference requests.
  • ${input_type} refers to the input type translation values.
  • ${query} refers to the query field used specifically for reranking tasks.
  • ${top_n} refers to the top_n field available when performing rerank requests.
  • ${return_documents} refers to the return_documents field available when performing rerank requests.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are text_embedding, sparse_embedding, rerank, or completion.

  • custom_inference_id string Required

    The unique identifier of the inference endpoint.

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, custom.

    Value is custom.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the custom service.

    Hide service_settings attributes Show service_settings attributes object
    • headers object

      Specifies the HTTPS header parameters, such as Authentication or Content-Type, that are required to access the custom service. For example:

      "headers":{
        "Authorization": "Bearer ${api_key}",
        "Content-Type": "application/json;charset=utf-8"
      }
      
    • input_type object

      Specifies the input type translation values that are used to replace the ${input_type} template in the request body. For example:

      "input_type": {
        "translation": {
          "ingest": "do_ingest",
          "search": "do_search"
        },
        "default": "a_default"
      },
      

      If the subsequent inference requests come from a search context, the search key will be used and the template will be replaced with do_search. If they come from the ingest context, do_ingest is used. If the context is anything else that is not specified, the default value will be used. If no default is specified, an empty string is used. translation can be:

      • classification
      • clustering
      • ingest
      • search
    • query_parameters object

      Specifies the query parameters as a list of tuples. The arrays inside the query_parameters must have two items, a key and a value. For example:

      "query_parameters":[
        ["param_key", "some_value"],
        ["param_key", "another_value"],
        ["other_key", "other_value"]
      ]
      

      If the base URL is https://www.elastic.co, it results in: https://www.elastic.co?param_key=some_value&param_key=another_value&other_key=other_value.

    • request object Required

      The request configuration object.

      Hide request attribute Show request attribute object
      • content string Required

        The body structure of the request. It requires passing in the string-escaped result of the JSON format HTTP request body. For example:

        "request": "{\"input\":${input}}"
        


        The content string needs to be a single line except when using the Kibana console.

    • response object Required

      The response configuration object.

      Hide response attribute Show response attribute object
      • json_parser object Required

        Specifies the JSON parser that is used to parse the response from the custom service. Different task types require different json_parser parameters. For example:

        # text_embedding
        # For a response like this:
        
        {
         "object": "list",
         "data": [
             {
               "object": "embedding",
               "index": 0,
               "embedding": [
                   0.014539449,
                   -0.015288644
               ]
             }
         ],
         "model": "text-embedding-ada-002-v2",
         "usage": {
             "prompt_tokens": 8,
             "total_tokens": 8
         }
        }
        
        # the json_parser definition should look like this:
        
        "response":{
          "json_parser":{
            "text_embeddings":"$.data[*].embedding[*]"
          }
        }
        
        # sparse_embedding
        # For a response like this:
        
        {
          "request_id": "75C50B5B-E79E-4930-****-F48DBB392231",
          "latency": 22,
          "usage": {
             "token_count": 11
          },
          "result": {
             "sparse_embeddings": [
                {
                  "index": 0,
                  "embedding": [
                    {
                      "token_id": 6,
                      "weight": 0.101
                    },
                    {
                      "token_id": 163040,
                      "weight": 0.28417
                    }
                  ]
                }
             ]
          }
        }
        
        # the json_parser definition should look like this:
        
        "response":{
          "json_parser":{
            "token_path":"$.result.sparse_embeddings[*].embedding[*].token_id",
            "weight_path":"$.result.sparse_embeddings[*].embedding[*].weight"
          }
        }
        
        # rerank
        # For a response like this:
        
        {
          "results": [
            {
              "index": 3,
              "relevance_score": 0.999071,
              "document": "abc"
            },
            {
              "index": 4,
              "relevance_score": 0.7867867,
              "document": "123"
            },
            {
              "index": 0,
              "relevance_score": 0.32713068,
              "document": "super"
            }
          ]
        }
        
        # the json_parser definition should look like this:
        
        "response":{
          "json_parser":{
            "reranked_index":"$.result.scores[*].index",    // optional
            "relevance_score":"$.result.scores[*].score",
            "document_text":"xxx"    // optional
          }
        }
        
        # completion
        # For a response like this:
        
        {
         "id": "chatcmpl-B9MBs8CjcvOU2jLn4n570S5qMJKcT",
         "object": "chat.completion",
         "created": 1741569952,
         "model": "gpt-4.1-2025-04-14",
         "choices": [
           {
            "index": 0,
            "message": {
              "role": "assistant",
              "content": "Hello! How can I assist you today?",
              "refusal": null,
              "annotations": []
            },
            "logprobs": null,
            "finish_reason": "stop"
          }
         ]
        }
        
        # the json_parser definition should look like this:
        
        "response":{
          "json_parser":{
            "completion_result":"$.choices[*].message.content"
          }
        }
        
    • secret_parameters object Required

      Specifies secret parameters, like api_key or api_token, that are required to access the custom service. For example:

      "secret_parameters":{
        "api_key":"<api_key>"
      }
      
    • url string

      The URL endpoint to use for the requests.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attribute Show task_settings attribute object
    • parameters object

      Specifies parameters that are required to run the custom service. The parameters depend on the model your custom service uses. For example:

      "task_settings":{
        "parameters":{
          "input_type":"query",
          "return_token":true
        }
      }
      

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding, sparse_embedding, rerank, or completion.

PUT /_inference/{task_type}/{custom_inference_id}
PUT _inference/text_embedding/custom-embeddings
{
    "service": "custom",
    "service_settings": {
        "secret_parameters": {
            "api_key": "<api key>"
        },
        "url": "https://api.openai.com/v1/embeddings",
        "headers": {
            "Authorization": "Bearer ${api_key}",
            "Content-Type": "application/json;charset=utf-8"
        },
        "request": "{\"input\": ${input}, \"model\": \"text-embedding-3-small\"}",
        "response": {
            "json_parser": {
                "text_embeddings": "$.data[*].embedding[*]"
            }
        }
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="custom-embeddings",
    inference_config={
        "service": "custom",
        "service_settings": {
            "secret_parameters": {
                "api_key": "<api key>"
            },
            "url": "https://api.openai.com/v1/embeddings",
            "headers": {
                "Authorization": "Bearer ${api_key}",
                "Content-Type": "application/json;charset=utf-8"
            },
            "request": "{\"input\": ${input}, \"model\": \"text-embedding-3-small\"}",
            "response": {
                "json_parser": {
                    "text_embeddings": "$.data[*].embedding[*]"
                }
            }
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "custom-embeddings",
  inference_config: {
    service: "custom",
    service_settings: {
      secret_parameters: {
        api_key: "<api key>",
      },
      url: "https://api.openai.com/v1/embeddings",
      headers: {
        Authorization: "Bearer ${api_key}",
        "Content-Type": "application/json;charset=utf-8",
      },
      request: '{"input": ${input}, "model": "text-embedding-3-small"}',
      response: {
        json_parser: {
          text_embeddings: "$.data[*].embedding[*]",
        },
      },
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "custom-embeddings",
  body: {
    "service": "custom",
    "service_settings": {
      "secret_parameters": {
        "api_key": "<api key>"
      },
      "url": "https://api.openai.com/v1/embeddings",
      "headers": {
        "Authorization": "Bearer ${api_key}",
        "Content-Type": "application/json;charset=utf-8"
      },
      "request": "{\"input\": ${input}, \"model\": \"text-embedding-3-small\"}",
      "response": {
        "json_parser": {
          "text_embeddings": "$.data[*].embedding[*]"
        }
      }
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "custom-embeddings",
    "body" => [
        "service" => "custom",
        "service_settings" => [
            "secret_parameters" => [
                "api_key" => "<api key>",
            ],
            "url" => "https://api.openai.com/v1/embeddings",
            "headers" => [
                "Authorization" => "Bearer ${api_key}",
                "Content-Type" => "application/json;charset=utf-8",
            ],
            "request" => "{\"input\": ${input}, \"model\": \"text-embedding-3-small\"}",
            "response" => [
                "json_parser" => [
                    "text_embeddings" => "$.data[*].embedding[*]",
                ],
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"custom","service_settings":{"secret_parameters":{"api_key":"<api key>"},"url":"https://api.openai.com/v1/embeddings","headers":{"Authorization":"Bearer ${api_key}","Content-Type":"application/json;charset=utf-8"},"request":"{\"input\": ${input}, \"model\": \"text-embedding-3-small\"}","response":{"json_parser":{"text_embeddings":"$.data[*].embedding[*]"}}}}' "$ELASTICSEARCH_URL/_inference/text_embedding/custom-embeddings"
Request examples
Run `PUT _inference/text_embedding/custom-embeddings` to create an inference endpoint that performs a text embedding task.
{
    "service": "custom",
    "service_settings": {
        "secret_parameters": {
            "api_key": "<api key>"
        },
        "url": "https://api.openai.com/v1/embeddings",
        "headers": {
            "Authorization": "Bearer ${api_key}",
            "Content-Type": "application/json;charset=utf-8"
        },
        "request": "{\"input\": ${input}, \"model\": \"text-embedding-3-small\"}",
        "response": {
            "json_parser": {
                "text_embeddings": "$.data[*].embedding[*]"
            }
        }
    }
}
Run `PUT _inference/rerank/custom-rerank` to create an inference endpoint that performs a rerank task.
{
  "service": "custom",
  "service_settings": {
      "secret_parameters": {
          "api_key": "<api key>"
      },
      "url": "https://api.cohere.com/v2/rerank",
      "headers": {
          "Authorization": "bearer ${api_key}",
          "Content-Type": "application/json"
      },
      "request": "{\"documents\": ${input}, \"query\": ${query}, \"model\": \"rerank-v3.5\"}",
      "response": {
          "json_parser": {
              "reranked_index":"$.results[*].index",
              "relevance_score":"$.results[*].relevance_score"
          }
      }
  }
}
Run `PUT _inference/text_embedding/custom-text-embedding` to create an inference endpoint that performs a text embedding task.
{
  "service": "custom",
  "service_settings": {
      "secret_parameters": {
          "api_key": "<api key>"
      },
      "url": "https://api.cohere.com/v2/embed",
      "headers": {
          "Authorization": "bearer ${api_key}",
          "Content-Type": "application/json"
      },
      "request": "{\"texts\": ${input}, \"model\": \"embed-v4.0\", \"input_type\": ${input_type}}",
      "response": {
          "json_parser": {
              "text_embeddings":"$.embeddings.float[*]"
          }
      },
      "input_type": {
          "translation": {
              "ingest": "search_document",
              "search": "search_query"
          },
          "default": "search_document"
      }
  }
}
Run `PUT _inference/rerank/custom-rerank-jina` to create an inference endpoint that performs a rerank task.
{
  "service": "custom",
  "service_settings": {
    "secret_parameters": {
      "api_key": "<api key>"
    },    
    "url": "https://api.jina.ai/v1/rerank",
    "headers": {
      "Content-Type": "application/json",
      "Authorization": "Bearer ${api_key}"
    },
    "request": "{\"model\": \"jina-reranker-v2-base-multilingual\",\"query\": ${query},\"documents\":${input}}",
    "response": {
      "json_parser": {
        "relevance_score": "$.results[*].relevance_score",
        "reranked_index": "$.results[*].index"
      }
    }
  }
}
Run `PUT _inference/text_embedding/custom-text-embedding-hf` to create an inference endpoint that performs a text embedding task by using the Qwen/Qwen3-Embedding-8B model.
{
  "service": "custom",
  "service_settings": {
      "secret_parameters": {
          "api_key": "<api key>"
      },
      "url": "<dedicated inference endpoint on HF>/v1/embeddings",
      "headers": {
          "Authorization": "Bearer ${api_key}",
          "Content-Type": "application/json"
      },
      "request": "{\"input\": ${input}}",
      "response": {
          "json_parser": {
              "text_embeddings":"$.data[*].embedding[*]"
          }
      }
  }
}
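None of the examples above use query_parameters or the ${top_n} template, so the following unofficial sketch (Python client, as in the first example) shows how they could be combined; the URL, query parameter, endpoint name, and API key are hypothetical placeholders.
# Illustrative only: https://example.com/rerank and the "version" query parameter are
# hypothetical; ${api_key}, ${query}, ${input}, and ${top_n} are resolved by the
# custom service's template replacement.
resp = client.inference.put(
    task_type="rerank",
    inference_id="custom-rerank-with-query-params",
    inference_config={
        "service": "custom",
        "service_settings": {
            "secret_parameters": {
                "api_key": "<api key>"
            },
            "url": "https://example.com/rerank",
            "query_parameters": [
                ["version", "2024-05-01"]
            ],
            "headers": {
                "Authorization": "Bearer ${api_key}",
                "Content-Type": "application/json"
            },
            "request": "{\"query\": ${query}, \"documents\": ${input}, \"top_n\": ${top_n}}",
            "response": {
                "json_parser": {
                    "reranked_index": "$.results[*].index",
                    "relevance_score": "$.results[*].relevance_score"
                }
            }
        }
    },
)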

Create a DeepSeek inference endpoint Generally available; Added in 8.19.0

PUT /_inference/{task_type}/{deepseek_inference_id}

Create an inference endpoint to perform an inference task with the deepseek service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion or chat_completion.

  • deepseek_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, deepseek.

    Value is deepseek.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the deepseek service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key for your DeepSeek account. You can find or create your DeepSeek API keys on the DeepSeek API key page.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • model_id string Required

      For a completion or chat_completion task, the name of the model to use for the inference task.

      For the available completion and chat_completion models, refer to the DeepSeek Models & Pricing docs.

    • url string

      The URL endpoint to use for the requests. Defaults to https://api.deepseek.com/chat/completions.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are completion or chat_completion.

PUT /_inference/{task_type}/{deepseek_inference_id}
curl \
 --request PUT 'http://api.example.com/_inference/{task_type}/{deepseek_inference_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"chunking_settings":{"max_chunk_size":250,"overlap":100,"sentence_overlap":1,"strategy":"sentence"},"service":"deepseek","service_settings":{"api_key":"string","model_id":"string","url":"string"}}'
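The generic request above can be made concrete with a sketch like the following (unofficial, using the Python client as in the other sections); the endpoint name and API key are placeholders, and the model ID is only an example, so check the DeepSeek Models & Pricing docs for the model you want.
# Illustrative only: "deepseek-chat-completion" is a hypothetical endpoint name, the
# API key is a placeholder, and "deepseek-chat" is an example model ID.
resp = client.inference.put(
    task_type="chat_completion",
    inference_id="deepseek-chat-completion",
    inference_config={
        "service": "deepseek",
        "service_settings": {
            "api_key": "DeepSeek-Api-Key",
            "model_id": "deepseek-chat"
        }
    },
)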

Create an Elasticsearch inference endpoint Generally available; Added in 8.13.0

PUT /_inference/{task_type}/{elasticsearch_inference_id}

Create an inference endpoint to perform an inference task with the elasticsearch service.


Your Elasticsearch deployment contains preconfigured ELSER and E5 inference endpoints; you only need to create endpoints using the API if you want to customize the settings.

If you use the ELSER or the E5 model through the elasticsearch service, the API request will automatically download and deploy the model if it isn't downloaded yet.


You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If you use the Python client, you can set the timeout parameter to a higher value.
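
As a rough sketch of the timeout adjustment mentioned above (assuming the elasticsearch-py 8.x client), you can raise the per-request timeout with `client.options(...)` before issuing the create request; the connection details, the 10-minute value, and the endpoint name are only examples.

from elasticsearch import Elasticsearch

# Placeholder connection details; substitute your own cluster URL and API key.
client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

# Allow up to 10 minutes for the call so the model download does not
# surface as a client-side timeout.
resp = client.options(request_timeout=600).inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-model",
    inference_config={
        "service": "elasticsearch",
        "service_settings": {
            "model_id": ".elser_model_2",
            "num_allocations": 1,
            "num_threads": 1,
        },
    },
)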

After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
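
The deployment check described above could look roughly like this with the Python client; the field names follow the get trained model statistics API response, and the model ID and connection details are only examples.

from elasticsearch import Elasticsearch

# Placeholder connection details; substitute your own cluster URL and API key.
client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

# Inspect the allocation status for the model backing the inference endpoint.
stats = client.ml.get_trained_models_stats(model_id=".elser_model_2")
for model in stats["trained_model_stats"]:
    allocation = model.get("deployment_stats", {}).get("allocation_status", {})
    ready = (
        allocation.get("state") == "fully_allocated"
        and allocation.get("allocation_count") == allocation.get("target_allocation_count")
    )
    print(model["model_id"], "ready:", ready)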

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are rerank, sparse_embedding, or text_embedding.

  • elasticsearch_inference_id string Required

The unique identifier of the inference endpoint. It must not match the model_id.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, elasticsearch.

    Value is elasticsearch.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the elasticsearch service.

    Hide service_settings attributes Show service_settings attributes object
    • adaptive_allocations object

      Adaptive allocations configuration details. If enabled is true, the number of allocations of the model is set based on the current load the process gets. When the load is high, a new model allocation is automatically created, respecting the value of max_number_of_allocations if it's set. When the load is low, a model allocation is automatically removed, respecting the value of min_number_of_allocations if it's set. If enabled is true, do not set the number of allocations manually.

      Hide adaptive_allocations attributes Show adaptive_allocations attributes object
      • enabled boolean

        Turn on adaptive_allocations.

        Default value is false.

      • max_number_of_allocations number

        The maximum number of allocations to scale to. If set, it must be greater than or equal to min_number_of_allocations.

      • min_number_of_allocations number

        The minimum number of allocations to scale to. If set, it must be greater than or equal to 0. If not defined, the deployment scales to 0.

    • deployment_id string

      The deployment identifier for a trained model deployment. When deployment_id is used the model_id is optional.

    • model_id string Required

      The name of the model to use for the inference task. It can be the ID of a built-in model (for example, .multilingual-e5-small for E5) or a text embedding model that was uploaded by using the Eland client.

      External documentation
    • num_allocations number

      The total number of allocations that are assigned to the model across machine learning nodes. Increasing this value generally increases the throughput. If adaptive allocations are enabled, do not set this value because it's automatically set.

    • num_threads number Required

The number of threads used by each model allocation during inference. Increasing this value generally increases the speed per inference request. The inference process is compute-bound, so the number of threads must not exceed the number of allocated processors available per node. The value must be a power of 2. The maximum value is 32.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attribute Show task_settings attribute object
    • return_documents boolean

      For a rerank task, return the document instead of only the index.

      Default value is true.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are sparse_embedding, text_embedding, or rerank.

PUT /_inference/{task_type}/{elasticsearch_inference_id}
PUT _inference/sparse_embedding/my-elser-model
{
    "service": "elasticsearch",
    "service_settings": {
        "adaptive_allocations": { 
        "enabled": true,
        "min_number_of_allocations": 1,
        "max_number_of_allocations": 4
        },
        "num_threads": 1,
        "model_id": ".elser_model_2" 
    }
}
resp = client.inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-model",
    inference_config={
        "service": "elasticsearch",
        "service_settings": {
            "adaptive_allocations": {
                "enabled": True,
                "min_number_of_allocations": 1,
                "max_number_of_allocations": 4
            },
            "num_threads": 1,
            "model_id": ".elser_model_2"
        }
    },
)
const response = await client.inference.put({
  task_type: "sparse_embedding",
  inference_id: "my-elser-model",
  inference_config: {
    service: "elasticsearch",
    service_settings: {
      adaptive_allocations: {
        enabled: true,
        min_number_of_allocations: 1,
        max_number_of_allocations: 4,
      },
      num_threads: 1,
      model_id: ".elser_model_2",
    },
  },
});
response = client.inference.put(
  task_type: "sparse_embedding",
  inference_id: "my-elser-model",
  body: {
    "service": "elasticsearch",
    "service_settings": {
      "adaptive_allocations": {
        "enabled": true,
        "min_number_of_allocations": 1,
        "max_number_of_allocations": 4
      },
      "num_threads": 1,
      "model_id": ".elser_model_2"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "sparse_embedding",
    "inference_id" => "my-elser-model",
    "body" => [
        "service" => "elasticsearch",
        "service_settings" => [
            "adaptive_allocations" => [
                "enabled" => true,
                "min_number_of_allocations" => 1,
                "max_number_of_allocations" => 4,
            ],
            "num_threads" => 1,
            "model_id" => ".elser_model_2",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"elasticsearch","service_settings":{"adaptive_allocations":{"enabled":true,"min_number_of_allocations":1,"max_number_of_allocations":4},"num_threads":1,"model_id":".elser_model_2"}}' "$ELASTICSEARCH_URL/_inference/sparse_embedding/my-elser-model"
client.inference().put(p -> p
    .inferenceId("my-elser-model")
    .taskType(TaskType.SparseEmbedding)
    .inferenceConfig(i -> i
        .service("elasticsearch")
        .serviceSettings(JsonData.fromJson("{\"adaptive_allocations\":{\"enabled\":true,\"min_number_of_allocations\":1,\"max_number_of_allocations\":4},\"num_threads\":1,\"model_id\":\".elser_model_2\"}"))
    )
);
Request examples
Run `PUT _inference/sparse_embedding/my-elser-model` to create an inference endpoint that performs a `sparse_embedding` task. The `model_id` must be the ID of one of the built-in ELSER models. The API will automatically download the ELSER model if it isn't already downloaded and then deploy the model.
{
    "service": "elasticsearch",
    "service_settings": {
        "adaptive_allocations": { 
        "enabled": true,
        "min_number_of_allocations": 1,
        "max_number_of_allocations": 4
        },
        "num_threads": 1,
        "model_id": ".elser_model_2" 
    }
}
Run `PUT _inference/rerank/my-elastic-rerank` to create an inference endpoint that performs a rerank task using the built-in Elastic Rerank cross-encoder model. The `model_id` must be `.rerank-v1`, which is the ID of the built-in Elastic Rerank model. The API will automatically download the Elastic Rerank model if it isn't already downloaded and then deploy the model. Once deployed, the model can be used for semantic re-ranking with a `text_similarity_reranker` retriever.
{
    "service": "elasticsearch",
    "service_settings": {
        "model_id": ".rerank-v1", 
        "num_threads": 1,
        "adaptive_allocations": { 
        "enabled": true,
        "min_number_of_allocations": 1,
        "max_number_of_allocations": 4
        }
    }
}
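
As noted above, the rerank endpoint can then be used for semantic re-ranking. A minimal search sketch with the Python client follows, assuming a recent 8.x client that accepts the retriever parameter; the index name `my-index`, the `text` field, and the query string are placeholders.

from elasticsearch import Elasticsearch

# Placeholder connection details; substitute your own cluster URL and API key.
client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

# Re-rank the top matches from a standard query with the Elastic Rerank endpoint.
resp = client.search(
    index="my-index",
    retriever={
        "text_similarity_reranker": {
            "retriever": {"standard": {"query": {"match": {"text": "how to configure autoscaling"}}}},
            "field": "text",
            "inference_id": "my-elastic-rerank",
            "inference_text": "how to configure autoscaling",
            "rank_window_size": 100,
        }
    },
)
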
Run `PUT _inference/text_embedding/my-e5-model` to create an inference endpoint that performs a `text_embedding` task. The `model_id` must be the ID of one of the built-in E5 models. The API will automatically download the E5 model if it isn't already downloaded and then deploy the model.
{
    "service": "elasticsearch",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1,
        "model_id": ".multilingual-e5-small" 
    }
}
Run `PUT _inference/text_embedding/my-msmarco-minilm-model` to create an inference endpoint that performs a `text_embedding` task with a model that was uploaded by Eland.
{
    "service": "elasticsearch",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1,
        "model_id": "msmarco-MiniLM-L12-cos-v5" 
    }
}
Run `PUT _inference/text_embedding/my-e5-model` to create an inference endpoint that performs a `text_embedding` task and to configure adaptive allocations. The API request will automatically download the E5 model if it isn't already downloaded and then deploy the model.
{
    "service": "elasticsearch",
    "service_settings": {
        "adaptive_allocations": {
        "enabled": true,
        "min_number_of_allocations": 3,
        "max_number_of_allocations": 10
        },
        "num_threads": 1,
        "model_id": ".multilingual-e5-small"
    }
}
Run `PUT _inference/sparse_embedding/use_existing_deployment` to use an already existing model deployment when creating an inference endpoint.
{
    "service": "elasticsearch",
    "service_settings": {
        "deployment_id": ".elser_model_2"
    }
}
Response examples (200)
A successful response from `PUT _inference/sparse_embedding/use_existing_deployment`. It contains the model ID and the threads and allocations settings from the model deployment.
{
  "inference_id": "use_existing_deployment",
  "task_type": "sparse_embedding",
  "service": "elasticsearch",
  "service_settings": {
    "num_allocations": 2,
    "num_threads": 1,
    "model_id": ".elser_model_2",
    "deployment_id": ".elser_model_2"
  },
  "chunking_settings": {
    "strategy": "sentence",
    "max_chunk_size": 250,
    "sentence_overlap": 1
  }
}

Create an ELSER inference endpoint Deprecated Generally available; Added in 8.11.0

PUT /_inference/{task_type}/{elser_inference_id}

Create an inference endpoint to perform an inference task with the elser service. You can also deploy ELSER by using the Elasticsearch inference integration.


Your Elasticsearch deployment contains a preconfigured ELSER inference endpoint. You only need to create an endpoint using the API if you want to customize the settings.

The API request will automatically download and deploy the ELSER model if it isn't already downloaded.


You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If you use the Python client, you can set the timeout parameter to a higher value.

After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Value is sparse_embedding.

  • elser_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, elser.

    Value is elser.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the elser service.

    Hide service_settings attributes Show service_settings attributes object
    • adaptive_allocations object

      Adaptive allocations configuration details. If enabled is true, the number of allocations of the model is set based on the current load the process gets. When the load is high, a new model allocation is automatically created, respecting the value of max_number_of_allocations if it's set. When the load is low, a model allocation is automatically removed, respecting the value of min_number_of_allocations if it's set. If enabled is true, do not set the number of allocations manually.

      Hide adaptive_allocations attributes Show adaptive_allocations attributes object
      • enabled boolean

        Turn on adaptive_allocations.

        Default value is false.

      • max_number_of_allocations number

        The maximum number of allocations to scale to. If set, it must be greater than or equal to min_number_of_allocations.

      • min_number_of_allocations number

        The minimum number of allocations to scale to. If set, it must be greater than or equal to 0. If not defined, the deployment scales to 0.

    • num_allocations number Required

The total number of allocations this model is assigned across machine learning nodes. Increasing this value generally increases the throughput. If adaptive allocations are enabled, do not set this value because it's automatically set.

    • num_threads number Required

The number of threads used by each model allocation during inference. Increasing this value generally increases the speed per inference request. The inference process is compute-bound, so the number of threads must not exceed the number of allocated processors available per node. The value must be a power of 2. The maximum value is 32.


If you want to optimize your ELSER endpoint for ingest, set the number of threads to 1. If you want to optimize your ELSER endpoint for search, set the number of threads to a value greater than 1.
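
A hedged sketch of the search-optimized case described above, using the Python client; the connection details, the endpoint name, and the choice of 4 threads (a power of 2) are illustrative, not prescriptive.

from elasticsearch import Elasticsearch

# Placeholder connection details; substitute your own cluster URL and API key.
client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

# More threads per allocation favors search latency; keep 1 thread for ingest-heavy use.
resp = client.inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-search-optimized",
    inference_config={
        "service": "elser",
        "service_settings": {
            "num_allocations": 1,
            "num_threads": 4,
        },
    },
)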

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Value is sparse_embedding.

PUT /_inference/{task_type}/{elser_inference_id}
PUT _inference/sparse_embedding/my-elser-model
{
    "service": "elser",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1
    }
}
resp = client.inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-model",
    inference_config={
        "service": "elser",
        "service_settings": {
            "num_allocations": 1,
            "num_threads": 1
        }
    },
)
const response = await client.inference.put({
  task_type: "sparse_embedding",
  inference_id: "my-elser-model",
  inference_config: {
    service: "elser",
    service_settings: {
      num_allocations: 1,
      num_threads: 1,
    },
  },
});
response = client.inference.put(
  task_type: "sparse_embedding",
  inference_id: "my-elser-model",
  body: {
    "service": "elser",
    "service_settings": {
      "num_allocations": 1,
      "num_threads": 1
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "sparse_embedding",
    "inference_id" => "my-elser-model",
    "body" => [
        "service" => "elser",
        "service_settings" => [
            "num_allocations" => 1,
            "num_threads" => 1,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"elser","service_settings":{"num_allocations":1,"num_threads":1}}' "$ELASTICSEARCH_URL/_inference/sparse_embedding/my-elser-model"
client.inference().put(p -> p
    .inferenceId("my-elser-model")
    .taskType(TaskType.SparseEmbedding)
    .inferenceConfig(i -> i
        .service("elser")
        .serviceSettings(JsonData.fromJson("{\"num_allocations\":1,\"num_threads\":1}"))
    )
);
Request examples
Run `PUT _inference/sparse_embedding/my-elser-model` to create an inference endpoint that performs a `sparse_embedding` task. The request will automatically download the ELSER model if it isn't already downloaded and then deploy the model.
{
    "service": "elser",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1
    }
}
Run `PUT _inference/sparse_embedding/my-elser-model` to create an inference endpoint that performs a `sparse_embedding` task with adaptive allocations. When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load.
{
    "service": "elser",
    "service_settings": {
        "adaptive_allocations": {
            "enabled": true,
            "min_number_of_allocations": 3,
            "max_number_of_allocations": 10
        },
        "num_threads": 1
    }
}
Response examples (200)
A successful response when creating an ELSER inference endpoint.
{
  "inference_id": "my-elser-model",
  "task_type": "sparse_embedding",
  "service": "elser",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  },
  "task_settings": {}
}

Create a Google AI Studio inference endpoint Generally available; Added in 8.15.0

PUT /_inference/{task_type}/{googleaistudio_inference_id}

Create an inference endpoint to perform an inference task with the googleaistudio service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion or text_embedding.

  • googleaistudio_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, googleaistudio.

    Value is googleaistudio.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the googleaistudio service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key of your Google Gemini account.

    • model_id string Required

      The name of the model to use for the inference task. Refer to the Google documentation for the list of supported models.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Google AI Studio. By default, the googleaistudio service sets the number of requests allowed per minute to 360.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.
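
For illustration, the default of 360 requests per minute described above can be overridden through `rate_limit.requests_per_minute`. This sketch uses the Python client with the same placeholder key and model values as the example below; the connection details and the value of 100 are assumptions.

from elasticsearch import Elasticsearch

# Placeholder connection details; substitute your own cluster URL and API key.
client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

# Override the default requests-per-minute limit for the googleaistudio service.
resp = client.inference.put(
    task_type="completion",
    inference_id="google_ai_studio_completion",
    inference_config={
        "service": "googleaistudio",
        "service_settings": {
            "api_key": "api-key",
            "model_id": "model-id",
            "rate_limit": {"requests_per_minute": 100},
        },
    },
)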

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding or completion.

PUT /_inference/{task_type}/{googleaistudio_inference_id}
PUT _inference/completion/google_ai_studio_completion
{
    "service": "googleaistudio",
    "service_settings": {
        "api_key": "api-key",
        "model_id": "model-id"
    }
}
resp = client.inference.put(
    task_type="completion",
    inference_id="google_ai_studio_completion",
    inference_config={
        "service": "googleaistudio",
        "service_settings": {
            "api_key": "api-key",
            "model_id": "model-id"
        }
    },
)
const response = await client.inference.put({
  task_type: "completion",
  inference_id: "google_ai_studio_completion",
  inference_config: {
    service: "googleaistudio",
    service_settings: {
      api_key: "api-key",
      model_id: "model-id",
    },
  },
});
response = client.inference.put(
  task_type: "completion",
  inference_id: "google_ai_studio_completion",
  body: {
    "service": "googleaistudio",
    "service_settings": {
      "api_key": "api-key",
      "model_id": "model-id"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "completion",
    "inference_id" => "google_ai_studio_completion",
    "body" => [
        "service" => "googleaistudio",
        "service_settings" => [
            "api_key" => "api-key",
            "model_id" => "model-id",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"googleaistudio","service_settings":{"api_key":"api-key","model_id":"model-id"}}' "$ELASTICSEARCH_URL/_inference/completion/google_ai_studio_completion"
client.inference().put(p -> p
    .inferenceId("google_ai_studio_completion")
    .taskType(TaskType.Completion)
    .inferenceConfig(i -> i
        .service("googleaistudio")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"api-key\",\"model_id\":\"model-id\"}"))
    )
);
Request example
Run `PUT _inference/completion/google_ai_studio_completion` to create an inference endpoint to perform a `completion` task type.
{
    "service": "googleaistudio",
    "service_settings": {
        "api_key": "api-key",
        "model_id": "model-id"
    }
}

Create a Google Vertex AI inference endpoint Generally available; Added in 8.15.0

PUT /_inference/{task_type}/{googlevertexai_inference_id}

Create an inference endpoint to perform an inference task with the googlevertexai service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are rerank, text_embedding, completion, or chat_completion.

  • googlevertexai_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, googlevertexai.

    Value is googlevertexai.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the googlevertexai service.

    Hide service_settings attributes Show service_settings attributes object
    • location string Required

      The name of the location to use for the inference task. Refer to the Google documentation for the list of supported locations.

      External documentation
    • model_id string Required

      The name of the model to use for the inference task. Refer to the Google documentation for the list of supported models.

      External documentation
    • project_id string Required

      The name of the project to use for the inference task.

    • rate_limit object

This setting helps to minimize the number of rate limit errors returned from Google Vertex AI. By default, the googlevertexai service sets the number of requests allowed per minute to 30,000.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

    • service_account_json string Required

      A valid service account in JSON format for the Google Vertex AI API.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attributes Show task_settings attributes object
    • auto_truncate boolean

      For a text_embedding task, truncate inputs longer than the maximum token length automatically.

    • top_n number

      For a rerank task, the number of the top N documents that should be returned.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are text_embedding or rerank.

PUT /_inference/{task_type}/{googlevertexai_inference_id}
PUT _inference/text_embedding/google_vertex_ai_embeddingss
{
    "service": "googlevertexai",
    "service_settings": {
        "service_account_json": "service-account-json",
        "model_id": "model-id",
        "location": "location",
        "project_id": "project-id"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="google_vertex_ai_embeddingss",
    inference_config={
        "service": "googlevertexai",
        "service_settings": {
            "service_account_json": "service-account-json",
            "model_id": "model-id",
            "location": "location",
            "project_id": "project-id"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "google_vertex_ai_embeddingss",
  inference_config: {
    service: "googlevertexai",
    service_settings: {
      service_account_json: "service-account-json",
      model_id: "model-id",
      location: "location",
      project_id: "project-id",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "google_vertex_ai_embeddingss",
  body: {
    "service": "googlevertexai",
    "service_settings": {
      "service_account_json": "service-account-json",
      "model_id": "model-id",
      "location": "location",
      "project_id": "project-id"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "google_vertex_ai_embeddingss",
    "body" => [
        "service" => "googlevertexai",
        "service_settings" => [
            "service_account_json" => "service-account-json",
            "model_id" => "model-id",
            "location" => "location",
            "project_id" => "project-id",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"googlevertexai","service_settings":{"service_account_json":"service-account-json","model_id":"model-id","location":"location","project_id":"project-id"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/google_vertex_ai_embeddingss"
client.inference().put(p -> p
    .inferenceId("google_vertex_ai_embeddingss")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("googlevertexai")
        .serviceSettings(JsonData.fromJson("{\"service_account_json\":\"service-account-json\",\"model_id\":\"model-id\",\"location\":\"location\",\"project_id\":\"project-id\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/google_vertex_ai_embeddings` to create an inference endpoint to perform a `text_embedding` task type.
{
    "service": "googlevertexai",
    "service_settings": {
        "service_account_json": "service-account-json",
        "model_id": "model-id",
        "location": "location",
        "project_id": "project-id"
    }
}
Run `PUT _inference/rerank/google_vertex_ai_rerank` to create an inference endpoint to perform a `rerank` task type.
{
    "service": "googlevertexai",
    "service_settings": {
        "service_account_json": "service-account-json",
        "project_id": "project-id"
    }
}
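
The rerank example above can also carry the optional task settings documented earlier. A sketch with the Python client, keeping the same placeholder service account and project values and adding an illustrative `top_n`; the connection details are assumptions.

from elasticsearch import Elasticsearch

# Placeholder connection details; substitute your own cluster URL and API key.
client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

# Limit the reranker to the top 5 documents per request.
resp = client.inference.put(
    task_type="rerank",
    inference_id="google_vertex_ai_rerank",
    inference_config={
        "service": "googlevertexai",
        "service_settings": {
            "service_account_json": "service-account-json",
            "project_id": "project-id",
        },
        "task_settings": {"top_n": 5},
    },
)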

Create a Hugging Face inference endpoint Generally available; Added in 8.12.0

PUT /_inference/{task_type}/{huggingface_inference_id}

Create an inference endpoint to perform an inference task with the hugging_face service. Supported tasks include: text_embedding, completion, chat_completion, and rerank.

To configure the endpoint, first visit the Hugging Face Inference Endpoints page and create a new endpoint. Select a model that supports the task you intend to use.

For Elastic's text_embedding task: The selected model must support the Sentence Embeddings task. On the new endpoint creation page, select the Sentence Embeddings task under the Advanced Configuration section. After the endpoint has initialized, copy the generated endpoint URL. Recommended models for the text_embedding task:

  • all-MiniLM-L6-v2
  • all-MiniLM-L12-v2
  • all-mpnet-base-v2
  • e5-base-v2
  • e5-small-v2
  • multilingual-e5-base
  • multilingual-e5-small

For Elastic's chat_completion and completion tasks: The selected model must support the Text Generation task and expose the OpenAI API. Hugging Face supports both serverless and dedicated endpoints for Text Generation. When creating a dedicated endpoint, select the Text Generation task. After the endpoint is initialized (for dedicated) or ready (for serverless), ensure that it supports the OpenAI API and that its URL includes the /v1/chat/completions path. Then copy the full endpoint URL for use; a configuration sketch follows the model list below. Recommended models for chat_completion and completion tasks:

  • Mistral-7B-Instruct-v0.2
  • QwQ-32B
  • Phi-3-mini-128k-instruct
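
As referenced above, here is a sketch of the resulting chat_completion endpoint configuration with the Python client; the connection details, the endpoint name, the URL, and the model identifier are placeholders for the values copied from your own Hugging Face endpoint.

from elasticsearch import Elasticsearch

# Placeholder connection details; substitute your own cluster URL and API key.
client = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

# The url must point at the deployed model's OpenAI-compatible /v1/chat/completions path.
resp = client.inference.put(
    task_type="chat_completion",
    inference_id="hugging-face-chat",
    inference_config={
        "service": "hugging_face",
        "service_settings": {
            "api_key": "hugging-face-access-token",
            "url": "https://<your-hugging-face-endpoint>/v1/chat/completions",
            "model_id": "mistralai/Mistral-7B-Instruct-v0.2",
        },
    },
)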

For Elastic's rerank task: The selected model must support the sentence-ranking task and expose the OpenAI API. Hugging Face currently supports only dedicated (not serverless) endpoints for rerank. After the endpoint is initialized, copy the full endpoint URL for use. Tested models for the rerank task:

  • bge-reranker-base
  • jina-reranker-v1-turbo-en-GGUF

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are chat_completion, completion, rerank, or text_embedding.

  • huggingface_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, hugging_face.

    Value is hugging_face.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the hugging_face service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid access token for your HuggingFace account. You can create or find your access tokens on the HuggingFace settings page.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Hugging Face. By default, the hugging_face service sets the number of requests allowed per minute to 3000 for all supported tasks. Hugging Face does not publish a universal rate limit — actual limits may vary. It is recommended to adjust this value based on the capacity and limits of your specific deployment environment.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

    • url string Required

The URL endpoint to use for the requests. For completion and chat_completion tasks, the deployed model must be compatible with the Hugging Face Chat Completion interface (see the linked external documentation for details). The endpoint URL for the request must include /v1/chat/completions. If the model supports the OpenAI Chat Completion schema, a toggle should appear in the interface. Enabling this toggle doesn't change any model behavior; it only reveals the full endpoint URL (which should include /v1/chat/completions) that you need when configuring the inference endpoint in Elasticsearch. If the model doesn't support this schema, the toggle may not be shown.

      External documentation
    • model_id string

      The name of the HuggingFace model to use for the inference task. For completion and chat_completion tasks, this field is optional but may be required for certain models — particularly when using serverless inference endpoints. For the text_embedding task, this field should not be included. Otherwise, the request will fail.

  • task_settings object

    Settings to configure the inference task. These settings are specific to the task type you specified.

    Hide task_settings attributes Show task_settings attributes object
    • return_documents boolean

      For a rerank task, return doc text within the results.

    • top_n number

For a rerank task, the number of most relevant documents to return. It defaults to the number of documents.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Value is text_embedding.

PUT /_inference/{task_type}/{huggingface_inference_id}
PUT _inference/text_embedding/hugging-face-embeddings
{
    "service": "hugging_face",
    "service_settings": {
        "api_key": "hugging-face-access-token", 
        "url": "url-endpoint" 
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="hugging-face-embeddings",
    inference_config={
        "service": "hugging_face",
        "service_settings": {
            "api_key": "hugging-face-access-token",
            "url": "url-endpoint"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "hugging-face-embeddings",
  inference_config: {
    service: "hugging_face",
    service_settings: {
      api_key: "hugging-face-access-token",
      url: "url-endpoint",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "hugging-face-embeddings",
  body: {
    "service": "hugging_face",
    "service_settings": {
      "api_key": "hugging-face-access-token",
      "url": "url-endpoint"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "hugging-face-embeddings",
    "body" => [
        "service" => "hugging_face",
        "service_settings" => [
            "api_key" => "hugging-face-access-token",
            "url" => "url-endpoint",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"hugging_face","service_settings":{"api_key":"hugging-face-access-token","url":"url-endpoint"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/hugging-face-embeddings"
client.inference().put(p -> p
    .inferenceId("hugging-face-embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("hugging_face")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"hugging-face-access-token\",\"url\":\"url-endpoint\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/hugging-face-embeddings` to create an inference endpoint that performs a `text_embedding` task type.
{
    "service": "hugging_face",
    "service_settings": {
        "api_key": "hugging-face-access-token", 
        "url": "url-endpoint" 
    }
}
Run `PUT _inference/rerank/hugging-face-rerank` to create an inference endpoint that performs a `rerank` task type.
{
    "service": "hugging_face",
    "service_settings": {
        "api_key": "hugging-face-access-token", 
        "url": "url-endpoint" 
    },
    "task_settings": {
        "return_documents": true,
        "top_n": 3
    }
}




Create a Mistral inference endpoint Generally available; Added in 8.15.0

PUT /_inference/{task_type}/{mistral_inference_id}

Create an inference endpoint to perform an inference task with the mistral service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are text_embedding, completion, or chat_completion.

  • mistral_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • chunking_settings object

    The chunking configuration object.

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The type of service supported for the specified task type. In this case, mistral.

    Value is mistral.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the mistral service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

A valid API key of your Mistral account. You can find your Mistral API keys, or create a new one, on the API Keys page.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • max_input_tokens number

      The maximum number of tokens per input before chunking occurs.

    • model string Required

      The name of the model to use for the inference task. Refer to the Mistral models documentation for the list of available models.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the Mistral API. By default, the mistral service sets the number of requests allowed per minute to 240.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Value is text_embedding.

PUT /_inference/{task_type}/{mistral_inference_id}
PUT _inference/text_embedding/mistral-embeddings-test
{
  "service": "mistral",
  "service_settings": {
    "api_key": "Mistral-API-Key",
    "model": "mistral-embed" 
  }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="mistral-embeddings-test",
    inference_config={
        "service": "mistral",
        "service_settings": {
            "api_key": "Mistral-API-Key",
            "model": "mistral-embed"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "mistral-embeddings-test",
  inference_config: {
    service: "mistral",
    service_settings: {
      api_key: "Mistral-API-Key",
      model: "mistral-embed",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "mistral-embeddings-test",
  body: {
    "service": "mistral",
    "service_settings": {
      "api_key": "Mistral-API-Key",
      "model": "mistral-embed"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "mistral-embeddings-test",
    "body" => [
        "service" => "mistral",
        "service_settings" => [
            "api_key" => "Mistral-API-Key",
            "model" => "mistral-embed",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"mistral","service_settings":{"api_key":"Mistral-API-Key","model":"mistral-embed"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/mistral-embeddings-test"
client.inference().put(p -> p
    .inferenceId("mistral-embeddings-test")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("mistral")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Mistral-API-Key\",\"model\":\"mistral-embed\"}"))
    )
);
Request example
Run `PUT _inference/text_embedding/mistral-embeddings-test` to create a Mistral inference endpoint that performs a text embedding task.
{
  "service": "mistral",
  "service_settings": {
    "api_key": "Mistral-API-Key",
    "model": "mistral-embed" 
  }
}








Create a Watsonx inference endpoint Generally available; Added in 8.16.0

PUT /_inference/{task_type}/{watsonx_inference_id}

Create an inference endpoint to perform an inference task with the watsonxai service. You need an IBM Cloud Databases for Elasticsearch deployment to use the watsonxai inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The task type. The only valid task type for the model to perform is text_embedding.

    Value is text_embedding.

  • watsonx_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

application/json

Body

  • service string Required

    The type of service supported for the specified task type. In this case, watsonxai.

    Value is watsonxai.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the watsonxai service.

    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

A valid API key of your Watsonx account. You can find your Watsonx API keys, or create a new one, on the API keys page.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • api_version string Required

A version parameter that takes a version date in the format YYYY-MM-DD. For the active version date parameters, refer to the Watsonx documentation.

      External documentation
    • model_id string Required

      The name of the model to use for the inference task. Refer to the IBM Embedding Models section in the Watsonx documentation for the list of available text embedding models.

      External documentation
    • project_id string Required

      The identifier of the IBM Cloud project to use for the inference task.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from Watsonx. By default, the watsonxai service sets the number of requests allowed per minute to 120.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute.

    • url string Required

      The URL of the inference endpoint that you created on Watsonx.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Value is text_embedding.

PUT /_inference/{task_type}/{watsonx_inference_id}
PUT _inference/text_embedding/watsonx-embeddings
{
  "service": "watsonxai",
  "service_settings": {
      "api_key": "Watsonx-API-Key", 
      "url": "Wastonx-URL", 
      "model_id": "ibm/slate-30m-english-rtrvr",
      "project_id": "IBM-Cloud-ID", 
      "api_version": "2024-03-14"
  }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="watsonx-embeddings",
    inference_config={
        "service": "watsonxai",
        "service_settings": {
            "api_key": "Watsonx-API-Key",
            "url": "Wastonx-URL",
            "model_id": "ibm/slate-30m-english-rtrvr",
            "project_id": "IBM-Cloud-ID",
            "api_version": "2024-03-14"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "watsonx-embeddings",
  inference_config: {
    service: "watsonxai",
    service_settings: {
      api_key: "Watsonx-API-Key",
      url: "Wastonx-URL",
      model_id: "ibm/slate-30m-english-rtrvr",
      project_id: "IBM-Cloud-ID",
      api_version: "2024-03-14",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "watsonx-embeddings",
  body: {
    "service": "watsonxai",
    "service_settings": {
      "api_key": "Watsonx-API-Key",
      "url": "Wastonx-URL",
      "model_id": "ibm/slate-30m-english-rtrvr",
      "project_id": "IBM-Cloud-ID",
      "api_version": "2024-03-14"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "watsonx-embeddings",
    "body" => [
        "service" => "watsonxai",
        "service_settings" => [
            "api_key" => "Watsonx-API-Key",
            "url" => "Wastonx-URL",
            "model_id" => "ibm/slate-30m-english-rtrvr",
            "project_id" => "IBM-Cloud-ID",
            "api_version" => "2024-03-14",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"watsonxai","service_settings":{"api_key":"Watsonx-API-Key","url":"Wastonx-URL","model_id":"ibm/slate-30m-english-rtrvr","project_id":"IBM-Cloud-ID","api_version":"2024-03-14"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/watsonx-embeddings"
client.inference().put(p -> p
    .inferenceId("watsonx-embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("watsonxai")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Watsonx-API-Key\",\"url\":\"Watsonx-URL\",\"model_id\":\"ibm/slate-30m-english-rtrvr\",\"project_id\":\"IBM-Cloud-ID\",\"api_version\":\"2024-03-14\"}"))
    )
);
Request example
Run `PUT _inference/text_embedding/watsonx-embeddings` to create a Watsonx inference endpoint that performs a text embedding task.
{
  "service": "watsonxai",
  "service_settings": {
      "api_key": "Watsonx-API-Key", 
      "url": "Wastonx-URL", 
      "model_id": "ibm/slate-30m-english-rtrvr",
      "project_id": "IBM-Cloud-ID", 
      "api_version": "2024-03-14"
  }
}
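
The optional `rate_limit` setting described in the request body can be sent alongside the other service settings. The following is a minimal sketch (not taken from the official examples) that adds a custom `requests_per_minute` value to the same request; the credential values remain placeholders.
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="watsonx-embeddings",
    inference_config={
        "service": "watsonxai",
        "service_settings": {
            "api_key": "Watsonx-API-Key",
            "url": "Watsonx-URL",
            "model_id": "ibm/slate-30m-english-rtrvr",
            "project_id": "IBM-Cloud-ID",
            "api_version": "2024-03-14",
            # Optional: cap requests per minute to Watsonx (the default is 120)
            "rate_limit": {
                "requests_per_minute": 100
            }
        }
    },
)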

Perform reranking inference on the service Generally available; Added in 8.11.0

POST /_inference/rerank/{inference_id}

Required authorization

  • Cluster privileges: monitor_inference

Path parameters

  • inference_id string Required

    The unique identifier for the inference endpoint.

Query parameters

application/json

Body

  • query string Required

    Query input.

  • input string | array[string] Required

    The text on which you want to perform the inference task. It can be a single string or an array.


    Inference endpoints for the completion task type currently only support a single string as input.

  • task_settings object

    Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • rerank array[object] Required

      The rerank result object representing a single ranked document, where index is the original index of the document in the request, relevance_score is the relevance score of the document relative to the query, and text (optional) is the text of the document, if requested.

      Hide rerank attributes Show rerank attributes object
      • index number Required
      • relevance_score number Required
      • text string
POST /_inference/rerank/{inference_id}
POST _inference/rerank/cohere_rerank
{
  "input": ["luke", "like", "leia", "chewy","r2d2", "star", "wars"],
  "query": "star wars main character"
}
resp = client.inference.rerank(
    inference_id="cohere_rerank",
    input=[
        "luke",
        "like",
        "leia",
        "chewy",
        "r2d2",
        "star",
        "wars"
    ],
    query="star wars main character",
)
const response = await client.inference.rerank({
  inference_id: "cohere_rerank",
  input: ["luke", "like", "leia", "chewy", "r2d2", "star", "wars"],
  query: "star wars main character",
});
response = client.inference.rerank(
  inference_id: "cohere_rerank",
  body: {
    "input": [
      "luke",
      "like",
      "leia",
      "chewy",
      "r2d2",
      "star",
      "wars"
    ],
    "query": "star wars main character"
  }
)
$resp = $client->inference()->rerank([
    "inference_id" => "cohere_rerank",
    "body" => [
        "input" => array(
            "luke",
            "like",
            "leia",
            "chewy",
            "r2d2",
            "star",
            "wars",
        ),
        "query" => "star wars main character",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"input":["luke","like","leia","chewy","r2d2","star","wars"],"query":"star wars main character"}' "$ELASTICSEARCH_URL/_inference/rerank/cohere_rerank"
client.inference().rerank(r -> r
    .inferenceId("cohere_rerank")
    .input(List.of("luke","like","leia","chewy","r2d2","star","wars"))
    .query("star wars main character")
);
Request examples
Run `POST _inference/rerank/cohere_rerank` to perform reranking on the example input.
{
  "input": ["luke", "like", "leia", "chewy","r2d2", "star", "wars"],
  "query": "star wars main character"
}
Run `POST _inference/rerank/bge-reranker-base-mkn` to perform reranking on the example input via Hugging Face, returning only relevance scores for the top two results.
{
  "input": ["luke", "like", "leia", "chewy","r2d2", "star", "wars"],
  "query": "star wars main character",
  "return_documents": false,
  "top_n": 2
}
Run `POST _inference/rerank/bge-reranker-base-mkn` to perform reranking on the example input via Hugging Face, returning the document text along with scores for the top three results.
{
  "input": ["luke", "like", "leia", "chewy","r2d2", "star", "wars"],
  "query": "star wars main character",
  "return_documents": true,
  "top_n": 3
}
Response examples (200)
A successful response from `POST _inference/rerank/cohere_rerank`.
{
  "rerank": [
    {
      "index": 2,
      "relevance_score": 0.011597361,
      "text": "leia"
    },
    {
      "index": 0,
      "relevance_score": 0.006338922,
      "text": "luke"
    },
    {
      "index": 5,
      "relevance_score": 0.0016166499,
      "text": "star"
    },
    {
      "index": 4,
      "relevance_score": 0.0011695103,
      "text": "r2d2"
    },
    {
      "index": 1,
      "relevance_score": 5.614787E-4,
      "text": "like"
    },
    {
      "index": 6,
      "relevance_score": 3.7850367E-4,
      "text": "wars"
    },
    {
      "index": 3,
      "relevance_score": 1.2508839E-5,
      "text": "chewy"
    }
  ]
}
A successful response from `POST _inference/rerank/bge-reranker-base-mkn`.
{
  "rerank": [
    {
      "index": 6,
      "relevance_score": 0.50955844
    },
    {
      "index": 5,
      "relevance_score": 0.084341794
    }
  ]
}
A successful response from `POST _inference/rerank/bge-reranker-base-mkn`.
{
  "rerank": [
    {
      "index": 6,
      "relevance_score": 0.50955844,
      "text": "wars"
    },
    {
      "index": 5,
      "relevance_score": 0.084341794,
      "text": "star"
    },
    {
      "index": 3,
      "relevance_score": 0.004520818,
      "text": "chewy"
    }
  ]
}












Update an inference endpoint Generally available; Added in 8.17.0

PUT /_inference/{task_type}/{inference_id}/_update

All methods and paths for this operation:

PUT /_inference/{inference_id}/_update

PUT /_inference/{task_type}/{inference_id}/_update

Modify task_settings, secrets (within service_settings), or num_allocations for an inference endpoint, depending on the specific endpoint service and task_type.

IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string Required

    The type of inference task that the model performs.

    Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

  • inference_id string Required

    The unique identifier of the inference endpoint.

application/json

Body Required

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    The service type

  • service_settings object Required

    Settings specific to the service

  • task_settings object

    Task settings specific to the service and task type

Responses

  • 200 application/json
    Hide response attributes Show response attributes object

    Represents an inference endpoint as returned by the GET API

    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference Id

    • task_type string Required

      The task type

      Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

PUT /_inference/{task_type}/{inference_id}/_update
PUT _inference/my-inference-endpoint/_update
{
 "service_settings": {
   "api_key": "<API_KEY>"
 }
}
resp = client.inference.update(
    inference_id="my-inference-endpoint",
    inference_config={
        "service_settings": {
            "api_key": "<API_KEY>"
        }
    },
)
const response = await client.inference.update({
  inference_id: "my-inference-endpoint",
  inference_config: {
    service_settings: {
      api_key: "<API_KEY>",
    },
  },
});
response = client.inference.update(
  inference_id: "my-inference-endpoint",
  body: {
    "service_settings": {
      "api_key": "<API_KEY>"
    }
  }
)
$resp = $client->inference()->update([
    "inference_id" => "my-inference-endpoint",
    "body" => [
        "service_settings" => [
            "api_key" => "<API_KEY>",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service_settings":{"api_key":"<API_KEY>"}}' "$ELASTICSEARCH_URL/_inference/my-inference-endpoint/_update"
Request example
An example body for a `PUT _inference/my-inference-endpoint/_update` request.
{
 "service_settings": {
   "api_key": "<API_KEY>"
 }
}
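
Because the updatable fields depend on the service and task type, the body varies per endpoint. As a hypothetical sketch only (the endpoint name `my-elser-endpoint` and the assumption that its service accepts `num_allocations` inside `service_settings` are illustrative, not taken from the official examples), an allocation update could look like:
resp = client.inference.update(
    inference_id="my-elser-endpoint",
    inference_config={
        "service_settings": {
            # hypothetical: only valid if the backing service exposes this setting
            "num_allocations": 4
        }
    },
)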

Ingest

Ingest APIs enable you to manage tasks and resources related to ingest pipelines and processors.

Get GeoIP database configurations Generally available; Added in 8.15.0

GET /_ingest/geoip/database/{id}

All methods and paths for this operation:

GET /_ingest/geoip/database

GET /_ingest/geoip/database/{id}

Get information about one or more IP geolocation database configurations.

Path parameters

  • id string | array[string] Required

    A comma-separated list of database configuration IDs to retrieve. Wildcard (*) expressions are supported. To get all database configurations, omit this parameter or use *.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • databases array[object] Required
      Hide databases attributes Show databases attributes object
      • id string Required
      • version number Required
      • modified_date_millis number

        The time when the database configuration was last modified, in milliseconds since the Unix epoch.

      • database object

        The configuration necessary to identify which IP geolocation provider to use to download a database, as well as any provider-specific configuration necessary for such downloading. At present, the only supported providers are maxmind and ipinfo, and the maxmind provider requires that an account_id (string) is configured. A provider (either maxmind or ipinfo) must be specified. The web and local providers can be returned as read only configurations.

GET /_ingest/geoip/database/{id}
curl \
 --request GET 'http://api.example.com/_ingest/geoip/database/{id}' \
 --header "Authorization: $API_KEY"

Create or update a GeoIP database configuration Generally available; Added in 8.15.0

PUT /_ingest/geoip/database/{id}

Refer to the create or update IP geolocation database configuration API.

Path parameters

  • id string Required

    ID of the database configuration to create or update.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
application/json

Body Required

  • name string Required

    The provider-assigned name of the IP geolocation database to download.

  • maxmind object Required

    The configuration necessary to identify which IP geolocation provider to use to download the database, as well as any provider-specific configuration necessary for such downloading. At present, the only supported provider is maxmind, and the maxmind provider requires that an account_id (string) is configured.

    Hide maxmind attribute Show maxmind attribute object
    • account_id string Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_ingest/geoip/database/{id}
curl \
 --request PUT 'http://api.example.com/_ingest/geoip/database/{id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"name":"string","maxmind":{"account_id":"string"}}'

Delete GeoIP database configurations Generally available; Added in 8.15.0

DELETE /_ingest/geoip/database/{id}

Delete one or more IP geolocation database configurations.

Path parameters

  • id string | array[string] Required

    A comma-separated list of geoip database configurations to delete

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ingest/geoip/database/{id}
curl \
 --request DELETE 'http://api.example.com/_ingest/geoip/database/{id}' \
 --header "Authorization: $API_KEY"












Get pipelines Generally available; Added in 5.0.0

GET /_ingest/pipeline/{id}

All methods and paths for this operation:

GET /_ingest/pipeline

GET /_ingest/pipeline/{id}

Get information about one or more ingest pipelines. This API returns a local reference of the pipeline.

External documentation

Path parameters

  • id string Required

    Comma-separated list of pipeline IDs to retrieve. Wildcard (*) expressions are supported. To get all ingest pipelines, omit this parameter or use *.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • summary boolean

    Return pipelines without their definitions (default: false)

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • description string

        Description of the ingest pipeline.

      • on_failure array[object]

        Processors to run immediately after a processor failure.

        Hide on_failure attributes Show on_failure attributes object
        • append object
        • attachment object
        • bytes object
        • circle object
        • community_id object
        • convert object
        • csv object
        • date object
        • date_index_name object
        • dissect object
        • dot_expander object
        • drop object
        • enrich object
        • fail object
        • fingerprint object
        • foreach object
        • ip_location object
        • geo_grid object
        • geoip object
        • grok object
        • gsub object
        • html_strip object
        • inference object
        • join object
        • json object
        • kv object
        • lowercase object
        • network_direction object
        • pipeline object
        • redact object
        • registered_domain object
        • remove object
        • rename object
        • reroute object
        • script object
        • set object
        • set_security_user object
        • sort object
        • split object
        • terminate object
        • trim object
        • uppercase object
        • urldecode object
        • uri_parts object
        • user_agent object
      • processors array[object]

        Processors used to perform transformations on documents before indexing. Processors run sequentially in the order specified.

        Hide processors attributes Show processors attributes object
        • append object
        • attachment object
        • bytes object
        • circle object
        • community_id object
        • convert object
        • csv object
        • date object
        • date_index_name object
        • dissect object
        • dot_expander object
        • drop object
        • enrich object
        • fail object
        • fingerprint object
        • foreach object
        • ip_location object
        • geo_grid object
        • geoip object
        • grok object
        • gsub object
        • html_strip object
        • inference object
        • join object
        • json object
        • kv object
        • lowercase object
        • network_direction object
        • pipeline object
        • redact object
        • registered_domain object
        • remove object
        • rename object
        • reroute object
        • script object
        • set object
        • set_security_user object
        • sort object
        • split object
        • terminate object
        • trim object
        • uppercase object
        • urldecode object
        • uri_parts object
        • user_agent object
      • version number

        Version number used by external systems to track ingest pipelines.

      • deprecated boolean

        Marks this ingest pipeline as deprecated. When a deprecated ingest pipeline is referenced as the default or final pipeline when creating or updating a non-deprecated index template, Elasticsearch will emit a deprecation warning.

        Default value is false.

      • _meta object

        Arbitrary metadata about the ingest pipeline. This map is not automatically generated by Elasticsearch.

        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
GET /_ingest/pipeline/{id}
GET /_ingest/pipeline/my-pipeline-id
resp = client.ingest.get_pipeline(
    id="my-pipeline-id",
)
const response = await client.ingest.getPipeline({
  id: "my-pipeline-id",
});
response = client.ingest.get_pipeline(
  id: "my-pipeline-id"
)
$resp = $client->ingest()->getPipeline([
    "id" => "my-pipeline-id",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ingest/pipeline/my-pipeline-id"
client.ingest().getPipeline(g -> g
    .id("my-pipeline-id")
);
Response examples (200)
A successful response for retrieving information about an ingest pipeline.
{
  "my-pipeline-id" : {
    "description" : "describe pipeline",
    "version" : 123,
    "processors" : [
      {
        "set" : {
          "field" : "foo",
          "value" : "bar"
        }
      }
    ]
  }
}
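
To list pipelines without their definitions, set the `summary` query parameter. A minimal sketch, assuming the Python client exposes `summary` as a keyword argument on `ingest.get_pipeline`:
resp = client.ingest.get_pipeline(
    id="my-pipeline-id",
    summary=True,  # omit pipeline definitions from the response
)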




Delete pipelines Generally available; Added in 5.0.0

DELETE /_ingest/pipeline/{id}

Delete one or more ingest pipelines.

External documentation

Path parameters

  • id string Required

    Pipeline ID or wildcard expression of pipeline IDs used to limit the request. To delete all ingest pipelines in a cluster, use a value of *.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ingest/pipeline/{id}
DELETE /_ingest/pipeline/my-pipeline-id
resp = client.ingest.delete_pipeline(
    id="my-pipeline-id",
)
const response = await client.ingest.deletePipeline({
  id: "my-pipeline-id",
});
response = client.ingest.delete_pipeline(
  id: "my-pipeline-id"
)
$resp = $client->ingest()->deletePipeline([
    "id" => "my-pipeline-id",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ingest/pipeline/my-pipeline-id"
client.ingest().deletePipeline(d -> d
    .id("my-pipeline-id")
);




Run a grok processor Generally available; Added in 6.1.0

GET /_ingest/processor/grok

Extract structured fields out of a single text field within a document. You must choose which field to extract matched fields from, as well as the grok pattern you expect will match. A grok pattern is like a regular expression that supports aliased expressions that can be reused.

External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • patterns object Required
      Hide patterns attribute Show patterns attribute object
      • * string Additional properties
GET /_ingest/processor/grok
GET _ingest/processor/grok
resp = client.ingest.processor_grok()
const response = await client.ingest.processorGrok();
response = client.ingest.processor_grok
$resp = $client->ingest()->processorGrok();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ingest/processor/grok"
client.ingest().processorGrok();

Simulate a pipeline Generally available; Added in 5.0.0

POST /_ingest/pipeline/{id}/_simulate

All methods and paths for this operation:

GET /_ingest/pipeline/_simulate

POST /_ingest/pipeline/_simulate
GET /_ingest/pipeline/{id}/_simulate
POST /_ingest/pipeline/{id}/_simulate

Run an ingest pipeline against a set of provided documents. You can either specify an existing pipeline to use with the provided documents or supply a pipeline definition in the body of the request.

Required authorization

  • Cluster privileges: read_pipeline

Path parameters

  • id string Required

    The pipeline to test. If you don't specify a pipeline in the request body, this parameter is required.

Query parameters

  • verbose boolean

    If true, the response includes output data for each processor in the executed pipeline.

application/json

Body Required

  • docs array[object] Required

    Sample documents to test in the pipeline.

    Hide docs attributes Show docs attributes object
    • _id string

      Unique identifier for the document. This ID must be unique within the _index.

    • _index string

      Name of the index containing the document.

    • _source object Required

      JSON body for the document.

  • pipeline object Additional properties

    The pipeline to test. If you don't specify the pipeline request path parameter, this parameter is required. If you specify both this and the request path parameter, the API only uses the request path parameter.

    Hide pipeline attributes Show pipeline attributes object
    • description string

      Description of the ingest pipeline.

    • on_failure array[object]

      Processors to run immediately after a processor failure.

      Hide on_failure attributes Show on_failure attributes object
      • append object
      • attachment object
      • bytes object
      • circle object
      • community_id object
      • convert object
      • csv object
      • date object
      • date_index_name object
      • dissect object
      • dot_expander object
      • drop object
      • enrich object
      • fail object
      • fingerprint object
      • foreach object
      • ip_location object
      • geo_grid object
      • geoip object
      • grok object
      • gsub object
      • html_strip object
      • inference object
      • join object
      • json object
      • kv object
      • lowercase object
      • network_direction object
      • pipeline object
      • redact object
      • registered_domain object
      • remove object
      • rename object
      • reroute object
      • script object
      • set object
      • set_security_user object
      • sort object
      • split object
      • terminate object
      • trim object
      • uppercase object
      • urldecode object
      • uri_parts object
      • user_agent object
    • processors array[object]

      Processors used to perform transformations on documents before indexing. Processors run sequentially in the order specified.

      Hide processors attributes Show processors attributes object
      • append object
      • attachment object
      • bytes object
      • circle object
      • community_id object
      • convert object
      • csv object
      • date object
      • date_index_name object
      • dissect object
      • dot_expander object
      • drop object
      • enrich object
      • fail object
      • fingerprint object
      • foreach object
      • ip_location object
      • geo_grid object
      • geoip object
      • grok object
      • gsub object
      • html_strip object
      • inference object
      • join object
      • json object
      • kv object
      • lowercase object
      • network_direction object
      • pipeline object
      • redact object
      • registered_domain object
      • remove object
      • rename object
      • reroute object
      • script object
      • set object
      • set_security_user object
      • sort object
      • split object
      • terminate object
      • trim object
      • uppercase object
      • urldecode object
      • uri_parts object
      • user_agent object
    • version number

      Version number used by external systems to track ingest pipelines.

    • deprecated boolean

      Marks this ingest pipeline as deprecated. When a deprecated ingest pipeline is referenced as the default or final pipeline when creating or updating a non-deprecated index template, Elasticsearch will emit a deprecation warning.

      Default value is false.

    • _meta object

      Arbitrary metadata about the ingest pipeline. This map is not automatically generated by Elasticsearch.

      Hide _meta attribute Show _meta attribute object
      • * object Additional properties

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • docs array[object] Required
      Hide docs attributes Show docs attributes object
      • doc object

        The simulated document, with optional metadata.

        Hide doc attributes Show doc attributes object
        • _id string Required

          Unique identifier for the document. This ID must be unique within the _index.

        • _index string Required

          Name of the index containing the document.

        • _ingest object Required
        • _routing string

          Value used to send the document to a specific primary shard.

        • _source object Required

          JSON body for the document.

          Hide _source attribute Show _source attribute object
          • * object Additional properties
        • _version
        • _version_type string

          Supported values include:

          • internal: Use internal versioning that starts at 1 and increments with each update or delete.
          • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
          • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
          • force: This option is deprecated because it can cause primary and replica shards to diverge.

          Values are internal, external, external_gte, or force.

      • error object

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide error attributes Show error attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • processor_results array[object]
        Hide processor_results attributes Show processor_results attributes object
        • doc object

          The simulated document, with optional metadata.

        • tag string
        • processor_type string
        • status string

          Values are success, error, error_ignored, skipped, or dropped.

        • description string
        • ignored_error object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • error object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

POST /_ingest/pipeline/{id}/_simulate
POST /_ingest/pipeline/_simulate
{
  "pipeline" :
  {
    "description": "_description",
    "processors": [
      {
        "set" : {
          "field" : "field2",
          "value" : "_value"
        }
      }
    ]
  },
  "docs": [
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "foo": "rab"
      }
    }
  ]
}
resp = client.ingest.simulate(
    pipeline={
        "description": "_description",
        "processors": [
            {
                "set": {
                    "field": "field2",
                    "value": "_value"
                }
            }
        ]
    },
    docs=[
        {
            "_index": "index",
            "_id": "id",
            "_source": {
                "foo": "bar"
            }
        },
        {
            "_index": "index",
            "_id": "id",
            "_source": {
                "foo": "rab"
            }
        }
    ],
)
const response = await client.ingest.simulate({
  pipeline: {
    description: "_description",
    processors: [
      {
        set: {
          field: "field2",
          value: "_value",
        },
      },
    ],
  },
  docs: [
    {
      _index: "index",
      _id: "id",
      _source: {
        foo: "bar",
      },
    },
    {
      _index: "index",
      _id: "id",
      _source: {
        foo: "rab",
      },
    },
  ],
});
response = client.ingest.simulate(
  body: {
    "pipeline": {
      "description": "_description",
      "processors": [
        {
          "set": {
            "field": "field2",
            "value": "_value"
          }
        }
      ]
    },
    "docs": [
      {
        "_index": "index",
        "_id": "id",
        "_source": {
          "foo": "bar"
        }
      },
      {
        "_index": "index",
        "_id": "id",
        "_source": {
          "foo": "rab"
        }
      }
    ]
  }
)
$resp = $client->ingest()->simulate([
    "body" => [
        "pipeline" => [
            "description" => "_description",
            "processors" => array(
                [
                    "set" => [
                        "field" => "field2",
                        "value" => "_value",
                    ],
                ],
            ),
        ],
        "docs" => array(
            [
                "_index" => "index",
                "_id" => "id",
                "_source" => [
                    "foo" => "bar",
                ],
            ],
            [
                "_index" => "index",
                "_id" => "id",
                "_source" => [
                    "foo" => "rab",
                ],
            ],
        ),
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"pipeline":{"description":"_description","processors":[{"set":{"field":"field2","value":"_value"}}]},"docs":[{"_index":"index","_id":"id","_source":{"foo":"bar"}},{"_index":"index","_id":"id","_source":{"foo":"rab"}}]}' "$ELASTICSEARCH_URL/_ingest/pipeline/_simulate"
client.ingest().simulate(s -> s
    .docs(List.of(Document.of(d -> d
            .id("id")
            .index("index")
            .source(JsonData.fromJson("{\"foo\":\"bar\"}"))),Document.of(d -> d
            .id("id")
            .index("index")
            .source(JsonData.fromJson("{\"foo\":\"rab\"}")))))
    .pipeline(p -> p
        .description("_description")
        .processors(pr -> pr
            .set(se -> se
                .field("field2")
                .value(JsonData.fromJson("\"_value\""))
            )
        )
    )
);
Request example
You can specify the used pipeline either in the request body or as a path parameter.
{
  "pipeline" :
  {
    "description": "_description",
    "processors": [
      {
        "set" : {
          "field" : "field2",
          "value" : "_value"
        }
      }
    ]
  },
  "docs": [
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_index": "index",
      "_id": "id",
      "_source": {
        "foo": "rab"
      }
    }
  ]
}
Response examples (200)
A successful response for running an ingest pipeline against a set of provided documents.
{
   "docs": [
      {
         "doc": {
            "_id": "id",
            "_index": "index",
            "_version": "-3",
            "_source": {
               "field2": "_value",
               "foo": "bar"
            },
            "_ingest": {
               "timestamp": "2017-05-04T22:30:03.187Z"
            }
         }
      },
      {
         "doc": {
            "_id": "id",
            "_index": "index",
            "_version": "-3",
            "_source": {
               "field2": "_value",
               "foo": "rab"
            },
            "_ingest": {
               "timestamp": "2017-05-04T22:30:03.188Z"
            }
         }
      }
   ]
}
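
To inspect the output of each processor individually, enable the `verbose` query parameter. A minimal sketch that reuses the pipeline definition from the example above, assuming the Python client exposes `verbose` as a keyword argument:
resp = client.ingest.simulate(
    verbose=True,  # include per-processor results in the response
    pipeline={
        "description": "_description",
        "processors": [
            {
                "set": {
                    "field": "field2",
                    "value": "_value"
                }
            }
        ]
    },
    docs=[
        {
            "_index": "index",
            "_id": "id",
            "_source": {
                "foo": "bar"
            }
        }
    ],
)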

Simulate data ingestion Technical preview; Added in 8.12.0

POST /_ingest/{index}/_simulate

All methods and paths for this operation:

GET /_ingest/_simulate

POST /_ingest/_simulate
GET /_ingest/{index}/_simulate
POST /_ingest/{index}/_simulate

Run ingest pipelines against a set of provided documents, optionally with substitute pipeline definitions, to simulate ingesting data into an index.

This API is meant to be used for troubleshooting or pipeline development, as it does not actually index any data into Elasticsearch.

The API runs the default and final pipeline for that index against a set of documents provided in the body of the request. If a pipeline contains a reroute processor, it follows that reroute processor to the new index, running that index's pipelines as well, just as a non-simulated ingest would. No data is indexed into Elasticsearch. Instead, the transformed document is returned, along with the list of pipelines that have been run and the name of the index where the document would have been indexed if this were not a simulation. The transformed document is validated against the mappings that would apply to this index, and any validation error is reported in the result.

This API differs from the simulate pipeline API in that you specify a single pipeline for that API, and it runs only that one pipeline. The simulate pipeline API is more useful for developing a single pipeline, while the simulate ingest API is more useful for troubleshooting the interaction of the various pipelines that get applied when ingesting into an index.

By default, the pipeline definitions that are currently in the system are used. However, you can supply substitute pipeline definitions in the body of the request. These will be used in place of the pipeline definitions that are already in the system. This can be used to replace existing pipeline definitions or to create new ones. The pipeline substitutions are used only within this request.

Required authorization

  • Index privileges: index

Path parameters

  • index string Required

    The index to simulate ingesting into. This value can be overridden by specifying an index on each document. If you specify this parameter in the request path, it is used for any documents that do not explicitly specify an index argument.

Query parameters

  • pipeline string

    The pipeline to use as the default pipeline. This value can be used to override the default pipeline of the index.

application/json

Body Required

  • docs array[object] Required

    Sample documents to test in the pipeline.

    Hide docs attributes Show docs attributes object
    • _id string

      Unique identifier for the document. This ID must be unique within the _index.

    • _index string

      Name of the index containing the document.

    • _source object Required

      JSON body for the document.

  • component_template_substitutions object

    A map of component template names to substitute component template definition objects.

    Hide component_template_substitutions attribute Show component_template_substitutions attribute object
    • * object
      Hide * attributes Show * attributes object
      • template object Required
        Hide template attributes Show template attributes object
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • version number
        • settings object
          Hide settings attribute Show settings attribute object
        • mappings object
          Hide mappings attributes Show mappings attributes object
          • date_detection boolean
          • dynamic_date_formats array[string]
          • dynamic_templates array[object]
          • numeric_detection boolean
          • properties object
          • runtime object
          • enabled boolean
        • aliases object
          Hide aliases attribute Show aliases attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • index_routing string

              Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

            • is_write_index boolean

              If true, the index is the write index for the alias.

              Default value is false.

            • routing string

              Value used to route indexing and search operations to a specific shard.

            • search_routing string

              Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

            • is_hidden boolean Generally available; Added in 7.16.0

              If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

              Default value is false.

        • lifecycle object

          Data stream lifecycle with rollover can be used to display the configuration including the default rollover conditions, if asked.

      • version number
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • deprecated boolean
  • index_template_substitutions object

    A map of index template names to substitute index template definition objects.

    Hide index_template_substitutions attribute Show index_template_substitutions attribute object
    • * object
      Hide * attributes Show * attributes object
      • index_patterns string | array[string] Required

        The index patterns that the template applies to.

      • composed_of array[string] Required

        An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence.

      • template object

        Template to be applied. It may optionally include an aliases, mappings, or settings configuration.

        Hide template attributes Show template attributes object
        • aliases object

          Aliases to add. If the index template includes a data_stream object, these are data stream aliases. Otherwise, these are index aliases. Data stream aliases ignore the index_routing, routing, and search_routing options.

          Hide aliases attribute Show aliases attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • is_hidden boolean

              If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

              Default value is false.

            • is_write_index boolean

              If true, the index is the write index for the alias.

              Default value is false.

        • mappings object

          Mapping for fields in the index. If specified, this mapping can include field names, field data types, and mapping parameters.

          Hide mappings attributes Show mappings attributes object
          • date_detection boolean
          • dynamic_date_formats array[string]
          • dynamic_templates array[object]
          • numeric_detection boolean
          • properties object
          • runtime object
          • enabled boolean
        • settings object

          Configuration options for the index.

          Index settings
        • lifecycle object

          Data stream lifecycle with rollover can be used to display the configuration including the default rollover conditions, if asked.

      • version number

        Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch.

      • priority number

        Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch.

      • _meta object

        Optional user metadata about the index template. May have any contents. This map is not automatically generated by Elasticsearch.

        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • allow_auto_create boolean
      • data_stream object

        If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with a data_stream object.

        Hide data_stream attributes Show data_stream attributes object
        • hidden boolean

          If true, the data stream is hidden.

          Default value is false.

        • allow_custom_routing boolean

          If true, the data stream supports custom routing.

          Default value is false.

      • deprecated boolean Generally available; Added in 8.12.0

        Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.

      • ignore_missing_component_templates string | array[string]

        A list of component template names that are allowed to be absent.

  • mapping_addition object
    Hide mapping_addition attributes Show mapping_addition attributes object
    • all_field object
      Hide all_field attributes Show all_field attributes object
      • analyzer string Required
      • enabled boolean Required
      • omit_norms boolean Required
      • search_analyzer string Required
      • similarity string Required
      • store boolean Required
      • store_term_vector_offsets boolean Required
      • store_term_vector_payloads boolean Required
      • store_term_vector_positions boolean Required
      • store_term_vectors boolean Required
    • date_detection boolean
    • dynamic string

      Values are strict, runtime, true, or false.

    • dynamic_date_formats array[string]
    • dynamic_templates array[object]
    • _field_names object
      Hide _field_names attribute Show _field_names attribute object
      • enabled boolean Required
    • index_field object
      Hide index_field attribute Show index_field attribute object
      • enabled boolean Required
    • _meta object
      Hide _meta attribute Show _meta attribute object
      • * object Additional properties
    • numeric_detection boolean
    • properties object
    • _routing object
      Hide _routing attribute Show _routing attribute object
      • required boolean Required
    • _size object
      Hide _size attribute Show _size attribute object
      • enabled boolean Required
    • _source object
      Hide _source attributes Show _source attributes object
      • compress boolean
      • compress_threshold string
      • enabled boolean
      • excludes array[string]
      • includes array[string]
      • mode string

        Supported values include:

        • disabled
        • stored
        • synthetic: Instead of storing source documents on disk exactly as you send them, Elasticsearch can reconstruct source content on the fly upon retrieval.

        Values are disabled, stored, or synthetic.

    • runtime object
      Hide runtime attribute Show runtime attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field
          • format string
        • format string

          A custom format for date type runtime fields.

        • input_field string

          For type lookup

        • target_field string

          For type lookup

        • target_index string

          For type lookup

        • script object

          Painless script executed at query time.

          Hide script attributes Show script attributes object
          • source string

            The script source.

          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          • options object
        • type string Required

          Field type, which can be: boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • enabled boolean
    • subobjects string

      Values are true or false.

    • _data_stream_timestamp object
      Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
      • enabled boolean Required
  • pipeline_substitutions object

    Pipelines to test. If you don’t specify the pipeline request path parameter, this parameter is required. If you specify both this and the request path parameter, the API only uses the request path parameter.

    Hide pipeline_substitutions attribute Show pipeline_substitutions attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • description string

        Description of the ingest pipeline.

      • on_failure array[object]

        Processors to run immediately after a processor failure.

        Hide on_failure attributes Show on_failure attributes object
        • append object
        • attachment object
        • bytes object
        • circle object
        • community_id object
        • convert object
        • csv object
        • date object
        • date_index_name object
        • dissect object
        • dot_expander object
        • drop object
        • enrich object
        • fail object
        • fingerprint object
        • foreach object
        • ip_location object
        • geo_grid object
        • geoip object
        • grok object
        • gsub object
        • html_strip object
        • inference object
        • join object
        • json object
        • kv object
        • lowercase object
        • network_direction object
        • pipeline object
        • redact object
        • registered_domain object
        • remove object
        • rename object
        • reroute object
        • script object
        • set object
        • set_security_user object
        • sort object
        • split object
        • terminate object
        • trim object
        • uppercase object
        • urldecode object
        • uri_parts object
        • user_agent object
      • processors array[object]

        Processors used to perform transformations on documents before indexing. Processors run sequentially in the order specified.

        Hide processors attributes Show processors attributes object
        • append object
        • attachment object
        • bytes object
        • circle object
        • community_id object
        • convert object
        • csv object
        • date object
        • date_index_name object
        • dissect object
        • dot_expander object
        • drop object
        • enrich object
        • fail object
        • fingerprint object
        • foreach object
        • ip_location object
        • geo_grid object
        • geoip object
        • grok object
        • gsub object
        • html_strip object
        • inference object
        • join object
        • json object
        • kv object
        • lowercase object
        • network_direction object
        • pipeline object
        • redact object
        • registered_domain object
        • remove object
        • rename object
        • reroute object
        • script object
        • set object
        • set_security_user object
        • sort object
        • split object
        • terminate object
        • trim object
        • uppercase object
        • urldecode object
        • uri_parts object
        • user_agent object
      • version number

        Version number used by external systems to track ingest pipelines.

      • deprecated boolean

        Marks this ingest pipeline as deprecated. When a deprecated ingest pipeline is referenced as the default or final pipeline when creating or updating a non-deprecated index template, Elasticsearch will emit a deprecation warning.

        Default value is false.

      • _meta object

        Arbitrary metadata about the ingest pipeline. This map is not automatically generated by Elasticsearch.

        Hide _meta attribute Show _meta attribute object
        • * object Additional properties

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • docs array[object] Required
      Hide docs attribute Show docs attribute object
      • doc object

        The results of ingest simulation on a single document. The _source of the document contains the results after running all pipelines listed in executed_pipelines on the document. The list of executed pipelines is derived from the pipelines that would be executed if this document had been ingested into _index.

        Hide doc attributes Show doc attributes object
        • _id string Required

          Identifier for the document.

        • _index string Required

          Name of the index that the document would be indexed into if this were not a simulation.

        • _source object Required

          JSON body for the document.

          Hide _source attribute Show _source attribute object
          • * object Additional properties
        • _version
        • executed_pipelines array[string] Required

          A list of the names of the pipelines executed on this document.

        • ignored_fields array[object]

          A list of the fields that would be ignored at the indexing step. For example, a field whose value is larger than the allowed limit would make it through all of the pipelines, but would not be indexed into Elasticsearch.

        • error object

          Any error resulting from simulating ingest on this document. This can be an error generated by executing a processor, or a mapping validation error when simulating indexing the resulting document.

POST /_ingest/{index}/_simulate
POST /_ingest/_simulate
{
  "docs": [
    {
      "_id": "123",
      "_index": "my-index",
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_id": "456",
      "_index": "my-index",
      "_source": {
        "foo": "rab"
      }
    }
  ]
}
resp = client.simulate.ingest(
    docs=[
        {
            "_id": "123",
            "_index": "my-index",
            "_source": {
                "foo": "bar"
            }
        },
        {
            "_id": "456",
            "_index": "my-index",
            "_source": {
                "foo": "rab"
            }
        }
    ],
)
const response = await client.simulate.ingest({
  docs: [
    {
      _id: "123",
      _index: "my-index",
      _source: {
        foo: "bar",
      },
    },
    {
      _id: "456",
      _index: "my-index",
      _source: {
        foo: "rab",
      },
    },
  ],
});
response = client.simulate.ingest(
  body: {
    "docs": [
      {
        "_id": "123",
        "_index": "my-index",
        "_source": {
          "foo": "bar"
        }
      },
      {
        "_id": "456",
        "_index": "my-index",
        "_source": {
          "foo": "rab"
        }
      }
    ]
  }
)
$resp = $client->simulate()->ingest([
    "body" => [
        "docs" => array(
            [
                "_id" => "123",
                "_index" => "my-index",
                "_source" => [
                    "foo" => "bar",
                ],
            ],
            [
                "_id" => "456",
                "_index" => "my-index",
                "_source" => [
                    "foo" => "rab",
                ],
            ],
        ),
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"docs":[{"_id":"123","_index":"my-index","_source":{"foo":"bar"}},{"_id":"456","_index":"my-index","_source":{"foo":"rab"}}]}' "$ELASTICSEARCH_URL/_ingest/_simulate"
client.simulate().ingest(i -> i
    .docs(List.of(Document.of(d -> d
            .id("123")
            .index("my-index")
            .source(JsonData.fromJson("{\"foo\":\"bar\"}"))),Document.of(d -> d
            .id("456")
            .index("my-index")
            .source(JsonData.fromJson("{\"foo\":\"rab\"}")))))
);
Request examples
In this example the index `my-index` has a default pipeline called `my-pipeline` and a final pipeline called `my-final-pipeline`. Since both documents are being ingested into `my-index`, both pipelines are run using the pipeline definitions that are already in the system.
{
  "docs": [
    {
      "_id": "123",
      "_index": "my-index",
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_id": "456",
      "_index": "my-index",
      "_source": {
        "foo": "rab"
      }
    }
  ]
}
In this example the index `my-index` has a default pipeline called `my-pipeline` and a final pipeline called `my-final-pipeline`. But a substitute definition of `my-pipeline` is provided in `pipeline_substitutions`. The substitute `my-pipeline` will be used in place of the `my-pipeline` that is in the system, and then the `my-final-pipeline` that is already defined in the system will run.
{
  "docs": [
    {
      "_index": "my-index",
      "_id": "123",
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_index": "my-index",
      "_id": "456",
      "_source": {
        "foo": "rab"
      }
    }
  ],
  "pipeline_substitutions": {
    "my-pipeline": {
      "processors": [
        {
          "uppercase": {
            "field": "foo"
          }
        }
      ]
    }
  }
}
In this example, imagine that the index `my-index` has a strict mapping with only the `foo` keyword field defined. Say that field mapping came from a component template named `my-mappings-template`. You want to test adding a new field, `bar`. So a substitute definition of `my-mappings-template` is provided in `component_template_substitutions`. The substitute `my-mappings-template` will be used in place of the existing mapping for `my-index` and in place of the `my-mappings-template` that is in the system.
{
  "docs": [
    {
      "_index": "my-index",
      "_id": "123",
      "_source": {
        "foo": "foo"
      }
    },
    {
      "_index": "my-index",
      "_id": "456",
      "_source": {
        "bar": "rab"
      }
    }
  ],
  "component_template_substitutions": {
    "my-mappings_template": {
      "template": {
        "mappings": {
          "dynamic": "strict",
          "properties": {
            "foo": {
              "type": "keyword"
            },
            "bar": {
              "type": "keyword"
            }
          }
        }
      }
    }
  }
}
The pipeline, component template, and index template substitutions, along with the mapping addition, replace the corresponding definitions that are already in the system for the duration of this request.
{
  "docs": [
    {
      "_id": "id",
      "_index": "my-index",
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_id": "id",
      "_index": "my-index",
      "_source": {
        "foo": "rab"
      }
    }
  ],
  "pipeline_substitutions": {
    "my-pipeline": {
      "processors": [
        {
          "set": {
            "field": "field3",
            "value": "value3"
          }
        }
      ]
    }
  },
  "component_template_substitutions": {
    "my-component-template": {
      "template": {
        "mappings": {
          "dynamic": true,
          "properties": {
            "field3": {
              "type": "keyword"
            }
          }
        },
        "settings": {
          "index": {
            "default_pipeline": "my-pipeline"
          }
        }
      }
    }
  },
  "index_template_substitutions": {
    "my-index-template": {
      "index_patterns": [
        "my-index-*"
      ],
      "composed_of": [
        "component_template_1",
        "component_template_2"
      ]
    }
  },
  "mapping_addition": {
    "dynamic": "strict",
    "properties": {
      "foo": {
        "type": "keyword"
      }
    }
  }
}
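For reference, any of the request bodies above can be POSTed directly to the simulate endpoint without storing the substituted definitions in the cluster. A minimal curl sketch (reusing the $ELASTICSEARCH_URL and $ELASTIC_API_KEY placeholders from the earlier examples) that sends the pipeline-substitution request:
# Simulate ingest with a throwaway definition of my-pipeline; nothing is persisted.
curl -X POST \
  -H "Authorization: ApiKey $ELASTIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "docs": [
          { "_index": "my-index", "_id": "123", "_source": { "foo": "bar" } }
        ],
        "pipeline_substitutions": {
          "my-pipeline": {
            "processors": [ { "uppercase": { "field": "foo" } } ]
          }
        }
      }' \
  "$ELASTICSEARCH_URL/_ingest/_simulate"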
Response examples (200)
A successful response when the simulation uses pipeline definitions that are already in the system.
{
  "docs": [
    {
      "doc": {
        "_id": "123",
        "_index": "my-index",
        "_version": -3,
        "_source": {
          "field1": "value1",
          "field2": "value2",
          "foo": "bar"
        },
        "executed_pipelines": [
          "my-pipeline",
          "my-final-pipeline"
        ]
      }
    },
    {
      "doc": {
        "_id": "456",
        "_index": "my-index",
        "_version": -3,
        "_source": {
          "field1": "value1",
          "field2": "value2",
          "foo": "rab"
        },
        "executed_pipelines": [
          "my-pipeline",
          "my-final-pipeline"
        ]
      }
    }
  ]
}
A successful response when the simulation uses pipeline substitutions.
{
  "docs": [
    {
      "doc": {
        "_id": "123",
        "_index": "my-index",
        "_version": -3,
        "_source": {
          "field2": "value2",
          "foo": "BAR"
        },
        "executed_pipelines": [
          "my-pipeline",
          "my-final-pipeline"
        ]
      }
    },
    {
      "doc": {
        "_id": "456",
        "_index": "my-index",
        "_version": -3,
        "_source": {
          "field2": "value2",
          "foo": "RAB"
        },
        "executed_pipelines": [
          "my-pipeline",
          "my-final-pipeline"
        ]
      }
    }
  ]
}
A successful response when the simulation uses component template substitutions.
{
  "docs": [
    {
      "doc": {
        "_id": "123",
        "_index": "my-index",
        "_version": -3,
        "_source": {
          "foo": "foo"
        },
        "executed_pipelines": []
      }
    },
    {
      "doc": {
        "_id": "456",
        "_index": "my-index",
        "_version": -3,
        "_source": {
          "bar": "rab"
        },
        "executed_pipelines": []
      }
    }
  ]
}




Update the license Generally available

POST /_license

All methods and paths for this operation:

PUT /_license

POST /_license

You can update your license at runtime without shutting down your nodes. License updates take effect immediately. If the license you are installing does not support all of the features that were available with your previous license, however, you are notified in the response. You must then re-submit the API request with the acknowledge parameter set to true.

NOTE: If Elasticsearch security features are enabled and you are installing a gold or higher license, you must enable TLS on the transport networking layer before you install the license. If the operator privileges feature is enabled, only operator users can use this API.

Required authorization

  • Cluster privileges: manage

Query parameters

  • acknowledge boolean

    Specifies whether you acknowledge the license changes.

  • master_timeout string

    The period to wait for a connection to the master node.

    External documentation
  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation
application/json

Body

  • license object
    Hide license attributes Show license attributes object
    • expiry_date_in_millis number

      Time unit for milliseconds

    • issue_date_in_millis number

      Time unit for milliseconds

    • start_date_in_millis number

      Time unit for milliseconds

    • issued_to string Required
    • issuer string Required
    • max_nodes number | string | null

    • max_resource_units number
    • signature string Required
    • type string Required

      Values are missing, trial, basic, standard, dev, silver, gold, platinum, or enterprise.

    • uid string Required
  • licenses array[object]

    A sequence of one or more JSON documents containing the license information.

    Hide licenses attributes Show licenses attributes object
    • expiry_date_in_millis number

      Time unit for milliseconds

    • issue_date_in_millis number

      Time unit for milliseconds

    • start_date_in_millis number

      Time unit for milliseconds

    • issued_to string Required
    • issuer string Required
    • max_nodes number | string | null

    • max_resource_units number
    • signature string Required
    • type string Required

      Values are missing, trial, basic, standard, dev, silver, gold, platinum, or enterprise.

    • uid string Required

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledge object
      Hide acknowledge attributes Show acknowledge attributes object
      • license array[string] Required
      • message string Required
    • acknowledged boolean Required
    • license_status string Required

      Values are active, valid, invalid, or expired.

POST /_license
PUT _license
{
  "licenses": [
    {
      "uid":"893361dc-9749-4997-93cb-802e3d7fa4xx",
      "type":"basic",
      "issue_date_in_millis":1411948800000,
      "expiry_date_in_millis":1914278399999,
      "max_nodes":1,
      "issued_to":"issuedTo",
      "issuer":"issuer",
      "signature":"xx"
    }
  ]
}
resp = client.license.post(
    licenses=[
        {
            "uid": "893361dc-9749-4997-93cb-802e3d7fa4xx",
            "type": "basic",
            "issue_date_in_millis": 1411948800000,
            "expiry_date_in_millis": 1914278399999,
            "max_nodes": 1,
            "issued_to": "issuedTo",
            "issuer": "issuer",
            "signature": "xx"
        }
    ],
)
const response = await client.license.post({
  licenses: [
    {
      uid: "893361dc-9749-4997-93cb-802e3d7fa4xx",
      type: "basic",
      issue_date_in_millis: 1411948800000,
      expiry_date_in_millis: 1914278399999,
      max_nodes: 1,
      issued_to: "issuedTo",
      issuer: "issuer",
      signature: "xx",
    },
  ],
});
response = client.license.post(
  body: {
    "licenses": [
      {
        "uid": "893361dc-9749-4997-93cb-802e3d7fa4xx",
        "type": "basic",
        "issue_date_in_millis": 1411948800000,
        "expiry_date_in_millis": 1914278399999,
        "max_nodes": 1,
        "issued_to": "issuedTo",
        "issuer": "issuer",
        "signature": "xx"
      }
    ]
  }
)
$resp = $client->license()->post([
    "body" => [
        "licenses" => array(
            [
                "uid" => "893361dc-9749-4997-93cb-802e3d7fa4xx",
                "type" => "basic",
                "issue_date_in_millis" => 1411948800000,
                "expiry_date_in_millis" => 1914278399999,
                "max_nodes" => 1,
                "issued_to" => "issuedTo",
                "issuer" => "issuer",
                "signature" => "xx",
            ],
        ),
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"licenses":[{"uid":"893361dc-9749-4997-93cb-802e3d7fa4xx","type":"basic","issue_date_in_millis":1411948800000,"expiry_date_in_millis":1914278399999,"max_nodes":1,"issued_to":"issuedTo","issuer":"issuer","signature":"xx"}]}' "$ELASTICSEARCH_URL/_license"
client.license().post(p -> p
    .licenses(l -> l
        .expiryDateInMillis(1914278399999L)
        .issueDateInMillis(1411948800000L)
        .issuedTo("issuedTo")
        .issuer("issuer")
        .maxNodes(1L)
        .signature("xx")
        .type(LicenseType.Basic)
        .uid("893361dc-9749-4997-93cb-802e3d7fa4xx")
    )
);
Request example
Run `PUT _license` to update to a basic license. NOTE: These values are invalid; you must substitute the appropriate contents from your license file.
{
  "licenses": [
    {
      "uid":"893361dc-9749-4997-93cb-802e3d7fa4xx",
      "type":"basic",
      "issue_date_in_millis":1411948800000,
      "expiry_date_in_millis":1914278399999,
      "max_nodes":1,
      "issued_to":"issuedTo",
      "issuer":"issuer",
      "signature":"xx"
    }
  ]
}
Response examples (200)
If you update to a basic license and you previously had a license with more features, you receive this type of response. You must re-submit the API request and set the `acknowledge` parameter to `true`.
{
  "acknowledged": false,
  "license_status": "valid",
  "acknowledge": {
    "message": "\"\"\"This license update requires acknowledgement. To acknowledge the license, please read the following messages and update the license again, this time with the \"acknowledge=true\" parameter:\"\"\"",
    "watcher": [
      "Watcher will be disabled"
    ],
    "logstash": [
      "Logstash will no longer poll for centrally-managed pipelines"
    ],
    "security": [
      "The following X-Pack security functionality will be disabled ..."
    ]
  }
}
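When the response reports "acknowledged": false, as above, the update has not been applied; re-submit the same request with the acknowledge query parameter set to true. A sketch of that second call, reusing the placeholder license body and environment variables from the examples in this section:
# Re-submit the same license body, this time acknowledging the loss of features.
curl -X PUT \
  -H "Authorization: ApiKey $ELASTIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"licenses":[{"uid":"893361dc-9749-4997-93cb-802e3d7fa4xx","type":"basic","issue_date_in_millis":1411948800000,"expiry_date_in_millis":1914278399999,"max_nodes":1,"issued_to":"issuedTo","issuer":"issuer","signature":"xx"}]}' \
  "$ELASTICSEARCH_URL/_license?acknowledge=true"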

Delete the license Generally available

DELETE /_license

When the license expires, your subscription level reverts to Basic.

If the operator privileges feature is enabled, only operator users can use this API.

Required authorization

  • Cluster privileges: manage
External documentation

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node.

    External documentation
  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_license
DELETE /_license
resp = client.license.delete()
const response = await client.license.delete();
response = client.license.delete
$resp = $client->license()->delete();
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_license"
client.license().delete(d -> d);

Get the basic license status Generally available; Added in 6.3.0

GET /_license/basic_status

Required authorization

  • Cluster privileges: monitor

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • eligible_to_start_basic boolean Required
GET /_license/basic_status
GET /_license/basic_status
resp = client.license.get_basic_status()
const response = await client.license.getBasicStatus();
response = client.license.get_basic_status
$resp = $client->license()->getBasicStatus();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_license/basic_status"
client.license().getBasicStatus();
Response examples (200)
A successful response from `GET /_license/basic_status`.
{
  "eligible_to_start_basic": true
}




Start a basic license Generally available; Added in 6.3.0

POST /_license/start_basic

Start an indefinite basic license, which gives access to all the basic features.

NOTE: In order to start a basic license, you must not currently have a basic license.

If the basic license does not support all of the features that are available with your current license, however, you are notified in the response. You must then re-submit the API request with the acknowledge parameter set to true.

To check the status of your basic license, use the get basic license API.

Required authorization

  • Cluster privileges: manage

Query parameters

  • acknowledge boolean

    Specifies whether you acknowledge the license changes. Default value is false.

  • master_timeout string

    Period to wait for a connection to the master node.

    External documentation
  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • basic_was_started boolean Required
    • error_message string
    • type string

      Values are missing, trial, basic, standard, dev, silver, gold, platinum, or enterprise.

    • acknowledge object
POST /_license/start_basic
POST /_license/start_basic?acknowledge=true
resp = client.license.post_start_basic(
    acknowledge=True,
)
const response = await client.license.postStartBasic({
  acknowledge: "true",
});
response = client.license.post_start_basic(
  acknowledge: "true"
)
$resp = $client->license()->postStartBasic([
    "acknowledge" => "true",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_license/start_basic?acknowledge=true"
client.license().postStartBasic(p -> p
    .acknowledge(true)
);
Response examples (200)
A successful response from `POST /_license/start_basic?acknowledge=true`. If you currently have a license with more features than a basic license and you start a basic license, you must pass the acknowledge parameter.
{
  "basic_was_started": true,
  "acknowledged": true
}




Logstash

Logstash APIs enable you to manage pipelines that are used by Logstash Central Management.

Learn more about centralized pipeline management

Get Logstash pipelines Generally available; Added in 7.12.0

GET /_logstash/pipeline/{id}

All methods and paths for this operation:

GET /_logstash/pipeline

GET /_logstash/pipeline/{id}

Get pipelines that are used for Logstash Central Management.

Required authorization

  • Cluster privileges: manage_logstash_pipelines
External documentation

Path parameters

  • id string | array[string] Required

    A comma-separated list of pipeline identifiers.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • description string Required

        A description of the pipeline. This description is not used by Elasticsearch or Logstash.

      • last_modified string | number

        The date the pipeline was last updated. It must be in the yyyy-MM-dd'T'HH:mm:ss.SSSZZ strict_date_time format.

        One of:

        The date the pipeline was last updated. It must be in the yyyy-MM-dd'T'HH:mm:ss.SSSZZ strict_date_time format.

      • pipeline string Required

        The configuration for the pipeline.

        External documentation
      • pipeline_metadata object Required

        Optional metadata about the pipeline, which can have any contents. This metadata is not generated or used by Elasticsearch or Logstash.

        Hide pipeline_metadata attributes Show pipeline_metadata attributes object
        • type string Required
        • version string Required
      • pipeline_settings object Required

        Settings for the pipeline. It supports only flat keys in dot notation.

        Hide pipeline_settings attributes Show pipeline_settings attributes object
        • pipeline.workers number Required

          The number of workers that will, in parallel, execute the filter and output stages of the pipeline.

        • pipeline.batch.size number Required

          The maximum number of events an individual worker thread will collect from inputs before attempting to execute its filters and outputs.

        • pipeline.batch.delay number Required

          When creating pipeline event batches, how long in milliseconds to wait for each event before dispatching an undersized batch to pipeline workers.

        • queue.type string Required

          The internal queuing model to use for event buffering.

        • queue.max_bytes string Required

          The total capacity of the queue (queue.type: persisted) in number of bytes.

        • queue.checkpoint.writes number Required

          The maximum number of written events before forcing a checkpoint when persistent queues are enabled (queue.type: persisted).

      • username string Required

        The user who last updated the pipeline.

GET /_logstash/pipeline/{id}
GET _logstash/pipeline/my_pipeline
resp = client.logstash.get_pipeline(
    id="my_pipeline",
)
const response = await client.logstash.getPipeline({
  id: "my_pipeline",
});
response = client.logstash.get_pipeline(
  id: "my_pipeline"
)
$resp = $client->logstash()->getPipeline([
    "id" => "my_pipeline",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_logstash/pipeline/my_pipeline"
client.logstash().getPipeline(g -> g
    .id("my_pipeline")
);
Response examples (200)
A successful response from `GET _logstash/pipeline/my_pipeline`.
{
  "my_pipeline": {
    "description": "Sample pipeline for illustration purposes",
    "last_modified": "2021-01-02T02:50:51.250Z",
    "pipeline_metadata": {
      "type": "logstash_pipeline",
      "version": "1"
    },
    "username": "elastic",
    "pipeline": "input {}\\n filter { grok {} }\\n output {}",
    "pipeline_settings": {
      "pipeline.workers": 1,
      "pipeline.batch.size": 125,
      "pipeline.batch.delay": 50,
      "queue.type": "memory",
      "queue.max_bytes": "1gb",
      "queue.checkpoint.writes": 1024
    }
  }
}

Create or update a Logstash pipeline Generally available; Added in 7.12.0

PUT /_logstash/pipeline/{id}

Create a pipeline that is used for Logstash Central Management. If the specified pipeline exists, it is replaced.

Required authorization

  • Cluster privileges: manage_logstash_pipelines
External documentation

Path parameters

  • id string Required

    An identifier for the pipeline.

application/json

Body Required

  • description string Required

    A description of the pipeline. This description is not used by Elasticsearch or Logstash.

  • last_modified string | number

    The date the pipeline was last updated. It must be in the yyyy-MM-dd'T'HH:mm:ss.SSSZZ strict_date_time format.

    One of:

    The date the pipeline was last updated. It must be in the yyyy-MM-dd'T'HH:mm:ss.SSSZZ strict_date_time format.

  • pipeline string Required

    The configuration for the pipeline.

    External documentation
  • pipeline_metadata object Required

    Optional metadata about the pipeline, which can have any contents. This metadata is not generated or used by Elasticsearch or Logstash.

    Hide pipeline_metadata attributes Show pipeline_metadata attributes object
    • type string Required
    • version string Required
  • pipeline_settings object Required

    Settings for the pipeline. It supports only flat keys in dot notation.

    Hide pipeline_settings attributes Show pipeline_settings attributes object
    • pipeline.workers number Required

      The number of workers that will, in parallel, execute the filter and output stages of the pipeline.

    • pipeline.batch.size number Required

      The maximum number of events an individual worker thread will collect from inputs before attempting to execute its filters and outputs.

    • pipeline.batch.delay number Required

      When creating pipeline event batches, how long in milliseconds to wait for each event before dispatching an undersized batch to pipeline workers.

    • queue.type string Required

      The internal queuing model to use for event buffering.

    • queue.max_bytes string Required

      The total capacity of the queue (queue.type: persisted) in number of bytes.

    • queue.checkpoint.writes number Required

      The maximum number of written events before forcing a checkpoint when persistent queues are enabled (queue.type: persisted).

  • username string Required

    The user who last updated the pipeline.

Responses

  • 200 application/json
PUT /_logstash/pipeline/{id}
PUT _logstash/pipeline/my_pipeline
{
  "description": "Sample pipeline for illustration purposes",
  "last_modified": "2021-01-02T02:50:51.250Z",
  "pipeline_metadata": {
    "type": "logstash_pipeline",
    "version": 1
  },
  "username": "elastic",
  "pipeline": "input {}\\n filter { grok {} }\\n output {}",
  "pipeline_settings": {
    "pipeline.workers": 1,
    "pipeline.batch.size": 125,
    "pipeline.batch.delay": 50,
    "queue.type": "memory",
    "queue.max_bytes": "1gb",
    "queue.checkpoint.writes": 1024
  }
}
resp = client.logstash.put_pipeline(
    id="my_pipeline",
    pipeline={
        "description": "Sample pipeline for illustration purposes",
        "last_modified": "2021-01-02T02:50:51.250Z",
        "pipeline_metadata": {
            "type": "logstash_pipeline",
            "version": 1
        },
        "username": "elastic",
        "pipeline": "input {}\\n filter { grok {} }\\n output {}",
        "pipeline_settings": {
            "pipeline.workers": 1,
            "pipeline.batch.size": 125,
            "pipeline.batch.delay": 50,
            "queue.type": "memory",
            "queue.max_bytes": "1gb",
            "queue.checkpoint.writes": 1024
        }
    },
)
const response = await client.logstash.putPipeline({
  id: "my_pipeline",
  pipeline: {
    description: "Sample pipeline for illustration purposes",
    last_modified: "2021-01-02T02:50:51.250Z",
    pipeline_metadata: {
      type: "logstash_pipeline",
      version: 1,
    },
    username: "elastic",
    pipeline: "input {}\\n filter { grok {} }\\n output {}",
    pipeline_settings: {
      "pipeline.workers": 1,
      "pipeline.batch.size": 125,
      "pipeline.batch.delay": 50,
      "queue.type": "memory",
      "queue.max_bytes": "1gb",
      "queue.checkpoint.writes": 1024,
    },
  },
});
response = client.logstash.put_pipeline(
  id: "my_pipeline",
  body: {
    "description": "Sample pipeline for illustration purposes",
    "last_modified": "2021-01-02T02:50:51.250Z",
    "pipeline_metadata": {
      "type": "logstash_pipeline",
      "version": 1
    },
    "username": "elastic",
    "pipeline": "input {}\\n filter { grok {} }\\n output {}",
    "pipeline_settings": {
      "pipeline.workers": 1,
      "pipeline.batch.size": 125,
      "pipeline.batch.delay": 50,
      "queue.type": "memory",
      "queue.max_bytes": "1gb",
      "queue.checkpoint.writes": 1024
    }
  }
)
$resp = $client->logstash()->putPipeline([
    "id" => "my_pipeline",
    "body" => [
        "description" => "Sample pipeline for illustration purposes",
        "last_modified" => "2021-01-02T02:50:51.250Z",
        "pipeline_metadata" => [
            "type" => "logstash_pipeline",
            "version" => 1,
        ],
        "username" => "elastic",
        "pipeline" => "input {}\\n filter { grok {} }\\n output {}",
        "pipeline_settings" => [
            "pipeline.workers" => 1,
            "pipeline.batch.size" => 125,
            "pipeline.batch.delay" => 50,
            "queue.type" => "memory",
            "queue.max_bytes" => "1gb",
            "queue.checkpoint.writes" => 1024,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"description":"Sample pipeline for illustration purposes","last_modified":"2021-01-02T02:50:51.250Z","pipeline_metadata":{"type":"logstash_pipeline","version":1},"username":"elastic","pipeline":"input {}\\n filter { grok {} }\\n output {}","pipeline_settings":{"pipeline.workers":1,"pipeline.batch.size":125,"pipeline.batch.delay":50,"queue.type":"memory","queue.max_bytes":"1gb","queue.checkpoint.writes":1024}}' "$ELASTICSEARCH_URL/_logstash/pipeline/my_pipeline"
client.logstash().putPipeline(p -> p
    .id("my_pipeline")
    .pipeline(pi -> pi
        .description("Sample pipeline for illustration purposes")
        .lastModified(DateTime.of("2021-01-02T02:50:51.250Z"))
        .pipeline("input {}\n filter { grok {} }\n output {}")
        .pipelineMetadata(pip -> pip
            .type("logstash_pipeline")
            .version("1")
        )
        .pipelineSettings(pip -> pip
            .pipelineWorkers(1)
            .pipelineBatchSize(125)
            .pipelineBatchDelay(50)
            .queueType("memory")
            .queueMaxBytes("1gb")
            .queueCheckpointWrites(1024)
        )
        .username("elastic")
    )
);
Request example
Run `PUT _logstash/pipeline/my_pipeline` to create a pipeline.
{
  "description": "Sample pipeline for illustration purposes",
  "last_modified": "2021-01-02T02:50:51.250Z",
  "pipeline_metadata": {
    "type": "logstash_pipeline",
    "version": 1
  },
  "username": "elastic",
  "pipeline": "input {}\\n filter { grok {} }\\n output {}",
  "pipeline_settings": {
    "pipeline.workers": 1,
    "pipeline.batch.size": 125,
    "pipeline.batch.delay": 50,
    "queue.type": "memory",
    "queue.max_bytes": "1gb",
    "queue.checkpoint.writes": 1024
  }
}

Delete a Logstash pipeline Generally available; Added in 7.12.0

DELETE /_logstash/pipeline/{id}

Delete a pipeline that is used for Logstash Central Management. If the request succeeds, you receive an empty response with an appropriate status code.

Required authorization

  • Cluster privileges: manage_logstash_pipelines
External documentation

Path parameters

  • id string Required

    An identifier for the pipeline.

Responses

  • 200 application/json
DELETE /_logstash/pipeline/{id}
DELETE _logstash/pipeline/my_pipeline
resp = client.logstash.delete_pipeline(
    id="my_pipeline",
)
const response = await client.logstash.deletePipeline({
  id: "my_pipeline",
});
response = client.logstash.delete_pipeline(
  id: "my_pipeline"
)
$resp = $client->logstash()->deletePipeline([
    "id" => "my_pipeline",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_logstash/pipeline/my_pipeline"
client.logstash().deletePipeline(d -> d
    .id("my_pipeline")
);

Machine learning





Get machine learning information Generally available; Added in 6.3.0

GET /_ml/info

Get defaults and limits used by machine learning. This endpoint is designed for user interfaces that need to fully understand machine learning configurations in which some options are not specified, meaning the defaults apply; it can be used to find out what those defaults are. It also provides information about the maximum size of machine learning jobs that could run in the current cluster configuration.

Required authorization

  • Cluster privileges: monitor_ml

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • defaults object Required
      Hide defaults attributes Show defaults attributes object
      • anomaly_detectors object Required
        Hide anomaly_detectors attributes Show anomaly_detectors attributes object
        • categorization_analyzer
        • categorization_examples_limit number Required
        • model_memory_limit string Required
        • model_snapshot_retention_days number Required
        • daily_model_snapshot_retention_after_days number Required
      • datafeeds object Required
        Hide datafeeds attribute Show datafeeds attribute object
        • scroll_size number Required
    • limits object Required
      Hide limits attributes Show limits attributes object
    • upgrade_mode boolean Required
    • native_code object Required
      Hide native_code attributes Show native_code attributes object
      • build_hash string Required
      • version string Required
GET /_ml/info
GET _ml/info
resp = client.ml.info()
const response = await client.ml.info();
response = client.ml.info
$resp = $client->ml()->info();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/info"
client.ml().info();




Machine learning anomaly detection

Close anomaly detection jobs Generally available; Added in 5.4.0

POST /_ml/anomaly_detectors/{job_id}/_close

A job can be opened and closed multiple times throughout its lifecycle. A closed job cannot receive data or perform analysis operations, but you can still explore and navigate results. When you close a job, it runs housekeeping tasks such as pruning the model history, flushing buffers, calculating final results and persisting the model snapshots. Depending upon the size of the job, it could take several minutes to close and the equivalent time to re-open. After it is closed, the job has a minimal overhead on the cluster except for maintaining its meta data. Therefore it is a best practice to close jobs that are no longer required to process data. If you close an anomaly detection job whose datafeed is running, the request first tries to stop the datafeed. This behavior is equivalent to calling stop datafeed API with the same timeout and force parameters as the close job request. When a datafeed that has a specified end date stops, it automatically closes its associated job.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job. It can be a job identifier, a group name, or a wildcard expression. You can close multiple anomaly detection jobs in a single API request by using a group name, a comma-separated list of jobs, or a wildcard expression. You can close all jobs by using _all or by specifying * as the job identifier.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request: contains wildcard expressions and there are no jobs that match; contains the _all string or no identifiers and there are no matches; or contains wildcard expressions and there are only partial matches. By default, it returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches.

  • force boolean

    Use to close a failed job, or to forcefully close a job which has not responded to its initial close request; the request returns without performing the associated actions such as flushing buffers and persisting the model snapshots. If you want the job to be in a consistent state after the close job API returns, do not set to true. This parameter should be used only in situations where the job has already failed or where you are not interested in results the job might have recently produced or might produce in the future.

  • timeout string

    Controls the time to wait until a job has closed.

    External documentation
application/json

Body

  • allow_no_match boolean

    Refer to the description for the allow_no_match query parameter.

    Default value is true.

  • force boolean

    Refer to the description for the force query parameter.

    Default value is false.

  • timeout string

    Refer to the description for the timeout query parameter.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • closed boolean Required
POST /_ml/anomaly_detectors/{job_id}/_close
POST _ml/anomaly_detectors/low_request_rate/_close
resp = client.ml.close_job(
    job_id="low_request_rate",
)
const response = await client.ml.closeJob({
  job_id: "low_request_rate",
});
response = client.ml.close_job(
  job_id: "low_request_rate"
)
$resp = $client->ml()->closeJob([
    "job_id" => "low_request_rate",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/anomaly_detectors/low_request_rate/_close"
client.ml().closeJob(c -> c
    .jobId("low_request_rate")
);
Response examples (200)
A successful response when closing anomaly detection jobs.
{
  "closed": true
}
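The job_id path parameter also accepts group names, comma-separated lists, wildcard expressions, and _all, and the force query parameter can be used when a job has failed or is unresponsive. A sketch that force-closes every job matching a hypothetical low_* pattern:
# Force-close all jobs whose identifiers start with "low_".
# With force=true the usual housekeeping (flushing buffers, persisting
# model snapshots) is skipped.
curl -X POST \
  -H "Authorization: ApiKey $ELASTIC_API_KEY" \
  "$ELASTICSEARCH_URL/_ml/anomaly_detectors/low_*/_close?force=true"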

Create a calendar Generally available; Added in 6.2.0

PUT /_ml/calendars/{calendar_id}

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • calendar_id string Required

    A string that uniquely identifies a calendar.

application/json

Body

  • job_ids array[string]

    An array of anomaly detection job identifiers.

  • description string

    A description of the calendar.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • calendar_id string Required

      A string that uniquely identifies a calendar.

    • description string

      A description of the calendar.

    • job_ids string | array[string]

      A list of anomaly detection job identifiers or group names.

      One of:

      A list of anomaly detection job identifiers or group names.

PUT /_ml/calendars/{calendar_id}
PUT _ml/calendars/planned-outages
resp = client.ml.put_calendar(
    calendar_id="planned-outages",
)
const response = await client.ml.putCalendar({
  calendar_id: "planned-outages",
});
response = client.ml.put_calendar(
  calendar_id: "planned-outages"
)
$resp = $client->ml()->putCalendar([
    "calendar_id" => "planned-outages",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/calendars/planned-outages"
client.ml().putCalendar(p -> p
    .calendarId("planned-outages")
);
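The request body is optional; the call above creates an empty calendar. The sketch below passes the optional description and job_ids fields documented in the body section (the description text and job name are illustrative only):
# Create the calendar and attach an existing anomaly detection job in one request.
curl -X PUT \
  -H "Authorization: ApiKey $ELASTIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"description":"Recurring planned outages","job_ids":["total-requests"]}' \
  "$ELASTICSEARCH_URL/_ml/calendars/planned-outages"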




Delete a calendar Generally available; Added in 6.2.0

DELETE /_ml/calendars/{calendar_id}

Remove all scheduled events from a calendar, then delete it.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • calendar_id string Required

    A string that uniquely identifies a calendar.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ml/calendars/{calendar_id}
DELETE _ml/calendars/planned-outages
resp = client.ml.delete_calendar(
    calendar_id="planned-outages",
)
const response = await client.ml.deleteCalendar({
  calendar_id: "planned-outages",
});
response = client.ml.delete_calendar(
  calendar_id: "planned-outages"
)
$resp = $client->ml()->deleteCalendar([
    "calendar_id" => "planned-outages",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/calendars/planned-outages"
client.ml().deleteCalendar(d -> d
    .calendarId("planned-outages")
);
Response examples (200)
A successful response when deleting a calendar.
{
  "acknowledged": true
}








Delete anomaly jobs from a calendar Generally available; Added in 6.2.0

DELETE /_ml/calendars/{calendar_id}/jobs/{job_id}

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • calendar_id string Required

    A string that uniquely identifies a calendar.

  • job_id string | array[string] Required

    An identifier for the anomaly detection jobs. It can be a job identifier, a group name, or a comma-separated list of jobs or groups.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • calendar_id string Required

      A string that uniquely identifies a calendar.

    • description string

      A description of the calendar.

    • job_ids string | array[string]

      A list of anomaly detection job identifiers or group names.

      One of:

      A list of anomaly detection job identifiers or group names.

DELETE /_ml/calendars/{calendar_id}/jobs/{job_id}
DELETE _ml/calendars/planned-outages/jobs/total-requests
resp = client.ml.delete_calendar_job(
    calendar_id="planned-outages",
    job_id="total-requests",
)
const response = await client.ml.deleteCalendarJob({
  calendar_id: "planned-outages",
  job_id: "total-requests",
});
response = client.ml.delete_calendar_job(
  calendar_id: "planned-outages",
  job_id: "total-requests"
)
$resp = $client->ml()->deleteCalendarJob([
    "calendar_id" => "planned-outages",
    "job_id" => "total-requests",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/calendars/planned-outages/jobs/total-requests"
client.ml().deleteCalendarJob(d -> d
    .calendarId("planned-outages")
    .jobId("total-requests")
);
Response examples (200)
A successful response when deleting an anomaly detection job from a calendar.
{
  "calendar_id": "planned-outages",
  "job_ids": []
}

Get datafeeds configuration info Generally available; Added in 5.5.0

GET /_ml/datafeeds/{datafeed_id}

All methods and paths for this operation:

GET /_ml/datafeeds

GET /_ml/datafeeds/{datafeed_id}

You can get information for multiple datafeeds in a single API request by using a comma-separated list of datafeeds or a wildcard expression. You can get information for all datafeeds by using _all, by specifying * as the <feed_id>, or by omitting the <feed_id>. This API returns a maximum of 10,000 datafeeds.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • datafeed_id string | array[string] Required

    Identifier for the datafeed. It can be a datafeed identifier or a wildcard expression. If you do not specify one of these options, the API returns information about all datafeeds.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    1. Contains wildcard expressions and there are no datafeeds that match.
    2. Contains the _all string or no identifiers and there are no matches.
    3. Contains wildcard expressions and there are only partial matches.

    The default value is true, which returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.

  • exclude_generated boolean

    Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • datafeeds array[object] Required
      Hide datafeeds attributes Show datafeeds attributes object
      • aggregations object
      • authorization object

        The security privileges that the datafeed uses to run its queries. If Elastic Stack security features were disabled at the time of the most recent update to the datafeed, this property is omitted.

        Hide authorization attributes Show authorization attributes object
        • api_key object

          If an API key was used for the most recent update to the datafeed, its name and identifier are listed in the response.

        • roles array[string]

          If a user ID was used for the most recent update to the datafeed, its roles at the time of the update are listed in the response.

        • service_account string

          If a service account was used for the most recent update to the datafeed, the account name is listed in the response.

      • chunking_config object
        Hide chunking_config attributes Show chunking_config attributes object
        • mode string Required

          If the mode is auto, the chunk size is dynamically calculated; this is the recommended value when the datafeed does not use aggregations. If the mode is manual, chunking is applied according to the specified time_span; use this mode when the datafeed uses aggregations. If the mode is off, no chunking is applied.

          Values are auto, manual, or off.

        • time_span string

          The time span that each search will be querying. This setting is applicable only when the mode is set to manual.

      • datafeed_id string Required
      • frequency string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        External documentation
      • indices array[string] Required
      • indexes array[string]
      • job_id string Required
      • max_empty_searches number
      • query_delay string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        External documentation
      • script_fields object
        Hide script_fields attribute Show script_fields attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • script object Required
          • ignore_failure boolean
      • scroll_size number
      • delayed_data_check_config object Required
        Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
        • check_window string

          The window of time that is searched for late data. This window of time ends with the latest finalized bucket. It defaults to null, which causes an appropriate check_window to be calculated when the real-time datafeed runs. In particular, the default check_window span calculation is based on the maximum of 2h or 8 * bucket_span.

        • enabled boolean Required

          Specifies whether the datafeed periodically checks for delayed data.

      • runtime_mappings object
        Hide runtime_mappings attribute Show runtime_mappings attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

          • fetch_fields array[object]

            For type lookup

          • format string

            A custom format for date type runtime fields.

      • indices_options object

        Controls how to deal with unavailable concrete indices (closed or missing), how wildcard expressions are expanded to actual indices (all, closed or open indices) and how to deal with wildcard expressions that resolve to no indices.

        Hide indices_options attributes Show indices_options attributes object
        • allow_no_indices boolean

          If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

        • expand_wildcards string | array[string]

          Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

          Supported values include:

          • all: Match any data stream or index, including hidden ones.
          • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
          • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
          • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
          • none: Wildcard expressions are not accepted.
        • ignore_unavailable boolean

          If true, missing or closed indices are not included in the response.

          Default value is false.

        • ignore_throttled boolean

          If true, concrete, expanded or aliased indices are ignored when frozen.

          Default value is true.

      • query object Required

        The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {"boost": 1}}.

        Query DSL
GET /_ml/datafeeds/{datafeed_id}
GET _ml/datafeeds/datafeed-high_sum_total_sales
resp = client.ml.get_datafeeds(
    datafeed_id="datafeed-high_sum_total_sales",
)
const response = await client.ml.getDatafeeds({
  datafeed_id: "datafeed-high_sum_total_sales",
});
response = client.ml.get_datafeeds(
  datafeed_id: "datafeed-high_sum_total_sales"
)
$resp = $client->ml()->getDatafeeds([
    "datafeed_id" => "datafeed-high_sum_total_sales",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-high_sum_total_sales"
client.ml().getDatafeeds(g -> g
    .datafeedId("datafeed-high_sum_total_sales")
);
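Because exclude_generated removes fields that Elasticsearch adds on retrieval, it is the option to use when exporting datafeed configurations for another cluster, as noted in the query parameters above. A sketch that fetches every datafeed in that portable form:
# Retrieve all datafeed configurations without generated fields so the
# output can be re-used against another cluster.
curl -X GET \
  -H "Authorization: ApiKey $ELASTIC_API_KEY" \
  "$ELASTICSEARCH_URL/_ml/datafeeds/_all?exclude_generated=true"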

Create a datafeed Generally available; Added in 5.4.0

PUT /_ml/datafeeds/{datafeed_id}

Datafeeds retrieve data from Elasticsearch for analysis by an anomaly detection job. You can associate only one datafeed with each anomaly detection job. The datafeed contains a query that runs at a defined interval (frequency). If you are concerned about delayed data, you can add a delay (query_delay) at each interval. By default, the datafeed uses the following query: {"match_all": {"boost": 1}}.

When Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had at the time of creation and runs the query using those same roles. If you provide secondary authorization headers, those credentials are used instead. You must use Kibana, this API, or the create anomaly detection jobs API to create a datafeed. Do not add a datafeed directly to the .ml-config index. Do not give users write privileges on the .ml-config index.

Required authorization

  • Index privileges: read
  • Cluster privileges: manage_ml

Path parameters

  • datafeed_id string Required

    A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.

Query parameters

  • allow_no_indices boolean

    If true, wildcard indices expressions that resolve into no concrete indices are ignored. This includes the _all string or when no indices are specified.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_throttled boolean Deprecated

    If true, concrete, expanded, or aliased indices are ignored when frozen.

  • ignore_unavailable boolean

    If true, unavailable indices (missing or closed) are ignored.

application/json

Body Required

  • aggregations object

    If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data.

  • chunking_config object

    Datafeeds might be required to search over long time periods, for several months or years. This search is split into time chunks in order to ensure the load on Elasticsearch is managed. Chunking configuration controls how the size of these time chunks are calculated; it is an advanced configuration option.

    Hide chunking_config attributes Show chunking_config attributes object
    • mode string Required

      If the mode is auto, the chunk size is dynamically calculated; this is the recommended value when the datafeed does not use aggregations. If the mode is manual, chunking is applied according to the specified time_span; use this mode when the datafeed uses aggregations. If the mode is off, no chunking is applied.

      Values are auto, manual, or off.

    • time_span string

      The time span that each search will be querying. This setting is applicable only when the mode is set to manual.

      External documentation
  • delayed_data_check_config object

    Specifies whether the datafeed checks for missing data and the size of the window. The datafeed can optionally search over indices that have already been read in an effort to determine whether any data has subsequently been added to the index. If missing data is found, it is a good indication that the query_delay is set too low and the data is being indexed after the datafeed has passed that moment in time. This check runs only on real-time datafeeds.

    Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
    • check_window string

      The window of time that is searched for late data. This window of time ends with the latest finalized bucket. It defaults to null, which causes an appropriate check_window to be calculated when the real-time datafeed runs. In particular, the default check_window span calculation is based on the maximum of 2h or 8 * bucket_span.

      External documentation
    • enabled boolean Required

      Specifies whether the datafeed periodically checks for delayed data.

  • frequency string

    The interval at which scheduled queries are made while the datafeed runs in real time. The default value is either the bucket span for short bucket spans, or, for longer bucket spans, a sensible fraction of the bucket span. When frequency is shorter than the bucket span, interim results for the last (partial) bucket are written then eventually overwritten by the full bucket results. If the datafeed uses aggregations, this value must be divisible by the interval of the date histogram aggregation.

    External documentation
  • indices string | array[string]

    An array of index names. Wildcards are supported. If any of the indices are in remote clusters, the master nodes and the machine learning nodes must have the remote_cluster_client role.

  • indices_options object

    Specifies index expansion options that are used during search

    Hide indices_options attributes Show indices_options attributes object
    • allow_no_indices boolean

      If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

    • expand_wildcards string | array[string]

      Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

      Supported values include:

      • all: Match any data stream or index, including hidden ones.
      • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
      • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
      • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
      • none: Wildcard expressions are not accepted.
    • ignore_unavailable boolean

      If true, missing or closed indices are not included in the response.

      Default value is false.

    • ignore_throttled boolean

      If true, concrete, expanded or aliased indices are ignored when frozen.

      Default value is true.

  • job_id string

    Identifier for the anomaly detection job.

  • max_empty_searches number

    If a real-time datafeed has never seen any data (including during any initial training period), it automatically stops and closes the associated job after this many real-time searches return no documents. In other words, it stops after frequency times max_empty_searches of real-time operation. If not set, a datafeed with no end time that sees no data remains started until it is explicitly stopped. By default, it is not set.

  • query object

    The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch.

    External documentation
  • query_delay string

    The number of seconds behind real time that data is queried. For example, if data from 10:04 a.m. might not be searchable in Elasticsearch until 10:06 a.m., set this property to 120 seconds. The default value is randomly selected between 60s and 120s. This randomness improves the query performance when there are multiple jobs running on the same node.

    External documentation
  • runtime_mappings object

    Specifies runtime fields for the datafeed search.

    Hide runtime_mappings attribute Show runtime_mappings attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to a field or an array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        For type lookup

      • target_field string

        For type lookup

      • target_index string

        For type lookup

      • script object

        Painless script executed at query time.

        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang
        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Field type, which can be: boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • script_fields object

    Specifies scripts that evaluate custom expressions and returns script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields.

    Hide script_fields attribute Show script_fields attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • script object Required
        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string

          Specifies the language the script is written in.

          Supported values include:

          • painless: Painless scripting language, purpose-built for Elasticsearch.
          • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
          • mustache: Mustache templating, used for templates.
          • java: Expert Java API

          Values are painless, expression, mustache, or java.

        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • ignore_failure boolean
  • scroll_size number

    The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default.

    Default value is 1000.

  • headers object

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • aggregations object
    • authorization object
      Hide authorization attributes Show authorization attributes object
      • api_key object

        If an API key was used for the most recent update to the datafeed, its name and identifier are listed in the response.

        Hide api_key attributes Show api_key attributes object
        • id string Required

          The identifier for the API key.

        • name string Required

          The name of the API key.

      • roles array[string]

        If a user ID was used for the most recent update to the datafeed, its roles at the time of the update are listed in the response.

      • service_account string

        If a service account was used for the most recent update to the datafeed, the account name is listed in the response.

    • chunking_config object Required
      Hide chunking_config attributes Show chunking_config attributes object
      • mode string Required

        If the mode is auto, the chunk size is dynamically calculated; this is the recommended value when the datafeed does not use aggregations. If the mode is manual, chunking is applied according to the specified time_span; use this mode when the datafeed uses aggregations. If the mode is off, no chunking is applied.

        Values are auto, manual, or off.

      • time_span string

        The time span that each search will be querying. This setting is applicable only when the mode is set to manual.

        External documentation
    • delayed_data_check_config object
      Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
      • check_window string

        The window of time that is searched for late data. This window of time ends with the latest finalized bucket. It defaults to null, which causes an appropriate check_window to be calculated when the real-time datafeed runs. In particular, the default check_window span calculation is based on the maximum of 2h or 8 * bucket_span.

        External documentation
      • enabled boolean Required

        Specifies whether the datafeed periodically checks for delayed data.

    • datafeed_id string Required
    • frequency string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      External documentation
    • indices array[string] Required
    • job_id string Required
    • indices_options object

      Controls how to deal with unavailable concrete indices (closed or missing), how wildcard expressions are expanded to actual indices (all, closed or open indices) and how to deal with wildcard expressions that resolve to no indices.

      Hide indices_options attributes Show indices_options attributes object
      • allow_no_indices boolean

        If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

      • expand_wildcards string | array[string]

        Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

        Supported values include:

        • all: Match any data stream or index, including hidden ones.
        • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
        • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
        • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
        • none: Wildcard expressions are not accepted.
      • ignore_unavailable boolean

        If true, missing or closed indices are not included in the response.

        Default value is false.

      • ignore_throttled boolean

        If true, concrete, expanded or aliased indices are ignored when frozen.

        Default value is true.

    • max_empty_searches number
    • query object Required

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • query_delay string Required

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      External documentation
    • runtime_mappings object
      Hide runtime_mappings attribute Show runtime_mappings attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field
          • format string
        • format string

          A custom format for date type runtime fields.

        • input_field string

          For type lookup

        • target_field string

          For type lookup

        • target_index string

          For type lookup

        • script object

          Painless script executed at query time.

          Hide script attributes Show script attributes object
          • source string

            The script source.

          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          • options object
        • type string Required

          Field type, which can be: boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • script_fields object
      Hide script_fields attribute Show script_fields attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • script object Required
          Hide script attributes Show script attributes object
          • source string

            The script source.

          • id string

            The id for a stored script.

          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            Hide params attribute Show params attribute object
            • * object Additional properties
          • lang
          • options object
            Hide options attribute Show options attribute object
            • * string Additional properties
        • ignore_failure boolean
    • scroll_size number Required
PUT /_ml/datafeeds/{datafeed_id}
PUT _ml/datafeeds/datafeed-test-job?pretty
{
  "indices": [
    "kibana_sample_data_logs"
  ],
  "query": {
    "bool": {
      "must": [
        {
          "match_all": {}
        }
      ]
    }
  },
  "job_id": "test-job"
}
resp = client.ml.put_datafeed(
    datafeed_id="datafeed-test-job",
    pretty=True,
    indices=[
        "kibana_sample_data_logs"
    ],
    query={
        "bool": {
            "must": [
                {
                    "match_all": {}
                }
            ]
        }
    },
    job_id="test-job",
)
const response = await client.ml.putDatafeed({
  datafeed_id: "datafeed-test-job",
  pretty: "true",
  indices: ["kibana_sample_data_logs"],
  query: {
    bool: {
      must: [
        {
          match_all: {},
        },
      ],
    },
  },
  job_id: "test-job",
});
response = client.ml.put_datafeed(
  datafeed_id: "datafeed-test-job",
  pretty: "true",
  body: {
    "indices": [
      "kibana_sample_data_logs"
    ],
    "query": {
      "bool": {
        "must": [
          {
            "match_all": {}
          }
        ]
      }
    },
    "job_id": "test-job"
  }
)
$resp = $client->ml()->putDatafeed([
    "datafeed_id" => "datafeed-test-job",
    "pretty" => "true",
    "body" => [
        "indices" => array(
            "kibana_sample_data_logs",
        ),
        "query" => [
            "bool" => [
                "must" => array(
                    [
                        "match_all" => new ArrayObject([]),
                    ],
                ),
            ],
        ],
        "job_id" => "test-job",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"indices":["kibana_sample_data_logs"],"query":{"bool":{"must":[{"match_all":{}}]}},"job_id":"test-job"}' "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-test-job?pretty"
Request example
An example body for a `PUT _ml/datafeeds/datafeed-test-job?pretty` request.
{
  "indices": [
    "kibana_sample_data_logs"
  ],
  "query": {
    "bool": {
      "must": [
        {
          "match_all": {}
        }
      ]
    }
  },
  "job_id": "test-job"
}
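
The following is a sketch, not one of the official examples, showing how the optional settings documented above (frequency, query_delay, chunking_config, delayed_data_check_config, scroll_size, and a runtime field) might be supplied through the Python client; the field values and the runtime field definition are illustrative assumptions.
resp = client.ml.put_datafeed(
    datafeed_id="datafeed-test-job",
    job_id="test-job",
    indices=["kibana_sample_data_logs"],
    query={"bool": {"must": [{"match_all": {}}]}},
    # Query every 150 seconds, staying 90 seconds behind real time.
    frequency="150s",
    query_delay="90s",
    # Manual chunking: each search covers a fixed 3-hour span.
    chunking_config={"mode": "manual", "time_span": "3h"},
    # Look back over a 2-hour window for data that was indexed late.
    delayed_data_check_config={"enabled": True, "check_window": "2h"},
    # Page size for searches when the datafeed does not use aggregations.
    scroll_size=1000,
    # Illustrative runtime field computed at search time.
    runtime_mappings={
        "hour_of_day": {
            "type": "long",
            "script": {"source": "emit(doc['timestamp'].value.getHour())"},
        }
    },
)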

Machine learning data frame analytics

Stop data frame analytics jobs Generally available; Added in 7.3.0

POST /_ml/data_frame/analytics/{id}/_stop

A data frame analytics job can be started and stopped multiple times throughout its lifecycle.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • id string Required

    Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    1. Contains wildcard expressions and there are no data frame analytics jobs that match.
    2. Contains the _all string or no identifiers and there are no matches.
    3. Contains wildcard expressions and there are only partial matches.

    The default value is true, which returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.

  • force boolean

    If true, the data frame analytics job is stopped forcefully.

  • timeout string

    Controls the amount of time to wait until the data frame analytics job stops. Defaults to 20 seconds.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • stopped boolean Required
POST /_ml/data_frame/analytics/{id}/_stop
POST _ml/data_frame/analytics/loganalytics/_stop
resp = client.ml.stop_data_frame_analytics(
    id="loganalytics",
)
const response = await client.ml.stopDataFrameAnalytics({
  id: "loganalytics",
});
response = client.ml.stop_data_frame_analytics(
  id: "loganalytics"
)
$resp = $client->ml()->stopDataFrameAnalytics([
    "id" => "loganalytics",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/data_frame/analytics/loganalytics/_stop"
client.ml().stopDataFrameAnalytics(s -> s
    .id("loganalytics")
);
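
As a sketch of the query parameters described above (the flag and timeout values are illustrative), the same call with explicit options via the Python client might look like this.
resp = client.ml.stop_data_frame_analytics(
    id="loganalytics",
    # Return an empty data_frame_analytics array instead of a 404 on no match.
    allow_no_match=True,
    # Set force=True to stop the job forcefully.
    force=False,
    # Wait up to 30 seconds for the job to stop.
    timeout="30s",
)
print(resp["stopped"])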

Create a trained model Generally available; Added in 7.10.0

PUT /_ml/trained_models/{model_id}

Enables you to supply a trained model that is not created by data frame analytics.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • model_id string Required

    The unique identifier of the trained model.

Query parameters

  • defer_definition_decompression boolean Generally available; Added in 8.0.0

    If set to true and a compressed_definition is provided, the request defers definition decompression and skips relevant validations.

  • wait_for_completion boolean Generally available; Added in 8.8.0

    Whether to wait for all child operations (e.g. model download) to complete.

application/json

Body Required

  • compressed_definition string

    The compressed (GZipped and Base64 encoded) inference definition of the model. If compressed_definition is specified, then definition cannot be specified.

  • definition object

    The inference definition for the model. If definition is specified, then compressed_definition cannot be specified.

    Hide definition attributes Show definition attributes object
    • preprocessors array[object]

      Collection of preprocessors

      Hide preprocessors attributes Show preprocessors attributes object
      • frequency_encoding object
        Hide frequency_encoding attributes Show frequency_encoding attributes object
        • field string Required
        • feature_name string Required
        • frequency_map object Required
      • one_hot_encoding object
        Hide one_hot_encoding attributes Show one_hot_encoding attributes object
        • field string Required
        • hot_map object Required
      • target_mean_encoding object
        Hide target_mean_encoding attributes Show target_mean_encoding attributes object
        • field string Required
        • feature_name string Required
        • target_map object Required
        • default_value number Required
    • trained_model object Required

      The definition of the trained model.

      Hide trained_model attributes Show trained_model attributes object
      • tree object

        The definition for a binary decision tree.

        Hide tree attributes Show tree attributes object
        • classification_labels array[string]
        • feature_names array[string] Required
        • target_type string
        • tree_structure array[object] Required
      • tree_node object

        The definition of a node in a tree. There are two major types of nodes: leaf nodes and not-leaf nodes.

        • Leaf nodes only need node_index and leaf_value defined.
        • All other nodes need split_feature, left_child, right_child, threshold, decision_type, and default_left defined.
        Hide tree_node attributes Show tree_node attributes object
        • decision_type string
        • default_left boolean
        • leaf_value number
        • left_child number
        • node_index number Required
        • right_child number
        • split_feature number
        • split_gain number
        • threshold number
      • ensemble object

        The definition for an ensemble model

        Hide ensemble attributes Show ensemble attributes object
        • classification_labels array[string]
        • feature_names array[string]
        • target_type string
        • trained_models array[object] Required
  • description string

    A human-readable description of the inference trained model.

  • inference_config object

    The default configuration for inference. This can be either a regression or classification configuration. It must match the underlying definition.trained_model's target_type. For pre-packaged models such as ELSER the config is not required.

    Hide inference_config attributes Show inference_config attributes object
    • regression object

      Regression configuration for inference.

      Hide regression attributes Show regression attributes object
      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • num_top_feature_importance_values number

        Specifies the maximum number of feature importance values per document.

        Default value is 0.

    • classification object

      Classification configuration for inference.

      Hide classification attributes Show classification attributes object
      • num_top_classes number

        Specifies the number of top class predictions to return. Defaults to 0.

      • num_top_feature_importance_values number

        Specifies the maximum number of feature importance values per document.

        Default value is 0.

      • prediction_field_type string

        Specifies the type of the predicted field to write. Acceptable values are: string, number, boolean. When boolean is provided 1.0 is transformed to true and 0.0 to false.

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • top_classes_results_field string

        Specifies the field to which the top classes are written. Defaults to top_classes.

    • text_classification object

      Text classification configuration for inference.

      Hide text_classification attributes Show text_classification attributes object
      • num_top_classes number

        Specifies the number of top class predictions to return. Defaults to 0.

      • tokenization object

        The tokenization options

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • classification_labels array[string]

        Classification labels to apply other than the stored labels. Must have the same dimensions as the default configured labels.

      • vocabulary object
    • zero_shot_classification object

      Zeroshot classification configuration for inference.

      Hide zero_shot_classification attributes Show zero_shot_classification attributes object
      • tokenization object

        The tokenization options to update when inferring

      • hypothesis_template string

        Hypothesis template used when tokenizing labels for prediction

        Default value is "This example is {}.".

      • classification_labels array[string] Required

        The zero-shot classification labels indicating entailment, neutral, and contradiction. Must contain exactly and only entailment, neutral, and contradiction.

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • multi_label boolean

        Indicates if more than one true label exists.

        Default value is false.

      • labels array[string]

        The labels to predict.

    • fill_mask object

      Fill mask configuration for inference.

      Hide fill_mask attributes Show fill_mask attributes object
      • mask_token string

        The string/token which will be removed from incoming documents and replaced with the inference prediction(s). In a response, this field contains the mask token for the specified model/tokenizer. Each model and tokenizer has a predefined mask token which cannot be changed. Thus, it is recommended not to set this value in requests. However, if this field is present in a request, its value must match the predefined value for that model/tokenizer, otherwise the request will fail.

      • num_top_classes number

        Specifies the number of top class predictions to return. Defaults to 0.

      • tokenization object

        The tokenization options to update when inferring

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • vocabulary object
    • learning_to_rank object
      Hide learning_to_rank attributes Show learning_to_rank attributes object
      • default_params object
        Hide default_params attribute Show default_params attribute object
        • * object Additional properties
      • feature_extractors array[object]
      • num_top_feature_importance_values number Required
    • ner object

      Named entity recognition configuration for inference.

      Hide ner attributes Show ner attributes object
      • tokenization object

        The tokenization options

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • classification_labels array[string]

        The token classification labels. Must be IOB formatted tags

      • vocabulary object
    • pass_through object

      Pass through configuration for inference.

      Hide pass_through attributes Show pass_through attributes object
      • tokenization object

        The tokenization options

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • vocabulary object
    • text_embedding object

      Text embedding configuration for inference.

      Hide text_embedding attributes Show text_embedding attributes object
      • embedding_size number

        The number of dimensions in the embedding output

      • tokenization object

        The tokenization options

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • vocabulary object
    • text_expansion object

      Text expansion configuration for inference.

      Hide text_expansion attributes Show text_expansion attributes object
      • tokenization object

        The tokenization options

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • vocabulary object
    • question_answering object

      Question answering configuration for inference.

      Hide question_answering attributes Show question_answering attributes object
      • num_top_classes number

        Specifies the number of top class predictions to return. Defaults to 0.

      • tokenization object

        The tokenization options to update when inferring

      • results_field string

        The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

      • max_answer_length number

        The maximum answer length to consider

  • input object

    The input field names for the model definition.

    Hide input attribute Show input attribute object
    • field_names string | array[string] Required
  • metadata object

    An object map that contains metadata about the model.

  • model_type string

    The model type.

    Supported values include:

    • tree_ensemble: The model definition is an ensemble model of decision trees.
    • lang_ident: A special type reserved for language identification models.
    • pytorch: The stored definition is a PyTorch (specifically a TorchScript) model. Currently only NLP models are supported.

    Values are tree_ensemble, lang_ident, or pytorch.

  • model_size_bytes number

    The estimated memory usage in bytes to keep the trained model in memory. This property is supported only if defer_definition_decompression is true or the model definition is not supplied.

  • platform_architecture string

    The platform architecture (if applicable) of the trained model. If the model only works on one platform, because it is heavily optimized for a particular processor architecture and OS combination, then this field specifies which. The format of the string must match the platform identifiers used by Elasticsearch, so one of linux-x86_64, linux-aarch64, darwin-x86_64, darwin-aarch64, or windows-x86_64. For portable models (those that work independently of processor architecture or OS features), leave this field unset.

  • tags array[string]

    An array of tags to organize the model.

  • prefix_strings object

    Optional prefix strings applied at inference

    Hide prefix_strings attributes Show prefix_strings attributes object
    • ingest string

      String prepended to input at ingest

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • model_id string Required

      Identifier for the trained model.

    • model_type string

      The model type

      Supported values include:

      • tree_ensemble: The model definition is an ensemble model of decision trees.
      • lang_ident: A special type reserved for language identification models.
      • pytorch: The stored definition is a PyTorch (specifically a TorchScript) model. Currently only NLP models are supported.

      Values are tree_ensemble, lang_ident, or pytorch.

    • tags array[string] Required

      A comma delimited string of tags. A trained model can have many tags, or none.

    • version string

      The Elasticsearch version number in which the trained model was created.

    • compressed_definition string
    • created_by string

      Information on the creator of the trained model.

    • create_time string | number

      The time when the trained model was created.

    • default_field_map object

      Any field map described in the inference configuration takes precedence.

      Hide default_field_map attribute Show default_field_map attribute object
      • * string Additional properties
    • description string

      The free-text description of the trained model.

    • estimated_heap_memory_usage_bytes number

      The estimated heap usage in bytes to keep the trained model in memory.

    • estimated_operations number

      The estimated number of operations to use the trained model.

    • fully_defined boolean

      True if the full model definition is present.

    • inference_config object

      The default configuration for inference. This can be either a regression, classification, or one of the many NLP focused configurations. It must match the underlying definition.trained_model's target_type. For pre-packaged models such as ELSER the config is not required.

      Hide inference_config attributes Show inference_config attributes object
      • regression object

        Regression configuration for inference.

        Hide regression attributes Show regression attributes object
        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • num_top_feature_importance_values number

          Specifies the maximum number of feature importance values per document.

          Default value is 0.

      • classification object

        Classification configuration for inference.

        Hide classification attributes Show classification attributes object
        • num_top_classes number

          Specifies the number of top class predictions to return. Defaults to 0.

        • num_top_feature_importance_values number

          Specifies the maximum number of feature importance values per document.

          Default value is 0.

        • prediction_field_type string

          Specifies the type of the predicted field to write. Acceptable values are: string, number, boolean. When boolean is provided 1.0 is transformed to true and 0.0 to false.

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • top_classes_results_field string

          Specifies the field to which the top classes are written. Defaults to top_classes.

      • text_classification object

        Text classification configuration for inference.

        Hide text_classification attributes Show text_classification attributes object
        • num_top_classes number

          Specifies the number of top class predictions to return. Defaults to 0.

        • tokenization object

          The tokenization options

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • classification_labels array[string]

          Classification labels to apply other than the stored labels. Must have the same dimensions as the default configured labels.

        • vocabulary object
      • zero_shot_classification object

        Zeroshot classification configuration for inference.

        Hide zero_shot_classification attributes Show zero_shot_classification attributes object
        • tokenization object

          The tokenization options to update when inferring

        • hypothesis_template string

          Hypothesis template used when tokenizing labels for prediction

          Default value is "This example is {}.".

        • classification_labels array[string] Required

          The zero-shot classification labels indicating entailment, neutral, and contradiction. Must contain exactly and only entailment, neutral, and contradiction.

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • multi_label boolean

          Indicates if more than one true label exists.

          Default value is false.

        • labels array[string]

          The labels to predict.

      • fill_mask object

        Fill mask configuration for inference.

        Hide fill_mask attributes Show fill_mask attributes object
        • mask_token string

          The string/token which will be removed from incoming documents and replaced with the inference prediction(s). In a response, this field contains the mask token for the specified model/tokenizer. Each model and tokenizer has a predefined mask token which cannot be changed. Thus, it is recommended not to set this value in requests. However, if this field is present in a request, its value must match the predefined value for that model/tokenizer, otherwise the request will fail.

        • num_top_classes number

          Specifies the number of top class predictions to return. Defaults to 0.

        • tokenization object

          The tokenization options to update when inferring

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • vocabulary object
      • learning_to_rank object
        Hide learning_to_rank attributes Show learning_to_rank attributes object
        • default_params object
          Hide default_params attribute Show default_params attribute object
          • * object Additional properties
        • feature_extractors array[object]
        • num_top_feature_importance_values number Required
      • ner object

        Named entity recognition configuration for inference.

        Hide ner attributes Show ner attributes object
        • tokenization object

          The tokenization options

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • classification_labels array[string]

          The token classification labels. Must be IOB formatted tags

        • vocabulary object
      • pass_through object

        Pass through configuration for inference.

        Hide pass_through attributes Show pass_through attributes object
        • tokenization object

          The tokenization options

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • vocabulary object
      • text_embedding object

        Text embedding configuration for inference.

        Hide text_embedding attributes Show text_embedding attributes object
        • embedding_size number

          The number of dimensions in the embedding output

        • tokenization object

          The tokenization options

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • vocabulary object
      • text_expansion object

        Text expansion configuration for inference.

        Hide text_expansion attributes Show text_expansion attributes object
        • tokenization object

          The tokenization options

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • vocabulary object
      • question_answering object

        Question answering configuration for inference.

        Hide question_answering attributes Show question_answering attributes object
        • num_top_classes number

          Specifies the number of top class predictions to return. Defaults to 0.

        • tokenization object

          The tokenization options to update when inferring

        • results_field string

          The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.

        • max_answer_length number

          The maximum answer length to consider

    • input object Required

      The input field names for the model definition.

      Hide input attribute Show input attribute object
      • field_names array[string] Required

        An array of input field names for the model.

    • license_level string

      The license level of the trained model.

    • metadata object

      An object containing metadata about the trained model. For example, models created by data frame analytics contain analysis_config and input objects.

      Hide metadata attributes Show metadata attributes object
      • model_aliases array[string]
      • feature_importance_baseline object

        An object that contains the baseline for feature importance values. For regression analysis, it is a single value. For classification analysis, there is a value for each class.

        Hide feature_importance_baseline attribute Show feature_importance_baseline attribute object
        • * string Additional properties
      • hyperparameters array[object]

        List of the available hyperparameters optimized during the fine_parameter_tuning phase as well as specified by the user.

        Hide hyperparameters attributes Show hyperparameters attributes object
        • absolute_importance number

          A positive number showing how much the parameter influences the variation of the loss function. For hyperparameters with values that are not specified by the user but tuned during hyperparameter optimization.

        • name string Required

          Name of the hyperparameter.

        • relative_importance number

          A number between 0 and 1 showing the proportion of influence on the variation of the loss function among all tuned hyperparameters. For hyperparameters with values that are not specified by the user but tuned during hyperparameter optimization.

        • supplied boolean Required

          Indicates if the hyperparameter is specified by the user (true) or optimized (false).

        • value number Required

          The value of the hyperparameter, either optimized or specified by the user.

      • total_feature_importance array[object]

        An array of the total feature importance for each feature used from the training data set. This array of objects is returned if data frame analytics trained the model and the request includes total_feature_importance in the include request parameter.

        Hide total_feature_importance attributes Show total_feature_importance attributes object
        • feature_name string Required

          The feature for which this importance was calculated.

        • importance array[object] Required

          A collection of feature importance statistics related to the training data set for this particular feature.

        • classes array[object] Required

          If the trained model is a classification model, feature importance statistics are gathered per target class value.

    • model_size_bytes number | string

    • model_package object
      Hide model_package attributes Show model_package attributes object
      • Time unit for milliseconds

      • description string
      • inference_config object
        Hide inference_config attribute Show inference_config attribute object
        • * object Additional properties
      • metadata object
        Hide metadata attribute Show metadata attribute object
        • * object Additional properties
      • minimum_version string
      • model_repository string
      • model_type string
      • packaged_model_id string Required
      • platform_architecture string
      • prefix_strings object
        Hide prefix_strings attributes Show prefix_strings attributes object
        • ingest string

          String prepended to input at ingest

      • size number | string

      • sha256 string
      • tags array[string]
      • vocabulary_file string
    • location object
      Hide location attribute Show location attribute object
      • index object Required
        Hide index attribute Show index attribute object
        • name string Required
    • platform_architecture string
    • prefix_strings object
      Hide prefix_strings attributes Show prefix_strings attributes object
      • ingest string

        String prepended to input at ingest

PUT /_ml/trained_models/{model_id}
curl \
 --request PUT 'http://api.example.com/_ml/trained_models/{model_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"compressed_definition":"string","definition":{"preprocessors":[{"frequency_encoding":{"field":"string","feature_name":"string","frequency_map":{}},"one_hot_encoding":{"field":"string","hot_map":{}},"target_mean_encoding":{"field":"string","feature_name":"string","target_map":{},"default_value":42.0}}],"trained_model":{"tree":{"classification_labels":["string"],"feature_names":["string"],"target_type":"string","tree_structure":[{}]},"tree_node":{"decision_type":"string","default_left":true,"leaf_value":42.0,"left_child":42.0,"node_index":42.0,"right_child":42.0,"split_feature":42.0,"split_gain":42.0,"threshold":42.0},"ensemble":{"classification_labels":["string"],"feature_names":["string"],"target_type":"string","trained_models":[{}]}}},"description":"string","inference_config":{"regression":{"results_field":"string","num_top_feature_importance_values":0},"classification":{"num_top_classes":42.0,"num_top_feature_importance_values":0,"prediction_field_type":"string","results_field":"string","top_classes_results_field":"string"},"text_classification":{"num_top_classes":42.0,"tokenization":{},"results_field":"string","classification_labels":["string"],"vocabulary":{}},"zero_shot_classification":{"tokenization":{},"hypothesis_template":"\"This example is {}.\"","classification_labels":["string"],"results_field":"string","multi_label":false,"labels":["string"]},"fill_mask":{"mask_token":"string","num_top_classes":42.0,"tokenization":{},"results_field":"string","vocabulary":{}},"learning_to_rank":{"default_params":{"additionalProperty1":{},"additionalProperty2":{}},"feature_extractors":[{}],"num_top_feature_importance_values":42.0},"ner":{"tokenization":{},"results_field":"string","classification_labels":["string"],"vocabulary":{}},"pass_through":{"tokenization":{},"results_field":"string","vocabulary":{}},"text_embedding":{"embedding_size":42.0,"tokenization":{},"results_field":"string","vocabulary":{}},"text_expansion":{"tokenization":{},"results_field":"string","vocabulary":{}},"question_answering":{"num_top_classes":42.0,"tokenization":{},"results_field":"string","max_answer_length":42.0}},"input":{"field_names":"string"},"metadata":{},"model_type":"tree_ensemble","model_size_bytes":42.0,"platform_architecture":"string","tags":["string"],"prefix_strings":{"ingest":"string","search":"string"}}'

Migration

Cancel a migration reindex operation Technical preview; Added in 8.18.0

POST /_migration/reindex/{index}/_cancel

Cancel a migration reindex attempt for a data stream or index.

Path parameters

  • index string | array[string] Required

    The index or data stream name

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_migration/reindex/{index}/_cancel
POST /_migration/reindex/my-data-stream/_cancel
resp = client.perform_request(
    "POST",
    "/_migration/reindex/my-data-stream/_cancel",
)
const response = await client.transport.request({
  method: "POST",
  path: "/_migration/reindex/my-data-stream/_cancel",
});
response = client.perform_request(
  "POST",
  "/_migration/reindex/my-data-stream/_cancel",
  {},
)
$requestFactory = Psr17FactoryDiscovery::findRequestFactory();
$request = $requestFactory->createRequest(
    "POST",
    "/_migration/reindex/my-data-stream/_cancel",
);
$resp = $client->sendRequest($request);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_migration/reindex/my-data-stream/_cancel"
client.indices().cancelMigrateReindex(c -> c
    .index("my-data-stream")
);

Create an index from a source index Technical preview; Added in 8.18.0

POST /_create_from/{source}/{dest}

All methods and paths for this operation:

PUT /_create_from/{source}/{dest}

POST /_create_from/{source}/{dest}

Copy the mappings and settings from the source index to a destination index while allowing request settings and mappings to override the source values.

Path parameters

  • source string Required

    The source index or data stream name

  • dest string Required

    The destination index or data stream name

application/json

Body

  • mappings_override object

    Mappings overrides to be applied to the destination index (optional)

    Hide mappings_override attributes Show mappings_override attributes object
    • all_field object
      Hide all_field attributes Show all_field attributes object
      • analyzer string Required
      • enabled boolean Required
      • omit_norms boolean Required
      • search_analyzer string Required
      • similarity string Required
      • store boolean Required
      • store_term_vector_offsets boolean Required
      • store_term_vector_payloads boolean Required
      • store_term_vector_positions boolean Required
      • store_term_vectors boolean Required
    • date_detection boolean
    • dynamic string

      Values are strict, runtime, true, or false.

    • dynamic_date_formats array[string]
    • dynamic_templates array[object]
    • _field_names object
      Hide _field_names attribute Show _field_names attribute object
      • enabled boolean Required
    • index_field object
      Hide index_field attribute Show index_field attribute object
      • enabled boolean Required
    • _meta object
      Hide _meta attribute Show _meta attribute object
      • * object Additional properties
    • numeric_detection boolean
    • properties object
    • _routing object
      Hide _routing attribute Show _routing attribute object
      • required boolean Required
    • _size object
      Hide _size attribute Show _size attribute object
      • enabled boolean Required
    • _source object
      Hide _source attributes Show _source attributes object
      • compress boolean
      • compress_threshold string
      • enabled boolean
      • excludes array[string]
      • includes array[string]
      • mode string

        Supported values include:

        • disabled
        • stored
        • synthetic: Instead of storing source documents on disk exactly as you send them, Elasticsearch can reconstruct source content on the fly upon retrieval.

        Values are disabled, stored, or synthetic.

    • runtime object
      Hide runtime attribute Show runtime attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field
          • format string
        • format string

          A custom format for date type runtime fields.

        • input_field string

          For type lookup

        • target_field string

          For type lookup

        • target_index string

          For type lookup

        • script object

          Painless script executed at query time.

          Hide script attributes Show script attributes object
          • source string

            The script source.

          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          • options object
        • type string Required

          Field type, which can be: boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • enabled boolean
    • subobjects string

      Values are true or false.

    • _data_stream_timestamp object
      Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
      • enabled boolean Required
  • settings_override object

    Settings overrides to be applied to the destination index (optional)

    Index settings
  • remove_index_blocks boolean

    Whether index blocks should be removed when creating the destination index (optional)

    Default value is true.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • index string Required
    • shards_acknowledged boolean Required
POST /_create_from/{source}/{dest}
POST _create_from/my-index/my-new-index
resp = client.perform_request(
    "POST",
    "/_create_from/my-index/my-new-index",
)
const response = await client.transport.request({
  method: "POST",
  path: "/_create_from/my-index/my-new-index",
});
response = client.perform_request(
  "POST",
  "/_create_from/my-index/my-new-index",
  {},
)
$requestFactory = Psr17FactoryDiscovery::findRequestFactory();
$request = $requestFactory->createRequest(
    "POST",
    "/_create_from/my-index/my-new-index",
);
$resp = $client->sendRequest($request);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_create_from/my-index/my-new-index"
client.indices().createFrom(c -> c
    .dest("my-new-index")
    .source("my-index")
    .createFrom(cr -> cr)
);
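
The body parameters above (mappings_override, settings_override, remove_index_blocks) can be supplied in the same way; the following is a sketch using the generic transport method with illustrative override values, not one of the official examples.
resp = client.perform_request(
    "POST",
    "/_create_from/my-index/my-new-index",
    headers={"content-type": "application/json", "accept": "application/json"},
    body={
        # Add one field on top of the mappings copied from the source index.
        "mappings_override": {
            "properties": {"migrated_at": {"type": "date"}}
        },
        # Override a setting copied from the source index.
        "settings_override": {"index": {"number_of_replicas": 0}},
        # Explicitly keep the default behavior of removing index blocks.
        "remove_index_blocks": True,
    },
)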

Rollup

Stop rollup jobs Deprecated Technical preview; Added in 6.3.0

POST /_rollup/job/{id}/_stop

If you try to stop a job that does not exist, an exception occurs. If you try to stop a job that is already stopped, nothing happens.

Since only a stopped job can be deleted, it can be useful to block the API until the indexer has fully stopped. This is accomplished with the wait_for_completion query parameter, and optionally a timeout. For example:

POST _rollup/job/sensor/_stop?wait_for_completion=true&timeout=10s

The parameter blocks the API call from returning until either the job has moved to STOPPED or the specified time has elapsed. If the specified time elapses without the job moving to STOPPED, a timeout exception occurs.

Required authorization

  • Cluster privileges: manage_rollup

Path parameters

  • id string Required

    Identifier for the rollup job.

Query parameters

  • timeout string

    If wait_for_completion is true, the API blocks for (at maximum) the specified duration while waiting for the job to stop. If more than timeout time has passed, the API throws a timeout exception. NOTE: Even if a timeout occurs, the stop request is still processing and eventually moves the job to STOPPED. The timeout simply means the API call itself timed out while waiting for the status change.

    External documentation
  • wait_for_completion boolean

    If set to true, causes the API to block until the indexer state completely stops. If set to false, the API returns immediately and the indexer is stopped asynchronously in the background.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • stopped boolean Required
POST /_rollup/job/{id}/_stop
POST _rollup/job/sensor/_stop?wait_for_completion=true&timeout=10s
resp = client.rollup.stop_job(
    id="sensor",
    wait_for_completion=True,
    timeout="10s",
)
const response = await client.rollup.stopJob({
  id: "sensor",
  wait_for_completion: "true",
  timeout: "10s",
});
response = client.rollup.stop_job(
  id: "sensor",
  wait_for_completion: "true",
  timeout: "10s"
)
$resp = $client->rollup()->stopJob([
    "id" => "sensor",
    "wait_for_completion" => "true",
    "timeout" => "10s",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_rollup/job/sensor/_stop?wait_for_completion=true&timeout=10s"
client.rollup().stopJob(s -> s
    .id("sensor")
    .timeout(t -> t
        .offset(10)
    )
    .waitForCompletion(true)
);

Get a script or search template Generally available

GET /_scripts/{id}

Retrieves a stored script or search template.

Required authorization

  • Cluster privileges: manage

Path parameters

  • id string Required

    The identifier for the stored script or search template.

Query parameters

  • master_timeout string

    The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _id string Required
    • found boolean Required
    • script object
      Hide script attributes Show script attributes object
      • lang string

        The language the script is written in. For search templates, use mustache.

        Supported values include:

        • painless: Painless scripting language, purpose-built for Elasticsearch.
        • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
        • mustache: Mustache templating, used for templates.
        • java: Expert Java API

        Values are painless, expression, mustache, or java.

      • options object
        Hide options attribute Show options attribute object
        • * string Additional properties
      • source string Required

        The script source. For search templates, an object containing the search template.

GET /_scripts/{id}
GET _scripts/my-search-template
resp = client.get_script(
    id="my-search-template",
)
const response = await client.getScript({
  id: "my-search-template",
});
response = client.get_script(
  id: "my-search-template"
)
$resp = $client->getScript([
    "id" => "my-search-template",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_scripts/my-search-template"
client.getScript(g -> g
    .id("my-search-template")
);
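
For context, a stored search template is created with the create or update stored script API before it can be retrieved. A minimal sketch with the Python client, assuming a locally running cluster, an API key, and an illustrative Mustache template named my-search-template:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# Store an illustrative Mustache search template.
client.put_script(
    id="my-search-template",
    script={
        "lang": "mustache",
        "source": {
            "query": {"match": {"message": "{{query_string}}"}},
            "from": "{{from}}",
            "size": "{{size}}",
        },
    },
)

# Retrieve it; the response contains `_id`, `found`, and the `script` object.
resp = client.get_script(id="my-search-template")
print(resp["found"], resp["script"]["lang"])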

Delete a script or search template Generally available

DELETE /_scripts/{id}

Deletes a stored script or search template.

Required authorization

  • Cluster privileges: manage

Path parameters

  • id string Required

    The identifier for the stored script or search template.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.

    External documentation
  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.

    External documentation

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_scripts/{id}
DELETE _scripts/my-search-template
resp = client.delete_script(
    id="my-search-template",
)
const response = await client.deleteScript({
  id: "my-search-template",
});
response = client.delete_script(
  id: "my-search-template"
)
$resp = $client->deleteScript([
    "id" => "my-search-template",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_scripts/my-search-template"
client.deleteScript(d -> d
    .id("my-search-template")
);

Get async search results Generally available; Added in 7.7.0

GET /_async_search/{id}

Retrieve the results of a previously submitted asynchronous search request. If the Elasticsearch security features are enabled, access to the results of a specific async search is restricted to the user or API key that submitted it.

Path parameters

  • id string Required

    A unique identifier for the async search.

Query parameters

  • keep_alive string

    The length of time that the async search should be available in the cluster. When not specified, the keep_alive set with the corresponding submit async request will be used. Otherwise, it is possible to override the value and extend the validity of the request. When this period expires, the search, if still running, is cancelled. If the search is completed, its saved results are deleted.

    External documentation
  • typed_keys boolean

    Specify whether aggregation and suggester names should be prefixed by their respective types in the response

  • wait_for_completion_timeout string

    Specifies how long to wait for the search to complete, up to the provided timeout. Final results will be returned if available before the timeout expires; otherwise the currently available results will be returned once the timeout expires. By default no timeout is set, meaning that the currently available results will be returned without any additional wait.

    External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • is_partial boolean Required

      When the query is no longer running, this property indicates whether the search failed or was successfully completed on all shards. While the query is running, is_partial is always set to true.

    • is_running boolean Required

      Indicates whether the search is still running or has completed.


      If the search failed after some shards returned their results or the node that is coordinating the async search dies, results may be partial even though is_running is false.

    • expiration_time string | number

      Indicates when the async search will expire.

      One of: a string, or a number (time unit for milliseconds).

    • start_time string | number

      One of: a string, or a number (time unit for milliseconds).

    • completion_time string | number

      Indicates when the async search completed. It is present only when the search has completed.

      One of: a string, or a number (time unit for milliseconds).

    • response object Required
      Hide response attributes Show response attributes object
      • aggregations object

        Partial aggregations results, coming from the shards that have already completed running the query.

      • _clusters object
        Hide _clusters attributes Show _clusters attributes object
        • skipped number Required
        • successful number Required
        • total number Required
        • running number Required
        • partial number Required
        • failed number Required
        • details object
      • fields object
        Hide fields attribute Show fields attribute object
        • * object Additional properties
      • hits object Required
        Hide hits attributes Show hits attributes object
        • total
        • hits array[object] Required
        • max_score
      • max_score number
      • num_reduce_phases number

        Indicates how many reductions of the results have been performed. If this number increases compared to the last retrieved results for a get async search request, you can expect additional results included in the search response.

      • profile object
        Hide profile attribute Show profile attribute object
        • shards array[object] Required
      • pit_id string
      • _scroll_id string
      • _shards object Required

        Indicates how many shards have run the query. Note that in order for shard results to be included in the search response, they need to be reduced first.

        Hide _shards attribute Show _shards attribute object
        • failures array[object]
      • suggest object
        Hide suggest attribute Show suggest attribute object
        • * array[object] Additional properties
      • terminated_early boolean
      • timed_out boolean Required
      • took number Required
GET /_async_search/{id}
GET /_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=
resp = client.async_search.get(
    id="FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
)
const response = await client.asyncSearch.get({
  id: "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
});
response = client.async_search.get(
  id: "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc="
)
$resp = $client->asyncSearch()->get([
    "id" => "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc="
client.asyncSearch().get(g -> g
    .id("FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=")
);
Response examples (200)
A successful response from `GET /_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=`.
{
  "id" : "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
  "is_partial" : false, 
  "is_running" : false, 
  "start_time_in_millis" : 1583945890986,
  "expiration_time_in_millis" : 1584377890986, 
  "completion_time_in_millis" : 1583945903130, 
  "response" : {
    "took" : 12144,
    "timed_out" : false,
    "num_reduce_phases" : 46, 
    "_shards" : {
      "total" : 562,
      "successful" : 188, 
      "skipped" : 0,
      "failed" : 0
    },
    "hits" : {
      "total" : {
        "value" : 456433,
        "relation" : "eq"
      },
      "max_score" : null,
      "hits" : [ ]
    },
    "aggregations" : { 
      "sale_date" :  {
        "buckets" : []
      }
    }
  }
}
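
For orientation, the usual flow is to submit an async search, poll this get API until is_running is false, and then delete the stored results. A minimal sketch with the Python client, assuming a locally running cluster and an illustrative index name:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# Submit an async search; return early if it is not finished within 1s.
submitted = client.async_search.submit(
    index="my-index-000001",
    query={"match_all": {}},
    wait_for_completion_timeout="1s",
    keep_alive="5m",
)

# An id is only present while results are still kept in the cluster.
search_id = submitted.get("id")
if search_id:
    # Poll for results, extending keep_alive on each call.
    result = client.async_search.get(
        id=search_id,
        wait_for_completion_timeout="2s",
        keep_alive="5m",
    )
    if not result["is_running"]:
        print(result["response"]["hits"]["total"])
        # Clean up the saved results once they are no longer needed.
        client.async_search.delete(id=search_id)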

Delete an async search Generally available; Added in 7.7.0

DELETE /_async_search/{id}

If the asynchronous search is still running, it is cancelled. Otherwise, the saved search results are deleted. If the Elasticsearch security features are enabled, the deletion of a specific async search is restricted to: the authenticated user that submitted the original search request; users that have the cancel_task cluster privilege.

Path parameters

  • id string Required

    A unique identifier for the async search.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_async_search/{id}
DELETE /_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=
resp = client.async_search.delete(
    id="FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
)
const response = await client.asyncSearch.delete({
  id: "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
});
response = client.async_search.delete(
  id: "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc="
)
$resp = $client->asyncSearch()->delete([
    "id" => "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc="
client.asyncSearch().delete(d -> d
    .id("FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=")
);

Run a knn search Deprecated Technical preview; Added in 8.0.0

POST /{index}/_knn_search

All methods and paths for this operation:

GET /{index}/_knn_search

POST /{index}/_knn_search

NOTE: The kNN search API has been replaced by the knn option in the search API.

Perform a k-nearest neighbor (kNN) search on a dense_vector field and return the matching documents. Given a query vector, the API finds the k closest vectors and returns those documents as search hits.

Elasticsearch uses the HNSW algorithm to support efficient kNN search. Like most kNN algorithms, HNSW is an approximate method that sacrifices result accuracy for improved search speed. This means the results returned are not always the true k closest neighbors.

The kNN search API supports restricting the search using a filter. The search will return the top k documents that also match the filter query.

A kNN search response has the exact same structure as a search API response. However, certain sections have a meaning specific to kNN search:

  • The document _score is determined by the similarity between the query and document vector.
  • The hits.total object contains the total number of nearest neighbor candidates considered, which is num_candidates * num_shards. The hits.total.relation will always be eq, indicating an exact value.

Path parameters

  • index string | array[string] Required

    A comma-separated list of index names to search; use _all or * to perform the operation on all indices.

Query parameters

  • routing string

    A comma-separated list of specific routing values.

application/json

Body

  • _source boolean | object

    Indicates which source fields are returned for matching documents. These fields are returned in the hits._source property of the search response.

  • docvalue_fields array[object]

    The request returns doc values for field names matching these patterns in the hits.fields property of the response. It accepts wildcard (*) patterns.

    A reference to a field with formatting instructions on how to return the value

    Hide docvalue_fields attributes Show docvalue_fields attributes object
    • field string Required

      A wildcard pattern. The request returns values for field names matching this pattern.

    • format string

      The format in which the values are returned.

    • include_unmapped boolean
  • stored_fields string | array[string]

    A list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. You can pass _source: true to return both source fields and stored fields in the search response.

  • fields string | array[string]

    The request returns values for field names matching these patterns in the hits.fields property of the response. It accepts wildcard (*) patterns.

  • filter object | array[object]

    A query to filter the documents that can match. The kNN search will return the top k documents that also match this filter. The value can be a single query or a list of queries. If filter isn't provided, all documents are allowed to match.

    One of:

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • knn object Required

    The kNN query to run.

    Hide knn attributes Show knn attributes object
    • field string Required

      The name of the vector field to search against

    • query_vector array[number] Required

      The query vector

    • k number Required

      The final number of nearest neighbors to return as top hits

    • num_candidates number Required

      The number of nearest neighbor candidates to consider per shard

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number Required

      The milliseconds it took Elasticsearch to run the request.

    • timed_out boolean Required

      If true, the request timed out before completion; returned results may be partial or empty.

    • _shards object Required

      A count of shards used for the request.

      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • hits object Required

      The returned documents and metadata.

      Hide hits attributes Show hits attributes object
      • total object | number

        Total hit count information, present only if track_total_hits wasn't false in the search request.

        One of:
        Hide attributes Show attributes
        • relation string Required

          Supported values include:

          • eq: Accurate
          • gte: Lower bound, including returned events or sequences

          Values are eq or gte.

        • value number Required
      • hits array[object] Required
        Hide hits attributes Show hits attributes object
        • _index string Required
        • _id string
        • _score number | string | null

        • _explanation object
        • fields object
          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • highlight object
          Hide highlight attribute Show highlight attribute object
          • * array[string] Additional properties
        • inner_hits object
          Hide inner_hits attribute Show inner_hits attribute object
          • * object Additional properties
        • matched_queries array[string] | object

        • _nested object
        • _ignored array[string]
        • ignored_field_values object
          Hide ignored_field_values attribute Show ignored_field_values attribute object
          • * array[number | string | boolean | null | object] Additional properties
        • _shard string
        • _node string
        • _routing string
        • _source object
        • _rank number
        • _seq_no number
        • _primary_term number
        • _version number
        • sort array[number | string | boolean | null | object]
      • max_score number | string | null

    • fields object

      The field values for the documents. These fields must be specified in the request using the fields parameter.

      Hide fields attribute Show fields attribute object
      • * object Additional properties
    • max_score number

      The highest returned document score. This value is null for requests that do not sort by score.

POST /{index}/_knn_search
curl \
 --request POST 'http://api.example.com/{index}/_knn_search' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"_source":true,"docvalue_fields":[{"field":"string","format":"string","include_unmapped":true}],"stored_fields":"string","fields":"string","filter":{},"knn":{"field":"string","query_vector":[42.0],"k":42.0,"num_candidates":42.0}}'
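
As a concrete illustration of the request body above, here is a minimal sketch with the Python client; the index name, dense_vector field, vector values, and filter are assumptions, and the deprecated knn_search helper simply wraps this endpoint:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# Deprecated endpoint; prefer the `knn` option of the search API.
resp = client.knn_search(
    index="my-image-index",             # illustrative index
    knn={
        "field": "image_vector",        # assumed dense_vector field
        "query_vector": [0.3, 0.1, 1.2],
        "k": 10,                        # top hits to return
        "num_candidates": 100,          # candidates considered per shard
    },
    filter={"term": {"file_type": "png"}},  # optional filter query
)
print(resp["hits"]["total"])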

Run multiple searches Generally available; Added in 1.3.0

POST /{index}/_msearch

All methods and paths for this operation:

GET /_msearch

POST /_msearch
GET /{index}/_msearch
POST /{index}/_msearch

The format of the request is similar to the bulk API format and makes use of the newline delimited JSON (NDJSON) format. The structure is as follows:

header\n
body\n
header\n
body\n

This structure is specifically optimized to reduce parsing if a specific search ends up redirected to another node.

IMPORTANT: The final line of data must end with a newline character \n. Each newline character may be preceded by a carriage return \r. When sending requests to this endpoint the Content-Type header should be set to application/x-ndjson.
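
For orientation, client libraries assemble this NDJSON body from a list of header/body pairs, and every sub-request gets its own entry in the responses array, so failures can be handled per search. A minimal sketch with the Python client, using illustrative index names:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

resp = client.msearch(
    index="my-index-000001",
    searches=[
        {},
        {"query": {"match": {"message": "this is a test"}}},
        {"index": "my-index-000002"},
        {"query": {"match_all": {}}},
    ],
)
for item in resp["responses"]:
    # A failed sub-search carries an `error` object instead of hits.
    if "error" in item:
        print("failed:", item["error"]["type"])
    else:
        print("hits:", item["hits"]["total"])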

Required authorization

  • Index privileges: read

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and index aliases to search.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • ccs_minimize_roundtrips boolean

    If true, network roundtrips between the coordinating node and remote clusters are minimized for cross-cluster search requests.

  • expand_wildcards string | array[string]

    Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_throttled boolean

    If true, concrete, expanded or aliased indices are ignored when frozen.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • include_named_queries_score boolean

    Indicates whether hit.matched_queries should be rendered as a map that includes the name of the matched query associated with its score (true) or as an array containing the name of the matched queries (false). This functionality reruns each named query on every hit in a search response. Typically, this adds a small overhead to a request. However, using computationally expensive named queries on a large number of hits may add significant overhead.

  • index string | array[string]

    Comma-separated list of data streams, indices, and index aliases to use as default

  • max_concurrent_searches number

    Maximum number of concurrent searches the multi search API can execute.

  • max_concurrent_shard_requests number

    Maximum number of concurrent shard requests that each sub-search request executes per node.

  • pre_filter_shard_size number

    Defines a threshold that enforces a pre-filter roundtrip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if, for instance, a shard can not match any documents based on its rewrite method (for example, if date filters are mandatory to match but the shard bounds and the query are disjoint).

  • rest_total_hits_as_int boolean

    If true, hits.total are returned as an integer in the response. Defaults to false, which returns an object.

  • routing string

    Custom routing value used to route search operations to a specific shard.

  • search_type string

    Indicates whether global term and document frequencies should be used when scoring returned documents.

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • typed_keys boolean

    Specifies whether aggregation and suggester names should be prefixed by their respective types in the response.

application/json

Body object Required

One of:

Contains parameters used to limit or change the subsequent search body request.

  • allow_no_indices boolean
  • expand_wildcards string | array[string]

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.
  • ignore_unavailable boolean
  • index string | array[string]
  • preference string
  • request_cache boolean
  • routing string
  • search_type string

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • ccs_minimize_roundtrips boolean
  • allow_partial_search_results boolean
  • ignore_throttled boolean

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number Required
    • responses array[object] Required
      One of:
      Hide attributes Show attributes
      • took number Required

        The number of milliseconds it took Elasticsearch to run the request. This value is calculated by measuring the time elapsed between receipt of a request on the coordinating node and the time at which the coordinating node is ready to send the response. It includes:

        • Communication time between the coordinating node and data nodes
        • Time the request spends in the search thread pool, queued for execution
        • Actual run time

        It does not include:

        • Time needed to send the request to Elasticsearch
        • Time needed to serialize the JSON response
        • Time needed to send the response to a client
      • timed_out boolean Required

        If true, the request timed out before completion; returned results may be partial or empty.

      • _shards object Required

        A count of shards used for the request.

      • hits object Required

        The returned documents and metadata.

      • aggregations object
      • _clusters object
      • fields object
        Hide fields attribute Show fields attribute object
        • * object Additional properties
      • max_score number
      • num_reduce_phases number
      • profile object
      • pit_id string
      • _scroll_id string

        The identifier for the search and its search context. You can use this scroll ID with the scroll API to retrieve the next batch of search results for the request. This property is returned only if the scroll query parameter is specified in the request.

      • suggest object
        Hide suggest attribute Show suggest attribute object
        • * array[object] Additional properties
      • terminated_early boolean
      • status number
POST /{index}/_msearch
GET my-index-000001/_msearch
{ }
{"query" : {"match" : { "message": "this is a test"}}}
{"index": "my-index-000002"}
{"query" : {"match_all" : {}}}
resp = client.msearch(
    index="my-index-000001",
    searches=[
        {},
        {
            "query": {
                "match": {
                    "message": "this is a test"
                }
            }
        },
        {
            "index": "my-index-000002"
        },
        {
            "query": {
                "match_all": {}
            }
        }
    ],
)
const response = await client.msearch({
  index: "my-index-000001",
  searches: [
    {},
    {
      query: {
        match: {
          message: "this is a test",
        },
      },
    },
    {
      index: "my-index-000002",
    },
    {
      query: {
        match_all: {},
      },
    },
  ],
});
response = client.msearch(
  index: "my-index-000001",
  body: [
    {},
    {
      "query": {
        "match": {
          "message": "this is a test"
        }
      }
    },
    {
      "index": "my-index-000002"
    },
    {
      "query": {
        "match_all": {}
      }
    }
  ]
)
$resp = $client->msearch([
    "index" => "my-index-000001",
    "body" => array(
        new ArrayObject([]),
        [
            "query" => [
                "match" => [
                    "message" => "this is a test",
                ],
            ],
        ],
        [
            "index" => "my-index-000002",
        ],
        [
            "query" => [
                "match_all" => new ArrayObject([]),
            ],
        ],
    ),
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '[{},{"query":{"match":{"message":"this is a test"}}},{"index":"my-index-000002"},{"query":{"match_all":{}}}]' "$ELASTICSEARCH_URL/my-index-000001/_msearch"
Request example
An example body for a `GET my-index-000001/_msearch` request.
{ }
{"query" : {"match" : { "message": "this is a test"}}}
{"index": "my-index-000002"}
{"query" : {"match_all" : {}}}




Open a point in time Generally available; Added in 7.10.0

POST /{index}/_pit

A search request by default runs against the most recent visible data of the target indices, which is called the point in time. A point in time (PIT) is a lightweight view into the state of the data as it existed when it was initiated. In some cases, it’s preferred to perform multiple search requests using the same point in time. For example, if refreshes happen between search_after requests, then the results of those requests might not be consistent as changes happening between searches are only visible to the more recent point in time.

A point in time must be opened explicitly before being used in search requests.

A subsequent search request with the pit parameter must not specify index, routing, or preference values as these parameters are copied from the point in time.

Just like regular searches, you can use from and size to page through point in time search results, up to the first 10,000 hits. If you want to retrieve more hits, use PIT with search_after.

IMPORTANT: The open point in time request and each subsequent search request can return different identifiers; always use the most recently received ID for the next search request.

When a PIT that contains shard failures is used in a search request, the missing shards are always reported in the search response as a NoShardAvailableActionException exception. To get rid of these exceptions, a new PIT needs to be created so that shards missing from the previous PIT can be handled, assuming they become available in the meantime.

Keeping point in time alive

The keep_alive parameter, which is passed to an open point in time request and search request, extends the time to live of the corresponding point in time. The value does not need to be long enough to process all data; it just needs to be long enough for the next request.

Normally, the background merge process optimizes the index by merging together smaller segments to create new, bigger segments. Once the smaller segments are no longer needed they are deleted. However, open point-in-times prevent the old segments from being deleted since they are still in use.

TIP: Keeping older segments alive means that more disk space and file handles are needed. Ensure that you have configured your nodes to have ample free file handles.

Additionally, if a segment contains deleted or updated documents then the point in time must keep track of whether each document in the segment was live at the time of the initial search request. Ensure that your nodes have sufficient heap space if you have many open point-in-times on an index that is subject to ongoing deletes or updates. Note that a point-in-time doesn't prevent its associated indices from being deleted. You can check how many point-in-times (that is, search contexts) are open with the nodes stats API.

Required authorization

  • Index privileges: read

Path parameters

  • index string | array[string] Required

    A comma-separated list of index names to open point in time; use _all or empty string to perform the operation on all indices

Query parameters

  • keep_alive string Required

    Extend the length of time that the point in time persists.

    External documentation
  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • preference string

    The node or shard the operation should be performed on. By default, it is random.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • allow_partial_search_results boolean

    Indicates whether the point in time tolerates unavailable shards or shard failures when initially creating the PIT. If false, creating a point in time request when a shard is missing or unavailable will throw an exception. If true, the point in time will contain all the shards that are available at the time of the request.

  • max_concurrent_shard_requests number

    Maximum number of concurrent shard requests that each sub-search request executes per node.

application/json

Body

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _shards object Required

      Shards used to create the PIT

      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • id string Required
POST /{index}/_pit
POST /my-index-000001/_pit?keep_alive=1m&allow_partial_search_results=true
resp = client.open_point_in_time(
    index="my-index-000001",
    keep_alive="1m",
    allow_partial_search_results=True,
)
const response = await client.openPointInTime({
  index: "my-index-000001",
  keep_alive: "1m",
  allow_partial_search_results: "true",
});
response = client.open_point_in_time(
  index: "my-index-000001",
  keep_alive: "1m",
  allow_partial_search_results: "true"
)
$resp = $client->openPointInTime([
    "index" => "my-index-000001",
    "keep_alive" => "1m",
    "allow_partial_search_results" => "true",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_pit?keep_alive=1m&allow_partial_search_results=true"
client.openPointInTime(o -> o
    .allowPartialSearchResults(true)
    .index("my-index-000001")
    .keepAlive(k -> k
        .offset(1)
    )
);
Response examples (200)
A successful response from `POST /my-index-000001/_pit?keep_alive=1m&allow_partial_search_results=true`. It includes a summary of the total number of shards, as well as the number of successful shards when creating the PIT.
{
  "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=",
  "_shards": {
    "total": 10,
    "successful": 10,
    "skipped": 0,
    "failed": 0
  }
}
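
Putting the pieces together, a point in time is typically opened, reused across search_after pages, and closed when finished. A minimal sketch with the Python client; the index name and sort field are illustrative:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# Open a PIT against an illustrative index.
pit = client.open_point_in_time(index="my-index-000001", keep_alive="1m")
pit_id = pit["id"]

search_after = None
while True:
    kwargs = {
        "size": 1000,
        "pit": {"id": pit_id, "keep_alive": "1m"},  # no index on the search itself
        "sort": [{"@timestamp": "asc"}],            # assumed sort field
    }
    if search_after is not None:
        kwargs["search_after"] = search_after
    resp = client.search(**kwargs)
    hits = resp["hits"]["hits"]
    if not hits:
        break
    search_after = hits[-1]["sort"]
    pit_id = resp.get("pit_id", pit_id)             # always reuse the latest PIT ID

# Release the search context when done.
client.close_point_in_time(id=pit_id)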




Render a search template Generally available

POST /_render/template/{id}

All methods and paths for this operation:

GET /_render/template

POST /_render/template
GET /_render/template/{id}
POST /_render/template/{id}

Render a search template as a search request body.

Required authorization

  • Index privileges: read

Path parameters

  • id string Required

    The ID of the search template to render. If no source is specified, this or the id request body parameter is required.

application/json

Body

  • id string

    The ID of the search template to render. If no source is specified, this or the <template-id> request path parameter is required. If you specify both this parameter and the <template-id> parameter, the API uses only <template-id>.

  • file string
  • params object

    Key-value pairs used to replace Mustache variables in the template. The key is the variable name. The value is the variable value.

    Hide params attribute Show params attribute object
    • * object Additional properties
  • source string

    An inline search template. It supports the same parameters as the search API's request body. These parameters also support Mustache variables. If no id or <template-id> is specified, this parameter is required.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • template_output object Required
      Hide template_output attribute Show template_output attribute object
      • * object Additional properties
POST /_render/template/{id}
POST _render/template
{
  "id": "my-search-template",
  "params": {
    "query_string": "hello world",
    "from": 20,
    "size": 10
  }
}
resp = client.render_search_template(
    id="my-search-template",
    params={
        "query_string": "hello world",
        "from": 20,
        "size": 10
    },
)
const response = await client.renderSearchTemplate({
  id: "my-search-template",
  params: {
    query_string: "hello world",
    from: 20,
    size: 10,
  },
});
response = client.render_search_template(
  body: {
    "id": "my-search-template",
    "params": {
      "query_string": "hello world",
      "from": 20,
      "size": 10
    }
  }
)
$resp = $client->renderSearchTemplate([
    "body" => [
        "id" => "my-search-template",
        "params" => [
            "query_string" => "hello world",
            "from" => 20,
            "size" => 10,
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"id":"my-search-template","params":{"query_string":"hello world","from":20,"size":10}}' "$ELASTICSEARCH_URL/_render/template"
client.renderSearchTemplate(r -> r
    .id("my-search-template")
    .params(Map.of("size", JsonData.fromJson("10"),"from", JsonData.fromJson("20"),"query_string", JsonData.fromJson("\"hello world\"")))
);
Request example
Run `POST _render/template`
{
  "id": "my-search-template",
  "params": {
    "query_string": "hello world",
    "from": 20,
    "size": 10
  }
}
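
An inline template can also be rendered without storing it first by passing source instead of an id. A minimal sketch with the Python client; the template body and parameters are illustrative:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# Render an inline Mustache template; nothing is stored or executed.
resp = client.render_search_template(
    source='{"query": {"match": {"message": "{{query_string}}"}}, "size": "{{size}}"}',
    params={"query_string": "hello world", "size": 10},
)
print(resp["template_output"])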

Run a search Generally available

POST /{index}/_search

All methods and paths for this operation:

GET /_search

POST /_search
GET /{index}/_search
POST /{index}/_search

Get search hits that match the query defined in the request. You can provide search queries using the q query string parameter or the request body. If both are specified, only the q query string parameter is used.

If the Elasticsearch security features are enabled, you must have the read index privilege for the target data stream, index, or alias. For cross-cluster search, refer to the documentation about configuring CCS privileges. To search a point in time (PIT) for an alias, you must have the read index privilege for the alias's data streams or indices.
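
For illustration of the q parameter versus a request body query described above, a minimal sketch with the Python client; the index name and field are assumptions, and if both were supplied the q parameter would win:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# Query via the q query string parameter (Lucene query string syntax).
resp_q = client.search(index="my-index-000001", q="message:test")

# A similar query via the request body (Query DSL).
resp_body = client.search(
    index="my-index-000001",
    query={"match": {"message": "test"}},
)
print(resp_q["hits"]["total"], resp_body["hits"]["total"])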

Search slicing

When paging through a large number of documents, it can be helpful to split the search into multiple slices to consume them independently with the slice and pit properties. By default the splitting is done first on the shards, then locally on each shard. The local splitting partitions the shard into contiguous ranges based on Lucene document IDs.

For instance if the number of shards is equal to 2 and you request 4 slices, the slices 0 and 2 are assigned to the first shard and the slices 1 and 3 are assigned to the second shard.

IMPORTANT: The same point-in-time ID should be used for all slices. If different PIT IDs are used, slices can overlap and miss documents. This situation can occur because the splitting criterion is based on Lucene document IDs, which are not stable across changes to the index.
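
A minimal sketch of sliced paging with the Python client, assuming the same PIT ID is shared by every slice and an illustrative index; in practice each slice would typically be consumed by a separate worker:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# One PIT, reused by every slice so the slices stay consistent.
pit_id = client.open_point_in_time(index="my-index-000001", keep_alive="1m")["id"]

for slice_id in range(2):
    resp = client.search(
        pit={"id": pit_id, "keep_alive": "1m"},
        slice={"id": slice_id, "max": 2},   # slice 0 and 1 of 2
        query={"match_all": {}},
        size=100,
    )
    print(slice_id, resp["hits"]["total"]["value"])

client.close_point_in_time(id=pit_id)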

Required authorization

  • Index privileges: read
External documentation

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • allow_partial_search_results boolean

    If true and there are shard request timeouts or shard failures, the request returns partial results. If false, it returns an error with no partial results.

    To override the default behavior, you can set the search.default_allow_partial_results cluster setting to false.

  • analyzer string

    The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • batched_reduce_size number

    The number of shard results that should be reduced at once on the coordinating node. If the potential number of shards in the request can be large, this value should be used as a protection mechanism to reduce the memory overhead per search request.

  • ccs_minimize_roundtrips boolean

    If true, network round-trips between the coordinating node and the remote clusters are minimized when running cross-cluster search (CCS) requests.

  • default_operator string

    The default operator for the query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as a default when no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • docvalue_fields string | array[string]

    A comma-separated list of fields to return as the docvalue representation of a field for each hit.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • explain boolean

    If true, the request returns detailed information about score computation as part of a hit.

  • ignore_throttled boolean Deprecated

    If true, concrete, expanded or aliased indices will be ignored when frozen.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • include_named_queries_score boolean

    If true, the response includes the score contribution from any named queries.

    This functionality reruns each named query on every hit in a search response. Typically, this adds a small overhead to a request. However, using computationally expensive named queries on a large number of hits may add significant overhead.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • max_concurrent_shard_requests number

    The number of concurrent shard requests per node that the search runs concurrently. This value should be used to limit the impact of the search on the cluster in order to limit the number of concurrent shard requests.

  • min_compatible_shard_node string

    The minimum version of the node that can handle the request. Any handling node with a lower version will fail the request.

  • preference string

    The nodes and shards used for the search. By default, Elasticsearch selects from eligible nodes and shards using adaptive replica selection, accounting for allocation awareness. Valid values are:

    • _only_local to run the search only on shards on the local node;
    • _local to, if possible, run the search on shards on the local node, or if not, select shards using the default method;
    • _only_nodes:<node-id>,<node-id> to run the search on only the specified nodes IDs, where, if suitable shards exist on more than one selected node, use shards on those nodes using the default method, or if none of the specified nodes are available, select shards from any available node using the default method;
    • _prefer_nodes:<node-id>,<node-id> to if possible, run the search on the specified nodes IDs, or if not, select shards using the default method;
    • _shards:<shard>,<shard> to run the search only on the specified shards;
    • <custom-string> (any string that does not start with _) to route searches with the same <custom-string> to the same shards in the same order.
  • pre_filter_shard_size number

    A threshold that enforces a pre-filter roundtrip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if for instance a shard can not match any documents based on its rewrite method (if date filters are mandatory to match but the shard bounds and the query are disjoint). When unspecified, the pre-filter phase is executed if any of these conditions is met:

    • The request targets more than 128 shards.
    • The request targets one or more read-only index.
    • The primary sort of the query targets an indexed field.
  • request_cache boolean

    If true, the caching of search results is enabled for requests where size is 0. It defaults to index level settings.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • scroll string

    The period to retain the search context for scrolling. By default, this value cannot exceed 1d (24 hours). You can change this limit by using the search.max_keep_alive cluster-level setting.

    External documentation
  • search_type string

    Indicates how distributed term frequencies are calculated for relevance scoring.

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • stats array[string]

    Specific tag of the request for logging and statistical purposes.

  • stored_fields string | array[string]

    A comma-separated list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. You can pass _source: true to return both source fields and stored fields in the search response.

  • suggest_field string

    The field to use for suggestions.

  • suggest_mode string

    The suggest mode. This parameter can be used only when the suggest_field and suggest_text query string parameters are specified.

    Supported values include:

    • missing: Only generate suggestions for terms that are not in the shard.
    • popular: Only suggest terms that occur in more docs on the shard than the original term.
    • always: Suggest any matching suggestions based on terms in the suggest text.

    Values are missing, popular, or always.

  • suggest_size number

    The number of suggestions to return. This parameter can be used only when the suggest_field and suggest_text query string parameters are specified.

  • suggest_text string

    The source text for which the suggestions should be returned. This parameter can be used only when the suggest_field and suggest_text query string parameters are specified.

  • terminate_after number

    The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.

    IMPORTANT: Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers. If set to 0 (default), the query does not terminate early.

  • timeout string

    The period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. It defaults to no timeout.

    External documentation
  • track_total_hits boolean | number

    The number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query.

  • track_scores boolean

    If true, the request calculates and returns document scores, even if the scores are not used for sorting.

  • typed_keys boolean

    If true, aggregation and suggester names are prefixed by their respective types in the response.

  • rest_total_hits_as_int boolean

    Indicates whether hits.total should be rendered as an integer or an object in the rest search response.

  • version boolean

    If true, the request returns the document version as part of a hit.

  • _source boolean | string | array[string]

    The source fields that are returned for matching documents. These fields are returned in the hits._source property of the search response. Valid values are:

    • true to return the entire document source.
    • false to not return the document source.
    • <string> to return the source fields that are specified as a comma-separated list that supports wildcard (*) patterns.
  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • seq_no_primary_term boolean

    If true, the request returns the sequence number and primary term of the last modification of each hit.

  • q string

    A query in the Lucene query string syntax. Query parameter searches do not support the full Elasticsearch Query DSL but are handy for testing.

    IMPORTANT: This parameter overrides the query parameter in the request body. If both parameters are specified, documents matching the query request body parameter are not returned.

  • size number

    The number of hits to return. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.

  • from number

    The starting document offset, which must be non-negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.

  • sort string | array[string]

    A comma-separated list of <field>:<direction> pairs.

application/json

Body

  • aggregations object

    Defines the aggregations that are run as part of the search request.

  • collapse object

    Collapses search results by the values of a specified field.

    External documentation
  • explain boolean

    If true, the request returns detailed information about score computation as part of a hit.

    Default value is false.

  • ext object

    Configuration of search extensions defined by Elasticsearch plugins.

    Hide ext attribute Show ext attribute object
    • * object Additional properties
  • from number

    The starting document offset, which must be non-negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.

    Default value is 0.

  • highlight object

    Specifies the highlighter to use for retrieving highlighted snippets from one or more fields in your search results.

    Hide highlight attributes Show highlight attributes object
    • type string

      Supported values include:

      • plain: The plain highlighter uses the standard Lucene highlighter
      • fvh: The fvh highlighter uses the Lucene Fast Vector highlighter.
      • unified: The unified highlighter uses the Lucene Unified Highlighter.

      Values are plain, fvh, or unified.

    • boundary_chars string

      A string that contains each boundary character.

      Default value is .,!? \t\n.

    • boundary_max_scan number

      How far to scan for boundary characters.

      Default value is 20.

    • boundary_scanner string

      Specifies how to break the highlighted fragments: chars, sentence, or word. Only valid for the unified and fvh highlighters. Defaults to sentence for the unified highlighter. Defaults to chars for the fvh highlighter.

      Supported values include:

      • chars: Use the characters specified by boundary_chars as highlighting boundaries. The boundary_max_scan setting controls how far to scan for boundary characters. Only valid for the fvh highlighter.
      • sentence: Break highlighted fragments at the next sentence boundary, as determined by Java’s BreakIterator. You can specify the locale to use with boundary_scanner_locale. When used with the unified highlighter, the sentence scanner splits sentences bigger than fragment_size at the first word boundary next to fragment_size. You can set fragment_size to 0 to never split any sentence.
      • word: Break highlighted fragments at the next word boundary, as determined by Java’s BreakIterator. You can specify the locale to use with boundary_scanner_locale.

      Values are chars, sentence, or word.

    • boundary_scanner_locale string

      Controls which locale is used to search for sentence and word boundaries. This parameter takes a form of a language tag, for example: "en-US", "fr-FR", "ja-JP".

      Default value is Locale.ROOT.

    • force_source boolean Deprecated
    • fragmenter string

      Specifies how text should be broken up in highlight snippets: simple or span. Only valid for the plain highlighter.

      Values are simple or span.

    • fragment_size number

      The size of the highlighted fragment in characters.

      Default value is 100.

    • highlight_filter boolean
    • highlight_query object

      Highlight matches for a query other than the search query. This is especially useful if you use a rescore query because those are not taken into account by highlighting by default.

      External documentation
    • max_fragment_length number
    • max_analyzed_offset number

      If set to a non-negative value, highlighting stops at this defined maximum limit. The rest of the text is not processed, thus not highlighted, and no error is returned. The max_analyzed_offset query setting does not override the index.highlight.max_analyzed_offset setting, which prevails when it’s set to a lower value than the query setting.

    • no_match_size number

      The amount of text you want to return from the beginning of the field if there are no matching fragments to highlight.

      Default value is 0.

    • number_of_fragments number

      The maximum number of fragments to return. If the number of fragments is set to 0, no fragments are returned. Instead, the entire field contents are highlighted and returned. This can be handy when you need to highlight short texts such as a title or address, but fragmentation is not required. If number_of_fragments is 0, fragment_size is ignored.

      Default value is 5.

    • options object
      Hide options attribute Show options attribute object
      • * object Additional properties
    • order string

      Sorts highlighted fragments by score when set to score. By default, fragments will be output in the order they appear in the field (order: none). Setting this option to score will output the most relevant fragments first. Each highlighter applies its own logic to compute relevancy scores.

      Value is score.

    • phrase_limit number

      Controls the number of matching phrases in a document that are considered. Prevents the fvh highlighter from analyzing too many phrases and consuming too much memory. When using matched_fields, phrase_limit phrases per matched field are considered. Raising the limit increases query time and consumes more memory. Only supported by the fvh highlighter.

      Default value is 256.

    • post_tags array[string]

      Use in conjunction with pre_tags to define the HTML tags to use for the highlighted text. By default, highlighted text is wrapped in <em> and </em> tags.

    • pre_tags array[string]

      Use in conjunction with post_tags to define the HTML tags to use for the highlighted text. By default, highlighted text is wrapped in <em> and </em> tags.

    • require_field_match boolean

      By default, only fields that contain a query match are highlighted. Set to false to highlight all fields.

      Default value is true.

    • tags_schema string

      Set to styled to use the built-in tag schema.

      Value is styled.

    • encoder string

      Values are default or html.

    • fields object Required
  • track_total_hits boolean | number

    Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query.

  • indices_boost array[object]

    Boost the _score of documents from specified indices. The boost value is the factor by which scores are multiplied. A boost value greater than 1.0 increases the score. A boost value between 0 and 1.0 decreases the score.

    External documentation
    Hide indices_boost attribute Show indices_boost attribute object
    • * number Additional properties
  • docvalue_fields array[object]

    An array of wildcard (*) field patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response.

    A reference to a field with formatting instructions on how to return the value

    External documentation
    Hide docvalue_fields attributes Show docvalue_fields attributes object
    • field string Required

      A wildcard pattern. The request returns values for field names matching this pattern.

    • format string

      The format in which the values are returned.

    • include_unmapped boolean
  • knn object | array[object]

    The approximate kNN search to run.

    One of:
    Hide attributes Show attributes
    • field string Required

      The name of the vector field to search against

    • query_vector array[number]

      The query vector

    • query_vector_builder object

      The query vector builder. You must provide a query_vector_builder or query_vector, but not both.

      Hide query_vector_builder attribute Show query_vector_builder attribute object
      • text_embedding object
        Hide text_embedding attributes Show text_embedding attributes object
        • model_id string Generally available; Added in 8.18.0

          Model ID is required for all dense_vector fields but may be inferred for semantic_text fields

        • model_text string Required
    • k number

      The final number of nearest neighbors to return as top hits

    • num_candidates number

      The number of nearest neighbor candidates to consider per shard

    • boost number

      Boost value to apply to kNN scores

    • filter object | array[object]

      Filters for the kNN search query

      One of:

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • similarity number

      The minimum similarity for a vector to be considered a match

    • inner_hits object

      If defined, each search hit will contain inner hits.

      Hide inner_hits attributes Show inner_hits attributes object
      • name string

        The name for the particular inner hit definition in the response. Useful when a search request contains multiple inner hits.

      • size number

        The maximum number of hits to return per inner_hits.

        Default value is 3.

      • from number

        Inner hit starting document offset.

        Default value is 0.

      • collapse object
        External documentation
      • docvalue_fields array[object]

        A reference to a field with formatting instructions on how to return the value

        Hide docvalue_fields attributes Show docvalue_fields attributes object
        • field
        • format string

          The format in which the values are returned.

        • include_unmapped boolean
      • explain boolean
      • ignore_unmapped boolean
      • script_fields object
        Hide script_fields attribute Show script_fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • ignore_failure boolean
      • seq_no_primary_term boolean
      • fields array[string]

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • sort array[string | object]

        How the inner hits should be sorted per inner_hits. By default, inner hits are sorted by score.

      • _source boolean | object

      • stored_fields string | array[string]
      • track_scores boolean

        Default value is false.

      • version boolean
    • rescore_vector object

      Apply oversampling and rescoring to quantized vectors.

      Hide rescore_vector attribute Show rescore_vector attribute object
      • oversample number Required

        Applies the specified oversample factor to k on the approximate kNN search

  • rank object

    The Reciprocal Rank Fusion (RRF) to use.

    Hide rank attribute Show rank attribute object
    • rrf object

      The reciprocal rank fusion parameters

      Hide rrf attributes Show rrf attributes object
      • rank_constant number

        How much influence documents in individual result sets per query have over the final ranked result set

      • rank_window_size number

        Size of the individual result sets per query

  • min_score number

    The minimum _score for matching documents. Documents with a lower _score are not included in search results and results collected by aggregations.

  • post_filter object

    Use the post_filter parameter to filter search results. The search hits are filtered after the aggregations are calculated. A post filter has no impact on the aggregation results.

    External documentation
  • profile boolean

    Set to true to return detailed timing information about the execution of individual components in a search request. NOTE: This is a debugging tool and adds significant overhead to search execution.

    Default value is false.

  • query object

    The search definition using the Query DSL.

    External documentation
  • rescore object | array[object]

    Can be used to improve precision by reordering just the top (for example 100 - 500) documents returned by the query and post_filter phases.

    One of:
    Hide attributes Show attributes
    • window_size number
    • query object
      Hide query attributes Show query attributes object
      • rescore_query object Required

        The query to use for rescoring. This query is only run on the Top-K results returned by the query and post_filter phases.

      • query_weight number

        Relative importance of the original query versus the rescore query.

        Default value is 1.

      • rescore_query_weight number

        Relative importance of the rescore query versus the original query.

        Default value is 1.

      • score_mode string

        Determines how scores are combined.

        Supported values include:

        • avg: Average the original score and the rescore query score.
        • max: Take the max of original score and the rescore query score.
        • min: Take the min of the original score and the rescore query score.
        • multiply: Multiply the original score by the rescore query score. Useful for function query rescores.
        • total: Add the original score and the rescore query score.

        Values are avg, max, min, multiply, or total.

    • learning_to_rank object
      Hide learning_to_rank attributes Show learning_to_rank attributes object
      • model_id string Required

        The unique identifier of the trained model uploaded to Elasticsearch

      • params object

        Named parameters to be passed to the query templates used for feature extraction.

        Hide params attribute Show params attribute object
        • * object Additional properties
  • retriever object

    A retriever is a specification to describe top documents returned from a search. A retriever replaces other elements of the search API that also return top documents such as query and knn.

    Hide retriever attributes Show retriever attributes object
    • standard object

      A retriever that replaces the functionality of a traditional query.

      Hide standard attributes Show standard attributes object
      • filter object | array[object]

        Query to filter the documents that can match.

        One of:

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      • min_score number

        Minimum _score for matching documents. Documents with a lower _score are not included in the top documents.

      • _name string

        Retriever name.

      • query object

        Defines a query to retrieve a set of top documents.

      • search_after array[number | string | boolean | null | object]

        Defines a search after object parameter used for pagination.

      • terminate_after number

        Maximum number of documents to collect for each shard.

      • sort
      • collapse object

        Collapses the top documents by a specified key into a single top document per key.

    • knn object

      A retriever that replaces the functionality of a knn search.

      Hide knn attributes Show knn attributes object
      • filter object | array[object]

        Query to filter the documents that can match.

        One of:

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      • min_score number

        Minimum _score for matching documents. Documents with a lower _score are not included in the top documents.

      • _name string

        Retriever name.

      • field string Required

        The name of the vector field to search against.

      • query_vector array[number]

        Query vector. Must have the same number of dimensions as the vector field you are searching against. You must provide a query_vector_builder or query_vector, but not both.

      • query_vector_builder object

        Defines a model to build a query vector.

      • k number Required

        Number of nearest neighbors to return as top hits.

      • num_candidates number Required

        Number of nearest neighbor candidates to consider per shard.

      • similarity number

        The minimum similarity required for a document to be considered a match.

      • rescore_vector object

        Apply oversampling and rescoring to quantized vectors.

    • rrf object

      A retriever that produces top documents from reciprocal rank fusion (RRF).

      Hide rrf attributes Show rrf attributes object
      • filter object | array[object]

        Query to filter the documents that can match.

        One of:

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      • min_score number

        Minimum _score for matching documents. Documents with a lower _score are not included in the top documents.

      • _name string

        Retriever name.

      • retrievers array[object] Required

        A list of child retrievers to specify which sets of returned top documents will have the RRF formula applied to them.

      • rank_constant number

        This value determines how much influence documents in individual result sets per query have over the final ranked result set.

      • rank_window_size number

        This value determines the size of the individual result sets per query.

      • query string
      • fields array[string]
    • text_similarity_reranker object

      A retriever that reranks the top documents based on a reranking model using the InferenceAPI

      Hide text_similarity_reranker attributes Show text_similarity_reranker attributes object
      • filter object | array[object]

        Query to filter the documents that can match.

        One of:

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      • min_score number

        Minimum _score for matching documents. Documents with a lower _score are not included in the top documents.

      • _name string

        Retriever name.

      • retriever object Required

        The nested retriever which will produce the first-level results, that will later be used for reranking.

      • rank_window_size number

        This value determines how many documents we will consider from the nested retriever.

      • inference_id string

        Unique identifier of the inference endpoint created using the inference API.

      • inference_text string Required

        The text snippet used as the basis for similarity comparison

      • field string Required

        The document field to be used for text similarity comparisons. This field should contain the text that will be evaluated against the inference_text

    • rule object

      A retriever that replaces the functionality of a rule query.

      Hide rule attributes Show rule attributes object
      • filter object | array[object]

        Query to filter the documents that can match.

        One of:

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      • min_score number

        Minimum _score for matching documents. Documents with a lower _score are not included in the top documents.

      • _name string

        Retriever name.

      • ruleset_ids string | array[string] Required

        The ruleset IDs containing the rules this retriever is evaluating against.

      • match_criteria object Required

        The match criteria that will determine if a rule in the provided rulesets should be applied.

      • retriever object Required

        The retriever whose results rules should be applied to.

      • rank_window_size number

        This value determines the size of the individual result set.

    • rescorer object

      A retriever that re-scores only the results produced by its child retriever.

      Hide rescorer attributes Show rescorer attributes object
      • filter object | array[object]

        Query to filter the documents that can match.

        One of:

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      • min_score number

        Minimum _score for matching documents. Documents with a lower _score are not included in the top documents.

      • _name string

        Retriever name.

      • retriever object Required

        Inner retriever.

      • rescore array[object] Required
    • linear object

      A retriever that supports the combination of different retrievers through a weighted linear combination.

      Hide linear attributes Show linear attributes object
      • filter object | array[object]

        Query to filter the documents that can match.

        One of:

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      • min_score number

        Minimum _score for matching documents. Documents with a lower _score are not included in the top documents.

      • _name string

        Retriever name.

      • retrievers array[object]

        Inner retrievers.

      • rank_window_size number
      • query string
      • fields array[string]
      • normalizer string

        Values are none, minmax, or l2_norm.

    • pinned object

      A pinned retriever applies pinned documents to the underlying retriever. This retriever will rewrite to a PinnedQueryBuilder.

      Hide pinned attributes Show pinned attributes object
      • filter object | array[object]

        Query to filter the documents that can match.

        One of:

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      • min_score number

        Minimum _score for matching documents. Documents with a lower _score are not included in the top documents.

      • _name string

        Retriever name.

      • retriever object Required

        Inner retriever.

      • ids array[string]
      • docs array[object]
      • rank_window_size number
  • script_fields object

    Retrieve a script evaluation (based on different fields) for each hit.

    Hide script_fields attribute Show script_fields attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • script object Required
        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string

          Specifies the language the script is written in.

          Supported values include:

          • painless: Painless scripting language, purpose-built for Elasticsearch.
          • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
          • mustache: Mustache templating, used for templates.
          • java: Expert Java API.

          Values are painless, expression, mustache, or java.

        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • ignore_failure boolean
  • search_after array[number | string | boolean | null | object]

    Used to retrieve the next page of hits using a set of sort values from the previous page. Each element is a field value. A combined usage sketch follows this parameter list.

  • size number

    The number of hits to return, which must not be negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after property.

    Default value is 10.

  • slice object

    Split a scrolled search into multiple slices that can be consumed independently.

    Hide slice attributes Show slice attributes object
    • field string

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • max number Required
  • sort string | object | array[string | object]

    A comma-separated list of <field>:<direction> pairs.

  • _source boolean | object

    The source fields that are returned for matching documents. These fields are returned in the hits._source property of the search response. If the stored_fields property is specified, the _source property defaults to false. Otherwise, it defaults to true.

  • fields array[object]

    An array of wildcard (*) field patterns. The request returns values for field names matching these patterns in the hits.fields property of the response.

    A reference to a field with formatting instructions on how to return the value

    Hide fields attributes Show fields attributes object
    • field string Required

      A wildcard pattern. The request returns values for field names matching this pattern.

    • format string

      The format in which the values are returned.

    • include_unmapped boolean
  • suggest object

    Defines a suggester that provides similar looking terms based on a provided text.

    Hide suggest attribute Show suggest attribute object
    • text string

      Global suggest text, to avoid repetition when the same text is used in several suggesters

  • terminate_after number

    The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.

    IMPORTANT: Use with caution. Elasticsearch applies this property to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this property for requests that target data streams with backing indices across multiple data tiers.

    If set to 0 (default), the query does not terminate early.

    Default value is 0.

  • timeout string

    The period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.

  • track_scores boolean

    If true, calculate and return document scores, even if the scores are not used for sorting.

    Default value is false.

  • version boolean

    If true, the request returns the document version as part of a hit.

    Default value is false.

  • seq_no_primary_term boolean

    If true, the request returns sequence number and primary term of the last modification of each hit.

    External documentation
  • stored_fields string | array[string]

    A comma-separated list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source property defaults to false. You can pass _source: true to return both source fields and stored fields in the search response.

  • pit object

    Limit the search to a point in time (PIT). If you provide a PIT, you cannot specify an <index> in the request path.

    Hide pit attributes Show pit attributes object
    • id string Required
    • keep_alive string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      External documentation
  • runtime_mappings object

    One or more runtime fields in the search request. These fields take precedence over mapped fields with the same name.

    Hide runtime_mappings attribute Show runtime_mappings attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        For type lookup

      • target_field string

        For type lookup

      • target_index string

        For type lookup

      • script object

        Painless script executed at query time.

        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang
        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Field type, which can be: boolean, composite, date, double, geo_point, ip, keyword, long, or lookup.

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • stats array[string]

    The stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API.
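
The following is a minimal sketch that combines several of the body parameters described above: a query with highlighting, a sort used to drive search_after pagination, a hybrid request that adds an approximate kNN clause, and a runtime field returned through fields. It assumes client is an already-configured Python Elasticsearch client, as in the other Python snippets in this section. The index and field names (my-index-000001, message, @timestamp, http.response.bytes) follow the examples below; the my_embedding vector field and its values are hypothetical, and http.response.bytes is assumed to be mapped as a numeric field.

resp = client.search(
    index="my-index-000001",
    query={"match": {"message": "foo"}},
    highlight={
        "pre_tags": ["<em>"],
        "post_tags": ["</em>"],
        "fields": {"message": {"fragment_size": 100, "number_of_fragments": 3}},
    },
    # Sort with a tiebreaker so search_after paging is stable; with a point in
    # time you would normally use _shard_doc as the tiebreaker instead.
    sort=[{"@timestamp": "desc"}, "_doc"],
    size=20,
)
hits = resp["hits"]["hits"]
if hits:
    # Page forward by passing the sort values of the last hit of the previous page.
    next_page = client.search(
        index="my-index-000001",
        query={"match": {"message": "foo"}},
        sort=[{"@timestamp": "desc"}, "_doc"],
        size=20,
        search_after=hits[-1]["sort"],
    )

# Hybrid search: the same query plus an approximate kNN clause on a
# hypothetical dense_vector field named my_embedding.
hybrid = client.search(
    index="my-index-000001",
    query={"match": {"message": "foo"}},
    knn={
        "field": "my_embedding",
        "query_vector": [0.1, 0.2, 0.3],
        "k": 10,
        "num_candidates": 100,
    },
)

# Runtime field computed at query time with a Painless script and returned via fields.
rt = client.search(
    index="my-index-000001",
    runtime_mappings={
        "http.kb": {
            "type": "long",
            "script": {"source": "emit(doc['http.response.bytes'].value / 1024)"},
        }
    },
    fields=["http.kb"],
    size=1,
)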

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number Required

      The number of milliseconds it took Elasticsearch to run the request. This value is calculated by measuring the time elapsed between receipt of a request on the coordinating node and the time at which the coordinating node is ready to send the response. It includes:

      • Communication time between the coordinating node and data nodes
      • Time the request spends in the search thread pool, queued for execution
      • Actual run time

      It does not include:

      • Time needed to send the request to Elasticsearch
      • Time needed to serialize the JSON response
      • Time needed to send the response to a client
    • timed_out boolean Required

      If true, the request timed out before completion; returned results may be partial or empty.

    • _shards object Required

      A count of shards used for the request.

      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • hits object Required

      The returned documents and metadata.

      Hide hits attributes Show hits attributes object
      • total object | number

        Total hit count information, present only if track_total_hits wasn't false in the search request.

        One of:
        Hide attributes Show attributes
        • relation string Required

          Supported values include:

          • eq: Accurate
          • gte: Lower bound, including returned events or sequences

          Values are eq or gte.

        • value number Required
      • hits array[object] Required
        Hide hits attributes Show hits attributes object
        • _index string Required
        • _id string
        • _score number | string | null

        • _explanation object
        • fields object
          Hide fields attribute Show fields attribute object
          • * object Additional properties
        • highlight object
          Hide highlight attribute Show highlight attribute object
          • * array[string] Additional properties
        • inner_hits object
          Hide inner_hits attribute Show inner_hits attribute object
          • * object Additional properties
        • matched_queries array[string] | object

        • _nested object
        • _ignored array[string]
        • ignored_field_values object
          Hide ignored_field_values attribute Show ignored_field_values attribute object
          • * array[number | string | boolean | null | object] Additional properties
        • _shard string
        • _node string
        • _routing string
        • _source object
        • _rank number
        • _seq_no number
        • _primary_term number
        • _version number
        • sort array[number | string | boolean | null | object]
      • max_score number | string | null

    • aggregations object
    • _clusters object
      Hide _clusters attributes Show _clusters attributes object
      • skipped number Required
      • successful number Required
      • total number Required
      • running number Required
      • partial number Required
      • failed number Required
      • details object
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • timed_out boolean Required
          • _shards object
          • failures array[object]
    • fields object
      Hide fields attribute Show fields attribute object
      • * object Additional properties
    • max_score number
    • num_reduce_phases number
    • profile object
      Hide profile attribute Show profile attribute object
      • shards array[object] Required
        Hide shards attributes Show shards attributes object
        • aggregations array[object] Required
        • cluster string Required
        • dfs object
        • fetch object
        • id string Required
        • index string Required
        • node_id string Required
        • searches array[object] Required
        • shard_id number Required
    • pit_id string
    • _scroll_id string

      The identifier for the search and its search context. You can use this scroll ID with the scroll API to retrieve the next batch of search results for the request. This property is returned only if the scroll query parameter is specified in the request.

    • suggest object
      Hide suggest attribute Show suggest attribute object
      • * array[object] Additional properties
        One of:
        Hide attributes Show attributes
        • length number Required
        • offset number Required
        • text string Required
        • options
    • terminated_early boolean
POST /{index}/_search
GET /my-index-000001/_search?from=40&size=20
{
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  }
}
resp = client.search(
    index="my-index-000001",
    from_="40",
    size="20",
    query={
        "term": {
            "user.id": "kimchy"
        }
    },
)
const response = await client.search({
  index: "my-index-000001",
  from: 40,
  size: 20,
  query: {
    term: {
      "user.id": "kimchy",
    },
  },
});
response = client.search(
  index: "my-index-000001",
  from: "40",
  size: "20",
  body: {
    "query": {
      "term": {
        "user.id": "kimchy"
      }
    }
  }
)
$resp = $client->search([
    "index" => "my-index-000001",
    "from" => "40",
    "size" => "20",
    "body" => [
        "query" => [
            "term" => [
                "user.id" => "kimchy",
            ],
        ],
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":{"term":{"user.id":"kimchy"}}}' "$ELASTICSEARCH_URL/my-index-000001/_search?from=40&size=20"
client.search(s -> s
    .from(40)
    .index("my-index-000001")
    .query(q -> q
        .term(t -> t
            .field("user.id")
            .value(FieldValue.of("kimchy"))
        )
    )
    .size(20)
,Void.class);
Request examples
Run `GET /my-index-000001/_search?from=40&size=20` to run a search.
{
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  }
}
Run `POST /_search` to run a point in time search. The `id` parameter tells Elasticsearch to run the request using contexts from this open point in time. The `keep_alive` parameter tells Elasticsearch how long it should extend the time to live of the point in time.
{
    "size": 100,  
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    },
    "pit": {
      "id":  "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA==", 
      "keep_alive": "1m"  
    }
}
When paging through a large number of documents, it can be helpful to split the search into multiple slices to consume them independently. Running the first `GET /_search` request returns documents belonging to the first slice (`id: 0`). If you run a second request with `id` set to `1`, it returns documents in the second slice. Since the maximum number of slices is set to `2`, the union of the results is equivalent to the results of a point-in-time search without slicing.
{
  "slice": {
    "id": 0,                      
    "max": 2                      
  },
  "query": {
    "match": {
      "message": "foo"
    }
  },
  "pit": {
    "id": "46ToAwMDaWR5BXV1aWQyKwZub2RlXzMAAAAAAAAAACoBYwADaWR4BXV1aWQxAgZub2RlXzEAAAAAAAAAAAEBYQADaWR5BXV1aWQyKgZub2RlXzIAAAAAAAAAAAwBYgACBXV1aWQyAAAFdXVpZDEAAQltYXRjaF9hbGw_gAAAAA=="
  }
}
Response examples (200)
An abbreviated response from `GET /my-index-000001/_search?from=40&size=20` with a simple term query.
{
  "took": 5,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 20,
      "relation": "eq"
    },
    "max_score": 1.3862942,
    "hits": [
      {
        "_index": "my-index-000001",
        "_id": "0",
        "_score": 1.3862942,
        "_source": {
          "@timestamp": "2099-11-15T14:12:12",
          "http": {
            "request": {
              "method": "get"
            },
            "response": {
              "status_code": 200,
              "bytes": 1070000
            },
            "version": "1.1"
          },
          "source": {
            "ip": "127.0.0.1"
          },
          "message": "GET /search HTTP/1.1 200 1070000",
          "user": {
            "id": "kimchy"
          }
        }
      }
    ]
  }
}
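
A short sketch of reading the response shown above with the Python client (same client convention as the other snippets in this section); the keys are the response attributes documented earlier.

resp = client.search(
    index="my-index-000001",
    from_=40,
    size=20,
    query={"term": {"user.id": "kimchy"}},
)
print(resp["took"], "ms; timed_out:", resp["timed_out"])
total = resp["hits"]["total"]      # e.g. {"value": 20, "relation": "eq"} unless track_total_hits is false
for hit in resp["hits"]["hits"]:
    print(hit["_index"], hit["_id"], hit["_score"])
    doc = hit["_source"]           # may be absent when _source is disabled or stored_fields is specified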









































Searchable snapshots

















Security

Activate a user profile Generally available; Added in 8.2.0

POST /_security/profile/_activate

Create or update a user profile on behalf of another user.

NOTE: The user profile feature is designed only for use by Kibana and Elastic's Observability, Enterprise Search, and Elastic Security solutions. Individual users and external applications should not call this API directly. The calling application must have either an access_token or a combination of username and password for the user that the profile document is intended for. Elastic reserves the right to change or remove this feature in future releases without prior notice.

This API creates or updates a profile document for end users with information that is extracted from the user's authentication object including username, full_name, roles, and the authentication realm. For example, in the JWT access_token case, the profile user's username is extracted from the JWT token claim pointed to by the claims.principal setting of the JWT realm that authenticated the token.

When updating a profile document, the API enables the document if it was disabled. Any updates do not change existing content for either the labels or data fields.

Required authorization

  • Cluster privileges: manage_user_profile
application/json

Body Required

  • access_token string

    The user's Elasticsearch access token or JWT. Both access and id JWT token types are supported and they depend on the underlying JWT realm configuration. If you specify the access_token grant type, this parameter is required. It is not valid with other grant types.

  • grant_type string Required

    The type of grant.

    Supported values include:

    • password: In this type of grant, you must supply the user ID and password for which you want to create the API key.
    • access_token: In this type of grant, you must supply an access token that was created by the Elasticsearch token service. If you are activating a user profile, you can alternatively supply a JWT (either a JWT access_token or a JWT id_token).

    Values are password or access_token.

  • password string

    The user's password. If you specify the password grant type, this parameter is required. It is not valid with other grant types.

  • username string

    The username that identifies the user. If you specify the password grant type, this parameter is required. It is not valid with other grant types.
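
For the access_token grant type, only the token is supplied. A minimal sketch with the Python client (same client convention as the other snippets); the token value is a placeholder:

resp = client.security.activate_user_profile(
    grant_type="access_token",
    access_token="<access-token-or-jwt>",  # placeholder: token from the token service or a JWT realm
)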

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • uid string Required
    • user object Required
      Hide user attributes Show user attributes object
    • data object Required
      Hide data attribute Show data attribute object
      • * object Additional properties
    • labels object Required
      Hide labels attribute Show labels attribute object
      • * object Additional properties
    • enabled boolean
    • last_synchronized number Required
    • _doc object Required
      Hide _doc attributes Show _doc attributes object
      • _primary_term number Required
      • _seq_no number Required
POST /_security/profile/_activate
POST /_security/profile/_activate
{
  "grant_type": "password",
  "username" : "jacknich",
  "password" : "l0ng-r4nd0m-p@ssw0rd"
}
resp = client.security.activate_user_profile(
    grant_type="password",
    username="jacknich",
    password="l0ng-r4nd0m-p@ssw0rd",
)
const response = await client.security.activateUserProfile({
  grant_type: "password",
  username: "jacknich",
  password: "l0ng-r4nd0m-p@ssw0rd",
});
response = client.security.activate_user_profile(
  body: {
    "grant_type": "password",
    "username": "jacknich",
    "password": "l0ng-r4nd0m-p@ssw0rd"
  }
)
$resp = $client->security()->activateUserProfile([
    "body" => [
        "grant_type" => "password",
        "username" => "jacknich",
        "password" => "l0ng-r4nd0m-p@ssw0rd",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"grant_type":"password","username":"jacknich","password":"l0ng-r4nd0m-p@ssw0rd"}' "$ELASTICSEARCH_URL/_security/profile/_activate"
client.security().activateUserProfile(a -> a
    .grantType(GrantType.Password)
    .password("l0ng-r4nd0m-p@ssw0rd")
    .username("jacknich")
);
Request example
Run `POST /_security/profile/_activate` to activate a user profile.
{
  "grant_type": "password",
  "username" : "jacknich",
  "password" : "l0ng-r4nd0m-p@ssw0rd"
}
Response examples (200)
A successful response from `POST /_security/profile/_activate`.
{
  "uid": "u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0",
  "enabled": true,
  "last_synchronized": 1642650651037,
  "user": {
    "username": "jacknich",
    "roles": [
      "admin", "other_role1"
    ],
    "realm_name": "native",
    "full_name": "Jack Nicholson",
    "email": "jacknich@example.com"
  },
  "labels": {},
  "data": {},
  "_doc": {
    "_primary_term": 88,
    "_seq_no": 66
  }
}
































Clear the roles cache Generally available

POST /_security/role/{name}/_clear_cache

Evict roles from the native role cache.

Required authorization

  • Cluster privileges: manage_security

Path parameters

  • name string | array[string] Required

    A comma-separated list of roles to evict from the role cache. To evict all roles, use an asterisk (*). It does not support other wildcard patterns.
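
For example, to evict all roles from the cache, pass the asterisk as the name (Python client, same client convention as the other snippets):

resp = client.security.clear_cached_roles(name="*")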

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object Required

      Contains statistics about the number of nodes selected by the request.

      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        Hide failures attributes Show failures attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attribute Show * attribute object
        • name string Required
POST /_security/role/{name}/_clear_cache
POST /_security/role/my_admin_role/_clear_cache
resp = client.security.clear_cached_roles(
    name="my_admin_role",
)
const response = await client.security.clearCachedRoles({
  name: "my_admin_role",
});
response = client.security.clear_cached_roles(
  name: "my_admin_role"
)
$resp = $client->security()->clearCachedRoles([
    "name" => "my_admin_role",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/role/my_admin_role/_clear_cache"
client.security().clearCachedRoles(c -> c
    .name("my_admin_role")
);








Create an API key Generally available; Added in 6.7.0

POST /_security/api_key

All methods and paths for this operation:

PUT /_security/api_key

POST /_security/api_key

Create an API key for access without requiring basic authentication.

IMPORTANT: If the credential that is used to authenticate this request is an API key, the derived API key cannot have any privileges. If you specify privileges, the API returns an error.

A successful request returns a JSON structure that contains the API key, its unique id, and its name. If applicable, it also returns expiration information for the API key in milliseconds.

NOTE: By default, API keys never expire. You can specify expiration information when you create the API keys.

The API keys are created by the Elasticsearch API key service, which is automatically enabled. To configure or turn off the API key service, refer to API key service setting documentation.

Required authorization

  • Cluster privileges: manage_own_api_key
External documentation

Query parameters

  • refresh string

    If true (the default), refresh the affected shards to make this operation visible to search. If wait_for, wait for a refresh to make this operation visible to search. If false, do nothing with refreshes.

    Values are true, false, or wait_for.

application/json

Body Required

  • expiration string

    The expiration time for the API key. By default, API keys never expire.

    External documentation
  • name string

    A name for the API key.

  • role_descriptors object

    An array of role descriptors for this API key. When it is not specified or it is an empty array, the API key will have a point-in-time snapshot of the permissions of the authenticated user. If you supply role descriptors, the resulting permissions are the intersection of the API key's permissions and the authenticated user's permissions, thereby limiting the access scope for the API key. The structure of a role descriptor is the same as the request body of the create or update roles API. For more details, refer to the create or update roles API.

    NOTE: Due to the way in which this permission intersection is calculated, it is not possible to create an API key that is a child of another API key unless the derived key is created without any privileges. In this case, you must explicitly specify a role descriptor with no privileges. The derived API key can be used for authentication; it will not have authority to call Elasticsearch APIs. A sketch of this pattern follows the body parameters below.

    External documentation
    Hide role_descriptors attribute Show role_descriptors attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • cluster array[string]

        A list of cluster privileges. These privileges define the cluster level actions that API keys are able to execute.

      • indices array[object]

        A list of indices permissions entries.

        Hide indices attributes Show indices attributes object
        • field_security object

          The document fields that the owners of the role have read access to.

        • names array[string] Required

          A list of indices (or index name patterns) to which the permissions in this entry apply.

        • privileges array[string] Required

          The index level privileges that owners of the role have on the specified indices.

        • query string | object

          A search query that defines the documents the owners of the role have access to. A document within the specified indices must match this query for it to be accessible by the owners of the role.

        • allow_restricted_indices boolean Generally available

          Set to true if using wildcard or regular expressions for patterns that cover restricted indices. Implicitly, restricted indices have limited privileges that can cause pattern tests to fail. If restricted indices are explicitly included in the names list, Elasticsearch checks privileges against these indices regardless of the value set for allow_restricted_indices.

          Default value is false.

      • remote_indices array[object] Generally available; Added in 8.14.0

        A list of indices permissions for remote clusters.

        The subset of index level privileges that can be defined for remote clusters.

        Hide remote_indices attributes Show remote_indices attributes object
        • clusters string | array[string] Required

          A list of cluster aliases to which the permissions in this entry apply.

        • field_security object

          The document fields that the owners of the role have read access to.

        • names array[string] Required

          A list of indices (or index name patterns) to which the permissions in this entry apply.

        • privileges array[string] Required

          The index level privileges that owners of the role have on the specified indices.

        • query string | object

          A search query that defines the documents the owners of the role have access to. A document within the specified indices must match this query for it to be accessible by the owners of the role.

        • allow_restricted_indices boolean Generally available

          Set to true if using wildcard or regular expressions for patterns that cover restricted indices. Implicitly, restricted indices have limited privileges that can cause pattern tests to fail. If restricted indices are explicitly included in the names list, Elasticsearch checks privileges against these indices regardless of the value set for allow_restricted_indices.

          Default value is false.

      • remote_cluster array[object] Generally available; Added in 8.15.0

        A list of cluster permissions for remote clusters. NOTE: This is limited to a subset of the cluster permissions.

        The subset of cluster level privileges that can be defined for remote clusters.

        Hide remote_cluster attributes Show remote_cluster attributes object
        • clusters string | array[string] Required

          A list of cluster aliases to which the permissions in this entry apply.

        • privileges array[string] Required

          The cluster level privileges that owners of the role have on the remote cluster.

          Values are monitor_enrich or monitor_stats.

      • global array[object] | object

        An object defining global privileges. A global privilege is a form of cluster privilege that is request-aware. Support for global privileges is currently limited to the management of application privileges.

        One of:
        Hide attribute Show attribute object
        • application object Required
      • applications array[object]

        A list of application privilege entries

        Hide applications attributes Show applications attributes object
        • application string Required

          The name of the application to which this entry applies.

        • privileges array[string] Required

          A list of strings, where each element is the name of an application privilege or action.

        • resources array[string] Required

          A list of resources to which the privileges are applied.

      • metadata object

        Optional meta-data. Within the metadata object, keys that begin with _ are reserved for system usage.

        Hide metadata attribute Show metadata attribute object
        • * object Additional properties
      • run_as array[string]

        A list of users that the API keys can impersonate. NOTE: In Elastic Cloud Serverless, the run-as feature is disabled. For API compatibility, you can still specify an empty run_as field, but a non-empty list will be rejected.

      • description string

        Optional description of the role descriptor

      • restriction object

        Restriction for when the role descriptor is allowed to be effective.

        Hide restriction attribute Show restriction attribute object
        • workflows array[string] Required

          A list of workflows to which the API key is restricted. NOTE: In order to use a role restriction, an API key must be created with a single role descriptor.

      • transient_metadata object
        Hide transient_metadata attribute Show transient_metadata attribute object
        • * object Additional properties
  • metadata object

    Arbitrary metadata that you want to associate with the API key. It supports nested data structure. Within the metadata object, keys beginning with _ are reserved for system usage.

    Hide metadata attribute Show metadata attribute object
    • * object Additional properties
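
As noted in the role_descriptors description, an API key derived from a request that was itself authenticated with an API key must be created with a role descriptor that grants no privileges. A minimal sketch with the Python client (same client convention as the other snippets); the descriptor name and the empty privilege lists are illustrative:

resp = client.security.create_api_key(
    name="derived-key",
    role_descriptors={
        # Explicit role descriptor with no privileges: the derived key can
        # authenticate but has no authority to call Elasticsearch APIs.
        "no-privileges": {"cluster": [], "indices": []}
    },
)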

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • api_key string Required

      Generated API key.

    • expiration number

      Expiration in milliseconds for the API key.

    • id string Required

      Unique ID for this API key.

    • name string Required

      Specifies the name for this API key.

    • encoded string Required Generally available; Added in 7.16.0

      API key credentials which is the base64-encoding of the UTF-8 representation of id and api_key joined by a colon (:).
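
The encoded value can be sent directly in an Authorization header using the ApiKey scheme shown in the curl examples; it is equivalent to base64-encoding id and api_key joined by a colon yourself. A quick sketch with the Python client and standard library (same client convention as the other snippets):

import base64

resp = client.security.create_api_key(name="my-api-key")
# Manual construction of the credentials from id and api_key matches the
# encoded value returned by the API.
manual = base64.b64encode(
    f"{resp['id']}:{resp['api_key']}".encode("utf-8")
).decode("ascii")
assert manual == resp["encoded"]
headers = {"Authorization": f"ApiKey {resp['encoded']}"}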

POST /_security/api_key
POST /_security/api_key
{
  "name": "my-api-key",
  "expiration": "1d",   
  "role_descriptors": { 
    "role-a": {
      "cluster": ["all"],
      "indices": [
        {
          "names": ["index-a*"],
          "privileges": ["read"]
        }
      ]
    },
    "role-b": {
      "cluster": ["all"],
      "indices": [
        {
          "names": ["index-b*"],
          "privileges": ["all"]
        }
      ]
    }
  },
  "metadata": {
    "application": "my-application",
    "environment": {
      "level": 1,
      "trusted": true,
      "tags": ["dev", "staging"]
    }
  }
}
resp = client.security.create_api_key(
    name="my-api-key",
    expiration="1d",
    role_descriptors={
        "role-a": {
            "cluster": [
                "all"
            ],
            "indices": [
                {
                    "names": [
                        "index-a*"
                    ],
                    "privileges": [
                        "read"
                    ]
                }
            ]
        },
        "role-b": {
            "cluster": [
                "all"
            ],
            "indices": [
                {
                    "names": [
                        "index-b*"
                    ],
                    "privileges": [
                        "all"
                    ]
                }
            ]
        }
    },
    metadata={
        "application": "my-application",
        "environment": {
            "level": 1,
            "trusted": True,
            "tags": [
                "dev",
                "staging"
            ]
        }
    },
)
const response = await client.security.createApiKey({
  name: "my-api-key",
  expiration: "1d",
  role_descriptors: {
    "role-a": {
      cluster: ["all"],
      indices: [
        {
          names: ["index-a*"],
          privileges: ["read"],
        },
      ],
    },
    "role-b": {
      cluster: ["all"],
      indices: [
        {
          names: ["index-b*"],
          privileges: ["all"],
        },
      ],
    },
  },
  metadata: {
    application: "my-application",
    environment: {
      level: 1,
      trusted: true,
      tags: ["dev", "staging"],
    },
  },
});
response = client.security.create_api_key(
  body: {
    "name": "my-api-key",
    "expiration": "1d",
    "role_descriptors": {
      "role-a": {
        "cluster": [
          "all"
        ],
        "indices": [
          {
            "names": [
              "index-a*"
            ],
            "privileges": [
              "read"
            ]
          }
        ]
      },
      "role-b": {
        "cluster": [
          "all"
        ],
        "indices": [
          {
            "names": [
              "index-b*"
            ],
            "privileges": [
              "all"
            ]
          }
        ]
      }
    },
    "metadata": {
      "application": "my-application",
      "environment": {
        "level": 1,
        "trusted": true,
        "tags": [
          "dev",
          "staging"
        ]
      }
    }
  }
)
$resp = $client->security()->createApiKey([
    "body" => [
        "name" => "my-api-key",
        "expiration" => "1d",
        "role_descriptors" => [
            "role-a" => [
                "cluster" => array(
                    "all",
                ),
                "indices" => array(
                    [
                        "names" => array(
                            "index-a*",
                        ),
                        "privileges" => array(
                            "read",
                        ),
                    ],
                ),
            ],
            "role-b" => [
                "cluster" => array(
                    "all",
                ),
                "indices" => array(
                    [
                        "names" => array(
                            "index-b*",
                        ),
                        "privileges" => array(
                            "all",
                        ),
                    ],
                ),
            ],
        ],
        "metadata" => [
            "application" => "my-application",
            "environment" => [
                "level" => 1,
                "trusted" => true,
                "tags" => array(
                    "dev",
                    "staging",
                ),
            ],
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"name":"my-api-key","expiration":"1d","role_descriptors":{"role-a":{"cluster":["all"],"indices":[{"names":["index-a*"],"privileges":["read"]}]},"role-b":{"cluster":["all"],"indices":[{"names":["index-b*"],"privileges":["all"]}]}},"metadata":{"application":"my-application","environment":{"level":1,"trusted":true,"tags":["dev","staging"]}}}' "$ELASTICSEARCH_URL/_security/api_key"
client.security().createApiKey(c -> c
    .expiration(e -> e
        .time("1d")
    )
    .metadata(Map.of("environment", JsonData.fromJson("{\"level\":1,\"trusted\":true,\"tags\":[\"dev\",\"staging\"]}"),"application", JsonData.fromJson("\"my-application\"")))
    .name("my-api-key")
    .roleDescriptors(Map.of("role-b", RoleDescriptor.of(r -> r
            .cluster("all")
            .indices(i -> i
                .names("index-b*")
                .privileges("all")
            )),"role-a", RoleDescriptor.of(r -> r
            .cluster("all")
            .indices(i -> i
                .names("index-a*")
                .privileges("read")
            ))))
);
Request example
Run `POST /_security/api_key` to create an API key. If `expiration` is not provided, the API keys do not expire. If `role_descriptors` is not provided, the permissions of the authenticated user are applied.
{
  "name": "my-api-key",
  "expiration": "1d",   
  "role_descriptors": { 
    "role-a": {
      "cluster": ["all"],
      "indices": [
        {
          "names": ["index-a*"],
          "privileges": ["read"]
        }
      ]
    },
    "role-b": {
      "cluster": ["all"],
      "indices": [
        {
          "names": ["index-b*"],
          "privileges": ["all"]
        }
      ]
    }
  },
  "metadata": {
    "application": "my-application",
    "environment": {
      "level": 1,
      "trusted": true,
      "tags": ["dev", "staging"]
    }
  }
}
Response examples (200)
A successful response from `POST /_security/api_key`.
{
  "id": "VuaCfGcBCdbkQm-e5aOx",        
  "name": "my-api-key",
  "expiration": 1544068612110,         
  "api_key": "ui2lp2axTNmsyakw9tvNnw", 
  "encoded": "VnVhQ2ZHY0JDZGJrUW0tZTVhT3g6dWkybHAyYXhUTm1zeWFrdzl0dk5udw=="  
}
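
The `expiration` value above is an epoch timestamp in milliseconds (the moment at which the key expires). A quick way to read it in Python:

from datetime import datetime, timezone

# Convert the epoch-milliseconds expiration from the example response above.
expiration_ms = 1544068612110
print(datetime.fromtimestamp(expiration_ms / 1000, tz=timezone.utc))
# 2018-12-06 03:56:52.110000+00:00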

Invalidate API keys Generally available; Added in 6.7.0

DELETE /_security/api_key

This API invalidates API keys created by the create API key or grant API key APIs. Invalidated API keys fail authentication, but they can still be viewed using the get API key information and query API key information APIs, for at least the configured retention period, until they are automatically deleted.

To use this API, you must have at least the manage_security, manage_api_key, or manage_own_api_key cluster privileges. The manage_security privilege allows deleting any API key, including both REST and cross cluster API keys. The manage_api_key privilege allows deleting any REST API key, but not cross cluster API keys. The manage_own_api_key privilege only allows deleting REST API keys that are owned by the user. In addition, with the manage_own_api_key privilege, an invalidation request must be issued in one of the following three formats:

  • Set the parameter owner=true.
  • Or, set both username and realm_name to match the user's identity.
  • Or, if the request is issued by an API key (that is, an API key invalidates itself), specify its ID in the ids field.

Required authorization

  • Cluster privileges: manage_api_key,manage_own_api_key
application/json

Body Required

  • id string
  • ids array[string]

    A list of API key ids. This parameter cannot be used with any of name, realm_name, or username.

  • name string

    An API key name. This parameter cannot be used with any of ids, realm_name or username.

  • owner boolean

    Query API keys owned by the currently authenticated user. The realm_name or username parameters cannot be specified when this parameter is set to true as they are assumed to be the currently authenticated ones.

    NOTE: At least one of ids, name, username, and realm_name must be specified if owner is false.

    Default value is false.

  • realm_name string

    The name of an authentication realm. This parameter cannot be used with either ids or name, or when owner flag is set to true.

  • username string

    The username of a user. This parameter cannot be used with either ids or name or when owner flag is set to true.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • error_count number Required

      The number of errors that were encountered when invalidating the API keys.

    • error_details array[object]

      Details about the errors. This field is not present in the response when error_count is 0.

      Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      Hide error_details attributes Show error_details attributes object
      • type string Required

        The type of error

      • reason string | null

        A human-readable explanation of the error, in English.

      • stack_trace string

        The server stack trace. Present only if the error_trace=true parameter was sent with the request.

      • caused_by object

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • root_cause array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • suppressed array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

    • invalidated_api_keys array[string] Required

      The IDs of the API keys that were invalidated as part of this request.

    • previously_invalidated_api_keys array[string] Required

      The IDs of the API keys that were already invalidated.

DELETE /_security/api_key
DELETE /_security/api_key
{
  "ids" : [ "VuaCfGcBCdbkQm-e5aOx" ]
}
resp = client.security.invalidate_api_key(
    ids=[
        "VuaCfGcBCdbkQm-e5aOx"
    ],
)
const response = await client.security.invalidateApiKey({
  ids: ["VuaCfGcBCdbkQm-e5aOx"],
});
response = client.security.invalidate_api_key(
  body: {
    "ids": [
      "VuaCfGcBCdbkQm-e5aOx"
    ]
  }
)
$resp = $client->security()->invalidateApiKey([
    "body" => [
        "ids" => array(
            "VuaCfGcBCdbkQm-e5aOx",
        ),
    ],
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"ids":["VuaCfGcBCdbkQm-e5aOx"]}' "$ELASTICSEARCH_URL/_security/api_key"
client.security().invalidateApiKey(i -> i
    .ids("VuaCfGcBCdbkQm-e5aOx")
);
Request examples
Run `DELETE /_security/api_key` to invalidate the API keys identified by ID.
{
  "ids" : [ "VuaCfGcBCdbkQm-e5aOx" ]
}
Run `DELETE /_security/api_key` to invalidate the API keys identified by name.
{
  "name" : "my-api-key"
}
Run `DELETE /_security/api_key` to invalidate all API keys for the `native1` realm.
{
  "realm_name" : "native1"
}
Run `DELETE /_security/api_key` to invalidate all API keys for the user `myuser` in all realms.
{
  "username" : "myuser"
}
Run `DELETE /_security/api_key` to invalidate the API keys identified by ID if they are owned by the currently authenticated user.
{
  "ids" : ["VuaCfGcBCdbkQm-e5aOx"],
  "owner" : "true"
}
Run `DELETE /_security/api_key` to invalidate all API keys for the user `myuser` in the `native1` realm.
{
  "username" : "myuser",
  "realm_name" : "native1"
}
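
The `owner` variant above maps onto the Python client as well. A minimal sketch, assuming the client parameters mirror the request body fields:

# Invalidate the listed API keys only if they are owned by the currently
# authenticated user (mirrors the `owner` request example above).
resp = client.security.invalidate_api_key(
    ids=["VuaCfGcBCdbkQm-e5aOx"],
    owner=True,
)
print(resp["invalidated_api_keys"])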
Response examples (200)
A successful response from `DELETE /_security/api_key`.
{
  "invalidated_api_keys": [ 
    "api-key-id-1"
  ],
  "previously_invalidated_api_keys": [ 
    "api-key-id-2",
    "api-key-id-3"
  ],
  "error_count": 2, 
  "error_details": [ 
    {
      "type": "exception",
      "reason": "error occurred while invalidating api keys",
      "caused_by": {
        "type": "illegal_argument_exception",
        "reason": "invalid api key id"
      }
    },
    {
      "type": "exception",
      "reason": "error occurred while invalidating api keys",
      "caused_by": {
        "type": "illegal_argument_exception",
        "reason": "invalid api key id"
      }
    }
  ]
}

Get application privileges Generally available; Added in 6.4.0

GET /_security/privilege/{application}/{name}

All methods and paths for this operation:

GET /_security/privilege

GET /_security/privilege/{application}
GET /_security/privilege/{application}/{name}

To use this API, you must have one of the following privileges:

  • The read_security cluster privilege (or a greater privilege such as manage_security or all).
  • The "Manage Application Privileges" global privilege for the application being referenced in the request.

Required authorization

  • Cluster privileges: read_security
External documentation

Path parameters

  • application string Required

    The name of the application. Application privileges are always associated with exactly one application. If you do not specify this parameter, the API returns information about all privileges for all applications.

  • name string | array[string] Required

    The name of the privilege. If you do not specify this parameter, the API returns information about all privileges for the requested application.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attribute Show * attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • actions array[string] Required
        • application string
        • name string
        • metadata object
          Hide metadata attribute Show metadata attribute object
          • * object Additional properties
GET /_security/privilege/{application}/{name}
GET /_security/privilege/myapp/read
resp = client.security.get_privileges(
    application="myapp",
    name="read",
)
const response = await client.security.getPrivileges({
  application: "myapp",
  name: "read",
});
response = client.security.get_privileges(
  application: "myapp",
  name: "read"
)
$resp = $client->security()->getPrivileges([
    "application" => "myapp",
    "name" => "read",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/privilege/myapp/read"
client.security().getPrivileges(g -> g
    .application("myapp")
    .name("read")
);
Response examples (200)
A successful response from `GET /_security/privilege/myapp/read`. The response contains information about the `read` privilege for the `myapp` application.
{
  "myapp": {
    "read": {
      "application": "myapp",
      "name": "read",
      "actions": [
        "data:read/*",
        "action:login"
      ],
      "metadata": {
        "description": "Read access to myapp"
      }
    }
  }
}

Delete application privileges Generally available; Added in 6.4.0

DELETE /_security/privilege/{application}/{name}

To use this API, you must have one of the following privileges:

  • The manage_security cluster privilege (or a greater privilege such as all).
  • The "Manage Application Privileges" global privilege for the application being referenced in the request.

Required authorization

  • Cluster privileges: manage_security
External documentation

Path parameters

  • application string Required

    The name of the application. Application privileges are always associated with exactly one application.

  • name string | array[string] Required

    The name of the privilege.

Query parameters

  • refresh string

    If true (the default), Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, nothing is done with refreshes.

    Values are true, false, or wait_for.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attribute Show * attribute object
      • * object Additional properties
        Hide * attribute Show * attribute object
        • found boolean Required
DELETE /_security/privilege/{application}/{name}
DELETE /_security/privilege/myapp/read
resp = client.security.delete_privileges(
    application="myapp",
    name="read",
)
const response = await client.security.deletePrivileges({
  application: "myapp",
  name: "read",
});
response = client.security.delete_privileges(
  application: "myapp",
  name: "read"
)
$resp = $client->security()->deletePrivileges([
    "application" => "myapp",
    "name" => "read",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/privilege/myapp/read"
client.security().deletePrivileges(d -> d
    .application("myapp")
    .name("read")
);
Response examples (200)
A successful response from `DELETE /_security/privilege/myapp/read`. If the privilege is successfully deleted, `found` is set to `true`.
{
  "myapp": {
    "read": {
      "found" : true
    }
  }
}








Delete roles Generally available

DELETE /_security/role/{name}

Delete roles in the native realm. The role management APIs are generally the preferred way to manage roles, rather than using file-based role management. The delete roles API cannot remove roles that are defined in roles files.

Required authorization

  • Cluster privileges: manage_security

Path parameters

  • name string Required

    The name of the role.

Query parameters

  • refresh string

    If true (the default), Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, nothing is done with refreshes.

    Values are true, false, or wait_for.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • found boolean Required

      If the role is successfully deleted, found is true. Otherwise, found is false.

DELETE /_security/role/{name}
DELETE /_security/role/my_admin_role
resp = client.security.delete_role(
    name="my_admin_role",
)
const response = await client.security.deleteRole({
  name: "my_admin_role",
});
response = client.security.delete_role(
  name: "my_admin_role"
)
$resp = $client->security()->deleteRole([
    "name" => "my_admin_role",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/role/my_admin_role"
client.security().deleteRole(d -> d
    .name("my_admin_role")
);
Response examples (200)
A successful response from `DELETE /_security/role/my_admin_role`. If the role is successfully deleted, `found` is set to `true`.
{
  "found" : true
}




Create or update role mappings Generally available; Added in 5.5.0

POST /_security/role_mapping/{name}

All methods and paths for this operation:

PUT /_security/role_mapping/{name}

POST /_security/role_mapping/{name}

Role mappings define which roles are assigned to each user. Each mapping has rules that identify users and a list of roles that are granted to those users. The role mapping APIs are generally the preferred way to manage role mappings rather than using role mapping files. The create or update role mappings API cannot update role mappings that are defined in role mapping files.

NOTE: This API does not create roles. Rather, it maps users to existing roles. Roles can be created by using the create or update roles API or roles files.

Role templates

The most common use for role mappings is to create a mapping from a known value on the user to a fixed role name. For example, all users in the cn=admin,dc=example,dc=com LDAP group should be given the superuser role in Elasticsearch. The roles field is used for this purpose.

For more complex needs, it is possible to use Mustache templates to dynamically determine the names of the roles that should be granted to the user. The role_templates field is used for this purpose.

NOTE: To use role templates successfully, the relevant scripting feature must be enabled. Otherwise, all attempts to create a role mapping with role templates fail.

All of the user fields that are available in the role mapping rules are also available in the role templates. Thus it is possible to assign a user to a role that reflects their username, their groups, or the name of the realm to which they authenticated.

By default, a template is evaluated to produce a single string that is the name of the role that should be assigned to the user. If the format of the template is set to "json", the template is expected to produce a JSON string or an array of JSON strings for the role names.

Required authorization

  • Cluster privileges: manage_security
External documentation

Path parameters

  • name string Required

    The distinct name that identifies the role mapping. The name is used solely as an identifier to facilitate interaction via the API; it does not affect the behavior of the mapping in any way.

Query parameters

  • refresh string

    If true (the default), Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, nothing is done with refreshes.

    Values are true, false, or wait_for.

application/json

Body Required

  • enabled boolean

    Mappings that have enabled set to false are ignored when role mapping is performed.

  • metadata object

    Additional metadata that helps define which roles are assigned to each user. Within the metadata object, keys beginning with _ are reserved for system usage.

    Hide metadata attribute Show metadata attribute object
    • * object Additional properties
  • roles array[string]

    A list of role names that are granted to the users that match the role mapping rules. Exactly one of roles or role_templates must be specified.

  • role_templates array[object]

    A list of Mustache templates that will be evaluated to determine the role names that should be granted to the users that match the role mapping rules. Exactly one of roles or role_templates must be specified.

    Hide role_templates attributes Show role_templates attributes object
    • format string

      Values are string or json.

    • template object Required
      Hide template attributes Show template attributes object
      • source string

        The script source.

      • id string

        The id for a stored script.

      • params object

        Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

        Hide params attribute Show params attribute object
        • * object Additional properties
      • lang string

        Specifies the language the script is written in.

        Supported values include:

        • painless: Painless scripting language, purpose-built for Elasticsearch.
        • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
        • mustache: Mustache templating language, used for templates.
        • java: Expert Java API

        Values are painless, expression, mustache, or java.

      • options object
        Hide options attribute Show options attribute object
        • * string Additional properties
  • rules object

    The rules that determine which users should be matched by the mapping. A rule is a logical condition that is expressed by using a JSON DSL.

    Hide rules attributes Show rules attributes object
    • any array[object]
    • all array[object]
    • field object
      Hide field attributes Show field attributes object
      • username string | array[string]
      • dn string | array[string]
      • groups string | array[string]
    • except object
  • run_as array[string]

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • created boolean
    • role_mapping object Required
      Hide role_mapping attribute Show role_mapping attribute object
      • created boolean Required
POST /_security/role_mapping/{name}
POST /_security/role_mapping/mapping1
{
  "roles": [ "user"],
  "enabled": true, 
  "rules": {
    "field" : { "username" : "*" }
  },
  "metadata" : { 
    "version" : 1
  }
}
resp = client.security.put_role_mapping(
    name="mapping1",
    roles=[
        "user"
    ],
    enabled=True,
    rules={
        "field": {
            "username": "*"
        }
    },
    metadata={
        "version": 1
    },
)
const response = await client.security.putRoleMapping({
  name: "mapping1",
  roles: ["user"],
  enabled: true,
  rules: {
    field: {
      username: "*",
    },
  },
  metadata: {
    version: 1,
  },
});
response = client.security.put_role_mapping(
  name: "mapping1",
  body: {
    "roles": [
      "user"
    ],
    "enabled": true,
    "rules": {
      "field": {
        "username": "*"
      }
    },
    "metadata": {
      "version": 1
    }
  }
)
$resp = $client->security()->putRoleMapping([
    "name" => "mapping1",
    "body" => [
        "roles" => array(
            "user",
        ),
        "enabled" => true,
        "rules" => [
            "field" => [
                "username" => "*",
            ],
        ],
        "metadata" => [
            "version" => 1,
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"roles":["user"],"enabled":true,"rules":{"field":{"username":"*"}},"metadata":{"version":1}}' "$ELASTICSEARCH_URL/_security/role_mapping/mapping1"
client.security().putRoleMapping(p -> p
    .enabled(true)
    .metadata("version", JsonData.fromJson("1"))
    .name("mapping1")
    .roles("user")
    .rules(r -> r
        .field(NamedValue.of("username",List.of(FieldValue.of("*"))))
    )
);
Request examples
Run `POST /_security/role_mapping/mapping1` to assign the `user` role to all users.
{
  "roles": [ "user"],
  "enabled": true, 
  "rules": {
    "field" : { "username" : "*" }
  },
  "metadata" : { 
    "version" : 1
  }
}
Run `POST /_security/role_mapping/mapping2` to assign the "user" and "admin" roles to specific users.
{
  "roles": [ "user", "admin" ],
  "enabled": true,
  "rules": {
    "field" : { "username" : [ "esadmin01", "esadmin02" ] }
  }
}
Run `POST /_security/role_mapping/mapping3` to match users who authenticated against a specific realm.
{
  "roles": [ "ldap-user" ],
  "enabled": true,
  "rules": {
    "field" : { "realm.name" : "ldap1" }
  }
}
Run `POST /_security/role_mapping/mapping4` to match any user where either the username is `esadmin` or the user is in the `cn=admins,dc=example,dc=com` group. This example is useful when the group names in your identity management system (such as Active Directory or a SAML Identity Provider) do not have a one-to-one correspondence with the names of roles in Elasticsearch. The role mapping is the means by which you link a group name with a role name.
{
  "roles": [ "superuser" ],
  "enabled": true,
  "rules": {
    "any": [
      {
        "field": {
          "username": "esadmin"
        }
      },
      {
        "field": {
          "groups": "cn=admins,dc=example,dc=com"
        }
      }
    ]
  }
}
Run `POST /_security/role_mapping/mapping5` to use an array syntax for the groups field when there are multiple groups. This pattern matches any of the groups (rather than all of the groups).
{
  "roles": [ "example-user" ],
  "enabled": true,
  "rules": {
    "field" : {
      "groups" : [
        "cn=admins,dc=example,dc=com",
        "cn=users,dc=example,dc=com"
      ]
    }
  }
}
Run `POST /_security/role_mapping/mapping6` for rare cases when the names of your groups may be an exact match for the names of your Elasticsearch roles. This can be the case when your SAML Identity Provider includes its own "group mapping" feature and can be configured to release Elasticsearch role names in the user's SAML attributes. In these cases it is possible to use a template that treats the group names as role names. NOTE: This should only be done if you intend to define roles for all of the provided groups. Mapping a user to a large number of unnecessary or undefined roles is inefficient and can have a negative effect on system performance. If you only need to map a subset of the groups, you should do it by using explicit mappings. The `tojson` mustache function is used to convert the list of group names into a valid JSON array. Because the template produces a JSON array, the `format` must be set to `json`.
{
  "role_templates": [
    {
      "template": { "source": "{{#tojson}}groups{{/tojson}}" }, 
      "format" : "json" 
    }
  ],
  "rules": {
    "field" : { "realm.name" : "saml1" }
  },
  "enabled": true
}
Run `POST /_security/role_mapping/mapping7` to match users within a particular LDAP sub-tree in a specific realm.
{
  "roles": [ "ldap-example-user" ],
  "enabled": true,
  "rules": {
    "all": [
      { "field" : { "dn" : "*,ou=subtree,dc=example,dc=com" } },
      { "field" : { "realm.name" : "ldap1" } }
    ]
  }
}
Run `POST /_security/role_mapping/mapping8` to assign rules that are complex and include wildcard matching. For example, this mapping matches any user for whom all of the following conditions are met: the Distinguished Name matches the pattern `*,ou=admin,dc=example,dc=com` or the `username` is either `es-admin` or `es-system`; the user is in the `cn=people,dc=example,dc=com` group; and the user does not have a `terminated_date`.
{
  "roles": [ "superuser" ],
  "enabled": true,
  "rules": {
    "all": [
      {
        "any": [
          {
            "field": {
              "dn": "*,ou=admin,dc=example,dc=com"
            }
          },
          {
            "field": {
              "username": [ "es-admin", "es-system" ]
            }
          }
        ]
      },
      {
        "field": {
          "groups": "cn=people,dc=example,dc=com"
        }
      },
      {
        "except": {
          "field": {
            "metadata.terminated_date": null
          }
        }
      }
    ]
  }
}
Run `POST /_security/role_mapping/mapping9` to use templated roles to automatically map every user to their own custom role. In this example every user who authenticates using the `cloud-saml` realm will be automatically mapped to two roles: the `saml_user` role and a role that is their username prefixed with `_user_`. For example, the user `nwong` would be assigned the `saml_user` and `_user_nwong` roles.
{
  "rules": { "field": { "realm.name": "cloud-saml" } },
  "role_templates": [
    { "template": { "source" : "saml_user" } }, 
    { "template": { "source" : "_user_{{username}}" } }
  ],
  "enabled": true
}
Response examples (200)
A successful response from `POST /_security/role_mapping/mapping1`.
{
  "role_mapping" : {
    "created" : true 
  }
}

Delete users Generally available

DELETE /_security/user/{username}

Delete users from the native realm.

Required authorization

  • Cluster privileges: manage_security

Path parameters

  • username string Required

    An identifier for the user.

Query parameters

  • refresh string

    If true (the default), Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, nothing is done with refreshes.

    Values are true, false, or wait_for.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • found boolean Required

      If the user is successfully deleted, the request returns {"found": true}. Otherwise, found is set to false.

DELETE /_security/user/{username}
DELETE /_security/user/jacknich
resp = client.security.delete_user(
    username="jacknich",
)
const response = await client.security.deleteUser({
  username: "jacknich",
});
response = client.security.delete_user(
  username: "jacknich"
)
$resp = $client->security()->deleteUser([
    "username" => "jacknich",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/user/jacknich"
client.security().deleteUser(d -> d
    .username("jacknich")
);
Response examples (200)
A successful response from `DELETE /_security/user/jacknich`.
{
  "found" : true
}

Get a user profile Generally available; Added in 8.2.0

GET /_security/profile/{uid}

Get a user's profile using the unique profile ID.

NOTE: The user profile feature is designed only for use by Kibana and Elastic's Observability, Enterprise Search, and Elastic Security solutions. Individual users and external applications should not call this API directly. Elastic reserves the right to change or remove this feature in future releases without prior notice.

Required authorization

  • Cluster privileges: read_security

Path parameters

  • uid string | array[string] Required

    A unique identifier for the user profile.

Query parameters

  • data string | array[string]

    A comma-separated list of filters for the data field of the profile document. To return all content, use data=*. To return a subset of content, use data=<key> to retrieve content nested under the specified <key>. By default, the API returns no data content.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • profiles array[object] Required

      A successful call returns the JSON representation of the user profile and its internal versioning numbers. The API returns an empty object if no profile document is found for the provided uid. The content of the data field is not returned by default to avoid deserializing a potentially large payload.

      Hide profiles attributes Show profiles attributes object
      • uid string Required
      • user object Required
        Hide user attributes Show user attributes object
        • email
        • full_name
        • roles array[string] Required
      • data object Required
        Hide data attribute Show data attribute object
        • * object Additional properties
      • labels object Required
        Hide labels attribute Show labels attribute object
        • * object Additional properties
      • enabled boolean
      • last_synchronized number Required
      • _doc object Required
        Hide _doc attribute Show _doc attribute object
        • _primary_term number Required
    • errors object
      Hide errors attributes Show errors attributes object
      • count number Required
      • details object Required
        Hide details attribute Show details attribute object
        • * object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Hide * attributes Show * attributes object
          • type string Required

            The type of error

          • reason string | null

            A human-readable explanation of the error, in English.

          • stack_trace string

            The server stack trace. Present only if the error_trace=true parameter was sent with the request.

          • caused_by object

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          • root_cause array[object]

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          • suppressed array[object]

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

GET /_security/profile/{uid}
GET /_security/profile/u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0
resp = client.security.get_user_profile(
    uid="u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0",
)
const response = await client.security.getUserProfile({
  uid: "u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0",
});
response = client.security.get_user_profile(
  uid: "u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0"
)
$resp = $client->security()->getUserProfile([
    "uid" => "u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/profile/u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0"
client.security().getUserProfile(g -> g
    .uid("u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0")
);
Response examples (200)
A successful response from `GET /_security/profile/u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0`. By default, no content is returned in the `data` field.
{
  "profiles": [
    {
      "uid": "u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0",
      "enabled": true,
      "last_synchronized": 1642650651037,
      "user": {
        "username": "jacknich",
        "roles": [
          "admin", "other_role1"
        ],
        "realm_name": "native",
        "full_name": "Jack Nicholson",
        "email": "jacknich@example.com"
      },
      "labels": {
        "direction": "north"
      },
      "data": {}, 
      "_doc": {
        "_primary_term": 88,
        "_seq_no": 66
      }
    }
  ]
}
A successful response from `GET /_security/profile/u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0?data=app1.key1`.
{
  "profiles": [
    {
      "uid": "u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0",
      "enabled": true,
      "last_synchronized": 1642650651037,
      "user": {
        "username": "jacknich",
        "roles": [
          "admin", "other_role1"
        ],
        "realm_name": "native",
        "full_name": "Jack Nicholson",
        "email": "jacknich@example.com"
      },
      "labels": {
        "direction": "north"
      },
      "data": {
        "app1": {
          "key1": "value1"
        }
      },
      "_doc": {
        "_primary_term": 88,
        "_seq_no": 66
      }
    }
  ]
}
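
With the Python client, the request behind the example above can be written as follows. A minimal sketch; the `data` parameter is assumed to map onto the `?data=` query string:

# Return only the `app1.key1` subset of the profile's data content.
resp = client.security.get_user_profile(
    uid="u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0",
    data="app1.key1",
)
print(resp["profiles"][0]["data"])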
A response that contains errors that occurred while retrieving user profiles.
{
  "profiles": [],
  "errors": {
    "count": 1,
    "details": {
      "u_FmxQt3gr1BBH5wpnz9HkouPj3Q710XkOgg1PWkwLPBW_5": {
        "type": "resource_not_found_exception",
        "reason": "profile document not found"
      }
    }
  }
}

Grant an API key Generally available; Added in 7.9.0

POST /_security/api_key/grant

Create an API key on behalf of another user. This API is similar to the create API keys API; however, it creates the API key for a user that is different from the user that runs the API. The caller must have authentication credentials for the user on whose behalf the API key will be created. It is not possible to use this API to create an API key without that user's credentials. The supported user authentication credential types are:

  • username and password
  • Elasticsearch access tokens
  • JWTs

The user for whom the authentication credentials are provided can optionally "run as" (impersonate) another user. In this case, the API key will be created on behalf of the impersonated user.

This API is intended to be used by applications that need to create and manage API keys for end users, but cannot guarantee that those users have permission to create API keys on their own behalf. The API keys are created by the Elasticsearch API key service, which is automatically enabled.

A successful grant API key API call returns a JSON structure that contains the API key, its unique id, and its name. If applicable, it also returns expiration information for the API key in milliseconds.

By default, API keys never expire. You can specify expiration information when you create the API keys.

Required authorization

  • Cluster privileges: grant_api_key

Query parameters

  • refresh string

    If 'true', Elasticsearch refreshes the affected shards to make this operation visible to search. If 'wait_for', it waits for a refresh to make this operation visible to search. If 'false', nothing is done with refreshes.

    Values are true, false, or wait_for.

application/json

Body Required

  • api_key object Required

    The API key.

    Hide api_key attributes Show api_key attributes object
    • name string Required
    • expiration string

      Expiration time for the API key. By default, API keys never expire.

    • role_descriptors object | array[object]

      The role descriptors for this API key. When it is not specified or is an empty array, the API key has a point-in-time snapshot of the permissions of the specified user or access token. If you supply role descriptors, the resultant permissions are an intersection of the API key's permissions and the permissions of the user or access token.

      One of:
      Hide attribute Show attribute
      • * object Additional properties
        Hide * attributes Show * attributes object
        • cluster array[string]

          A list of cluster privileges. These privileges define the cluster level actions that API keys are able to execute.

        • indices array[object]

          A list of indices permissions entries.

        • remote_indices array[object] Generally available; Added in 8.14.0

          A list of indices permissions for remote clusters.

          The subset of index level privileges that can be defined for remote clusters.

        • remote_cluster array[object] Generally available; Added in 8.15.0

          A list of cluster permissions for remote clusters. NOTE: This is limited to a subset of the cluster permissions.

          The subset of cluster level privileges that can be defined for remote clusters.

        • global array[object] | object

          An object defining global privileges. A global privilege is a form of cluster privilege that is request-aware. Support for global privileges is currently limited to the management of application privileges.

        • applications array[object]

          A list of application privilege entries

        • metadata object

          Optional meta-data. Within the metadata object, keys that begin with _ are reserved for system usage.

        • run_as array[string]

          A list of users that the API keys can impersonate. NOTE: In Elastic Cloud Serverless, the run-as feature is disabled. For API compatibility, you can still specify an empty run_as field, but a non-empty list will be rejected.

        • description string

          Optional description of the role descriptor

        • restriction object

          Restriction for when the role descriptor is allowed to be effective.

        • transient_metadata object
          Hide transient_metadata attribute Show transient_metadata attribute object
          • * object Additional properties
    • metadata object

      Arbitrary metadata that you want to associate with the API key. It supports nested data structure. Within the metadata object, keys beginning with _ are reserved for system usage.

      Hide metadata attribute Show metadata attribute object
      • * object Additional properties
  • grant_type string Required

    The type of grant. Supported grant types are: access_token, password.

    Values are access_token or password.

  • access_token string

    The user's access token. If you specify the access_token grant type, this parameter is required. It is not valid with other grant types.

  • username string

    The user name that identifies the user. If you specify the password grant type, this parameter is required. It is not valid with other grant types.

  • password string

    The user's password. If you specify the password grant type, this parameter is required. It is not valid with other grant types.

  • run_as string

    The name of the user to be impersonated.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • api_key string Required
    • id string Required
    • name string Required
    • expiration number

      Expiration in milliseconds for the API key.

    • encoded string Required
POST /_security/api_key/grant
POST /_security/api_key/grant
{
  "grant_type": "password",
  "username" : "test_admin",
  "password" : "x-pack-test-password",
  "api_key" : {
    "name": "my-api-key",
    "expiration": "1d",
    "role_descriptors": {
      "role-a": {
        "cluster": ["all"],
        "indices": [
          {
          "names": ["index-a*"],
          "privileges": ["read"]
          }
        ]
      },
      "role-b": {
        "cluster": ["all"],
        "indices": [
          {
          "names": ["index-b*"],
          "privileges": ["all"]
          }
        ]
      }
    },
    "metadata": {
      "application": "my-application",
      "environment": {
        "level": 1,
        "trusted": true,
        "tags": ["dev", "staging"]
      }
    }
  }
}
resp = client.security.grant_api_key(
    grant_type="password",
    username="test_admin",
    password="x-pack-test-password",
    api_key={
        "name": "my-api-key",
        "expiration": "1d",
        "role_descriptors": {
            "role-a": {
                "cluster": [
                    "all"
                ],
                "indices": [
                    {
                        "names": [
                            "index-a*"
                        ],
                        "privileges": [
                            "read"
                        ]
                    }
                ]
            },
            "role-b": {
                "cluster": [
                    "all"
                ],
                "indices": [
                    {
                        "names": [
                            "index-b*"
                        ],
                        "privileges": [
                            "all"
                        ]
                    }
                ]
            }
        },
        "metadata": {
            "application": "my-application",
            "environment": {
                "level": 1,
                "trusted": True,
                "tags": [
                    "dev",
                    "staging"
                ]
            }
        }
    },
)
const response = await client.security.grantApiKey({
  grant_type: "password",
  username: "test_admin",
  password: "x-pack-test-password",
  api_key: {
    name: "my-api-key",
    expiration: "1d",
    role_descriptors: {
      "role-a": {
        cluster: ["all"],
        indices: [
          {
            names: ["index-a*"],
            privileges: ["read"],
          },
        ],
      },
      "role-b": {
        cluster: ["all"],
        indices: [
          {
            names: ["index-b*"],
            privileges: ["all"],
          },
        ],
      },
    },
    metadata: {
      application: "my-application",
      environment: {
        level: 1,
        trusted: true,
        tags: ["dev", "staging"],
      },
    },
  },
});
response = client.security.grant_api_key(
  body: {
    "grant_type": "password",
    "username": "test_admin",
    "password": "x-pack-test-password",
    "api_key": {
      "name": "my-api-key",
      "expiration": "1d",
      "role_descriptors": {
        "role-a": {
          "cluster": [
            "all"
          ],
          "indices": [
            {
              "names": [
                "index-a*"
              ],
              "privileges": [
                "read"
              ]
            }
          ]
        },
        "role-b": {
          "cluster": [
            "all"
          ],
          "indices": [
            {
              "names": [
                "index-b*"
              ],
              "privileges": [
                "all"
              ]
            }
          ]
        }
      },
      "metadata": {
        "application": "my-application",
        "environment": {
          "level": 1,
          "trusted": true,
          "tags": [
            "dev",
            "staging"
          ]
        }
      }
    }
  }
)
$resp = $client->security()->grantApiKey([
    "body" => [
        "grant_type" => "password",
        "username" => "test_admin",
        "password" => "x-pack-test-password",
        "api_key" => [
            "name" => "my-api-key",
            "expiration" => "1d",
            "role_descriptors" => [
                "role-a" => [
                    "cluster" => array(
                        "all",
                    ),
                    "indices" => array(
                        [
                            "names" => array(
                                "index-a*",
                            ),
                            "privileges" => array(
                                "read",
                            ),
                        ],
                    ),
                ],
                "role-b" => [
                    "cluster" => array(
                        "all",
                    ),
                    "indices" => array(
                        [
                            "names" => array(
                                "index-b*",
                            ),
                            "privileges" => array(
                                "all",
                            ),
                        ],
                    ),
                ],
            ],
            "metadata" => [
                "application" => "my-application",
                "environment" => [
                    "level" => 1,
                    "trusted" => true,
                    "tags" => array(
                        "dev",
                        "staging",
                    ),
                ],
            ],
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"grant_type":"password","username":"test_admin","password":"x-pack-test-password","api_key":{"name":"my-api-key","expiration":"1d","role_descriptors":{"role-a":{"cluster":["all"],"indices":[{"names":["index-a*"],"privileges":["read"]}]},"role-b":{"cluster":["all"],"indices":[{"names":["index-b*"],"privileges":["all"]}]}},"metadata":{"application":"my-application","environment":{"level":1,"trusted":true,"tags":["dev","staging"]}}}}' "$ELASTICSEARCH_URL/_security/api_key/grant"
client.security().grantApiKey(g -> g
    .apiKey(a -> a
        .name("my-api-key")
        .expiration(e -> e
            .time("1d")
        )
        .roleDescriptors(Map.of("role-b", RoleDescriptor.of(r -> r
                .cluster("all")
                .indices(i -> i
                    .names("index-b*")
                    .privileges("all")
                )),"role-a", RoleDescriptor.of(r -> r
                .cluster("all")
                .indices(i -> i
                    .names("index-a*")
                    .privileges("read")
                ))))
        .metadata(Map.of("environment", JsonData.fromJson("{\"level\":1,\"trusted\":true,\"tags\":[\"dev\",\"staging\"]}"),"application", JsonData.fromJson("\"my-application\"")))
    )
    .grantType(ApiKeyGrantType.Password)
    .password("x-pack-test-password")
    .username("test_admin")
);
Request examples
Run `POST /_security/api_key/grant` to create an API key on behalf of the `test_admin` user.
{
  "grant_type": "password",
  "username" : "test_admin",
  "password" : "x-pack-test-password",
  "api_key" : {
    "name": "my-api-key",
    "expiration": "1d",
    "role_descriptors": {
      "role-a": {
        "cluster": ["all"],
        "indices": [
          {
          "names": ["index-a*"],
          "privileges": ["read"]
          }
        ]
      },
      "role-b": {
        "cluster": ["all"],
        "indices": [
          {
          "names": ["index-b*"],
          "privileges": ["all"]
          }
        ]
      }
    },
    "metadata": {
      "application": "my-application",
      "environment": {
        "level": 1,
        "trusted": true,
        "tags": ["dev", "staging"]
      }
    }
  }
}
Run `POST /_security/api_key/grant`. The user (`test_admin`) whose credentials are provided can "run as" another user (`test_user`). The API key will be granted to the impersonated user (`test_user`).
{
  "grant_type": "password",
  "username" : "test_admin",  
  "password" : "x-pack-test-password",  
  "run_as": "test_user",  
  "api_key" : {
    "name": "another-api-key"
  }
}
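
The examples above use the password grant type. A minimal sketch of the access_token variant, assuming you already hold a valid Elasticsearch access token for the target user (the token value below is a placeholder):

# Grant an API key using an existing access token instead of a username and password.
resp = client.security.grant_api_key(
    grant_type="access_token",
    access_token="<elasticsearch-access-token>",  # placeholder, not a real token
    api_key={"name": "my-api-key", "expiration": "1d"},
)
print(resp["encoded"])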








Authenticate OpenID Connect Generally available

POST /_security/oidc/authenticate

Exchange an OpenID Connect authentication response message for an Elasticsearch internal access token and refresh token that can be subsequently used for authentication.

Elasticsearch exposes all the necessary OpenID Connect related functionality with the OpenID Connect APIs. These APIs are used internally by Kibana to provide OpenID Connect based authentication, but they can also be used by custom web applications or other clients.

application/json

Body Required

  • nonce string Required

    Associate a client session with an ID token and mitigate replay attacks. This value needs to be the same as the one that was provided to the /_security/oidc/prepare API or the one that was generated by Elasticsearch and included in the response to that call.

  • realm string

    The name of the OpenID Connect realm. This property is useful in cases where multiple realms are defined.

  • redirect_uri string Required

    The URL to which the OpenID Connect Provider redirected the User Agent in response to an authentication request after a successful authentication. This URL must be provided as-is (URL encoded), taken from the body of the response or as the value of a location header in the response from the OpenID Connect Provider.

  • state string Required

    Maintain state between the authentication request and the response. This value needs to be the same as the one that was provided to the /_security/oidc/prepare API or the one that was generated by Elasticsearch and included in the response to that call.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • access_token string Required

      The Elasticsearch access token.

    • expires_in number Required

      The duration (in seconds) of the tokens.

    • refresh_token string Required

      The Elasticsearch refresh token.

    • type string Required

      The type of token.

POST /_security/oidc/authenticate
POST /_security/oidc/authenticate
{
  "redirect_uri" : "https://oidc-kibana.elastic.co:5603/api/security/oidc/callback?code=jtI3Ntt8v3_XvcLzCFGq&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
  "state" : "4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
  "nonce" : "WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM",
  "realm" : "oidc1"
}
resp = client.security.oidc_authenticate(
    redirect_uri="https://oidc-kibana.elastic.co:5603/api/security/oidc/callback?code=jtI3Ntt8v3_XvcLzCFGq&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
    state="4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
    nonce="WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM",
    realm="oidc1",
)
const response = await client.security.oidcAuthenticate({
  redirect_uri:
    "https://oidc-kibana.elastic.co:5603/api/security/oidc/callback?code=jtI3Ntt8v3_XvcLzCFGq&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
  state: "4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
  nonce: "WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM",
  realm: "oidc1",
});
response = client.security.oidc_authenticate(
  body: {
    "redirect_uri": "https://oidc-kibana.elastic.co:5603/api/security/oidc/callback?code=jtI3Ntt8v3_XvcLzCFGq&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
    "state": "4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
    "nonce": "WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM",
    "realm": "oidc1"
  }
)
$resp = $client->security()->oidcAuthenticate([
    "body" => [
        "redirect_uri" => "https://oidc-kibana.elastic.co:5603/api/security/oidc/callback?code=jtI3Ntt8v3_XvcLzCFGq&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
        "state" => "4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
        "nonce" => "WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM",
        "realm" => "oidc1",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"redirect_uri":"https://oidc-kibana.elastic.co:5603/api/security/oidc/callback?code=jtI3Ntt8v3_XvcLzCFGq&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I","state":"4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I","nonce":"WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM","realm":"oidc1"}' "$ELASTICSEARCH_URL/_security/oidc/authenticate"
client.security().oidcAuthenticate(o -> o
    .nonce("WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM")
    .realm("oidc1")
    .redirectUri("https://oidc-kibana.elastic.co:5603/api/security/oidc/callback?code=jtI3Ntt8v3_XvcLzCFGq&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I")
    .state("4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I")
);
Request example
Run `POST /_security/oidc/authenticate` to exchange the response that was returned from the OpenID Connect Provider after a successful authentication for an Elasticsearch access token and refresh token. This example is from an authentication that uses the authorization code grant flow.
{
  "redirect_uri" : "https://oidc-kibana.elastic.co:5603/api/security/oidc/callback?code=jtI3Ntt8v3_XvcLzCFGq&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
  "state" : "4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
  "nonce" : "WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM",
  "realm" : "oidc1"
}
Response examples (200)
A successful response from `POST /_security/oidc/authenticate`. It contains the access and refresh tokens that were generated, the token duration (in seconds), and the type.
{
  "access_token" : "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
  "type" : "Bearer",
  "expires_in" : 1200,
  "refresh_token": "vLBPvmAB6KvwvJZr27cS"
}
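The access_token in this response is of type Bearer, so a custom application can use it to authenticate subsequent requests on behalf of the user. The snippet below is a minimal sketch, not part of the API: it assumes the Python client's bearer_auth connection option, uses a hypothetical cluster URL, and reuses the test token from the response example above.
from elasticsearch import Elasticsearch

# Access token from the response example above (test data, not a real token).
access_token = "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ=="

# Hypothetical cluster URL; bearer_auth sends an "Authorization: Bearer <token>" header.
es = Elasticsearch("https://es.example.org:9200", bearer_auth=access_token)

# Requests made with this client now run as the user who completed the OpenID Connect flow.
print(es.security.authenticate())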

Logout of OpenID Connect Generally available

POST /_security/oidc/logout

Invalidate an access token and a refresh token that were generated as a response to the /_security/oidc/authenticate API.

If the OpenID Connect authentication realm in Elasticsearch is configured accordingly, the response to this call contains a URI that points to the end session endpoint of the OpenID Connect Provider so that single logout can be performed.

Elasticsearch exposes all the necessary OpenID Connect related functionality with the OpenID Connect APIs. These APIs are used internally by Kibana to provide OpenID Connect based authentication, but they can also be used by custom web applications or other clients.

application/json

Body Required

  • token string Required

    The access token to be invalidated.

  • refresh_token string

    The refresh token to be invalidated.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • redirect string Required

      A URI that points to the end session endpoint of the OpenID Connect Provider with all the parameters of the logout request as HTTP GET parameters.

POST /_security/oidc/logout
POST /_security/oidc/logout
{
  "token" : "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
  "refresh_token": "vLBPvmAB6KvwvJZr27cS"
}
resp = client.security.oidc_logout(
    token="dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
    refresh_token="vLBPvmAB6KvwvJZr27cS",
)
const response = await client.security.oidcLogout({
  token:
    "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
  refresh_token: "vLBPvmAB6KvwvJZr27cS",
});
response = client.security.oidc_logout(
  body: {
    "token": "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
    "refresh_token": "vLBPvmAB6KvwvJZr27cS"
  }
)
$resp = $client->security()->oidcLogout([
    "body" => [
        "token" => "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
        "refresh_token" => "vLBPvmAB6KvwvJZr27cS",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"token":"dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==","refresh_token":"vLBPvmAB6KvwvJZr27cS"}' "$ELASTICSEARCH_URL/_security/oidc/logout"
client.security().oidcLogout(o -> o
    .refreshToken("vLBPvmAB6KvwvJZr27cS")
    .token("dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==")
);
Request example
Run `POST /_security/oidc/logout` to perform the logout.
{
  "token" : "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
  "refresh_token": "vLBPvmAB6KvwvJZr27cS"
}
Response examples (200)
A successful response from `POST /_security/oidc/logout`, which contains the URI pointing to the End Session Endpoint of the OpenID Connect Provider with all the parameters of the Logout Request as HTTP GET parameters.
{
  "redirect" : "https://op-provider.org/logout?id_token_hint=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c&post_logout_redirect_uri=https%3A%2F%2Ffanyv88.com%3A443%2Fhttp%2Foidc-kibana.elastic.co%2Floggedout&state=lGYK0EcSLjqH6pkT5EVZjC6eIW5YCGgywj2sxROO"
}
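When the realm is configured for single logout, the application has one step left after this API call: sending the user's browser to the returned redirect URI. The following is a minimal sketch with the Python client, reusing the test tokens from the example above.
resp = client.security.oidc_logout(
    token="dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
    refresh_token="vLBPvmAB6KvwvJZr27cS",
)
# Both tokens are now invalidated in Elasticsearch. To complete single logout,
# send the user's browser to the OP's end session endpoint:
end_session_url = resp["redirect"]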

Prepare OpenID Connect authentication Generally available

POST /_security/oidc/prepare

Create an OAuth 2.0 authentication request as a URL string based on the configuration of the OpenID Connect authentication realm in Elasticsearch.

The response of this API is a URL that points to the Authorization Endpoint of the configured OpenID Connect Provider, which can be used to redirect the user's browser in order to continue the authentication process.

Elasticsearch exposes all the necessary OpenID Connect related functionality with the OpenID Connect APIs. These APIs are used internally by Kibana to provide OpenID Connect based authentication, but they can also be used by custom web applications or other clients. A sketch of the full prepare-then-authenticate flow appears after the examples below.

application/json

Body Required

  • iss string

    In the case of a third-party-initiated single sign-on, this is the issuer identifier of the OpenID Connect Provider (OP) to which the Relying Party (RP) should send the authentication request. It cannot be specified when realm is specified. One of realm or iss is required.

  • login_hint string

    In the case of a third-party-initiated single sign-on, this is a string value that is included in the authentication request as the login_hint parameter. This parameter is not valid when realm is specified.

  • nonce string

    The value used to associate a client session with an ID token and to mitigate replay attacks. If the caller of the API does not provide a value, Elasticsearch will generate one with sufficient entropy and return it in the response.

  • realm string

    The name of the OpenID Connect realm in Elasticsearch whose configuration should be used to generate the authentication request. It cannot be specified when iss is specified. One of realm or iss is required.

  • state string

    The value used to maintain state between the authentication request and the response, typically used as a Cross-Site Request Forgery mitigation. If the caller of the API does not provide a value, Elasticsearch will generate one with sufficient entropy and return it in the response.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • nonce string Required
    • realm string Required
    • redirect string Required

      A URI that points to the authorization endpoint of the OpenID Connect Provider with all the parameters of the authentication request as HTTP GET parameters.

    • state string Required
POST /_security/oidc/prepare
POST /_security/oidc/prepare
{
  "realm" : "oidc1"
}
resp = client.security.oidc_prepare_authentication(
    realm="oidc1",
)
const response = await client.security.oidcPrepareAuthentication({
  realm: "oidc1",
});
response = client.security.oidc_prepare_authentication(
  body: {
    "realm": "oidc1"
  }
)
$resp = $client->security()->oidcPrepareAuthentication([
    "body" => [
        "realm" => "oidc1",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"realm":"oidc1"}' "$ELASTICSEARCH_URL/_security/oidc/prepare"
client.security().oidcPrepareAuthentication(o -> o
    .realm("oidc1")
);
Request examples
Run `POST /_security/oidc/prepare` to generate an authentication request for the OpenID Connect Realm `oidc1`.
{
  "realm" : "oidc1"
}
Run `POST /_security/oidc/prepare` to generate an authentication request for the OpenID Connect Realm `oidc1`, where the values for the `state` and the `nonce` have been generated by the client.
{
  "realm" : "oidc1",
  "state" : "lGYK0EcSLjqH6pkT5EVZjC6eIW5YCGgywj2sxROO",
  "nonce" : "zOBXLJGUooRrbLbQk5YCcyC8AXw3iloynvluYhZ5"
}
Run `POST /_security/oidc/prepare` to generate an authentication request for a third party initiated single sign on. Specify the issuer that should be used for matching the appropriate OpenID Connect Authentication realm.
{
  "iss" : "http://127.0.0.1:8080",
  "login_hint": "this_is_an_opaque_string"
}
Response examples (200)
A successful response from `POST /_security/oidc/prepare`. It contains the URI pointing to the Authorization Endpoint of the OpenID Connect Provider with all the parameters of the Authentication Request as HTTP GET parameters.
{
  "redirect" : "http://127.0.0.1:8080/c2id-login?scope=openid&response_type=id_token&redirect_uri=https%3A%2F%2Ffanyv88.com%3A443%2Fhttps%2Fmy.fantastic.rp%2Fcb&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I&nonce=WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM&client_id=elasticsearch-rp",
  "state" : "4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
  "nonce" : "WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM",
  "realm" : "oidc1"
}
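Taken together, the prepare and authenticate APIs cover the Elasticsearch side of the login flow for a custom web application. The following is a minimal sketch with the Python client; the browser redirect, the session storage, and the callback_url value are hypothetical application concerns that are only indicated in comments.
# Step 1: build the authentication request for the OpenID Connect realm "oidc1".
prepare = client.security.oidc_prepare_authentication(realm="oidc1")

# Store prepare["state"] and prepare["nonce"] in the user's session, then redirect
# the user's browser to prepare["redirect"] (the OP's authorization endpoint).

# Step 2: when the OP redirects back to the application's callback endpoint,
# exchange the full callback URL for Elasticsearch tokens.
callback_url = "https://rp.example.org/callback?code=...&state=..."  # hypothetical placeholder
tokens = client.security.oidc_authenticate(
    redirect_uri=callback_url,
    state=prepare["state"],
    nonce=prepare["nonce"],
    realm="oidc1",
)

# tokens["access_token"] and tokens["refresh_token"] can now act on behalf of the user;
# invalidate them later with client.security.oidc_logout.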




Find API keys with a query Generally available; Added in 7.15.0

POST /_security/_query/api_key

All methods and paths for this operation:

GET /_security/_query/api_key

POST /_security/_query/api_key

Get a paginated list of API keys and their information. You can optionally filter the results with a query.

To use this API, you must have at least the manage_own_api_key or the read_security cluster privileges. If you have only the manage_own_api_key privilege, this API returns only the API keys that you own. If you have the read_security, manage_api_key, or greater privileges (including manage_security), this API returns all API keys regardless of ownership.

Required authorization

  • Cluster privileges: manage_own_api_key,read_security

Query parameters

  • with_limited_by boolean Generally available; Added in 8.5.0

    Return the snapshot of the owner user's role descriptors associated with the API key. An API key's effective permissions are the intersection of its assigned role descriptors and the owner user's role descriptors (the key is effectively limited by them). An API key cannot retrieve the limited-by role descriptors of any API key, including its own, unless it has the manage_api_key or higher privileges.

  • with_profile_uid boolean Generally available; Added in 8.14.0

    Determines whether to also retrieve the profile UID for the API key owner principal. If it exists, the profile UID is returned under the profile_uid response field for each API key.

  • typed_keys boolean Generally available; Added in 8.14.0

    Determines whether aggregation names are prefixed by their respective types in the response.

application/json

Body

  • aggregations object

    Any aggregations to run over the corpus of returned API keys. Aggregations and queries work together. Aggregations are computed only on the API keys that match the query. This supports only a subset of aggregation types, namely: terms, range, date_range, missing, cardinality, value_count, composite, filter, and filters. Additionally, aggregations only run over the same subset of fields that query works with. An illustrative aggregation request is sketched after the request examples below.

  • query object

    A query to filter which API keys to return. If the query parameter is missing, it is equivalent to a match_all query. The query supports a subset of query types, including match_all, bool, term, terms, match, ids, prefix, wildcard, exists, range, and simple_query_string. You can query the following public information associated with an API key: id, type, name, creation, expiration, invalidated, invalidation, username, realm, and metadata.

    NOTE: The queryable string values associated with API keys are internally mapped as keywords. Consequently, if no analyzer parameter is specified for a match query, then the provided match query string is interpreted as a single keyword value. Such a match query is hence equivalent to a term query.

    Hide query attributes Show query attributes object
    • match object

      Returns documents that match a provided text, number, date or boolean value. The provided text is analyzed before matching.

    • prefix object

      Returns documents that contain a specific prefix in a provided field.

    • range object

      Returns documents that contain terms within a provided range.

    • term object

      Returns documents that contain an exact term in a provided field. To return a document, the query term must exactly match the queried field's value, including whitespace and capitalization.

    • wildcard object

      Returns documents that contain terms matching a wildcard pattern.

  • from number

    The starting document offset. It must not be negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter; a pagination sketch appears with the request examples below.

    Default value is 0.

  • sort string | object | array[string | object]

    The sort definition. Other than id, all public fields of an API key are eligible for sorting. In addition, sort can also be applied to the _doc field to sort by index order.

  • size number

    The number of hits to return. It must not be negative. The size parameter can be set to 0, in which case no API key matches are returned, only the aggregation results. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.

    Default value is 10.

  • search_after array[number | string | boolean | null | object]

    The search after definition.

    A field value.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • total number Required

      The total number of API keys found.

    • count number Required

      The number of API keys returned in the response.

    • api_keys array[object] Required

      A list of API key information.

      Hide api_keys attributes Show api_keys attributes object
      • id string Required

        The ID of the API key.

      • name string Required

        Name of the API key.

      • type string Required

        The type of the API key (e.g. rest or cross_cluster).

        Values are rest or cross_cluster.

      • creation number

        The creation time for the API key, in milliseconds since the Unix epoch.

      • expiration number

        The expiration time for the API key, in milliseconds since the Unix epoch.

      • invalidated boolean Required

        Invalidation status for the API key. If the key has been invalidated, it has a value of true. Otherwise, it is false.

      • invalidation number

        The invalidation time for the API key, in milliseconds since the Unix epoch.

      • username string Required

        Principal for which this API key was created

      • realm string Required

        Realm name of the principal for which this API key was created.

      • realm_type string Generally available; Added in 8.14.0

        Realm type of the principal for which this API key was created

      • metadata object Required

        Metadata of the API key

        Hide metadata attribute Show metadata attribute object
        • * object Additional properties
      • role_descriptors object

        The role descriptors assigned to this API key when it was created or last updated. An empty role descriptor means the API key inherits the owner user’s permissions.

        Hide role_descriptors attribute Show role_descriptors attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • cluster array[string]

            A list of cluster privileges. These privileges define the cluster level actions that API keys are able to execute.

          • indices array[object]

            A list of indices permissions entries.

          • remote_indices array[object] Generally available; Added in 8.14.0

            A list of indices permissions for remote clusters.

            The subset of index level privileges that can be defined for remote clusters.

          • remote_cluster array[object] Generally available; Added in 8.15.0

            A list of cluster permissions for remote clusters. NOTE: This is limited to a subset of the cluster permissions.

            The subset of cluster level privileges that can be defined for remote clusters.

          • global array[object] | object

            An object defining global privileges. A global privilege is a form of cluster privilege that is request-aware. Support for global privileges is currently limited to the management of application privileges.

          • applications array[object]

            A list of application privilege entries

          • metadata object

            Optional meta-data. Within the metadata object, keys that begin with _ are reserved for system usage.

          • run_as array[string]

            A list of users that the API keys can impersonate. NOTE: In Elastic Cloud Serverless, the run-as feature is disabled. For API compatibility, you can still specify an empty run_as field, but a non-empty list will be rejected.

          • description string

            Optional description of the role descriptor

          • restriction object

            Restriction for when the role descriptor is allowed to be effective.

          • transient_metadata object
            Hide transient_metadata attribute Show transient_metadata attribute object
            • * object Additional properties
      • limited_by array[object] Generally available; Added in 8.5.0

        The owner user’s permissions associated with the API key. It is a point-in-time snapshot captured at creation and subsequent updates. An API key’s effective permissions are an intersection of its assigned privileges and the owner user’s permissions.

        Hide limited_by attribute Show limited_by attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • cluster array[string]

            A list of cluster privileges. These privileges define the cluster level actions that API keys are able to execute.

          • indices array[object]

            A list of indices permissions entries.

          • remote_indices array[object] Generally available; Added in 8.14.0

            A list of indices permissions for remote clusters.

          • remote_cluster array[object] Generally available; Added in 8.15.0

            A list of cluster permissions for remote clusters. NOTE: This is limited to a subset of the cluster permissions.

          • global
          • applications array[object]

            A list of application privilege entries

          • metadata
          • run_as array[string]

            A list of users that the API keys can impersonate. NOTE: In Elastic Cloud Serverless, the run-as feature is disabled. For API compatibility, you can still specify an empty run_as field, but a non-empty list will be rejected.

          • description string

            Optional description of the role descriptor

          • restriction
          • transient_metadata object
      • access object

        The access granted to cross-cluster API keys. The access is composed of permissions for cross-cluster search and cross-cluster replication; at least one of them is specified when the key is created or updated.

        Hide access attributes Show access attributes object
        • replication array[object]

          A list of indices permission entries for cross-cluster replication.

      • profile_uid string Generally available; Added in 8.14.0

        The profile uid for the API key owner principal, if requested and if it exists

      • _sort array[number | string | boolean | null | object]

        Sorting values when using the sort parameter with the security.query_api_keys API.

        A field value.

    • aggregations object

      The aggregations result, if requested.

POST /_security/_query/api_key
GET /_security/_query/api_key?with_limited_by=true
{
  "query": {
    "ids": {
      "values": [
        "VuaCfGcBCdbkQm-e5aOx"
      ]
    }
  }
}
resp = client.security.query_api_keys(
    with_limited_by=True,
    query={
        "ids": {
            "values": [
                "VuaCfGcBCdbkQm-e5aOx"
            ]
        }
    },
)
const response = await client.security.queryApiKeys({
  with_limited_by: "true",
  query: {
    ids: {
      values: ["VuaCfGcBCdbkQm-e5aOx"],
    },
  },
});
response = client.security.query_api_keys(
  with_limited_by: "true",
  body: {
    "query": {
      "ids": {
        "values": [
          "VuaCfGcBCdbkQm-e5aOx"
        ]
      }
    }
  }
)
$resp = $client->security()->queryApiKeys([
    "with_limited_by" => "true",
    "body" => [
        "query" => [
            "ids" => [
                "values" => array(
                    "VuaCfGcBCdbkQm-e5aOx",
                ),
            ],
        ],
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":{"ids":{"values":["VuaCfGcBCdbkQm-e5aOx"]}}}' "$ELASTICSEARCH_URL/_security/_query/api_key?with_limited_by=true"
client.security().queryApiKeys(q -> q
    .query(qu -> qu
        .ids(i -> i
            .values("VuaCfGcBCdbkQm-e5aOx")
        )
    )
    .withLimitedBy(true)
);
Request examples
Run `GET /_security/_query/api_key?with_limited_by=true` to retrieve an API key by ID.
{
  "query": {
    "ids": {
      "values": [
        "VuaCfGcBCdbkQm-e5aOx"
      ]
    }
  }
}
Run `GET /_security/_query/api_key`. Use a `bool` query to issue complex logical conditions and use `from`, `size`, and `sort` to help paginate the result. For example, the API key name must begin with `app1-key-` and must not be `app1-key-01`. It must be owned by a username with the wildcard pattern `org-*-user` and the `environment` metadata field must have a `production` value. The offset to begin the search result is the twentieth (zero-based index) API key. The page size of the response is 10 API keys. The result is first sorted by creation date in descending order, then by name in ascending order.
{
  "query": {
    "bool": {
      "must": [
        {
          "prefix": {
            "name": "app1-key-" 
          }
        },
        {
          "term": {
            "invalidated": "false" 
          }
        }
      ],
      "must_not": [
        {
          "term": {
            "name": "app1-key-01" 
          }
        }
      ],
      "filter": [
        {
          "wildcard": {
            "username": "org-*-user" 
          }
        },
        {
          "term": {
            "metadata.environment": "production" 
          }
        }
      ]
    }
  },
  "from": 20, 
  "size": 10, 
  "sort": [ 
    { "creation": { "order": "desc", "format": "date_time" } },
    "name"
  ]
}
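To continue past the page in the previous example without exceeding the 10,000-hit limit of from and size, pass the _sort values of the last API key on the current page as search_after. The following is a hedged sketch with the Python client; it assumes the documented sort, size, and search_after body fields are exposed as keyword arguments, and the search_after values are illustrative.
# Fetch the next page of keys whose names start with "app1-key-", continuing after
# the last key of the previous page (identified by its _sort values).
resp = client.security.query_api_keys(
    query={"prefix": {"name": "app1-key-"}},
    sort=[{"creation": {"order": "desc", "format": "date_time"}}, "name"],
    size=10,
    search_after=["2021-08-18T01:29:13.794Z", "app1-key-78"],  # _sort of the last hit
)
for key in resp["api_keys"]:
    print(key["id"], key["name"], key["_sort"])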
Run `GET /_security/_query/api_key` to retrieve the API key by name.
{
  "query": {
    "term": {
      "name": {
        "value": "application-key-1"
      }
    }
  }
}
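The request body also accepts aggregations over the API keys that match the query. The following is a hedged sketch with the Python client; it assumes the documented aggregations and size body fields are exposed as keyword arguments, and the aggregation name keys_by_username is arbitrary.
# Count API keys per owner username without returning the keys themselves.
resp = client.security.query_api_keys(
    size=0,  # return only aggregation results, no individual API keys
    aggregations={
        "keys_by_username": {
            "terms": {"field": "username"}  # terms is one of the supported aggregation types
        }
    },
)
print(resp["total"])
for bucket in resp["aggregations"]["keys_by_username"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])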
Response examples (200)
A successful response from `GET /_security/_query/api_key?with_limited_by=true`. The `limited_by` details are the owner user's permissions associated with the API key. It is a point-in-time snapshot captured at creation and subsequent updates. An API key's effective permissions are an intersection of its assigned privileges and the owner user's permissions.
{
  "api_keys": [
    {
      "id": "VuaCfGcBCdbkQm-e5aOx",
      "name": "application-key-1",
      "creation": 1548550550158,
      "expiration": 1548551550158,
      "invalidated": false,
      "username": "myuser",
      "realm": "native1",
      "realm_type": "native",
      "metadata": {
        "application": "my-application"
      },
      "role_descriptors": { },
      "limited_by": [ 
        {
          "role-power-user": {
            "cluster": [
              "monitor"
            ],
            "indices": [
              {
                "names": [
                  "*"
                ],
                "privileges": [
                  "read"
                ],
                "allow_restricted_indices": false
              }
            ],
            "applications": [ ],
            "run_as": [ ],
            "metadata": { },
            "transient_metadata": {
              "enabled": true
            }
          }
        }
      ]
    }
  ]
}
An abbreviated response from `GET /_security/_query/api_key` that contains a list of matched API keys along with their sort values. The first sort value is creation time, which is displayed in `date_time` format. The second sort value is the API key name.
{
  "total": 100,
  "count": 10,
  "api_keys": [
    {
      "id": "CLXgVnsBOGkf8IyjcXU7",
      "name": "app1-key-79",
      "creation": 1629250154811,
      "invalidated": false,
      "username": "org-admin-user",
      "realm": "native1",
      "metadata": {
        "environment": "production"
      },
      "role_descriptors": { },
      "_sort": [
        "2021-08-18T01:29:14.811Z",  
        "app1-key-79"  
      ]
    },
    {
      "id": "BrXgVnsBOGkf8IyjbXVB",
      "name": "app1-key-78",
      "creation": 1629250153794,
      "invalidated": false,
      "username": "org-admin-user",
      "realm": "native1",
      "metadata": {
        "environment": "production"
      },
      "role_descriptors": { },
      "_sort": [
        "2021-08-18T01:29:13.794Z",
        "app1-key-78"
      ]
    }
  ]
}
A successful response from `GET /_security/_query/api_key`. It includes the role descriptors that are assigned to each API key when it was created or last updated. Note that an API key's effective permissions are an intersection of its assigned privileges and the point-in-time snapshot of the owner user's permissions. An empty role descriptors object means the API key inherits the owner user's permissions.
{
  "total": 3,
  "count": 3,
  "api_keys": [ 
    {
      "id": "nkvrGXsB8w290t56q3Rg",
      "name": "my-api-key-1",
      "creation": 1628227480421,
      "expiration": 1629091480421,
      "invalidated": false,
      "username": "elastic",
      "realm": "reserved",
      "realm_type": "reserved",
      "metadata": {
        "letter": "a"
      },
      "role_descriptors": { 
        "role-a": {
          "cluster": [
            "monitor"
          ],
          "indices": [
            {
              "names": [
                "index-a"
              ],
              "privileges": [
                "read"
              ],
              "allow_restricted_indices": false
            }
          ],
          "applications": [ ],
          "run_as": [ ],
          "metadata": { },
          "transient_metadata": {
            "enabled": true
          }
        }
      }
    },
    {
      "id": "oEvrGXsB8w290t5683TI",
      "name": "my-api-key-2",
      "creation": 1628227498953,
      "expiration": 1628313898953,
      "invalidated": false,
      "username": "elastic",
      "realm": "reserved",
      "metadata": {
        "letter": "b"
      },
      "role_descriptors": { } 
    }
  ]
}