Elasticsearch API

Base URL
http://api.example.com

Elasticsearch provides REST APIs that are used by the UI components and can be called directly to configure and access Elasticsearch features.

Documentation source and versions

This documentation is derived from the 8.19 branch of the elasticsearch-specification repository. It is provided under the Attribution-NonCommercial-NoDerivatives 4.0 International license. This documentation contains work-in-progress information for future Elastic Stack releases.

Last updated on Sep 16, 2025.

This API is provided under the Apache 2.0 license.

Authentication

The API accepts three different authentication methods:

Api key auth (http_api_key)

Elasticsearch APIs support key-based authentication. You must create an API key and use the encoded value in the request header. For example:

curl -X GET "${ES_URL}/_cat/indices?v=true" \
  -H "Authorization: ApiKey ${API_KEY}"

To get API keys, use the /_security/api_key APIs.
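
For example, a minimal Python sketch using the elasticsearch-py client (a sketch only; the URL, key name, and expiration are placeholder values, not part of this documentation):

from elasticsearch import Elasticsearch

# Connect with an already encoded API key (placeholder URL and key).
client = Elasticsearch("https://localhost:9200", api_key="<ENCODED_API_KEY>")

# Create a new key through the /_security/api_key endpoint; the "encoded"
# value in the response is what goes into the "Authorization: ApiKey" header.
resp = client.security.create_api_key(name="my-api-key", expiration="1d")
print(resp["encoded"])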

Basic auth (http)

Basic auth tokens are constructed with the Basic keyword, followed by a space, followed by a base64-encoded string of your username and password separated by a colon (username:password).

Example: send an Authorization: Basic aGVsbG86aGVsbG8= HTTP header with your requests to authenticate with the API.
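
As a small sketch, the header value can be produced with the Python standard library (the credentials below are placeholders):

import base64

# Base64-encode "username:password" to build the basic auth header value.
credentials = "hello:hello"
token = base64.b64encode(credentials.encode("utf-8")).decode("ascii")
print(f"Authorization: Basic {token}")  # prints: Authorization: Basic aGVsbG86aGVsbG8=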

Bearer auth (http)

Elasticsearch APIs support the use of bearer tokens in the Authorization HTTP header to authenticate with the API. For examples, refer to Token-based authentication services.
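
A minimal sketch, assuming the 8.x elasticsearch-py client (which accepts a bearer_auth argument) and a token obtained from one of those services; the URL and token are placeholders:

from elasticsearch import Elasticsearch

# The client sends the token as "Authorization: Bearer <token>".
client = Elasticsearch("https://localhost:9200", bearer_auth="<ACCESS_TOKEN>")
print(client.info())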

Compact and aligned text (CAT)

The compact and aligned text (CAT) APIs are intended only for human consumption using the Kibana console or command line; they are not intended for use by applications. For application consumption, it's recommended to use a corresponding JSON API. All the CAT commands accept a help query string parameter that lists the headers and info they provide, and the /_cat command alone lists all the available commands.
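
For example, a short Python sketch (elasticsearch-py, assuming a configured client object as in the authentication examples above) that lists the available CAT commands and asks one of them for its column help:

# List every available CAT endpoint.
print(client.cat.help())

# The help query parameter describes the columns a specific CAT API can return.
print(client.cat.aliases(help=True))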

Get aliases Generally available

GET /_cat/aliases/{name}

All methods and paths for this operation:

GET /_cat/aliases

GET /_cat/aliases/{name}

Get the cluster's index aliases, including filter and routing information. This API does not return data stream aliases.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • name string | array[string]

    A comma-separated list of aliases to retrieve. Supports wildcards (*). To retrieve all aliases, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards (see the combined sketch after this parameter list).

    Supported values include:

    • alias (or a): The name of the alias.
    • index (or i, idx): The name of the index the alias points to.
    • filter (or f, fi): The filter applied to the alias.
    • routing.index (or ri, routingIndex): Index routing value for the alias.
    • routing.search (or rs, routingSearch): Search routing value for the alias.
    • is_write_index (or w, isWriteIndex): Indicates if the index is the write index for the alias.

    Values are alias, a, index, i, idx, filter, f, fi, routing.index, ri, routingIndex, routing.search, rs, routingSearch, is_write_index, w, or isWriteIndex.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
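
Putting these parameters together, a hedged Python sketch (the alias pattern is a placeholder; the client object is assumed as in the other snippets):

resp = client.cat.aliases(
    name="logs-*",                           # wildcard alias pattern (placeholder)
    h=["alias", "index", "is_write_index"],  # columns to display
    s="alias:desc",                          # sort by alias name, descending
    expand_wildcards="open,hidden",
    format="json",
)
for row in resp:
    print(row["alias"], row["index"], row["is_write_index"])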

Responses

  • 200 application/json
    • alias string

      alias name

    • index string

      index alias points to

    • filter string

      filter

    • routing.index string

      index routing

    • is_write_index string

      write index

GET /_cat/aliases/{name}
GET _cat/aliases?format=json&v=true
resp = client.cat.aliases(
    format="json",
    v=True,
)
const response = await client.cat.aliases({
  format: "json",
  v: "true",
});
response = client.cat.aliases(
  format: "json",
  v: "true"
)
$resp = $client->cat()->aliases([
    "format" => "json",
    "v" => "true",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/aliases?format=json&v=true"
client.cat().aliases();
Response examples (200)
A successful response from `GET _cat/aliases?format=json&v=true`. This response shows that `alias2` has configured a filter and `alias3` and `alias4` have routing configurations.
[
  {
    "alias": "alias1",
    "index": "test1",
    "filter": "-",
    "routing.index": "-",
    "routing.search": "-",
    "is_write_index": "true"
  },
  {
    "alias": "alias2",
    "index": "test1",
    "filter": "*",
    "routing.index": "-",
    "routing.search": "-",
    "is_write_index": "true"
  },
  {
    "alias": "alias3",
    "index": "test1",
    "filter": "-",
    "routing.index": "1",
    "routing.search": "1",
    "is_write_index": "true"
  },
  {
    "alias": "alias4",
    "index": "test1",
    "filter": "-",
    "routing.index": "2",
    "routing.search": "1,2",
    "is_write_index": "true"
  }
]

Get field data cache information Generally available

GET /_cat/fielddata/{fields}

All methods and paths for this operation:

GET /_cat/fielddata

GET /_cat/fielddata/{fields}

Get the amount of heap memory currently used by the field data cache on every data node in the cluster.

IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes stats API.

Required authorization

  • Cluster privileges: monitor

Path parameters

  • fields string | array[string] Required

    Comma-separated list of fields used to limit returned information. To retrieve all fields, omit this parameter.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • fields string | array[string]

    Comma-separated list of fields used to limit returned information.

  • h string | array[string]

    A comma-separated list of column names to display. It supports simple wildcards.

    Supported values include:

    • id: The node ID.
    • host (or h): The host name of the node.
    • ip: The IP address of the node.
    • node (or n): The node name.
    • field (or f): The field name.
    • size (or s): The field data usage.

    Values are id, host, h, ip, node, n, field, f, size, or s.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
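
For instance, a hedged Python sketch combining the fields and bytes parameters (the field names are placeholders; the client object is assumed as in the other snippets):

resp = client.cat.fielddata(
    fields="body,soul",           # limit output to these fields (placeholders)
    bytes="kb",                   # report sizes in kilobytes
    h=["node", "field", "size"],  # columns to display
    format="json",
    v=True,
)
print(resp)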

Responses

  • 200 application/json
    • id string

      node id

    • host string

      host name

    • ip string

      ip address

    • node string

      node name

    • field string

      field name

    • size string

      field data usage

GET /_cat/fielddata/{fields}
GET /_cat/fielddata?v=true&fields=body&format=json
resp = client.cat.fielddata(
    v=True,
    fields="body",
    format="json",
)
const response = await client.cat.fielddata({
  v: "true",
  fields: "body",
  format: "json",
});
response = client.cat.fielddata(
  v: "true",
  fields: "body",
  format: "json"
)
$resp = $client->cat()->fielddata([
    "v" => "true",
    "fields" => "body",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/fielddata?v=true&fields=body&format=json"
client.cat().fielddata();
Response examples (200)
A successful response from `GET /_cat/fielddata?v=true&fields=body&format=json`. You can specify an individual field in the request body or URL path. This example retrieves heap memory size information for the `body` field.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  }
]
A successful response from `GET /_cat/fielddata/body,soul?v=true&format=json`. You can specify a comma-separated list of fields in the request body or URL path. This example retrieves heap memory size information for the `body` and `soul` fields. To get information for all fields, run `GET /_cat/fielddata?v=true`.
[
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "body",
    "size": "544b"
  },
  {
    "id": "Nqk-6inXQq-OxUfOUI8jNQ",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "Nqk-6in",
    "field": "soul",
    "size": "480b"
  }
]

Get CAT help Generally available

GET /_cat

Get help for the CAT APIs.

Responses

  • 200 application/json
GET /_cat
curl \
 --request GET 'http://api.example.com/_cat' \
 --header "Authorization: $API_KEY"
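
The equivalent call with the Python client used in the other examples (a sketch; the response is plain text listing each /_cat endpoint):

resp = client.cat.help()
print(resp)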

Get trained models Generally available; Added in 7.7.0

GET /_cat/ml/trained_models/{model_id}

All methods and paths for this operation:

GET /_cat/ml/trained_models

GET /_cat/ml/trained_models/{model_id}

Get configuration and usage information about inference trained models.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • model_id string Required

    A unique identifier for the trained model.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request: contains wildcard expressions and there are no models that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, the API returns an empty array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    A comma-separated list of column names to display.

    Supported values include:

    • create_time (or ct): The time when the trained model was created.
    • created_by (or c, createdBy): Information on the creator of the trained model.
    • data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if it is still available.
    • description (or d): The description of the trained model.
    • heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
    • id: Identifier for the trained model.
    • ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
    • ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
    • ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
    • ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
    • ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
    • license (or l): The license level of the trained model.
    • operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps to measure the computational complexity of the model.
    • version (or v): The Elasticsearch version number in which the trained model was created.
  • s string | array[string]

    A comma-separated list of column names or aliases used to sort the response.

    Supported values include:

    • create_time (or ct): The time when the trained model was created.
    • created_by (or c, createdBy): Information on the creator of the trained model.
    • data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if it is still available.
    • description (or d): The description of the trained model.
    • heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
    • id: Identifier for the trained model.
    • ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
    • ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
    • ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
    • ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
    • ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
    • license (or l): The license level of the trained model.
    • operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps to measure the computational complexity of the model.
    • version (or v): The Elasticsearch version number in which the trained model was created.
  • from number

    Skips the specified number of trained models.

  • size number

    The maximum number of trained models to display.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.
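
A hedged Python sketch that pages through trained models with from and size and narrows the columns (from_ is the Python client's spelling of the from parameter; the client object is assumed as in the other snippets):

resp = client.cat.ml_trained_models(
    allow_no_match=True,
    from_=0,       # number of models to skip; increase to page through results
    size=50,       # return at most 50 models
    h=["id", "heap_size", "operations", "create_time"],
    bytes="mb",    # report heap_size in megabytes
    format="json",
    v=True,
)
for model in resp:
    print(model["id"], model["heap_size"])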

Responses

  • 200 application/json
    • id string

      The model identifier.

    • created_by string

      Information about the creator of the model.

    • heap_size number | string

      The estimated heap size to keep the model in memory.

    • operations string

      The estimated number of operations to use the model. This number helps to measure the computational complexity of the model.

    • license string

      The license level of the model.

    • create_time string | number

      The time the model was created.

    • version string

      The version of Elasticsearch when the model was created.

    • description string

      A description of the model.

    • ingest.pipelines string

      The number of pipelines that are referencing the model.

    • ingest.count string

      The total number of documents that are processed by the model.

    • ingest.time string

      The total time spent processing documents with this model.

    • ingest.current string

      The total number of documents that are currently being handled by the model.

    • ingest.failed string

      The total number of failed ingest attempts with the model.

    • data_frame.id string

      The identifier for the data frame analytics job that created the model. Only displayed if the job is still available.

    • data_frame.create_time string

      The time the data frame analytics job was created.

    • data_frame.source_index string

      The source index used to train in the data frame analysis.

    • data_frame.analysis string

      The analysis used by the data frame to build the model.

    • type string Generally available; Added in 8.0.0
GET /_cat/ml/trained_models/{model_id}
GET _cat/ml/trained_models?v=true&format=json
resp = client.cat.ml_trained_models(
    v=True,
    format="json",
)
const response = await client.cat.mlTrainedModels({
  v: "true",
  format: "json",
});
response = client.cat.ml_trained_models(
  v: "true",
  format: "json"
)
$resp = $client->cat()->mlTrainedModels([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/trained_models?v=true&format=json"
client.cat().mlTrainedModels();
Response examples (200)
A successful response from `GET _cat/ml/trained_models?v=true&format=json`.
[
  {
    "id": "ddddd-1580216177138",
    "heap_size": "0b",
    "operations": "196",
    "create_time": "2025-03-25T00:01:38.662Z",
    "type": "pytorch",
    "ingest.pipelines": "0",
    "data_frame.id": "__none__"
  },
  {
    "id": "lang_ident_model_1",
    "heap_size": "1mb",
    "operations": "39629",
    "create_time": "2019-12-05T12:28:34.594Z",
    "type": "lang_ident",
    "ingest.pipelines": "0",
    "data_frame.id": "__none__"
  }
]

Get node attribute information Generally available

GET /_cat/nodeattrs

Get information about custom node attributes. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    • node string

      The node name.

    • id string

      The unique node identifier.

    • pid string

      The process identifier.

    • host string

      The host name.

    • ip string

      The IP address.

    • port string

      The bound transport port.

    • attr string

      The attribute name.

    • value string

      The attribute value.

GET /_cat/nodeattrs
GET /_cat/nodeattrs?v=true&format=json
resp = client.cat.nodeattrs(
    v=True,
    format="json",
)
const response = await client.cat.nodeattrs({
  v: "true",
  format: "json",
});
response = client.cat.nodeattrs(
  v: "true",
  format: "json"
)
$resp = $client->cat()->nodeattrs([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/nodeattrs?v=true&format=json"
client.cat().nodeattrs();
Response examples (200)
A successful response from `GET /_cat/nodeattrs?v=true&format=json`. The `node`, `host`, and `ip` columns provide basic information about each node. The `attr` and `value` columns return custom node attributes, one per line.
[
  {
    "node": "node-0",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "attr": "testattr",
    "value": "test"
  }
]
A successful response from `GET /_cat/nodeattrs?v=true&h=name,pid,attr,value`. It returns the `name`, `pid`, `attr`, and `value` columns.
[
  {
    "name": "node-0",
    "pid": "19566",
    "attr": "testattr",
    "value": "test"
  }
]
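
A hedged Python sketch of the column-filtered request that produces the second example above (the client object is assumed as in the other snippets):

resp = client.cat.nodeattrs(
    h=["name", "pid", "attr", "value"],  # columns to display
    format="json",
    v=True,
)
print(resp)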

Get plugin information Generally available

GET /_cat/plugins

Get a list of plugins running on each node of a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • include_bootstrap boolean

    Include bootstrap plugins in the response

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    • id string

      The unique node identifier.

    • name string

      The node name.

    • component string

      The component name.

    • version string

      The component version.

    • description string

      The plugin details.

    • type string

      The plugin type.

GET /_cat/plugins
GET /_cat/plugins?v=true&s=component&h=name,component,version,description&format=json
resp = client.cat.plugins(
    v=True,
    s="component",
    h="name,component,version,description",
    format="json",
)
const response = await client.cat.plugins({
  v: "true",
  s: "component",
  h: "name,component,version,description",
  format: "json",
});
response = client.cat.plugins(
  v: "true",
  s: "component",
  h: "name,component,version,description",
  format: "json"
)
$resp = $client->cat()->plugins([
    "v" => "true",
    "s" => "component",
    "h" => "name,component,version,description",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/plugins?v=true&s=component&h=name,component,version,description&format=json"
client.cat().plugins();
Response examples (200)
A successful response from `GET /_cat/plugins?v=true&s=component&h=name,component,version,description&format=json`.
[
  {"name": "U7321H6", "component": "analysis-icu", "version": "8.17.0", "description": "The ICU Analysis plugin integrates the Lucene ICU module into Elasticsearch, adding ICU-related analysis components."},
  {"name": "U7321H6", "component": "analysis-kuromoji", "version": "8.17.0", "description": "The Japanese (kuromoji) Analysis plugin integrates Lucene kuromoji analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-nori", "version": "8.17.0", "description": "The Korean (nori) Analysis plugin integrates Lucene nori analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-phonetic", "version": "8.17.0", "description": "The Phonetic Analysis plugin integrates phonetic token filter analysis with elasticsearch."},
  {"name": "U7321H6", "component": "analysis-smartcn", "version": "8.17.0", "description": "Smart Chinese Analysis plugin integrates Lucene Smart Chinese analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-stempel", "version": "8.17.0", "description": "The Stempel (Polish) Analysis plugin integrates Lucene stempel (polish) analysis module into elasticsearch."},
  {"name": "U7321H6", "component": "analysis-ukrainian", "version": "8.17.0", "description": "The Ukrainian Analysis plugin integrates the Lucene UkrainianMorfologikAnalyzer into elasticsearch."},
  {"name": "U7321H6", "component": "discovery-azure-classic", "version": "8.17.0", "description": "The Azure Classic Discovery plugin allows to use Azure Classic API for the unicast discovery mechanism"},
  {"name": "U7321H6", "component": "discovery-ec2", "version": "8.17.0", "description": "The EC2 discovery plugin allows to use AWS API for the unicast discovery mechanism."},
  {"name": "U7321H6", "component": "discovery-gce", "version": "8.17.0", "description": "The Google Compute Engine (GCE) Discovery plugin allows to use GCE API for the unicast discovery mechanism."},
  {"name": "U7321H6", "component": "mapper-annotated-text", "version": "8.17.0", "description": "The Mapper Annotated_text plugin adds support for text fields with markup used to inject annotation tokens into the index."},
  {"name": "U7321H6", "component": "mapper-murmur3", "version": "8.17.0", "description": "The Mapper Murmur3 plugin allows to compute hashes of a field's values at index-time and to store them in the index."},
  {"name": "U7321H6", "component": "mapper-size", "version": "8.17.0", "description": "The Mapper Size plugin allows document to record their uncompressed size at index time."},
  {"name": "U7321H6", "component": "store-smb", "version": "8.17.0", "description": "The Store SMB plugin adds support for SMB stores."}
]

Get transform information Generally available; Added in 7.7.0

GET /_cat/transforms/{transform_id}

All methods and paths for this operation:

GET /_cat/transforms

GET /_cat/transforms/{transform_id}

Get configuration and usage information about transforms.

CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.

Required authorization

  • Cluster privileges: monitor_transform

Path parameters

  • transform_id string Required

    A transform identifier or a wildcard expression. If you do not specify one of these options, the API returns information for all transforms.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, it returns an empty transforms array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches.

  • from number

    Skips the specified number of transforms.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices use the syntax "remote_name:index_name". If any indices are in remote clusters then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices use the syntax "remote_name:index_name". If any indices are in remote clusters then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • size number

    The maximum number of transforms to obtain.
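
A hedged Python sketch that targets a wildcard transform identifier and a handful of columns (the transform pattern is a placeholder; the client object is assumed as in the other snippets):

resp = client.cat.transforms(
    transform_id="ecommerce*",   # wildcard expression (placeholder)
    allow_no_match=True,
    h=["id", "state", "checkpoint", "documents_processed"],
    s="id",
    format="json",
    v=True,
)
for transform in resp:
    print(transform["id"], transform["state"])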

Responses

  • 200 application/json
    • id string

      The transform identifier.

    • state string

      The status of the transform. Returned values include: aborting: The transform is aborting. failed: The transform failed. For more information about the failure, check the reason field. indexing: The transform is actively processing data and creating new documents. started: The transform is running but not actively indexing data. stopped: The transform is stopped. stopping: The transform is stopping.

    • checkpoint string

      The sequence number for the checkpoint.

    • documents_processed string

      The number of documents that have been processed from the source index of the transform.

    • checkpoint_progress string | null

      The progress of the next checkpoint that is currently in progress.

    • last_search_time string | null

      The timestamp of the last search in the source indices. This field is shown only if the transform is running.

    • changes_last_detection_time string | null

      The timestamp when changes were last detected in the source indices.

    • create_time string

      The time the transform was created.

    • version string

      The version of Elasticsearch that existed on the node when the transform was created.

    • source_index string

      The source indices for the transform.

    • dest_index string

      The destination index for the transform.

    • pipeline string

      The unique identifier for the ingest pipeline.

    • description string

      The description of the transform.

    • transform_type string

      The type of transform: batch or continuous.

    • frequency string

      The interval between checks for changes in the source indices when the transform is running continuously.

    • max_page_search_size string

      The initial page size that is used for the composite aggregation for each checkpoint.

    • docs_per_second string

      The number of input documents per second.

    • reason string

      If a transform has a failed state, these details describe the reason for failure.

    • search_total string

      The total number of search operations on the source index for the transform.

    • search_failure string

      The total number of search failures.

    • search_time string

      The total amount of search time, in milliseconds.

    • index_total string

      The total number of index operations done by the transform.

    • index_failure string

      The total number of indexing failures.

    • index_time string

      The total time spent indexing documents, in milliseconds.

    • documents_indexed string

      The number of documents that have been indexed into the destination index for the transform.

    • delete_time string

      The total time spent deleting documents, in milliseconds.

    • documents_deleted string

      The number of documents deleted from the destination index due to the retention policy for the transform.

    • trigger_count string

      The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • pages_processed string

      The number of search or bulk index operations processed. Documents are processed in batches instead of individually.

    • processing_time string

      The total time spent processing results, in milliseconds.

    • checkpoint_duration_time_exp_avg string

      The exponential moving average of the duration of the checkpoint, in milliseconds.

    • indexed_documents_exp_avg string

      The exponential moving average of the number of new documents that have been indexed.

    • processed_documents_exp_avg string

      The exponential moving average of the number of documents that have been processed.

GET /_cat/transforms/{transform_id}
GET /_cat/transforms?v=true&format=json
resp = client.cat.transforms(
    v=True,
    format="json",
)
const response = await client.cat.transforms({
  v: "true",
  format: "json",
});
response = client.cat.transforms(
  v: "true",
  format: "json"
)
$resp = $client->cat()->transforms([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/transforms?v=true&format=json"
client.cat().transforms();
Response examples (200)
A successful response from `GET /_cat/transforms?v=true&format=json`.
[
  {
    "id" : "ecommerce_transform",
    "state" : "started",
    "checkpoint" : "1",
    "documents_processed" : "705",
    "checkpoint_progress" : "100.00",
    "changes_last_detection_time" : null
  }
]

Update voting configuration exclusions Generally available; Added in 7.0.0

POST /_cluster/voting_config_exclusions

Update the cluster voting config exclusions by node IDs or node names. By default, if there are more than three master-eligible nodes in the cluster and you remove fewer than half of the master-eligible nodes in the cluster at once, the voting configuration automatically shrinks. If you want to shrink the voting configuration to contain fewer than three nodes or to remove half or more of the master-eligible nodes in the cluster at once, use this API to remove departing nodes from the voting configuration manually. The API adds an entry for each specified node to the cluster’s voting configuration exclusions list. It then waits until the cluster has reconfigured its voting configuration to exclude the specified nodes.

Clusters should have no voting configuration exclusions in normal operation. Once the excluded nodes have stopped, clear the voting configuration exclusions with DELETE /_cluster/voting_config_exclusions. This API waits for the nodes to be fully removed from the cluster before it returns. If your cluster has voting configuration exclusions for nodes that you no longer intend to remove, use DELETE /_cluster/voting_config_exclusions?wait_for_removal=false to clear the voting configuration exclusions without waiting for the nodes to leave the cluster.

A response to POST /_cluster/voting_config_exclusions with an HTTP status code of 200 OK guarantees that the node has been removed from the voting configuration and will not be reinstated until the voting configuration exclusions are cleared by calling DELETE /_cluster/voting_config_exclusions. If the call to POST /_cluster/voting_config_exclusions fails or returns a response with an HTTP status code other than 200 OK then the node may not have been removed from the voting configuration. In that case, you may safely retry the call.

NOTE: Voting exclusions are required only when you remove at least half of the master-eligible nodes from a cluster in a short time period. They are not required when removing master-ineligible nodes or when removing fewer than half of the master-eligible nodes.

External documentation

Query parameters

  • node_names string | array[string]

    A comma-separated list of the names of the nodes to exclude from the voting configuration. If specified, you may not also specify node_ids.

  • node_ids string | array[string]

    A comma-separated list of the persistent ids of the nodes to exclude from the voting configuration. If specified, you may not also specify node_names.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

  • timeout string

    When adding a voting configuration exclusion, the API waits for the specified nodes to be excluded from the voting configuration before returning. If the timeout expires before the appropriate condition is satisfied, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
POST /_cluster/voting_config_exclusions
curl \
 --request POST 'http://api.example.com/_cluster/voting_config_exclusions' \
 --header "Authorization: $API_KEY"
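
A hedged Python sketch of the add-then-clear workflow described above (the node name is a placeholder; the client object is assumed as in the other examples):

# Exclude a departing master-eligible node from the voting configuration.
client.cluster.post_voting_config_exclusions(
    node_names="node-1",  # placeholder node name
    timeout="30s",
)

# ...decommission the node, then clear the exclusion list once it has left.
client.cluster.delete_voting_config_exclusions(wait_for_removal=True)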

Get the hot threads for nodes Generally available

GET /_nodes/{node_id}/hot_threads

All methods and paths for this operation:

GET /_nodes/hot_threads

GET /_nodes/{node_id}/hot_threads

Get a breakdown of the hot threads on each selected node in the cluster. The output is plain text with a breakdown of the top hot threads for each node.

Required authorization

  • Cluster privileges: monitor, manage

Path parameters

  • node_id string | array[string] Required

    List of node IDs or names used to limit returned information.

Query parameters

  • ignore_idle_threads boolean

    If true, known idle threads (e.g. waiting in a socket select, or to get a task from an empty queue) are filtered out.

  • interval string

    The interval to do the second sampling of threads.

    Values are -1 or 0.

  • snapshots number

    Number of samples of thread stacktrace.

  • threads number

    Specifies the number of hot threads to provide information for.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • type string

    The type to sample.

    Values are cpu, wait, block, gpu, or mem.

  • sort string

    The sort order for 'cpu' type (default: total)

    Values are cpu, wait, block, gpu, or mem.
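
A hedged Python sketch combining several of these parameters (the client object is assumed as in the other snippets; the response body is plain text):

resp = client.nodes.hot_threads(
    threads=5,               # report the top 5 hot threads per node
    interval="500ms",        # wait 500ms between the two thread samples
    type="cpu",              # sample CPU usage
    ignore_idle_threads=True,
)
print(resp)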

Responses

  • 200 application/json
GET /_nodes/{node_id}/hot_threads
GET /_nodes/hot_threads
resp = client.nodes.hot_threads()
const response = await client.nodes.hotThreads();
response = client.nodes.hot_threads
$resp = $client->nodes()->hotThreads();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_nodes/hot_threads"
client.nodes().hotThreads(h -> h);

Get node information Generally available; Added in 1.3.0

GET /_nodes/{node_id}/{metric}

All methods and paths for this operation:

GET /_nodes

GET /_nodes/{metric}
GET /_nodes/{node_id}
GET /_nodes/{node_id}/{metric}

By default, the API returns all attributes and core settings for cluster nodes.

Path parameters

  • node_id string | array[string] Required

    Comma-separated list of node IDs or names used to limit returned information.

  • metric string | array[string] Required

    Limits the information returned to the specific metrics. Supports a comma-separated list, such as http,ingest.

Query parameters

  • flat_settings boolean

    If true, returns settings in flat format.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.
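
A hedged Python sketch that limits both the nodes and the metrics and flattens the returned settings (the client object is assumed as in the other snippets; the field names follow the response attributes listed below):

resp = client.nodes.info(
    node_id="_local",          # only the node that handles the request
    metric="os,jvm,settings",  # restrict the returned metrics
    flat_settings=True,
)
for node_id, node in resp["nodes"].items():
    print(node_id, node["name"], node["jvm"]["pid"])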

Responses

  • 200 application/json
    • _nodes object

      Contains statistics about the number of nodes selected by the request’s node filters.

      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, depending on the error type.

        • type string Required

          The type of error

        • reason
        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by
        • root_cause array[object]
        • suppressed array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      • * object Additional properties
        • attributes object Required
          • * string Additional properties
        • build_flavor string Required
        • build_hash string Required

          Short hash of the last git commit in this release.

        • build_type string Required
        • component_versions object Required
          • * number Additional properties
        • host string Required

          The node’s host name.

        • http object
          • bound_address array[string] Required
          • max_content_length_in_bytes number Required
          • publish_address string Required
        • index_version number Required
        • ip string Required

          The node’s IP address.

        • jvm object
          • gc_collectors array[string] Required
          • memory_pools array[string] Required
          • pid number Required
          • vm_vendor string Required
          • using_bundled_jdk boolean Required
          • using_compressed_ordinary_object_pointers
          • input_arguments array[string] Required
        • name string Required

          The node's name

        • os object
          • arch string Required

            Name of the JVM architecture (ex: amd64, x86)

          • available_processors number Required

            Number of processors available to the Java virtual machine

          • allocated_processors number

            The number of processors actually used to calculate thread pool size. This number can be set with the node.processors setting of a node and defaults to the number of processors reported by the OS.

        • plugins array[object]
          • classname string Required
          • description string Required
          • elasticsearch_version
          • extended_plugins array[string] Required
          • has_native_controller boolean Required
          • java_version
          • name
          • version
          • licensed boolean Required
        • process object
          • id number Required

            Process identifier (PID)

          • mlockall boolean Required

            Indicates if the process address space has been successfully locked in memory

        • roles array[string] Required

          Values are master, data, data_cold, data_content, data_frozen, data_hot, data_warm, client, ingest, ml, voting_only, transform, remote_cluster_client, or coordinating_only.

        • settings object
        • thread_pool object
          • * object Additional properties
            • core number
            • max number
            • queue_size number Required
            • size number
            • type string Required
        • total_indexing_buffer number

          Total heap allowed to be used to hold recently indexed documents before they must be written to disk. This size is a shared pool across all shards on this node, and is controlled by Indexing Buffer settings.

        • total_indexing_buffer_in_bytes number | string

          Same as total_indexing_buffer, but expressed in bytes.

        • transport object
          • bound_address array[string] Required
          • publish_address string Required
          • profiles object Required
        • transport_address string Required

          Host and port where transport HTTP connections are accepted.

        • transport_version number Required
        • version string Required

          Elasticsearch version running on this node.

        • modules array[object]
          • classname string Required
          • description string Required
          • elasticsearch_version
          • extended_plugins array[string] Required
          • has_native_controller boolean Required
          • java_version
          • name
          • version
          • licensed boolean Required
        • ingest object
          • processors array[object] Required
        • aggregations object
          • * object Additional properties
            • types array[string] Required
        • remote_cluster_server object
          • bound_address array[string] Required
GET /_nodes/{node_id}/{metric}
GET _nodes/_all/jvm
resp = client.nodes.info(
    node_id="_all",
    metric="jvm",
)
const response = await client.nodes.info({
  node_id: "_all",
  metric: "jvm",
});
response = client.nodes.info(
  node_id: "_all",
  metric: "jvm"
)
$resp = $client->nodes()->info([
    "node_id" => "_all",
    "metric" => "jvm",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_nodes/_all/jvm"
client.nodes().info(i -> i
    .metric("jvm")
    .nodeId("_all")
);
Response examples (200)
An abbreviated response when requesting cluster nodes information.
{
    "_nodes": {},
    "cluster_name": "elasticsearch",
    "nodes": {
      "USpTGYaBSIKbgSUJR2Z9lg": {
        "name": "node-0",
        "transport_address": "192.168.17:9300",
        "host": "node-0.elastic.co",
        "ip": "192.168.17",
        "version": "{version}",
        "transport_version": 100000298,
        "index_version": 100000074,
        "component_versions": {
          "ml_config_version": 100000162,
          "transform_config_version": 100000096
        },
        "build_flavor": "default",
        "build_type": "{build_type}",
        "build_hash": "587409e",
        "roles": [
          "master",
          "data",
          "ingest"
        ],
        "attributes": {},
        "plugins": [
          {
            "name": "analysis-icu",
            "version": "{version}",
            "description": "The ICU Analysis plugin integrates Lucene ICU module into elasticsearch, adding ICU-related analysis components.",
            "classname": "org.elasticsearch.plugin.analysis.icu.AnalysisICUPlugin",
            "has_native_controller": false
          }
        ],
        "modules": [
          {
            "name": "lang-painless",
            "version": "{version}",
            "description": "An easy, safe and fast scripting language for Elasticsearch",
            "classname": "org.elasticsearch.painless.PainlessPlugin",
            "has_native_controller": false
          }
        ]
      }
    }
}

Get feature usage information Generally available; Added in 6.0.0

GET /_nodes/{node_id}/usage/{metric}

All methods and paths for this operation:

GET /_nodes/usage

GET /_nodes/usage/{metric}
GET /_nodes/{node_id}/usage
GET /_nodes/{node_id}/usage/{metric}

Required authorization

  • Cluster privileges: monitor,manage

Path parameters

  • node_id string | array[string] Required

    A comma-separated list of node IDs or names to limit the returned information; use _local to return information from the node you're connecting to, leave empty to get information from all nodes

  • metric string | array[string] Required

    Limits the information returned to the specific metrics. A comma-separated list of the following options: _all, rest_actions.

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object

      Contains statistics about the number of nodes selected by the request’s node filters.

      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details, which depend on the error type, are also provided.

        Hide failures attributes Show failures attributes object
        • type string Required

          The type of error

        • reason
        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by
        • root_cause array[object]
        • suppressed array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • rest_actions object Required
          Hide rest_actions attribute Show rest_actions attribute object
          • * number Additional properties
        • since number

          Time unit for milliseconds

        • timestamp number

          Time unit for milliseconds

        • aggregations object Required
          Hide aggregations attribute Show aggregations attribute object
          • * object Additional properties
GET /_nodes/{node_id}/usage/{metric}
GET _nodes/usage
resp = client.nodes.usage()
const response = await client.nodes.usage();
response = client.nodes.usage
$resp = $client->nodes()->usage();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_nodes/usage"
client.nodes().usage(u -> u);

Get the cluster health Generally available; Added in 8.7.0

GET /_health_report/{feature}

All methods and paths for this operation:

GET /_health_report

GET /_health_report/{feature}

Get a report with the health status of an Elasticsearch cluster. The report contains a list of indicators that compose Elasticsearch functionality.

Each indicator has a health status of green, unknown, yellow, or red. The indicator provides an explanation and metadata describing the reason for its current health status.

The cluster’s status is controlled by the worst indicator status.

In the event that an indicator’s status is non-green, the indicator result may include a list of impacts that detail the functionalities negatively affected by the health issue. Each impact carries a severity level, the area of the system that is affected, and a simple description of the impact on the system.

Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system. The root cause and remediation steps are encapsulated in a diagnosis. A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem.

NOTE: The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently. When setting up automated polling of the API for health status, set verbose to false to disable the more expensive analysis logic.
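
For example, here is a minimal polling sketch using the Python client. The connection details are placeholders, and the field access assumes the top-level report shape shown in the response attributes below; treat it as a sketch rather than a definitive implementation.

from elasticsearch import Elasticsearch

# Placeholder connection details; substitute your own cluster URL and API key.
client = Elasticsearch("https://localhost:9200", api_key="<api-key>")

# verbose=False skips the more expensive root-cause analysis, which makes the
# call suitable for frequent automated polling.
report = client.health_report(verbose=False)

if report["status"] != "green":
    for name, indicator in report["indicators"].items():
        if indicator["status"] != "green":
            print(f"{name}: {indicator['status']} - {indicator['symptom']}")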

Path parameters

  • feature string | array[string] Required

    A feature of the cluster, as returned by the top-level health report API.

Query parameters

  • timeout string

    Explicit operation timeout.

    Values are -1 or 0.

  • verbose boolean

    Opt-in for more information about the health of the system.

  • size number

    Limit the number of affected resources the health report API returns.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • cluster_name string Required
    • indicators object Required
      Hide indicators attributes Show indicators attributes object
      • master_is_stable object

        MASTER_IS_STABLE

        Hide master_is_stable attributes Show master_is_stable attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • shards_availability object

        SHARDS_AVAILABILITY

        Hide shards_availability attributes Show shards_availability attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • disk object

        DISK

        Hide disk attributes Show disk attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • repository_integrity object

        REPOSITORY_INTEGRITY

        Hide repository_integrity attributes Show repository_integrity attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • data_stream_lifecycle object

        DATA_STREAM_LIFECYCLE

        Hide data_stream_lifecycle attributes Show data_stream_lifecycle attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • ilm object

        ILM

        Hide ilm attributes Show ilm attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • slm object

        SLM

        Hide slm attributes Show slm attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
      • shards_capacity object

        SHARDS_CAPACITY

        Hide shards_capacity attributes Show shards_capacity attributes object
        • symptom string Required
        • impacts array[object]
        • diagnosis array[object]
    • status string

      Values are green, yellow, red, unknown, or unavailable.

GET /_health_report/{feature}
GET _health_report
resp = client.health_report()
const response = await client.healthReport();
response = client.health_report
$resp = $client->healthReport();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_health_report"
client.healthReport(h -> h);

Connector

The connector and sync jobs APIs provide a convenient way to create and manage Elastic connectors and sync jobs in an internal index. Connectors are Elasticsearch integrations that bring content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure:

  • Elastic managed connectors (Native connectors) are a managed service on Elastic Cloud
  • Self-managed connectors (Connector clients) are self-managed on your infrastructure.

This API provides an alternative to relying solely on the Kibana UI for connector and sync job management. The API comes with a set of validations and assertions to ensure that the state representation in the internal index remains valid.

Check out the connector API tutorial




Get a connector Beta; Added in 8.12.0

GET /_connector/{connector_id}

Get the details about a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • api_key_id string
    • api_key_secret_id string
    • configuration object Required
      Hide configuration attribute Show configuration attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • category string
        • depends_on array[object] Required
          Hide depends_on attributes Show depends_on attributes object
          • field string Required
          • value
        • display string Required

          Values are textbox, textarea, numeric, toggle, or dropdown.

        • label string Required
        • options array[object] Required
          Hide options attributes Show options attributes object
          • label string Required
          • value
        • order number
        • placeholder string
        • required boolean Required
        • sensitive boolean Required
        • type string

          Values are str, int, list, or bool.

        • ui_restrictions array[string]
        • validations array[object]
        • value object Required
    • custom_scheduling object Required
      Hide custom_scheduling attribute Show custom_scheduling attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • configuration_overrides object Required
          Hide configuration_overrides attributes Show configuration_overrides attributes object
          • max_crawl_depth number
          • sitemap_discovery_disabled boolean
          • domain_allowlist array[string]
          • sitemap_urls array[string]
          • seed_urls array[string]
        • enabled boolean Required
        • interval string Required
        • last_synced string
        • name string Required
    • description string
    • features object
      Hide features attributes Show features attributes object
      • document_level_security object

        Indicates whether document-level security is enabled.

        Hide document_level_security attribute Show document_level_security attribute object
        • enabled boolean Required
      • incremental_sync object

        Indicates whether incremental syncs are enabled.

        Hide incremental_sync attribute Show incremental_sync attribute object
        • enabled boolean Required
      • native_connector_api_keys object

        Indicates whether managed connector API keys are enabled.

        Hide native_connector_api_keys attribute Show native_connector_api_keys attribute object
        • enabled boolean Required
      • sync_rules object
        Hide sync_rules attributes Show sync_rules attributes object
        • advanced object

          Indicates whether advanced sync rules are enabled.

        • basic object

          Indicates whether basic sync rules are enabled.

    • filtering array[object] Required
      Hide filtering attributes Show filtering attributes object
      • active object Required
        Hide active attributes Show active attributes object
        • advanced_snippet object Required
        • rules array[object] Required
        • validation object Required
      • domain string
      • draft object Required
        Hide draft attributes Show draft attributes object
        • advanced_snippet object Required
        • rules array[object] Required
        • validation object Required
    • id string
    • index_name string | null

    • is_native boolean Required
    • language string
    • last_access_control_sync_error string
    • last_access_control_sync_scheduled_at string | number

      One of:
    • last_access_control_sync_status string

      Values are canceling, canceled, completed, error, in_progress, pending, or suspended.

    • last_deleted_document_count number
    • last_incremental_sync_scheduled_at string | number

      One of:
    • last_indexed_document_count number
    • last_seen string | number

      One of:
    • last_sync_error string
    • last_sync_scheduled_at string | number

      One of:
    • last_sync_status string

      Values are canceling, canceled, completed, error, in_progress, pending, or suspended.

    • last_synced string | number

      One of:
    • name string
    • pipeline object
      Hide pipeline attributes Show pipeline attributes object
      • extract_binary_content boolean Required
      • name string Required
      • reduce_whitespace boolean Required
      • run_ml_inference boolean Required
    • scheduling object Required
      Hide scheduling attributes Show scheduling attributes object
      • access_control object
        Hide access_control attributes Show access_control attributes object
        • enabled boolean Required
        • interval string Required

          The interval is expressed using the crontab syntax

      • full object
        Hide full attributes Show full attributes object
        • enabled boolean Required
        • interval string Required

          The interval is expressed using the crontab syntax

      • incremental object
        Hide incremental attributes Show incremental attributes object
        • enabled boolean Required
        • interval string Required

          The interval is expressed using the crontab syntax

    • service_type string
    • status string Required

      Values are created, needs_configuration, configured, connected, or error.

    • sync_cursor object
    • sync_now boolean Required
GET /_connector/{connector_id}
GET _connector/my-connector-id
resp = client.connector.get(
    connector_id="my-connector-id",
)
const response = await client.connector.get({
  connector_id: "my-connector-id",
});
response = client.connector.get(
  connector_id: "my-connector-id"
)
$resp = $client->connector()->get([
    "connector_id" => "my-connector-id",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/my-connector-id"
client.connector().get(g -> g
    .connectorId("my-connector-id")
);




Delete a connector Beta; Added in 8.12.0

DELETE /_connector/{connector_id}

Removes a connector and associated sync jobs. This is a destructive action that is not recoverable. NOTE: This action doesn’t delete any API keys, ingest pipelines, or data indices associated with the connector. These need to be removed manually.
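
For example, here is a hedged clean-up sketch with the Python client, assuming an Elasticsearch client instance named client. The connector ID is illustrative, and the index and API key steps only apply if your connector actually created those resources.

# Look up the connector first so its index name and API key ID are still known
# after the connector document is deleted.
connector = client.connector.get(connector_id="my-connector-id").body

client.connector.delete(
    connector_id="my-connector-id",
    delete_sync_jobs=True,
)

# The data index is not removed by the delete API; drop it only if the
# ingested content is no longer needed.
if connector.get("index_name"):
    client.indices.delete(index=connector["index_name"])

# Likewise, invalidate the connector-specific API key if one was configured.
if connector.get("api_key_id"):
    client.security.invalidate_api_key(ids=[connector["api_key_id"]])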

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be deleted

Query parameters

  • delete_sync_jobs boolean

    A flag indicating if associated sync jobs should also be removed. Defaults to false.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_connector/{connector_id}
DELETE _connector/my-connector-id&delete_sync_jobs=true
resp = client.connector.delete(
    connector_id="my-connector-id&delete_sync_jobs=true",
)
const response = await client.connector.delete({
  connector_id: "my-connector-id&delete_sync_jobs=true",
});
response = client.connector.delete(
  connector_id: "my-connector-id&delete_sync_jobs=true"
)
$resp = $client->connector()->delete([
    "connector_id" => "my-connector-id&delete_sync_jobs=true",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/my-connector-id&delete_sync_jobs=true"
client.connector().delete(d -> d
    .connectorId("my-connector-id&delete_sync_jobs=true")
);
Response examples (200)
{
    "acknowledged": true
}



Cancel a connector sync job Beta; Added in 8.12.0

PUT /_connector/_sync_job/{connector_sync_job_id}/_cancel

Cancel a connector sync job, which sets the status to canceling and updates cancelation_requested_at to the current time. The connector service is then responsible for setting the status of connector sync jobs to canceled.
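
For example, here is a minimal sketch with the Python client, assuming an Elasticsearch client instance named client; the job ID and polling interval are illustrative. It requests cancellation and then waits for the connector service to move the job to a terminal state.

import time

# This call only moves the job to "canceling"; the connector service later
# sets the final "canceled" status.
client.connector.sync_job_cancel(
    connector_sync_job_id="my-connector-sync-job-id",
)

# Poll the job until it reaches a terminal state.
for _ in range(60):
    job = client.connector.sync_job_get(
        connector_sync_job_id="my-connector-sync-job-id",
    )
    if job["status"] in ("canceled", "completed", "error"):
        break
    time.sleep(5)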

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/_sync_job/{connector_sync_job_id}/_cancel
PUT _connector/_sync_job/my-connector-sync-job-id/_cancel
resp = client.connector.sync_job_cancel(
    connector_sync_job_id="my-connector-sync-job-id",
)
const response = await client.connector.syncJobCancel({
  connector_sync_job_id: "my-connector-sync-job-id",
});
response = client.connector.sync_job_cancel(
  connector_sync_job_id: "my-connector-sync-job-id"
)
$resp = $client->connector()->syncJobCancel([
    "connector_sync_job_id" => "my-connector-sync-job-id",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job-id/_cancel"
client.connector().syncJobCancel(s -> s
    .connectorSyncJobId("my-connector-sync-job-id")
);




Claim a connector sync job Technical preview

PUT /_connector/_sync_job/{connector_sync_job_id}/_claim

This action updates the job status to in_progress and sets the last_seen and started_at timestamps to the current time. Additionally, it can set the sync_cursor property for the sync job.

This API is not intended for direct connector management by users. It supports the implementation of services that utilize the connector protocol to communicate with Elasticsearch.

To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
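
For example, here is a hedged sketch of how a custom connector service might claim a job with the Python client, reusing the sync_cursor stored on the connector document. It assumes an Elasticsearch client instance named client, and the connector and job IDs are illustrative.

import socket

# Read the connector document to pick up the cursor left by the previous
# incremental sync, if any.
connector = client.connector.get(connector_id="my-connector-id").body

claim = {"worker_hostname": socket.gethostname()}
if connector.get("sync_cursor"):
    claim["sync_cursor"] = connector["sync_cursor"]

client.connector.sync_job_claim(
    connector_sync_job_id="my-connector-sync-job-id",
    **claim,
)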

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job.

application/json

Body Required

  • sync_cursor object

    The cursor object from the last incremental sync job. This should reference the sync_cursor field in the connector state for which the job runs.

  • worker_hostname string Required

    The host name of the current system that will run the job.

Responses

  • 200 application/json
PUT /_connector/_sync_job/{connector_sync_job_id}/_claim
PUT _connector/_sync_job/my-connector-sync-job-id/_claim
{
  "worker_hostname": "some-machine"
}
resp = client.connector.sync_job_claim(
    connector_sync_job_id="my-connector-sync-job-id",
    worker_hostname="some-machine",
)
const response = await client.connector.syncJobClaim({
  connector_sync_job_id: "my-connector-sync-job-id",
  worker_hostname: "some-machine",
});
response = client.connector.sync_job_claim(
  connector_sync_job_id: "my-connector-sync-job-id",
  body: {
    "worker_hostname": "some-machine"
  }
)
$resp = $client->connector()->syncJobClaim([
    "connector_sync_job_id" => "my-connector-sync-job-id",
    "body" => [
        "worker_hostname" => "some-machine",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"worker_hostname":"some-machine"}' "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job-id/_claim"
client.connector().syncJobClaim(s -> s
    .connectorSyncJobId("my-connector-sync-job-id")
    .workerHostname("some-machine")
);
Request example
An example body for a `PUT _connector/_sync_job/my-connector-sync-job-id/_claim` request.
{
  "worker_hostname": "some-machine"
}

Get a connector sync job Beta; Added in 8.12.0

GET /_connector/_sync_job/{connector_sync_job_id}

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • cancelation_requested_at string | number

      One of:
    • canceled_at string | number

      One of:
    • completed_at string | number

      One of:
    • connector object Required
      Hide connector attributes Show connector attributes object
      • configuration object Required
        Hide configuration attribute Show configuration attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • category string
          • depends_on array[object] Required
          • label string Required
          • options array[object] Required
          • order number
          • placeholder string
          • required boolean Required
          • sensitive boolean Required
          • tooltip
          • ui_restrictions array[string]
          • validations array[object]
          • value object Required
      • filtering object Required
        Hide filtering attributes Show filtering attributes object
        • advanced_snippet object Required
        • rules array[object] Required
        • validation object Required
      • id string Required
      • index_name string Required
      • language string
      • pipeline object
        Hide pipeline attributes Show pipeline attributes object
        • extract_binary_content boolean Required
        • name string Required
        • reduce_whitespace boolean Required
        • run_ml_inference boolean Required
      • service_type string Required
      • sync_cursor object
    • created_at string | number

      One of:
    • deleted_document_count number Required
    • error string
    • id string Required
    • indexed_document_count number Required
    • indexed_document_volume number Required
    • job_type string Required

      Values are full, incremental, or access_control.

    • last_seen string | number

      One of:
    • metadata object Required
      Hide metadata attribute Show metadata attribute object
      • * object Additional properties
    • started_at string | number

      One of:
    • status string Required

      Values are canceling, canceled, completed, error, in_progress, pending, or suspended.

    • total_document_count number Required
    • trigger_method string Required

      Values are on_demand or scheduled.

    • worker_hostname string
GET /_connector/_sync_job/{connector_sync_job_id}
GET _connector/_sync_job/my-connector-sync-job
resp = client.connector.sync_job_get(
    connector_sync_job_id="my-connector-sync-job",
)
const response = await client.connector.syncJobGet({
  connector_sync_job_id: "my-connector-sync-job",
});
response = client.connector.sync_job_get(
  connector_sync_job_id: "my-connector-sync-job"
)
$resp = $client->connector()->syncJobGet([
    "connector_sync_job_id" => "my-connector-sync-job",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job"
client.connector().syncJobGet(s -> s
    .connectorSyncJobId("my-connector-sync-job")
);

Delete a connector sync job Beta; Added in 8.12.0

DELETE /_connector/_sync_job/{connector_sync_job_id}

Remove a connector sync job and its associated data. This is a destructive action that is not recoverable.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job to be deleted

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_connector/_sync_job/{connector_sync_job_id}
DELETE _connector/_sync_job/my-connector-sync-job-id
resp = client.connector.sync_job_delete(
    connector_sync_job_id="my-connector-sync-job-id",
)
const response = await client.connector.syncJobDelete({
  connector_sync_job_id: "my-connector-sync-job-id",
});
response = client.connector.sync_job_delete(
  connector_sync_job_id: "my-connector-sync-job-id"
)
$resp = $client->connector()->syncJobDelete([
    "connector_sync_job_id" => "my-connector-sync-job-id",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job-id"
client.connector().syncJobDelete(s -> s
    .connectorSyncJobId("my-connector-sync-job-id")
);
Response examples (200)
{
  "acknowledged": true
}



Update the connector features Technical preview

PUT /_connector/{connector_id}/_features

Update the connector features in the connector document. This API can be used to control the following aspects of a connector:

  • document-level security
  • incremental syncs
  • advanced sync rules
  • basic sync rules

Normally, the running connector service automatically manages these features. However, you can use this API to override the default behavior.

To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated.

application/json

Body Required

  • features object Required
    Hide features attributes Show features attributes object
    • document_level_security object

      Indicates whether document-level security is enabled.

      Hide document_level_security attribute Show document_level_security attribute object
      • enabled boolean Required
    • incremental_sync object

      Indicates whether incremental syncs are enabled.

      Hide incremental_sync attribute Show incremental_sync attribute object
      • enabled boolean Required
    • native_connector_api_keys object

      Indicates whether managed connector API keys are enabled.

      Hide native_connector_api_keys attribute Show native_connector_api_keys attribute object
      • enabled boolean Required
    • sync_rules object
      Hide sync_rules attributes Show sync_rules attributes object
      • advanced object

        Indicates whether advanced sync rules are enabled.

        Hide advanced attribute Show advanced attribute object
        • enabled boolean Required
      • basic object

        Indicates whether basic sync rules are enabled.

        Hide basic attribute Show basic attribute object
        • enabled boolean Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_features
PUT _connector/my-connector/_features
{
  "features": {
    "document_level_security": {
      "enabled": true
    },
    "incremental_sync": {
      "enabled": true
    },
    "sync_rules": {
      "advanced": {
        "enabled": false
      },
      "basic": {
        "enabled": true
      }
    }
  }
}
resp = client.connector.update_features(
    connector_id="my-connector",
    features={
        "document_level_security": {
            "enabled": True
        },
        "incremental_sync": {
            "enabled": True
        },
        "sync_rules": {
            "advanced": {
                "enabled": False
            },
            "basic": {
                "enabled": True
            }
        }
    },
)
const response = await client.connector.updateFeatures({
  connector_id: "my-connector",
  features: {
    document_level_security: {
      enabled: true,
    },
    incremental_sync: {
      enabled: true,
    },
    sync_rules: {
      advanced: {
        enabled: false,
      },
      basic: {
        enabled: true,
      },
    },
  },
});
response = client.connector.update_features(
  connector_id: "my-connector",
  body: {
    "features": {
      "document_level_security": {
        "enabled": true
      },
      "incremental_sync": {
        "enabled": true
      },
      "sync_rules": {
        "advanced": {
          "enabled": false
        },
        "basic": {
          "enabled": true
        }
      }
    }
  }
)
$resp = $client->connector()->updateFeatures([
    "connector_id" => "my-connector",
    "body" => [
        "features" => [
            "document_level_security" => [
                "enabled" => true,
            ],
            "incremental_sync" => [
                "enabled" => true,
            ],
            "sync_rules" => [
                "advanced" => [
                    "enabled" => false,
                ],
                "basic" => [
                    "enabled" => true,
                ],
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"features":{"document_level_security":{"enabled":true},"incremental_sync":{"enabled":true},"sync_rules":{"advanced":{"enabled":false},"basic":{"enabled":true}}}}' "$ELASTICSEARCH_URL/_connector/my-connector/_features"
client.connector().updateFeatures(u -> u
    .connectorId("my-connector")
    .features(f -> f
        .documentLevelSecurity(d -> d
            .enabled(true)
        )
        .incrementalSync(i -> i
            .enabled(true)
        )
        .syncRules(s -> s
            .advanced(a -> a
                .enabled(false)
            )
            .basic(b -> b
                .enabled(true)
            )
        )
    )
);
Request examples
{
  "features": {
    "document_level_security": {
      "enabled": true
    },
    "incremental_sync": {
      "enabled": true
    },
    "sync_rules": {
      "advanced": {
        "enabled": false
      },
      "basic": {
        "enabled": true
      }
    }
  }
}
{
  "features": {
    "document_level_security": {
      "enabled": true
    }
  }
}
Response examples (200)
{
  "result": "updated"
}



Update the connector index name Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_index_name

Update the index_name field of a connector, specifying the index where the data ingested by the connector is stored.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • index_name string | null Required

    One of:

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_index_name
PUT _connector/my-connector/_index_name
{
    "index_name": "data-from-my-google-drive"
}
resp = client.connector.update_index_name(
    connector_id="my-connector",
    index_name="data-from-my-google-drive",
)
const response = await client.connector.updateIndexName({
  connector_id: "my-connector",
  index_name: "data-from-my-google-drive",
});
response = client.connector.update_index_name(
  connector_id: "my-connector",
  body: {
    "index_name": "data-from-my-google-drive"
  }
)
$resp = $client->connector()->updateIndexName([
    "connector_id" => "my-connector",
    "body" => [
        "index_name" => "data-from-my-google-drive",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index_name":"data-from-my-google-drive"}' "$ELASTICSEARCH_URL/_connector/my-connector/_index_name"
client.connector().updateIndexName(u -> u
    .connectorId("my-connector")
    .indexName("data-from-my-google-drive")
);
Request example
{
    "index_name": "data-from-my-google-drive"
}
Response examples (200)
{
  "result": "updated"
}



Create or update auto-follow patterns Generally available; Added in 6.5.0

PUT /_ccr/auto_follow/{name}

Create a collection of cross-cluster replication auto-follow patterns for a remote cluster. Newly created indices on the remote cluster that match any of the patterns are automatically configured as follower indices. Indices on the remote cluster that were created before the auto-follow pattern was created will not be auto-followed even if they match the pattern.

This API can also be used to update auto-follow patterns. NOTE: Follower indices that were configured automatically before updating an auto-follow pattern will remain unchanged even if they do not match against the new patterns.

External documentation

Path parameters

  • name string Required

    The name of the collection of auto-follow patterns.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

application/json

Body Required

  • remote_cluster string Required

    The remote cluster containing the leader indices to match against.

  • follow_index_pattern string

    The name of the follower index. The template {{leader_index}} can be used to derive the name of the follower index from the name of the leader index. When following a data stream, use {{leader_index}}; CCR does not support changes to the names of a follower data stream’s backing indices.

  • leader_index_patterns array[string]

    An array of simple index patterns to match against indices in the remote cluster specified by the remote_cluster field.

  • leader_index_exclusion_patterns array[string]

    An array of simple index patterns that can be used to exclude indices from being auto-followed. Indices in the remote cluster whose names are matching one or more leader_index_patterns and one or more leader_index_exclusion_patterns won’t be followed.

  • max_outstanding_read_requests number

    The maximum number of outstanding read requests from the remote cluster.

    Default value is 12.

  • settings object

    Settings to override from the leader index. Note that certain settings cannot be overridden (for example, index.number_of_shards).

    Hide settings attribute Show settings attribute object
    • * object Additional properties
  • max_outstanding_write_requests number

    The maximum number of outstanding write requests on the follower.

    Default value is 9.

  • read_poll_timeout string

    The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index. When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics. Then the follower will immediately attempt to read from the leader again.

  • max_read_request_operation_count number

    The maximum number of operations to pull per read from the remote cluster.

    Default value is 5120.

  • max_read_request_size number | string

    The maximum size in bytes per read of a batch of operations pulled from the remote cluster.

    One of:

    The maximum size in bytes per read of a batch of operations pulled from the remote cluster.

  • max_retry_delay string

    The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when retrying.

  • max_write_buffer_count number

    The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit.

    Default value is 2147483647.

  • max_write_buffer_size number | string

    The maximum total bytes of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the total bytes of queued operations goes below the limit.

    One of:

    The maximum total bytes of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the total bytes of queued operations goes below the limit.

  • max_write_request_operation_count number

    The maximum number of operations per bulk write request executed on the follower.

    Default value is 5120.

  • max_write_request_size number | string

    The maximum total bytes of operations per bulk write request executed on the follower.

    One of:

    The maximum total bytes of operations per bulk write request executed on the follower.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_ccr/auto_follow/{name}
PUT /_ccr/auto_follow/my_auto_follow_pattern
{
  "remote_cluster" : "remote_cluster",
  "leader_index_patterns" :
  [
    "leader_index*"
  ],
  "follow_index_pattern" : "{{leader_index}}-follower",
  "settings": {
    "index.number_of_replicas": 0
  },
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
resp = client.ccr.put_auto_follow_pattern(
    name="my_auto_follow_pattern",
    remote_cluster="remote_cluster",
    leader_index_patterns=[
        "leader_index*"
    ],
    follow_index_pattern="{{leader_index}}-follower",
    settings={
        "index.number_of_replicas": 0
    },
    max_read_request_operation_count=1024,
    max_outstanding_read_requests=16,
    max_read_request_size="1024k",
    max_write_request_operation_count=32768,
    max_write_request_size="16k",
    max_outstanding_write_requests=8,
    max_write_buffer_count=512,
    max_write_buffer_size="512k",
    max_retry_delay="10s",
    read_poll_timeout="30s",
)
const response = await client.ccr.putAutoFollowPattern({
  name: "my_auto_follow_pattern",
  remote_cluster: "remote_cluster",
  leader_index_patterns: ["leader_index*"],
  follow_index_pattern: "{{leader_index}}-follower",
  settings: {
    "index.number_of_replicas": 0,
  },
  max_read_request_operation_count: 1024,
  max_outstanding_read_requests: 16,
  max_read_request_size: "1024k",
  max_write_request_operation_count: 32768,
  max_write_request_size: "16k",
  max_outstanding_write_requests: 8,
  max_write_buffer_count: 512,
  max_write_buffer_size: "512k",
  max_retry_delay: "10s",
  read_poll_timeout: "30s",
});
response = client.ccr.put_auto_follow_pattern(
  name: "my_auto_follow_pattern",
  body: {
    "remote_cluster": "remote_cluster",
    "leader_index_patterns": [
      "leader_index*"
    ],
    "follow_index_pattern": "{{leader_index}}-follower",
    "settings": {
      "index.number_of_replicas": 0
    },
    "max_read_request_operation_count": 1024,
    "max_outstanding_read_requests": 16,
    "max_read_request_size": "1024k",
    "max_write_request_operation_count": 32768,
    "max_write_request_size": "16k",
    "max_outstanding_write_requests": 8,
    "max_write_buffer_count": 512,
    "max_write_buffer_size": "512k",
    "max_retry_delay": "10s",
    "read_poll_timeout": "30s"
  }
)
$resp = $client->ccr()->putAutoFollowPattern([
    "name" => "my_auto_follow_pattern",
    "body" => [
        "remote_cluster" => "remote_cluster",
        "leader_index_patterns" => array(
            "leader_index*",
        ),
        "follow_index_pattern" => "{{leader_index}}-follower",
        "settings" => [
            "index.number_of_replicas" => 0,
        ],
        "max_read_request_operation_count" => 1024,
        "max_outstanding_read_requests" => 16,
        "max_read_request_size" => "1024k",
        "max_write_request_operation_count" => 32768,
        "max_write_request_size" => "16k",
        "max_outstanding_write_requests" => 8,
        "max_write_buffer_count" => 512,
        "max_write_buffer_size" => "512k",
        "max_retry_delay" => "10s",
        "read_poll_timeout" => "30s",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"remote_cluster":"remote_cluster","leader_index_patterns":["leader_index*"],"follow_index_pattern":"{{leader_index}}-follower","settings":{"index.number_of_replicas":0},"max_read_request_operation_count":1024,"max_outstanding_read_requests":16,"max_read_request_size":"1024k","max_write_request_operation_count":32768,"max_write_request_size":"16k","max_outstanding_write_requests":8,"max_write_buffer_count":512,"max_write_buffer_size":"512k","max_retry_delay":"10s","read_poll_timeout":"30s"}' "$ELASTICSEARCH_URL/_ccr/auto_follow/my_auto_follow_pattern"
client.ccr().putAutoFollowPattern(p -> p
    .followIndexPattern("{{leader_index}}-follower")
    .leaderIndexPatterns("leader_index*")
    .maxOutstandingReadRequests(16)
    .maxOutstandingWriteRequests(8)
    .maxReadRequestOperationCount(1024)
    .maxReadRequestSize("1024k")
    .maxRetryDelay(m -> m
        .time("10s")
    )
    .maxWriteBufferCount(512)
    .maxWriteBufferSize("512k")
    .maxWriteRequestOperationCount(32768)
    .maxWriteRequestSize("16k")
    .name("my_auto_follow_pattern")
    .readPollTimeout(r -> r
        .time("30s")
    )
    .remoteCluster("remote_cluster")
    .settings("index.number_of_replicas", JsonData.fromJson("0"))
);
Request example
Run `PUT /_ccr/auto_follow/my_auto_follow_pattern` to create an auto-follow pattern.
{
  "remote_cluster" : "remote_cluster",
  "leader_index_patterns" :
  [
    "leader_index*"
  ],
  "follow_index_pattern" : "{{leader_index}}-follower",
  "settings": {
    "index.number_of_replicas": 0
  },
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
Response examples (200)
A successful response for creating an auto-follow pattern.
{
  "acknowledged": true
}




Create a follower Generally available; Added in 6.5.0

PUT /{index}/_ccr/follow

Create a cross-cluster replication follower index that follows a specific leader index. When the API returns, the follower index exists and cross-cluster replication starts replicating operations from the leader index to the follower index.

Path parameters

  • index string Required

    The name of the follower index.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    Specifies the number of shards to wait on being active before responding. This defaults to waiting on none of the shards to be active. A shard must be restored from the leader index before being active. Restoring a follower shard requires transferring all the remote Lucene segment files to the follower index.

    Values are all or index-setting.

application/json

Body Required

  • data_stream_name string

    If the leader index is part of a data stream, the name to which the local data stream for the followed index should be renamed.

  • leader_index string Required

    The name of the index in the leader cluster to follow.

  • max_outstanding_read_requests number

    The maximum number of outstanding read requests from the remote cluster.

  • max_outstanding_write_requests number

    The maximum number of outstanding write requests on the follower.

  • max_read_request_operation_count number

    The maximum number of operations to pull per read from the remote cluster.

  • max_read_request_size number | string

    The maximum size in bytes per read of a batch of operations pulled from the remote cluster.

    One of:

    The maximum size in bytes per read of a batch of operations pulled from the remote cluster.

  • max_retry_delay string

    The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when retrying.

  • max_write_buffer_count number

    The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit.

  • max_write_buffer_size number | string

    The maximum total bytes of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the total bytes of queued operations goes below the limit.

    One of:

    The maximum total bytes of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the total bytes of queued operations goes below the limit.

  • max_write_request_operation_count number

    The maximum number of operations per bulk write request executed on the follower.

  • max_write_request_size number | string

    The maximum total bytes of operations per bulk write request executed on the follower.

    One of:

    The maximum total bytes of operations per bulk write request executed on the follower.

  • read_poll_timeout string

    The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index. When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics. Then the follower will immediately attempt to read from the leader again.

  • remote_cluster string Required

    The remote cluster containing the leader index.

  • settings object

    Settings to override from the leader index.

    Index settings

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • follow_index_created boolean Required
    • follow_index_shards_acked boolean Required
    • index_following_started boolean Required
PUT /{index}/_ccr/follow
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index",
  "settings": {
    "index.number_of_replicas": 0
  },
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
resp = client.ccr.follow(
    index="follower_index",
    wait_for_active_shards="1",
    remote_cluster="remote_cluster",
    leader_index="leader_index",
    settings={
        "index.number_of_replicas": 0
    },
    max_read_request_operation_count=1024,
    max_outstanding_read_requests=16,
    max_read_request_size="1024k",
    max_write_request_operation_count=32768,
    max_write_request_size="16k",
    max_outstanding_write_requests=8,
    max_write_buffer_count=512,
    max_write_buffer_size="512k",
    max_retry_delay="10s",
    read_poll_timeout="30s",
)
const response = await client.ccr.follow({
  index: "follower_index",
  wait_for_active_shards: 1,
  remote_cluster: "remote_cluster",
  leader_index: "leader_index",
  settings: {
    "index.number_of_replicas": 0,
  },
  max_read_request_operation_count: 1024,
  max_outstanding_read_requests: 16,
  max_read_request_size: "1024k",
  max_write_request_operation_count: 32768,
  max_write_request_size: "16k",
  max_outstanding_write_requests: 8,
  max_write_buffer_count: 512,
  max_write_buffer_size: "512k",
  max_retry_delay: "10s",
  read_poll_timeout: "30s",
});
response = client.ccr.follow(
  index: "follower_index",
  wait_for_active_shards: "1",
  body: {
    "remote_cluster": "remote_cluster",
    "leader_index": "leader_index",
    "settings": {
      "index.number_of_replicas": 0
    },
    "max_read_request_operation_count": 1024,
    "max_outstanding_read_requests": 16,
    "max_read_request_size": "1024k",
    "max_write_request_operation_count": 32768,
    "max_write_request_size": "16k",
    "max_outstanding_write_requests": 8,
    "max_write_buffer_count": 512,
    "max_write_buffer_size": "512k",
    "max_retry_delay": "10s",
    "read_poll_timeout": "30s"
  }
)
$resp = $client->ccr()->follow([
    "index" => "follower_index",
    "wait_for_active_shards" => "1",
    "body" => [
        "remote_cluster" => "remote_cluster",
        "leader_index" => "leader_index",
        "settings" => [
            "index.number_of_replicas" => 0,
        ],
        "max_read_request_operation_count" => 1024,
        "max_outstanding_read_requests" => 16,
        "max_read_request_size" => "1024k",
        "max_write_request_operation_count" => 32768,
        "max_write_request_size" => "16k",
        "max_outstanding_write_requests" => 8,
        "max_write_buffer_count" => 512,
        "max_write_buffer_size" => "512k",
        "max_retry_delay" => "10s",
        "read_poll_timeout" => "30s",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"remote_cluster":"remote_cluster","leader_index":"leader_index","settings":{"index.number_of_replicas":0},"max_read_request_operation_count":1024,"max_outstanding_read_requests":16,"max_read_request_size":"1024k","max_write_request_operation_count":32768,"max_write_request_size":"16k","max_outstanding_write_requests":8,"max_write_buffer_count":512,"max_write_buffer_size":"512k","max_retry_delay":"10s","read_poll_timeout":"30s"}' "$ELASTICSEARCH_URL/follower_index/_ccr/follow?wait_for_active_shards=1"
client.ccr().follow(f -> f
    .index("follower_index")
    .leaderIndex("leader_index")
    .maxOutstandingReadRequests(16L)
    .maxOutstandingWriteRequests(8)
    .maxReadRequestOperationCount(1024)
    .maxReadRequestSize("1024k")
    .maxRetryDelay(m -> m
        .time("10s")
    )
    .maxWriteBufferCount(512)
    .maxWriteBufferSize("512k")
    .maxWriteRequestOperationCount(32768)
    .maxWriteRequestSize("16k")
    .readPollTimeout(r -> r
        .time("30s")
    )
    .remoteCluster("remote_cluster")
    .settings(s -> s
        .otherSettings("index.number_of_replicas", JsonData.fromJson("0"))
    )
    .waitForActiveShards(w -> w
        .count(1)
    )
);
Request example
Run `PUT /follower_index/_ccr/follow?wait_for_active_shards=1` to create a follower index named `follower_index`.
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index",
  "settings": {
    "index.number_of_replicas": 0
  },
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
Response examples (200)
A successful response from `PUT /follower_index/_ccr/follow?wait_for_active_shards=1`.
{
  "follow_index_created" : true,
  "follow_index_shards_acked" : true,
  "index_following_started" : true
}



Resume a follower Generally available; Added in 6.5.0

POST /{index}/_ccr/resume_follow

Resume a cross-cluster replication follower index that was paused. The follower index could have been paused with the pause follower API. Alternatively, it could have been paused because replication could not be retried after failures during following tasks. When this API returns, the follower index will resume fetching operations from the leader index.

External documentation

Path parameters

  • index string Required

    The name of the follow index to resume following.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

application/json

Body

  • max_outstanding_read_requests number
  • max_outstanding_write_requests number
  • max_read_request_operation_count number
  • max_read_request_size string
  • max_retry_delay string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • max_write_buffer_count number
  • max_write_buffer_size string
  • max_write_request_operation_count number
  • max_write_request_size string
  • read_poll_timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /{index}/_ccr/resume_follow
POST /follower_index/_ccr/resume_follow
{
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
resp = client.ccr.resume_follow(
    index="follower_index",
    max_read_request_operation_count=1024,
    max_outstanding_read_requests=16,
    max_read_request_size="1024k",
    max_write_request_operation_count=32768,
    max_write_request_size="16k",
    max_outstanding_write_requests=8,
    max_write_buffer_count=512,
    max_write_buffer_size="512k",
    max_retry_delay="10s",
    read_poll_timeout="30s",
)
const response = await client.ccr.resumeFollow({
  index: "follower_index",
  max_read_request_operation_count: 1024,
  max_outstanding_read_requests: 16,
  max_read_request_size: "1024k",
  max_write_request_operation_count: 32768,
  max_write_request_size: "16k",
  max_outstanding_write_requests: 8,
  max_write_buffer_count: 512,
  max_write_buffer_size: "512k",
  max_retry_delay: "10s",
  read_poll_timeout: "30s",
});
response = client.ccr.resume_follow(
  index: "follower_index",
  body: {
    "max_read_request_operation_count": 1024,
    "max_outstanding_read_requests": 16,
    "max_read_request_size": "1024k",
    "max_write_request_operation_count": 32768,
    "max_write_request_size": "16k",
    "max_outstanding_write_requests": 8,
    "max_write_buffer_count": 512,
    "max_write_buffer_size": "512k",
    "max_retry_delay": "10s",
    "read_poll_timeout": "30s"
  }
)
$resp = $client->ccr()->resumeFollow([
    "index" => "follower_index",
    "body" => [
        "max_read_request_operation_count" => 1024,
        "max_outstanding_read_requests" => 16,
        "max_read_request_size" => "1024k",
        "max_write_request_operation_count" => 32768,
        "max_write_request_size" => "16k",
        "max_outstanding_write_requests" => 8,
        "max_write_buffer_count" => 512,
        "max_write_buffer_size" => "512k",
        "max_retry_delay" => "10s",
        "read_poll_timeout" => "30s",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"max_read_request_operation_count":1024,"max_outstanding_read_requests":16,"max_read_request_size":"1024k","max_write_request_operation_count":32768,"max_write_request_size":"16k","max_outstanding_write_requests":8,"max_write_buffer_count":512,"max_write_buffer_size":"512k","max_retry_delay":"10s","read_poll_timeout":"30s"}' "$ELASTICSEARCH_URL/follower_index/_ccr/resume_follow"
client.ccr().resumeFollow(r -> r
    .index("follower_index")
    .maxOutstandingReadRequests(16L)
    .maxOutstandingWriteRequests(8L)
    .maxReadRequestOperationCount(1024L)
    .maxReadRequestSize("1024k")
    .maxRetryDelay(m -> m
        .time("10s")
    )
    .maxWriteBufferCount(512L)
    .maxWriteBufferSize("512k")
    .maxWriteRequestOperationCount(32768L)
    .maxWriteRequestSize("16k")
    .readPollTimeout(re -> re
        .time("30s")
    )
);
Request example
Run `POST /follower_index/_ccr/resume_follow` to resume the follower index.
{
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 16,
  "max_read_request_size" : "1024k",
  "max_write_request_operation_count" : 32768,
  "max_write_request_size" : "16k",
  "max_outstanding_write_requests" : 8,
  "max_write_buffer_count" : 512,
  "max_write_buffer_size" : "512k",
  "max_retry_delay" : "10s",
  "read_poll_timeout" : "30s"
}
Response examples (200)
A successful response from resuming a follower index.
{
  "acknowledged" : true
}

Unfollow an index Generally available; Added in 6.5.0

POST /{index}/_ccr/unfollow

Convert a cross-cluster replication follower index to a regular index. The API stops the following task associated with a follower index and removes index metadata and settings associated with cross-cluster replication. The follower index must be paused and closed before you call the unfollow API.


Currently cross-cluster replication does not support converting an existing regular index to a follower index. Converting a follower index to a regular index is an irreversible operation.

Required authorization

  • Index privileges: manage_follow_index
External documentation

Path parameters

  • index string Required

    The name of the follower index.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /{index}/_ccr/unfollow
POST /follower_index/_ccr/unfollow
resp = client.ccr.unfollow(
    index="follower_index",
)
const response = await client.ccr.unfollow({
  index: "follower_index",
});
response = client.ccr.unfollow(
  index: "follower_index"
)
$resp = $client->ccr()->unfollow([
    "index" => "follower_index",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/follower_index/_ccr/unfollow"
client.ccr().unfollow(u -> u
    .index("follower_index")
);
Response examples (200)
A successful response from `POST /follower_index/_ccr/unfollow`.
{
  "acknowledged" : true
}

Get data streams Generally available; Added in 7.9.0

GET /_data_stream/{name}

All methods and paths for this operation:

GET /_data_stream

GET /_data_stream/{name}

Get information about one or more data streams.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • name string | array[string]

    Comma-separated list of data stream names used to limit the request. Wildcard (*) expressions are supported. If omitted, all data streams are returned.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • include_defaults boolean Generally available; Added in 8.11.0

    If true, returns all relevant default configurations for the index template.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • verbose boolean

    Whether the maximum timestamp for each data stream should be calculated and returned.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • _meta object

        Custom metadata for the stream, copied from the _meta object of the stream’s matching index template. If empty, the response omits this property.

        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • allow_custom_routing boolean

        If true, the data stream allows custom routing on write requests.

      • failure_store object

        Information about failure store backing indices.

        Hide failure_store attributes Show failure_store attributes object
        • enabled boolean Required
        • indices array[object] Required
        • rollover_on_write boolean Required
      • generation number Required

        Current generation for the data stream. This number acts as a cumulative count of the stream’s rollovers, starting at 1.

      • hidden boolean Required

        If true, the data stream is hidden.

      • ilm_policy string

        Name of the current ILM lifecycle policy in the stream’s matching index template. This lifecycle policy is set in the index.lifecycle.name setting. If the template does not include a lifecycle policy, this property is not included in the response. NOTE: A data stream’s backing indices may be assigned different lifecycle policies. To retrieve the lifecycle policy for individual backing indices, use the get index settings API.

      • next_generation_managed_by string Required

        Name of the lifecycle system that'll manage the next generation of the data stream.

        Values are Index Lifecycle Management, Data stream lifecycle, or Unmanaged.

      • prefer_ilm boolean Required

        Indicates if ILM should take precedence over DSL in case both are configured to manage this data stream.

      • indices array[object] Required

        Array of objects containing information about the data stream’s backing indices. The last item in this array contains information about the stream’s current write index.

        Hide indices attributes Show indices attributes object
        • index_name string Required

          Name of the backing index.

        • index_uuid string Required

          Universally unique identifier (UUID) for the index.

        • ilm_policy string

          Name of the current ILM lifecycle policy configured for this backing index.

        • managed_by string

          Name of the lifecycle system that's currently managing this backing index.

          Values are Index Lifecycle Management, Data stream lifecycle, or Unmanaged.

        • prefer_ilm boolean

          Indicates if ILM should take precedence over DSL in case both are configured to manage this index.

      • lifecycle object

        Contains the configuration for the data stream lifecycle of this data stream.

        Hide lifecycle attribute Show lifecycle attribute object
        • enabled boolean

          If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

          Default value is true.

      • name string Required

        Name of the data stream.

      • replicated boolean

        If true, the data stream is created and managed by cross-cluster replication and the local cluster cannot write into this data stream or change its mappings.

      • rollover_on_write boolean Required

        If true, the next write to this data stream will trigger a rollover first and the document will be indexed in the new backing index. If the rollover fails the indexing request will fail too.

      • status string Required

        Health status of the data stream. This health status is based on the state of the primary and replica shards of the stream’s backing indices.

        Supported values include:

        • green (or GREEN): All shards are assigned.
        • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
        • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
        • unknown
        • unavailable

        Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

      • system boolean Generally available; Added in 7.10.0

        If true, the data stream is created and managed by an Elastic stack component and cannot be modified through normal user interaction.

      • template string Required

        Name of the index template used to create the data stream’s backing indices. The template’s index pattern must match the name of this data stream.

      • timestamp_field object Required

        Information about the @timestamp field in the data stream.

        Hide timestamp_field attribute Show timestamp_field attribute object
        • name string Required

          Name of the timestamp field for the data stream, which must be @timestamp. The @timestamp field must be included in every document indexed to the data stream.

GET /_data_stream/{name}
GET _data_stream/my-data-stream
resp = client.indices.get_data_stream(
    name="my-data-stream",
)
const response = await client.indices.getDataStream({
  name: "my-data-stream",
});
response = client.indices.get_data_stream(
  name: "my-data-stream"
)
$resp = $client->indices()->getDataStream([
    "name" => "my-data-stream",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/my-data-stream"
client.indices().getDataStream(g -> g
    .name("my-data-stream")
);
Response examples (200)
A successful response for retrieving information about a data stream.
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-2099.03.07-000001",
          "index_uuid": "xCEhwsp8Tey0-FLNFYVwSg",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        },
        {
          "index_name": ".ds-my-data-stream-2099.03.08-000002",
          "index_uuid": "PA_JquKGSiKcAKBA8DJ5gw",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        }
      ],
      "generation": 2,
      "_meta": {
        "my-meta-field": "foo"
      },
      "status": "GREEN",
      "next_generation_managed_by": "Index Lifecycle Management",
      "prefer_ilm": true,
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false,
      "rollover_on_write": false
    },
    {
      "name": "my-data-stream-two",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-two-2099.03.08-000001",
          "index_uuid": "3liBu2SYS5axasRt6fUIpA",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        }
      ],
      "generation": 1,
      "_meta": {
        "my-meta-field": "foo"
      },
      "status": "YELLOW",
      "next_generation_managed_by": "Index Lifecycle Management",
      "prefer_ilm": true,
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false,
      "rollover_on_write": false
    }
  ]
}

Get data stream lifecycles Generally available; Added in 8.11.0

GET /_data_stream/{name}/_lifecycle

Get the data stream lifecycle configuration of one or more data streams.

External documentation

Path parameters

  • name string | array[string] Required

    Comma-separated list of data streams to limit the request. Supports wildcards (*). To target all data streams, omit this parameter or use * or _all.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • include_defaults boolean

    If true, return all default settings in the response.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • name string Required
      • lifecycle object

        The data stream lifecycle configuration. If requested, it also includes the default rollover conditions.

        Hide lifecycle attribute Show lifecycle attribute object
        • enabled boolean

          If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

          Default value is true.

GET /_data_stream/{name}/_lifecycle
GET /_data_stream/{name}/_lifecycle?human&pretty
resp = client.indices.get_data_lifecycle(
    name="{name}",
    human=True,
    pretty=True,
)
const response = await client.indices.getDataLifecycle({
  name: "{name}",
  human: "true",
  pretty: "true",
});
response = client.indices.get_data_lifecycle(
  name: "{name}",
  human: "true",
  pretty: "true"
)
$resp = $client->indices()->getDataLifecycle([
    "name" => "{name}",
    "human" => "true",
    "pretty" => "true",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/%7Bname%7D/_lifecycle?human&pretty"
Response examples (200)
A successful response from `GET /_data_stream/{name}/_lifecycle?human&pretty`.
{
  "data_streams": [
    {
      "name": "my-data-stream-1",
      "lifecycle": {
        "enabled": true,
        "data_retention": "7d"
      }
    },
    {
      "name": "my-data-stream-2",
      "lifecycle": {
        "enabled": true,
        "data_retention": "7d"
      }
    }
  ]
}

Promote a data stream Generally available; Added in 7.9.0

POST /_data_stream/_promote/{name}

Promote a data stream from a replicated data stream managed by cross-cluster replication (CCR) to a regular data stream.

With CCR auto following, a data stream from a remote cluster can be replicated to the local cluster. These data streams can't be rolled over in the local cluster. These replicated data streams roll over only if the upstream data stream rolls over. In the event that the remote cluster is no longer available, the data stream in the local cluster can be promoted to a regular data stream, which allows these data streams to be rolled over in the local cluster.

NOTE: When promoting a data stream, ensure the local cluster has a data stream enabled index template that matches the data stream. If this is missing, the data stream will not be able to roll over until a matching index template is created. This will affect the lifecycle management of the data stream and interfere with the data stream size and retention.

Path parameters

  • name string Required

    The name of the data stream

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
POST /_data_stream/_promote/{name}
POST /_data_stream/_promote/my-data-stream
resp = client.indices.promote_data_stream(
    name="my-data-stream",
)
const response = await client.indices.promoteDataStream({
  name: "my-data-stream",
});
response = client.indices.promote_data_stream(
  name: "my-data-stream"
)
$resp = $client->indices()->promoteDataStream([
    "name" => "my-data-stream",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/_promote/my-data-stream"
client.indices().promoteDataStream(p -> p
    .name("my-data-stream")
);

Create or update a document in an index Generally available

POST /{index}/_doc/{id}

All methods and paths for this operation:

POST /{index}/_doc

PUT /{index}/_doc/{id}
POST /{index}/_doc/{id}

Add a JSON document to the specified data stream or index and make it searchable. If the target is an index and the document already exists, the request updates the document and increments its version.

NOTE: You cannot use this API to send update requests for existing documents in a data stream.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:

  • To add or overwrite a document using the PUT /<target>/_doc/<_id> request format, you must have the create, index, or write index privilege.
  • To add a document using the POST /<target>/_doc/ request format, you must have the create_doc, create, index, or write index privilege.
  • To automatically create a data stream or index with this API request, you must have the auto_configure, create_index, or manage index privilege.

Automatic data stream creation requires a matching index template with data stream enabled.

NOTE: Replica shards might not all be started when an indexing operation returns successfully. By default, only the primary is required. Set wait_for_active_shards to change this default behavior.

Automatically create data streams and indices

If the request's target doesn't exist and matches an index template with a data_stream definition, the index operation automatically creates the data stream.

If the target doesn't exist and doesn't match a data stream template, the operation automatically creates the index and applies any matching index templates.

NOTE: Elasticsearch includes several built-in index templates. To avoid naming collisions with these templates, refer to index pattern documentation.

If no mapping exists, the index operation creates a dynamic mapping. By default, new fields and objects are automatically added to the mapping if needed.

Automatic index creation is controlled by the action.auto_create_index setting. If it is true, any index can be created automatically. You can modify this setting to explicitly allow or block automatic creation of indices that match specified patterns or set it to false to turn off automatic index creation entirely. Specify a comma-separated list of patterns you want to allow, or prefix each pattern with + or - to indicate whether it should be allowed or blocked. When a list is specified, the default behavior is to disallow.

NOTE: The action.auto_create_index setting affects the automatic creation of indices only. It does not affect the creation of data streams.
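
As a sketch of how this setting can be changed at runtime, the following Python client call updates action.auto_create_index through the cluster settings API. The pattern list is illustrative only, and the environment variables match the curl examples in this document.

import os
from elasticsearch import Elasticsearch

client = Elasticsearch(os.environ["ELASTICSEARCH_URL"], api_key=os.environ["ELASTIC_API_KEY"])

# Allow automatic creation of indices matching my-index-*, block restricted-*,
# and (because a list is specified) disallow everything else.
client.cluster.put_settings(
    persistent={
        "action.auto_create_index": "+my-index-*,-restricted-*"
    }
)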

Optimistic concurrency control

Index operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the if_seq_no and if_primary_term parameters. If a mismatch is detected, the operation will result in a VersionConflictException and a status code of 409.
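
For example, a conditional write with the Python client might look like the following sketch. The if_seq_no and if_primary_term values are placeholders that would normally come from a previous index or get response.

import os
from elasticsearch import Elasticsearch

client = Elasticsearch(os.environ["ELASTICSEARCH_URL"], api_key=os.environ["ELASTIC_API_KEY"])

# Overwrite the document only if it still has the sequence number and primary
# term observed earlier; otherwise the request fails with a 409 conflict.
resp = client.index(
    index="my-index-000001",
    id="1",
    if_seq_no=10,        # placeholder value from an earlier response
    if_primary_term=1,   # placeholder value from an earlier response
    document={"user": {"id": "kimchy"}},
)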

Routing

By default, shard placement — or routing — is controlled by using a hash of the document's ID value. For more explicit control, the value fed into the hash function used by the router can be directly specified on a per-operation basis using the routing parameter.

When setting up explicit mapping, you can also use the _routing field to direct the index operation to extract the routing value from the document itself. This does come at the (very minimal) cost of an additional document parsing pass. If the _routing mapping is defined and set to be required, the index operation will fail if no routing value is provided or extracted.

NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.
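
A minimal sketch of per-operation routing with the Python client follows; the routing value and document content are placeholders.

import os
from elasticsearch import Elasticsearch

client = Elasticsearch(os.environ["ELASTICSEARCH_URL"], api_key=os.environ["ELASTIC_API_KEY"])

# Feed a custom value into the routing hash instead of the document ID.
resp = client.index(
    index="my-index-000001",
    routing="user-123",  # placeholder routing value
    document={"user": {"id": "user-123"}},
)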

Distributed

The index operation is directed to the primary shard based on its route and performed on the actual node containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas.

Active shards

To improve the resiliency of writes to the system, indexing operations can be configured to wait for a certain number of active shard copies before proceeding with the operation. If the requisite number of active shard copies are not available, then the write operation must wait and retry, until either the requisite shard copies have started or a timeout occurs. By default, write operations only wait for the primary shards to be active before proceeding (that is to say wait_for_active_shards is 1). This default can be overridden in the index settings dynamically by setting index.write.wait_for_active_shards. To alter this behavior per operation, use the wait_for_active_shards request parameter.

Valid values are all or any positive integer up to the total number of configured copies per shard in the index (which is number_of_replicas+1). Specifying a negative value or a number greater than the number of shard copies will throw an error.

For example, suppose you have a cluster of three nodes, A, B, and C and you create an index index with the number of replicas set to 3 (resulting in 4 shard copies, one more copy than there are nodes). If you attempt an indexing operation, by default the operation will only ensure the primary copy of each shard is available before proceeding. This means that even if B and C went down and A hosted the primary shard copies, the indexing operation would still proceed with only one copy of the data. If wait_for_active_shards is set on the request to 3 (and all three nodes are up), the indexing operation will require 3 active shard copies before proceeding. This requirement should be met because there are 3 active nodes in the cluster, each one holding a copy of the shard. However, if you set wait_for_active_shards to all (or to 4, which is the same in this situation), the indexing operation will not proceed as you do not have all 4 copies of each shard active in the index. The operation will timeout unless a new node is brought up in the cluster to host the fourth copy of the shard.

It is important to note that this setting greatly reduces the chances of the write operation not writing to the requisite number of shard copies, but it does not completely eliminate the possibility, because this check occurs before the write operation starts. After the write operation is underway, it is still possible for replication to fail on any number of shard copies but still succeed on the primary. The _shards section of the API response reveals the number of shard copies on which replication succeeded and failed.
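
For example, the following Python sketch asks for two active copies (the primary plus one replica) before indexing; the value is illustrative and must not exceed number_of_replicas+1.

import os
from elasticsearch import Elasticsearch

client = Elasticsearch(os.environ["ELASTICSEARCH_URL"], api_key=os.environ["ELASTIC_API_KEY"])

# Require the primary and one replica to be active before the write proceeds.
resp = client.index(
    index="my-index-000001",
    wait_for_active_shards=2,
    document={"user": {"id": "kimchy"}},
)
print(resp["_shards"])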

No operation (noop) updates

When updating a document by using this API, a new version of the document is always created even if the document hasn't changed. If this isn't acceptable, use the _update API with detect_noop set to true. The detect_noop option isn't available on this API because it doesn't fetch the old source and isn't able to compare it against the new source.

There isn't a definitive rule for when noop updates aren't acceptable. It's a combination of lots of factors like how frequently your data source sends updates that are actually noops and how many queries per second Elasticsearch runs on the shard receiving the updates.
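
As a point of comparison, here is a sketch of the _update API with detect_noop via the Python client; detect_noop defaults to true there and is spelled out only for clarity.

import os
from elasticsearch import Elasticsearch

client = Elasticsearch(os.environ["ELASTICSEARCH_URL"], api_key=os.environ["ELASTIC_API_KEY"])

# Skip the write entirely if the merged document would be unchanged.
resp = client.update(
    index="my-index-000001",
    id="1",
    doc={"user": {"id": "kimchy"}},
    detect_noop=True,
)
print(resp["result"])  # "noop" when nothing changed, otherwise "updated"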

Versioning

Each indexed document is given a version number. By default, internal versioning is used that starts at 1 and increments with each update, deletes included. Optionally, the version number can be set to an external value (for example, if maintained in a database). To enable this functionality, version_type should be set to external. The value provided must be a numeric, long value greater than or equal to 0, and less than around 9.2e+18.

NOTE: Versioning is completely real time, and is not affected by the near real time aspects of search operations. If no version is provided, the operation runs without any version checks.

When using the external version type, the system checks to see if the version number passed to the index request is greater than the version of the currently stored document. If true, the document will be indexed and the new version number used. If the value provided is less than or equal to the stored document's version number, a version conflict will occur and the index operation will fail. For example:

PUT my-index-000001/_doc/1?version=2&version_type=external
{
  "user": {
    "id": "elkbee"
  }
}

In this example, the operation will succeed since the supplied version of 2 is higher than the current document version of 1.
If the document was already updated and its version was set to 2 or higher, the indexing command will fail and result in a conflict (409 HTTP status code).

A nice side effect is that there is no need to maintain strict ordering of async indexing operations run as a result of changes to a source database, as long as version numbers from the source database are used.
Even the simple case of updating the Elasticsearch index using data from a database is simplified if external versioning is used, as only the latest version will be used if the index operations arrive out of order.

Required authorization

  • Index privileges: index
External documentation

Path parameters

  • index string Required

    The name of the data stream or index to target. If the target doesn't exist and matches the name or wildcard (*) pattern of an index template with a data_stream definition, this request creates the data stream. If the target doesn't exist and doesn't match a data stream template, this request creates the index. You can check for existing targets with the resolve index API.

  • id string Required

    A unique identifier for the document. To automatically generate a document ID, use the POST /<target>/_doc/ request format and omit this parameter.

Query parameters

  • if_primary_term number

    Only perform the operation if the document has this primary term.

  • if_seq_no number

    Only perform the operation if the document has this sequence number.

  • include_source_on_error boolean

    If true, the document source is included in the error message when a parsing error occurs.

  • op_type string

    Set to create to only index the document if it does not already exist (put if absent). If a document with the specified _id already exists, the indexing operation will fail. The behavior is the same as using the <index>/_create endpoint. If a document ID is specified, this parameter defaults to index. Otherwise, it defaults to create. If the request targets a data stream, an op_type of create is required.

    Supported values include:

    • index: Overwrite any documents that already exist.
    • create: Only index documents that do not already exist.

    Values are index or create.

  • pipeline string

    The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured it will always run, regardless of the value of this parameter.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes.

    Values are true, false, or wait_for.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • timeout string

    The period the request waits for the following operations: automatic index creation, dynamic mapping updates, waiting for active shards.

    This parameter is useful for situations where the primary shard assigned to perform the operation might not be available when the operation runs. Some reasons for this might be that the primary shard is currently recovering from a gateway or undergoing relocation. By default, the operation will wait on the primary shard to become available for at least 1 minute before failing and responding with an error. The actual wait time could be longer, particularly when multiple waits occur.

    Values are -1 or 0.

  • version number

    An explicit version number for concurrency control. It must be a non-negative long number.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.

    Values are all or index-setting.

  • require_alias boolean

    If true, the destination must be an index alias.

  • require_data_stream boolean

    If true, the request's actions must target a data stream (existing or to be created).

application/json

Body Required

object object

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _id string Required

      The unique identifier for the added document.

    • _index string Required

      The name of the index the document was added to.

    • _primary_term number

      The primary term assigned to the document for the indexing operation.

    • result string Required

      The result of the indexing operation: created or updated.

      Values are created, updated, deleted, not_found, or noop.

    • _seq_no number

      The sequence number assigned to the document for the indexing operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

    • _shards object Required

      Information about the replication process of the operation.

      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number
        • status string
        • primary boolean
      • skipped number
    • _version number Required

      The document version, which is incremented each time the document is updated.

    • forced_refresh boolean
POST /{index}/_doc/{id}
POST my-index-000001/_doc/
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
resp = client.index(
    index="my-index-000001",
    document={
        "@timestamp": "2099-11-15T13:12:00",
        "message": "GET /search HTTP/1.1 200 1070000",
        "user": {
            "id": "kimchy"
        }
    },
)
const response = await client.index({
  index: "my-index-000001",
  document: {
    "@timestamp": "2099-11-15T13:12:00",
    message: "GET /search HTTP/1.1 200 1070000",
    user: {
      id: "kimchy",
    },
  },
});
response = client.index(
  index: "my-index-000001",
  body: {
    "@timestamp": "2099-11-15T13:12:00",
    "message": "GET /search HTTP/1.1 200 1070000",
    "user": {
      "id": "kimchy"
    }
  }
)
$resp = $client->index([
    "index" => "my-index-000001",
    "body" => [
        "@timestamp" => "2099-11-15T13:12:00",
        "message" => "GET /search HTTP/1.1 200 1070000",
        "user" => [
            "id" => "kimchy",
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"@timestamp":"2099-11-15T13:12:00","message":"GET /search HTTP/1.1 200 1070000","user":{"id":"kimchy"}}' "$ELASTICSEARCH_URL/my-index-000001/_doc/"
client.index(i -> i
    .index("my-index-000001")
    .document(JsonData.fromJson("{\"@timestamp\":\"2099-11-15T13:12:00\",\"message\":\"GET /search HTTP/1.1 200 1070000\",\"user\":{\"id\":\"kimchy\"}}"))
);
Request examples
Run `POST my-index-000001/_doc/` to index a document. When you use the `POST /<target>/_doc/` request format, the `op_type` is automatically set to `create` and the index operation generates a unique ID for the document.
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
Run `PUT my-index-000001/_doc/1` to insert a JSON document into the `my-index-000001` index with an `_id` of 1.
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
Response examples (200)
A successful response from `POST my-index-000001/_doc/`, which contains an automatically generated document ID.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "W0tpsmIBdwcYyG50zbta",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "result": "created"
}
A successful response from `PUT my-index-000001/_doc/1`.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "result": "created"
}

Check a document Generally available

HEAD /{index}/_doc/{id}

Verify that a document exists. For example, check to see if a document with the _id 0 exists:

HEAD my-index-000001/_doc/0

If the document exists, the API returns a status code of 200 - OK. If the document doesn’t exist, the API returns 404 - Not Found.

Versioning support

You can use the version parameter to check the document only if its current version is equal to the specified one.

Internally, Elasticsearch has marked the old document as deleted and added an entirely new document. The old version of the document doesn't disappear immediately, although you won't be able to access it. Elasticsearch cleans up deleted documents in the background as you continue to index more data.
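
A sketch of a version-conditional existence check with the Python client; the version value is a placeholder, and a mismatch produces a version conflict (409) rather than a plain true/false answer.

import os
from elasticsearch import Elasticsearch

client = Elasticsearch(os.environ["ELASTICSEARCH_URL"], api_key=os.environ["ELASTIC_API_KEY"])

# Succeeds only if the stored document's current version is 1.
exists = client.exists(
    index="my-index-000001",
    id="0",
    version=1,  # placeholder version to compare against
)
print(bool(exists))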

Path parameters

  • index string Required

    A comma-separated list of data streams, indices, and aliases. It supports wildcards (*).

  • id string Required

    A unique document identifier.

Query parameters

  • preference string

    The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

    If it is set to _local, the operation will prefer to be run on a local allocated shard when possible. If it is set to a custom value, the value is used to guarantee that the same shards will be used for the same custom value. This can help with "jumping values" when hitting different shards in different refresh states. A sample value can be something like the web session ID or the user name.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    A comma-separated list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false.

  • version number

    Explicit version number for concurrency control. The specified version must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
HEAD /{index}/_doc/{id}
HEAD my-index-000001/_doc/0
resp = client.exists(
    index="my-index-000001",
    id="0",
)
const response = await client.exists({
  index: "my-index-000001",
  id: 0,
});
response = client.exists(
  index: "my-index-000001",
  id: "0"
)
$resp = $client->exists([
    "index" => "my-index-000001",
    "id" => "0",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_doc/0"
client.exists(e -> e
    .id("0")
    .index("my-index-000001")
);

Get multiple documents Generally available; Added in 1.3.0

POST /{index}/_mget

All methods and paths for this operation:

GET /_mget

POST /_mget
GET /{index}/_mget
POST /{index}/_mget

Get multiple JSON documents by ID from one or more indices. If you specify an index in the request URI, you only need to specify the document IDs in the request body. To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail.

Filter source fields

By default, the _source field is returned for every document (if stored). Use the _source attribute to filter which fields are returned for a particular document: set it to false, to a list of fields, or to an object with include and exclude lists. You can include the _source, _source_includes, and _source_excludes query parameters in the request URI to specify the defaults to use when there are no per-document instructions.
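
The sketch below combines a request-level default with per-document overrides using the Python client; the field names and patterns are placeholders.

import os
from elasticsearch import Elasticsearch

client = Elasticsearch(os.environ["ELASTICSEARCH_URL"], api_key=os.environ["ELASTIC_API_KEY"])

resp = client.mget(
    index="my-index-000001",
    source_includes="user.*",  # request-level default (placeholder field pattern)
    docs=[
        {"_id": "1", "_source": False},                 # per-document override: no source at all
        {"_id": "2", "_source": ["field3", "field4"]},  # per-document field list
        {"_id": "3"},                                   # falls back to the request-level default
    ],
)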

Get stored fields

Use the stored_fields attribute to specify the set of stored fields you want to retrieve. Any requested fields that are not stored are ignored. You can include the stored_fields query parameter in the request URI to specify the defaults to use when there are no per-document instructions.

Required authorization

  • Index privileges: read

Path parameters

  • index string Required

    Name of the index to retrieve documents from when ids are specified, or when a document in the docs array does not specify an index.

Query parameters

  • preference string

    Specifies the node or shard the operation should be performed on. Random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes relevant shards before retrieving documents.

  • routing string

    Custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    True or false to return the _source field or not, or a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    A comma-separated list of stored fields to retrieve from the index rather than the document _source.

application/json

Body Required

  • docs array[object]

    The documents you want to retrieve. Required if no index is specified in the request URI.

    Hide docs attributes Show docs attributes object
    • _id string Required

      The unique document ID.

    • _index string

      The index that contains the document.

    • routing string

      The key for the primary shard the document resides on. Required if routing is used during indexing.

    • _source boolean | object

      If false, excludes all _source fields.

      One of:

      If false, excludes all _source fields.

    • stored_fields string | array[string]

      The stored fields you want to retrieve.

    • version number
    • version_type string

      Supported values include:

      • internal: Use internal versioning that starts at 1 and increments with each update or delete.
      • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
      • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
      • force: This option is deprecated because it can cause primary and replica shards to diverge.

      Values are internal, external, external_gte, or force.

  • ids string | array[string]

    The IDs of the documents you want to retrieve. Allowed when the index is specified in the request URI.

    One of:

    The IDs of the documents you want to retrieve. Allowed when the index is specified in the request URI.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • docs array[object] Required

      The response includes a docs array that contains the documents in the order specified in the request. The structure of the returned documents is similar to that returned by the get API. If there is a failure getting a particular document, the error is included in place of the document.

      One of:
      Hide attributes Show attributes
      • _index string Required

        The name of the index the document belongs to.

      • fields object

        If the stored_fields parameter is set to true and found is true, it contains the document fields stored in the index.

        Hide fields attribute Show fields attribute object
        • * object Additional properties
      • _ignored array[string]
      • found boolean Required

        Indicates whether the document exists.

      • _id string Required

        The unique identifier for the document.

      • _primary_term number

        The primary term assigned to the document for the indexing operation.

      • _routing string

        The explicit routing, if set.

      • _seq_no number

        The sequence number assigned to the document for the indexing operation. Sequence numbers are used to ensure an older version of a document doesn't overwrite a newer version.

      • _source object

        If found is true, it contains the document data formatted in JSON. If the _source parameter is set to false or the stored_fields parameter is set to true, it is excluded.

      • _version number

        The document version, which is incremented each time the document is updated.

POST /{index}/_mget
GET /my-index-000001/_mget
{
  "docs": [
    {
      "_id": "1"
    },
    {
      "_id": "2"
    }
  ]
}
resp = client.mget(
    index="my-index-000001",
    docs=[
        {
            "_id": "1"
        },
        {
            "_id": "2"
        }
    ],
)
const response = await client.mget({
  index: "my-index-000001",
  docs: [
    {
      _id: "1",
    },
    {
      _id: "2",
    },
  ],
});
response = client.mget(
  index: "my-index-000001",
  body: {
    "docs": [
      {
        "_id": "1"
      },
      {
        "_id": "2"
      }
    ]
  }
)
$resp = $client->mget([
    "index" => "my-index-000001",
    "body" => [
        "docs" => array(
            [
                "_id" => "1",
            ],
            [
                "_id" => "2",
            ],
        ),
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"docs":[{"_id":"1"},{"_id":"2"}]}' "$ELASTICSEARCH_URL/my-index-000001/_mget"
client.mget(m -> m
    .docs(List.of(MultiGetOperation.of(mu -> mu
            .id("1")),MultiGetOperation.of(mu -> mu
            .id("2"))))
    .index("my-index-000001")
);
Request examples
Run `GET /my-index-000001/_mget`. When you specify an index in the request URI, only the document IDs are required in the request body.
{
  "docs": [
    {
      "_id": "1"
    },
    {
      "_id": "2"
    }
  ]
}
Run `GET /_mget`. This request sets `_source` to `false` for document 1 to exclude the source entirely. It retrieves `field3` and `field4` from document 2. It retrieves the `user` field from document 3 but filters out the `user.location` field.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "_source": false
    },
    {
      "_index": "test",
      "_id": "2",
      "_source": [ "field3", "field4" ]
    },
    {
      "_index": "test",
      "_id": "3",
      "_source": {
        "include": [ "user" ],
        "exclude": [ "user.location" ]
      }
    }
  ]
}
Run `GET /_mget`. This request retrieves `field1` and `field2` from document 1 and `field3` and `field4` from document 2.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "stored_fields": [ "field1", "field2" ]
    },
    {
      "_index": "test",
      "_id": "2",
      "stored_fields": [ "field3", "field4" ]
    }
  ]
}
Run `GET /_mget?routing=key1`. If routing is used during indexing, you need to specify the routing value to retrieve documents. This request fetches `test/_doc/2` from the shard corresponding to routing key `key1`. It fetches `test/_doc/1` from the shard corresponding to routing key `key2`.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "routing": "key2"
    },
    {
      "_index": "test",
      "_id": "2"
    }
  ]
}

Update documents Generally available; Added in 2.4.0

POST /{index}/_update_by_query

Updates documents that match the specified query. If no query is specified, performs an update on every document in the data stream or index without modifying the source, which is useful for picking up mapping changes.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:

  • read
  • index or write

You can specify the query criteria in the request URI or the request body using the same syntax as the search API.

When you submit an update by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and updates matching documents using internal versioning. When the versions match, the document is updated and the version number is incremented. If a document changes between the time that the snapshot is taken and the update operation is processed, it results in a version conflict and the operation fails. You can opt to count version conflicts instead of halting and returning by setting conflicts to proceed. Note that if you opt to count version conflicts, the operation could attempt to update more documents from the source than max_docs until it has successfully updated max_docs documents or it has gone through every document in the source query.
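
For instance, this Python sketch counts version conflicts instead of aborting and caps the number of updates; the query and the max_docs value are placeholders.

import os
from elasticsearch import Elasticsearch

client = Elasticsearch(os.environ["ELASTICSEARCH_URL"], api_key=os.environ["ELASTIC_API_KEY"])

resp = client.update_by_query(
    index="my-index-000001",
    conflicts="proceed",  # count conflicts rather than failing the request
    max_docs=1000,        # placeholder cap on successfully updated documents
    query={"term": {"user.id": "kimchy"}},
)
print(resp["updated"], resp["version_conflicts"])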

NOTE: Documents with a version equal to 0 cannot be updated using update by query because internal versioning does not support 0 as a valid version number.

While processing an update by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents. A bulk update request is performed for each batch of matching documents. Any query or update failures cause the update by query request to fail and the failures are shown in the response. Any update requests that completed successfully still stick; they are not rolled back.

Throttling update requests

To control the rate at which update by query issues batches of update operations, you can set requests_per_second to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set requests_per_second to -1 to turn off throttling.

Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by the requests_per_second and the time spent writing. By default the batch size is 1000, so if requests_per_second is set to 500:

target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds

Since the batch is issued as a single _bulk request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".
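
A sketch of a throttled request with the Python client; with the default batch size of 1000 and the value below, each batch targets roughly two seconds, as in the calculation above.

import os
from elasticsearch import Elasticsearch

client = Elasticsearch(os.environ["ELASTICSEARCH_URL"], api_key=os.environ["ELASTIC_API_KEY"])

# Throttle to 500 sub-requests per second; set to -1 to disable throttling.
resp = client.update_by_query(
    index="my-index-000001",
    conflicts="proceed",
    requests_per_second=500,
)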

Slicing

Update by query supports sliced scroll to parallelize the update process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.

Setting slices to auto chooses a reasonable number for most data streams and indices. This setting will use one slice per shard, up to a certain limit. If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards.

Adding slices to _update_by_query just automates the manual process of creating sub-requests, which means it has some quirks:

  • You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
  • Fetching the status of the task for the request with slices only contains the status of completed slices.
  • These sub-requests are individually addressable for things like cancellation and rethrottling.
  • Rethrottling the request with slices will rethrottle the unfinished sub-request proportionally.
  • Canceling the request with slices will cancel each sub-request.
  • Due to the nature of slices each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
  • Parameters like requests_per_second and max_docs on a request with slices are distributed proportionally to each sub-request. Combine that with the point above about distribution being uneven and you should conclude that using max_docs with slices might not result in exactly max_docs documents being updated.
  • Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time.

If you're slicing manually or otherwise tuning automatic slicing, keep in mind that:

  • Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many slices hurts performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.
  • Update performance scales linearly across available resources with the number of slices.

Whether query or update performance dominates the runtime depends on the documents being reindexed and cluster resources.
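
A sketch of automatic slicing with the Python client; running it as a task (wait_for_completion=False) is optional and shown only so the sliced sub-requests can be inspected through the tasks APIs.

import os
from elasticsearch import Elasticsearch

client = Elasticsearch(os.environ["ELASTICSEARCH_URL"], api_key=os.environ["ELASTIC_API_KEY"])

# One slice per shard, up to the automatic limit.
resp = client.update_by_query(
    index="my-index-000001",
    slices="auto",
    conflicts="proceed",
    wait_for_completion=False,  # returns a task ID instead of blocking
)
print(resp["task"])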

Update the document source

Update by query supports scripts to update the document source. As with the update API, you can set ctx.op to change the operation that is performed.

Set ctx.op = "noop" if your script decides that it doesn't have to make any changes. The update by query operation skips updating the document and increments the noop counter.

Set ctx.op = "delete" if your script decides that the document should be deleted. The update by query operation deletes the document and increments the deleted counter.

Update by query supports only index, noop, and delete. Setting ctx.op to anything else is an error. Setting any other field in ctx is an error. This API enables you to only modify the source of matching documents; you cannot move them.
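
The following Python sketch uses a script to update the source, with a noop branch for documents that need no change; the count field is a placeholder.

import os
from elasticsearch import Elasticsearch

client = Elasticsearch(os.environ["ELASTICSEARCH_URL"], api_key=os.environ["ELASTIC_API_KEY"])

resp = client.update_by_query(
    index="my-index-000001",
    conflicts="proceed",
    query={"term": {"user.id": "kimchy"}},
    script={
        "lang": "painless",
        # Increment a (placeholder) counter; skip documents that lack it.
        "source": "if (ctx._source.count == null) { ctx.op = 'noop' } else { ctx._source.count++ }",
    },
)
print(resp["updated"], resp["noops"])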

Required authorization

  • Index privileges: read,write

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • analyzer string

    The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • conflicts string

    The preferred behavior when update by query hits version conflicts: abort or proceed.

    Supported values include:

    • abort: Stop reindexing if there are conflicts.
    • proceed: Continue reindexing even if there are conflicts.

    Values are abort or proceed.

  • default_operator string

    The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • from number

    Skips the specified number of documents.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • max_docs number

    The maximum number of documents to process. It defaults to all documents. When set to a value less than or equal to scroll_size, a scroll is not used to retrieve the results for the operation.

  • pipeline string

    The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured it will always run, regardless of the value of this parameter.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • q string

    A query in the Lucene query string syntax.

  • refresh boolean

    If true, Elasticsearch refreshes affected shards to make the operation visible to search after the request completes. This is different than the update API's refresh parameter, which causes just the shard that received the request to be refreshed.

  • request_cache boolean

    If true, the request cache is used for this request. It defaults to the index-level setting.

  • requests_per_second number

    The throttle for this request in sub-requests per second.

  • routing string

    A custom value used to route operations to a specific shard.

  • scroll string

    The period to retain the search context for scrolling.

    Values are -1 or 0.

  • scroll_size number

    The size of the scroll request that powers the operation.

  • search_timeout string

    An explicit timeout for each search request. By default, there is no timeout.

    Values are -1 or 0.

  • search_type string

    The type of the search operation. Available options include query_then_fetch and dfs_query_then_fetch.

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • slices number | string

    The number of slices this task should be divided into.

    Value is auto.

  • sort array[string]

    A comma-separated list of <field>:<direction> pairs.

  • stats array[string]

    The specific tag of the request for logging and statistical purposes.

  • terminate_after number

    The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.

    IMPORTANT: Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.

  • timeout string

    The period each update request waits for the following operations: dynamic mapping updates, waiting for active shards. By default, it is one minute. This guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur.

    Values are -1 or 0.

  • version boolean

    If true, returns the document version as part of a hit.

  • version_type boolean

    Determines whether the document version number is incremented when the document is hit (internal) or not (reindex).

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The timeout parameter controls how long each write request waits for unavailable shards to become available. Both work exactly the way they work in the bulk API.

    Values are all or index-setting.

  • wait_for_completion boolean

    If true, the request blocks until the operation is complete. If false, Elasticsearch performs some preflight checks, launches the request, and returns a task ID that you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at .tasks/task/${taskId}.

application/json

Body

  • max_docs number

    The maximum number of documents to update.

  • query object

    The documents to update using the Query DSL.

    External documentation
  • script object

    The script to run to update the document source or metadata when updating.

    Hide script attributes Show script attributes object
    • source string

      The script source.

    • id string

      The id for a stored script.

    • params object

      Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

      Hide params attribute Show params attribute object
      • * object Additional properties
    • lang string

      Specifies the language the script is written in.

      Supported values include:

      • painless: Painless scripting language, purpose-built for Elasticsearch.
      • expression: Lucene’s expressions language, compiles a JavaScript expression to bytecode.
      • mustache: Mustache templating language, used for templates.
      • java: Expert Java API

      Values are painless, expression, mustache, or java.

    • options object
      Hide options attribute Show options attribute object
      • * string Additional properties
  • slice object

    Slice the request manually using the provided slice ID and total number of slices.

    Hide slice attributes Show slice attributes object
    • field string

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • max number Required
  • conflicts string

    The preferred behavior when update by query hits version conflicts: abort or proceed.

    Supported values include:

    • abort: Stop reindexing if there are conflicts.
    • proceed: Continue reindexing even if there are conflicts.

    Values are abort or proceed.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • batches number

      The number of scroll responses pulled back by the update by query.

    • failures array[object]

      Array of failures if there were any unrecoverable errors during the process. If this array is non-empty, the request ended because of those failures. Update by query is implemented using batches. Any failure causes the entire process to end, but all failures in the current batch are collected into the array. You can use the conflicts option to prevent the operation from ending when version conflicts occur.

      Hide failures attributes Show failures attributes object
      • cause object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide cause attributes Show cause attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • id string Required
      • index string Required
      • status number Required
    • noops number

      The number of documents that were ignored because the script used for the update by query returned a noop value for ctx.op.

    • deleted number

      The number of documents that were successfully deleted.

    • requests_per_second number

      The number of requests per second effectively run during the update by query.

    • retries object

      The number of retries attempted by update by query. bulk is the number of bulk actions retried. search is the number of search actions retried.

      Hide retries attributes Show retries attributes object
      • bulk number Required

        The number of bulk actions retried.

    • task string | number

    • timed_out boolean

      If true, some requests timed out during the update by query.

    • took number

      Time unit for milliseconds

    • total number

      The number of documents that were successfully processed.

    • updated number

      The number of documents that were successfully updated.

    • version_conflicts number

      The number of version conflicts that the update by query hit.

    • throttled string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • throttled_millis number

      Time unit for milliseconds

    • throttled_until string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • throttled_until_millis number

      Time unit for milliseconds

POST /{index}/_update_by_query
POST my-index-000001/_update_by_query?conflicts=proceed
{
  "query": { 
    "term": {
      "user.id": "kimchy"
    }
  }
}
resp = client.update_by_query(
    index="my-index-000001",
    conflicts="proceed",
    query={
        "term": {
            "user.id": "kimchy"
        }
    },
)
const response = await client.updateByQuery({
  index: "my-index-000001",
  conflicts: "proceed",
  query: {
    term: {
      "user.id": "kimchy",
    },
  },
});
response = client.update_by_query(
  index: "my-index-000001",
  conflicts: "proceed",
  body: {
    "query": {
      "term": {
        "user.id": "kimchy"
      }
    }
  }
)
$resp = $client->updateByQuery([
    "index" => "my-index-000001",
    "conflicts" => "proceed",
    "body" => [
        "query" => [
            "term" => [
                "user.id" => "kimchy",
            ],
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":{"term":{"user.id":"kimchy"}}}' "$ELASTICSEARCH_URL/my-index-000001/_update_by_query?conflicts=proceed"
client.updateByQuery(u -> u
    .conflicts(Conflicts.Proceed)
    .index("my-index-000001")
    .query(q -> q
        .term(t -> t
            .field("user.id")
            .value(FieldValue.of("kimchy"))
        )
    )
);
Request examples
Run `POST my-index-000001/_update_by_query?conflicts=proceed` to update documents that match a query.
{
  "query": { 
    "term": {
      "user.id": "kimchy"
    }
  }
}
Run `POST my-index-000001/_update_by_query` with a script to update the document source. It increments the `count` field for all documents with a `user.id` of `kimchy` in `my-index-000001`.
{
  "script": {
    "source": "ctx._source.count++",
    "lang": "painless"
  },
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  }
}
Run `POST my-index-000001/_update_by_query` to slice an update by query manually. Provide a slice ID and total number of slices to each request.
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}
Run `POST my-index-000001/_update_by_query?refresh&slices=5` to use automatic slicing. It automatically parallelizes using sliced scroll to slice on `_id`.
{
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}
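
A hedged Python sketch of the task workflow described by the `wait_for_completion` parameter: launch the update asynchronously, then poll the tasks API with the returned task ID. The index name and polling interval are assumptions.

from elasticsearch import Elasticsearch
import time

client = Elasticsearch("http://localhost:9200", api_key="...")

# Returns immediately with {"task": "<node_id>:<task_id>"}.
resp = client.update_by_query(
    index="my-index-000001",
    conflicts="proceed",
    wait_for_completion=False,
    script={"source": "ctx._source['extra'] = 'test'", "lang": "painless"},
)
task_id = resp["task"]

# Poll the task record until it completes, then read the summary.
while True:
    status = client.tasks.get(task_id=task_id)
    if status["completed"]:
        break
    time.sleep(1)
print(status["response"]["updated"])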

Create an enrich policy Generally available; Added in 7.5.0

PUT /_enrich/policy/{name}

Creates an enrich policy.

Path parameters

  • name string Required

    Name of the enrich policy to create or update.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

application/json

Body Required

  • geo_match object Additional properties

    Matches enrich data to incoming documents based on a geo_shape query.

    Hide geo_match attributes Show geo_match attributes object
    • enrich_fields string | array[string] Required
    • indices string | array[string] Required
    • match_field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • name string
    • elasticsearch_version string
  • match object Additional properties

    Matches enrich data to incoming documents based on a term query.

    Hide match attributes Show match attributes object
    • enrich_fields string | array[string] Required
    • indices string | array[string] Required
    • match_field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • name string
    • elasticsearch_version string
  • range object Additional properties

    Matches a number, date, or IP address in incoming documents to a range in the enrich index based on a term query.

    Hide range attributes Show range attributes object
    • enrich_fields string | array[string] Required
    • indices string | array[string] Required
    • match_field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • name string
    • elasticsearch_version string

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_enrich/policy/{name}
PUT /_enrich/policy/postal_policy
{
  "geo_match": {
    "indices": "postal_codes",
    "match_field": "location",
    "enrich_fields": [ "location", "postal_code" ]
  }
}
resp = client.enrich.put_policy(
    name="postal_policy",
    geo_match={
        "indices": "postal_codes",
        "match_field": "location",
        "enrich_fields": [
            "location",
            "postal_code"
        ]
    },
)
const response = await client.enrich.putPolicy({
  name: "postal_policy",
  geo_match: {
    indices: "postal_codes",
    match_field: "location",
    enrich_fields: ["location", "postal_code"],
  },
});
response = client.enrich.put_policy(
  name: "postal_policy",
  body: {
    "geo_match": {
      "indices": "postal_codes",
      "match_field": "location",
      "enrich_fields": [
        "location",
        "postal_code"
      ]
    }
  }
)
$resp = $client->enrich()->putPolicy([
    "name" => "postal_policy",
    "body" => [
        "geo_match" => [
            "indices" => "postal_codes",
            "match_field" => "location",
            "enrich_fields" => array(
                "location",
                "postal_code",
            ),
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"geo_match":{"indices":"postal_codes","match_field":"location","enrich_fields":["location","postal_code"]}}' "$ELASTICSEARCH_URL/_enrich/policy/postal_policy"
client.enrich().putPolicy(p -> p
    .geoMatch(g -> g
        .enrichFields(List.of("location","postal_code"))
        .indices("postal_codes")
        .matchField("location")
    )
    .name("postal_policy")
);
Request example
An example body for a `PUT /_enrich/policy/postal_policy` request.
{
  "geo_match": {
    "indices": "postal_codes",
    "match_field": "location",
    "enrich_fields": [ "location", "postal_code" ]
  }
}
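
A hedged Python sketch of the match policy type described above, which joins on a term query instead of a geo_shape query. The policy name `users-policy`, the `users` source index, and its fields are hypothetical.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

resp = client.enrich.put_policy(
    name="users-policy",
    match={
        "indices": "users",
        "match_field": "email",
        "enrich_fields": ["first_name", "last_name", "city"],
    },
)
print(resp["acknowledged"])  # true on success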




Run an enrich policy Generally available; Added in 7.5.0

PUT /_enrich/policy/{name}/_execute

Create the enrich index for an existing enrich policy.

Path parameters

  • name string Required

    Enrich policy to execute.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

  • wait_for_completion boolean

    If true, the request blocks other enrich policy execution requests until complete.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • status object
      Hide status attributes Show status attributes object
      • phase string Required

        Values are SCHEDULED, RUNNING, COMPLETE, FAILED, or CANCELLED.

      • step string
    • task string | number

PUT /_enrich/policy/{name}/_execute
PUT /_enrich/policy/my-policy/_execute?wait_for_completion=false
resp = client.enrich.execute_policy(
    name="my-policy",
    wait_for_completion=False,
)
const response = await client.enrich.executePolicy({
  name: "my-policy",
  wait_for_completion: "false",
});
response = client.enrich.execute_policy(
  name: "my-policy",
  wait_for_completion: "false"
)
$resp = $client->enrich()->executePolicy([
    "name" => "my-policy",
    "wait_for_completion" => "false",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/policy/my-policy/_execute?wait_for_completion=false"
client.enrich().executePolicy(e -> e
    .name("my-policy")
    .waitForCompletion(false)
);

Get the async EQL status Generally available; Added in 7.9.0

GET /_eql/search/status/{id}

Get the current status for an async EQL search or a stored synchronous EQL search without returning results.

Path parameters

  • id string Required

    Identifier for the search.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string Required

      Identifier for the search.

    • is_partial boolean Required

      If true, the response does not contain complete search results. This can be because the search is still running (is_running is true), or because it already completed (is_running is false) and the results are partial due to failures or timeouts.

    • is_running boolean Required

      If true, the search request is still executing. If false, the search is completed.

    • start_time_in_millis number

      Time unit for milliseconds

    • expiration_time_in_millis number

      Time unit for milliseconds

    • completion_status number

      For a completed search shows the http status code of the completed search.

GET /_eql/search/status/{id}
GET /_eql/search/status/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=
resp = client.eql.get_status(
    id="FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
)
const response = await client.eql.getStatus({
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
});
response = client.eql.get_status(
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE="
)
$resp = $client->eql()->getStatus([
    "id" => "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_eql/search/status/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE="
client.eql().getStatus(g -> g
    .id("FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=")
);
Response examples (200)
A successful response for getting status information for an async EQL search.
{
  "id": "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
  "is_running" : true,
  "is_partial" : true,
  "start_time_in_millis" : 1611690235000,
  "expiration_time_in_millis" : 1611690295000
}
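
A minimal Python polling sketch (not an official example): call this status API until `is_running` is false, then read `completion_status`. The search ID is reused from the example above and would normally come from an async EQL search response.

from elasticsearch import Elasticsearch
import time

client = Elasticsearch("http://localhost:9200", api_key="...")
search_id = "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE="

while True:
    status = client.eql.get_status(id=search_id)
    if not status["is_running"]:
        break
    time.sleep(2)

# completion_status is the HTTP status code of the finished search.
print(status["completion_status"], status["is_partial"])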

Get EQL search results Generally available; Added in 7.9.0

POST /{index}/_eql/search

All methods and paths for this operation:

GET /{index}/_eql/search

POST /{index}/_eql/search

Returns search results for an Event Query Language (EQL) query. EQL assumes each document in a data stream or index corresponds to an event.

External documentation

Path parameters

  • index string | array[string] Required

    The name of the index to scope the operation

Query parameters

  • allow_no_indices boolean

    Whether to ignore a wildcard indices expression that resolves into no concrete indices. (This includes the _all string or when no indices have been specified.)

  • allow_partial_search_results boolean

    If true, returns partial results if there are shard failures. If false, returns an error with no partial results.

  • allow_partial_sequence_results boolean

    If true, sequence queries will return partial results in case of shard failures. If false, they will return no results at all. This flag has effect only if allow_partial_search_results is true.

  • expand_wildcards string | array[string]

    Whether to expand wildcard expressions to concrete indices that are open, closed, or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ccs_minimize_roundtrips boolean

    Indicates whether network round-trips should be minimized as part of cross-cluster search request execution.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • keep_alive string

    Period for which the search and its results are stored on the cluster.

    Values are -1 or 0.

  • keep_on_completion boolean

    If true, the search and its results are stored on the cluster.

  • wait_for_completion_timeout string

    Timeout duration to wait for the request to finish. Defaults to no timeout, meaning the request waits for complete search results.

    Values are -1 or 0.

application/json

Body Required

  • query string Required

    EQL query you wish to run.

  • case_sensitive boolean
  • event_category_field string

    Field containing the event classification, such as process, file, or network.

  • tiebreaker_field string

    Field used to sort hits with the same timestamp in ascending order

  • timestamp_field string

    Field containing event timestamp. Default "@timestamp"

  • fetch_size number

    Maximum number of events to search at a time for sequence queries.

  • filter object | array[object]

    Query, written in Query DSL, used to filter the events on which the EQL query runs.

    One of:

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • keep_alive string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • keep_on_completion boolean
  • wait_for_completion_timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • allow_partial_search_results boolean

    Allow query execution also in case of shard failures. If true, the query will keep running and will return results based on the available shards. For sequences, the behavior can be further refined using allow_partial_sequence_results

    Default value is true.

  • allow_partial_sequence_results boolean

    This flag applies only to sequences and has effect only if allow_partial_search_results=true. If true, the sequence query will return results based on the available shards, ignoring the others. If false, the sequence query will return successfully, but will always have empty results.

    Default value is false.

  • size number

    For basic queries, the maximum number of matching events to return. Defaults to 10

  • fields object | array[object]

    Array of wildcard (*) patterns. The response returns values for field names matching these patterns in the fields property of each hit.

    One of:

    A reference to a field with formatting instructions on how to return the value

    Hide attributes Show attributes
    • field string Required

      A wildcard pattern. The request returns values for field names matching this pattern.

    • format string

      The format in which the values are returned.

    • include_unmapped boolean
  • result_position string

    Supported values include:

    • tail: Return the most recent matches, similar to the Unix tail command.
    • head: Return the earliest matches, similar to the Unix head command.

    Values are tail or head.

  • runtime_mappings object
    Hide runtime_mappings attribute Show runtime_mappings attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        For type lookup

      • target_field string

        For type lookup

      • target_index string

        For type lookup

      • script object

        Painless script executed at query time.

        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string

          The id for a stored script.

        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string
        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Field type, which can be: boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • max_samples_per_key number

    By default, the response of a sample query contains up to 10 samples, with one sample per unique set of join keys. Use the size parameter to get a smaller or larger set of samples. To retrieve more than one sample per set of join keys, use the max_samples_per_key parameter. Pipes are not supported for sample queries.

    Default value is 1.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string

      Identifier for the search.

    • is_partial boolean

      If true, the response does not contain complete search results.

    • is_running boolean

      If true, the search request is still executing.

    • took number

      Time unit for milliseconds

    • timed_out boolean

      If true, the request timed out before completion.

    • hits object Required

      Contains matching events and sequences. Also contains related metadata.

      Hide hits attributes Show hits attributes object
      • total object

        Metadata about the number of matching events or sequences.

        Hide total attributes Show total attributes object
        • relation string Required

          Supported values include:

          • eq: Accurate
          • gte: Lower bound, including returned events or sequences

          Values are eq or gte.

        • value number Required
      • events array[object]

        Contains events matching the query. Each object represents a matching event.

        Hide events attributes Show events attributes object
        • _index string Required

          Name of the index containing the event.

        • _id string Required

          Unique identifier for the event. This ID is only unique within the index.

        • _source object Required

          Original JSON body passed for the event at index time.

        • missing boolean

          Set to true for events in a timespan-constrained sequence that do not meet a given condition.

        • fields object
          Hide fields attribute Show fields attribute object
          • * array[object] Additional properties
      • sequences array[object]

        Contains event sequences matching the query. Each object represents a matching sequence. This parameter is only returned for EQL queries containing a sequence.

        Hide sequences attributes Show sequences attributes object
        • events array[object] Required

          Contains events matching the query. Each object represents a matching event.

        • join_keys array[object]

          Shared field values used to constrain matches in the sequence. These are defined using the by keyword in the EQL query syntax.

    • shard_failures array[object]

      Contains information about shard failures (if any), in case allow_partial_search_results=true

      Hide shard_failures attributes Show shard_failures attributes object
      • index string
      • node string
      • reason object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide reason attributes Show reason attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • shard number
      • status string
      • primary boolean
POST /{index}/_eql/search
GET /my-data-stream/_eql/search
{
  "query": """
    process where (process.name == "cmd.exe" and process.pid != 2013)
  """
}
resp = client.eql.search(
    index="my-data-stream",
    query="\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  ",
)
const response = await client.eql.search({
  index: "my-data-stream",
  query:
    '\n    process where (process.name == "cmd.exe" and process.pid != 2013)\n  ',
});
response = client.eql.search(
  index: "my-data-stream",
  body: {
    "query": "\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  "
  }
)
$resp = $client->eql()->search([
    "index" => "my-data-stream",
    "body" => [
        "query" => "\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  ",
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":"\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  "}' "$ELASTICSEARCH_URL/my-data-stream/_eql/search"
client.eql().search(s -> s
    .index("my-data-stream")
    .query(" process where (process.name == \"cmd.exe\" and process.pid != 2013) ")
);
Request examples
Run `GET /my-data-stream/_eql/search` to search for events that have a `process.name` of `cmd.exe` and a `process.pid` other than `2013`.
{
  "query": """
    process where (process.name == "cmd.exe" and process.pid != 2013)
  """
}
Run `GET /my-data-stream/_eql/search` to search for a sequence of events. The sequence starts with an event with an `event.category` of `file`, a `file.name` of `cmd.exe`, and a `process.pid` other than `2013`. It is followed by an event with an `event.category` of `process` and a `process.executable` that contains the substring `regsvr32`. These events must also share the same `process.pid` value.
{
  "query": """
    sequence by process.pid
      [ file where file.name == "cmd.exe" and process.pid != 2013 ]
      [ process where stringContains(process.executable, "regsvr32") ]
  """
}
Response examples (200)
{
  "is_partial": false,
  "is_running": false,
  "took": 6,
  "timed_out": false,
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "sequences": [
      {
        "join_keys": [
          2012
        ],
        "events": [
          {
            "_index": ".ds-my-data-stream-2099.12.07-000001",
            "_id": "AtOJ4UjUBAAx3XR5kcCM",
            "_source": {
              "@timestamp": "2099-12-06T11:04:07.000Z",
              "event": {
                "category": "file",
                "id": "dGCHwoeS",
                "sequence": 2
              },
              "file": {
                "accessed": "2099-12-07T11:07:08.000Z",
                "name": "cmd.exe",
                "path": "C:\\Windows\\System32\\cmd.exe",
                "type": "file",
                "size": 16384
              },
              "process": {
                "pid": 2012,
                "name": "cmd.exe",
                "executable": "C:\\Windows\\System32\\cmd.exe"
              }
            }
          },
          {
            "_index": ".ds-my-data-stream-2099.12.07-000001",
            "_id": "OQmfCaduce8zoHT93o4H",
            "_source": {
              "@timestamp": "2099-12-07T11:07:09.000Z",
              "event": {
                "category": "process",
                "id": "aR3NWVOs",
                "sequence": 4
              },
              "process": {
                "pid": 2012,
                "name": "regsvr32.exe",
                "command_line": "regsvr32.exe  /s /u /i:https://...RegSvr32.sct scrobj.dll",
                "executable": "C:\\Windows\\System32\\regsvr32.exe"
              }
            }
          }
        ]
      }
    ]
  }
}
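
A hedged Python sketch that ties this API to the async status API above: submit the EQL search asynchronously, keep the results on the cluster, and fetch them later by ID. It assumes the Python client exposes `keep_on_completion`, `keep_alive`, and `wait_for_completion_timeout` as keyword arguments, mirroring the request parameters.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

resp = client.eql.search(
    index="my-data-stream",
    query='process where process.name == "cmd.exe"',
    wait_for_completion_timeout="2s",  # hand back a search ID if not done in 2s
    keep_on_completion=True,           # store results even if it finishes early
    keep_alive="1d",
)

# Because keep_on_completion is true, an ID is returned either way.
search_id = resp["id"]
results = client.eql.get(id=search_id)
print(results["hits"]["total"])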

Stop async ES|QL query Generally available; Added in 8.18.0

POST /_query/async/{id}/stop

This API interrupts the query execution and returns the results so far. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can stop it.

External documentation

Path parameters

  • id string Required

    The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with the keep_on_completion parameter set to true.

Query parameters

  • drop_null_columns boolean

    Indicates whether columns that are entirely null will be removed from the columns and values portion of the results. If true, the response will include an extra section under the name all_columns, which lists the names of all the columns.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number

      Time unit for milliseconds

    • is_partial boolean
    • all_columns array[object]
      Hide all_columns attributes Show all_columns attributes object
      • name string Required
      • type string Required
    • columns array[object] Required
      Hide columns attributes Show columns attributes object
      • name string Required
      • type string Required
    • values array[array] Required
    • _clusters object

      Cross-cluster search information. Present if include_ccs_metadata was true in the request and a cross-cluster search was performed.

      Hide _clusters attributes Show _clusters attributes object
      • total number Required
      • successful number Required
      • running number Required
      • skipped number Required
      • partial number Required
      • failed number Required
      • details object Required
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • _shards object
    • profile object

      Profiling information. Present if profile was true in the request. The contents of this field are currently unstable.

POST /_query/async/{id}/stop
POST /_query/async/FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=/stop
resp = client.esql.async_query_stop(
    id="FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=",
)
const response = await client.esql.asyncQueryStop({
  id: "FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=",
});
response = client.esql.async_query_stop(
  id: "FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM="
)
$resp = $client->esql()->asyncQueryStop([
    "id" => "FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_query/async/FkpMRkJGS1gzVDRlM3g4ZzMyRGlLbkEaTXlJZHdNT09TU2VTZVBoNDM3cFZMUToxMDM=/stop"
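
A hedged Python sketch of the stop workflow (hypothetical index and query; it assumes the client exposes `esql.async_query` alongside the `esql.async_query_stop` call shown above): start a long-running ES|QL query without waiting, then stop it and use the partial results.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

submit = client.esql.async_query(
    query="FROM my-index | STATS events = COUNT(*) BY host.name",
    wait_for_completion_timeout="0s",  # do not wait; always get a query ID back
)
query_id = submit["id"]

# Interrupt the query and take whatever has been computed so far.
partial = client.esql.async_query_stop(id=query_id, drop_null_columns=True)
print(partial["is_partial"], partial["columns"])
print(partial["values"][:5])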




Features

The feature APIs enable you to introspect and manage features provided by Elasticsearch and Elasticsearch plugins.

Get the features Generally available; Added in 7.12.0

GET /_features

Get a list of features that can be included in snapshots using the feature_states field when creating a snapshot. You can use this API to determine which feature states to include when taking a snapshot. By default, all feature states are included in a snapshot if that snapshot includes the global state, or none if it does not.

A feature state includes one or more system indices necessary for a given feature to function. In order to ensure data integrity, all system indices that comprise a feature state are snapshotted and restored together.

The features listed by this API are a combination of built-in features and features defined by plugins. In order for a feature state to be listed in this API and recognized as a valid feature state by the create snapshot API, the plugin that defines that feature must be installed on the master node.

External documentation

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • features array[object] Required
      Hide features attributes Show features attributes object
      • name string Required
      • description string Required
GET /_features
GET _features
resp = client.features.get_features()
const response = await client.features.getFeatures();
response = client.features.get_features
$resp = $client->features()->getFeatures();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_features"
client.features().getFeatures(g -> g);
Response examples (200)
A successful response for retrieving a list of feature states that can be included when taking a snapshot.
{
  "features": [
    {
      "name": "tasks",
      "description": "Manages task results"
    },
    {
      "name": "kibana",
      "description": "Manages Kibana configuration and reports"
    }
  ]
}
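
A hedged Python sketch showing how these feature names are typically used with the create snapshot API's feature_states field. The repository name `my_repo` and the snapshot name are assumptions, and the client keyword is assumed to mirror the request body.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

features = client.features.get_features()
names = [f["name"] for f in features["features"]]  # e.g. ["tasks", "kibana"]

# Without the global state, no feature states are included by default,
# so list the ones you want explicitly.
client.snapshot.create(
    repository="my_repo",
    snapshot="snapshot-with-feature-states",
    include_global_state=False,
    feature_states=names,
)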

Explore graph analytics Generally available

POST /{index}/_graph/explore

All methods and paths for this operation:

GET /{index}/_graph/explore

POST /{index}/_graph/explore

Extract and summarize information about the documents and terms in an Elasticsearch data stream or index. The easiest way to understand the behavior of this API is to use the Graph UI to explore connections. An initial request to the _explore API contains a seed query that identifies the documents of interest and specifies the fields that define the vertices and connections you want to include in the graph. Subsequent requests enable you to spider out from one or more vertices of interest. You can exclude vertices that have already been returned.

External documentation

Path parameters

  • index string | array[string] Required

    Name of the index.

Query parameters

  • routing string

    Custom value used to route operations to a specific shard.

  • timeout string

    Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.

    Values are -1 or 0.

application/json

Body

  • connections object

    Specifies one or more fields from which you want to extract terms that are associated with the specified vertices.

    Hide connections attributes Show connections attributes object
    • connections object

      Specifies one or more fields from which you want to extract terms that are associated with the specified vertices.

    • query object Required

      An optional guiding query that constrains the Graph API as it explores connected terms.

      External documentation
    • vertices array[object] Required

      Contains the fields you are interested in.

      Hide vertices attributes Show vertices attributes object
      • exclude array[string]

        Prevents the specified terms from being included in the results.

      • field string Required

        Identifies a field in the documents of interest.

      • include array[object]

        Identifies the terms of interest that form the starting points from which you want to spider out.

        Hide include attributes Show include attributes object
        • boost number Required
        • term string Required
      • min_doc_count number

        Specifies how many documents must contain a pair of terms before it is considered to be a useful connection. This setting acts as a certainty threshold.

        Default value is 3.

      • shard_min_doc_count number

        Controls how many documents on a particular shard have to contain a pair of terms before the connection is returned for global consideration.

        Default value is 2.

      • size number

        Specifies the maximum number of vertex terms returned for each field.

        Default value is 5.

  • controls object

    Direct the Graph API how to build the graph.

    Hide controls attributes Show controls attributes object
    • sample_diversity object

      To avoid the top-matching documents sample being dominated by a single source of results, it is sometimes necessary to request diversity in the sample. You can do this by selecting a single-value field and setting a maximum number of documents per value for that field.

      Hide sample_diversity attributes Show sample_diversity attributes object
      • field string Required

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • max_docs_per_value number Required
    • sample_size number

      Each hop considers a sample of the best-matching documents on each shard. Using samples improves the speed of execution and keeps exploration focused on meaningfully-connected terms. Very small values (less than 50) might not provide sufficient weight-of-evidence to identify significant connections between terms. Very large sample sizes can dilute the quality of the results and increase execution times.

      Default value is 100.

    • timeout string

      The length of time in milliseconds after which exploration will be halted and the results gathered so far are returned. This timeout is honored on a best-effort basis. Execution might overrun this timeout if, for example, a long pause is encountered while FieldData is loaded for a field.

    • use_significance boolean Required

      Filters associated terms so only those that are significantly associated with your query are included.

  • query object

    A seed query that identifies the documents of interest. Can be any valid Elasticsearch query.

    External documentation
  • vertices array[object]

    Specifies one or more fields that contain the terms you want to include in the graph as vertices.

    Hide vertices attributes Show vertices attributes object
    • exclude array[string]

      Prevents the specified terms from being included in the results.

    • field string Required

      Identifies a field in the documents of interest.

    • include array[object]

      Identifies the terms of interest that form the starting points from which you want to spider out.

      Hide include attributes Show include attributes object
      • boost number Required
      • term string Required
    • min_doc_count number

      Specifies how many documents must contain a pair of terms before it is considered to be a useful connection. This setting acts as a certainty threshold.

      Default value is 3.

    • shard_min_doc_count number

      Controls how many documents on a particular shard have to contain a pair of terms before the connection is returned for global consideration.

      Default value is 2.

    • size number

      Specifies the maximum number of vertex terms returned for each field.

      Default value is 5.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • connections array[object] Required
      Hide connections attributes Show connections attributes object
      • doc_count number Required
      • source number Required
      • target number Required
      • weight number Required
    • failures array[object] Required
      Hide failures attributes Show failures attributes object
      • index string
      • node string
      • reason object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide reason attributes Show reason attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • shard number
      • status string
      • primary boolean
    • timed_out boolean Required
    • took number Required
    • vertices array[object] Required
      Hide vertices attributes Show vertices attributes object
      • depth number Required
      • field string Required

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • term string Required
      • weight number Required
POST /{index}/_graph/explore
POST clicklogs/_graph/explore
{
  "query": {
    "match": {
      "query.raw": "midi"
    }
  },
  "vertices": [
    {
      "field": "product"
    }
  ],
  "connections": {
    "vertices": [
      {
        "field": "query.raw"
      }
    ]
  }
}
resp = client.graph.explore(
    index="clicklogs",
    query={
        "match": {
            "query.raw": "midi"
        }
    },
    vertices=[
        {
            "field": "product"
        }
    ],
    connections={
        "vertices": [
            {
                "field": "query.raw"
            }
        ]
    },
)
const response = await client.graph.explore({
  index: "clicklogs",
  query: {
    match: {
      "query.raw": "midi",
    },
  },
  vertices: [
    {
      field: "product",
    },
  ],
  connections: {
    vertices: [
      {
        field: "query.raw",
      },
    ],
  },
});
response = client.graph.explore(
  index: "clicklogs",
  body: {
    "query": {
      "match": {
        "query.raw": "midi"
      }
    },
    "vertices": [
      {
        "field": "product"
      }
    ],
    "connections": {
      "vertices": [
        {
          "field": "query.raw"
        }
      ]
    }
  }
)
$resp = $client->graph()->explore([
    "index" => "clicklogs",
    "body" => [
        "query" => [
            "match" => [
                "query.raw" => "midi",
            ],
        ],
        "vertices" => array(
            [
                "field" => "product",
            ],
        ),
        "connections" => [
            "vertices" => array(
                [
                    "field" => "query.raw",
                ],
            ),
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":{"match":{"query.raw":"midi"}},"vertices":[{"field":"product"}],"connections":{"vertices":[{"field":"query.raw"}]}}' "$ELASTICSEARCH_URL/clicklogs/_graph/explore"
client.graph().explore(e -> e
    .connections(c -> c
        .vertices(v -> v
            .field("query.raw")
        )
    )
    .index("clicklogs")
    .query(q -> q
        .match(m -> m
            .field("query.raw")
            .query(FieldValue.of("midi"))
        )
    )
    .vertices(v -> v
        .field("product")
    )
);
Request example
Run `POST clicklogs/_graph/explore` for a basic exploration. An initial graph explore query typically begins with a query to identify strongly related terms. Seed the exploration with a query: this example searches `clicklogs` for people who searched for the term `midi`. Identify the vertices to include in the graph: this example looks for product codes that are significantly associated with searches for `midi`. Find the connections: this example looks for other search terms that led people to click on the products that are associated with searches for `midi`.
{
  "query": {
    "match": {
      "query.raw": "midi"
    }
  },
  "vertices": [
    {
      "field": "product"
    }
  ],
  "connections": {
    "vertices": [
      {
        "field": "query.raw"
      }
    ]
  }
}
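
A hedged Python sketch of the "spider out" step mentioned in the description: a second explore request that seeds from a product vertex returned by the first response and excludes query terms that were already returned. The vertex term and the excluded terms are hypothetical values.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

resp = client.graph.explore(
    index="clicklogs",
    vertices=[
        {
            "field": "product",
            # Start from a vertex of interest returned by the previous request.
            "include": [{"term": "1854873", "boost": 1.0}],
        }
    ],
    connections={
        "vertices": [
            {
                "field": "query.raw",
                # Skip terms the previous response already contained.
                "exclude": ["midi keyboard", "midi", "synth"],
            }
        ]
    },
)
print(len(resp["vertices"]), len(resp["connections"]))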

Add an index block Generally available; Added in 7.9.0

PUT /{index}/_block/{block}

Add an index block to an index. Index blocks limit the operations allowed on an index by blocking specific operation types.

Path parameters

  • index string Required

    A comma-separated list or wildcard expression of index names used to limit the request. By default, you must explicitly name the indices you are adding blocks to. To allow the adding of blocks to indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. You can update this setting in the elasticsearch.yml file or by using the cluster update settings API.

  • block string Required

    The block type to add to the index.

    Supported values include:

    • metadata: Disable metadata changes, such as closing the index.
    • read: Disable read operations.
    • read_only: Disable write operations and metadata changes.
    • write: Disable write operations. However, metadata changes are still allowed.

    Values are metadata, read, read_only, or write.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.

    Values are -1 or 0.

  • timeout string

    The period to wait for a response from all relevant nodes in the cluster after updating the cluster metadata. If no response is received before the timeout expires, the cluster metadata update still applies but the response will indicate that it was not completely acknowledged. It can also be set to -1 to indicate that the request should never timeout.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • shards_acknowledged boolean Required
    • indices array[object] Required
      Hide indices attributes Show indices attributes object
      • name string Required
      • blocked boolean Required
PUT /{index}/_block/{block}
PUT /my-index-000001/_block/write
resp = client.indices.add_block(
    index="my-index-000001",
    block="write",
)
const response = await client.indices.addBlock({
  index: "my-index-000001",
  block: "write",
});
response = client.indices.add_block(
  index: "my-index-000001",
  block: "write"
)
$resp = $client->indices()->addBlock([
    "index" => "my-index-000001",
    "block" => "write",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_block/write"
client.indices().addBlock(a -> a
    .block(IndicesBlockOptions.Write)
    .index("my-index-000001")
);
Response examples (200)
A successful response from `PUT /my-index-000001/_block/write`, which adds an index block to an index.
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "indices" : [ {
    "name" : "my-index-000001",
    "blocked" : true
  } ]
}
























Delete indices Generally available

DELETE /{index}

Deleting an index deletes its documents, shards, and metadata. It does not delete related Kibana components, such as data views, visualizations, or dashboards.

You cannot delete the current write index of a data stream. To delete the index, you must roll over the data stream so a new write index is created. You can then use the delete index API to delete the previous write index.
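As a minimal sketch of that sequence, assuming a hypothetical data stream named my-data-stream whose previous write index is .ds-my-data-stream-2099.03.07-000001, you might first roll over the stream and then delete the old backing index:

# Roll over the data stream so a new write index is created
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-data-stream/_rollover"

# Delete the previous write index now that it is no longer the write index
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/.ds-my-data-stream-2099.03.07-000001"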

Required authorization

  • Index privileges: delete_index

Path parameters

  • index string | array[string] Required

    Comma-separated list of indices to delete. You cannot specify index aliases. By default, this parameter does not support wildcards (*) or _all. To use wildcards or _all, set the action.destructive_requires_name cluster setting to false.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • _shards object
      Hide _shards attributes Show _shards attributes object
      • failed number Required

        The number of shards the operation or search attempted to run on but failed.

      • successful number Required

        The number of shards the operation or search succeeded on.

      • total number Required

        The number of shards the operation or search will run on overall.

      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index
        • node string
        • reason
        • shard number
        • status string
        • primary boolean
      • skipped number
DELETE /{index}
DELETE /books
resp = client.indices.delete(
    index="books",
)
const response = await client.indices.delete({
  index: "books",
});
response = client.indices.delete(
  index: "books"
)
$resp = $client->indices()->delete([
    "index" => "books",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/books"
client.indices().delete(d -> d
    .index("books")
);




















Create or update an index template Generally available; Added in 7.9.0

POST /_index_template/{name}

All methods and paths for this operation:

PUT /_index_template/{name}

POST /_index_template/{name}

Index templates define settings, mappings, and aliases that can be applied automatically to new indices.

Elasticsearch applies templates to new indices based on a wildcard pattern that matches the index name. Index templates are applied during data stream or index creation. For data streams, these settings and mappings are applied when the stream's backing indices are created. Settings and mappings specified in a create index API request override any settings or mappings specified in an index template. Changes to index templates do not affect existing indices, including the existing backing indices of a data stream.

You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.
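
For illustration, a hypothetical request (using the made-up template name template_with_comment) with a block comment in the body might look like the following; the comment appears after the opening curly bracket, as required:

curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '
{
  /* This template exists only to demonstrate block comments. */
  "index_patterns": ["comment-demo-*"],
  "template": {
    "settings": { "number_of_shards": 1 }
  }
}' "$ELASTICSEARCH_URL/_index_template/template_with_comment"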

Multiple matching templates

If multiple index templates match the name of a new index or data stream, the template with the highest priority is used.

Multiple templates with overlapping index patterns at the same priority are not allowed; an error is thrown if you attempt to create a template whose index patterns overlap those of an existing template at the same priority.
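
As a sketch of priority resolution, assuming two hypothetical templates named logs-low-priority and logs-high-priority, a new index such as logs-2099-01 would receive two shards from the higher-priority template; attempting to create a third template for logs-* at priority 200 would be rejected:

# Lower-priority template for logs-* indices
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '
{
  "index_patterns": ["logs-*"],
  "priority": 100,
  "template": { "settings": { "number_of_shards": 1 } }
}' "$ELASTICSEARCH_URL/_index_template/logs-low-priority"

# Higher-priority template with an overlapping pattern; this one is applied to new logs-* indices
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '
{
  "index_patterns": ["logs-*"],
  "priority": 200,
  "template": { "settings": { "number_of_shards": 2 } }
}' "$ELASTICSEARCH_URL/_index_template/logs-high-priority"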

Composing aliases, mappings, and settings

When multiple component templates are specified in the composed_of field for an index template, they are merged in the order specified, meaning that later component templates override earlier component templates. Any mappings, settings, or aliases from the parent index template are merged in next. Finally, any configuration on the index request itself is merged. Mapping definitions are merged recursively, which means that later mapping components can introduce new field mappings and update the mapping configuration. If a field mapping is already contained in an earlier component, its definition will be completely overwritten by the later one. This recursive merging strategy applies not only to field mappings, but also root options like dynamic_templates and meta. If an earlier component contains a dynamic_templates block, then by default new dynamic_templates entries are appended onto the end. If an entry already exists with the same key, then it is overwritten by the new definition.
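
As a sketch of the merge order described above, assuming two hypothetical component templates named ct_first and ct_second, the later entry in composed_of overrides any overlapping settings from the earlier one, and the index template's own template block is merged last:

# Component template merged first
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '
{
  "template": { "settings": { "index.number_of_replicas": 2 } }
}' "$ELASTICSEARCH_URL/_component_template/ct_first"

# Component template merged second; its replica setting overrides ct_first
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '
{
  "template": { "settings": { "index.number_of_replicas": 1 } }
}' "$ELASTICSEARCH_URL/_component_template/ct_second"

# Index template composed of both; its own settings are merged last
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '
{
  "index_patterns": ["merged-*"],
  "priority": 10,
  "composed_of": ["ct_first", "ct_second"],
  "template": { "settings": { "index.number_of_shards": 1 } }
}' "$ELASTICSEARCH_URL/_index_template/merged-template"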

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    Index or template name

Query parameters

  • create boolean

    If true, this request cannot replace or update existing index templates.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • cause string

User-defined reason for creating or updating the index template.

application/json

Body Required

  • index_patterns string | array[string]

Array of wildcard (*) expressions used to match the names of data streams and indices during creation.

  • composed_of array[string]

    An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence.

  • template object

    Template to be applied. It may optionally include an aliases, mappings, or settings configuration.

    Hide template attributes Show template attributes object
    • aliases object

      Aliases to add. If the index template includes a data_stream object, these are data stream aliases. Otherwise, these are index aliases. Data stream aliases ignore the index_routing, routing, and search_routing options.

      Hide aliases attribute Show aliases attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • filter object

          Query used to limit documents the alias can access.

          External documentation
        • index_routing string

          Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

        • is_hidden boolean

          If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

          Default value is false.

        • is_write_index boolean

          If true, the index is the write index for the alias.

          Default value is false.

        • routing string

          Value used to route indexing and search operations to a specific shard.

        • search_routing string

          Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

    • mappings object

      Mapping for fields in the index. If specified, this mapping can include field names, field data types, and mapping parameters.

      Hide mappings attributes Show mappings attributes object
      • all_field object
        Hide all_field attributes Show all_field attributes object
        • analyzer string Required
        • enabled boolean Required
        • omit_norms boolean Required
        • search_analyzer string Required
        • similarity string Required
        • store boolean Required
        • store_term_vector_offsets boolean Required
        • store_term_vector_payloads boolean Required
        • store_term_vector_positions boolean Required
        • store_term_vectors boolean Required
      • date_detection boolean
      • dynamic string

        Values are strict, runtime, true, or false.

      • dynamic_date_formats array[string]
      • dynamic_templates array[object]
      • _field_names object
        Hide _field_names attribute Show _field_names attribute object
        • enabled boolean Required
      • index_field object
        Hide index_field attribute Show index_field attribute object
        • enabled boolean Required
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • numeric_detection boolean
      • properties object
      • _routing object
        Hide _routing attribute Show _routing attribute object
        • required boolean Required
      • _size object
        Hide _size attribute Show _size attribute object
        • enabled boolean Required
      • _source object
        Hide _source attributes Show _source attributes object
        • compress boolean
        • compress_threshold string
        • enabled boolean
        • excludes array[string]
        • includes array[string]
      • runtime object
        Hide runtime attribute Show runtime attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

          • fetch_fields array[object]

            For type lookup

          • format string

            A custom format for date type runtime fields.

      • enabled boolean
      • subobjects string

        Values are true or false.

      • _data_stream_timestamp object
        Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
        • enabled boolean Required
    • settings object

      Configuration options for the index.

      Index settings
    • lifecycle object

      Data stream lifecycle denotes that a data stream is managed by the data stream lifecycle and contains the configuration.

      Hide lifecycle attributes Show lifecycle attributes object
      • data_retention string

        If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

      • downsampling object

        The downsampling configuration to execute for the managed backing index after rollover.

        Hide downsampling attribute Show downsampling attribute object
        • rounds array[object] Required

          The list of downsampling rounds to execute as part of this downsampling configuration

      • enabled boolean

        If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

        Default value is true.

  • data_stream object

    If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with a data_stream object.

    Hide data_stream attributes Show data_stream attributes object
    • hidden boolean
    • allow_custom_routing boolean
  • priority number

    Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch.

  • version number

    Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch. External systems can use these version numbers to simplify template management. To unset a version, replace the template without specifying one.

  • _meta object

Optional user metadata about the index template. It may have any contents. It is not automatically generated or used by Elasticsearch. This user-defined object is stored in the cluster state, so keeping it short is preferable. To unset the metadata, replace the template without specifying it.

    Hide _meta attribute Show _meta attribute object
    • * object Additional properties
  • allow_auto_create boolean

    This setting overrides the value of the action.auto_create_index cluster setting. If set to true in a template, then indices can be automatically created using that template even if auto-creation of indices is disabled via actions.auto_create_index. If set to false, then indices or data streams matching the template must always be explicitly created, and may never be automatically created.

  • ignore_missing_component_templates array[string]

The configuration option ignore_missing_component_templates can be used when an index template references a component template that might not exist.

  • deprecated boolean

    Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_index_template/{name}
PUT /_index_template/template_1
{
  "index_patterns" : ["template*"],
  "priority" : 1,
  "template": {
    "settings" : {
      "number_of_shards" : 2
    }
  }
}
resp = client.indices.put_index_template(
    name="template_1",
    index_patterns=[
        "template*"
    ],
    priority=1,
    template={
        "settings": {
            "number_of_shards": 2
        }
    },
)
const response = await client.indices.putIndexTemplate({
  name: "template_1",
  index_patterns: ["template*"],
  priority: 1,
  template: {
    settings: {
      number_of_shards: 2,
    },
  },
});
response = client.indices.put_index_template(
  name: "template_1",
  body: {
    "index_patterns": [
      "template*"
    ],
    "priority": 1,
    "template": {
      "settings": {
        "number_of_shards": 2
      }
    }
  }
)
$resp = $client->indices()->putIndexTemplate([
    "name" => "template_1",
    "body" => [
        "index_patterns" => array(
            "template*",
        ),
        "priority" => 1,
        "template" => [
            "settings" => [
                "number_of_shards" => 2,
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index_patterns":["template*"],"priority":1,"template":{"settings":{"number_of_shards":2}}}' "$ELASTICSEARCH_URL/_index_template/template_1"
client.indices().putIndexTemplate(p -> p
    .indexPatterns("template*")
    .name("template_1")
    .priority(1L)
    .template(t -> t
        .settings(s -> s
            .numberOfShards("2")
        )
    )
);
Request examples
{
  "index_patterns" : ["template*"],
  "priority" : 1,
  "template": {
    "settings" : {
      "number_of_shards" : 2
    }
  }
}
You can include index aliases in an index template. During index creation, the `{index}` placeholder in the alias name will be replaced with the actual index name that the template gets applied to.
{
  "index_patterns": [
    "template*"
  ],
  "template": {
    "settings": {
      "number_of_shards": 1
    },
    "aliases": {
      "alias1": {},
      "alias2": {
        "filter": {
          "term": {
            "user.id": "kimchy"
          }
        },
        "routing": "shard-1"
      },
      "{index}-alias": {}
    }
  }
}

Delete an index template Generally available; Added in 7.8.0

DELETE /_index_template/{name}

The provided name may contain multiple template names separated by a comma. If multiple template names are specified, wildcards are not supported and the provided names must exactly match existing templates.
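
For example, a hypothetical request that deletes two templates in one call (no wildcards; the names must match exactly) might look like this, assuming templates named my-index-template and my-other-template exist:

curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_index_template/my-index-template,my-other-template"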

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string | array[string] Required

    Comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_index_template/{name}
DELETE /_index_template/my-index-template
resp = client.indices.delete_index_template(
    name="my-index-template",
)
const response = await client.indices.deleteIndexTemplate({
  name: "my-index-template",
});
response = client.indices.delete_index_template(
  name: "my-index-template"
)
$resp = $client->indices()->deleteIndexTemplate([
    "name" => "my-index-template",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_index_template/my-index-template"
client.indices().deleteIndexTemplate(d -> d
    .name("my-index-template")
);
















Check existence of index templates Generally available

HEAD /_template/{name}

Get information about whether index templates exist. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.

IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

Required authorization

  • Cluster privileges: manage_index_templates
External documentation

Path parameters

  • name string | array[string] Required

    A comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported.

Query parameters

  • flat_settings boolean

    Indicates whether to use a flat format for the response.

  • local boolean

    Indicates whether to get information from the local node only.

  • master_timeout string

    The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1.

    Values are -1 or 0.

Responses

  • 200 application/json
HEAD /_template/{name}
HEAD /_template/template_1
resp = client.indices.exists_template(
    name="template_1",
)
const response = await client.indices.existsTemplate({
  name: "template_1",
});
response = client.indices.exists_template(
  name: "template_1"
)
$resp = $client->indices()->existsTemplate([
    "name" => "template_1",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_template/template_1"
client.indices().existsTemplate(e -> e
    .name("template_1")
);








































Open a closed index Generally available

POST /{index}/_open

For data streams, the API opens any closed backing indices.

A closed index is blocked for read/write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. This allows closed indices to not have to maintain internal data structures for indexing or searching documents, resulting in a smaller overhead on the cluster.

When opening or closing an index, the master is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened or closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.

You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behavior can be turned off by using the ignore_unavailable=true parameter.

By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.

Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.
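
As a minimal sketch of the cluster update settings calls mentioned above, the following requests would disable index closing cluster-wide and relax the action.destructive_requires_name safeguard; both are applied as persistent settings here:

# Disable closing indices cluster-wide
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '
{
  "persistent": { "cluster.indices.close.enable": false }
}' "$ELASTICSEARCH_URL/_cluster/settings"

# Allow _all and wildcard expressions when opening or closing indices
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '
{
  "persistent": { "action.destructive_requires_name": false }
}' "$ELASTICSEARCH_URL/_cluster/settings"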

Because opening or closing an index allocates its shards, the wait_for_active_shards setting on index creation applies to the _open and _close index actions as well.

Required authorization

  • Index privileges: manage

Path parameters

  • index string | array[string] Required

Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). By default, you must explicitly name the indices you are using to limit the request. To limit a request using _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. You can update this setting in the elasticsearch.yml file or by using the cluster update settings API.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • shards_acknowledged boolean Required
POST /{index}/_open
POST /.ds-my-data-stream-2099.03.07-000001/_open/
resp = client.indices.open(
    index=".ds-my-data-stream-2099.03.07-000001",
)
const response = await client.indices.open({
  index: ".ds-my-data-stream-2099.03.07-000001",
});
response = client.indices.open(
  index: ".ds-my-data-stream-2099.03.07-000001"
)
$resp = $client->indices()->open([
    "index" => ".ds-my-data-stream-2099.03.07-000001",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/.ds-my-data-stream-2099.03.07-000001/_open/"
client.indices().open(o -> o
    .index(".ds-my-data-stream-2099.03.07-000001")
);
Response examples (200)
A successful response for opening an index.
{
  "acknowledged" : true,
  "shards_acknowledged" : true
}








































Simulate an index Generally available; Added in 7.9.0

POST /_index_template/_simulate_index/{name}

Get the index configuration that would be applied to the specified index from an existing index template.

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    Name of the index to simulate

Query parameters

  • create boolean

Whether the index template optionally defined in the request body should only be dry-run added if it is new, or whether it may also replace an existing template.

  • cause string

User-defined reason for dry-run creating the new template for simulation purposes.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • include_defaults boolean Generally available; Added in 8.11.0

    If true, returns all relevant default configurations for the index template.

application/json

Body

  • index_patterns string | array[string] Required

Array of wildcard (*) expressions used to match the names of data streams and indices during creation.

  • composed_of array[string] Required

    An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence.

  • template object

    Template to be applied. It may optionally include an aliases, mappings, or settings configuration.

    Hide template attributes Show template attributes object
    • aliases object

      Aliases to add. If the index template includes a data_stream object, these are data stream aliases. Otherwise, these are index aliases. Data stream aliases ignore the index_routing, routing, and search_routing options.

      Hide aliases attribute Show aliases attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • filter object

          Query used to limit documents the alias can access.

          External documentation
        • index_routing string

          Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

        • is_hidden boolean

          If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

          Default value is false.

        • is_write_index boolean

          If true, the index is the write index for the alias.

          Default value is false.

        • routing string

          Value used to route indexing and search operations to a specific shard.

        • search_routing string

          Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

    • mappings object

      Mapping for fields in the index. If specified, this mapping can include field names, field data types, and mapping parameters.

      Hide mappings attributes Show mappings attributes object
      • all_field object
        Hide all_field attributes Show all_field attributes object
        • analyzer string Required
        • enabled boolean Required
        • omit_norms boolean Required
        • search_analyzer string Required
        • similarity string Required
        • store boolean Required
        • store_term_vector_offsets boolean Required
        • store_term_vector_payloads boolean Required
        • store_term_vector_positions boolean Required
        • store_term_vectors boolean Required
      • date_detection boolean
      • dynamic string

        Values are strict, runtime, true, or false.

      • dynamic_date_formats array[string]
      • dynamic_templates array[object]
      • _field_names object
        Hide _field_names attribute Show _field_names attribute object
        • enabled boolean Required
      • index_field object
        Hide index_field attribute Show index_field attribute object
        • enabled boolean Required
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • numeric_detection boolean
      • properties object
      • _routing object
        Hide _routing attribute Show _routing attribute object
        • required boolean Required
      • _size object
        Hide _size attribute Show _size attribute object
        • enabled boolean Required
      • _source object
        Hide _source attributes Show _source attributes object
        • compress boolean
        • compress_threshold string
        • enabled boolean
        • excludes array[string]
        • includes array[string]
      • runtime object
        Hide runtime attribute Show runtime attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

          • fetch_fields array[object]

            For type lookup

          • format string

            A custom format for date type runtime fields.

      • enabled boolean
      • subobjects string

        Values are true or false.

      • _data_stream_timestamp object
        Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
        • enabled boolean Required
    • settings object

      Configuration options for the index.

      Index settings
    • lifecycle object

      Data stream lifecycle with rollover can be used to display the configuration including the default rollover conditions, if asked.

      Hide lifecycle attributes Show lifecycle attributes object
      • data_retention string

        If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

      • downsampling object

        The downsampling configuration to execute for the managed backing index after rollover.

      • enabled boolean

        If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

        Default value is true.

      • rollover object

        The conditions which will trigger the rollover of a backing index as configured by the cluster setting cluster.lifecycle.default.rollover. This property is an implementation detail and it will only be retrieved when the query param include_defaults is set to true. The contents of this field are subject to change.

  • version number

    Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch.

  • priority number

    Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch.

  • _meta object

    Optional user metadata about the index template. May have any contents. This map is not automatically generated by Elasticsearch.

    Hide _meta attribute Show _meta attribute object
    • * object Additional properties
  • allow_auto_create boolean
  • data_stream object

    If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with a data_stream object.

    Hide data_stream attributes Show data_stream attributes object
    • hidden boolean

      If true, the data stream is hidden.

      Default value is false.

    • allow_custom_routing boolean

      If true, the data stream supports custom routing.

      Default value is false.

  • deprecated boolean Generally available; Added in 8.12.0

    Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.

  • ignore_missing_component_templates string | array[string]

    A list of component template names that are allowed to be absent.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • overlapping array[object]
      Hide overlapping attributes Show overlapping attributes object
      • name string Required
      • index_patterns array[string] Required
    • template object Required
      Hide template attributes Show template attributes object
      • aliases object Required
        Hide aliases attribute Show aliases attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • filter object

            Query used to limit documents the alias can access.

          • index_routing string

            Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

          • is_hidden boolean

            If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

            Default value is false.

          • is_write_index boolean

            If true, the index is the write index for the alias.

            Default value is false.

          • routing string

            Value used to route indexing and search operations to a specific shard.

          • search_routing string

            Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

      • mappings object Required
        Hide mappings attributes Show mappings attributes object
        • all_field object
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
        • index_field object
        • _meta object
        • numeric_detection boolean
        • properties object
        • _routing object
        • _size object
        • _source object
        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
      • settings object Required
        Index settings
POST /_index_template/_simulate_index/{name}
POST /_index_template/_simulate_index/my-index-000001
resp = client.indices.simulate_index_template(
    name="my-index-000001",
    index_template=None,
)
const response = await client.indices.simulateIndexTemplate({
  name: "my-index-000001",
  index_template: null,
});
response = client.indices.simulate_index_template(
  name: "my-index-000001"
)
$resp = $client->indices()->simulateIndexTemplate([
    "name" => "my-index-000001",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_index_template/_simulate_index/my-index-000001"
client.indices().simulateIndexTemplate(s -> s
    .name("my-index-000001")
);
Response examples (200)
A successful response from `POST /_index_template/_simulate_index/my-index-000001`.
{
  "template" : {
    "settings" : {
      "index" : {
        "number_of_shards" : "2",
        "number_of_replicas" : "0",
        "routing" : {
          "allocation" : {
            "include" : {
              "_tier_preference" : "data_content"
            }
          }
        }
      }
    },
    "mappings" : {
      "properties" : {
        "@timestamp" : {
          "type" : "date"
        }
      }
    },
    "aliases" : { }
  },
  "overlapping" : [
    {
      "name" : "template_1",
      "index_patterns" : [
        "my-index-*"
      ]
    }
  ]
}

Simulate an index template Generally available

POST /_index_template/_simulate/{name}

All methods and paths for this operation:

POST /_index_template/_simulate

POST /_index_template/_simulate/{name}

Get the index configuration that would be applied by a particular index template.

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    Name of the index template to simulate. To test a template configuration before you add it to the cluster, omit this parameter and specify the template configuration in the request body.

Query parameters

  • create boolean

    If true, the template passed in the body is only used if no existing templates match the same index patterns. If false, the simulation uses the template with the highest priority. Note that the template is not permanently added or updated in either case; it is only used for the simulation.

  • cause string

User-defined reason for dry-run creating the new template for simulation purposes.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • include_defaults boolean Generally available; Added in 8.11.0

    If true, returns all relevant default configurations for the index template.

application/json

Body

  • allow_auto_create boolean

    This setting overrides the value of the action.auto_create_index cluster setting. If set to true in a template, then indices can be automatically created using that template even if auto-creation of indices is disabled via actions.auto_create_index. If set to false, then indices or data streams matching the template must always be explicitly created, and may never be automatically created.

  • index_patterns string | array[string]

    Array of wildcard (*) expressions used to match the names of data streams and indices during creation.

  • composed_of array[string]

    An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence.

  • template object

    Template to be applied. It may optionally include an aliases, mappings, or settings configuration.

    Hide template attributes Show template attributes object
    • aliases object

      Aliases to add. If the index template includes a data_stream object, these are data stream aliases. Otherwise, these are index aliases. Data stream aliases ignore the index_routing, routing, and search_routing options.

      Hide aliases attribute Show aliases attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • filter object

          Query used to limit documents the alias can access.

          External documentation
        • index_routing string

          Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

        • is_hidden boolean

          If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

          Default value is false.

        • is_write_index boolean

          If true, the index is the write index for the alias.

          Default value is false.

        • routing string

          Value used to route indexing and search operations to a specific shard.

        • search_routing string

          Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

    • mappings object

      Mapping for fields in the index. If specified, this mapping can include field names, field data types, and mapping parameters.

      Hide mappings attributes Show mappings attributes object
      • all_field object
        Hide all_field attributes Show all_field attributes object
        • analyzer string Required
        • enabled boolean Required
        • omit_norms boolean Required
        • search_analyzer string Required
        • similarity string Required
        • store boolean Required
        • store_term_vector_offsets boolean Required
        • store_term_vector_payloads boolean Required
        • store_term_vector_positions boolean Required
        • store_term_vectors boolean Required
      • date_detection boolean
      • dynamic string

        Values are strict, runtime, true, or false.

      • dynamic_date_formats array[string]
      • dynamic_templates array[object]
      • _field_names object
        Hide _field_names attribute Show _field_names attribute object
        • enabled boolean Required
      • index_field object
        Hide index_field attribute Show index_field attribute object
        • enabled boolean Required
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • numeric_detection boolean
      • properties object
      • _routing object
        Hide _routing attribute Show _routing attribute object
        • required boolean Required
      • _size object
        Hide _size attribute Show _size attribute object
        • enabled boolean Required
      • _source object
        Hide _source attributes Show _source attributes object
        • compress boolean
        • compress_threshold string
        • enabled boolean
        • excludes array[string]
        • includes array[string]
      • runtime object
        Hide runtime attribute Show runtime attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

          • fetch_fields array[object]

            For type lookup

          • format string

            A custom format for date type runtime fields.

      • enabled boolean
      • subobjects string

        Values are true or false.

      • _data_stream_timestamp object
        Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
        • enabled boolean Required
    • settings object

      Configuration options for the index.

      Index settings
    • lifecycle object

      Data stream lifecycle denotes that a data stream is managed by the data stream lifecycle and contains the configuration.

      Hide lifecycle attributes Show lifecycle attributes object
      • data_retention string

        If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.

      • downsampling object

        The downsampling configuration to execute for the managed backing index after rollover.

        Hide downsampling attribute Show downsampling attribute object
        • rounds array[object] Required

          The list of downsampling rounds to execute as part of this downsampling configuration

      • enabled boolean

        If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

        Default value is true.

  • data_stream object

    If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with a data_stream object.

    Hide data_stream attributes Show data_stream attributes object
    • hidden boolean
    • allow_custom_routing boolean
  • priority number

    Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch.

  • version number

    Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch.

  • _meta object

    Optional user metadata about the index template. May have any contents. This map is not automatically generated by Elasticsearch.

    Hide _meta attribute Show _meta attribute object
    • * object Additional properties
  • ignore_missing_component_templates array[string]

The configuration option ignore_missing_component_templates can be used when an index template references a component template that might not exist.

  • deprecated boolean

    Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • overlapping array[object]
      Hide overlapping attributes Show overlapping attributes object
      • name string Required
      • index_patterns array[string] Required
    • template object Required
      Hide template attributes Show template attributes object
      • aliases object Required
        Hide aliases attribute Show aliases attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • filter object

            Query used to limit documents the alias can access.

          • index_routing string

            Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations.

          • is_hidden boolean

            If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

            Default value is false.

          • is_write_index boolean

            If true, the index is the write index for the alias.

            Default value is false.

          • routing string

            Value used to route indexing and search operations to a specific shard.

          • search_routing string

            Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations.

      • mappings object Required
        Hide mappings attributes Show mappings attributes object
        • all_field object
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
        • index_field object
        • _meta object
        • numeric_detection boolean
        • properties object
        • _routing object
        • _size object
        • _source object
        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
      • settings object Required
        Index settings
POST /_index_template/_simulate/{name}
POST /_index_template/_simulate
{
  "index_patterns": ["my-index-*"],
  "composed_of": ["ct2"],
  "priority": 10,
  "template": {
    "settings": {
      "index.number_of_replicas": 1
    }
  }
}
resp = client.indices.simulate_template(
    index_patterns=[
        "my-index-*"
    ],
    composed_of=[
        "ct2"
    ],
    priority=10,
    template={
        "settings": {
            "index.number_of_replicas": 1
        }
    },
)
const response = await client.indices.simulateTemplate({
  index_patterns: ["my-index-*"],
  composed_of: ["ct2"],
  priority: 10,
  template: {
    settings: {
      "index.number_of_replicas": 1,
    },
  },
});
response = client.indices.simulate_template(
  body: {
    "index_patterns": [
      "my-index-*"
    ],
    "composed_of": [
      "ct2"
    ],
    "priority": 10,
    "template": {
      "settings": {
        "index.number_of_replicas": 1
      }
    }
  }
)
$resp = $client->indices()->simulateTemplate([
    "body" => [
        "index_patterns" => array(
            "my-index-*",
        ),
        "composed_of" => array(
            "ct2",
        ),
        "priority" => 10,
        "template" => [
            "settings" => [
                "index.number_of_replicas" => 1,
            ],
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index_patterns":["my-index-*"],"composed_of":["ct2"],"priority":10,"template":{"settings":{"index.number_of_replicas":1}}}' "$ELASTICSEARCH_URL/_index_template/_simulate"
client.indices().simulateTemplate(s -> s
    .composedOf("ct2")
    .indexPatterns("my-index-*")
    .priority(10L)
    .template(t -> t
        .settings(se -> se
            .otherSettings("index.number_of_replicas", JsonData.fromJson("1"))
        )
    )
);
Request example
To see what settings will be applied by a template before you add it to the cluster, you can pass a template configuration in the request body. The specified template is used for the simulation if it has a higher priority than existing templates.
{
  "index_patterns": ["my-index-*"],
  "composed_of": ["ct2"],
  "priority": 10,
  "template": {
    "settings": {
      "index.number_of_replicas": 1
    }
  }
}
Response examples (200)
A successful response from `POST /_index_template/_simulate` with a template configuration in the request body. The response shows any overlapping templates with a lower priority.
{
  "template" : {
    "settings" : {
      "index" : {
        "number_of_replicas" : "1",
        "routing" : {
          "allocation" : {
            "include" : {
              "_tier_preference" : "data_content"
            }
          }
        }
      }
    },
    "mappings" : {
      "properties" : {
        "@timestamp" : {
          "type" : "date"
        }
      }
    },
    "aliases" : { }
  },
  "overlapping" : [
    {
      "name" : "final-template",
      "index_patterns" : [
        "my-index-*"
      ]
    }
  ]
}