Elasticsearch Serverless API

Base URL
http://api.example.com

Documentation source and versions

This documentation is derived from the main branch of the elasticsearch-specification repository. It is provided under license Attribution-NonCommercial-NoDerivatives 4.0 International.

Last update on Jul 23, 2025.

This API is provided under license Apache 2.0.

Create a behavioral analytics collection (Deprecated; Technical preview)

PUT /_application/analytics/{name}

Path parameters

  • name string Required

    The name of the analytics collection to be created or updated.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • name string Required

Example request (Console):

PUT _application/analytics/my_analytics_collection

Python:

resp = client.search_application.put_behavioral_analytics(
    name="my_analytics_collection",
)

JavaScript:

const response = await client.searchApplication.putBehavioralAnalytics({
  name: "my_analytics_collection",
});

Ruby:

response = client.search_application.put_behavioral_analytics(
  name: "my_analytics_collection"
)

PHP:

$resp = $client->searchApplication()->putBehavioralAnalytics([
    "name" => "my_analytics_collection",
]);

curl:

curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/analytics/my_analytics_collection"

Java:

client.searchApplication().putBehavioralAnalytics(p -> p
    .name("my_analytics_collection")
);

Delete a behavioral analytics collection (Deprecated; Technical preview)

DELETE /_application/analytics/{name}

The associated data stream is also deleted.

Path parameters

  • name string Required

    The name of the analytics collection to be deleted

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

Example request (Console):

DELETE _application/analytics/my_analytics_collection/

Python:

resp = client.search_application.delete_behavioral_analytics(
    name="my_analytics_collection",
)

JavaScript:

const response = await client.searchApplication.deleteBehavioralAnalytics({
  name: "my_analytics_collection",
});

Ruby:

response = client.search_application.delete_behavioral_analytics(
  name: "my_analytics_collection"
)

PHP:

$resp = $client->searchApplication()->deleteBehavioralAnalytics([
    "name" => "my_analytics_collection",
]);

curl:

curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/analytics/my_analytics_collection/"

Java:

client.searchApplication().deleteBehavioralAnalytics(d -> d
    .name("my_analytics_collection")
);

Compact and aligned text (CAT)

The compact and aligned text (CAT) APIs are intended only for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, it is recommended to use a corresponding JSON API. All the cat commands accept a query string parameter help to see all the headers and info they provide, and the /_cat command alone lists all the available commands.
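
For example, assuming the same $ELASTICSEARCH_URL and $ELASTIC_API_KEY conventions used by the request examples on this page, the following sketch lists the available commands and asks one command to describe its columns:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat"
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/aliases?help"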

Get aliases (Generally available)

GET /_cat/aliases/{name}

All methods and paths for this operation:

GET /_cat/aliases

GET /_cat/aliases/{name}

Get the cluster's index aliases, including filter and routing information. This API does not return data stream aliases.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • name string | array[string]

    A comma-separated list of aliases to retrieve. Supports wildcards (*). To retrieve all aliases, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never time out, you can set it to -1.

    Values are -1 or 0.
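
As a sketch of how the h and s parameters above combine (reusing the environment-variable conventions from the other examples on this page), the following request returns only the alias and index columns, sorted by alias in descending order:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/aliases?h=alias,index&s=alias:desc&v=true"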

Responses

  • 200 application/json
    • alias string

      alias name

    • index string
    • filter string

      filter

    • routing.index string

      index routing

    • is_write_index string

      write index

Example request (Console):

GET _cat/aliases?format=json&v=true

Python:

resp = client.cat.aliases(
    format="json",
    v=True,
)

JavaScript:

const response = await client.cat.aliases({
  format: "json",
  v: "true",
});

Ruby:

response = client.cat.aliases(
  format: "json",
  v: "true"
)

PHP:

$resp = $client->cat()->aliases([
    "format" => "json",
    "v" => "true",
]);

curl:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/aliases?format=json&v=true"

Java:

client.cat().aliases();

Response examples (200)
A successful response from `GET _cat/aliases?format=json&v=true`. This response shows that `alias2` has a configured filter, and `alias3` and `alias4` have routing configurations.
[
  {
    "alias": "alias1",
    "index": "test1",
    "filter": "-",
    "routing.index": "-",
    "routing.search": "-",
    "is_write_index": "true"
  },
  {
    "alias": "alias2",
    "index": "test1",
    "filter": "*",
    "routing.index": "-",
    "routing.search": "-",
    "is_write_index": "true"
  },
  {
    "alias": "alias3",
    "index": "test1",
    "filter": "-",
    "routing.index": "1",
    "routing.search": "1",
    "is_write_index": "true"
  },
  {
    "alias": "alias4",
    "index": "test1",
    "filter": "-",
    "routing.index": "2",
    "routing.search": "1,2",
    "is_write_index": "true"
  }
]

Get component templates (Generally available)

GET /_cat/component_templates/{name}

All methods and paths for this operation:

GET /_cat/component_templates

GET /_cat/component_templates/{name}

Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.

Required authorization

  • Cluster privileges: monitor

Path parameters

  • name string Required

    The name of the component template. It accepts wildcard expressions. If it is omitted, all component templates are returned.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    The period to wait for a connection to the master node.

    Values are -1 or 0.
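
For instance, to limit the output to each template's name and the index templates it is included in, sorted by name, a sketch following the conventions of the other examples on this page:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/component_templates?h=name,included_in&s=name&v=true"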

Responses

  • 200 application/json
    • name string Required
    • version string | null Required

    • alias_count string Required
    • mapping_count string Required
    • settings_count string Required
    • metadata_count string Required
    • included_in string Required

Example request (Console):

GET _cat/component_templates/my-template-*?v=true&s=name&format=json

Python:

resp = client.cat.component_templates(
    name="my-template-*",
    v=True,
    s="name",
    format="json",
)

JavaScript:

const response = await client.cat.componentTemplates({
  name: "my-template-*",
  v: "true",
  s: "name",
  format: "json",
});

Ruby:

response = client.cat.component_templates(
  name: "my-template-*",
  v: "true",
  s: "name",
  format: "json"
)

PHP:

$resp = $client->cat()->componentTemplates([
    "name" => "my-template-*",
    "v" => "true",
    "s" => "name",
    "format" => "json",
]);

curl:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/component_templates/my-template-*?v=true&s=name&format=json"

Java:

client.cat().componentTemplates();

Response examples (200)
A successful response from `GET _cat/component_templates/my-template-*?v=true&s=name&format=json`.
[
  {
    "name": "my-template-1",
    "version": null,
    "alias_count": "0",
    "mapping_count": "0",
    "settings_count": "1",
    "metadata_count": "0",
    "included_in": "[my-index-template]"
  },
  {
    "name": "my-template-2",
    "version": null,
    "alias_count": "0",
    "mapping_count": "3",
    "settings_count": "0",
    "metadata_count": "0",
    "included_in": "[my-index-template]"
  }
]

Get a document count (Generally available)

GET /_cat/count/{index}

All methods and paths for this operation:

GET /_cat/count

GET /_cat/count/{index}

Get quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.

Required authorization

  • Index privileges: read

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. It supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
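
Because the index path parameter accepts a comma-separated list and wildcards, one request can count several targets at once. A sketch, using the same environment-variable conventions as the other examples on this page (the index names are illustrative):

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/count/my-index-000001,my-index-000002?v=true"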

Responses

  • 200 application/json
    • epoch number | string

      Some APIs return values such as numbers also as a string (notably epoch timestamps). This union type captures that behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      One of: a number (time unit for seconds) or a string.

    • timestamp string

      Time of day, expressed as HH:MM:SS

    • count string

      the document count

Example request (Console):

GET /_cat/count/my-index-000001?v=true&format=json

Python:

resp = client.cat.count(
    index="my-index-000001",
    v=True,
    format="json",
)

JavaScript:

const response = await client.cat.count({
  index: "my-index-000001",
  v: "true",
  format: "json",
});

Ruby:

response = client.cat.count(
  index: "my-index-000001",
  v: "true",
  format: "json"
)

PHP:

$resp = $client->cat()->count([
    "index" => "my-index-000001",
    "v" => "true",
    "format" => "json",
]);

curl:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/count/my-index-000001?v=true&format=json"

Java:

client.cat().count();

Response examples (200)
A successful response from `GET /_cat/count/my-index-000001?v=true&format=json`. It retrieves the document count for the `my-index-000001` data stream or index.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "120"
  }
]
A successful response from `GET /_cat/count?v=true&format=json`. It retrieves the document count for all data streams and indices in the cluster.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "121"
  }
]

Get CAT help (Generally available)

GET /_cat

Get help for the CAT APIs.

Responses

  • 200 application/json
GET /_cat
curl \
 --request GET 'http://api.example.com/_cat' \
 --header "Authorization: $API_KEY"

Get index information (Generally available)

GET /_cat/indices/{index}

All methods and paths for this operation:

GET /_cat/indices

GET /_cat/indices/{index}

Get high-level information about indices in a cluster, including backing indices for data streams.

Use this request to get the following information for each index in a cluster:

  • shard count
  • document count
  • deleted document count
  • primary store size
  • total store size of all shards, including shard replicas

These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs.
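
To see that difference in practice, you can compare the Lucene-level count reported by this API with the live-document count from the cat count API. A sketch, following the conventions of the other examples on this page, with an illustrative index name:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/indices/my-index-000001?h=index,docs.count&v=true"
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/count/my-index-000001?v=true"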

CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.

Required authorization

  • Index privileges: monitor
  • Cluster privileges: monitor

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • health string

    The health status used to limit returned indices. By default, the response includes indices of any health status.

    Supported values include:

    • green (or GREEN): All shards are assigned.
    • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
    • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.
    • unknown
    • unavailable

    Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

  • include_unloaded_segments boolean

    If true, the response includes information from segments that are not loaded into memory.

  • pri boolean

    If true, the response only includes information from primary shards.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
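
A sketch combining several of the query parameters above: list only indices with yellow health, report primary-shard values only, and display byte values in megabytes (same environment-variable conventions as the other examples on this page):

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/indices?health=yellow&pri=true&bytes=mb&v=true"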

Responses

  • 200 application/json
    • health string

      current health status

    • status string

      open/close status

    • index string

      index name

    • uuid string

      index uuid

    • pri string

      number of primary shards

    • rep string

      number of replica shards

    • docs.count string | null

      available docs

    • docs.deleted string | null

      deleted docs

    • creation.date string

      index creation date (millisecond value)

    • creation.date.string string

      index creation date (as string)

    • store.size string | null

      store size of primaries & replicas

    • pri.store.size string | null

      store size of primaries

    • dataset.size string | null

      total size of dataset (including the cache for partially mounted indices)

    • completion.size string

      size of completion

    • pri.completion.size string

      size of completion

    • fielddata.memory_size string

      used fielddata cache

    • pri.fielddata.memory_size string

      used fielddata cache

    • fielddata.evictions string

      fielddata evictions

    • pri.fielddata.evictions string

      fielddata evictions

    • query_cache.memory_size string

      used query cache

    • pri.query_cache.memory_size string

      used query cache

    • query_cache.evictions string

      query cache evictions

    • pri.query_cache.evictions string

      query cache evictions

    • request_cache.memory_size string

      used request cache

    • pri.request_cache.memory_size string

      used request cache

    • request_cache.evictions string

      request cache evictions

    • pri.request_cache.evictions string

      request cache evictions

    • request_cache.hit_count string

      request cache hit count

    • pri.request_cache.hit_count string

      request cache hit count

    • request_cache.miss_count string

      request cache miss count

    • pri.request_cache.miss_count string

      request cache miss count

    • flush.total string

      number of flushes

    • pri.flush.total string

      number of flushes

    • flush.total_time string

      time spent in flush

    • pri.flush.total_time string

      time spent in flush

    • get.current string

      number of current get ops

    • pri.get.current string

      number of current get ops

    • get.time string

      time spent in get

    • pri.get.time string

      time spent in get

    • get.total string

      number of get ops

    • pri.get.total string

      number of get ops

    • get.exists_time string

      time spent in successful gets

    • pri.get.exists_time string

      time spent in successful gets

    • get.exists_total string

      number of successful gets

    • pri.get.exists_total string

      number of successful gets

    • get.missing_time string

      time spent in failed gets

    • pri.get.missing_time string

      time spent in failed gets

    • get.missing_total string

      number of failed gets

    • pri.get.missing_total string

      number of failed gets

    • indexing.delete_current string

      number of current deletions

    • pri.indexing.delete_current string

      number of current deletions

    • indexing.delete_time string

      time spent in deletions

    • pri.indexing.delete_time string

      time spent in deletions

    • indexing.delete_total string

      number of delete ops

    • pri.indexing.delete_total string

      number of delete ops

    • indexing.index_current string

      number of current indexing ops

    • pri.indexing.index_current string

      number of current indexing ops

    • indexing.index_time string

      time spent in indexing

    • pri.indexing.index_time string

      time spent in indexing

    • indexing.index_total string

      number of indexing ops

    • pri.indexing.index_total string

      number of indexing ops

    • indexing.index_failed string

      number of failed indexing ops

    • pri.indexing.index_failed string

      number of failed indexing ops

    • merges.current string

      number of current merges

    • pri.merges.current string

      number of current merges

    • merges.current_docs string

      number of current merging docs

    • pri.merges.current_docs string

      number of current merging docs

    • merges.current_size string

      size of current merges

    • pri.merges.current_size string

      size of current merges

    • merges.total string

      number of completed merge ops

    • pri.merges.total string

      number of completed merge ops

    • merges.total_docs string

      docs merged

    • pri.merges.total_docs string

      docs merged

    • merges.total_size string

      size merged

    • pri.merges.total_size string

      size merged

    • merges.total_time string

      time spent in merges

    • pri.merges.total_time string

      time spent in merges

    • refresh.total string

      total refreshes

    • pri.refresh.total string

      total refreshes

    • refresh.time string

      time spent in refreshes

    • pri.refresh.time string

      time spent in refreshes

    • refresh.external_total string

      total external refreshes

    • pri.refresh.external_total string

      total external refreshes

    • refresh.external_time string

      time spent in external refreshes

    • pri.refresh.external_time string

      time spent in external refreshes

    • refresh.listeners string

      number of pending refresh listeners

    • pri.refresh.listeners string

      number of pending refresh listeners

    • search.fetch_current string

      current fetch phase ops

    • pri.search.fetch_current string

      current fetch phase ops

    • search.fetch_time string

      time spent in fetch phase

    • pri.search.fetch_time string

      time spent in fetch phase

    • search.fetch_total string

      total fetch ops

    • pri.search.fetch_total string

      total fetch ops

    • search.open_contexts string

      open search contexts

    • pri.search.open_contexts string

      open search contexts

    • search.query_current string

      current query phase ops

    • pri.search.query_current string

      current query phase ops

    • search.query_time string

      time spent in query phase

    • pri.search.query_time string

      time spent in query phase

    • search.query_total string

      total query phase ops

    • pri.search.query_total string

      total query phase ops

    • search.scroll_current string

      open scroll contexts

    • pri.search.scroll_current string

      open scroll contexts

    • search.scroll_time string

      time scroll contexts held open

    • pri.search.scroll_time string

      time scroll contexts held open

    • search.scroll_total string

      completed scroll contexts

    • pri.search.scroll_total string

      completed scroll contexts

    • segments.count string

      number of segments

    • pri.segments.count string

      number of segments

    • segments.memory string

      memory used by segments

    • pri.segments.memory string

      memory used by segments

    • segments.index_writer_memory string

      memory used by index writer

    • pri.segments.index_writer_memory string

      memory used by index writer

    • segments.version_map_memory string

      memory used by version map

    • pri.segments.version_map_memory string

      memory used by version map

    • segments.fixed_bitset_memory string

      memory used by fixed bit sets for nested object field types and export type filters for types referred in _parent fields

    • pri.segments.fixed_bitset_memory string

      memory used by fixed bit sets for nested object field types and export type filters for types referred in _parent fields

    • warmer.current string

      current warmer ops

    • pri.warmer.current string

      current warmer ops

    • warmer.total string

      total warmer ops

    • pri.warmer.total string

      total warmer ops

    • warmer.total_time string

      time spent in warmers

    • pri.warmer.total_time string

      time spent in warmers

    • suggest.current string

      number of current suggest ops

    • pri.suggest.current string

      number of current suggest ops

    • suggest.time string

      time spent in suggest

    • pri.suggest.time string

      time spent in suggest

    • suggest.total string

      number of suggest ops

    • pri.suggest.total string

      number of suggest ops

    • memory.total string

      total used memory

    • pri.memory.total string

      total used memory

    • search.throttled string

      indicates if the index is search throttled

    • bulk.total_operations string

      number of bulk shard ops

    • pri.bulk.total_operations string

      number of bulk shard ops

    • bulk.total_time string

      time spent in shard bulk

    • pri.bulk.total_time string

      time spent in shard bulk

    • bulk.total_size_in_bytes string

      total size in bytes of shard bulk

    • pri.bulk.total_size_in_bytes string

      total size in bytes of shard bulk

    • bulk.avg_time string

      average time spent in shard bulk

    • pri.bulk.avg_time string

      average time spent in shard bulk

    • bulk.avg_size_in_bytes string

      average size in bytes of shard bulk

    • pri.bulk.avg_size_in_bytes string

      average size in bytes of shard bulk

Example request (Console):

GET /_cat/indices/my-index-*?v=true&s=index&format=json

Python:

resp = client.cat.indices(
    index="my-index-*",
    v=True,
    s="index",
    format="json",
)

JavaScript:

const response = await client.cat.indices({
  index: "my-index-*",
  v: "true",
  s: "index",
  format: "json",
});

Ruby:

response = client.cat.indices(
  index: "my-index-*",
  v: "true",
  s: "index",
  format: "json"
)

PHP:

$resp = $client->cat()->indices([
    "index" => "my-index-*",
    "v" => "true",
    "s" => "index",
    "format" => "json",
]);

curl:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/indices/my-index-*?v=true&s=index&format=json"

Java:

client.cat().indices();

Response examples (200)
A successful response from `GET /_cat/indices/my-index-*?v=true&s=index&format=json`.
[
  {
    "health": "yellow",
    "status": "open",
    "index": "my-index-000001",
    "uuid": "u8FNjxh8Rfy_awN11oDKYQ",
    "pri": "1",
    "rep": "1",
    "docs.count": "1200",
    "docs.deleted": "0",
    "store.size": "88.1kb",
    "pri.store.size": "88.1kb",
    "dataset.size": "88.1kb"
  },
  {
    "health": "green",
    "status": "open",
    "index": "my-index-000002",
    "uuid": "nYFWZEO7TUiOjLQXBaYJpA",
    "pri": "1",
    "rep": "0",
    "docs.count": "0",
    "docs.deleted": "0",
    "store.size": "260b",
    "pri.store.size": "260b",
    "dataset.size": "260b"
  }
]

Get data frame analytics jobs (Generally available)

GET /_cat/ml/data_frame/analytics/{id}

All methods and paths for this operation:

GET /_cat/ml/data_frame/analytics

GET /_cat/ml/data_frame/analytics/{id}

Get configuration and usage information about data frame analytics jobs.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • id string Required

    The ID of the data frame analytics job to fetch.

Query parameters

  • allow_no_match boolean

    Whether to ignore a wildcard expression that matches no configurations. (This includes the _all string and requests in which no configurations are specified.)

  • bytes string

    The unit in which to display byte values

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • assignment_explanation (or ae): Contains messages relating to the selection of a node.
    • create_time (or ct, createTime): The time when the data frame analytics job was created.
    • description (or d): A description of a job.
    • dest_index (or di, destIndex): Name of the destination index.
    • failure_reason (or fr, failureReason): Contains messages about the reason why a data frame analytics job failed.
    • id: Identifier for the data frame analytics job.
    • model_memory_limit (or mml, modelMemoryLimit): The approximate maximum amount of memory resources that are permitted for the data frame analytics job.
    • node.address (or na, nodeAddress): The network address of the node that the data frame analytics job is assigned to.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that the data frame analytics job is assigned to.
    • node.id (or ni, nodeId): The unique identifier of the node that the data frame analytics job is assigned to.
    • node.name (or nn, nodeName): The name of the node that the data frame analytics job is assigned to.
    • progress (or p): The progress report of the data frame analytics job by phase.
    • source_index (or si, sourceIndex): Name of the source index.
    • state (or s): Current state of the data frame analytics job.
    • type (or t): The type of analysis that the data frame analytics job performs.
    • version (or v): The Elasticsearch version number in which the data frame analytics job was created.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): Contains messages relating to the selection of a node.
    • create_time (or ct, createTime): The time when the data frame analytics job was created.
    • description (or d): A description of a job.
    • dest_index (or di, destIndex): Name of the destination index.
    • failure_reason (or fr, failureReason): Contains messages about the reason why a data frame analytics job failed.
    • id: Identifier for the data frame analytics job.
    • model_memory_limit (or mml, modelMemoryLimit): The approximate maximum amount of memory resources that are permitted for the data frame analytics job.
    • node.address (or na, nodeAddress): The network address of the node that the data frame analytics job is assigned to.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that the data frame analytics job is assigned to.
    • node.id (or ni, nodeId): The unique identifier of the node that the data frame analytics job is assigned to.
    • node.name (or nn, nodeName): The name of the node that the data frame analytics job is assigned to.
    • progress (or p): The progress report of the data frame analytics job by phase.
    • source_index (or si, sourceIndex): Name of the source index.
    • state (or s): Current state of the data frame analytics job.
    • type (or t): The type of analysis that the data frame analytics job performs.
    • version (or v): The Elasticsearch version number in which the data frame analytics job was created.
  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.
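
As a sketch of the column aliases documented above, the following request (reusing the conventions of the other examples on this page) displays only each job's identifier, state, and per-phase progress, sorted by identifier:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/data_frame/analytics?h=id,s,p&s=id&v=true"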

Responses

  • 200 application/json
    • id string
    • type string

      The type of analysis that the job performs.

    • create_time string

      The time when the job was created.

    • version string
    • source_index string
    • dest_index string
    • description string

      A description of the job.

    • model_memory_limit string

      The approximate maximum amount of memory resources that are permitted for the job.

    • state string

      The current status of the job.

    • failure_reason string

      Messages about the reason why the job failed.

    • progress string

      The progress report for the job by phase.

    • assignment_explanation string

      Messages related to the selection of a node.

    • node.id string
    • node.name string
    • node.ephemeral_id string
    • node.address string

      The network address of the assigned node.

Example request (Console):

GET _cat/ml/data_frame/analytics?v=true&format=json

Python:

resp = client.cat.ml_data_frame_analytics(
    v=True,
    format="json",
)

JavaScript:

const response = await client.cat.mlDataFrameAnalytics({
  v: "true",
  format: "json",
});

Ruby:

response = client.cat.ml_data_frame_analytics(
  v: "true",
  format: "json"
)

PHP:

$resp = $client->cat()->mlDataFrameAnalytics([
    "v" => "true",
    "format" => "json",
]);

curl:

curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/data_frame/analytics?v=true&format=json"

Java:

client.cat().mlDataFrameAnalytics();

Response examples (200)
A successful response from `GET _cat/ml/data_frame/analytics?v=true&format=json`.
[
  {
    "id": "classifier_job_1",
    "type": "classification",
    "create_time": "2020-02-12T11:49:09.594Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_2",
    "type": "classification",
    "create_time": "2020-02-12T11:49:14.479Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_3",
    "type": "classification",
    "create_time": "2020-02-12T11:49:16.928Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_4",
    "type": "classification",
    "create_time": "2020-02-12T11:49:19.127Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_5",
    "type": "classification",
    "create_time": "2020-02-12T11:49:21.349Z",
    "state": "stopped"
  }
]

Get anomaly detection jobs (Generally available)

GET /_cat/ml/anomaly_detectors/{job_id}

All methods and paths for this operation:

GET /_cat/ml/anomaly_detectors

GET /_cat/ml/anomaly_detectors/{job_id}

Get configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no jobs that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job has irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • state string

      Values are closing, closed, opened, failed, or opening.

    • opened_time string

      For open jobs only, the amount of time the job has been open.

    • assignment_explanation string

      For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.

    • data.processed_records string

      The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed_record_count is the number of aggregation results processed, not the number of Elasticsearch documents.

    • data.processed_fields string

      The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.

    • data.input_bytes number | string

    • data.input_records string

      The number of input documents posted to the anomaly detection job.

    • data.input_fields string

      The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.

    • data.invalid_dates string

      The number of input documents with either a missing date field or a date that could not be parsed.

    • data.missing_fields string

      The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing. If you are using datafeeds or posting data to the job in JSON format, a high missing_field_count is often not an indication of data issues. It is not necessarily a cause for concern.

    • data.out_of_order_timestamps string

      The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.

    • data.empty_buckets string

      The number of buckets which did not contain any data. If your data contains many empty buckets, consider increasing your bucket_span or using functions that are tolerant to gaps in data such as mean, non_null_sum or non_zero_count.

    • data.sparse_buckets string

      The number of buckets that contained few data points compared to the expected number of data points. If your data contains many sparse buckets, consider using a longer bucket_span.

    • data.buckets string

      The total number of buckets processed.

    • data.earliest_record string

      The timestamp of the chronologically earliest input document.

    • data.latest_record string

      The timestamp of the chronologically latest input document.

    • data.last string

      The timestamp at which data was last analyzed, according to server time.

    • data.last_empty_bucket string

      The timestamp of the last bucket that did not contain any data.

    • data.last_sparse_bucket string

      The timestamp of the last bucket that was considered sparse.

    • model.bytes number | string

    • model.memory_status string

      Values are ok, soft_limit, or hard_limit.

    • model.bytes_exceeded number | string

    • model.memory_limit string

      The upper limit for model memory usage, checked on increasing values.

    • model.by_fields string

      The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.over_fields string

      The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.partition_fields string

      The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.bucket_allocation_failures string

      The number of buckets for which new entities in incoming data were not processed due to insufficient model memory. This situation is also signified by a memory_status property value of hard_limit.

    • model.categorization_status string

      Values are ok or warn.

    • model.categorized_doc_count string

      The number of documents that have had a field categorized.

    • model.total_category_count string

      The number of categories created by categorization.

    • model.frequent_category_count string

      The number of categories that match more than 1% of categorized documents.

    • model.rare_category_count string

      The number of categories that match just one categorized document.

    • model.dead_category_count string

      The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.

    • model.failed_category_count string

      The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model_memory_limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.

    • model.log_time string

      The timestamp when the model stats were gathered, according to server time.

    • model.timestamp string

      The timestamp of the last record when the model stats were gathered.

    • forecasts.total string

      The number of individual forecasts currently available for the job. A value of one or more indicates that forecasts exist.

    • forecasts.memory.min string

      The minimum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.max string

      The maximum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.avg string

      The average memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.total string

      The total memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.records.min string

      The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.max string

      The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.avg string

      The average number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.total string

      The total number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.time.min string

      The minimum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.max string

      The maximum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.avg string

      The average runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.total string

      The total runtime in milliseconds for forecasts related to the anomaly detection job.

    • node.id string
    • node.name string

      The name of the assigned node.

    • node.ephemeral_id string
    • node.address string

      The network address of the assigned node.

    • buckets.count string

      The number of bucket results produced by the job.

    • buckets.time.total string

      The sum of all bucket processing times, in milliseconds.

    • buckets.time.min string

      The minimum of all bucket processing times, in milliseconds.

    • buckets.time.max string

      The maximum of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg string

      The exponential moving average of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg_hour string

      The exponential moving average of bucket processing times calculated in a one hour time window, in milliseconds.

GET /_cat/ml/anomaly_detectors/{job_id}
GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json
resp = client.cat.ml_jobs(
    h="id,s,dpr,mb",
    v=True,
    format="json",
)
const response = await client.cat.mlJobs({
  h: "id,s,dpr,mb",
  v: "true",
  format: "json",
});
response = client.cat.ml_jobs(
  h: "id,s,dpr,mb",
  v: "true",
  format: "json"
)
$resp = $client->cat()->mlJobs([
    "h" => "id,s,dpr,mb",
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json"
client.cat().mlJobs();
Response examples (200)
A successful response from `GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json`.
[
  {
    "id": "high_sum_total_sales",
    "s": "closed",
    "dpr": "14022",
    "mb": "1.5mb"
  },
  {
    "id": "low_request_rate",
    "s": "closed",
    "dpr": "1216",
    "mb": "40.5kb"
  },
  {
    "id": "response_code_rates",
    "s": "closed",
    "dpr": "28146",
    "mb": "132.7kb"
  },
  {
    "id": "url_scanning",
    "s": "closed",
    "dpr": "28146",
    "mb": "501.6kb"
  }
]




Get transform information Generally available

GET /_cat/transforms/{transform_id}

All methods and paths for this operation:

GET /_cat/transforms

GET /_cat/transforms/{transform_id}

Get configuration and usage information about transforms.

CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.

Required authorization

  • Cluster privileges: monitor_transform

Path parameters

  • transform_id string Required

    A transform identifier or a wildcard expression. If you do not specify one of these options, the API returns information for all transforms.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; or contains wildcard expressions and there are only partial matches. If true, it returns an empty transforms array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches. A short sketch of this behavior follows the parameter list below.

  • from number

    Skips the specified number of transforms.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices, use the syntax "remote_name:index_name". If any indices are in remote clusters, then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices, use the syntax "remote_name:index_name". If any indices are in remote clusters, then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • size number

    The maximum number of transforms to obtain.
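
For illustration, a minimal sketch of the allow_no_match behavior with the Python client: with allow_no_match=False, a wildcard pattern that matches no transforms (the pattern here is hypothetical) produces a 404, which elasticsearch-py surfaces as a NotFoundError; with the default of true, the same request returns an empty array instead.

from elasticsearch import NotFoundError

try:
    resp = client.cat.transforms(
        transform_id="missing-*",  # hypothetical pattern with no matches
        allow_no_match=False,
        format="json",
    )
except NotFoundError:
    print("no matching transforms")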

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • state string

      The status of the transform. Returned values include: aborting: The transform is aborting. failed: The transform failed. For more information about the failure, check the reason field. indexing: The transform is actively processing data and creating new documents. started: The transform is running but not actively indexing data. stopped: The transform is stopped. stopping: The transform is stopping.

    • checkpoint string

      The sequence number for the checkpoint.

    • documents_processed string

      The number of documents that have been processed from the source index of the transform.

    • checkpoint_progress string | null

      The progress of the next checkpoint that is currently in progress.

    • last_search_time string | null

      The timestamp of the last search in the source indices. This field is shown only if the transform is running.

    • changes_last_detection_time string | null

      The timestamp when changes were last detected in the source indices.

    • create_time string

      The time the transform was created.

    • version string
    • source_index string

      The source indices for the transform.

    • dest_index string

      The destination index for the transform.

    • pipeline string

      The unique identifier for the ingest pipeline.

    • description string

      The description of the transform.

    • transform_type string

      The type of transform: batch or continuous.

    • frequency string

      The interval between checks for changes in the source indices when the transform is running continuously.

    • max_page_search_size string

      The initial page size that is used for the composite aggregation for each checkpoint.

    • docs_per_second string

      The number of input documents per second.

    • reason string

      If a transform has a failed state, these details describe the reason for failure.

    • search_total string

      The total number of search operations on the source index for the transform.

    • search_failure string

      The total number of search failures.

    • search_time string

      The total amount of search time, in milliseconds.

    • index_total string

      The total number of index operations done by the transform.

    • index_failure string

      The total number of indexing failures.

    • index_time string

      The total time spent indexing documents, in milliseconds.

    • documents_indexed string

      The number of documents that have been indexed into the destination index for the transform.

    • delete_time string

      The total time spent deleting documents, in milliseconds.

    • documents_deleted string

      The number of documents deleted from the destination index due to the retention policy for the transform.

    • trigger_count string

      The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • pages_processed string

      The number of search or bulk index operations processed. Documents are processed in batches instead of individually.

    • processing_time string

      The total time spent processing results, in milliseconds.

    • checkpoint_duration_time_exp_avg string

      The exponential moving average of the duration of the checkpoint, in milliseconds.

    • indexed_documents_exp_avg string

      The exponential moving average of the number of new documents that have been indexed.

    • processed_documents_exp_avg string

      The exponential moving average of the number of documents that have been processed.

GET /_cat/transforms/{transform_id}
GET /_cat/transforms?v=true&format=json
resp = client.cat.transforms(
    v=True,
    format="json",
)
const response = await client.cat.transforms({
  v: "true",
  format: "json",
});
response = client.cat.transforms(
  v: "true",
  format: "json"
)
$resp = $client->cat()->transforms([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/transforms?v=true&format=json"
client.cat().transforms();
Response examples (200)
A successful response from `GET /_cat/transforms?v=true&format=json`.
[
  {
    "id" : "ecommerce_transform",
    "state" : "started",
    "checkpoint" : "1",
    "documents_processed" : "705",
    "checkpoint_progress" : "100.00",
    "changes_last_detection_time" : null
  }
]

Cluster

Get cluster info Generally available

GET /_info/{target}

Returns basic information about the cluster.

Path parameters

  • target string | array[string]

    Limits the information returned to the specific target. Supports a comma-separated list, such as http,ingest.

    Values are _all, http, ingest, thread_pool, or script.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • cluster_name string Required
    • http object
      Hide http attributes Show http attributes object
      • current_open number

        Current number of open HTTP connections for the node.

      • total_opened number

        Total number of HTTP connections opened for the node.

      • clients array[object]

        Information on current and recently-closed HTTP client connections. Clients that have been closed longer than the http.client_stats.closed_channels.max_age setting will not be represented here.

        Hide clients attributes Show clients attributes object
        • id number

          Unique ID for the HTTP client.

        • agent string

          Reported agent for the HTTP client. If unavailable, this property is not included in the response.

        • local_address string

          Local address for the HTTP connection.

        • remote_address string

          Remote address for the HTTP connection.

        • last_uri string

          The URI of the client’s most recent request.

        • opened_time_millis number

          Time at which the client opened the connection.

        • closed_time_millis number

          Time at which the client closed the connection if the connection is closed.

        • last_request_time_millis number

          Time of the most recent request from this client.

        • request_count number

          Number of requests from this client.

        • request_size_bytes number

          Cumulative size in bytes of all requests from this client.

        • x_opaque_id string

          Value from the client’s x-opaque-id HTTP header. If unavailable, this property is not included in the response.

    • ingest object
      Hide ingest attributes Show ingest attributes object
      • pipelines object

        Contains statistics about ingest pipelines for the node.

        Hide pipelines attribute Show pipelines attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • count number Required

            Total number of documents ingested during the lifetime of this node.

          • current number Required

            Total number of documents currently being ingested.

          • failed number Required

            Total number of failed ingest operations during the lifetime of this node.

          • processors array[object] Required

            Total number of ingest processors.

            Hide processors attribute Show processors attribute object
            • * object Additional properties
          • time_in_millis number

            Total time, in milliseconds, spent preprocessing documents in this pipeline.

          • ingested_as_first_pipeline_in_bytes number Required Generally available

            Total number of bytes of all documents ingested by the pipeline. This field is only present on pipelines which are the first to process a document. Thus, it is not present on pipelines which only serve as a final pipeline after a default pipeline, a pipeline run after a reroute processor, or pipelines in pipeline processors.

          • produced_as_first_pipeline_in_bytes number Required Generally available

            Total number of bytes of all documents produced by the pipeline. This field is only present on pipelines which are the first to process a document. Thus, it is not present on pipelines which only serve as a final pipeline after a default pipeline, a pipeline run after a reroute processor, or pipelines in pipeline processors. In situations where there are subsequent pipelines, the value represents the size of the document after all pipelines have run.

      • total object
        Hide total attributes Show total attributes object
        • count number Required

          Total number of documents ingested during the lifetime of this node.

        • current number Required

          Total number of documents currently being ingested.

        • failed number Required

          Total number of failed ingest operations during the lifetime of this node.

        • time_in_millis number

          Total time, in milliseconds, spent preprocessing ingest documents during the lifetime of this node.

    • thread_pool object
      Hide thread_pool attribute Show thread_pool attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • active number

          Number of active threads in the thread pool.

        • completed number

          Number of tasks completed by the thread pool executor.

        • largest number

          Highest number of active threads in the thread pool.

        • queue number

          Number of tasks in queue for the thread pool.

        • rejected number

          Number of tasks rejected by the thread pool executor.

        • threads number

          Number of threads in the thread pool.

    • script object
      Hide script attributes Show script attributes object
      • cache_evictions number

        Total number of times the script cache has evicted old data.

      • compilations number

        Total number of inline script compilations performed by the node.

      • compilations_history object

        Contains the recent history of script compilations.

        Hide compilations_history attribute Show compilations_history attribute object
        • * number Additional properties
      • compilation_limit_triggered number

        Total number of times the script compilation circuit breaker has limited inline script compilations.

      • contexts array[object]
        Hide contexts attributes Show contexts attributes object
        • context string
        • compilations number
        • cache_evictions number
        • compilation_limit_triggered number
GET /_info/_all
resp = client.cluster.info(
    target="_all",
)
const response = await client.cluster.info({
  target: "_all",
});
response = client.cluster.info(
  target: "_all"
)
$resp = $client->cluster()->info([
    "target" => "_all",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_info/_all"
client.cluster().info(i -> i
    .target("_all")
);
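
Because target accepts a comma-separated list (for example, http,ingest), a narrower request can also be made; a minimal sketch with the Python client:

resp = client.cluster.info(
    target=["http", "ingest"],
)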




Connector

The connector and sync jobs APIs provide a convenient way to create and manage Elastic connectors and sync jobs in an internal index. Connectors are Elasticsearch integrations for syncing content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure. This API provides an alternative to relying solely on the Kibana UI for connector and sync job management. The API comes with a set of validations and assertions to ensure that the state representation in the internal index remains valid. This API requires the manage_connector privilege or, for read-only endpoints, the monitor_connector privilege.

Check out the connector API tutorial








Create or update a connector Beta

PUT /_connector/{connector_id}

All methods and paths for this operation:

PUT /_connector

PUT /_connector/{connector_id}

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be created or updated. ID is auto-generated if not provided.

application/json

Body

  • description string
  • index_name string
  • is_native boolean
  • language string
  • name string
  • service_type string

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

    • id string Required
PUT _connector/my-connector
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "service_type": "google_drive"
}
resp = client.connector.put(
    connector_id="my-connector",
    index_name="search-google-drive",
    name="My Connector",
    service_type="google_drive",
)
const response = await client.connector.put({
  connector_id: "my-connector",
  index_name: "search-google-drive",
  name: "My Connector",
  service_type: "google_drive",
});
response = client.connector.put(
  connector_id: "my-connector",
  body: {
    "index_name": "search-google-drive",
    "name": "My Connector",
    "service_type": "google_drive"
  }
)
$resp = $client->connector()->put([
    "connector_id" => "my-connector",
    "body" => [
        "index_name" => "search-google-drive",
        "name" => "My Connector",
        "service_type" => "google_drive",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index_name":"search-google-drive","name":"My Connector","service_type":"google_drive"}' "$ELASTICSEARCH_URL/_connector/my-connector"
client.connector().put(p -> p
    .connectorId("my-connector")
    .indexName("search-google-drive")
    .name("My Connector")
    .serviceType("google_drive")
);
Request examples
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "service_type": "google_drive"
}
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "description": "My Connector to sync data to Elastic index from Google Drive",
  "service_type": "google_drive",
  "language": "english"
}
Response examples (200)
{
  "result": "created",
  "id": "my-connector"
}
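
Because the connector ID is auto-generated when it is omitted, the collection-level PUT /_connector variant can also be used; a minimal sketch with the Python client, simply leaving connector_id out:

resp = client.connector.put(
    index_name="search-google-drive",
    name="My Connector",
    service_type="google_drive",
)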

Delete a connector Beta

DELETE /_connector/{connector_id}

Removes a connector and associated sync jobs. This is a destructive action that is not recoverable. NOTE: This action doesn’t delete any API keys, ingest pipelines, or data indices associated with the connector. These need to be removed manually.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be deleted

Query parameters

  • delete_sync_jobs boolean

    A flag indicating if associated sync jobs should also be removed. Defaults to false.

  • hard boolean

    A flag indicating if the connector should be hard deleted.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_connector/{connector_id}
DELETE _connector/my-connector-id?delete_sync_jobs=true
resp = client.connector.delete(
    connector_id="my-connector-id",
    delete_sync_jobs=True,
)
const response = await client.connector.delete({
  connector_id: "my-connector-id",
  delete_sync_jobs: true,
});
response = client.connector.delete(
  connector_id: "my-connector-id",
  delete_sync_jobs: true
)
$resp = $client->connector()->delete([
    "connector_id" => "my-connector-id",
    "delete_sync_jobs" => "true",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/my-connector-id?delete_sync_jobs=true"
client.connector().delete(d -> d
    .connectorId("my-connector-id")
    .deleteSyncJobs(true)
);
Response examples (200)
{
    "acknowledged": true
}








Cancel a connector sync job Beta

PUT /_connector/_sync_job/{connector_sync_job_id}/_cancel

Cancel a connector sync job, which sets the status to canceling and updates cancelation_requested_at to the current time. The connector service is then responsible for setting the status of connector sync jobs to canceled.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/_sync_job/{connector_sync_job_id}/_cancel
PUT _connector/_sync_job/my-connector-sync-job-id/_cancel
resp = client.connector.sync_job_cancel(
    connector_sync_job_id="my-connector-sync-job-id",
)
const response = await client.connector.syncJobCancel({
  connector_sync_job_id: "my-connector-sync-job-id",
});
response = client.connector.sync_job_cancel(
  connector_sync_job_id: "my-connector-sync-job-id"
)
$resp = $client->connector()->syncJobCancel([
    "connector_sync_job_id" => "my-connector-sync-job-id",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job-id/_cancel"
client.connector().syncJobCancel(s -> s
    .connectorSyncJobId("my-connector-sync-job-id")
);

Get a connector sync job Beta

GET /_connector/_sync_job/{connector_sync_job_id}

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • cancelation_requested_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • canceled_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • completed_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • connector object Required
      Hide connector attributes Show connector attributes object
      • configuration object Required
        Hide configuration attribute Show configuration attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • category string
          • default_value number | string | boolean | null Required

            A scalar value.

          • depends_on array[object] Required
            Hide depends_on attributes Show depends_on attributes object
            • field string Required
            • value
          • display string Required

            Values are textbox, textarea, numeric, toggle, or dropdown.

          • label string Required
          • options array[object] Required
            Hide options attributes Show options attributes object
            • label string Required
            • value
          • order number
          • placeholder string
          • required boolean Required
          • sensitive boolean Required
          • type string

            Values are str, int, list, or bool.

          • ui_restrictions array[string]
          • validations array[object]
          • value object Required
      • filtering object Required
        Hide filtering attributes Show filtering attributes object
        • advanced_snippet object Required
          Hide advanced_snippet attributes Show advanced_snippet attributes object
          • created_at string | number

            A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

          • updated_at string | number

            A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

          • value object Required
        • rules array[object] Required
          Hide rules attributes Show rules attributes object
          • created_at string
          • field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • id string Required
          • order number Required
          • policy string Required

            Values are exclude or include.

          • rule string Required

            Values are contains, ends_with, equals, regex, starts_with, >, or <.

          • updated_at string
          • value string Required
        • validation object Required
          Hide validation attributes Show validation attributes object
          • errors array[object] Required
            Hide errors attributes Show errors attributes object
            • ids array[string] Required
            • messages array[string] Required
          • state string Required

            Values are edited, invalid, or valid.

      • id string Required
      • index_name string Required
      • language string
      • pipeline object
        Hide pipeline attributes Show pipeline attributes object
        • extract_binary_content boolean Required
        • name string Required
        • reduce_whitespace boolean Required
        • run_ml_inference boolean Required
      • service_type string Required
      • sync_cursor object
    • created_at string | number Required

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • deleted_document_count number Required
    • error string
    • id string Required
    • indexed_document_count number Required
    • indexed_document_volume number Required
    • job_type string Required

      Values are full, incremental, or access_control.

    • last_seen string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • metadata object Required
      Hide metadata attribute Show metadata attribute object
      • * object Additional properties
    • started_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • status string Required

      Values are canceling, canceled, completed, error, in_progress, pending, or suspended.

    • total_document_count number Required
    • trigger_method string Required

      Values are on_demand or scheduled.

    • worker_hostname string
GET /_connector/_sync_job/{connector_sync_job_id}
GET _connector/_sync_job/my-connector-sync-job
resp = client.connector.sync_job_get(
    connector_sync_job_id="my-connector-sync-job",
)
const response = await client.connector.syncJobGet({
  connector_sync_job_id: "my-connector-sync-job",
});
response = client.connector.sync_job_get(
  connector_sync_job_id: "my-connector-sync-job"
)
$resp = $client->connector()->syncJobGet([
    "connector_sync_job_id" => "my-connector-sync-job",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job"
client.connector().syncJobGet(s -> s
    .connectorSyncJobId("my-connector-sync-job")
);

Delete a connector sync job Beta

DELETE /_connector/_sync_job/{connector_sync_job_id}

Remove a connector sync job and its associated data. This is a destructive action that is not recoverable.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job to be deleted

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_connector/_sync_job/{connector_sync_job_id}
DELETE _connector/_sync_job/my-connector-sync-job-id
resp = client.connector.sync_job_delete(
    connector_sync_job_id="my-connector-sync-job-id",
)
const response = await client.connector.syncJobDelete({
  connector_sync_job_id: "my-connector-sync-job-id",
});
response = client.connector.sync_job_delete(
  connector_sync_job_id: "my-connector-sync-job-id"
)
$resp = $client->connector()->syncJobDelete([
    "connector_sync_job_id" => "my-connector-sync-job-id",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job-id"
client.connector().syncJobDelete(s -> s
    .connectorSyncJobId("my-connector-sync-job-id")
);
Response examples (200)
{
  "acknowledged": true
}



Update the connector error field Technical preview

PUT /_connector/{connector_id}/_error

Set the error field for the connector. If the error provided in the request body is non-null, the connector’s status is updated to error. Otherwise, if the error is reset to null, the connector status is updated to connected.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • error string | null Required


Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_error
PUT _connector/my-connector/_error
{
    "error": "Houston, we have a problem!"
}
resp = client.connector.update_error(
    connector_id="my-connector",
    error="Houston, we have a problem!",
)
const response = await client.connector.updateError({
  connector_id: "my-connector",
  error: "Houston, we have a problem!",
});
response = client.connector.update_error(
  connector_id: "my-connector",
  body: {
    "error": "Houston, we have a problem!"
  }
)
$resp = $client->connector()->updateError([
    "connector_id" => "my-connector",
    "body" => [
        "error" => "Houston, we have a problem!",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"error":"Houston, we have a problem!"}' "$ELASTICSEARCH_URL/_connector/my-connector/_error"
client.connector().updateError(u -> u
    .connectorId("my-connector")
    .error("Houston, we have a problem!")
);
Request example
{
    "error": "Houston, we have a problem!"
}
Response examples (200)
{
  "result": "updated"
}




Update the connector draft filtering validation Technical preview

PUT /_connector/{connector_id}/_filtering/_validation

Update the draft filtering validation info for a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • validation object Required
    Hide validation attributes Show validation attributes object
    • errors array[object] Required
      Hide errors attributes Show errors attributes object
      • ids array[string] Required
      • messages array[string] Required
    • state string Required

      Values are edited, invalid, or valid.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_filtering/_validation
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_filtering/_validation' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"validation":{"errors":[{"ids":["string"],"messages":["string"]}],"state":"edited"}}'
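
Only the generic call is published for this endpoint, so here is a minimal sketch with the Python client; the connector ID is hypothetical and the validation payload simply mirrors the body schema above:

resp = client.connector.update_filtering_validation(
    connector_id="my-connector",
    validation={
        "errors": [],
        "state": "valid"
    },
)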

Update the connector index name Beta

PUT /_connector/{connector_id}/_index_name

Update the index_name field of a connector, specifying the index where the data ingested by the connector is stored.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • index_name string | null Required


Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_index_name
PUT _connector/my-connector/_index_name
{
    "index_name": "data-from-my-google-drive"
}
resp = client.connector.update_index_name(
    connector_id="my-connector",
    index_name="data-from-my-google-drive",
)
const response = await client.connector.updateIndexName({
  connector_id: "my-connector",
  index_name: "data-from-my-google-drive",
});
response = client.connector.update_index_name(
  connector_id: "my-connector",
  body: {
    "index_name": "data-from-my-google-drive"
  }
)
$resp = $client->connector()->updateIndexName([
    "connector_id" => "my-connector",
    "body" => [
        "index_name" => "data-from-my-google-drive",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index_name":"data-from-my-google-drive"}' "$ELASTICSEARCH_URL/_connector/my-connector/_index_name"
client.connector().updateIndexName(u -> u
    .connectorId("my-connector")
    .indexName("data-from-my-google-drive")
);
Request example
{
    "index_name": "data-from-my-google-drive"
}
Response examples (200)
{
  "result": "updated"
}

Update the connector name and description Beta

PUT /_connector/{connector_id}/_name

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • name string
  • description string

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_name
PUT _connector/my-connector/_name
{
    "name": "Custom connector",
    "description": "This is my customized connector"
}
resp = client.connector.update_name(
    connector_id="my-connector",
    name="Custom connector",
    description="This is my customized connector",
)
const response = await client.connector.updateName({
  connector_id: "my-connector",
  name: "Custom connector",
  description: "This is my customized connector",
});
response = client.connector.update_name(
  connector_id: "my-connector",
  body: {
    "name": "Custom connector",
    "description": "This is my customized connector"
  }
)
$resp = $client->connector()->updateName([
    "connector_id" => "my-connector",
    "body" => [
        "name" => "Custom connector",
        "description" => "This is my customized connector",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"name":"Custom connector","description":"This is my customized connector"}' "$ELASTICSEARCH_URL/_connector/my-connector/_name"
client.connector().updateName(u -> u
    .connectorId("my-connector")
    .description("This is my customized connector")
    .name("Custom connector")
);
Request example
{
    "name": "Custom connector",
    "description": "This is my customized connector"
}
Response examples (200)
{
  "result": "updated"
}

Update the connector is_native flag Beta

PUT /_connector/{connector_id}/_native

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • is_native boolean Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_native
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_native' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"is_native":true}'
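
As with the draft filtering validation endpoint, only the generic call is published; a minimal sketch with the Python client, assuming a hypothetical connector ID:

resp = client.connector.update_native(
    connector_id="my-connector",
    is_native=True,
)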




Update the connector scheduling Beta

PUT /_connector/{connector_id}/_scheduling

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • scheduling object Required
    Hide scheduling attributes Show scheduling attributes object
    • access_control object
      Hide access_control attributes Show access_control attributes object
      • enabled boolean Required
      • interval string Required

        The interval is expressed using the crontab syntax

    • full object
      Hide full attributes Show full attributes object
      • enabled boolean Required
      • interval string Required

        The interval is expressed using the crontab syntax

    • incremental object
      Hide incremental attributes Show incremental attributes object
      • enabled boolean Required
      • interval string Required

        The interval is expressed using the crontab syntax

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_scheduling
PUT _connector/my-connector/_scheduling
{
    "scheduling": {
        "access_control": {
            "enabled": true,
            "interval": "0 10 0 * * ?"
        },
        "full": {
            "enabled": true,
            "interval": "0 20 0 * * ?"
        },
        "incremental": {
            "enabled": false,
            "interval": "0 30 0 * * ?"
        }
    }
}
resp = client.connector.update_scheduling(
    connector_id="my-connector",
    scheduling={
        "access_control": {
            "enabled": True,
            "interval": "0 10 0 * * ?"
        },
        "full": {
            "enabled": True,
            "interval": "0 20 0 * * ?"
        },
        "incremental": {
            "enabled": False,
            "interval": "0 30 0 * * ?"
        }
    },
)
const response = await client.connector.updateScheduling({
  connector_id: "my-connector",
  scheduling: {
    access_control: {
      enabled: true,
      interval: "0 10 0 * * ?",
    },
    full: {
      enabled: true,
      interval: "0 20 0 * * ?",
    },
    incremental: {
      enabled: false,
      interval: "0 30 0 * * ?",
    },
  },
});
response = client.connector.update_scheduling(
  connector_id: "my-connector",
  body: {
    "scheduling": {
      "access_control": {
        "enabled": true,
        "interval": "0 10 0 * * ?"
      },
      "full": {
        "enabled": true,
        "interval": "0 20 0 * * ?"
      },
      "incremental": {
        "enabled": false,
        "interval": "0 30 0 * * ?"
      }
    }
  }
)
$resp = $client->connector()->updateScheduling([
    "connector_id" => "my-connector",
    "body" => [
        "scheduling" => [
            "access_control" => [
                "enabled" => true,
                "interval" => "0 10 0 * * ?",
            ],
            "full" => [
                "enabled" => true,
                "interval" => "0 20 0 * * ?",
            ],
            "incremental" => [
                "enabled" => false,
                "interval" => "0 30 0 * * ?",
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"scheduling":{"access_control":{"enabled":true,"interval":"0 10 0 * * ?"},"full":{"enabled":true,"interval":"0 20 0 * * ?"},"incremental":{"enabled":false,"interval":"0 30 0 * * ?"}}}' "$ELASTICSEARCH_URL/_connector/my-connector/_scheduling"
client.connector().updateScheduling(u -> u
    .connectorId("my-connector")
    .scheduling(s -> s
        .accessControl(a -> a
            .enabled(true)
            .interval("0 10 0 * * ?")
        )
        .full(f -> f
            .enabled(true)
            .interval("0 20 0 * * ?")
        )
        .incremental(i -> i
            .enabled(false)
            .interval("0 30 0 * * ?")
        )
    )
);
Request examples
{
    "scheduling": {
        "access_control": {
            "enabled": true,
            "interval": "0 10 0 * * ?"
        },
        "full": {
            "enabled": true,
            "interval": "0 20 0 * * ?"
        },
        "incremental": {
            "enabled": false,
            "interval": "0 30 0 * * ?"
        }
    }
}
{
    "scheduling": {
        "full": {
            "enabled": true,
            "interval": "0 10 0 * * ?"
        }
    }
}
Response examples (200)
{
  "result": "updated"
}

Update the connector status Technical preview

PUT /_connector/{connector_id}/_status

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • status string Required

    Values are created, needs_configuration, configured, connected, or error.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_status
PUT _connector/my-connector/_status
{
    "status": "needs_configuration"
}
resp = client.connector.update_status(
    connector_id="my-connector",
    status="needs_configuration",
)
const response = await client.connector.updateStatus({
  connector_id: "my-connector",
  status: "needs_configuration",
});
response = client.connector.update_status(
  connector_id: "my-connector",
  body: {
    "status": "needs_configuration"
  }
)
$resp = $client->connector()->updateStatus([
    "connector_id" => "my-connector",
    "body" => [
        "status" => "needs_configuration",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"status":"needs_configuration"}' "$ELASTICSEARCH_URL/_connector/my-connector/_status"
client.connector().updateStatus(u -> u
    .connectorId("my-connector")
    .status(ConnectorStatus.NeedsConfiguration)
);
Request example
{
    "status": "needs_configuration"
}
Response examples (200)
{
  "result": "updated"
}

Get data streams Generally available

GET /_data_stream/{name}

All methods and paths for this operation:

GET /_data_stream

GET /_data_stream/{name}

Get information about one or more data streams.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • name string | array[string]

    Comma-separated list of data stream names used to limit the request. Wildcard (*) expressions are supported. If omitted, all data streams are returned.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • include_defaults boolean Generally available

    If true, returns all relevant default configurations for the index template.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • verbose boolean

    Whether the maximum timestamp for each data stream should be calculated and returned.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • allow_custom_routing boolean

If true, the data stream allows custom routing on write requests.

      • failure_store object
        Hide failure_store attributes Show failure_store attributes object
        • enabled boolean Required
        • indices array[object] Required
          Hide indices attributes Show indices attributes object
          • index_name string Required
          • index_uuid string Required
          • ilm_policy string
          • managed_by string

            Values are Index Lifecycle Management, Data stream lifecycle, or Unmanaged.

          • prefer_ilm boolean

            Indicates if ILM should take precedence over DSL in case both are configured to manage this index.

          • index_mode string

            Values are standard, time_series, logsdb, or lookup.

        • rollover_on_write boolean Required
      • generation number Required

        Current generation for the data stream. This number acts as a cumulative count of the stream’s rollovers, starting at 1.

      • hidden boolean Required

        If true, the data stream is hidden.

      • ilm_policy string
      • next_generation_managed_by string Required

        Values are Index Lifecycle Management, Data stream lifecycle, or Unmanaged.

      • prefer_ilm boolean Required

Indicates if ILM should take precedence over DSL in case both are configured to manage this data stream.

      • indices array[object] Required

        Array of objects containing information about the data stream’s backing indices. The last item in this array contains information about the stream’s current write index.

        Hide indices attributes Show indices attributes object
        • index_name string Required
        • index_uuid string Required
        • ilm_policy string
        • managed_by string

          Values are Index Lifecycle Management, Data stream lifecycle, or Unmanaged.

        • prefer_ilm boolean

          Indicates if ILM should take precedence over DSL in case both are configured to manage this index.

        • index_mode string

          Values are standard, time_series, logsdb, or lookup.

      • lifecycle object

        Data stream lifecycle with rollover can be used to display the configuration including the default rollover conditions, if asked.

        Hide lifecycle attributes Show lifecycle attributes object
        • data_retention string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • downsampling object
          Hide downsampling attribute Show downsampling attribute object
          • rounds array[object] Required

            The list of downsampling rounds to execute as part of this downsampling configuration

        • enabled boolean

          If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

          Default value is true.

        • rollover object
          Hide rollover attributes Show rollover attributes object
          • min_age string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • max_age string
          • min_docs number
          • max_docs number
          • min_size
          • max_size
          • min_primary_shard_size
          • max_primary_shard_size
          • min_primary_shard_docs number
          • max_primary_shard_docs number
      • name string Required
      • replicated boolean

        If true, the data stream is created and managed by cross-cluster replication and the local cluster can not write into this data stream or change its mappings.

      • rollover_on_write boolean Required

        If true, the next write to this data stream will trigger a rollover first and the document will be indexed in the new backing index. If the rollover fails the indexing request will fail too.

      • settings object Required Additional properties
        Index settings
      • mappings object
        Hide mappings attributes Show mappings attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
          • mode string

            Values are disabled, stored, or synthetic.

        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

            • input_field string

Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_field string

Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_index string
            • script object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
      • status string Required

        Values are green, GREEN, yellow, YELLOW, red, RED, unknown, or unavailable.

      • system boolean Generally available

        If true, the data stream is created and managed by an Elastic stack component and cannot be modified through normal user interaction.

      • template string Required
      • timestamp_field object Required
        Hide timestamp_field attribute Show timestamp_field attribute object
        • name string Required

Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • index_mode string

        Values are standard, time_series, logsdb, or lookup.

GET _data_stream/my-data-stream
resp = client.indices.get_data_stream(
    name="my-data-stream",
)
const response = await client.indices.getDataStream({
  name: "my-data-stream",
});
response = client.indices.get_data_stream(
  name: "my-data-stream"
)
$resp = $client->indices()->getDataStream([
    "name" => "my-data-stream",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/my-data-stream"
client.indices().getDataStream(g -> g
    .name("my-data-stream")
);
Response examples (200)
A successful response for retrieving information about a data stream.
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-2099.03.07-000001",
          "index_uuid": "xCEhwsp8Tey0-FLNFYVwSg",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        },
        {
          "index_name": ".ds-my-data-stream-2099.03.08-000002",
          "index_uuid": "PA_JquKGSiKcAKBA8DJ5gw",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        }
      ],
      "generation": 2,
      "_meta": {
        "my-meta-field": "foo"
      },
      "status": "GREEN",
      "next_generation_managed_by": "Index Lifecycle Management",
      "prefer_ilm": true,
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false,
      "rollover_on_write": false
    },
    {
      "name": "my-data-stream-two",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-two-2099.03.08-000001",
          "index_uuid": "3liBu2SYS5axasRt6fUIpA",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        }
      ],
      "generation": 1,
      "_meta": {
        "my-meta-field": "foo"
      },
      "status": "YELLOW",
      "next_generation_managed_by": "Index Lifecycle Management",
      "prefer_ilm": true,
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false,
      "rollover_on_write": false
    }
  ]
}

Create a data stream Generally available

PUT /_data_stream/{name}

You must have a matching index template with data stream enabled.

Required authorization

  • Index privileges: create_index

Path parameters

  • name string Required

    Name of the data stream, which must meet the following criteria: Lowercase only; Cannot include \, /, *, ?, ", <, >, |, ,, #, :, or a space character; Cannot start with -, _, +, or .ds-; Cannot be . or ..; Cannot be longer than 255 bytes. Multi-byte characters count towards this limit faster.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT _data_stream/logs-foo-bar
resp = client.indices.create_data_stream(
    name="logs-foo-bar",
)
const response = await client.indices.createDataStream({
  name: "logs-foo-bar",
});
response = client.indices.create_data_stream(
  name: "logs-foo-bar"
)
$resp = $client->indices()->createDataStream([
    "name" => "logs-foo-bar",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/logs-foo-bar"
client.indices().createDataStream(c -> c
    .name("logs-foo-bar")
);

Get the status for a data stream lifecycle Generally available

GET /{index}/_lifecycle/explain

Get information about an index or data stream's current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.

External documentation

Path parameters

  • index string | array[string] Required

    The name of the index to explain

Query parameters

  • include_defaults boolean

Indicates whether the API should return the default values the system uses for the index's lifecycle.

  • master_timeout string

Specify the timeout for the connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • indices object Required
      Hide indices attribute Show indices attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • index string Required
        • managed_by_lifecycle boolean Required
        • index_creation_date_millis number

          Time unit for milliseconds

        • time_since_index_creation string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • rollover_date_millis number

          Time unit for milliseconds

        • time_since_rollover string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • lifecycle object

          Data stream lifecycle with rollover can be used to display the configuration including the default rollover conditions, if asked.

          Hide lifecycle attributes Show lifecycle attributes object
          • data_retention string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • downsampling object
            Hide downsampling attribute Show downsampling attribute object
            • rounds array[object] Required

              The list of downsampling rounds to execute as part of this downsampling configuration

          • enabled boolean

            If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

            Default value is true.

          • rollover object
            Hide rollover attributes Show rollover attributes object
            • min_age string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • max_age string
            • min_docs number
            • max_docs number
            • min_size
            • max_size
            • min_primary_shard_size
            • max_primary_shard_size
            • min_primary_shard_docs number
            • max_primary_shard_docs number
        • generation_time string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • error string
GET .ds-metrics-2023.03.22-000001/_lifecycle/explain
resp = client.indices.explain_data_lifecycle(
    index=".ds-metrics-2023.03.22-000001",
)
const response = await client.indices.explainDataLifecycle({
  index: ".ds-metrics-2023.03.22-000001",
});
response = client.indices.explain_data_lifecycle(
  index: ".ds-metrics-2023.03.22-000001"
)
$resp = $client->indices()->explainDataLifecycle([
    "index" => ".ds-metrics-2023.03.22-000001",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/.ds-metrics-2023.03.22-000001/_lifecycle/explain"
client.indices().explainDataLifecycle(e -> e
    .index(".ds-metrics-2023.03.22-000001")
);
Response examples (200)
A successful response from `GET .ds-metrics-2023.03.22-000001/_lifecycle/explain`, which retrieves the lifecycle status for a data stream backing index. If the index is managed by a data stream lifecycle, the API will show the `managed_by_lifecycle` field set to `true` and the rest of the response will contain information about the lifecycle execution status for this index.
{
  "indices": {
    ".ds-metrics-2023.03.22-000001": {
      "index" : ".ds-metrics-2023.03.22-000001",
      "managed_by_lifecycle" : true,
      "index_creation_date_millis" : 1679475563571,
      "time_since_index_creation" : "843ms",
      "rollover_date_millis" : 1679475564293,
      "time_since_rollover" : "121ms",
      "lifecycle" : { },
      "generation_time" : "121ms"
    }
  }
}
The API reports any errors related to the lifecycle execution for the target index.
{
  "indices": {
    ".ds-metrics-2023.03.22-000001": {
      "index" : ".ds-metrics-2023.03.22-000001",
      "managed_by_lifecycle" : true,
      "index_creation_date_millis" : 1679475563571,
      "time_since_index_creation" : "843ms",
      "lifecycle" : {
        "enabled": true
      },
      "error": "{\"type\":\"validation_exception\",\"reason\":\"Validation Failed: 1: this action would add [2] shards, but this cluster
currently has [4]/[3] maximum normal shards open;\"}"
  }
}

Update data streams Generally available

POST /_data_stream/_modify

Performs one or more data stream modification actions in a single atomic operation.

application/json

Body Required

  • actions array[object] Required

    Actions to perform.

    Hide actions attributes Show actions attributes object
    • add_backing_index object
      Hide add_backing_index attributes Show add_backing_index attributes object
      • data_stream string Required
      • index string Required
    • remove_backing_index object
      Hide remove_backing_index attributes Show remove_backing_index attributes object
      • data_stream string Required
      • index string Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST _data_stream/_modify
{
  "actions": [
    {
      "remove_backing_index": {
        "data_stream": "my-data-stream",
        "index": ".ds-my-data-stream-2023.07.26-000001"
      }
    },
    {
      "add_backing_index": {
        "data_stream": "my-data-stream",
        "index": ".ds-my-data-stream-2023.07.26-000001-downsample"
      }
    }
  ]
}
resp = client.indices.modify_data_stream(
    actions=[
        {
            "remove_backing_index": {
                "data_stream": "my-data-stream",
                "index": ".ds-my-data-stream-2023.07.26-000001"
            }
        },
        {
            "add_backing_index": {
                "data_stream": "my-data-stream",
                "index": ".ds-my-data-stream-2023.07.26-000001-downsample"
            }
        }
    ],
)
const response = await client.indices.modifyDataStream({
  actions: [
    {
      remove_backing_index: {
        data_stream: "my-data-stream",
        index: ".ds-my-data-stream-2023.07.26-000001",
      },
    },
    {
      add_backing_index: {
        data_stream: "my-data-stream",
        index: ".ds-my-data-stream-2023.07.26-000001-downsample",
      },
    },
  ],
});
response = client.indices.modify_data_stream(
  body: {
    "actions": [
      {
        "remove_backing_index": {
          "data_stream": "my-data-stream",
          "index": ".ds-my-data-stream-2023.07.26-000001"
        }
      },
      {
        "add_backing_index": {
          "data_stream": "my-data-stream",
          "index": ".ds-my-data-stream-2023.07.26-000001-downsample"
        }
      }
    ]
  }
)
$resp = $client->indices()->modifyDataStream([
    "body" => [
        "actions" => array(
            [
                "remove_backing_index" => [
                    "data_stream" => "my-data-stream",
                    "index" => ".ds-my-data-stream-2023.07.26-000001",
                ],
            ],
            [
                "add_backing_index" => [
                    "data_stream" => "my-data-stream",
                    "index" => ".ds-my-data-stream-2023.07.26-000001-downsample",
                ],
            ],
        ),
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"actions":[{"remove_backing_index":{"data_stream":"my-data-stream","index":".ds-my-data-stream-2023.07.26-000001"}},{"add_backing_index":{"data_stream":"my-data-stream","index":".ds-my-data-stream-2023.07.26-000001-downsample"}}]}' "$ELASTICSEARCH_URL/_data_stream/_modify"
client.indices().modifyDataStream(m -> m
    .actions(List.of(Action.of(a -> a
            .removeBackingIndex(r -> r
                .dataStream("my-data-stream")
                .index(".ds-my-data-stream-2023.07.26-000001")
        )),Action.of(ac -> ac
            .addBackingIndex(ad -> ad
                .dataStream("my-data-stream")
                .index(".ds-my-data-stream-2023.07.26-000001-downsample")
        ))))
);
Request example
An example body for a `POST _data_stream/_modify` request.
{
  "actions": [
    {
      "remove_backing_index": {
        "data_stream": "my-data-stream",
        "index": ".ds-my-data-stream-2023.07.26-000001"
      }
    },
    {
      "add_backing_index": {
        "data_stream": "my-data-stream",
        "index": ".ds-my-data-stream-2023.07.26-000001-downsample"
      }
    }
  ]
}

Bulk index or delete documents Generally available

PUT /{index}/_bulk

All methods and paths for this operation:

POST /_bulk

PUT /_bulk
POST /{index}/_bulk
PUT /{index}/_bulk

Perform multiple index, create, delete, and update actions in a single request. This reduces overhead and can greatly increase indexing speed.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:

  • To use the create action, you must have the create_doc, create, index, or write index privilege. Data streams support only the create action.
  • To use the index action, you must have the create, index, or write index privilege.
  • To use the delete action, you must have the delete or write index privilege.
  • To use the update action, you must have the index or write index privilege.
  • To automatically create a data stream or index with a bulk API request, you must have the auto_configure, create_index, or manage index privilege.
  • To make the result of a bulk operation visible to search using the refresh parameter, you must have the maintenance or manage index privilege.

Automatic data stream creation requires a matching index template with data stream enabled.

The actions are specified in the request body using a newline delimited JSON (NDJSON) structure:

action_and_meta_data\n
optional_source\n
action_and_meta_data\n
optional_source\n
....
action_and_meta_data\n
optional_source\n

The index and create actions expect a source on the next line and have the same semantics as the op_type parameter in the standard index API. A create action fails if a document with the same ID already exists in the target. An index action adds or replaces a document as necessary.

NOTE: Data streams support only the create action. To update or delete a document in a data stream, you must target the backing index containing the document.

An update action expects that the partial doc, upsert, and script and its options are specified on the next line.

A delete action does not expect a source on the next line and has the same semantics as the standard delete API.

NOTE: The final line of data must end with a newline character (\n). Each newline character may be preceded by a carriage return (\r). When sending NDJSON data to the _bulk endpoint, use a Content-Type header of application/json or application/x-ndjson. Because this format uses literal newline characters (\n) as delimiters, make sure that the JSON actions and sources are not pretty printed.
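
As a minimal sketch of these formatting rules (not part of the official examples), the following Python builds a valid NDJSON bulk body with compact JSON and the required trailing newline:

import json

actions = [
    {"index": {"_index": "test", "_id": "1"}},
    {"field1": "value1"},
]
# One compact JSON object per line; the final line must also end with a newline.
body = "\n".join(json.dumps(action, separators=(",", ":")) for action in actions) + "\n"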

If you provide a target in the request path, it is used for any actions that don't explicitly specify an _index argument.

A note on the format: the idea here is to make processing as fast as possible. As some of the actions are redirected to other shards on other nodes, only action_meta_data is parsed on the receiving node side.

Client libraries that use this protocol should strive to do something similar on the client side and reduce buffering as much as possible.

There is no "correct" number of actions to perform in a single bulk request. Experiment with different settings to find the optimal size for your particular workload. Note that Elasticsearch limits the maximum size of an HTTP request to 100mb by default, so clients must ensure that no request exceeds this size. It is not possible to index a single document that exceeds the size limit, so you must pre-process any such documents into smaller pieces before sending them to Elasticsearch. For instance, split documents into pages or chapters before indexing them, or store raw binary data in a system outside Elasticsearch and replace the raw data with a link to the external system in the documents that you send to Elasticsearch.

Client support for bulk requests

Some of the officially supported clients provide helpers to assist with bulk requests and reindexing:

  • Go: Check out esutil.BulkIndexer
  • Perl: Check out Search::Elasticsearch::Client::5_0::Bulk and Search::Elasticsearch::Client::5_0::Scroll
  • Python: Check out elasticsearch.helpers.* (see the sketch after this list)
  • JavaScript: Check out client.helpers.*
  • .NET: Check out BulkAllObservable
  • PHP: Check out bulk indexing.
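
For instance, a minimal sketch of the Python helper mentioned above, which chunks actions into bulk requests for you (the endpoint and documents are illustrative):

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

client = Elasticsearch("http://localhost:9200")  # illustrative endpoint

# Each action is a document dict; metadata such as _index and _id is inlined.
actions = ({"_index": "test", "_id": str(i), "field1": f"value{i}"} for i in range(1000))
successes, errors = bulk(client, actions)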

Submitting bulk requests with cURL

If you're providing text file input to curl, you must use the --data-binary flag instead of plain -d. The latter doesn't preserve newlines. For example:

$ cat requests
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
$ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@requests"; echo
{"took":7, "errors": false, "items":[{"index":{"_index":"test","_id":"1","_version":1,"result":"created","forced_refresh":false}}]}

Optimistic concurrency control

Each index and delete action within a bulk API call may include the if_seq_no and if_primary_term parameters in their respective action and meta data lines. The if_seq_no and if_primary_term parameters control how operations are run, based on the last modification to existing documents. See Optimistic concurrency control for more details.
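
For example, a sketch of an index action that carries these parameters in its metadata line, using the operations form of the Python client shown elsewhere on this page (the sequence number and primary term values are illustrative):

resp = client.bulk(
    operations=[
        {"index": {"_index": "test", "_id": "1", "if_seq_no": 3, "if_primary_term": 1}},
        {"field1": "updated value"},
    ],
)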

Versioning

Each bulk item can include the version value using the version field. It automatically follows the behavior of the index or delete operation based on the _version mapping. It also supports the version_type parameter.
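
For example, a sketch that indexes a document only if the externally managed version is higher than the stored one (the version value is illustrative):

resp = client.bulk(
    operations=[
        {"index": {"_index": "test", "_id": "1", "version": 5, "version_type": "external"}},
        {"field1": "value1"},
    ],
)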

Routing

Each bulk item can include the routing value using the routing field. It automatically follows the behavior of the index or delete operation based on the _routing mapping.

NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.
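
For example, a sketch that routes a single bulk item with a custom routing value (the value is illustrative):

resp = client.bulk(
    operations=[
        {"index": {"_index": "test", "_id": "1", "routing": "user-123"}},
        {"field1": "value1"},
    ],
)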

Wait for active shards

When making bulk calls, you can set the wait_for_active_shards parameter to require a minimum number of shard copies to be active before starting to process the bulk request.
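
For example, a sketch that requires at least two active copies of each relevant shard before the bulk request proceeds:

resp = client.bulk(
    wait_for_active_shards=2,
    operations=[
        {"index": {"_index": "test", "_id": "1"}},
        {"field1": "value1"},
    ],
)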

Refresh

Control when the changes made by this request are visible to search.

NOTE: Only the shards that receive the bulk request will be affected by refresh. Imagine a _bulk?refresh=wait_for request with three documents in it that happen to be routed to different shards in an index with five shards. The request will only wait for those three shards to refresh. The other two shards that make up the index do not participate in the _bulk request at all.

You might want to disable the refresh interval temporarily to improve indexing throughput for large bulk requests. Refer to the linked documentation for step-by-step instructions using the index settings API.
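
For example, a sketch that waits until a refresh makes the bulk changes visible to search before returning:

resp = client.bulk(
    refresh="wait_for",
    operations=[
        {"index": {"_index": "test", "_id": "1"}},
        {"field1": "value1"},
    ],
)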

External documentation

Path parameters

  • index string Required

    The name of the data stream, index, or index alias to perform bulk actions on.

Query parameters

  • include_source_on_error boolean

If true, the document source is included in the error message if there are parsing errors.

  • list_executed_pipelines boolean

    If true, the response will include the ingest pipelines that were run for each index or create.

  • pipeline string

    The pipeline identifier to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, wait for a refresh to make this operation visible to search. If false, do nothing with refreshes. Valid values: true, false, wait_for.

    Values are true, false, or wait_for.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or contains a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • timeout string

    The period each action waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards. The default is 1m (one minute), which guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default is 1, which waits for each primary shard to be active.

    Values are all or index-setting.

  • require_alias boolean

    If true, the request's actions must target an index alias.

  • require_data_stream boolean

    If true, the request's actions must target a data stream (existing or to be created).

application/json

Body object Required

One of:
  • index object
    Hide index attributes Show index attributes object
    • _id string
    • _index string
    • routing string
    • if_primary_term number
    • if_seq_no number
    • version number
    • version_type string

      Values are internal, external, external_gte, or force.

    • dynamic_templates object

      A map from the full name of fields to the name of dynamic templates. It defaults to an empty map. If a name matches a dynamic template, that template will be applied regardless of other match predicates defined in the template. If a field is already defined in the mapping, then this parameter won't be used.

      Hide dynamic_templates attribute Show dynamic_templates attribute object
      • * string Additional properties
    • pipeline string

      The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

    • require_alias boolean

      If true, the request's actions must target an index alias.

      Default value is false.

  • create object
    Hide create attributes Show create attributes object
    • _id string
    • _index string
    • routing string
    • if_primary_term number
    • if_seq_no number
    • version number
    • version_type string

      Values are internal, external, external_gte, or force.

    • dynamic_templates object

      A map from the full name of fields to the name of dynamic templates. It defaults to an empty map. If a name matches a dynamic template, that template will be applied regardless of other match predicates defined in the template. If a field is already defined in the mapping, then this parameter won't be used.

      Hide dynamic_templates attribute Show dynamic_templates attribute object
      • * string Additional properties
    • pipeline string

      The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.

    • require_alias boolean

      If true, the request's actions must target an index alias.

      Default value is false.

  • update object
    Hide update attributes Show update attributes object
    • _id string
    • _index string
    • routing string
    • if_primary_term number
    • if_seq_no number
    • version number
    • version_type string

      Values are internal, external, external_gte, or force.

    • require_alias boolean

      If true, the request's actions must target an index alias.

      Default value is false.

    • retry_on_conflict number

      The number of times an update should be retried in the case of a version conflict.

  • delete object
    Hide delete attributes Show delete attributes object
    • _id string
    • _index string
    • routing string
    • if_primary_term number
    • if_seq_no number
    • version number
    • version_type string

      Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • errors boolean Required

      If true, one or more of the operations in the bulk request did not complete successfully.

    • items array[object] Required

      The result of each operation in the bulk request, in the order they were submitted.

      Hide items attribute Show items attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • _id string | null

          The document ID associated with the operation.

        • _index string Required

          The name of the index associated with the operation. If the operation targeted a data stream, this is the backing index into which the document was written.

        • status number Required

          The HTTP status code returned for the operation.

        • failure_store string

          Values are not_applicable_or_unknown, used, not_enabled, or failed.

        • error object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Hide error attributes Show error attributes object
          • type string Required

            The type of error

          • reason string | null

            A human-readable explanation of the error, in English.

          • stack_trace string

            The server stack trace. Present only if the error_trace=true parameter was sent with the request.

          • caused_by object

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          • root_cause array[object]

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          • suppressed array[object]

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • _primary_term number

          The primary term assigned to the document for the operation. This property is returned only for successful operations.

        • result string

          The result of the operation. Successful values are created, deleted, and updated.

        • _seq_no number
        • _shards object
          Hide _shards attributes Show _shards attributes object
          • failed number Required
          • successful number Required
          • total number Required
          • failures array[object]
          • skipped number
        • _version number
        • forced_refresh boolean
        • get object
          Hide get attributes Show get attributes object
          • fields object
            Hide fields attribute Show fields attribute object
            • * object Additional properties
          • found boolean Required
          • _seq_no number
          • _primary_term number
          • _routing string
          • _source object
            Hide _source attribute Show _source attribute object
            • * object Additional properties
    • took number Required

      The length of time, in milliseconds, it took to process the bulk request.

    • ingest_took number
POST _bulk
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_id" : "2" } }
{ "create" : { "_index" : "test", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_index" : "test"} }
{ "doc" : {"field2" : "value2"} }
resp = client.bulk(
    operations=[
        {
            "index": {
                "_index": "test",
                "_id": "1"
            }
        },
        {
            "field1": "value1"
        },
        {
            "delete": {
                "_index": "test",
                "_id": "2"
            }
        },
        {
            "create": {
                "_index": "test",
                "_id": "3"
            }
        },
        {
            "field1": "value3"
        },
        {
            "update": {
                "_id": "1",
                "_index": "test"
            }
        },
        {
            "doc": {
                "field2": "value2"
            }
        }
    ],
)
const response = await client.bulk({
  operations: [
    {
      index: {
        _index: "test",
        _id: "1",
      },
    },
    {
      field1: "value1",
    },
    {
      delete: {
        _index: "test",
        _id: "2",
      },
    },
    {
      create: {
        _index: "test",
        _id: "3",
      },
    },
    {
      field1: "value3",
    },
    {
      update: {
        _id: "1",
        _index: "test",
      },
    },
    {
      doc: {
        field2: "value2",
      },
    },
  ],
});
response = client.bulk(
  body: [
    {
      "index": {
        "_index": "test",
        "_id": "1"
      }
    },
    {
      "field1": "value1"
    },
    {
      "delete": {
        "_index": "test",
        "_id": "2"
      }
    },
    {
      "create": {
        "_index": "test",
        "_id": "3"
      }
    },
    {
      "field1": "value3"
    },
    {
      "update": {
        "_id": "1",
        "_index": "test"
      }
    },
    {
      "doc": {
        "field2": "value2"
      }
    }
  ]
)
$resp = $client->bulk([
    "body" => array(
        [
            "index" => [
                "_index" => "test",
                "_id" => "1",
            ],
        ],
        [
            "field1" => "value1",
        ],
        [
            "delete" => [
                "_index" => "test",
                "_id" => "2",
            ],
        ],
        [
            "create" => [
                "_index" => "test",
                "_id" => "3",
            ],
        ],
        [
            "field1" => "value3",
        ],
        [
            "update" => [
                "_id" => "1",
                "_index" => "test",
            ],
        ],
        [
            "doc" => [
                "field2" => "value2",
            ],
        ],
    ),
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '[{"index":{"_index":"test","_id":"1"}},{"field1":"value1"},{"delete":{"_index":"test","_id":"2"}},{"create":{"_index":"test","_id":"3"}},{"field1":"value3"},{"update":{"_id":"1","_index":"test"}},{"doc":{"field2":"value2"}}]' "$ELASTICSEARCH_URL/_bulk"
Request examples
Run `POST _bulk` to perform multiple operations.
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_id" : "2" } }
{ "create" : { "_index" : "test", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_index" : "test"} }
{ "doc" : {"field2" : "value2"} }
When you run `POST _bulk` and use the `update` action, you can use `retry_on_conflict` as a field in the action itself (not in the extra payload line) to specify how many times an update should be retried in the case of a version conflict.
{ "update" : {"_id" : "1", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"} }
{ "update" : { "_id" : "0", "_index" : "index1", "retry_on_conflict" : 3} }
{ "script" : { "source": "ctx._source.counter += params.param1", "lang" : "painless", "params" : {"param1" : 1}}, "upsert" : {"counter" : 1}}
{ "update" : {"_id" : "2", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"}, "doc_as_upsert" : true }
{ "update" : {"_id" : "3", "_index" : "index1", "_source" : true} }
{ "doc" : {"field" : "value"} }
{ "update" : {"_id" : "4", "_index" : "index1"} }
{ "doc" : {"field" : "value"}, "_source": true}
To return only information about failed operations, run `POST /_bulk?filter_path=items.*.error`.
{ "update": {"_id": "5", "_index": "index1"} }
{ "doc": {"my_field": "foo"} }
{ "update": {"_id": "6", "_index": "index1"} }
{ "doc": {"my_field": "foo"} }
{ "create": {"_id": "7", "_index": "index1"} }
{ "my_field": "foo" }
Run `POST /_bulk` to perform a bulk request that consists of index and create actions with the `dynamic_templates` parameter. The bulk request creates two new fields `work_location` and `home_location` with type `geo_point` according to the `dynamic_templates` parameter. However, the `raw_location` field is created using default dynamic mapping rules, as a text field in that case since it is supplied as a string in the JSON document.
{ "index" : { "_index" : "my_index", "_id" : "1", "dynamic_templates": {"work_location": "geo_point"}} }
{ "field" : "value1", "work_location": "41.12,-71.34", "raw_location": "41.12,-71.34"}
{ "create" : { "_index" : "my_index", "_id" : "2", "dynamic_templates": {"home_location": "geo_point"}} }
{ "field" : "value2", "home_location": "41.12,-71.34"}
Response examples (200)
{
   "took": 30,
   "errors": false,
   "items": [
      {
         "index": {
            "_index": "test",
            "_id": "1",
            "_version": 1,
            "result": "created",
            "_shards": {
               "total": 2,
               "successful": 1,
               "failed": 0
            },
            "status": 201,
            "_seq_no" : 0,
            "_primary_term": 1
         }
      },
      {
         "delete": {
            "_index": "test",
            "_id": "2",
            "_version": 1,
            "result": "not_found",
            "_shards": {
               "total": 2,
               "successful": 1,
               "failed": 0
            },
            "status": 404,
            "_seq_no" : 1,
            "_primary_term" : 2
         }
      },
      {
         "create": {
            "_index": "test",
            "_id": "3",
            "_version": 1,
            "result": "created",
            "_shards": {
               "total": 2,
               "successful": 1,
               "failed": 0
            },
            "status": 201,
            "_seq_no" : 2,
            "_primary_term" : 3
         }
      },
      {
         "update": {
            "_index": "test",
            "_id": "1",
            "_version": 2,
            "result": "updated",
            "_shards": {
                "total": 2,
                "successful": 1,
                "failed": 0
            },
            "status": 200,
            "_seq_no" : 3,
            "_primary_term" : 4
         }
      }
   ]
}
If you run `POST /_bulk` with operations that update non-existent documents, the operations cannot complete successfully. The API returns a response with an `errors` property value `true`. The response also includes an error object for any failed operations. The error object contains additional information about the failure, such as the error type and reason.
{
  "took": 486,
  "errors": true,
  "items": [
    {
      "update": {
        "_index": "index1",
        "_id": "5",
        "status": 404,
        "error": {
          "type": "document_missing_exception",
          "reason": "[5]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    },
    {
      "update": {
        "_index": "index1",
        "_id": "6",
        "status": 404,
        "error": {
          "type": "document_missing_exception",
          "reason": "[6]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    },
    {
      "create": {
        "_index": "index1",
        "_id": "7",
        "_version": 1,
        "result": "created",
        "_shards": {
          "total": 2,
          "successful": 1,
          "failed": 0
        },
        "_seq_no": 0,
        "_primary_term": 1,
        "status": 201
      }
    }
  ]
}
An example response from `POST /_bulk?filter_path=items.*.error`, which returns only information about failed operations.
{
  "items": [
    {
      "update": {
        "error": {
          "type": "document_missing_exception",
          "reason": "[5]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    },
    {
      "update": {
        "error": {
          "type": "document_missing_exception",
          "reason": "[6]: document missing",
          "index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
          "shard": "0",
          "index": "index1"
        }
      }
    }
  ]
}

Check a document Generally available

HEAD /{index}/_doc/{id}

Verify that a document exists. For example, check to see if a document with the _id 0 exists:

HEAD my-index-000001/_doc/0

If the document exists, the API returns a status code of 200 - OK. If the document doesn’t exist, the API returns 404 - Not Found.

Versioning support

You can use the version parameter to check the document only if its current version is equal to the specified one.

Internally, Elasticsearch has marked the old document as deleted and added an entirely new document. The old version of the document doesn't disappear immediately, although you won't be able to access it. Elasticsearch cleans up deleted documents in the background as you continue to index more data.
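
For example, a sketch of a version-conditioned existence check with the Python client (the version number is illustrative):

resp = client.exists(
    index="my-index-000001",
    id="0",
    version=1,
)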

Path parameters

  • index string Required

    A comma-separated list of data streams, indices, and aliases. It supports wildcards (*).

  • id string Required

    A unique document identifier.

Query parameters

  • preference string

    The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

    If it is set to _local, the operation will prefer to be run on a local allocated shard when possible. If it is set to a custom value, the value is used to guarantee that the same shards will be used for the same custom value. This can help with "jumping values" when hitting different shards in different refresh states. A sample value can be something like the web session ID or the user name.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    A comma-separated list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false.

  • version number

    Explicit version number for concurrency control. The specified version must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
HEAD my-index-000001/_doc/0
resp = client.exists(
    index="my-index-000001",
    id="0",
)
const response = await client.exists({
  index: "my-index-000001",
  id: 0,
});
response = client.exists(
  index: "my-index-000001",
  id: "0"
)
$resp = $client->exists([
    "index" => "my-index-000001",
    "id" => "0",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_doc/0"
client.exists(e -> e
    .id("0")
    .index("my-index-000001")
);
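In recent versions of the Python client the HEAD response evaluates as a boolean, so a minimal usage sketch is:

if client.exists(index="my-index-000001", id="0"):
    print("document exists")   # HTTP 200
else:
    print("document missing")  # HTTP 404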








Check for a document source Generally available

HEAD /{index}/_source/{id}

Check whether a document source exists in an index. For example:

HEAD my-index-000001/_source/1

A document's source is not available if it is disabled in the mapping.
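As a hedged sketch of that case (the index name is illustrative): an index created with `_source` disabled stores no source, so the source check reports it as missing even for documents that exist.

# Hedged sketch: an index whose mapping disables _source storage.
client.indices.create(
    index="my-index-without-source",
    mappings={"_source": {"enabled": False}},
)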

Required authorization

  • Index privileges: read
External documentation

Path parameters

  • index string Required

    A comma-separated list of data streams, indices, and aliases. It supports wildcards (*).

  • id string Required

    A unique identifier for the document.

Query parameters

  • preference string

    The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude in the response.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response.

  • version number

    The version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
HEAD my-index-000001/_source/1
resp = client.exists_source(
    index="my-index-000001",
    id="1",
)
const response = await client.existsSource({
  index: "my-index-000001",
  id: 1,
});
response = client.exists_source(
  index: "my-index-000001",
  id: "1"
)
$resp = $client->existsSource([
    "index" => "my-index-000001",
    "id" => "1",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_source/1"
client.existsSource(e -> e
    .id("1")
    .index("my-index-000001")
);


Enrich

Get an enrich policy Generally available

GET /_enrich/policy/{name}

All methods and paths for this operation:

GET /_enrich/policy

GET /_enrich/policy/{name}

Returns information about an enrich policy.

Path parameters

  • name string | array[string] Required

    Comma-separated list of enrich policy names used to limit the request. To return information for all enrich policies, omit this parameter.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • policies array[object] Required
      Hide policies attribute Show policies attribute object
      • config object Required
        Hide config attribute Show config attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • enrich_fields string | array[string] Required
          • indices string | array[string] Required
          • match_field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • query object

            An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

            External documentation
          • name string
          • elasticsearch_version string
GET /_enrich/policy/my-policy
resp = client.enrich.get_policy(
    name="my-policy",
)
const response = await client.enrich.getPolicy({
  name: "my-policy",
});
response = client.enrich.get_policy(
  name: "my-policy"
)
$resp = $client->enrich()->getPolicy([
    "name" => "my-policy",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/policy/my-policy"
client.enrich().getPolicy(g -> g
    .name("my-policy")
);




Delete an enrich policy Generally available

DELETE /_enrich/policy/{name}

Deletes an existing enrich policy and its enrich index.

Path parameters

  • name string Required

    Enrich policy to delete.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_enrich/policy/my-policy
resp = client.enrich.delete_policy(
    name="my-policy",
)
const response = await client.enrich.deletePolicy({
  name: "my-policy",
});
response = client.enrich.delete_policy(
  name: "my-policy"
)
$resp = $client->enrich()->deletePolicy([
    "name" => "my-policy",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/policy/my-policy"
client.enrich().deletePolicy(d -> d
    .name("my-policy")
);


Get EQL search results Generally available

POST /{index}/_eql/search

All methods and paths for this operation:

GET /{index}/_eql/search

POST /{index}/_eql/search

Returns search results for an Event Query Language (EQL) query. EQL assumes each document in a data stream or index corresponds to an event.

External documentation

Path parameters

  • index string | array[string] Required

    The name of the index to scope the operation.

Query parameters

  • allow_no_indices boolean

    Whether to ignore a wildcard indices expression that resolves into no concrete indices. (This includes the _all string and requests that specify no indices.)

  • allow_partial_search_results boolean

    If true, returns partial results if there are shard failures. If false, returns an error with no partial results.

  • allow_partial_sequence_results boolean

    If true, sequence queries will return partial results in case of shard failures. If false, they will return no results at all. This flag has effect only if allow_partial_search_results is true.

  • expand_wildcards string | array[string]

    Whether to expand wildcard expression to concrete indices that are open, closed or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ccs_minimize_roundtrips boolean

    Indicates whether network round-trips should be minimized as part of cross-cluster search request execution.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • keep_alive string

    Period for which the search and its results are stored on the cluster.

    Values are -1 or 0.

  • keep_on_completion boolean

    If true, the search and its results are stored on the cluster. An async usage sketch appears after the response example below.

  • wait_for_completion_timeout string

    Timeout duration to wait for the request to finish. Defaults to no timeout, meaning the request waits for complete search results.

    Values are -1 or 0.

application/json

Body Required

  • query string Required

    EQL query you wish to run.

  • case_sensitive boolean
  • event_category_field string

    Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

  • tiebreaker_field string

    Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

  • timestamp_field string

    Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

  • fetch_size number
  • filter object | array[object]

    Query, written in Query DSL, used to filter the events on which the EQL query runs.

    One of:

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • keep_alive string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • keep_on_completion boolean
  • wait_for_completion_timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • allow_partial_search_results boolean

    Allow query execution even in the case of shard failures. If true, the query keeps running and returns results based on the available shards. For sequences, the behavior can be further refined using allow_partial_sequence_results.

    Default value is true.

  • allow_partial_sequence_results boolean

    This flag applies only to sequences and has effect only if allow_partial_search_results=true. If true, the sequence query will return results based on the available shards, ignoring the others. If false, the sequence query will return successfully, but will always have empty results.

    Default value is false.

  • size number
  • fields object | array[object]

    Array of wildcard (*) patterns. The response returns values for field names matching these patterns in the fields property of each hit.

    One of:

    A reference to a field with formatting instructions on how to return the value

    Hide attributes Show attributes
    • field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • format string

      The format in which the values are returned.

    • include_unmapped boolean
  • result_position string

    Values are tail or head.

  • runtime_mappings object
    Hide runtime_mappings attribute Show runtime_mappings attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • target_field string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • target_index string
      • script object
        Hide script attributes Show script attributes object
        • source string | object

          One of:
        • id string
        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string

          Any of:

          Values are painless, expression, mustache, or java.

        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • max_samples_per_key number

    By default, the response of a sample query contains up to 10 samples, with one sample per unique set of join keys. Use the size parameter to get a smaller or larger set of samples. To retrieve more than one sample per set of join keys, use the max_samples_per_key parameter. Pipes are not supported for sample queries. A sample-query sketch appears in the request examples below.

    Default value is 1.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • is_partial boolean

      If true, the response does not contain complete search results.

    • is_running boolean

      If true, the search request is still executing.

    • took number

      The time in milliseconds it took Elasticsearch to run the request.

    • timed_out boolean

      If true, the request timed out before completion.

    • hits object Required
      Hide hits attributes Show hits attributes object
      • total object
        Hide total attributes Show total attributes object
        • relation string Required

          Values are eq or gte.

        • value number Required
      • events array[object]

        Contains events matching the query. Each object represents a matching event.

        Hide events attributes Show events attributes object
        • _index string Required
        • _id string Required
        • _source object Required

          Original JSON body passed for the event at index time.

        • missing boolean

          Set to true for events in a timespan-constrained sequence that do not meet a given condition.

        • fields object
          Hide fields attribute Show fields attribute object
          • * array[object] Additional properties
      • sequences array[object]

        Contains event sequences matching the query. Each object represents a matching sequence. This parameter is only returned for EQL queries containing a sequence.

        Hide sequences attributes Show sequences attributes object
        • events array[object] Required

          Contains events matching the query. Each object represents a matching event.

          Hide events attributes Show events attributes object
          • _index string Required
          • _id string Required
          • _source object Required

            Original JSON body passed for the event at index time.

          • missing boolean

            Set to true for events in a timespan-constrained sequence that do not meet a given condition.

          • fields object
        • join_keys array[object]

          Shared field values used to constrain matches in the sequence. These are defined using the by keyword in the EQL query syntax.

    • shard_failures array[object]

      Contains information about shard failures (if any) when allow_partial_search_results is true.

      Hide shard_failures attributes Show shard_failures attributes object
      • index string
      • node string
      • reason object Required

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide reason attributes Show reason attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

      • shard number Required
      • status string
GET /my-data-stream/_eql/search
{
  "query": """
    process where (process.name == "cmd.exe" and process.pid != 2013)
  """
}
resp = client.eql.search(
    index="my-data-stream",
    query="\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  ",
)
const response = await client.eql.search({
  index: "my-data-stream",
  query:
    '\n    process where (process.name == "cmd.exe" and process.pid != 2013)\n  ',
});
response = client.eql.search(
  index: "my-data-stream",
  body: {
    "query": "\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  "
  }
)
$resp = $client->eql()->search([
    "index" => "my-data-stream",
    "body" => [
        "query" => "\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  ",
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":"\n    process where (process.name == \"cmd.exe\" and process.pid != 2013)\n  "}' "$ELASTICSEARCH_URL/my-data-stream/_eql/search"
client.eql().search(s -> s
    .index("my-data-stream")
    .query(" process where (process.name == \"cmd.exe\" and process.pid != 2013) ")
);
Request examples
Run `GET /my-data-stream/_eql/search` to search for events that have a `process.name` of `cmd.exe` and a `process.pid` other than `2013`.
{
  "query": """
    process where (process.name == "cmd.exe" and process.pid != 2013)
  """
}
Run `GET /my-data-stream/_eql/search` to search for a sequence of events. The sequence starts with an event with an `event.category` of `file`, a `file.name` of `cmd.exe`, and a `process.pid` other than `2013`. It is followed by an event with an `event.category` of `process` and a `process.executable` that contains the substring `regsvr32`. These events must also share the same `process.pid` value.
{
  "query": """
    sequence by process.pid
      [ file where file.name == "cmd.exe" and process.pid != 2013 ]
      [ process where stringContains(process.executable, "regsvr32") ]
  """
}
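A hedged sketch, not taken from the examples above: run `GET /my-data-stream/_eql/search` with a sample query that joins events by `host.id` and returns up to two samples per join key via the `max_samples_per_key` parameter (the field names are illustrative).
{
  "query": """
    sample by host.id
      [any where event.category == "network"]
      [any where event.category == "file"]
  """,
  "max_samples_per_key": 2
}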
Response examples (200)
{
  "is_partial": false,
  "is_running": false,
  "took": 6,
  "timed_out": false,
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "sequences": [
      {
        "join_keys": [
          2012
        ],
        "events": [
          {
            "_index": ".ds-my-data-stream-2099.12.07-000001",
            "_id": "AtOJ4UjUBAAx3XR5kcCM",
            "_source": {
              "@timestamp": "2099-12-06T11:04:07.000Z",
              "event": {
                "category": "file",
                "id": "dGCHwoeS",
                "sequence": 2
              },
              "file": {
                "accessed": "2099-12-07T11:07:08.000Z",
                "name": "cmd.exe",
                "path": "C:\\Windows\\System32\\cmd.exe",
                "type": "file",
                "size": 16384
              },
              "process": {
                "pid": 2012,
                "name": "cmd.exe",
                "executable": "C:\\Windows\\System32\\cmd.exe"
              }
            }
          },
          {
            "_index": ".ds-my-data-stream-2099.12.07-000001",
            "_id": "OQmfCaduce8zoHT93o4H",
            "_source": {
              "@timestamp": "2099-12-07T11:07:09.000Z",
              "event": {
                "category": "process",
                "id": "aR3NWVOs",
                "sequence": 4
              },
              "process": {
                "pid": 2012,
                "name": "regsvr32.exe",
                "command_line": "regsvr32.exe  /s /u /i:https://...RegSvr32.sct scrobj.dll",
                "executable": "C:\\Windows\\System32\\regsvr32.exe"
              }
            }
          }
        ]
      }
    ]
  }
}
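The `keep_on_completion` and `wait_for_completion_timeout` parameters described above allow the search to run asynchronously. A minimal sketch with the Python client (the timeouts and data stream name are illustrative):

# Hedged sketch: start a search that is stored if it outlives the wait timeout.
resp = client.eql.search(
    index="my-data-stream",
    query='any where process.name == "cmd.exe"',
    wait_for_completion_timeout="2s",  # return early if still running
    keep_on_completion=True,           # keep results so they can be fetched later
    keep_alive="1d",
)
if resp["is_running"]:
    # Fetch the stored search later by its id.
    resp = client.eql.get(id=resp["id"], wait_for_completion_timeout="10s")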


Remove an index block Generally available

DELETE /{index}/_block/{block}

Remove an index block from an index. Index blocks limit the operations allowed on an index by blocking specific operation types.

Required authorization

  • Index privileges: manage

Path parameters

  • index string Required

    A comma-separated list or wildcard expression of index names used to limit the request. By default, you must explicitly name the indices you are removing blocks from. To allow the removal of blocks from indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. You can update this setting in the elasticsearch.yml file or by using the cluster update settings API.

  • block string Required

    The block type to remove from the index.

    Supported values include:

    • metadata: Disable metadata changes, such as closing the index.
    • read: Disable read operations.
    • read_only: Disable write operations and metadata changes.
    • write: Disable write operations. However, metadata changes are still allowed.

    Values are metadata, read, read_only, or write.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.

    Values are -1 or 0.

  • timeout string

    The period to wait for a response from all relevant nodes in the cluster after updating the cluster metadata. If no response is received before the timeout expires, the cluster metadata update still applies but the response will indicate that it was not completely acknowledged. It can also be set to -1 to indicate that the request should never timeout.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • indices array[object] Required
      Hide indices attributes Show indices attributes object
      • name string Required
      • unblocked boolean
      • exception object

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        Hide exception attributes Show exception attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

DELETE /my-index-000001/_block/write
resp = client.indices.remove_block(
    index="my-index-000001",
    block="write",
)
const response = await client.indices.removeBlock({
  index: "my-index-000001",
  block: "write",
});
response = client.indices.remove_block(
  index: "my-index-000001",
  block: "write"
)
$resp = $client->indices()->removeBlock([
    "index" => "my-index-000001",
    "block" => "write",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_block/write"
client.indices().removeBlock(a -> a
    .block(IndicesBlockOptions.Write)
    .index("my-index-000001")
);
Response examples (200)
A successful response from `DELETE /my-index-000001/_block/write`, which removes an index block from an index.
{
  "acknowledged" : true,
  "indices" : [ {
    "name" : "my-index-000001",
    "unblocked" : true
  } ]
}
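A block has to exist before it can be removed. As a hedged end-to-end sketch with the Python client (the index name is illustrative), you can add a write block with the add index block API and then remove it:

# PUT /my-index-000001/_block/write: add the block first.
client.indices.add_block(index="my-index-000001", block="write")

# DELETE /my-index-000001/_block/write: then remove it.
resp = client.indices.remove_block(index="my-index-000001", block="write")
print(resp["acknowledged"], resp["indices"][0]["unblocked"])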




Get index information Generally available

GET /{index}

Get information about one or more indices. For data streams, the API returns information about the stream’s backing indices.

Required authorization

  • Index privileges: view_index_metadata,manage

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and index aliases used to limit the request. Wildcard expressions (*) are supported.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flat_settings boolean

    If true, returns settings in flat format.

  • ignore_unavailable boolean

    If false, requests that target a missing index return an error.

  • include_defaults boolean

    If true, return all default settings in the response.

  • local boolean

    If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • features string | array[string] Generally available

    Return only information on the specified index features. A usage sketch appears after the code example below.

    Supported values include: aliases, mappings, settings

    Values are aliases, mappings, or settings.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object
      Hide * attributes Show * attributes object
      • aliases object
        Hide aliases attribute Show aliases attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • filter object

            An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

            External documentation
          • index_routing string
          • is_hidden boolean

            If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

            Default value is false.

          • is_write_index boolean

            If true, the index is the write index for the alias.

            Default value is false.

          • routing string
          • search_routing string
      • mappings object
        Hide mappings attributes Show mappings attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
          • mode string

            Values are disabled, stored, or synthetic.

        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

              Hide fields attribute Show fields attribute object
              • * object Additional properties
            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

            • input_field string

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_field string

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_index string
            • script object
              Hide script attributes Show script attributes object
              • source
              • id string
              • params object

                Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              • lang
              • options object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
      • settings object Additional properties
        Index settings
      • defaults object Additional properties
        Index settings
      • data_stream string
      • lifecycle object

        Data stream lifecycle denotes that a data stream is managed by the data stream lifecycle and contains the configuration.

        Hide lifecycle attributes Show lifecycle attributes object
        • data_retention string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • downsampling object
          Hide downsampling attribute Show downsampling attribute object
          • rounds array[object] Required

            The list of downsampling rounds to execute as part of this downsampling configuration

            Hide rounds attributes Show rounds attributes object
            • after string Required

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • config object Required
        • enabled boolean

          If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

          Default value is true.

GET /my-index-000001
resp = client.indices.get(
    index="my-index-000001",
)
const response = await client.indices.get({
  index: "my-index-000001",
});
response = client.indices.get(
  index: "my-index-000001"
)
$resp = $client->indices()->get([
    "index" => "my-index-000001",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001"
client.indices().get(g -> g
    .index("my-index-000001")
);
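A hedged sketch of the `features` query parameter described above, returning only mappings and settings for the index:

resp = client.indices.get(
    index="my-index-000001",
    features=["mappings", "settings"],  # omit aliases from the response
)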

Create an index Generally available

PUT /{index}

You can use the create index API to add a new index to an Elasticsearch cluster. When creating an index, you can specify the following:

  • Settings for the index.
  • Mappings for fields in the index.
  • Index aliases.

Wait for active shards

By default, index creation will only return a response to the client when the primary copies of each shard have been started, or the request times out. The index creation response will indicate what happened. For example, acknowledged indicates whether the index was successfully created in the cluster, while shards_acknowledged indicates whether the requisite number of shard copies were started for each shard in the index before timing out. Note that it is still possible for either acknowledged or shards_acknowledged to be false, but for the index creation to be successful. These values simply indicate whether the operation completed before the timeout. If acknowledged is false, the request timed out before the cluster state was updated with the newly created index, but it probably will be created sometime soon. If shards_acknowledged is false, then the request timed out before the requisite number of shards were started (by default just the primaries), even if the cluster state was successfully updated to reflect the newly created index (that is to say, acknowledged is true).

You can change the default of only waiting for the primary shards to start through the index setting index.write.wait_for_active_shards. Note that changing this setting will also affect the wait_for_active_shards value on all subsequent write operations.
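As a hedged sketch (the values are illustrative), both waits can also be tuned per request:

# Wait for two active copies per shard (the primary plus one replica),
# and stop waiting after 30 seconds.
resp = client.indices.create(
    index="my-index-000001",
    wait_for_active_shards="2",
    timeout="30s",
)
print(resp["acknowledged"], resp["shards_acknowledged"])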

Required authorization

  • Index privileges: create_index,manage

Path parameters

  • index string Required

    Name of the index you wish to create. Index names must meet the following criteria:

    • Lowercase only
    • Cannot include \, /, *, ?, ", <, >, |, (space character), ,, or #
    • Indices prior to 7.0 could contain a colon (:), but that has been deprecated and will not be supported in later versions
    • Cannot start with -, _, or +
    • Cannot be . or ..
    • Cannot be longer than 255 bytes (note that it is bytes, so multi-byte characters will reach the limit faster)
    • Names starting with . are deprecated, except for hidden indices and internal indices managed by plugins

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

application/json

Body

  • aliases object

    Aliases for the index.

    Hide aliases attribute Show aliases attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • filter object

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

        External documentation
      • index_routing string
      • is_hidden boolean

        If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

        Default value is false.

      • is_write_index boolean

        If true, the index is the write index for the alias.

        Default value is false.

      • routing string
      • search_routing string
  • mappings object
    Hide mappings attributes Show mappings attributes object
    • all_field object
      Hide all_field attributes Show all_field attributes object
      • analyzer string Required
      • enabled boolean Required
      • omit_norms boolean Required
      • search_analyzer string Required
      • similarity string Required
      • store boolean Required
      • store_term_vector_offsets boolean Required
      • store_term_vector_payloads boolean Required
      • store_term_vector_positions boolean Required
      • store_term_vectors boolean Required
    • date_detection boolean
    • dynamic string

      Values are strict, runtime, true, or false.

    • dynamic_date_formats array[string]
    • dynamic_templates array[object]
    • _field_names object
      Hide _field_names attribute Show _field_names attribute object
      • enabled boolean Required
    • index_field object
      Hide index_field attribute Show index_field attribute object
      • enabled boolean Required
    • _meta object
      Hide _meta attribute Show _meta attribute object
      • * object Additional properties
    • numeric_detection boolean
    • properties object
    • _routing object
      Hide _routing attribute Show _routing attribute object
      • required boolean Required
    • _size object
      Hide _size attribute Show _size attribute object
      • enabled boolean Required
    • _source object
      Hide _source attributes Show _source attributes object
      • compress boolean
      • compress_threshold string
      • enabled boolean
      • excludes array[string]
      • includes array[string]
      • mode string

        Values are disabled, stored, or synthetic.

    • runtime object
      Hide runtime attribute Show runtime attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
            Hide * attribute Show * attribute object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • format string
        • format string

          A custom format for date type runtime fields.

        • input_field string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • target_field string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • target_index string
        • script object
          Hide script attributes Show script attributes object
          • source string | object

            One of:
          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            Hide params attribute Show params attribute object
            • * object Additional properties
          • lang string

            Any of:

            Values are painless, expression, mustache, or java.

          • options object
            Hide options attribute Show options attribute object
            • * string Additional properties
        • type string Required

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • enabled boolean
    • subobjects string

      Values are true or false.

    • _data_stream_timestamp object
      Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
      • enabled boolean Required
  • settings object Additional properties
    Index settings

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • index string Required
    • shards_acknowledged boolean Required
    • acknowledged boolean Required
PUT /my-index-000001
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 2
  }
}
resp = client.indices.create(
    index="my-index-000001",
    settings={
        "number_of_shards": 3,
        "number_of_replicas": 2
    },
)
const response = await client.indices.create({
  index: "my-index-000001",
  settings: {
    number_of_shards: 3,
    number_of_replicas: 2,
  },
});
response = client.indices.create(
  index: "my-index-000001",
  body: {
    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 2
    }
  }
)
$resp = $client->indices()->create([
    "index" => "my-index-000001",
    "body" => [
        "settings" => [
            "number_of_shards" => 3,
            "number_of_replicas" => 2,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"settings":{"number_of_shards":3,"number_of_replicas":2}}' "$ELASTICSEARCH_URL/my-index-000001"
client.indices().create(c -> c
    .index("my-index-000001")
    .settings(s -> s
        .numberOfShards("3")
        .numberOfReplicas("2")
    )
);
This request specifies the `number_of_shards` and `number_of_replicas`.
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 2
  }
}
You can provide mapping definitions in the create index API requests.
{
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "properties": {
      "field1": { "type": "text" }
    }
  }
}
You can provide index aliases in the create index API requests. Index alias names also support date math.
{
  "aliases": {
    "alias_1": {},
    "alias_2": {
      "filter": {
        "term": {
          "user.id": "kimchy"
        }
      },
      "routing": "shard-1"
    }
  }
}
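A hedged sketch of the date-math support mentioned above (the alias name resolves when the index is created; for example, `<my-alias-{now/d}>` becomes something like `my-alias-2099.12.07`):

resp = client.indices.create(
    index="my-index-000001",
    aliases={"<my-alias-{now/d}>": {}},  # date-math alias name
)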












Delete an alias Generally available

DELETE /{index}/_aliases/{name}

All methods and paths for this operation:

DELETE /{index}/_alias/{name}

DELETE /{index}/_aliases/{name}

Removes a data stream or index from an alias.

Required authorization

  • Index privileges: manage

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams or indices used to limit the request. Supports wildcards (*).

  • name string | array[string] Required

    Comma-separated list of aliases to remove. Supports wildcards (*). To remove all aliases, use * or _all.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • errors boolean
DELETE my-data-stream/_alias/my-alias
resp = client.indices.delete_alias(
    index="my-data-stream",
    name="my-alias",
)
const response = await client.indices.deleteAlias({
  index: "my-data-stream",
  name: "my-alias",
});
response = client.indices.delete_alias(
  index: "my-data-stream",
  name: "my-alias"
)
$resp = $client->indices()->deleteAlias([
    "index" => "my-data-stream",
    "name" => "my-alias",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-data-stream/_alias/my-alias"
client.indices().deleteAlias(d -> d
    .index("my-data-stream")
    .name("my-alias")
);


Check aliases Generally available

HEAD /{index}/_alias/{name}

All methods and paths for this operation:

HEAD /_alias/{name}

HEAD /{index}/_alias/{name}

Check if one or more data stream or index aliases exist.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams or indices used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

  • name string | array[string] Required

    Comma-separated list of aliases to check. Supports wildcards (*).

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, requests that include a missing data stream or index in the target indices or data streams return an error.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
HEAD _alias/my-alias
resp = client.indices.exists_alias(
    name="my-alias",
)
const response = await client.indices.existsAlias({
  name: "my-alias",
});
response = client.indices.exists_alias(
  name: "my-alias"
)
$resp = $client->indices()->existsAlias([
    "name" => "my-alias",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_alias/my-alias"
client.indices().existsAlias(e -> e
    .name("my-alias")
);

Get mapping definitions Generally available

GET /{index}/_mapping

All methods and paths for this operation:

GET /_mapping

GET /{index}/_mapping

For data streams, the API retrieves mappings for the stream’s backing indices.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • local boolean Deprecated

    If true, the request retrieves information from the local node only.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • item object
        Hide item attributes Show item attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
          • mode string

            Values are disabled, stored, or synthetic.

        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

              Hide fields attribute Show fields attribute object
              • * object Additional properties
            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

            • input_field string

Path to a field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_field string

Path to a field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_index string
            • script object
              Hide script attributes Show script attributes object
              • source
              • id string
              • params object

                Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              • lang
              • options object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
      • mappings object Required
        Hide mappings attributes Show mappings attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
          • mode string

            Values are disabled, stored, or synthetic.

        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

              Hide fields attribute Show fields attribute object
              • * object Additional properties
            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

            • input_field string

Path to a field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_field string

Path to a field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_index string
            • script object
              Hide script attributes Show script attributes object
              • source
              • id string
              • params object

                Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              • lang
              • options object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
GET /books/_mapping
resp = client.indices.get_mapping(
    index="books",
)
const response = await client.indices.getMapping({
  index: "books",
});
response = client.indices.get_mapping(
  index: "books"
)
$resp = $client->indices()->getMapping([
    "index" => "books",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/books/_mapping"
client.indices().getMapping(g -> g
    .index("books")
);
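
The response is keyed by the concrete index name, with the mapping definition nested under mappings. The following is a minimal sketch of walking that structure, continuing from the Python sample above; the books index and its title field are illustrative assumptions.
# resp holds the get_mapping response from the Python sample above.
# The response body is keyed by concrete index name; each entry holds
# the mapping definition under "mappings".
for index_name, body in resp.body.items():
    # "title" is an illustrative field name; adjust for your index.
    title = body["mappings"].get("properties", {}).get("title")
    print(index_name, title)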

Delete an inference endpoint Generally available

DELETE /_inference/{task_type}/{inference_id}

All methods and paths for this operation:

DELETE /_inference/{inference_id}

DELETE /_inference/{task_type}/{inference_id}

Path parameters

  • task_type string Required

    The task type

    Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

  • inference_id string Required

    The inference identifier.

Query parameters

  • dry_run boolean

    When true, the endpoint is not deleted and a list of ingest processors which reference this endpoint is returned.

  • force boolean

    When true, the inference endpoint is forcefully deleted even if it is still being used by ingest processors or semantic text fields.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object

Acknowledged response. For dry_run, the response contains the list of pipelines that reference the inference endpoint.

    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • pipelines array[string] Required
DELETE /_inference/{task_type}/{inference_id}
DELETE /_inference/sparse_embedding/my-elser-model
resp = client.inference.delete(
    task_type="sparse_embedding",
    inference_id="my-elser-model",
)
const response = await client.inference.delete({
  task_type: "sparse_embedding",
  inference_id: "my-elser-model",
});
response = client.inference.delete(
  task_type: "sparse_embedding",
  inference_id: "my-elser-model"
)
$resp = $client->inference()->delete([
    "task_type" => "sparse_embedding",
    "inference_id" => "my-elser-model",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_inference/sparse_embedding/my-elser-model"
client.inference().delete(d -> d
    .inferenceId("my-elser-model")
    .taskType(TaskType.SparseEmbedding)
);
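
To preview which ingest pipelines still reference an endpoint before removing it, combine this API with dry_run. A minimal sketch with the Python client, assuming your client version exposes the dry_run flag:
# Nothing is deleted when dry_run is true; the response instead lists
# the ingest pipelines that reference the endpoint.
resp = client.inference.delete(
    task_type="sparse_embedding",
    inference_id="my-elser-model",
    dry_run=True,
)
print(resp["pipelines"])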

Create an Amazon Bedrock inference endpoint Generally available

PUT /_inference/{task_type}/{amazonbedrock_inference_id}

Create an inference endpoint to perform an inference task with the amazonbedrock service.


You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion or text_embedding.

  • amazonbedrock_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    Value is amazonbedrock.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • access_key string Required

      A valid AWS access key that has permissions to use Amazon Bedrock and access to models for inference requests.

    • model string Required

      The base model ID or an ARN to a custom model based on a foundational model. The base model IDs can be found in the Amazon Bedrock documentation. Note that the model ID must be available for the provider chosen and your IAM user must have access to the model.

      External documentation
    • provider string

      The model provider for your deployment. Note that some providers may support only certain task types. Supported providers include:

      • amazontitan - available for text_embedding and completion task types
      • anthropic - available for completion task type only
      • ai21labs - available for completion task type only
      • cohere - available for text_embedding and completion task types
      • meta - available for completion task type only
      • mistral - available for completion task type only
    • region string Required

      The region that your model or ARN is deployed in. The list of available regions per model can be found in the Amazon Bedrock documentation.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
    • secret_key string Required

A valid AWS secret key that is paired with the access_key. For information about creating and managing access and secret keys, refer to the AWS documentation.

      External documentation
  • task_settings object
    Hide task_settings attributes Show task_settings attributes object
    • max_new_tokens number

      For a completion task, it sets the maximum number for the output tokens to be generated.

      Default value is 64.

    • temperature number

      For a completion task, it is a number between 0.0 and 1.0 that controls the apparent creativity of the results. At temperature 0.0 the model is most deterministic, at temperature 1.0 most random. It should not be used if top_p or top_k is specified.

    • top_k number

      For a completion task, it limits samples to the top-K most likely words, balancing coherence and variability. It is only available for anthropic, cohere, and mistral providers. It is an alternative to temperature; it should not be used if temperature is specified.

    • top_p number

      For a completion task, it is a number in the range of 0.0 to 1.0, to eliminate low-probability tokens. Top-p uses nucleus sampling to select top tokens whose sum of likelihoods does not exceed a certain value, ensuring both variety and coherence. It is an alternative to temperature; it should not be used if temperature is specified.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

The inference ID

    • task_type string Required

      Values are text_embedding or completion.

PUT /_inference/{task_type}/{amazonbedrock_inference_id}
PUT _inference/text_embedding/amazon_bedrock_embeddings
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-embed-text-v2:0"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="amazon_bedrock_embeddings",
    inference_config={
        "service": "amazonbedrock",
        "service_settings": {
            "access_key": "AWS-access-key",
            "secret_key": "AWS-secret-key",
            "region": "us-east-1",
            "provider": "amazontitan",
            "model": "amazon.titan-embed-text-v2:0"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "amazon_bedrock_embeddings",
  inference_config: {
    service: "amazonbedrock",
    service_settings: {
      access_key: "AWS-access-key",
      secret_key: "AWS-secret-key",
      region: "us-east-1",
      provider: "amazontitan",
      model: "amazon.titan-embed-text-v2:0",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "amazon_bedrock_embeddings",
  body: {
    "service": "amazonbedrock",
    "service_settings": {
      "access_key": "AWS-access-key",
      "secret_key": "AWS-secret-key",
      "region": "us-east-1",
      "provider": "amazontitan",
      "model": "amazon.titan-embed-text-v2:0"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "amazon_bedrock_embeddings",
    "body" => [
        "service" => "amazonbedrock",
        "service_settings" => [
            "access_key" => "AWS-access-key",
            "secret_key" => "AWS-secret-key",
            "region" => "us-east-1",
            "provider" => "amazontitan",
            "model" => "amazon.titan-embed-text-v2:0",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"amazonbedrock","service_settings":{"access_key":"AWS-access-key","secret_key":"AWS-secret-key","region":"us-east-1","provider":"amazontitan","model":"amazon.titan-embed-text-v2:0"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/amazon_bedrock_embeddings"
client.inference().put(p -> p
    .inferenceId("amazon_bedrock_embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("amazonbedrock")
        .serviceSettings(JsonData.fromJson("{\"access_key\":\"AWS-access-key\",\"secret_key\":\"AWS-secret-key\",\"region\":\"us-east-1\",\"provider\":\"amazontitan\",\"model\":\"amazon.titan-embed-text-v2:0\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/amazon_bedrock_embeddings` to create an inference endpoint that performs a text embedding task.
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-embed-text-v2:0"
    }
}
Run `PUT _inference/completion/amazon_bedrock_completion` to create an inference endpoint to perform a completion task.
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-text-premier-v1:0"
    }
}
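
chunking_settings can be combined with any of the service settings above. A hedged sketch of a text_embedding endpoint that opts into the recursive strategy with the predefined markdown separator group; the endpoint name and AWS values are placeholders:
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="amazon_bedrock_recursive",  # illustrative name
    inference_config={
        "service": "amazonbedrock",
        "service_settings": {
            "access_key": "AWS-access-key",
            "secret_key": "AWS-secret-key",
            "region": "us-east-1",
            "provider": "amazontitan",
            "model": "amazon.titan-embed-text-v2:0",
        },
        # The recursive strategy requires max_chunk_size and either
        # separators or separator_group.
        "chunking_settings": {
            "strategy": "recursive",
            "max_chunk_size": 200,
            "separator_group": "markdown",
        },
    },
)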

Create an Anthropic inference endpoint Generally available

PUT /_inference/{task_type}/{anthropic_inference_id}

Create an inference endpoint to perform an inference task with the anthropic service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The task type. The only valid task type for the model to perform is completion.

    Value is completion.

  • anthropic_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    Value is anthropic.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key for the Anthropic API.

    • model_id string Required

      The name of the model to use for the inference task. Refer to the Anthropic documentation for the list of supported models.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
  • task_settings object
    Hide task_settings attributes Show task_settings attributes object
    • max_tokens number Required

      For a completion task, it is the maximum number of tokens to generate before stopping.

    • temperature number

      For a completion task, it is the amount of randomness injected into the response. For more details about the supported range, refer to Anthropic documentation.

      External documentation
    • top_k number

      For a completion task, it specifies to only sample from the top K options for each subsequent token. It is recommended for advanced use cases only. You usually only need to use temperature.

    • top_p number

      For a completion task, it specifies to use Anthropic's nucleus sampling. In nucleus sampling, Anthropic computes the cumulative distribution over all the options for each subsequent token in decreasing probability order and cuts it off once it reaches the specified probability. You should either alter temperature or top_p, but not both. It is recommended for advanced use cases only. You usually only need to use temperature.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

The inference ID

    • task_type string Required

      Value is completion.

PUT /_inference/{task_type}/{anthropic_inference_id}
PUT _inference/completion/anthropic_completion
{
    "service": "anthropic",
    "service_settings": {
        "api_key": "Anthropic-Api-Key",
        "model_id": "Model-ID"
    },
    "task_settings": {
        "max_tokens": 1024
    }
}
resp = client.inference.put(
    task_type="completion",
    inference_id="anthropic_completion",
    inference_config={
        "service": "anthropic",
        "service_settings": {
            "api_key": "Anthropic-Api-Key",
            "model_id": "Model-ID"
        },
        "task_settings": {
            "max_tokens": 1024
        }
    },
)
const response = await client.inference.put({
  task_type: "completion",
  inference_id: "anthropic_completion",
  inference_config: {
    service: "anthropic",
    service_settings: {
      api_key: "Anthropic-Api-Key",
      model_id: "Model-ID",
    },
    task_settings: {
      max_tokens: 1024,
    },
  },
});
response = client.inference.put(
  task_type: "completion",
  inference_id: "anthropic_completion",
  body: {
    "service": "anthropic",
    "service_settings": {
      "api_key": "Anthropic-Api-Key",
      "model_id": "Model-ID"
    },
    "task_settings": {
      "max_tokens": 1024
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "completion",
    "inference_id" => "anthropic_completion",
    "body" => [
        "service" => "anthropic",
        "service_settings" => [
            "api_key" => "Anthropic-Api-Key",
            "model_id" => "Model-ID",
        ],
        "task_settings" => [
            "max_tokens" => 1024,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"anthropic","service_settings":{"api_key":"Anthropic-Api-Key","model_id":"Model-ID"},"task_settings":{"max_tokens":1024}}' "$ELASTICSEARCH_URL/_inference/completion/anthropic_completion"
client.inference().put(p -> p
    .inferenceId("anthropic_completion")
    .taskType(TaskType.Completion)
    .inferenceConfig(i -> i
        .service("anthropic")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Anthropic-Api-Key\",\"model_id\":\"Model-ID\"}"))
        .taskSettings(JsonData.fromJson("{\"max_tokens\":1024}"))
    )
);
Request example
Run `PUT _inference/completion/anthropic_completion` to create an inference endpoint that performs a completion task.
{
    "service": "anthropic",
    "service_settings": {
        "api_key": "Anthropic-Api-Key",
        "model_id": "Model-ID"
    },
    "task_settings": {
        "max_tokens": 1024
    }
}
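
Once created, the endpoint can be invoked through the inference API. A minimal sketch, assuming the anthropic_completion endpoint above exists and that your Python client exposes the generic inference method:
resp = client.inference.inference(
    task_type="completion",
    inference_id="anthropic_completion",
    input="What is Elastic?",
)
# For a completion task the results arrive under the "completion" key.
print(resp["completion"])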

Create an Azure AI studio inference endpoint Generally available

PUT /_inference/{task_type}/{azureaistudio_inference_id}

Create an inference endpoint to perform an inference task with the azureaistudio service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion, rerank, or text_embedding.

  • azureaistudio_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    Value is azureaistudio.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key of your Azure AI Studio model deployment. This key can be found on the overview page for your deployment in the management section of your Azure AI Studio account.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • endpoint_type string Required

      The type of endpoint that is available for deployment through Azure AI Studio: token or realtime. The token endpoint type is for "pay as you go" endpoints that are billed per token. The realtime endpoint type is for "real-time" endpoints that are billed per hour of usage.

      External documentation
    • target string Required

      The target URL of your Azure AI Studio model deployment. This can be found on the overview page for your deployment in the management section of your Azure AI Studio account.

    • provider string Required

      The model provider for your deployment. Note that some providers may support only certain task types. Supported providers include:

      • cohere - available for text_embedding and completion task types
      • databricks - available for completion task type only
      • meta - available for completion task type only
      • microsoft_phi - available for completion task type only
      • mistral - available for completion task type only
      • openai - available for text_embedding and completion task types
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
  • task_settings object
    Hide task_settings attributes Show task_settings attributes object
    • do_sample number

      For a completion task, instruct the inference process to perform sampling. It has no effect unless temperature or top_p is specified.

    • max_new_tokens number

      For a completion task, provide a hint for the maximum number of output tokens to be generated.

      Default value is 64.

    • temperature number

      For a completion task, control the apparent creativity of generated completions with a sampling temperature. It must be a number in the range of 0.0 to 2.0. It should not be used if top_p is specified.

    • top_p number

      For a completion task, make the model consider the results of the tokens with nucleus sampling probability. It is an alternative value to temperature and must be a number in the range of 0.0 to 2.0. It should not be used if temperature is specified.

    • user string

      For a text_embedding task, specify the user issuing the request. This information can be used for abuse detection.

    • return_documents boolean

      For a rerank task, return doc text within the results.

    • top_n number

      For a rerank task, the number of most relevant documents to return. It defaults to the number of the documents.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

The inference ID

    • task_type string Required

      Values are text_embedding, completion, or rerank.

PUT /_inference/{task_type}/{azureaistudio_inference_id}
PUT _inference/text_embedding/azure_ai_studio_embeddings
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "Azure-AI-Studio-API-key",
        "target": "Target-Uri",
        "provider": "openai",
        "endpoint_type": "token"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_ai_studio_embeddings",
    inference_config={
        "service": "azureaistudio",
        "service_settings": {
            "api_key": "Azure-AI-Studio-API-key",
            "target": "Target-Uri",
            "provider": "openai",
            "endpoint_type": "token"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "azure_ai_studio_embeddings",
  inference_config: {
    service: "azureaistudio",
    service_settings: {
      api_key: "Azure-AI-Studio-API-key",
      target: "Target-Uri",
      provider: "openai",
      endpoint_type: "token",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "azure_ai_studio_embeddings",
  body: {
    "service": "azureaistudio",
    "service_settings": {
      "api_key": "Azure-AI-Studio-API-key",
      "target": "Target-Uri",
      "provider": "openai",
      "endpoint_type": "token"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "azure_ai_studio_embeddings",
    "body" => [
        "service" => "azureaistudio",
        "service_settings" => [
            "api_key" => "Azure-AI-Studio-API-key",
            "target" => "Target-Uri",
            "provider" => "openai",
            "endpoint_type" => "token",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"azureaistudio","service_settings":{"api_key":"Azure-AI-Studio-API-key","target":"Target-Uri","provider":"openai","endpoint_type":"token"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/azure_ai_studio_embeddings"
client.inference().put(p -> p
    .inferenceId("azure_ai_studio_embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("azureaistudio")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Azure-AI-Studio-API-key\",\"target\":\"Target-Uri\",\"provider\":\"openai\",\"endpoint_type\":\"token\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/azure_ai_studio_embeddings` to create an inference endpoint that performs a `text_embedding` task. Note that you do not specify a model here, as it is defined already in the Azure AI Studio deployment.
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "Azure-AI-Studio-API-key",
        "target": "Target-Uri",
        "provider": "openai",
        "endpoint_type": "token"
    }
}
Run `PUT _inference/completion/azure_ai_studio_completion` to create an inference endpoint that performs a completion task.
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "Azure-AI-Studio-API-key",
        "target": "Target-URI",
        "provider": "databricks",
        "endpoint_type": "realtime"
    }
}
Run `PUT _inference/rerank/azure_ai_studio_rerank` to create an inference endpoint that performs a rerank task.
{
    "service": "azureaistudio",
    "service_settings": {
        "api_key": "Azure-AI-Studio-API-key",
        "target": "Target-URI",
        "provider": "cohere",
        "endpoint_type": "token"
    }
}
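
task_settings can be set at creation time to shape completions. A hedged sketch that pins a sampling temperature and token budget on a completion endpoint; all key values are placeholders:
resp = client.inference.put(
    task_type="completion",
    inference_id="azure_ai_studio_completion",
    inference_config={
        "service": "azureaistudio",
        "service_settings": {
            "api_key": "Azure-AI-Studio-API-key",
            "target": "Target-URI",
            "provider": "databricks",
            "endpoint_type": "realtime",
        },
        # temperature must be within 0.0-2.0 and should not be combined
        # with top_p; max_new_tokens defaults to 64.
        "task_settings": {
            "temperature": 0.7,
            "max_new_tokens": 128,
        },
    },
)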

Create an Azure OpenAI inference endpoint Generally available

PUT /_inference/{task_type}/{azureopenai_inference_id}

Create an inference endpoint to perform an inference task with the azureopenai service.

The lists of chat completion and embeddings models that you can choose from in your Azure OpenAI deployment can be found in the Azure models documentation.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API.

    Values are completion or text_embedding.

  • azureopenai_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    Value is azureopenai.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • api_key string

      A valid API key for your Azure OpenAI account. You must specify either api_key or entra_id. If you do not provide either or you provide both, you will receive an error when you try to create your model.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • api_version string Required

      The Azure API version ID to use. It is recommended to use the latest supported non-preview version.

    • deployment_id string Required

      The deployment name of your deployed models. Your Azure OpenAI deployments can be found though the Azure OpenAI Studio portal that is linked to your subscription.

      External documentation
    • entra_id string

      A valid Microsoft Entra token. You must specify either api_key or entra_id. If you do not provide either or you provide both, you will receive an error when you try to create your model.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
    • resource_name string Required

      The name of your Azure OpenAI resource. You can find this from the list of resources in the Azure Portal for your subscription.

      External documentation
  • task_settings object
    Hide task_settings attribute Show task_settings attribute object
    • user string

      For a completion or text_embedding task, specify the user issuing the request. This information can be used for abuse detection.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

The inference ID

    • task_type string Required

      Values are text_embedding or completion.

PUT /_inference/{task_type}/{azureopenai_inference_id}
PUT _inference/text_embedding/azure_openai_embeddings
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "Api-Key",
        "resource_name": "Resource-name",
        "deployment_id": "Deployment-id",
        "api_version": "2024-02-01"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_openai_embeddings",
    inference_config={
        "service": "azureopenai",
        "service_settings": {
            "api_key": "Api-Key",
            "resource_name": "Resource-name",
            "deployment_id": "Deployment-id",
            "api_version": "2024-02-01"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "azure_openai_embeddings",
  inference_config: {
    service: "azureopenai",
    service_settings: {
      api_key: "Api-Key",
      resource_name: "Resource-name",
      deployment_id: "Deployment-id",
      api_version: "2024-02-01",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "azure_openai_embeddings",
  body: {
    "service": "azureopenai",
    "service_settings": {
      "api_key": "Api-Key",
      "resource_name": "Resource-name",
      "deployment_id": "Deployment-id",
      "api_version": "2024-02-01"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "azure_openai_embeddings",
    "body" => [
        "service" => "azureopenai",
        "service_settings" => [
            "api_key" => "Api-Key",
            "resource_name" => "Resource-name",
            "deployment_id" => "Deployment-id",
            "api_version" => "2024-02-01",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"azureopenai","service_settings":{"api_key":"Api-Key","resource_name":"Resource-name","deployment_id":"Deployment-id","api_version":"2024-02-01"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/azure_openai_embeddings"
client.inference().put(p -> p
    .inferenceId("azure_openai_embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("azureopenai")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Api-Key\",\"resource_name\":\"Resource-name\",\"deployment_id\":\"Deployment-id\",\"api_version\":\"2024-02-01\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/azure_openai_embeddings` to create an inference endpoint that performs a `text_embedding` task. You do not specify a model, as it is defined already in the Azure OpenAI deployment.
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "Api-Key",
        "resource_name": "Resource-name",
        "deployment_id": "Deployment-id",
        "api_version": "2024-02-01"
    }
}
Run `PUT _inference/completion/azure_openai_completion` to create an inference endpoint that performs a `completion` task.
{
    "service": "azureopenai",
    "service_settings": {
        "api_key": "Api-Key",
        "resource_name": "Resource-name",
        "deployment_id": "Deployment-id",
        "api_version": "2024-02-01"
    }
}
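
Because api_key and entra_id are mutually exclusive, an endpoint that authenticates with Microsoft Entra omits api_key entirely. A minimal sketch; the token, resource, and deployment values are placeholders:
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="azure_openai_entra_embeddings",  # illustrative name
    inference_config={
        "service": "azureopenai",
        "service_settings": {
            # Provide exactly one of api_key or entra_id; supplying both
            # (or neither) is rejected when the endpoint is created.
            "entra_id": "Microsoft-Entra-token",
            "resource_name": "Resource-name",
            "deployment_id": "Deployment-id",
            "api_version": "2024-02-01",
        },
    },
)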

Create a Cohere inference endpoint Generally available

PUT /_inference/{task_type}/{cohere_inference_id}

Create an inference endpoint to perform an inference task with the cohere service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion, rerank, or text_embedding.

  • cohere_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    Value is cohere.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key for your Cohere account. You can find or create your Cohere API keys on the Cohere API key settings page.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • embedding_type string

      Values are binary, bit, byte, float, or int8.

    • model_id string Required

      For a completion, rerank, or text_embedding task, the name of the model to use for the inference task.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
    • similarity string

      Values are cosine, dot_product, or l2_norm.

  • task_settings object
    Hide task_settings attributes Show task_settings attributes object
    • input_type string Required

      Values are classification, clustering, ingest, or search.

    • return_documents boolean

      For a rerank task, return doc text within the results.

    • top_n number

      For a rerank task, the number of most relevant documents to return. It defaults to the number of documents. If this inference endpoint is used in a text_similarity_reranker retriever query and top_n is set, it must be greater than or equal to rank_window_size in the query.

    • truncate string

      Values are END, NONE, or START.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference Id

    • task_type string Required

      Values are text_embedding, rerank, or completion.

PUT /_inference/{task_type}/{cohere_inference_id}
PUT _inference/text_embedding/cohere-embeddings
{
    "service": "cohere",
    "service_settings": {
        "api_key": "Cohere-Api-key",
        "model_id": "embed-english-light-v3.0",
        "embedding_type": "byte"
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="cohere-embeddings",
    inference_config={
        "service": "cohere",
        "service_settings": {
            "api_key": "Cohere-Api-key",
            "model_id": "embed-english-light-v3.0",
            "embedding_type": "byte"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "cohere-embeddings",
  inference_config: {
    service: "cohere",
    service_settings: {
      api_key: "Cohere-Api-key",
      model_id: "embed-english-light-v3.0",
      embedding_type: "byte",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "cohere-embeddings",
  body: {
    "service": "cohere",
    "service_settings": {
      "api_key": "Cohere-Api-key",
      "model_id": "embed-english-light-v3.0",
      "embedding_type": "byte"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "cohere-embeddings",
    "body" => [
        "service" => "cohere",
        "service_settings" => [
            "api_key" => "Cohere-Api-key",
            "model_id" => "embed-english-light-v3.0",
            "embedding_type" => "byte",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"cohere","service_settings":{"api_key":"Cohere-Api-key","model_id":"embed-english-light-v3.0","embedding_type":"byte"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/cohere-embeddings"
client.inference().put(p -> p
    .inferenceId("cohere-embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("cohere")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Cohere-Api-key\",\"model_id\":\"embed-english-light-v3.0\",\"embedding_type\":\"byte\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/cohere-embeddings` to create an inference endpoint that performs a text embedding task.
{
    "service": "cohere",
    "service_settings": {
        "api_key": "Cohere-Api-key",
        "model_id": "embed-english-light-v3.0",
        "embedding_type": "byte"
    }
}
Run `PUT _inference/rerank/cohere-rerank` to create an inference endpoint that performs a rerank task.
{
    "service": "cohere",
    "service_settings": {
        "api_key": "Cohere-API-key",
        "model_id": "rerank-english-v3.0"
    },
    "task_settings": {
        "top_n": 10,
        "return_documents": true
    }
}
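
If this rerank endpoint is later used in a `text_similarity_reranker` retriever, remember the constraint noted above: `rank_window_size` in the query must not exceed the endpoint's `top_n`. A minimal sketch of such a search, assuming a Python client version that supports the `retriever` parameter (the index name `my-index`, the `text` field, and the query text are illustrative):
resp = client.search(
    index="my-index",
    retriever={
        "text_similarity_reranker": {
            # First-stage retriever; the match query is a placeholder.
            "retriever": {
                "standard": {"query": {"match": {"text": "best pizza in town"}}}
            },
            "field": "text",
            "inference_id": "cohere-rerank",
            "inference_text": "best pizza in town",
            # Must be <= the endpoint's top_n (10 in the example above).
            "rank_window_size": 10,
        }
    },
)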
Run `PUT _inference/completion/cohere-completion` to create an inference endpoint that performs a completion task.
{
    "service": "cohere",
    "service_settings": {
        "api_key": "Cohere-API-key",
        "model_id": "command-a-03-2025"
    }
}
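
None of the examples above set `chunking_settings`. As a hedged illustration of the recursive strategy described in the request body documentation, the following sketch supplies `max_chunk_size` together with `separator_group` (all values here are placeholders, not recommendations):
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="cohere-embeddings-chunked",
    inference_config={
        "service": "cohere",
        "service_settings": {
            "api_key": "Cohere-Api-key",
            "model_id": "embed-english-light-v3.0",
        },
        # Recursive chunking requires max_chunk_size plus either
        # separators or separator_group; separator_group selects a
        # predefined separator list for the given text type.
        "chunking_settings": {
            "strategy": "recursive",
            "max_chunk_size": 200,
            "separator_group": "markdown",
        },
    },
)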

Create a Google AI Studio inference endpoint Generally available

PUT /_inference/{task_type}/{googleaistudio_inference_id}

Create an inference endpoint to perform an inference task with the googleaistudio service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion or text_embedding.

  • googleaistudio_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created.

    Values are -1 or 0.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

      External documentation
  • service string Required

    Value is googleaistudio.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key of your Google Gemini account.

    • model_id string Required

      The name of the model to use for the inference task. Refer to the Google documentation for the list of supported models.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

        External documentation
    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference Id

    • task_type string Required

      Values are text_embedding or completion.

PUT /_inference/{task_type}/{googleaistudio_inference_id}
PUT _inference/completion/google_ai_studio_completion
{
    "service": "googleaistudio",
    "service_settings": {
        "api_key": "api-key",
        "model_id": "model-id"
    }
}
resp = client.inference.put(
    task_type="completion",
    inference_id="google_ai_studio_completion",
    inference_config={
        "service": "googleaistudio",
        "service_settings": {
            "api_key": "api-key",
            "model_id": "model-id"
        }
    },
)
const response = await client.inference.put({
  task_type: "completion",
  inference_id: "google_ai_studio_completion",
  inference_config: {
    service: "googleaistudio",
    service_settings: {
      api_key: "api-key",
      model_id: "model-id",
    },
  },
});
response = client.inference.put(
  task_type: "completion",
  inference_id: "google_ai_studio_completion",
  body: {
    "service": "googleaistudio",
    "service_settings": {
      "api_key": "api-key",
      "model_id": "model-id"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "completion",
    "inference_id" => "google_ai_studio_completion",
    "body" => [
        "service" => "googleaistudio",
        "service_settings" => [
            "api_key" => "api-key",
            "model_id" => "model-id",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"googleaistudio","service_settings":{"api_key":"api-key","model_id":"model-id"}}' "$ELASTICSEARCH_URL/_inference/completion/google_ai_studio_completion"
client.inference().put(p -> p
    .inferenceId("google_ai_studio_completion")
    .taskType(TaskType.Completion)
    .inferenceConfig(i -> i
        .service("googleaistudio")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"api-key\",\"model_id\":\"model-id\"}"))
    )
);
Request example
Run `PUT _inference/completion/google_ai_studio_completion` to create an inference endpoint to perform a `completion` task type.
{
    "service": "googleaistudio",
    "service_settings": {
        "api_key": "api-key",
        "model_id": "model-id"
    }
}
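
The default rate limit for the `googleaistudio` service is 360 requests per minute, as listed under `rate_limit` above. A minimal sketch of overriding it at creation time (the value 100 is arbitrary and purely illustrative):
resp = client.inference.put(
    task_type="completion",
    inference_id="google_ai_studio_completion",
    inference_config={
        "service": "googleaistudio",
        "service_settings": {
            "api_key": "api-key",
            "model_id": "model-id",
            # Lower the default requests-per-minute cap for this endpoint.
            "rate_limit": {"requests_per_minute": 100},
        },
    },
)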

Query rules

Query rules enable you to configure per-query rules that are applied at query time to queries that match a specific rule. Query rules are organized into rulesets: collections of query rules that are matched against incoming queries. Query rules are applied using the rule query. If a query matches one or more rules in a ruleset, the query is rewritten to apply the rules before searching. This allows, for example, pinning documents for only those queries that match a specific term.

Learn more about the rule query
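
As a hedged sketch of how a ruleset is applied at query time (the index name `my-index`, the ruleset id `my-ruleset`, and the match criteria are illustrative; see the rule query documentation linked above for the exact shape):
resp = client.search(
    index="my-index",
    query={
        "rule": {
            # The organic query runs as usual for queries that match no rule.
            "organic": {"match": {"description": "pugs"}},
            # Criteria matched against the rules in the ruleset.
            "match_criteria": {"user_query": "pugs"},
            "ruleset_ids": ["my-ruleset"],
        }
    },
)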

Search

Delete an async search Generally available

DELETE /_async_search/{id}

If the asynchronous search is still running, it is cancelled. Otherwise, the saved search results are deleted. If the Elasticsearch security features are enabled, the deletion of a specific async search is restricted to:

  • The authenticated user that submitted the original search request.
  • Users that have the cancel_task cluster privilege.

Path parameters

  • id string Required

    A unique identifier for the async search.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=
resp = client.async_search.delete(
    id="FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
)
const response = await client.asyncSearch.delete({
  id: "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
});
response = client.async_search.delete(
  id: "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc="
)
$resp = $client->asyncSearch()->delete([
    "id" => "FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_async_search/FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc="
client.asyncSearch().delete(d -> d
    .id("FmRldE8zREVEUzA2ZVpUeGs2ejJFUFEaMkZ5QTVrSTZSaVN3WlNFVmtlWHJsdzoxMDc=")
);

Search application

Get builtin privileges Generally available

GET /_security/privilege/_builtin

Get the list of cluster privileges and index privileges that are available in this version of Elasticsearch.

Required authorization

  • Cluster privileges: manage_security
External documentation

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • cluster array[string] Required

      The list of cluster privileges that are understood by this version of Elasticsearch.

    • index array[string] Required

      The list of index privileges that are understood by this version of Elasticsearch.

    • remote_cluster array[string]

      The list of remote cluster privileges that are understood by this version of Elasticsearch.

GET /_security/privilege/_builtin
GET _security/privilege/_builtin
resp = client.security.get_builtin_privileges()
const response = await client.security.getBuiltinPrivileges();
response = client.security.get_builtin_privileges
$resp = $client->security()->getBuiltinPrivileges();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/privilege/_builtin"
client.security().getBuiltinPrivileges();
Response examples (200)
A successful response from `GET /_security/privilege/_builtin`.
{
  "cluster" : [
    "all",
    "cancel_task",
    "create_snapshot",
    "cross_cluster_replication",
    "cross_cluster_search",
    "delegate_pki",
    "grant_api_key",
    "manage",
    "manage_api_key",
    "manage_autoscaling",
    "manage_behavioral_analytics",
    "manage_ccr",
    "manage_connector",
    "manage_data_frame_transforms",
    "manage_data_stream_global_retention",
    "manage_enrich",
    "manage_ilm",
    "manage_index_templates",
    "manage_inference",
    "manage_ingest_pipelines",
    "manage_logstash_pipelines",
    "manage_ml",
    "manage_oidc",
    "manage_own_api_key",
    "manage_pipeline",
    "manage_rollup",
    "manage_saml",
    "manage_search_application",
    "manage_search_query_rules",
    "manage_search_synonyms",
    "manage_security",
    "manage_service_account",
    "manage_slm",
    "manage_token",
    "manage_transform",
    "manage_user_profile",
    "manage_watcher",
    "monitor",
    "monitor_connector",
    "monitor_data_frame_transforms",
    "monitor_data_stream_global_retention",
    "monitor_enrich",
    "monitor_inference",
    "monitor_ml",
    "monitor_rollup",
    "monitor_snapshot",
    "monitor_stats",
    "monitor_text_structure",
    "monitor_transform",
    "monitor_watcher",
    "none",
    "post_behavioral_analytics_event",
    "read_ccr",
    "read_connector_secrets",
    "read_fleet_secrets",
    "read_ilm",
    "read_pipeline",
    "read_security",
    "read_slm",
    "transport_client",
    "write_connector_secrets",
    "write_fleet_secrets"
  ],
  "index" : [
    "all",
    "auto_configure",
    "create",
    "create_doc",
    "create_index",
    "cross_cluster_replication",
    "cross_cluster_replication_internal",
    "delete",
    "delete_index",
    "index",
    "maintenance",
    "manage",
    "manage_data_stream_lifecycle",
    "manage_follow_index",
    "manage_ilm",
    "manage_leader_index",
    "monitor",
    "none",
    "read",
    "read_cross_cluster",
    "view_index_metadata",
    "write"
  ],
  "remote_cluster" : [
    "monitor_enrich",
    "monitor_stats"
  ]
}
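
Because the response is a plain map of privilege lists, a client can check whether a privilege exists on the deployment before relying on it. A minimal, purely illustrative sketch:
resp = client.security.get_builtin_privileges()
# The manage_inference check here is an arbitrary example.
if "manage_inference" in resp["cluster"]:
    print("manage_inference is available on this version")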

Schedule a transform to start now Generally available

POST /_transform/{transform_id}/_schedule_now

Instantly run a transform to process data. When you call this API, the transform processes new data immediately, without waiting for the configured frequency interval. After the API is called, the transform is processed again at now + frequency unless the API is called again in the meantime.

Required authorization

  • Cluster privileges: manage_transform

Path parameters

  • transform_id string Required

    Identifier for the transform.

Query parameters

  • timeout string

    Controls the time to wait for the scheduling to take place.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_transform/{transform_id}/_schedule_now
POST _transform/ecommerce_transform/_schedule_now
resp = client.transform.schedule_now_transform(
    transform_id="ecommerce_transform",
)
const response = await client.transform.scheduleNowTransform({
  transform_id: "ecommerce_transform",
});
response = client.transform.schedule_now_transform(
  transform_id: "ecommerce_transform"
)
$resp = $client->transform()->scheduleNowTransform([
    "transform_id" => "ecommerce_transform",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_transform/ecommerce_transform/_schedule_now"
client.transform().scheduleNowTransform(s -> s
    .transformId("ecommerce_transform")
);
Response examples (200)
A successful response when the transform is scheduled to run now.
{
  "acknowledged": true
}
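
To confirm the transform actually processed new data after scheduling it, you could follow up with the transform stats API. A minimal sketch reusing the `ecommerce_transform` identifier from the example above:
# Trigger an immediate run, then inspect processing counters.
client.transform.schedule_now_transform(transform_id="ecommerce_transform")
stats = client.transform.get_transform_stats(transform_id="ecommerce_transform")
print(stats["transforms"][0]["stats"]["documents_processed"])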