Compact and aligned text (CAT)
The compact and aligned text (CAT) APIs are intended only for human consumption using the Kibana console or the command line. They are not intended for use by applications. For application consumption, we recommend using a corresponding JSON API.
All the cat commands accept a help query string parameter, which shows all the headers and information they provide, and the /_cat command alone lists all the available commands.
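A quick sketch of both usage patterns (the localhost:9200 endpoint and the indices command are only illustrative; any cat command accepts help, and v adds a header row):
curl -s "localhost:9200/_cat"
curl -s "localhost:9200/_cat/indices?help"
curl -s "localhost:9200/_cat/indices?v"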
Bulk index or delete documents
Perform multiple index, create, delete, and update actions in a single request. This reduces overhead and can greatly increase indexing speed.
If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:
- To use the create action, you must have the create_doc, create, index, or write index privilege. Data streams support only the create action.
- To use the index action, you must have the create, index, or write index privilege.
- To use the delete action, you must have the delete or write index privilege.
- To use the update action, you must have the index or write index privilege.
- To automatically create a data stream or index with a bulk API request, you must have the auto_configure, create_index, or manage index privilege.
- To make the result of a bulk operation visible to search using the refresh parameter, you must have the maintenance or manage index privilege.
Automatic data stream creation requires a matching index template with data stream enabled.
The actions are specified in the request body using a newline delimited JSON (NDJSON) structure:
action_and_meta_data\n
optional_source\n
action_and_meta_data\n
optional_source\n
....
action_and_meta_data\n
optional_source\n
The index and create actions expect a source on the next line and have the same semantics as the op_type parameter in the standard index API. A create action fails if a document with the same ID already exists in the target; an index action adds or replaces a document as necessary.
NOTE: Data streams support only the create action. To update or delete a document in a data stream, you must target the backing index containing the document.
An update action expects that the partial doc, upsert, and script and its options are specified on the next line.
A delete action does not expect a source on the next line and has the same semantics as the standard delete API.
NOTE: The final line of data must end with a newline character (\n). Each newline character may be preceded by a carriage return (\r). When sending NDJSON data to the _bulk endpoint, use a Content-Type header of application/json or application/x-ndjson. Because this format uses literal newline characters (\n) as delimiters, make sure that the JSON actions and sources are not pretty printed.
If you provide a target in the request path, it is used for any actions that don't explicitly specify an _index argument.
A note on the format: the idea here is to make processing as fast as possible. Because some of the actions are redirected to other shards on other nodes, only action_meta_data is parsed on the receiving node side. Client libraries using this protocol should strive to do something similar on the client side and reduce buffering as much as possible.
There is no "correct" number of actions to perform in a single bulk request. Experiment with different settings to find the optimal size for your particular workload. Note that Elasticsearch limits the maximum size of an HTTP request to 100mb by default, so clients must ensure that no request exceeds this size. It is not possible to index a single document that exceeds the size limit, so you must pre-process any such documents into smaller pieces before sending them to Elasticsearch. For instance, split documents into pages or chapters before indexing them, or store raw binary data in a system outside Elasticsearch and replace the raw data with a link to the external system in the documents that you send to Elasticsearch.
Client support for bulk requests
Some of the officially supported clients provide helpers to assist with bulk requests and reindexing:
- Go: Check out esutil.BulkIndexer
- Perl: Check out Search::Elasticsearch::Client::5_0::Bulk and Search::Elasticsearch::Client::5_0::Scroll
- Python: Check out elasticsearch.helpers.*
- JavaScript: Check out client.helpers.*
- .NET: Check out BulkAllObservable
- PHP: Check out bulk indexing.
Submitting bulk requests with cURL
If you're providing text file input to curl, you must use the --data-binary flag instead of plain -d. The latter doesn't preserve newlines. For example:
$ cat requests
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
$ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@requests"; echo
{"took":7, "errors": false, "items":[{"index":{"_index":"test","_id":"1","_version":1,"result":"created","forced_refresh":false}}]}
Optimistic concurrency control
Each index and delete action within a bulk API call may include the if_seq_no and if_primary_term parameters in their respective action and meta data lines. The if_seq_no and if_primary_term parameters control how operations are run, based on the last modification to existing documents. See Optimistic concurrency control for more details.
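For example, a minimal sketch of a bulk index action guarded by optimistic concurrency control (the index name, document ID, and the values 5 and 1 are hypothetical and would normally come from an earlier response for that document):
{ "index" : { "_index" : "test", "_id" : "1", "if_seq_no" : 5, "if_primary_term" : 1 } }
{ "field1" : "updated value" }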
Versioning
Each bulk item can include the version value using the version field. It automatically follows the behavior of the index or delete operation based on the _version mapping. It also supports the version_type parameter.
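For example, a minimal sketch using an externally maintained version number (the index name, ID, and version value are hypothetical):
{ "index" : { "_index" : "test", "_id" : "1", "version" : 10, "version_type" : "external" } }
{ "field1" : "value1" }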
Routing
Each bulk item can include the routing value using the routing field. It automatically follows the behavior of the index or delete operation based on the _routing mapping.
NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.
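For example, a minimal sketch that routes a single bulk item with a custom routing value (the index name, ID, and routing value are hypothetical):
{ "index" : { "_index" : "test", "_id" : "1", "routing" : "user-1" } }
{ "field1" : "value1" }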
Wait for active shards
When making bulk calls, you can set the wait_for_active_shards parameter to require a minimum number of shard copies to be active before starting to process the bulk request.
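For example, a minimal sketch that reuses the requests file from the cURL example above and requires at least two active copies of each shard before the bulk request is processed:
curl -s -H "Content-Type: application/x-ndjson" -XPOST "localhost:9200/_bulk?wait_for_active_shards=2" --data-binary "@requests"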
Refresh
Control when the changes made by this request are visible to search.
NOTE: Only the shards that receive the bulk request will be affected by refresh. Imagine a _bulk?refresh=wait_for request with three documents in it that happen to be routed to different shards in an index with five shards. The request will only wait for those three shards to refresh. The other two shards that make up the index do not participate in the _bulk request at all.
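For example, a minimal sketch that blocks until a refresh has made the indexed documents searchable (again reusing the hypothetical requests file):
curl -s -H "Content-Type: application/x-ndjson" -XPOST "localhost:9200/_bulk?refresh=wait_for" --data-binary "@requests"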
Query parameters
- include_source_on_error (boolean): If true, the document source is included in the error message in case of parsing errors.
- list_executed_pipelines (boolean): If true, the response will include the ingest pipelines that were run for each index or create action.
- pipeline (string): The pipeline identifier to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.
- refresh (string): If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, wait for a refresh to make this operation visible to search. If false, do nothing with refreshes. Values are true, false, or wait_for.
- routing (string): A custom value that is used to route operations to a specific shard.
- _source (boolean | string | array[string]): Indicates whether to return the _source field (true or false) or contains a list of fields to return.
- _source_excludes (string | array[string]): A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the _source_includes query parameter. If the _source parameter is false, this parameter is ignored.
- _source_includes (string | array[string]): A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.
- timeout (string): The period each action waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards. The default is 1m (one minute), which guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur. Values are -1 or 0.
- wait_for_active_shards (number | string): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default is 1, which waits for each primary shard to be active. Values are all or index-setting.
- require_alias (boolean): If true, the request's actions must target an index alias.
- require_data_stream (boolean): If true, the request's actions must target a data stream (existing or to be created).
curl \
--request PUT 'http://api.example.com/_bulk' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{ \"index\" : { \"_index\" : \"test\", \"_id\" : \"1\" } }\n{ \"field1\" : \"value1\" }\n{ \"delete\" : { \"_index\" : \"test\", \"_id\" : \"2\" } }\n{ \"create\" : { \"_index\" : \"test\", \"_id\" : \"3\" } }\n{ \"field1\" : \"value3\" }\n{ \"update\" : {\"_id\" : \"1\", \"_index\" : \"test\"} }\n{ \"doc\" : {\"field2\" : \"value2\"} }"'
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
{ "delete" : { "_index" : "test", "_id" : "2" } }
{ "create" : { "_index" : "test", "_id" : "3" } }
{ "field1" : "value3" }
{ "update" : {"_id" : "1", "_index" : "test"} }
{ "doc" : {"field2" : "value2"} }
{ "update" : {"_id" : "1", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"} }
{ "update" : { "_id" : "0", "_index" : "index1", "retry_on_conflict" : 3} }
{ "script" : { "source": "ctx._source.counter += params.param1", "lang" : "painless", "params" : {"param1" : 1}}, "upsert" : {"counter" : 1}}
{ "update" : {"_id" : "2", "_index" : "index1", "retry_on_conflict" : 3} }
{ "doc" : {"field" : "value"}, "doc_as_upsert" : true }
{ "update" : {"_id" : "3", "_index" : "index1", "_source" : true} }
{ "doc" : {"field" : "value"} }
{ "update" : {"_id" : "4", "_index" : "index1"} }
{ "doc" : {"field" : "value"}, "_source": true}
{ "update": {"_id": "5", "_index": "index1"} }
{ "doc": {"my_field": "foo"} }
{ "update": {"_id": "6", "_index": "index1"} }
{ "doc": {"my_field": "foo"} }
{ "create": {"_id": "7", "_index": "index1"} }
{ "my_field": "foo" }
{ "index" : { "_index" : "my_index", "_id" : "1", "dynamic_templates": {"work_location": "geo_point"}} }
{ "field" : "value1", "work_location": "41.12,-71.34", "raw_location": "41.12,-71.34"}
{ "create" : { "_index" : "my_index", "_id" : "2", "dynamic_templates": {"home_location": "geo_point"}} }
{ "field" : "value2", "home_location": "41.12,-71.34"}
{
"took": 30,
"errors": false,
"items": [
{
"index": {
"_index": "test",
"_id": "1",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"status": 201,
"_seq_no" : 0,
"_primary_term": 1
}
},
{
"delete": {
"_index": "test",
"_id": "2",
"_version": 1,
"result": "not_found",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"status": 404,
"_seq_no" : 1,
"_primary_term" : 2
}
},
{
"create": {
"_index": "test",
"_id": "3",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"status": 201,
"_seq_no" : 2,
"_primary_term" : 3
}
},
{
"update": {
"_index": "test",
"_id": "1",
"_version": 2,
"result": "updated",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"status": 200,
"_seq_no" : 3,
"_primary_term" : 4
}
}
]
}
{
"took": 486,
"errors": true,
"items": [
{
"update": {
"_index": "index1",
"_id": "5",
"status": 404,
"error": {
"type": "document_missing_exception",
"reason": "[5]: document missing",
"index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
"shard": "0",
"index": "index1"
}
}
},
{
"update": {
"_index": "index1",
"_id": "6",
"status": 404,
"error": {
"type": "document_missing_exception",
"reason": "[6]: document missing",
"index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
"shard": "0",
"index": "index1"
}
}
},
{
"create": {
"_index": "index1",
"_id": "7",
"_version": 1,
"result": "created",
"_shards": {
"total": 2,
"successful": 1,
"failed": 0
},
"_seq_no": 0,
"_primary_term": 1,
"status": 201
}
}
]
}
{
"items": [
{
"update": {
"error": {
"type": "document_missing_exception",
"reason": "[5]: document missing",
"index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
"shard": "0",
"index": "index1"
}
}
},
{
"update": {
"error": {
"type": "document_missing_exception",
"reason": "[6]: document missing",
"index_uuid": "aAsFqTI0Tc2W0LCWgPNrOA",
"shard": "0",
"index": "index1"
}
}
}
]
}
Get enrich stats
Added in 7.5.0
Returns enrich coordinator statistics and information about enrich policies that are currently executing.
Query parameters
- master_timeout (string): Period to wait for a connection to the master node. Values are -1 or 0.
curl \
--request GET 'http://api.example.com/_enrich/_stats' \
--header "Authorization: $API_KEY"
Index
Index APIs enable you to manage individual indices, index settings, aliases, mappings, and index templates.
Create an index
You can use the create index API to add a new index to an Elasticsearch cluster. When creating an index, you can specify the following:
- Settings for the index.
- Mappings for fields in the index.
- Index aliases.
Wait for active shards
By default, index creation will only return a response to the client when the primary copies of each shard have been started, or the request times out. The index creation response will indicate what happened. For example, acknowledged indicates whether the index was successfully created in the cluster, while shards_acknowledged indicates whether the requisite number of shard copies were started for each shard in the index before timing out.
Note that it is still possible for either acknowledged or shards_acknowledged to be false, but for the index creation to be successful. These values simply indicate whether the operation completed before the timeout. If acknowledged is false, the request timed out before the cluster state was updated with the newly created index, but it probably will be created sometime soon. If shards_acknowledged is false, then the request timed out before the requisite number of shards were started (by default just the primaries), even if the cluster state was successfully updated to reflect the newly created index (that is to say, acknowledged is true).
You can change the default of only waiting for the primary shards to start through the index setting index.write.wait_for_active_shards. Note that changing this setting will also affect the wait_for_active_shards value on all subsequent write operations.
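For example, a minimal sketch (my-index-000001 is a hypothetical index name) that changes that default at creation time so subsequent writes wait for two active shard copies:
curl -s -H "Content-Type: application/json" -XPUT "localhost:9200/my-index-000001" -d '
{
  "settings": {
    "index.write.wait_for_active_shards": "2"
  }
}'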
Path parameters
- index (string, Required): Name of the index you wish to create.
Query parameters
- master_timeout (string): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- timeout (string): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- wait_for_active_shards (number | string): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). Values are all or index-setting.
curl \
--request PUT 'http://api.example.com/{index}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"settings\": {\n \"number_of_shards\": 3,\n \"number_of_replicas\": 2\n }\n}"'
{
"settings": {
"number_of_shards": 3,
"number_of_replicas": 2
}
}
{
"settings": {
"number_of_shards": 1
},
"mappings": {
"properties": {
"field1": { "type": "text" }
}
}
}
{
"aliases": {
"alias_1": {},
"alias_2": {
"filter": {
"term": {
"user.id": "kimchy"
}
},
"routing": "shard-1"
}
}
}
Create or update a legacy index template
Deprecated
Index templates define settings, mappings, and aliases that can be applied automatically to new indices. Elasticsearch applies templates to new indices based on an index pattern that matches the index name.
IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
Composable templates always take precedence over legacy templates. If no composable template matches a new index, matching legacy templates are applied according to their order.
Index templates are only applied during index creation. Changes to index templates do not affect existing indices. Settings and mappings specified in create index API requests override any settings or mappings specified in an index template.
You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.
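For example, a minimal sketch (the template name and settings are hypothetical) that embeds a block comment after the opening curly bracket:
curl -s -H "Content-Type: application/json" -XPUT "localhost:9200/_template/template_1" -d '
{
  /* shard count kept at 1 for small test indices */
  "index_patterns": ["te*"],
  "settings": {
    "number_of_shards": 1
  }
}'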
Indices matching multiple templates
Multiple index templates can potentially match an index; in this case, both the settings and mappings are merged into the final configuration of the index. The order of the merging can be controlled using the order parameter, with lower order values being applied first and higher order values overriding them, as in the sketch below. NOTE: Multiple matching templates with the same order value will result in a non-deterministic merging order.
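As a sketch of that merging behavior (the template names and settings are hypothetical): an index named logs-prod-2024 matches both patterns below, so the order 0 settings are applied first and the order 1 template then overrides the replica count.
curl -s -H "Content-Type: application/json" -XPUT "localhost:9200/_template/logs_defaults" -d '
{ "index_patterns": ["logs-*"], "order": 0, "settings": { "number_of_replicas": 2 } }'
curl -s -H "Content-Type: application/json" -XPUT "localhost:9200/_template/logs_prod" -d '
{ "index_patterns": ["logs-prod-*"], "order": 1, "settings": { "number_of_replicas": 1 } }'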
Path parameters
- name (string, Required): The name of the template.
Query parameters
- create (boolean): If true, this request cannot replace or update existing index templates.
- master_timeout (string): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- order (number): Order in which Elasticsearch applies this template if the index matches multiple templates. Templates with lower 'order' values are merged first. Templates with higher 'order' values are merged later, overriding templates with lower values.
- cause (string): User-defined reason for creating/updating the index template.
Body
Required
- aliases (object): Aliases for the index.
- index_patterns (string | array[string]): Array of wildcard expressions used to match the names of indices during creation.
- mappings (object)
- order (number): Order in which Elasticsearch applies this template if the index matches multiple templates. Templates with lower 'order' values are merged first. Templates with higher 'order' values are merged later, overriding templates with lower values.
- settings (object)
- version (number)
curl \
--request POST 'http://api.example.com/_template/{name}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"index_patterns\": [\n \"te*\",\n \"bar*\"\n ],\n \"settings\": {\n \"number_of_shards\": 1\n },\n \"mappings\": {\n \"_source\": {\n \"enabled\": false\n }\n },\n \"properties\": {\n \"host_name\": {\n \"type\": \"keyword\"\n },\n \"created_at\": {\n \"type\": \"date\",\n \"format\": \"EEE MMM dd HH:mm:ss Z yyyy\"\n }\n }\n}"'
{
"index_patterns": [
"te*",
"bar*"
],
"settings": {
"number_of_shards": 1
},
"mappings": {
"_source": {
"enabled": false
}
},
"properties": {
"host_name": {
"type": "keyword"
},
"created_at": {
"type": "date",
"format": "EEE MMM dd HH:mm:ss Z yyyy"
}
}
}
{
"index_patterns": [
"te*"
],
"settings": {
"number_of_shards": 1
},
"aliases": {
"alias1": {},
"alias2": {
"filter": {
"term": {
"user.id": "kimchy"
}
},
"routing": "shard-1"
},
"{index}-alias": {}
}
}
Flush data streams or indices
Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.
After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.
It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
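For example, a minimal sketch (my-index-000001 is a hypothetical index name) that flushes a single index and blocks if another flush is already running (see the wait_if_ongoing parameter below):
curl -s -XPOST "localhost:9200/my-index-000001/_flush?wait_if_ongoing=true"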
Query parameters
- allow_no_indices (boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
- expand_wildcards (string | array[string]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Supported values include:
  - all: Match any data stream or index, including hidden ones.
  - open: Match open, non-hidden indices. Also matches any non-hidden data stream.
  - closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
  - hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
  - none: Wildcard expressions are not accepted.
- force (boolean): If true, the request forces a flush even if there are no changes to commit to the index.
- wait_if_ongoing (boolean): If true, the flush operation blocks until execution when another flush operation is running. If false, Elasticsearch returns an error if you request a flush when another flush operation is running.
curl \
--request POST 'http://api.example.com/_flush' \
--header "Authorization: $API_KEY"
Get mapping definitions
Retrieves mapping definitions for one or more fields. For data streams, the API retrieves field mappings for the stream’s backing indices.
This API is useful if you don't need a complete mapping or if an index mapping contains a large number of fields.
Path parameters
- index (string | array[string], Required): Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
- fields (string | array[string], Required): Comma-separated list or wildcard expression of fields used to limit returned information. Supports wildcards (*).
Query parameters
- allow_no_indices (boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
- expand_wildcards (string | array[string]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Supported values include:
  - all: Match any data stream or index, including hidden ones.
  - open: Match open, non-hidden indices. Also matches any non-hidden data stream.
  - closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
  - hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
  - none: Wildcard expressions are not accepted.
- include_defaults (boolean): If true, return all default settings in the response.
- local (boolean): If true, the request retrieves information from the local node only.
curl \
--request GET 'http://api.example.com/{index}/_mapping/field/{fields}' \
--header "Authorization: $API_KEY"
{
"publications": {
"mappings": {
"title": {
"full_name": "title",
"mapping": {
"title": {
"type": "text"
}
}
}
}
}
}
{
"publications": {
"mappings": {
"author.id": {
"full_name": "author.id",
"mapping": {
"id": {
"type": "text"
}
}
},
"abstract": {
"full_name": "abstract",
"mapping": {
"abstract": {
"type": "text"
}
}
}
}
}
}
{
"publications": {
"mappings": {
"author.name": {
"full_name": "author.name",
"mapping": {
"name": {
"type": "text"
}
}
},
"abstract": {
"full_name": "abstract",
"mapping": {
"abstract": {
"type": "text"
}
}
},
"author.id": {
"full_name": "author.id",
"mapping": {
"id": {
"type": "text"
}
}
}
}
}
}
Query parameters
- allow_no_indices (boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
- all_shards (boolean): If true, the validation is executed on all shards instead of one random shard per index.
- analyzer (string): Analyzer to use for the query string. This parameter can only be used when the q query string parameter is specified.
- analyze_wildcard (boolean): If true, wildcard and prefix queries are analyzed.
- default_operator (string): The default operator for query string query: AND or OR. Values are and, AND, or, or OR.
- df (string): Field to use as the default where no field prefix is given in the query string. This parameter can only be used when the q query string parameter is specified.
- expand_wildcards (string | array[string]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Supported values include:
  - all: Match any data stream or index, including hidden ones.
  - open: Match open, non-hidden indices. Also matches any non-hidden data stream.
  - closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
  - hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
  - none: Wildcard expressions are not accepted.
- explain (boolean): If true, the response returns detailed information if an error has occurred.
- lenient (boolean): If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored.
- rewrite (boolean): If true, returns a more detailed explanation showing the actual Lucene query that will be executed.
- q (string): Query in the Lucene query string syntax.
Body
- query (object): An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
curl \
--request GET 'http://api.example.com/_validate/query' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"query":{}}'
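For example, a minimal sketch (my-index and the user.id field are hypothetical) that validates a term query and asks for detailed information via the explain parameter:
curl -s -H "Content-Type: application/json" -XGET "localhost:9200/my-index/_validate/query?explain=true" -d '
{
  "query": {
    "term": { "user.id": "kimchy" }
  }
}'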
Create or update a lifecycle policy
Added in 6.6.0
If the specified policy exists, it is replaced and the policy version is incremented.
NOTE: Only the latest version of the policy is stored; you cannot revert to previous versions.
Path parameters
- policy (string, Required): Identifier for the policy.
Query parameters
- master_timeout (string): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- timeout (string): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
curl \
--request PUT 'http://api.example.com/_ilm/policy/{policy}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"policy\": {\n \"_meta\": {\n \"description\": \"used for nginx log\",\n \"project\": {\n \"name\": \"myProject\",\n \"department\": \"myDepartment\"\n }\n },\n \"phases\": {\n \"warm\": {\n \"min_age\": \"10d\",\n \"actions\": {\n \"forcemerge\": {\n \"max_num_segments\": 1\n }\n }\n },\n \"delete\": {\n \"min_age\": \"30d\",\n \"actions\": {\n \"delete\": {}\n }\n }\n }\n }\n}"'
{
"policy": {
"_meta": {
"description": "used for nginx log",
"project": {
"name": "myProject",
"department": "myDepartment"
}
},
"phases": {
"warm": {
"min_age": "10d",
"actions": {
"forcemerge": {
"max_num_segments": 1
}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {}
}
}
}
}
}
{
"acknowledged": true
}
Remove policies from an index
Added in 6.6.0
Remove the assigned lifecycle policies from an index or a data stream's backing indices. It also stops managing the indices.
Path parameters
- index (string, Required): The name of the index from which to remove the policy.
curl \
--request POST 'http://api.example.com/{index}/_ilm/remove' \
--header "Authorization: $API_KEY"
{
"has_failures" : false,
"failed_indexes" : []
}
Start the ILM plugin
Added in 6.6.0
Start the index lifecycle management plugin if it is currently stopped. ILM is started automatically when the cluster is formed. Restarting ILM is necessary only when it has been stopped using the stop ILM API.
Query parameters
- master_timeout (string): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- timeout (string): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
curl \
--request POST 'http://api.example.com/_ilm/start' \
--header "Authorization: $API_KEY"
{
"acknowledged": true
}
curl \
--request GET 'http://api.example.com/_inference' \
--header "Authorization: $API_KEY"
Create an AlibabaCloud AI Search inference endpoint
Added in 8.16.0
Create an inference endpoint to perform an inference task with the alibabacloud-ai-search service.
Path parameters
- task_type (string, Required): The type of the inference task that the model will perform. Values are completion, rerank, space_embedding, or text_embedding.
- alibabacloud_inference_id (string, Required): The unique identifier of the inference endpoint.
Body
- chunking_settings (object)
- service (string, Required): Value is alibabacloud-ai-search.
- service_settings (object, Required)
- task_settings (object)
curl \
--request PUT 'http://api.example.com/_inference/{task_type}/{alibabacloud_inference_id}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"service\": \"alibabacloud-ai-search\",\n \"service_settings\": {\n \"host\" : \"default-j01.platform-cn-shanghai.opensearch.aliyuncs.com\",\n \"api_key\": \"AlibabaCloud-API-Key\",\n \"service_id\": \"ops-qwen-turbo\",\n \"workspace\" : \"default\"\n }\n}"'
{
"service": "alibabacloud-ai-search",
"service_settings": {
"host" : "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
"api_key": "AlibabaCloud-API-Key",
"service_id": "ops-qwen-turbo",
"workspace" : "default"
}
}
{
"service": "alibabacloud-ai-search",
"service_settings": {
"api_key": "AlibabaCloud-API-Key",
"service_id": "ops-bge-reranker-larger",
"host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
"workspace": "default"
}
}
{
"service": "alibabacloud-ai-search",
"service_settings": {
"api_key": "AlibabaCloud-API-Key",
"service_id": "ops-text-sparse-embedding-001",
"host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
"workspace": "default"
}
}
{
"service": "alibabacloud-ai-search",
"service_settings": {
"api_key": "AlibabaCloud-API-Key",
"service_id": "ops-text-embedding-001",
"host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
"workspace": "default"
}
}
Create an Azure OpenAI inference endpoint
Added in 8.14.0
Create an inference endpoint to perform an inference task with the azureopenai service.
The list of chat completion models that you can choose from in your Azure OpenAI deployment include:
The list of embeddings models that you can choose from in your deployment can be found in the Azure models documentation.
Path parameters
- task_type (string, Required): The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API. Values are completion or text_embedding.
- azureopenai_inference_id (string, Required): The unique identifier of the inference endpoint.
Body
- chunking_settings (object)
- service (string, Required): Value is azureopenai.
- service_settings (object, Required)
- task_settings (object)
curl \
--request PUT 'http://api.example.com/_inference/{task_type}/{azureopenai_inference_id}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"service\": \"azureopenai\",\n \"service_settings\": {\n \"api_key\": \"Api-Key\",\n \"resource_name\": \"Resource-name\",\n \"deployment_id\": \"Deployment-id\",\n \"api_version\": \"2024-02-01\"\n }\n}"'
{
"service": "azureopenai",
"service_settings": {
"api_key": "Api-Key",
"resource_name": "Resource-name",
"deployment_id": "Deployment-id",
"api_version": "2024-02-01"
}
}
{
"service": "azureopenai",
"service_settings": {
"api_key": "Api-Key",
"resource_name": "Resource-name",
"deployment_id": "Deployment-id",
"api_version": "2024-02-01"
}
}
Get machine learning memory usage info
Added in 8.2.0
Get information about how machine learning jobs and trained models are using memory, on each node, both within the JVM heap, and natively, outside of the JVM.
Query parameters
- master_timeout (string): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- timeout (string): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
curl \
--request GET 'http://api.example.com/_ml/memory/_stats' \
--header "Authorization: $API_KEY"
Get snapshot information
Added in 0.0.0
Path parameters
- repository (string, Required): Comma-separated list of snapshot repository names used to limit the request. Wildcard (*) expressions are supported.
- snapshot (string | array[string], Required): Comma-separated list of snapshot names to retrieve. Also accepts wildcards (*).
  - To get information about all snapshots in a registered repository, use a wildcard (*) or _all.
  - To get information about any snapshots that are currently running, use _current.
Query parameters
- master_timeout (string): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- verbose (boolean): If true, returns additional information about each snapshot such as the version of Elasticsearch which took the snapshot, the start and end times of the snapshot, and the number of shards snapshotted.
- index_details (boolean): If true, returns additional information about each index in the snapshot comprising the number of shards in the index, the total size of the index in bytes, and the maximum number of segments per shard in the index. Defaults to false, meaning that this information is omitted.
- index_names (boolean): If true, returns the name of each index in each snapshot.
- include_repository (boolean): If true, returns the repository name in each snapshot.
- sort (string): Allows setting a sort order for the result. Defaults to start_time, i.e. sorting by snapshot start time stamp. Values are start_time, duration, name, index_count, repository, shard_count, or failed_shard_count.
- size (number): Maximum number of snapshots to return. Defaults to 0, which means return all that match the request without limit.
- order (string): Sort order. Valid values are asc for ascending (smallest to largest) and desc for descending (largest to smallest) order. Defaults to asc, meaning ascending order.
- after (string): Offset identifier to start pagination from, as returned by the next field in the response body.
- offset (number): Numeric offset to start pagination from based on the snapshots matching this request. Using a non-zero value for this parameter is mutually exclusive with using the after parameter. Defaults to 0.
- from_sort_value (string): Value of the current sort column at which to start retrieval. Can either be a string snapshot or repository name when sorting by snapshot or repository name, a millisecond time value, or a number when sorting by index or shard count.
- slm_policy_filter (string): Filter snapshots by a comma-separated list of SLM policy names that snapshots belong to. Also accepts wildcards (*) and combinations of wildcards followed by exclude patterns starting with -. To include snapshots not created by an SLM policy, you can use the special pattern _none that will match all snapshots without an SLM policy.
curl \
--request GET 'http://api.example.com/_snapshot/{repository}/{snapshot}' \
--header "Authorization: $API_KEY"
{
"snapshots": [
{
"snapshot": "snapshot_1",
"uuid": "dKb54xw67gvdRctLCxSket",
"repository": "my_repository",
"version_id": <version_id>,
"version": <version>,
"indices": [],
"data_streams": [],
"feature_states": [],
"include_global_state": true,
"state": "SUCCESS",
"start_time": "2020-07-06T21:55:18.128Z",
"start_time_in_millis": 1593093628849,
"end_time": "2020-07-06T21:55:18.129Z",
"end_time_in_millis": 1593093628850,
"duration_in_millis": 1,
"failures": [],
"shards": {
"total": 0,
"failed": 0,
"successful": 0
}
},
{
"snapshot": "snapshot_2",
"uuid": "vdRctLCxSketdKb54xw67g",
"repository": "my_repository",
"version_id": <version_id>,
"version": <version>,
"indices": [],
"data_streams": [],
"feature_states": [],
"include_global_state": true,
"state": "SUCCESS",
"start_time": "2020-07-06T21:55:18.130Z",
"start_time_in_millis": 1593093628851,
"end_time": "2020-07-06T21:55:18.130Z",
"end_time_in_millis": 1593093628851,
"duration_in_millis": 0,
"failures": [],
"shards": {
"total": 0,
"failed": 0,
"successful": 0
}
},
{
"snapshot": "snapshot_3",
"uuid": "dRctdKb54xw67gvLCxSket",
"repository": "my_repository",
"version_id": <version_id>,
"version": <version>,
"indices": [],
"data_streams": [],
"feature_states": [],
"include_global_state": true,
"state": "SUCCESS",
"start_time": "2020-07-06T21:55:18.131Z",
"start_time_in_millis": 1593093628852,
"end_time": "2020-07-06T21:55:18.135Z",
"end_time_in_millis": 1593093628856,
"duration_in_millis": 4,
"failures": [],
"shards": {
"total": 0,
"failed": 0,
"successful": 0
}
}
],
"total": 3,
"remaining": 0
}
Create a transform
Added in 7.2.0
Creates a transform.
A transform copies data from source indices, transforms it, and persists it into an entity-centric destination index. You can also think of the destination index as a two-dimensional tabular data structure (known as a data frame). The ID for each document in the data frame is generated from a hash of the entity, so there is a unique row per entity.
You must choose either the latest or pivot method for your transform; you cannot use both in a single transform. If you choose to use the pivot method for your transform, the entities are defined by the set of group_by fields in the pivot object. If you choose to use the latest method, the entities are defined by the unique_key field values in the latest object.
You must have create_index, index, and read privileges on the destination index and read and view_index_metadata privileges on the source indices. When Elasticsearch security features are enabled, the transform remembers which roles the user that created it had at the time of creation and uses those same roles. If those roles do not have the required privileges on the source and destination indices, the transform fails when it attempts unauthorized operations.
NOTE: You must use Kibana or this API to create a transform. Do not add a transform directly into any .transform-internal* indices using the Elasticsearch index API. If Elasticsearch security features are enabled, do not give users any privileges on .transform-internal* indices. If you used transforms prior to 7.5, also do not give users any privileges on .data-frame-internal* indices.
Path parameters
- transform_id (string, Required): Identifier for the transform. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It has a 64 character limit and must start and end with alphanumeric characters.
Query parameters
- defer_validation (boolean): When the transform is created, a series of validations occur to ensure its success. For example, there is a check for the existence of the source indices and a check that the destination index is not part of the source index pattern. You can use this parameter to skip the checks, for example when the source index does not exist until after the transform is created. The validations are always run when you start the transform, however, with the exception of privilege checks.
- timeout (string): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
Body
Required
- dest (object, Required)
- description (string): Free text description of the transform.
- frequency (string): A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- latest (object)
- _meta (object)
- pivot (object)
- retention_policy (object)
- settings (object)
- source (object, Required)
- sync (object)
curl \
--request PUT 'http://api.example.com/_transform/{transform_id}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"source\": {\n \"index\": \"kibana_sample_data_ecommerce\",\n \"query\": {\n \"term\": {\n \"geoip.continent_name\": {\n \"value\": \"Asia\"\n }\n }\n }\n },\n \"pivot\": {\n \"group_by\": {\n \"customer_id\": {\n \"terms\": {\n \"field\": \"customer_id\",\n \"missing_bucket\": true\n }\n }\n },\n \"aggregations\": {\n \"max_price\": {\n \"max\": {\n \"field\": \"taxful_total_price\"\n }\n }\n }\n },\n \"description\": \"Maximum priced ecommerce data by customer_id in Asia\",\n \"dest\": {\n \"index\": \"kibana_sample_data_ecommerce_transform1\",\n \"pipeline\": \"add_timestamp_pipeline\"\n },\n \"frequency\": \"5m\",\n \"sync\": {\n \"time\": {\n \"field\": \"order_date\",\n \"delay\": \"60s\"\n }\n },\n \"retention_policy\": {\n \"time\": {\n \"field\": \"order_date\",\n \"max_age\": \"30d\"\n }\n }\n}"'
{
"source": {
"index": "kibana_sample_data_ecommerce",
"query": {
"term": {
"geoip.continent_name": {
"value": "Asia"
}
}
}
},
"pivot": {
"group_by": {
"customer_id": {
"terms": {
"field": "customer_id",
"missing_bucket": true
}
}
},
"aggregations": {
"max_price": {
"max": {
"field": "taxful_total_price"
}
}
}
},
"description": "Maximum priced ecommerce data by customer_id in Asia",
"dest": {
"index": "kibana_sample_data_ecommerce_transform1",
"pipeline": "add_timestamp_pipeline"
},
"frequency": "5m",
"sync": {
"time": {
"field": "order_date",
"delay": "60s"
}
},
"retention_policy": {
"time": {
"field": "order_date",
"max_age": "30d"
}
}
}
{
"source": {
"index": "kibana_sample_data_ecommerce"
},
"latest": {
"unique_key": [
"customer_id"
],
"sort": "order_date"
},
"description": "Latest order for each customer",
"dest": {
"index": "kibana_sample_data_ecommerce_transform2"
},
"frequency": "5m",
"sync": {
"time": {
"field": "order_date",
"delay": "60s"
}
}
}
{
"acknowledged": true
}
Preview a transform
Added in 7.2.0
Generates a preview of the results that you will get when you create a transform with the same configuration.
It returns a maximum of 100 results. The calculations are based on all the current data in the source index. It also generates a list of mappings and settings for the destination index. These values are determined based on the field types of the source index and the transform aggregations.
Query parameters
- timeout (string): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
Body
- dest (object)
- description (string): Free text description of the transform.
- frequency (string): A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- pivot (object)
- source (object)
- settings (object)
- sync (object)
- retention_policy (object)
- latest (object)
curl \
--request POST 'http://api.example.com/_transform/_preview' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '"{\n \"source\": {\n \"index\": \"kibana_sample_data_ecommerce\"\n },\n \"pivot\": {\n \"group_by\": {\n \"customer_id\": {\n \"terms\": {\n \"field\": \"customer_id\",\n \"missing_bucket\": true\n }\n }\n },\n \"aggregations\": {\n \"max_price\": {\n \"max\": {\n \"field\": \"taxful_total_price\"\n }\n }\n }\n }\n}"'
{
"source": {
"index": "kibana_sample_data_ecommerce"
},
"pivot": {
"group_by": {
"customer_id": {
"terms": {
"field": "customer_id",
"missing_bucket": true
}
}
},
"aggregations": {
"max_price": {
"max": {
"field": "taxful_total_price"
}
}
}
}
}
{
"preview": [
{
"max_price": 171,
"customer_id": "10"
},
{
"max_price": 233,
"customer_id": "11"
},
{
"max_price": 200,
"customer_id": "12"
},
{
"max_price": 301,
"customer_id": "13"
},
{
"max_price": 176,
"customer_id": "14"
},
{
"max_price": 2250,
"customer_id": "15"
},
{
"max_price": 170,
"customer_id": "16"
},
{
"max_price": 243,
"customer_id": "17"
},
{
"max_price": 154,
"customer_id": "18"
},
{
"max_price": 393,
"customer_id": "19"
},
{
"max_price": 165,
"customer_id": "20"
},
{
"max_price": 115,
"customer_id": "21"
},
{
"max_price": 192,
"customer_id": "22"
},
{
"max_price": 169,
"customer_id": "23"
},
{
"max_price": 230,
"customer_id": "24"
},
{
"max_price": 278,
"customer_id": "25"
},
{
"max_price": 200,
"customer_id": "26"
},
{
"max_price": 344,
"customer_id": "27"
},
{
"max_price": 175,
"customer_id": "28"
},
{
"max_price": 177,
"customer_id": "29"
},
{
"max_price": 190,
"customer_id": "30"
},
{
"max_price": 190,
"customer_id": "31"
},
{
"max_price": 205,
"customer_id": "32"
},
{
"max_price": 215,
"customer_id": "33"
},
{
"max_price": 270,
"customer_id": "34"
},
{
"max_price": 184,
"customer_id": "36"
},
{
"max_price": 222,
"customer_id": "37"
},
{
"max_price": 370,
"customer_id": "38"
},
{
"max_price": 240,
"customer_id": "39"
},
{
"max_price": 230,
"customer_id": "4"
},
{
"max_price": 229,
"customer_id": "41"
},
{
"max_price": 190,
"customer_id": "42"
},
{
"max_price": 150,
"customer_id": "43"
},
{
"max_price": 175,
"customer_id": "44"
},
{
"max_price": 190,
"customer_id": "45"
},
{
"max_price": 150,
"customer_id": "46"
},
{
"max_price": 310,
"customer_id": "48"
},
{
"max_price": 223,
"customer_id": "49"
},
{
"max_price": 283,
"customer_id": "5"
},
{
"max_price": 185,
"customer_id": "50"
},
{
"max_price": 190,
"customer_id": "51"
},
{
"max_price": 333,
"customer_id": "52"
},
{
"max_price": 165,
"customer_id": "6"
},
{
"max_price": 144,
"customer_id": "7"
},
{
"max_price": 198,
"customer_id": "8"
},
{
"max_price": 210,
"customer_id": "9"
}
],
"generated_dest_index": {
"mappings": {
"_meta": {
"_transform": {
"transform": "transform-preview",
"version": {
"created": "10.0.0"
},
"creation_date_in_millis": 1712948905889
},
"created_by": "transform"
},
"properties": {
"max_price": {
"type": "half_float"
},
"customer_id": {
"type": "keyword"
}
}
},
"settings": {
"index": {
"number_of_shards": "1",
"auto_expand_replicas": "0-1"
}
},
"aliases": {}
}
}
Upgrade all transforms
Added in 7.16.0
Transforms are compatible across minor versions and between supported major versions. However, over time, the format of transform configuration information may change. This API identifies transforms that have a legacy configuration format and upgrades them to the latest version. It also cleans up the internal data structures that store the transform state and checkpoints. The upgrade does not affect the source and destination indices. The upgrade also does not affect the roles that transforms use when Elasticsearch security features are enabled; the role used to read source data and write to the destination index remains unchanged.
If a transform upgrade step fails, the upgrade stops and an error is returned about the underlying issue. Resolve the issue and re-run the process. A summary is returned when the upgrade is finished.
To ensure continuous transforms remain running during a major version upgrade of the cluster – for example, from 7.16 to 8.0 – it is recommended to upgrade transforms before upgrading the cluster. You may want to perform a recent cluster backup prior to the upgrade.
curl \
--request POST 'http://api.example.com/_transform/_upgrade' \
--header "Authorization: $API_KEY"
{
"needs_update": 0,
"updated": 2,
"no_action": 1
}
Acknowledge a watch
Acknowledging a watch enables you to manually throttle the execution of the watch's actions.
The acknowledgement state of an action is stored in the status.actions.<id>.ack.state
structure.
IMPORTANT: If the specified watch is currently being executed, this API will return an error. The reason for this behavior is to prevent overwriting the watch status from a watch execution.
Acknowledging an action throttles further executions of that action until its ack.state is reset to awaits_successful_execution. This happens when the condition of the watch is not met (the condition evaluates to false).
Path parameters
- watch_id (string, Required): The watch identifier.
curl \
--request POST 'http://api.example.com/_watcher/watch/{watch_id}/_ack' \
--header "Authorization: $API_KEY"
{
"status": {
"state": {
"active": true,
"timestamp": "2015-05-26T18:04:27.723Z"
},
"last_checked": "2015-05-26T18:04:27.753Z",
"last_met_condition": "2015-05-26T18:04:27.763Z",
"actions": {
"test_index": {
"ack" : {
"timestamp": "2015-05-26T18:04:27.713Z",
"state": "acked"
},
"last_execution" : {
"timestamp": "2015-05-25T18:04:27.733Z",
"successful": true
},
"last_successful_execution" : {
"timestamp": "2015-05-25T18:04:27.773Z",
"successful": true
}
}
},
"execution_state": "executed",
"version": 2
}
}
Activate a watch
A watch can be either active or inactive.
Path parameters
- watch_id (string, Required): The watch identifier.
curl \
--request POST 'http://api.example.com/_watcher/watch/{watch_id}/_activate' \
--header "Authorization: $API_KEY"