Get an autoscaling policy
Generally available; Added in 7.11.0
NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
GET /_autoscaling/policy/my_autoscaling_policy
resp = client.autoscaling.get_autoscaling_policy(
name="my_autoscaling_policy",
)
const response = await client.autoscaling.getAutoscalingPolicy({
name: "my_autoscaling_policy",
});
response = client.autoscaling.get_autoscaling_policy(
name: "my_autoscaling_policy"
)
$resp = $client->autoscaling()->getAutoscalingPolicy([
"name" => "my_autoscaling_policy",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_autoscaling/policy/my_autoscaling_policy"
client.autoscaling().getAutoscalingPolicy(g -> g
.name("my_autoscaling_policy")
);
{
"roles": <roles>,
"deciders": <deciders>
}
Get behavioral analytics collections
Technical preview; Added in 8.8.0
All methods and paths for this operation:
GET _application/analytics/my*
resp = client.search_application.get_behavioral_analytics(
name="my*",
)
const response = await client.searchApplication.getBehavioralAnalytics({
name: "my*",
});
response = client.search_application.get_behavioral_analytics(
name: "my*"
)
$resp = $client->searchApplication()->getBehavioralAnalytics([
"name" => "my*",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/analytics/my*"
client.searchApplication().getBehavioralAnalytics(g -> g
.name("my*")
);
{
"my_analytics_collection": {
"event_data_stream": {
"name": "behavioral_analytics-events-my_analytics_collection"
}
},
"my_analytics_collection2": {
"event_data_stream": {
"name": "behavioral_analytics-events-my_analytics_collection2"
}
}
}
Get component templates
Generally available; Added in 5.1.0
All methods and paths for this operation:
Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.
Required authorization
- Cluster privileges:
monitor
Path parameters
-
The name of the component template. It accepts wildcard expressions. If it is omitted, all component templates are returned.
Query parameters
-
List of columns to appear in the response. Supports simple wildcards.
-
List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
-
If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
-
The period to wait for a connection to the master node.
Values are -1 or 0.
GET _cat/component_templates/my-template-*?v=true&s=name&format=json
resp = client.cat.component_templates(
name="my-template-*",
v=True,
s="name",
format="json",
)
const response = await client.cat.componentTemplates({
name: "my-template-*",
v: "true",
s: "name",
format: "json",
});
response = client.cat.component_templates(
name: "my-template-*",
v: "true",
s: "name",
format: "json"
)
$resp = $client->cat()->componentTemplates([
"name" => "my-template-*",
"v" => "true",
"s" => "name",
"format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/component_templates/my-template-*?v=true&s=name&format=json"
client.cat().componentTemplates();
[
{
"name": "my-template-1",
"version": null,
"alias_count": "0",
"mapping_count": "0",
"settings_count": "1",
"metadata_count": "0",
"included_in": "[my-index-template]"
},
{
"name": "my-template-2",
"version": null,
"alias_count": "0",
"mapping_count": "3",
"settings_count": "0",
"metadata_count": "0",
"included_in": "[my-index-template]"
}
]
Get a document count
Generally available
All methods and paths for this operation:
Get quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process.
IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.
Required authorization
- Index privileges:
read
Path parameters
-
A comma-separated list of data streams, indices, and aliases used to limit the request. It supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
GET /_cat/count/my-index-000001?v=true&format=json
resp = client.cat.count(
index="my-index-000001",
v=True,
format="json",
)
const response = await client.cat.count({
index: "my-index-000001",
v: "true",
format: "json",
});
response = client.cat.count(
index: "my-index-000001",
v: "true",
format: "json"
)
$resp = $client->cat()->count([
"index" => "my-index-000001",
"v" => "true",
"format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/count/my-index-000001?v=true&format=json"
client.cat().count();
[
{
"epoch": "1475868259",
"timestamp": "15:24:20",
"count": "120"
}
]
[
{
"epoch": "1475868259",
"timestamp": "15:24:20",
"count": "121"
}
]
Get the cluster health status
Generally available
IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console.
They are not intended for use by applications. For application consumption, use the cluster health API.
This API is often used to check malfunctioning clusters.
To help you track cluster health alongside log files and alerting systems, the API returns timestamps in two formats:
HH:MM:SS, which is human-readable but includes no date information;
Unix epoch time, which is machine-sortable and includes date information.
The latter format is useful for cluster recoveries that take multiple days.
You can use the cat health API to verify cluster health across multiple nodes.
You also can use the API to track the recovery of a large cluster over a longer period of time.
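Both timestamp formats describe the same instant, so the date that the HH:MM:SS column lacks can always be recovered from the epoch column. A minimal sketch in Python (split_epoch is a hypothetical helper name, not part of any client):

```python
from datetime import datetime, timezone

def split_epoch(epoch: int) -> tuple:
    """Recover the date and an HH:MM:SS rendering from an epoch value."""
    dt = datetime.fromtimestamp(epoch, tz=timezone.utc)
    return dt.date().isoformat(), dt.strftime("%H:%M:%S")

# The epoch column is machine-sortable and carries the date that the
# HH:MM:SS column lacks. Note that the sample response renders HH:MM:SS
# in the node's local time zone, so it can differ from this UTC rendering.
print(split_epoch(1475871424))  # ('2016-10-07', '20:17:04')
```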
Required authorization
- Cluster privileges:
monitor
Query parameters
-
The unit used to display time values.
Values are nanos, micros, ms, s, m, h, or d.
-
If true, returns HH:MM:SS and Unix epoch timestamps.
-
List of columns to appear in the response. Supports simple wildcards.
-
List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
GET /_cat/health?v=true&format=json
resp = client.cat.health(
v=True,
format="json",
)
const response = await client.cat.health({
v: "true",
format: "json",
});
response = client.cat.health(
v: "true",
format: "json"
)
$resp = $client->cat()->health([
"v" => "true",
"format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/health?v=true&format=json"
client.cat().health();
[
{
"epoch": "1475871424",
"timestamp": "16:17:04",
"cluster": "elasticsearch",
"status": "green",
"node.total": "1",
"node.data": "1",
"shards": "1",
"pri": "1",
"relo": "0",
"init": "0",
"unassign": "0",
"unassign.pri": "0",
"pending_tasks": "0",
"max_task_wait_time": "-",
"active_shards_percent": "100.0%"
}
]
Get task information
Technical preview; Added in 5.0.0
Get information about tasks currently running in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the task management API.
Required authorization
- Cluster privileges:
monitor
Query parameters
-
The task action names, which are used to limit the response.
-
If true, the response includes detailed information about shard recoveries.
-
Unique node identifiers, which are used to limit the response.
-
The parent task identifier, which is used to limit the response.
-
List of columns to appear in the response. Supports simple wildcards.
-
List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
-
Unit used to display time values.
Values are nanos, micros, ms, s, m, h, or d.
-
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
Values are -1 or 0.
-
If true, the request blocks until the task has completed.
GET _cat/tasks?v=true&format=json
resp = client.cat.tasks(
v=True,
format="json",
)
const response = await client.cat.tasks({
v: "true",
format: "json",
});
response = client.cat.tasks(
v: "true",
format: "json"
)
$resp = $client->cat()->tasks([
"v" => "true",
"format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/tasks?v=true&format=json"
client.cat().tasks();
[
{
"action": "cluster:monitor/tasks/lists[n]",
"task_id": "oTUltX4IQMOUUVeiohTt8A:124",
"parent_task_id": "oTUltX4IQMOUUVeiohTt8A:123",
"type": "direct",
"start_time": "1458585884904",
"timestamp": "01:48:24",
"running_time": "44.1micros",
"ip": "127.0.0.1:9300",
"node": "oTUltX4IQMOUUVeiohTt8A"
},
{
"action": "cluster:monitor/tasks/lists",
"task_id": "oTUltX4IQMOUUVeiohTt8A:123",
"parent_task_id": "-",
"type": "transport",
"start_time": "1458585884904",
"timestamp": "01:48:24",
"running_time": "186.2micros",
"ip": "127.0.0.1:9300",
"node": "oTUltX4IQMOUUVeiohTt8A"
}
]
Get index template information
Generally available; Added in 5.2.0
All methods and paths for this operation:
Get information about the index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.
Required authorization
- Cluster privileges:
monitor
Path parameters
-
The name of the template to return. Accepts wildcard expressions. If omitted, all templates are returned.
Query parameters
-
List of columns to appear in the response. Supports simple wildcards.
-
List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
-
If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
-
Period to wait for a connection to the master node.
Values are -1 or 0.
GET _cat/templates/my-template-*?v=true&s=name&format=json
resp = client.cat.templates(
name="my-template-*",
v=True,
s="name",
format="json",
)
const response = await client.cat.templates({
name: "my-template-*",
v: "true",
s: "name",
format: "json",
});
response = client.cat.templates(
name: "my-template-*",
v: "true",
s: "name",
format: "json"
)
$resp = $client->cat()->templates([
"name" => "my-template-*",
"v" => "true",
"s" => "name",
"format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/templates/my-template-*?v=true&s=name&format=json"
client.cat().templates();
[
{
"name": "my-template-0",
"index_patterns": "[te*]",
"order": "500",
"version": null,
"composed_of": "[]"
},
{
"name": "my-template-1",
"index_patterns": "[tea*]",
"order": "501",
"version": null,
"composed_of": "[]"
},
{
"name": "my-template-2",
"index_patterns": "[teak*]",
"order": "502",
"version": "7",
"composed_of": "[]"
}
]
Update voting configuration exclusions
Generally available; Added in 7.0.0
Update the cluster voting config exclusions by node IDs or node names.
By default, if there are more than three master-eligible nodes in the cluster and you remove fewer than half of the master-eligible nodes in the cluster at once, the voting configuration automatically shrinks.
If you want to shrink the voting configuration to contain fewer than three nodes or to remove half or more of the master-eligible nodes in the cluster at once, use this API to remove departing nodes from the voting configuration manually.
The API adds an entry for each specified node to the cluster's voting configuration exclusions list.
It then waits until the cluster has reconfigured its voting configuration to exclude the specified nodes.
Clusters should have no voting configuration exclusions in normal operation.
Once the excluded nodes have stopped, clear the voting configuration exclusions with DELETE /_cluster/voting_config_exclusions.
This API waits for the nodes to be fully removed from the cluster before it returns.
If your cluster has voting configuration exclusions for nodes that you no longer intend to remove, use DELETE /_cluster/voting_config_exclusions?wait_for_removal=false
to clear the voting configuration exclusions without waiting for the nodes to leave the cluster.
A response to POST /_cluster/voting_config_exclusions with an HTTP status code of 200 OK guarantees that the node has been removed from the voting configuration and will not be reinstated until the voting configuration exclusions are cleared by calling DELETE /_cluster/voting_config_exclusions.
If the call to POST /_cluster/voting_config_exclusions
fails or returns a response with an HTTP status code other than 200 OK then the node may not have been removed from the voting configuration.
In that case, you may safely retry the call.
NOTE: Voting exclusions are required only when you remove at least half of the master-eligible nodes from a cluster in a short time period. They are not required when removing master-ineligible nodes or when removing fewer than half of the master-eligible nodes.
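The rule of thumb in the NOTE above can be restated as a one-line check. A sketch, where exclusions_required is an illustrative helper and not part of any client:

```python
def exclusions_required(master_eligible: int, removing: int) -> bool:
    """Voting exclusions are needed only when removing half or more of the
    master-eligible nodes at once (never for master-ineligible nodes)."""
    return removing * 2 >= master_eligible

# Removing 1 of 3 master-eligible nodes: the voting configuration
# shrinks automatically, so no exclusions are needed.
print(exclusions_required(3, 1))  # False
# Removing 2 of 3 (half or more): add exclusions first.
print(exclusions_required(3, 2))  # True
```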
Query parameters
-
A comma-separated list of the names of the nodes to exclude from the voting configuration. If specified, you may not also specify node_ids.
-
A comma-separated list of the persistent ids of the nodes to exclude from the voting configuration. If specified, you may not also specify node_names.
-
Period to wait for a connection to the master node.
Values are -1 or 0.
-
When adding a voting configuration exclusion, the API waits for the specified nodes to be excluded from the voting configuration before returning. If the timeout expires before the appropriate condition is satisfied, the request fails and returns an error.
Values are -1 or 0.
curl \
--request POST 'http://api.example.com/_cluster/voting_config_exclusions' \
--header "Authorization: $API_KEY"
Get cluster info
Generally available; Added in 8.9.0
Returns basic information about the cluster.
GET /_info/_all
resp = client.cluster.info(
target="_all",
)
const response = await client.cluster.info({
target: "_all",
});
response = client.cluster.info(
target: "_all"
)
$resp = $client->cluster()->info([
"target" => "_all",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_info/_all"
client.cluster().info(i -> i
.target("_all")
);
Get the hot threads for nodes
Generally available
All methods and paths for this operation:
Get a breakdown of the hot threads on each selected node in the cluster. The output is plain text with a breakdown of the top hot threads for each node.
Required authorization
- Cluster privileges:
monitor, manage
Query parameters
-
If true, known idle threads (e.g. waiting in a socket select, or to get a task from an empty queue) are filtered out.
-
The interval to do the second sampling of threads.
Values are -1 or 0.
-
Number of samples of thread stacktrace.
-
Specifies the number of hot threads to provide information for.
-
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
Values are -1 or 0.
-
The type to sample.
Values are cpu, wait, block, gpu, or mem.
-
The sort order for 'cpu' type (default: total).
Values are cpu, wait, block, gpu, or mem.
GET /_nodes/hot_threads
resp = client.nodes.hot_threads()
const response = await client.nodes.hotThreads();
response = client.nodes.hot_threads
$resp = $client->nodes()->hotThreads();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_nodes/hot_threads"
client.nodes().hotThreads(h -> h);
Connector
The connector and sync jobs APIs provide a convenient way to create and manage Elastic connectors and sync jobs in an internal index. Connectors are Elasticsearch integrations that bring content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure:
- Elastic managed connectors (Native connectors) are a managed service on Elastic Cloud
- Self-managed connectors (Connector clients) are self-managed on your infrastructure.
This API provides an alternative to relying solely on Kibana UI for connector and sync job management. The API comes with a set of validations and assertions to ensure that the state representation in the internal index remains valid.
Check in a connector sync job
Technical preview
Check in a connector sync job and set the last_seen field to the current time before updating it in the internal index.
To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
PUT _connector/_sync_job/my-connector-sync-job/_check_in
resp = client.connector.sync_job_check_in(
connector_sync_job_id="my-connector-sync-job",
)
const response = await client.connector.syncJobCheckIn({
connector_sync_job_id: "my-connector-sync-job",
});
response = client.connector.sync_job_check_in(
connector_sync_job_id: "my-connector-sync-job"
)
$resp = $client->connector()->syncJobCheckIn([
"connector_sync_job_id" => "my-connector-sync-job",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job/_check_in"
client.connector().syncJobCheckIn(s -> s
.connectorSyncJobId("my-connector-sync-job")
);
Delete a connector sync job
Beta; Added in 8.12.0
Remove a connector sync job and its associated data. This is a destructive action that is not recoverable.
DELETE _connector/_sync_job/my-connector-sync-job-id
resp = client.connector.sync_job_delete(
connector_sync_job_id="my-connector-sync-job-id",
)
const response = await client.connector.syncJobDelete({
connector_sync_job_id: "my-connector-sync-job-id",
});
response = client.connector.sync_job_delete(
connector_sync_job_id: "my-connector-sync-job-id"
)
$resp = $client->connector()->syncJobDelete([
"connector_sync_job_id" => "my-connector-sync-job-id",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job-id"
client.connector().syncJobDelete(s -> s
.connectorSyncJobId("my-connector-sync-job-id")
);
{
"acknowledged": true
}
Update the connector draft filtering validation
Technical preview; Added in 8.12.0
Update the draft filtering validation info for a connector.
curl \
--request PUT 'http://api.example.com/_connector/{connector_id}/_filtering/_validation' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"validation":{"errors":[{"ids":["string"],"messages":["string"]}],"state":"edited"}}'
Update the connector name and description
Beta; Added in 8.12.0
PUT _connector/my-connector/_name
{
"name": "Custom connector",
"description": "This is my customized connector"
}
resp = client.connector.update_name(
connector_id="my-connector",
name="Custom connector",
description="This is my customized connector",
)
const response = await client.connector.updateName({
connector_id: "my-connector",
name: "Custom connector",
description: "This is my customized connector",
});
response = client.connector.update_name(
connector_id: "my-connector",
body: {
"name": "Custom connector",
"description": "This is my customized connector"
}
)
$resp = $client->connector()->updateName([
"connector_id" => "my-connector",
"body" => [
"name" => "Custom connector",
"description" => "This is my customized connector",
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"name":"Custom connector","description":"This is my customized connector"}' "$ELASTICSEARCH_URL/_connector/my-connector/_name"
client.connector().updateName(u -> u
.connectorId("my-connector")
.description("This is my customized connector")
.name("Custom connector")
);
{
"name": "Custom connector",
"description": "This is my customized connector"
}
{
"result": "updated"
}
Update the connector service type
Beta; Added in 8.12.0
PUT _connector/my-connector/_service_type
{
"service_type": "sharepoint_online"
}
resp = client.connector.update_service_type(
connector_id="my-connector",
service_type="sharepoint_online",
)
const response = await client.connector.updateServiceType({
connector_id: "my-connector",
service_type: "sharepoint_online",
});
response = client.connector.update_service_type(
connector_id: "my-connector",
body: {
"service_type": "sharepoint_online"
}
)
$resp = $client->connector()->updateServiceType([
"connector_id" => "my-connector",
"body" => [
"service_type" => "sharepoint_online",
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service_type":"sharepoint_online"}' "$ELASTICSEARCH_URL/_connector/my-connector/_service_type"
client.connector().updateServiceType(u -> u
.connectorId("my-connector")
.serviceType("sharepoint_online")
);
{
"service_type": "sharepoint_online"
}
{
"result": "updated"
}
Pause an auto-follow pattern
Generally available; Added in 7.5.0
Pause a cross-cluster replication auto-follow pattern. When the API returns, the auto-follow pattern is inactive. New indices that are created on the remote cluster and match the auto-follow patterns are ignored.
You can resume auto-following with the resume auto-follow pattern API. When it resumes, the auto-follow pattern is active again and automatically configures follower indices for newly created indices on the remote cluster that match its patterns. Remote indices that were created while the pattern was paused will also be followed, unless they have been deleted or closed in the interim.
Required authorization
- Cluster privileges:
manage_ccr
POST /_ccr/auto_follow/my_auto_follow_pattern/pause
resp = client.ccr.pause_auto_follow_pattern(
name="my_auto_follow_pattern",
)
const response = await client.ccr.pauseAutoFollowPattern({
name: "my_auto_follow_pattern",
});
response = client.ccr.pause_auto_follow_pattern(
name: "my_auto_follow_pattern"
)
$resp = $client->ccr()->pauseAutoFollowPattern([
"name" => "my_auto_follow_pattern",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ccr/auto_follow/my_auto_follow_pattern/pause"
client.ccr().pauseAutoFollowPattern(p -> p
.name("my_auto_follow_pattern")
);
{
"acknowledged" : true
}
Downsample an index
Technical preview; Added in 8.5.0
Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min, max, sum, value_count, and avg) for each metric field, grouped by a configured time interval.
For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index.
All documents within an hour interval are summarized and stored as a single document in the downsample index.
NOTE: Only indices in a time series data stream are supported.
Neither field nor document level security can be defined on the source index.
The source index must be read only (index.blocks.write: true).
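The summaries stored per bucket can be sketched in a few lines. This toy downsample function (a hypothetical name, not the client API) groups (timestamp, value) samples into fixed intervals and computes the five statistics listed above; the real operation additionally preserves time-series dimensions and metadata:

```python
from collections import defaultdict

def downsample(samples, interval_seconds=3600):
    """Group (epoch_seconds, value) pairs into fixed buckets and compute
    the pre-aggregated statistics a downsample index stores per bucket."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts - ts % interval_seconds].append(value)
    return {
        start: {
            "min": min(vs),
            "max": max(vs),
            "sum": sum(vs),
            "value_count": len(vs),
            "avg": sum(vs) / len(vs),
        }
        for start, vs in buckets.items()
    }

# Metrics sampled every 10 seconds collapse into one summary per hour.
print(downsample([(0, 1.0), (10, 3.0), (3600, 5.0)]))
```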
POST /my-time-series-index/_downsample/my-downsampled-time-series-index
{
"fixed_interval": "1d"
}
resp = client.indices.downsample(
index="my-time-series-index",
target_index="my-downsampled-time-series-index",
config={
"fixed_interval": "1d"
},
)
const response = await client.indices.downsample({
index: "my-time-series-index",
target_index: "my-downsampled-time-series-index",
config: {
fixed_interval: "1d",
},
});
response = client.indices.downsample(
index: "my-time-series-index",
target_index: "my-downsampled-time-series-index",
body: {
"fixed_interval": "1d"
}
)
$resp = $client->indices()->downsample([
"index" => "my-time-series-index",
"target_index" => "my-downsampled-time-series-index",
"body" => [
"fixed_interval" => "1d",
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"fixed_interval":"1d"}' "$ELASTICSEARCH_URL/my-time-series-index/_downsample/my-downsampled-time-series-index"
client.indices().downsample(d -> d
.index("my-time-series-index")
.targetIndex("my-downsampled-time-series-index")
.config(c -> c
.fixedInterval(f -> f
.time("1d")
)
)
);
{
"fixed_interval": "1d"
}
Convert an index alias to a data stream
Generally available; Added in 7.9.0
Converts an index alias to a data stream.
You must have a matching index template that is data stream enabled.
The alias must meet the following criteria:
The alias must have a write index;
All indices for the alias must have a @timestamp field mapping of a date or date_nanos field type;
The alias must not have any filters;
The alias must not use custom routing.
If successful, the request removes the alias and creates a data stream with the same name.
The indices for the alias become hidden backing indices for the stream.
The write index for the alias becomes the write index for the stream.
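The criteria above amount to a simple eligibility check. A sketch, where can_migrate is a hypothetical helper and the dict layout is illustrative, not an Elasticsearch API response:

```python
def can_migrate(alias: dict, field_types: dict) -> bool:
    """Pre-flight check mirroring the documented migration criteria."""
    if alias.get("write_index") is None:
        return False  # the alias must have a write index
    if alias.get("filter") is not None:
        return False  # the alias must not have any filters
    if alias.get("routing") is not None:
        return False  # the alias must not use custom routing
    # every index needs a @timestamp mapping of type date or date_nanos
    return all(
        field_types.get(index, {}).get("@timestamp") in ("date", "date_nanos")
        for index in alias["indices"]
    )

print(can_migrate(
    {"write_index": "logs-1", "filter": None, "routing": None,
     "indices": ["logs-1"]},
    {"logs-1": {"@timestamp": "date"}},
))  # True
```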
Required authorization
- Index privileges:
manage
Query parameters
-
Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
Values are
-1
or0
. -
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
Values are
-1
or0
.
POST _data_stream/_migrate/my-time-series-data
resp = client.indices.migrate_to_data_stream(
name="my-time-series-data",
)
const response = await client.indices.migrateToDataStream({
name: "my-time-series-data",
});
response = client.indices.migrate_to_data_stream(
name: "my-time-series-data"
)
$resp = $client->indices()->migrateToDataStream([
"name" => "my-time-series-data",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/_migrate/my-time-series-data"
client.indices().migrateToDataStream(m -> m
.name("my-time-series-data")
);
Promote a data stream
Generally available; Added in 7.9.0
Promote a data stream from a replicated data stream managed by cross-cluster replication (CCR) to a regular data stream.
With CCR auto following, a data stream from a remote cluster can be replicated to the local cluster. These data streams can't be rolled over in the local cluster. These replicated data streams roll over only if the upstream data stream rolls over. In the event that the remote cluster is no longer available, the data stream in the local cluster can be promoted to a regular data stream, which allows these data streams to be rolled over in the local cluster.
NOTE: When promoting a data stream, ensure the local cluster has a data stream enabled index template that matches the data stream. If this is missing, the data stream will not be able to roll over until a matching index template is created. This will affect the lifecycle management of the data stream and interfere with the data stream size and retention.
POST /_data_stream/_promote/my-data-stream
resp = client.indices.promote_data_stream(
name="my-data-stream",
)
const response = await client.indices.promoteDataStream({
name: "my-data-stream",
});
response = client.indices.promote_data_stream(
name: "my-data-stream"
)
$resp = $client->indices()->promoteDataStream([
"name" => "my-data-stream",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/_promote/my-data-stream"
client.indices().promoteDataStream(p -> p
.name("my-data-stream")
);
Create a new document in the index
Generally available; Added in 5.0.0
All methods and paths for this operation:
You can index a new JSON document with the /<target>/_doc/ or /<target>/_create/<_id> APIs.
Using _create guarantees that the document is indexed only if it does not already exist.
It returns a 409 response when a document with the same ID already exists in the index.
To update an existing document, you must use the /<target>/_doc/ API.
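The _create semantics can be modeled with a plain dictionary; create_doc and VersionConflict below are toy stand-ins for the API and its 409 response, not client calls:

```python
class VersionConflict(Exception):
    """Stand-in for the 409 response returned when the ID already exists."""

def create_doc(index: dict, doc_id: str, doc: dict) -> dict:
    """Toy model of _create: index the document only if the ID is absent,
    otherwise fail with a conflict rather than overwriting."""
    if doc_id in index:
        raise VersionConflict(f"document [{doc_id}] already exists")
    index[doc_id] = doc
    return {"_id": doc_id, "result": "created"}

idx = {}
print(create_doc(idx, "1", {"user": "kimchy"}))  # result: created
try:
    create_doc(idx, "1", {"user": "other"})      # same ID: conflict
except VersionConflict as e:
    print("409:", e)
```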
If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:
- To add a document using the PUT /<target>/_create/<_id> or POST /<target>/_create/<_id> request formats, you must have the create_doc, create, index, or write index privilege.
- To automatically create a data stream or index with this API request, you must have the auto_configure, create_index, or manage index privilege.
Automatic data stream creation requires a matching index template with data stream enabled.
Automatically create data streams and indices
If the request's target doesn't exist and matches an index template with a data_stream definition, the index operation automatically creates the data stream.
If the target doesn't exist and doesn't match a data stream template, the operation automatically creates the index and applies any matching index templates.
NOTE: Elasticsearch includes several built-in index templates. To avoid naming collisions with these templates, refer to index pattern documentation.
If no mapping exists, the index operation creates a dynamic mapping. By default, new fields and objects are automatically added to the mapping if needed.
Automatic index creation is controlled by the action.auto_create_index setting.
If it is true, any index can be created automatically.
You can modify this setting to explicitly allow or block automatic creation of indices that match specified patterns, or set it to false to turn off automatic index creation entirely.
Specify a comma-separated list of patterns you want to allow, or prefix each pattern with + or - to indicate whether it should be allowed or blocked.
When a list is specified, the default behaviour is to disallow.
NOTE: The action.auto_create_index setting affects the automatic creation of indices only.
It does not affect the creation of data streams.
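Evaluation of an action.auto_create_index pattern list can be sketched with shell-style matching. auto_create_allowed is an illustrative helper, and first-match-wins ordering is an assumption of this sketch:

```python
from fnmatch import fnmatch

def auto_create_allowed(index: str, setting) -> bool:
    """Sketch: booleans allow or deny everything; a pattern list is scanned
    in order, with +/- prefixes, and a default of disallow when no pattern
    matches."""
    if isinstance(setting, bool):
        return setting
    for pattern in setting.split(","):
        allow = not pattern.startswith("-")   # bare patterns count as allow
        if fnmatch(index, pattern.lstrip("+-")):
            return allow
    return False  # when a list is specified, the default is to disallow

print(auto_create_allowed("logs-2024", "+logs-*,-*"))  # True
print(auto_create_allowed("secret", "+logs-*,-*"))     # False
```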
Routing
By default, shard placement, or routing, is controlled by using a hash of the document's ID value.
For more explicit control, the value fed into the hash function used by the router can be directly specified on a per-operation basis using the routing parameter.
When setting up explicit mapping, you can also use the _routing field to direct the index operation to extract the routing value from the document itself.
This does come at the (very minimal) cost of an additional document parsing pass.
If the _routing mapping is defined and set to be required, the index operation will fail if no routing value is provided or extracted.
NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.
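The routing behavior can be illustrated with a stand-in hash; Elasticsearch itself hashes the routing value with murmur3, so the md5-based shard_for below is only a sketch of the shape of the computation:

```python
import hashlib

def shard_for(routing: str, num_primary_shards: int) -> int:
    """Pick a shard from a hash of the routing value (the document ID by
    default). md5 stands in for Elasticsearch's murmur3 here."""
    h = int.from_bytes(hashlib.md5(routing.encode()).digest()[:4], "big")
    return h % num_primary_shards

# The same routing value always lands on the same shard...
print(shard_for("user-42", 5) == shard_for("user-42", 5))  # True
# ...which is why the number of primary shards is fixed at index creation.
print(0 <= shard_for("user-42", 5) < 5)  # True
```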
Distributed
The index operation is directed to the primary shard based on its route and performed on the actual node containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas.
Active shards
To improve the resiliency of writes to the system, indexing operations can be configured to wait for a certain number of active shard copies before proceeding with the operation.
If the requisite number of active shard copies are not available, then the write operation must wait and retry, until either the requisite shard copies have started or a timeout occurs.
By default, write operations only wait for the primary shards to be active before proceeding (that is, wait_for_active_shards is 1).
This default can be overridden dynamically in the index settings by setting index.write.wait_for_active_shards.
To alter this behavior per operation, use the wait_for_active_shards request parameter.
Valid values are all or any positive integer up to the total number of configured copies per shard in the index (which is number_of_replicas + 1).
Specifying a negative value or a number greater than the number of shard copies will throw an error.
For example, suppose you have a cluster of three nodes, A, B, and C, and you create an index with the number of replicas set to 3 (resulting in 4 shard copies, one more copy than there are nodes).
If you attempt an indexing operation, by default the operation will only ensure the primary copy of each shard is available before proceeding.
This means that even if B and C went down and A hosted the primary shard copies, the indexing operation would still proceed with only one copy of the data.
If wait_for_active_shards is set on the request to 3 (and all three nodes are up), the indexing operation will require 3 active shard copies before proceeding.
This requirement should be met because there are 3 active nodes in the cluster, each one holding a copy of the shard.
However, if you set wait_for_active_shards to all (or to 4, which is the same in this situation), the indexing operation will not proceed because you do not have all 4 copies of each shard active in the index.
The operation will time out unless a new node is brought up in the cluster to host the fourth copy of the shard.
It is important to note that this setting greatly reduces the chances of the write operation not writing to the requisite number of shard copies, but it does not completely eliminate the possibility, because this check occurs before the write operation starts.
After the write operation is underway, it is still possible for replication to fail on any number of shard copies but still succeed on the primary.
The _shards section of the API response reveals the number of shard copies on which replication succeeded and failed.
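The validation rule described above can be sketched as a small helper. This is not Elasticsearch code; the function name is illustrative and the rule follows the documented constraints (all, or a positive integer up to number_of_replicas + 1).

```python
def check_wait_for_active_shards(value, number_of_replicas: int) -> int:
    """Validate a wait_for_active_shards value for an index.

    Returns the number of copies that must be active. Valid values are
    'all' or any positive integer up to the total number of shard
    copies (number_of_replicas + 1); anything else raises, mirroring
    the error described above for negative or too-large values.
    """
    total_copies = number_of_replicas + 1
    if value == "all":
        return total_copies
    if isinstance(value, int) and 1 <= value <= total_copies:
        return value
    raise ValueError(
        f"wait_for_active_shards must be 'all' or an integer in 1..{total_copies}"
    )
```

In the three-node example above, a value of 3 passes this check while all (equivalent to 4) cannot be satisfied until a fourth node joins.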
Required authorization
- Index privileges: create
Path parameters
- The name of the data stream or index to target. If the target doesn't exist and matches the name or wildcard (*) pattern of an index template with a data_stream definition, this request creates the data stream. If the target doesn't exist and doesn't match a data stream template, this request creates the index.
- A unique identifier for the document. To automatically generate a document ID, use the POST /<target>/_doc/ request format.
Query parameters
- If true, the document source is included in the error message in case of parsing errors.
- The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.
- If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes. Values are true, false, or wait_for.
- If true, the destination must be an index alias.
- If true, the request's actions must target a data stream (existing or to be created).
- A custom value that is used to route operations to a specific shard.
- The period the request waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards. Elasticsearch waits for at least the specified timeout period before failing; the actual wait time could be longer, particularly when multiple waits occur. This parameter is useful for situations where the primary shard assigned to perform the operation might not be available when the operation runs, for example because the primary shard is currently recovering from a gateway or undergoing relocation. By default, the operation will wait on the primary shard to become available for at least 1 minute before failing and responding with an error. Values are -1 or 0.
- The explicit version number for concurrency control. It must be a non-negative long number.
- The version type. Supported values include:
  internal: Use internal versioning that starts at 1 and increments with each update or delete.
  external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
  external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
  force: This option is deprecated because it can cause primary and replica shards to diverge.
  Values are internal, external, external_gte, or force.
- The number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active. Values are all or index-setting.
PUT my-index-000001/_create/1
{
"@timestamp": "2099-11-15T13:12:00",
"message": "GET /search HTTP/1.1 200 1070000",
"user": {
"id": "kimchy"
}
}
resp = client.create(
index="my-index-000001",
id="1",
document={
"@timestamp": "2099-11-15T13:12:00",
"message": "GET /search HTTP/1.1 200 1070000",
"user": {
"id": "kimchy"
}
},
)
const response = await client.create({
index: "my-index-000001",
id: 1,
document: {
"@timestamp": "2099-11-15T13:12:00",
message: "GET /search HTTP/1.1 200 1070000",
user: {
id: "kimchy",
},
},
});
response = client.create(
index: "my-index-000001",
id: "1",
body: {
"@timestamp": "2099-11-15T13:12:00",
"message": "GET /search HTTP/1.1 200 1070000",
"user": {
"id": "kimchy"
}
}
)
$resp = $client->create([
"index" => "my-index-000001",
"id" => "1",
"body" => [
"@timestamp" => "2099-11-15T13:12:00",
"message" => "GET /search HTTP/1.1 200 1070000",
"user" => [
"id" => "kimchy",
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"@timestamp":"2099-11-15T13:12:00","message":"GET /search HTTP/1.1 200 1070000","user":{"id":"kimchy"}}' "$ELASTICSEARCH_URL/my-index-000001/_create/1"
client.create(c -> c
.id("1")
.index("my-index-000001")
.document(JsonData.fromJson("{\"@timestamp\":\"2099-11-15T13:12:00\",\"message\":\"GET /search HTTP/1.1 200 1070000\",\"user\":{\"id\":\"kimchy\"}}"))
);
{
"@timestamp": "2099-11-15T13:12:00",
"message": "GET /search HTTP/1.1 200 1070000",
"user": {
"id": "kimchy"
}
}
{
"_index": "my-index-000001",
"_id": "1",
"_version": 1,
"result": "created",
"_shards": {
"total": 1,
"successful": 1,
"failed": 0
},
"_seq_no": 0,
"_primary_term": 1
}
Delete a document
Generally available
Remove a JSON document from the specified index.
NOTE: You cannot send deletion requests directly to a data stream. To delete a document in a data stream, you must target the backing index containing the document.
Optimistic concurrency control
Delete operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the if_seq_no and if_primary_term parameters.
If a mismatch is detected, the operation will result in a VersionConflictException and a status code of 409.
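The conditional check can be sketched as a simulation of the semantics. All names here are invented for illustration; the real check happens inside Elasticsearch on the primary shard.

```python
class VersionConflictError(Exception):
    """Stands in for the 409 VersionConflictException."""

def conditional_delete(doc: dict, if_seq_no: int, if_primary_term: int) -> None:
    """Delete only if the stored seq_no and primary term match the request.

    Mirrors the optimistic concurrency control described above: a
    mismatch means someone else modified the document in the meantime.
    """
    if doc["_seq_no"] != if_seq_no or doc["_primary_term"] != if_primary_term:
        raise VersionConflictError("409: document was modified concurrently")
    doc["deleted"] = True

stored = {"_seq_no": 5, "_primary_term": 1, "deleted": False}
conditional_delete(stored, if_seq_no=5, if_primary_term=1)  # matches, succeeds
```

A client would obtain _seq_no and _primary_term from a previous read or write response and pass them back with the delete request.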
Versioning
Each document indexed is versioned.
When deleting a document, the version can be specified to make sure the relevant document you are trying to delete is actually being deleted and it has not changed in the meantime.
Every write operation run on a document, deletes included, causes its version to be incremented.
The version number of a deleted document remains available for a short time after deletion to allow for control of concurrent operations.
The length of time for which a deleted document's version remains available is determined by the index.gc_deletes index setting.
Routing
If routing is used during indexing, the routing value also needs to be specified to delete a document.
If the _routing mapping is set to required and no routing value is specified, the delete API throws a RoutingMissingException and rejects the request.
For example:
DELETE /my-index-000001/_doc/1?routing=shard-1
This request deletes the document with ID 1, but it is routed based on the user. The document is not deleted if the correct routing is not specified.
Distributed
The delete operation gets hashed into a specific shard ID. It then gets redirected into the primary shard within that ID group and replicated (if needed) to shard replicas within that ID group.
Required authorization
- Index privileges: delete
Query parameters
- Only perform the operation if the document has this primary term.
- Only perform the operation if the document has this sequence number.
- If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes. Values are true, false, or wait_for.
- A custom value used to route operations to a specific shard.
- The period to wait for active shards. This parameter is useful for situations where the primary shard assigned to perform the delete operation might not be available when the delete operation runs. Some reasons for this might be that the primary shard is currently recovering from a store or undergoing relocation. By default, the delete operation will wait on the primary shard to become available for up to 1 minute before failing and responding with an error. Values are -1 or 0.
- An explicit version number for concurrency control. It must match the current version of the document for the request to succeed.
- The version type. Supported values include:
  internal: Use internal versioning that starts at 1 and increments with each update or delete.
  external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
  external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
  force: This option is deprecated because it can cause primary and replica shards to diverge.
  Values are internal, external, external_gte, or force.
- The minimum number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active. Values are all or index-setting.
DELETE /my-index-000001/_doc/1
resp = client.delete(
index="my-index-000001",
id="1",
)
const response = await client.delete({
index: "my-index-000001",
id: 1,
});
response = client.delete(
index: "my-index-000001",
id: "1"
)
$resp = $client->delete([
"index" => "my-index-000001",
"id" => "1",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_doc/1"
client.delete(d -> d
.id("1")
.index("my-index-000001")
);
{
"_shards": {
"total": 2,
"failed": 0,
"successful": 2
},
"_index": "my-index-000001",
"_id": "1",
"_version": 2,
"_primary_term": 1,
"_seq_no": 5,
"result": "deleted"
}
Throttle a reindex operation
Generally available; Added in 2.4.0
Change the number of requests per second for a particular reindex operation. For example:
POST _reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1
Rethrottling that speeds up the query takes effect immediately. Rethrottling that slows down the query will take effect after completing the current batch. This behavior prevents scroll timeouts.
POST _reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1
resp = client.reindex_rethrottle(
task_id="r1A2WoRbTwKZ516z6NEs5A:36619",
requests_per_second="-1",
)
const response = await client.reindexRethrottle({
task_id: "r1A2WoRbTwKZ516z6NEs5A:36619",
requests_per_second: "-1",
});
response = client.reindex_rethrottle(
task_id: "r1A2WoRbTwKZ516z6NEs5A:36619",
requests_per_second: "-1"
)
$resp = $client->reindexRethrottle([
"task_id" => "r1A2WoRbTwKZ516z6NEs5A:36619",
"requests_per_second" => "-1",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1"
client.reindexRethrottle(r -> r
.requestsPerSecond(-1.0F)
.taskId("r1A2WoRbTwKZ516z6NEs5A:36619")
);
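The effect of requests_per_second can be pictured as simple batch pacing: the task pauses between batches so the average rate matches the target, and -1 disables throttling entirely. The sketch below is a toy model of that idea, not Elasticsearch's actual scheduling code.

```python
def pause_between_batches(batch_size: int, requests_per_second: float) -> float:
    """Seconds of pause budgeted per batch to respect the target rate.

    A requests_per_second of -1 means unthrottled: no pause at all.
    For example, a 1000-document batch at 500 requests per second
    should take 2 seconds in total.
    """
    if requests_per_second == -1:
        return 0.0
    return batch_size / requests_per_second
```

This also explains why slowing a task down only takes effect after the current batch: the pause is computed per batch, so the in-flight batch finishes at the old rate.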
Throttle an update by query operation
Generally available; Added in 6.5.0
Change the number of requests per second for a particular update by query operation. Rethrottling that speeds up the query takes effect immediately, but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.
POST _update_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1
resp = client.update_by_query_rethrottle(
task_id="r1A2WoRbTwKZ516z6NEs5A:36619",
requests_per_second="-1",
)
const response = await client.updateByQueryRethrottle({
task_id: "r1A2WoRbTwKZ516z6NEs5A:36619",
requests_per_second: "-1",
});
response = client.update_by_query_rethrottle(
task_id: "r1A2WoRbTwKZ516z6NEs5A:36619",
requests_per_second: "-1"
)
$resp = $client->updateByQueryRethrottle([
"task_id" => "r1A2WoRbTwKZ516z6NEs5A:36619",
"requests_per_second" => "-1",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_update_by_query/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1"
client.updateByQueryRethrottle(u -> u
.requestsPerSecond(-1.0F)
.taskId("r1A2WoRbTwKZ516z6NEs5A:36619")
);
Get an enrich policy
Generally available; Added in 7.5.0
All methods and paths for this operation:
Returns information about an enrich policy.
Path parameters
- Comma-separated list of enrich policy names used to limit the request. To return information for all enrich policies, omit this parameter.
GET /_enrich/policy/my-policy
resp = client.enrich.get_policy(
name="my-policy",
)
const response = await client.enrich.getPolicy({
name: "my-policy",
});
response = client.enrich.get_policy(
name: "my-policy"
)
$resp = $client->enrich()->getPolicy([
"name" => "my-policy",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/policy/my-policy"
client.enrich().getPolicy(g -> g
.name("my-policy")
);
Run an enrich policy
Generally available; Added in 7.5.0
Create the enrich index for an existing enrich policy.
PUT /_enrich/policy/my-policy/_execute?wait_for_completion=false
resp = client.enrich.execute_policy(
name="my-policy",
wait_for_completion=False,
)
const response = await client.enrich.executePolicy({
name: "my-policy",
wait_for_completion: "false",
});
response = client.enrich.execute_policy(
name: "my-policy",
wait_for_completion: "false"
)
$resp = $client->enrich()->executePolicy([
"name" => "my-policy",
"wait_for_completion" => "false",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/policy/my-policy/_execute?wait_for_completion=false"
client.enrich().executePolicy(e -> e
.name("my-policy")
.waitForCompletion(false)
);
EQL
Event Query Language (EQL) is a query language for event-based time series data, such as logs, metrics, and traces.
GET /_eql/search/status/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=
resp = client.eql.get_status(
id="FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
)
const response = await client.eql.getStatus({
id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
});
response = client.eql.get_status(
id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE="
)
$resp = $client->eql()->getStatus([
"id" => "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_eql/search/status/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE="
client.eql().getStatus(g -> g
.id("FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=")
);
{
"id": "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
"is_running" : true,
"is_partial" : true,
"start_time_in_millis" : 1611690235000,
"expiration_time_in_millis" : 1611690295000
}
Check component templates
Generally available; Added in 7.8.0
Returns information about whether a particular component template exists.
Path parameters
- Comma-separated list of component template names used to limit the request. Wildcard (*) expressions are supported.
Query parameters
- Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.
curl \
--request HEAD 'http://api.example.com/_component_template/{name}' \
--header "Authorization: $API_KEY"
Path parameters
- Comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported.
Query parameters
- Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
DELETE /_index_template/my-index-template
resp = client.indices.delete_index_template(
name="my-index-template",
)
const response = await client.indices.deleteIndexTemplate({
name: "my-index-template",
});
response = client.indices.delete_index_template(
name: "my-index-template"
)
$resp = $client->indices()->deleteIndexTemplate([
"name" => "my-index-template",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_index_template/my-index-template"
client.indices().deleteIndexTemplate(d -> d
.name("my-index-template")
);
Path parameters
- The name of the legacy index template to delete. Wildcard (*) expressions are supported.
Query parameters
- Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
DELETE _template/.cloud-hot-warm-allocation-0
resp = client.indices.delete_template(
name=".cloud-hot-warm-allocation-0",
)
const response = await client.indices.deleteTemplate({
name: ".cloud-hot-warm-allocation-0",
});
response = client.indices.delete_template(
name: ".cloud-hot-warm-allocation-0"
)
$resp = $client->indices()->deleteTemplate([
"name" => ".cloud-hot-warm-allocation-0",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_template/.cloud-hot-warm-allocation-0"
client.indices().deleteTemplate(d -> d
.name(".cloud-hot-warm-allocation-0")
);
Analyze the index disk usage
Technical preview; Added in 7.15.0
Analyze the disk usage of each field of an index or data stream. This API might not support indices created in previous Elasticsearch versions. The result of a small index can be inaccurate as some parts of an index might not be analyzed by the API.
NOTE: The total size of fields of the analyzed shards of the index in the response is usually smaller than the index store_size value because some small metadata files are ignored and some parts of data files might not be scanned by the API.
Since stored fields are stored together in a compressed format, the sizes of stored fields are also estimates and can be inaccurate.
The stored size of the _id field is likely underestimated while the _source field is overestimated.
Path parameters
- Comma-separated list of data streams, indices, and aliases used to limit the request. It's recommended to execute this API with a single index (or the latest backing index of a data stream) as the API consumes resources significantly.
Query parameters
- If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
- Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Supported values include:
  all: Match any data stream or index, including hidden ones.
  open: Match open, non-hidden indices. Also matches any non-hidden data stream.
  closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
  hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
  none: Wildcard expressions are not accepted.
  Values are all, open, closed, hidden, or none.
- If true, the API performs a flush before analysis. If false, the response may not include uncommitted data.
- Analyzing field disk usage is resource-intensive. To use the API, this parameter must be set to true.
POST /my-index-000001/_disk_usage?run_expensive_tasks=true
resp = client.indices.disk_usage(
index="my-index-000001",
run_expensive_tasks=True,
)
const response = await client.indices.diskUsage({
index: "my-index-000001",
run_expensive_tasks: "true",
});
response = client.indices.disk_usage(
index: "my-index-000001",
run_expensive_tasks: "true"
)
$resp = $client->indices()->diskUsage([
"index" => "my-index-000001",
"run_expensive_tasks" => "true",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_disk_usage?run_expensive_tasks=true"
client.indices().diskUsage(d -> d
.index("my-index-000001")
.runExpensiveTasks(true)
);
Get index shard stores
Generally available
All methods and paths for this operation:
Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream's backing indices.
The index shard stores API returns the following information:
- The node on which each replica shard exists.
- The allocation ID for each replica shard.
- A unique ID for each replica shard.
- Any errors encountered while opening the shard index or from an earlier failure.
By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.
Required authorization
- Index privileges: monitor
Query parameters
- If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
- Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supported values include:
  all: Match any data stream or index, including hidden ones.
  open: Match open, non-hidden indices. Also matches any non-hidden data stream.
  closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
  hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
  none: Wildcard expressions are not accepted.
  Values are all, open, closed, hidden, or none.
- List of shard health statuses used to limit the request. Supported values include:
  green: The primary shard and all replica shards are assigned.
  yellow: One or more replica shards are unassigned.
  red: The primary shard is unassigned.
  all: Return all shards, regardless of health status.
  Values are green, yellow, red, or all.
GET /_shard_stores?status=green
resp = client.indices.shard_stores(
status="green",
)
const response = await client.indices.shardStores({
status: "green",
});
response = client.indices.shard_stores(
status: "green"
)
$resp = $client->indices()->shardStores([
"status" => "green",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_shard_stores?status=green"
{
"indices": {
"my-index-000001": {
"shards": {
"0": {
"stores": [
{
"sPa3OgxLSYGvQ4oPs-Tajw": {
"name": "node_t0",
"ephemeral_id": "9NlXRFGCT1m8tkvYCMK-8A",
"transport_address": "local[1]",
"external_id": "node_t0",
"attributes": {},
"roles": [],
"version": "8.10.0",
"min_index_version": 7000099,
"max_index_version": 8100099
},
"allocation_id": "2iNySv_OQVePRX-yaRH_lQ",
"allocation": "primary",
"store_exception": {}
}
]
}
}
}
}
}
Query parameters
- If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
- Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Supported values include:
  all: Match any data stream or index, including hidden ones.
  open: Match open, non-hidden indices. Also matches any non-hidden data stream.
  closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
  hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
  none: Wildcard expressions are not accepted.
  Values are all, open, closed, hidden, or none.
- Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).
curl \
--request POST 'http://api.example.com/{index}/_unfreeze' \
--header "Authorization: $API_KEY"
GET _ilm/status
resp = client.ilm.get_status()
const response = await client.ilm.getStatus();
response = client.ilm.get_status
$resp = $client->ilm()->getStatus();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ilm/status"
client.ilm().getStatus();
{
"operation_mode": "RUNNING"
}
Create an ELSER inference endpoint
Deprecated
Generally available; Added in 8.11.0
Create an inference endpoint to perform an inference task with the elser service.
You can also deploy ELSER by using the Elasticsearch inference integration.
Your Elasticsearch deployment contains a preconfigured ELSER inference endpoint; you only need to create an endpoint using the API if you want to customize the settings.
The API request will automatically download and deploy the ELSER model if it isn't already downloaded.
You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count".
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
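The readiness check described above can be scripted against the statistics response. The payload below is a trimmed, hypothetical sketch of the relevant fields (state, allocation_count, target_allocation_count); the helper name is invented and everything else in the real response is omitted.

```python
def deployment_ready(stats: dict) -> bool:
    """True once the model is fully allocated, per the check above."""
    alloc = stats["allocation_status"]
    return (
        alloc["state"] == "fully_allocated"
        and alloc["allocation_count"] == alloc["target_allocation_count"]
    )

# Trimmed example payload (hypothetical shape, for illustration only):
stats = {
    "allocation_status": {
        "state": "fully_allocated",
        "allocation_count": 1,
        "target_allocation_count": 1,
    }
}
```

A deployment script could poll the get trained model statistics API and wait until this predicate holds before sending inference requests.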
Required authorization
- Cluster privileges: manage_inference
Path parameters
- The type of the inference task that the model will perform. Value is sparse_embedding.
- The unique identifier of the inference endpoint.
Query parameters
- Specifies the amount of time to wait for the inference endpoint to be created. Values are -1 or 0.
PUT _inference/sparse_embedding/my-elser-model
{
"service": "elser",
"service_settings": {
"num_allocations": 1,
"num_threads": 1
}
}
resp = client.inference.put(
task_type="sparse_embedding",
inference_id="my-elser-model",
inference_config={
"service": "elser",
"service_settings": {
"num_allocations": 1,
"num_threads": 1
}
},
)
const response = await client.inference.put({
task_type: "sparse_embedding",
inference_id: "my-elser-model",
inference_config: {
service: "elser",
service_settings: {
num_allocations: 1,
num_threads: 1,
},
},
});
response = client.inference.put(
task_type: "sparse_embedding",
inference_id: "my-elser-model",
body: {
"service": "elser",
"service_settings": {
"num_allocations": 1,
"num_threads": 1
}
}
)
$resp = $client->inference()->put([
"task_type" => "sparse_embedding",
"inference_id" => "my-elser-model",
"body" => [
"service" => "elser",
"service_settings" => [
"num_allocations" => 1,
"num_threads" => 1,
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"elser","service_settings":{"num_allocations":1,"num_threads":1}}' "$ELASTICSEARCH_URL/_inference/sparse_embedding/my-elser-model"
client.inference().put(p -> p
.inferenceId("my-elser-model")
.taskType(TaskType.SparseEmbedding)
.inferenceConfig(i -> i
.service("elser")
.serviceSettings(JsonData.fromJson("{\"num_allocations\":1,\"num_threads\":1}"))
)
);
{
"service": "elser",
"service_settings": {
"num_allocations": 1,
"num_threads": 1
}
}
{
"service": "elser",
"service_settings": {
"adaptive_allocations": {
"enabled": true,
"min_number_of_allocations": 3,
"max_number_of_allocations": 10
},
"num_threads": 1
}
}
{
"inference_id": "my-elser-model",
"task_type": "sparse_embedding",
"service": "elser",
"service_settings": {
"num_allocations": 1,
"num_threads": 1
},
"task_settings": {}
}
Create a Google AI Studio inference endpoint
Generally available; Added in 8.15.0
Path parameters
- The type of the inference task that the model will perform. Values are completion or text_embedding.
- The unique identifier of the inference endpoint.
Query parameters
- Specifies the amount of time to wait for the inference endpoint to be created. Values are -1 or 0.
PUT _inference/completion/google_ai_studio_completion
{
"service": "googleaistudio",
"service_settings": {
"api_key": "api-key",
"model_id": "model-id"
}
}
resp = client.inference.put(
task_type="completion",
inference_id="google_ai_studio_completion",
inference_config={
"service": "googleaistudio",
"service_settings": {
"api_key": "api-key",
"model_id": "model-id"
}
},
)
const response = await client.inference.put({
task_type: "completion",
inference_id: "google_ai_studio_completion",
inference_config: {
service: "googleaistudio",
service_settings: {
api_key: "api-key",
model_id: "model-id",
},
},
});
response = client.inference.put(
task_type: "completion",
inference_id: "google_ai_studio_completion",
body: {
"service": "googleaistudio",
"service_settings": {
"api_key": "api-key",
"model_id": "model-id"
}
}
)
$resp = $client->inference()->put([
"task_type" => "completion",
"inference_id" => "google_ai_studio_completion",
"body" => [
"service" => "googleaistudio",
"service_settings" => [
"api_key" => "api-key",
"model_id" => "model-id",
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"googleaistudio","service_settings":{"api_key":"api-key","model_id":"model-id"}}' "$ELASTICSEARCH_URL/_inference/completion/google_ai_studio_completion"
client.inference().put(p -> p
.inferenceId("google_ai_studio_completion")
.taskType(TaskType.Completion)
.inferenceConfig(i -> i
.service("googleaistudio")
.serviceSettings(JsonData.fromJson("{\"api_key\":\"api-key\",\"model_id\":\"model-id\"}"))
)
);
{
"service": "googleaistudio",
"service_settings": {
"api_key": "api-key",
"model_id": "model-id"
}
}
Create a VoyageAI inference endpoint
Generally available; Added in 8.19.0
Path parameters
- The type of the inference task that the model will perform. Values are text_embedding or rerank.
- The unique identifier of the inference endpoint.
Query parameters
- Specifies the amount of time to wait for the inference endpoint to be created. Values are -1 or 0.
PUT _inference/text_embedding/openai-embeddings
{
"service": "voyageai",
"service_settings": {
"model_id": "voyage-3-large",
"dimensions": 512
}
}
resp = client.inference.put(
task_type="text_embedding",
inference_id="openai-embeddings",
inference_config={
"service": "voyageai",
"service_settings": {
"model_id": "voyage-3-large",
"dimensions": 512
}
},
)
const response = await client.inference.put({
task_type: "text_embedding",
inference_id: "openai-embeddings",
inference_config: {
service: "voyageai",
service_settings: {
model_id: "voyage-3-large",
dimensions: 512,
},
},
});
response = client.inference.put(
task_type: "text_embedding",
inference_id: "openai-embeddings",
body: {
"service": "voyageai",
"service_settings": {
"model_id": "voyage-3-large",
"dimensions": 512
}
}
)
$resp = $client->inference()->put([
"task_type" => "text_embedding",
"inference_id" => "openai-embeddings",
"body" => [
"service" => "voyageai",
"service_settings" => [
"model_id" => "voyage-3-large",
"dimensions" => 512,
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"voyageai","service_settings":{"model_id":"voyage-3-large","dimensions":512}}' "$ELASTICSEARCH_URL/_inference/text_embedding/openai-embeddings"
client.inference().put(p -> p
.inferenceId("openai-embeddings")
.taskType(TaskType.TextEmbedding)
.inferenceConfig(i -> i
.service("voyageai")
.serviceSettings(JsonData.fromJson("{\"model_id\":\"voyage-3-large\",\"dimensions\":512}"))
)
);
{
"service": "voyageai",
"service_settings": {
"model_id": "voyage-3-large",
"dimensions": 512
}
}
{
"service": "voyageai",
"service_settings": {
"model_id": "rerank-2"
}
}
Perform text embedding inference on the service
Generally available; Added in 8.11.0
Query parameters
- Specifies the amount of time to wait for the inference request to complete. Values are -1 or 0.
POST _inference/text_embedding/my-cohere-endpoint
{
"input": "The sky above the port was the color of television tuned to a dead channel.",
"task_settings": {
"input_type": "ingest"
}
}
resp = client.inference.text_embedding(
inference_id="my-cohere-endpoint",
input="The sky above the port was the color of television tuned to a dead channel.",
task_settings={
"input_type": "ingest"
},
)
const response = await client.inference.textEmbedding({
inference_id: "my-cohere-endpoint",
input:
"The sky above the port was the color of television tuned to a dead channel.",
task_settings: {
input_type: "ingest",
},
});
response = client.inference.text_embedding(
inference_id: "my-cohere-endpoint",
body: {
"input": "The sky above the port was the color of television tuned to a dead channel.",
"task_settings": {
"input_type": "ingest"
}
}
)
$resp = $client->inference()->textEmbedding([
"inference_id" => "my-cohere-endpoint",
"body" => [
"input" => "The sky above the port was the color of television tuned to a dead channel.",
"task_settings" => [
"input_type" => "ingest",
],
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"input":"The sky above the port was the color of television tuned to a dead channel.","task_settings":{"input_type":"ingest"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/my-cohere-endpoint"
client.inference().textEmbedding(t -> t
.inferenceId("my-cohere-endpoint")
.input("The sky above the port was the color of television tuned to a dead channel.")
.taskSettings(JsonData.fromJson("{\"input_type\":\"ingest\"}"))
);
{
"input": "The sky above the port was the color of television tuned to a dead channel.",
"task_settings": {
"input_type": "ingest"
}
}
{
"text_embedding": [
{
"embedding": [
0.018569946,
-0.036895752,
0.01486969,
-0.0045204163,
-0.04385376,
0.0075950623,
0.04260254,
-0.004005432,
0.007865906,
0.030792236,
-0.050476074,
0.011795044,
-0.011642456,
-0.010070801
]
}
]
}
Get GeoIP database configurations
Generally available; Added in 8.15.0
All methods and paths for this operation:
Get information about one or more IP geolocation database configurations.
curl \
--request GET 'http://api.example.com/_ingest/geoip/database/{id}' \
--header "Authorization: $API_KEY"
Create or update a GeoIP database configuration
Generally available; Added in 8.15.0
Refer to the create or update IP geolocation database configuration API.
Query parameters
- Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
curl \
--request PUT 'http://api.example.com/_ingest/geoip/database/{id}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"name":"string","maxmind":{"account_id":"string"}}'
Get IP geolocation database configurations
Generally available; Added in 8.15.0
GET /_ingest/ip_location/database/my-database-id
resp = client.ingest.get_ip_location_database(
id="my-database-id",
)
const response = await client.ingest.getIpLocationDatabase({
id: "my-database-id",
});
response = client.ingest.get_ip_location_database(
id: "my-database-id"
)
$resp = $client->ingest()->getIpLocationDatabase([
"id" => "my-database-id",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ingest/ip_location/database/my-database-id"
client.ingest().getIpLocationDatabase(g -> g
.id("my-database-id")
);
Create or update an IP geolocation database configuration
Generally available; Added in 8.15.0
Query parameters
- The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. A value of -1 indicates that the request should never time out. Values are -1 or 0.
- The period to wait for a response from all relevant nodes in the cluster after updating the cluster metadata. If no response is received before the timeout expires, the cluster metadata update still applies but the response indicates that it was not completely acknowledged. A value of -1 indicates that the request should never time out. Values are -1 or 0.
Body
Required
The configuration necessary to identify which IP geolocation provider to use to download a database, as well as any provider-specific configuration necessary for such downloading.
At present, the only supported providers are maxmind and ipinfo, and the maxmind provider requires that an account_id (string) is configured.
A provider (either maxmind or ipinfo) must be specified. The web and local providers can be returned as read-only configurations.
PUT _ingest/ip_location/database/my-database-1
{
"name": "GeoIP2-Domain",
"maxmind": {
"account_id": "1234567"
}
}
resp = client.ingest.put_ip_location_database(
id="my-database-1",
configuration={
"name": "GeoIP2-Domain",
"maxmind": {
"account_id": "1234567"
}
},
)
const response = await client.ingest.putIpLocationDatabase({
id: "my-database-1",
configuration: {
name: "GeoIP2-Domain",
maxmind: {
account_id: "1234567",
},
},
});
response = client.ingest.put_ip_location_database(
id: "my-database-1",
body: {
"name": "GeoIP2-Domain",
"maxmind": {
"account_id": "1234567"
}
}
)
$resp = $client->ingest()->putIpLocationDatabase([
"id" => "my-database-1",
"body" => [
"name" => "GeoIP2-Domain",
"maxmind" => [
"account_id" => "1234567",
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"name":"GeoIP2-Domain","maxmind":{"account_id":"1234567"}}' "$ELASTICSEARCH_URL/_ingest/ip_location/database/my-database-1"
client.ingest().putIpLocationDatabase(p -> p
.id("my-database-1")
.configuration(c -> c
.maxmind(m -> m
.accountId("1234567")
)
.name("GeoIP2-Domain")
)
);
{
"name": "GeoIP2-Domain",
"maxmind": {
"account_id": "1234567"
}
}
Set upgrade_mode for ML indices
Generally available; Added in 6.7.0
Sets a cluster-wide upgrade_mode setting that prepares machine learning indices for an upgrade. When upgrading your cluster, in some circumstances you must restart your nodes and reindex your machine learning indices. In those circumstances, there must be no machine learning jobs running. You can close the machine learning jobs, do the upgrade, then open all the jobs again. Alternatively, you can use this API to temporarily halt tasks associated with the jobs and datafeeds and prevent new jobs from opening. You can also use this API during upgrades that do not require you to reindex your machine learning indices, though stopping jobs is not a requirement in that case. You can see the current value for the upgrade_mode setting by using the get machine learning info API.
Required authorization
- Cluster privileges:
manage_ml
POST _ml/set_upgrade_mode?enabled=true
resp = client.ml.set_upgrade_mode(
enabled=True,
)
const response = await client.ml.setUpgradeMode({
enabled: "true",
});
response = client.ml.set_upgrade_mode(
enabled: "true"
)
$resp = $client->ml()->setUpgradeMode([
"enabled" => "true",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/set_upgrade_mode?enabled=true"
client.ml().setUpgradeMode(s -> s
.enabled(true)
);
Delete a model snapshot
Generally available; Added in 5.4.0
DELETE _ml/anomaly_detectors/farequote/model_snapshots/1491948163
resp = client.ml.delete_model_snapshot(
job_id="farequote",
snapshot_id="1491948163",
)
const response = await client.ml.deleteModelSnapshot({
job_id: "farequote",
snapshot_id: 1491948163,
});
response = client.ml.delete_model_snapshot(
job_id: "farequote",
snapshot_id: "1491948163"
)
$resp = $client->ml()->deleteModelSnapshot([
"job_id" => "farequote",
"snapshot_id" => "1491948163",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/anomaly_detectors/farequote/model_snapshots/1491948163"
client.ml().deleteModelSnapshot(d -> d
.jobId("farequote")
.snapshotId("1491948163")
);
{
"acknowledged": true
}
Create part of a trained model definition
Generally available; Added in 8.0.0
Path parameters
- The unique identifier of the trained model.
- The definition part number. When the definition is loaded for inference the definition parts are streamed in the order of their part number. The first part must be 0 and the final part must be total_parts - 1.
PUT _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/definition/0
{
"definition": "...",
"total_definition_length": 265632637,
"total_parts": 64
}
resp = client.ml.put_trained_model_definition_part(
model_id="elastic__distilbert-base-uncased-finetuned-conll03-english",
part="0",
definition="...",
total_definition_length=265632637,
total_parts=64,
)
const response = await client.ml.putTrainedModelDefinitionPart({
model_id: "elastic__distilbert-base-uncased-finetuned-conll03-english",
part: 0,
definition: "...",
total_definition_length: 265632637,
total_parts: 64,
});
response = client.ml.put_trained_model_definition_part(
model_id: "elastic__distilbert-base-uncased-finetuned-conll03-english",
part: "0",
body: {
"definition": "...",
"total_definition_length": 265632637,
"total_parts": 64
}
)
$resp = $client->ml()->putTrainedModelDefinitionPart([
"model_id" => "elastic__distilbert-base-uncased-finetuned-conll03-english",
"part" => "0",
"body" => [
"definition" => "...",
"total_definition_length" => 265632637,
"total_parts" => 64,
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"definition":"...","total_definition_length":265632637,"total_parts":64}' "$ELASTICSEARCH_URL/_ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/definition/0"
{
"definition": "...",
"total_definition_length": 265632637,
"total_parts": 64
}
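The part-numbering rules above (parts run from 0 through total_parts - 1, each carrying a base64-encoded slice of the definition) can be sketched as follows. The helper name and chunk size are hypothetical, not part of the API:

```python
import base64
import math

# Hypothetical sketch of preparing definition parts: split the serialized
# model into base64-encoded chunks numbered 0 .. total_parts - 1, as the
# part-number rules above require. Chunk size is an arbitrary example.
def split_definition(raw: bytes, chunk_size: int):
    total_parts = math.ceil(len(raw) / chunk_size)
    for part in range(total_parts):
        chunk = raw[part * chunk_size:(part + 1) * chunk_size]
        yield part, base64.b64encode(chunk).decode("ascii"), total_parts

# 10 bytes at chunk size 4 -> parts 0, 1, 2
for part, payload, total in split_definition(b"0123456789", 4):
    print(part, payload, total)
```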
Start a trained model deployment
Generally available; Added in 8.0.0
Path parameters
- The unique identifier of the trained model. Currently, only PyTorch models are supported.
Query parameters
- The inference cache size (in memory outside the JVM heap) per node for the model. The default value is the same size as the model_size_bytes. To disable the cache, 0b can be provided.
- A unique identifier for the deployment of the model.
- The number of model allocations on each node where the model is deployed. All allocations on a node share the same copy of the model in memory but use a separate set of threads to evaluate the model. Increasing this value generally increases the throughput. If this setting is greater than the number of hardware threads it will automatically be changed to a value less than the number of hardware threads. If adaptive_allocations is enabled, do not set this value, because it's automatically set.
- The deployment priority. Values are normal or low.
- Specifies the number of inference requests that are allowed in the queue. After the number of requests exceeds this value, new requests are rejected with a 429 error.
- Sets the number of threads used by each model allocation during inference. This generally increases the inference speed. The inference process is a compute-bound process; any number greater than the number of available hardware threads on the machine does not increase the inference speed. If this setting is greater than the number of hardware threads it will automatically be changed to a value less than the number of hardware threads.
- Specifies the amount of time to wait for the model to deploy. Values are -1 or 0.
- Specifies the allocation status to wait for before returning. Supported values include:
  - started: The trained model is started on at least one node.
  - starting: Trained model deployment is starting but it is not yet deployed on any nodes.
  - fully_allocated: Trained model deployment has started on all valid nodes.
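The clamping behavior described for the allocation and thread settings (requested values above the hardware thread count are automatically reduced) can be illustrated with a simple sketch. This is not Elasticsearch source; the exact adjusted value Elasticsearch picks is implementation-defined, and `min` is used here as a simplification:

```python
import os

# Illustrative sketch: requested values above the number of hardware
# threads are capped, as the parameter descriptions above note.
def clamp_to_hardware_threads(requested: int, hardware_threads=None) -> int:
    if hardware_threads is None:
        hardware_threads = os.cpu_count() or 1
    return max(1, min(requested, hardware_threads))

print(clamp_to_hardware_threads(64, hardware_threads=8))
```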
POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/_start?wait_for=started&timeout=1m
resp = client.ml.start_trained_model_deployment(
model_id="elastic__distilbert-base-uncased-finetuned-conll03-english",
wait_for="started",
timeout="1m",
)
const response = await client.ml.startTrainedModelDeployment({
model_id: "elastic__distilbert-base-uncased-finetuned-conll03-english",
wait_for: "started",
timeout: "1m",
});
response = client.ml.start_trained_model_deployment(
model_id: "elastic__distilbert-base-uncased-finetuned-conll03-english",
wait_for: "started",
timeout: "1m"
)
$resp = $client->ml()->startTrainedModelDeployment([
"model_id" => "elastic__distilbert-base-uncased-finetuned-conll03-english",
"wait_for" => "started",
"timeout" => "1m",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/_start?wait_for=started&timeout=1m"
client.ml().startTrainedModelDeployment(s -> s
.modelId("elastic__distilbert-base-uncased-finetuned-conll03-english")
.timeout(t -> t
.offset(1)
)
.waitFor(DeploymentAllocationState.Started)
);
Create or update a query ruleset
Generally available; Added in 8.10.0
There is a limit of 100 rules per ruleset. This limit can be increased by using the xpack.applications.rules.max_rules_per_ruleset cluster setting.
IMPORTANT: Due to limitations within pinned queries, you can only select documents using ids or docs, but you cannot use both in a single rule. It is advised to use one or the other in query rulesets to avoid errors.
Additionally, pinned queries have a maximum limit of 100 pinned hits. If multiple matching rules pin more than 100 documents, only the first 100 documents are pinned in the order they are specified in the ruleset.
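The per-ruleset rule limit noted above is raised through the cluster settings API. A minimal sketch of building that request body, where the setting name comes from the text above and the value 200 is an arbitrary example:

```python
import json

# Sketch: request body for PUT _cluster/settings to raise the per-ruleset
# rule limit. The setting name is from the docs above; 200 is an example.
def rule_limit_settings(max_rules: int) -> str:
    return json.dumps({
        "persistent": {
            "xpack.applications.rules.max_rules_per_ruleset": max_rules
        }
    })

print(rule_limit_settings(200))
```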
Required authorization
- Cluster privileges:
manage_search_query_rules
PUT _query_rules/my-ruleset
{
"rules": [
{
"rule_id": "my-rule1",
"type": "pinned",
"criteria": [
{
"type": "contains",
"metadata": "user_query",
"values": [ "pugs", "puggles" ]
},
{
"type": "exact",
"metadata": "user_country",
"values": [ "us" ]
}
],
"actions": {
"ids": [
"id1",
"id2"
]
}
},
{
"rule_id": "my-rule2",
"type": "pinned",
"criteria": [
{
"type": "fuzzy",
"metadata": "user_query",
"values": [ "rescue dogs" ]
}
],
"actions": {
"docs": [
{
"_index": "index1",
"_id": "id3"
},
{
"_index": "index2",
"_id": "id4"
}
]
}
}
]
}
resp = client.query_rules.put_ruleset(
ruleset_id="my-ruleset",
rules=[
{
"rule_id": "my-rule1",
"type": "pinned",
"criteria": [
{
"type": "contains",
"metadata": "user_query",
"values": [
"pugs",
"puggles"
]
},
{
"type": "exact",
"metadata": "user_country",
"values": [
"us"
]
}
],
"actions": {
"ids": [
"id1",
"id2"
]
}
},
{
"rule_id": "my-rule2",
"type": "pinned",
"criteria": [
{
"type": "fuzzy",
"metadata": "user_query",
"values": [
"rescue dogs"
]
}
],
"actions": {
"docs": [
{
"_index": "index1",
"_id": "id3"
},
{
"_index": "index2",
"_id": "id4"
}
]
}
}
],
)
const response = await client.queryRules.putRuleset({
ruleset_id: "my-ruleset",
rules: [
{
rule_id: "my-rule1",
type: "pinned",
criteria: [
{
type: "contains",
metadata: "user_query",
values: ["pugs", "puggles"],
},
{
type: "exact",
metadata: "user_country",
values: ["us"],
},
],
actions: {
ids: ["id1", "id2"],
},
},
{
rule_id: "my-rule2",
type: "pinned",
criteria: [
{
type: "fuzzy",
metadata: "user_query",
values: ["rescue dogs"],
},
],
actions: {
docs: [
{
_index: "index1",
_id: "id3",
},
{
_index: "index2",
_id: "id4",
},
],
},
},
],
});
response = client.query_rules.put_ruleset(
ruleset_id: "my-ruleset",
body: {
"rules": [
{
"rule_id": "my-rule1",
"type": "pinned",
"criteria": [
{
"type": "contains",
"metadata": "user_query",
"values": [
"pugs",
"puggles"
]
},
{
"type": "exact",
"metadata": "user_country",
"values": [
"us"
]
}
],
"actions": {
"ids": [
"id1",
"id2"
]
}
},
{
"rule_id": "my-rule2",
"type": "pinned",
"criteria": [
{
"type": "fuzzy",
"metadata": "user_query",
"values": [
"rescue dogs"
]
}
],
"actions": {
"docs": [
{
"_index": "index1",
"_id": "id3"
},
{
"_index": "index2",
"_id": "id4"
}
]
}
}
]
}
)
$resp = $client->queryRules()->putRuleset([
"ruleset_id" => "my-ruleset",
"body" => [
"rules" => array(
[
"rule_id" => "my-rule1",
"type" => "pinned",
"criteria" => array(
[
"type" => "contains",
"metadata" => "user_query",
"values" => array(
"pugs",
"puggles",
),
],
[
"type" => "exact",
"metadata" => "user_country",
"values" => array(
"us",
),
],
),
"actions" => [
"ids" => array(
"id1",
"id2",
),
],
],
[
"rule_id" => "my-rule2",
"type" => "pinned",
"criteria" => array(
[
"type" => "fuzzy",
"metadata" => "user_query",
"values" => array(
"rescue dogs",
),
],
),
"actions" => [
"docs" => array(
[
"_index" => "index1",
"_id" => "id3",
],
[
"_index" => "index2",
"_id" => "id4",
],
),
],
],
),
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"rules":[{"rule_id":"my-rule1","type":"pinned","criteria":[{"type":"contains","metadata":"user_query","values":["pugs","puggles"]},{"type":"exact","metadata":"user_country","values":["us"]}],"actions":{"ids":["id1","id2"]}},{"rule_id":"my-rule2","type":"pinned","criteria":[{"type":"fuzzy","metadata":"user_query","values":["rescue dogs"]}],"actions":{"docs":[{"_index":"index1","_id":"id3"},{"_index":"index2","_id":"id4"}]}}]}' "$ELASTICSEARCH_URL/_query_rules/my-ruleset"
client.queryRules().putRuleset(p -> p
.rules(List.of(QueryRule.queryRuleOf(q -> q
.ruleId("my-rule1")
.type(QueryRuleType.Pinned)
.criteria(List.of(QueryRuleCriteria.of(qu -> qu
.type(QueryRuleCriteriaType.Contains)
.metadata("user_query")
.values(List.of(JsonData.fromJson("\"pugs\""),JsonData.fromJson("\"puggles\"")))),QueryRuleCriteria.of(qu -> qu
.type(QueryRuleCriteriaType.Exact)
.metadata("user_country")
.values(JsonData.fromJson("\"us\"")))))
.actions(a -> a
.ids(List.of("id1","id2"))
)),QueryRule.queryRuleOf(q -> q
.ruleId("my-rule2")
.type(QueryRuleType.Pinned)
.criteria(c -> c
.type(QueryRuleCriteriaType.Fuzzy)
.metadata("user_query")
.values(JsonData.fromJson("\"rescue dogs\""))
)
.actions(a -> a
.docs(List.of(PinnedDoc.of(pi -> pi
.id("id3")
.index("index1")),PinnedDoc.of(pi -> pi
.id("id4")
.index("index2"))))
))))
.rulesetId("my-ruleset")
);
{
"rules": [
{
"rule_id": "my-rule1",
"type": "pinned",
"criteria": [
{
"type": "contains",
"metadata": "user_query",
"values": [ "pugs", "puggles" ]
},
{
"type": "exact",
"metadata": "user_country",
"values": [ "us" ]
}
],
"actions": {
"ids": [
"id1",
"id2"
]
}
},
{
"rule_id": "my-rule2",
"type": "pinned",
"criteria": [
{
"type": "fuzzy",
"metadata": "user_query",
"values": [ "rescue dogs" ]
}
],
"actions": {
"docs": [
{
"_index": "index1",
"_id": "id3"
},
{
"_index": "index2",
"_id": "id4"
}
]
}
}
]
}
DELETE _query_rules/my-ruleset/
resp = client.query_rules.delete_ruleset(
ruleset_id="my-ruleset",
)
const response = await client.queryRules.deleteRuleset({
ruleset_id: "my-ruleset",
});
response = client.query_rules.delete_ruleset(
ruleset_id: "my-ruleset"
)
$resp = $client->queryRules()->deleteRuleset([
"ruleset_id" => "my-ruleset",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_query_rules/my-ruleset/"
client.queryRules().deleteRuleset(d -> d
.rulesetId("my-ruleset")
);
GET _query_rules/?from=0&size=3
resp = client.query_rules.list_rulesets(
from_="0",
size="3",
)
const response = await client.queryRules.listRulesets({
from: 0,
size: 3,
});
response = client.query_rules.list_rulesets(
from: "0",
size: "3"
)
$resp = $client->queryRules()->listRulesets([
"from" => "0",
"size" => "3",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_query_rules/?from=0&size=3"
client.queryRules().listRulesets(l -> l
.from(0)
.size(3)
);
{
"count": 3,
"results": [
{
"ruleset_id": "ruleset-1",
"rule_total_count": 1,
"rule_criteria_types_counts": {
"exact": 1
}
},
{
"ruleset_id": "ruleset-2",
"rule_total_count": 2,
"rule_criteria_types_counts": {
"exact": 1,
"fuzzy": 1
}
},
{
"ruleset_id": "ruleset-3",
"rule_total_count": 3,
"rule_criteria_types_counts": {
"exact": 1,
"fuzzy": 2
}
}
]
}
Get the rollup job capabilities
Deprecated
Technical preview; Added in 6.3.0
All methods and paths for this operation:
Get the capabilities of any rollup jobs that have been configured for a specific index or index pattern.
This API is useful because a rollup job is often configured to roll up only a subset of fields from the source index. Furthermore, only certain aggregations can be configured for various fields, leading to a limited subset of functionality depending on that configuration. This API enables you to inspect an index and determine:
- Does this index have associated rollup data somewhere in the cluster?
- If yes to the first question, what fields were rolled up, what aggregations can be performed, and where does the data live?
Required authorization
- Cluster privileges:
monitor_rollup
GET _rollup/data/sensor-*
resp = client.rollup.get_rollup_caps(
id="sensor-*",
)
const response = await client.rollup.getRollupCaps({
id: "sensor-*",
});
response = client.rollup.get_rollup_caps(
id: "sensor-*"
)
$resp = $client->rollup()->getRollupCaps([
"id" => "sensor-*",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_rollup/data/sensor-*"
client.rollup().getRollupCaps(g -> g
.id("sensor-*")
);
{
"sensor-*" : {
"rollup_jobs" : [
{
"job_id" : "sensor",
"rollup_index" : "sensor_rollup",
"index_pattern" : "sensor-*",
"fields" : {
"node" : [
{
"agg" : "terms"
}
],
"temperature" : [
{
"agg" : "min"
},
{
"agg" : "max"
},
{
"agg" : "sum"
}
],
"timestamp" : [
{
"agg" : "date_histogram",
"time_zone" : "UTC",
"fixed_interval" : "1h",
"delay": "7d"
}
],
"voltage" : [
{
"agg" : "avg"
}
]
}
}
]
}
}
Search rolled-up data
Deprecated
Technical preview; Added in 6.3.0
All methods and paths for this operation:
The rollup search endpoint is needed because, internally, rolled-up documents utilize a different document structure than the original data. It rewrites standard Query DSL into a format that matches the rollup documents, then takes the response and rewrites it back to what a client would expect given the original query.
The request body supports a subset of features from the regular search API. The following functionality is not available:
- size: Because rollups work on pre-aggregated data, no search hits can be returned, so size must be set to zero or omitted entirely.
- highlighter, suggestors, post_filter, profile, explain: These are similarly disallowed.
Searching both historical rollup and non-rollup data
The rollup search API has the capability to search across both "live" non-rollup data and the aggregated rollup data. This is done by simply adding the live indices to the URI. For example:
GET sensor-1,sensor_rollup/_rollup_search
{
"size": 0,
"aggregations": {
"max_temperature": {
"max": {
"field": "temperature"
}
}
}
}
The rollup search endpoint does two things when the search runs:
- The original request is sent to the non-rollup index unaltered.
- A rewritten version of the original request is sent to the rollup index.
When the two responses are received, the endpoint rewrites the rollup response and merges the two together. During the merging process, if there is any overlap in buckets between the two responses, the buckets from the non-rollup index are used.
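The merge step can be sketched with plain dictionaries. This is illustrative only, not Elasticsearch internals: buckets are keyed by their histogram key, and live (non-rollup) buckets replace rollup buckets wherever the two overlap, as described above:

```python
# Illustrative sketch (not Elasticsearch internals): merging histogram
# buckets from a live index and a rollup index, preferring the live
# buckets on overlap, as the merge description above states.
def merge_buckets(live, rollup):
    merged = {b["key"]: b for b in rollup}
    merged.update({b["key"]: b for b in live})  # live index wins on overlap
    return [merged[k] for k in sorted(merged)]

live = [{"key": 1000, "max_temperature": 201.0}]
rollup = [{"key": 0, "max_temperature": 200.0},
          {"key": 1000, "max_temperature": 199.0}]
print(merge_buckets(live, rollup))
```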
Path parameters
- A comma-separated list of data streams and indices used to limit the request. This parameter has the following rules:
  - At least one data stream, index, or wildcard expression must be specified. This target can include a rollup or non-rollup index. For data streams, the stream's backing indices can only serve as non-rollup indices. Omitting the parameter or using _all is not permitted.
  - Multiple non-rollup indices may be specified.
  - Only one rollup index may be specified. If more than one is supplied, an exception occurs.
  - Wildcard expressions (*) may be used. If they match more than one rollup index, an exception occurs. However, you can use an expression to match multiple non-rollup indices or data streams.
Query parameters
- Indicates whether hits.total should be rendered as an integer or an object in the REST search response.
- Specifies whether aggregation and suggester names should be prefixed by their respective types in the response.
Body
Required
- Specifies aggregations.
- An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
- Must be zero if set, as rollups work on pre-aggregated data.
GET /sensor_rollup/_rollup_search
{
"size": 0,
"aggregations": {
"max_temperature": {
"max": {
"field": "temperature"
}
}
}
}
resp = client.rollup.rollup_search(
index="sensor_rollup",
size=0,
aggregations={
"max_temperature": {
"max": {
"field": "temperature"
}
}
},
)
const response = await client.rollup.rollupSearch({
index: "sensor_rollup",
size: 0,
aggregations: {
max_temperature: {
max: {
field: "temperature",
},
},
},
});
response = client.rollup.rollup_search(
index: "sensor_rollup",
body: {
"size": 0,
"aggregations": {
"max_temperature": {
"max": {
"field": "temperature"
}
}
}
}
)
$resp = $client->rollup()->rollupSearch([
"index" => "sensor_rollup",
"body" => [
"size" => 0,
"aggregations" => [
"max_temperature" => [
"max" => [
"field" => "temperature",
],
],
],
],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"size":0,"aggregations":{"max_temperature":{"max":{"field":"temperature"}}}}' "$ELASTICSEARCH_URL/sensor_rollup/_rollup_search"
client.rollup().rollupSearch(r -> r
.aggregations("max_temperature", a -> a
.max(m -> m
.field("temperature")
)
)
.index("sensor_rollup")
.size(0)
);
{
"size": 0,
"aggregations": {
"max_temperature": {
"max": {
"field": "temperature"
}
}
}
}
{
"took" : 102,
"timed_out" : false,
"terminated_early" : false,
"_shards" : {} ,
"hits" : {
"total" : {
"value": 0,
"relation": "eq"
},
"max_score" : 0.0,
"hits" : [ ]
},
"aggregations" : {
"max_temperature" : {
"value" : 202.0
}
}
}
Script
Use the script support APIs to get a list of supported script contexts and languages. Use the stored script APIs to manage stored scripts and search templates.
GET _script_context
resp = client.get_script_context()
const response = await client.getScriptContext();
response = client.get_script_context
$resp = $client->getScriptContext();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_script_context"
client.getScriptContext();
Run a script
Technical preview; Added in 6.3.0
All methods and paths for this operation:
Runs a script and returns a result. Use this API to build and test scripts, such as when defining a script for a runtime field. This API requires very few dependencies and is especially useful if you don't have permissions to write documents on a cluster.
The API uses several contexts, which control how scripts are run, what variables are available at runtime, and what the return type is.
Each context requires a script, but additional parameters depend on the context you're using for that script.
POST /_scripts/painless/_execute
{
"script": {
"source": "params.count / params.total",
"params": {
"count": 100.0,
"total": 1000.0
}
}
}
resp = client.scripts_painless_execute(
script={
"source": "params.count / params.total",
"params": {
"count": 100,
"total": 1000
}
},
)
const response = await client.scriptsPainlessExecute({
script: {
source: "params.count / params.total",
params: {
count: 100,
total: 1000,
},
},
});
response = client.scripts_painless_execute(
body: {
"script": {
"source": "params.count / params.total",
"params": {
"count": 100,
"total": 1000
}
}
}
)
$resp = $client->scriptsPainlessExecute([
"body" => [
"script" => [
"source" => "params.count / params.total",
"params" => [
"count" => 100,
"total" => 1000,
],
],
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"script":{"source":"params.count / params.total","params":{"count":100,"total":1000}}}' "$ELASTICSEARCH_URL/_scripts/painless/_execute"
client.scriptsPainlessExecute(s -> s
.script(sc -> sc
.source(so -> so
.scriptString("params.count / params.total")
)
.params(Map.of("total", JsonData.fromJson("1000"),"count", JsonData.fromJson("100")))
)
);
{
"script": {
"source": "params.count / params.total",
"params": {
"count": 100.0,
"total": 1000.0
}
}
}
{
"script": {
"source": "doc['field'].value.length() <= params.max_length",
"params": {
"max_length": 4
}
},
"context": "filter",
"context_setup": {
"index": "my-index-000001",
"document": {
"field": "four"
}
}
}
{
"script": {
"source": "doc['rank'].value / params.max_rank",
"params": {
"max_rank": 5.0
}
},
"context": "score",
"context_setup": {
"index": "my-index-000001",
"document": {
"rank": 4
}
}
}
{
"result": "0.1"
}
{
"result": true
}
{
"result": 0.8
}
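The request examples above differ only in their context-specific fields: every context requires a script, while the filter and score contexts additionally need a context and a context_setup. As a hedged sketch of how these pieces fit together (the painless_execute_body helper is hypothetical, not part of any Elasticsearch client):

```python
def painless_execute_body(source, params=None, context=None, context_setup=None):
    # Every context requires a script; filter and score contexts additionally
    # need a "context" name and a "context_setup" with an index and a sample document.
    body = {"script": {"source": source, **({"params": params} if params else {})}}
    if context:
        body["context"] = context
        body["context_setup"] = context_setup or {}
    return body

# Rebuild the filter-context request from the examples above:
filter_req = painless_execute_body(
    "doc['field'].value.length() <= params.max_length",
    params={"max_length": 4},
    context="filter",
    context_setup={"index": "my-index-000001", "document": {"field": "four"}},
)
```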
Mount a snapshot
Generally available; Added in 7.10.0
Path parameters
-
The name of the repository containing the snapshot of the index to mount.
-
The name of the snapshot of the index to mount.
Query parameters
-
The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never time out, set it to -1. Values are -1 or 0.
-
If true, the request blocks until the operation is complete.
-
The mount option for the searchable snapshot index.
POST /_snapshot/my_repository/my_snapshot/_mount?wait_for_completion=true
{
"index": "my_docs",
"renamed_index": "docs",
"index_settings": {
"index.number_of_replicas": 0
},
"ignore_index_settings": [ "index.refresh_interval" ]
}
resp = client.searchable_snapshots.mount(
repository="my_repository",
snapshot="my_snapshot",
wait_for_completion=True,
index="my_docs",
renamed_index="docs",
index_settings={
"index.number_of_replicas": 0
},
ignore_index_settings=[
"index.refresh_interval"
],
)
const response = await client.searchableSnapshots.mount({
repository: "my_repository",
snapshot: "my_snapshot",
wait_for_completion: "true",
index: "my_docs",
renamed_index: "docs",
index_settings: {
"index.number_of_replicas": 0,
},
ignore_index_settings: ["index.refresh_interval"],
});
response = client.searchable_snapshots.mount(
repository: "my_repository",
snapshot: "my_snapshot",
wait_for_completion: "true",
body: {
"index": "my_docs",
"renamed_index": "docs",
"index_settings": {
"index.number_of_replicas": 0
},
"ignore_index_settings": [
"index.refresh_interval"
]
}
)
$resp = $client->searchableSnapshots()->mount([
"repository" => "my_repository",
"snapshot" => "my_snapshot",
"wait_for_completion" => "true",
"body" => [
"index" => "my_docs",
"renamed_index" => "docs",
"index_settings" => [
"index.number_of_replicas" => 0,
],
"ignore_index_settings" => array(
"index.refresh_interval",
),
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index":"my_docs","renamed_index":"docs","index_settings":{"index.number_of_replicas":0},"ignore_index_settings":["index.refresh_interval"]}' "$ELASTICSEARCH_URL/_snapshot/my_repository/my_snapshot/_mount?wait_for_completion=true"
client.searchableSnapshots().mount(m -> m
.ignoreIndexSettings("index.refresh_interval")
.index("my_docs")
.indexSettings("index.number_of_replicas", JsonData.fromJson("0"))
.renamedIndex("docs")
.repository("my_repository")
.snapshot("my_snapshot")
.waitForCompletion(true)
);
{
"index": "my_docs",
"renamed_index": "docs",
"index_settings": {
"index.number_of_replicas": 0
},
"ignore_index_settings": [ "index.refresh_interval" ]
}
Delete application privileges
Generally available; Added in 6.4.0
To use this API, you must have one of the following privileges:
- The manage_security cluster privilege (or a greater privilege such as all).
- The "Manage Application Privileges" global privilege for the application being referenced in the request.
Required authorization
- Cluster privileges:
manage_security
Path parameters
-
The name of the application. Application privileges are always associated with exactly one application.
-
The name of the privilege.
DELETE /_security/privilege/myapp/read
resp = client.security.delete_privileges(
application="myapp",
name="read",
)
const response = await client.security.deletePrivileges({
application: "myapp",
name: "read",
});
response = client.security.delete_privileges(
application: "myapp",
name: "read"
)
$resp = $client->security()->deletePrivileges([
"application" => "myapp",
"name" => "read",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/privilege/myapp/read"
client.security().deletePrivileges(d -> d
.application("myapp")
.name("read")
);
{
"myapp": {
"read": {
"found" : true
}
}
}
Delete role mappings
Generally available; Added in 5.5.0
Role mappings define which roles are assigned to each user. The role mapping APIs are generally the preferred way to manage role mappings rather than using role mapping files. The delete role mappings API cannot remove role mappings that are defined in role mapping files.
Required authorization
- Cluster privileges:
manage_security
Path parameters
-
The distinct name that identifies the role mapping. The name is used solely as an identifier to facilitate interaction via the API; it does not affect the behavior of the mapping in any way.
DELETE /_security/role_mapping/mapping1
resp = client.security.delete_role_mapping(
name="mapping1",
)
const response = await client.security.deleteRoleMapping({
name: "mapping1",
});
response = client.security.delete_role_mapping(
name: "mapping1"
)
$resp = $client->security()->deleteRoleMapping([
"name" => "mapping1",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/role_mapping/mapping1"
client.security().deleteRoleMapping(d -> d
.name("mapping1")
);
{
"found" : true
}
Enroll Kibana
Generally available; Added in 8.0.0
Enable a Kibana instance to configure itself for communication with a secured Elasticsearch cluster.
NOTE: This API is currently intended for internal use only by Kibana. Kibana uses this API internally to configure itself for communications with an Elasticsearch cluster that already has security features enabled.
GET /_security/enroll/kibana
resp = client.security.enroll_kibana()
const response = await client.security.enrollKibana();
response = client.security.enroll_kibana
$resp = $client->security()->enrollKibana();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/enroll/kibana"
client.security().enrollKibana();
{
"token" : {
"name" : "enroll-process-token-1629123923000",
"value": "AAEAAWVsYXN0aWM...vZmxlZXQtc2VydmVyL3Rva2VuMTo3TFdaSDZ"
},
"http_ca" : "MIIJlAIBAzVoGCSqGSIb3...vsDfsA3UZBAjEPfhubpQysAICAA="
}
Get user privileges
Generally available; Added in 6.5.0
Get the security privileges for the logged-in user. All users can use this API, but only to determine their own privileges. To check the privileges of other users, you must use the run as feature. To check whether a user has a specific list of privileges, use the has privileges API.
GET /_security/user/_privileges
resp = client.security.get_user_privileges()
const response = await client.security.getUserPrivileges();
response = client.security.get_user_privileges
$resp = $client->security()->getUserPrivileges();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/user/_privileges"
client.security().getUserPrivileges(g -> g);
{
"cluster" : [
"all"
],
"global" : [ ],
"indices" : [
{
"names" : [
"*"
],
"privileges" : [
"all"
],
"allow_restricted_indices" : true
}
],
"applications" : [
{
"application" : "*",
"privileges" : [
"*"
],
"resources" : [
"*"
]
}
],
"run_as" : [
"*"
]
}
Grant an API key
Generally available; Added in 7.9.0
Create an API key on behalf of another user. This API is similar to the create API keys API; however, it creates the API key for a user other than the one that runs the API. The caller must have authentication credentials for the user on whose behalf the API key will be created. It is not possible to use this API to create an API key without that user's credentials. The supported user authentication credential types are:
- username and password
- Elasticsearch access tokens
- JWTs
The user for whom the authentication credentials are provided can optionally "run as" (impersonate) another user. In this case, the API key will be created on behalf of the impersonated user.
This API is intended to be used by applications that need to create and manage API keys for end users, but cannot guarantee that those users have permission to create API keys on their own behalf. The API keys are created by the Elasticsearch API key service, which is automatically enabled.
A successful grant API key API call returns a JSON structure that contains the API key, its unique id, and its name. If applicable, it also returns expiration information for the API key in milliseconds.
By default, API keys never expire. You can specify expiration information when you create the API keys.
Required authorization
- Cluster privileges:
grant_api_key
Query parameters
-
If 'true', Elasticsearch refreshes the affected shards to make this operation visible to search. If 'wait_for', it waits for a refresh to make this operation visible to search. If 'false', nothing is done with refreshes.
Values are true, false, or wait_for.
POST /_security/api_key/grant
{
"grant_type": "password",
"username" : "test_admin",
"password" : "x-pack-test-password",
"api_key" : {
"name": "my-api-key",
"expiration": "1d",
"role_descriptors": {
"role-a": {
"cluster": ["all"],
"indices": [
{
"names": ["index-a*"],
"privileges": ["read"]
}
]
},
"role-b": {
"cluster": ["all"],
"indices": [
{
"names": ["index-b*"],
"privileges": ["all"]
}
]
}
},
"metadata": {
"application": "my-application",
"environment": {
"level": 1,
"trusted": true,
"tags": ["dev", "staging"]
}
}
}
}
resp = client.security.grant_api_key(
grant_type="password",
username="test_admin",
password="x-pack-test-password",
api_key={
"name": "my-api-key",
"expiration": "1d",
"role_descriptors": {
"role-a": {
"cluster": [
"all"
],
"indices": [
{
"names": [
"index-a*"
],
"privileges": [
"read"
]
}
]
},
"role-b": {
"cluster": [
"all"
],
"indices": [
{
"names": [
"index-b*"
],
"privileges": [
"all"
]
}
]
}
},
"metadata": {
"application": "my-application",
"environment": {
"level": 1,
"trusted": True,
"tags": [
"dev",
"staging"
]
}
}
},
)
const response = await client.security.grantApiKey({
grant_type: "password",
username: "test_admin",
password: "x-pack-test-password",
api_key: {
name: "my-api-key",
expiration: "1d",
role_descriptors: {
"role-a": {
cluster: ["all"],
indices: [
{
names: ["index-a*"],
privileges: ["read"],
},
],
},
"role-b": {
cluster: ["all"],
indices: [
{
names: ["index-b*"],
privileges: ["all"],
},
],
},
},
metadata: {
application: "my-application",
environment: {
level: 1,
trusted: true,
tags: ["dev", "staging"],
},
},
},
});
response = client.security.grant_api_key(
body: {
"grant_type": "password",
"username": "test_admin",
"password": "x-pack-test-password",
"api_key": {
"name": "my-api-key",
"expiration": "1d",
"role_descriptors": {
"role-a": {
"cluster": [
"all"
],
"indices": [
{
"names": [
"index-a*"
],
"privileges": [
"read"
]
}
]
},
"role-b": {
"cluster": [
"all"
],
"indices": [
{
"names": [
"index-b*"
],
"privileges": [
"all"
]
}
]
}
},
"metadata": {
"application": "my-application",
"environment": {
"level": 1,
"trusted": true,
"tags": [
"dev",
"staging"
]
}
}
}
}
)
$resp = $client->security()->grantApiKey([
"body" => [
"grant_type" => "password",
"username" => "test_admin",
"password" => "x-pack-test-password",
"api_key" => [
"name" => "my-api-key",
"expiration" => "1d",
"role_descriptors" => [
"role-a" => [
"cluster" => array(
"all",
),
"indices" => array(
[
"names" => array(
"index-a*",
),
"privileges" => array(
"read",
),
],
),
],
"role-b" => [
"cluster" => array(
"all",
),
"indices" => array(
[
"names" => array(
"index-b*",
),
"privileges" => array(
"all",
),
],
),
],
],
"metadata" => [
"application" => "my-application",
"environment" => [
"level" => 1,
"trusted" => true,
"tags" => array(
"dev",
"staging",
),
],
],
],
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"grant_type":"password","username":"test_admin","password":"x-pack-test-password","api_key":{"name":"my-api-key","expiration":"1d","role_descriptors":{"role-a":{"cluster":["all"],"indices":[{"names":["index-a*"],"privileges":["read"]}]},"role-b":{"cluster":["all"],"indices":[{"names":["index-b*"],"privileges":["all"]}]}},"metadata":{"application":"my-application","environment":{"level":1,"trusted":true,"tags":["dev","staging"]}}}}' "$ELASTICSEARCH_URL/_security/api_key/grant"
client.security().grantApiKey(g -> g
.apiKey(a -> a
.name("my-api-key")
.expiration(e -> e
.time("1d")
)
.roleDescriptors(Map.of("role-b", RoleDescriptor.of(r -> r
.cluster("all")
.indices(i -> i
.names("index-b*")
.privileges("all")
)),"role-a", RoleDescriptor.of(r -> r
.cluster("all")
.indices(i -> i
.names("index-a*")
.privileges("read")
))))
.metadata(Map.of("environment", JsonData.fromJson("{\"level\":1,\"trusted\":true,\"tags\":[\"dev\",\"staging\"]}"),"application", JsonData.fromJson("\"my-application\"")))
)
.grantType(ApiKeyGrantType.Password)
.password("x-pack-test-password")
.username("test_admin")
);
{
"grant_type": "password",
"username" : "test_admin",
"password" : "x-pack-test-password",
"api_key" : {
"name": "my-api-key",
"expiration": "1d",
"role_descriptors": {
"role-a": {
"cluster": ["all"],
"indices": [
{
"names": ["index-a*"],
"privileges": ["read"]
}
]
},
"role-b": {
"cluster": ["all"],
"indices": [
{
"names": ["index-b*"],
"privileges": ["all"]
}
]
}
},
"metadata": {
"application": "my-application",
"environment": {
"level": 1,
"trusted": true,
"tags": ["dev", "staging"]
}
}
}
}
{
"grant_type": "password",
"username" : "test_admin",
"password" : "x-pack-test-password",
"run_as": "test_user",
"api_key" : {
"name": "another-api-key"
}
}
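The grant response mirrors the create API key response: it contains the new key's id and api_key values, which clients combine as base64(id:api_key) to authenticate subsequent requests. A minimal sketch of that step, assuming the api_key_header helper name and the placeholder credential values (both illustrative, not from the API):

```python
import base64

def api_key_header(response: dict) -> str:
    # Clients authenticate with "ApiKey " + base64(id:api_key),
    # using the "id" and "api_key" fields from the grant response.
    token = f"{response['id']}:{response['api_key']}".encode("utf-8")
    return "ApiKey " + base64.b64encode(token).decode("ascii")

# Placeholder values, not real credentials:
header = api_key_header({"id": "VuaCfGcBCdbkQm-e5aOx", "api_key": "ui2lp2axTNmsyakw9tvNnw"})
```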
Prepare OpenID connect authentication
Generally available
Create an OAuth 2.0 authentication request as a URL string based on the configuration of the OpenID Connect authentication realm in Elasticsearch.
The response of this API is a URL pointing to the Authorization Endpoint of the configured OpenID Connect Provider, which can be used to redirect the user's browser in order to continue the authentication process.
Elasticsearch exposes all the necessary OpenID Connect related functionality with the OpenID Connect APIs. These APIs are used internally by Kibana to provide OpenID Connect based authentication, but they can also be used by other custom web applications or clients.
Body
Required
-
In the case of a third party initiated single sign on, this is the issuer identifier for the OP that the RP is to send the authentication request to. It cannot be specified when realm is specified. One of realm or iss is required.
-
In the case of a third party initiated single sign on, it is a string value that is included in the authentication request as the login_hint parameter. This parameter is not valid when realm is specified.
-
The value used to associate a client session with an ID token and to mitigate replay attacks. If the caller of the API does not provide a value, Elasticsearch will generate one with sufficient entropy and return it in the response.
-
The name of the OpenID Connect realm in Elasticsearch the configuration of which should be used in order to generate the authentication request. It cannot be specified when iss is specified. One of realm or iss is required.
-
The value used to maintain state between the authentication request and the response, typically used as a Cross-Site Request Forgery mitigation. If the caller of the API does not provide a value, Elasticsearch will generate one with sufficient entropy and return it in the response.
POST /_security/oidc/prepare
{
"realm" : "oidc1"
}
resp = client.security.oidc_prepare_authentication(
realm="oidc1",
)
const response = await client.security.oidcPrepareAuthentication({
realm: "oidc1",
});
response = client.security.oidc_prepare_authentication(
body: {
"realm": "oidc1"
}
)
$resp = $client->security()->oidcPrepareAuthentication([
"body" => [
"realm" => "oidc1",
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"realm":"oidc1"}' "$ELASTICSEARCH_URL/_security/oidc/prepare"
client.security().oidcPrepareAuthentication(o -> o
.realm("oidc1")
);
{
"realm" : "oidc1"
}
{
"realm" : "oidc1",
"state" : "lGYK0EcSLjqH6pkT5EVZjC6eIW5YCGgywj2sxROO",
"nonce" : "zOBXLJGUooRrbLbQk5YCcyC8AXw3iloynvluYhZ5"
}
{
"iss" : "http://127.0.0.1:8080",
"login_hint": "this_is_an_opaque_string"
}
{
"redirect" : "http://127.0.0.1:8080/c2id-login?scope=openid&response_type=id_token&redirect_uri=https%3A%2F%2Ffanyv88.com%3A443%2Fhttps%2Fmy.fantastic.rp%2Fcb&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I&nonce=WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM&client_id=elasticsearch-rp",
"state" : "4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I",
"nonce" : "WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM",
"realm" : "oidc1"
}
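If the caller supplies state and nonce itself instead of letting Elasticsearch generate them, each value must be unguessable and used only once, as described above. A sketch of generating them client-side with the standard library (the oidc_prepare_body helper is hypothetical):

```python
import secrets

def oidc_prepare_body(realm: str) -> dict:
    # state and nonce must each be single-use values with sufficient entropy;
    # secrets.token_urlsafe provides a cryptographically strong random string.
    return {
        "realm": realm,
        "state": secrets.token_urlsafe(30),
        "nonce": secrets.token_urlsafe(30),
    }

body = oidc_prepare_body("oidc1")
```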
Invalidate SAML
Generally available; Added in 7.5.0
Submit a SAML LogoutRequest message to Elasticsearch for consumption.
NOTE: This API is intended for use by custom web applications other than Kibana. If you are using Kibana, refer to the documentation for configuring SAML single-sign-on on the Elastic Stack.
The logout request comes from the SAML IdP during an IdP initiated Single Logout.
The custom web application can use this API to have Elasticsearch process the LogoutRequest.
After successful validation of the request, Elasticsearch invalidates the access token and refresh token that corresponds to that specific SAML principal and provides a URL that contains a SAML LogoutResponse message.
Thus the user can be redirected back to their IdP.
Body
Required
-
The Assertion Consumer Service URL that matches the one of the SAML realm in Elasticsearch that should be used. You must specify either this parameter or the realm parameter.
-
The query part of the URL that the user was redirected to by the SAML IdP to initiate the Single Logout. This query should include a single parameter named SAMLRequest that contains a SAML logout request that is deflated and Base64 encoded. If the SAML IdP has signed the logout request, the URL should include two extra parameters named SigAlg and Signature that contain the algorithm used for the signature and the signature value itself. In order for Elasticsearch to be able to verify the IdP's signature, the value of the query_string field must be an exact match to the string provided by the browser. The client application must not attempt to parse or process the string in any way.
-
The name of the SAML realm in the Elasticsearch configuration. You must specify either this parameter or the acs parameter.
POST /_security/saml/invalidate
{
"query_string" : "SAMLRequest=nZFda4MwFIb%2FiuS%2BmviRpqFaClKQdbvo2g12M2KMraCJ9cRR9utnW4Wyi13sMie873MeznJ1aWrnS3VQGR0j4mLkKC1NUeljjA77zYyhVbIE0dR%2By7fmaHq7U%2BdegXWGpAZ%2B%2F4pR32luBFTAtWgUcCv56%2Fp5y30X87Yz1khTIycdgpUW9kY7WdsC9zxoXTvMvWuVV98YyMnSGH2SYE5pwALBIr9QKiwDGpW0oGVUznGeMyJZKFkQ4jBf5HnhUymjIhzCAL3KNFihbYx8TBYzzGaY7EnIyZwHzCWMfiDnbRIftkSjJr%2BFu0e9v%2B0EgOquRiiZjKpiVFp6j50T4WXoyNJ%2FEWC9fdqc1t%2F1%2B2F3aUpjzhPiXpqMz1%2FHSn4A&SigAlg=https%3A%2F%2Ffanyv88.com%3A443%2Fhttp%2Fwww.w3.org%2F2001%2F04%2Fxmldsig-more%23rsa-sha256&Signature=MsAYz2NFdovMG2mXf6TSpu5vlQQyEJAg%2B4KCwBqJTmrb3yGXKUtIgvjqf88eCAK32v3eN8vupjPC8LglYmke1ZnjK0%2FKxzkvSjTVA7mMQe2AQdKbkyC038zzRq%2FYHcjFDE%2Bz0qISwSHZY2NyLePmwU7SexEXnIz37jKC6NMEhus%3D",
"realm" : "saml1"
}
resp = client.security.saml_invalidate(
query_string="SAMLRequest=nZFda4MwFIb%2FiuS%2BmviRpqFaClKQdbvo2g12M2KMraCJ9cRR9utnW4Wyi13sMie873MeznJ1aWrnS3VQGR0j4mLkKC1NUeljjA77zYyhVbIE0dR%2By7fmaHq7U%2BdegXWGpAZ%2B%2F4pR32luBFTAtWgUcCv56%2Fp5y30X87Yz1khTIycdgpUW9kY7WdsC9zxoXTvMvWuVV98YyMnSGH2SYE5pwALBIr9QKiwDGpW0oGVUznGeMyJZKFkQ4jBf5HnhUymjIhzCAL3KNFihbYx8TBYzzGaY7EnIyZwHzCWMfiDnbRIftkSjJr%2BFu0e9v%2B0EgOquRiiZjKpiVFp6j50T4WXoyNJ%2FEWC9fdqc1t%2F1%2B2F3aUpjzhPiXpqMz1%2FHSn4A&SigAlg=https%3A%2F%2Ffanyv88.com%3A443%2Fhttp%2Fwww.w3.org%2F2001%2F04%2Fxmldsig-more%23rsa-sha256&Signature=MsAYz2NFdovMG2mXf6TSpu5vlQQyEJAg%2B4KCwBqJTmrb3yGXKUtIgvjqf88eCAK32v3eN8vupjPC8LglYmke1ZnjK0%2FKxzkvSjTVA7mMQe2AQdKbkyC038zzRq%2FYHcjFDE%2Bz0qISwSHZY2NyLePmwU7SexEXnIz37jKC6NMEhus%3D",
realm="saml1",
)
const response = await client.security.samlInvalidate({
query_string:
"SAMLRequest=nZFda4MwFIb%2FiuS%2BmviRpqFaClKQdbvo2g12M2KMraCJ9cRR9utnW4Wyi13sMie873MeznJ1aWrnS3VQGR0j4mLkKC1NUeljjA77zYyhVbIE0dR%2By7fmaHq7U%2BdegXWGpAZ%2B%2F4pR32luBFTAtWgUcCv56%2Fp5y30X87Yz1khTIycdgpUW9kY7WdsC9zxoXTvMvWuVV98YyMnSGH2SYE5pwALBIr9QKiwDGpW0oGVUznGeMyJZKFkQ4jBf5HnhUymjIhzCAL3KNFihbYx8TBYzzGaY7EnIyZwHzCWMfiDnbRIftkSjJr%2BFu0e9v%2B0EgOquRiiZjKpiVFp6j50T4WXoyNJ%2FEWC9fdqc1t%2F1%2B2F3aUpjzhPiXpqMz1%2FHSn4A&SigAlg=https%3A%2F%2Ffanyv88.com%3A443%2Fhttp%2Fwww.w3.org%2F2001%2F04%2Fxmldsig-more%23rsa-sha256&Signature=MsAYz2NFdovMG2mXf6TSpu5vlQQyEJAg%2B4KCwBqJTmrb3yGXKUtIgvjqf88eCAK32v3eN8vupjPC8LglYmke1ZnjK0%2FKxzkvSjTVA7mMQe2AQdKbkyC038zzRq%2FYHcjFDE%2Bz0qISwSHZY2NyLePmwU7SexEXnIz37jKC6NMEhus%3D",
realm: "saml1",
});
response = client.security.saml_invalidate(
body: {
"query_string": "SAMLRequest=nZFda4MwFIb%2FiuS%2BmviRpqFaClKQdbvo2g12M2KMraCJ9cRR9utnW4Wyi13sMie873MeznJ1aWrnS3VQGR0j4mLkKC1NUeljjA77zYyhVbIE0dR%2By7fmaHq7U%2BdegXWGpAZ%2B%2F4pR32luBFTAtWgUcCv56%2Fp5y30X87Yz1khTIycdgpUW9kY7WdsC9zxoXTvMvWuVV98YyMnSGH2SYE5pwALBIr9QKiwDGpW0oGVUznGeMyJZKFkQ4jBf5HnhUymjIhzCAL3KNFihbYx8TBYzzGaY7EnIyZwHzCWMfiDnbRIftkSjJr%2BFu0e9v%2B0EgOquRiiZjKpiVFp6j50T4WXoyNJ%2FEWC9fdqc1t%2F1%2B2F3aUpjzhPiXpqMz1%2FHSn4A&SigAlg=https%3A%2F%2Ffanyv88.com%3A443%2Fhttp%2Fwww.w3.org%2F2001%2F04%2Fxmldsig-more%23rsa-sha256&Signature=MsAYz2NFdovMG2mXf6TSpu5vlQQyEJAg%2B4KCwBqJTmrb3yGXKUtIgvjqf88eCAK32v3eN8vupjPC8LglYmke1ZnjK0%2FKxzkvSjTVA7mMQe2AQdKbkyC038zzRq%2FYHcjFDE%2Bz0qISwSHZY2NyLePmwU7SexEXnIz37jKC6NMEhus%3D",
"realm": "saml1"
}
)
$resp = $client->security()->samlInvalidate([
"body" => [
"query_string" => "SAMLRequest=nZFda4MwFIb%2FiuS%2BmviRpqFaClKQdbvo2g12M2KMraCJ9cRR9utnW4Wyi13sMie873MeznJ1aWrnS3VQGR0j4mLkKC1NUeljjA77zYyhVbIE0dR%2By7fmaHq7U%2BdegXWGpAZ%2B%2F4pR32luBFTAtWgUcCv56%2Fp5y30X87Yz1khTIycdgpUW9kY7WdsC9zxoXTvMvWuVV98YyMnSGH2SYE5pwALBIr9QKiwDGpW0oGVUznGeMyJZKFkQ4jBf5HnhUymjIhzCAL3KNFihbYx8TBYzzGaY7EnIyZwHzCWMfiDnbRIftkSjJr%2BFu0e9v%2B0EgOquRiiZjKpiVFp6j50T4WXoyNJ%2FEWC9fdqc1t%2F1%2B2F3aUpjzhPiXpqMz1%2FHSn4A&SigAlg=https%3A%2F%2Ffanyv88.com%3A443%2Fhttp%2Fwww.w3.org%2F2001%2F04%2Fxmldsig-more%23rsa-sha256&Signature=MsAYz2NFdovMG2mXf6TSpu5vlQQyEJAg%2B4KCwBqJTmrb3yGXKUtIgvjqf88eCAK32v3eN8vupjPC8LglYmke1ZnjK0%2FKxzkvSjTVA7mMQe2AQdKbkyC038zzRq%2FYHcjFDE%2Bz0qISwSHZY2NyLePmwU7SexEXnIz37jKC6NMEhus%3D",
"realm" => "saml1",
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query_string":"SAMLRequest=nZFda4MwFIb%2FiuS%2BmviRpqFaClKQdbvo2g12M2KMraCJ9cRR9utnW4Wyi13sMie873MeznJ1aWrnS3VQGR0j4mLkKC1NUeljjA77zYyhVbIE0dR%2By7fmaHq7U%2BdegXWGpAZ%2B%2F4pR32luBFTAtWgUcCv56%2Fp5y30X87Yz1khTIycdgpUW9kY7WdsC9zxoXTvMvWuVV98YyMnSGH2SYE5pwALBIr9QKiwDGpW0oGVUznGeMyJZKFkQ4jBf5HnhUymjIhzCAL3KNFihbYx8TBYzzGaY7EnIyZwHzCWMfiDnbRIftkSjJr%2BFu0e9v%2B0EgOquRiiZjKpiVFp6j50T4WXoyNJ%2FEWC9fdqc1t%2F1%2B2F3aUpjzhPiXpqMz1%2FHSn4A&SigAlg=https%3A%2F%2Ffanyv88.com%3A443%2Fhttp%2Fwww.w3.org%2F2001%2F04%2Fxmldsig-more%23rsa-sha256&Signature=MsAYz2NFdovMG2mXf6TSpu5vlQQyEJAg%2B4KCwBqJTmrb3yGXKUtIgvjqf88eCAK32v3eN8vupjPC8LglYmke1ZnjK0%2FKxzkvSjTVA7mMQe2AQdKbkyC038zzRq%2FYHcjFDE%2Bz0qISwSHZY2NyLePmwU7SexEXnIz37jKC6NMEhus%3D","realm":"saml1"}' "$ELASTICSEARCH_URL/_security/saml/invalidate"
client.security().samlInvalidate(s -> s
.queryString("SAMLRequest=nZFda4MwFIb%2FiuS%2BmviRpqFaClKQdbvo2g12M2KMraCJ9cRR9utnW4Wyi13sMie873MeznJ1aWrnS3VQGR0j4mLkKC1NUeljjA77zYyhVbIE0dR%2By7fmaHq7U%2BdegXWGpAZ%2B%2F4pR32luBFTAtWgUcCv56%2Fp5y30X87Yz1khTIycdgpUW9kY7WdsC9zxoXTvMvWuVV98YyMnSGH2SYE5pwALBIr9QKiwDGpW0oGVUznGeMyJZKFkQ4jBf5HnhUymjIhzCAL3KNFihbYx8TBYzzGaY7EnIyZwHzCWMfiDnbRIftkSjJr%2BFu0e9v%2B0EgOquRiiZjKpiVFp6j50T4WXoyNJ%2FEWC9fdqc1t%2F1%2B2F3aUpjzhPiXpqMz1%2FHSn4A&SigAlg=https%3A%2F%2Ffanyv88.com%3A443%2Fhttp%2Fwww.w3.org%2F2001%2F04%2Fxmldsig-more%23rsa-sha256&Signature=MsAYz2NFdovMG2mXf6TSpu5vlQQyEJAg%2B4KCwBqJTmrb3yGXKUtIgvjqf88eCAK32v3eN8vupjPC8LglYmke1ZnjK0%2FKxzkvSjTVA7mMQe2AQdKbkyC038zzRq%2FYHcjFDE%2Bz0qISwSHZY2NyLePmwU7SexEXnIz37jKC6NMEhus%3D")
.realm("saml1")
);
{
"query_string" : "SAMLRequest=nZFda4MwFIb%2FiuS%2BmviRpqFaClKQdbvo2g12M2KMraCJ9cRR9utnW4Wyi13sMie873MeznJ1aWrnS3VQGR0j4mLkKC1NUeljjA77zYyhVbIE0dR%2By7fmaHq7U%2BdegXWGpAZ%2B%2F4pR32luBFTAtWgUcCv56%2Fp5y30X87Yz1khTIycdgpUW9kY7WdsC9zxoXTvMvWuVV98YyMnSGH2SYE5pwALBIr9QKiwDGpW0oGVUznGeMyJZKFkQ4jBf5HnhUymjIhzCAL3KNFihbYx8TBYzzGaY7EnIyZwHzCWMfiDnbRIftkSjJr%2BFu0e9v%2B0EgOquRiiZjKpiVFp6j50T4WXoyNJ%2FEWC9fdqc1t%2F1%2B2F3aUpjzhPiXpqMz1%2FHSn4A&SigAlg=https%3A%2F%2Ffanyv88.com%3A443%2Fhttp%2Fwww.w3.org%2F2001%2F04%2Fxmldsig-more%23rsa-sha256&Signature=MsAYz2NFdovMG2mXf6TSpu5vlQQyEJAg%2B4KCwBqJTmrb3yGXKUtIgvjqf88eCAK32v3eN8vupjPC8LglYmke1ZnjK0%2FKxzkvSjTVA7mMQe2AQdKbkyC038zzRq%2FYHcjFDE%2Bz0qISwSHZY2NyLePmwU7SexEXnIz37jKC6NMEhus%3D",
"realm" : "saml1"
}
{
"redirect" : "https://my-idp.org/logout/SAMLResponse=....",
"invalidated" : 2,
"realm" : "saml1"
}
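Because Elasticsearch verifies the IdP's signature over the raw query string, a web application should forward that string byte-for-byte rather than parse and rebuild it. A sketch using a WSGI-style environ dict (the saml_invalidate_body helper and the truncated placeholder value are illustrative):

```python
def saml_invalidate_body(environ: dict, realm: str) -> dict:
    # Forward the query string verbatim: re-encoding or reordering its
    # parameters would break the IdP's signature verification.
    return {"query_string": environ["QUERY_STRING"], "realm": realm}

# Placeholder query string standing in for a real deflated, Base64-encoded request:
body = saml_invalidate_body(
    {"QUERY_STRING": "SAMLRequest=placeholder&SigAlg=placeholder&Signature=placeholder"},
    realm="saml1",
)
```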
Get the snapshot status
Generally available; Added in 7.8.0
All methods and paths for this operation:
Get a detailed description of the current state for each shard participating in the snapshot. Note that this API should be used only to obtain detailed shard-level information for ongoing snapshots. If this detail is not needed or you want to obtain information about one or more existing snapshots, use the get snapshot API.
WARNING: Using the API to return the status of any snapshots other than currently running snapshots can be expensive. The API requires a read from the repository for each shard in each snapshot. For example, if you have 100 snapshots with 1,000 shards each, an API request that includes all snapshots will require 100,000 reads (100 snapshots x 1,000 shards).
Depending on the latency of your storage, such requests can take an extremely long time to return results. These requests can also tax machine resources and, when using cloud storage, incur high processing costs.
Required authorization
- Cluster privileges:
monitor_snapshot
GET _snapshot/my_repository/snapshot_2/_status
resp = client.snapshot.status(
repository="my_repository",
snapshot="snapshot_2",
)
const response = await client.snapshot.status({
repository: "my_repository",
snapshot: "snapshot_2",
});
response = client.snapshot.status(
repository: "my_repository",
snapshot: "snapshot_2"
)
$resp = $client->snapshot()->status([
"repository" => "my_repository",
"snapshot" => "snapshot_2",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_snapshot/my_repository/snapshot_2/_status"
client.snapshot().status(s -> s
.repository("my_repository")
.snapshot("snapshot_2")
);
{
"snapshots" : [
{
"snapshot" : "snapshot_2",
"repository" : "my_repository",
"uuid" : "lNeQD1SvTQCqqJUMQSwmGg",
"state" : "SUCCESS",
"include_global_state" : false,
"shards_stats" : {
"initializing" : 0,
"started" : 0,
"finalizing" : 0,
"done" : 1,
"failed" : 0,
"total" : 1
},
"stats" : {
"incremental" : {
"file_count" : 3,
"size_in_bytes" : 5969
},
"total" : {
"file_count" : 4,
"size_in_bytes" : 6024
},
"start_time_in_millis" : 1594829326691,
"time_in_millis" : 205
},
"indices" : {
"index_1" : {
"shards_stats" : {
"initializing" : 0,
"started" : 0,
"finalizing" : 0,
"done" : 1,
"failed" : 0,
"total" : 1
},
"stats" : {
"incremental" : {
"file_count" : 3,
"size_in_bytes" : 5969
},
"total" : {
"file_count" : 4,
"size_in_bytes" : 6024
},
"start_time_in_millis" : 1594829326896,
"time_in_millis" : 0
},
"shards" : {
"0" : {
"stage" : "DONE",
"stats" : {
"incremental" : {
"file_count" : 3,
"size_in_bytes" : 5969
},
"total" : {
"file_count" : 4,
"size_in_bytes" : 6024
},
"start_time_in_millis" : 1594829326896,
"time_in_millis" : 0
}
}
}
}
}
}
]
}
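The cost warning above can be made concrete: the API performs roughly one repository read per shard in each snapshot, so the totals multiply quickly. A back-of-the-envelope sketch (the helper name is illustrative):

```python
def estimated_status_reads(num_snapshots: int, shards_per_snapshot: int) -> int:
    # One repository read per shard in each snapshot covered by the request.
    return num_snapshots * shards_per_snapshot

# The example from the text: 100 snapshots with 1,000 shards each.
reads = estimated_status_reads(100, 1000)
```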
Verify a snapshot repository
Generally available; Added in 0.0.0
Check for common misconfigurations in a snapshot repository.
Required authorization
- Cluster privileges:
manage
POST _snapshot/my_unverified_backup/_verify
resp = client.snapshot.verify_repository(
name="my_unverified_backup",
)
const response = await client.snapshot.verifyRepository({
name: "my_unverified_backup",
});
response = client.snapshot.verify_repository(
repository: "my_unverified_backup"
)
$resp = $client->snapshot()->verifyRepository([
"repository" => "my_unverified_backup",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_snapshot/my_unverified_backup/_verify"
client.snapshot().verifyRepository(v -> v
.name("my_unverified_backup")
);
SQL
Elasticsearch's SQL APIs enable you to run SQL queries on Elasticsearch indices and data streams.
Delete an async SQL search
Generally available; Added in 7.15.0
Delete an async SQL search or a stored synchronous SQL search. If the search is still running, the API cancels it.
If the Elasticsearch security features are enabled, only the following users can use this API to delete a search:
- Users with the cancel_task cluster privilege.
- The user who first submitted the search.
Required authorization
- Cluster privileges:
cancel_task
DELETE _sql/async/delete/FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=
resp = client.sql.delete_async(
id="FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=",
)
const response = await client.sql.deleteAsync({
id: "FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=",
});
response = client.sql.delete_async(
id: "FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI="
)
$resp = $client->sql()->deleteAsync([
"id" => "FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_sql/async/delete/FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI="
client.sql().deleteAsync(d -> d
.id("FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=")
);
Get the async SQL search status
GET _sql/async/status/FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=
resp = client.sql.get_async_status(
id="FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=",
)
const response = await client.sql.getAsyncStatus({
id: "FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=",
});
response = client.sql.get_async_status(
id: "FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU="
)
$resp = $client->sql()->getAsyncStatus([
"id" => "FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_sql/async/status/FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU="
client.sql().getAsyncStatus(g -> g
.id("FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=")
);
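The async status response includes an is_running flag, which makes it natural to poll until the search completes. A hedged sketch (the wait_for_async_sql helper is illustrative; in production the get_status callable would be client.sql.get_async_status):

```python
import time

def wait_for_async_sql(get_status, search_id, poll_interval=0.5, timeout=60.0):
    # Poll the status callable until the async search stops running;
    # raises TimeoutError if it does not finish within `timeout` seconds.
    deadline = time.monotonic() + timeout
    while True:
        status = get_status(id=search_id)
        if not status.get("is_running", False):
            return status
        if time.monotonic() + poll_interval > deadline:
            raise TimeoutError(f"async SQL search {search_id} still running")
        time.sleep(poll_interval)

# Stubbed status callable standing in for client.sql.get_async_status:
final = wait_for_async_sql(lambda id: {"id": id, "is_running": False}, "my-search-id")
```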
Translate SQL into Elasticsearch queries
Body
Required
-
The maximum number of rows (or entries) to return in one response. Default value is 1000.
-
An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
-
The SQL query to run.
POST _sql/translate
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 10
}
resp = client.sql.translate(
query="SELECT * FROM library ORDER BY page_count DESC",
fetch_size=10,
)
const response = await client.sql.translate({
query: "SELECT * FROM library ORDER BY page_count DESC",
fetch_size: 10,
});
response = client.sql.translate(
body: {
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 10
}
)
$resp = $client->sql()->translate([
"body" => [
"query" => "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size" => 10,
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":"SELECT * FROM library ORDER BY page_count DESC","fetch_size":10}' "$ELASTICSEARCH_URL/_sql/translate"
client.sql().translate(t -> t
.fetchSize(10)
.query("SELECT * FROM library ORDER BY page_count DESC")
);
{
"query": "SELECT * FROM library ORDER BY page_count DESC",
"fetch_size": 10
}
GET _synonyms
resp = client.synonyms.get_synonyms_sets()
const response = await client.synonyms.getSynonymsSets();
response = client.synonyms.get_synonyms_sets
$resp = $client->synonyms()->getSynonymsSets();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_synonyms"
client.synonyms().getSynonymsSets(g -> g);
{
"count": 3,
"results": [
{
"synonyms_set": "ecommerce-synonyms",
"count": 2
},
{
"synonyms_set": "my-synonyms-set",
"count": 3
},
{
"synonyms_set": "new-ecommerce-synonyms",
"count": 1
}
]
}
Query parameters
- force: If this value is false, the transform must be stopped before it can be deleted. If true, the transform is deleted regardless of its current state.
- delete_dest_index: If this value is true, the destination index is deleted together with the transform. If false, the destination index is not deleted.
- timeout: Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
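As a sketch of how these query parameters combine on the request URL (a hypothetical helper, not part of any Elasticsearch client):

```python
from urllib.parse import urlencode

def delete_transform_url(base_url: str, transform_id: str,
                         force: bool = False, delete_dest_index: bool = False,
                         timeout: str = "30s") -> str:
    """Build the delete-transform URL with its query parameters."""
    params = urlencode({
        "force": str(force).lower(),
        "delete_dest_index": str(delete_dest_index).lower(),
        "timeout": timeout,
    })
    return f"{base_url}/_transform/{transform_id}?{params}"

# Delete a still-running transform together with its destination index:
url = delete_transform_url("http://localhost:9200", "ecommerce_transform",
                           force=True, delete_dest_index=True)
```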
DELETE _transform/ecommerce_transform
resp = client.transform.delete_transform(
transform_id="ecommerce_transform",
)
const response = await client.transform.deleteTransform({
transform_id: "ecommerce_transform",
});
response = client.transform.delete_transform(
transform_id: "ecommerce_transform"
)
$resp = $client->transform()->deleteTransform([
"transform_id" => "ecommerce_transform",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_transform/ecommerce_transform"
client.transform().deleteTransform(d -> d
.transformId("ecommerce_transform")
);
{
"acknowledged": true
}
GET /_xpack/usage
resp = client.xpack.usage()
const response = await client.xpack.usage();
response = client.xpack.usage
$resp = $client->xpack()->usage();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_xpack/usage"
client.xpack().usage(u -> u);
{
"security" : {
"available" : true,
"enabled" : true
},
"monitoring" : {
"available" : true,
"enabled" : true,
"collection_enabled" : false,
"enabled_exporters" : {
"local" : 1
}
},
"watcher" : {
"available" : true,
"enabled" : true,
"execution" : {
"actions" : {
"_all" : {
"total" : 0,
"total_time_in_ms" : 0
}
}
},
"watch" : {
"input" : {
"_all" : {
"total" : 0,
"active" : 0
}
},
"trigger" : {
"_all" : {
"total" : 0,
"active" : 0
}
}
},
"count" : {
"total" : 0,
"active" : 0
}
},
"graph" : {
"available" : true,
"enabled" : true
},
"ml" : {
"available" : true,
"enabled" : true,
"jobs" : {
"_all" : {
"count" : 0,
"detectors" : { },
"created_by" : { },
"model_size" : { },
"forecasts" : {
"total" : 0,
"forecasted_jobs" : 0
}
}
},
"datafeeds" : {
"_all" : {
"count" : 0
}
},
"data_frame_analytics_jobs" : {
"_all" : {
"count" : 0
},
"analysis_counts": { },
"memory_usage": {
"peak_usage_bytes": {
"min": 0.0,
"max": 0.0,
"avg": 0.0,
"total": 0.0
}
}
},
"inference" : {
"ingest_processors" : {
"_all" : {
"num_docs_processed" : {
"max" : 0,
"sum" : 0,
"min" : 0
},
"pipelines" : {
"count" : 0
},
"num_failures" : {
"max" : 0,
"sum" : 0,
"min" : 0
},
"time_ms" : {
"max" : 0,
"sum" : 0,
"min" : 0
}
}
},
"trained_models" : {
"_all" : {
"count": 1
},
"count": {
"total": 1,
"prepackaged": 1,
"other": 0
},
"model_size_bytes": {
"min": 0.0,
"max": 0.0,
"avg": 0.0,
"total": 0.0
},
"estimated_operations": {
"min": 0.0,
"max": 0.0,
"avg": 0.0,
"total": 0.0
}
},
"deployments": {
"count": 0,
"inference_counts": {
"total": 0.0,
"min": 0.0,
"avg": 0.0,
"max": 0.0
},
"stats_by_model": [],
"model_sizes_bytes": {
"total": 0.0,
"min": 0.0,
"avg": 0.0,
"max": 0.0
},
"time_ms": {
"avg": 0.0
}
}
},
"node_count" : 1,
"memory": {
"anomaly_detectors_memory_bytes": 0,
"data_frame_analytics_memory_bytes": 0,
"pytorch_inference_memory_bytes": 0,
"total_used_memory_bytes": 0
}
},
"inference": {
"available" : true,
"enabled" : true,
"models" : [ ]
},
"logstash" : {
"available" : true,
"enabled" : true
},
"eql" : {
"available" : true,
"enabled" : true
},
"esql" : {
"available" : true,
"enabled" : true,
"features" : {
"eval" : 0,
"stats" : 0,
"dissect": 0,
"grok" : 0,
"limit" : 0,
"where" : 0,
"sort" : 0,
"drop" : 0,
"show" : 0,
"rename" : 0,
"mv_expand" : 0,
"keep" : 0,
"enrich" : 0,
"from" : 0,
"row" : 0
},
"queries" : {
"rest" : {
"total" : 0,
"failed" : 0
},
"kibana" : {
"total" : 0,
"failed" : 0
},
"_all" : {
"total" : 0,
"failed" : 0
}
}
},
"sql" : {
"available" : true,
"enabled" : true,
"features" : {
"having" : 0,
"subselect" : 0,
"limit" : 0,
"orderby" : 0,
"where" : 0,
"join" : 0,
"groupby" : 0,
"command" : 0,
"local" : 0
},
"queries" : {
"rest" : {
"total" : 0,
"paging" : 0,
"failed" : 0
},
"cli" : {
"total" : 0,
"paging" : 0,
"failed" : 0
},
"canvas" : {
"total" : 0,
"paging" : 0,
"failed" : 0
},
"odbc" : {
"total" : 0,
"paging" : 0,
"failed" : 0
},
"jdbc" : {
"total" : 0,
"paging" : 0,
"failed" : 0
},
"odbc32" : {
"total" : 0,
"paging" : 0,
"failed" : 0
},
"odbc64" : {
"total" : 0,
"paging" : 0,
"failed" : 0
},
"_all" : {
"total" : 0,
"paging" : 0,
"failed" : 0
},
"translate" : {
"count" : 0
}
}
},
"rollup" : {
"available" : true,
"enabled" : true
},
"ilm" : {
"policy_count" : 3,
"policy_stats" : [ ]
},
"slm" : {
"available" : true,
"enabled" : true
},
"ccr" : {
"available" : true,
"enabled" : true,
"follower_indices_count" : 0,
"auto_follow_patterns_count" : 0
},
"transform" : {
"available" : true,
"enabled" : true
},
"voting_only" : {
"available" : true,
"enabled" : true
},
"searchable_snapshots" : {
"available" : true,
"enabled" : true,
"indices_count" : 0,
"full_copy_indices_count" : 0,
"shared_cache_indices_count" : 0
},
"frozen_indices" : {
"available" : true,
"enabled" : true,
"indices_count" : 0
},
"spatial" : {
"available" : true,
"enabled" : true
},
"analytics" : {
"available" : true,
"enabled" : true,
"stats": {
"boxplot_usage" : 0,
"top_metrics_usage" : 0,
"normalize_usage" : 0,
"cumulative_cardinality_usage" : 0,
"t_test_usage" : 0,
"rate_usage" : 0,
"string_stats_usage" : 0,
"moving_percentiles_usage" : 0,
"multi_terms_usage" : 0
}
},
"data_streams" : {
"available" : true,
"enabled" : true,
"data_streams" : 0,
"indices_count" : 0
},
"data_lifecycle" : {
"available": true,
"enabled": true,
"count": 0,
"default_rollover_used": true,
"data_retention": {
"configured_data_streams": 0
},
"effective_retention": {
"retained_data_streams": 0
},
"global_retention": {
"default": {
"defined": false
},
"max": {
"defined": false
}
}
},
"data_tiers" : {
"available" : true,
"enabled" : true,
"data_warm" : {
"node_count" : 0,
"index_count" : 0,
"total_shard_count" : 0,
"primary_shard_count" : 0,
"doc_count" : 0,
"total_size_bytes" : 0,
"primary_size_bytes" : 0,
"primary_shard_size_avg_bytes" : 0,
"primary_shard_size_median_bytes" : 0,
"primary_shard_size_mad_bytes" : 0
},
"data_frozen" : {
"node_count" : 1,
"index_count" : 0,
"total_shard_count" : 0,
"primary_shard_count" : 0,
"doc_count" : 0,
"total_size_bytes" : 0,
"primary_size_bytes" : 0,
"primary_shard_size_avg_bytes" : 0,
"primary_shard_size_median_bytes" : 0,
"primary_shard_size_mad_bytes" : 0
},
"data_cold" : {
"node_count" : 0,
"index_count" : 0,
"total_shard_count" : 0,
"primary_shard_count" : 0,
"doc_count" : 0,
"total_size_bytes" : 0,
"primary_size_bytes" : 0,
"primary_shard_size_avg_bytes" : 0,
"primary_shard_size_median_bytes" : 0,
"primary_shard_size_mad_bytes" : 0
},
"data_content" : {
"node_count" : 0,
"index_count" : 0,
"total_shard_count" : 0,
"primary_shard_count" : 0,
"doc_count" : 0,
"total_size_bytes" : 0,
"primary_size_bytes" : 0,
"primary_shard_size_avg_bytes" : 0,
"primary_shard_size_median_bytes" : 0,
"primary_shard_size_mad_bytes" : 0
},
"data_hot" : {
"node_count" : 0,
"index_count" : 0,
"total_shard_count" : 0,
"primary_shard_count" : 0,
"doc_count" : 0,
"total_size_bytes" : 0,
"primary_size_bytes" : 0,
"primary_shard_size_avg_bytes" : 0,
"primary_shard_size_median_bytes" : 0,
"primary_shard_size_mad_bytes" : 0
}
},
"aggregate_metric" : {
"available" : true,
"enabled" : true
},
"archive" : {
"available" : true,
"enabled" : true,
"indices_count" : 0
},
"health_api" : {
"available" : true,
"enabled" : true,
"invocations": {
"total": 0
}
},
"remote_clusters": {
"size": 0,
"mode": {
"proxy": 0,
"sniff": 0
},
"security": {
"cert": 0,
"api_key": 0
}
},
"enterprise_search" : {
"available": true,
"enabled": true,
"search_applications" : {
"count": 0
},
"analytics_collections": {
"count": 0
},
"query_rulesets": {
"total_rule_count": 0,
"total_count": 0,
"min_rule_count": 0,
"max_rule_count": 0
}
},
"universal_profiling" : {
"available" : true,
"enabled" : true
},
"logsdb": {
"available": true,
"enabled": false,
"indices_count": 0,
"indices_with_synthetic_source": 0,
"num_docs": 0,
"size_in_bytes": 0,
"has_custom_cutoff_date": false
}
}
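Since the usage body reports each feature with available and enabled booleans, a small helper (a sketch, not part of any Elasticsearch client) can list which features are active in a response like the one above:

```python
def enabled_features(usage: dict) -> list:
    """Return names of features reported as both available and enabled."""
    return sorted(
        name
        for name, info in usage.items()
        if isinstance(info, dict) and info.get("available") and info.get("enabled")
    )

# A trimmed-down usage body for illustration:
usage = {
    "security": {"available": True, "enabled": True},
    "logsdb": {"available": True, "enabled": False},
}
enabled_features(usage)  # → ["security"]
```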
Watcher
You can use Watcher to watch for changes or anomalies in your data and perform the necessary actions in response.
Deactivate a watch
Generally available
All methods and paths for this operation:
A watch can be either active or inactive.
Required authorization
- Cluster privileges:
manage_watcher
PUT _watcher/watch/my_watch/_deactivate
resp = client.watcher.deactivate_watch(
watch_id="my_watch",
)
const response = await client.watcher.deactivateWatch({
watch_id: "my_watch",
});
response = client.watcher.deactivate_watch(
watch_id: "my_watch"
)
$resp = $client->watcher()->deactivateWatch([
"watch_id" => "my_watch",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_watcher/watch/my_watch/_deactivate"
client.watcher().deactivateWatch(d -> d
.watchId("my_watch")
);
Update Watcher index settings
Generally available
Update settings for the Watcher internal index (.watches). Only a subset of settings can be modified: index.auto_expand_replicas, index.number_of_replicas, index.routing.allocation.exclude.*, index.routing.allocation.include.*, and index.routing.allocation.require.*. Modification of index.routing.allocation.include._tier_preference is an exception and is not allowed, as the Watcher shards must always be in the data_content tier.
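The allowed subset can be checked client-side before sending a request; this is a hypothetical helper sketched from the list above, not part of any Elasticsearch client:

```python
import fnmatch

# Settings the Watcher update-settings API accepts, per the list above.
ALLOWED_PATTERNS = [
    "index.auto_expand_replicas",
    "index.number_of_replicas",
    "index.routing.allocation.exclude.*",
    "index.routing.allocation.include.*",
    "index.routing.allocation.require.*",
]
# Explicitly rejected even though it matches an allowed pattern.
DISALLOWED = {"index.routing.allocation.include._tier_preference"}

def is_modifiable(setting: str) -> bool:
    """Return True if the .watches index setting can be updated via this API."""
    if setting in DISALLOWED:
        return False
    return any(fnmatch.fnmatchcase(setting, pat) for pat in ALLOWED_PATTERNS)
```

Note the order of checks: the _tier_preference exception must be tested first, because it would otherwise match the index.routing.allocation.include.* wildcard.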
Required authorization
- Cluster privileges:
manage_watcher
Query parameters
- master_timeout: The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
- timeout: The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. Values are -1 or 0.
PUT /_watcher/settings
{
"index.auto_expand_replicas": "0-4"
}
resp = client.watcher.update_settings(
index.auto_expand_replicas="0-4",
)
const response = await client.watcher.updateSettings({
"index.auto_expand_replicas": "0-4",
});
response = client.watcher.update_settings(
body: {
"index.auto_expand_replicas": "0-4"
}
)
$resp = $client->watcher()->updateSettings([
"body" => [
"index.auto_expand_replicas" => "0-4",
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index.auto_expand_replicas":"0-4"}' "$ELASTICSEARCH_URL/_watcher/settings"
client.watcher().updateSettings(u -> u
.indexAutoExpandReplicas("0-4")
);
{
"index.auto_expand_replicas": "0-4"
}
Query watches
Generally available; Added in 7.11.0
All methods and paths for this operation:
Get all registered watches in a paginated manner and optionally filter watches by a query.
Note that only the _id and metadata.* fields are queryable or sortable.
Required authorization
- Cluster privileges:
monitor_watcher
Body
- from: The offset from the first result to fetch. It must be non-negative. Default value is 0.
- size: The number of hits to return. It must be non-negative. Default value is 10.
- query: An Elasticsearch Query DSL (Domain Specific Language) object that defines a query. External documentation
- sort: The sort definition, limited to the queryable fields (_id and metadata.*).
- search_after: The sort values of the last hit from the previous page, used to retrieve the next page.
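The from and size parameters above can drive client-side pagination; this sketch (a hypothetical helper, not part of any Elasticsearch client) yields the request bodies needed to page through a known watch count:

```python
def watch_query_pages(total: int, size: int = 10):
    """Yield request bodies that page through `total` watches, `size` at a time."""
    for start in range(0, total, size):
        yield {"from": start, "size": size}

list(watch_query_pages(25))
# → [{"from": 0, "size": 10}, {"from": 10, "size": 10}, {"from": 20, "size": 10}]
```

For large watch counts, search_after with a sort on _id is the more robust choice, since deep from/size offsets grow more expensive as the offset increases.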
GET /_watcher/_query/watches
resp = client.watcher.query_watches()
const response = await client.watcher.queryWatches();
response = client.watcher.query_watches
$resp = $client->watcher()->queryWatches();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_watcher/_query/watches"
client.watcher().queryWatches();
{
"count": 1,
"watches": [
{
"_id": "my_watch",
"watch": {
"trigger": {
"schedule": {
"hourly": {
"minute": [
0,
5
]
}
}
},
"input": {
"simple": {
"payload": {
"send": "yes"
}
}
},
"condition": {
"always": {}
},
"actions": {
"test_index": {
"index": {
"index": "test"
}
}
}
},
"status": {
"state": {
"active": true,
"timestamp": "2015-05-26T18:21:08.630Z"
},
"actions": {
"test_index": {
"ack": {
"timestamp": "2015-05-26T18:21:08.630Z",
"state": "awaits_successful_execution"
}
}
},
"version": -1
},
"_seq_no": 0,
"_primary_term": 1
}
]
}
POST _watcher/_stop
resp = client.watcher.stop()
const response = await client.watcher.stop();
response = client.watcher.stop
$resp = $client->watcher()->stop();
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_watcher/_stop"
client.watcher().stop(s -> s);
{
"acknowledged": true
}