Get a document count Generally available

GET /_cat/count/{index}

All methods and paths for this operation:

GET /_cat/count

GET /_cat/count/{index}

Get quick access to a document count for a data stream, an index, or an entire cluster. The document count includes only live documents, not deleted documents that have not yet been removed by the merge process.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.
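
For application code, the count API returns the same information in a structured form. A minimal sketch with the Python client, reusing the index name from the examples below:

# Count API; suitable for applications, unlike the CAT endpoint.
resp = client.count(index="my-index-000001")
print(resp["count"])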

Required authorization

  • Index privileges: read

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. It supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • epoch number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type is used to capture that behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      One of:

      Time unit for seconds

    • timestamp string

      Time of day, expressed as HH:MM:SS

    • count string

      The document count.

GET /_cat/count/my-index-000001?v=true&format=json
resp = client.cat.count(
    index="my-index-000001",
    v=True,
    format="json",
)
const response = await client.cat.count({
  index: "my-index-000001",
  v: "true",
  format: "json",
});
response = client.cat.count(
  index: "my-index-000001",
  v: "true",
  format: "json"
)
$resp = $client->cat()->count([
    "index" => "my-index-000001",
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/count/my-index-000001?v=true&format=json"
client.cat().count();
Response examples (200)
A successful response from `GET /_cat/count/my-index-000001?v=true&format=json`. It retrieves the document count for the `my-index-000001` data stream or index.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "120"
  }
]
A successful response from `GET /_cat/count?v=true&format=json`. It retrieves the document count for all data streams and indices in the cluster.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "121"
  }
]
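
Because epoch and count are typed as number | string, clients should coerce them before doing arithmetic. A tiny sketch, reusing resp from the Python example above:

row = resp[0]                      # first (and only) row of the CAT response
epoch_seconds = int(row["epoch"])  # may arrive as a string; coerce leniently
doc_count = int(row["count"])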

Get master node information Generally available

GET /_cat/master

Get information about the master node, including the ID, bound IP address, and name.

IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
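
For example, an application can read the same details from the nodes info API, using the _master node filter to target the elected master. A minimal sketch with the Python client:

# Nodes info API, filtered to the elected master node.
resp = client.nodes.info(node_id="_master")
for node_id, node in resp["nodes"].items():
    print(node_id, node["name"], node["ip"])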

Required authorization

  • Cluster privileges: monitor

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    • id string

      The node ID.

    • host string

      The host name.

    • ip string

      The IP address.

    • node string

      The node name.

GET /_cat/master?v=true&format=json
resp = client.cat.master(
    v=True,
    format="json",
)
const response = await client.cat.master({
  v: "true",
  format: "json",
});
response = client.cat.master(
  v: "true",
  format: "json"
)
$resp = $client->cat()->master([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/master?v=true&format=json"
client.cat().master();
Response examples (200)
A successful response from `GET /_cat/master?v=true&format=json`.
[
  {
    "id": "YzWoH_2BT-6UjVGDyPdqYg",
    "host": "127.0.0.1",
    "ip": "127.0.0.1",
    "node": "YzWoH_2"
  }
]

Get anomaly detection jobs Generally available; Added in 7.7.0

GET /_cat/ml/anomaly_detectors/{job_id}

All methods and paths for this operation:

GET /_cat/ml/anomaly_detectors

GET /_cat/ml/anomaly_detectors/{job_id}

Get configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.
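
For example, an application can read the same usage information from the get anomaly detection job statistics API. A minimal sketch with the Python client, using _all to match every job:

# Job statistics API; suitable for applications, unlike the CAT endpoint.
resp = client.ml.get_job_stats(job_id="_all")
for job in resp["jobs"]:
    print(job["job_id"], job["state"])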

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no jobs that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    • id string
    • state string

      Values are closing, closed, opened, failed, or opening.

    • opened_time string

      For open jobs only, the amount of time the job has been opened.

    • assignment_explanation string

      For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.

    • data.processed_records string

      The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed_record_count is the number of aggregation results processed, not the number of Elasticsearch documents.

    • data.processed_fields string

      The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.

    • data.input_bytes number | string

    • data.input_records string

      The number of input documents posted to the anomaly detection job.

    • data.input_fields string

      The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.

    • data.invalid_dates string

      The number of input documents with either a missing date field or a date that could not be parsed.

    • data.missing_fields string

      The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing. If you are using datafeeds or posting data to the job in JSON format, a high missing_field_count is often not an indication of data issues. It is not necessarily a cause for concern.

    • data.out_of_order_timestamps string

      The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.

    • data.empty_buckets string

      The number of buckets which did not contain any data. If your data contains many empty buckets, consider increasing your bucket_span or using functions that are tolerant to gaps in data such as mean, non_null_sum or non_zero_count.

    • data.sparse_buckets string

      The number of buckets that contained few data points compared to the expected number of data points. If your data contains many sparse buckets, consider using a longer bucket_span.

    • data.buckets string

      The total number of buckets processed.

    • data.earliest_record string

      The timestamp of the earliest chronologically input document.

    • data.latest_record string

      The timestamp of the latest chronologically input document.

    • data.last string

      The timestamp at which data was last analyzed, according to server time.

    • data.last_empty_bucket string

      The timestamp of the last bucket that did not contain any data.

    • data.last_sparse_bucket string

      The timestamp of the last bucket that was considered sparse.

    • model.bytes number | string

    • model.memory_status string

      Values are ok, soft_limit, or hard_limit.

    • model.bytes_exceeded number | string

    • model.memory_limit string

      The upper limit for model memory usage, checked on increasing values.

    • model.by_fields string

      The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.over_fields string

      The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.partition_fields string

      The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.bucket_allocation_failures string

      The number of buckets for which new entities in incoming data were not processed due to insufficient model memory. This situation is also signified by a hard_limit: memory_status property value.

    • model.categorization_status string

      Values are ok or warn.

    • model.categorized_doc_count string

      The number of documents that have had a field categorized.

    • model.total_category_count string

      The number of categories created by categorization.

    • model.frequent_category_count string

      The number of categories that match more than 1% of categorized documents.

    • model.rare_category_count string

      The number of categories that match just one categorized document.

    • model.dead_category_count string

      The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.

    • model.failed_category_count string

      The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model_memory_limit. This count does not track which specific categories failed to be created. Therefore you cannot use this value to determine the number of unique categories that were missed.

    • model.log_time string

      The timestamp when the model stats were gathered, according to server time.

    • model.timestamp string

      The timestamp of the last record when the model stats were gathered.

    • forecasts.total string

      The number of individual forecasts currently available for the job. A value of one or more indicates that forecasts exist.

    • forecasts.memory.min string

      The minimum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.max string

      The maximum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.avg string

      The average memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.total string

      The total memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.records.min string

      The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.max string

      The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.avg string

      The average number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.total string

      The total number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.time.min string

      The minimum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.max string

      The maximum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.avg string

      The average runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.total string

      The total runtime in milliseconds for forecasts related to the anomaly detection job.

    • node.id string
    • node.name string

      The name of the assigned node.

    • node.ephemeral_id string
    • node.address string

      The network address of the assigned node.

    • buckets.count string

      The number of bucket results produced by the job.

    • buckets.time.total string

      The sum of all bucket processing times, in milliseconds.

    • buckets.time.min string

      The minimum of all bucket processing times, in milliseconds.

    • buckets.time.max string

      The maximum of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg string

      The exponential moving average of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg_hour string

      The exponential moving average of bucket processing times calculated in a one hour time window, in milliseconds.

GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json
resp = client.cat.ml_jobs(
    h="id,s,dpr,mb",
    v=True,
    format="json",
)
const response = await client.cat.mlJobs({
  h: "id,s,dpr,mb",
  v: "true",
  format: "json",
});
response = client.cat.ml_jobs(
  h: "id,s,dpr,mb",
  v: "true",
  format: "json"
)
$resp = $client->cat()->mlJobs([
    "h" => "id,s,dpr,mb",
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json"
client.cat().mlJobs();
Response examples (200)
A successful response from `GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json`.
[
  {
    "id": "high_sum_total_sales",
    "s": "closed",
    "dpr": "14022",
    "mb": "1.5mb"
  },
  {
    "id": "low_request_rate",
    "s": "closed",
    "dpr": "1216",
    "mb": "40.5kb"
  },
  {
    "id": "response_code_rates",
    "s": "closed",
    "dpr": "28146",
    "mb": "132.7kb"
  },
  {
    "id": "url_scanning",
    "s": "closed",
    "dpr": "28146",
    "mb": "501.6kb"
  }
]

Get snapshot repository information Generally available; Added in 2.1.0

GET /_cat/repositories

Get a list of snapshot repositories for a cluster.

IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot repository API.
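
For example, an application can list the same repositories with the get snapshot repository API. A minimal sketch with the Python client:

# Get snapshot repository API; returns a mapping of repository name to definition.
resp = client.snapshot.get_repository()
for name, repo in resp.body.items():
    print(name, repo["type"])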

Required authorization

  • Cluster privileges: monitor_snapshot

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    • id string

      The unique repository identifier.

    • type string

      The repository type.

GET /_cat/repositories?v=true&format=json
resp = client.cat.repositories(
    v=True,
    format="json",
)
const response = await client.cat.repositories({
  v: "true",
  format: "json",
});
response = client.cat.repositories(
  v: "true",
  format: "json"
)
$resp = $client->cat()->repositories([
    "v" => "true",
    "format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/repositories?v=true&format=json"
client.cat().repositories();
Response examples (200)
A successful response from `GET /_cat/repositories?v=true&format=json`.
[
  {
    "id": "repo1",
    "type": "fs"
  },
  {
    "id": "repo2",
    "type": "s3"
  }
]

Cluster

Update voting configuration exclusions Generally available; Added in 7.0.0

POST /_cluster/voting_config_exclusions

Update the cluster voting config exclusions by node IDs or node names. By default, if there are more than three master-eligible nodes in the cluster and you remove fewer than half of the master-eligible nodes in the cluster at once, the voting configuration automatically shrinks. If you want to shrink the voting configuration to contain fewer than three nodes or to remove half or more of the master-eligible nodes in the cluster at once, use this API to remove departing nodes from the voting configuration manually. The API adds an entry for each specified node to the cluster’s voting configuration exclusions list. It then waits until the cluster has reconfigured its voting configuration to exclude the specified nodes.

Clusters should have no voting configuration exclusions in normal operation. Once the excluded nodes have stopped, clear the voting configuration exclusions with DELETE /_cluster/voting_config_exclusions. This API waits for the nodes to be fully removed from the cluster before it returns. If your cluster has voting configuration exclusions for nodes that you no longer intend to remove, use DELETE /_cluster/voting_config_exclusions?wait_for_removal=false to clear the voting configuration exclusions without waiting for the nodes to leave the cluster.

A response to POST /_cluster/voting_config_exclusions with an HTTP status code of 200 OK guarantees that the node has been removed from the voting configuration and will not be reinstated until the voting configuration exclusions are cleared by calling DELETE /_cluster/voting_config_exclusions. If the call to POST /_cluster/voting_config_exclusions fails or returns a response with an HTTP status code other than 200 OK, then the node may not have been removed from the voting configuration. In that case, you may safely retry the call.

NOTE: Voting exclusions are required only when you remove at least half of the master-eligible nodes from a cluster in a short time period. They are not required when removing master-ineligible nodes or when removing fewer than half of the master-eligible nodes.
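
As a sketch of the add-then-clear workflow described above, using the Python client (the node names are illustrative):

# Exclude the departing master-eligible nodes from the voting configuration.
client.cluster.post_voting_config_exclusions(node_names="node-1,node-2")

# ... shut down the excluded nodes ...

# Clear the exclusions once the nodes have left the cluster.
client.cluster.delete_voting_config_exclusions()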

Query parameters

  • node_names string | array[string]

    A comma-separated list of the names of the nodes to exclude from the voting configuration. If specified, you may not also specify node_ids.

  • node_ids string | array[string]

    A comma-separated list of the persistent ids of the nodes to exclude from the voting configuration. If specified, you may not also specify node_names.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

  • timeout string

    When adding a voting configuration exclusion, the API waits for the specified nodes to be excluded from the voting configuration before returning. If the timeout expires before the appropriate condition is satisfied, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
POST /_cluster/voting_config_exclusions
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/voting_config_exclusions"

Get the cluster health status Generally available; Added in 1.3.0

GET /_cluster/health/{index}

All methods and paths for this operation:

GET /_cluster/health

GET /_cluster/health/{index}

Get the health status of a cluster. You can also use the API to get the health status of only specified data streams and indices. For data streams, the API retrieves the health status of the stream’s backing indices.

The cluster health status is green, yellow, or red. On the shard level, a red status indicates that the specific shard is not allocated in the cluster. Yellow means that the primary shard is allocated but replicas are not. Green means that all shards are allocated. The index level status is controlled by the worst shard status.

One of the main benefits of the API is the ability to wait until the cluster reaches a certain high watermark health level. The cluster status is controlled by the worst index status.
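
For example, to block until the cluster reaches at least yellow status or a timeout expires, the request might look like the following sketch with the Python client (the 50s timeout is illustrative):

resp = client.cluster.health(
    wait_for_status="yellow",  # returns as soon as the status is yellow or green
    timeout="50s",             # otherwise gives up and reports timed_out=true
)
print(resp["status"], resp["timed_out"])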

Required authorization

  • Cluster privileges: monitor,manage

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and index aliases used to limit the request. Wildcard expressions (*) are supported. To target all data streams and indices in a cluster, omit this parameter or use _all or *.

Query parameters

  • expand_wildcards string | array[string]

    Whether to expand wildcard expressions to concrete indices that are open, closed, or both.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • level string

    Can be one of cluster, indices or shards. Controls the details level of the health information returned.

    Values are cluster, indices, or shards.

  • local boolean

    If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    A number controlling how many active shards to wait for, all to wait for all shards in the cluster to be active, or 0 to not wait.

    Values are all or index-setting.

  • wait_for_events string

    Can be one of immediate, urgent, high, normal, low, languid. Wait until all currently queued events with the given priority are processed.

    Values are immediate, urgent, high, normal, low, or languid.

  • wait_for_nodes string | number

    The request waits until the specified number N of nodes is available. It also accepts >=N, <=N, >N and <N. Alternatively, it is possible to use ge(N), le(N), gt(N) and lt(N) notation.

  • wait_for_no_initializing_shards boolean

    A boolean value which controls whether to wait (until the timeout provided) for the cluster to have no shard initializations. Defaults to false, which means it will not wait for initializing shards.

  • wait_for_no_relocating_shards boolean

    A boolean value which controls whether to wait (until the timeout provided) for the cluster to have no shard relocations. Defaults to false, which means it will not wait for relocating shards.

  • wait_for_status string

    One of green, yellow or red. Will wait (until the timeout provided) until the status of the cluster changes to the one provided or better, i.e. green > yellow > red. By default, will not wait for any status.

    Supported values include:

    • green (or GREEN): All shards are assigned.
    • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
    • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.

    Values are green, GREEN, yellow, YELLOW, red, or RED.

Responses

  • 200 application/json
    • active_primary_shards number Required

      The number of active primary shards.

    • active_shards number Required

      The total number of active primary and replica shards.

    • active_shards_percent string

      The ratio of active shards in the cluster expressed as a string formatted percentage.

    • active_shards_percent_as_number number Required

      The ratio of active shards in the cluster expressed as a percentage.

    • cluster_name string Required
    • delayed_unassigned_shards number Required

      The number of shards whose allocation has been delayed by the timeout settings.

    • indices object
      • * object Additional properties
        • active_primary_shards number Required
        • active_shards number Required
        • initializing_shards number Required
        • number_of_replicas number Required
        • number_of_shards number Required
        • relocating_shards number Required
        • shards object
          • * object Additional properties
            • active_shards number Required
            • initializing_shards number Required
            • primary_active boolean Required
            • relocating_shards number Required
            • status string Required

              Values are green, GREEN, yellow, YELLOW, red, or RED.

            • unassigned_shards number Required
            • unassigned_primary_shards number Required
        • status string Required

          Values are green, GREEN, yellow, YELLOW, red, or RED.

        • unassigned_shards number Required
        • unassigned_primary_shards number Required
    • initializing_shards number Required

      The number of shards that are under initialization.

    • number_of_data_nodes number Required

      The number of nodes that are dedicated data nodes.

    • number_of_in_flight_fetch number Required

      The number of unfinished fetches.

    • number_of_nodes number Required

      The number of nodes within the cluster.

    • number_of_pending_tasks number Required

      The number of cluster-level changes that have not yet been executed.

    • relocating_shards number Required

      The number of shards that are under relocation.

    • status string Required

      Values are green, GREEN, yellow, YELLOW, red, or RED.

    • task_max_waiting_in_queue string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • task_max_waiting_in_queue_millis number

      Time unit for milliseconds

    • timed_out boolean Required

      If false, the response was returned within the period of time that is specified by the timeout parameter (30s by default).

    • unassigned_primary_shards number Required

      The number of primary shards that are not allocated.

    • unassigned_shards number Required

      The number of shards that are not allocated.

GET _cluster/health
resp = client.cluster.health()
const response = await client.cluster.health();
response = client.cluster.health
$resp = $client->cluster()->health();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cluster/health"
client.cluster().health(h -> h);
Response examples (200)
A successful response from `GET _cluster/health`. It is the health status of a quiet single node cluster with a single index with one shard and one replica.
{
  "cluster_name" : "testcluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1,
  "active_shards" : 1,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 50.0
}

Get the cluster health Generally available; Added in 8.7.0

GET /_health_report/{feature}

All methods and paths for this operation:

GET /_health_report

GET /_health_report/{feature}

Get a report with the health status of an Elasticsearch cluster. The report contains a list of indicators that compose Elasticsearch functionality.

Each indicator has a health status of green, unknown, yellow, or red. The indicator will provide an explanation and metadata describing the reason for its current health status.

The cluster’s status is controlled by the worst indicator status.

In the event that an indicator’s status is non-green, a list of impacts may be present in the indicator result which detail the functionalities that are negatively affected by the health issue. Each impact carries with it a severity level, an area of the system that is affected, and a simple description of the impact on the system.

Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system. The root cause and remediation steps are encapsulated in a diagnosis. A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem.

NOTE: The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently. When setting up automated polling of the API for health status, set verbose to false to disable the more expensive analysis logic.
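
For example, an automated poller can disable the verbose diagnostics. A minimal sketch with the Python client:

# Poll cluster health cheaply; verbose=False skips the root cause analysis.
resp = client.health_report(verbose=False)
for name, indicator in resp["indicators"].items():
    print(name, indicator["status"])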

Path parameters

  • feature string | array[string] Required

    A feature of the cluster, as returned by the top-level health report API.

Query parameters

  • timeout string

    Explicit operation timeout.

    Values are -1 or 0.

  • verbose boolean

    Opt-in for more information about the health of the system.

  • size number

    Limit the number of affected resources the health report API returns.

Responses

  • 200 application/json
    • cluster_name string Required
    • indicators object Required
      • master_is_stable object

        MASTER_IS_STABLE

        • status string Required

          Values are green, yellow, red, or unknown.

        • symptom string Required
        • impacts array[object]
          • description string Required
          • id string Required
          • impact_areas array[string] Required

            Values are search, ingest, backup, or deployment_management.

          • severity number Required
        • diagnosis array[object]
          • id string Required
          • action string Required
          • affected_resources object Required
          • cause string Required
          • help_url string Required
        • details object
          Hide details attributes Show details attributes object
          • current_master object Required
            Hide current_master attributes Show current_master attributes object
            • name
            • node_id
          • recent_masters array[object] Required
          • exception_fetching_history object
            Hide exception_fetching_history attributes Show exception_fetching_history attributes object
            • message string Required
            • stack_trace string Required
          • cluster_formation array[object]
      • shards_availability object

        SHARDS_AVAILABILITY

        Hide shards_availability attributes Show shards_availability attributes object
        • status string Required

          Values are green, yellow, red, or unknown.

        • symptom string Required
        • impacts array[object]
          Hide impacts attributes Show impacts attributes object
          • description string Required
          • id string Required
          • impact_areas array[string] Required

            Values are search, ingest, backup, or deployment_management.

          • severity number Required
        • diagnosis array[object]
          Hide diagnosis attributes Show diagnosis attributes object
          • id string Required
          • action string Required
          • affected_resources object Required
          • cause string Required
          • help_url string Required
        • details object
          Hide details attributes Show details attributes object
          • creating_primaries number Required
          • creating_replicas number Required
          • initializing_primaries number Required
          • initializing_replicas number Required
          • restarting_primaries number Required
          • restarting_replicas number Required
          • started_primaries number Required
          • started_replicas number Required
          • unassigned_primaries number Required
          • unassigned_replicas number Required
      • disk object

        DISK

        Hide disk attributes Show disk attributes object
        • status string Required

          Values are green, yellow, red, or unknown.

        • symptom string Required
        • impacts array[object]
          Hide impacts attributes Show impacts attributes object
          • description string Required
          • id string Required
          • impact_areas array[string] Required

            Values are search, ingest, backup, or deployment_management.

          • severity number Required
        • diagnosis array[object]
          Hide diagnosis attributes Show diagnosis attributes object
          • id string Required
          • action string Required
          • affected_resources object Required
          • cause string Required
          • help_url string Required
        • details object
          Hide details attributes Show details attributes object
          • indices_with_readonly_block number Required
          • nodes_with_enough_disk_space number Required
          • nodes_over_high_watermark number Required
          • nodes_over_flood_stage_watermark number Required
          • nodes_with_unknown_disk_status number Required
      • repository_integrity object

        REPOSITORY_INTEGRITY

        Hide repository_integrity attributes Show repository_integrity attributes object
        • status string Required

          Values are green, yellow, red, or unknown.

        • symptom string Required
        • impacts array[object]
          Hide impacts attributes Show impacts attributes object
          • description string Required
          • id string Required
          • impact_areas array[string] Required

            Values are search, ingest, backup, or deployment_management.

          • severity number Required
        • diagnosis array[object]
          Hide diagnosis attributes Show diagnosis attributes object
          • id string Required
          • action string Required
          • affected_resources object Required
          • cause string Required
          • help_url string Required
        • details object
          Hide details attributes Show details attributes object
          • total_repositories number
          • corrupted_repositories number
          • corrupted array[string]
      • data_stream_lifecycle object

        DATA_STREAM_LIFECYCLE

        Hide data_stream_lifecycle attributes Show data_stream_lifecycle attributes object
        • status string Required

          Values are green, yellow, red, or unknown.

        • symptom string Required
        • impacts array[object]
          Hide impacts attributes Show impacts attributes object
          • description string Required
          • id string Required
          • impact_areas array[string] Required

            Values are search, ingest, backup, or deployment_management.

          • severity number Required
        • diagnosis array[object]
          Hide diagnosis attributes Show diagnosis attributes object
          • id string Required
          • action string Required
          • affected_resources object Required
          • cause string Required
          • help_url string Required
        • details object
          Hide details attributes Show details attributes object
          • stagnating_backing_indices_count number Required
          • total_backing_indices_in_error number Required
          • stagnating_backing_indices array[object]
      • ilm object

        ILM

        Hide ilm attributes Show ilm attributes object
        • status string Required

          Values are green, yellow, red, or unknown.

        • symptom string Required
        • impacts array[object]
          Hide impacts attributes Show impacts attributes object
          • description string Required
          • id string Required
          • impact_areas array[string] Required

            Values are search, ingest, backup, or deployment_management.

          • severity number Required
        • diagnosis array[object]
          Hide diagnosis attributes Show diagnosis attributes object
          • id string Required
          • action string Required
          • affected_resources object Required
          • cause string Required
          • help_url string Required
        • details object
          Hide details attributes Show details attributes object
          • ilm_status string Required

            Values are RUNNING, STOPPING, or STOPPED.

          • policies number Required
          • stagnating_indices number Required
      • slm object

        SLM

        Hide slm attributes Show slm attributes object
        • status string Required

          Values are green, yellow, red, or unknown.

        • symptom string Required
        • impacts array[object]
          Hide impacts attributes Show impacts attributes object
          • description string Required
          • id string Required
          • impact_areas array[string] Required

            Values are search, ingest, backup, or deployment_management.

          • severity number Required
        • diagnosis array[object]
          Hide diagnosis attributes Show diagnosis attributes object
          • id string Required
          • action string Required
          • affected_resources object Required
          • cause string Required
          • help_url string Required
        • details object
          Hide details attributes Show details attributes object
          • slm_status string Required

            Values are RUNNING, STOPPING, or STOPPED.

          • policies number Required
          • unhealthy_policies object
            Hide unhealthy_policies attributes Show unhealthy_policies attributes object
            • count number Required
            • invocations_since_last_success object
      • shards_capacity object

        SHARDS_CAPACITY

        Hide shards_capacity attributes Show shards_capacity attributes object
        • status string Required

          Values are green, yellow, red, or unknown.

        • symptom string Required
        • impacts array[object]
          Hide impacts attributes Show impacts attributes object
          • description string Required
          • id string Required
          • impact_areas array[string] Required

            Values are search, ingest, backup, or deployment_management.

          • severity number Required
        • diagnosis array[object]
          Hide diagnosis attributes Show diagnosis attributes object
          • id string Required
          • action string Required
          • affected_resources object Required
          • cause string Required
          • help_url string Required
        • details object
          Hide details attributes Show details attributes object
          • data object Required
            Hide data attributes Show data attributes object
            • max_shards_in_cluster number Required
            • current_used_shards number
          • frozen object Required
            Hide frozen attributes Show frozen attributes object
            • max_shards_in_cluster number Required
            • current_used_shards number
      • file_settings object

        FILE_SETTINGS

        Hide file_settings attributes Show file_settings attributes object
        • status string Required

          Values are green, yellow, red, or unknown.

        • symptom string Required
        • impacts array[object]
          Hide impacts attributes Show impacts attributes object
          • description string Required
          • id string Required
          • impact_areas array[string] Required

            Values are search, ingest, backup, or deployment_management.

          • severity number Required
        • diagnosis array[object]
          Hide diagnosis attributes Show diagnosis attributes object
          • id string Required
          • action string Required
          • affected_resources object Required
          • cause string Required
          • help_url string Required
        • details object
          Hide details attributes Show details attributes object
          • failure_streak number Required
          • most_recent_failure string Required
    • status string

      Values are green, yellow, red, or unknown.

GET _health_report
resp = client.health_report()
const response = await client.healthReport();
response = client.health_report
$resp = $client->healthReport();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_health_report"
client.healthReport(h -> h);
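To check a single indicator, pass its name as the `feature` path parameter. A hypothetical example targeting the `shards_availability` indicator (the Python client is assumed to expose `feature` as shown):

GET _health_report/shards_availability
resp = client.health_report(feature="shards_availability")
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_health_report/shards_availability"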

Set the connector sync job stats Technical preview

PUT /_connector/_sync_job/{connector_sync_job_id}/_stats

Stats include: deleted_document_count, indexed_document_count, indexed_document_volume, and total_document_count. You can also update last_seen. This API is mainly used by the connector service for updating sync job information.

To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job.

application/json

Body Required

  • deleted_document_count number Required

    The number of documents the sync job deleted.

  • indexed_document_count number Required

    The number of documents the sync job indexed.

  • indexed_document_volume number Required

    The total size of the data (in MiB) the sync job indexed.

  • last_seen string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • metadata object
    Hide metadata attribute Show metadata attribute object
    • * object Additional properties
  • total_document_count number

    The total number of documents in the target index after the sync job finished.

Responses

  • 200 application/json
PUT /_connector/_sync_job/{connector_sync_job_id}/_stats
PUT _connector/_sync_job/my-connector-sync-job/_stats
{
    "deleted_document_count": 10,
    "indexed_document_count": 20,
    "indexed_document_volume": 1000,
    "total_document_count": 2000,
    "last_seen": "2023-01-02T10:00:00Z"
}
resp = client.connector.sync_job_update_stats(
    connector_sync_job_id="my-connector-sync-job",
    deleted_document_count=10,
    indexed_document_count=20,
    indexed_document_volume=1000,
    total_document_count=2000,
    last_seen="2023-01-02T10:00:00Z",
)
const response = await client.connector.syncJobUpdateStats({
  connector_sync_job_id: "my-connector-sync-job",
  deleted_document_count: 10,
  indexed_document_count: 20,
  indexed_document_volume: 1000,
  total_document_count: 2000,
  last_seen: "2023-01-02T10:00:00Z",
});
response = client.connector.sync_job_update_stats(
  connector_sync_job_id: "my-connector-sync-job",
  body: {
    "deleted_document_count": 10,
    "indexed_document_count": 20,
    "indexed_document_volume": 1000,
    "total_document_count": 2000,
    "last_seen": "2023-01-02T10:00:00Z"
  }
)
$resp = $client->connector()->syncJobUpdateStats([
    "connector_sync_job_id" => "my-connector-sync-job",
    "body" => [
        "deleted_document_count" => 10,
        "indexed_document_count" => 20,
        "indexed_document_volume" => 1000,
        "total_document_count" => 2000,
        "last_seen" => "2023-01-02T10:00:00Z",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"deleted_document_count":10,"indexed_document_count":20,"indexed_document_volume":1000,"total_document_count":2000,"last_seen":"2023-01-02T10:00:00Z"}' "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job/_stats"
client.connector().syncJobUpdateStats(s -> s
    .connectorSyncJobId("my-connector-sync-job")
    .deletedDocumentCount(10L)
    .indexedDocumentCount(20L)
    .indexedDocumentVolume(1000L)
    .lastSeen(l -> l
        .time("2023-01-02T10:00:00Z")
    )
    .totalDocumentCount(2000)
);
Request example
An example body for a `PUT _connector/_sync_job/my-connector-sync-job/_stats` request.
{
    "deleted_document_count": 10,
    "indexed_document_count": 20,
    "indexed_document_volume": 1000,
    "total_document_count": 2000,
    "last_seen": "2023-01-02T10:00:00Z"
}
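
As a rough sketch, a self-managed connector service might report cumulative progress while a sync runs. Everything except the API call below is illustrative (the batch iterator and its attributes are hypothetical, not part of the API):

deleted, indexed, volume_mib = 0, 0, 0
for batch in fetch_batches():  # hypothetical iterator over source-system batches
    indexed += len(batch.docs)
    volume_mib += batch.size_mib
    client.connector.sync_job_update_stats(
        connector_sync_job_id="my-connector-sync-job",
        deleted_document_count=deleted,
        indexed_document_count=indexed,
        indexed_document_volume=volume_mib,
    )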

Activate the connector draft filter Technical preview; Added in 8.12.0

PUT /_connector/{connector_id}/_filtering/_activate

Activates the valid draft filtering for a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_filtering/_activate
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_filtering/_activate' \
 --header "Authorization: $API_KEY"
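
For example, assuming a connector with the hypothetical ID `my-connector`:

PUT _connector/my-connector/_filtering/_activate
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/my-connector/_filtering/_activate"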

Update the connector filtering Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_filtering

Update the draft filtering configuration of a connector and mark the draft validation state as edited. The filtering draft is activated once validated by the running Elastic connector service. The filtering property is used to configure sync rules (both basic and advanced) for a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • filtering array[object]
    Hide filtering attributes Show filtering attributes object
    • active object Required
      Hide active attributes Show active attributes object
      • advanced_snippet object Required
        Hide advanced_snippet attributes Show advanced_snippet attributes object
        • created_at string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

          One of:
        • updated_at string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

          One of:
        • value object Required
      • rules array[object] Required
        Hide rules attributes Show rules attributes object
        • created_at string
        • field string Required

Path to a field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • id string Required
        • order number Required
        • policy string Required

          Values are exclude or include.

        • rule string Required

          Values are contains, ends_with, equals, regex, starts_with, >, or <.

        • updated_at string
        • value string Required
      • validation object Required
        Hide validation attributes Show validation attributes object
        • errors array[object] Required
          Hide errors attributes Show errors attributes object
          • ids array[string] Required
          • messages array[string] Required
        • state string Required

          Values are edited, invalid, or valid.

    • domain string
    • draft object Required
      Hide draft attributes Show draft attributes object
      • advanced_snippet object Required
        Hide advanced_snippet attributes Show advanced_snippet attributes object
        • created_at string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

          One of:
        • updated_at string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

          One of:
        • value object Required
      • rules array[object] Required
        Hide rules attributes Show rules attributes object
        • created_at string
        • field string Required

Path to a field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • id string Required
        • order number Required
        • policy string Required

          Values are exclude or include.

        • rule string Required

          Values are contains, ends_with, equals, regex, starts_with, >, or <.

        • updated_at string
        • value string Required
      • validation object Required
        Hide validation attributes Show validation attributes object
        • errors array[object] Required
          Hide errors attributes Show errors attributes object
          • ids array[string] Required
          • messages array[string] Required
        • state string Required

          Values are edited, invalid, or valid.

  • rules array[object]
    Hide rules attributes Show rules attributes object
    • created_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      One of:
    • field string Required

Path to a field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • order number Required
    • policy string Required

      Values are exclude or include.

    • rule string Required

      Values are contains, ends_with, equals, regex, starts_with, >, or <.

    • updated_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      One of:
    • value string Required
  • advanced_snippet object
    Hide advanced_snippet attributes Show advanced_snippet attributes object
    • created_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      One of:
    • updated_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      One of:
    • value object Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_filtering
PUT _connector/my-g-drive-connector/_filtering
{
    "rules": [
         {
            "field": "file_extension",
            "id": "exclude-txt-files",
            "order": 0,
            "policy": "exclude",
            "rule": "equals",
            "value": "txt"
        },
        {
            "field": "_",
            "id": "DEFAULT",
            "order": 1,
            "policy": "include",
            "rule": "regex",
            "value": ".*"
        }
    ]
}
resp = client.connector.update_filtering(
    connector_id="my-g-drive-connector",
    rules=[
        {
            "field": "file_extension",
            "id": "exclude-txt-files",
            "order": 0,
            "policy": "exclude",
            "rule": "equals",
            "value": "txt"
        },
        {
            "field": "_",
            "id": "DEFAULT",
            "order": 1,
            "policy": "include",
            "rule": "regex",
            "value": ".*"
        }
    ],
)
const response = await client.connector.updateFiltering({
  connector_id: "my-g-drive-connector",
  rules: [
    {
      field: "file_extension",
      id: "exclude-txt-files",
      order: 0,
      policy: "exclude",
      rule: "equals",
      value: "txt",
    },
    {
      field: "_",
      id: "DEFAULT",
      order: 1,
      policy: "include",
      rule: "regex",
      value: ".*",
    },
  ],
});
response = client.connector.update_filtering(
  connector_id: "my-g-drive-connector",
  body: {
    "rules": [
      {
        "field": "file_extension",
        "id": "exclude-txt-files",
        "order": 0,
        "policy": "exclude",
        "rule": "equals",
        "value": "txt"
      },
      {
        "field": "_",
        "id": "DEFAULT",
        "order": 1,
        "policy": "include",
        "rule": "regex",
        "value": ".*"
      }
    ]
  }
)
$resp = $client->connector()->updateFiltering([
    "connector_id" => "my-g-drive-connector",
    "body" => [
        "rules" => array(
            [
                "field" => "file_extension",
                "id" => "exclude-txt-files",
                "order" => 0,
                "policy" => "exclude",
                "rule" => "equals",
                "value" => "txt",
            ],
            [
                "field" => "_",
                "id" => "DEFAULT",
                "order" => 1,
                "policy" => "include",
                "rule" => "regex",
                "value" => ".*",
            ],
        ),
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"rules":[{"field":"file_extension","id":"exclude-txt-files","order":0,"policy":"exclude","rule":"equals","value":"txt"},{"field":"_","id":"DEFAULT","order":1,"policy":"include","rule":"regex","value":".*"}]}' "$ELASTICSEARCH_URL/_connector/my-g-drive-connector/_filtering"
client.connector().updateFiltering(u -> u
    .connectorId("my-g-drive-connector")
    .rules(List.of(FilteringRule.of(f -> f
            .field("file_extension")
            .id("exclude-txt-files")
            .order(0)
            .policy(FilteringPolicy.Exclude)
            .rule(FilteringRuleRule.Equals)
            .value("txt")),FilteringRule.of(f -> f
            .field("_")
            .id("DEFAULT")
            .order(1)
            .policy(FilteringPolicy.Include)
            .rule(FilteringRuleRule.Regex)
            .value(".*"))))
);
Request examples
An example body that configures two basic sync rules for the connector.
{
    "rules": [
         {
            "field": "file_extension",
            "id": "exclude-txt-files",
            "order": 0,
            "policy": "exclude",
            "rule": "equals",
            "value": "txt"
        },
        {
            "field": "_",
            "id": "DEFAULT",
            "order": 1,
            "policy": "include",
            "rule": "regex",
            "value": ".*"
        }
    ]
}
An example body that configures an advanced sync rule via `advanced_snippet`.
{
    "advanced_snippet": {
        "value": [{
            "tables": [
                "users",
                "orders"
            ],
            "query": "SELECT users.id AS id, orders.order_id AS order_id FROM users JOIN orders ON users.id = orders.user_id"
        }]
    }
}
Response examples (200)
{
  "result": "updated"
}

Update the connector scheduling Beta; Added in 8.12.0

PUT /_connector/{connector_id}/_scheduling

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • scheduling object Required
    Hide scheduling attributes Show scheduling attributes object
    • access_control object
      Hide access_control attributes Show access_control attributes object
      • enabled boolean Required
      • interval string Required

        The interval is expressed using the crontab syntax

    • full object
      Hide full attributes Show full attributes object
      • enabled boolean Required
      • interval string Required

        The interval is expressed using the crontab syntax

    • incremental object
      Hide incremental attributes Show incremental attributes object
      • enabled boolean Required
      • interval string Required

        The interval is expressed using the crontab syntax
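
All three interval fields above use the same crontab syntax. For reference, the expressions in the examples below, such as `0 30 0 * * ?`, are Quartz-style with the fields seconds, minutes, hours, day of month, month, and day of week, so `0 30 0 * * ?` fires at 00:30:00 every day; this field breakdown is an inference from the example values, not part of the API specification.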

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_scheduling
PUT _connector/my-connector/_scheduling
{
    "scheduling": {
        "access_control": {
            "enabled": true,
            "interval": "0 10 0 * * ?"
        },
        "full": {
            "enabled": true,
            "interval": "0 20 0 * * ?"
        },
        "incremental": {
            "enabled": false,
            "interval": "0 30 0 * * ?"
        }
    }
}
resp = client.connector.update_scheduling(
    connector_id="my-connector",
    scheduling={
        "access_control": {
            "enabled": True,
            "interval": "0 10 0 * * ?"
        },
        "full": {
            "enabled": True,
            "interval": "0 20 0 * * ?"
        },
        "incremental": {
            "enabled": False,
            "interval": "0 30 0 * * ?"
        }
    },
)
const response = await client.connector.updateScheduling({
  connector_id: "my-connector",
  scheduling: {
    access_control: {
      enabled: true,
      interval: "0 10 0 * * ?",
    },
    full: {
      enabled: true,
      interval: "0 20 0 * * ?",
    },
    incremental: {
      enabled: false,
      interval: "0 30 0 * * ?",
    },
  },
});
response = client.connector.update_scheduling(
  connector_id: "my-connector",
  body: {
    "scheduling": {
      "access_control": {
        "enabled": true,
        "interval": "0 10 0 * * ?"
      },
      "full": {
        "enabled": true,
        "interval": "0 20 0 * * ?"
      },
      "incremental": {
        "enabled": false,
        "interval": "0 30 0 * * ?"
      }
    }
  }
)
$resp = $client->connector()->updateScheduling([
    "connector_id" => "my-connector",
    "body" => [
        "scheduling" => [
            "access_control" => [
                "enabled" => true,
                "interval" => "0 10 0 * * ?",
            ],
            "full" => [
                "enabled" => true,
                "interval" => "0 20 0 * * ?",
            ],
            "incremental" => [
                "enabled" => false,
                "interval" => "0 30 0 * * ?",
            ],
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"scheduling":{"access_control":{"enabled":true,"interval":"0 10 0 * * ?"},"full":{"enabled":true,"interval":"0 20 0 * * ?"},"incremental":{"enabled":false,"interval":"0 30 0 * * ?"}}}' "$ELASTICSEARCH_URL/_connector/my-connector/_scheduling"
client.connector().updateScheduling(u -> u
    .connectorId("my-connector")
    .scheduling(s -> s
        .accessControl(a -> a
            .enabled(true)
            .interval("0 10 0 * * ?")
        )
        .full(f -> f
            .enabled(true)
            .interval("0 20 0 * * ?")
        )
        .incremental(i -> i
            .enabled(false)
            .interval("0 30 0 * * ?")
        )
    )
);
Request examples
An example body for a `PUT _connector/my-connector/_scheduling` request that configures all three schedule types.
{
    "scheduling": {
        "access_control": {
            "enabled": true,
            "interval": "0 10 0 * * ?"
        },
        "full": {
            "enabled": true,
            "interval": "0 20 0 * * ?"
        },
        "incremental": {
            "enabled": false,
            "interval": "0 30 0 * * ?"
        }
    }
}
An example body that updates only the `full` sync schedule.
{
    "scheduling": {
        "full": {
            "enabled": true,
            "interval": "0 10 0 * * ?"
        }
    }
}
Response examples (200)
{
  "result": "updated"
}

Update the connector status Technical preview; Added in 8.12.0

PUT /_connector/{connector_id}/_status

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • status string Required

    Values are created, needs_configuration, configured, connected, or error.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_status
PUT _connector/my-connector/_status
{
    "status": "needs_configuration"
}
resp = client.connector.update_status(
    connector_id="my-connector",
    status="needs_configuration",
)
const response = await client.connector.updateStatus({
  connector_id: "my-connector",
  status: "needs_configuration",
});
response = client.connector.update_status(
  connector_id: "my-connector",
  body: {
    "status": "needs_configuration"
  }
)
$resp = $client->connector()->updateStatus([
    "connector_id" => "my-connector",
    "body" => [
        "status" => "needs_configuration",
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"status":"needs_configuration"}' "$ELASTICSEARCH_URL/_connector/my-connector/_status"
client.connector().updateStatus(u -> u
    .connectorId("my-connector")
    .status(ConnectorStatus.NeedsConfiguration)
);
Request example
{
    "status": "needs_configuration"
}
Response examples (200)
{
  "result": "updated"
}

Delete auto-follow patterns Generally available; Added in 6.5.0

DELETE /_ccr/auto_follow/{name}

Delete a collection of cross-cluster replication auto-follow patterns.

Required authorization

  • Cluster privileges: manage_ccr
External documentation

Path parameters

  • name string Required

    The auto-follow pattern collection to delete.

Query parameters

  • master_timeout string

The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ccr/auto_follow/my_auto_follow_pattern
resp = client.ccr.delete_auto_follow_pattern(
    name="my_auto_follow_pattern",
)
const response = await client.ccr.deleteAutoFollowPattern({
  name: "my_auto_follow_pattern",
});
response = client.ccr.delete_auto_follow_pattern(
  name: "my_auto_follow_pattern"
)
$resp = $client->ccr()->deleteAutoFollowPattern([
    "name" => "my_auto_follow_pattern",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ccr/auto_follow/my_auto_follow_pattern"
client.ccr().deleteAutoFollowPattern(d -> d
    .name("my_auto_follow_pattern")
);
Response examples (200)
A successful response from `DELETE /_ccr/auto_follow/my_auto_follow_pattern`, which deletes an auto-follow pattern.
{
  "acknowledged" : true
}

Resume an auto-follow pattern Generally available; Added in 7.5.0

POST /_ccr/auto_follow/{name}/resume

Resume a cross-cluster replication auto-follow pattern that was paused. The auto-follow pattern will resume configuring follower indices for newly created indices that match its patterns on the remote cluster. Remote indices created while the pattern was paused will also be followed unless they have been deleted or closed in the interim.

Required authorization

  • Cluster privileges: manage_ccr
External documentation

Path parameters

  • name string Required

    The name of the auto-follow pattern to resume.

Query parameters

  • master_timeout string

The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_ccr/auto_follow/{name}/resume
POST /_ccr/auto_follow/my_auto_follow_pattern/resume
resp = client.ccr.resume_auto_follow_pattern(
    name="my_auto_follow_pattern",
)
const response = await client.ccr.resumeAutoFollowPattern({
  name: "my_auto_follow_pattern",
});
response = client.ccr.resume_auto_follow_pattern(
  name: "my_auto_follow_pattern"
)
$resp = $client->ccr()->resumeAutoFollowPattern([
    "name" => "my_auto_follow_pattern",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ccr/auto_follow/my_auto_follow_pattern/resume"
client.ccr().resumeAutoFollowPattern(r -> r
    .name("my_auto_follow_pattern")
);
Response examples (200)
A successful response `POST /_ccr/auto_follow/my_auto_follow_pattern/resume`, which resumes an auto-follow pattern.
{
  "acknowledged" : true
}

Get cross-cluster replication stats Generally available; Added in 6.5.0

GET /_ccr/stats

This API returns stats about auto-following and the same shard-level stats as the get follower stats API.

Required authorization

  • Cluster privileges: monitor

Query parameters

  • master_timeout string

The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.

    Values are -1 or 0.

  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • auto_follow_stats object Required
      Hide auto_follow_stats attributes Show auto_follow_stats attributes object
      • auto_followed_clusters array[object] Required
        Hide auto_followed_clusters attributes Show auto_followed_clusters attributes object
        • cluster_name string Required
        • last_seen_metadata_version number Required
        • time_since_last_check_millis number

          Time unit for milliseconds

      • number_of_failed_follow_indices number Required

        The number of indices that the auto-follow coordinator failed to automatically follow. The causes of recent failures are captured in the logs of the elected master node and in the auto_follow_stats.recent_auto_follow_errors field.

      • number_of_failed_remote_cluster_state_requests number Required

        The number of times that the auto-follow coordinator failed to retrieve the cluster state from a remote cluster registered in a collection of auto-follow patterns.

      • number_of_successful_follow_indices number Required

        The number of indices that the auto-follow coordinator successfully followed.

      • recent_auto_follow_errors array[object] Required

        An array of objects representing failures by the auto-follow coordinator. Each entry describes the cause and details of a request failure; the properties listed here are common to all error types, and additional details depend on the error type.

        Hide recent_auto_follow_errors attributes Show recent_auto_follow_errors attributes object
        • type string Required

          The type of error.

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types; additional details depend on the error type.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types; additional details depend on the error type.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types; additional details depend on the error type.

    • follow_stats object Required
      Hide follow_stats attribute Show follow_stats attribute object
      • indices array[object] Required
        Hide indices attributes Show indices attributes object
        • index string Required
        • shards array[object] Required

          An array of shard-level following task statistics.

          Hide shards attributes Show shards attributes object
          • bytes_read number Required

            The total of transferred bytes read from the leader. This is only an estimate and does not account for compression if enabled.

          • failed_read_requests number Required

            The number of failed reads.

          • failed_write_requests number Required

            The number of failed bulk write requests on the follower.

          • fatal_exception object

Cause and details about a request failure. This class defines the properties common to all error types; additional details depend on the error type.

          • follower_aliases_version number Required
          • follower_global_checkpoint number Required

            The current global checkpoint on the follower. The difference between the leader_global_checkpoint and the follower_global_checkpoint is an indication of how much the follower is lagging the leader.

          • follower_index string Required

            The name of the follower index.

          • follower_mapping_version number Required
          • follower_max_seq_no number Required
          • follower_settings_version number Required
          • last_requested_seq_no number Required
          • leader_global_checkpoint number Required

            The current global checkpoint on the leader known to the follower task.

          • leader_index string Required

            The name of the index in the leader cluster being followed.

          • leader_max_seq_no number Required
          • operations_read number Required

            The total number of operations read from the leader.

          • operations_written number Required

            The number of operations written on the follower.

          • outstanding_read_requests number Required

            The number of active read requests from the follower.

          • outstanding_write_requests number Required

            The number of active bulk write requests on the follower.

          • read_exceptions array[object] Required

            An array of objects representing failed reads.

          • remote_cluster string Required

            The remote cluster containing the leader index.

          • shard_id number Required

The numerical shard ID, with values from 0 to one less than the number of shards.

          • successful_read_requests number Required

            The number of successful fetches.

          • successful_write_requests number Required

            The number of bulk write requests run on the follower.

          • time_since_last_read string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • time_since_last_read_millis
          • total_read_remote_exec_time string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • total_read_remote_exec_time_millis
          • total_read_time string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • total_read_time_millis
          • total_write_time string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • total_write_time_millis
          • write_buffer_operation_count number Required

            The number of write operations queued on the follower.

          • write_buffer_size_in_bytes
GET /_ccr/stats
resp = client.ccr.stats()
const response = await client.ccr.stats();
response = client.ccr.stats
$resp = $client->ccr()->stats();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ccr/stats"
client.ccr().stats(s -> s);
Response examples (200)
A successful response from `GET /_ccr/stats` that returns cross-cluster replication stats.
{
  "auto_follow_stats" : {
    "number_of_failed_follow_indices" : 0,
    "number_of_failed_remote_cluster_state_requests" : 0,
    "number_of_successful_follow_indices" : 1,
    "recent_auto_follow_errors" : [],
    "auto_followed_clusters" : []
  },
  "follow_stats" : {
    "indices" : [
      {
        "index" : "follower_index",
        "total_global_checkpoint_lag" : 256,
        "shards" : [
          {
            "remote_cluster" : "remote_cluster",
            "leader_index" : "leader_index",
            "follower_index" : "follower_index",
            "shard_id" : 0,
            "leader_global_checkpoint" : 1024,
            "leader_max_seq_no" : 1536,
            "follower_global_checkpoint" : 768,
            "follower_max_seq_no" : 896,
            "last_requested_seq_no" : 897,
            "outstanding_read_requests" : 8,
            "outstanding_write_requests" : 2,
            "write_buffer_operation_count" : 64,
            "follower_mapping_version" : 4,
            "follower_settings_version" : 2,
            "follower_aliases_version" : 8,
            "total_read_time_millis" : 32768,
            "total_read_remote_exec_time_millis" : 16384,
            "successful_read_requests" : 32,
            "failed_read_requests" : 0,
            "operations_read" : 896,
            "bytes_read" : 32768,
            "total_write_time_millis" : 16384,
            "write_buffer_size_in_bytes" : 1536,
            "successful_write_requests" : 16,
            "failed_write_requests" : 0,
            "operations_written" : 832,
            "read_exceptions" : [ ],
            "time_since_last_read_millis" : 8
          }
        ]
      }
    ]
  }
}
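
Since the gap between `leader_global_checkpoint` and `follower_global_checkpoint` indicates how far a follower lags its leader, a small sketch that flags lagging shards from this response could look like the following (the threshold is illustrative):

resp = client.ccr.stats()
for index_stats in resp["follow_stats"]["indices"]:
    for shard in index_stats["shards"]:
        lag = shard["leader_global_checkpoint"] - shard["follower_global_checkpoint"]
        if lag > 100:  # illustrative threshold
            print(f"{shard['follower_index']}[{shard['shard_id']}] lagging by {lag} operations")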

Delete data streams Generally available; Added in 7.9.0

DELETE /_data_stream/{name}

Deletes one or more data streams and their backing indices.

Required authorization

  • Index privileges: delete_index

Path parameters

  • name string | array[string] Required

    Comma-separated list of data streams to delete. Wildcard (*) expressions are supported.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • expand_wildcards string | array[string]

Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE _data_stream/my-data-stream
resp = client.indices.delete_data_stream(
    name="my-data-stream",
)
const response = await client.indices.deleteDataStream({
  name: "my-data-stream",
});
response = client.indices.delete_data_stream(
  name: "my-data-stream"
)
$resp = $client->indices()->deleteDataStream([
    "name" => "my-data-stream",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/my-data-stream"
client.indices().deleteDataStream(d -> d
    .name("my-data-stream")
);
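
Because the name parameter supports wildcards, several data streams can be deleted in one request. A hypothetical example:

DELETE _data_stream/my-data-stream*
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/my-data-stream*"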

Get data stream options Generally available; Added in 8.19.0

GET /_data_stream/{name}/_options

Get the data stream options configuration of one or more data streams.

Path parameters

  • name string | array[string] Required

    Comma-separated list of data streams to limit the request. Supports wildcards (*). To target all data streams, omit this parameter or use * or _all.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • name string Required
      • options object

        Data stream options contain the configuration of data stream level features for a given data stream, for example, the failure store configuration.

        Hide options attribute Show options attribute object
        • failure_store object

          Data stream failure store contains the configuration of the failure store for a given data stream.

          Hide failure_store attributes Show failure_store attributes object
          • enabled boolean

If defined, it turns the failure store on/off (true/false) for this data stream. A data stream failure store that's disabled (enabled: false) will redirect no new failed documents to the failure store; however, it will not remove any existing data from the failure store.

            Default value is true.

          • lifecycle object

            The failure store lifecycle configures the data stream lifecycle configuration for failure indices.

            Hide lifecycle attributes Show lifecycle attributes object
            • data_retention string

              A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

            • enabled boolean

              If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that's disabled (enabled: false) will have no effect on the data stream.

              Default value is true.

GET /_data_stream/{name}/_options
curl \
 --request GET 'http://api.example.com/_data_stream/{name}/_options' \
 --header "Authorization: $API_KEY"
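
For example, to read the options of a data stream with the hypothetical name `my-data-stream`:

GET _data_stream/my-data-stream/_options
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_data_stream/my-data-stream/_options"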

Update data stream settings Generally available; Added in 9.1.0

PUT /_data_stream/{name}/_settings

This API can be used to override settings on specific data streams. These overrides will take precedence over what is specified in the template that the data stream matches. To prevent your data stream from getting into an invalid state, only certain settings are allowed. If possible, the setting change is applied to all backing indices. Otherwise, it will be applied when the data stream is next rolled over.

Required authorization

  • Index privileges: manage

Path parameters

  • name string | array[string] Required

    A comma-separated list of data streams or data stream patterns.

Query parameters

  • dry_run boolean

    If true, the request does not actually change the settings on any data streams or indices. Instead, it simulates changing the settings and reports back to the user what would have happened had these settings actually been applied.

  • master_timeout string

    The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

application/json

Body Required

The request body is an object of index settings to apply to the data streams (for example, index.lifecycle.name).

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • name string Required
      • applied_to_data_stream boolean Required

        If the settings were successfully applied to the data stream (or would have been, if running in dry_run mode), it is true. If an error occurred, it is false.

      • error string

        A message explaining why the settings could not be applied to the data stream.

      • settings object Required
        Index settings
      • effective_settings object Required
        Index settings
      • index_settings_results object Required
        Hide index_settings_results attributes Show index_settings_results attributes object
        • applied_to_data_stream_only array[string] Required

          The list of settings that were applied to the data stream but not to backing indices. These will be applied to the write index the next time the data stream is rolled over.

        • applied_to_data_stream_and_backing_indices array[string] Required

          The list of settings that were applied to the data stream and to all of its backing indices. These settings will also be applied to the write index the next time the data stream is rolled over.

        • errors array[object]
          Hide errors attributes Show errors attributes object
          • index string Required
          • error string Required

            A message explaining why the settings could not be applied to specific indices.

PUT /_data_stream/{name}/_settings
PUT /_data_stream/my-data-stream/_settings
{
  "index.lifecycle.name" : "new-test-policy",
  "index.number_of_shards": 11
}
resp = client.indices.put_data_stream_settings(
    name="my-data-stream",
    settings={
        "index.lifecycle.name": "new-test-policy",
        "index.number_of_shards": 11
    },
)
const response = await client.indices.putDataStreamSettings({
  name: "my-data-stream",
  settings: {
    "index.lifecycle.name": "new-test-policy",
    "index.number_of_shards": 11,
  },
});
response = client.indices.put_data_stream_settings(
  name: "my-data-stream",
  body: {
    "index.lifecycle.name": "new-test-policy",
    "index.number_of_shards": 11
  }
)
$resp = $client->indices()->putDataStreamSettings([
    "name" => "my-data-stream",
    "body" => [
        "index.lifecycle.name" => "new-test-policy",
        "index.number_of_shards" => 11,
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index.lifecycle.name":"new-test-policy","index.number_of_shards":11}' "$ELASTICSEARCH_URL/_data_stream/my-data-stream/_settings"
Request example
This is a request to change two settings on a data stream.
{
  "index.lifecycle.name" : "new-test-policy",
  "index.number_of_shards": 11
}
Response examples (200)
This shows a response to `PUT /_data_stream/my-data-stream/_settings` when two settings are successfully updated on the data stream. In this case, `index.number_of_shards` is only applied to the data stream itself; it will be applied to the write index on rollover. The setting `index.lifecycle.name` is applied to the data stream and all backing indices.
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "applied_to_data_stream": true,
      "settings": {
        "index": {
          "lifecycle": {
            "name": "new-test-policy"
          },
          "number_of_shards": "11"
        }
      },
      "effective_settings": {
        "index": {
          "lifecycle": {
            "name": "new-test-policy"
          },
          "mode": "standard",
          "number_of_shards": "11",
          "number_of_replicas": "0"
        }
      },
      "index_settings_results": {
        "applied_to_data_stream_only": [
          "index.number_of_shards"
        ],
        "applied_to_data_stream_and_backing_indices": [
          "index.lifecycle.name"
        ]
      }
    }
  ]
}
This shows a response to `PUT /_data_stream/my-data-stream/_settings` when a setting is successfully applied to the data stream, but one of the backing indices, `.ds-my-data-stream-2025.05.28-000001`, has a write block. The response reports that the setting was not successfully applied to that index.
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "applied_to_data_stream": true,
      "settings": {
        "index": {
          "lifecycle": {
            "name": "new-test-policy"
          },
          "number_of_shards": "11"
        }
      },
      "effective_settings": {
        "index": {
          "lifecycle": {
            "name": "new-test-policy"
          },
          "mode": "standard",
          "number_of_shards": "11",
          "number_of_replicas": "0"
        }
      },
      "index_settings_results": {
        "applied_to_data_stream_only": [
          "index.number_of_shards"
        ],
        "applied_to_data_stream_and_backing_indices": [
          "index.lifecycle.name"
        ],
        "errors": [
          {
            "index": ".ds-my-data-stream-2025.05.28-000001",
            "error": "index [.ds-my-data-stream-2025.05.28-000001] blocked by: [FORBIDDEN/9/index metadata (api)];"
          }
        ]
      }
    }
  ]
}
This shows a response to `PUT /_data_stream/my-data-stream/_settings` when a user attempts to set a setting that is not allowed on a data stream. As a result, no change was applied to the data stream.
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "applied_to_data_stream": false,
      "error": "Cannot set the following settings on a data stream: [index.number_of_replicas]",
      "settings": {},
      "effective_settings": {},
      "index_settings_results": {
        "applied_to_data_stream_only": [],
        "applied_to_data_stream_and_backing_indices": []
      }
    }
  ]
}

Get a document by its ID Generally available

GET /{index}/_doc/{id}

Get a document and its source or stored fields from an index.

By default, this API is realtime and is not affected by the refresh rate of the index (when data will become visible for search). In the case where stored fields are requested with the stored_fields parameter and the document has been updated but is not yet refreshed, the API will have to parse and analyze the source to extract the stored fields. To turn off realtime behavior, set the realtime parameter to false.

Source filtering

By default, the API returns the contents of the _source field unless you have used the stored_fields parameter or the _source field is turned off. You can turn off _source retrieval by using the _source parameter:

GET my-index-000001/_doc/0?_source=false

If you only need one or two fields from the _source, use the _source_includes or _source_excludes parameters to include or filter out particular fields. This can be helpful with large documents where partial retrieval can save on network overhead. Both parameters take a comma-separated list of fields or wildcard expressions. For example:

GET my-index-000001/_doc/0?_source_includes=*.id&_source_excludes=entities

If you only want to specify includes, you can use a shorter notation:

GET my-index-000001/_doc/0?_source=*.id
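
For illustration, here is a sketch of the same source filtering through the Python client used in this document's examples; the index name, document ID, and field patterns are placeholders rather than values from the examples above:

resp = client.get(
    index="my-index-000001",
    id="0",
    source_includes="*.id",      # maps to the _source_includes query parameter
    source_excludes="entities",  # maps to the _source_excludes query parameter
)
print(resp["_source"])  # only the filtered source is returned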

Routing

If routing is used during indexing, the routing value also needs to be specified to retrieve a document. For example:

GET my-index-000001/_doc/2?routing=user1

This request gets the document with ID 2, but it is routed based on the user. The document is not fetched if the correct routing is not specified.
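
As a minimal sketch (values are placeholders), the same routed lookup through the Python client:

resp = client.get(
    index="my-index-000001",
    id="2",
    routing="user1",  # must match the routing value supplied at index time
)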

Distributed

The GET operation is hashed into a specific shard ID. It is then redirected to one of the replicas within that shard ID and returns the result. The replicas are the primary shard and its replicas within that shard ID group. This means that the more replicas you have, the better your GET scaling will be.

Versioning support

You can use the version parameter to retrieve the document only if its current version is equal to the specified one.

Internally, Elasticsearch has marked the old document as deleted and added an entirely new document. The old version of the document doesn't disappear immediately, although you won't be able to access it. Elasticsearch cleans up deleted documents in the background as you continue to index more data.
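
As a hedged sketch (the version number is illustrative), a version-checked read through the Python client; the request returns a version conflict if the stored version differs:

resp = client.get(
    index="my-index-000001",
    id="1",
    version=1,                 # succeed only if the current document version is 1
    version_type="internal",
)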

Required authorization

  • Index privileges: read

Path parameters

  • index string Required

    The name of the index that contains the document.

  • id string Required

    A unique document identifier.

Query parameters

  • preference string

    The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

    If it is set to _local, the operation will prefer to be run on a local allocated shard when possible. If it is set to a custom value, the value is used to guarantee that the same shards will be used for the same custom value. This can help with "jumping values" when hitting different shards in different refresh states. A sample value can be something like the web session ID or the user name.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter. If the _source parameter is false, this parameter is ignored.

  • _source_exclude_vectors boolean

    Whether vectors should be excluded from _source.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    A comma-separated list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. Only leaf fields can be retrieved with the stored_fields option. Object fields can't be returned; if specified, the request fails.

  • version number

    The version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _index string Required
    • fields object

      If the stored_fields parameter is set to true and found is true, it contains the document fields stored in the index.

      Hide fields attribute Show fields attribute object
      • * object Additional properties
    • _ignored array[string]
    • found boolean Required

      Indicates whether the document exists.

    • _id string Required
    • _primary_term number

      The primary term assigned to the document for the indexing operation.

    • _routing string

      The explicit routing, if set.

    • _seq_no number
    • _source object

      If found is true, it contains the document data formatted in JSON. If the _source parameter is set to false or the stored_fields parameter is set to true, it is excluded.

    • _version number
GET my-index-000001/_doc/1?stored_fields=tags,counter
resp = client.get(
    index="my-index-000001",
    id="1",
    stored_fields="tags,counter",
)
const response = await client.get({
  index: "my-index-000001",
  id: 1,
  stored_fields: "tags,counter",
});
response = client.get(
  index: "my-index-000001",
  id: "1",
  stored_fields: "tags,counter"
)
$resp = $client->get([
    "index" => "my-index-000001",
    "id" => "1",
    "stored_fields" => "tags,counter",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_doc/1?stored_fields=tags,counter"
Response examples (200)
A successful response from `GET my-index-000001/_doc/0`. It retrieves the JSON document with the `_id` 0 from the `my-index-000001` index.
{
  "_index": "my-index-000001",
  "_id": "0",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "found": true,
  "_source": {
    "@timestamp": "2099-11-15T14:12:12",
    "http": {
      "request": {
        "method": "get"
      },
      "response": {
        "status_code": 200,
        "bytes": 1070000
      },
      "version": "1.1"
    },
    "source": {
      "ip": "127.0.0.1"
    },
    "message": "GET /search HTTP/1.1 200 1070000",
    "user": {
      "id": "kimchy"
    }
  }
}
A successful response from `GET my-index-000001/_doc/1?stored_fields=tags,counter`, which retrieves a set of stored fields. Field values fetched from the document itself are always returned as an array. Any requested fields that are not stored (such as the counter field in this example) are ignored.
{
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "_seq_no" : 22,
  "_primary_term" : 1,
  "found": true,
  "fields": {
      "tags": [
        "production"
      ]
  }
}
A successful response from `GET my-index-000001/_doc/2?routing=user1&stored_fields=tags,counter`, which retrieves the `_routing` metadata field.
{
  "_index": "my-index-000001",
  "_id": "2",
  "_version": 1,
  "_seq_no" : 13,
  "_primary_term" : 1,
  "_routing": "user1",
  "found": true,
  "fields": {
      "tags": [
        "env2"
      ]
  }
}

Create or update a document in an index Generally available

POST /{index}/_doc/{id}

All methods and paths for this operation:

POST /{index}/_doc

PUT /{index}/_doc/{id}
POST /{index}/_doc/{id}

Add a JSON document to the specified data stream or index and make it searchable. If the target is an index and the document already exists, the request updates the document and increments its version.

NOTE: You cannot use this API to send update requests for existing documents in a data stream.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:

  • To add or overwrite a document using the PUT /<target>/_doc/<_id> request format, you must have the create, index, or write index privilege.
  • To add a document using the POST /<target>/_doc/ request format, you must have the create_doc, create, index, or write index privilege.
  • To automatically create a data stream or index with this API request, you must have the auto_configure, create_index, or manage index privilege.

Automatic data stream creation requires a matching index template with data stream enabled.

NOTE: Replica shards might not all be started when an indexing operation returns successfully. By default, only the primary is required. Set wait_for_active_shards to change this default behavior.

Automatically create data streams and indices

If the request's target doesn't exist and matches an index template with a data_stream definition, the index operation automatically creates the data stream.

If the target doesn't exist and doesn't match a data stream template, the operation automatically creates the index and applies any matching index templates.

NOTE: Elasticsearch includes several built-in index templates. To avoid naming collisions with these templates, refer to index pattern documentation.

If no mapping exists, the index operation creates a dynamic mapping. By default, new fields and objects are automatically added to the mapping if needed.

Automatic index creation is controlled by the action.auto_create_index setting. If it is true, any index can be created automatically. You can modify this setting to explicitly allow or block automatic creation of indices that match specified patterns or set it to false to turn off automatic index creation entirely. Specify a comma-separated list of patterns you want to allow or prefix each pattern with + or - to indicate whether it should be allowed or blocked. When a list is specified, the default behavior is to disallow.
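
As an illustrative sketch (the patterns are placeholders), this setting can be changed with the cluster update settings API, for example through the Python client:

client.cluster.put_settings(
    persistent={
        # Patterns are evaluated left to right and the first match wins:
        # allow "my-index-*", block other names starting with "index1",
        # allow other names starting with "ind"; everything else is disallowed.
        "action.auto_create_index": "my-index-*,-index1*,+ind*"
    }
)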

NOTE: The action.auto_create_index setting affects the automatic creation of indices only. It does not affect the creation of data streams.

Optimistic concurrency control

Index operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the if_seq_no and if_primary_term parameters. If a mismatch is detected, the operation will result in a VersionConflictException and a status code of 409.
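
A minimal sketch of this pattern through the Python client (index name, ID, and document are placeholders): read the document, then write back only if its sequence number and primary term are unchanged.

doc = client.get(index="my-index-000001", id="1")

resp = client.index(
    index="my-index-000001",
    id="1",
    if_seq_no=doc["_seq_no"],
    if_primary_term=doc["_primary_term"],
    document={"user": {"id": "kimchy"}},
)
# If another write happened in between, this raises a 409 version conflict instead.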

Routing

By default, shard placement — or routing — is controlled by using a hash of the document's ID value. For more explicit control, the value fed into the hash function used by the router can be directly specified on a per-operation basis using the routing parameter.

When setting up explicit mapping, you can also use the _routing field to direct the index operation to extract the routing value from the document itself. This does come at the (very minimal) cost of an additional document parsing pass. If the _routing mapping is defined and set to be required, the index operation will fail if no routing value is provided or extracted.

NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.

Distributed

The index operation is directed to the primary shard based on its route and performed on the actual node containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas.

Active shards

To improve the resiliency of writes to the system, indexing operations can be configured to wait for a certain number of active shard copies before proceeding with the operation. If the requisite number of active shard copies are not available, then the write operation must wait and retry, until either the requisite shard copies have started or a timeout occurs. By default, write operations only wait for the primary shards to be active before proceeding (that is to say wait_for_active_shards is 1). This default can be overridden in the index settings dynamically by setting index.write.wait_for_active_shards. To alter this behavior per operation, use the wait_for_active_shards request parameter.

Valid values are all or any positive integer up to the total number of configured copies per shard in the index (which is number_of_replicas+1). Specifying a negative value or a number greater than the number of shard copies will throw an error.

For example, suppose you have a cluster of three nodes, A, B, and C, and you create an index named index with the number of replicas set to 3 (resulting in 4 shard copies, one more copy than there are nodes). If you attempt an indexing operation, by default the operation will only ensure the primary copy of each shard is available before proceeding. This means that even if B and C went down and A hosted the primary shard copies, the indexing operation would still proceed with only one copy of the data. If wait_for_active_shards is set on the request to 3 (and all three nodes are up), the indexing operation will require 3 active shard copies before proceeding. This requirement should be met because there are 3 active nodes in the cluster, each one holding a copy of the shard. However, if you set wait_for_active_shards to all (or to 4, which is the same in this situation), the indexing operation will not proceed as you do not have all 4 copies of each shard active in the index. The operation will time out unless a new node is brought up in the cluster to host the fourth copy of the shard.

It is important to note that this setting greatly reduces the chances of the write operation not writing to the requisite number of shard copies, but it does not completely eliminate the possibility, because this check occurs before the write operation starts. After the write operation is underway, it is still possible for replication to fail on any number of shard copies but still succeed on the primary. The _shards section of the API response reveals the number of shard copies on which replication succeeded and failed.
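
As an illustrative sketch (the values are placeholders), the per-operation override is passed with the request, for example through the Python client:

resp = client.index(
    index="my-index-000001",
    document={"message": "GET /search HTTP/1.1 200 1070000"},
    wait_for_active_shards="2",  # or "all"; at most number_of_replicas + 1
)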

No operation (noop) updates

When updating a document by using this API, a new version of the document is always created even if the document hasn't changed. If this isn't acceptable, use the _update API with detect_noop set to true. The detect_noop option isn't available on this API because it doesn't fetch the old source and isn't able to compare it against the new source.

There isn't a definitive rule for when noop updates aren't acceptable. It's a combination of lots of factors like how frequently your data source sends updates that are actually noops and how many queries per second Elasticsearch runs on the shard receiving the updates.
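
For comparison, a hedged sketch of the _update alternative mentioned above, through the Python client (names and values are placeholders):

resp = client.update(
    index="my-index-000001",
    id="1",
    doc={"user": {"id": "kimchy"}},
    detect_noop=True,  # skip creating a new version if the merged source is unchanged
)
print(resp["result"])  # "noop" when nothing changed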

Versioning

Each indexed document is given a version number. By default, internal versioning is used that starts at 1 and increments with each update, deletes included. Optionally, the version number can be set to an external value (for example, if maintained in a database). To enable this functionality, version_type should be set to external. The value provided must be a numeric, long value greater than or equal to 0, and less than around 9.2e+18.

NOTE: Versioning is completely real time, and is not affected by the near real time aspects of search operations. If no version is provided, the operation runs without any version checks.

When using the external version type, the system checks to see if the version number passed to the index request is greater than the version of the currently stored document. If true, the document will be indexed and the new version number used. If the value provided is less than or equal to the stored document's version number, a version conflict will occur and the index operation will fail. For example:

PUT my-index-000001/_doc/1?version=2&version_type=external
{
  "user": {
    "id": "elkbee"
  }
}

In this example, the operation will succeed since the supplied version of 2 is higher than the current document version of 1.
If the document was already updated and its version was set to 2 or higher, the indexing command will fail and result in a conflict (409 HTTP status code).

A nice side effect is that there is no need to maintain strict ordering of async indexing operations run as a result of changes to a source database, as long as version numbers from the source database are used.
Even the simple case of updating the Elasticsearch index using data from a database is simplified if external versioning is used, as only the latest version will be used if the index operations arrive out of order.
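
The same external-versioning example expressed through the Python client might look like this sketch:

resp = client.index(
    index="my-index-000001",
    id="1",
    version=2,
    version_type="external",
    document={"user": {"id": "elkbee"}},
)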

Required authorization

  • Index privileges: index
External documentation

Path parameters

  • index string Required

    The name of the data stream or index to target. If the target doesn't exist and matches the name or wildcard (*) pattern of an index template with a data_stream definition, this request creates the data stream. If the target doesn't exist and doesn't match a data stream template, this request creates the index. You can check for existing targets with the resolve index API.

  • id string Required

    A unique identifier for the document. To automatically generate a document ID, use the POST /<target>/_doc/ request format and omit this parameter.

Query parameters

  • if_primary_term number

    Only perform the operation if the document has this primary term.

  • if_seq_no number

    Only perform the operation if the document has this sequence number.

  • include_source_on_error boolean

    If true, the document source is included in the error message in case of parsing errors.

  • op_type string

    Set to create to only index the document if it does not already exist (put if absent). If a document with the specified _id already exists, the indexing operation will fail. The behavior is the same as using the <index>/_create endpoint. If a document ID is specified, this parameter defaults to index. Otherwise, it defaults to create. If the request targets a data stream, an op_type of create is required.

    Supported values include:

    • index: Overwrite any documents that already exist.
    • create: Only index documents that do not already exist.

    Values are index or create.

  • pipeline string

    The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured it will always run, regardless of the value of this parameter.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes.

    Values are true, false, or wait_for.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • timeout string

    The period the request waits for the following operations: automatic index creation, dynamic mapping updates, waiting for active shards.

    This parameter is useful for situations where the primary shard assigned to perform the operation might not be available when the operation runs. Some reasons for this might be that the primary shard is currently recovering from a gateway or undergoing relocation. By default, the operation will wait on the primary shard to become available for at least 1 minute before failing and responding with an error. The actual wait time could be longer, particularly when multiple waits occur.

    Values are -1 or 0.

  • version number

    An explicit version number for concurrency control. It must be a non-negative long number.

  • version_type string

    The version type.

    Supported values include:

    • internal: Use internal versioning that starts at 1 and increments with each update or delete.
    • external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
    • external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
    • force: This option is deprecated because it can cause primary and replica shards to diverge.

    Values are internal, external, external_gte, or force.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of shard copies in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.

    Values are all or index-setting.

  • require_alias boolean

    If true, the destination must be an index alias.

  • require_data_stream boolean

    If true, the request's actions must target a data stream (existing or to be created).

application/json

Body Required

object object

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _id string Required
    • _index string Required
    • _primary_term number

      The primary term assigned to the document for the indexing operation.

    • result string Required

      Values are created, updated, deleted, not_found, or noop.

    • _seq_no number
    • _shards object Required
      Hide _shards attributes Show _shards attributes object
      • failed number Required
      • successful number Required
      • total number Required
      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Hide reason attributes Show reason attributes object
          • type string Required

            The type of error

          • reason string | null

            A human-readable explanation of the error, in English.

          • stack_trace string

            The server stack trace. Present only if the error_trace=true parameter was sent with the request.

          • caused_by object

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          • root_cause array[object]

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          • suppressed array[object]

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number Required
        • status string
      • skipped number
    • _version number Required
    • forced_refresh boolean
POST my-index-000001/_doc/
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
resp = client.index(
    index="my-index-000001",
    document={
        "@timestamp": "2099-11-15T13:12:00",
        "message": "GET /search HTTP/1.1 200 1070000",
        "user": {
            "id": "kimchy"
        }
    },
)
const response = await client.index({
  index: "my-index-000001",
  document: {
    "@timestamp": "2099-11-15T13:12:00",
    message: "GET /search HTTP/1.1 200 1070000",
    user: {
      id: "kimchy",
    },
  },
});
response = client.index(
  index: "my-index-000001",
  body: {
    "@timestamp": "2099-11-15T13:12:00",
    "message": "GET /search HTTP/1.1 200 1070000",
    "user": {
      "id": "kimchy"
    }
  }
)
$resp = $client->index([
    "index" => "my-index-000001",
    "body" => [
        "@timestamp" => "2099-11-15T13:12:00",
        "message" => "GET /search HTTP/1.1 200 1070000",
        "user" => [
            "id" => "kimchy",
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"@timestamp":"2099-11-15T13:12:00","message":"GET /search HTTP/1.1 200 1070000","user":{"id":"kimchy"}}' "$ELASTICSEARCH_URL/my-index-000001/_doc/"
client.index(i -> i
    .index("my-index-000001")
    .document(JsonData.fromJson("{\"@timestamp\":\"2099-11-15T13:12:00\",\"message\":\"GET /search HTTP/1.1 200 1070000\",\"user\":{\"id\":\"kimchy\"}}"))
);
Request examples
Run `POST my-index-000001/_doc/` to index a document. When you use the `POST /<target>/_doc/` request format, the `op_type` is automatically set to `create` and the index operation generates a unique ID for the document.
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
Run `PUT my-index-000001/_doc/1` to insert a JSON document into the `my-index-000001` index with an `_id` of 1.
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
Response examples (200)
A successful response from `POST my-index-000001/_doc/`, which contains an automated document ID.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "W0tpsmIBdwcYyG50zbta",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "result": "created"
}
A successful response from `PUT my-index-000001/_doc/1`.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "result": "created"
}

Throttle a reindex operation Generally available; Added in 2.4.0

POST /_reindex/{task_id}/_rethrottle

Change the number of requests per second for a particular reindex operation. For example:

POST _reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1

Rethrottling that speeds up the query takes effect immediately. Rethrottling that slows down the query will take effect after completing the current batch. This behavior prevents scroll timeouts.

Path parameters

  • task_id string Required

    The task identifier, which can be found by using the tasks API.

Query parameters

  • requests_per_second number

    The throttle for this request in sub-requests per second. It can be either -1 to turn off throttling or any decimal number like 1.7 or 12 to throttle to that level.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • nodes object Required
POST /_reindex/{task_id}/_rethrottle
POST _reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1
resp = client.reindex_rethrottle(
    task_id="r1A2WoRbTwKZ516z6NEs5A:36619",
    requests_per_second="-1",
)
const response = await client.reindexRethrottle({
  task_id: "r1A2WoRbTwKZ516z6NEs5A:36619",
  requests_per_second: "-1",
});
response = client.reindex_rethrottle(
  task_id: "r1A2WoRbTwKZ516z6NEs5A:36619",
  requests_per_second: "-1"
)
$resp = $client->reindexRethrottle([
    "task_id" => "r1A2WoRbTwKZ516z6NEs5A:36619",
    "requests_per_second" => "-1",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1"
client.reindexRethrottle(r -> r
    .requestsPerSecond(-1.0F)
    .taskId("r1A2WoRbTwKZ516z6NEs5A:36619")
);

Delete an enrich policy Generally available; Added in 7.5.0

DELETE /_enrich/policy/{name}

Deletes an existing enrich policy and its enrich index.

Path parameters

  • name string Required

    Enrich policy to delete.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_enrich/policy/my-policy
resp = client.enrich.delete_policy(
    name="my-policy",
)
const response = await client.enrich.deletePolicy({
  name: "my-policy",
});
response = client.enrich.delete_policy(
  name: "my-policy"
)
$resp = $client->enrich()->deletePolicy([
    "name" => "my-policy",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/policy/my-policy"
client.enrich().deletePolicy(d -> d
    .name("my-policy")
);




Get enrich stats Generally available; Added in 7.5.0

GET /_enrich/_stats

Returns enrich coordinator statistics and information about enrich policies that are currently executing.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • coordinator_stats array[object] Required

      Objects containing information about each coordinating ingest node for configured enrich processors.

      Hide coordinator_stats attributes Show coordinator_stats attributes object
      • executed_searches_total number Required
      • node_id string Required
      • queue_size number Required
      • remote_requests_current number Required
      • remote_requests_total number Required
    • executing_policies array[object] Required

      Objects containing information about each enrich policy that is currently executing.

      Hide executing_policies attributes Show executing_policies attributes object
      • name string Required
      • task object Required Additional properties
        Hide task attributes Show task attributes object
        • action string Required
        • cancelled boolean
        • cancellable boolean Required
        • description string

          Human readable text that identifies the particular request that the task is performing. For example, it might identify the search request being performed by a search task. Other kinds of tasks have different descriptions, like _reindex which has the source and the destination, or _bulk which just has the number of requests and the destination indices. Many requests will have only an empty description because more detailed information about the request is not easily available or particularly helpful in identifying the request.

        • headers object Required
          Hide headers attribute Show headers attribute object
          • * string Additional properties
        • id number Required
        • node string Required
        • running_time string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • running_time_in_nanos number

          Time unit for nanoseconds

        • start_time_in_millis number

          Time unit for milliseconds

        • status object

          The internal status of the task, which varies from task to task. The format also varies. While the goal is to keep the status for a particular task consistent from version to version, this is not always possible because sometimes the implementation changes. Fields might be removed from the status for a particular request so any parsing you do of the status might break in minor releases.

        • type string Required
        • parent_task_id string
    • cache_stats array[object] Generally available; Added in 7.16.0

      Objects containing information about the enrich cache stats on each ingest node.

      Hide cache_stats attributes Show cache_stats attributes object
      • node_id string Required
      • count number Required
      • hits number Required
      • hits_time_in_millis number

        Time unit for milliseconds

      • misses number Required
      • misses_time_in_millis number

        Time unit for milliseconds

      • evictions number Required
      • size_in_bytes number Required
GET /_enrich/_stats
resp = client.enrich.stats()
const response = await client.enrich.stats();
response = client.enrich.stats
$resp = $client->enrich()->stats();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/_stats"
client.enrich().stats(s -> s);





Delete an async EQL search Generally available; Added in 7.9.0

DELETE /_eql/search/{id}

Delete an async EQL search or a stored synchronous EQL search. The API also deletes results for the search.

Path parameters

  • id string Required

    Identifier for the search to delete. A search ID is provided in the EQL search API's response for an async search. A search ID is also provided if the request’s keep_on_completion parameter is true.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_eql/search/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=
resp = client.eql.delete(
    id="FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
)
const response = await client.eql.delete({
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
});
response = client.eql.delete(
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE="
)
$resp = $client->eql()->delete([
    "id" => "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_eql/search/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE="
client.eql().delete(d -> d
    .id("FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=")
);

Get async ES|QL query results Generally available; Added in 8.13.0

GET /_query/async/{id}

Get the current status and available results or stored results for an ES|QL asynchronous query. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can retrieve the results using this API.

External documentation

Path parameters

  • id string Required

    The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with the keep_on_completion parameter set to true.

Query parameters

  • drop_null_columns boolean

    Indicates whether columns that are entirely null will be removed from the columns and values portion of the results. If true, the response will include an extra section under the name all_columns which has the name of all the columns.

  • format string

    A short version of the Accept header, for example json or yaml.

    Values are csv, json, tsv, txt, yaml, cbor, smile, or arrow.

  • keep_alive string

    The period for which the query and its results are stored in the cluster. When this period expires, the query and its results are deleted, even if the query is still ongoing.

    Values are -1 or 0.

  • wait_for_completion_timeout string

    The period to wait for the request to finish. By default, the request waits for complete query results. If the request completes during the period specified in this parameter, complete query results are returned. Otherwise, the response returns an is_running value of true and no results.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • took number

      Time unit for milliseconds

    • is_partial boolean
    • all_columns array[object]
      Hide all_columns attributes Show all_columns attributes object
      • name string Required
      • type string Required
    • columns array[object] Required
      Hide columns attributes Show columns attributes object
      • name string Required
      • type string Required
    • values array[array] Required

      A field value.

    • _clusters object
      Hide _clusters attributes Show _clusters attributes object
      • total number Required
      • successful number Required
      • running number Required
      • skipped number Required
      • partial number Required
      • failed number Required
      • details object Required
        Hide details attribute Show details attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • status string Required

            Values are running, successful, partial, skipped, or failed.

          • indices string Required
          • took number

            Time unit for milliseconds

          • _shards object
            Hide _shards attributes Show _shards attributes object
            • total number Required
            • successful number
            • skipped number
            • failed number
          • failures array[object]
    • profile object

      Profiling information. Present if profile was true in the request. The contents of this field are currently unstable.

    • id string
    • is_running boolean Required
GET /_query/async/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?wait_for_completion_timeout=30s
resp = client.esql.async_query_get(
    id="FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
    wait_for_completion_timeout="30s",
)
const response = await client.esql.asyncQueryGet({
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
  wait_for_completion_timeout: "30s",
});
response = client.esql.async_query_get(
  id: "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
  wait_for_completion_timeout: "30s"
)
$resp = $client->esql()->asyncQueryGet([
    "id" => "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
    "wait_for_completion_timeout" => "30s",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_query/async/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?wait_for_completion_timeout=30s"

Fleet





Run multiple Fleet searches Technical preview; Added in 7.16.0

POST /{index}/_fleet/_fleet_msearch

All methods and paths for this operation:

GET /_fleet/_fleet_msearch

POST /_fleet/_fleet_msearch
GET /{index}/_fleet/_fleet_msearch
POST /{index}/_fleet/_fleet_msearch

Run several Fleet searches with a single API request. The API follows the same structure as the multi search API. However, similar to the Fleet search API, it supports the wait_for_checkpoints parameter.

Required authorization

  • Index privileges: read

Path parameters

  • index string Required

    A single target to search. If the target is an index alias, it must resolve to a single index.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • ccs_minimize_roundtrips boolean

    If true, network roundtrips between the coordinating node and remote clusters are minimized for cross-cluster search requests.

  • expand_wildcards string | array[string]

    Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_throttled boolean

    If true, concrete, expanded or aliased indices are ignored when frozen.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • max_concurrent_searches number

    Maximum number of concurrent searches the multi search API can execute.

  • max_concurrent_shard_requests number

    Maximum number of concurrent shard requests that each sub-search request executes per node.

  • pre_filter_shard_size number

    Defines a threshold that enforces a pre-filter roundtrip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if, for instance, a shard cannot match any documents based on its rewrite method, i.e., if date filters are mandatory to match but the shard bounds and the query are disjoint.

  • search_type string

    Indicates whether global term and document frequencies should be used when scoring returned documents.

    Supported values include:

    • query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
    • dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.

    Values are query_then_fetch or dfs_query_then_fetch.

  • rest_total_hits_as_int boolean

    If true, hits.total are returned as an integer in the response. Defaults to false, which returns an object.

  • typed_keys boolean

    Specifies whether aggregation and suggester names should be prefixed by their respective types in the response.

  • wait_for_checkpoints array[number]

    A comma separated list of checkpoints. When configured, the search API will only be executed on a shard after the relevant checkpoint has become visible for search. Defaults to an empty list which will cause Elasticsearch to immediately execute the search.

  • allow_partial_search_results boolean

    If true, returns partial results if there are shard request timeouts or shard failures. If false, returns an error with no partial results. Defaults to the configured cluster setting search.default_allow_partial_results, which is true by default.

application/json

Body object Required

One of:

Contains parameters used to limit or change the subsequent search body request.

  • allow_no_indices boolean
  • expand_wildcards string | array[string]
  • ignore_unavailable boolean
  • index string | array[string]
  • preference string
  • request_cache boolean
  • routing string
  • search_type string

    Values are query_then_fetch or dfs_query_then_fetch.

  • ccs_minimize_roundtrips boolean
  • allow_partial_search_results boolean
  • ignore_throttled boolean

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • docs array[object] Required
      One of:
      Hide attributes Show attributes
      • took number Required

        The number of milliseconds it took Elasticsearch to run the request. This value is calculated by measuring the time elapsed between receipt of a request on the coordinating node and the time at which the coordinating node is ready to send the response. It includes:

        • Communication time between the coordinating node and data nodes
        • Time the request spends in the search thread pool, queued for execution
        • Actual run time

        It does not include:

        • Time needed to send the request to Elasticsearch
        • Time needed to serialize the JSON response
        • Time needed to send the response to a client
      • timed_out boolean Required

        If true, the request timed out before completion; returned results may be partial or empty.

      • _shards object Required
        Hide _shards attributes Show _shards attributes object
        • failed number Required
        • successful number Required
        • total number Required
        • failures array[object]
        • skipped number
      • hits object Required
        Hide hits attributes Show hits attributes object
        • total
        • hits array[object] Required
        • max_score
      • aggregations object
      • _clusters object
        Hide _clusters attributes Show _clusters attributes object
        • skipped number Required
        • successful number Required
        • total number Required
        • running number Required
        • partial number Required
        • failed number Required
        • details object
      • fields object
        Hide fields attribute Show fields attribute object
        • * object Additional properties
      • max_score number
      • num_reduce_phases number
      • profile object
        Hide profile attribute Show profile attribute object
        • shards array[object] Required
      • pit_id string
      • _scroll_id string
      • suggest object
        Hide suggest attribute Show suggest attribute object
        • * array[object] Additional properties
      • terminated_early boolean
      • status number
POST /{index}/_fleet/_fleet_msearch
curl \
 --request POST 'http://api.example.com/{index}/_fleet/_fleet_msearch' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '[{"allow_no_indices":true,"expand_wildcards":"string","ignore_unavailable":true,"index":"string","preference":"string","request_cache":true,"routing":"string","search_type":"query_then_fetch","ccs_minimize_roundtrips":true,"allow_partial_search_results":true,"ignore_throttled":true}]'
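
Since only a generic curl template is shown above, here is a hedged sketch of the same call through the Python client; the target and queries are placeholders, and the searches parameter name is assumed from the client's multi search conventions, where each sub-search is a header object followed by a body object:

resp = client.fleet.msearch(
    index="my-index-000001",  # must resolve to a single index
    searches=[
        {},  # header for the first search
        {"query": {"match_all": {}}},
        {},  # header for the second search
        {"query": {"term": {"user.id": "kimchy"}}},
    ],
)
print(resp["docs"])  # one result object per sub-search, in request order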

Get tokens from text analysis Generally available

POST /{index}/_analyze

All methods and paths for this operation:

GET /_analyze

POST /_analyze
GET /{index}/_analyze
POST /{index}/_analyze

The analyze API performs analysis on a text string and returns the resulting tokens.

Generating an excessive amount of tokens may cause a node to run out of memory. The index.analyze.max_token_count setting enables you to limit the number of tokens that can be produced. If more tokens than this limit are generated, an error occurs. The _analyze endpoint without a specified index will always use 10000 as its limit.
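
As an illustrative sketch (the limit value is arbitrary), the token limit can be set per index, for example when creating one with the Python client:

client.indices.create(
    index="analyze_sample",
    settings={"index.analyze.max_token_count": 20000},
)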

Required authorization

  • Index privileges: index
External documentation

Path parameters

  • index string Required

    Index used to derive the analyzer. If specified, the analyzer or field parameter overrides this value. If no index is specified or the index does not have a default analyzer, the analyze API uses the standard analyzer.

Query parameters

  • index string

    Index used to derive the analyzer. If specified, the analyzer or field parameter overrides this value. If no index is specified or the index does not have a default analyzer, the analyze API uses the standard analyzer.

application/json

Body

  • analyzer string

    The name of the analyzer that should be applied to the provided text. This could be a built-in analyzer, or an analyzer that’s been configured in the index.

  • attributes array[string]

    Array of token attributes used to filter the output of the explain parameter.

  • char_filter array

    Array of character filters used to preprocess characters before the tokenizer.

    External documentation
  • explain boolean

    If true, the response includes token attributes and additional details.

    Default value is false.

  • field string

    Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

  • filter array

    Array of token filters to apply after the tokenizer.

    External documentation
  • normalizer string

    Normalizer to use to convert text into a single token.

  • text string | array[string]

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • detail object
      Hide detail attributes Show detail attributes object
      • analyzer object
        Hide analyzer attributes Show analyzer attributes object
        • name string Required
        • tokens array[object] Required
          Hide tokens attributes Show tokens attributes object
          • bytes string Required
          • end_offset number Required
          • keyword boolean
          • position number Required
          • positionLength number Required
          • start_offset number Required
          • termFrequency number Required
          • token string Required
          • type string Required
      • charfilters array[object]
        Hide charfilters attributes Show charfilters attributes object
        • filtered_text array[string] Required
        • name string Required
      • custom_analyzer boolean Required
      • tokenfilters array[object]
        Hide tokenfilters attributes Show tokenfilters attributes object
        • name string Required
        • tokens array[object] Required
          Hide tokens attributes Show tokens attributes object
          • bytes string Required
          • end_offset number Required
          • keyword boolean
          • position number Required
          • positionLength number Required
          • start_offset number Required
          • termFrequency number Required
          • token string Required
          • type string Required
      • tokenizer object
        Hide tokenizer attributes Show tokenizer attributes object
        • name string Required
        • tokens array[object] Required
          Hide tokens attributes Show tokens attributes object
          • bytes string Required
          • end_offset number Required
          • keyword boolean
          • position number Required
          • positionLength number Required
          • start_offset number Required
          • termFrequency number Required
          • token string Required
          • type string Required
    • tokens array[object]
      Hide tokens attributes Show tokens attributes object
      • end_offset number Required
      • position number Required
      • positionLength number
      • start_offset number Required
      • token string Required
      • type string Required
GET /_analyze
{
  "analyzer": "standard",
  "text": "this is a test"
}
resp = client.indices.analyze(
    analyzer="standard",
    text="this is a test",
)
const response = await client.indices.analyze({
  analyzer: "standard",
  text: "this is a test",
});
response = client.indices.analyze(
  body: {
    "analyzer": "standard",
    "text": "this is a test"
  }
)
$resp = $client->indices()->analyze([
    "body" => [
        "analyzer" => "standard",
        "text" => "this is a test",
    ],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"analyzer":"standard","text":"this is a test"}' "$ELASTICSEARCH_URL/_analyze"
client.indices().analyze(a -> a
    .analyzer("standard")
    .text("this is a test")
);
You can apply any of the built-in analyzers to the text string without specifying an index.
{
  "analyzer": "standard",
  "text": "this is a test"
}
If the text parameter is provided as array of strings, it is analyzed as a multi-value field.
{
  "analyzer": "standard",
  "text": [
    "this is a test",
    "the second text"
  ]
}
You can test a custom transient analyzer built from tokenizers, token filters, and char filters. Token filters use the filter parameter.
{
  "tokenizer": "keyword",
  "filter": [
    "lowercase"
  ],
  "char_filter": [
    "html_strip"
  ],
  "text": "this is a <b>test</b>"
}
Custom tokenizers, token filters, and character filters can be specified in the request body.
{
  "tokenizer": "whitespace",
  "filter": [
    "lowercase",
    {
      "type": "stop",
      "stopwords": [
        "a",
        "is",
        "this"
      ]
    }
  ],
  "text": "this is a test"
}
Run `GET /analyze_sample/_analyze` to run an analysis on the text using the default index analyzer associated with the `analyze_sample` index. Alternatively, the analyzer can be derived based on a field mapping.
{
  "field": "obj1.field1",
  "text": "this is a test"
}
Run `GET /analyze_sample/_analyze` and supply a normalizer for a keyword field if there is a normalizer associated with the specified index.
{
  "normalizer": "my_normalizer",
  "text": "BaR"
}
If you want to get more advanced details, set `explain` to `true`. It will output all token attributes for each token. You can filter token attributes you want to output by setting the `attributes` option. NOTE: The format of the additional detail information is labelled as experimental in Lucene and it may change in the future.
{
  "tokenizer": "standard",
  "filter": [
    "snowball"
  ],
  "text": "detailed output",
  "explain": true,
  "attributes": [
    "keyword"
  ]
}
Response examples (200)
A successful response for an analysis with `explain` set to `true`.
{
  "detail": {
    "custom_analyzer": true,
    "charfilters": [],
    "tokenizer": {
      "name": "standard",
      "tokens": [
        {
          "token": "detailed",
          "start_offset": 0,
          "end_offset": 8,
          "type": "<ALPHANUM>",
          "position": 0
        },
        {
          "token": "output",
          "start_offset": 9,
          "end_offset": 15,
          "type": "<ALPHANUM>",
          "position": 1
        }
      ]
    },
    "tokenfilters": [
      {
        "name": "snowball",
        "tokens": [
          {
            "token": "detail",
            "start_offset": 0,
            "end_offset": 8,
            "type": "<ALPHANUM>",
            "position": 0,
            "keyword": false
          },
          {
            "token": "output",
            "start_offset": 9,
            "end_offset": 15,
            "type": "<ALPHANUM>",
            "position": 1,
            "keyword": false
          }
        ]
      }
    ]
  }
}
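
If you consume the explain output programmatically, the tokens for each stage are nested under `detail`. A minimal sketch using the Python client, mirroring the request above (the text value is illustrative):

resp = client.indices.analyze(
    tokenizer="standard",
    filter=["snowball"],
    text="detailed output",
    explain=True,
)
# Walk the per-token-filter listings in the "detail" section.
for tf in resp["detail"]["tokenfilters"]:
    for tok in tf["tokens"]:
        print(tf["name"], tok["token"], tok["start_offset"], tok["end_offset"])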

Close an index Generally available

POST /{index}/_close

A closed index is blocked for read or write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. Closed indices do not have to maintain internal data structures for indexing or searching documents, which results in a smaller overhead on the cluster.

When opening or closing an index, the master node is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened and closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.

You can open and close multiple indices. An error is thrown if the request explicitly refers to a missing index. This behavior can be turned off using the ignore_unavailable=true parameter.

By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.

Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.
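
Both settings can be changed at runtime with the cluster update settings API. A minimal sketch using the Python client (the values shown are illustrative, not recommended defaults):

resp = client.cluster.put_settings(
    persistent={
        # Allow _all, *, and other wildcard expressions in open/close requests.
        "action.destructive_requires_name": False,
        # Set "cluster.indices.close.enable": False instead to disable
        # closing indices altogether.
    },
)
print(resp["acknowledged"])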

Required authorization

  • Index privileges: manage

Path parameters

  • index string | array[string] Required

    Comma-separated list or wildcard expression of index names used to limit the request.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • indices object Required
      Hide indices attribute Show indices attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • closed boolean Required
        • shards object
          Hide shards attribute Show shards attribute object
          • * object Additional properties
            Hide * attribute Show * attribute object
            • failures array[object] Required
    • shards_acknowledged boolean Required
POST /my-index-000001/_close
resp = client.indices.close(
    index="my-index-000001",
)
const response = await client.indices.close({
  index: "my-index-000001",
});
response = client.indices.close(
  index: "my-index-000001"
)
$resp = $client->indices()->close([
    "index" => "my-index-000001",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_close"
client.indices().close(c -> c
    .index("my-index-000001")
);
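If the close should not report success until enough shard copies are active, pass the wait_for_active_shards parameter; a minimal sketch with the Python client (the index name is illustrative):

resp = client.indices.close(
    index="my-index-000001",
    # "all", or any positive integer up to number_of_replicas + 1.
    wait_for_active_shards="all",
)
print(resp["shards_acknowledged"])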
Response examples (200)
A successful response for closing an index.
{
  "acknowledged": true,
  "shards_acknowledged": true,
  "indices": {
    "my-index-000001": {
      "closed": true
    }
  }
}

Check indices Generally available

HEAD /{index}

Check if one or more indices, index aliases, or data streams exist.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases. Supports wildcards (*).

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flat_settings boolean

    If true, returns settings in flat format.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • include_defaults boolean

    If true, return all default settings in the response.

  • local boolean

    If true, the request retrieves information from the local node only.

Responses

  • 200 application/json
HEAD my-data-stream
resp = client.indices.exists(
    index="my-data-stream",
)
const response = await client.indices.exists({
  index: "my-data-stream",
});
response = client.indices.exists(
  index: "my-data-stream"
)
$resp = $client->indices()->exists([
    "index" => "my-data-stream",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-data-stream"
client.indices().exists(e -> e
    .index("my-data-stream")
);
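
A HEAD request returns only a status code, so client libraries typically expose the result as a boolean. A minimal sketch with the Python client, assuming the response object is truthy when the target exists (status 200) and falsy when it does not:

if client.indices.exists(index="my-data-stream"):
    print("target exists")
else:
    print("target does not exist")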

Get legacy index templates Deprecated Generally available

GET /_template/{name}

All methods and paths for this operation:

GET /_template

GET /_template/{name}

Get information about one or more index templates.

IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

Required authorization

  • Cluster privileges: manage_index_templates
External documentation

Path parameters

  • name string | array[string] Required

    Comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported. To return all index templates, omit this parameter or use a value of _all or *.

Query parameters

  • flat_settings boolean

    If true, returns settings in flat format.

  • local boolean

    If true, the request retrieves information from the local node only.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • aliases object Required
        Hide aliases attribute Show aliases attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • filter object

            An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

            External documentation
          • index_routing string
          • is_hidden boolean

            If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

            Default value is false.

          • is_write_index boolean

            If true, the index is the write index for the alias.

            Default value is false.

          • routing string
          • search_routing string
      • index_patterns array[string] Required
      • mappings object Required
        Hide mappings attributes Show mappings attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
          • mode string

            Values are disabled, stored, or synthetic.

        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

              Hide fields attribute Show fields attribute object
              • * object Additional properties
            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

            • input_field string

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_field string

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_index string
            • script object
              Hide script attributes Show script attributes object
              • source
              • id string
              • params object

                Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              • lang
              • options object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
      • order number Required
      • settings object Required
        Hide settings attribute Show settings attribute object
        • * object Additional properties
      • version number
GET /_template/.monitoring-*
resp = client.indices.get_template(
    name=".monitoring-*",
)
const response = await client.indices.getTemplate({
  name: ".monitoring-*",
});
response = client.indices.get_template(
  name: ".monitoring-*"
)
$resp = $client->indices()->getTemplate([
    "name" => ".monitoring-*",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_template/.monitoring-*"
client.indices().getTemplate(g -> g
    .name(".monitoring-*")
);

Delete a legacy index template Deprecated Generally available

DELETE /_template/{name}

IMPORTANT: This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.

Required authorization

  • Cluster privileges: manage_index_templates

Path parameters

  • name string Required

    The name of the legacy index template to delete. Wildcard (*) expressions are supported.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE _template/.cloud-hot-warm-allocation-0
resp = client.indices.delete_template(
    name=".cloud-hot-warm-allocation-0",
)
const response = await client.indices.deleteTemplate({
  name: ".cloud-hot-warm-allocation-0",
});
response = client.indices.delete_template(
  name: ".cloud-hot-warm-allocation-0"
)
$resp = $client->indices()->deleteTemplate([
    "name" => ".cloud-hot-warm-allocation-0",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_template/.cloud-hot-warm-allocation-0"
client.indices().deleteTemplate(d -> d
    .name(".cloud-hot-warm-allocation-0")
);

Analyze the index disk usage Technical preview; Added in 7.15.0

POST /{index}/_disk_usage

Analyze the disk usage of each field of an index or data stream. This API might not support indices created in previous Elasticsearch versions. The result for a small index can be inaccurate because some parts of an index might not be analyzed by the API.

NOTE: The total size of fields of the analyzed shards of the index in the response is usually smaller than the index store_size value because some small metadata files are ignored and some parts of data files might not be scanned by the API. Since stored fields are stored together in a compressed format, the sizes of stored fields are also estimates and can be inaccurate. The stored size of the _id field is likely underestimated while the _source field is overestimated.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. It’s recommended to execute this API with a single index (or the latest backing index of a data stream) because the API consumes significant resources.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • flush boolean

    If true, the API performs a flush before analysis. If false, the response may not include uncommitted data.

  • ignore_unavailable boolean

    If true, missing or closed indices are not included in the response.

  • run_expensive_tasks boolean

    Analyzing field disk usage is resource-intensive. To use the API, this parameter must be set to true.

Responses

  • 200 application/json
POST /my-index-000001/_disk_usage?run_expensive_tasks=true
resp = client.indices.disk_usage(
    index="my-index-000001",
    run_expensive_tasks=True,
)
const response = await client.indices.diskUsage({
  index: "my-index-000001",
  run_expensive_tasks: "true",
});
response = client.indices.disk_usage(
  index: "my-index-000001",
  run_expensive_tasks: "true"
)
$resp = $client->indices()->diskUsage([
    "index" => "my-index-000001",
    "run_expensive_tasks" => "true",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_disk_usage?run_expensive_tasks=true"
client.indices().diskUsage(d -> d
    .index("my-index-000001")
    .runExpensiveTasks(true)
);
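
Following the recommendation above to analyze a single backing index, a minimal sketch with the Python client that targets the newest backing index of a data stream (the data stream name is illustrative, and the sketch assumes the backing indices are listed oldest first, with the write index last):

resp = client.indices.get_data_stream(name="my-data-stream")
# Assumed ordering: oldest backing index first, current write index last.
latest = resp["data_streams"][0]["indices"][-1]["index_name"]
client.indices.disk_usage(index=latest, run_expensive_tasks=True)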

Get mapping definitions Generally available

GET /{index}/_mapping

All methods and paths for this operation:

GET /_mapping

GET /{index}/_mapping

For data streams, the API retrieves mappings for the stream’s backing indices.

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • local boolean Deprecated

    If true, the request retrieves information from the local node only.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • item object
        Hide item attributes Show item attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
          • mode string

            Values are disabled, stored, or synthetic.

        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

              Hide fields attribute Show fields attribute object
              • * object Additional properties
            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

            • input_field string

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_field string

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_index string
            • script object
              Hide script attributes Show script attributes object
              • source
              • id string
              • params object

                Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              • lang
              • options object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
      • mappings object Required
        Hide mappings attributes Show mappings attributes object
        • all_field object
          Hide all_field attributes Show all_field attributes object
          • analyzer string Required
          • enabled boolean Required
          • omit_norms boolean Required
          • search_analyzer string Required
          • similarity string Required
          • store boolean Required
          • store_term_vector_offsets boolean Required
          • store_term_vector_payloads boolean Required
          • store_term_vector_positions boolean Required
          • store_term_vectors boolean Required
        • date_detection boolean
        • dynamic string

          Values are strict, runtime, true, or false.

        • dynamic_date_formats array[string]
        • dynamic_templates array[object]
        • _field_names object
          Hide _field_names attribute Show _field_names attribute object
          • enabled boolean Required
        • index_field object
          Hide index_field attribute Show index_field attribute object
          • enabled boolean Required
        • _meta object
          Hide _meta attribute Show _meta attribute object
          • * object Additional properties
        • numeric_detection boolean
        • properties object
        • _routing object
          Hide _routing attribute Show _routing attribute object
          • required boolean Required
        • _size object
          Hide _size attribute Show _size attribute object
          • enabled boolean Required
        • _source object
          Hide _source attributes Show _source attributes object
          • compress boolean
          • compress_threshold string
          • enabled boolean
          • excludes array[string]
          • includes array[string]
          • mode string

            Values are disabled, stored, or synthetic.

        • runtime object
          Hide runtime attribute Show runtime attribute object
          • * object Additional properties
            Hide * attributes Show * attributes object
            • fields object

              For type composite

              Hide fields attribute Show fields attribute object
              • * object Additional properties
            • fetch_fields array[object]

              For type lookup

            • format string

              A custom format for date type runtime fields.

            • input_field string

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_field string

              Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

            • target_index string
            • script object
              Hide script attributes Show script attributes object
              • source
              • id string
              • params object

                Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              • lang
              • options object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • enabled boolean
        • subobjects string

          Values are true or false.

        • _data_stream_timestamp object
          Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
          • enabled boolean Required
GET /books/_mapping
resp = client.indices.get_mapping(
    index="books",
)
const response = await client.indices.getMapping({
  index: "books",
});
response = client.indices.get_mapping(
  index: "books"
)
$resp = $client->indices()->getMapping([
    "index" => "books",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/books/_mapping"
client.indices().getMapping(g -> g
    .index("books")
);
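
The response is keyed by the name of each matched index or backing index. A minimal sketch with the Python client that lists the top-level field names for the `books` index (the index name is illustrative):

resp = client.indices.get_mapping(index="books")
# "properties" may be absent if the index has no explicit field mappings.
properties = resp["books"]["mappings"].get("properties", {})
print(sorted(properties))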

Get index recovery information Generally available

GET /{index}/_recovery

All methods and paths for this operation:

GET /_recovery

GET /{index}/_recovery

Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream's backing indices.

All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.

Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.

Recovery automatically occurs during the following processes:

  • When creating an index for the first time.
  • When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
  • Creation of new replica shard copies from the primary.
  • Relocation of a shard copy to a different node in the same cluster.
  • A snapshot restore operation.
  • A clone, shrink, or split operation.

You can determine the cause of a shard recovery using the recovery or cat recovery APIs.

The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and Elasticsearch then relocates it onto a different node, the information about the original recovery is not shown in the recovery API.
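
For example, to poll only the recoveries that are still in flight, you can combine the active_only and detailed query parameters described below; a minimal sketch with the Python client:

resp = client.indices.recovery(active_only=True, detailed=True)
# The response is keyed by index name; each entry lists its shard recoveries.
for index_name, info in resp.body.items():
    for shard in info["shards"]:
        print(index_name, shard["id"], shard["type"], shard["stage"])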

Required authorization

  • Index privileges: monitor

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • active_only boolean

    If true, the response only includes ongoing shard recoveries.

  • detailed boolean

    If true, the response includes detailed information about shard recoveries.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attribute Show * attribute object
      • shards array[object] Required
        Hide shards attributes Show shards attributes object
        • id number Required
        • index object Required
          Hide index attributes Show index attributes object
          • bytes object
            Hide bytes attributes Show bytes attributes object
            • percent
            • recovered
            • recovered_in_bytes
            • recovered_from_snapshot
            • recovered_from_snapshot_in_bytes
            • reused
            • reused_in_bytes
            • total
            • total_in_bytes
          • files object Required
            Hide files attributes Show files attributes object
            • details array[object]
            • percent
            • recovered number Required
            • reused number Required
            • total number Required
          • size object Required
            Hide size attributes Show size attributes object
            • percent
            • recovered
            • recovered_in_bytes
            • recovered_from_snapshot
            • recovered_from_snapshot_in_bytes
            • reused
            • reused_in_bytes
            • total
            • total_in_bytes
          • source_throttle_time string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • source_throttle_time_in_millis number

            Time unit for milliseconds

          • target_throttle_time string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • target_throttle_time_in_millis number

            Time unit for milliseconds

          • total_time string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • total_time_in_millis number

            Time unit for milliseconds

        • primary boolean Required
        • source object Required
          Hide source attributes Show source attributes object
          • hostname string
          • host string
          • transport_address string
          • id string
          • ip string
          • name string
          • bootstrap_new_history_uuid boolean
          • repository string
          • snapshot string
          • version string
          • restoreUUID string
          • index string
        • stage string Required
        • start object
          Hide start attributes Show start attributes object
          • check_index_time string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • check_index_time_in_millis number

            Time unit for milliseconds

          • total_time string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • total_time_in_millis number

            Time unit for milliseconds

        • start_time string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

          One of:
        • start_time_in_millis number

          Time unit for milliseconds

        • stop_time string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

          One of:
        • stop_time_in_millis number

          Time unit for milliseconds

        • target object Required
          Hide target attributes Show target attributes object
          • hostname string
          • host string
          • transport_address string
          • id string
          • ip string
          • name string
          • bootstrap_new_history_uuid boolean
          • repository string
          • snapshot string
          • version string
          • restoreUUID string
          • index string
        • total_time string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • total_time_in_millis number

          Time unit for milliseconds

        • translog object Required
          Hide translog attributes Show translog attributes object
          • percent string | number Required

          • recovered number Required
          • total number Required
          • total_on_start number Required
          • total_time string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • total_time_in_millis number

            Time unit for milliseconds

        • type string Required
        • verify_index object Required
          Hide verify_index attributes Show verify_index attributes object
          • check_index_time string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • check_index_time_in_millis number

            Time unit for milliseconds

          • total_time string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • total_time_in_millis number

            Time unit for milliseconds

GET /_recovery?human
resp = client.indices.recovery(
    human=True,
)
const response = await client.indices.recovery({
  human: "true",
});
response = client.indices.recovery(
  human: "true"
)
$resp = $client->indices()->recovery([
    "human" => "true",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_recovery?human"
Response examples (200)
A successful response from `GET /_recovery?human`, which gets information about ongoing and completed shard recoveries for all data streams and indices in a cluster. This example includes information about a single index recovering a single shard. The source of the recovery is a snapshot repository and the target of the recovery is the `my_es_node` node. The response also includes the number and percentage of files and bytes recovered.
{
  "index1" : {
    "shards" : [ {
      "id" : 0,
      "type" : "SNAPSHOT",
      "stage" : "INDEX",
      "primary" : true,
      "start_time" : "2014-02-24T12:15:59.716",
      "start_time_in_millis": 1393244159716,
      "stop_time" : "0s",
      "stop_time_in_millis" : 0,
      "total_time" : "2.9m",
      "total_time_in_millis" : 175576,
      "source" : {
        "repository" : "my_repository",
        "snapshot" : "my_snapshot",
        "index" : "index1",
        "version" : "{version}",
        "restoreUUID": "PDh1ZAOaRbiGIVtCvZOMww"
      },
      "target" : {
        "id" : "ryqJ5lO5S4-lSFbGntkEkg",
        "host" : "my.fqdn",
        "transport_address" : "my.fqdn",
        "ip" : "10.0.1.7",
        "name" : "my_es_node"
      },
      "index" : {
        "size" : {
          "total" : "75.4mb",
          "total_in_bytes" : 79063092,
          "reused" : "0b",
          "reused_in_bytes" : 0,
          "recovered" : "65.7mb",
          "recovered_in_bytes" : 68891939,
          "recovered_from_snapshot" : "0b",
          "recovered_from_snapshot_in_bytes" : 0,
          "percent" : "87.1%"
        },
        "files" : {
          "total" : 73,
          "reused" : 0,
          "recovered" : 69,
          "percent" : "94.5%"
        },
        "total_time" : "0s",
        "total_time_in_millis" : 0,
        "source_throttle_time" : "0s",
        "source_throttle_time_in_millis" : 0,
        "target_throttle_time" : "0s",
        "target_throttle_time_in_millis" : 0
      },
      "translog" : {
        "recovered" : 0,
        "total" : 0,
        "percent" : "100.0%",
        "total_on_start" : 0,
        "total_time" : "0s",
        "total_time_in_millis" : 0
      },
      "verify_index" : {
        "check_index_time" : "0s",
        "check_index_time_in_millis" : 0,
        "total_time" : "0s",
        "total_time_in_millis" : 0
      }
    } ]
  }
}
A successful response from `GET _recovery?human&detailed=true`. The response includes a listing of any physical files recovered and their sizes. The response also includes timings in milliseconds of the various stages of recovery: index retrieval, translog replay, and index start time. This response indicates the recovery is done.
{
  "index1" : {
    "shards" : [ {
      "id" : 0,
      "type" : "EXISTING_STORE",
      "stage" : "DONE",
      "primary" : true,
      "start_time" : "2014-02-24T12:38:06.349",
      "start_time_in_millis" : "1393245486349",
      "stop_time" : "2014-02-24T12:38:08.464",
      "stop_time_in_millis" : "1393245488464",
      "total_time" : "2.1s",
      "total_time_in_millis" : 2115,
      "source" : {
        "id" : "RGMdRc-yQWWKIBM4DGvwqQ",
        "host" : "my.fqdn",
        "transport_address" : "my.fqdn",
        "ip" : "10.0.1.7",
        "name" : "my_es_node"
      },
      "target" : {
        "id" : "RGMdRc-yQWWKIBM4DGvwqQ",
        "host" : "my.fqdn",
        "transport_address" : "my.fqdn",
        "ip" : "10.0.1.7",
        "name" : "my_es_node"
      },
      "index" : {
        "size" : {
          "total" : "24.7mb",
          "total_in_bytes" : 26001617,
          "reused" : "24.7mb",
          "reused_in_bytes" : 26001617,
          "recovered" : "0b",
          "recovered_in_bytes" : 0,
          "recovered_from_snapshot" : "0b",
          "recovered_from_snapshot_in_bytes" : 0,
          "percent" : "100.0%"
        },
        "files" : {
          "total" : 26,
          "reused" : 26,
          "recovered" : 0,
          "percent" : "100.0%",
          "details" : [ {
            "name" : "segments.gen",
            "length" : 20,
            "recovered" : 20
          }, {
            "name" : "_0.cfs",
            "length" : 135306,
            "recovered" : 135306,
            "recovered_from_snapshot": 0
          }, {
            "name" : "segments_2",
            "length" : 251,
            "recovered" : 251,
            "recovered_from_snapshot": 0
          }
          ]
        },
        "total_time" : "2ms",
        "total_time_in_millis" : 2,
        "source_throttle_time" : "0s",
        "source_throttle_time_in_millis" : 0,
        "target_throttle_time" : "0s",
        "target_throttle_time_in_millis" : 0
      },
      "translog" : {
        "recovered" : 71,
        "total" : 0,
        "percent" : "100.0%",
        "total_on_start" : 0,
        "total_time" : "2.0s",
        "total_time_in_millis" : 2025
      },
      "verify_index" : {
        "check_index_time" : 0,
        "check_index_time_in_millis" : 0,
        "total_time" : "88ms",
        "total_time_in_millis" : 88
      }
    } ]
  }
}

Refresh an index Generally available

GET /{index}/_refresh

All methods and paths for this operation:

POST /_refresh

GET /_refresh

POST /{index}/_refresh

GET /{index}/_refresh

A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.

By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the index.refresh_interval setting.

Refresh requests are synchronous and do not return a response until the refresh operation completes.

Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.

If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.
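
A minimal sketch of that index-then-search pattern with the Python client (the index name and document are illustrative):

# Block until the next periodic refresh makes the document searchable,
# instead of forcing an immediate refresh.
client.index(
    index="my-index-000001",
    document={"user": "kimchy"},
    refresh="wait_for",
)
resp = client.search(index="my-index-000001", query={"match": {"user": "kimchy"}})
print(resp["hits"]["total"])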

Required authorization

  • Index privileges: maintenance

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • _shards object
      Hide _shards attributes Show _shards attributes object
      • failed number Required
      • successful number Required
      • total number Required
      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          Hide reason attributes Show reason attributes object
          • type string Required

            The type of error

          • reason string | null

            A human-readable explanation of the error, in English.

          • stack_trace string

            The server stack trace. Present only if the error_trace=true parameter was sent with the request.

          • caused_by object

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          • root_cause array[object]

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

          • suppressed array[object]

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

            Cause and details about a request failure. This class defines the properties common to all error types. Additional details are also provided, that depend on the error type.

        • shard number Required
        • status string
      • skipped number
GET _refresh
resp = client.indices.refresh()
const response = await client.indices.refresh();
response = client.indices.refresh
$resp = $client->indices()->refresh();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_refresh"
client.indices().refresh(r -> r);

Resolve the cluster Generally available; Added in 8.13.0

GET /_resolve/cluster/{name}

All methods and paths for this operation:

GET /_resolve/cluster

GET /_resolve/cluster/{name}

Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.

This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.

You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.

For each cluster in the index expression, information is returned about:

  • Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
  • Whether each remote cluster is configured with skip_unavailable as true or false.
  • Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
  • Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
  • Cluster version information, including the Elasticsearch server version.

For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*. Each cluster returns information about whether it has any indices, aliases or data streams that match my-index-*.

Note on backwards compatibility

The ability to query without an index expression was added in version 8.18, so when querying remote clusters older than that, the local cluster will send the index expression dummy* to those remote clusters. Thus, if any errors occur, you may see a reference to that index expression even though you didn't request it. If it causes a problem, you can instead include an index expression like *:* to bypass the issue.

You may want to exclude a cluster or index from a search when:

  • A remote cluster is not currently connected and is configured with skip_unavailable=false. Running a cross-cluster search under those conditions will cause the entire search to fail.
  • A cluster has no matching indices, aliases or data streams for the index expression (or your user does not have permissions to search them). For example, suppose your index expression is logs*,remote1:logs* and the remote1 cluster has no indices, aliases or data streams that match logs*. In that case, that cluster will return no results if you include it in a cross-cluster search.
  • The index expression (combined with any query parameters you specify) will likely cause an exception to be thrown when you do the search. In these cases, the "error" field in the _resolve/cluster response will be present. (This is also where security/permission errors will be shown.)
  • A remote cluster is an older version that does not support the feature you want to use in your search.

Test availability of remote clusters

The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not. The remote cluster may be available, while the local cluster is not currently connected to it.

You can use the _resolve/cluster API to attempt to reconnect to remote clusters. For example, use GET _resolve/cluster or GET _resolve/cluster/*:*. The connected field in the response will indicate whether the reconnection was successful. If a connection was (re-)established, this will also cause the remote/info endpoint to now indicate a connected status.
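
A minimal sketch with the Python client that checks every configured remote and flags the risky combination described above (a disconnected cluster with skip_unavailable set to false, which would fail an entire cross-cluster search):

resp = client.indices.resolve_cluster(name="*:*")
for alias, info in resp.body.items():
    if alias == "(local)":
        continue
    if not info["connected"] and not info["skip_unavailable"]:
        # Consider excluding this cluster from the search index
        # expression, for example by adding "-{alias}:*".
        print(alias, "is not connected and not skippable")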

Required authorization

  • Index privileges: view_index_metadata

Path parameters

  • name string | array[string] Required

    A comma-separated list of names or index patterns for the indices, aliases, and data streams to resolve. Resources on remote clusters can be specified using the <cluster>:<name> syntax. Index and cluster exclusions (e.g., -cluster1:*) are also supported. If no index expression is specified, information about all remote clusters configured on the local cluster is returned without doing any index matching.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • ignore_throttled boolean Deprecated

    If true, concrete, expanded, or aliased indices are ignored when frozen. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.

  • timeout string

    The maximum time to wait for remote clusters to respond. If a remote cluster does not respond within this timeout period, the API response will show the cluster as not connected and include an error message that the request timed out.

    The default timeout is unset and the query can take as long as the networking layer is configured to wait for remote clusters that are not responding (typically 30 seconds).

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties

      Provides information about each cluster request relevant to doing a cross-cluster search.

      Hide * attributes Show * attributes object
      • connected boolean Required

        Whether the remote cluster is connected to the local (querying) cluster.

      • skip_unavailable boolean Required

        The skip_unavailable setting for a remote cluster.

      • matching_indices boolean

        Whether the index expression provided in the request matches any indices, aliases or data streams on the cluster.

      • error string

        Provides error messages that are likely to occur if you do a search with this index expression on the specified cluster (for example, lack of security privileges to query an index).

      • version object

        Reduced (minimal) Elasticsearch version info (ElasticsearchVersion)

        Hide version attributes Show version attributes object
        • build_flavor string Required
        • minimum_index_compatibility_version string Required
        • minimum_wire_compatibility_version string Required
        • number string Required
GET /_resolve/cluster/not-present,clust*:my-index*,oldcluster:*?ignore_unavailable=false&timeout=5s
resp = client.indices.resolve_cluster(
    name="not-present,clust*:my-index*,oldcluster:*",
    ignore_unavailable=False,
    timeout="5s",
)
const response = await client.indices.resolveCluster({
  name: "not-present,clust*:my-index*,oldcluster:*",
  ignore_unavailable: "false",
  timeout: "5s",
});
response = client.indices.resolve_cluster(
  name: "not-present,clust*:my-index*,oldcluster:*",
  ignore_unavailable: "false",
  timeout: "5s"
)
$resp = $client->indices()->resolveCluster([
    "name" => "not-present,clust*:my-index*,oldcluster:*",
    "ignore_unavailable" => "false",
    "timeout" => "5s",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_resolve/cluster/not-present,clust*:my-index*,oldcluster:*?ignore_unavailable=false&timeout=5s"
client.indices().resolveCluster(r -> r
    .ignoreUnavailable(false)
    .name(List.of("not-present","clust*:my-index*","oldcluster:*"))
    .timeout(t -> t
        .time("5s")
    )
);
Response examples (200)
A successful response from `GET /_resolve/cluster/my-index*,clust*:my-index*`. Each cluster has its own response section. The cluster you sent the request to is labelled as "(local)".
{
  "(local)": {
    "connected": true,
    "skip_unavailable": false,
    "matching_indices": true,
    "version": {
      "number": "8.13.0",
      "build_flavor": "default",
      "minimum_wire_compatibility_version": "7.17.0",
      "minimum_index_compatibility_version": "7.0.0"
    }
  },
  "cluster_one": {
    "connected": true,
    "skip_unavailable": true,
    "matching_indices": true,
    "version": {
      "number": "8.13.0",
      "build_flavor": "default",
      "minimum_wire_compatibility_version": "7.17.0",
      "minimum_index_compatibility_version": "7.0.0"
    }
  },
  "cluster_two": {
    "connected": true,
    "skip_unavailable": false,
    "matching_indices": true,
    "version": {
      "number": "8.13.0",
      "build_flavor": "default",
      "minimum_wire_compatibility_version": "7.17.0",
      "minimum_index_compatibility_version": "7.0.0"
    }
  }
}
A successful response from `GET /_resolve/cluster/not-present,clust*:my-index*,oldcluster:*?ignore_unavailable=false&timeout=5s`. This type of request can be used to identify potential problems with your cross-cluster search. Note also that a `timeout` of 5 seconds is sent, which sets the maximum time the query will wait for remote clusters to respond.

  • The local cluster has no index called `not_present`. Searching with `ignore_unavailable=false` would return a "no such index" error.
  • The `cluster_one` remote cluster has no indices that match the pattern `my-index*`. There may be no indices that match the pattern or the index could be closed.
  • The `cluster_two` remote cluster is not connected (the attempt to connect failed). Since this cluster is marked as `skip_unavailable=false`, you should probably exclude it from the search by adding `-cluster_two:*` to the search index expression.
  • For `cluster_three`, the error message indicates that this remote cluster did not respond within the 5-second timeout window specified, so it is also marked as not connected.
  • The `oldcluster` remote cluster shows that it has matching indices, but no version information is included. This indicates that the cluster version predates the introduction of the `_resolve/cluster` API, so you may want to exclude it from your cross-cluster search.
{
  "(local)": {
    "connected": true,
    "skip_unavailable": false,
    "error": "no such index [not_present]"
  },
  "cluster_one": {
    "connected": true,
    "skip_unavailable": true,
    "matching_indices": false,
    "version": {
      "number": "8.13.0",
      "build_flavor": "default",
      "minimum_wire_compatibility_version": "7.17.0",
      "minimum_index_compatibility_version": "7.0.0"
    }
  },
  "cluster_two": {
    "connected": false,
    "skip_unavailable": false
  },
  "cluster_three": {
    "connected": false,
    "skip_unavailable": false,
    "error": "Request timed out before receiving a response from the remote cluster"
  },
  "oldcluster": {
    "connected": true,
    "skip_unavailable": false,
    "matching_indices": true
  }
}
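
Because this API is intended for application pre-flight checks, a client can use the response to trim problem clusters from a subsequent cross-cluster search. The following is a minimal Python sketch under assumed connection details and index patterns (not values from this page); the exclusion rule mirrors the `-cluster_two:*` advice above.

from elasticsearch import Elasticsearch

# Connection details are placeholders.
client = Elasticsearch("http://localhost:9200", api_key="...")

expression = "my-index*,clust*:my-index*"
resolved = client.indices.resolve_cluster(name=expression, timeout="5s")

# Exclude remote clusters that are not connected but are marked
# skip_unavailable=false; these would otherwise fail the whole search.
exclusions = [
    f"-{cluster}:*"
    for cluster, info in resolved.items()
    if cluster != "(local)"
    and not info["connected"]
    and not info["skip_unavailable"]
]

response = client.search(
    index=",".join([expression, *exclusions]),
    query={"match_all": {}},
)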




Roll over to a new index Generally available; Added in 5.0.0

POST /{alias}/_rollover/{new_index}

All methods and paths for this operation:

POST /{alias}/_rollover

POST /{alias}/_rollover/{new_index}

TIP: It is recommended to use the index lifecycle rollover action to automate rollovers.

The rollover API creates a new index for a data stream or index alias. The API behavior depends on the rollover target.

Roll over a data stream

If you roll over a data stream, the API creates a new write index for the stream. The stream's previous write index becomes a regular backing index. A rollover also increments the data stream's generation.

Roll over an index alias with a write index

TIP: Prior to Elasticsearch 7.9, you'd typically use an index alias with a write index to manage time series data. Data streams replace this functionality, require less maintenance, and automatically integrate with data tiers.

If an index alias points to multiple indices, one of the indices must be a write index. The rollover API creates a new write index for the alias with is_write_index set to true. The API also sets is_write_index to false for the previous write index.

Roll over an index alias with one index

If you roll over an index alias that points to only one index, the API creates a new index for the alias and removes the original index from the alias.

NOTE: A rollover creates a new index and is subject to the wait_for_active_shards setting.

Increment index names for an alias

When you roll over an index alias, you can specify a name for the new index. If you don't specify a name and the current index ends with - and a number, such as my-index-000001 or my-index-3, the new index name increments that number. For example, if you roll over an alias with a current index of my-index-000001, the rollover creates a new index named my-index-000002. This number is always six characters and zero-padded, regardless of the previous index's name.

If you use an index alias for time series data, you can use date math in the index name to track the rollover date. For example, you can create an alias that points to an index named <my-index-{now/d}-000001>. If you create the index on May 6, 2099, the index's name is my-index-2099.05.06-000001. If you roll over the alias on May 7, 2099, the new index's name is my-index-2099.05.07-000002.
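
As a sketch of the increment behavior, the following Python example creates an initial write index and rolls it over without specifying a new index name; the index and alias names are illustrative, not from this page.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

# Create the initial index; the zero-padded "-000001" suffix lets the
# rollover API derive the next name automatically.
client.indices.create(
    index="my-index-000001",
    aliases={"my-alias": {"is_write_index": True}},
)

# No new_index is given, so the API creates "my-index-000002".
resp = client.indices.rollover(alias="my-alias")
print(resp["old_index"], "->", resp["new_index"])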

Required authorization

  • Index privileges: manage

Path parameters

  • alias string Required

    Name of the data stream or index alias to roll over.

  • new_index string

    Name of the index to create. Supports date math. Data streams do not support this parameter.

Query parameters

  • dry_run boolean

    If true, checks whether the current index satisfies the specified conditions but does not perform a rollover.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

  • lazy boolean

    If set to true, the rollover action will only mark a data stream to signal that it needs to be rolled over at the next write. Only allowed on data streams.

application/json

Body

  • aliases object

    Aliases for the target index. Data streams do not support this parameter.

    Hide aliases attribute Show aliases attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • filter object

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

        External documentation
      • index_routing string
      • is_hidden boolean

        If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

        Default value is false.

      • is_write_index boolean

        If true, the index is the write index for the alias.

        Default value is false.

      • routing string
      • search_routing string
  • conditions object
    Hide conditions attributes Show conditions attributes object
    • min_age string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • max_age string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • max_age_millis number

      Time unit for milliseconds

    • min_docs number
    • max_docs number
    • max_size number | string

    • max_size_bytes number
    • min_size number | string

    • min_size_bytes number
    • max_primary_shard_size number | string

    • max_primary_shard_size_bytes number
    • min_primary_shard_size number | string

    • min_primary_shard_size_bytes number
    • max_primary_shard_docs number
    • min_primary_shard_docs number
  • mappings object
    Hide mappings attributes Show mappings attributes object
    • all_field object
      Hide all_field attributes Show all_field attributes object
      • analyzer string Required
      • enabled boolean Required
      • omit_norms boolean Required
      • search_analyzer string Required
      • similarity string Required
      • store boolean Required
      • store_term_vector_offsets boolean Required
      • store_term_vector_payloads boolean Required
      • store_term_vector_positions boolean Required
      • store_term_vectors boolean Required
    • date_detection boolean
    • dynamic string

      Values are strict, runtime, true, or false.

    • dynamic_date_formats array[string]
    • dynamic_templates array[object]
    • _field_names object
      Hide _field_names attribute Show _field_names attribute object
      • enabled boolean Required
    • index_field object
      Hide index_field attribute Show index_field attribute object
      • enabled boolean Required
    • _meta object
      Hide _meta attribute Show _meta attribute object
      • * object Additional properties
    • numeric_detection boolean
    • properties object
    • _routing object
      Hide _routing attribute Show _routing attribute object
      • required boolean Required
    • _size object
      Hide _size attribute Show _size attribute object
      • enabled boolean Required
    • _source object
      Hide _source attributes Show _source attributes object
      • compress boolean
      • compress_threshold string
      • enabled boolean
      • excludes array[string]
      • includes array[string]
      • mode string

        Values are disabled, stored, or synthetic.

    • runtime object
      Hide runtime attribute Show runtime attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
            Hide * attribute Show * attribute object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field string Required

            Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

          • format string
        • format string

          A custom format for date type runtime fields.

        • input_field string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • target_field string

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • target_index string
        • script object
          Hide script attributes Show script attributes object
          • source string | object

            One of:
          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            Hide params attribute Show params attribute object
            • * object Additional properties
          • lang string

            Any of:

            Values are painless, expression, mustache, or java.

          • options object
            Hide options attribute Show options attribute object
            • * string Additional properties
        • type string Required

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • enabled boolean
    • subobjects string

      Values are true or false.

    • _data_stream_timestamp object
      Hide _data_stream_timestamp attribute Show _data_stream_timestamp attribute object
      • enabled boolean Required
  • settings object

    Configuration options for the index. Data streams do not support this parameter.

    Hide settings attribute Show settings attribute object
    • * object Additional properties

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • conditions object Required
      Hide conditions attribute Show conditions attribute object
      • * boolean Additional properties
    • dry_run boolean Required
    • new_index string Required
    • old_index string Required
    • rolled_over boolean Required
    • shards_acknowledged boolean Required
POST /{alias}/_rollover/{new_index}
POST my-data-stream/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 1000,
    "max_primary_shard_size": "50gb",
    "max_primary_shard_docs": "2000"
  }
}
resp = client.indices.rollover(
    alias="my-data-stream",
    conditions={
        "max_age": "7d",
        "max_docs": 1000,
        "max_primary_shard_size": "50gb",
        "max_primary_shard_docs": "2000"
    },
)
const response = await client.indices.rollover({
  alias: "my-data-stream",
  conditions: {
    max_age: "7d",
    max_docs: 1000,
    max_primary_shard_size: "50gb",
    max_primary_shard_docs: "2000",
  },
});
response = client.indices.rollover(
  alias: "my-data-stream",
  body: {
    "conditions": {
      "max_age": "7d",
      "max_docs": 1000,
      "max_primary_shard_size": "50gb",
      "max_primary_shard_docs": "2000"
    }
  }
)
$resp = $client->indices()->rollover([
    "alias" => "my-data-stream",
    "body" => [
        "conditions" => [
            "max_age" => "7d",
            "max_docs" => 1000,
            "max_primary_shard_size" => "50gb",
            "max_primary_shard_docs" => "2000",
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"conditions":{"max_age":"7d","max_docs":1000,"max_primary_shard_size":"50gb","max_primary_shard_docs":"2000"}}' "$ELASTICSEARCH_URL/my-data-stream/_rollover"
client.indices().rollover(r -> r
    .alias("my-data-stream")
    .conditions(c -> c
        .maxAge(m -> m
            .time("7d")
        )
        .maxDocs(1000L)
        .maxPrimaryShardSize("50gb")
        .maxPrimaryShardDocs(2000L)
    )
);
Request example
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 1000,
    "max_primary_shard_size": "50gb",
    "max_primary_shard_docs": "2000"
  }
}
Response examples (200)
An example of a successful response from `POST my-data-stream/_rollover`. It reports the old and new backing indices and shows how each condition was evaluated.
{
  "acknowledged": true,
  "shards_acknowledged": true,
  "old_index": ".ds-my-data-stream-2099.05.06-000001",
  "new_index": ".ds-my-data-stream-2099.05.07-000002",
  "rolled_over": true,
  "dry_run": false,
  "conditions": {
    "[max_age: 7d]": false,
    "[max_docs: 1000]": true,
    "[max_primary_shard_size: 50gb]": false,
    "[max_primary_shard_docs: 2000]": false
  }
}
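
To check the conditions without actually rolling over, the same request can be issued with `dry_run`. A minimal Python sketch, reusing the data stream name from the example above:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

resp = client.indices.rollover(
    alias="my-data-stream",
    dry_run=True,
    conditions={"max_age": "7d", "max_docs": 1000},
)

# dry_run is true in the response and no new index is created; the
# conditions object reports which conditions would have been met.
print(resp["dry_run"], resp["conditions"])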




















Split an index Generally available; Added in 6.1.0

POST /{index}/_split/{target}

All methods and paths for this operation:

PUT /{index}/_split/{target}

POST /{index}/_split/{target}

Split an index into a new index with more primary shards.

Before you can split an index:

  • The index must be read-only.

  • The cluster health status must be green.

You can make an index read-only with the following request using the add index block API:

PUT /my_source_index/_block/write
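
In application code, the write block and the split can be issued together. A minimal Python sketch with illustrative index names follows; see the requirements below for the constraints on the target shard count.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

# 1. Make the source index read-only by adding a write block.
client.indices.add_block(index="my_source_index", block="write")

# 2. Split into a target index whose primary shard count is a
#    multiple of the source's primary shard count.
client.indices.split(
    index="my_source_index",
    target="my_target_index",
    settings={"index.number_of_shards": 2},
)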

The current write index on a data stream cannot be split. In order to split the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be split.

The number of times the index can be split (and the number of shards that each original shard can be split into) is determined by the index.number_of_routing_shards setting. The number of routing shards specifies the hashing space that is used internally to distribute documents across shards with consistent hashing. For instance, a 5 shard index with number_of_routing_shards set to 30 (5 x 2 x 3) could be split by a factor of 2 or 3. In other words, it could be split as follows: 5 → 10 → 30 (split by 2, then by 3), 5 → 15 → 30 (split by 3, then by 2), or 5 → 30 (split by 6).

A split operation:

  • Creates a new target index with the same definition as the source index, but with a larger number of primary shards.
  • Hard-links segments from the source index into the target index. If the file system doesn't support hard-linking, all segments are copied into the new index, which is a much more time-consuming process.
  • Hashes all documents again, after low-level files are created, to delete documents that belong to a different shard.
  • Recovers the target index as though it were a closed index which had just been re-opened.

IMPORTANT: Indices can only be split if they satisfy the following requirements:

  • The target index must not exist.
  • The source index must have fewer primary shards than the target index.
  • The number of primary shards in the target index must be a multiple of the number of primary shards in the source index.
  • The node handling the split process must have sufficient free disk space to accommodate a second copy of the existing index.

Required authorization

  • Index privileges: manage

Path parameters

  • index string Required

    Name of the source index to split.

  • target string Required

    Name of the target index to create.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).

    Values are all or index-setting.

application/json

Body

  • aliases object

    Aliases for the resulting index.

    Hide aliases attribute Show aliases attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • filter object

        An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

        External documentation
      • index_routing string
      • is_hidden boolean

        If true, the alias is hidden. All indices for the alias must have the same is_hidden value.

        Default value is false.

      • is_write_index boolean

        If true, the index is the write index for the alias.

        Default value is false.

      • routing string
      • search_routing string
  • settings object

    Configuration options for the target index.

    Hide settings attribute Show settings attribute object
    • * object Additional properties

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • shards_acknowledged boolean Required
    • index string Required
POST /my-index-000001/_split/split-my-index-000001
{
  "settings": {
    "index.number_of_shards": 2
  }
}
resp = client.indices.split(
    index="my-index-000001",
    target="split-my-index-000001",
    settings={
        "index.number_of_shards": 2
    },
)
const response = await client.indices.split({
  index: "my-index-000001",
  target: "split-my-index-000001",
  settings: {
    "index.number_of_shards": 2,
  },
});
response = client.indices.split(
  index: "my-index-000001",
  target: "split-my-index-000001",
  body: {
    "settings": {
      "index.number_of_shards": 2
    }
  }
)
$resp = $client->indices()->split([
    "index" => "my-index-000001",
    "target" => "split-my-index-000001",
    "body" => [
        "settings" => [
            "index.number_of_shards" => 2,
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"settings":{"index.number_of_shards":2}}' "$ELASTICSEARCH_URL/my-index-000001/_split/split-my-index-000001"
client.indices().split(s -> s
    .index("my-index-000001")
    .settings("index.number_of_shards", JsonData.fromJson("2"))
    .target("split-my-index-000001")
);
Request example
Split an existing index into a new index with more primary shards.
{
  "settings": {
    "index.number_of_shards": 2
  }
}

















































Start the ILM plugin Generally available; Added in 6.6.0

POST /_ilm/start

Start the index lifecycle management plugin if it is currently stopped. ILM is started automatically when the cluster is formed. Restarting ILM is necessary only when it has been stopped using the stop ILM API.

Required authorization

  • Cluster privileges: manage_ilm

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST _ilm/start
resp = client.ilm.start()
const response = await client.ilm.start();
response = client.ilm.start
$resp = $client->ilm()->start();
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ilm/start"
client.ilm().start(s -> s);
Response examples (200)
A successful response when starting the ILM plugin.
{
  "acknowledged": true
}





















Perform inference on the service Generally available; Added in 8.11.0

POST /_inference/{task_type}/{inference_id}

All methods and paths for this operation:

POST /_inference/{inference_id}

POST /_inference/{task_type}/{inference_id}

This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API.

For details about using this API with a service, such as Amazon Bedrock, Anthropic, or HuggingFace, refer to the service-specific documentation.


The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

Required authorization

  • Cluster privileges: monitor_inference

Path parameters

  • task_type string Required

    The type of inference task that the model performs.

    Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

  • inference_id string Required

    The unique identifier for the inference endpoint.

Query parameters

  • timeout string

    The amount of time to wait for the inference request to complete.

    Values are -1 or 0.

application/json

Body

  • query string

    The query input, which is required only for the rerank task. It is not required for other tasks.

  • input string | array[string] Required

    The text on which you want to perform the inference task. It can be a single string or an array.


    Inference endpoints for the completion task type currently only support a single string as input.

  • task_settings object

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • text_embedding_bytes array[object]

      The text embedding result object for byte representation

      Hide text_embedding_bytes attribute Show text_embedding_bytes attribute object
      • embedding array[number] Required

        Text Embedding results containing bytes are represented as Dense Vectors of bytes.

    • text_embedding_bits array[object]

      The text embedding result object for bit representation

      Hide text_embedding_bits attribute Show text_embedding_bits attribute object
      • embedding array[number] Required

        Text Embedding results containing bytes are represented as Dense Vectors of bytes.

    • text_embedding array[object]

      The text embedding result object

      Hide text_embedding attribute Show text_embedding attribute object
      • embedding array[number] Required

        Text Embedding results are represented as Dense Vectors of floats.

    • sparse_embedding array[object]
      Hide sparse_embedding attribute Show sparse_embedding attribute object
      • embedding object Required

        Sparse Embedding tokens are represented as a dictionary of string to double.

        Hide embedding attribute Show embedding attribute object
        • * number Additional properties
    • completion array[object]

      The completion result object

      Hide completion attribute Show completion attribute object
      • result string Required
    • rerank array[object]

      The rerank result object representing a single ranked document:

      • id: the original index of the document in the request
      • relevance_score: the relevance score of the document relative to the query
      • text: optional, the text of the document, if requested

      Hide rerank attributes Show rerank attributes object
      • index number Required
      • relevance_score number Required
      • text string
POST /_inference/{task_type}/{inference_id}
curl \
 --request POST 'http://api.example.com/_inference/{task_type}/{inference_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"query":"string","input":"string","task_settings":{}}'












Create an Anthropic inference endpoint Generally available; Added in 8.16.0

PUT /_inference/{task_type}/{anthropic_inference_id}

Create an inference endpoint to perform an inference task with the anthropic service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The task type. The only valid task type for the model to perform is completion.

    Value is completion.

  • anthropic_inference_id string Required

    The unique identifier of the inference endpoint.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    Value is anthropic.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key for the Anthropic API.

    • model_id string Required

      The name of the model to use for the inference task. Refer to the Anthropic documentation for the list of supported models.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
  • task_settings object
    Hide task_settings attributes Show task_settings attributes object
    • max_tokens number Required

      For a completion task, it is the maximum number of tokens to generate before stopping.

    • temperature number

      For a completion task, it is the amount of randomness injected into the response. For more details about the supported range, refer to Anthropic documentation.

      External documentation
    • top_k number

      For a completion task, it specifies to only sample from the top K options for each subsequent token. It is recommended for advanced use cases only. You usually only need to use temperature.

    • top_p number

      For a completion task, it specifies to use Anthropic's nucleus sampling. In nucleus sampling, Anthropic computes the cumulative distribution over all the options for each subsequent token in decreasing probability order and cuts it off once it reaches the specified probability. You should either alter temperature or top_p, but not both. It is recommended for advanced use cases only. You usually only need to use temperature.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference Id

    • task_type string Required

      Value is completion.

PUT /_inference/{task_type}/{anthropic_inference_id}
PUT _inference/completion/anthropic_completion
{
    "service": "anthropic",
    "service_settings": {
        "api_key": "Anthropic-Api-Key",
        "model_id": "Model-ID"
    },
    "task_settings": {
        "max_tokens": 1024
    }
}
resp = client.inference.put(
    task_type="completion",
    inference_id="anthropic_completion",
    inference_config={
        "service": "anthropic",
        "service_settings": {
            "api_key": "Anthropic-Api-Key",
            "model_id": "Model-ID"
        },
        "task_settings": {
            "max_tokens": 1024
        }
    },
)
const response = await client.inference.put({
  task_type: "completion",
  inference_id: "anthropic_completion",
  inference_config: {
    service: "anthropic",
    service_settings: {
      api_key: "Anthropic-Api-Key",
      model_id: "Model-ID",
    },
    task_settings: {
      max_tokens: 1024,
    },
  },
});
response = client.inference.put(
  task_type: "completion",
  inference_id: "anthropic_completion",
  body: {
    "service": "anthropic",
    "service_settings": {
      "api_key": "Anthropic-Api-Key",
      "model_id": "Model-ID"
    },
    "task_settings": {
      "max_tokens": 1024
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "completion",
    "inference_id" => "anthropic_completion",
    "body" => [
        "service" => "anthropic",
        "service_settings" => [
            "api_key" => "Anthropic-Api-Key",
            "model_id" => "Model-ID",
        ],
        "task_settings" => [
            "max_tokens" => 1024,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"anthropic","service_settings":{"api_key":"Anthropic-Api-Key","model_id":"Model-ID"},"task_settings":{"max_tokens":1024}}' "$ELASTICSEARCH_URL/_inference/completion/anthropic_completion"
client.inference().put(p -> p
    .inferenceId("anthropic_completion")
    .taskType(TaskType.Completion)
    .inferenceConfig(i -> i
        .service("anthropic")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"Anthropic-Api-Key\",\"model_id\":\"Model-ID\"}"))
        .taskSettings(JsonData.fromJson("{\"max_tokens\":1024}"))
    )
);
Request example
Run `PUT _inference/completion/anthropic_completion` to create an inference endpoint that performs a completion task.
{
    "service": "anthropic",
    "service_settings": {
        "api_key": "Anthropic-Api-Key",
        "model_id": "Model-ID"
    },
    "task_settings": {
        "max_tokens": 1024
    }
}
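
Once the endpoint exists, it can be called with the perform inference API. A minimal Python sketch against the endpoint created above; the prompt and temperature value are illustrative.

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

resp = client.inference.inference(
    task_type="completion",
    inference_id="anthropic_completion",
    input="Why is the sky blue?",
    # Per-request task settings override the endpoint defaults.
    task_settings={"temperature": 0.7},
)
print(resp["completion"][0]["result"])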
















Create an Elasticsearch inference endpoint Generally available; Added in 8.13.0

PUT /_inference/{task_type}/{elasticsearch_inference_id}

Create an inference endpoint to perform an inference task with the elasticsearch service.


Your Elasticsearch deployment contains preconfigured ELSER and E5 inference endpoints; you only need to create endpoints using the API if you want to customize the settings.

If you use the ELSER or the E5 model through the elasticsearch service, the API request will automatically download and deploy the model if it isn't downloaded yet.


You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value.

After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
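
For example, deployment status can be polled with the Python client before routing traffic to the endpoint. A minimal sketch, assuming the ELSER model ID used in the examples below:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")

stats = client.ml.get_trained_models_stats(model_id=".elser_model_2")
for model in stats["trained_model_stats"]:
    allocation = model.get("deployment_stats", {}).get("allocation_status", {})
    # Wait for "fully_allocated" and matching allocation counts.
    print(
        allocation.get("state"),
        allocation.get("allocation_count"),
        allocation.get("target_allocation_count"),
    )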

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are rerank, sparse_embedding, or text_embedding.

  • elasticsearch_inference_id string Required

    The unique identifier of the inference endpoint. It must not match the model_id.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    Value is elasticsearch.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • adaptive_allocations object
      Hide adaptive_allocations attributes Show adaptive_allocations attributes object
      • enabled boolean

        Turn on adaptive_allocations.

        Default value is false.

      • max_number_of_allocations number

        The maximum number of allocations to scale to. If set, it must be greater than or equal to min_number_of_allocations.

      • min_number_of_allocations number

        The minimum number of allocations to scale to. If set, it must be greater than or equal to 0. If not defined, the deployment scales to 0.

    • deployment_id string

      The deployment identifier for a trained model deployment. When deployment_id is used the model_id is optional.

    • model_id string Required

      The name of the model to use for the inference task. It can be the ID of a built-in model (for example, .multilingual-e5-small for E5) or a text embedding model that was uploaded by using the Eland client.

      External documentation
    • num_allocations number

      The total number of allocations that are assigned to the model across machine learning nodes. Increasing this value generally increases the throughput. If adaptive allocations are enabled, do not set this value because it's automatically set.

    • num_threads number Required

      The number of threads used by each model allocation during inference. This setting generally increases the speed per inference request. The inference process is a compute-bound process; num_threads must not exceed the number of available allocated processors per node. The value must be a power of 2. The maximum value is 32.

  • task_settings object
    Hide task_settings attribute Show task_settings attribute object
    • return_documents boolean

      For a rerank task, return the document instead of only the index.

      Default value is true.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference Id

    • task_type string Required

      Values are sparse_embedding, text_embedding, or rerank.

PUT /_inference/{task_type}/{elasticsearch_inference_id}
PUT _inference/sparse_embedding/my-elser-model
{
    "service": "elasticsearch",
    "service_settings": {
        "adaptive_allocations": { 
        "enabled": true,
        "min_number_of_allocations": 1,
        "max_number_of_allocations": 4
        },
        "num_threads": 1,
        "model_id": ".elser_model_2" 
    }
}
resp = client.inference.put(
    task_type="sparse_embedding",
    inference_id="my-elser-model",
    inference_config={
        "service": "elasticsearch",
        "service_settings": {
            "adaptive_allocations": {
                "enabled": True,
                "min_number_of_allocations": 1,
                "max_number_of_allocations": 4
            },
            "num_threads": 1,
            "model_id": ".elser_model_2"
        }
    },
)
const response = await client.inference.put({
  task_type: "sparse_embedding",
  inference_id: "my-elser-model",
  inference_config: {
    service: "elasticsearch",
    service_settings: {
      adaptive_allocations: {
        enabled: true,
        min_number_of_allocations: 1,
        max_number_of_allocations: 4,
      },
      num_threads: 1,
      model_id: ".elser_model_2",
    },
  },
});
response = client.inference.put(
  task_type: "sparse_embedding",
  inference_id: "my-elser-model",
  body: {
    "service": "elasticsearch",
    "service_settings": {
      "adaptive_allocations": {
        "enabled": true,
        "min_number_of_allocations": 1,
        "max_number_of_allocations": 4
      },
      "num_threads": 1,
      "model_id": ".elser_model_2"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "sparse_embedding",
    "inference_id" => "my-elser-model",
    "body" => [
        "service" => "elasticsearch",
        "service_settings" => [
            "adaptive_allocations" => [
                "enabled" => true,
                "min_number_of_allocations" => 1,
                "max_number_of_allocations" => 4,
            ],
            "num_threads" => 1,
            "model_id" => ".elser_model_2",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"elasticsearch","service_settings":{"adaptive_allocations":{"enabled":true,"min_number_of_allocations":1,"max_number_of_allocations":4},"num_threads":1,"model_id":".elser_model_2"}}' "$ELASTICSEARCH_URL/_inference/sparse_embedding/my-elser-model"
client.inference().put(p -> p
    .inferenceId("my-elser-model")
    .taskType(TaskType.SparseEmbedding)
    .inferenceConfig(i -> i
        .service("elasticsearch")
        .serviceSettings(JsonData.fromJson("{\"adaptive_allocations\":{\"enabled\":true,\"min_number_of_allocations\":1,\"max_number_of_allocations\":4},\"num_threads\":1,\"model_id\":\".elser_model_2\"}"))
    )
);
Request examples
Run `PUT _inference/sparse_embedding/my-elser-model` to create an inference endpoint that performs a `sparse_embedding` task. The `model_id` must be the ID of one of the built-in ELSER models. The API will automatically download the ELSER model if it isn't already downloaded and then deploy the model.
{
    "service": "elasticsearch",
    "service_settings": {
        "adaptive_allocations": { 
        "enabled": true,
        "min_number_of_allocations": 1,
        "max_number_of_allocations": 4
        },
        "num_threads": 1,
        "model_id": ".elser_model_2" 
    }
}
Run `PUT _inference/rerank/my-elastic-rerank` to create an inference endpoint that performs a rerank task using the built-in Elastic Rerank cross-encoder model. The `model_id` must be `.rerank-v1`, which is the ID of the built-in Elastic Rerank model. The API will automatically download the Elastic Rerank model if it isn't already downloaded and then deploy the model. Once deployed, the model can be used for semantic re-ranking with a `text_similarity_reranker` retriever.
{
    "service": "elasticsearch",
    "service_settings": {
        "model_id": ".rerank-v1", 
        "num_threads": 1,
        "adaptive_allocations": { 
        "enabled": true,
        "min_number_of_allocations": 1,
        "max_number_of_allocations": 4
        }
    }
}
Run `PUT _inference/text_embedding/my-e5-model` to create an inference endpoint that performs a `text_embedding` task. The `model_id` must be the ID of one of the built-in E5 models. The API will automatically download the E5 model if it isn't already downloaded and then deploy the model.
{
    "service": "elasticsearch",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1,
        "model_id": ".multilingual-e5-small" 
    }
}
Run `PUT _inference/text_embedding/my-msmarco-minilm-model` to create an inference endpoint that performs a `text_embedding` task with a model that was uploaded by Eland.
{
    "service": "elasticsearch",
    "service_settings": {
        "num_allocations": 1,
        "num_threads": 1,
        "model_id": "msmarco-MiniLM-L12-cos-v5" 
    }
}
Run `PUT _inference/text_embedding/my-e5-model` to create an inference endpoint that performs a `text_embedding` task and to configure adaptive allocations. The API request will automatically download the E5 model if it isn't already downloaded and then deploy the model.
{
    "service": "elasticsearch",
    "service_settings": {
        "adaptive_allocations": {
        "enabled": true,
        "min_number_of_allocations": 3,
        "max_number_of_allocations": 10
        },
        "num_threads": 1,
        "model_id": ".multilingual-e5-small"
    }
}
Run `PUT _inference/sparse_embedding/use_existing_deployment` to use an already existing model deployment when creating an inference endpoint.
{
    "service": "elasticsearch",
    "service_settings": {
        "deployment_id": ".elser_model_2"
    }
}
Response examples (200)
A successful response from `PUT _inference/sparse_embedding/use_existing_deployment`. It contains the model ID and the threads and allocations settings from the model deployment.
{
  "inference_id": "use_existing_deployment",
  "task_type": "sparse_embedding",
  "service": "elasticsearch",
  "service_settings": {
    "num_allocations": 2,
    "num_threads": 1,
    "model_id": ".elser_model_2",
    "deployment_id": ".elser_model_2"
  },
  "chunking_settings": {
    "strategy": "sentence",
    "max_chunk_size": 250,
    "sentence_overlap": 1
  }
}












Create a Hugging Face inference endpoint Generally available; Added in 8.12.0

PUT /_inference/{task_type}/{huggingface_inference_id}

Create an inference endpoint to perform an inference task with the hugging_face service. Supported tasks include: text_embedding, rerank, completion, and chat_completion.

To configure the endpoint, first visit the Hugging Face Inference Endpoints page and create a new endpoint. Select a model that supports the task you intend to use.

For Elastic's text_embedding task: The selected model must support the Sentence Embeddings task. On the new endpoint creation page, select the Sentence Embeddings task under the Advanced Configuration section. After the endpoint has initialized, copy the generated endpoint URL. Recommended models for the text_embedding task:

  • all-MiniLM-L6-v2
  • all-MiniLM-L12-v2
  • all-mpnet-base-v2
  • e5-base-v2
  • e5-small-v2
  • multilingual-e5-base
  • multilingual-e5-small

For Elastic's chat_completion and completion tasks: The selected model must support the Text Generation task and expose the OpenAI API. HuggingFace supports both serverless and dedicated endpoints for Text Generation. When creating a dedicated endpoint, select the Text Generation task. After the endpoint is initialized (for dedicated) or ready (for serverless), ensure that it supports the OpenAI API and that the URL includes the /v1/chat/completions path. Then, copy the full endpoint URL for use. Recommended models for the chat_completion and completion tasks:

  • Mistral-7B-Instruct-v0.2
  • QwQ-32B
  • Phi-3-mini-128k-instruct

For Elastic's rerank task: The selected model must support the sentence-ranking task and expose the OpenAI API. HuggingFace supports only dedicated (not serverless) endpoints for the rerank task so far. After the endpoint is initialized, copy the full endpoint URL for use. Tested models for the rerank task:

  • bge-reranker-base
  • jina-reranker-v1-turbo-en-GGUF

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are chat_completion, completion, rerank, or text_embedding.

  • huggingface_inference_id string Required

    The unique identifier of the inference endpoint.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    Value is hugging_face.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid access token for your HuggingFace account. You can create or find your access tokens on the HuggingFace settings page.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
    • url string Required

      The URL endpoint to use for the requests. For completion and chat_completion tasks, the deployed model must be compatible with the Hugging Face Chat Completion interface (see the linked external documentation for details). The endpoint URL for the request must include /v1/chat/completions. If the model supports the OpenAI Chat Completion schema, a toggle should appear in the interface. Enabling this toggle doesn't change any model behavior; it reveals the full endpoint URL needed (which should include /v1/chat/completions) when configuring the inference endpoint in Elasticsearch. If the model doesn't support this schema, the toggle may not be shown.

      External documentation
    • model_id string

      The name of the HuggingFace model to use for the inference task. For completion and chat_completion tasks, this field is optional but may be required for certain models — particularly when using serverless inference endpoints. For the text_embedding task, this field should not be included. Otherwise, the request will fail.

  • task_settings object
    Hide task_settings attributes Show task_settings attributes object
    • return_documents boolean

      For a rerank task, return doc text within the results.

    • top_n number

      For a rerank task, the number of most relevant documents to return. It defaults to the number of the documents.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference Id

    • task_type string Required

      Values are chat_completion, completion, rerank, or text_embedding.

PUT /_inference/{task_type}/{huggingface_inference_id}
PUT _inference/text_embedding/hugging-face-embeddings
{
    "service": "hugging_face",
    "service_settings": {
        "api_key": "hugging-face-access-token", 
        "url": "url-endpoint" 
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="hugging-face-embeddings",
    inference_config={
        "service": "hugging_face",
        "service_settings": {
            "api_key": "hugging-face-access-token",
            "url": "url-endpoint"
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "hugging-face-embeddings",
  inference_config: {
    service: "hugging_face",
    service_settings: {
      api_key: "hugging-face-access-token",
      url: "url-endpoint",
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "hugging-face-embeddings",
  body: {
    "service": "hugging_face",
    "service_settings": {
      "api_key": "hugging-face-access-token",
      "url": "url-endpoint"
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "hugging-face-embeddings",
    "body" => [
        "service" => "hugging_face",
        "service_settings" => [
            "api_key" => "hugging-face-access-token",
            "url" => "url-endpoint",
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"hugging_face","service_settings":{"api_key":"hugging-face-access-token","url":"url-endpoint"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/hugging-face-embeddings"
client.inference().put(p -> p
    .inferenceId("hugging-face-embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("hugging_face")
        .serviceSettings(JsonData.fromJson("{\"api_key\":\"hugging-face-access-token\",\"url\":\"url-endpoint\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/hugging-face-embeddings` to create an inference endpoint that performs a `text_embedding` task type.
{
    "service": "hugging_face",
    "service_settings": {
        "api_key": "hugging-face-access-token", 
        "url": "url-endpoint" 
    }
}
Run `PUT _inference/rerank/hugging-face-rerank` to create an inference endpoint that performs a `rerank` task type.
{
    "service": "hugging_face",
    "service_settings": {
        "api_key": "hugging-face-access-token", 
        "url": "url-endpoint" 
    },
    "task_settings": {
        "return_documents": true,
        "top_n": 3
    }
}
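
The same endpoint definition works for a `chat_completion` task when the deployed model exposes the OpenAI-compatible schema. A minimal Python sketch, assuming the `client` instance used in the examples above; the URL and model name below are placeholders, not real deployments:
# Hypothetical chat_completion endpoint backed by a Hugging Face model
# that supports the OpenAI Chat Completion schema. The URL must include
# /v1/chat/completions, as described in the service settings above.
resp = client.inference.put(
    task_type="chat_completion",
    inference_id="hugging-face-chat",
    inference_config={
        "service": "hugging_face",
        "service_settings": {
            "api_key": "hugging-face-access-token",
            "url": "https://example.endpoints.huggingface.cloud/v1/chat/completions",
            "model_id": "my-chat-model"  # optional, but required by some serverless endpoints
        }
    },
)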












Create a VoyageAI inference endpoint Generally available; Added in 8.19.0

PUT /_inference/{task_type}/{voyageai_inference_id}

Create an inference endpoint to perform an inference task with the voyageai service.

Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are text_embedding or rerank.

  • voyageai_inference_id string Required

    The unique identifier of the inference endpoint.

application/json

Body

  • chunking_settings object

    Chunking configuration object

    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • strategy string

      The chunking strategy: sentence or word.

      Default value is sentence.

  • service string Required

    Value is voyageai.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • dimensions number

      The number of dimensions for resulting output embeddings. This setting maps to output_dimension in the VoyageAI documentation. Only for the text_embedding task type.

      External documentation
    • model_id string Required

      The name of the model to use for the inference task. Refer to the VoyageAI documentation for the list of available text embedding and rerank models.

      External documentation
    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service.

      Hide rate_limit attribute Show rate_limit attribute object
      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
    • embedding_type number

      The data type for the embeddings to be returned. This setting maps to output_dtype in the VoyageAI documentation. Permitted values: float, int8, bit (int8 and bit are synonyms of byte and binary, respectively, in the VoyageAI documentation). Only for the text_embedding task type.

      External documentation
  • task_settings object
    Hide task_settings attributes Show task_settings attributes object
    • input_type string

      Type of the input text. Permitted values: ingest (maps to document in the VoyageAI documentation), search (maps to query in the VoyageAI documentation). Only for the text_embedding task type.

    • return_documents boolean

      Whether to return the source documents in the response. Only for the rerank task type.

      Default value is false.

    • top_k number

      The number of most relevant documents to return. If not specified, the reranking results of all documents will be returned. Only for the rerank task type.

    • truncation boolean

      Whether to truncate the input texts to fit within the context length.

      Default value is true.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object

      Chunking configuration object

      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • strategy string

        The chunking strategy: sentence or word.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference ID

    • task_type string Required

      Values are text_embedding or rerank.

PUT /_inference/{task_type}/{voyageai_inference_id}
PUT _inference/text_embedding/voyageai-embeddings
{
    "service": "voyageai",
    "service_settings": {
        "model_id": "voyage-3-large",
        "dimensions": 512
    }
}
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="openai-embeddings",
    inference_config={
        "service": "voyageai",
        "service_settings": {
            "model_id": "voyage-3-large",
            "dimensions": 512
        }
    },
)
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "openai-embeddings",
  inference_config: {
    service: "voyageai",
    service_settings: {
      model_id: "voyage-3-large",
      dimensions: 512,
    },
  },
});
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "openai-embeddings",
  body: {
    "service": "voyageai",
    "service_settings": {
      "model_id": "voyage-3-large",
      "dimensions": 512
    }
  }
)
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "openai-embeddings",
    "body" => [
        "service" => "voyageai",
        "service_settings" => [
            "model_id" => "voyage-3-large",
            "dimensions" => 512,
        ],
    ],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"voyageai","service_settings":{"model_id":"voyage-3-large","dimensions":512}}' "$ELASTICSEARCH_URL/_inference/text_embedding/voyageai-embeddings"
client.inference().put(p -> p
    .inferenceId("openai-embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("voyageai")
        .serviceSettings(JsonData.fromJson("{\"model_id\":\"voyage-3-large\",\"dimensions\":512}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/voyageai-embeddings` to create an inference endpoint that performs a `text_embedding` task. The embeddings created by requests to this endpoint will have 512 dimensions.
{
    "service": "voyageai",
    "service_settings": {
        "model_id": "voyage-3-large",
        "dimensions": 512
    }
}
Run `PUT _inference/rerank/voyageai-rerank` to create an inference endpoint that performs a `rerank` task.
{
    "service": "voyageai",
    "service_settings": {
        "model_id": "rerank-2"
    }
}
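
The optional service, task, and chunking settings documented above can be combined in a single request. A minimal Python sketch, assuming the `client` instance used in the examples above; the endpoint name and setting values are illustrative, not defaults:
# Hypothetical endpoint combining quantized embeddings, a custom rate
# limit, ingest-time input_type, and sentence chunking.
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="voyageai-embeddings-int8",
    inference_config={
        "service": "voyageai",
        "service_settings": {
            "model_id": "voyage-3-large",
            "embedding_type": "int8",  # float, int8, or bit
            "rate_limit": {"requests_per_minute": 1000}
        },
        "task_settings": {
            "input_type": "ingest"  # use "search" for query-time embeddings
        },
        "chunking_settings": {
            "strategy": "sentence",
            "max_chunk_size": 200,
            "sentence_overlap": 1
        }
    },
)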




Perform reranking inference on the service Generally available; Added in 8.11.0

POST /_inference/rerank/{inference_id}

Required authorization

  • Cluster privileges: monitor_inference

Path parameters

  • inference_id string Required

    The unique identifier for the inference endpoint.

Query parameters

  • timeout string

    The amount of time to wait for the inference request to complete.

    Values are -1 or 0.

application/json

Body

  • query string Required

    Query input.

  • input string | array[string] Required

    The text on which you want to perform the inference task. It can be a single string or an array.


    Inference endpoints for the completion task type currently only support a single string as input.

  • task_settings object

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • rerank array[object] Required

      The rerank result object representing a single ranked document: index is the original position of the document in the request, relevance_score is its relevance to the query, and text optionally contains the document text, if requested.

      Hide rerank attributes Show rerank attributes object
      • index number Required
      • relevance_score number Required
      • text string
POST /_inference/rerank/{inference_id}
POST _inference/rerank/cohere_rerank
{
  "input": ["luke", "like", "leia", "chewy","r2d2", "star", "wars"],
  "query": "star wars main character"
}
resp = client.inference.rerank(
    inference_id="cohere_rerank",
    input=[
        "luke",
        "like",
        "leia",
        "chewy",
        "r2d2",
        "star",
        "wars"
    ],
    query="star wars main character",
)
const response = await client.inference.rerank({
  inference_id: "cohere_rerank",
  input: ["luke", "like", "leia", "chewy", "r2d2", "star", "wars"],
  query: "star wars main character",
});
response = client.inference.rerank(
  inference_id: "cohere_rerank",
  body: {
    "input": [
      "luke",
      "like",
      "leia",
      "chewy",
      "r2d2",
      "star",
      "wars"
    ],
    "query": "star wars main character"
  }
)
$resp = $client->inference()->rerank([
    "inference_id" => "cohere_rerank",
    "body" => [
        "input" => array(
            "luke",
            "like",
            "leia",
            "chewy",
            "r2d2",
            "star",
            "wars",
        ),
        "query" => "star wars main character",
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"input":["luke","like","leia","chewy","r2d2","star","wars"],"query":"star wars main character"}' "$ELASTICSEARCH_URL/_inference/rerank/cohere_rerank"
client.inference().rerank(r -> r
    .inferenceId("cohere_rerank")
    .input(List.of("luke","like","leia","chewy","r2d2","star","wars"))
    .query("star wars main character")
);
Request examples
Run `POST _inference/rerank/cohere_rerank` to perform reranking on the example input.
{
  "input": ["luke", "like", "leia", "chewy","r2d2", "star", "wars"],
  "query": "star wars main character"
}
Run `POST _inference/rerank/bge-reranker-base-mkn` to perform reranking on the example input via Hugging Face, returning only the top 2 scores without the document text.
{
  "input": ["luke", "like", "leia", "chewy","r2d2", "star", "wars"],
  "query": "star wars main character",
  "return_documents": false,
  "top_n": 2
}
Run `POST _inference/rerank/bge-reranker-base-mkn` to perform reranking on the example input via Hugging Face, returning the top 3 results with the document text included.
{
  "input": ["luke", "like", "leia", "chewy","r2d2", "star", "wars"],
  "query": "star wars main character",
  "return_documents": true,
  "top_n": 3
}
Response examples (200)
A successful response from `POST _inference/rerank/cohere_rerank`.
{
  "rerank": [
    {
      "index": 2,
      "relevance_score": 0.011597361,
      "text": "leia"
    },
    {
      "index": 0,
      "relevance_score": 0.006338922,
      "text": "luke"
    },
    {
      "index": 5,
      "relevance_score": 0.0016166499,
      "text": "star"
    },
    {
      "index": 4,
      "relevance_score": 0.0011695103,
      "text": "r2d2"
    },
    {
      "index": 1,
      "relevance_score": 5.614787E-4,
      "text": "like"
    },
    {
      "index": 6,
      "relevance_score": 3.7850367E-4,
      "text": "wars"
    },
    {
      "index": 3,
      "relevance_score": 1.2508839E-5,
      "text": "chewy"
    }
  ]
}
A successful response from `POST _inference/rerank/bge-reranker-base-mkn`.
{
  "rerank": [
    {
      "index": 6,
      "relevance_score": 0.50955844
    },
    {
      "index": 5,
      "relevance_score": 0.084341794
    }
  ]
}
A successful response from `POST _inference/rerank/bge-reranker-base-mkn`.
{
  "rerank": [
    {
      "index": 6,
      "relevance_score": 0.50955844,
      "text": "wars"
    },
    {
      "index": 5,
      "relevance_score": 0.084341794,
      "text": "star"
    },
    {
      "index": 3,
      "relevance_score": 0.004520818,
      "text": "chewy"
    }
  ]
}
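
Because each result carries the original `index` of the input document, the ranked texts can be recovered client-side even when the service does not echo them back. A minimal Python sketch, assuming the `cohere_rerank` endpoint and `client` instance from the examples above:
docs = ["luke", "like", "leia", "chewy", "r2d2", "star", "wars"]
resp = client.inference.rerank(
    inference_id="cohere_rerank",
    input=docs,
    query="star wars main character",
)
# Results are sorted by relevance; map each one back to its input text.
for result in resp["rerank"]:
    print(result["relevance_score"], docs[result["index"]])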








Perform text embedding inference on the service Generally available; Added in 8.11.0

POST /_inference/text_embedding/{inference_id}

Path parameters

  • inference_id string Required

    The inference ID

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference request to complete.

    Values are -1 or 0.

application/json

Body

  • input string | array[string] Required

    The text on which you want to perform the inference task. It can be a single string or an array.

  • task_settings object

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • text_embedding_bytes array[object]

      The text embedding result object for byte representation

      Hide text_embedding_bytes attribute Show text_embedding_bytes attribute object
      • embedding array[number] Required

        Text Embedding results containing bytes are represented as Dense Vectors of bytes.

    • text_embedding_bits array[object]

      The text embedding result object for bit representation

      Hide text_embedding_bits attribute Show text_embedding_bits attribute object
      • embedding array[number] Required

        Text Embedding results containing bits are represented as Dense Vectors of bytes.

    • text_embedding array[object]

      The text embedding result object

      Hide text_embedding attribute Show text_embedding attribute object
      • embedding array[number] Required

        Text Embedding results are represented as Dense Vectors of floats.

POST /_inference/text_embedding/{inference_id}
POST _inference/text_embedding/my-cohere-endpoint
{
  "input": "The sky above the port was the color of television tuned to a dead channel.",
  "task_settings": {
    "input_type": "ingest"
  }
}
resp = client.inference.text_embedding(
    inference_id="my-cohere-endpoint",
    input="The sky above the port was the color of television tuned to a dead channel.",
    task_settings={
        "input_type": "ingest"
    },
)
const response = await client.inference.textEmbedding({
  inference_id: "my-cohere-endpoint",
  input:
    "The sky above the port was the color of television tuned to a dead channel.",
  task_settings: {
    input_type: "ingest",
  },
});
response = client.inference.text_embedding(
  inference_id: "my-cohere-endpoint",
  body: {
    "input": "The sky above the port was the color of television tuned to a dead channel.",
    "task_settings": {
      "input_type": "ingest"
    }
  }
)
$resp = $client->inference()->textEmbedding([
    "inference_id" => "my-cohere-endpoint",
    "body" => [
        "input" => "The sky above the port was the color of television tuned to a dead channel.",
        "task_settings" => [
            "input_type" => "ingest",
        ],
    ],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"input":"The sky above the port was the color of television tuned to a dead channel.","task_settings":{"input_type":"ingest"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/my-cohere-endpoint"
client.inference().textEmbedding(t -> t
    .inferenceId("my-cohere-endpoint")
    .input("The sky above the port was the color of television tuned to a dead channel.")
    .taskSettings(JsonData.fromJson("{\"input_type\":\"ingest\"}"))
);
Request example
Run `POST _inference/text_embedding/my-cohere-endpoint` to perform text embedding on the example sentence using the Cohere integration.
{
  "input": "The sky above the port was the color of television tuned to a dead channel.",
  "task_settings": {
    "input_type": "ingest"
  }
}
Response examples (200)
An abbreviated response from `POST _inference/text_embedding/my-cohere-endpoint`.
{
  "text_embedding": [
    {
      "embedding": [
        0.018569946,
        -0.036895752,
        0.01486969,
        -0.0045204163,
        -0.04385376,
        0.0075950623,
        0.04260254,
        -0.004005432,
        0.007865906,
        0.030792236,
        -0.050476074,
        0.011795044,
        -0.011642456,
        -0.010070801
      ]
    }
  ]
}
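
The embedding itself is the `embedding` array of each `text_embedding` result, one result per input string. A minimal Python sketch, assuming the `my-cohere-endpoint` endpoint and `client` instance from the example above:
resp = client.inference.text_embedding(
    inference_id="my-cohere-endpoint",
    input="The sky above the port was the color of television tuned to a dead channel.",
    task_settings={"input_type": "ingest"},
)
# A single input string yields a single dense float vector.
vector = resp["text_embedding"][0]["embedding"]
print(len(vector), vector[:4])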









Ingest

Ingest APIs enable you to manage tasks and resources related to ingest pipelines and processors.









Delete GeoIP database configurations Generally available; Added in 8.15.0

DELETE /_ingest/geoip/database/{id}

Delete one or more IP geolocation database configurations.

Path parameters

  • id string | array[string] Required

    A comma-separated list of GeoIP database configurations to delete.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ingest/geoip/database/{id}
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ingest/geoip/database/{id}"
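
The language clients expose this operation as well. A minimal Python sketch, assuming an 8.15+ client and a hypothetical configuration ID:
# "my-database-id" is a placeholder for an existing GeoIP database
# configuration; comma-separated lists of IDs are also accepted.
resp = client.ingest.delete_geoip_database(
    id="my-database-id",
)
print(resp["acknowledged"])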








Delete IP geolocation database configurations Generally available; Added in 8.15.0

DELETE /_ingest/ip_location/database/{id}

Required authorization

  • Cluster privileges: manage

Path parameters

  • id string | array[string] Required

    A comma-separated list of IP location database configurations.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. A value of -1 indicates that the request should never time out.

    Values are -1 or 0.

  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. A value of -1 indicates that the request should never time out.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ingest/ip_location/database/{id}
DELETE /_ingest/ip_location/database/my-database-id
resp = client.ingest.delete_ip_location_database(
    id="my-database-id",
)
const response = await client.ingest.deleteIpLocationDatabase({
  id: "my-database-id",
});
response = client.ingest.delete_ip_location_database(
  id: "my-database-id"
)
$resp = $client->ingest()->deleteIpLocationDatabase([
    "id" => "my-database-id",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ingest/ip_location/database/my-database-id"
client.ingest().deleteIpLocationDatabase(d -> d
    .id("my-database-id")
);









































Get the basic license status Generally available; Added in 6.3.0

GET /_license/basic_status

Required authorization

  • Cluster privileges: monitor

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • eligible_to_start_basic boolean Required
GET /_license/basic_status
resp = client.license.get_basic_status()
const response = await client.license.getBasicStatus();
response = client.license.get_basic_status
$resp = $client->license()->getBasicStatus();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_license/basic_status"
client.license().getBasicStatus();
Response examples (200)
A successful response from `GET /_license/basic_status`.
{
  "eligible_to_start_basic": true
}




Start a basic license Generally available; Added in 6.3.0

POST /_license/start_basic

Start an indefinite basic license, which gives access to all the basic features.

NOTE: In order to start a basic license, you must not currently have a basic license.

However, if the basic license does not support all of the features that are available with your current license, you are notified in the response. You must then re-submit the API request with the acknowledge parameter set to true.

To check the status of your basic license, use the get basic license API.

Required authorization

  • Cluster privileges: manage

Query parameters

  • acknowledge boolean

    Whether the user has acknowledged the messages about feature changes (default: false).

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • acknowledged boolean Required
    • basic_was_started boolean Required
    • error_message string
    • type string

      Values are missing, trial, basic, standard, dev, silver, gold, platinum, or enterprise.

    • acknowledge object
POST /_license/start_basic?acknowledge=true
resp = client.license.post_start_basic(
    acknowledge=True,
)
const response = await client.license.postStartBasic({
  acknowledge: "true",
});
response = client.license.post_start_basic(
  acknowledge: "true"
)
$resp = $client->license()->postStartBasic([
    "acknowledge" => "true",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_license/start_basic?acknowledge=true"
client.license().postStartBasic(p -> p
    .acknowledge(true)
);
Response examples (200)
A successful response from `POST /_license/start_basic?acknowledge=true`. If you currently have a license with more features than a basic license and you start a basic license, you must pass the acknowledge parameter.
{
  "basic_was_started": true,
  "acknowledged": true
}
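
If the request is sent without `acknowledge=true` while the current license has more features than basic, the response reports `basic_was_started: false` along with the feature notices, and the request must be resubmitted. A minimal retry sketch in Python, assuming the `client` instance from the examples above; the response handling is illustrative:
resp = client.license.post_start_basic()
body = resp.body
if not body["basic_was_started"]:
    # Review the feature notices, then resubmit with acknowledge=true.
    print(body.get("error_message"))
    resp = client.license.post_start_basic(acknowledge=True)
print(resp["acknowledged"], resp["basic_was_started"])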





Get Logstash pipelines Generally available; Added in 7.12.0

GET /_logstash/pipeline/{id}

All methods and paths for this operation:

GET /_logstash/pipeline

GET /_logstash/pipeline/{id}

Get pipelines that are used for Logstash Central Management.

Required authorization

  • Cluster privileges: manage_logstash_pipelines
External documentation

Path parameters

  • id string | array[string] Required

    A comma-separated list of pipeline identifiers.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • description string Required

        A description of the pipeline. This description is not used by Elasticsearch or Logstash.

      • last_modified string | number Required

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        One of:
      • pipeline string Required

        The configuration for the pipeline.

        External documentation
      • pipeline_metadata object Required
        Hide pipeline_metadata attributes Show pipeline_metadata attributes object
        • type string Required
        • version string Required
      • pipeline_settings object Required
        Hide pipeline_settings attributes Show pipeline_settings attributes object
        • pipeline.workers number Required

          The number of workers that will, in parallel, execute the filter and output stages of the pipeline.

        • pipeline.batch.size number Required

          The maximum number of events an individual worker thread will collect from inputs before attempting to execute its filters and outputs.

        • pipeline.batch.delay number Required

          When creating pipeline event batches, how long in milliseconds to wait for each event before dispatching an undersized batch to pipeline workers.

        • queue.type string Required

          The internal queuing model to use for event buffering.

        • queue.max_bytes string Required

          The total capacity of the queue (queue.type: persisted) in bytes.

        • queue.checkpoint.writes number Required

          The maximum number of written events before forcing a checkpoint when persistent queues are enabled (queue.type: persisted).

      • username string Required

        The user who last updated the pipeline.

GET _logstash/pipeline/my_pipeline
resp = client.logstash.get_pipeline(
    id="my_pipeline",
)
const response = await client.logstash.getPipeline({
  id: "my_pipeline",
});
response = client.logstash.get_pipeline(
  id: "my_pipeline"
)
$resp = $client->logstash()->getPipeline([
    "id" => "my_pipeline",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_logstash/pipeline/my_pipeline"
client.logstash().getPipeline(g -> g
    .id("my_pipeline")
);
Response examples (200)
A successful response from `GET _logstash/pipeline/my_pipeline`.
{
  "my_pipeline": {
    "description": "Sample pipeline for illustration purposes",
    "last_modified": "2021-01-02T02:50:51.250Z",
    "pipeline_metadata": {
      "type": "logstash_pipeline",
      "version": "1"
    },
    "username": "elastic",
    "pipeline": "input {}\\n filter { grok {} }\\n output {}",
    "pipeline_settings": {
      "pipeline.workers": 1,
      "pipeline.batch.size": 125,
      "pipeline.batch.delay": 50,
      "queue.type": "memory",
      "queue.max_bytes": "1gb",
      "queue.checkpoint.writes": 1024
    }
  }
}
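
Because the response is keyed by pipeline ID, several pipelines can be inspected in one call. A minimal Python sketch, assuming the `client` instance from the examples above; omitting `id` fetches every pipeline:
resp = client.logstash.get_pipeline()  # no id: fetch all pipelines
for pipeline_id, pipeline in resp.body.items():
    print(pipeline_id, pipeline["description"])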








Machine learning

Get machine learning memory usage info Generally available; Added in 8.2.0

GET /_ml/memory/{node_id}/_stats

All methods and paths for this operation:

GET /_ml/memory/_stats

GET /_ml/memory/{node_id}/_stats

Get information about how machine learning jobs and trained models are using memory, on each node, both within the JVM heap, and natively, outside of the JVM.

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • node_id string Required

    The names of particular nodes in the cluster to target. For example, nodeId1,nodeId2 or ml:true.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • _nodes object Required

      Contains statistics about the number of nodes selected by the request.

      Hide _nodes attributes Show _nodes attributes object
      • failures array[object]

        Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        Hide failures attributes Show failures attributes object
        • type string Required

          The type of error

        • reason string | null

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • root_cause array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

        • suppressed array[object]

          Cause and details about a request failure. This class defines the properties common to all error types. Additional details that depend on the error type are also provided.

      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      Hide nodes attribute Show nodes attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • attributes object Required
          Hide attributes attribute Show attributes attribute object
          • * string Additional properties
        • jvm object Required
          Hide jvm attributes Show jvm attributes object
          • heap_max number | string

          • heap_max_in_bytes number Required

            Maximum amount of memory, in bytes, available for use by the heap.

          • java_inference number | string

          • java_inference_in_bytes number Required

            Amount of Java heap, in bytes, currently being used for caching inference models.

          • java_inference_max number | string

          • java_inference_max_in_bytes number Required

            Maximum amount of Java heap, in bytes, to be used for caching inference models.

        • mem object Required
          Hide mem attributes Show mem attributes object
          • adjusted_total number | string

          • adjusted_total_in_bytes number Required

            If the amount of physical memory has been overridden using the es.total_memory_bytes system property then this reports the overridden value in bytes. Otherwise it reports the same value as total_in_bytes.

          • total number | string

          • total_in_bytes number Required

            Total amount of physical memory in bytes.

          • ml object Required
            Hide ml attributes Show ml attributes object
            • anomaly_detectors number | string

            • anomaly_detectors_in_bytes number Required

              Amount of native memory, in bytes, set aside for anomaly detection jobs.

            • data_frame_analytics number | string

            • data_frame_analytics_in_bytes number Required

              Amount of native memory, in bytes, set aside for data frame analytics jobs.

            • max number | string

            • max_in_bytes number Required

              Maximum amount of native memory (separate to the JVM heap), in bytes, that may be used by machine learning native processes.

            • native_code_overhead number | string

            • native_code_overhead_in_bytes number Required

              Amount of native memory, in bytes, set aside for loading machine learning native code shared libraries.

            • native_inference number | string

            • native_inference_in_bytes number Required

              Amount of native memory, in bytes, set aside for trained models that have a PyTorch model_type.

        • name string Required
        • roles array[string] Required

          Roles assigned to the node.

        • transport_address string Required
        • ephemeral_id string Required
GET /_ml/memory/{node_id}/_stats
GET _ml/memory/_stats?human
resp = client.ml.get_memory_stats(
    human=True,
)
const response = await client.ml.getMemoryStats({
  human: "true",
});
response = client.ml.get_memory_stats(
  human: "true"
)
$resp = $client->ml()->getMemoryStats([
    "human" => "true",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/memory/_stats?human"
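
Native ML memory for each node is nested under `mem.ml` in the response. A minimal Python sketch that prints each node's ML memory ceiling, assuming the `client` instance from the examples above; field names follow the response schema:
resp = client.ml.get_memory_stats()
for node_id, node in resp["nodes"].items():
    ml = node["mem"]["ml"]
    print(node["name"], ml["max_in_bytes"], ml["native_inference_in_bytes"])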

















Get calendar configuration info Generally available; Added in 6.2.0

POST /_ml/calendars/{calendar_id}

All methods and paths for this operation:

GET /_ml/calendars

POST /_ml/calendars
GET /_ml/calendars/{calendar_id}
POST /_ml/calendars/{calendar_id}

Required authorization

  • Cluster privileges: monitor_ml

Path parameters

  • calendar_id string Required

    A string that uniquely identifies a calendar. You can get information for multiple calendars by using a comma-separated list of ids or a wildcard expression. You can get information for all calendars by using _all or * or by omitting the calendar identifier.

Query parameters

  • from number

    Skips the specified number of calendars. This parameter is supported only when you omit the calendar identifier.

  • size number

    Specifies the maximum number of calendars to obtain. This parameter is supported only when you omit the calendar identifier.

application/json

Body

  • page object
    Hide page attributes Show page attributes object
    • from number

      Skips the specified number of items.

      Default value is 0.

    • size number

      Specifies the maximum number of items to obtain.

      Default value is 10000.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • calendars array[object] Required
      Hide calendars attributes Show calendars attributes object
      • calendar_id string Required
      • description string

        A description of the calendar.

      • job_ids array[string] Required

        An array of anomaly detection job identifiers.

    • count number Required
POST /_ml/calendars/{calendar_id}
GET _ml/calendars/planned-outages
resp = client.ml.get_calendars(
    calendar_id="planned-outages",
)
const response = await client.ml.getCalendars({
  calendar_id: "planned-outages",
});
response = client.ml.get_calendars(
  calendar_id: "planned-outages"
)
$resp = $client->ml()->getCalendars([
    "calendar_id" => "planned-outages",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/calendars/planned-outages"
client.ml().getCalendars(g -> g
    .calendarId("planned-outages")
);
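
When the calendar identifier is omitted, the `from` and `size` query parameters (or the `page` body object) page through the result set. A minimal Python sketch, assuming the `client` instance from the examples above:
# Fetch the first 50 calendars; "from_" is the Python client's name
# for the "from" query parameter.
resp = client.ml.get_calendars(from_=0, size=50)
print(resp["count"])
for calendar in resp["calendars"]:
    print(calendar["calendar_id"], calendar.get("description", ""))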

Delete a calendar Generally available; Added in 6.2.0

DELETE /_ml/calendars/{calendar_id}

Remove all scheduled events from a calendar, then delete it.

Required authorization

  • Cluster privileges: manage_ml

Path parameters

  • calendar_id string Required

    A string that uniquely identifies a calendar.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ml/calendars/{calendar_id}
DELETE _ml/calendars/planned-outages
resp = client.ml.delete_calendar(
    calendar_id="planned-outages",
)
const response = await client.ml.deleteCalendar({
  calendar_id: "planned-outages",
});
response = client.ml.delete_calendar(
  calendar_id: "planned-outages"
)
$resp = $client->ml()->deleteCalendar([
    "calendar_id" => "planned-outages",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/calendars/planned-outages"
client.ml().deleteCalendar(d -> d
    .calendarId("planned-outages")
);
Response examples (200)
A successful response when deleting a calendar.
{
  "acknowledged": true
}