Authentication

API key auth (http_api_key)

Elasticsearch APIs use key-based authentication. You must create an API key and use the encoded value in the request header. For example:

curl -X GET "${ES_URL}/_cat/indices?v=true" \
  -H "Authorization: ApiKey ${API_KEY}"

For more information about where to find API keys for the Elasticsearch endpoint (${ES_URL}) for a project, go to Get started with Elasticsearch Serverless.
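
You can also create a key programmatically with the create API key endpoint, if your deployment exposes the security APIs. A minimal sketch, assuming you already have a credential that is allowed to manage API keys (the key name here is illustrative); the `encoded` value in the response is what you pass in the `Authorization: ApiKey` header:

curl -X POST "${ES_URL}/_security/api_key" \
  -H "Authorization: ApiKey ${EXISTING_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-api-key"}'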

Get behavioral analytics collections Deprecated Technical preview

GET /_application/analytics/{name}

Path parameters

  • name array[string] Required

    A list of analytics collections to limit the returned information.

Responses

  • 200 application/json
    • * object Additional properties
      • event_data_stream object Required
        • name string Required
GET /_application/analytics/{name}
curl \
 --request GET 'http://api.example.com/_application/analytics/{name}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _application/analytics/my*`.
{
  "my_analytics_collection": {
      "event_data_stream": {
          "name": "behavioral_analytics-events-my_analytics_collection"
      }
  },
  "my_analytics_collection2": {
      "event_data_stream": {
          "name": "behavioral_analytics-events-my_analytics_collection2"
      }
  }
}

Create a behavioral analytics collection Deprecated Technical preview

PUT /_application/analytics/{name}

Path parameters

  • name string Required

    The name of the analytics collection to be created or updated.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

    • name string Required
PUT /_application/analytics/{name}
curl \
 --request PUT 'http://api.example.com/_application/analytics/{name}' \
 --header "Authorization: $API_KEY"
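
No response example is shown for this endpoint. Based on the response attributes above, a successful call should return a body along these lines (a sketch; the collection name is illustrative):

{
  "acknowledged": true,
  "name": "my_analytics_collection"
}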

Get aliases

GET /_cat/aliases/{name}

Get the cluster's index aliases, including filter and routing information. This API does not return data stream aliases.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.

Path parameters

  • name string | array[string] Required

    A comma-separated list of aliases to retrieve. Supports wildcards (*). To retrieve all aliases, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never time out, set it to -1.

    Values are -1 or 0.

Responses

GET /_cat/aliases/{name}
curl \
 --request GET 'http://api.example.com/_cat/aliases/{name}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/aliases?format=json&v=true`. This response shows that `alias2` has a configured filter, and `alias3` and `alias4` have routing configurations.
[
  {
    "alias": "alias1",
    "index": "test1",
    "filter": "-",
    "routing.index": "-",
    "routing.search": "-",
    "is_write_index": "true"
  },
  {
    "alias": "alias1",
    "index": "test1",
    "filter": "*",
    "routing.index": "-",
    "routing.search": "-",
    "is_write_index": "true"
  },
  {
    "alias": "alias3",
    "index": "test1",
    "filter": "-",
    "routing.index": "1",
    "routing.search": "1",
    "is_write_index": "true"
  },
  {
    "alias": "alias4",
    "index": "test1",
    "filter": "-",
    "routing.index": "2",
    "routing.search": "1,2",
    "is_write_index": "true"
  }
]
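
The `h` and `s` query parameters can be combined to trim and reorder this output. For example, a sketch that returns only the alias and index columns, sorted by alias in descending order:

curl -X GET "${ES_URL}/_cat/aliases?h=alias,index&s=alias:desc&v=true" \
  -H "Authorization: ApiKey ${API_KEY}"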

Get component templates Added in 5.1.0

GET /_cat/component_templates

Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node sends requests for further information to each selected node.

  • master_timeout string

    The period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

GET /_cat/component_templates
curl \
 --request GET 'http://api.example.com/_cat/component_templates' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/component_templates/my-template-*?v=true&s=name&format=json`.
[
  {
    "name": "my-template-1",
    "version": "null",
    "alias_count": "0",
    "mapping_count": "0",
    "settings_count": "1",
    "metadata_count": "0",
    "included_in": "[my-index-template]"
  },
  {
    "name": "my-template-2",
    "version": null,
    "alias_count": "0",
    "mapping_count": "3",
    "settings_count": "0",
    "metadata_count": "0",
    "included_in": "[my-index-template]"
  }
]

Get component templates Added in 5.1.0

GET /_cat/component_templates/{name}

Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.

Path parameters

  • name string Required

    The name of the component template. It accepts wildcard expressions. If it is omitted, all component templates are returned.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node sends requests for further information to each selected node.

  • master_timeout string

    The period to wait for a connection to the master node.

    Values are -1 or 0.

Responses

GET /_cat/component_templates/{name}
curl \
 --request GET 'http://api.example.com/_cat/component_templates/{name}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/component_templates/my-template-*?v=true&s=name&format=json`.
[
  {
    "name": "my-template-1",
    "version": "null",
    "alias_count": "0",
    "mapping_count": "0",
    "settings_count": "1",
    "metadata_count": "0",
    "included_in": "[my-index-template]"
  },
  {
    "name": "my-template-2",
    "version": null,
    "alias_count": "0",
    "mapping_count": "3",
    "settings_count": "0",
    "metadata_count": "0",
    "included_in": "[my-index-template]"
  }
]

Get a document count

GET /_cat/count/{index}

Get quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process.

IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases used to limit the request. It supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

  • 200 application/json
    • epoch number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). This union type is used to capture this behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • timestamp string

      Time of day, expressed as HH:MM:SS.

    • count string

      The document count.

GET /_cat/count/{index}
curl \
 --request GET 'http://api.example.com/_cat/count/{index}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/count/my-index-000001?v=true&format=json`. It retrieves the document count for the `my-index-000001` data stream or index.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "120"
  }
]
A successful response from `GET /_cat/count?v=true&format=json`. It retrieves the document count for all data streams and indices in the cluster.
[
  {
    "epoch": "1475868259",
    "timestamp": "15:24:20",
    "count": "121"
  }
]
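
Because cat APIs return numeric columns such as `count` and `epoch` as strings in the JSON format, applications may need to parse them leniently, as noted in the response attributes above. A shell sketch, assuming `jq` is installed and `${ES_URL}` and `${API_KEY}` are set:

curl -s "${ES_URL}/_cat/count/my-index-000001?format=json" \
  -H "Authorization: ApiKey ${API_KEY}" \
  | jq '.[0].count | tonumber'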

Get CAT help

GET /_cat

Get help for the CAT APIs.

Responses

GET /_cat
curl \
 --request GET 'http://api.example.com/_cat' \
 --header "Authorization: $API_KEY"

Get index information

GET /_cat/indices/{index}

Get high-level information about indices in a cluster, including backing indices for data streams.

Use this request to get the following information for each index in a cluster:

  • shard count
  • document count
  • deleted document count
  • primary store size
  • total store size of all shards, including shard replicas

These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs.

CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • health string

    The health status used to limit returned indices. By default, the response includes indices of any health status.

    Supported values include:

    • green (or GREEN): All shards are assigned.
    • yellow (or YELLOW): All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.
    • red (or RED): One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.

    Values are green, GREEN, yellow, YELLOW, red, or RED.

  • include_unloaded_segments boolean

    If true, the response includes information from segments that are not loaded into memory.

  • pri boolean

    If true, the response only includes information from primary shards.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • master_timeout string

    Period to wait for a connection to the master node.

    Values are -1 or 0.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

Responses

GET /_cat/indices/{index}
curl \
 --request GET 'http://api.example.com/_cat/indices/{index}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/indices/my-index-*?v=true&s=index&format=json`.
[
  {
    "health": "yellow",
    "status": "open",
    "index": "my-index-000001",
    "uuid": "u8FNjxh8Rfy_awN11oDKYQ",
    "pri": "1",
    "rep": "1",
    "docs.count": "1200",
    "docs.deleted": "0",
    "store.size": "88.1kb",
    "pri.store.size": "88.1kb",
    "dataset.size": "88.1kb"
  },
  {
    "health": "green",
    "status": "open",
    "index": "my-index-000002",
    "uuid": "nYFWZEO7TUiOjLQXBaYJpA ",
    "pri": "1",
    "rep": "0",
    "docs.count": "0",
    "docs.deleted": "0",
    "store.size": "260b",
    "pri.store.size": "260b",
    "dataset.size": "260b"
  }
]
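
The query parameters above can be combined to narrow the listing. For example, a sketch that lists only indices with yellow health and reports byte values in megabytes:

curl -X GET "${ES_URL}/_cat/indices?health=yellow&bytes=mb&v=true" \
  -H "Authorization: ApiKey ${API_KEY}"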

Get data frame analytics jobs Added in 7.7.0

GET /_cat/ml/data_frame/analytics

Get configuration and usage information about data frame analytics jobs.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.

Query parameters

  • allow_no_match boolean

    Whether to ignore if a wildcard expression matches no configs. (This includes the _all string or when no configs have been specified.)

  • bytes string

    The unit in which to display byte values

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • assignment_explanation (or ae): Contains messages relating to the selection of a node.
    • create_time (or ct, createTime): The time when the data frame analytics job was created.
    • description (or d): A description of a job.
    • dest_index (or di, destIndex): Name of the destination index.
    • failure_reason (or fr, failureReason): Contains messages about the reason why a data frame analytics job failed.
    • id: Identifier for the data frame analytics job.
    • model_memory_limit (or mml, modelMemoryLimit): The approximate maximum amount of memory resources that are permitted for the data frame analytics job.
    • node.address (or na, nodeAddress): The network address of the node that the data frame analytics job is assigned to.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that the data frame analytics job is assigned to.
    • node.id (or ni, nodeId): The unique identifier of the node that the data frame analytics job is assigned to.
    • node.name (or nn, nodeName): The name of the node that the data frame analytics job is assigned to.
    • progress (or p): The progress report of the data frame analytics job by phase.
    • source_index (or si, sourceIndex): Name of the source index.
    • state (or s): Current state of the data frame analytics job.
    • type (or t): The type of analysis that the data frame analytics job performs.
    • version (or v): The Elasticsearch version number in which the data frame analytics job was created.

    Values are assignment_explanation, ae, create_time, ct, createTime, description, d, dest_index, di, destIndex, failure_reason, fr, failureReason, id, model_memory_limit, mml, modelMemoryLimit, node.address, na, nodeAddress, node.ephemeral_id, ne, nodeEphemeralId, node.id, ni, nodeId, node.name, nn, nodeName, progress, p, source_index, si, sourceIndex, state, s, type, t, version, or v.

  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): Contains messages relating to the selection of a node.
    • create_time (or ct, createTime): The time when the data frame analytics job was created.
    • description (or d): A description of a job.
    • dest_index (or di, destIndex): Name of the destination index.
    • failure_reason (or fr, failureReason): Contains messages about the reason why a data frame analytics job failed.
    • id: Identifier for the data frame analytics job.
    • model_memory_limit (or mml, modelMemoryLimit): The approximate maximum amount of memory resources that are permitted for the data frame analytics job.
    • node.address (or na, nodeAddress): The network address of the node that the data frame analytics job is assigned to.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that the data frame analytics job is assigned to.
    • node.id (or ni, nodeId): The unique identifier of the node that the data frame analytics job is assigned to.
    • node.name (or nn, nodeName): The name of the node that the data frame analytics job is assigned to.
    • progress (or p): The progress report of the data frame analytics job by phase.
    • source_index (or si, sourceIndex): Name of the source index.
    • state (or s): Current state of the data frame analytics job.
    • type (or t): The type of analysis that the data frame analytics job performs.
    • version (or v): The Elasticsearch version number in which the data frame analytics job was created.

    Values are assignment_explanation, ae, create_time, ct, createTime, description, d, dest_index, di, destIndex, failure_reason, fr, failureReason, id, model_memory_limit, mml, modelMemoryLimit, node.address, na, nodeAddress, node.ephemeral_id, ne, nodeEphemeralId, node.id, ni, nodeId, node.name, nn, nodeName, progress, p, source_index, si, sourceIndex, state, s, type, t, version, or v.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

GET /_cat/ml/data_frame/analytics
curl \
 --request GET 'http://api.example.com/_cat/ml/data_frame/analytics' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/ml/data_frame/analytics?v=true&format=json`.
[
  {
    "id": "classifier_job_1",
    "type": "classification",
    "create_time": "2020-02-12T11:49:09.594Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_2",
    "type": "classification",
    "create_time": "2020-02-12T11:49:14.479Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_3",
    "type": "classification",
    "create_time": "2020-02-12T11:49:16.928Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_4",
    "type": "classification",
    "create_time": "2020-02-12T11:49:19.127Z",
    "state": "stopped"
  },
  {
    "id": "classifier_job_5",
    "type": "classification",
    "create_time": "2020-02-12T11:49:21.349Z",
    "state": "stopped"
  }
]
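
The column aliases listed above can trim and order this output. For example, a sketch that shows only the identifier, state, and progress columns, sorted by creation time:

curl -X GET "${ES_URL}/_cat/ml/data_frame/analytics?h=id,s,p&s=ct&v=true" \
  -H "Authorization: ApiKey ${API_KEY}"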

Get datafeeds Added in 7.7.0

GET /_cat/ml/datafeeds/{datafeed_id}

Get configuration and usage information about datafeeds. This API returns a maximum of 10,000 datafeeds. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.

Path parameters

  • datafeed_id string Required

    A numerical character string that uniquely identifies the datafeed.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no datafeeds that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.

    Values are ae, assignment_explanation, bc, buckets.count, bucketsCount, id, na, node.address, nodeAddress, ne, node.ephemeral_id, nodeEphemeralId, ni, node.id, nodeId, nn, node.name, nodeName, sba, search.bucket_avg, searchBucketAvg, sc, search.count, searchCount, seah, search.exp_avg_hour, searchExpAvgHour, st, search.time, searchTime, s, or state.

  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
    • bc (or buckets.count, bucketsCount): The number of buckets processed.
    • id: A numerical character string that uniquely identifies the datafeed.
    • na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
    • ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
    • ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
    • nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
    • sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
    • sc (or search.count, searchCount): The number of searches run by the datafeed.
    • seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
    • st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
    • s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.

    Values are ae, assignment_explanation, bc, buckets.count, bucketsCount, id, na, node.address, nodeAddress, ne, node.ephemeral_id, nodeEphemeralId, ni, node.id, nodeId, nn, node.name, nodeName, sba, search.bucket_avg, searchBucketAvg, sc, search.count, searchCount, seah, search.exp_avg_hour, searchExpAvgHour, st, search.time, searchTime, s, or state.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    • id string

      The datafeed identifier.

    • state string

      Values are started, stopped, starting, or stopping.

    • assignment_explanation string

      For started datafeeds only, contains messages relating to the selection of a node.

    • buckets.count string

      The number of buckets processed.

    • search.count string

      The number of searches run by the datafeed.

    • search.time string

      The total time the datafeed spent searching, in milliseconds.

    • search.bucket_avg string

      The average search time per bucket, in milliseconds.

    • search.exp_avg_hour string

      The exponential average search time per hour, in milliseconds.

    • node.id string

      The unique identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.name string

      The name of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.ephemeral_id string

      The ephemeral identifier of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

    • node.address string

      The network address of the assigned node. For started datafeeds only, this information pertains to the node upon which the datafeed is started.

GET /_cat/ml/datafeeds/{datafeed_id}
curl \
 --request GET 'http://api.example.com/_cat/ml/datafeeds/{datafeed_id}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/ml/datafeeds?v=true&format=json`.
[
  {
    "id": "datafeed-high_sum_total_sales",
    "state": "stopped",
    "buckets.count": "743",
    "search.count": "7"
  },
  {
    "id": "datafeed-low_request_rate",
    "state": "stopped",
    "buckets.count": "1457",
    "search.count": "3"
  },
  {
    "id": "datafeed-response_code_rates",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  },
  {
    "id": "datafeed-url_scanning",
    "state": "stopped",
    "buckets.count": "1460",
    "search.count": "18"
  }
]
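
As with the other cat APIs, the column aliases above can restrict the response. For example, a sketch that shows only the datafeed identifier, state, and search count:

curl -X GET "${ES_URL}/_cat/ml/datafeeds?h=id,s,sc&v=true" \
  -H "Authorization: ApiKey ${API_KEY}"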

Get anomaly detection jobs Added in 7.7.0

GET /_cat/ml/anomaly_detectors

Get configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no jobs that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job had irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

  • 200 application/json
    • id string
    • state string

      Values are closing, closed, opened, failed, or opening.

    • opened_time string

      For open jobs only, the amount of time the job has been opened.

    • assignment_explanation string

      For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.

    • data.processed_records string

      The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed_record_count is the number of aggregation results processed, not the number of Elasticsearch documents.

    • data.processed_fields string

      The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.

    • data.input_records string

      The number of input documents posted to the anomaly detection job.

    • data.input_fields string

      The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.

    • data.invalid_dates string

      The number of input documents with either a missing date field or a date that could not be parsed.

    • data.missing_fields string

      The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing. If you are using datafeeds or posting data to the job in JSON format, a high missing_field_count is often not an indication of data issues. It is not necessarily a cause for concern.

    • data.out_of_order_timestamps string

      The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.

    • data.empty_buckets string

      The number of buckets which did not contain any data. If your data contains many empty buckets, consider increasing your bucket_span or using functions that are tolerant to gaps in data such as mean, non_null_sum or non_zero_count.

    • data.sparse_buckets string

      The number of buckets that contained few data points compared to the expected number of data points. If your data contains many sparse buckets, consider using a longer bucket_span.

    • data.buckets string

      The total number of buckets processed.

    • data.earliest_record string

      The timestamp of the earliest chronologically input document.

    • data.latest_record string

      The timestamp of the latest chronologically input document.

    • data.last string

      The timestamp at which data was last analyzed, according to server time.

    • data.last_empty_bucket string

      The timestamp of the last bucket that did not contain any data.

    • data.last_sparse_bucket string

      The timestamp of the last bucket that was considered sparse.

    • model.memory_status string

      Values are ok, soft_limit, or hard_limit.

    • model.memory_limit string

      The upper limit for model memory usage, checked on increasing values.

    • model.by_fields string

      The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.over_fields string

      The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.partition_fields string

      The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.bucket_allocation_failures string

      The number of buckets for which new entities in incoming data were not processed due to insufficient model memory. This situation is also signified by a hard_limit: memory_status property value.

    • model.categorization_status string

      Values are ok or warn.

    • model.categorized_doc_count string

      The number of documents that have had a field categorized.

    • model.total_category_count string

      The number of categories created by categorization.

    • model.frequent_category_count string

      The number of categories that match more than 1% of categorized documents.

    • model.rare_category_count string

      The number of categories that match just one categorized document.

    • model.dead_category_count string

      The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.

    • model.failed_category_count string

      The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model_memory_limit. This count does not track which specific categories failed to be created. Therefore you cannot use this value to determine the number of unique categories that were missed.

    • model.log_time string

      The timestamp when the model stats were gathered, according to server time.

    • model.timestamp string

      The timestamp of the last record when the model stats were gathered.

    • forecasts.total string

      The number of individual forecasts currently available for the job. A value of one or more indicates that forecasts exist.

    • forecasts.memory.min string

      The minimum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.max string

      The maximum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.avg string

      The average memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.total string

      The total memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.records.min string

      The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.max string

      The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.avg string

      The average number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.total string

      The total number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.time.min string

      The minimum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.max string

      The maximum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.avg string

      The average runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.total string

      The total runtime in milliseconds for forecasts related to the anomaly detection job.

    • node.id string

    • node.name string

      The name of the assigned node.

    • node.address string

      The network address of the assigned node.

    • buckets.count string

      The number of bucket results produced by the job.

    • buckets.time.total string

      The sum of all bucket processing times, in milliseconds.

    • buckets.time.min string

      The minimum of all bucket processing times, in milliseconds.

    • buckets.time.max string

      The maximum of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg string

      The exponential moving average of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg_hour string

      The exponential moving average of bucket processing times calculated in a one hour time window, in milliseconds.

GET /_cat/ml/anomaly_detectors
curl \
 --request GET 'http://api.example.com/_cat/ml/anomaly_detectors' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json`.
[
  {
    "id": "high_sum_total_sales",
    "s": "closed",
    "dpr": "14022",
    "mb": "1.5mb"
  },
  {
    "id": "low_request_rate",
    "s": "closed",
    "dpr": "1216",
    "mb": "40.5kb"
  },
  {
    "id": "response_code_rates",
    "s": "closed",
    "dpr": "28146",
    "mb": "132.7kb"
  },
  {
    "id": "url_scanning",
    "s": "closed",
    "dpr": "28146",
    "mb": "501.6kb"
  }
]

Get anomaly detection jobs Added in 7.7.0

GET /_cat/ml/anomaly_detectors/{job_id}

Get configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no jobs that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job has irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • assignment_explanation (or ae): For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.
    • buckets.count (or bc, bucketsCount): The number of bucket results produced by the job.
    • buckets.time.exp_avg (or btea, bucketsTimeExpAvg): Exponential moving average of all bucket processing times, in milliseconds.
    • buckets.time.exp_avg_hour (or bteah, bucketsTimeExpAvgHour): Exponentially-weighted moving average of bucket processing times calculated in a 1 hour time window, in milliseconds.
    • buckets.time.max (or btmax, bucketsTimeMax): Maximum among all bucket processing times, in milliseconds.
    • buckets.time.min (or btmin, bucketsTimeMin): Minimum among all bucket processing times, in milliseconds.
    • buckets.time.total (or btt, bucketsTimeTotal): Sum of all bucket processing times, in milliseconds.
    • data.buckets (or db, dataBuckets): The number of buckets processed.
    • data.earliest_record (or der, dataEarliestRecord): The timestamp of the earliest chronologically input document.
    • data.empty_buckets (or deb, dataEmptyBuckets): The number of buckets which did not contain any data.
    • data.input_bytes (or dib, dataInputBytes): The number of bytes of input data posted to the anomaly detection job.
    • data.input_fields (or dif, dataInputFields): The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.
    • data.input_records (or dir, dataInputRecords): The number of input documents posted to the anomaly detection job.
    • data.invalid_dates (or did, dataInvalidDates): The number of input documents with either a missing date field or a date that could not be parsed.
    • data.last (or dl, dataLast): The timestamp at which data was last analyzed, according to server time.
    • data.last_empty_bucket (or dleb, dataLastEmptyBucket): The timestamp of the last bucket that did not contain any data.
    • data.last_sparse_bucket (or dlsb, dataLastSparseBucket): The timestamp of the last bucket that was considered sparse.
    • data.latest_record (or dlr, dataLatestRecord): The timestamp of the latest chronologically input document.
    • data.missing_fields (or dmf, dataMissingFields): The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing.
    • data.out_of_order_timestamps (or doot, dataOutOfOrderTimestamps): The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.
    • data.processed_fields (or dpf, dataProcessedFields): The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.
    • data.processed_records (or dpr, dataProcessedRecords): The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed record count is the number of aggregation results processed, not the number of Elasticsearch documents.
    • data.sparse_buckets (or dsb, dataSparseBuckets): The number of buckets that contained few data points compared to the expected number of data points.
    • forecasts.memory.avg (or fmavg, forecastsMemoryAvg): The average memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.max (or fmmax, forecastsMemoryMax): The maximum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.min (or fmmin, forecastsMemoryMin): The minimum memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.memory.total (or fmt, forecastsMemoryTotal): The total memory usage in bytes for forecasts related to the anomaly detection job.
    • forecasts.records.avg (or fravg, forecastsRecordsAvg): The average number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.max (or frmax, forecastsRecordsMax): The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.min (or frmin, forecastsRecordsMin): The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.records.total (or frt, forecastsRecordsTotal): The total number of model_forecast documents written for forecasts related to the anomaly detection job.
    • forecasts.time.avg (or ftavg, forecastsTimeAvg): The average runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.max (or ftmax, forecastsTimeMax): The maximum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.min (or ftmin, forecastsTimeMin): The minimum runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.time.total (or ftt, forecastsTimeTotal): The total runtime in milliseconds for forecasts related to the anomaly detection job.
    • forecasts.total (or ft, forecastsTotal): The number of individual forecasts currently available for the job.
    • id: Identifier for the anomaly detection job.
    • model.bucket_allocation_failures (or mbaf, modelBucketAllocationFailures): The number of buckets for which new entities in incoming data were not processed due to insufficient model memory.
    • model.by_fields (or mbf, modelByFields): The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.bytes (or mb, modelBytes): The number of bytes of memory used by the models. This is the maximum value since the last time the model was persisted. If the job is closed, this value indicates the latest size.
    • model.bytes_exceeded (or mbe, modelBytesExceeded): The number of bytes over the high limit for memory usage at the last allocation failure.
    • model.categorization_status (or mcs, modelCategorizationStatus): The status of categorization for the job: ok or warn. If ok, categorization is performing acceptably well (or not being used at all). If warn, categorization is detecting a distribution of categories that suggests the input data is inappropriate for categorization. Problems could be that there is only one category, more than 90% of categories are rare, the number of categories is greater than 50% of the number of categorized documents, there are no frequently matched categories, or more than 50% of categories are dead.
    • model.categorized_doc_count (or mcdc, modelCategorizedDocCount): The number of documents that have had a field categorized.
    • model.dead_category_count (or mdcc, modelDeadCategoryCount): The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.
    • model.failed_category_count (or mdcc, modelFailedCategoryCount): The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model memory limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.
    • model.frequent_category_count (or mfcc, modelFrequentCategoryCount): The number of categories that match more than 1% of categorized documents.
    • model.log_time (or mlt, modelLogTime): The timestamp when the model stats were gathered, according to server time.
    • model.memory_limit (or mml, modelMemoryLimit): The upper limit for model memory usage, checked on increasing values.
    • model.memory_status (or mms, modelMemoryStatus): The status of the mathematical models: ok, soft_limit, or hard_limit. If ok, the models stayed below the configured value. If soft_limit, the models used more than 60% of the configured memory limit and older unused models will be pruned to free up space. Additionally, in categorization jobs no further category examples will be stored. If hard_limit, the models used more space than the configured memory limit. As a result, not all incoming data was processed.
    • model.over_fields (or mof, modelOverFields): The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.partition_fields (or mpf, modelPartitionFields): The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.
    • model.rare_category_count (or mrcc, modelRareCategoryCount): The number of categories that match just one categorized document.
    • model.timestamp (or mt, modelTimestamp): The timestamp of the last record when the model stats were gathered.
    • model.total_category_count (or mtcc, modelTotalCategoryCount): The number of categories created by categorization.
    • node.address (or na, nodeAddress): The network address of the node that runs the job. This information is available only for open jobs.
    • node.ephemeral_id (or ne, nodeEphemeralId): The ephemeral ID of the node that runs the job. This information is available only for open jobs.
    • node.id (or ni, nodeId): The unique identifier of the node that runs the job. This information is available only for open jobs.
    • node.name (or nn, nodeName): The name of the node that runs the job. This information is available only for open jobs.
    • opened_time (or ot): For open jobs only, the elapsed time for which the job has been open.
    • state (or s): The status of the anomaly detection job: closed, closing, failed, opened, or opening. If closed, the job finished successfully with its model state persisted. The job must be opened before it can accept further data. If closing, the job close action is in progress and has not yet completed. A closing job cannot accept further data. If failed, the job did not finish successfully due to an error. This situation can occur due to invalid input data, a fatal error occurring during the analysis, or an external interaction such as the process being killed by the Linux out of memory (OOM) killer. If the job has irrevocably failed, it must be force closed and then deleted. If the datafeed can be corrected, the job can be closed and then re-opened. If opened, the job is available to receive and process data. If opening, the job open action is in progress and has not yet completed.
  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.
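For example, a request along the following lines (the column selection is illustrative; any supported columns can be combined) returns a few columns sorted by job id, with byte values shown in megabytes:

curl \
 --request GET 'http://api.example.com/_cat/ml/anomaly_detectors/{job_id}?h=id,state,data.processed_records,model.bytes&s=id&bytes=mb&v=true' \
 --header "Authorization: $API_KEY"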

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • state string

      Values are closing, closed, opened, failed, or opening.

    • opened_time string

      For open jobs only, the amount of time the job has been opened.

    • assignment_explanation string

      For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.

    • data.processed_records string

      The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed_record_count is the number of aggregation results processed, not the number of Elasticsearch documents.

    • data.processed_fields string

      The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.

    • data.input_records string

      The number of input documents posted to the anomaly detection job.

    • data.input_fields string

      The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.

    • data.invalid_dates string

      The number of input documents with either a missing date field or a date that could not be parsed.

    • data.missing_fields string

      The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing. If you are using datafeeds or posting data to the job in JSON format, a high missing_field_count is often not an indication of data issues. It is not necessarily a cause for concern.

    • data.out_of_order_timestamps string

      The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.

    • data.empty_buckets string

      The number of buckets which did not contain any data. If your data contains many empty buckets, consider increasing your bucket_span or using functions that are tolerant to gaps in data such as mean, non_null_sum or non_zero_count.

    • data.sparse_buckets string

      The number of buckets that contained few data points compared to the expected number of data points. If your data contains many sparse buckets, consider using a longer bucket_span.

    • data.buckets string

      The total number of buckets processed.

    • data.earliest_record string

      The timestamp of the earliest chronologically input document.

    • data.latest_record string

      The timestamp of the latest chronologically input document.

    • data.last string

      The timestamp at which data was last analyzed, according to server time.

    • data.last_empty_bucket string

      The timestamp of the last bucket that did not contain any data.

    • data.last_sparse_bucket string

      The timestamp of the last bucket that was considered sparse.

    • model.memory_status string

      Values are ok, soft_limit, or hard_limit.

    • model.memory_limit string

      The upper limit for model memory usage, checked on increasing values.

    • model.by_fields string

      The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.over_fields string

      The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.partition_fields string

      The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • model.bucket_allocation_failures string

      The number of buckets for which new entities in incoming data were not processed due to insufficient model memory. This situation is also signified by a memory_status property value of hard_limit.

    • model.categorization_status string

      Values are ok or warn.

    • model.categorized_doc_count string

      The number of documents that have had a field categorized.

    • model.total_category_count string

      The number of categories created by categorization.

    • model.frequent_category_count string

      The number of categories that match more than 1% of categorized documents.

    • model.rare_category_count string

      The number of categories that match just one categorized document.

    • model.dead_category_count string

      The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.

    • model.failed_category_count string

      The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model_memory_limit. This count does not track which specific categories failed to be created. Therefore, you cannot use this value to determine the number of unique categories that were missed.

    • model.log_time string

      The timestamp when the model stats were gathered, according to server time.

    • model.timestamp string

      The timestamp of the last record when the model stats were gathered.

    • forecasts.total string

      The number of individual forecasts currently available for the job. A value of one or more indicates that forecasts exist.

    • forecasts.memory.min string

      The minimum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.max string

      The maximum memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.avg string

      The average memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.memory.total string

      The total memory usage in bytes for forecasts related to the anomaly detection job.

    • forecasts.records.min string

      The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.max string

      The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.avg string

      The average number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.records.total string

      The total number of model_forecast documents written for forecasts related to the anomaly detection job.

    • forecasts.time.min string

      The minimum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.max string

      The maximum runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.avg string

      The average runtime in milliseconds for forecasts related to the anomaly detection job.

    • forecasts.time.total string

      The total runtime in milliseconds for forecasts related to the anomaly detection job.

    • node.id string
    • node.name string

      The name of the assigned node.

    • node.address string

      The network address of the assigned node.

    • buckets.count string

      The number of bucket results produced by the job.

    • buckets.time.total string

      The sum of all bucket processing times, in milliseconds.

    • buckets.time.min string

      The minimum of all bucket processing times, in milliseconds.

    • buckets.time.max string

      The maximum of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg string

      The exponential moving average of all bucket processing times, in milliseconds.

    • buckets.time.exp_avg_hour string

      The exponential moving average of bucket processing times calculated in a one hour time window, in milliseconds.

GET /_cat/ml/anomaly_detectors/{job_id}
curl \
 --request GET 'http://api.example.com/_cat/ml/anomaly_detectors/{job_id}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/ml/anomaly_detectors?h=id,s,dpr,mb&v=true&format=json`.
[
  {
    "id": "high_sum_total_sales",
    "s": "closed",
    "dpr": "14022",
    "mb": "1.5mb"
  },
  {
    "id": "low_request_rate",
    "s": "closed",
    "dpr": "1216",
    "mb": "40.5kb"
  },
  {
    "id": "response_code_rates",
    "s": "closed",
    "dpr": "28146",
    "mb": "132.7kb"
  },
  {
    "id": "url_scanning",
    "s": "closed",
    "dpr": "28146",
    "mb": "501.6kb"
  }
]

Get trained models Added in 7.7.0

GET /_cat/ml/trained_models/{model_id}

Get configuration and usage information about inference trained models.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.

Path parameters

  • model_id string Required

    A unique identifier for the trained model.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request: contains wildcard expressions and there are no models that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, the API returns an empty array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    A comma-separated list of column names to display.

    Supported values include:

    • create_time (or ct): The time when the trained model was created.
    • created_by (or c, createdBy): Information on the creator of the trained model.
    • data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if it is still available.
    • description (or d): The description of the trained model.
    • heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
    • id: Identifier for the trained model.
    • ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
    • ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
    • ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
    • ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
    • ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
    • license (or l): The license level of the trained model.
    • operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps measure the computational complexity of the model.
    • version (or v): The Elasticsearch version number in which the trained model was created.

    Values are create_time, ct, created_by, c, createdBy, data_frame_analytics_id, df, dataFrameAnalytics, dfid, description, d, heap_size, hs, modelHeapSize, id, ingest.count, ic, ingestCount, ingest.current, icurr, ingestCurrent, ingest.failed, if, ingestFailed, ingest.pipelines, ip, ingestPipelines, ingest.time, it, ingestTime, license, l, operations, o, modelOperations, version, or v.

  • s string | array[string]

    A comma-separated list of column names or aliases used to sort the response.

    Supported values include:

    • create_time (or ct): The time when the trained model was created.
    • created_by (or c, createdBy): Information on the creator of the trained model.
    • data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if it is still available.
    • description (or d): The description of the trained model.
    • heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
    • id: Identifier for the trained model.
    • ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
    • ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
    • ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
    • ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
    • ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
    • license (or l): The license level of the trained model.
    • operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps measure the computational complexity of the model.
    • version (or v): The Elasticsearch version number in which the trained model was created.

    Values are create_time, ct, created_by, c, createdBy, data_frame_analytics_id, df, dataFrameAnalytics, dfid, description, d, heap_size, hs, modelHeapSize, id, ingest.count, ic, ingestCount, ingest.current, icurr, ingestCurrent, ingest.failed, if, ingestFailed, ingest.pipelines, ip, ingestPipelines, ingest.time, it, ingestTime, license, l, operations, o, modelOperations, version, or v.

  • from number

    Skips the specified number of trained models.

  • size number

    The maximum number of trained models to display.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.
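As a sketch, the following request (the column selection is illustrative) lists trained models sorted by creation time, matching the collection form used in the response example below:

curl \
 --request GET 'http://api.example.com/_cat/ml/trained_models?h=id,heap_size,operations,create_time&s=create_time&v=true' \
 --header "Authorization: $API_KEY"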

Responses

GET /_cat/ml/trained_models/{model_id}
curl \
 --request GET 'http://api.example.com/_cat/ml/trained_models/{model_id}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/ml/trained_models?v=true&format=json`.
[
  {
    "id": "ddddd-1580216177138",
    "heap_size": "0b",
    "operations": "196",
    "create_time": "2025-03-25T00:01:38.662Z",
    "type": "pytorch",
    "ingest.pipelines": "0",
    "data_frame.id": "__none__"
  },
  {
    "id": "lang_ident_model_1",
    "heap_size": "1mb",
    "operations": "39629",
    "create_time": "2019-12-05T12:28:34.594Z",
    "type": "lang_ident",
    "ingest.pipelines": "0",
    "data_frame.id": "__none__"
  }
]

Get transform information Added in 7.7.0

GET /_cat/transforms

Get configuration and usage information about transforms.

CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, it returns an empty transforms array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches.

  • from number

    Skips the specified number of transforms.

  • h string | array[string]

    Comma-separated list of column names to display.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices, use the syntax "remote_name:index_name". If any indices are in remote clusters, then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

    Values are changes_last_detection_time, cldt, checkpoint, cp, checkpoint_duration_time_exp_avg, cdtea, checkpointTimeExpAvg, checkpoint_progress, c, checkpointProgress, create_time, ct, createTime, delete_time, dtime, description, d, dest_index, di, destIndex, documents_deleted, docd, documents_indexed, doci, docs_per_second, dps, documents_processed, docp, frequency, f, id, index_failure, if, index_time, itime, index_total, it, indexed_documents_exp_avg, idea, last_search_time, lst, lastSearchTime, max_page_search_size, mpsz, pages_processed, pp, pipeline, p, processed_documents_exp_avg, pdea, processing_time, pt, reason, r, search_failure, sf, search_time, stime, search_total, st, source_index, si, sourceIndex, state, s, transform_type, tt, trigger_count, tc, version, or v.

  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

    Supported values include:

    • changes_last_detection_time (or cldt): The timestamp when changes were last detected in the source indices.
    • checkpoint (or cp): The sequence number for the checkpoint.
    • checkpoint_duration_time_exp_avg (or cdtea, checkpointTimeExpAvg): Exponential moving average of the duration of the checkpoint, in milliseconds.
    • checkpoint_progress (or c, checkpointProgress): The progress of the next checkpoint that is currently in progress.
    • create_time (or ct, createTime): The time the transform was created.
    • delete_time (or dtime): The amount of time spent deleting, in milliseconds.
    • description (or d): The description of the transform.
    • dest_index (or di, destIndex): The destination index for the transform. The mappings of the destination index are deduced based on the source fields when possible. If alternate mappings are required, use the Create index API prior to starting the transform.
    • documents_deleted (or docd): The number of documents that have been deleted from the destination index due to the retention policy for this transform.
    • documents_indexed (or doci): The number of documents that have been indexed into the destination index for the transform.
    • docs_per_second (or dps): Specifies a limit on the number of input documents per second. This setting throttles the transform by adding a wait time between search requests. The default value is null, which disables throttling.
    • documents_processed (or docp): The number of documents that have been processed from the source index of the transform.
    • frequency (or f): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. The default value is 1m.
    • id: Identifier for the transform.
    • index_failure (or if): The number of indexing failures.
    • index_time (or itime): The amount of time spent indexing, in milliseconds.
    • index_total (or it): The number of index operations.
    • indexed_documents_exp_avg (or idea): Exponential moving average of the number of new documents that have been indexed.
    • last_search_time (or lst, lastSearchTime): The timestamp of the last search in the source indices. This field is only shown if the transform is running.
    • max_page_search_size (or mpsz): Defines the initial page size to use for the composite aggregation for each checkpoint. If circuit breaker exceptions occur, the page size is dynamically adjusted to a lower value. The minimum value is 10 and the maximum is 65,536. The default value is 500.
    • pages_processed (or pp): The number of search or bulk index operations processed. Documents are processed in batches instead of individually.
    • pipeline (or p): The unique identifier for an ingest pipeline.
    • processed_documents_exp_avg (or pdea): Exponential moving average of the number of documents that have been processed.
    • processing_time (or pt): The amount of time spent processing results, in milliseconds.
    • reason (or r): If a transform has a failed state, this property provides details about the reason for the failure.
    • search_failure (or sf): The number of search failures.
    • search_time (or stime): The amount of time spent searching, in milliseconds.
    • search_total (or st): The number of search operations on the source index for the transform.
    • source_index (or si, sourceIndex): The source indices for the transform. It can be a single index, an index pattern (for example, "my-index-*"), an array of indices (for example, ["my-index-000001", "my-index-000002"]), or an array of index patterns (for example, ["my-index-*", "my-other-index-*"]). For remote indices, use the syntax "remote_name:index_name". If any indices are in remote clusters, then the master node and at least one transform node must have the remote_cluster_client node role.
    • state (or s): The status of the transform, which can be one of the following values:

      • aborting: The transform is aborting.
      • failed: The transform failed. For more information about the failure, check the reason field.
      • indexing: The transform is actively processing data and creating new documents.
      • started: The transform is running but not actively indexing data.
      • stopped: The transform is stopped.
      • stopping: The transform is stopping.
    • transform_type (or tt): Indicates the type of transform: batch or continuous.

    • trigger_count (or tc): The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • version (or v): The version of Elasticsearch that existed on the node when the transform was created.

    Values are changes_last_detection_time, cldt, checkpoint, cp, checkpoint_duration_time_exp_avg, cdtea, checkpointTimeExpAvg, checkpoint_progress, c, checkpointProgress, create_time, ct, createTime, delete_time, dtime, description, d, dest_index, di, destIndex, documents_deleted, docd, documents_indexed, doci, docs_per_second, dps, documents_processed, docp, frequency, f, id, index_failure, if, index_time, itime, index_total, it, indexed_documents_exp_avg, idea, last_search_time, lst, lastSearchTime, max_page_search_size, mpsz, pages_processed, pp, pipeline, p, processed_documents_exp_avg, pdea, processing_time, pt, reason, r, search_failure, sf, search_time, stime, search_total, st, source_index, si, sourceIndex, state, s, transform_type, tt, trigger_count, tc, version, or v.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • size number

    The maximum number of transforms to obtain.
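For example, a request along these lines (the column selection is illustrative) returns the first ten transforms sorted by identifier:

curl \
 --request GET 'http://api.example.com/_cat/transforms?h=id,state,checkpoint,documents_processed&s=id&from=0&size=10&v=true' \
 --header "Authorization: $API_KEY"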

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • state string

      The status of the transform. Returned values include: aborting: The transform is aborting. failed: The transform failed. For more information about the failure, check the reason field. indexing: The transform is actively processing data and creating new documents. started: The transform is running but not actively indexing data. stopped: The transform is stopped. stopping: The transform is stopping.

    • checkpoint string

      The sequence number for the checkpoint.

    • documents_processed string

      The number of documents that have been processed from the source index of the transform.

    • checkpoint_progress string | null

      The progress of the next checkpoint that is currently in progress.

    • last_search_time string | null

      The timestamp of the last search in the source indices. This field is shown only if the transform is running.

    • changes_last_detection_time string | null

      The timestamp when changes were last detected in the source indices.

    • create_time string

      The time the transform was created.

    • version string
    • source_index string

      The source indices for the transform.

    • dest_index string

      The destination index for the transform.

    • pipeline string

      The unique identifier for the ingest pipeline.

    • description string

      The description of the transform.

    • transform_type string

      The type of transform: batch or continuous.

    • frequency string

      The interval between checks for changes in the source indices when the transform is running continuously.

    • max_page_search_size string

      The initial page size that is used for the composite aggregation for each checkpoint.

    • docs_per_second string

      The number of input documents per second.

    • reason string

      If a transform has a failed state, these details describe the reason for failure.

    • search_total string

      The total number of search operations on the source index for the transform.

    • search_failure string

      The total number of search failures.

    • search_time string

      The total amount of search time, in milliseconds.

    • index_total string

      The total number of index operations done by the transform.

    • index_failure string

      The total number of indexing failures.

    • index_time string

      The total time spent indexing documents, in milliseconds.

    • documents_indexed string

      The number of documents that have been indexed into the destination index for the transform.

    • delete_time string

      The total time spent deleting documents, in milliseconds.

    • documents_deleted string

      The number of documents deleted from the destination index due to the retention policy for the transform.

    • trigger_count string

      The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • pages_processed string

      The number of search or bulk index operations processed. Documents are processed in batches instead of individually.

    • processing_time string

      The total time spent processing results, in milliseconds.

    • checkpoint_duration_time_exp_avg string

      The exponential moving average of the duration of the checkpoint, in milliseconds.

    • indexed_documents_exp_avg string

      The exponential moving average of the number of new documents that have been indexed.

    • processed_documents_exp_avg string

      The exponential moving average of the number of documents that have been processed.

GET /_cat/transforms
curl \
 --request GET 'http://api.example.com/_cat/transforms' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_cat/transforms?v=true&format=json`.
[
  {
    "id" : "ecommerce_transform",
    "state" : "started",
    "checkpoint" : "1",
    "documents_processed" : "705",
    "checkpoint_progress" : "100.00",
    "changes_last_detection_time" : null
  }
]

Ping the cluster

HEAD /

Get information about whether the cluster is running.

Responses

HEAD /
curl \
 --request HEAD 'http://api.example.com/' \
 --header "Authorization: $API_KEY"
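
Because a ping returns no body, scripts typically check the HTTP status code. A minimal shell sketch (the endpoint URL and API key are placeholders; curl's --head flag sends the HEAD request):

curl -s -o /dev/null -w "%{http_code}" \
 --head 'http://api.example.com/' \
 --header "Authorization: $API_KEY"

A running cluster answers 200; any other status, or no response at all, means the cluster is not reachable.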

Connector

The connector and sync jobs APIs provide a convenient way to create and manage Elastic connectors and sync jobs in an internal index. Connectors are Elasticsearch integrations for syncing content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure. This API provides an alternative to relying solely on the Kibana UI for connector and sync job management. The API comes with a set of validations and assertions to ensure that the state representation in the internal index remains valid. This API requires the manage_connector privilege or, for read-only endpoints, the monitor_connector privilege.

Check out the connector API tutorial

Check in a connector Technical preview

PUT /_connector/{connector_id}/_check_in

Update the last_seen field in the connector and set it to the current timestamp.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be checked in

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_check_in
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_check_in' \
 --header "Authorization: $API_KEY"
Response examples (200)
{
    "result": "updated"
}

Get a connector Beta

GET /_connector/{connector_id}

Get the details about a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector

Query parameters

  • include_deleted boolean

    A flag to indicate if the desired connector should be fetched, even if it was soft-deleted.

Responses

GET /_connector/{connector_id}
curl \
 --request GET 'http://api.example.com/_connector/{connector_id}' \
 --header "Authorization: $API_KEY"

Delete a connector Beta

DELETE /_connector/{connector_id}

Remove a connector and its associated sync jobs. This is a destructive action that is not recoverable. NOTE: This action doesn’t delete any API keys, ingest pipelines, or data indices associated with the connector. These need to be removed manually.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be deleted

Query parameters

  • delete_sync_jobs boolean

    A flag indicating if associated sync jobs should also be removed. Defaults to false.

  • hard boolean

    A flag indicating if the connector should be hard deleted.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_connector/{connector_id}
curl \
 --request DELETE 'http://api.example.com/_connector/{connector_id}' \
 --header "Authorization: $API_KEY"
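
To also remove the connector's associated sync jobs in the same call, set the delete_sync_jobs flag described above (an illustrative variant):

curl \
 --request DELETE 'http://api.example.com/_connector/{connector_id}?delete_sync_jobs=true' \
 --header "Authorization: $API_KEY"
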
Response examples (200)
{
    "acknowledged": true
}

Delete a connector sync job Beta

DELETE /_connector/_sync_job/{connector_sync_job_id}

Remove a connector sync job and its associated data. This is a destructive action that is not recoverable.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job to be deleted

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_connector/_sync_job/{connector_sync_job_id}
curl \
 --request DELETE 'http://api.example.com/_connector/_sync_job/{connector_sync_job_id}' \
 --header "Authorization: $API_KEY"
Response examples (200)
{
  "acknowledged": true
}

Create a connector sync job Beta

POST /_connector/_sync_job

Create a connector sync job document in the internal index and initialize its counters and timestamps with default values.

application/json

Body Required

  • id string Required

    The id of the associated connector

  • job_type string

    Values are full, incremental, or access_control.

  • trigger_method string

    Values are on_demand or scheduled.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • id string Required
POST /_connector/_sync_job
curl \
 --request POST 'http://api.example.com/_connector/_sync_job' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"id":"connector-id","job_type":"full","trigger_method":"on_demand"}'
Request example
{
  "id": "connector-id",
  "job_type": "full",
  "trigger_method": "on_demand"
}
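
Response examples (200)
A successful response returns the identifier of the newly created sync job document. The value below is illustrative; the server generates the actual ID:
{
  "id": "my-sync-job-id"
}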




Update the connector API key ID Beta

PUT /_connector/{connector_id}/_api_key_id

Update the api_key_id and api_key_secret_id fields of a connector. You can specify the ID of the API key used for authorization and the ID of the connector secret where the API key is stored. The connector secret ID is required only for Elastic managed (native) connectors. Self-managed connectors (connector clients) do not use this field.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_api_key_id
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_api_key_id' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"api_key_id":"my-api-key-id","api_key_secret_id":"my-connector-secret-id"}'
Request example
{
    "api_key_id": "my-api-key-id",
    "api_key_secret_id": "my-connector-secret-id"
}
Response examples (200)
{
  "result": "updated"
}

Update the connector configuration Beta

PUT /_connector/{connector_id}/_configuration

Update the configuration field in the connector document.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_configuration
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_configuration' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"values":{"tenant_id":"my-tenant-id","tenant_name":"my-sharepoint-site","client_id":"foo","secret_value":"bar","site_collections":"*"}}'
Request examples
{
    "values": {
        "tenant_id": "my-tenant-id",
        "tenant_name": "my-sharepoint-site",
        "client_id": "foo",
        "secret_value": "bar",
        "site_collections": "*"
    }
}
A second request example updates a single configuration value:
{
    "values": {
        "secret_value": "foo-bar"
    }
}
Response examples (200)
{
  "result": "updated"
}

Update the connector error field Technical preview

PUT /_connector/{connector_id}/_error

Set the error field for the connector. If the error provided in the request body is non-null, the connector’s status is updated to error. Otherwise, if the error is reset to null, the connector status is updated to connected.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • error string | null

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_error
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_error' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"error":"Houston, we have a problem!"}'
Request example
{
    "error": "Houston, we have a problem!"
}
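
Resetting the error clears it and moves the connector back to the connected status, as described above (an illustrative variant):
{
    "error": null
}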
Response examples (200)
{
  "result": "updated"
}

Update the connector draft filtering validation Technical preview

PUT /_connector/{connector_id}/_filtering/_validation

Update the draft filtering validation info for a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • validation object Required
    Hide validation attributes Show validation attributes object
    • errors array[object] Required
      Hide errors attributes Show errors attributes object
    • state string Required

      Values are edited, invalid, or valid.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_filtering/_validation
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_filtering/_validation' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"validation":{"errors":[{"ids":["string"],"messages":["string"]}],"state":"edited"}}'
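
Request example
A hypothetical payload marking the draft filtering rules as invalid; the rule IDs and message text are illustrative:
{
    "validation": {
        "errors": [
            {
                "ids": ["rule-1"],
                "messages": ["Invalid regex in filtering rule."]
            }
        ],
        "state": "invalid"
    }
}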

Update the connector index name Beta

PUT /_connector/{connector_id}/_index_name

Update the index_name field of a connector, specifying the index where the data ingested by the connector is stored.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_index_name
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_index_name' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"index_name":"data-from-my-google-drive"}'
Request example
{
    "index_name": "data-from-my-google-drive"
}
Response examples (200)
{
  "result": "updated"
}

Update the connector name and description Beta

PUT /_connector/{connector_id}/_name

Update the name and description fields in the connector document.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_name
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_name' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"name":"Custom connector","description":"This is my customized connector"}'
Request example
{
    "name": "Custom connector",
    "description": "This is my customized connector"
}
Response examples (200)
{
  "result": "updated"
}

Update the connector is_native flag Beta

PUT /_connector/{connector_id}/_native

Update the is_native flag of the connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_native
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_native' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"is_native":true}'
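
Response examples (200)
As with the other connector update endpoints, a successful call reports the result of the operation (illustrative value):
{
  "result": "updated"
}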

Update the connector pipeline Beta

PUT /_connector/{connector_id}/_pipeline

Update the pipeline field in the connector document. When you create a new connector, the configuration of an ingest pipeline is populated with default settings.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_pipeline
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_pipeline' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"pipeline":{"extract_binary_content":true,"name":"my-connector-pipeline","reduce_whitespace":true,"run_ml_inference":true}}'
Request example
{
    "pipeline": {
        "extract_binary_content": true,
        "name": "my-connector-pipeline",
        "reduce_whitespace": true,
        "run_ml_inference": true
    }
}
Response examples (200)
{
  "result": "updated"
}

Update the connector status Technical preview

PUT /_connector/{connector_id}/_status

Update the status of the connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • status string Required

    Values are created, needs_configuration, configured, connected, or error.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_status
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_status' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"status":"needs_configuration"}'
Request example
{
    "status": "needs_configuration"
}
Response examples (200)
{
  "result": "updated"
}

Get data streams Added in 7.9.0

GET /_data_stream/{name}

Get information about one or more data streams.

Path parameters

  • name string | array[string] Required

    Comma-separated list of data stream names used to limit the request. Wildcard (*) expressions are supported. If omitted, all data streams are returned.

Query parameters

  • expand_wildcards string | array[string]

    Type of data stream that wildcard patterns can match. Supports comma-separated values, such as open,hidden.

    Supported values include:

    • all: Match any data stream or index, including hidden ones.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard expressions are not accepted.

    Values are all, open, closed, hidden, or none.

  • include_defaults boolean

    If true, returns all relevant default configurations for the index template.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • verbose boolean

    Whether the maximum timestamp for each data stream should be calculated and returned.
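
For example, to match hidden data streams as well and compute each stream's maximum timestamp (an illustrative request; my-stream-* is a hypothetical pattern):

curl \
 --request GET 'http://api.example.com/_data_stream/my-stream-*?expand_wildcards=open,hidden&verbose=true' \
 --header "Authorization: $API_KEY"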

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • data_streams array[object] Required
      Hide data_streams attributes Show data_streams attributes object
      • _meta object
        Hide _meta attribute Show _meta attribute object
        • * object Additional properties
      • allow_custom_routing boolean

        If true, the data stream allows custom routing on write requests.

      • failure_store object
        Hide failure_store attributes Show failure_store attributes object
      • generation number Required

        Current generation for the data stream. This number acts as a cumulative count of the stream’s rollovers, starting at 1.

      • hidden boolean Required

        If true, the data stream is hidden.

      • next_generation_managed_by string

        Values are Index Lifecycle Management, Data stream lifecycle, or Unmanaged.

      • prefer_ilm boolean Required

        Indicates if ILM should take precedence over DSL in case both are configured to manage this data stream.

      • indices array[object] Required

        Array of objects containing information about the data stream’s backing indices. The last item in this array contains information about the stream’s current write index.

        Hide indices attributes Show indices attributes object
      • lifecycle object
        Hide lifecycle attributes Show lifecycle attributes object
      • name string Required
      • replicated boolean

        If true, the data stream is created and managed by cross-cluster replication and the local cluster can not write into this data stream or change its mappings.

      • rollover_on_write boolean Required

        If true, the next write to this data stream will trigger a rollover first and the document will be indexed in the new backing index. If the rollover fails, the indexing request will fail too.

      • status string Required

        Values are green, GREEN, yellow, YELLOW, red, or RED.

      • system boolean

        If true, the data stream is created and managed by an Elastic stack component and cannot be modified through normal user interaction.

      • template string Required
      • timestamp_field object Required
        Hide timestamp_field attribute Show timestamp_field attribute object
        • name string Required

          Path to a field or an array of paths. Some APIs support wildcards in the path to select multiple fields.

      • index_mode string

        Values are standard, time_series, logsdb, or lookup.

GET /_data_stream/{name}
curl \
 --request GET 'http://api.example.com/_data_stream/{name}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response for retrieving information about a data stream.
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-2099.03.07-000001",
          "index_uuid": "xCEhwsp8Tey0-FLNFYVwSg",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        },
        {
          "index_name": ".ds-my-data-stream-2099.03.08-000002",
          "index_uuid": "PA_JquKGSiKcAKBA8DJ5gw",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        }
      ],
      "generation": 2,
      "_meta": {
        "my-meta-field": "foo"
      },
      "status": "GREEN",
      "next_generation_managed_by": "Index Lifecycle Management",
      "prefer_ilm": true,
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false,
      "rollover_on_write": false
    },
    {
      "name": "my-data-stream-two",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-two-2099.03.08-000001",
          "index_uuid": "3liBu2SYS5axasRt6fUIpA",
          "prefer_ilm": true,
          "ilm_policy": "my-lifecycle-policy",
          "managed_by": "Index Lifecycle Management"
        }
      ],
      "generation": 1,
      "_meta": {
        "my-meta-field": "foo"
      },
      "status": "YELLOW",
      "next_generation_managed_by": "Index Lifecycle Management",
      "prefer_ilm": true,
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false,
      "rollover_on_write": false
    }
  ]
}

Get information about a specific running ES|QL query Technical preview

GET /_query/queries/{id}

Returns an object with extended information about a running ES|QL query.

Path parameters

  • id string Required

    The query ID

Responses

GET /_query/queries/{id}
curl \
 --request GET 'http://api.example.com/_query/queries/{id}' \
 --header "Authorization: $API_KEY"

Delete an index template Added in 7.8.0

DELETE /_index_template/{name}

The provided name may contain multiple template names separated by commas. If multiple template names are specified, there is no wildcard support and the provided names must exactly match existing templates.

Path parameters

  • name string | array[string] Required

    Comma-separated list of index template names used to limit the request. Wildcard (*) expressions are supported.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

    Values are -1 or 0.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_index_template/{name}
curl \
 --request DELETE 'http://api.example.com/_index_template/{name}' \
 --header "Authorization: $API_KEY"
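
For example, to delete two templates by exact name in a single request (hypothetical template names; wildcards are not supported when multiple names are given):

curl \
 --request DELETE 'http://api.example.com/_index_template/template_1,template_2' \
 --header "Authorization: $API_KEY"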